CN110222210A - User smart device and emotion icon processing method thereof - Google Patents
User smart device and emotion icon processing method thereof
- Publication number
- CN110222210A CN110222210A CN201910395252.1A CN201910395252A CN110222210A CN 110222210 A CN110222210 A CN 110222210A CN 201910395252 A CN201910395252 A CN 201910395252A CN 110222210 A CN110222210 A CN 110222210A
- Authority
- CN
- China
- Prior art keywords
- emotion
- user
- input information
- processing method
- emotion icon
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/53—Querying
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/73—Querying
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- Multimedia (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The application provides a user smart device and an emotion icon processing method for it. The user smart device obtains input information from the user, performs emotion recognition on the user according to that input information, obtains corresponding key information from the result of the emotion recognition, and generates a corresponding emotion icon from the key information. The application can thus perform emotion recognition on user input and turn an emotional picture, text, or speech into a corresponding emotion icon, achieving quick input across multiple input channels, supporting emotion recognition on longer inputs, meeting personalized needs, and realizing a degree of artificial intelligence.
Description
Technical field
This application relates to the technical field of information processing, and in particular to an emotion icon processing method and to a user smart device that applies the emotion icon processing method.
Background technique
With the rapid development of personal mobile terminal technology, personal mobile communication features have become increasingly complete; correspondingly, the expression icons (pictures or combinations of character strings) used in communication are gradually becoming personalized.
For example, when a user types a message in instant messaging software or on a social platform and wants to insert an emoticon, the user either picks the desired emoticon from an existing set, or types text corresponding to a default emoticon and then selects one from the recommended results. Here, "existing emoticons" usually come preinstalled with the application or are downloaded from the network, and "emoticon" generally refers only to emoji and static or animated images. Of course, a user can also shoot an image or video and add text to generate a static or animated expression image.
However, the user can only use pre-designed emoticons, and can only obtain emoticon recommendations by clicking to select one or by typing text manually. User input, in contrast, is varied: besides text there are voice, pictures, video, and so on, so this scheme cannot satisfy other input scenarios. Moreover, the typed text can sometimes only be compared against the names of existing emoticons to obtain related recommendations; when the text carries emotional meaning but is not exactly equal to an emoticon's name, no recommendation is produced. For example, typing "I am so happy" yields no "happy" emoticon recommendations; the user must type exactly "happy" to obtain them, which does not match real usage habits. Furthermore, pre-designed emoticons are not personalized enough and cannot satisfy users' current personalization needs.
Likewise, when a user generates a static or animated emoticon by shooting an image or video, editing steps such as adding text are still needed to express the emotional state. Although this meets personalized needs, it is not convenient, increases the burden on the user, and degrades the user experience.
In view of these shortcomings of the prior art, the inventors of the present application propose, after in-depth study, a user smart device and an emotion icon processing method for it.
Summary of the invention
The purpose of the application is to provide a user smart device and an emotion icon processing method for it, which can perform emotion recognition according to user input, generate a corresponding emotion icon from an emotional picture, text, or speech, achieve quick input across multiple input channels, support emotion recognition on longer inputs, meet personalized needs, and realize a degree of artificial intelligence.
To solve the above technical problems, the application provides an emotion icon processing method. In one embodiment, the emotion icon processing method comprises the steps of:
the user smart device obtains input information from the user;
emotion recognition is performed on the user according to the input information;
corresponding key information is obtained according to the result of the emotion recognition;
a corresponding emotion icon is generated according to the key information.
In one embodiment, the step in which the user smart device obtains the input information of the user specifically includes:
the user smart device shoots the body-language content of the user with a camera device;
the body-language content is preprocessed to serve as the input information.
In one embodiment, the step of preprocessing the body-language content to serve as the input information specifically includes:
performing action recognition on the body-language content;
obtaining, from the result of the action recognition, the target sites whose motion-amplitude change exceeds a preset threshold;
using the movement of those target sites as the input information.
In one embodiment, the body-language content is the movement of the eyes, mouth, hands, head, trunk, or feet, or of any combination of two or more of them.
In one embodiment, the step in which the user smart device obtains the input information of the user specifically includes:
the user smart device obtains the voice content of the user through a microphone;
the voice content is preprocessed to serve as the input information.
In one embodiment, the step of performing emotion recognition on the user according to the input information specifically includes:
performing emotion recognition on the data of the voice content based on speech emotion analysis technology.
In one embodiment, the step in which the user smart device obtains the input information of the user specifically includes:
the user smart device obtains the text of the user through a text-message input box;
the text is preprocessed to serve as the input information.
In one embodiment, the step of performing emotion recognition on the user according to the input information specifically includes:
performing emotion recognition on the data of the text based on text emotion analysis technology.
In one embodiment, after the step of generating the corresponding emotion icon according to the key information, the method further includes:
presenting the emotion icon to the user for preview;
judging whether a confirmation operation from the user is obtained;
if the confirmation operation from the user is obtained, sending and/or storing the emotion icon.
In one embodiment, the step of generating the corresponding emotion icon according to the key information specifically includes:
obtaining a corresponding emotion expression carrier according to the key information;
generating the corresponding emotion icon according to the emotion expression carrier.
In one embodiment, the emotion expression carrier is an emotional picture, emotional text, emotional speech, or any combination of the three.
In one embodiment:
when the input information is body-language content, the key information is emotional action information constituted by the eyes, mouth, hands, head, trunk, or feet, or any combination of two or more of them;
when the input information is voice content, the key information is emotional voice information constituted by sentences, tone, words, or scene, or any combination of two or more of them;
when the input information is text, the key information is emotional text information constituted by sentences, words, scene, or context, or any combination of two or more of them;
and the emotion icon is obtained by simulating the emotional action information, the emotional voice information, and/or the emotional text information described above.
In one embodiment, the emotion icon is a static expression image, a short expression video, or an animated expression image that contains a picture, text, and/or speech and is used for communication.
In one embodiment, when sent to a recipient, the emotion icon is also used to generate an interaction effect according to the recipient's on-demand action.
To solve the above technical problems, the application also provides a user smart device. In one embodiment, the user smart device includes a processor configured to execute a computer program to implement the emotion icon processing method described above.
With the user smart device and emotion icon processing method provided by the application, the user smart device obtains input information from the user, performs emotion recognition on the user according to the input information, obtains corresponding key information according to the result of the emotion recognition, and generates a corresponding emotion icon according to the key information. The application can thus perform emotion recognition on user input, turn an emotional picture, text, or speech into a corresponding emotion icon, achieve quick input across multiple input channels, support emotion recognition on longer inputs, meet personalized needs, and realize a degree of artificial intelligence.
The above description is only an overview of the technical solution of the application. To make the technical means of the application clearer and implementable according to the contents of the specification, and to make the above and other objects, features, and advantages of the application easier to understand, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
Fig. 1 is a schematic flowchart of an embodiment of the emotion icon processing method of the application.
Fig. 2 is a schematic block diagram of an embodiment of the user smart device of the application.
Specific embodiments
To further explain the technical means taken by the application to achieve its intended purpose and their effects, the application is described in detail below with reference to the accompanying drawings and preferred embodiments. Through the description of specific embodiments, a deeper and more specific understanding of those technical means and effects can be obtained; however, the accompanying drawings are provided for reference and description only and are not intended to limit the application.
Referring to Fig. 1, which is a schematic flowchart of an embodiment of the emotion icon processing method of the application, the emotion icon processing method of this embodiment can be applied to user smart devices such as mobile phones, laptops, tablet computers, or wearable devices.
It should be noted that, as shown in Fig. 1, the emotion icon processing method described in this embodiment may include, but is not limited to, the following steps.
Step S101: the user smart device obtains input information from the user.
Step S102: emotion recognition is performed on the user according to the input information.
Step S103: corresponding key information is obtained according to the result of the emotion recognition.
Step S104: a corresponding emotion icon is generated according to the key information.
In this embodiment, the step of generating the corresponding emotion icon according to the key information specifically includes: obtaining a corresponding emotion expression carrier according to the key information, and generating the corresponding emotion icon according to the emotion expression carrier.
It should first be explained that the emotion expression carrier described in this embodiment can be an emotional picture, emotional text, emotional speech, or any combination of the three.
In this embodiment, when the input information is body-language content, the key information is emotional action information constituted by the eyes, mouth, hands, head, trunk, or feet, or any combination of two or more of them; when the input information is voice content, the key information is emotional voice information constituted by sentences, tone, words, or scene, or any combination of two or more of them; when the input information is text, the key information is emotional text information constituted by sentences, words, scene, or context, or any combination of two or more of them. In one embodiment, the emotion icon is obtained by simulating that emotional action information, emotional voice information, and/or emotional text information.
It should be noted that the emotion icon described in this embodiment is a static expression image, a short expression video, or an animated expression image that contains a picture, text, and/or speech and is used for communication.
In addition, when sent to a recipient, the emotion icon described in this embodiment is also used to generate an interaction or stress-reaction effect according to the recipient's on-demand action.
The stress-reaction effect described in this embodiment works as follows: after an emotion icon showing, say, "anger" is sent to the recipient, on-demand actions on different locations of the icon produce different effects. For example, an apologetic double-click can trigger a stress reaction of "I forgive you"; if no on-demand action from the recipient is detected, the icon can display "I am very angry" or "I am furious", deepening the mood and reinforcing the effect.
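The stress-reaction behavior above can be sketched as a small dispatch table. The action names and reply strings are illustrative assumptions loosely following the "anger" example, not an interface defined by the patent.

```python
# Sketch of the recipient-side "stress reaction" for an "anger" icon.

REACTIONS = {
    "anger": {
        "double_click": "I forgive you",  # an apologetic double-click
        None: "I am very angry",          # no on-demand action: the mood deepens
    }
}

def react(emotion, action=None):
    """Return the follow-up display for a recipient's on-demand action."""
    table = REACTIONS.get(emotion, {})
    # Unknown or absent actions fall back to the escalation entry.
    return table.get(action, table.get(None, ""))

print(react("anger", "double_click"))  # I forgive you
print(react("anger"))                  # I am very angry
```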
It should be noted that in this embodiment, the step S101 in which the user smart device obtains the input information of the user can specifically include: the user smart device shoots the body-language content of the user with a camera device, and the body-language content is preprocessed to serve as the input information.
For example, the user smart device can shoot the user's body-language content with multiple cameras, such as the front and rear cameras, and can also take continuous or stereoscopic shots and compare and filter them.
Furthermore, the step of preprocessing the body-language content to serve as the input information described in this embodiment can specifically include: performing action recognition on the body-language content; obtaining, from the result of the action recognition, the target sites whose motion-amplitude change exceeds a preset threshold; and using the movement of those target sites as the input information.
For example, if the user wants to express surprise, the largest motion-amplitude changes are usually in the eyes or mouth, such as the eyes or mouth opening very wide, while the cheeks and other parts show no obvious change; the movement of the eyes or mouth therefore best expresses the emotion.
Of course, considering that the user can express different emotions through combinations of various actions, the body-language content described in this embodiment is the movement of the eyes, mouth, hands, head, trunk, or feet, or of any combination of two or more of them.
In this embodiment, performing emotion recognition on the user according to the input information can specifically include the following. First, the joints of the human body are modeled: the body is regarded as a rigid system with internal links, comprising bones and joints, and the relative motion of bones and joints constitutes the change of body posture, i.e., what is usually described as movement. Among the body's many joints, according to their weight in influencing emotion, the fingers and toes are ignored and the spine is abstracted into the three joints of neck, chest, and abdomen, yielding a human-body model in which the upper body includes the head, neck, chest, abdomen, two upper arms, and two forearms, and the lower body includes two thighs and two shanks. For each of the selected emotional states, the expression of that state under normal human conditions is chosen and the limb reactions are analyzed in detail. Since the body is abstracted as a rigid model, the first element is the movement of the body's center of gravity, divided into forward, backward, and natural. Besides the movement of the center of gravity, there are the rotations of the joints that change the body's movement; the joints relevant to emotion include the head, chest, shoulders, and elbows, and the corresponding movements are the bending of the head, the rotation of the chest, the swing and extension direction of the upper arms, and the bending of the elbows. Combined with the movement of the center of gravity, these parameters comprise 7 degrees of freedom in total and express the movement of a person's upper body. In this way, emotion recognition can be performed according to the user's body-language content.
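One possible encoding of this 7-degree-of-freedom upper-body model is sketched below. The exact parameterisation is an assumption: the text enumerates the joints involved (center of gravity, head, chest, shoulders, elbows) but does not specify how the seven degrees of freedom are split between them.

```python
from dataclasses import dataclass, astuple

# A hypothetical 7-DoF pose for the rigid upper-body model described above.
@dataclass
class UpperBodyPose:
    gravity_shift: float     # center of gravity: forward (+), backward (-), 0 = natural
    head_bend: float         # bending of the head
    chest_rotation: float    # rotation of the chest
    left_arm_swing: float    # swing/extension direction of the left upper arm
    right_arm_swing: float   # swing/extension direction of the right upper arm
    left_elbow_bend: float   # bending of the left elbow
    right_elbow_bend: float  # bending of the right elbow

def pose_vector(pose):
    """Flatten a pose into the 7-dimensional vector used for recognition."""
    return list(astuple(pose))

neutral = UpperBodyPose(0, 0, 0, 0, 0, 0, 0)
print(len(pose_vector(neutral)))  # 7
```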
For example, the user shoots a photo or video containing the face with the front or rear camera; when the user finishes shooting, the face-image data of the user is collected. The collection can take place when the facial action is largest and most obvious, which can be judged automatically through action recognition.
This embodiment then performs emotion recognition on the face image based on image emotion analysis technology and outputs an emotion result. The emotion result described in this embodiment may involve a two-layer judgment: the first layer judges whether emotional information is contained; if not, the emotion result is "no emotional information". If the first layer judges that emotional information is contained, the second layer judgment is entered, i.e., a certain emotion type is determined, where the emotion types include but are not limited to happy, sad, gloomy, angry, fearful, anxious, and so on.
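The two-layer judgment can be sketched as follows. The score dictionary stands in for the output of the image (or speech, or text) analyser, and the presence threshold is an assumption for the sketch.

```python
# Layer 1 decides whether any emotional information is present at all;
# only then does layer 2 determine the emotion type.

def two_layer_judgment(scores, presence_threshold=0.5):
    # Layer 1: is emotional information contained?
    if not scores or max(scores.values()) < presence_threshold:
        return "no emotional information"
    # Layer 2: determine the emotion type (happy, sad, gloomy, angry, ...).
    return max(scores, key=scores.get)

print(two_layer_judgment({"happy": 0.9, "sad": 0.1}))  # happy
print(two_layer_judgment({"happy": 0.2, "sad": 0.1}))  # no emotional information
```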
This embodiment can judge from the emotion result whether to obtain a corresponding emotion expression carrier. When the emotion result is "no emotion", no emotion icon is generated; when the emotion result is a certain emotion type, the existing emotion icons and the preset written-text library are compared against it, and emotion icons related to that emotion type are provided to the user.
Furthermore, on the basis of the emotion icons filtered out for that emotion type, the emotion icon the user selects is added to the face photo or the video containing the face, producing a preview of the new emotion icon. At this point, the user can choose to save the new emotion icon into the existing emotion icon set, or use it immediately by sending a message.
It should be noted that in this embodiment, the step S101 in which the user smart device obtains the input information of the user can also specifically include: the user smart device obtains the voice content of the user through a microphone, and the voice content is preprocessed to serve as the input information.
For example, when using the device, the user can directly make the voice input "so tired"; this embodiment then obtains "so tired" through the microphone as the input information.
Accordingly, the step S102 of performing emotion recognition on the user according to the input information described in this embodiment can specifically include: performing emotion recognition on the data of the voice content based on speech emotion analysis technology.
For example, the user inputs voice information through the microphone into the speech-message input box; when the user finishes inputting, the voice data of the user's input is collected. Emotion recognition is then performed on the voice data based on speech emotion analysis technology and an emotion result is output. As above, the emotion result described in this embodiment may involve a two-layer judgment: the first layer judges whether emotional information is contained; if not, the emotion result is "no emotional information"; if it is contained, the second layer determines a certain emotion type, where the emotion types include but are not limited to happy, sad, gloomy, angry, fearful, anxious, and so on.
This embodiment can then judge from the emotion result whether to obtain a corresponding emotion expression carrier: when the emotion result is "no emotion", no emotion icon is generated; when the emotion result is a certain emotion type, it is compared against the existing emotion icons, and emotion icons related to that emotion type are provided to the user. Finally, an emotion icon can be generated on the basis of the existing emotion icons and provided to the user for preview and, after selection, for use.
It should be noted that in this embodiment, the step S101 in which the user smart device obtains the input information of the user can specifically include: the user smart device obtains the text of the user through a text-message input box, and the text is preprocessed to serve as the input information.
Accordingly, the step S102 of performing emotion recognition on the user according to the input information described in this embodiment can specifically include: performing emotion recognition on the data of the text based on text emotion analysis technology.
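A minimal lexicon-based stand-in for the "text emotion analysis technology", which the patent names without specifying, could look like the sketch below. Unlike name matching against existing emoticons, it maps emotional words to emotion types, so "I am so happy" resolves to "happy" directly; the lexicon entries are illustrative assumptions.

```python
# Toy lexicon mapping emotional words to emotion types.
LEXICON = {
    "happy": "happy", "glad": "happy",
    "angry": "angry", "furious": "angry",
    "sad": "sad", "tired": "fatigue",
}

def analyze_text_emotion(text):
    """Return the first emotion type found in the text, if any."""
    for word in text.lower().split():
        if word in LEXICON:
            return LEXICON[word]
    return "no emotional information"

print(analyze_text_emotion("I am so happy"))  # happy
print(analyze_text_emotion("see you soon"))   # no emotional information
```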
For example, the user inputs written text information into the text-message input box; when the user finishes inputting, the written text data is collected. Emotion recognition is then performed on the written text based on text emotion analysis technology and an emotion result is output.
The emotion result may involve a two-layer judgment: the first layer judges whether emotional information is contained; if not, the emotion result is "no emotional information"; if it is contained, the second layer determines a certain emotion type. In this embodiment, the emotion types may include but are not limited to happy, sad, gloomy, angry, fearful, anxious, and so on.
Furthermore, this embodiment judges from the emotion result whether to obtain a corresponding emotion expression carrier. When the emotion result is "no emotion", no emotion icon is generated; when the emotion result is a certain emotion type, it is compared against the existing emotion icons, and emotion icons related to that emotion type are provided to the user. For example, when the emotion result is "anger", an "anger" emotion icon is generated through comparison on the basis of the existing icons. Finally, the resulting emotion icon can be generated on the basis of the existing emotion icons and provided to the user for preview and, after selection, for use.
It is worth noting that, after the step of generating the corresponding emotion icon according to the key information described in this embodiment, the method can also include: presenting the emotion icon to the user for preview; judging whether a confirmation operation from the user is obtained; and, if the confirmation operation is obtained, sending and/or storing the emotion icon.
For example, when the emotion result is "anger", all "anger" expressions among the existing emotion icons are filtered out through comparison, together with all "anger"-related content in the preset text library, such as "This baby is angry". The user can select an emotion icon or text content directly by touch or mouse click, and after selection it can be added to the original face photo or the video containing the face to generate a talking emotion icon.
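The filtering of the existing icon set and the preset text library by emotion result can be sketched as below. The "anger" entries mirror the example above; all identifiers are illustrative assumptions.

```python
# Toy stand-ins for the existing icon set and the preset text library.
ICON_SET = {"anger": ["angry-face-1", "angry-face-2"], "happy": ["smile-1"]}
TEXT_LIBRARY = {"anger": ["This baby is angry"], "happy": ["So happy!"]}

def recommend(emotion_result):
    """Return (icons, phrases) matching the emotion result for the user to pick."""
    return ICON_SET.get(emotion_result, []), TEXT_LIBRARY.get(emotion_result, [])

icons, phrases = recommend("anger")
print(icons)    # ['angry-face-1', 'angry-face-2']
print(phrases)  # ['This baby is angry']
```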
It should be noted that the emotion icon produced by the emotion icon processing method of this embodiment can be sent through an APP or third-party application to a third party for communication and interaction, and can also be stored locally or in the cloud.
Referring to Fig. 2, the application also provides a user smart device. In one embodiment, the user smart device includes a processor 21, and the processor 21 is configured to execute a computer program to implement the emotion icon processing method described above.
Specifically, the processor 21 is configured to obtain the input information of the user; to perform emotion recognition on the user according to the input information; to obtain corresponding key information according to the result of the emotion recognition; and to generate a corresponding emotion icon according to the key information.
It should be noted that the processor 21 described in this embodiment being configured to obtain the input information of the user can specifically include: the processor 21 is configured to shoot the body-language content of the user with a camera device, and the body-language content is preprocessed to serve as the input information.
For example, the processor 21 can shoot the user's body-language content with multiple cameras, such as the front and rear cameras, and can also take continuous or stereoscopic shots and compare and filter them.
Furthermore, the processor 21 described in this embodiment being configured to preprocess the body-language content to serve as the input information can specifically include: performing action recognition on the body-language content; obtaining, from the result of the action recognition, the target sites whose motion-amplitude change exceeds a preset threshold; and using the movement of those target sites as the input information.
For example, if the user wants to express surprise, the largest motion-amplitude changes are usually in the eyes or mouth, such as the eyes or mouth opening very wide, while the cheeks and other parts show no obvious change; the movement of the eyes or mouth therefore best expresses the emotion.
Of course, considering that the user can express different emotions through combinations of various actions, the body-language content described in this embodiment is the movement of the eyes, mouth, hands, head, trunk, or feet, or of any combination of two or more of them.
The processor 21 described in this embodiment being configured to perform emotion recognition on the user according to the input information can specifically include the following. First, the joints of the human body are modeled: the body is regarded as a rigid system with internal links, comprising bones and joints, and the relative motion of bones and joints constitutes the change of body posture, i.e., what is usually described as movement. Among the body's many joints, according to their weight in influencing emotion, the fingers and toes are ignored and the spine is abstracted into the three joints of neck, chest, and abdomen, yielding a human-body model in which the upper body includes the head, neck, chest, abdomen, two upper arms, and two forearms, and the lower body includes two thighs and two shanks. For each of the selected emotional states, the expression of that state under normal human conditions is chosen and the limb reactions are analyzed in detail. Since the body is abstracted as a rigid model, the first element is the movement of the body's center of gravity, divided into forward, backward, and natural. Besides the movement of the center of gravity, there are the rotations of the joints that change the body's movement; the joints relevant to emotion include the head, chest, shoulders, and elbows, and the corresponding movements are the bending of the head, the rotation of the chest, the swing and extension direction of the upper arms, and the bending of the elbows. Combined with the movement of the center of gravity, these parameters comprise 7 degrees of freedom in total and express the movement of a person's upper body. In this way, emotion recognition can be performed according to the user's body-language content.
For example, the user shoots a photo or video containing the face with the front or rear camera; when the user finishes shooting, the face-image data of the user is collected. The collection can take place when the facial action is largest and most obvious, which can be judged automatically through action recognition.
This embodiment can then perform emotion recognition on the face image based on image emotion analysis technology and output an emotion result. The emotion result described in this embodiment may involve a two-layer judgment: the first layer judges whether emotional information is contained; if not, the emotion result is "no emotional information". If the first layer judges that emotional information is contained, the second layer judgment is entered, i.e., a certain emotion type is determined, where the emotion types include but are not limited to happy, sad, gloomy, angry, fearful, anxious, and so on.
Based on the emotion result, this embodiment can decide whether to obtain a corresponding emotion expression carrier: when the emotion result is "no emotion", no mood icon is generated; when the emotion result is a specific emotion type, the result is compared against the existing mood icons and a preset text library, and mood icons related to that emotion type are provided to the user.
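The two-layer judgment above can be sketched as follows. The confidence threshold and the upstream `scores` dictionary (emotion type mapped to a confidence produced by the image, speech, or text analyzer) are assumptions for illustration, not part of the patent text.

```python
EMOTION_TYPES = {"happy", "sad", "gloomy", "angry", "fearful", "anxious"}

def classify_emotion(scores):
    """Return an emotion type, or None for 'no emotional information'."""
    # Layer 1: does the input contain emotional information at all?
    if not scores or max(scores.values()) < 0.5:  # 0.5 threshold is illustrative
        return None
    # Layer 2: decide the concrete emotion type.
    best = max(scores, key=scores.get)
    return best if best in EMOTION_TYPES else None
```

A `None` result corresponds to skipping mood-icon generation; any other result drives the comparison against the existing icon and text libraries.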
Furthermore, on the basis of the mood icons filtered out for a given emotion type, the mood icon the user selects can be added to the facial photo or the video containing the face, thereby producing a preview of a new mood icon. At this point, the user may choose to save this new mood icon into the existing mood icon set, or use it immediately by sending a message.
It should be noted that the processor 21 in this embodiment is configured to obtain the user's input information, which may specifically include: the processor 21 obtains the user's voice content through a microphone, and the voice content is preprocessed to serve as the input information.
For example, while using the device, the user may directly speak the phrase "very tired"; this embodiment then captures "very tired" through the microphone as the input information.
Correspondingly, the processor 21 in this embodiment is configured to perform emotion recognition on the user according to the input information, which may specifically include: the processor 21 performs emotion recognition on the voice content data using a speech-based emotion analysis technique.
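The preprocessing step before speech emotion analysis might look like the sketch below. It is a deliberately simplified stand-in, assuming only amplitude normalization and silence trimming; a real pipeline would also resample, denoise, and frame the signal, and the silence threshold here is illustrative.

```python
def preprocess_audio(samples):
    """Normalize amplitude to [-1, 1] and trim leading/trailing silence."""
    peak = max((abs(s) for s in samples), default=0)
    if peak == 0:
        return []                     # pure silence: nothing to analyze
    normalized = [s / peak for s in samples]
    eps = 0.01                        # silence threshold (illustrative)
    start = next((i for i, s in enumerate(normalized) if abs(s) > eps), 0)
    tail = next((i for i, s in enumerate(reversed(normalized)) if abs(s) > eps), 0)
    return normalized[start:len(normalized) - tail]
```

The trimmed, normalized samples would then be handed to the speech-emotion analyzer that produces the emotion result.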
For example, the user enters a voice message through the microphone in a voice message input box; when the user finishes the input, the voice data is collected. Emotion recognition is then performed on the voice data using a speech-based emotion analysis technique, and an emotion result is output. As above, the emotion result in this embodiment may involve a two-layer judgment: the first layer determines whether emotional information is present; if not, the emotion result is "no emotional information". If the first layer determines that emotional information is present, the second layer judgment is entered, which determines the specific emotion type; emotion types include, but are not limited to, happy, sad, gloomy, angry, fearful, and anxious.
This embodiment can then decide, based on the emotion result, whether to obtain a corresponding emotion expression carrier: when the emotion result is "no emotion", no mood icon is generated; when the emotion result is a specific emotion type, the result is compared against the existing mood icons, and mood icons related to that emotion type are provided to the user. Finally, a mood icon can be generated on the basis of the existing mood icons and supplied to the user for preview and, after selection, for use.
It should be noted that the processor 21 in this embodiment is configured to obtain the user's input information, which may specifically include: the processor 21 obtains the user's text through a text message input box, and the text is preprocessed to serve as the input information.
Correspondingly, the processor 21 in this embodiment is configured to perform emotion recognition on the user according to the input information, which may specifically include: the processor 21 performs emotion recognition on the text data using a text-based emotion analysis technique.
For example, the user enters written text in a text message input box; when the user finishes the input, the text data is collected. The processor 21 then performs emotion recognition on the written text using a text-based emotion analysis technique and outputs an emotion result.
The emotion result may involve a two-layer judgment: the first layer determines whether emotional information is present; if not, the emotion result is "no emotional information". If the first layer determines that emotional information is present, the second layer judgment is entered, which determines the specific emotion type; in this embodiment, emotion types may include, but are not limited to, happy, sad, gloomy, angry, fearful, and anxious.
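A keyword-based sketch of the text path is shown below. The patent only names "a text-based emotion analysis technique"; a production system would use a trained model, so this lookup table and its vocabulary are purely illustrative.

```python
# Illustrative keyword table; the real analyzer is unspecified in the patent.
EMOTION_KEYWORDS = {
    "angry": {"angry", "furious", "annoyed"},
    "sad": {"sad", "tired", "down"},
    "happy": {"happy", "great", "glad"},
}

def text_emotion(text):
    """Two-layer text judgment: None = no emotional information."""
    words = set(text.lower().split())
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if words & keywords:
            return emotion          # layer 2: concrete emotion type
    return None                     # layer 1: no emotional information
```

With this sketch, the "very tired" example from the speech path would map to a "sad" emotion result, while neutral text yields no result.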
Furthermore, this embodiment decides, based on the emotion result, whether to obtain a corresponding emotion expression carrier. For example, when the emotion result is "no emotion", no mood icon is generated; when the emotion result is a specific emotion type, the result is compared against the existing mood icons, and mood icons related to that emotion type are provided to the user. For instance, when the emotion result is "angry", an "angry" mood icon is generated from the existing mood icons by comparison. Finally, a mood icon can be generated on the basis of the existing mood icons and supplied to the user for preview and, after selection, for use.
In this embodiment, when the input information is body language content, the key information is the emotional action information constituted by the eyes, mouth, hands, head, torso, feet, or any combination of two or more of them; when the input information is speech content, the key information is the emotional voice information constituted by sentences, tone, wording, scene, or any combination of two or more of them; when the input information is text, the key information is the emotional text information constituted by sentences, wording, scene, context, or any combination of two or more of them. Preferably, the mood icon is obtained by simulating the emotional action information constituted by the eyes, mouth, hands, head, torso, feet, or any combination of two or more of them; the emotional voice information constituted by sentences, tone, wording, scene, or any combination of two or more of them; and/or the emotional text information constituted by sentences, wording, scene, context, or any combination of two or more of them.
It should be noted that, after generating the corresponding mood icon from the key information, the processor 21 in this embodiment may also be used to present the mood icon to the user for preview, to judge whether a confirmation operation from the user is obtained, and, if the confirmation operation is obtained, to send and/or store the mood icon.
For example, when the emotion result is "angry", all "angry" expressions in the existing mood icons are filtered out by comparison, along with all "angry"-related content in the preset text library, such as "this baby is angry". The user can select a mood icon or text content directly by touch or mouse click; after selection, it may be added to the original facial photo or the video containing the face, so as to generate a talking mood icon.
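The filtering step in the "angry" example above can be sketched as a simple lookup over tagged libraries. The library contents and field names are invented for illustration; the patent does not specify how icons and texts are stored.

```python
# Illustrative tagged libraries; real storage format is unspecified.
ICON_LIBRARY = [
    {"name": "angry_face", "emotion": "angry"},
    {"name": "smiley", "emotion": "happy"},
    {"name": "steam_ears", "emotion": "angry"},
]
TEXT_LIBRARY = [
    {"text": "this baby is angry", "emotion": "angry"},
    {"text": "best day ever", "emotion": "happy"},
]

def filter_by_emotion(emotion):
    """Return all icons and caption texts tagged with the given emotion."""
    icons = [i["name"] for i in ICON_LIBRARY if i["emotion"] == emotion]
    texts = [t["text"] for t in TEXT_LIBRARY if t["emotion"] == emotion]
    return icons, texts
```

The returned candidates are what the user would be offered for selection before compositing onto the facial photo or video.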
It should be noted that a mood icon processed by the mood icon processing method of this embodiment can be sent through an APP or a third-party application to a third party for communication and interaction, and can also be stored locally or in the cloud.
In addition, the present application also provides a storage medium storing a computer program which, when executed by a processor, implements the mood icon processing method described in the embodiments above.
The above are only preferred embodiments of the present application and do not limit the present application in any form. Although the present application has been disclosed above by way of preferred embodiments, they are not intended to limit it. Any person skilled in the art may, without departing from the scope of the technical solution of the present application, make slight changes or modifications to the technical content disclosed above to produce equivalent embodiments; however, any simple amendment, equivalent change, or modification made to the above embodiments in accordance with the technical essence of the present application, without departing from the content of its technical solution, still falls within the scope of the technical solution of the present application.
Claims (15)
1. A mood icon processing method, characterized in that the mood icon processing method comprises the steps of:
a user smart device collecting input information of a user;
performing emotion recognition on the user according to the input information;
obtaining corresponding key information according to the result of the emotion recognition; and
generating a corresponding mood icon according to the key information.
2. The mood icon processing method according to claim 1, characterized in that the step of the user smart device collecting the input information of the user specifically comprises:
the user smart device capturing body language content of the user through a camera device; and
preprocessing the body language content to serve as the input information.
3. The mood icon processing method according to claim 2, characterized in that the step of preprocessing the body language content to serve as the input information specifically comprises:
performing action recognition on the body language content;
obtaining, according to the result of the action recognition, a target body part whose change in movement amplitude exceeds a preset threshold; and
taking the movement of the target body part as the input information.
4. The mood icon processing method according to claim 3, characterized in that the body language content is a movement of the eyes, mouth, hands, head, torso, feet, or any combination of two or more of them.
5. The mood icon processing method according to claim 1, characterized in that the step of the user smart device collecting the input information of the user specifically comprises:
the user smart device obtaining voice content of the user through a microphone; and
preprocessing the voice content to serve as the input information.
6. The mood icon processing method according to claim 5, characterized in that the step of performing emotion recognition on the user according to the input information specifically comprises:
performing emotion recognition on the data of the voice content using a speech-based emotion analysis technique.
7. The mood icon processing method according to claim 1, characterized in that the step of the user smart device collecting the input information of the user specifically comprises:
the user smart device obtaining text of the user through a text message input box; and
preprocessing the text to serve as the input information.
8. The mood icon processing method according to claim 7, characterized in that the step of performing emotion recognition on the user according to the input information specifically comprises:
performing emotion recognition on the data of the text using a text-based emotion analysis technique.
9. The mood icon processing method according to any one of claims 1 to 8, characterized in that, after the step of generating the corresponding mood icon according to the key information, the method further comprises:
presenting the mood icon to the user for preview;
judging whether a confirmation operation from the user is obtained; and
if the confirmation operation from the user is obtained, sending and/or storing the mood icon.
10. The mood icon processing method according to any one of claims 1 to 8, characterized in that the step of generating the corresponding mood icon according to the key information specifically comprises:
obtaining a corresponding emotion expression carrier according to the key information; and
generating the corresponding mood icon according to the emotion expression carrier.
11. The mood icon processing method according to claim 10, characterized in that the emotion expression carrier is an emotion picture, emotion text, emotion voice, or any combination of the three.
12. The mood icon processing method according to any one of claims 1 to 8, characterized in that:
when the input information is body language content, the key information is the emotional action information constituted by the eyes, mouth, hands, head, torso, feet, or any combination of two or more of them;
when the input information is speech content, the key information is the emotional voice information constituted by sentences, tone, wording, scene, or any combination of two or more of them;
when the input information is text, the key information is the emotional text information constituted by sentences, wording, scene, context, or any combination of two or more of them;
wherein the mood icon is obtained by simulating the emotional action information constituted by the eyes, mouth, hands, head, torso, feet, or any combination of two or more of them; the emotional voice information constituted by sentences, tone, wording, scene, or any combination of two or more of them; and/or the emotional text information constituted by sentences, wording, scene, context, or any combination of two or more of them.
13. The mood icon processing method according to any one of claims 1 to 8, characterized in that the mood icon is an expression static image, expression short video, or expression dynamic image, including a picture, text, and/or voice, for use in communication.
14. The mood icon processing method according to claim 13, characterized in that, when the mood icon is sent to a recipient, it is further used to generate an interaction or a stimulus-response effect according to an on-demand action of the recipient.
15. A user smart device, characterized in that the user smart device comprises a processor configured to execute a computer program so as to implement the mood icon processing method according to any one of claims 1 to 14.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910395252.1A CN110222210A (en) | 2019-05-13 | 2019-05-13 | User's smart machine and its mood icon processing method |
PCT/CN2019/106033 WO2020228208A1 (en) | 2019-05-13 | 2019-09-16 | User smart device and emoticon processing method therefor |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910395252.1A CN110222210A (en) | 2019-05-13 | 2019-05-13 | User's smart machine and its mood icon processing method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110222210A true CN110222210A (en) | 2019-09-10 |
Family
ID=67820927
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910395252.1A Pending CN110222210A (en) | 2019-05-13 | 2019-05-13 | User's smart machine and its mood icon processing method |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110222210A (en) |
WO (1) | WO2020228208A1 (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020228208A1 (en) * | 2019-05-13 | 2020-11-19 | 深圳传音控股股份有限公司 | User smart device and emoticon processing method therefor |
CN113050843A (en) * | 2019-12-27 | 2021-06-29 | 深圳富泰宏精密工业有限公司 | Emotion recognition and management method, computer program, and electronic device |
CN113450804A (en) * | 2021-06-23 | 2021-09-28 | 深圳市火乐科技发展有限公司 | Voice visualization method and device, projection equipment and computer readable storage medium |
CN113747249A (en) * | 2021-07-30 | 2021-12-03 | 北京达佳互联信息技术有限公司 | Live broadcast problem processing method and device and electronic equipment |
CN114745349A (en) * | 2021-01-08 | 2022-07-12 | 上海博泰悦臻网络技术服务有限公司 | Commenting method, electronic device and computer readable storage medium |
CN114883014A (en) * | 2022-04-07 | 2022-08-09 | 南方医科大学口腔医院 | Patient emotion feedback device and method based on biological recognition and treatment couch |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113704504B (en) * | 2021-08-30 | 2023-09-19 | 平安银行股份有限公司 | Emotion recognition method, device, equipment and storage medium based on chat record |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104423547B (en) * | 2013-08-28 | 2018-04-27 | 联想(北京)有限公司 | A kind of input method and electronic equipment |
CN105447164A (en) * | 2015-12-02 | 2016-03-30 | 小天才科技有限公司 | Method and apparatus for automatically pushing chat expressions |
CN106649712B (en) * | 2016-12-20 | 2020-03-03 | 北京小米移动软件有限公司 | Method and device for inputting expression information |
CN109550230A (en) * | 2018-11-28 | 2019-04-02 | 苏州中科先进技术研究院有限公司 | A kind of interactive experience device and method |
CN110222210A (en) * | 2019-05-13 | 2019-09-10 | 深圳传音控股股份有限公司 | User's smart machine and its mood icon processing method |
- 2019
  - 2019-05-13 CN CN201910395252.1A patent/CN110222210A/en active Pending
  - 2019-09-16 WO PCT/CN2019/106033 patent/WO2020228208A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
WO2020228208A1 (en) | 2020-11-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110222210A (en) | User's smart machine and its mood icon processing method | |
CN110900617B (en) | Robot and method for operating the same | |
CN111833418B (en) | Animation interaction method, device, equipment and storage medium | |
CN107340859B (en) | Multi-modal interaction method and system of multi-modal virtual robot | |
US20220150285A1 (en) | Communication assistance system, communication assistance method, communication assistance program, and image control program | |
KR102242779B1 (en) | Robot and method for operating the same | |
JP4395687B2 (en) | Information processing device | |
KR101894573B1 (en) | Smart phone interface management system by 3D digital actor | |
US20160134840A1 (en) | Avatar-Mediated Telepresence Systems with Enhanced Filtering | |
KR102148151B1 (en) | Intelligent chat based on digital communication network | |
KR20130032620A (en) | Method and apparatus for providing moving picture using 3d user avatar | |
WO2022252866A1 (en) | Interaction processing method and apparatus, terminal and medium | |
CA2520036A1 (en) | A behavioural translator for an object | |
CN110121026A (en) | Intelligent capture apparatus and its scene generating method based on living things feature recognition | |
CN113014471A (en) | Session processing method, device, terminal and storage medium | |
KR101913811B1 (en) | A method for analysing face information, and an appratus for analysing face information to present faces, identify mental status or compensate it | |
EP4071760A1 (en) | Method and apparatus for generating video | |
CN110139021B (en) | Auxiliary shooting method and terminal equipment | |
Martin et al. | Manual annotation and automatic image processing of multimodal emotional behaviors: validating the annotation of TV interviews | |
CN112634413B (en) | Method, apparatus, device and storage medium for generating model and generating 3D animation | |
CN113920229A (en) | Virtual character processing method and device and storage medium | |
WO2024066549A1 (en) | Data processing method and related device | |
Barrientos et al. | Cursive: Controlling expressive avatar gesture using pen gesture | |
JPWO2020170441A1 (en) | Information processing equipment, information processing methods, and programs | |
JP7474211B2 (en) | Dialogue program, device and method for forgetting nouns spoken by a user |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||