WO2020228208A1 - User smart device and emoticon processing method therefor - Google Patents


Info

Publication number
WO2020228208A1
WO2020228208A1 · PCT/CN2019/106033 (filed as CN2019106033W)
Authority
WO
WIPO (PCT)
Prior art keywords
emotion
user
emotional
information
input information
Prior art date
Application number
PCT/CN2019/106033
Other languages
French (fr)
Chinese (zh)
Inventor
易欣
周凡贻
尚国强
Original Assignee
深圳传音控股股份有限公司
Priority date
Filing date
Publication date
Application filed by 深圳传音控股股份有限公司 filed Critical 深圳传音控股股份有限公司
Publication of WO2020228208A1 publication Critical patent/WO2020228208A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53Querying
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73Querying

Definitions

  • This application relates to the field of information processing technology, and in particular to an emotion icon processing method and a user smart device applying the emotion icon processing method.
  • When a user enters a message in instant messaging software or on a social platform and wants to insert an emoticon, the user either selects the desired emoticon from the existing set, or types text corresponding to a preset emoticon and then picks one of the emoticon recommendation results.
  • Existing emoticons generally come from presets in application software or are downloaded from the Internet.
  • Emoticons generally refer only to emoji and static or dynamic graphics.
  • Users can also take pictures or videos and add text to generate static or dynamic emoticons.
  • However, the user can only use emoticons that have already been designed, and can only obtain the corresponding recommendation results by manually clicking to select or by text input; yet the messages a user inputs are varied, including not only text but also voice, pictures, videos, and so on, so this solution cannot cover the other information input scenarios.
  • Moreover, the input text can only be compared with the names of existing emoticons to obtain recommendation results related to the input text.
  • When the input text carries emotional meaning but does not exactly equal the name of an existing emoticon, no recommendation result can be obtained.
  • In view of these deficiencies of the prior art, the inventor of the present application has conducted in-depth research and proposes a user smart device and an emotion icon processing method for it.
  • The purpose of this application is to provide a user smart device and an emotion icon processing method for it, which can recognize emotion from user input information and generate corresponding emotion icons from emotion pictures, text, or speech. This achieves a quick input effect across multiple input channels, enables emotion recognition on longer input information, meets individual needs, and to a certain extent realizes artificial intelligence.
  • To solve the above technical problem, the present application provides an emotion icon processing method.
  • In one implementation, the emotion icon processing method includes the steps of:
  • a user smart device acquiring input information of a user;
  • performing emotion recognition on the user according to the input information;
  • obtaining corresponding key information according to the result of the emotion recognition;
  • generating a corresponding emotion icon according to the key information.
  • the step of acquiring the user's input information by the user smart device specifically includes:
  • the user smart device captures the user's body language content through a camera device; the body language content is preprocessed as the input information.
  • In one implementation, the step of preprocessing the body language content as the input information specifically includes: performing action recognition on the body language content; acquiring, according to the result of the action recognition, a target part whose motion amplitude change value is greater than a preset threshold; and taking the action of the target part as the input information.
  • In one implementation, the body language content is an action of the eyes, mouth, hands, head, torso, feet, or any combination of two or more of these.
  • the step of acquiring the user's input information by the user smart device specifically includes:
  • the user smart device obtains the user's voice content through a microphone; the voice content is preprocessed as the input information.
  • the step of performing emotion recognition on the user according to the input information specifically includes:
  • Emotion recognition is performed on the data of the voice content by using voice emotion analysis technology.
  • the step of acquiring the user's input information by the user smart device specifically includes:
  • the user’s smart device obtains the user’s text through the text message input box;
  • the text is preprocessed as the input information.
  • In one implementation, the step of performing emotion recognition on the user according to the input information specifically includes: performing emotion recognition on the data of the text by using text sentiment analysis technology.
  • In one implementation, after the step of generating the corresponding emotion icon according to the key information, the method further includes:
  • prompting the emotion icon to the user for preview; determining whether the user's confirmation operation is obtained; and, if the user's confirmation operation is obtained, sending and/or storing the emotion icon.
  • In one implementation, the step of generating the corresponding emotion icon according to the key information specifically includes: obtaining a corresponding emotion expression carrier according to the key information; and generating the corresponding emotion icon according to the emotion expression carrier.
  • the emotion expression carrier is an emotion picture, an emotion text, an emotion voice, or any combination of the three.
  • When the input information is body language content, the key information is emotional action information composed of the eyes, mouth, hands, head, torso, feet, or any combination of two or more of these;
  • when the input information is voice content, the key information is emotional voice information composed of sentences, tone, words, scenes, or any combination of two or more of these;
  • when the input information is text, the key information is emotional text information composed of sentences, words, scenes, context, or any combination of two or more of these.
  • The emotion icon is obtained by simulating the emotional action information composed of the eyes, mouth, hands, head, torso, feet, or any combination of two or more of these, the emotional voice information composed of sentences, tone, words, scenes, or any combination of two or more of these, and/or the emotional text information composed of sentences, words, scenes, context, or any combination of two or more of these.
  • The emotion icon is a static expression image, a short expression video, or an animated expression image that includes pictures, text, and/or voice and is used for communication.
  • When the emotion icon is sent to a receiver, it is also used to generate an interactive effect according to the receiver's tap (on-demand) action.
  • To solve the above technical problem, this application also provides a user smart device.
  • In one implementation, the user smart device includes a processor, and the processor is configured to execute a computer program to implement the emotion icon processing method described above.
  • With the user smart device and emotion icon processing method provided by this application, the user smart device obtains the user's input information, performs emotion recognition on the user according to the input information, obtains corresponding key information according to the result of the emotion recognition, and generates the corresponding emotion icon according to the key information.
  • This application can perform emotion recognition based on user input information and generate corresponding emotion icons from emotion pictures, text, or speech, achieving a quick input effect across multiple input channels. It can also perform emotion recognition on longer input information, meets individual needs, and to a certain extent realizes artificial intelligence.
  • FIG. 1 is a schematic flowchart of an embodiment of the emotion icon processing method of this application.
  • FIG. 2 is a schematic module diagram of an embodiment of the user smart device of this application.
  • Referring to Figure 1, Figure 1 is a schematic flowchart of an embodiment of the emotion icon processing method of this application. The emotion icon processing method of this embodiment can be applied to user smart devices such as mobile phones, laptops, tablet computers, or wearable devices.
  • the emotion icon processing method described in this embodiment may include but is not limited to the following steps.
  • Step S101: the user smart device obtains the user's input information.
  • Step S102: perform emotion recognition on the user according to the input information.
  • Step S103: obtain corresponding key information according to the result of the emotion recognition.
  • Step S104: generate the corresponding emotion icon according to the key information.
  • In this embodiment, the step of generating the corresponding emotion icon according to the key information specifically includes: obtaining a corresponding emotion expression carrier according to the key information; and generating the corresponding emotion icon according to the emotion expression carrier.
  • It should be noted first that the emotion expression carrier in this embodiment can be an emotion picture, emotion text, emotion voice, or any combination of the three.
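  • As an illustration only (not part of the application), a minimal Python sketch of how steps S101–S104 might be wired together; all function names, the toy lexicon, and the carrier file names below are hypothetical stand-ins.

```python
# Hypothetical sketch of steps S101-S104; every name here is illustrative.
from typing import Optional

def recognize_emotion(input_info: str) -> Optional[str]:
    """S102: toy recognizer. Returns an emotion type, or None when the
    first-level determination finds no emotional information."""
    lexicon = {"so happy": "happy", "very annoying": "angry"}
    for phrase, emotion in lexicon.items():
        if phrase in input_info.lower():
            return emotion  # second-level determination: a concrete type
    return None

def extract_key_info(emotion: str, input_info: str) -> dict:
    """S103: key information derived from the recognition result."""
    return {"emotion": emotion, "source": input_info}

def generate_icon(key_info: dict) -> dict:
    """S104: look up an expression carrier, then render the icon from it."""
    carriers = {"happy": "smiling_face.png", "angry": "angry_face.png"}
    return {"carrier": carriers.get(key_info["emotion"], "neutral.png"),
            "label": key_info["emotion"]}

def process(input_info: str) -> Optional[dict]:
    emotion = recognize_emotion(input_info)          # S102
    if emotion is None:
        return None                                  # no emotion -> no icon
    return generate_icon(extract_key_info(emotion, input_info))  # S103 + S104

print(process("I am so happy today"))
# -> {'carrier': 'smiling_face.png', 'label': 'happy'}
```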
  • In this embodiment, when the input information is body language content, the key information is emotional action information composed of the eyes, mouth, hands, head, torso, feet, or any combination of two or more of these;
  • when the input information is voice content, the key information is emotional voice information composed of sentences, tone, words, scenes, or any combination of two or more of these;
  • when the input information is text, the key information is emotional text information composed of sentences, words, scenes, context, or any combination of two or more of these.
  • In one implementation, the emotion icon is obtained by simulating the emotional action information composed of the eyes, mouth, hands, head, torso, feet, or any combination of two or more of these, the emotional voice information composed of sentences, tone, words, scenes, or any combination of two or more of these, and/or the emotional text information composed of sentences, words, scenes, context, or any combination of two or more of these.
  • It should be noted that the emotion icon in this embodiment is a static expression image, a short expression video, or an animated expression image that includes pictures, text, and/or voice and is used for communication.
  • In addition, when the emotion icon described in this embodiment is sent to the recipient, it is also used to generate an interaction or stress-response effect according to the recipient's tap action.
  • The stress-response effect described in this embodiment means, for example, that after the emotion icon is sent to the recipient it is displayed as "angry", and different effects are produced when the recipient taps the emotion icon at different positions. For example, a double-tap indicating an apology can display a "you are forgiven" stress response; or, if it is detected that the recipient performs no tap action at all, the displayed mood can deepen and intensify, from "I am very angry" to "I am very, very, very angry".
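  • For illustration, a small sketch of this recipient-side stress-response logic, under the assumption that the client reports a tap event (or its absence); the event names and display strings are hypothetical.

```python
# Illustrative recipient-side handling of an "angry" emotion icon; the event
# names and display strings are assumptions, not a defined API.
from typing import Optional

def react_to_recipient(tap: Optional[str]) -> str:
    """Return the effect the icon produces for a given recipient action."""
    if tap is None:
        # no tap detected at all: deepen and intensify the displayed mood
        return "I am very, very, very angry"
    if tap == "double_tap":        # interpreted as an apology
        return "You are forgiven"
    return "I am angry"            # default display for other tap positions

print(react_to_recipient("double_tap"))  # -> You are forgiven
```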
  • It should be noted that the step in S101 of this embodiment in which the user smart device obtains the user's input information may specifically include: the user smart device photographing the user's body language content through a camera; and preprocessing the body language content as the input information.
  • the user's smart device can capture the user's body language content through multiple cameras such as front and rear cameras, and it can also perform continuous shooting and three-dimensional shooting for comparison and screening.
  • Further, the step of preprocessing the body language content as the input information in this embodiment may specifically include: performing action recognition on the body language content; acquiring, according to the result of the action recognition, a target part whose motion amplitude change value is greater than a preset threshold; and taking the action of the target part as the input information.
  • For example, if the user wants to express surprise, the parts with the largest change in motion amplitude are generally the eyes or the mouth: the eyes may be opened very wide, or the mouth may be opened very wide, while the face may show no obvious change. Therefore, the movements of the eyes or mouth best express the emotion.
  • Of course, considering that the user can express different emotions through a combination of multiple actions, the body language content in this embodiment is an action of the eyes, mouth, hands, head, torso, feet, or any combination of two or more of these.
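  • As a sketch only: one way to express the thresholding step above, assuming an upstream action-recognition stage has already produced a per-part motion amplitude change (the parts, values, and threshold are illustrative).

```python
# Keep only the body parts whose motion amplitude change exceeds the preset
# threshold; those parts' actions become the input information.
PRESET_THRESHOLD = 0.5  # illustrative value

def select_target_parts(amplitude_changes: dict) -> dict:
    return {part: amp for part, amp in amplitude_changes.items()
            if amp > PRESET_THRESHOLD}

# e.g. a surprised user: eyes and mouth move a lot, the face barely changes
print(select_target_parts({"eyes": 0.9, "mouth": 0.8, "face": 0.1}))
# -> {'eyes': 0.9, 'mouth': 0.8}
```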
  • Performing emotion recognition on the user based on the input information in this implementation may specifically include the following. First, the joint points of the human body are modeled: the human body is regarded as a rigid, internally connected system of bones and joint points, and the relative movement of bones and joint points constitutes the change of the human body's posture, that is, what is usually described as an action. Among the many joints of the human body, the fingers and toes are ignored according to their low influence on emotion, and the spine is abstracted into three joints at the neck, chest, and abdomen, yielding a human body model in which the upper body includes the head, neck, chest, abdomen, two upper arms, and two forearms, and the lower body includes two thighs and two lower legs.
  • For each of the selected emotional states, the body's normal expression of that state is chosen and the body response is analyzed in detail. Since the human body is abstracted into a rigid body model, the first parameter is the movement of the body's center of gravity, divided into forward, backward, and natural states. Besides the movement of the center of gravity, the next parameters are the rotations of the joint points as the body moves; the joint points related to emotion include the head, chest, shoulders, and elbows.
  • The corresponding actions are the bending of the head, the rotation of the chest, the swing and extension direction of the upper arms, and the bending of the elbows. Combined with the movement of the center of gravity, these parameters comprise seven degrees of freedom in total and express the motion of a person's upper body. In this way, emotion recognition can be performed based on the user's body language content.
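  • To make the seven degrees of freedom concrete, here is a hypothetical data-structure sketch of the rigid upper-body model; the field names and the toy rule are assumptions, not the application's classifier.

```python
# Rigid upper-body pose: centre-of-gravity movement plus the rotations of the
# emotion-related joints give seven degrees of freedom in total.
from dataclasses import dataclass

@dataclass
class UpperBodyPose:
    gravity_shift: str       # 'forward', 'backward' or 'natural'  (1)
    head_bend: float         # bending of the head                 (2)
    chest_rotation: float    # rotation of the chest               (3)
    left_arm_swing: float    # swing/extension of the upper arms   (4)
    right_arm_swing: float   #                                     (5)
    left_elbow_bend: float   # bending of the elbows               (6)
    right_elbow_bend: float  #                                     (7)

def looks_dejected(pose: UpperBodyPose) -> bool:
    """Toy rule: forward centre of gravity plus a strongly bent head is a
    typical bodily expression of a low-energy emotion such as sadness."""
    return pose.gravity_shift == "forward" and pose.head_bend > 30.0
```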
  • For example, the user takes a photo or video containing a face through the front or rear camera; when the user finishes shooting, the user's facial image data is collected. The collection action can occur when the facial movement is largest and most obvious, which can be recognized and judged automatically.
  • Next, this embodiment uses image-based emotion analysis technology to perform emotion recognition on the facial image and output an emotion result. The emotion result of this embodiment can include two levels of determination: the first level determines whether the input contains emotional information; if not, the emotion result is "no emotional information". If the first level determines that emotional information is present, the second level determines which emotion type it is, where the emotion types include, but are not limited to, happy, sad, depressed, angry, scared, nervous, and so on.
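  • The two-level determination can be sketched as a short cascade; the two classifiers below are stubs standing in for whatever image, voice, or text models an implementation would use.

```python
# Two-level emotion determination: level one decides whether any emotional
# information is present; only then does level two pick a concrete type.
from typing import Optional

def contains_emotion(features) -> bool:   # level-one classifier (stub)
    return bool(features)

def classify_emotion(features) -> str:    # level-two classifier (stub)
    return "angry"  # e.g. happy, sad, depressed, angry, scared, nervous

def emotion_result(features) -> Optional[str]:
    if not contains_emotion(features):    # first-level determination
        return None                       # "no emotional information"
    return classify_emotion(features)     # second-level determination
```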
  • This embodiment can determine whether to obtain the corresponding emotion expression carrier according to the emotion result.
  • When the emotion result is "no emotion", no emotion icon is generated; when the emotion result is a certain emotion type, it is compared simultaneously with the existing emotion icons and with the preset text library to provide the user with emotion icons related to that emotion type.
  • Further, on the basis of the emotion icons filtered for a certain emotion type, the user selects the emotion icon to use and adds it to the facial photo or the video containing the face, generating a preview of the new emotion icon. At this point, the user can choose to save the new emotion icon to the existing emotion icon set, or use it immediately by sending a message.
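  • A sketch of this comparison step, with made-up icon and text-library entries: given the recognized emotion type, matching candidates are filtered from the existing icon set and the preset text library for the user to pick and composite onto the photo or video.

```python
# Filter existing icons and the preset text library by the recognized
# emotion type; all entries below are illustrative.
EXISTING_ICONS = [("angry_1.gif", "angry"), ("happy_1.gif", "happy")]
TEXT_LIBRARY = [("The baby is angry.", "angry"), ("So happy!", "happy")]

def candidates_for(emotion: str):
    icons = [name for name, tag in EXISTING_ICONS if tag == emotion]
    texts = [line for line, tag in TEXT_LIBRARY if tag == emotion]
    return icons, texts

icons, texts = candidates_for("angry")
print(icons, texts)  # -> ['angry_1.gif'] ['The baby is angry.']
# a hypothetical compose() would then overlay the user's picks on the
# captured facial photo or video to produce the new emotion icon preview
```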
  • It should be noted that the step in S101 of this embodiment in which the user smart device obtains the user's input information may also specifically include: the user smart device acquiring the user's voice content through a microphone; and preprocessing the voice content as the input information.
  • For example, while using the device the user can directly speak the voice input "very annoying"; this embodiment then acquires "very annoying" through the microphone as the input information.
  • the step of performing emotion recognition on the user according to the input information in S102 of this embodiment may specifically include: using a voice emotion analysis technology to perform emotion recognition on the data of the voice content.
  • For example, when the user finishes entering voice through the microphone in the voice message input box, the voice data input by the user is collected; then voice-based emotion analysis technology is used to perform emotion recognition on the voice data and output an emotion result. The emotion result of this embodiment may include two levels of determination: the first level determines whether the input contains emotional information; if not, the emotion result is "no emotional information". If the first level determines that emotional information is present, the second level determines which emotion type it is, where the emotion types include, but are not limited to, happy, sad, depressed, angry, scared, nervous, and so on.
  • Next, this embodiment can determine, according to the emotion result, whether to obtain the corresponding emotion expression carrier. When the emotion result is "no emotion", no emotion icon is generated; when the emotion result is a certain emotion type, it is compared with the existing emotion icons to provide the user with emotion icons related to that emotion type. Finally, emotion icons can be generated on the basis of the existing emotion icons and provided to the user for preview and selection before use.
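  • The application does not prescribe a particular voice front end; as one assumed possibility, a library such as librosa could summarize the utterance into features for the two-level classifier sketched above.

```python
# One possible voice front end (an assumption, not the application's method):
# summarize the recorded utterance as averaged MFCC features.
import librosa
import numpy as np

def voice_features(path: str) -> np.ndarray:
    y, sr = librosa.load(path, sr=16000)               # mono, 16 kHz
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)                           # average over frames

# features = voice_features("very_annoying.wav")  # e.g. the "very annoying" clip
# emotion = emotion_result(features)              # two-level cascade, as above
```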
  • It should be noted that the step in S101 of this embodiment in which the user smart device obtains the user's input information may specifically include: the user smart device acquiring the user's text through a text message input box; and preprocessing the text as the input information.
  • Correspondingly, the step in S102 of this embodiment of performing emotion recognition on the user according to the input information may specifically include: performing emotion recognition on the text data by using text sentiment analysis technology.
  • For example, when the user finishes entering text in the text message input box, the text data input by the user is collected; then text sentiment analysis technology is used to perform emotion recognition on the text and output an emotion result.
  • The emotion result can include two levels of determination: the first level determines whether the input contains emotional information; if not, the emotion result is "no emotional information". If the first level determines that emotional information is present, the second level determines which emotion type it is.
  • In this embodiment, the emotion types may include, but are not limited to, happy, sad, depressed, angry, scared, nervous, and so on.
  • Further, this embodiment judges, based on the emotion result, whether to obtain the corresponding emotion expression carrier. For example, when the emotion result is "no emotion", no emotion icon is generated; when the emotion result is a certain emotion type, the user is provided with emotion icons related to that emotion type by comparison with existing emotion icons. For example, when the emotion result is "angry", the comparison generates an "angry" emotion icon based on the existing emotion icons. Finally, the resulting emotion icon can be generated on the basis of the existing emotion icons and provided to the user for preview and selection before use.
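  • A toy illustration of why sentence-level analysis beats exact-name matching: "I am so happy" is not the name of any icon, yet a word-level lexicon (a stand-in for a real text sentiment model) still recovers the emotion type.

```python
# Toy text emotion recognition: map any emotional word in the sentence to a
# type, instead of comparing the whole input against icon names.
from typing import Optional

LEXICON = {"happy": "happy", "glad": "happy", "angry": "angry", "sad": "sad"}

def text_emotion(sentence: str) -> Optional[str]:
    hits = [LEXICON[w] for w in sentence.lower().split() if w in LEXICON]
    if not hits:
        return None                            # level one: no emotional info
    return max(set(hits), key=hits.count)      # level two: dominant type

print(text_emotion("I am so happy today"))     # -> 'happy'
```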
  • In this embodiment, after the step of generating the corresponding emotion icon, the method may further include: prompting the emotion icon to the user for preview; judging whether the user's confirmation operation is obtained; and, if the user's confirmation operation is obtained, sending and/or storing the emotion icon.
  • For example, when the emotion result is "angry", the comparison filters out all "angry" expressions among the existing emotion icons, and filters out all "angry"-related content, such as "The baby is angry.", from the preset text library.
  • The user can directly select the emotion icon or the text content by touch or mouse click and add it to the original facial photo or the video containing the face, generating an expressive emotion icon.
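  • A sketch of the confirmation flow that closes the pipeline; the send and store callables are placeholders for the device's actual messaging and storage facilities.

```python
# Preview the generated icon and act only on the user's confirmation.
def preview_and_confirm(icon: dict, send, store) -> bool:
    print(f"Preview: {icon}")                        # prompt for preview
    confirmed = input("Use this icon? [y/n] ").strip().lower() == "y"
    if confirmed:
        send(icon)      # send via the messaging application, and/or
        store(icon)     # save it to the existing emotion icon set
    return confirmed
```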
  • The emotion icon obtained by the emotion icon processing method of this embodiment can be sent to a third party through a third-party application (APP) for communication and interaction, or stored locally or in the cloud.
  • Referring to FIG. 2, the user smart device includes a processor 21, and the processor 21 is configured to execute a computer program to implement the emotion icon processing method described above.
  • The processor 21 is used to obtain the user's input information; the processor 21 is used to perform emotion recognition on the user according to the input information; the processor 21 is used to obtain corresponding key information according to the result of the emotion recognition; and the processor 21 is configured to generate the corresponding emotion icon according to the key information.
  • The processor 21 in this embodiment is used to obtain the user's input information, which may specifically include: the processor 21 being used to capture the user's body language content through a camera device, and to preprocess the body language content as the input information.
  • the processor 21 is configured to capture the user's body language content through multiple cameras such as front and rear cameras, and it may also perform continuous shooting and three-dimensional shooting for comparison and screening.
  • The processor 21 in this embodiment is configured to preprocess the body language content as the input information, which may specifically include: performing action recognition on the body language content; acquiring, according to the result of the action recognition, a target part whose motion amplitude change value is greater than a preset threshold; and taking the motion of the target part as the input information.
  • For example, if the user wants to express surprise, the parts with the largest change in motion amplitude are generally the eyes or the mouth: the eyes may be opened very wide, or the mouth may be opened very wide, while the face may show no obvious change. Therefore, the movements of the eyes or mouth best express the emotion.
  • Of course, considering that the user can express different emotions through a combination of multiple actions, the body language content in this embodiment is an action of the eyes, mouth, hands, head, torso, feet, or any combination of two or more of these.
  • The processor 21 in this embodiment is used to recognize the user's emotions according to the input information, which may specifically include the following. First, the joint points of the human body are modeled: the human body is regarded as a rigid, internally connected system of bones and joint points, and the relative movement of bones and joint points constitutes the change of the human body's posture, that is, what is usually described as an action. Among the many joints of the human body, the fingers and toes are ignored according to their low influence on emotion, and the spine is abstracted into three joints at the neck, chest, and abdomen, yielding a human body model.
  • In this model, the upper body includes the head, neck, chest, abdomen, two upper arms, and two forearms, and the lower body includes two thighs and two lower legs.
  • For each of the selected emotional states, the body's normal expression of that state is chosen and the body response is analyzed in detail. Since the human body is abstracted into a rigid body model, the first parameter is the movement of the body's center of gravity, divided into forward, backward, and natural states; besides the movement of the center of gravity, the next parameters are the rotations of the joint points as the body moves, and the joint points related to emotion include the head, chest, shoulders, and elbows.
  • The corresponding actions are the bending of the head, the rotation of the chest, the swing and extension direction of the upper arms, and the bending of the elbows; combined with the movement of the center of gravity, these parameters comprise seven degrees of freedom in total and express the motion of a person's upper body. In this way, emotion recognition can be performed based on the user's body language content.
  • For example, the user takes a photo or video containing a face through the front or rear camera; when the user finishes shooting, the user's facial image data is collected. The collection action can occur when the facial movement is largest and most obvious, which can be recognized and judged automatically.
  • Next, image-based emotion analysis technology may be used to perform emotion recognition on the facial image and output an emotion result.
  • The emotion result of this embodiment can include two levels of determination: the first level determines whether the input contains emotional information; if not, the emotion result is "no emotional information". If the first level determines that emotional information is present, the second level determines which emotion type it is, where the emotion types include, but are not limited to, happy, sad, depressed, angry, scared, nervous, and so on.
  • This embodiment can determine whether to obtain the corresponding emotion expression carrier according to the emotion result.
  • When the emotion result is "no emotion", no emotion icon is generated; when the emotion result is a certain emotion type, it is compared simultaneously with the existing emotion icons and with the preset text library to provide the user with emotion icons related to that emotion type.
  • Further, on the basis of the emotion icons filtered for a certain emotion type, the user selects the emotion icon to use and adds it to the facial photo or the video containing the face, generating a preview of the new emotion icon. At this point, the user can choose to save the new emotion icon to the existing emotion icon set, or use it immediately by sending a message.
  • The processor 21 in this embodiment is configured to obtain the user's input information, which may specifically further include: the processor 21 being configured to obtain the user's voice content through a microphone, and to preprocess the voice content as the input information.
  • For example, while using the device the user can directly speak the voice input "very annoying"; this embodiment then acquires "very annoying" through the microphone as the input information.
  • The processor 21 in this embodiment is configured to perform emotion recognition on the user according to the input information, which may specifically include: the processor 21 being configured to perform emotion recognition on the data of the voice content by using voice emotion analysis technology.
  • For example, when the user finishes entering voice through the microphone in the voice message input box, the voice data input by the user is collected; then voice-based emotion analysis technology is used to perform emotion recognition on the voice data and output an emotion result.
  • The emotion result of this embodiment may include two levels of determination: the first level determines whether the input contains emotional information; if not, the emotion result is "no emotional information". If the first level determines that emotional information is present, the second level determines which emotion type it is, where the emotion types include, but are not limited to, happy, sad, depressed, angry, scared, nervous, and so on.
  • Next, this embodiment can determine, according to the emotion result, whether to obtain the corresponding emotion expression carrier. When the emotion result is "no emotion", no emotion icon is generated; when the emotion result is a certain emotion type, it is compared with the existing emotion icons to provide the user with emotion icons related to that emotion type. Finally, emotion icons can be generated on the basis of the existing emotion icons and provided to the user for preview and selection before use.
  • The processor 21 in this embodiment is configured to obtain the user's input information, which may specifically include: the processor 21 being configured to obtain the user's text through a text message input box, and to preprocess the text as the input information.
  • The processor 21 of this embodiment is configured to perform emotion recognition on the user according to the input information, which may specifically include: the processor 21 being configured to perform emotion recognition on the text data by using text sentiment analysis technology.
  • For example, when the user finishes entering text in the text message input box, the text data input by the user is collected; then the processor 21 performs emotion recognition on the text by using text sentiment analysis technology and outputs an emotion result.
  • The emotion result can include two levels of determination: the first level determines whether the input contains emotional information; if not, the emotion result is "no emotional information". If the first level determines that emotional information is present, the second level determines which emotion type it is.
  • In this embodiment, the emotion types may include, but are not limited to, happy, sad, depressed, angry, scared, nervous, and so on.
  • Further, this embodiment judges, based on the emotion result, whether to obtain the corresponding emotion expression carrier. For example, when the emotion result is "no emotion", no emotion icon is generated; when the emotion result is a certain emotion type, the user is provided with emotion icons related to that emotion type by comparison with existing emotion icons. For example, when the emotion result is "angry", the comparison generates an "angry" emotion icon based on the existing emotion icons. Finally, the resulting emotion icon can be generated on the basis of the existing emotion icons and provided to the user for preview and selection before use.
  • When the input information is body language content, the key information is emotional action information composed of the eyes, mouth, hands, head, torso, feet, or any combination of two or more of these;
  • when the input information is voice content, the key information is emotional voice information composed of sentences, tone, words, scenes, or any combination of two or more of these;
  • when the input information is text, the key information is emotional text information composed of sentences, words, scenes, context, or any combination of two or more of these.
  • The emotion icon is obtained by simulating the emotional action information composed of the eyes, mouth, hands, head, torso, feet, or any combination of two or more of these, the emotional voice information composed of sentences, tone, words, scenes, or any combination of two or more of these, and/or the emotional text information composed of sentences, words, scenes, context, or any combination of two or more of these.
  • After being configured to generate the corresponding emotion icon according to the key information, the processor 21 in this embodiment may further be used to prompt the emotion icon to the user for preview, to judge whether the user's confirmation operation is obtained, and, if the user's confirmation operation is obtained, to send and/or store the emotion icon.
  • For example, when the emotion result is "angry", the comparison filters out all "angry" expressions among the existing emotion icons, and filters out all "angry"-related content, such as "The baby is angry.", from the preset text library.
  • The user can directly select the emotion icon or the text content by touch or mouse click and add it to the original facial photo or the video containing the face, generating an expressive emotion icon.
  • The emotion icon obtained by the emotion icon processing method of this embodiment can be sent to a third party through a third-party application (APP) for communication and interaction, or stored locally or in the cloud.
  • It should be noted that the present application also provides a storage medium storing a computer program; when the computer program is executed by a processor, it implements the emotion icon processing method described in the above embodiments.

Abstract

A user smart device and an emoticon processing method therefor. The method comprises: a user smart device obtains input information of a user; performs, according to the input information, emotion recognition on the user; obtains corresponding key information according to the emotion recognition result; and generates a corresponding emoticon according to the key information. Emotion recognition can be performed according to the input information of the user, and emotion pictures, text, or speech can be turned into corresponding emoticons, which achieves the quick input effect of multiple input channels, performs emotion recognition on long input information, meets individual needs, and achieves artificial intelligence to a certain extent.

Description

User smart device and emotion icon processing method therefor
This patent application claims priority to Chinese Patent Application No. 201910395252.1, filed on May 13, 2019 by the applicant Shenzhen Transsion Holdings Co., Ltd. (深圳传音控股股份有限公司) and entitled "User Smart Device and Emotion Icon Processing Method Therefor", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of information processing technology, and in particular to an emotion icon processing method and a user smart device applying the emotion icon processing method.
Background
With the rapid development of personal mobile terminal technology, personal mobile communication functions are becoming more and more complete. Correspondingly, the illustrated emoticons used in communication (pictures or combinations of various character strings) are also gradually becoming personalized.
For example, when a user enters a message in instant messaging software or on a social platform and wants to insert an emoticon, the user selects the desired emoticon from the existing set, or types text corresponding to a preset emoticon and then picks one of the emoticon recommendation results. Here, "existing emoticons" generally come from presets in application software or are downloaded from the Internet, and "emoticons" generally refer only to emoji and static or dynamic graphics. Of course, users can also take pictures or videos and add text to generate static or dynamic emoticons.
However, the user can only use emoticons that have already been designed, and can only obtain the corresponding recommendation results by manually clicking to select or by text input; yet the messages a user inputs are varied, including not only text but also voice, pictures, videos, and so on, so this solution cannot cover the other information input scenarios. Moreover, sometimes the input text can only be compared with the names of existing emoticons to obtain recommendation results related to it; when the input text carries emotional meaning but does not exactly equal the name of an existing emoticon, no recommendation result can be obtained. For example, entering "I am so happy" yields no "happy"-related emoticon recommendation; "happy" itself must be entered, which does not match users' real input habits. Furthermore, pre-designed emoticons are not personalized enough to meet the individual needs of today's users.
Likewise, when users generate static or dynamic emoticons by shooting images or videos, they still need to add text or other edits to indicate the emotional state; although this meets individual needs, it is neither smart nor lightweight, increases the user's burden, and gives a poor user experience.
In view of these deficiencies of the prior art, the inventor of the present application has conducted in-depth research and proposes a user smart device and an emotion icon processing method for it.
Summary of the Invention
Technical Problem
Solution to the Problem
Technical Solution
The purpose of this application is to provide a user smart device and an emotion icon processing method for it, which can recognize emotion from user input information and generate corresponding emotion icons from emotion pictures, text, or speech, achieving a quick input effect across multiple input channels, enabling emotion recognition on longer input information, meeting individual needs, and to a certain extent realizing artificial intelligence.
To solve the above technical problem, the present application provides an emotion icon processing method. As one implementation, the emotion icon processing method includes the steps of:
a user smart device acquiring input information of a user;
performing emotion recognition on the user according to the input information;
obtaining corresponding key information according to the result of the emotion recognition;
generating a corresponding emotion icon according to the key information.
As one implementation, the step of the user smart device acquiring the user's input information specifically includes:
the user smart device photographing the user's body language content through a camera;
preprocessing the body language content as the input information.
As one implementation, the step of preprocessing the body language content as the input information specifically includes:
performing action recognition on the body language content;
acquiring, according to the result of the action recognition, a target part whose motion amplitude change value is greater than a preset threshold;
taking the action of the target part as the input information.
As one implementation, the body language content is an action of the eyes, mouth, hands, head, torso, feet, or any combination of two or more of these.
As one implementation, the step of the user smart device acquiring the user's input information specifically includes:
the user smart device acquiring the user's voice content through a microphone;
preprocessing the voice content as the input information.
As one implementation, the step of performing emotion recognition on the user according to the input information specifically includes:
performing emotion recognition on the data of the voice content by using voice emotion analysis technology.
As one implementation, the step of the user smart device acquiring the user's input information specifically includes:
the user smart device acquiring the user's text through a text message input box;
preprocessing the text as the input information.
As one implementation, the step of performing emotion recognition on the user according to the input information specifically includes:
performing emotion recognition on the data of the text by using text sentiment analysis technology.
As one implementation, after the step of generating the corresponding emotion icon according to the key information, the method further includes:
prompting the emotion icon to the user for preview;
judging whether the user's confirmation operation is obtained;
if the user's confirmation operation is obtained, sending and/or storing the emotion icon.
As one implementation, the step of generating the corresponding emotion icon according to the key information specifically includes:
obtaining a corresponding emotion expression carrier according to the key information;
generating the corresponding emotion icon according to the emotion expression carrier.
As one implementation, the emotion expression carrier is an emotion picture, emotion text, emotion voice, or any combination of the three.
As one implementation:
when the input information is body language content, the key information is emotional action information composed of the eyes, mouth, hands, head, torso, feet, or any combination of two or more of these;
when the input information is voice content, the key information is emotional voice information composed of sentences, tone, words, scenes, or any combination of two or more of these;
when the input information is text, the key information is emotional text information composed of sentences, words, scenes, context, or any combination of two or more of these;
wherein the emotion icon is obtained by simulating the emotional action information composed of the eyes, mouth, hands, head, torso, feet, or any combination of two or more of these, the emotional voice information composed of sentences, tone, words, scenes, or any combination of two or more of these, and/or the emotional text information composed of sentences, words, scenes, context, or any combination of two or more of these.
As one implementation, the emotion icon is a static expression image, a short expression video, or an animated expression image that includes pictures, text, and/or voice and is used for communication.
As one implementation, when the emotion icon is sent to a receiver, it is also used to generate an interactive effect according to the receiver's tap action.
To solve the above technical problem, this application further provides a user smart device. As one implementation, the user smart device includes a processor, and the processor is configured to execute a computer program to implement the emotion icon processing method described above.
Beneficial Effects of the Invention
Beneficial Effects
With the user smart device and the emotion icon processing method provided by this application, the user smart device obtains the user's input information, performs emotion recognition on the user according to the input information, obtains corresponding key information according to the result of the emotion recognition, and generates the corresponding emotion icon according to the key information. This application can perform emotion recognition based on user input information and generate corresponding emotion icons from emotion pictures, text, or speech, achieving a quick input effect across multiple input channels; it can also perform emotion recognition on longer input information, meets individual needs, and to a certain extent realizes artificial intelligence.
Brief Description of the Drawings
Description of the Drawings
FIG. 1 is a schematic flowchart of an embodiment of the emotion icon processing method of this application.
FIG. 2 is a schematic module diagram of an embodiment of the user smart device of this application.
Embodiments of the Invention
Implementations of the Present Invention
To further explain the technical means adopted by this application to achieve its intended purpose and their effects, this application is described in detail below with reference to the drawings and preferred embodiments.
Through the description of the specific implementations, a deeper and more specific understanding can be gained of the technical means and effects adopted by this application to achieve its intended purpose; however, the attached drawings are provided for reference and explanation only and are not intended to limit this application.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of an embodiment of the emotion icon processing method of this application. The emotion icon processing method of this embodiment can be applied to user smart devices such as mobile phones, laptops, tablet computers, or wearable devices.
It should be noted that, as shown in FIG. 1, the emotion icon processing method described in this embodiment may include, but is not limited to, the following steps.
Step S101: the user smart device obtains the user's input information.
Step S102: perform emotion recognition on the user according to the input information.
Step S103: obtain corresponding key information according to the result of the emotion recognition.
Step S104: generate the corresponding emotion icon according to the key information.
In this embodiment, the step of generating the corresponding emotion icon according to the key information specifically includes: obtaining a corresponding emotion expression carrier according to the key information; and generating the corresponding emotion icon according to the emotion expression carrier.
It should be noted first that the emotion expression carrier in this embodiment can be an emotion picture, emotion text, emotion voice, or any combination of the three.
In this embodiment, when the input information is body language content, the key information is emotional action information composed of the eyes, mouth, hands, head, torso, feet, or any combination of two or more of these; when the input information is voice content, the key information is emotional voice information composed of sentences, tone, words, scenes, or any combination of two or more of these; when the input information is text, the key information is emotional text information composed of sentences, words, scenes, context, or any combination of two or more of these. In one implementation, the emotion icon is obtained by simulating the emotional action information, the emotional voice information, and/or the emotional text information described above.
It should be noted that the emotion icon in this embodiment is a static expression image, a short expression video, or an animated expression image that includes pictures, text, and/or voice and is used for communication.
In addition, when the emotion icon described in this embodiment is sent to the recipient, it is also used to generate an interaction or stress-response effect according to the recipient's tap action.
The stress-response effect described in this embodiment means, for example, that after the emotion icon is sent to the recipient it is displayed as "angry", and different effects are produced when the recipient taps the emotion icon at different positions: a double-tap indicating an apology can display a "you are forgiven" stress response; or, if it is detected that the recipient performs no tap action at all, the displayed mood can deepen and intensify, from "I am very angry" to "I am very, very, very angry".
It should be noted that the step in S101 of this embodiment in which the user smart device obtains the user's input information may specifically include: the user smart device photographing the user's body language content through a camera; and preprocessing the body language content as the input information.
For example, the user smart device can capture the user's body language content through multiple cameras, such as front and rear cameras, and can also perform continuous shooting and three-dimensional shooting for comparison and screening.
Further, the step of preprocessing the body language content as the input information in this embodiment may specifically include: performing action recognition on the body language content; acquiring, according to the result of the action recognition, a target part whose motion amplitude change value is greater than a preset threshold; and taking the action of the target part as the input information.
For example, if the user wants to express surprise, the parts with the largest change in motion amplitude are generally the eyes or the mouth (the eyes opened very wide, or the mouth opened very wide), while the face may show no obvious change; therefore, the movements of the eyes or mouth best express the emotion.
Of course, considering that the user can express different emotions through a combination of multiple actions, the body language content in this embodiment is an action of the eyes, mouth, hands, head, torso, feet, or any combination of two or more of these.
Performing emotion recognition on the user according to the input information in this embodiment may specifically include the following. First, the joint points of the human body are modeled: the human body is regarded as a rigid, internally connected system of bones and joint points, and the relative movement of bones and joint points constitutes the change of the human body's posture, that is, what is usually described as an action. Among the many joints of the human body, the fingers and toes are ignored according to their low influence on emotion, and the spine is abstracted into three joints at the neck, chest, and abdomen, yielding a human body model in which the upper body includes the head, neck, chest, abdomen, two upper arms, and two forearms, and the lower body includes two thighs and two lower legs. For each of the selected emotional states, the body's normal expression of that state is chosen and the body response is analyzed in detail. Since the human body is abstracted into a rigid body model, the first parameter is the movement of the body's center of gravity, divided into forward, backward, and natural states; besides the movement of the center of gravity, the next parameters are the rotations of the joint points as the body moves, and the joint points related to emotion include the head, chest, shoulders, and elbows. The corresponding actions are the bending of the head, the rotation of the chest, the swing and extension direction of the upper arms, and the bending of the elbows. Combined with the movement of the center of gravity, these parameters comprise seven degrees of freedom in total and express the motion of a person's upper body. In this way, emotion recognition can be performed based on the user's body language content.
For example, the user takes a photo or video containing a face through the front or rear camera; when the user finishes shooting, the user's facial image data is collected. The collection action can occur when the facial movement is largest and most obvious, which can be recognized and judged automatically through action recognition.
Next, this embodiment uses image-based emotion analysis technology to perform emotion recognition on the facial image and output an emotion result. The emotion result of this embodiment can include two levels of determination: the first level determines whether the input contains emotional information; if not, the emotion result is "no emotional information". If the first level determines that emotional information is present, the second level determines which emotion type it is, where the emotion types include, but are not limited to, happy, sad, depressed, angry, scared, nervous, and so on.
This embodiment can determine, according to the emotion result, whether to obtain the corresponding emotion expression carrier. When the emotion result is "no emotion", no emotion icon is generated; when the emotion result is a certain emotion type, it is compared simultaneously with the existing emotion icons and the preset text library to provide the user with emotion icons related to that emotion type.
Further, on the basis of the emotion icons filtered for a certain emotion type, the user selects the emotion icon to use and adds it to the facial photo or the video containing the face, generating a preview of the new emotion icon; at this point, the user can choose to save the new emotion icon to the existing emotion icon set, or use it immediately by sending a message.
需要说明的是,本实施方式所述S101中用户智能设备获取用户的输入信息的步骤,具体还可以包括:用户智能设备通过麦克风获取用户的语音内容;对所述语言内容进行预处理以作为所述输入信息。It should be noted that the step of acquiring the user’s input information by the user’s smart device in S101 of this embodiment may specifically include: the user’s smart device acquiring the user’s voice content through a microphone; and preprocessing the language content as述input information.
举例而言,用户在使用设备时,可以直接进行语音输入“很烦哦”,此时,本实施方式可以通过麦克风获取“很烦哦”作为输入信息。For example, when the user uses the device, he can directly perform voice input "Very annoying". At this time, this embodiment can acquire "Very annoying" through a microphone as input information.
对应地,本实施方式所述S102中根据所述输入信息对用户进行情绪识别的步骤,具体可以包括:采用基于语音情感分析技术对所述语音内容的数据进行情绪识别。Correspondingly, the step of performing emotion recognition on the user according to the input information in S102 of this embodiment may specifically include: using a voice emotion analysis technology to perform emotion recognition on the data of the voice content.
For example, when the user finishes entering voice information into the voice message input box through the microphone, the voice data entered by the user is collected; voice-based emotion analysis is then used to perform emotion recognition on the voice data and output an emotion result. As above, the emotion result may involve two levels of determination: the first level determines whether emotional information is present; if not, the emotion result is that there is no emotional information; if emotional information is present, the second level determines the specific emotion type, including but not limited to happiness, sadness, depression, anger, fear, and tension.
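As a hedged illustration of this step, the sketch below extracts two coarse acoustic features with the librosa library and applies a hand-written rule in place of a trained speech-emotion classifier; the feature choice, the thresholds, and the rule itself are toy assumptions.

```python
import numpy as np
import librosa  # assumed available; any acoustic feature extractor would do

def voice_emotion_result(wav_path: str) -> str:
    """Toy two-level voice emotion determination: first decide whether the
    signal carries emotional information at all, then guess a type."""
    y, sr = librosa.load(wav_path, sr=16000)
    energy = float(np.mean(y ** 2))                            # loudness proxy
    zcr = float(librosa.feature.zero_crossing_rate(y).mean())  # harshness proxy
    if energy < 1e-4:                 # level 1: too quiet to carry emotion
        return "no_emotion"
    if energy > 0.02 and zcr > 0.10:  # level 2: loud and harsh -> anger-like
        return "angry"
    return "sad" if zcr < 0.05 else "happy"
```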
Next, this embodiment may decide, according to the emotion result, whether to obtain a corresponding emotion expression carrier. When the emotion result is that there is no emotion, no emotion icon is generated; when the emotion result is a specific emotion type, comparison with the existing emotion icons provides the user with icons related to that type. Finally, an emotion icon can be generated on the basis of the existing icons and provided to the user for preview, selection, and use.
It should be noted that the step in S101 of this embodiment in which the user smart device obtains the user's input information may specifically include: the user smart device obtaining the user's text through a text message input box, and preprocessing the text to serve as the input information.
Correspondingly, the step in S102 of this embodiment of performing emotion recognition on the user according to the input information may specifically include: performing emotion recognition on the text data using text-based sentiment analysis technology.
For example, when the user finishes entering text into the text message input box, the text data entered by the user is collected; text-based sentiment analysis is then used to perform emotion recognition on the text and output an emotion result.
The emotion result may involve two levels of determination: the first level determines whether emotional information is present; if not, the emotion result is that there is no emotional information; if emotional information is present, the second level determines the specific emotion type. In this embodiment the emotion types may include, but are not limited to, happiness, sadness, depression, anger, fear, and tension.
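As a stand-in for a trained text classifier, even a minimal lexicon lookup exhibits the two-level behavior just described; the word lists below are illustrative assumptions, not a disclosed vocabulary.

```python
# Hypothetical emotion lexicon; a production system would use a trained
# text sentiment model instead of keyword matching.
EMOTION_LEXICON = {
    "happy":   {"happy", "great", "yay", "love"},
    "sad":     {"sad", "cry", "miss", "lonely"},
    "angry":   {"angry", "annoying", "furious", "hate"},
    "nervous": {"nervous", "worried", "anxious"},
}

def text_emotion_result(text: str) -> str:
    """Level 1: any lexicon hit at all? Level 2: the best-matching type."""
    tokens = set(text.lower().split())
    hits = {emo: len(tokens & words) for emo, words in EMOTION_LEXICON.items()}
    best = max(hits, key=hits.get)
    return best if hits[best] > 0 else "no_emotion"

print(text_emotion_result("this is so annoying"))  # -> "angry"
```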
Further, this embodiment decides on the basis of the emotion result whether to obtain a corresponding emotion expression carrier. For example, when the emotion result is that there is no emotion, no emotion icon is generated; when the emotion result is a specific emotion type, comparison with the existing emotion icons provides the user with icons related to that type. For instance, when the emotion result is "angry", the comparison generates an "angry" emotion icon on the basis of the existing icons. Finally, the resulting emotion icon can be generated from the existing icons and provided to the user for preview, selection, and use.
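The comparison against the existing emotion icons and the preset text library can be pictured as a lookup keyed by the recognized emotion type; the library contents and file names below are hypothetical.

```python
from typing import List, Tuple

# Hypothetical stores; a real device would query its saved icon set and
# preset text library instead.
ICON_LIBRARY = {
    "angry": ["angry_face.png", "steam_ears.gif"],
    "happy": ["grin.png", "party.gif"],
}
TEXT_LIBRARY = {
    "angry": ["本宝宝怒了"],
    "happy": ["开心到飞起"],
}

def candidates_for(emotion: str) -> Tuple[List[str], List[str]]:
    """Return the existing icons and preset texts related to an emotion type;
    two empty lists correspond to the no-emotion case, where nothing is generated."""
    if emotion == "no_emotion":
        return [], []
    return ICON_LIBRARY.get(emotion, []), TEXT_LIBRARY.get(emotion, [])

print(candidates_for("angry"))  # (['angry_face.png', 'steam_ears.gif'], ['本宝宝怒了'])
```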
It is worth mentioning that after the step of generating the corresponding emotion icon from the key information, this embodiment may further include: presenting the emotion icon to the user for preview; determining whether the user's confirmation operation has been obtained; and, if it has, sending and/or storing the emotion icon.
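The preview, confirmation, and send/store sequence reduces to a small piece of control logic; the callback names below are placeholders for the device's actual UI and messaging hooks.

```python
def finalize_icon(icon, preview, confirm, send, store) -> bool:
    """Show the generated icon for preview, then send and/or store it only
    once the user's confirmation operation has been obtained."""
    preview(icon)
    if confirm():  # e.g. the user taps a "use this icon" button
        send(icon)
        store(icon)
        return True
    return False   # no confirmation: the preview is discarded

# Toy wiring with print statements standing in for real UI and network calls:
finalize_icon("angry_face.png",
              preview=lambda i: print("preview:", i),
              confirm=lambda: True,
              send=lambda i: print("sent:", i),
              store=lambda i: print("stored:", i))
```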
For example, when the emotion result is "angry", the comparison filters out all the "angry" expressions among the existing emotion icons, and also filters out all the "angry"-related content in the preset text library, such as 本宝宝怒了 ("this baby is furious"). The user can select an emotion icon or text content directly by touch or mouse click, and the selection is then added to the original facial photo or face-containing video, generating an expressive emotion icon.
It should be noted that an emotion icon obtained by the emotion icon processing method of this embodiment can be sent to a third party through a third-party APP for communication and interaction, or stored locally or in the cloud.
Referring next to FIG. 2, the present application also provides a user smart device. As one implementation, the user smart device includes a processor 21, and the processor 21 is configured to execute a computer program to implement the emotion icon processing method described above.
Specifically, the processor 21 is configured to obtain the user's input information; to perform emotion recognition on the user according to the input information; to obtain corresponding key information according to the result of the emotion recognition; and to generate the corresponding emotion icon according to the key information.
It should be noted that in this embodiment the processor 21 being configured to obtain the user's input information may specifically include: the processor 21 being configured to capture the user's body-language content through a camera device and to preprocess the body-language content to serve as the input information.
For example, the processor 21 may capture the user's body-language content through multiple cameras (front, rear, and so on), and may also perform continuous shooting or three-dimensional shooting for comparison and screening.
Further, in this embodiment the processor 21 being configured to preprocess the body-language content to serve as the input information may specifically include: performing action recognition on the body-language content; obtaining, according to the action recognition result, a target part whose change in action amplitude exceeds a preset threshold; and using the action of that target part as the input information.
For example, if the user wants to express surprise, the part with the largest change in action amplitude is usually the eyes or the mouth: the eyes may be opened very wide, or the mouth may be opened very wide, while the rest of the face shows no obvious change. The action amplitude of the eyes or the mouth therefore best expresses the emotion.
Of course, considering that the user can express different emotions through combinations of actions, the body-language content in this embodiment is an action of the eyes, mouth, hands, head, torso, feet, or any combination of two or more of these.
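A sketch of the threshold rule described above, assuming an upstream action-recognition module that reports a per-part amplitude change normalized to [0, 1]; the 0.3 threshold is an illustrative assumption.

```python
from typing import Dict, Optional

PRESET_THRESHOLD = 0.3  # illustrative value for the preset amplitude threshold

def select_target_part(amplitudes: Dict[str, float],
                       threshold: float = PRESET_THRESHOLD) -> Optional[str]:
    """Return the part (eyes, mouth, hands, ...) whose action amplitude
    changed the most, provided it exceeds the threshold; otherwise None."""
    part, change = max(amplitudes.items(), key=lambda kv: kv[1])
    return part if change > threshold else None

# A surprised face: the eyes and mouth move a lot, the torso barely at all.
print(select_target_part({"eyes": 0.8, "mouth": 0.7, "torso": 0.05}))  # -> eyes
```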
In this embodiment, the processor 21 being configured to perform emotion recognition on the user according to the input information may specifically include the following. First, the joint points of the human body are modeled: the body is treated as an internally connected rigid system consisting of bones and joint points, and the relative motion of the bones and joint points constitutes the changes in body posture commonly described as actions. Among the body's many joint points, the fingers and toes are ignored according to their weight of influence on emotion, and the spine is abstracted into three joints at the neck, chest, and abdomen, yielding a body model in which the upper body comprises the head, neck, chest, abdomen, two upper arms, and two forearms, and the lower body comprises two thighs and two lower legs. For each of the selected emotional states, the body's expression of that state under normal conditions is sampled, and the limb response is analyzed in detail. Because the body is abstracted as a rigid model, the first parameter is the movement of the body's center of gravity, classified as forward, backward, or neutral. Besides the center of gravity, the next parameters are the rotations of the joint points as the body moves; the emotion-related joint points are the head, chest, shoulders, and elbows, whose corresponding actions are the bending of the head, the rotation of the chest, the swing and extension direction of the upper arms, and the bending of the elbows. Combined with the movement of the center of gravity, these parameters cover seven degrees of freedom in total and express the motion of a person's upper body. In this way, emotion recognition can be performed according to the user's body-language content.
For example, the user takes a photo or video containing a face with the front or rear camera, and when the user finishes shooting, the user's facial image data is collected. The collection may be triggered when the facial movement is at its largest and most obvious, which can be determined automatically through action recognition.
Next, this embodiment may use image-based emotion analysis technology to perform emotion recognition on the facial image and output an emotion result. The emotion result may involve two levels of determination: the first level determines whether emotional information is present; if not, the emotion result is that there is no emotional information; if emotional information is present, the second level determines the specific emotion type, where the emotion types include, but are not limited to, happiness, sadness, depression, anger, fear, and tension.
This embodiment may decide, according to the emotion result, whether to obtain a corresponding emotion expression carrier. When the emotion result is that there is no emotion, no emotion icon is generated; when the emotion result is a specific emotion type, it is compared against both the existing emotion icons and the preset text library, and the user is offered emotion icons related to that emotion type.
Further, once the emotion icons related to an emotion type have been filtered out, the user selects the icon he or she wants to use and adds it to the facial photo or face-containing video, which generates a preview of the new emotion icon. The user may then save the new icon to the existing emotion icon collection, or use it immediately by sending a message.
It should be noted that in this embodiment the processor 21 being configured to obtain the user's input information may further include: the processor 21 being configured to obtain the user's voice content through a microphone and to preprocess the voice content to serve as the input information.
For example, while using the device the user may directly speak the phrase 很烦哦 ("so annoying"); in this case, this embodiment can capture that phrase through the microphone as the input information.
Correspondingly, in this embodiment the processor 21 being configured to perform emotion recognition on the user according to the input information may specifically include: the processor 21 being configured to perform emotion recognition on the data of the voice content using voice-based emotion analysis technology.
For example, when the user finishes entering voice information into the voice message input box through the microphone, the voice data entered by the user is collected; voice-based emotion analysis is then used to perform emotion recognition on the voice data and output an emotion result. The emotion result may involve two levels of determination: the first level determines whether emotional information is present; if not, the emotion result is that there is no emotional information; if emotional information is present, the second level determines the specific emotion type, including but not limited to happiness, sadness, depression, anger, fear, and tension.
Next, this embodiment may decide, according to the emotion result, whether to obtain a corresponding emotion expression carrier. When the emotion result is that there is no emotion, no emotion icon is generated; when the emotion result is a specific emotion type, comparison with the existing emotion icons provides the user with icons related to that type. Finally, an emotion icon can be generated on the basis of the existing icons and provided to the user for preview, selection, and use.
It should be noted that in this embodiment the processor 21 being configured to obtain the user's input information may specifically include: the processor 21 being configured to obtain the user's text through a text message input box and to preprocess the text to serve as the input information.
Correspondingly, in this embodiment the processor 21 being configured to perform emotion recognition on the user according to the input information may specifically include: the processor 21 being configured to perform emotion recognition on the data of the text using text-based sentiment analysis technology.
For example, when the user finishes entering text into the text message input box, the text data entered by the user is collected; the processor 21 then uses text-based sentiment analysis technology to perform emotion recognition on the text and output an emotion result.
The emotion result may involve two levels of determination: the first level determines whether emotional information is present; if not, the emotion result is that there is no emotional information; if emotional information is present, the second level determines the specific emotion type. In this embodiment the emotion types may include, but are not limited to, happiness, sadness, depression, anger, fear, and tension.
Further, this embodiment decides on the basis of the emotion result whether to obtain a corresponding emotion expression carrier. When the emotion result is that there is no emotion, no emotion icon is generated; when the emotion result is a specific emotion type, comparison with the existing emotion icons provides the user with icons related to that type. For instance, when the emotion result is "angry", the comparison generates an "angry" emotion icon on the basis of the existing icons. Finally, the resulting emotion icon can be generated from the existing icons and provided to the user for preview, selection, and use.
In this embodiment, when the input information is body-language content, the key information is emotional action information constituted by the eyes, mouth, hands, head, torso, feet, or any combination of two or more of these; when the input information is speech content, the key information is emotional voice information constituted by sentences, tone, wording, scene, or any combination of two or more of these; when the input information is text, the key information is emotional text information constituted by sentences, wording, scene, context, or any combination of two or more of these. Preferably, the emotion icon is obtained by simulation from the emotional action information constituted by the eyes, mouth, hands, head, torso, feet, or any combination of two or more of these; the emotional voice information constituted by sentences, tone, wording, scene, or any combination of two or more of these; and/or the emotional text information constituted by sentences, wording, scene, context, or any combination of two or more of these.
It is worth mentioning that in this embodiment, after the processor 21 generates the corresponding emotion icon from the key information, it may further be configured to present the emotion icon to the user for preview, to determine whether the user's confirmation operation has been obtained, and, if so, to send and/or store the emotion icon.
For example, when the emotion result is "angry", the comparison filters out all the "angry" expressions among the existing emotion icons, and also filters out all the "angry"-related content in the preset text library, such as 本宝宝怒了 ("this baby is furious"). The user can select an emotion icon or text content directly by touch or mouse click, and the selection is then added to the original facial photo or face-containing video, generating an expressive emotion icon.
It should be noted that an emotion icon obtained by the emotion icon processing method of this embodiment can be sent to a third party through a third-party APP for communication and interaction, or stored locally or in the cloud.
In addition, the present application further provides a storage medium storing a computer program which, when executed by a processor, implements the emotion icon processing method described in the above embodiments.
The above are merely preferred embodiments of the present application and do not limit the present application in any form. Although the present application has been disclosed above by way of preferred embodiments, they are not intended to limit it. Any person skilled in the art may, without departing from the scope of the technical solution of the present application, use the technical content disclosed above to make minor changes or modifications amounting to equivalent embodiments; any simple modification, equivalent change, or refinement made to the above embodiments in accordance with the technical essence of the present application still falls within the scope of the technical solution of the present application.

Claims (15)

  1. An emotion icon processing method, characterized in that the emotion icon processing method comprises the steps of:
    a user smart device obtaining input information of a user;
    performing emotion recognition on the user according to the input information;
    obtaining corresponding key information according to a result of the emotion recognition; and
    generating a corresponding emotion icon according to the key information.
  2. The emotion icon processing method according to claim 1, characterized in that the step of the user smart device obtaining the input information of the user specifically comprises:
    the user smart device capturing body-language content of the user through a camera device; and
    preprocessing the body-language content to serve as the input information.
  3. The emotion icon processing method according to claim 2, characterized in that the step of preprocessing the body-language content to serve as the input information specifically comprises:
    performing action recognition on the body-language content;
    obtaining, according to a result of the action recognition, a target part whose change in action amplitude is greater than a preset threshold; and
    using an action of the target part as the input information.
  4. The emotion icon processing method according to claim 3, characterized in that the body-language content is an action of the eyes, mouth, hands, head, torso, feet, or any combination of two or more thereof.
  5. The emotion icon processing method according to claim 1, characterized in that the step of the user smart device obtaining the input information of the user specifically comprises:
    the user smart device obtaining voice content of the user through a microphone; and
    preprocessing the voice content to serve as the input information.
  6. The emotion icon processing method according to claim 5, characterized in that the step of performing emotion recognition on the user according to the input information specifically comprises:
    performing emotion recognition on data of the voice content using voice-based emotion analysis technology.
  7. The emotion icon processing method according to claim 1, characterized in that the step of the user smart device obtaining the input information of the user specifically comprises:
    the user smart device obtaining text of the user through a text message input box; and
    preprocessing the text to serve as the input information.
  8. The emotion icon processing method according to claim 7, characterized in that the step of performing emotion recognition on the user according to the input information specifically comprises:
    performing emotion recognition on data of the text using text-based sentiment analysis technology.
  9. The emotion icon processing method according to any one of claims 1-8, characterized in that after the step of generating the corresponding emotion icon according to the key information, the method further comprises:
    presenting the emotion icon to the user for preview;
    determining whether a confirmation operation of the user is obtained; and
    if the confirmation operation of the user is obtained, sending and/or storing the emotion icon.
  10. The emotion icon processing method according to any one of claims 1-8, characterized in that the step of generating the corresponding emotion icon according to the key information specifically comprises:
    obtaining a corresponding emotion expression carrier according to the key information; and
    generating the corresponding emotion icon according to the emotion expression carrier.
  11. The emotion icon processing method according to claim 10, characterized in that the emotion expression carrier is an emotion picture, emotion text, emotion voice, or any combination of the three.
  12. The emotion icon processing method according to any one of claims 1-8, characterized in that:
    when the input information is body-language content, the key information is emotional action information constituted by the eyes, mouth, hands, head, torso, feet, or any combination of two or more thereof;
    when the input information is speech content, the key information is emotional voice information constituted by sentences, tone, wording, scene, or any combination of two or more thereof;
    when the input information is text, the key information is emotional text information constituted by sentences, wording, scene, context, or any combination of two or more thereof;
    wherein the emotion icon is obtained by simulation from the emotional action information constituted by the eyes, mouth, hands, head, torso, feet, or any combination of two or more thereof; the emotional voice information constituted by sentences, tone, wording, scene, or any combination of two or more thereof; and/or the emotional text information constituted by sentences, wording, scene, context, or any combination of two or more thereof.
  13. The emotion icon processing method according to any one of claims 1-8, characterized in that the emotion icon is a static expression image, a short expression video, or an animated expression image that comprises a picture, text, and/or voice and is used for communication.
  14. The emotion icon processing method according to claim 13, characterized in that, when sent to a receiver, the emotion icon is further used to produce an interactive or stress-response effect according to an on-demand action of the receiver.
  15. A user smart device, characterized in that the user smart device comprises a processor configured to execute a computer program to implement the emotion icon processing method according to any one of claims 1-14.
PCT/CN2019/106033 2019-05-13 2019-09-16 User smart device and emoticon processing method therefor WO2020228208A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910395252.1A CN110222210A (en) 2019-05-13 2019-05-13 User's smart machine and its mood icon processing method
CN201910395252.1 2019-05-13

Publications (1)

Publication Number Publication Date
WO2020228208A1 true WO2020228208A1 (en) 2020-11-19

Family

ID=67820927

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/106033 WO2020228208A1 (en) 2019-05-13 2019-09-16 User smart device and emoticon processing method therefor

Country Status (2)

Country Link
CN (1) CN110222210A (en)
WO (1) WO2020228208A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110222210A (en) * 2019-05-13 2019-09-10 深圳传音控股股份有限公司 User's smart machine and its mood icon processing method
CN113050843A (en) * 2019-12-27 2021-06-29 深圳富泰宏精密工业有限公司 Emotion recognition and management method, computer program, and electronic device
CN114745349B (en) * 2021-01-08 2023-12-26 上海博泰悦臻网络技术服务有限公司 Comment method, electronic equipment and computer readable storage medium
CN113450804A (en) * 2021-06-23 2021-09-28 深圳市火乐科技发展有限公司 Voice visualization method and device, projection equipment and computer readable storage medium
CN113747249A (en) * 2021-07-30 2021-12-03 北京达佳互联信息技术有限公司 Live broadcast problem processing method and device and electronic equipment
CN114883014B (en) * 2022-04-07 2023-05-05 南方医科大学口腔医院 Patient emotion feedback device and method based on biological recognition and treatment bed

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108762480A (en) * 2013-08-28 2018-11-06 联想(北京)有限公司 A kind of input method and electronic equipment
CN105447164A (en) * 2015-12-02 2016-03-30 小天才科技有限公司 Method and apparatus for automatically pushing chat expressions
CN106649712A (en) * 2016-12-20 2017-05-10 北京小米移动软件有限公司 Method and device for inputting expression information
CN109550230A (en) * 2018-11-28 2019-04-02 苏州中科先进技术研究院有限公司 A kind of interactive experience device and method
CN110222210A (en) * 2019-05-13 2019-09-10 深圳传音控股股份有限公司 User's smart machine and its mood icon processing method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113704504A (en) * 2021-08-30 2021-11-26 平安银行股份有限公司 Emotion recognition method, device, equipment and storage medium based on chat records
CN113704504B (en) * 2021-08-30 2023-09-19 平安银行股份有限公司 Emotion recognition method, device, equipment and storage medium based on chat record

Also Published As

Publication number Publication date
CN110222210A (en) 2019-09-10

Similar Documents

Publication Publication Date Title
WO2020228208A1 (en) User smart device and emoticon processing method therefor
CN111459290B (en) Interactive intention determining method and device, computer equipment and storage medium
CN111833418B (en) Animation interaction method, device, equipment and storage medium
KR102173479B1 (en) Method, user terminal and server for information exchange communications
US20210192824A1 (en) Automatically generating motions of an avatar
CN107153496B (en) Method and device for inputting emoticons
CN107632706B (en) Application data processing method and system of multi-modal virtual human
US20220150285A1 (en) Communication assistance system, communication assistance method, communication assistance program, and image control program
WO2021109678A1 (en) Video generation method and apparatus, electronic device, and storage medium
TWI482108B (en) To bring virtual social networks into real-life social systems and methods
US10834456B2 (en) Intelligent masking of non-verbal cues during a video communication
CN107480766B (en) Method and system for content generation for multi-modal virtual robots
Biancardi et al. Analyzing first impressions of warmth and competence from observable nonverbal cues in expert-novice interactions
WO2021212733A1 (en) Video adjustment method and apparatus, electronic device, and storage medium
WO2020215590A1 (en) Intelligent shooting device and biometric recognition-based scene generation method thereof
KR102148151B1 (en) Intelligent chat based on digital communication network
CN106649712B (en) Method and device for inputting expression information
WO2022100680A1 (en) Mixed-race face image generation method, mixed-race face image generation model training method and apparatus, and device
CN112911192A (en) Video processing method and device and electronic equipment
CN109166409B (en) Sign language conversion method and device
US20150181161A1 (en) Information Processing Method And Information Processing Apparatus
Geng et al. Affective faces for goal-driven dyadic communication
CN114882861A (en) Voice generation method, device, equipment, medium and product
CN113205569A (en) Image drawing method and device, computer readable medium and electronic device
CN112669846A (en) Interactive system, method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19928823

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 260422)

122 Ep: pct application non-entry in european phase

Ref document number: 19928823

Country of ref document: EP

Kind code of ref document: A1