CN105871696B - Information sending and receiving method and mobile terminal - Google Patents


Info

Publication number
CN105871696B
CN105871696B (application CN201610356810.XA)
Authority
CN
China
Prior art keywords
user
emotion
information
communication message
preset code
Prior art date
Legal status (assumed; not a legal conclusion)
Active
Application number
CN201610356810.XA
Other languages
Chinese (zh)
Other versions
CN105871696A (en)
Inventor
蔡云涛 (Cai Yuntao)
Current Assignee (listed assignee may be inaccurate)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201610356810.XA priority Critical patent/CN105871696B/en
Publication of CN105871696A publication Critical patent/CN105871696A/en
Application granted granted Critical
Publication of CN105871696B publication Critical patent/CN105871696B/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00: User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/04: Real-time or near real-time messaging, e.g. instant messaging [IM]
    • H04L 51/046: Interoperability with other network applications or services
    • H04L 51/06: Message adaptation to terminal or network requirements
    • H04L 51/063: Content adaptation, e.g. replacement of unsuitable content
    • H04L 51/066: Format adaptation, e.g. format conversion or compression
    • H04L 51/07: User-to-user messaging characterised by the inclusion of specific contents
    • H04L 51/18: Commands or executable codes
    • H04L 51/52: User-to-user messaging for supporting social networking services

Landscapes

  • Engineering & Computer Science
  • Computer Networks & Wireless Communication
  • Signal Processing
  • Computing Systems
  • Telephone Function
  • Mobile Radio Communication Systems

Abstract

The invention discloses an information sending method comprising: acquiring emotion information of a user while the user edits a first communication message; acquiring a pre-stored preset code corresponding to the emotion information; combining the preset code and the edited first communication message into a second communication message; and sending the second communication message to a receiving end. The invention also discloses an information receiving method and a mobile terminal. The disclosed sending and receiving methods are convenient to operate and solve the problem of inconvenient communication caused by the use of expression information.

Description

Information sending and receiving method and mobile terminal
Technical Field
The present invention relates to the field of communications technologies, and in particular, to an information sending method, an information receiving method, and a mobile terminal.
Background
In communication using short messages, instant messaging, Email, and similar media, the content is usually textual. Text, however, is relatively abstract and dry, and cannot convey emotion or body language well when vivid, perceptible imagery needs to be communicated.
To address this problem, the sender usually adds expression information (text emoticons and picture expressions) to the message to represent the vivid imagery he or she wants to convey. This approach, however, has many limitations, for example:
1. There are too many expressions to choose from, so finding one that matches the user's actual current expression takes a long time;
2. The operation is cumbersome: the user must constantly switch between entering text and selecting images;
3. The cost is high: picture data is large and occupies more network resources.
All of these problems make communication inconvenient.
Disclosure of Invention
Embodiments of the present invention provide an information sending method, an information receiving method, and a mobile terminal, aiming to solve the prior-art problem of inconvenient communication caused by the use of expression information during communication.
In a first aspect, an information sending method is provided, which is applied to a sending end, and the information sending method includes:
acquiring emotion information of a user when editing a first communication message;
acquiring a pre-stored preset code corresponding to the emotion information;
and combining the preset code and the edited first communication message into a second communication message and sending the second communication message to a receiving end.
In a second aspect, an information receiving method is provided, which is applied to a receiving end, and includes:
receiving a second communication message sent by a sending end, wherein the second communication message comprises: a first communication message edited by a sending-end user and a preset code corresponding to emotion information of the sending-end user;
acquiring feedback information which corresponds to the preset code and represents an emotion, wherein the feedback information comprises: one or more of an image, a sound, and a vibration;
and displaying the first communication message and outputting the feedback information.
In a third aspect, a mobile terminal is provided, which is applied to a sending end, and includes:
a first obtaining module, configured to acquire emotion information of the user while the user edits a first communication message;
a second obtaining module, configured to acquire a pre-stored preset code corresponding to the emotion information acquired by the first obtaining module;
and a sending module, configured to combine the preset code acquired by the second obtaining module and the edited first communication message into a second communication message and send the second communication message to a receiving end.
In a fourth aspect, a mobile terminal is provided, which is applied to a receiving end, and includes:
a receiving module, configured to receive a second communication message sent by a sending end, wherein the second communication message comprises: a first communication message edited by a sending-end user and a preset code corresponding to emotion information of the sending-end user;
a third obtaining module, configured to obtain feedback information which corresponds to the preset code received by the receiving module and represents an emotion, wherein the feedback information comprises: one or more of an image, a sound, and a vibration;
and a processing module, configured to display the first communication message and output the feedback information obtained by the third obtaining module.
Thus, in the embodiments of the invention, the sending end acquires the user's emotion information while the first communication message is being edited, acquires the preset code corresponding to that emotion information, combines the preset code and the edited first communication message into a second communication message, and sends it to the receiving end; after receiving the second communication message, the receiving end acquires the feedback information that corresponds to the preset code and represents the emotion, and outputs it. The sending-end user can therefore express his or her actual feelings accurately, and the receiving-end user can clearly perceive the sending-end user's emotion, ensuring accurate and effective communication with simple operation and convenient use of expression information. In addition, because the sending end transmits a code corresponding to the user's emotion rather than an actual expression image, the data volume is small, which reduces the occupation of network resources and lowers cost. Furthermore, in the technical solution provided by the embodiments, the sending end can acquire the user's emotion automatically and have it fed back at the receiving end through the feedback information, so the user does not need to input emotion images manually, avoiding the cumbersome operation of constantly switching between text input and image selection.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed for describing the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart illustrating a method for sending information according to a first embodiment of the present invention;
fig. 2 is a flowchart illustrating an information sending method according to a second embodiment of the present invention;
fig. 3 is a flowchart of an information receiving method according to a third embodiment of the present invention;
Fig. 4 is a schematic diagram of an information display interface according to a third embodiment of the present invention;
fig. 5 is a block diagram of a mobile terminal according to a fourth embodiment of the present invention;
fig. 6 shows another block diagram of a mobile terminal according to a fourth embodiment of the present invention;
fig. 7 is a block diagram of a mobile terminal according to a fifth embodiment of the present invention;
fig. 8 is a block diagram of a mobile terminal according to a sixth embodiment of the present invention;
fig. 9 is a block diagram of a mobile terminal according to a seventh embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
First embodiment
The embodiment of the invention provides an information sending method applied to a sending end. As shown in fig. 1, the information sending method includes:
s101, obtaining emotion information of a user when editing the first communication message.
The emotion information is information capable of reflecting the emotion of the user, such as a face image of the user obtained through a face recognition technology. The first communication message may be a short message, an instant communication message, an Email message, or other similar messages.
The emotion information can be acquired automatically by the mobile terminal or selectively controlled and acquired by the user. If the mobile terminal automatically acquires the emotion information, the acquiring process of the emotion information generally starts from the time of opening the message editing interface and before clicking the information sending key, or starts from the time of starting the input method and before quitting the input method after opening the message editing interface, or starts from the time of opening the message editing interface and before quitting the message editing interface. The specific case can be designed according to the actual needs, the present invention is not limited to this, and the foregoing is only for illustration. If the emotion information is selectively acquired by the user, the emotion information may be triggered when the user wants to send an expression image to a receiving end.
S102: acquire the pre-stored preset code corresponding to the emotion information.
The mobile terminal at the sending end pre-stores codes corresponding to emotion information. For example, if the emotion information reflects that the user is laughing, the code corresponding to laughing may be set to "A"; if the emotion information also reflects the degree of laughing, that degree may be encoded as well, e.g. "001" for a smile and "002" for a laugh, so that the complete code corresponding to the emotion information is "A001" or "A002". The codes can be designed according to actual requirements; the embodiment of the invention is not limited in this respect.
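For illustration only, a minimal Python sketch of such an encoding scheme follows; the code tables and the helper name are assumptions, since the embodiment fixes only the idea of concatenating a basic-emotion code with a degree code.

    # Hypothetical code tables; the description gives only "A"/"B" and
    # "001"/"002" as examples.
    EMOTION_CODES = {"laughing": "A", "anger": "B"}   # first preset code
    DEGREE_CODES = {"smile": "001", "laugh": "002"}   # second preset code

    def build_preset_code(emotion: str, degree: str) -> str:
        # Concatenate the basic-emotion code and the degree code, e.g. "A001".
        return EMOTION_CODES[emotion] + DEGREE_CODES[degree]

    print(build_preset_code("laughing", "smile"))  # -> A001
    print(build_preset_code("laughing", "laugh"))  # -> A002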
S103: combine the preset code and the edited first communication message into a second communication message and send the second communication message to the receiving end.
After the first communication message has been edited, the preset code and the edited message are combined into a second communication message and sent to the receiving end. On receiving the second communication message, the receiving end parses out the preset code and looks up the feedback information that corresponds to the code and represents the emotion; the feedback information may include one or more of a sound, an image, and a vibration reflecting the user's emotion. Once found, the feedback information is output, for example by displaying the image, playing the sound, or vibrating the handset.
In summary, with the information sending method provided by this embodiment, the sending-end user can accurately express his or her actual feelings, and the receiving-end user can clearly perceive the sending-end user's emotion, ensuring accurate and effective communication with simple operation and convenient use of expression information. In addition, because the sending end transmits a code corresponding to the user's emotion rather than an actual expression image, the data volume is small, which reduces the occupation of network resources and lowers cost. Furthermore, the sending end can acquire the user's emotion automatically and have it fed back at the receiving end through the feedback information, so the user does not need to input emotion images manually, avoiding the cumbersome operation of constantly switching between text input and image selection.
Second embodiment
The embodiment of the invention provides an information sending method applied to a sending end. As shown in fig. 2, the information sending method includes:
s201, acquiring a face image of a user when editing the first communication message, and then entering 203.
The face recognition can be carried out through a front camera of the mobile terminal, and the face image of the user is collected.
Before the facial image of the user is collected, whether the current environment can be normally identified or not can be judged. For example, by detecting whether the ambient light value is greater than or equal to a preset value, if so, the facial image of the user when editing the first communication message is acquired. The preset value is a value of which the ambient light value is suitable for the illumination intensity, and the face recognition requirement can be met when the ambient light value is more than or equal to 80 lux. If the ambient light value is smaller than the preset value, the frequency of 1 min/time can be kept, and ambient light identification is carried out in a circulating mode. The current ambient light value can be detected through a light sensor arranged on the mobile terminal.
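The ambient-light gating described above might look like the following sketch; read_ambient_lux is a hypothetical callable standing in for the terminal's light-sensor API.

    import time

    LIGHT_THRESHOLD_LUX = 80   # preset value given in the description
    RETRY_INTERVAL_S = 60      # the once-per-minute retry cycle

    def wait_for_usable_light(read_ambient_lux) -> None:
        # Poll the light sensor until the ambient light is bright enough
        # for face recognition, rechecking once per minute otherwise.
        while read_ambient_lux() < LIGHT_THRESHOLD_LUX:
            time.sleep(RETRY_INTERVAL_S)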
S202: acquire the pressing force values applied by the user to the screen of the mobile terminal while editing the first communication message, then proceed to S204.
People press the screen with different force under different emotions, and with different force at different intensities of the same emotion. Acquiring the pressing force values applied while editing the first communication message therefore makes it possible to further determine the degree of the user's emotion and enrich its expression.
S203: determine, from the face image, a first preset code representing the user's emotion, then proceed to S205.
Facial features are extracted from the face image using face recognition technology, and the first preset code representing the user's emotion is determined from those features. The first preset code can be designed according to actual requirements: if the user's emotion is laughing, the code corresponding to laughing may be set to "A"; if the user's emotion is anger, the code corresponding to anger may be set to "B".
The user's emotion can be recognized by analysing the facial features: for example, if the smile muscles are clearly stretched, the orbicularis oris is stretched, the mouth corners are raised, and the lips are parted, the user can be taken to be laughing; if the brow muscles are contracted and the brow skin is wrinkled, the user's emotion can be taken to be anger.
Furthermore, because the user may show different emotions while editing the first communication message, to improve the accuracy of the determination this embodiment can collect one or more face images of the user during editing, recognize the user emotion corresponding to each face image, record the number of occurrences of each emotion, and finally take the code corresponding to the most frequent emotion as the first preset code. When the front camera is used for face recognition, a recognition rate of once every 3 s can be used.
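A sketch of this majority vote, assuming one recognized label per captured frame (the label strings and the code table are illustrative):

    from collections import Counter

    def dominant_emotion_code(frame_emotions, emotion_codes):
        # frame_emotions holds one recognized label per face image,
        # e.g. one frame every 3 s while the message is edited.
        most_common_emotion, _count = Counter(frame_emotions).most_common(1)[0]
        return emotion_codes[most_common_emotion]

    # dominant_emotion_code(["laughing", "laughing", "anger"],
    #                       {"laughing": "A", "anger": "B"})  -> "A"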
S204: determine, from the pressing force values, a second preset code representing the degree level of the user's emotion, then proceed to S205.
The second preset code can be designed according to actual requirements: for example, if the user's emotion is a smile within the laughing family, the code corresponding to the smile may be set to "001"; if it is a laugh, the code corresponding to the laugh may be set to "002".
The second preset code may be determined from the pressing force values as follows:
Step 1: calculate the average pressing force applied by the user to the screen of the mobile terminal while editing the first communication message.
That is, every pressing force value applied to the screen during editing of the first communication message is recorded, and the average is computed in real time.
Step 2: calculate the ratio of this average pressing force to the overall average pressing force.
The overall average pressing force is the average force the user has applied to the screen over the whole period of using the mobile terminal, which here can be understood as running from the first press on the screen until the user finishes editing the first communication message. The more presses that have been recorded, the closer this value comes to a stable baseline for comparison.
Step 3: determine the second preset code from the calculated ratio.
A second preset code representing the degree level of the user's emotion can be determined from the ratio of the average pressing force to the overall average pressing force. For example, if the user's emotion has previously been determined to be laughing: a ratio in the interval [0.9, 1.1) (equivalently 90% to 110%, and likewise below) may be judged a smile; a ratio in [0.75, 0.9) or [1.1, 1.25), a laugh; a ratio in [0.5, 0.75) or [1.25, 1.5), loud laughter; and a ratio in (0, 0.5) or [1.5, +∞), wild laughter. Similarly, if the user's emotion has previously been determined to be anger: a ratio in [0.9, 1.1) may be judged annoyance; in [0.75, 0.9) or [1.1, 1.25), anger; in [0.5, 0.75) or [1.25, 1.5), rage; and in (0, 0.5) or [1.5, +∞), fury. It should be noted that these values are merely illustrative and do not specifically limit the embodiments of the invention; the ratio intervals can be adjusted according to the actual situation.
Of course, it is also possible to use the ratio of the overall average pressing force to the editing-time average, or the ratio of the difference between the editing-time average and the overall average to the overall average (i.e. the percentage by which the editing-time average exceeds the baseline); the specific choice can be designed as needed. The ratio may be expressed as a decimal, a fraction, a percentage, and so on.
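Taking the illustrative intervals above at face value, the ratio-to-level mapping could be sketched as follows; the returned degree codes are assumptions.

    def degree_code(editing_avg: float, overall_avg: float) -> str:
        # Map the ratio of the editing-time average pressing force to the
        # long-run baseline onto a degree-level code, "001" (mildest, e.g.
        # smile/annoyance) through "004" (strongest, e.g. wild laughter/fury),
        # using the intervals from the description.
        ratio = editing_avg / overall_avg
        if 0.9 <= ratio < 1.1:
            return "001"
        if 0.75 <= ratio < 0.9 or 1.1 <= ratio < 1.25:
            return "002"
        if 0.5 <= ratio < 0.75 or 1.25 <= ratio < 1.5:
            return "003"
        return "004"   # ratio in (0, 0.5) or [1.5, +inf)

    # degree_code(5.2, 4.0) -> "003" (ratio 1.3)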
S205: combine the first preset code, the second preset code, and the edited first communication message into a second communication message and send the second communication message to the receiving end.
Preferably, the first and second preset codes are sent to the receiving end as an invisible field, i.e. in a form not visible to the user, so that they do not interfere with the display of the message content the user edited.
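The embodiment does not specify a wire format; purely for illustration, the sketch below carries the codes in a JSON field that the receiving UI would simply not render. The key name is an assumption.

    import json

    def build_second_message(first_message: str, preset_code: str) -> str:
        # Attach the preset code(s) to the edited text as a field the
        # receiving end parses but never displays.
        return json.dumps({"body": first_message, "x-emotion-code": preset_code})

    # build_second_message("Hahaha, you are really funny", "A002")
    # -> '{"body": "Hahaha, you are really funny", "x-emotion-code": "A002"}'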
As shown in Table 1, an emotion comparison table may be pre-stored in the mobile terminal to make the information easy to query and call. The table contains the basic emotions (e.g. laughing, anger) with their corresponding codes and facial features, and the emotion degrees corresponding to different pressing forces with their codes and feedback information; the complete code combines the code of the basic emotion with that of the degree level. It should be noted that the contents of Table 1 are merely illustrative and do not specifically limit the embodiments of the invention.
TABLE 1 (emotion comparison table, reproduced as images in the original publication)
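Since the table itself is published only as images, the following sketch shows one plausible way its content could be organized for lookup; all field names, feature strings, and feedback file names are assumptions based on the description.

    EMOTION_TABLE = {
        "A": {  # laughing
            "facial_features": "mouth corners raised, lips parted",
            "degrees": {
                "001": {"label": "smile",
                        "feedback": {"image": "smile.png", "sound": "chuckle.ogg"}},
                "002": {"label": "laugh",
                        "feedback": {"image": "laugh.png", "sound": "laugh.ogg"}},
            },
        },
        "B": {  # anger
            "facial_features": "brow muscles contracted, brow skin wrinkled",
            "degrees": {
                "001": {"label": "annoyed", "feedback": {"vibration": "short"}},
            },
        },
    }

    def lookup_feedback(preset_code: str) -> dict:
        # Split a combined code such as "A001" into basic emotion + degree.
        return EMOTION_TABLE[preset_code[0]]["degrees"][preset_code[1:]]["feedback"]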
In summary, with the information sending method provided by this embodiment, the sending-end user can accurately express his or her actual feelings, and the receiving-end user can clearly perceive the sending-end user's emotion, ensuring accurate and effective communication with simple operation and convenient use of expression information. In addition, because the sending end transmits a code corresponding to the user's emotion rather than an actual expression image, the data volume is small, which reduces the occupation of network resources and lowers cost. The sending end can acquire the user's emotion automatically and have it fed back at the receiving end through the feedback information, so the user does not need to input emotion images manually, avoiding the cumbersome operation of constantly switching between text input and image selection. Furthermore, the embodiment makes full use of components the mobile terminal already has to collect the user's emotion information, minimizing hardware changes to the terminal and reducing development cost.
Third embodiment
The embodiment of the invention provides an information receiving method which is applied to a receiving end. As shown in fig. 3, the information receiving method includes:
s301, receiving a second communication message sent by the sending end.
Wherein the second communication message comprises: the first communication message edited by the sending end user and the preset code corresponding to the emotion information of the sending end user.
The first communication message may be a short message, an instant messaging message, an Email message, or other similar messages.
The emotion information is information that reflects the user's emotion. For example, a face image of the user obtained through face recognition allows the user's emotion to be determined from facial features; likewise, people press the screen with different force under different emotions, and with different force at different intensities of the same emotion, so the pressing force values applied while editing the first communication message allow the degree of the user's emotion to be further determined, enriching its expression.
The mobile terminal at the sending end pre-stores the codes (i.e. preset codes) corresponding to emotion information. For example, if the emotion information reflects that the user is laughing, the code corresponding to laughing may be set to "A"; if the emotion information also reflects the degree of laughing, that degree may be encoded as well, e.g. "001" for a smile and "002" for a laugh, so that the complete code corresponding to the emotion information is "A001" or "A002". When the sending end sends the first communication message, the preset code is sent to the receiving end with it. The codes can be designed according to actual requirements; the embodiment of the invention is not limited in this respect.
S302: acquire the feedback information that corresponds to the preset code and represents the emotion.
The feedback information comprises one or more of an image, a sound, and a vibration that reflect the user's emotion: cheerful image content can indicate the user's joy, bright laughter can indicate a pleasant mood, vibration can indicate the degree of the user's anger, and so on. Images, sounds, and vibrations can be used alone or in combination, as designed for the actual requirements; the richer their types and number, the more accurately the user's emotion can be expressed and the more accurate and effective the communication.
The feedback information is pre-stored in the receiving-end mobile terminal, so the receiving end only needs to receive the first communication message edited by the sending-end user and the preset code corresponding to that user's emotion information; it does not need to receive large expression images, which reduces the occupation of network resources and lowers cost.
S303: display the first communication message and output the feedback information.
After the feedback information corresponding to the preset code is found, the feedback information is output and the first communication message is displayed on the screen of the receiving end. Outputting the feedback information may include displaying an image, playing a sound, and controlling vibration of the mobile terminal. As shown in fig. 4, besides displaying the first communication message edited by the sending-end user ("Hahaha, you are really funny"), the receiving end displays an image corresponding to the sending-end user's emotion (the animated figure in fig. 4) and plays the corresponding laughter (indicated by the audio icon in fig. 4).
The preset code received by the receiving end is an invisible field, so it does not interfere with the display of the message content edited by the sending-end user.
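For illustration, a receiving-end sketch under the same assumed JSON wire format as the sending-end sketch above; the print calls stand in for the terminal's real display, audio, and vibrator APIs, and the lookup table is a minimal version of the assumed Table 1 structure.

    import json

    FEEDBACK = {  # minimal pre-stored lookup; file names are assumptions
        "A001": {"image": "smile.png", "sound": "chuckle.ogg"},
        "A002": {"image": "laugh.png", "sound": "laugh.ogg"},
    }

    def handle_incoming(payload: str) -> None:
        data = json.loads(payload)                # parse the second message
        print("TEXT:", data["body"])              # display the first message
        feedback = FEEDBACK.get(data.get("x-emotion-code", ""), {})
        if "image" in feedback:
            print("SHOW IMAGE:", feedback["image"])    # render in the chat UI
        if "sound" in feedback:
            print("PLAY SOUND:", feedback["sound"])    # play via the audio API
        if "vibration" in feedback:
            print("VIBRATE:", feedback["vibration"])   # drive the vibrator

    handle_incoming('{"body": "Hahaha, you are really funny", "x-emotion-code": "A001"}')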
In the embodiment of the invention, the emotion comparison table shown in Table 1 of the second embodiment can be pre-stored in the mobile terminal to make the information easy to query and call.
In summary, with the information receiving method provided by this embodiment, the receiving end receives the preset code corresponding to the emotion information sent by the sending end and looks up the information that corresponds to the code and feeds back the sending-end user's emotion. The sending-end user can thus accurately express his or her actual feelings, and the receiving-end user can clearly perceive them, ensuring accurate and effective communication with simple operation and convenient use of expression information. In addition, because the receiving end receives a code corresponding to the sending-end user's emotion rather than an actual expression image, the data volume is small, which reduces the occupation of network resources and lowers cost.
Fourth embodiment
The embodiment of the invention provides a mobile terminal which is applied to a sending end. As shown in fig. 5, the mobile terminal includes:
a first obtaining module 501, configured to obtain emotion information of a user when editing the first communication message.
The emotion information is information capable of reflecting the emotion of the user, such as a face image of the user obtained through a face recognition technology. The first communication message may be a short message, an instant communication message, an Email message, or other similar messages.
The emotion information may be obtained automatically by the first obtaining module 501, or may be obtained by the user selectively controlling. If the first obtaining module 501 automatically obtains the emotion information, the obtaining process of the emotion information generally starts from the time when the message editing interface is opened to the time when the information sending key is clicked, or starts from the time when the input method is started to the time when the input method is exited after the message editing interface is opened, or starts from the time when the message editing interface is opened to the time when the message editing interface is exited. The specific case can be designed according to the actual needs, the present invention is not limited to this, and the foregoing is only for illustration. If the emotion information is selectively acquired by the user, the emotion information may be triggered when the user wants to send an expression image to a receiving end.
A second obtaining module 502, configured to obtain a pre-stored preset code corresponding to the emotion information obtained by the first obtaining module 501.
The mobile terminal at the sending end pre-stores codes corresponding to emotion information. For example, if the emotion information reflects that the user is laughing, the code corresponding to laughing may be set to "A"; if the emotion information also reflects the degree of laughing, that degree may be encoded as well, e.g. "001" for a smile and "002" for a laugh, so that the complete code corresponding to the emotion information is "A001" or "A002". The codes can be designed according to actual requirements; the embodiment of the invention is not limited in this respect.
A sending module 503, configured to combine the preset code acquired by the second obtaining module 502 and the edited first communication message into a second communication message and send the second communication message to the receiving end.
After the first communication message has been edited, the preset code and the edited message are combined into a second communication message and sent to the receiving end. On receiving it, the receiving end parses out the preset code and looks up the feedback information that corresponds to the code and represents the emotion; the feedback information may include one or more of a sound, an image, and a vibration reflecting the user's emotion. Once found, the feedback information is output, for example by displaying the image, playing the sound, or vibrating the handset.
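For illustration, the three sender-side modules (501-503) can be pictured as the following composition; all names are assumptions, and the wire format matches the earlier sketches.

    import json

    class SendingTerminal:
        def __init__(self, get_emotion_info, code_for, transport):
            self.get_emotion_info = get_emotion_info  # first obtaining module 501
            self.code_for = code_for                  # second obtaining module 502
            self.transport = transport                # sending module 503

        def send(self, first_message: str) -> None:
            emotion_info = self.get_emotion_info()
            preset_code = self.code_for(emotion_info)
            second_message = json.dumps(
                {"body": first_message, "x-emotion-code": preset_code})
            self.transport(second_message)

    # Usage with trivial stand-ins:
    terminal = SendingTerminal(
        get_emotion_info=lambda: "laughing",
        code_for={"laughing": "A001", "anger": "B001"}.get,
        transport=print,
    )
    terminal.send("Hahaha, you are really funny")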
In summary, with the mobile terminal provided by this embodiment, the sending-end user can accurately express his or her actual feelings, and the receiving-end user can clearly perceive the sending-end user's emotion, ensuring accurate and effective communication with simple operation and convenient use of expression information. In addition, because the sending-end mobile terminal transmits a code corresponding to the user's emotion rather than an actual expression image, the data volume is small, which reduces the occupation of network resources and lowers cost. Furthermore, the mobile terminal can acquire the user's emotion automatically and have it fed back by the receiving-end mobile terminal through the feedback information, so the user does not need to input emotion images manually, avoiding the cumbersome operation of constantly switching between text input and image selection.
Further, as shown in fig. 6, the first obtaining module 501 includes:
the acquisition unit 5011 is configured to acquire a face image of the user when editing the first communication message.
The face recognition can be carried out through a front camera of the mobile terminal, and the face image of the user is collected.
Before the facial image of the user is collected, whether the current environment can be normally identified or not can be judged. For example, by detecting whether the ambient light value is greater than or equal to a preset value, if so, the facial image of the user when editing the first communication message is acquired. The preset value is a value of which the ambient light value is suitable for the illumination intensity, and the face recognition requirement can be met when the ambient light value is more than or equal to 80 lux. If the ambient light value is smaller than the preset value, the frequency of 1 min/time can be kept, and ambient light identification is carried out in a circulating mode. The current ambient light value can be detected through a light sensor arranged on the mobile terminal.
Accordingly, as shown in fig. 6, the second obtaining module 502 includes:
a first determining unit 5021, configured to determine a first preset code representing the emotion of the user according to the facial image acquired by the acquiring unit 5011.
Facial features are extracted from the face image using face recognition technology, and the first preset code representing the user's emotion is determined from those features. The first preset code can be designed according to actual requirements: if the user's emotion is laughing, the code corresponding to laughing may be set to "A"; if the user's emotion is anger, the code corresponding to anger may be set to "B".
The user's emotion can be recognized by analysing the facial features: for example, if the smile muscles are clearly stretched, the orbicularis oris is stretched, the mouth corners are raised, and the lips are parted, the user can be taken to be laughing; if the brow muscles are contracted and the brow skin is wrinkled, the user's emotion can be taken to be anger.
Further, as shown in fig. 6, the first obtaining module 501 further includes:
the obtaining unit 5012 is configured to obtain a pressing force value of the user on the screen of the mobile terminal when editing the first communication message.
The pressing force degree of the person on the screen is different under different emotions, and the pressing force degree of the person on the screen is different under different degrees of the same emotion. Therefore, by acquiring the pressing force value of the user on the screen of the mobile terminal when the user edits the first communication message, the degree of the emotion of the user can be further determined so as to enrich the expression of the emotion of the user.
Correspondingly, the second obtaining module 502 further includes:
a second determining unit 5022, configured to determine a second preset code representing the degree level of the user emotion according to the pressing force value acquired by the acquiring unit 5012.
The second preset code can be designed according to actual requirements: for example, if the user's emotion is a smile within the laughing family, the code corresponding to the smile may be set to "001"; if it is a laugh, the code corresponding to the laugh may be set to "002".
As shown in fig. 6, the sending module 503 includes:
a first sending unit 5031, configured to combine the first preset code determined by the first determining unit 5021, the second preset code determined by the second determining unit 5022, and the edited first communication message into a second communication message, and send the second communication message to the receiving end.
The first preset code and the second preset code are sent to the receiving end as an invisible field, i.e. in a form not visible to the user, so that they do not interfere with the display of the message content edited by the user.
Further, as shown in fig. 6, the first determining unit 5021 includes:
a first determining subunit 50211 is used for determining the facial features of the facial image.
The second determining subunit 50212 is configured to determine a first preset code according to the face feature determined by the first determining subunit 50211.
The user's emotion can be recognized by analysing the facial features: for example, if the smile muscles are clearly stretched, the orbicularis oris is stretched, the mouth corners are raised, and the lips are parted, the user can be taken to be laughing; if the brow muscles are contracted and the brow skin is wrinkled, the user's emotion can be taken to be anger.
Further, as shown in fig. 6, the acquisition unit 5011 includes:
the detecting subunit 50111 is configured to detect whether the ambient light value is greater than or equal to a preset value.
The first capturing sub-unit 50112 is configured to capture a face image of the user at the time of editing the first communication message when the ambient light value detected by the detecting sub-unit 50111 is greater than or equal to a preset value.
Before the facial image of the user is collected, whether the current environment can be normally identified or not can be judged. For example, by detecting whether the ambient light value is greater than or equal to a preset value, if so, the facial image of the user when editing the first communication message is acquired. The preset value is a value of which the ambient light value is suitable for the illumination intensity, and the face recognition requirement can be met when the ambient light value is more than or equal to 80 lux. If the ambient light value is smaller than the preset value, the ambient light identification can be carried out by keeping the frequency cycle of 1 min/time. The current ambient light value can be detected through a light sensor arranged on the mobile terminal.
Further, as shown in fig. 6, the second determining unit 5022 includes:
a first calculating subunit 50221 is configured to calculate an average pressing force value of the user on the screen of the mobile terminal when editing the first communication message.
That is, the first calculating subunit 50221 records each pressing force value that the screen of the mobile terminal is subjected to when the first communication message is edited, and calculates the average pressing force value in real time.
A second calculating subunit 50222, configured to calculate a ratio of the average pressing force value calculated by the first calculating subunit 50221 to the total average pressing force value.
The overall average pressing force is the average force the user has applied to the screen over the whole period of using the mobile terminal, which here can be understood as running from the first press on the screen until the user finishes editing the first communication message. The more presses that have been recorded, the closer this value comes to a stable baseline for comparison.
A third determining subunit 50223, configured to determine a second preset code according to the ratio calculated by the second calculating subunit 50222.
A second preset code representing the degree level of the user's emotion can be determined from the ratio of the average pressing force to the overall average pressing force. For example, if the user's emotion has previously been determined to be laughing: a ratio in the interval [0.9, 1.1) (equivalently 90% to 110%, and likewise below) may be judged a smile; a ratio in [0.75, 0.9) or [1.1, 1.25), a laugh; a ratio in [0.5, 0.75) or [1.25, 1.5), loud laughter; and a ratio in (0, 0.5) or [1.5, +∞), wild laughter. Similarly, if the user's emotion has previously been determined to be anger: a ratio in [0.9, 1.1) may be judged annoyance; in [0.75, 0.9) or [1.1, 1.25), anger; in [0.5, 0.75) or [1.25, 1.5), rage; and in (0, 0.5) or [1.5, +∞), fury. It should be noted that these values are merely illustrative and do not specifically limit the embodiments of the invention; the ratio intervals can be adjusted according to the actual situation.
Of course, it is also possible to use the ratio of the overall average pressing force to the editing-time average, or the ratio of the difference between the editing-time average and the overall average to the overall average (i.e. the percentage by which the editing-time average exceeds the baseline); the specific choice can be designed as needed. The ratio may be expressed as a decimal, a fraction, a percentage, and so on.
Further, as shown in fig. 6, the acquisition unit 5011 includes:
the second capturing subunit 50113 is configured to capture one or more facial images of the user while editing the first communication message.
Wherein the first determining unit 5021 comprises:
the identifying subunit 50213 is configured to identify the emotion of the user corresponding to each facial image acquired by the second acquiring subunit 50113, and record the number of occurrences of each emotion of the user.
A processing subunit 50214, configured to use the code corresponding to the emotion of the user with the largest occurrence number recorded by the identifying subunit 50213 as the first preset code.
In this way, one or more face images of the user can be collected while the first communication message is edited; the user emotion corresponding to each face image is then recognized, the number of occurrences of each emotion is recorded, and the code corresponding to the most frequent emotion is finally taken as the first preset code. When the front camera is used for face recognition, a recognition rate of once every 3 s can be used.
Further, as shown in fig. 6, the sending module 503 includes:
a second sending unit 5032, configured to send the preset code to the receiving end in the form of an invisible field.
In the embodiment of the invention, the emotion comparison table shown in Table 1 of the second embodiment can be pre-stored in the mobile terminal to make the information easy to query and call.
In summary, with the mobile terminal provided by this embodiment, the sending-end user can accurately express his or her actual feelings, and the receiving-end user can clearly perceive the sending-end user's emotion, ensuring accurate and effective communication with simple operation and convenient use of expression information. In addition, because the sending-end mobile terminal transmits a code corresponding to the user's emotion rather than an actual expression image, the data volume is small, which reduces the occupation of network resources and lowers cost. The mobile terminal can acquire the user's emotion automatically and have it fed back by the receiving-end mobile terminal through the feedback information, so the user does not need to input emotion images manually, avoiding the cumbersome operation of constantly switching between text input and image selection. Furthermore, the embodiment makes full use of components the mobile terminal already has to collect the user's emotion information, minimizing hardware changes to the terminal and reducing development cost.
Fifth embodiment
The embodiment of the invention provides a mobile terminal which is applied to a receiving end. As shown in fig. 7, the mobile terminal includes:
a receiving module 701, configured to receive the second communication message sent by the sending end.
Wherein the second communication message comprises: the first communication message edited by the sending end user and the preset code corresponding to the emotion information of the sending end user.
The first communication message may be a short message, an instant messaging message, an Email message, or other similar messages.
The emotion information is information that reflects the user's emotion. For example, a face image of the user obtained through face recognition allows the user's emotion to be determined from facial features; likewise, people press the screen with different force under different emotions, and with different force at different intensities of the same emotion, so the pressing force values applied while editing the first communication message allow the degree of the user's emotion to be further determined, enriching its expression.
The mobile terminal at the sending end pre-stores the codes (i.e. preset codes) corresponding to emotion information. For example, if the emotion information reflects that the user is laughing, the code corresponding to laughing may be set to "A"; if the emotion information also reflects the degree of laughing, that degree may be encoded as well, e.g. "001" for a smile and "002" for a laugh, so that the complete code corresponding to the emotion information is "A001" or "A002". When the sending end sends the first communication message, the preset code is sent to the receiving end with it. The codes can be designed according to actual requirements; the embodiment of the invention is not limited in this respect.
A third obtaining module 702, configured to obtain feedback information corresponding to the preset code and used for representing the emotion.
The feedback information includes one or more of an image, a sound, and a vibration that reflect the user's emotion: cheerful image content can indicate the user's joy, bright laughter can indicate a pleasant mood, vibration can indicate the degree of the user's anger, and so on. Images, sounds, and vibrations can be used alone or in combination, as designed for the actual requirements; the richer their types and number, the more accurately the user's emotion can be expressed and the more accurate and effective the communication.
The feedback information is pre-stored in the receiving-end mobile terminal, so the receiving end only needs to receive the first communication message edited by the sending-end user and the preset code corresponding to that user's emotion information; it does not need to receive large expression images, which reduces the occupation of network resources and lowers cost.
The processing module 703 is configured to display the first communication message and output the feedback information acquired by the third acquiring module 702.
After the feedback information corresponding to the preset code is found, the feedback information is output and the first communication message is displayed on the screen of the receiving end. Outputting the feedback information may include displaying an image, playing a sound, and controlling vibration of the mobile terminal. As shown in fig. 4, besides displaying the first communication message edited by the sending-end user ("Hahaha, you are really funny"), the receiving end displays an image corresponding to the sending-end user's emotion (the animated figure in fig. 4) and plays the corresponding laughter (indicated by the audio icon in fig. 4).
The preset code received by the receiving end is an invisible field, so it does not interfere with the display of the message content edited by the sending-end user.
Further, the third obtaining module 702 is specifically configured to acquire the pre-stored feedback information corresponding to the preset code.
In the embodiment of the invention, the emotion comparison table shown in Table 1 of the second embodiment can be pre-stored in the mobile terminal to make the information easy to query and call.
In summary, the mobile terminal provided by this embodiment receives the preset code corresponding to the emotion information sent by the sending end and looks up the information that corresponds to the code and feeds back the sending-end user's emotion. The sending-end user can thus accurately express his or her actual feelings, and the receiving-end user can clearly perceive them, ensuring accurate and effective communication with simple operation and convenient use of expression information. In addition, because the receiving end receives a code corresponding to the sending-end user's emotion rather than an actual expression image, the data volume is small, which reduces the occupation of network resources and lowers cost.
Sixth embodiment
Fig. 8 is a block diagram of a mobile terminal according to another embodiment of the present invention. The mobile terminal 800 shown in fig. 8 includes: at least one processor 801, a memory 802, at least one network interface 804, and other user interfaces 803. The various components in the mobile terminal 800 are coupled together by a bus system 805. It is understood that the bus system 805 is used to enable communications among these connected components. In addition to a data bus, the bus system 805 includes a power bus, a control bus, and a status signal bus. For clarity of illustration, the various buses are collectively labeled as bus system 805 in fig. 8.
The user interface 803 may include, among other things, a display, a keyboard or pointing device (e.g., a mouse, trackball), a touch pad or touch screen, etc.
It will be appreciated that the memory 802 in embodiments of the invention may be volatile memory or non-volatile memory, or may include both. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The memory 802 of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
In some embodiments, memory 802 stores the following elements, executable modules or data structures, or a subset thereof, or an expanded set thereof: an operating system 8021 and application programs 8022.
The operating system 8021 includes various system programs, such as a framework layer, a core library layer, and a driver layer, for implementing various basic services and processing hardware-based tasks. The application programs 8022 include various applications, such as a media player and a browser, for implementing various application services. A program implementing the method according to an embodiment of the present invention may be included in the application programs 8022.
In the embodiment of the present invention, by calling a program or an instruction stored in the memory 802, specifically, a program or an instruction stored in the application programs 8022, the processor 801 is configured to acquire emotion information of a user when editing a first communication message and a pre-stored preset code corresponding to the emotion information, combine the preset code and the edited first communication message into a second communication message, and send the second communication message to a receiving end.
The methods disclosed in the embodiments of the present invention described above may be applied to the processor 801 or implemented by the processor 801. The processor 801 may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or by instructions in the form of software in the processor 801. The processor 801 may be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in connection with the embodiments of the present invention may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as a RAM, a flash memory, a ROM, a PROM or EPROM, or a register. The storage medium is located in the memory 802; the processor 801 reads the information in the memory 802 and completes the steps of the above method in combination with its hardware.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units configured to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described herein may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
Optionally, the processor 801 is further configured to: collect a facial image of the user while the user edits the first communication message, and determine a first preset code representing the user's emotion according to the facial image.

Optionally, the processor 801 is further configured to: acquire the pressing force values applied by the user to the screen of the mobile terminal while editing the first communication message, and determine, according to the pressing force values, a second preset code representing the degree level of the user's emotion.

Optionally, the processor 801 is further configured to: combine the first preset code, the second preset code, and the edited first communication message into a second communication message and send it to the receiving end.

Optionally, the processor 801 is further configured to: determine the facial features of the facial image, and determine the first preset code according to the facial features.

Optionally, the processor 801 is further configured to: detect whether the ambient light value is greater than or equal to a preset value, and if so, collect a facial image of the user while the first communication message is edited.
Optionally, the processor 801 is further configured to: calculate the average pressing force value applied by the user to the screen of the mobile terminal while editing the first communication message, calculate the ratio of this average to the total average pressing force value, and determine the second preset code according to the calculated ratio.
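For example, if the average pressing force while editing is 1.3 times the user's long-run average, the emotion may be judged stronger than usual. A sketch of this computation follows; the bucket thresholds and level codes are assumptions, not values taken from the embodiment, and the total average is assumed to be positive:

    // Determine the second preset code from the ratio of the editing-time
    // average pressing force to the user's total (long-run) average.
    fun degreeCode(editingSamples: List<Float>, totalAveragePress: Float): String {
        val editingAverage = editingSamples.average().toFloat()
        val ratio = editingAverage / totalAveragePress
        return when {
            ratio >= 1.5f -> "D3" // much harder than usual: strong degree
            ratio >= 1.2f -> "D2" // harder than usual: moderate degree
            else          -> "D1" // near or below usual: mild degree
        }
    }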
Optionally, the processor 801 is further configured to: collect one or more facial images of the user while the first communication message is edited, identify the user emotion corresponding to each facial image, record the occurrence frequency of each user emotion, and take the code corresponding to the most frequently occurring user emotion as the first preset code.
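A sketch of this majority-vote selection; `recognizeEmotion` stands in for whatever facial-expression classifier the terminal uses and is assumed here:

    // Classify each captured facial image, count occurrences of each emotion,
    // and use the code of the most frequent emotion as the first preset code.
    fun firstPresetCode(
        faceImages: List<ByteArray>,
        recognizeEmotion: (ByteArray) -> String
    ): String? =
        faceImages
            .map(recognizeEmotion)
            .groupingBy { it }
            .eachCount()
            .maxByOrNull { it.value }
            ?.key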
Optionally, the processor 801 is further configured to: send the preset code to the receiving end in the form of an invisible field.
In another embodiment of the present invention, the processor 801 is configured to receive a second communication message sent by a sending end, where the second communication message includes a first communication message edited by the sending-end user and a preset code corresponding to the emotion information of the sending-end user; acquire feedback information corresponding to the preset code and representing the emotion, the feedback information including one or more of an image, a sound, and a vibration; and display the first communication message and output the feedback information.
Optionally, the processor 801 is further configured to: acquire the pre-stored feedback information corresponding to the preset code.
The mobile terminal 800 can implement each process implemented by the mobile terminal in the foregoing embodiments, and details are not repeated here to avoid repetition.
In summary, the mobile terminal provided in the embodiment of the present invention not only enables the sending-end user to accurately express his or her inner emotion, but also enables the receiving-end user to clearly perceive that emotion, ensuring accurate and effective communication; the operation is simple and the use of expression information is convenient. In addition, the sending end sends a code corresponding to the user's emotion rather than an actual expression image, so the data volume is small, which reduces the occupation of network resources and lowers the cost. The mobile terminal provided by the embodiment of the invention can automatically acquire the user's emotion, and the mobile terminal at the receiving end feeds back the emotion of the sending-end user through such information, so the user does not need to manually input images representing emotions, avoiding the cumbersome operation of continually switching between inputting characters and images. Furthermore, the embodiment of the invention makes full use of existing components of the mobile terminal to collect the user's emotion information, which minimizes changes to the mobile terminal hardware and reduces research and development cost.
Seventh embodiment
Fig. 9 is a schematic structural diagram of a mobile terminal according to another embodiment of the present invention. Specifically, the mobile terminal 900 in fig. 9 may be a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), or a vehicle-mounted computer.
The mobile terminal 900 in fig. 9 includes a Radio Frequency (RF) circuit 901, a memory 902, an input unit 903, a display unit 904, a processor 906, an audio circuit 907, a Wireless Fidelity (WiFi) module 908, and a power supply 909.
The input unit 903 may be used to receive numeric or character information input by a user and to generate signal inputs related to user settings and function control of the mobile terminal 900. Specifically, in the embodiment of the present invention, the input unit 903 may include a touch panel 9031. The touch panel 9031, also called a touch screen, may collect a touch operation performed by the user on or near it (for example, an operation performed by the user on the touch panel 9031 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connected device according to a preset program. Optionally, the touch panel 9031 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 906, and can receive and execute commands from the processor 906. The touch panel 9031 may be implemented as a resistive, capacitive, infrared, or surface-acoustic-wave panel. In addition to the touch panel 9031, the input unit 903 may further include other input devices 9032, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick.
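As an aside, one way such a touch panel could supply the pressing force values used elsewhere in this document is Android's MotionEvent.getPressure(), which reports a normalized force reading on supporting hardware; this is an illustrative assumption rather than an API the embodiment prescribes:

    import android.view.MotionEvent
    import android.view.View

    // Record normalized pressure samples while the user edits the message.
    val pressSamples = mutableListOf<Float>()

    val touchListener = View.OnTouchListener { _, event ->
        when (event.actionMasked) {
            MotionEvent.ACTION_DOWN, MotionEvent.ACTION_MOVE ->
                pressSamples.add(event.pressure)
        }
        false // do not consume the event; normal text editing continues
    }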
The display unit 904 may be used to display information input by or provided to the user and the various menu interfaces of the mobile terminal 900. The display unit 904 may include a display panel 9041; optionally, the display panel 9041 may be configured in the form of a Liquid Crystal Display (LCD) or an Organic Light-Emitting Diode (OLED) display.
It should be noted that the touch panel 9031 may overlay the display panel 9041 to form a touch display screen. When the touch display screen detects a touch operation on or near it, the operation is transmitted to the processor 906 to determine the type of the touch event, and the processor 906 then provides a corresponding visual output on the touch display screen according to the type of the touch event.
The touch display screen comprises an application interface display area and a common control display area. The arrangement of these two display areas is not limited; they may be arranged one above the other, side by side, or in any other manner that distinguishes the two areas. The application interface display area may be used to display the interface of an application, and each interface may contain at least one interface element such as an application icon and/or a widget desktop control; the application interface display area may also be an empty interface that contains no content. The common control display area is used to display frequently used controls, such as setting buttons, interface numbers, scroll bars, phone book icons, and other application icons.
The processor 906 is the control center of the mobile terminal 900. It connects the various parts of the entire mobile phone by using various interfaces and lines, and executes the various functions of the mobile terminal 900 and processes data by running or executing software programs and/or modules stored in the first memory 9021 and calling data stored in the second memory 9022, thereby monitoring the mobile terminal 900 as a whole. Optionally, the processor 906 may include one or more processing units.
In the embodiment of the present invention, by calling the software program and/or the module stored in the first memory 9021 and/or the data stored in the second memory 9022, the processor 906 is configured to acquire emotion information of the user when editing the first communication message, acquire a pre-stored preset code corresponding to the emotion information, combine the preset code and the edited first communication message into the second communication message, and send the second communication message to the receiving end.
Optionally, the processor 906 is further configured to: collect a facial image of the user while the user edits the first communication message, and determine a first preset code representing the user's emotion according to the facial image.

Optionally, the processor 906 is further configured to: acquire the pressing force values applied by the user to the screen of the mobile terminal while editing the first communication message, and determine, according to the pressing force values, a second preset code representing the degree level of the user's emotion.

Optionally, the processor 906 is further configured to: combine the first preset code, the second preset code, and the edited first communication message into a second communication message and send it to the receiving end.

Optionally, the processor 906 is further configured to: determine the facial features of the facial image, and determine the first preset code according to the facial features.

Optionally, the processor 906 is further configured to: detect whether the ambient light value is greater than or equal to a preset value, and if so, collect a facial image of the user while the first communication message is edited.

Optionally, the processor 906 is further configured to: calculate the average pressing force value applied by the user to the screen of the mobile terminal while editing the first communication message, calculate the ratio of this average to the total average pressing force value, and determine the second preset code according to the calculated ratio.

Optionally, the processor 906 is further configured to: collect one or more facial images of the user while the first communication message is edited, identify the user emotion corresponding to each facial image, record the occurrence frequency of each user emotion, and take the code corresponding to the most frequently occurring user emotion as the first preset code.

Optionally, the processor 906 is further configured to: send the preset code to the receiving end in the form of an invisible field.
In another embodiment of the present invention, the processor 906 is configured to receive a second communication message sent by a sending end, where the second communication message includes a first communication message edited by the sending-end user and a preset code corresponding to the emotion information of the sending-end user; acquire feedback information corresponding to the preset code and representing the emotion, the feedback information including one or more of an image, a sound, and a vibration; and output the feedback information. The display unit 904 is configured to display the first communication message.
Optionally, the processor 906 is further configured to: acquire the pre-stored feedback information corresponding to the preset code.
Therefore, by receiving the preset code corresponding to the emotion information sent by the sending end and looking up the information that corresponds to the preset code and feeds back the emotion of the sending-end user, the mobile terminal provided in the embodiment of the present invention enables the sending-end user to accurately express his or her emotion and the receiving-end user to clearly perceive it, ensuring accurate and effective communication; the operation is simple and the use of expression information is convenient. In addition, the sending end sends a code corresponding to the user's emotion rather than an actual expression image, so the data volume is small, which reduces the occupation of network resources and lowers the cost. Furthermore, the embodiment of the invention makes full use of existing components of the mobile terminal to collect the user's emotion information, minimizing changes to the mobile terminal hardware and reducing research and development cost.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
While the preferred embodiments of the present invention have been described, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the following claims.

Claims (10)

1. An information sending method applied to a sending end is characterized by comprising the following steps:
acquiring emotion information of a user when editing a first communication message;
acquiring a pre-stored preset code corresponding to the emotion information;
combining the preset code and the edited first communication message into a second communication message and sending the second communication message to a receiving end;
wherein, the step of obtaining the emotion information of the user when editing the first communication message comprises:
collecting a facial image of a user when editing a first communication message;
the step of acquiring a pre-stored preset code corresponding to the emotion information comprises the following steps:
determining a first preset code representing the emotion of the user according to the facial image;
wherein, the step of obtaining the emotion information of the user when editing the first communication message further comprises:
acquiring pressing force values applied by the user to a screen of the mobile terminal when the user edits the first communication message, wherein the pressing force values applied to the screen differ under different degrees of the same emotion;
the step of acquiring a pre-stored preset code corresponding to the emotion information comprises the following steps:
according to the pressing force value, determining a second preset code for representing the degree grade of the emotion of the user;
wherein the step of combining the preset code and the edited first communication message into a second communication message and sending the second communication message to a receiving end comprises:
combining the first preset code, the second preset code, and the edited first communication message into the second communication message and sending the second communication message to the receiving end;
wherein the step of determining a second preset code for representing a degree level of a user's emotion according to the pressing force value comprises:
calculating the average pressing force value of the user on the screen of the mobile terminal when editing the first communication message;
calculating the ratio of the average pressing force value to a total average pressing force value, wherein the total average pressing force value is the average pressing force value applied to the screen of the mobile terminal over the user's entire use of the mobile terminal;
and determining the second preset code according to the calculated ratio.
2. The information sending method according to claim 1, wherein the step of determining a first preset code representing the emotion of the user according to the facial image comprises:
determining facial features of the facial image;
and determining the first preset code according to the face features.
3. The information sending method according to claim 1, wherein the step of collecting a facial image of the user when editing the first communication message comprises:
detecting whether the ambient light value is greater than or equal to a preset value;
and if the ambient light value is greater than or equal to the preset value, collecting a facial image of the user when editing the first communication message.
4. The information sending method according to claim 1, wherein the step of collecting a facial image of the user when editing the first communication message comprises:
collecting one or more facial images of the user when editing the first communication message;
wherein the step of determining a first preset code representing the emotion of the user according to the facial image comprises:
identifying the user emotion corresponding to each facial image, and recording the occurrence frequency of each user emotion;
and taking the code corresponding to the user emotion with the largest occurrence frequency as the first preset code.
5. The information sending method according to claim 1, wherein the step of sending the preset code to a receiving end comprises:
and sending the preset code to a receiving end in an invisible field form.
6. A mobile terminal applied to a sending end, the mobile terminal comprising:
the first acquisition module is used for acquiring emotion information of a user when editing the first communication message;
the second acquisition module is used for acquiring a pre-stored preset code corresponding to the emotion information acquired by the first acquisition module;
the sending module is used for combining the preset code acquired by the second acquisition module and the edited first communication message into a second communication message and sending the second communication message to a receiving end;
wherein the first obtaining module comprises:
the collecting unit is used for collecting a facial image of the user when editing the first communication message;
wherein the second obtaining module comprises:
the first determining unit is used for determining a first preset code representing the emotion of the user according to the facial image collected by the collecting unit;
wherein the first obtaining module further comprises:
the acquiring unit is used for acquiring the pressing force values applied by the user to the screen of the mobile terminal when editing the first communication message, wherein the pressing force values applied to the screen differ under different degrees of the same emotion;
wherein the second obtaining module further comprises:
a second determining unit, configured to determine, according to the pressing force values acquired by the acquiring unit, a second preset code representing the degree level of the user's emotion;
wherein the sending module comprises:
the first sending unit is used for combining the first preset code determined by the first determining unit, the second preset code determined by the second determining unit, and the edited first communication message into a second communication message and sending the second communication message to a receiving end;
wherein the second determination unit includes:
the first calculating subunit is used for calculating the average pressing force value of the user on the screen of the mobile terminal when the user edits the first communication message;
the second calculating subunit is configured to calculate the ratio of the average pressing force value calculated by the first calculating subunit to a total average pressing force value, where the total average pressing force value is the average pressing force value applied to the screen of the mobile terminal over the user's entire use of the mobile terminal;
and the third determining subunit is configured to determine the second preset code according to the ratio calculated by the second calculating subunit.
7. The mobile terminal according to claim 6, wherein the first determining unit comprises:
a first determining subunit, configured to determine a facial feature of the facial image;
and the second determining subunit is used for determining the first preset code according to the face features determined by the first determining subunit.
8. The mobile terminal of claim 6, wherein the collecting unit comprises:
the detection subunit is used for detecting whether the ambient light value is greater than or equal to a preset value;
and the first collecting subunit is used for collecting the facial image of the user when editing the first communication message when the ambient light value detected by the detection subunit is greater than or equal to the preset value.
9. The mobile terminal of claim 6, wherein the collecting unit comprises:
the second collecting subunit is used for collecting one or more facial images of the user when editing the first communication message;
wherein the first determining unit comprises:
the identification subunit is used for identifying the user emotion corresponding to each facial image collected by the second collecting subunit and recording the occurrence frequency of each user emotion;
and the processing subunit is used for taking the code corresponding to the user emotion with the largest occurrence frequency recorded by the identification subunit as the first preset code.
10. The mobile terminal of claim 6, wherein the sending module is specifically configured to: and sending the preset code to a receiving end in an invisible field form.
CN201610356810.XA 2016-05-25 2016-05-25 Information sending and receiving method and mobile terminal Active CN105871696B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610356810.XA CN105871696B (en) 2016-05-25 2016-05-25 Information sending and receiving method and mobile terminal

Publications (2)

Publication Number Publication Date
CN105871696A CN105871696A (en) 2016-08-17
CN105871696B true CN105871696B (en) 2020-02-18

Family

ID=56642271

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610356810.XA Active CN105871696B (en) 2016-05-25 2016-05-25 Information sending and receiving method and mobile terminal

Country Status (1)

Country Link
CN (1) CN105871696B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106161215A (en) * 2016-08-31 2016-11-23 维沃移动通信有限公司 A kind of method for sending information and mobile terminal
CN106599204A (en) * 2016-12-15 2017-04-26 广州酷狗计算机科技有限公司 Method and device for recommending multimedia content
WO2018119924A1 (en) * 2016-12-29 2018-07-05 华为技术有限公司 Method and device for adjusting user mood
CN111200552B (en) * 2018-11-16 2022-05-13 腾讯科技(深圳)有限公司 Instant communication method and device, equipment and storage medium thereof
CN110069991A (en) * 2019-03-18 2019-07-30 深圳壹账通智能科技有限公司 Feedback information determines method, apparatus, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101976450A (en) * 2010-11-02 2011-02-16 北京航空航天大学 Asymmetric human face expression coding method
CN102323919A (en) * 2011-08-12 2012-01-18 百度在线网络技术(北京)有限公司 Method for displaying input information based on user mood indication information and equipment
CN104239515A (en) * 2014-09-16 2014-12-24 广东欧珀移动通信有限公司 Mood information implementation method and system
CN104412258A (en) * 2014-05-22 2015-03-11 华为技术有限公司 Method and device utilizing text information to communicate
CN104753766A (en) * 2015-03-02 2015-07-01 小米科技有限责任公司 Expression sending method and device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101662546A (en) * 2009-09-16 2010-03-03 中兴通讯股份有限公司 Method of monitoring mood and device thereof
US20150324348A1 (en) * 2014-05-09 2015-11-12 Lenovo (Singapore) Pte, Ltd. Associating an image that corresponds to a mood
CN105227765B (en) * 2015-09-10 2019-04-26 三星电子(中国)研发中心 Interactive approach and system in communication process

Also Published As

Publication number Publication date
CN105871696A (en) 2016-08-17

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant