WO2019201146A1 - Emoticon image display method and terminal device - Google Patents

Emoticon image display method and terminal device

Info

Publication number
WO2019201146A1
WO2019201146A1 (PCT/CN2019/082229)
Authority
WO
WIPO (PCT)
Prior art keywords
message
image
target
messages
terminal device
Prior art date
Application number
PCT/CN2019/082229
Other languages
English (en)
French (fr)
Inventor
李景
Original Assignee
维沃移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 维沃移动通信有限公司 filed Critical 维沃移动通信有限公司
Publication of WO2019201146A1 publication Critical patent/WO2019201146A1/zh

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/52 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail, for supporting social networking services
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/06 Message adaptation to terminal or network requirements
    • H04L 51/063 Content adaptation, e.g. replacement of unsuitable content
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/07 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail, characterised by the inclusion of specific contents
    • H04L 51/18 Commands or executable codes

Definitions

  • The present disclosure relates to the field of communications technologies, and in particular to a method for displaying an emoticon image and a terminal device.
  • With the rapid development of terminal devices, they have become an indispensable tool in people's lives and bring great convenience to many aspects of users' lives. Many different social applications may be installed on a terminal device to facilitate communication between users.
  • When a user communicates with other users through social software on the terminal device, the user may receive some emoticons, and sometimes many identical emoticons are received.
  • These identical emoticons occupy a relatively large amount of space on the display interface of the terminal device, which makes the displayed content difficult to read.
  • Embodiments of the present disclosure provide a method for displaying an emoticon image and a terminal device, to solve the problem that, when a terminal device receives many identical emoticons, the identical emoticons occupy a large amount of space on the display interface and reduce the readability of the displayed content.
  • According to a first aspect, an embodiment of the present disclosure provides a method for displaying an emoticon image, including:
  • receiving a first input, where the first input is used to trigger merging of multiple target messages in a dialog box, and each of the target messages contains the same emoticon image;
  • in response to the first input, merging the multiple target messages so that the emoticon image is displayed in a single message.
  • According to a second aspect, an embodiment of the present disclosure further provides a terminal device, including:
  • a receiving module, configured to receive a first input, where the first input is used to trigger merging of multiple target messages in a dialog box, and each of the target messages contains the same emoticon image;
  • a merging module, configured to merge the multiple target messages in response to the first input, so that the emoticon image is displayed in a single message.
  • According to a third aspect, an embodiment of the present disclosure further provides a terminal device, including a processor, a memory, and a computer program that is stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the foregoing method for displaying an emoticon image.
  • According to a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the steps of the foregoing method for displaying an emoticon image.
  • In the embodiments of the present disclosure, a first input is received, where the first input is used to trigger merging of multiple target messages in a dialog box, and each of the target messages contains the same emoticon image; in response to the first input, the multiple target messages are merged so that the emoticon image is displayed in a single message. In this way, by merging multiple target messages, the emoticon image is displayed in one message, which makes the display of the terminal device more concise and improves the readability of the content on its display interface.
  • FIG. 1 is a first flowchart of a method for displaying an emoticon image according to an embodiment of the present disclosure;
  • FIG. 2 is a second flowchart of a method for displaying an emoticon image according to an embodiment of the present disclosure;
  • FIG. 3 is a first schematic diagram of an emoticon image display manner according to an embodiment of the present disclosure;
  • FIG. 4 is a second schematic diagram of an emoticon image display manner according to an embodiment of the present disclosure;
  • FIG. 5 is a third schematic diagram of an emoticon image display manner according to an embodiment of the present disclosure;
  • FIG. 6 is a fourth schematic diagram of an emoticon image display manner according to an embodiment of the present disclosure;
  • FIG. 7 is a fifth schematic diagram of an emoticon image display manner according to an embodiment of the present disclosure;
  • FIG. 8 is a first structural diagram of a terminal device according to an embodiment of the present disclosure;
  • FIG. 9 is a structural diagram of a merging module of a terminal device according to an embodiment of the present disclosure;
  • FIG. 10 is a second structural diagram of a terminal device according to an embodiment of the present disclosure.
  • Referring to FIG. 1, FIG. 1 is a flowchart of a method for displaying an emoticon image according to an embodiment of the present disclosure. As shown in FIG. 1, the method includes the following steps 101 and 102.
  • Step 101: Receive a first input, where the first input is used to trigger merging of multiple target messages in a dialog box, and each of the target messages contains the same emoticon image.
  • In this embodiment of the present disclosure, the first input may be a voice input of the user, a touch input performed by the user on the terminal device, or a message received from another terminal device.
  • When the first input is a touch input performed by the user on the terminal device, there may be multiple cases. For example, the user may choose to send an emoticon image identical to one that already exists in the dialog box, which triggers the merging; or the user may tap, double-tap, or slide at a preset position on the display screen to trigger the merging; or the terminal device may receive a new message, and so on.
  • The dialog box may be a dialog box of social software built into the terminal device, or a dialog box of third-party social software downloaded onto the terminal device.
  • The emoticon image may be a dynamic (animated) emoticon or a static emoticon, which is not limited in the embodiments of the present disclosure.
  • There are various ways to determine which emoticon images are identical. For example, the terminal device may check whether two emoticon images have the same index information (such as an identification number); emoticon images with the same index information are determined to be the same emoticon image.
  • Alternatively, image comparison methods may be used. When the emoticon image is a static emoticon, the terminal device may determine whether the similarity between two emoticon images is greater than a preset threshold; if it is, the two emoticon images are the same emoticon image.
  • When the emoticon image is a dynamic emoticon, the terminal device may first determine whether the two emoticon images have the same number of frames. If the frame counts are the same, it further determines whether the similarity of each corresponding pair of frames is greater than the preset threshold and whether the frame order is consistent; if so, the two emoticon images are the same emoticon image.
  • Of course, other mature image comparison methods may also be used, for example comparing from the perspective of pixels or colors, which is not limited in this embodiment.
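  • Purely as an illustration of the identity check described above, the Kotlin sketch below compares index information first and then falls back to a frame-by-frame similarity test. The EmoticonImage and Frame types, the similarity() metric, and the threshold value are assumptions made for this example; the disclosure does not prescribe a concrete implementation.

```kotlin
// Minimal sketch of the "same emoticon" check described above.
// EmoticonImage, similarity() and SIMILARITY_THRESHOLD are illustrative
// placeholders; the disclosure does not fix a concrete comparison method.
data class Frame(val pixels: IntArray)

data class EmoticonImage(
    val indexId: String?,          // index information, e.g. an identification number
    val frames: List<Frame>        // one frame for a static emoticon, several for a dynamic one
)

const val SIMILARITY_THRESHOLD = 0.95

// Placeholder similarity metric between two frames (0.0..1.0).
fun similarity(a: Frame, b: Frame): Double {
    if (a.pixels.isEmpty() || a.pixels.size != b.pixels.size) return 0.0
    val matching = a.pixels.indices.count { a.pixels[it] == b.pixels[it] }
    return matching.toDouble() / a.pixels.size
}

fun isSameEmoticon(x: EmoticonImage, y: EmoticonImage): Boolean {
    // 1. Same index information (such as an identification number) => same emoticon.
    if (x.indexId != null && x.indexId == y.indexId) return true
    // 2. Dynamic emoticons must have the same number of frames, in the same order.
    if (x.frames.size != y.frames.size) return false
    // 3. Every corresponding pair of frames must exceed the similarity threshold.
    return x.frames.indices.all { i ->
        similarity(x.frames[i], y.frames[i]) > SIMILARITY_THRESHOLD
    }
}
```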
  • Step 102: In response to the first input, merge the multiple target messages so that the emoticon image is displayed in a single message.
  • In this embodiment of the present disclosure, by merging the multiple target messages, the emoticon image is displayed in one message, which makes the display of the terminal device more concise and improves the readability of the content on the display interface of the terminal device.
  • In this embodiment of the present disclosure, the terminal device may be a mobile phone, a tablet personal computer, a laptop computer, a personal digital assistant (PDA), a mobile Internet device (MID), a wearable device, or the like.
  • In the method for displaying an emoticon image according to this embodiment of the present disclosure, a first input is received, where the first input is used to trigger merging of multiple target messages in a dialog box, and each of the target messages contains the same emoticon image; in response to the first input, the multiple target messages are merged so that the emoticon image is displayed in a single message. In this way, by merging multiple target messages, the emoticon image is displayed in one message, which makes the display of the terminal device more concise and improves the readability of the content on its display interface.
  • Referring to FIG. 2, FIG. 2 is a flowchart of a method for displaying an emoticon image according to an embodiment of the present disclosure. The main difference between this embodiment and the previous one is that merging the multiple target messages in response to the first input, so that the emoticon image is displayed in a single message, includes: in response to the first input, acquiring the target messages in the dialog box that contain the same emoticon image; hiding the target messages; and displaying the emoticon image in a single message.
  • As shown in FIG. 2, the method includes the following steps 201 to 204.
  • Step 201: Receive a first input, where the first input is used to trigger merging of multiple target messages in a dialog box, and each of the target messages contains the same emoticon image.
  • In this embodiment of the present disclosure, the first input may be a voice input of the user, a touch input performed by the user on the terminal device, or a message received from another terminal device.
  • When the first input is a touch input performed by the user on the terminal device, there may be multiple cases. For example, the user may choose to send an emoticon image identical to one that already exists in the dialog box, which triggers the merging; or the user may tap, double-tap, or slide at a preset position on the display screen to trigger the merging; or the terminal device may receive a new message, and so on.
  • The dialog box may be a dialog box of social software built into the terminal device, or a dialog box of third-party social software downloaded onto the terminal device. The emoticon image may be a dynamic (animated) emoticon or a static emoticon, which is not limited in the embodiments of the present disclosure.
  • There are various ways to determine which emoticon images are identical. For example, the terminal device may check whether two emoticon images have the same index information (such as an identification number); emoticon images with the same index information are determined to be the same emoticon image. Alternatively, image comparison methods may be used. When the emoticon image is a static emoticon, the terminal device may determine whether the similarity between two emoticon images is greater than a preset threshold; if it is, the two emoticon images are the same emoticon image. When the emoticon image is a dynamic emoticon, the terminal device may first determine whether the two emoticon images have the same number of frames; if the frame counts are the same, it further determines whether the similarity of each corresponding pair of frames is greater than the preset threshold and whether the frame order is consistent; if so, the two emoticon images are the same emoticon image. Of course, other mature image comparison methods may also be used, for example comparing from the perspective of pixels or colors, which is not limited in this embodiment.
  • Step 202: In response to the first input, acquire the target messages in the dialog box that contain the same emoticon image.
  • In this embodiment of the present disclosure, acquiring the target messages that contain the same emoticon image may be performed by matching the same identification number, or the target messages may be acquired directly once the same emoticon image has been determined.
  • Step 203: Hide the target messages.
  • In this embodiment of the present disclosure, the target messages are the messages that contain the same emoticon image. Hiding the target messages may cover the case where the target messages can still be displayed later, and may also cover the case where the target messages are deleted and can no longer be displayed. Moreover, if the target messages need to be deleted, all the target messages containing the same emoticon image may be deleted at once, or they may be deleted one by one, which is not limited in this embodiment of the present disclosure.
  • Step 204: Display the emoticon image in a single message.
  • In this embodiment of the present disclosure, the message may be a newly created message, or a message retained during the process of deleting the target messages, and so on. In this way, by removing the redundant target messages and displaying the emoticon image in a single message, the display of the terminal device is more concise and the readability of the content on the display interface of the terminal device is improved.
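  • A minimal sketch of how steps 202 to 204 could fit together over a simple in-memory message list is shown below. The Message and DialogBox types and their fields are illustrative assumptions, not an API defined by the disclosure.

```kotlin
// Illustrative model of steps 202-204: acquire the target messages, hide the
// redundant ones, and retain a single message that still shows the emoticon.
// Message and DialogBox are assumptions made for this sketch only.
data class Message(
    val id: Long,
    val senderId: String,
    val sentAt: Long,
    val emoticonId: String?,   // non-null if the message contains an emoticon image
    var hidden: Boolean = false
)

class DialogBox(val messages: MutableList<Message>) {

    // Step 202: acquire the target messages that contain the same emoticon image.
    fun acquireTargets(emoticonId: String): List<Message> =
        messages.filter { it.emoticonId == emoticonId && !it.hidden }

    // Steps 203-204: hide the redundant targets and keep one message showing the emoticon.
    fun merge(emoticonId: String): Message? {
        val targets = acquireTargets(emoticonId)
        if (targets.size < 2) return null                   // nothing to merge
        targets.dropLast(1).forEach { it.hidden = true }    // step 203: hide (or delete) the extras
        return targets.last()                               // step 204: the retained message displays the emoticon
    }
}
```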
  • Optionally, acquiring the target messages in the dialog box that contain the same emoticon image includes:
  • acquiring, from the first M messages in the dialog box, the target messages that contain the same emoticon image, where M is a preset positive integer; or acquiring, within a preset time period before the current moment, the target messages in the dialog box that contain the same emoticon image.
  • In this implementation, M may be 5, 10, 20, or some other positive integer, and the preset time period may be 1 minute, 2 minutes, or some other duration, which is not limited in this implementation.
  • The target messages in the dialog box may be messages sent by the user of the terminal device, or messages that the terminal device receives from other users. Alternatively, it may also be determined whether the last emoticon image received before the first input and the emoticon image sent according to the first input are the same emoticon, and so on.
  • By limiting the number of messages or the time range, relatively old messages, which are usually of little current relevance, can be excluded from processing, which simplifies the processing of the terminal device and reduces its memory overhead.
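  • The two scoping options can be pictured with the short Kotlin sketch below; "first M messages" is interpreted here as the M most recent messages in the dialog box, and the Message type, M = 10, and the 2-minute window are example assumptions rather than values fixed by the disclosure.

```kotlin
// Sketch of the two scoping options: only the first M messages in the dialog box,
// or only messages within a preset time period before the current moment.
// Message, the value of M and the window length are illustrative assumptions.
data class Message(val id: Long, val sentAt: Long, val emoticonId: String?)

fun targetsInFirstM(messages: List<Message>, emoticonId: String, m: Int = 10): List<Message> =
    messages.takeLast(m)                             // interpreted as the M most recent messages
        .filter { it.emoticonId == emoticonId }

fun targetsInWindow(
    messages: List<Message>,
    emoticonId: String,
    now: Long,
    windowMillis: Long = 2 * 60 * 1000L              // e.g. a 2-minute preset time period
): List<Message> =
    messages.filter { it.emoticonId == emoticonId && now - it.sentAt <= windowMillis }
```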
  • Optionally, if the multiple target messages are messages sent by the same user, displaying the emoticon image in a single message includes:
  • displaying, in a single message of the user, one emoticon image and a first identifier indicating the number of the multiple target messages; or displaying N emoticon images in a single message of the user, where N is the number of the multiple target messages.
  • To better understand displaying one emoticon image and a first identifier indicating the number of target messages in a single message of the user, refer to FIG. 3, which is a schematic diagram of an emoticon image display manner according to an embodiment of the present disclosure.
  • On the left side of FIG. 3 there are four messages sent by the same user, and each message contains the same emoticon image.
  • After merging, as shown on the right side of FIG. 3, the emoticon image is displayed in only one message, and X4 is the first identifier, which represents the four target messages.
  • After merging, both the number of displayed emoticon images and the number of displayed user avatars are reduced, which makes the dialog box more concise and saves its display space.
  • The freed display space can show more messages instead of repeatedly displaying the same emoticon image, which makes it easier for the user to view the messages in the dialog box and improves the readability of the content on the display interface of the terminal device.
  • To better understand displaying N emoticon images in a single message of the user, refer to FIG. 4, which is a schematic diagram of an emoticon image display manner according to an embodiment of the present disclosure.
  • On the left side of FIG. 4 there are four messages sent by the same user, and each message contains the same emoticon image.
  • After merging, as shown on the right side of FIG. 4, the multiple emoticon images are displayed in only one message and are rearranged, which saves the display space of the dialog box and can produce an emphasizing effect.
  • When the multiple emoticon images are rearranged, they may also be scaled appropriately. By reducing the number of displayed user avatars, the display in the dialog box is more concise, the display space of the dialog box is saved, and the readability of the content on the display interface of the terminal device is improved.
  • Whether only the number of displayed user avatars is reduced, or both the number of displayed emoticon images and the number of displayed user avatars are reduced, the display space of the dialog box is saved, the display of the terminal device is more concise, and the readability of the content on its display interface is improved.
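  • The two same-user renderings (one emoticon plus a first identifier such as "X4", or the emoticon repeated N times) can be sketched as below. The MergedBubble type and the "X" prefix are assumptions chosen to mirror FIG. 3 and FIG. 4, not a format mandated by the disclosure.

```kotlin
// Two illustrative renderings for target messages sent by the same user:
// (a) one emoticon plus a first identifier giving the message count, as in FIG. 3;
// (b) the emoticon repeated N times inside a single message, as in FIG. 4.
data class MergedBubble(val senderId: String, val body: String)

fun renderWithFirstIdentifier(senderId: String, emoticon: String, count: Int): MergedBubble =
    MergedBubble(senderId, "$emoticon X$count")          // e.g. "🙂 X4" for four target messages

fun renderRepeated(senderId: String, emoticon: String, count: Int): MergedBubble =
    MergedBubble(senderId, emoticon.repeat(count))       // e.g. "🙂🙂🙂🙂", possibly scaled down
```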
  • Optionally, if the multiple target messages are messages sent by different users, displaying the emoticon image in a single message includes:
  • displaying one emoticon image in a single message, and displaying the avatar of each of the different users in the order of the message sending times.
  • To ease understanding of this process, refer to FIG. 5, which is a schematic diagram of an emoticon image display manner according to an embodiment of the present disclosure.
  • On the left side of FIG. 5, there are emoticon images sent, in chronological order, by user A, user B, user A, and user A.
  • After merging, as shown on the right side of FIG. 5, only one emoticon image is displayed, and in the area below the emoticon, the user avatars of user A, user B, user A, and user A are displayed in the order of the message sending times. In this way, when different users send the same emoticon image, only one emoticon image is displayed, which saves the display space of the dialog box, avoids repeatedly displaying the same emoticon image, and makes the display of the terminal device more concise; displaying the avatars in sending order also lets the user see the order in which the users sent the emoticon, providing a better experience.
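  • A possible shape for the different-user case is sketched below: the emoticon is kept once and the senders' avatars are listed in send-time order. The EmoticonMessage and MergedView types are illustrative assumptions only.

```kotlin
// Sketch of the different-user case: show the emoticon once and list the senders'
// avatars in the order in which their messages were sent.
data class EmoticonMessage(val senderId: String, val sentAt: Long)

data class MergedView(val emoticon: String, val avatarOrder: List<String>)

fun mergeAcrossUsers(emoticon: String, targets: List<EmoticonMessage>): MergedView =
    MergedView(
        emoticon,
        targets.sortedBy { it.sentAt }.map { it.senderId }   // avatars in send-time order, e.g. A, B, A, A
    )
```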
  • Optionally, merging the multiple target messages in response to the first input, so that the emoticon image is displayed in a single message, includes:
  • if there are messages that do not contain the emoticon image between the multiple target messages, and the number of such messages is within a preset number, merging the multiple target messages in response to the first input, so that the emoticon image and a second identifier are displayed in a single message, where the second identifier indicates the user corresponding to the messages between the multiple target messages that do not contain the emoticon image.
  • In this implementation, a message that does not contain the emoticon image may consist of characters, symbols, letters, or an emoticon image different from the emoticon image in question, and such a message may be sent by any user. If the number of messages that do not contain the emoticon image is within the preset number, it indicates that relatively many identical emoticon images may be displayed in the dialog box, so the multiple target messages can be merged. After merging, one message may be displayed at the position of the last identical emoticon image, showing that emoticon image. In this way, the display of the terminal device is more concise and the readability of the content on its display interface is improved.
  • When the number of messages that do not contain the emoticon image exceeds the preset number, relatively many messages different from the emoticon image may be displayed and the identical emoticon images are relatively few; in this case the identical emoticon images may be left unmerged, so that the user can still view the more informative messages rather than repeated emoticon images.
  • The second identifier may be a triangle, a circle, or an identifier of some other shape, which is not limited in this implementation.
  • To better understand the above process, refer to FIG. 6 and FIG. 7. FIG. 6 is a schematic diagram of an emoticon image display manner according to an embodiment of the present disclosure, and FIG. 7 is a schematic diagram of another emoticon image display manner according to an embodiment of the present disclosure.
  • As shown on the left side of FIG. 6, in chronological order there are an emoticon image sent by user A, an emoticon image sent by user A, a "goodbye" sent by user B, and an emoticon image sent by user A. After merging, as shown on the right side of FIG. 6, the "goodbye" sent by user B is displayed, and below the "goodbye" an emoticon image, a triangle, X3, and the user avatar of user A are displayed.
  • X3 indicates that user A has sent three identical emoticon images.
  • The triangle is the second identifier, which indicates the user corresponding to the message between the multiple target messages that does not contain the emoticon image, namely user B.
  • Referring to FIG. 7, on the left side there are, in chronological order, an emoticon image sent by user A, an emoticon image sent by user B, a "goodbye" sent by user A, and an emoticon image sent by user A.
  • After merging, as shown on the right side of FIG. 7, the "goodbye" sent by user A is displayed, and one emoticon image is displayed below the "goodbye".
  • Below that emoticon image, the avatar of each user is displayed in the order in which the messages were sent, together with a triangle.
  • The triangle is the second identifier, which indicates the user corresponding to the message between the multiple target messages that does not contain the emoticon image, namely user A.
  • In this implementation, the second identifier prompts the user, so that the user knows that among the multiple emoticon images there may be users whose messages do not contain the emoticon image, and those users' messages are not missed after the merge.
  • Of course, when the user taps the second identifier, the display state of the messages before the merge can also be restored.
  • Moreover, multiple identical emoticon images may change dynamically as the chat progresses: a newly added identical emoticon image may update the first identifier, or change the number of times the user's avatar is displayed, and so on, which keeps the display of the terminal device concise.
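  • One way to express the second-identifier rule is sketched below: the span between the first and last target message is inspected, and the merge only proceeds when the interleaved non-emoticon messages stay within a preset count. The ChatMessage and MergedResult types, the triangle marker, and the limit of 3 are assumptions for illustration; the disclosure only requires "within a preset number".

```kotlin
// Sketch of the second-identifier rule: if messages that do not contain the emoticon
// are interleaved between the target messages, and their number is within a preset
// count, the targets are still merged, and the senders of the other messages are
// recorded so a second identifier (e.g. a triangle) can represent them.
data class ChatMessage(val senderId: String, val sentAt: Long, val emoticonId: String?)

data class MergedResult(
    val emoticonId: String,
    val count: Int,                 // how many identical emoticons were merged (e.g. "X3")
    val otherSenders: List<String>  // users represented by the second identifier
)

fun tryMergeWithSecondIdentifier(
    span: List<ChatMessage>,        // messages from the first to the last target, in time order
    emoticonId: String,
    presetLimit: Int = 3            // illustrative preset number
): MergedResult? {
    val targets = span.filter { it.emoticonId == emoticonId }
    val others = span.filter { it.emoticonId != emoticonId }
    if (targets.size < 2 || others.size > presetLimit) return null   // too many other messages: do not merge
    return MergedResult(
        emoticonId,
        targets.size,
        others.map { it.senderId }.distinct()    // e.g. user B in FIG. 6, user A in FIG. 7
    )
}
```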
  • Referring to FIG. 8, FIG. 8 is a structural diagram of a terminal device according to an embodiment of the present disclosure, which can implement the details of the method for displaying an emoticon image in the foregoing embodiments and achieve the same effects.
  • As shown in FIG. 8, the terminal device 800 includes a receiving module 801 and a merging module 802, where the receiving module 801 is connected to the merging module 802.
  • The receiving module 801 is configured to receive a first input, where the first input is used to trigger merging of multiple target messages in a dialog box, and each of the target messages contains the same emoticon image.
  • The merging module 802 is configured to merge the multiple target messages in response to the first input, so that the emoticon image is displayed in a single message.
  • Optionally, as shown in FIG. 9, the merging module 802 includes:
  • an obtaining sub-module 8021, configured to acquire, in response to the first input, the target messages in the dialog box that contain the same emoticon image;
  • a hiding sub-module 8022, configured to hide the target messages;
  • a display sub-module 8023, configured to display the emoticon image in a single message.
  • Optionally, the obtaining sub-module 8021 is configured to: acquire, from the first M messages in the dialog box, the target messages that contain the same emoticon image, where M is a preset positive integer; or acquire, within a preset time period before the current moment, the target messages in the dialog box that contain the same emoticon image.
  • Optionally, if the multiple target messages are messages sent by the same user, the display sub-module 8023 is configured to: display, in a single message of the user, one emoticon image and a first identifier indicating the number of the multiple target messages; or display N emoticon images in a single message of the user, where N is the number of the multiple target messages.
  • Optionally, if the multiple target messages are messages sent by different users, the display sub-module 8023 is configured to: display one emoticon image in a single message, and display the avatar of each of the different users in the order of the message sending times.
  • Optionally, the merging module 802 is configured to: if there are messages that do not contain the emoticon image between the multiple target messages, and the number of such messages is within a preset number, merge the multiple target messages in response to the first input, so that the emoticon image and a second identifier are displayed in a single message, where the second identifier indicates the user corresponding to the messages between the multiple target messages that do not contain the emoticon image.
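  • Purely as a structural sketch, the module split of terminal device 800 could be mirrored in code roughly as follows; the interfaces and types are illustrative assumptions, since the disclosure describes functional modules rather than a concrete class design.

```kotlin
// Rough structural sketch of terminal device 800: a receiving module and a merging
// module that is further split into obtaining, hiding and display sub-modules.
data class Input(val dialogId: String, val emoticonId: String)
data class TargetMessage(val id: Long, val emoticonId: String)

interface ReceivingModule {                 // module 801
    fun onFirstInput(input: Input)
}

interface ObtainingSubModule {              // sub-module 8021
    fun obtainTargets(dialogId: String, emoticonId: String): List<TargetMessage>
}

interface HidingSubModule {                 // sub-module 8022
    fun hide(targets: List<TargetMessage>)
}

interface DisplaySubModule {                // sub-module 8023
    fun displayMerged(emoticonId: String, count: Int)
}

class MergingModule(                        // module 802
    private val obtain: ObtainingSubModule,
    private val hide: HidingSubModule,
    private val display: DisplaySubModule
) {
    fun merge(input: Input) {
        val targets = obtain.obtainTargets(input.dialogId, input.emoticonId)
        if (targets.size < 2) return        // nothing to merge
        hide.hide(targets)
        display.displayMerged(input.emoticonId, targets.size)
    }
}
```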
  • The terminal device 800 can implement each process implemented by the terminal device in the method embodiments of FIG. 1 and FIG. 2; to avoid repetition, details are not described here again.
  • The terminal device 800 of this embodiment of the present disclosure receives a first input, where the first input is used to trigger merging of multiple target messages in a dialog box, and each of the target messages contains the same emoticon image; and in response to the first input, merges the multiple target messages so that the emoticon image is displayed in a single message.
  • In this way, by merging multiple target messages, the emoticon image is displayed in one message, which makes the display of the terminal device more concise and improves the readability of the content on its display interface.
  • FIG. 10 is a schematic diagram of a hardware structure of a terminal device implementing various embodiments of the present disclosure.
  • The terminal device 1000 includes, but is not limited to, a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, a processor 1010, a power supply 1011, and other components.
  • Those skilled in the art can understand that the terminal device structure shown in FIG. 10 does not constitute a limitation on the terminal device; the terminal device may include more or fewer components than those illustrated, or combine some components, or use a different arrangement of components.
  • In this embodiment of the present disclosure, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
  • The processor 1010 is configured to: receive a first input, where the first input is used to trigger merging of multiple target messages in a dialog box, and each of the target messages contains the same emoticon image; and in response to the first input, merge the multiple target messages so that the emoticon image is displayed in a single message. In this way, by merging multiple target messages, the emoticon image is displayed in one message, which makes the display of the terminal device more concise and improves the readability of the content on its display interface.
  • Optionally, the processor 1010 is further configured to: in response to the first input, acquire the target messages in the dialog box that contain the same emoticon image; hide the target messages; and display the emoticon image in a single message.
  • Optionally, the processor 1010 is further configured to: acquire, from the first M messages in the dialog box, the target messages that contain the same emoticon image, where M is a preset positive integer; or acquire, within a preset time period before the current moment, the target messages in the dialog box that contain the same emoticon image.
  • Optionally, if the multiple target messages are messages sent by the same user, the processor 1010 is further configured to: display, in a single message of the user, one emoticon image and a first identifier indicating the number of the multiple target messages; or display N emoticon images in a single message of the user, where N is the number of the multiple target messages.
  • Optionally, if the multiple target messages are messages sent by different users, the processor 1010 is further configured to display one emoticon image in a single message, and display the avatar of each of the different users in the order of the message sending times.
  • Optionally, the processor 1010 is further configured to: if there are messages that do not contain the emoticon image between the multiple target messages, and the number of such messages is within a preset number, merge the multiple target messages in response to the first input, so that the emoticon image and a second identifier are displayed in a single message, where the second identifier indicates the user corresponding to the messages between the multiple target messages that do not contain the emoticon image.
  • It should be understood that, in this embodiment of the present disclosure, the radio frequency unit 1001 may be used to receive and send signals during information transmission and reception or during a call; specifically, after receiving downlink data from a base station, it sends the downlink data to the processor 1010 for processing, and it also sends uplink data to the base station.
  • Generally, the radio frequency unit 1001 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like.
  • In addition, the radio frequency unit 1001 can also communicate with a network and other devices through a wireless communication system.
  • the terminal device provides the user with wireless broadband Internet access through the network module 1002, such as helping the user to send and receive emails, browse web pages, and access streaming media.
  • the audio output unit 1003 can convert the audio data received by the radio frequency unit 1001 or the network module 1002 or stored in the memory 1009 into an audio signal and output as a sound. Moreover, the audio output unit 1003 may also provide audio output (eg, call signal reception sound, message reception sound, etc.) related to a specific function performed by the terminal device 1000.
  • the audio output unit 1003 includes a speaker, a buzzer, a receiver, and the like.
  • the input unit 1004 is for receiving an audio or video signal.
  • The input unit 1004 may include a graphics processing unit (GPU) 10041 and a microphone 10042. The graphics processor 10041 processes image data of still pictures or videos obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode.
  • the processed image frame can be displayed on the display unit 1006.
  • the image frames processed by the graphics processor 10041 may be stored in the memory 1009 (or other storage medium) or transmitted via the radio frequency unit 1001 or the network module 1002.
  • the microphone 10042 can receive sound and can process such sound as audio data.
  • The processed audio data may, in the case of a telephone call mode, be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 1001, and output.
  • the terminal device 1000 also includes at least one type of sensor 1005, such as a light sensor, a motion sensor, and other sensors.
  • The light sensor includes an ambient light sensor and a proximity sensor; the ambient light sensor can adjust the brightness of the display panel 10061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 10061 and/or the backlight when the terminal device 1000 moves close to the ear.
  • As a type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), and can detect the magnitude and direction of gravity when stationary; it can be used to identify the attitude of the terminal device (such as landscape/portrait switching, related games, and magnetometer attitude calibration) and for vibration-recognition related functions (such as a pedometer and tapping).
  • sensor 1005 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, Infrared sensors and the like are not described here.
  • the display unit 1006 is for displaying information input by the user or information provided to the user.
  • the display unit 1006 can include a display panel 10061.
  • the display panel 10061 can be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
  • the user input unit 1007 can be configured to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal device.
  • the user input unit 1007 includes a touch panel 10071 and other input devices 10072.
  • The touch panel 10071, also referred to as a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 10071 with a finger, a stylus, or any other suitable object or accessory).
  • The touch panel 10071 may include two parts: a touch detection device and a touch controller.
  • The touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends the coordinates to the processor 1010, and receives and executes commands sent by the processor 1010.
  • the touch panel 10071 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
  • the user input unit 1007 may also include other input devices 10072.
  • the other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (such as a volume control button, a switch button, etc.), a trackball, a mouse, and a joystick, which are not described herein.
  • Further, the touch panel 10071 may be overlaid on the display panel 10061. After detecting a touch operation on or near it, the touch panel 10071 transmits the touch operation to the processor 1010 to determine the type of the touch event, and the processor 1010 then provides a corresponding visual output on the display panel 10061 according to the type of the touch event.
  • Although in FIG. 10 the touch panel 10071 and the display panel 10061 are shown as two independent components implementing the input and output functions of the terminal device, in some embodiments the touch panel 10071 and the display panel 10061 may be integrated to implement the input and output functions of the terminal device, which is not limited here.
  • the interface unit 1008 is an interface in which an external device is connected to the terminal device 1000.
  • the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, and an audio input/output. (I/O) port, video I/O port, headphone port, and more.
  • The interface unit 1008 may be used to receive input (for example, data information or power) from an external device and transmit the received input to one or more elements within the terminal device 1000, or may be used to transfer data between the terminal device 1000 and an external device.
  • the memory 1009 can be used to store software programs as well as various data.
  • The memory 1009 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application required for at least one function (such as a sound playback function or an image playback function), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data and a phone book), and the like.
  • the memory 1009 may include a high speed random access memory, and may also include a nonvolatile memory such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
  • the processor 1010 is a control center of the terminal device that connects various portions of the entire terminal device using various interfaces and lines, by running or executing software programs and/or modules stored in the memory 1009, and recalling data stored in the memory 1009. Perform various functions and processing data of the terminal device to perform overall monitoring on the terminal device.
  • The processor 1010 may include one or more processing units; optionally, the processor 1010 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, applications, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 1010.
  • The terminal device 1000 may further include a power supply 1011 (such as a battery) that supplies power to each component. Optionally, the power supply 1011 may be logically connected to the processor 1010 through a power management system, so that functions such as managing charging, discharging, and power consumption are implemented through the power management system.
  • the terminal device 1000 includes some functional modules not shown, and details are not described herein again.
  • Optionally, an embodiment of the present disclosure further provides a terminal device, including a processor 1010, a memory 1009, and a computer program that is stored in the memory 1009 and executable on the processor 1010, where the computer program, when executed by the processor 1010, implements each process of the foregoing embodiments of the method for displaying an emoticon image and can achieve the same technical effects; to avoid repetition, details are not described here again.
  • An embodiment of the present disclosure further provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements each process of the foregoing embodiments of the method for displaying an emoticon image and can achieve the same technical effects; to avoid repetition, details are not described here again.
  • The computer-readable storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The present disclosure provides a method for displaying an emoticon image and a terminal device. The method includes: receiving a first input, where the first input is used to trigger merging of multiple target messages in a dialog box, and each of the target messages contains the same emoticon image; and in response to the first input, merging the multiple target messages so that the emoticon image is displayed in a single message.

Description

表情图像的显示方法及终端设备
相关申请的交叉引用
本申请主张在2018年4月20日在中国提交的中国专利申请No.201810358537.3的优先权,其全部内容通过引用包含于此。
技术领域
本公开涉及通信技术领域,尤其涉及一种表情图像的显示方法及终端设备。
背景技术
随着终端设备的迅速发展,终端设备已经成为人们生活中必不可少的一种工具,并且为用户生活的各个方面带来了极大的便捷。终端设备上可以存在很多不同的社交软件,以方便用户之间的交流与沟通。当用户使用终端设备上的社交软件与其他用户进行交流时,可能会接收到一些表情,并且有时会接收到很多相同的表情。
当终端设备接收到很多相同的表情时,这些相同的表情会占用终端设备显示界面比较多的空间,从而导致终端设备显示界面内容的可读性比较差。
发明内容
本公开实施例提供一种表情图像的显示方法及终端设备,以解决终端设备接收到很多相同的表情时,这些相同的表情会占用终端设备显示界面比较多的空间,从而导致终端设备显示界面内容的可读性比较差的问题。
第一方面,本公开实施例提供了一种表情图像的显示方法,包括:
接收第一输入,所述第一输入用于触发对话框中多条目标消息的合并,每条所述目标消息均包含相同的表情图像;
响应所述第一输入,合并所述多条目标消息,以使在一条消息中显示所述表情图像。
第二方面,本公开实施例还提供一种终端设备,包括:
接收模块,用于接收第一输入,所述第一输入用于触发对话框中多条目标消息的合并,每条所述目标消息均包含相同的表情图像;
合并模块,用于响应所述第一输入,合并所述多条目标消息,以使在一条消息中显示所述表情图像。
第三方面,本公开实施例还提供一种终端设备,包括处理器、存储器及存储在所述存储器上并可在所述处理器上运行的计算机程序,所述计算机程序被所述处理器执行时实现上述表情图像的显示方法的步骤。
第四方面,本公开实施例还提供一种计算机可读存储介质,所述计算机可读存储介质上存储计算机程序,所述计算机程序被处理器执行时实现上述表情图像的显示方法的步骤。
在本公开实施例中,接收第一输入,所述第一输入用于触发对话框中多条目标消息的合并,每条所述目标消息均包含相同的表情图像;响应所述第一输入,合并所述多条目标消息,以使在一条消息中显示所述表情图像。这样,通过将多条目标消息进行合并,在一条消息中显示表情图像,使终端设备的显示更加简洁,提高终端设备显示界面内容的可读性。
附图说明
为了更清楚地说明本公开实施例的技术方案,下面将对本公开实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本公开的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。
图1是本公开实施例提供的表情图像的显示方法的流程图之一;
图2是本公开实施例提供的表情图像的显示方法的流程图之二;
图3是本公开实施例提供的表情图像显示方式的示意图之一;
图4是本公开实施例提供的表情图像显示方式的示意图之二;
图5是本公开实施例提供的表情图像显示方式的示意图之三;
图6是本公开实施例提供的表情图像显示方式的示意图之四;
图7是本公开实施例提供的表情图像显示方式的示意图之五;
图8是本公开实施例提供的终端设备的结构图之一;
图9是本公开实施例提供的终端设备的合并模块的结构图;
图10是本公开实施例提供的终端设备的结构图之二。
具体实施方式
下面将结合本公开实施例中的附图,对本公开实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本公开一部分实施例,而不是全部的实施例。基于本公开中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本公开保护的范围。
参见图1,图1是本公开实施例提供的表情图像的显示方法的流程图,如图1所示,包括以下步骤:101和102。
步骤101、接收第一输入,所述第一输入用于触发对话框中多条目标消息的合并,每条所述目标消息均包含相同的表情图像。
本公开实施例中,上述第一输入,可以是用户的语音输入,或者也可以是用户在终端设备上的触控操作进行的输入,还可以是接收另一终端设备发送过来的消息等等。当第一输入为用户在终端设备上的触控操作进行的输入时,可以有多种情况。例如:用户选择发送一个与对话框中已存在的相同的表情图像触发合并;或者也可以是用户在显示屏上的预设位置进行一次点击、双击或者滑动等操作,从而触发合并;或者用户接收新消息等等。
本公开实施例中,上述对话框可以是终端设备自带的社交软件的对话框,或者也可以是终端设备下载的第三方社交软件的对话框。上述表情图像,可以是动态表情,或者也可以是静态表情,对此本公开实施例不作限定。
本实施方式中,确定哪些表情图像为相同的表情图像可以有多种方式。例如:可以判断是否存在索引信息(如标识号)相同的表情图像,从而可以将索引信息相同的表情图像确定为相同的表情图像。
或者,也可以使用一些图像比较的方法来判断。当表情图像为静态表情时,就可以判断两个表情图像之间的相似度是否大于预设阈值,若大于预设阈值,则说明该两个表情图像为相同的表情图像。当表情图像为动态表情时,可以首先判断两个表情图像之间帧数是否相同。在帧数相同的条件下,继续判断两个表情图像之间每一帧的相似度是否均大于预设阈值,且顺序一致。 若是则说明该两个表情图像为相同的表情图像。当然,除此之外还可以采用一些比较成熟的图像对比方法来判断,可以从像素或者颜色的角度进行比较等等,对此本公开实施例不作限定。
步骤102、响应所述第一输入,合并所述多条目标消息,以使在一条消息中显示所述表情图像。
本公开实施例中,通过对多条目标消息进行合并,在一条消息中显示表情图像,从而使终端设备的显示更加简洁,提高终端设备显示界面内容的可读性。
本公开实施例中,上述终端设备可以是手机、平板电脑(Tablet Personal Computer)、膝上型电脑(Laptop Computer)、个人数字助理(personal digital assistant,简称PDA)、移动上网装置(Mobile Internet Device,MID)或可穿戴式设备(Wearable Device)等等。
本公开实施例的一种表情图像的显示方法,接收第一输入,所述第一输入用于触发对话框中多条目标消息的合并,每条所述目标消息均包含相同的表情图像;响应所述第一输入,合并所述多条目标消息,以使在一条消息中显示所述表情图像。这样,通过将多条目标消息进行合并,在一条消息中显示表情图像,使终端设备的显示更加简洁,提高终端设备显示界面内容的可读性。
参见图2,图2是本公开实施例提供的一种表情图像的显示方法的流程图。本实施例与上个实施例的主要区别在于本方法中响应所述第一输入,合并所述多条目标消息,以使在一条消息中显示所述表情图像,包括:响应所述第一输入,获取所述对话框中包含相同的表情图像的目标消息;删除所述目标消息;在一条消息中显示所述表情图像。如图2所示,包括以下步骤:201至204。
步骤201、接收第一输入,所述第一输入用于触发对话框中多条目标消息的合并,每条所述目标消息均包含相同的表情图像。
本公开实施例中,上述第一输入,可以是用户的语音输入,或者也可以是用户在终端设备上的触控操作进行的输入,还可以是接收另一终端设备发送过来的消息等等。当第一输入为用户在终端设备上的触控操作进行的输入 时,可以有多种情况。例如:用户选择发送一个与对话框中已存在的相同的表情图像触发合并;或者也可以是用户在显示屏上的预设位置进行一次点击、双击或者滑动等操作,从而触发合并;或者用户接收新消息等等。
本公开实施例中,上述对话框可以是终端设备自带的社交软件的对话框,或者也可以是终端设备下载的第三方社交软件的对话框。上述表情图像,可以是动态表情,或者也可以是静态表情,对此本公开实施例不作限定。
本实施方式中,确定哪些表情图像为相同的表情图像可以有多种方式。例如:可以判断是否存在索引信息(如标识号)相同的表情图像,从而可以将索引信息相同的表情图像确定为相同的表情图像。
或者,也可以使用一些图像比较的方法来判断。当表情图像为静态表情时,就可以判断两个表情图像之间的相似度是否大于预设阈值,若大于预设阈值,则说明该两个表情图像为相同的表情图像。当表情图像为动态表情时,可以首先判断两个表情图像之间帧数是否相同。在帧数相同的条件下,继续判断两个表情图像之间每一帧的相似度是否均大于预设阈值,且顺序一致。若是则说明该两个表情图像为相同的表情图像。当然,除此之外还可以采用一些比较成熟的图像对比方法来判断,可以从像素或者颜色的角度进行比较等等,对此本公开实施例不作限定。
步骤202、响应所述第一输入,获取所述对话框中包含相同的表情图像的目标消息。
本公开实施例中,上述获取所述对话框中包含相同的表情图像的目标消息,可以是通过相同的标识号获取相同的表情图像,或者也可以是在确定相同的表情图像的情况下直接获取。
步骤203、隐藏所述目标消息。
本公开实施例中,上述目标消息为包含相同表情图像的目标消息,隐藏上述目标消息,可以包括目标消息在后期还可以继续显示的情况,还可以包括将目标消息进行删除并且不可恢复显示的情况。并且,若需要对目标消息进行删除,在删除的过程中,可以一次性删除所有包含相同表情图像的目标消息,或者也可以对包含相同表情图像的目标消息一条一条进行删除,对此本公开实施例不作限定。
步骤204、在一条消息中显示所述表情图像。
本公开实施例中,上述一条消息可以是新建的一条消息,或者也可以是在删除目标消息的过程中保留的一条消息等等。这样,通过删除一些冗余的目标消息,在一条消息中显示上述表情图像,从而使终端设备的显示更加简洁,提高终端设备显示界面内容的可读性。
可选的,所述获取所述对话框中包含相同的表情图像的目标消息,包括:
获取所述对话框中前M个消息中包含相同的表情图像的目标消息,其中,所述M为预设的正整数;
或者,获取当前时刻之前的预设时间段内,所述对话框中包含相同的表情图像的目标消息。
本实施方式中,上述M可以是5、10、20或者一些其他正整数,上述预设时间段可以是1分钟、2分钟或者一些其他的时间段等等,对此本实施方式均不作限定。上述对话框中存在的目标消息,可以是终端设备的用户发送的消息,或者也可以是终端设备接收其他用户发送的消息。或者,还可以判断接收第一输入之前的上一条表情图像与根据第一输入发送的表情图像是否为相同的表情等等。
本实施方式中,通过限定消息的个数或者限定时间范围,可以排除一些时间相对比较久远的消息。而这些消息对于当前来说可能意义不是很大,所以可以对这些时间相对比较久远的消息不进行处理,从而简化终端设备的处理过程,减少终端设备内存的开销。
可选的,若所述多条目标消息为同一用户发送的消息,所述在一条消息中显示所述表情图像,包括:
在所述用户的一条消息中显示一个所述表情图像以及用于表示所述多条目标消息的数量的第一标识;
或者,在所述用户的一条消息中显示N个所述表情图像,其中,所述N为所述多条目标消息的数量。
本实施方式中,为了更好的理解在所述用户的一条消息中显示一个所述表情图像以及用于表示所述多条目标消息的数量的第一标识,请参阅图3,图3为本公开实施例提供的一种表情图像显示方式的示意图。图3中可以看 到,左侧为同一个用户发送的4条消息,且每条消息中均包含相同的表情图像。进行合并之后,图3的右侧可以看到,此时只在一条消息中显示表情图像,X4即为第一标识,用于表示4条目标消息。
本实施方式中,通过合并之后,减少了表情图像的显示个数,也减少了用户头像的显示个数。使对话框中的显示更加简洁,节省了对话框的显示空间。空余出来的显示空间可以显示更多的消息,而不至于使对话框多次显示一些重复的表情图像,方便用户查看对话框中的消息,提高终端设备显示界面内容的可读性。
本实施方式中,为了更好的理解在所述用户的一条消息中显示N个所述表情图像,请参阅图4,图4为本公开实施例提供的一种表情图像显示方式的示意图。图4中可以看到,左侧为同一个用户发送的4条消息,且每条消息中均包含相同的表情图像。进行合并之后,图4的右侧可以看到,此时只在一条消息中显示多个表情图像,并且对多个表情图像进行重新排列,节省了对话框的显示空间,并且可以起到强调的效果。
当然,在对多个表情图像进行重新排列时,还可以将表情图像进行适当的缩放。这样,通过减少用户头像的显示个数,使对话框中的显示更加简洁,节省了对话框的显示空间,提高终端设备显示界面内容的可读性。
本实施方式中,不管是减少用户头像的显示个数,还是既减少了表情图像的显示个数,也减少了用户头像的显示个数,均可节约对话框的显示空间,使终端设备的显示更加简洁,提高终端设备显示界面内容的可读性。
可选的,若所述多条目标消息为不同用户发送的消息,所述在一条消息中显示所述表情图像,包括:
在一条消息中显示一个所述表情图像,并按照消息发送时间的先后顺序显示所述不同用户中每个用户的头像。
本实施方式中,请参阅图5,便于对上述过程进行理解,图5为本公开实施例提供的一种表情图像显示方式的示意图。图5中可以看到,左侧按照时间的先后顺序分别有用户A、用户B、用户A和用户A发送的表情图像。进行合并之后,此时如图5右侧所示,仅显示一个表情图像,且在该表情下方的区域,按照消息发送时间的先后顺序显示用户A、用户B、用户A和用 户A的用户头像。
本实施方式中,当存在不同用户发送相同的表情图像时,只显示一个表情图像,从而节省了对话框的显示空间,避免了多次显示重复的表情图像,使终端设备的显示更加简洁。并且,按照消息发送时间的先后顺序显示所述不同用户中每个用户的头像,从而可以方便用户了解发送表情图像的用户的先后顺序,使用户有更好的体验。
可选的,所述响应所述第一输入,合并所述多条目标消息,以使在一条消息中显示所述表情图像,包括:
若所述多条目标消息之间,存在不包含所述表情图像的消息,且不包含所述表情图像的消息的个数在预设个数以内,则响应所述第一输入,合并所述多条目标消息,以使在一条消息中显示所述表情图像和第二标识,所述第二标识用于表示所述多条目标消息之间,不包含所述表情图像的消息对应的用户。
本实施方式中,上述不包含所述表情图像的消息,可以是文字、符号、字母或者与所述表情图像不同的表情图像等等,这些消息可以是任何用户发送的消息。若不包含所述表情图像的消息的个数在预设个数以内,则说明对话框中显示相同的表情图像可能比较多,从而可以合并所述多条目标消息。合并之后可以在最后一个相同表情图像的位置显示一条消息,并且显示相同的表情图像。这样使终端设备的显示更加简洁,提高终端设备显示界面内容的可读性。
本实施方式中,当不包含所述表情图像的消息的个数超过预设个数时,此时可能显示比较多的与上述表情图像不同的消息,相同的表情图像相对显示的比较少,此时也可以对相同的表情图像不进行合并,用户依旧可以查看比较有效的消息,而不是重复的表情图像。
本实施方式中,上述第二标识可以是一个三角形、圆形或者一些其他的形状的标识,对此本实施方式不作限定。为了更好的理解上述过程,可以参阅图6和图7,图6为本公开实施例提供的一种表情图像显示方式的示意图,图7为本公开实施例提供的另一种表情图像显示方式的示意图。图6中可以看到,左侧按照时间先后顺序分别有用户A发送的表情图像、用户A发送的 表情图像、用户B发送的“再见”和用户A发送的表情图像。进行合并之后,此时如图6右侧所示,显示有用户B发送的“再见”,在“再见”下方显示有一个表情图像、一个三角形、X3以及一个用户A的用户头像。X3表示用户A发送了三个相同的表情图像,该三角形即为第二标识,用于表示多条目标消息之间,不包含所述表情图像的消息对应的用户,即用户B。
请参阅图7,左侧按照时间先后顺序分别有用户A发送的表情图像、用户B发送的表情图像、用户A发送的“再见”和用户A发送的表情图像。进行合并之后,此时如图7右侧所示,显示有用户A发送的“再见”,在“再见”的下方显示有一个表情图像。在该表情图像的下方,按照消息发送的先后顺序显示每个用户的头像,以及一个三角形。该三角形即为第二标识,用于表示多条目标消息之间,不包含所述表情图像的消息对应的用户,即用户A。
本实施方式中,通过第二标识可以提示用户,以使用户得知在多个表情图像中可能存在不包含所述表情图像的消息对应的用户,不至于因为合并之后,漏掉这些用户的消息。当然,当用户点击第二标识时,还可以还原合并之前消息的显示状态。并且,多个相同的表情图像也可以随着聊天的进行产生动态变化,新增加的相同的表情图像可以改变第一标识,或者改变用户头像的显示次数等等,从而使终端设备的显示更加简洁。
参见图8,图8是本公开实施例提供的终端设备的结构图,能实现上述实施例中表情图像的显示方法的细节,并达到相同的效果。如图8所示,终端设备800包括接收模块801和合并模块802,接收模块801和合并模块802连接,其中:
接收模块801,用于接收第一输入,所述第一输入用于触发对话框中多条目标消息的合并,每条所述目标消息均包含相同的表情图像;
合并模块802,用于响应所述第一输入,合并所述多条目标消息,以使在一条消息中显示所述表情图像。
可选的,如图9所示,所述合并模块802,包括:
获取子模块8021,用于响应所述第一输入,获取所述对话框中包含相同的表情图像的目标消息;
隐藏子模块8022,用于隐藏所述目标消息;
显示子模块8023,用于在一条消息中显示所述表情图像。
可选的,所述获取子模块8021,用于:获取所述对话框中前M个消息中包含相同的表情图像的目标消息,其中,所述M为预设的正整数;或者,获取当前时刻之前的预设时间段内,所述对话框中包含相同的表情图像的目标消息。
可选的,若所述多条目标消息为同一用户发送的消息,所述显示子模块8023,用于:在所述用户的一条消息中显示一个所述表情图像以及用于表示所述多条目标消息的数量的第一标识;或者,在所述用户的一条消息中显示N个所述表情图像,其中,所述N为所述多条目标消息的数量。
可选的,若所述多条目标消息为不同用户发送的消息,所述显示子模块8023,用于:在一条消息中显示一个所述表情图像,并按照消息发送时间的先后顺序显示所述不同用户中每个用户的头像。
可选的,所述合并模块802,用于:若所述多条目标消息之间,存在不包含所述表情图像的消息,且不包含所述表情图像的消息的个数在预设个数以内,则响应所述第一输入,合并所述多条目标消息,以使在一条消息中显示所述表情图像和第二标识,所述第二标识用于表示所述多条目标消息之间,不包含所述表情图像的消息对应的用户。
终端设备800能实现图1至图2的方法实施例中终端设备实现的各个过程,为避免重复,这里不再赘述。
本公开实施例的终端设备800,接收第一输入,所述第一输入用于触发对话框中多条目标消息的合并,每条所述目标消息均包含相同的表情图像;响应所述第一输入,合并所述多条目标消息,以使在一条消息中显示所述表情图像。这样,通过将多条目标消息进行合并,在一条消息中显示表情图像,使终端设备的显示更加简洁,提高终端设备显示界面内容的可读性。
参见图10,图10为实现本公开各个实施例的一种终端设备的硬件结构示意图,该终端设备1000包括但不限于:射频单元1001、网络模块1002、音频输出单元1003、输入单元1004、传感器1005、显示单元1006、用户输入单元1007、接口单元1008、存储器1009、处理器1010、以及电源1011等部件。本领域技术人员可以理解,图10中示出的终端设备结构并不构成对终 端设备的限定,终端设备可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。在本公开实施例中,终端设备包括但不限于手机、平板电脑、笔记本电脑、掌上电脑、车载终端、可穿戴设备、以及计步器等。
其中,处理器1010,用于接收第一输入,所述第一输入用于触发对话框中多条目标消息的合并,每条所述目标消息均包含相同的表情图像;响应所述第一输入,合并所述多条目标消息,以使在一条消息中显示所述表情图像。这样,通过将多条目标消息进行合并,在一条消息中显示表情图像,使终端设备的显示更加简洁,提高终端设备显示界面内容的可读性。
可选的,处理器1010,还用于响应所述第一输入,获取所述对话框中包含相同的表情图像的目标消息;隐藏所述目标消息;在一条消息中显示所述表情图像。
可选的,处理器1010,还用于获取所述对话框中前M个消息中包含相同的表情图像的目标消息,其中,所述M为预设的正整数;或者,获取当前时刻之前的预设时间段内,所述对话框中包含相同的表情图像的目标消息。
可选的,若所述多条目标消息为同一用户发送的消息,处理器1010,还用于在所述用户的一条消息中显示一个所述表情图像以及用于表示所述多条目标消息的数量的第一标识;或者,在所述用户的一条消息中显示N个所述表情图像,其中,所述N为所述多条目标消息的数量。
可选的,若所述多条目标消息为不同用户发送的消息,处理器1010,还用于在一条消息中显示一个所述表情图像,并按照消息发送时间的先后顺序显示所述不同用户中每个用户的头像。
可选的,处理器1010,还用于若所述多条目标消息之间,存在不包含所述表情图像的消息,且不包含所述表情图像的消息的个数在预设个数以内,则响应所述第一输入,合并所述多条目标消息,以使在一条消息中显示所述表情图像和第二标识,所述第二标识用于表示所述多条目标消息之间,不包含所述表情图像的消息对应的用户。
应理解的是,本公开实施例中,射频单元1001可用于收发信息或通话过程中,信号的接收和发送,具体的,将来自基站的下行数据接收后,给处理 器1010处理;另外,将上行的数据发送给基站。通常,射频单元1001包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器、双工器等。此外,射频单元1001还可以通过无线通信系统与网络和其他设备通信。
终端设备通过网络模块1002为用户提供了无线的宽带互联网访问,如帮助用户收发电子邮件、浏览网页和访问流式媒体等。
音频输出单元1003可以将射频单元1001或网络模块1002接收的或者在存储器1009中存储的音频数据转换成音频信号并且输出为声音。而且,音频输出单元1003还可以提供与终端设备1000执行的特定功能相关的音频输出(例如,呼叫信号接收声音、消息接收声音等等)。音频输出单元1003包括扬声器、蜂鸣器以及受话器等。
输入单元1004用于接收音频或视频信号。输入单元1004可以包括图形处理器(Graphics Processing Unit,GPU)10041和麦克风10042,图形处理器10041对在视频捕获模式或图像捕获模式中由图像捕获装置(如摄像头)获得的静态图片或视频的图像数据进行处理。处理后的图像帧可以显示在显示单元1006上。经图形处理器10041处理后的图像帧可以存储在存储器1009(或其它存储介质)中或者经由射频单元1001或网络模块1002进行发送。麦克风10042可以接收声音,并且能够将这样的声音处理为音频数据。处理后的音频数据可以在电话通话模式的情况下转换为可经由射频单元1001发送到移动通信基站的格式输出。
终端设备1000还包括至少一种传感器1005,比如光传感器、运动传感器以及其他传感器。具体地,光传感器包括环境光传感器及接近传感器,其中,环境光传感器可根据环境光线的明暗来调节显示面板10061的亮度,接近传感器可在终端设备1000移动到耳边时,关闭显示面板10061和/或背光。作为运动传感器的一种,加速计传感器可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可用于识别终端设备姿态(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)等;传感器1005还可以包括指纹传感器、压力传感器、虹膜传感器、分子传感器、陀螺仪、气压计、湿度计、温度计、红外线传感器等,在此不再赘述。
显示单元1006用于显示由用户输入的信息或提供给用户的信息。显示单元1006可包括显示面板10061,可以采用液晶显示器(Liquid Crystal Display,LCD)、有机发光二极管(Organic Light-Emitting Diode,OLED)等形式来配置显示面板10061。
用户输入单元1007可用于接收输入的数字或字符信息,以及产生与终端设备的用户设置以及功能控制有关的键信号输入。具体地,用户输入单元1007包括触控面板10071以及其他输入设备10072。触控面板10071,也称为触摸屏,可收集用户在其上或附近的触摸操作(比如用户使用手指、触笔等任何适合的物体或附件在触控面板10071上或在触控面板10071附近的操作)。触控面板10071可包括触摸检测装置和触摸控制器两个部分。其中,触摸检测装置检测用户的触摸方位,并检测触摸操作带来的信号,将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成触点坐标,再送给处理器1010,接收处理器1010发来的命令并加以执行。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触控面板10071。除了触控面板10071,用户输入单元1007还可以包括其他输入设备10072。具体地,其他输入设备10072可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆,在此不再赘述。
进一步的,触控面板10071可覆盖在显示面板10061上,当触控面板10071检测到在其上或附近的触摸操作后,传送给处理器1010以确定触摸事件的类型,随后处理器1010根据触摸事件的类型在显示面板10061上提供相应的视觉输出。虽然在图10中,触控面板10071与显示面板10061是作为两个独立的部件来实现终端设备的输入和输出功能,但是在某些实施例中,可以将触控面板10071与显示面板10061集成而实现终端设备的输入和输出功能,具体此处不做限定。
接口单元1008为外部装置与终端设备1000连接的接口。例如,外部装置可以包括有线或无线头戴式耳机端口、外部电源(或电池充电器)端口、有线或无线数据端口、存储卡端口、用于连接具有识别模块的装置的端口、音频输入/输出(I/O)端口、视频I/O端口、耳机端口等等。接口单元1008可以用于接收来自外部装置的输入(例如,数据信息、电力等等)并且将接收到的输入传 输到终端设备1000内的一个或多个元件或者可以用于在终端设备1000和外部装置之间传输数据。
存储器1009可用于存储软件程序以及各种数据。存储器1009可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等)等;存储数据区可存储根据手机的使用所创建的数据(比如音频数据、电话本等)等。此外,存储器1009可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。
处理器1010是终端设备的控制中心,利用各种接口和线路连接整个终端设备的各个部分,通过运行或执行存储在存储器1009内的软件程序和/或模块,以及调用存储在存储器1009内的数据,执行终端设备的各种功能和处理数据,从而对终端设备进行整体监控。处理器1010可包括一个或多个处理单元;可选的,处理器1010可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器1010中。
终端设备1000还可以包括给各个部件供电的电源1011(比如电池),可选的,电源1011可以通过电源管理系统与处理器1010逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。
另外,终端设备1000包括一些未示出的功能模块,在此不再赘述。
可选的,本公开实施例还提供一种终端设备,包括处理器1010,存储器1009,存储在存储器1009上并可在所述处理器1010上运行的计算机程序,该计算机程序被处理器1010执行时实现上述表情图像的显示方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
本公开实施例还提供一种计算机可读存储介质,计算机可读存储介质上存储有计算机程序,该计算机程序被处理器执行时实现上述表情图像的显示方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。其中,所述的计算机可读存储介质,如只读存储器(Read-Only Memory,简称ROM)、随机存取存储器(Random Access Memory,简称RAM)、磁碟 或者光盘等。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本公开的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端(可以是手机,计算机,服务器,空调器,或者网络设备等)执行本公开各个实施例所述的方法。
上面结合附图对本公开的实施例进行了描述,但是本公开并不局限于上述的具体实施方式,上述的具体实施方式仅仅是示意性的,而不是限制性的,本领域的普通技术人员在本公开的启示下,在不脱离本公开宗旨和权利要求所保护的范围情况下,还可做出很多形式,均属于本公开的保护之内。

Claims (14)

  1. A method for displaying an emoticon image, applied to a terminal device, comprising:
    receiving a first input, wherein the first input is used to trigger merging of multiple target messages in a dialog box, and each of the target messages contains the same emoticon image;
    in response to the first input, merging the multiple target messages, so that the emoticon image is displayed in a single message.
  2. The method according to claim 1, wherein the merging the multiple target messages in response to the first input, so that the emoticon image is displayed in a single message, comprises:
    in response to the first input, acquiring target messages in the dialog box that contain the same emoticon image;
    hiding the target messages;
    displaying the emoticon image in a single message.
  3. The method according to claim 2, wherein the acquiring target messages in the dialog box that contain the same emoticon image comprises:
    acquiring, from the first M messages in the dialog box, target messages that contain the same emoticon image, wherein M is a preset positive integer;
    or, acquiring, within a preset time period before the current moment, target messages in the dialog box that contain the same emoticon image.
  4. The method according to claim 2, wherein, in a case that the multiple target messages are messages sent by a same user, the displaying the emoticon image in a single message comprises:
    displaying, in a single message of the user, one emoticon image and a first identifier used to indicate the number of the multiple target messages;
    or, displaying N emoticon images in a single message of the user, wherein N is the number of the multiple target messages.
  5. The method according to claim 2, wherein, in a case that the multiple target messages are messages sent by different users, the displaying the emoticon image in a single message comprises:
    displaying one emoticon image in a single message, and displaying an avatar of each of the different users in the order of message sending time.
  6. The method according to claim 1, wherein the merging the multiple target messages in response to the first input, so that the emoticon image is displayed in a single message, comprises:
    in a case that there are, between the multiple target messages, messages that do not contain the emoticon image, and the number of the messages that do not contain the emoticon image is within a preset number, merging the multiple target messages in response to the first input, so that the emoticon image and a second identifier are displayed in a single message, wherein the second identifier is used to indicate a user corresponding to the messages between the multiple target messages that do not contain the emoticon image.
  7. A terminal device, comprising:
    a receiving module, configured to receive a first input, wherein the first input is used to trigger merging of multiple target messages in a dialog box, and each of the target messages contains the same emoticon image;
    a merging module, configured to merge the multiple target messages in response to the first input, so that the emoticon image is displayed in a single message.
  8. The terminal device according to claim 7, wherein the merging module comprises:
    an obtaining sub-module, configured to acquire, in response to the first input, target messages in the dialog box that contain the same emoticon image;
    a hiding sub-module, configured to hide the target messages;
    a display sub-module, configured to display the emoticon image in a single message.
  9. The terminal device according to claim 8, wherein the obtaining sub-module is configured to: acquire, from the first M messages in the dialog box, target messages that contain the same emoticon image, wherein M is a preset positive integer; or acquire, within a preset time period before the current moment, target messages in the dialog box that contain the same emoticon image.
  10. The terminal device according to claim 8, wherein, in a case that the multiple target messages are messages sent by a same user, the display sub-module is configured to: display, in a single message of the user, one emoticon image and a first identifier used to indicate the number of the multiple target messages; or display N emoticon images in a single message of the user, wherein N is the number of the multiple target messages.
  11. The terminal device according to claim 8, wherein, in a case that the multiple target messages are messages sent by different users, the display sub-module is configured to: display one emoticon image in a single message, and display an avatar of each of the different users in the order of message sending time.
  12. The terminal device according to claim 7, wherein the merging module is configured to: in a case that there are, between the multiple target messages, messages that do not contain the emoticon image, and the number of the messages that do not contain the emoticon image is within a preset number, merge the multiple target messages in response to the first input, so that the emoticon image and a second identifier are displayed in a single message, wherein the second identifier is used to indicate a user corresponding to the messages between the multiple target messages that do not contain the emoticon image.
  13. A terminal device, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the method for displaying an emoticon image according to any one of claims 1 to 6.
  14. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, and the computer program, when executed by a processor, implements the steps of the method for displaying an emoticon image according to any one of claims 1 to 6.
PCT/CN2019/082229 2018-04-20 2019-04-11 Emoticon image display method and terminal device WO2019201146A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810358537.3 2018-04-20
CN201810358537.3A CN108600089B (zh) Emoticon image display method and terminal device

Publications (1)

Publication Number Publication Date
WO2019201146A1 true WO2019201146A1 (zh) 2019-10-24

Family

ID=63613689

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/082229 WO2019201146A1 (zh) 2018-04-20 2019-04-11 表情图像的显示方法及终端设备

Country Status (2)

Country Link
CN (1) CN108600089B (zh)
WO (1) WO2019201146A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110930410A (zh) * 2019-10-28 2020-03-27 维沃移动通信有限公司 一种图像处理方法、服务器及终端设备
CN111369645A (zh) * 2020-02-28 2020-07-03 北京百度网讯科技有限公司 表情信息的展现方法、装置、设备及介质

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108600089B (zh) * 2018-04-20 2020-06-30 维沃移动通信有限公司 一种表情图像的显示方法及终端设备
CN110196673B (zh) * 2019-06-04 2020-10-30 北京达佳互联信息技术有限公司 图片交互方法、装置、终端及存储介质
CN113114554A (zh) * 2020-01-13 2021-07-13 阿里巴巴集团控股有限公司 消息的展示方法及装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160035123A1 (en) * 2014-07-31 2016-02-04 Emonster, Inc. Customizable animations for text messages
CN106250144A (zh) * 2016-07-28 2016-12-21 北京珠穆朗玛移动通信有限公司 一种通知栏消息显示方法及其移动终端
CN106372204A (zh) * 2016-08-31 2017-02-01 北京小米移动软件有限公司 推送消息处理方法及装置
CN107846352A (zh) * 2017-11-10 2018-03-27 维沃移动通信有限公司 一种信息显示方法、移动终端
CN108600089A (zh) * 2018-04-20 2018-09-28 维沃移动通信有限公司 一种表情图像的显示方法及终端设备

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100738531B1 (ko) * 2005-07-22 2007-07-11 삼성전자주식회사 디스플레이 세그먼트 제어 장치 및 그 방법
JP2009070278A (ja) * 2007-09-14 2009-04-02 Toshiba Corp コンテンツ類似性判定装置およびコンテンツ類似性判定方法
CN104281629B (zh) * 2013-07-12 2018-12-21 珠海豹好玩科技有限公司 从网页中提取图片的方法、装置及客户端设备
CN105354231B (zh) * 2015-09-30 2021-05-11 腾讯科技(深圳)有限公司 图片选择方法、装置、图片处理方法和装置
CN106888153A (zh) * 2016-06-12 2017-06-23 阿里巴巴集团控股有限公司 展示要素生成方法、展示要素生成装置、展示要素和通讯软件
CN106533899B (zh) * 2016-09-30 2019-12-10 宇龙计算机通信科技(深圳)有限公司 一种信息显示处理的方法、装置及系统
CN106897937B (zh) * 2017-02-15 2021-03-30 北京小米移动软件有限公司 一种展示社交分享信息的方法和装置
CN107231350B (zh) * 2017-05-24 2020-05-19 北京潘达互娱科技有限公司 一种消息处理方法与装置
CN107562475B (zh) * 2017-08-29 2019-02-05 Oppo广东移动通信有限公司 消息显示方法、装置及终端
CN107707452B (zh) * 2017-09-12 2021-03-30 创新先进技术有限公司 针对表情的信息展示方法、装置以及电子设备
CN107911283A (zh) * 2017-11-20 2018-04-13 珠海市魅族科技有限公司 消息显示方法及装置、计算机装置和计算机可读存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160035123A1 (en) * 2014-07-31 2016-02-04 Emonster, Inc. Customizable animations for text messages
CN106250144A (zh) * 2016-07-28 2016-12-21 北京珠穆朗玛移动通信有限公司 一种通知栏消息显示方法及其移动终端
CN106372204A (zh) * 2016-08-31 2017-02-01 北京小米移动软件有限公司 推送消息处理方法及装置
CN107846352A (zh) * 2017-11-10 2018-03-27 维沃移动通信有限公司 一种信息显示方法、移动终端
CN108600089A (zh) * 2018-04-20 2018-09-28 维沃移动通信有限公司 一种表情图像的显示方法及终端设备

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110930410A (zh) * 2019-10-28 2020-03-27 维沃移动通信有限公司 一种图像处理方法、服务器及终端设备
CN110930410B (zh) * 2019-10-28 2023-06-23 维沃移动通信有限公司 一种图像处理方法、服务器及终端设备
CN111369645A (zh) * 2020-02-28 2020-07-03 北京百度网讯科技有限公司 表情信息的展现方法、装置、设备及介质
CN111369645B (zh) * 2020-02-28 2023-12-05 北京百度网讯科技有限公司 表情信息的展现方法、装置、设备及介质

Also Published As

Publication number Publication date
CN108600089B (zh) 2020-06-30
CN108600089A (zh) 2018-09-28

Similar Documents

Publication Publication Date Title
WO2019154181A1 (zh) 显示控制方法及移动终端
CN109388297B (zh) 表情展示方法、装置、计算机可读存储介质及终端
WO2019196707A1 (zh) 一种移动终端控制方法及移动终端
WO2019201146A1 (zh) 表情图像的显示方法及终端设备
WO2021017763A1 (zh) 事件处理方法、终端设备及计算机可读存储介质
CN108540655B (zh) 一种来电显示处理方法及移动终端
WO2019196691A1 (zh) 一种键盘界面显示方法和移动终端
WO2021017776A1 (zh) 信息处理方法及终端
WO2019120087A1 (zh) 降低功耗的处理方法及移动终端
US11658932B2 (en) Message sending method and terminal device
WO2019196929A1 (zh) 一种视频数据处理方法及移动终端
WO2019196864A1 (zh) 一种虚拟按键控制方法及移动终端
WO2019062364A1 (zh) 显示方法及移动终端
WO2019114530A1 (zh) 信息提示方法及移动终端
WO2019080775A1 (zh) 通知消息的提示方法及移动终端
WO2019201271A1 (zh) 通话处理方法及移动终端
WO2019120192A1 (zh) 文本编辑方法及移动终端
WO2021098633A1 (zh) 信息显示、发送方法及电子设备
CN107734170B (zh) 一种通知消息处理方法、移动终端及穿戴设备
WO2019223569A1 (zh) 信息处理方法及移动终端
CN109993821B (zh) 一种表情播放方法及移动终端
WO2021083036A1 (zh) 消息回复方法、服务器和电子设备
WO2020238536A1 (zh) 信息处理方法及终端设备
WO2019154360A1 (zh) 界面切换方法及移动终端
WO2020228538A1 (zh) 截图的方法和移动终端

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19788489

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19788489

Country of ref document: EP

Kind code of ref document: A1