CN115412518A - Expression sending method and device, storage medium and electronic equipment

Info

Publication number: CN115412518A
Application number: CN202210999943.4A
Authority: CN (China)
Legal status: Pending
Prior art keywords: expression, target, user, user terminal, emotional
Other languages: Chinese (zh)
Inventors: 李炳翰, 唐慧, 张京, 姜宇巍, 王悦
Current assignee: Netease Media Technology Beijing Co Ltd
Original assignee: Netease Media Technology Beijing Co Ltd

Events:
- Application filed by Netease Media Technology Beijing Co Ltd
- Priority to CN202210999943.4A
- Publication of CN115412518A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00: User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/04: Real-time or near real-time messaging, e.g. instant messaging [IM]
    • H04L 51/07: User-to-user messaging characterised by the inclusion of specific contents
    • H04L 51/10: Multimedia information
    • H04M: TELEPHONIC COMMUNICATION
    • H04M 1/00: Substation equipment, e.g. for use by subscribers
    • H04M 1/72: Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724: User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72403: User interfaces with means for local support of applications that increase the functionality
    • H04M 1/72427: User interfaces with means for supporting games or graphical animations

Abstract

The disclosure provides an expression sending method and device, a storage medium, and an electronic device, and relates to the field of communication technology. In the embodiments of the disclosure, an expression area corresponding to an expression type is displayed in response to the expression type indicated by a selection operation, and a target expression is sent to a second user terminal in response to the target expression being determined in the expression area, where the target expression is a target emotional expression or a target interactive expression. Because different expression types have different ways of determining the target expression and send different content to the second user terminal, different sending methods can be determined based on the expression type. This enriches the ways in which expressions can be sent, increases interaction with other users according to the expression type, and improves the interest of sending expressions.

Description

Expression sending method and device, storage medium and electronic equipment
Technical Field
The embodiment of the disclosure relates to the technical field of communication, and more particularly, to an expression sending method, an expression sending device, a storage medium and an electronic device.
Background
With the rapid development of the internet, a variety of online social applications have emerged, such as instant messaging applications. When using these applications, users often send dynamic expressions as conversation messages in order to express their thoughts more vividly, and conversing through dynamic expressions can greatly increase the interest of communication between users.
In practice, it is common to present all available free expressions directly, or to unlock paid expressions all at once after payment. An expression is generally sent by a single party, with no follow-up function after it is sent; and because the expression function is so basic, all expressions are simply listed directly, so users have little desire to explore them. The current way of sending expressions is therefore single-mode and lacks interaction, so sending expressions is of low interest.
This section is intended to provide a background or context to the embodiments of the disclosure recited in the claims and the description herein is not admitted to be prior art by inclusion in this section.
Disclosure of Invention
In order to overcome the problems in the related art, the present disclosure provides an expression sending method, an expression sending device, a storage medium, and an electronic device.
According to a first aspect of the present disclosure, there is provided an expression sending method applied to a first user terminal, the method including:
displaying, in response to an expression type indicated by a selection operation, an expression area corresponding to the expression type; and
sending a target expression to a second user terminal in response to the target expression being determined in the expression area,
wherein the target expression is a target emotional expression or a target interactive expression.
According to a second aspect of the present disclosure, there is provided an expression sending apparatus applied to a first user terminal, the apparatus including:
the first display module is used for responding to the expression type indicated by the selection operation and displaying an expression area corresponding to the expression type;
and the sending module is used for responding to the target expression determined in the expression area and sending the target expression to a second user terminal, wherein the target expression is a target emotional expression or a target interactive expression.
According to a third aspect of the present disclosure, there is provided a storage medium having stored thereon a computer program that, when executed by a processor, implements the expression sending method of the first aspect described above.
According to a fourth aspect of the present disclosure, there is provided an electronic device, including:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the expression sending method of the first aspect described above via execution of the executable instructions.
To sum up, the expression sending method provided by the embodiments of the present disclosure may display an expression area corresponding to an expression type in response to the expression type indicated by a selection operation, and send a target expression to a second user terminal in response to the target expression being determined in the expression area, where the target expression is a target emotional expression or a target interactive expression. Because different expression types have different ways of determining the target expression and send different content to the second user terminal, different sending methods can be determined based on the expression type; this enriches the ways in which expressions are sent, increases interaction with other users according to the expression type, and improves the interest of sending expressions.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description, which proceeds with reference to the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
fig. 1 schematically illustrates a block diagram of a computer system provided by an embodiment of the present disclosure;
fig. 2 is a flowchart schematically illustrating steps of an expression sending method according to an embodiment of the present disclosure;
fig. 3 schematically illustrates an interface diagram of expression sending provided by an embodiment of the present disclosure;
fig. 4 schematically illustrates another interface diagram of expression sending provided by an embodiment of the present disclosure;
fig. 5 schematically illustrates another interface diagram of expression sending provided by an embodiment of the present disclosure;
fig. 6 schematically illustrates another interface diagram of expression sending provided by an embodiment of the present disclosure;
fig. 7 schematically illustrates another interface diagram of expression sending provided by an embodiment of the present disclosure;
fig. 8 schematically illustrates another interface diagram of expression sending provided by an embodiment of the present disclosure;
fig. 9 schematically illustrates another interface diagram of expression sending provided by an embodiment of the present disclosure;
fig. 10 schematically illustrates another interface diagram of expression sending provided by an embodiment of the present disclosure;
fig. 11 schematically illustrates another interface diagram of expression sending provided by an embodiment of the present disclosure;
fig. 12 schematically illustrates another interface diagram of expression sending provided by an embodiment of the present disclosure;
fig. 13 schematically illustrates another interface diagram of expression sending provided by an embodiment of the present disclosure;
fig. 14 schematically illustrates another interface diagram of expression sending provided by an embodiment of the present disclosure;
fig. 15 schematically illustrates another interface diagram of expression sending provided by an embodiment of the present disclosure;
fig. 16 schematically illustrates another interface diagram of expression sending provided by an embodiment of the present disclosure;
fig. 17 schematically illustrates another interface diagram of expression sending provided by an embodiment of the present disclosure;
fig. 18 schematically illustrates another interface diagram of expression sending provided by an embodiment of the present disclosure;
fig. 19 schematically illustrates another interface diagram of expression sending provided by an embodiment of the present disclosure;
fig. 20 schematically illustrates a block diagram of an expression sending apparatus provided by an embodiment of the present disclosure;
fig. 21 schematically illustrates a block diagram of an electronic device provided by an embodiment of the present disclosure.
In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
The principles and spirit of the present disclosure will be described below with reference to several exemplary embodiments. It is understood that these embodiments are given solely for the purpose of enabling those skilled in the art to better understand and to practice the present disclosure, and are not intended to limit the scope of the present disclosure in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be embodied as a system, apparatus, device, method, or computer program product. Accordingly, the present disclosure may be embodied entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.), or in a combination of hardware and software. The data involved in the present disclosure may be data authorized by the user or fully authorized by all parties. In this document, any number of elements in the drawings is by way of example and not by way of limitation, and any nomenclature is used solely for differentiation and not by way of limitation.
Before the embodiments of the present disclosure are described in further detail, the terms used in the embodiments of the present disclosure are explained; the following explanations apply to these terms as used herein.
1) Expression: a form of popular culture that emerged as social applications became active, used to express a specific emotion, such as an emotion shown on a user's face or in a user's posture. In practice, expressions can be divided into symbolic expressions, static picture expressions, dynamic picture expressions, video expressions, and the like; for example, an expression can be made from a face expressing one of a user's various emotions, or from material such as popular stars, cartoons, or movie screenshots, with a matching series of captions added to the material.
2) In response to: indicates the condition or state on which an executed operation depends. When the dependent condition or state is satisfied, the one or more executed operations may be performed in real time or with a set delay; unless otherwise specified, there is no restriction on the order in which the operations are executed.
3) User Interface (UI) controls: controls or elements, such as pictures, input boxes, text boxes, buttons, and tabs, that are visible or invisible on the user interface of an application. For example, when a UI control is invisible, the user may trigger it by touching a designated area of the user interface. Some UI controls respond to user operations, such as a send control for sending the information in an information input area. The UI controls involved in the embodiments of the present disclosure include, but are not limited to: send controls and adjustment controls.
Fig. 1 schematically illustrates a block diagram of a computer system according to an embodiment of the present disclosure. The computer system 100 includes: a first user terminal 110, a server 120, a second user terminal 130.
The first user terminal 110 has installed and runs a client 111 that supports message sending; the client 111 may be a social application, and the first user 112 may be a user of the first user terminal 110. When the first user terminal 110 runs the client 111, a user interface of the client 111 is displayed on the screen of the first user terminal 110, and the first user 112 may operate on the displayed user interface of the client 111. The client may be an application with a message sending function, for example, any one of an instant messaging application, a real-time communication application, and an application with a comment function, such as a social program, a forum program, a mail program, a local life program, a shopping program, a game program, or a video program.
The second user terminal 130 has installed and runs a client 131 that supports message sending; the client 131 may be a social application, and the second user 132 may be a user of the second user terminal 130. When the second user terminal 130 runs the client 131, a user interface of the client 131 is displayed on the screen of the second user terminal 130, and the second user 132 may operate on the displayed user interface of the client 131.
Optionally, the clients installed on the first user terminal 110 and the second user terminal 130 are the same, or are the same type of client on different operating system platforms (Android or iOS). The first user terminal 110 may generally refer to one of a plurality of terminals, and the second user terminal 130 may generally refer to another of the plurality of terminals; this embodiment is exemplified only by the first user terminal 110 and the second user terminal 130. The first user terminal 110 and the second user terminal 130 may be of the same or different device types, including at least one of a smartphone, a tablet, an e-book reader, an MP3 player, an MP4 player, a laptop portable computer, and a desktop computer.
Only two terminals are shown in fig. 1, but in different embodiments a plurality of other terminals 140 may access the server 120. Optionally, one or more terminals 140 correspond to a developer: a development and editing platform for the message-sending client is installed on the terminal 140, the developer may edit and update the client on the terminal 140 and transmit the updated client installation package to the server 120 through a wired or wireless network, and the first user terminal 110 and the second user terminal 130 may download the client installation package from the server 120 to update the client.
The first user terminal 110, the second user terminal 130, and the other terminals 140 are connected to the server 120 through a wireless network or a wired network.
The server 120 includes at least one of a single server, a plurality of servers, a cloud computing platform, and a virtualization center. The server 120 provides background services for clients that support message sending. Optionally, the server 120 undertakes the primary computing work and the terminals undertake the secondary computing work; alternatively, the server 120 undertakes the secondary computing work and the terminals undertake the primary computing work; alternatively, the server 120 and the terminals perform cooperative computing using a distributed computing architecture.
In one illustrative example, the server 120 includes a processor 122, a user account database 123, a messaging service module 124, and a user-oriented Input/Output Interface (I/O Interface) 125. The processor 122 is configured to load instructions stored in the server 120 and process data in the user account database 123 and the messaging service module 124; the user account database 123 is configured to store data of the user accounts used by the first user terminal 110, the second user terminal 130, and the other terminals 140, such as the avatar, nickname, and service area of each user account; the messaging service module 124 is configured to provide message sending services; and the user-facing I/O interface 125 is configured to establish communication with the first user terminal 110 and/or the second user terminal 130 through a wireless or wired network to exchange data.
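As a rough illustration of the architecture just described, the following plain-Kotlin sketch models the entities of fig. 1. It is illustrative only and not part of the disclosure; all names (`MessagingServer`, `UserAccount`, `send`) are hypothetical stand-ins for the user account database 123, the messaging service module 124, and the I/O interface 125.

```kotlin
// Illustrative sketch of the Fig. 1 system; all names are hypothetical.
data class UserAccount(val id: String, val nickname: String, val avatarUrl: String)

class MessagingServer {
    private val accounts = mutableMapOf<String, UserAccount>()        // user account database 123
    private val inboxes = mutableMapOf<String, MutableList<String>>() // messaging service module 124

    fun register(account: UserAccount) { accounts[account.id] = account }

    // Stand-in for the user-facing I/O interface 125.
    fun send(fromId: String, toId: String, message: String) {
        require(fromId in accounts && toId in accounts) { "unknown user account" }
        inboxes.getOrPut(toId) { mutableListOf() }.add("$fromId: $message")
    }

    fun receive(userId: String): List<String> = inboxes.remove(userId) ?: emptyList()
}

fun main() {
    val server = MessagingServer()
    server.register(UserAccount("A", "user A", "a.png"))
    server.register(UserAccount("B", "user B", "b.png"))
    server.send("A", "B", "hello")
    println(server.receive("B"))   // [A: hello]
}
```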
With reference to the above description of message sending and of the implementation environment, the expression sending method provided by the embodiments of the present disclosure is now described; the execution subject of the method is exemplified as a client running on a terminal shown in fig. 1. The client run by the terminal is a client of an application program that supports message sending.
Fig. 2 schematically shows a flowchart of steps of an expression sending method provided by an embodiment of the present disclosure, and as shown in fig. 2, the method is applied to a first user terminal, and may include:
step S101, responding to the expression type indicated by the selection operation, and displaying an expression area corresponding to the expression type.
In the embodiment of the present disclosure, the selection operation may be performed on a message sending window displayed on the first user terminal. Specifically, the message sending window may display a plurality of type controls, each corresponding to one expression type. When the first user needs to determine an expression type, the first user may perform a selection operation on a type control; accordingly, the first user terminal may, in response to the selection operation, determine the type control indicated by the selection operation, and then determine the indicated expression type from the expression type corresponding to that control. The selection operation may be a touch operation such as a click, a double click, or a force press. The expression types may include an emotion type and an interaction type.
In the embodiment of the present disclosure, an expression area corresponding to the expression type is displayed in response to the expression type indicated by the selection operation. After the expression type indicated by the selection operation is determined, the content to be displayed for that type is determined; the displayed content may be a plurality of expression images associated with the expression type, or a plurality of expression templates associated with the expression type. The content is displayed on a preset expression panel, and the area where the preset expression panel is located may serve as the expression area corresponding to the expression type.
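As a rough illustration of step S101, the sketch below maps a selected type control to the content shown in the corresponding expression area. All names are hypothetical; the patent does not specify an implementation, and the example expression names are placeholders.

```kotlin
// Sketch of step S101: the tapped type control determines what the preset
// expression panel displays as the expression area.
enum class ExpressionType { EMOTIONAL, INTERACTIVE }

sealed interface PanelContent
data class ExpressionImages(val names: List<String>) : PanelContent    // emotion type
data class ExpressionTemplates(val names: List<String>) : PanelContent // interaction type

fun onTypeSelected(type: ExpressionType): PanelContent = when (type) {
    // The expression area shows expression images for the emotion type ...
    ExpressionType.EMOTIONAL -> ExpressionImages(listOf("haha", "sad", "angry"))
    // ... and expression templates for the interaction type.
    ExpressionType.INTERACTIVE -> ExpressionTemplates(listOf("photo invitation", "kiss invitation"))
}
```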
Step S102, sending, in response to a target expression determined in the expression area, the target expression to a second user terminal, where the target expression is a target emotional expression or a target interactive expression.
In the embodiment of the present disclosure, since a plurality of expressions corresponding to the expression type are displayed in the expression area, the target expression in the expression area may be determined by a selection operation performed by the user, by a trigger action performed by the user, or by other operations performed by the user. It should be noted that the target expression may be determined in different ways for different expression types: for example, when the expression type is the emotional type, the target expression may be determined by the user performing a corresponding emotional action; when the expression type is the interactive type, the target expression may be determined by the user selecting an interactive expression in the expression area.
In the embodiment of the present disclosure, the target expression is sent to the second user terminal in response to the target expression determined in the expression area; the target expression may be sent directly to the second user terminal after it is determined, or information carrying the target expression may be sent to the second user terminal. When the expression type is the emotional type, the target expression may be a target emotional expression; when the expression type is the interactive type, the target expression may be a target interactive expression. It should be noted that the content sent to the second user terminal may differ for different expression types: for example, when the expression type is the emotional type, the target emotional expression may be sent to the second user terminal directly, or with a display indication; when the expression type is the interactive type, prompt information generated according to the target interactive expression may be sent to the second user terminal.
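A minimal sketch of the dispatch in step S102, reusing `ExpressionType` from the sketch above; all names are hypothetical, and the two branches simply mirror the two cases described in this paragraph.

```kotlin
// Sketch of step S102: what is sent to the second terminal depends on the type.
sealed interface OutgoingContent
data class EmotionalPayload(val expression: String, val displayIndication: Boolean) : OutgoingContent
data class InteractionPrompt(val template: String, val text: String) : OutgoingContent

fun buildOutgoing(type: ExpressionType, target: String): OutgoingContent = when (type) {
    // Emotional type: the expression itself, optionally carrying a display indication.
    ExpressionType.EMOTIONAL -> EmotionalPayload(target, displayIndication = true)
    // Interactive type: prompt information generated from the template.
    ExpressionType.INTERACTIVE -> InteractionPrompt(target, "invites you to interact")
}
```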
To sum up, the expression sending method provided by the embodiments of the present disclosure may display an expression area corresponding to an expression type in response to the expression type indicated by a selection operation, and send a target expression to the second user terminal in response to the target expression being determined in the expression area, where the target expression is a target emotional expression or a target interactive expression. Because different expression types have different ways of determining the target expression and send different content to the second user terminal, different sending methods can be determined based on the expression type; this enriches the ways in which expressions are sent, increases interaction with other users according to the expression type, and improves the interest of sending expressions.
Optionally, in the embodiment of the present disclosure, before the operation of responding to the expression type indicated by the selection operation, the method may further include:
displaying an expression panel in response to a trigger operation on an expression control.
In the embodiment of the present disclosure, the trigger operation on the expression control may be performed in a message sending window, where the message sending window may be an information editing and sending window that supports expression input and expression sending. For example, the message sending window may be a chat window, a comment window for posting comments, a pop-up window for sending video bullet comments, or an information editing window for publishing information (forum posts, marketing information, rental and sale information, diaries, personal life shares). The expression control may be a control for displaying the expression panel; its specific style is not limited by the present disclosure. When the user needs to expand the expression panel, the user may perform a touch operation on the expression control, and accordingly the user terminal may display the expression panel corresponding to the expression control in response to the touch operation. The touch operation may be a single click, a double click, or the like, which is not limited by the present disclosure. The expression panel may be a panel for displaying a plurality of contained expressions, and its specific display style may be preset, which is not limited by the present disclosure.
For example, fig. 3 schematically illustrates an interface diagram of expression sending provided by an embodiment of the present disclosure. As shown in fig. 3, a message sending window 01 of a social application is displayed on user A's terminal; the message sending window 01 is a private chat interface between user A and user B, and the expression control 02 is displayed below the message sending window 01.
For example, fig. 4 schematically illustrates another interface diagram of expression sending provided by an embodiment of the present disclosure. As shown in fig. 4, a private chat message sending window 01 between user A and user B is displayed on user A's terminal, and the expression panel 03 is displayed in response to a touch operation performed by user A on the expression control 02. Expression type controls and the expression area corresponding to the expression type are displayed on the expression panel 03; the expression type controls include the interactive expression control 04 and the emotional expression control 05, and the expression area 11 corresponding to emotional expressions is displayed in response to user A's operation on the emotional expression control 05. A plurality of emotional expressions are displayed in the expression area 11.
In an implementation of the embodiment of the present disclosure, in the case that the expression type indicated by the selection operation in step S101 is the emotional expression type, the above operation of displaying the expression area corresponding to the expression type may specifically include:
displaying an expression area containing emotional expressions, where the emotional expressions include emotional expressions to be unlocked, and each emotional expression to be unlocked is covered with a cover layer.
In the embodiment of the present disclosure, a plurality of emotional expressions are displayed in the expression area corresponding to the emotional expression type, and the display mode of the expression area may be preset, which is not limited by the present disclosure. The emotional expressions displayed in the expression area may include emotional expressions to be unlocked; an emotional expression to be unlocked may be obtained by covering the emotional expression with a cover layer, and the display style of the cover layer may be preset. For example, the color of the cover layer may be gray with 50% transparency.
Optionally, in the present disclosure, an emotion label and a lock label corresponding to the emotional expression are disposed on the cover layer.
In the embodiment of the present disclosure, the emotion label may be the name of the emotional expression, and the lock label may be a graphic indicating that the expression is locked; the specific display styles of the emotion label and the lock label may be preset and are not limited by the present disclosure. For example, as shown in fig. 4, a plurality of emotional expressions are displayed in the expression area 11: the emotional expressions "haha", "angry", "wave", "no words", "fish touch", "beep mouth", and "surprise" are emotional expressions to be unlocked, and "sad" is an unlocked emotional expression. A cover layer filled with oblique lines is displayed on each expression to be unlocked, and a line drawing corresponding to the emotional expression and a line drawing representing the lock are displayed on the cover layer; the line drawing corresponding to the emotional expression serves as its emotion label, and the line drawing of the lock serves as the lock label. An unlocked emotional expression displays only the expression itself.
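The cover-layer behavior can be sketched as follows, assuming hypothetical names (`EmotionalCell`, `CoverLayer`); the gray 50%-transparency mask and the two labels come from the example above, and the string output merely stands in for actual rendering.

```kotlin
// Sketch of the cover layer: a locked expression is drawn under a gray,
// 50%-transparent mask carrying an emotion label and a lock label.
data class EmotionalCell(val name: String, val unlocked: Boolean)

data class CoverLayer(val color: String, val alphaPercent: Int,
                      val emotionLabel: String, val lockLabel: String)

fun render(cell: EmotionalCell): String =
    if (cell.unlocked) cell.name                          // unlocked: expression only
    else {
        val cover = CoverLayer("gray", 50, cell.name, "lock")
        "[${cover.emotionLabel}+${cover.lockLabel} under ${cover.color}/${cover.alphaPercent}%]"
    }
```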
Optionally, in the embodiment of the present disclosure, the operation of determining the target expression in the expression area may further include:
determining the target emotional expression according to the user's expression information or according to a selection operation of the user.
In the embodiment of the present disclosure, when the target emotional expression to be sent to the second user terminal is determined, it may be determined from the first user's expression information or from a selection operation of the first user. The emotional expressions include emotional expressions to be unlocked and unlocked emotional expressions, and the two may be determined in different ways: an emotional expression to be unlocked can be determined as the target emotional expression only through user expression information, while an unlocked emotional expression can be determined as the target emotional expression either through a selection operation of the first user or through user expression information.
In the embodiment of the present disclosure, determining the target emotional expression according to the user expression information may be done by collecting the first user's expression information, determining the trigger action that matches the expression information from the trigger actions corresponding to the emotional expressions, and taking the emotional expression corresponding to that trigger action as the target emotional expression. Different emotional expressions may have different trigger actions. For example, the trigger action for "happy" is raising both mouth corners or opening the mouth wide: when the rise of the mouth corners in the user expression information exceeds a preset amplitude, it may be determined that "happy" is triggered. The trigger action for "sad" is both mouth corners turning down: when their downward amplitude exceeds a preset amplitude, "sad" is triggered. The trigger action for "wave" is tilting the mouth to the left or right: when the angle of the left or right mouth corner exceeds a preset angle, "wave" is triggered. The trigger action for "fish touch" is closing both eyes at the same time: when the time both eyes stay closed exceeds a preset duration, "fish touch" is triggered. The trigger action for "angry" is both eyebrows moving down at the same time: when their downward angle exceeds a preset angle, "angry" is triggered. The trigger action for "no words" is both eyes rolling upward at the same time: when the time both eyes stay upward exceeds a preset duration, "no words" is triggered. The trigger action for "surprise" is both eye sockets enlarging or both eyebrows moving up at the same time: when the enlargement of the eye sockets exceeds a preset multiple, or the distance both eyebrows move up exceeds a preset distance, "surprise" is triggered. The trigger action for "beep mouth" is pouting: when the pouting amplitude in the user expression information exceeds a preset amplitude, "beep mouth" is triggered. The preset amplitude, preset angle, preset duration, and preset distance may be empirical values set in advance according to the practical application.
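A minimal sketch of this threshold matching, with illustrative metric names and threshold values standing in for the preset empirical values (the disclosure does not fix concrete numbers, so everything here is an assumption).

```kotlin
// Sketch of trigger-action matching; metric names and thresholds are
// illustrative stand-ins for the preset empirical values mentioned above.
data class FaceMetrics(
    val mouthCornerRise: Float,     // upward amplitude of the mouth corners
    val mouthCornerDrop: Float,     // downward amplitude of the mouth corners
    val eyesClosedSeconds: Float,   // how long both eyes stay closed
    val browDropAngle: Float,       // downward angle of both eyebrows
    val eyeSocketScale: Float       // enlargement factor of both eye sockets
)

fun matchTriggerAction(m: FaceMetrics): String? = when {
    m.mouthCornerRise > 0.3f -> "happy"       // corners up past the preset amplitude
    m.mouthCornerDrop > 0.3f -> "sad"         // corners down past the preset amplitude
    m.eyesClosedSeconds > 2f -> "fish touch"  // eyes closed past the preset duration
    m.browDropAngle > 15f    -> "angry"       // brows down past the preset angle
    m.eyeSocketScale > 1.5f  -> "surprise"    // sockets enlarged past the preset multiple
    else -> null                              // no emotional expression triggered
}
```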
In the embodiment of the present disclosure, determining the target emotional expression according to a selection operation of the user may consist of receiving a selection operation performed by the first user on an unlocked emotional expression in the expression area; accordingly, the first user terminal may, in response to the selection operation, take the unlocked emotional expression indicated by the selection operation as the target emotional expression.
Optionally, the expression sending method in the embodiment of the present disclosure may further include:
displaying a shooting permission enable message;
in response to a confirmation operation on the shooting permission enable message, calling a camera to collect the user's expression information.
In the embodiment of the present disclosure, because user expression information needs to be collected when unlocking an emotional expression to be unlocked, the shooting permission enable message may be displayed after the expression area corresponding to emotional expressions is displayed, so as to avoid the situation where user expression information cannot be obtained because the user has not enabled the shooting permission. The shooting permission enable message may be displayed on a new prompt page after the expression area is displayed; the message may be generated based on the terminal's current shooting permission state, and its specific display style is not limited by the present disclosure.
In the embodiment of the present disclosure, after the first user performs a confirmation operation on the shooting permission enable message, the first user terminal may call the camera to collect the first user's expression information. The expression information may be collected by an eigenface-based Principal Component Analysis (PCA) method, Independent Component Analysis (ICA), or other machine learning methods, which are not limited by the present disclosure.
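The permission-then-capture flow might look like the following sketch, reusing `FaceMetrics` from the earlier sketch. `Camera` and the confirmation callback are hypothetical stand-ins; the real capture and PCA/ICA analysis are out of scope here.

```kotlin
// Sketch of the permission flow: the enable message is shown first, and the
// camera is invoked only after the user confirms. `Camera` is a stand-in.
interface Camera { fun capture(): FaceMetrics }

fun collectUserExpression(
    permissionGranted: Boolean,
    showEnableMessageAndConfirm: () -> Boolean,  // displays the enable message, returns the user's choice
    camera: Camera
): FaceMetrics? {
    if (!permissionGranted && !showEnableMessageAndConfirm()) {
        return null   // user declined; no expression information collected
    }
    return camera.capture()
}
```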
Optionally, when the target emotional expression is an emotional expression to be unlocked, the expression sending method in the embodiment of the disclosure may further include:
removing the cover layer covering the corresponding emotional expression to be unlocked according to the user expression information, so as to unlock the emotional expression to be unlocked.
In the embodiment of the present disclosure, after the emotional expression to be unlocked that matches the user expression information is determined, that is, after the user expression information conforms to the trigger action corresponding to that emotional expression, the emotional expression can be unlocked. Specifically, unlocking may consist of removing the cover layer covering the emotional expression to be unlocked as displayed in the expression area.
Optionally, the expression sending method in the embodiment of the present disclosure may further include:
removing the lock label and the emotion label on the cover layer.
In the embodiment of the present disclosure, when the cover layer on the emotional expression to be unlocked is removed, the lock label and the emotion label on the cover layer are removed as well; that is, after unlocking, only the emotional expression itself is displayed.
For example, fig. 5 schematically illustrates another interface diagram of expression sending provided by an embodiment of the present disclosure. As shown in fig. 5, on the terminal of user A, in the expression area 11 corresponding to emotional expressions in the message sending window 01, the "haha" emotional expression 12 is a newly unlocked emotional expression: the cover layer that previously covered "haha" has been removed, together with the lock label and the emotion label on the cover layer.
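Unlocking can be sketched as a pure state change over the cells introduced earlier (`EmotionalCell` from the sketch above); once `unlocked` is set, the earlier `render` function draws the expression without the cover layer or its labels.

```kotlin
// Sketch of unlocking: setting `unlocked` removes the cover layer together
// with its lock and emotion labels, since render() then draws the bare expression.
fun unlock(cells: List<EmotionalCell>, matchedName: String): List<EmotionalCell> =
    cells.map { cell ->
        if (cell.name == matchedName && !cell.unlocked) cell.copy(unlocked = true)
        else cell
    }
```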
Optionally, in the embodiment of the present disclosure, when the emotional expression to be unlocked is successfully unlocked, the expression sending method may further include:
displaying unlocking success prompt information.
In the embodiment of the present disclosure, when the user expression information successfully unlocks the emotional expression to be unlocked, unlocking success prompt information may be displayed; its display style may be preset, which is not limited by the present disclosure. For example, as shown in fig. 5, unlocking success prompt information 13 may be displayed in the upper part of the message sending window 01, and the prompt 13 may read "Congratulations, you have unlocked a new emotional expression".
Optionally, in the embodiment of the present disclosure, when the emotional expression to be unlocked is successfully unlocked, the expression sending method may further include:
displaying, in the avatar display area of the message sending window, the user avatar carrying the successfully unlocked emotional expression, where the message sending window is the message sending window between the first user terminal and the second user terminal, and the avatar display area is the area of the message sending window that displays the avatar of the user corresponding to the first user terminal.
In the embodiment of the present disclosure, the message sending window may be a private chat window between the first user terminal and the second user terminal, and includes a display area for the avatar corresponding to the first user terminal and a display area for the avatar corresponding to the second user terminal. When the first user's expression information successfully unlocks an emotional expression to be unlocked, the user avatar carrying the successfully unlocked emotional expression may be displayed in the avatar display area corresponding to the first user terminal. The avatar may be displayed either by replacing the initial user avatar with the successfully unlocked emotional expression, or by overlaying the dynamic effect corresponding to the successfully unlocked emotional expression on the initial user avatar.
It should be noted that the user avatar carrying the successfully unlocked emotional expression may be displayed in the avatar display area corresponding to the first user terminal, and the area is restored to the initial user avatar after a preset display duration elapses.
Optionally, in the embodiment of the present disclosure, the display attributes of the user avatar carrying the successfully unlocked emotional expression differ from the display attributes of the initial user avatar.
In the embodiment of the present disclosure, the display attributes may include display size, shape, color, and the like. For example, the avatar carrying the successfully unlocked emotional expression may be displayed enlarged relative to the initial user avatar; the initial user avatar may be circular while the avatar carrying the unlocked expression is square; and the initial user avatar may be blue while the avatar carrying the unlocked expression is yellow.
For example, as shown in fig. 5, a display area 14 for the avatar corresponding to user A's terminal and a display area 15 for the avatar corresponding to user B's terminal may be displayed in the upper part of the message sending window 01; when user A's terminal successfully unlocks the "haha" emotional expression 12, an image of the "haha" emotional expression may be displayed in the display area 14.
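A sketch of the swap-and-revert behavior described above, with hypothetical names and a plain thread standing in for a UI timer; the enlarged square variant illustrates the differing display attributes.

```kotlin
// Sketch of the avatar swap: the unlocked expression is shown with different
// display attributes, then the initial avatar is restored after a preset duration.
data class AvatarDisplay(val image: String, val shape: String, val sizePx: Int)

fun celebrateUnlock(
    initial: AvatarDisplay,
    expressionImage: String,
    displayMillis: Long,
    onRevert: (AvatarDisplay) -> Unit
): AvatarDisplay {
    // Enlarged square variant vs. the round initial avatar (illustrative attributes).
    val celebratory = AvatarDisplay(expressionImage, shape = "square", sizePx = initial.sizePx * 2)
    Thread {                      // plain thread standing in for a UI timer
        Thread.sleep(displayMillis)
        onRevert(initial)         // restore the initial user avatar
    }.start()
    return celebratory
}
```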
In the embodiment of the present disclosure, when an emotional expression to be unlocked is successfully unlocked, the unlocked emotional expression can be sent to the second user terminal as chat content in the message sending window. For example, fig. 6 schematically illustrates another interface diagram of expression sending provided by an embodiment of the present disclosure. As shown in fig. 6, on the terminal of user A, in the expression area 11 corresponding to emotional expressions in the message sending window 01, when user A's terminal successfully unlocks the "haha" emotional expression 12, user A may send the newly unlocked "haha" emotional expression 12 to user B as a chat message 16.
It should be noted that this expression sending method for emotional expressions may also be applied in a group chat scenario: in a group consisting of one first user terminal and a plurality of second user terminals, an expression area containing emotional expressions is displayed, a target emotional expression is determined according to user expression information or a selection operation of the user, the cover layer on the corresponding emotional expression to be unlocked is removed according to the user expression information when the target emotional expression is an emotional expression to be unlocked, and the successfully unlocked emotional expression is sent to the group as chat content in the message sending window. The specific operations are similar to those described above and are not repeated here.
In another implementation of the embodiment of the present disclosure, when the expression type indicated by the selection operation in step S101 is the interactive expression type, in a private chat scenario, the operation of sending the target expression to the second user terminal in response to the target expression determined in the expression area may specifically include:
in response to a trigger operation on a target interactive expression template, displaying first prompt information generated according to the target interactive expression template, and sending second prompt information generated according to the target interactive expression template to the second user terminal.
In the embodiment of the present disclosure, when the expression type indicated by the selection operation is the interactive expression type, an expression area corresponding to interactive expressions is displayed, and a plurality of interactive expression templates are displayed in that area. A selection operation performed by the first user on the displayed interactive expression templates is received, and the interactive expression template indicated by the selection operation is taken as the target interactive expression template. For the trigger operation on the target interactive expression template, the selection operation that determined the target template may itself serve as the trigger operation, or a touch operation performed again after the target template is determined may serve as the trigger operation; the trigger operation may be a touch operation such as a click, or an input operation such as voice.
In the embodiment of the present disclosure, in the one-to-one private chat scenario, first prompt information generated according to the target interactive expression template is displayed in response to the trigger operation on the template: the first prompt information, inviting the second user terminal to participate, may be displayed on the first user terminal and may carry the target interactive expression template. Second prompt information generated according to the target interactive expression template is sent to the second user terminal: the second prompt information received from the first user terminal may be displayed on the second user terminal and may also carry the target interactive expression template. It should be noted that, because interactive expression templates include templates for two interactive objects and templates for more than two interactive objects, in the one-to-one private chat scenario the trigger operation can be performed only on templates for two interactive objects, not on templates for more than two.
For example, fig. 7 schematically illustrates another interface diagram of expression sending provided by an embodiment of the present disclosure. As shown in fig. 7, a private chat message sending window 01 between user A and user B is displayed on user A's terminal, and the expression panel 03 is displayed in response to a touch operation performed by user A on the expression control 02. Expression type controls and the expression area corresponding to the expression type are displayed on the expression panel 03; the expression type controls include the interactive expression control 04 and the emotional expression control 05, and the expression area 21 corresponding to interactive expressions is displayed in response to user A's operation on the interactive expression control 04. A plurality of interactive expression templates are displayed in the expression area 21, including "biubiubiubiu", "paste", "forgiveness", "one stamp", "heart", "leisure blow", "join", "photo invitation", and "kiss invitation".
For example, fig. 8 schematically illustrates another interface diagram of expression sending provided by an embodiment of the present disclosure. As shown in fig. 8, in the private chat message sending window 01 of user A and user B, in response to a click operation performed by user A on the "photo invitation" interactive expression template 22 in the expression area 21, the "photo invitation" interactive expression template 22 may be determined as the target interactive expression template.
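The private-chat prompt generation can be sketched as follows; `InteractiveTemplate` and the prompt strings are hypothetical, and the participant-count guard encodes the restriction that only two-object templates can be triggered in a one-to-one chat.

```kotlin
// Sketch of private-chat prompt generation; only two-object templates may be
// triggered in a one-to-one chat, mirroring the restriction described above.
data class InteractiveTemplate(val name: String, val interactiveObjects: Int)

data class Prompts(val first: String, val second: String)

fun triggerInPrivateChat(template: InteractiveTemplate): Prompts? {
    if (template.interactiveObjects != 2) return null   // more-than-two templates: not triggerable here
    return Prompts(
        first = "I invite the other party: ${template.name}",                   // shown on terminal 1
        second = "The other party invites me: ${template.name}. Accept/Reject"  // sent to terminal 2
    )
}
```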
Optionally, the expression sending method in the embodiment of the present disclosure may further include:
in response to feedback information returned by the second user terminal, displaying an interactive effect that transforms the first prompt information based on the feedback information.
In the embodiment of the present disclosure, after the second prompt information is sent to the second user terminal, the second user terminal may receive and display it; the second prompt information includes a to-be-confirmed message asking whether to accept the invitation. The second user terminal may then receive the second user's determination operation on the second prompt information and return feedback information generated according to that operation to the first user terminal. In response to the feedback information returned by the second user terminal, the first user terminal may display an interactive effect that transforms the first prompt information based on the feedback information. The interactive effect may be a dynamic effect preset according to the feedback information, and different feedback information may correspond to different interactive effects.
Optionally, in the embodiment of the present disclosure, the operation of displaying, in response to the feedback information returned by the second user terminal, an interactive effect that transforms the first prompt information based on the feedback information may specifically include:
if the feedback information is acceptance of the interaction, displaying an interactive effect that transforms the first prompt information into third prompt information;
if the feedback information is rejection of the interaction, displaying an interactive effect that transforms the first prompt information into fourth prompt information.
In the embodiment of the present disclosure, when the feedback information from the second user terminal is acceptance of the interaction, an interactive effect transforming the first prompt information into third prompt information may be displayed on the first user terminal; the third prompt information may be generated according to the second user terminal's acceptance, and the interactive effect may be a transformation effect from the first prompt information to the third, or a preset acceptance effect. When the feedback information from the second user terminal is rejection of the interaction, an interactive effect transforming the first prompt information into fourth prompt information may be displayed on the first user terminal; the fourth prompt information may be generated according to the second user terminal's rejection, and the interactive effect may be a transformation effect from the first prompt information to the fourth, or a preset rejection effect.
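A minimal sketch of the two feedback branches; the third and fourth prompt strings are illustrative placeholders, not the disclosure's wording.

```kotlin
// Sketch of the two feedback branches; the prompt texts are placeholders.
enum class Feedback { ACCEPT, REJECT }

fun transformFirstPrompt(feedback: Feedback, templateName: String): String = when (feedback) {
    Feedback.ACCEPT -> "The other party accepted my $templateName invitation"   // third prompt information
    Feedback.REJECT -> "The other party declined my $templateName invitation"   // fourth prompt information
}
```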
For example, fig. 9 schematically illustrates another interface diagram of expression sending provided by an embodiment of the present disclosure. As shown in fig. 9, on user A's terminal, in the private chat message sending window 01 between user A and user B, the first prompt information 23 sent by user A to user B is displayed; it carries the "photo invitation" interactive expression template and reads "I invite the other party to take a photo".
For example, fig. 10 schematically illustrates another interface diagram of expression sending provided by an embodiment of the present disclosure. As shown in fig. 10, on user B's terminal, in the private chat message sending window 06 between user B and user A, the second prompt information 24 sent by user A to user B is displayed; it carries the "photo invitation" interactive expression template and reads "The other party invites me to take a photo. Accept/Reject".
Optionally, after the operation of displaying the interactive effect that transforms the first prompt information into the third prompt information, the expression sending method in the embodiment of the present disclosure may further include:
displaying a target interactive expression generated based on the target interactive expression template, the avatar corresponding to the first user terminal, and the avatar corresponding to the second user terminal.
In the embodiment of the present disclosure, the avatars corresponding to the first and second user terminals may be avatars preset in the social software by the respective terminals; an avatar may be a figure such as a cartoon character or a star, which is not limited by the present disclosure. The target interactive expression may be an interactive expression synthesized from the avatar corresponding to the first user terminal and the avatar corresponding to the second user terminal according to the target interactive expression template. Specifically, after the second user terminal feeds back acceptance of the interaction, the target interactive expression may be sent as a chat message to the private chat message window, or displayed synchronously on the preset display interfaces corresponding to the first and second user terminals.
For example, fig. 11 schematically illustrates another expression sending interface diagram provided by the embodiment of the present disclosure. As shown in fig. 11, in the private chat message sending window 06 between user B and user A on the terminal of user B, after user B accepts the invitation, the second prompt information is converted into prompt information 25 indicating acceptance, whose text is "I accept the other party's invitation". Accordingly, user B sends a chat message 26 containing the target interactive expression.
For example, fig. 12 schematically illustrates another expression sending interface diagram provided by the embodiment of the present disclosure. As shown in fig. 12, in the private chat message sending window 01 between user A and user B on the terminal of user A, after user B accepts the invitation, the first prompt information is converted into third prompt information 27, whose text is "The other party has accepted my invitation", and user B sends a chat message 28 containing the target interactive expression.
For example, fig. 13 schematically illustrates another expression sending interface diagram provided by the embodiment of the present disclosure. As shown in fig. 13, on the terminal of user A, in the private chat message sending window 01 between user A and user B, the target interactive expression of user A and user B is displayed in a preset display interface 29.
Optionally, in the embodiment of the present disclosure, when the expression type indicated by the selection operation is the interactive expression type, in a group chat scenario, the operation of sending the target expression to the second user terminal in response to the target expression determined in the expression area may specifically include:
responding to the triggering operation of the target interactive expression template, and displaying an interactive object selection list;
responding to the selection operation of one or more target interactive objects in the interactive object selection list, displaying fifth prompt information generated according to the target interactive expression template, and sending sixth prompt information generated according to the target interactive expression template to a second user terminal corresponding to the target interactive object.
In the embodiment of the present disclosure, in a one-to-many group chat scenario, that is, in a group consisting of one first user terminal and a plurality of second user terminals, the trigger operation may be performed on a template for two interactive objects or on a template for more than two interactive objects. In response to the trigger operation on the target interactive expression template, an interactive object selection list is displayed. The interactive object selection list may be displayed on a new interface and shows the names of all members in the group chat together with corresponding selection controls. When the user performs a selection operation on one or more members, the selection controls corresponding to those members change to the selected state, while the selection controls of members that have not received a selection operation remain unselected. The specific display style of the interactive object selection list is not limited in the embodiment of the present disclosure.
In the embodiment of the present disclosure, the number of target interactive objects that the first user may select in the interactive object selection list is determined by the number of interactive objects in the target interactive expression template. For example, if the target interactive expression template is a template for two interactive objects, the first user may select only two target interactive objects in the interactive object selection list; if it is a template for four interactive objects, the first user may select only four.
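A minimal sketch of this constraint, assuming a selection list backed by a simple in-memory model (all class and method names are hypothetical):

```python
class InteractiveObjectSelector:
    """Selection list whose capacity equals the number of interactive
    objects indicated by the target interactive expression template."""

    def __init__(self, template_slots: int, members: list[str]):
        self.template_slots = template_slots
        self.members = members
        self.selected: list[str] = []

    def toggle(self, member: str) -> bool:
        """Toggle a member's selection control; refuse a new selection
        once the template's number of interactive objects is reached."""
        if member in self.selected:
            self.selected.remove(member)
            return True
        if len(self.selected) >= self.template_slots:
            return False  # limit imposed by the template
        self.selected.append(member)
        return True

    def can_confirm(self) -> bool:
        # Confirmation requires exactly as many target interactive
        # objects as the template provides.
        return len(self.selected) == self.template_slots
```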
In the embodiment of the present disclosure, in the one-to-many group chat scenario, fifth prompt information generated according to the target interactive expression template is displayed; the fifth prompt information is displayed on the first user terminal to indicate that one or more target interactive objects have been invited, and it may carry the target interactive expression template. Sixth prompt information generated according to the target interactive expression template is sent to the second user terminal corresponding to each target interactive object; the sixth prompt information is displayed on that second user terminal after being received from the first user terminal, and it may also carry the target interactive expression template.
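By way of example only, generating the two prompts from the same template might look like the following; the wording is illustrative and not taken from the disclosure:

```python
def build_group_prompts(template_name: str, initiator: str, targets: list[str]):
    """Generate the fifth prompt information (shown on the first user
    terminal) and the sixth prompt information (sent to the second user
    terminal of each target interactive object)."""
    target_list = ", ".join(targets)
    fifth = f"I invite {target_list} to use the '{template_name}' expression"
    sixth = f"{initiator} invites me to use the '{template_name}' expression. Accept/Reject"
    return fifth, sixth


# Hypothetical usage for the scenario of figs. 16 and 17:
# fifth, sixth = build_group_prompts("sticker", "user A", ["user C"])
```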
For example, fig. 14 schematically illustrates another expression sending interface diagram provided by the embodiment of the present disclosure. As shown in fig. 14, on the terminal of user A, in the group chat message sending window 07, in response to a click operation performed by user A on the "sticker" interactive expression template 32 in the expression area 31, the "sticker" interactive expression template 32 may be determined as the target interactive expression template.
For example, fig. 15 schematically illustrates another expression sending interface diagram provided by the embodiment of the present disclosure. As shown in fig. 15, in the group chat message sending window 07, in response to the trigger operation on the "sticker" target interactive expression template 32, an interactive object selection list 33 is displayed on the terminal of user A; all members in the group, namely "user A, user B, user C, user D, user E and user F", are shown in the interactive object selection list 33. In response to user A's selection of user C in the interactive object selection list 33, user C is determined as the target interactive object.
Optionally, in the embodiment of the present disclosure, there are a plurality of second user terminals, and the expression sending method may further include:
and responding to feedback information returned by each second user terminal, displaying an interactive effect of transforming the fifth prompt information based on the feedback information.
In the embodiment of the present disclosure, after the sixth prompt information is sent to each second user terminal, each second user terminal may receive and display it. The sixth prompt information includes information asking whether the invitation is accepted; each second user terminal may receive a confirmation operation of its user on the sixth prompt information and return, to the first user terminal, feedback information generated according to that operation. In response to the feedback information returned by each second user terminal, the first user terminal may display an interactive effect of transforming the fifth prompt information based on the feedback information. The interactive effect may be preset according to the feedback information, and different feedback information may correspond to different interactive effects.
Optionally, the displaying, in response to feedback information returned by each second user terminal, of an interactive effect of transforming the fifth prompt information based on the feedback information includes:
if the feedback information returned by each second user terminal indicates acceptance of the interaction, displaying an interactive effect of converting the fifth prompt information into seventh prompt information;
and if the feedback information returned by at least one second user terminal indicates rejection of the interaction, displaying an interactive effect of converting the fifth prompt information into eighth prompt information.
In the embodiment of the present disclosure, when the feedback information of every second user terminal indicates acceptance of the interaction, the first user terminal may display an interactive effect of converting the fifth prompt information into the seventh prompt information, and the seventh prompt information may be generated according to the acceptance information of the second user terminals. When the feedback information returned by at least one second user terminal indicates rejection of the interaction, the first user terminal may display an interactive effect of converting the fifth prompt information into the eighth prompt information, and the eighth prompt information may be generated according to the rejection information of that second user terminal.
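Under the stated rule (unanimous acceptance yields the seventh prompt, a single rejection yields the eighth), the aggregation can be sketched as follows; the tri-state feedback values are an assumption to cover terminals that have not yet responded:

```python
from typing import Optional


def resolve_group_prompt(feedback: dict[str, Optional[bool]]) -> str:
    """Decide how the fifth prompt information is transformed.

    `feedback` maps each second user terminal to True (accepts the
    interaction), False (rejects it) or None (no feedback yet).
    """
    if any(value is False for value in feedback.values()):
        return "eighth_prompt"   # at least one terminal rejected
    if all(value is True for value in feedback.values()):
        return "seventh_prompt"  # every terminal accepted
    return "fifth_prompt"        # still waiting for feedback
```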
For example, fig. 16 schematically illustrates another expression sending interface diagram provided by the embodiment of the present disclosure. As shown in fig. 16, on the terminal of user A, in the group chat message sending window 07, fifth prompt information 34 sent by user A to user C is displayed. The fifth prompt information 34 carries the "sticker" interactive expression template, and its text is "I invite user C to use the 'sticker' expression".
For example, fig. 17 schematically illustrates another expression sending interface diagram provided by the embodiment of the present disclosure. As shown in fig. 17, on the terminal of user C, in the group chat message sending window 08, sixth prompt information 35 sent by user A to user C is displayed. The sixth prompt information 35 carries the "sticker" interactive expression template, and its text is "User A invites me to use the 'sticker' expression. Accept/Reject".
Optionally, after the operation of displaying the interactive effect of converting the fifth prompt information into the seventh prompt information is performed, the expression sending method provided by the embodiment of the present disclosure may further include:
and displaying the target interactive expression generated based on the target interactive expression template, the avatar corresponding to the first user terminal and the avatars corresponding to the second user terminals.
In the embodiment of the present disclosure, the target interactive expression may be an interactive expression synthesized from the avatar corresponding to the first user terminal and the avatars corresponding to the second user terminals according to the target interactive expression template. Specifically, after every second user terminal returns feedback information accepting the interaction, the target interactive expression may be sent to the group chat message window as a chat message by the last second user terminal to accept the interaction, or the target interactive expression composed of the first user terminal and the second user terminals may be synchronously displayed on the preset display interface corresponding to the group.
For example, fig. 18 schematically illustrates another expression sending interface diagram provided by the embodiment of the present disclosure. As shown in fig. 18, in the group chat message sending window 08 on the terminal of user C, when user C accepts the invitation, the sixth prompt information is converted into prompt information 36 indicating acceptance, whose text is "I accept user A's invitation". Accordingly, user C sends a chat message 37 containing the target interactive expression.
For example, fig. 19 schematically illustrates another expression sending interface diagram provided by the embodiment of the present disclosure. As shown in fig. 19, in the group chat message sending window 07 on the terminal of user A, when user C accepts the invitation, the fifth prompt information is converted into seventh prompt information 38, whose text is "User C has accepted my invitation", and user C sends a chat message 39 containing the target interactive expression.
Optionally, in the target interactive expression, the character images in the target interactive expression template are replaced with the avatar corresponding to the first user terminal and the avatar corresponding to the second user terminal.
In the embodiment of the present disclosure, the avatar corresponding to the first user terminal and the avatars corresponding to the second user terminals may be obtained based on the number of interactive objects indicated by the target interactive expression template, and the character images in the target interactive expression template are correspondingly replaced with these avatars; the specific replacement manner is not limited in the present disclosure. It should be noted that, in a group chat scenario, the target interactive objects selected in the interactive object selection list may not include the first user terminal; that is, the target interactive objects may be N second user terminals. In this case, a message inviting the N second user terminals to generate an interactive expression is displayed on the first user terminal, and corresponding invitation prompt information is sent to the N second user terminals. When all N second user terminals accept the invitation, prompt information indicating acceptance may be displayed, together with the target interactive expression containing the avatars corresponding to the N second user terminals; when at least one of the N second user terminals rejects the invitation, prompt information indicating rejection may be displayed.
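A sketch of gathering the avatars that replace the template's character images, reflecting that in a group chat the first user terminal may or may not participate (all names are hypothetical):

```python
def avatars_for_template(slot_count: int,
                         initiator_avatar: str,
                         target_avatars: list[str],
                         include_initiator: bool) -> list[str]:
    """Collect the avatars that replace the character images in the
    target interactive expression template, in slot order."""
    avatars = ([initiator_avatar] if include_initiator else []) + target_avatars
    if len(avatars) != slot_count:
        # The number of avatars must match the number of interactive
        # objects indicated by the template.
        raise ValueError(f"template expects {slot_count} avatars, got {len(avatars)}")
    return avatars
```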
It should be noted that, in the expression sending method provided by the embodiment of the present disclosure, the execution subject may be an expression sending device, or a control module in the expression sending device configured to execute the expression sending method. The expression sending method provided by the embodiment of the present disclosure is described by taking, as an example, the case where an expression sending device executes the method. Next, an expression sending device according to an exemplary embodiment of the present disclosure will be described with reference to fig. 20.
Fig. 20 schematically shows a block diagram of an expression sending apparatus according to an embodiment of the present disclosure. As shown in fig. 20, the expression sending apparatus 50, applied to a first user terminal, may include:
a first display module 501, configured to respond to an expression type indicated by a selection operation, and display an expression area corresponding to the expression type;
a sending module 502, configured to send the target expression to a second user terminal in response to the target expression determined in the expression area, where the target expression is a target emotional expression or a target interactive expression.
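Purely as a structural sketch of fig. 20 (the renderer and transport back ends are assumptions, not part of the disclosure):

```python
class ExpressionSendingApparatus:
    """Skeleton of the expression sending apparatus 50."""

    def __init__(self, renderer, transport):
        self.renderer = renderer    # back end used by the first display module 501
        self.transport = transport  # back end used by the sending module 502

    def display_expression_area(self, expression_type: str) -> None:
        # First display module 501: display the expression area that
        # corresponds to the expression type indicated by the selection.
        self.renderer.show_area(expression_type)

    def send_target_expression(self, target_expression, second_terminal) -> None:
        # Sending module 502: send the target emotional or interactive
        # expression to the second user terminal.
        self.transport.send(second_terminal, target_expression)
```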
To sum up, the expression sending method provided by the embodiment of the present disclosure may display an expression area corresponding to an expression type in response to the expression type indicated by the selection operation, and send a target expression to a second user terminal in response to a target expression determined in the expression area, where the target expression is a target emotional expression or a target interactive expression. In this way, because different expression types have different target expression determining modes and different contents sent to the second user terminal by different expression types, different expression sending methods can be determined based on different expression types, the expression sending modes are enriched, the interaction with other users during the expression sending can be increased according to the expression types, and the interestingness during the expression sending is improved.
Optionally, in a case that the expression type indicated by the selection operation is an emotional expression, the first display module 501 is further configured to:
displaying an expression area containing emotional expressions, wherein the emotional expressions include an emotional expression to be unlocked; the emotional expression to be unlocked is covered with a masking layer.
Optionally, an emotion tag and a locking tag corresponding to the emotional expression are arranged on the masking layer.
Optionally, the target expression is determined in the expression area, and the apparatus 50 further includes:
and the determining module is used for determining the target emotional expression according to the expression information of the user or according to the selection operation of the user.
Optionally, the apparatus 50 further includes:
and the first removing module is used for removing the corresponding masking layer covering the emotional expression to be unlocked according to the user expression information, so as to unlock the emotional expression to be unlocked.
Optionally, the apparatus 50 further includes:
and the second removing module is used for removing the locking tag and the emotion tag on the masking layer.
Optionally, the apparatus 50 further includes:
the second display module is used for displaying a shooting permission enabling message;
and the calling module is used for calling a camera to acquire the user expression information in response to a confirmation operation on the shooting permission enabling message.
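Taken together, the modules above describe an unlock flow that can be sketched as follows; the camera and expression recognizer objects are hypothetical stand-ins for whatever the terminal provides:

```python
def try_unlock(emotion_tag: str, permission_granted: bool, camera, recognizer) -> str:
    """Unlock an emotional expression by matching the user's expression
    against the emotion tag on its masking layer."""
    if not permission_granted:
        # Second display module: show the shooting permission enabling message.
        return "show_permission_message"
    # Calling module: acquire the user expression information from the camera.
    frame = camera.capture()
    if recognizer.classify(frame) == emotion_tag:
        # First removing module: remove the masking layer to unlock;
        # second removing module then clears the locking and emotion tags.
        return "unlock"
    return "keep_locked"
```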
Optionally, when the emotional expression to be unlocked is successfully unlocked, the apparatus 50 further includes:
and the third display module is used for displaying the prompt message of successful unlocking.
Optionally, when the emotional expression to be unlocked is successfully unlocked, the apparatus 50 further includes:
the fourth display module is used for displaying, in the avatar display area of the message sending window, the user avatar bearing the successfully unlocked emotional expression; the message sending window is the message sending window between the first user terminal and the second user terminal, and the avatar display area is the area on the message sending window for displaying the user avatar corresponding to the first user terminal.
Optionally, the display attribute of the user avatar bearing the successfully unlocked emotional expression is different from the display attribute of the initial user avatar.
Optionally, in a case that the expression type indicated by the selection operation is an interactive expression type, the sending module 502 is further configured to:
and responding to the triggering operation of the target interactive expression template, displaying first prompt information generated according to the target interactive expression template, and sending second prompt information generated according to the target interactive expression template to the second user terminal.
Optionally, the apparatus 50 further includes:
and the fifth display module is used for responding to feedback information returned by the second user terminal and displaying the interactive effect of transforming the first prompt information based on the feedback information.
Optionally, the fifth display module is further configured to:
if the feedback information indicates acceptance of the interaction, displaying an interactive effect of converting the first prompt information into third prompt information;
and if the feedback information indicates rejection of the interaction, displaying an interactive effect of converting the first prompt information into fourth prompt information.
Optionally, after the interactive effect of converting the first prompt information into the third prompt information is displayed, the apparatus 50 further includes:
and the sixth display module is used for displaying the target interactive expression generated based on the target interactive expression template, the avatar corresponding to the first user terminal and the avatar corresponding to the second user terminal.
Optionally, in a case that the expression type indicated by the selection operation is an interactive expression type, the sending module 502 is further configured to:
responding to the triggering operation of the target interactive expression template, and displaying an interactive object selection list;
responding to the selection operation of one or more target interactive objects in the interactive object selection list, displaying fifth prompt information generated according to the target interactive expression template, and sending sixth prompt information generated according to the target interactive expression template to a second user terminal corresponding to the target interactive object.
Optionally, there are a plurality of second user terminals; the apparatus 50 further includes:
and the seventh display module is used for responding to the feedback information returned by each second user terminal and displaying the interactive effect of transforming the fifth prompt information based on the feedback information.
Optionally, the seventh display module is further configured to:
if the feedback information returned by each second user terminal indicates acceptance of the interaction, displaying an interactive effect of converting the fifth prompt information into seventh prompt information;
and if the feedback information returned by at least one second user terminal indicates rejection of the interaction, displaying an interactive effect of converting the fifth prompt information into eighth prompt information.
Optionally, after the interactive effect of converting the fifth prompt information into the seventh prompt information is displayed, the apparatus 50 further includes:
and the eighth display module is used for displaying the target interactive expression generated based on the target interactive expression template, the avatar corresponding to the first user terminal and the avatars corresponding to the second user terminals.
Optionally, in the target interactive expression, each character image in the target interactive expression template is respectively replaced with the avatar corresponding to the first user terminal or the avatar corresponding to a second user terminal.
Optionally, before responding to the expression type indicated by the selection operation, the apparatus 50 further includes:
and the ninth display module is used for responding to the triggering operation of the expression control and displaying the expression panel.
It should be noted that the expression sending device provided in the above embodiment is illustrated only by the division of the above functional modules. In practical applications, the functions may be assigned to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the expression sending apparatus provided in the above embodiment and the expression sending method embodiment belong to the same concept, and the specific implementation process thereof is described in detail in the method embodiment and is not repeated here.
The disclosure also provides an electronic device, which includes a processor and a memory, where the memory stores at least one instruction, and the at least one instruction is loaded and executed by the processor to implement the expression sending method provided by the foregoing method embodiments. It should be noted that the electronic device may be the electronic device shown in fig. 21 below.
Fig. 21 schematically illustrates a block diagram of an electronic device 600 according to an exemplary embodiment of the present disclosure. The electronic device 600 may be: a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. The electronic device 600 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, the electronic device 600 includes: a processor 601 and a memory 602.
The processor 601 may include one or more processing cores, and may be, for example, a 4-core processor or an 8-core processor. The processor 601 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 601 may also include a main processor and a coprocessor. The main processor, also called a CPU, is a processor for processing data in an awake state; the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 601 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 601 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 602 may include one or more computer-readable storage media, which may be non-transitory. The memory 602 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 602 is used to store at least one instruction, which is executed by the processor 601 to implement the expression sending method provided by the method embodiments of the present disclosure.
In some embodiments, the electronic device 600 may further optionally include: a peripheral interface 603 and at least one peripheral. The processor 601, memory 602, and peripheral interface 603 may be connected by buses or signal lines. Various peripheral devices may be connected to the peripheral interface 603 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 604, a display 605, a camera assembly 606, an audio circuit 607, a positioning component 608, and a power supply 609.
The peripheral interface 603 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 601 and the memory 602. In some embodiments, the processor 601, memory 602, and peripheral interface 603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 601, the memory 602, and the peripheral interface 603 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 604 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 604 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 604 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 604 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 604 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 604 may further include NFC (Near Field Communication) related circuits, which are not limited by this disclosure.
The display 605 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 605 is a touch display screen, the display screen 605 also has the ability to capture touch signals on or over its surface. The touch signal may be input to the processor 601 as a control signal for processing. In this case, the display 605 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 605, providing the front panel of the electronic device 600; in other embodiments, there may be at least two displays 605, respectively disposed on different surfaces of the electronic device 600 or in a foldable design; in still other embodiments, the display 605 may be a flexible display disposed on a curved surface or a folded surface of the electronic device 600. The display 605 may even be arranged in a non-rectangular irregular pattern, that is, an irregularly-shaped screen. The display 605 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 606 is used to capture images or video. Optionally, camera assembly 606 includes a front camera and a rear camera. Generally, a front camera is disposed on a front panel of an electronic apparatus, and a rear camera is disposed on a rear surface of the electronic apparatus. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 606 may also include a flash. The flash lamp can be a single-color temperature flash lamp or a double-color temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 607 may include a microphone and a speaker. The microphone is used for collecting sound waves of the user and the environment, converting them into electrical signals, and inputting the electrical signals to the processor 601 for processing, or to the radio frequency circuit 604 to realize voice communication. For stereo capture or noise reduction purposes, there may be multiple microphones disposed at different locations of the electronic device 600. The microphone may also be an array microphone or an omnidirectional acquisition microphone. The speaker is used to convert electrical signals from the processor 601 or the radio frequency circuit 604 into sound waves. The speaker may be a traditional diaphragm speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert an electrical signal into a sound wave audible to a human being, or into a sound wave inaudible to a human being for purposes such as distance measurement. In some embodiments, the audio circuitry 607 may also include a headphone jack.
The positioning component 608 is used to locate the current geographic location of the electronic device 600 to implement navigation or LBS (Location Based Service). The positioning component 608 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
The power supply 609 is used to supply power to various components in the electronic device 600. The power supply 609 may be an alternating current power supply, a direct current power supply, a disposable battery, or a rechargeable battery. When the power supply 609 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is charged through a wired line, and the wireless rechargeable battery is charged through a wireless coil. The rechargeable battery may also support fast charging technology.
In some embodiments, the electronic device 600 also includes one or more sensors 610. The one or more sensors 610 include, but are not limited to: acceleration sensor 611, gyro sensor 612, pressure sensor 613, fingerprint sensor 614, optical sensor 615, and proximity sensor 616.
The acceleration sensor 611 may detect the magnitude of acceleration in three coordinate axes of a coordinate system established with the electronic device 600. For example, the acceleration sensor 611 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 601 may control the display screen 605 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 611. The acceleration sensor 611 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 612 may detect a body direction and a rotation angle of the electronic device 600, and the gyro sensor 612 and the acceleration sensor 611 may cooperate to acquire a 3D motion of the user on the electronic device 600. The processor 601 may implement the following functions according to the data collected by the gyro sensor 612: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 613 may be disposed on a side bezel of the electronic device 600 and/or on a lower layer of the display screen 605. When the pressure sensor 613 is disposed on a side frame of the electronic device 600, a holding signal of the user to the electronic device 600 can be detected, and the processor 601 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 613. When the pressure sensor 613 is arranged at the lower layer of the display screen 605, the processor 601 controls the operability control on the UI interface according to the pressure operation of the user on the display screen 605. The operability control comprises at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 614 is used for collecting a fingerprint of a user, and the processor 601 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 614, or the fingerprint sensor 614 identifies the identity of the user according to the collected fingerprint. Upon identifying that the user's identity is a trusted identity, the processor 601 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 614 may be disposed on the front, back, or side of the electronic device 600. When a physical button or vendor Logo is provided on the electronic device 600, the fingerprint sensor 614 may be integrated with the physical button or vendor Logo.
The optical sensor 615 is used to collect the ambient light intensity. In one embodiment, processor 601 may control the brightness of the display of display screen 605 based on the intensity of ambient light collected by optical sensor 615. Specifically, when the ambient light intensity is high, the display brightness of the display screen 605 is increased; when the ambient light intensity is low, the display brightness of the display screen 605 is reduced. In another embodiment, the processor 601 may also dynamically adjust the shooting parameters of the camera assembly 606 according to the ambient light intensity collected by the optical sensor 615.
The proximity sensor 616, also referred to as a distance sensor, is typically disposed on the front panel of the electronic device 600. The proximity sensor 616 is used to capture the distance between the user and the front of the electronic device 600. In one embodiment, when the proximity sensor 616 detects that the distance between the user and the front surface of the electronic device 600 gradually decreases, the display screen 605 is controlled by the processor 601 to switch from the screen-on state to the screen-off state; when the proximity sensor 616 detects that the distance gradually increases, the display screen 605 is controlled by the processor 601 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in fig. 21 is not intended to be limiting of the electronic device 600 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
The memory further includes one or more programs, the one or more programs are stored in the memory, and the one or more programs include instructions for performing the expression transmission method provided by the embodiments of the present disclosure.
The present disclosure provides a computer-readable storage medium, where at least one instruction is stored, and the at least one instruction is loaded and executed by a processor to implement the expression sending method provided by the foregoing method embodiments.
The present disclosure also provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the computer device executes the expression sending method provided in the above-mentioned optional implementation mode.
The above-mentioned serial numbers of the embodiments of the present disclosure are merely for description and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is intended to be exemplary only and not to limit the present disclosure, and any modification, equivalent replacement, or improvement made without departing from the spirit and scope of the present disclosure is to be considered as the same as the present disclosure.

Claims (10)

1. An expression sending method is applied to a first user terminal, and comprises the following steps:
responding to the expression type indicated by the selection operation, and displaying an expression area corresponding to the expression type;
sending the target expression to a second user terminal in response to the target expression determined in the expression area, wherein the target expression is a target emotional expression or a target interactive expression.
2. The method of claim 1, wherein, in a case that the expression type indicated by the selection operation is an emotional expression, the displaying an expression area corresponding to the expression type comprises:
displaying an expression area containing emotional expressions, wherein the emotional expressions include an emotional expression to be unlocked; the emotional expression to be unlocked is covered with a masking layer.
3. The method of claim 2, wherein a target expression is determined in the expression area, and the method further comprises:
and determining the target emotional expression according to the expression information of the user or the selection operation of the user.
4. The method of claim 2, further comprising:
and removing the corresponding masking layer covering the emotional expression to be unlocked according to the user expression information, so as to unlock the emotional expression to be unlocked.
5. The method of claim 4, wherein, when the emotional expression to be unlocked is successfully unlocked, the method further comprises:
displaying, in the avatar display area of the message sending window, the user avatar bearing the successfully unlocked emotional expression; wherein the message sending window is the message sending window between the first user terminal and the second user terminal, and the avatar display area is the area on the message sending window for displaying the user avatar corresponding to the first user terminal.
6. The method of claim 1, wherein in a case that the expression type indicated by the selection operation is an interactive expression type, the sending the target expression to the second user terminal in response to the target expression determined in the expression area comprises:
and responding to the triggering operation of the target interactive expression template, displaying first prompt information generated according to the target interactive expression template, and sending second prompt information generated according to the target interactive expression template to the second user terminal.
7. The method of claim 1, wherein in a case that the expression type indicated by the selection operation is an interactive expression type, the sending the target expression to a second user terminal in response to the target expression determined in the expression area comprises:
responding to the trigger operation of the target interactive expression template, and displaying an interactive object selection list;
responding to the selection operation of one or more target interactive objects in the interactive object selection list, displaying fifth prompt information generated according to the target interactive expression template, and sending sixth prompt information generated according to the target interactive expression template to a second user terminal corresponding to the target interactive object.
8. An expression transmitting device, applied to a first user terminal, the device comprising:
the first display module is used for responding to the expression type indicated by the selection operation and displaying the expression area corresponding to the expression type;
and the sending module is used for responding to the target expression determined in the expression area and sending the target expression to a second user terminal, wherein the target expression is a target emotional expression or a target interactive expression.
9. A storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the expression sending method of any one of claims 1 to 7.
10. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the expression transmitting method of any one of claims 1-7 via execution of the executable instructions.
CN202210999943.4A 2022-08-19 2022-08-19 Expression sending method and device, storage medium and electronic equipment Pending CN115412518A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210999943.4A CN115412518A (en) 2022-08-19 2022-08-19 Expression sending method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN115412518A (en) 2022-11-29

Family

ID=84161688

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210999943.4A Pending CN115412518A (en) 2022-08-19 2022-08-19 Expression sending method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN115412518A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104394057A (en) * 2013-11-04 2015-03-04 贵阳朗玛信息技术股份有限公司 Expression recommendation method and device
CN104850335A (en) * 2015-05-28 2015-08-19 瞬联软件科技(北京)有限公司 Expression curve generating method based on voice input
US20150264145A1 (en) * 2014-03-13 2015-09-17 International Business Machines Corporation Communications responsive to recipient sentiment
CN107784114A (en) * 2017-11-09 2018-03-09 广东欧珀移动通信有限公司 Recommendation method, apparatus, terminal and the storage medium of facial expression image
CN110276406A (en) * 2019-06-26 2019-09-24 腾讯科技(深圳)有限公司 Expression classification method, apparatus, computer equipment and storage medium
CN111515970A (en) * 2020-04-27 2020-08-11 腾讯科技(深圳)有限公司 Interaction method, mimicry robot and related device
CN113536262A (en) * 2020-09-03 2021-10-22 腾讯科技(深圳)有限公司 Unlocking method and device based on facial expression, computer equipment and storage medium
CN113867876A (en) * 2021-10-08 2021-12-31 北京字跳网络技术有限公司 Expression display method, device, equipment and storage medium
WO2022089192A1 (en) * 2020-10-28 2022-05-05 北京有竹居网络技术有限公司 Interaction processing method and apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination