WO2023284630A1 - Emoticon image adding method, apparatus, device and storage medium - Google Patents

Emoticon image adding method, apparatus, device and storage medium

Info

Publication number
WO2023284630A1
Authority
WO
WIPO (PCT)
Prior art keywords
emoticon
user
image
expression
input candidate
Prior art date
Application number
PCT/CN2022/104495
Other languages
English (en)
French (fr)
Inventor
余洁
海坤
Original Assignee
北京字跳网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司 filed Critical 北京字跳网络技术有限公司
Priority to US17/882,487 priority Critical patent/US12056329B2/en
Publication of WO2023284630A1 publication Critical patent/WO2023284630A1/zh

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range

Definitions

  • The present application relates to the field of computers, and in particular to a method, apparatus, device and storage medium for adding emoticon images.
  • The embodiments of the present application provide a method, apparatus, device and storage medium for adding emoticon images.
  • An embodiment of the present application provides a method for adding an emoticon image, the method comprising:
  • receiving an emoticon adding instruction triggered by a first user for a first emoticon image in a first conversation interface, the first emoticon image being the emoticon image with which a second user replies to a target message in the first conversation interface; and adding the first emoticon image to the first user's emoticon input candidate box according to the emoticon adding instruction.
  • An embodiment of the present application provides an emoticon image adding apparatus, the apparatus comprising:
  • a first receiving unit, configured to receive an emoticon adding instruction triggered by the first user for the first emoticon image in the first conversation interface, the first emoticon image being the emoticon image with which the second user replies to the target message in the first conversation interface;
  • an adding unit, configured to add the first emoticon image to the first user's emoticon input candidate box according to the emoticon adding instruction.
  • An embodiment of the present application provides a processing device, and the device includes: a processor and a memory;
  • the memory is used to store instructions
  • the processor is configured to execute the instructions in the memory, and execute the methods described in the foregoing embodiments.
  • An embodiment of the present application provides a computer-readable storage medium, including instructions, which, when run on a computer, cause the computer to execute the methods described in the foregoing embodiments.
  • According to the emoticon image adding method, when the second user replies to the target message in the first conversation interface with an emoticon image, the first user can trigger an emoticon adding instruction for that emoticon image, and the emoticon image is added to the first user's emoticon input candidate box. It can be seen that the emoticon image adding method provided by the embodiments of the present application can add the emoticon with which the second user replies to the target message to the first user's emoticon input candidate box, enriching the emoticon images in the first user's emoticon input candidate box and satisfying the first user's need to send more emoticon images.
  • FIG. 1 is a flowchart of an emoticon image adding method provided by an embodiment of the present application;
  • FIG. 2 is a schematic diagram of a display interface of a client provided by an embodiment of the present application;
  • FIG. 3 is another schematic diagram of the display interface of the client provided by an embodiment of the present application;
  • FIG. 4 is another schematic diagram of the display interface of the client provided by an embodiment of the present application;
  • FIG. 5 is another schematic diagram of the display interface of the client provided by an embodiment of the present application;
  • FIG. 6 is another schematic diagram of the display interface of the client provided by an embodiment of the present application;
  • FIG. 7 is another schematic diagram of the display interface of the client provided by an embodiment of the present application;
  • FIG. 8 is another schematic diagram of the display interface of the client provided by an embodiment of the present application;
  • FIG. 9 is another schematic diagram of the display interface of the client provided by an embodiment of the present application;
  • FIG. 10 is another schematic diagram of the display interface of the client provided by an embodiment of the present application;
  • FIG. 11 is a structural block diagram of an emoticon image adding apparatus provided by an embodiment of the present application;
  • FIG. 12 is a structural block diagram of an emoticon image adding device provided by an embodiment of the present application.
  • Referring to FIG. 1, which is a flowchart of an emoticon adding method provided by an embodiment of the present application.
  • the emoticon adding method provided by the embodiment of the present application is applied to the client.
  • Here, the client may be a terminal device, where the terminal device is a hardware device capable of communication, such as a mobile phone, a tablet computer, or a PC (Personal Computer).
  • the client may also be a software client for communication.
  • the emoticon adding method provided in the following embodiments is introduced as being applied to a software client.
  • S101 Receive an emoticon adding instruction triggered by a first user for a first emoticon image in a first conversation interface.
  • the first user is a user who communicates in the first session interface of the client.
  • the first conversation interface is an interface for voice or text communication between the first user and the second user through the client.
  • The first conversation interface may include only the first user and the second user for separate one-to-one communication, that is, the first conversation interface is a single-chat message interface; the first conversation interface may also include at least three users, with the first user and the second user being two of those users, that is, the first conversation interface may be a group chat message interface.
  • Referring to FIG. 2, which is a schematic diagram of a display interface of a client provided by an embodiment of the present application.
  • The display area of the client includes a basic information display area 210, a control display area 220, a conversation list display area 230 and a first conversation interface 240.
  • the basic information display area 210 may include a user avatar display area 211 , a user name display area 212 , a content search box 213 , a search control 214 and an operation control set 215 .
  • the user avatar display area 211 is used to display the avatar set by the user who is currently logged into the client.
  • the user name display area 212 is used to display the user name or nickname of the user currently logged into the client.
  • the content search box 213 is used to receive keywords or other content input by the user.
  • the search control 214 is used to trigger the search operation of the client.
  • the operation control set 215 may include at least one operation control, and the user may trigger operations such as closing the client operation and minimizing the client operation by triggering the operation control in the operation control set.
  • the control display area 220 displays at least one operation control, and the control display area 220 includes a message viewing control 221 and an address book viewing control 222 .
  • the message viewing control 221 is used to enable the client to display a message notification in the conversation list display area 230 after being triggered, such as a group chat message notification or a single chat message notification.
  • the message viewing control 221 is in a triggered state.
  • the address book viewing control 222 is used to enable the client to display the address book after being triggered. In FIG. 2 , the address book viewing control 222 is not activated.
  • The conversation list display area 230 includes one or more message notifications 231.
  • the message notifications 231 include group chat message notifications and individual chat message notifications.
  • The single chat message notifications are identified by the nicknames the first user has set for other users, and the group chat message notifications are identified by the group chat names.
  • the group chat message identified as a work communication group is in a triggered state.
  • the first conversation interface 240 includes a title display area 241 , a message content display area 242 and an input area 243 .
  • the first conversation interface 240 is a group chat interface for displaying group chat messages.
  • the title display area 241 can be used to display the group chat name corresponding to the group chat message.
  • the message content display area 242 can be used to display at least one group chat message 2421, and can also be used to display related information of the sender of the group chat message.
  • the input area 243 is used for receiving and sending the group chat message that the first user wants to send.
  • the input area 243 also includes an emoticon input control 2431.
  • In response to the first user triggering the emoticon input control 2431, an emoticon input candidate box 244 is displayed in the area where the first conversation interface 240 is located, and the emoticon input candidate box 244 includes one or more emoticon images.
  • Referring to FIG. 3, the emoticon input candidate box 244 includes a smiling emoticon image 2441, a crying emoticon image 2442 and a finger-heart emoticon image 2443.
  • The display interface and the first conversation interface of the client have been specifically introduced above, taking FIG. 2 and FIG. 3 as examples. It can be seen from FIG. 2 or FIG. 3 that, in the first conversation interface, the first user and other users can communicate by sending text messages and can also communicate by sending emoticon images.
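The passages above describe conversation interfaces whose messages can be replied to with emoticon images drawn from an emoticon input candidate box. The following TypeScript sketch shows one minimal, hypothetical data model for these concepts; the type and field names (EmojiImage, CandidateBox, Message, and so on) are illustrative assumptions and are not defined by the publication.

```typescript
// Hypothetical data model for the concepts described above (emoticon image,
// emoticon input candidate box, conversation message with emoticon replies).
interface EmojiImage {
  id: string;          // unique emoticon identifier (e.g. the "first emoticon identifier")
  imageUrl: string;    // location of the image resource
  text?: string;       // optional emoticon text such as "smile"
}

interface CandidateBox {
  recentlyUsed: EmojiImage[];  // recently used emoticon area
  all: EmojiImage[];           // all-emoticons area
}

interface Message {
  id: string;
  senderId: string;
  body: string | EmojiImage;   // text content or an emoticon image
  emojiReplies: { userId: string; emoji: EmojiImage }[]; // emoticon replies attached to this message
}
```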
  • One or more users send messages on the first conversation interface 240 for communication.
  • the first user can use emoticons to reply to the first message, and the reply emoticon is located in the same message as the first message.
  • the first message is a message for replying with an emoticon image, and the first message may be a text message, a voice message, or an emoticon image.
  • As a possible implementation, the client is communication software on a personal computer, as shown in FIG. 4.
  • the first conversation interface shown in FIG. 4 is based on the first conversation interface in FIG. 2 .
  • the message content display area 242 of the first conversational interface 240 displays the first message 2422 .
  • In response to the first user moving the mouse control 250 to the area where the first message 2422 is located, or clicking that area with the mouse control 250, the first conversation interface 240 displays a message operation control 260.
  • The message operation control 260 includes an emoticon input control 2601 and a message forwarding control 2602, where the message forwarding control 2602 is used to forward the first message 2422 and the emoticon input control 2601 is used to reply to the first message 2422 with an emoticon.
  • the first conversation interface shown in FIG. 5 is based on the first conversation interface in FIG. 4 .
  • In response to the first user moving the mouse control 250 to the area where the emoticon input control 2601 is located, or clicking that area with the mouse control 250, the emoticon input candidate box 244 is displayed in the area where the first conversation interface 240 is located, and the emoticon input candidate box 244 includes one or more emoticon images.
  • Referring to FIG. 5, the emoticon input candidate box 244 includes a like emoticon image 2444, an OK emoticon image 2445 and a cheer-up emoticon image 2446.
  • the first conversation interface shown in FIG. 6 is based on the first conversation interface in FIG. 5 .
  • The first user triggers, in the emoticon input candidate box 244, a selection instruction for the OK emoticon image 2445, and according to the selection instruction the OK emoticon image 2445 is displayed in the emoticon reply area 245 of the first conversation interface 240.
  • The emoticon reply area 245 is located in the area where the first message 2422 is located; the OK emoticon image 2445 shown in FIG. 6 and the first message 2422 belong to the same message, and the OK emoticon image 2445 may be located below the target message.
  • The OK emoticon image 2445 is the first user's reply to the first message 2422.
  • When the emoticon image with which the first user replies to the first message 2422 is displayed, the user name or nickname of the first user may also be displayed, so as to reflect in the group chat message that it is the first user who replies to the first message with an emoticon image.
  • In the embodiments of the present application, when the second user replies to the target message in the first conversation interface with the first emoticon image, which may be an emoticon image not included in the first user's emoticon input candidate box, the first user can trigger an emoticon adding instruction for the first emoticon image, and the client can add the first emoticon image to the first user's emoticon input candidate box according to the emoticon adding instruction.
  • The specific steps by which the second user replies to the target message with the first emoticon image in the first conversation interface are similar to the steps by which the first user replies to the first message with an emoticon in FIG. 6, and are not repeated here.
  • The target message is the message to which the first emoticon image is a reply; the target message may be a text message, a voice message, or an emoticon image.
  • the first emoticon image is an emoticon image of the second user replying to the target message in the first conversation interface.
  • the specific realization of receiving the emoticon adding instruction triggered by the first user for the first emoticon image in the first conversation interface may include:
  • in response to the first user clicking, in the first conversation interface, the first emoticon image replying to the target message, determining that an emoticon adding instruction triggered by the first user for the first emoticon image in the first conversation interface is received; or, in response to the first user moving the mouse control to the area where the first emoticon image replying to the target message is located, or the first user clicking that area, displaying an emoticon adding control in the area where the first conversation interface is located, and, in response to the first user clicking the emoticon adding control, determining that an emoticon adding instruction triggered by the first user for the first emoticon image in the first conversation interface is received.
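As a rough illustration of the two trigger paths described above (clicking the replied emoticon directly, or hovering to reveal an add control and then clicking it), the sketch below wires hypothetical DOM handlers to an add-instruction callback. Element references, the AddInstruction shape and the onAddInstruction callback are assumptions, not part of the publication.

```typescript
// Minimal sketch of the two alternative trigger paths described above.
type AddInstruction = { emojiId: string; messageId: string };

// Alternative 1: clicking the replied emoticon image directly triggers the adding instruction.
function bindDirectClickTrigger(
  emojiReplyEl: HTMLElement,
  instruction: AddInstruction,
  onAddInstruction: (i: AddInstruction) => void,
): void {
  emojiReplyEl.addEventListener('click', () => onAddInstruction(instruction));
}

// Alternative 2: hovering over (or clicking) the replied emoticon reveals an "add emoticon"
// control; clicking that control triggers the adding instruction.
function bindAddControlTrigger(
  emojiReplyEl: HTMLElement,
  addControlEl: HTMLElement,
  instruction: AddInstruction,
  onAddInstruction: (i: AddInstruction) => void,
): void {
  const reveal = () => { addControlEl.style.display = 'inline-block'; };
  emojiReplyEl.addEventListener('mouseenter', reveal);
  emojiReplyEl.addEventListener('click', reveal);
  addControlEl.addEventListener('click', () => onAddInstruction(instruction));
}
```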
  • As a possible implementation, referring to FIG. 7, in the first conversation interface 710 shown in FIG. 7 the second user (employee B in FIG. 7) replies to the target message 720 with a heart emoticon image 730, and the heart emoticon image 730 is the first emoticon image.
  • In response to the first user moving the mouse control 740 to the area where the heart emoticon image 730 replying to the target message 720 is located, or clicking that area with the mouse control 740, an emoticon adding control 750 is displayed in the area where the first conversation interface 710 is located; when the first user clicks the emoticon adding control 750, an emoticon adding instruction for the heart emoticon image 730 replying to the target message 720 is triggered.
  • As another possible implementation, in response to the first user clicking the heart emoticon image 730 replying to the target message 720, an emoticon adding instruction for the heart emoticon image 730 replying to the target message 720 is triggered. At the same time, when the first user clicks the heart emoticon image 730 replying to the target message 720, the first user may also thereby reply to the target message 720 with the heart emoticon image 730.
  • S102: According to the emoticon adding instruction triggered by the first user for the first emoticon image replying to the target message, the client adds the first emoticon image replying to the target message to the first user's emoticon input candidate box.
  • the first conversation interface shown in FIG. 8 is based on the first conversation interface in FIG. 7 .
  • After the first user triggers the emoticon adding instruction for the heart emoticon image 730 replying to the target message 720, the client adds the heart emoticon image 730 to the first user's emoticon input candidate box 760.
  • the emoticon input candidate box 760 includes a newly added heart emoticon image 730 in reply to the target message 720 .
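A minimal sketch of step S102 as described above, assuming an in-memory candidate box keyed by user. The EmojiImage and CandidateBox shapes and the store are illustrative assumptions rather than the publication's implementation.

```typescript
// Hypothetical in-memory store of per-user emoticon input candidate boxes.
interface EmojiImage { id: string; imageUrl: string }
interface CandidateBox { recentlyUsed: EmojiImage[]; all: EmojiImage[] }

const candidateBoxes = new Map<string, CandidateBox>();

// S102 (sketch): add the first emoticon image to the first user's candidate box,
// skipping the all-emoticons area if an emoticon with the same identifier is already there.
function addEmojiToCandidateBox(userId: string, emoji: EmojiImage): void {
  const box = candidateBoxes.get(userId) ?? { recentlyUsed: [], all: [] };
  if (!box.all.some(e => e.id === emoji.id)) {
    box.all.push(emoji);
  }
  box.recentlyUsed = [emoji, ...box.recentlyUsed.filter(e => e.id !== emoji.id)];
  candidateBoxes.set(userId, box);
}
```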
  • In the embodiments of the present application, when the first user triggers, in a second conversation interface, an instruction to display the emoticon input candidate box, the emoticon input candidate box is displayed on the second conversation interface.
  • The instruction for triggering display of the emoticon input candidate box may be the first user clicking an emoticon input control on the second conversation interface, so that the emoticon input candidate box is displayed on the second conversation interface.
  • The first emoticon image with which the second user replied to the target message is displayed in the emoticon input candidate box.
  • In response to a selection instruction for the first emoticon image triggered by the first user in the emoticon input candidate box, the client sends the first emoticon image to the second conversation interface according to the selection instruction.
  • the instruction to trigger the selection of the first emoticon image may be that the first user clicks on the emoticon image.
  • the emoticon input control may be a control for replying to messages sent by other users on the second conversation interface, or a control for sending messages in the input area.
  • The second conversation interface may be the same conversation interface as the first conversation interface, or may be a different conversation interface.
  • After the client adds the heart emoticon image 730 replying to the target message 720 in the first conversation interface 710 to the first user's emoticon input candidate box 760, the emoticon input candidate box 760 includes the newly added heart emoticon image 730 replying to the target message 720.
  • Referring to FIG. 9, the single-chat message notification identified as BBB is in a triggered state.
  • In the second conversation interface 910 of the single-chat message identified as BBB, the first user conducts separate one-to-one communication with the user BBB, that is, the second conversation interface 910 is a single-chat message interface. The first user can trigger an instruction to display the emoticon input candidate box 760, and the emoticon input candidate box is displayed on the second conversation interface 910.
  • the emoticon input candidate box 760 includes a newly added heart emoticon image 730 .
  • In response to a selection instruction for the heart emoticon image 730, triggered by the first user clicking the area where the heart emoticon image 730 is located, the client sends the heart emoticon image 730 to the second conversation interface 910 according to the selection instruction.
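The selection-and-send flow described above (the first user picks the newly added emoticon from the candidate box of a second conversation interface and the client sends it into that conversation) might look roughly like the following; the sendMessage transport and the identifiers are assumptions made for illustration.

```typescript
// Sketch: handling a selection instruction in the emoticon input candidate box of
// a (possibly different) second conversation interface.
interface EmojiImage { id: string; imageUrl: string }

async function onEmojiSelected(
  conversationId: string,             // the second conversation interface
  emoji: EmojiImage,                  // e.g. the newly added heart emoticon image
  sendMessage: (conversationId: string, payload: { type: 'emoji'; emojiId: string }) => Promise<void>,
): Promise<void> {
  // According to the selection instruction, send the emoticon image into the conversation.
  await sendMessage(conversationId, { type: 'emoji', emojiId: emoji.id });
}
```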
  • the emoticon input candidate box includes a recently used emoticon area, and the recently used emoticon area reflects emoticon images frequently selected by the first user within a preset time period.
  • After the first user adds the first emoticon image, with which the second user replied to the target message, to the emoticon input candidate box, the first emoticon image can be displayed in the recently used emoticon area or in the all-emoticons area.
  • Referring to FIG. 10, the emoticon input candidate box 1020 is displayed on the first conversation interface 1010; the emoticon input candidate box 1020 includes a recently used emoticon area 1021 and an all-emoticons area 1022, and the first emoticon image 1030 with which the second user replied to the target message is displayed in the recently used emoticon area 1021, which makes it convenient for the user to select the newly added emoticon.
  • In the embodiments of the present application, the recently used emoticon area accommodates a fixed number of emoticon images, that is, the number of emoticon images in the recently used emoticon area is fixed, and the emoticon images in the recently used emoticon area are sorted according to the most recent use time; for example, the most recently used emoticon image may be ranked at the head position.
  • As new emoticon images are added to the recently used emoticon area, the emoticon images in the recently used emoticon area may change, and other emoticon images originally displayed in the recently used emoticon area will no longer be displayed there. That is to say, as other recently used emoticons are added to the recently used emoticon area, the first emoticon image may be squeezed out of the recently used emoticon area by those emoticons, and the recently used emoticon area, or the emoticon input candidate box, will no longer display the first emoticon image.
  • Since the number of emoticon images in the recently used emoticon area is fixed, the number of positions in which the recently used emoticon area displays emoticon images is also fixed. The recently used emoticon area includes a head position and a tail position: a new emoticon image added to the recently used emoticon area is displayed at the head position, while the emoticon image previously displayed at the tail position is no longer displayed in the recently used emoticon area.
  • As an example, when a smiling emoticon image is added to the recently used emoticon area, the smiling emoticon image is displayed at the head position, and the cheer-up emoticon image originally displayed at the tail position is no longer displayed in the recently used emoticon area.
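The recently used emoticon area behaves like a fixed-capacity, recency-ordered list: a newly used emoticon goes to the head position and, once capacity is exceeded, the emoticon at the tail position drops out. A small sketch follows, with the capacity value chosen arbitrarily for illustration since the publication only says the number is fixed.

```typescript
// Sketch of a fixed-capacity, recency-ordered "recently used" emoticon area.
interface EmojiImage { id: string; imageUrl: string }

const RECENT_CAPACITY = 8; // illustrative value; only the fact that it is fixed comes from the text

function touchRecentlyUsed(recentlyUsed: EmojiImage[], used: EmojiImage): EmojiImage[] {
  // Move (or insert) the used emoticon at the head position ...
  const next = [used, ...recentlyUsed.filter(e => e.id !== used.id)];
  // ... and drop whatever falls past the tail position, e.g. the cheer-up emoticon in the
  // example above, or the first emoticon image once other recent emoticons squeeze it out.
  return next.slice(0, RECENT_CAPACITY);
}
```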
  • In the embodiments of the present application, before the first emoticon image is added to the first user's emoticon input candidate box, the first user can also be authenticated according to the emoticon adding instruction triggered by the first user, so as to determine whether the first user has permission to add the first emoticon image. If the authentication result is that the first user has permission to add the first emoticon image, the first emoticon image can be added to the first user's emoticon input candidate box, and the step of adding the first emoticon image to the first user's emoticon input candidate box is performed. If the authentication result is that the first user does not have permission to add the first emoticon image, the first emoticon image cannot be added to the first user's emoticon input candidate box.
  • The conditions for determining whether the first user has permission to add the first emoticon image may be the department the first user belongs to, the gender of the first user, the time when the first user joined the company, and so on.
  • the embodiment of the present application does not specifically limit the permission conditions, which can be designed according to the actual situation.
  • the client may directly authenticate, or the server may authenticate and send the authentication result to the client.
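Authentication before the add, as described above, can happen on the client or be delegated to the server. A hedged sketch follows, where the permission rule (department-based) and the endpoint path are purely illustrative assumptions.

```typescript
// Sketch: checking whether the first user has permission to add the first emoticon image.
interface AuthContext { userId: string; department: string; joinedAt: Date }

// Client-side check (illustrative rule: only certain departments may add this emoticon).
function canAddEmojiLocally(ctx: AuthContext, allowedDepartments: string[]): boolean {
  return allowedDepartments.includes(ctx.department);
}

// Server-side check: the client asks the server and receives the authentication result.
// The endpoint path and response shape are assumptions.
async function canAddEmojiViaServer(userId: string, emojiId: string): Promise<boolean> {
  const res = await fetch(`/api/emoji/${emojiId}/add-permission?userId=${encodeURIComponent(userId)}`);
  const body: { allowed: boolean } = await res.json();
  return body.allowed;
}
```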
  • In the embodiments of the present application, before the first emoticon image is added to the first user's emoticon input candidate box, a target task can also be set for the first user; when the first user completes the target task, the first emoticon image is added to the first user's emoticon input candidate box.
  • The target task is acquired by the client according to the emoticon adding instruction triggered by the first user. The target task may be forwarding the target message or sharing a target link. The embodiments of the present application do not specifically limit the content of the target task, which can be designed according to the actual situation.
  • The target task may be automatically generated by the client according to the emoticon adding instruction, or the client may, after receiving the emoticon adding instruction triggered by the first user, send a request for obtaining the target task to the server, and the server sends the target task to the client according to that request.
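The target-task gate can be sketched as: obtain a task for the emoticon adding instruction (generated locally or requested from the server), wait for the first user to complete it, then perform the add. The task shapes and the endpoint below are assumptions for illustration only.

```typescript
// Sketch: gating the add on completion of a target task (e.g. forwarding the target
// message or sharing a target link).
type TargetTask =
  | { kind: 'forwardMessage'; messageId: string }
  | { kind: 'shareLink'; url: string };

async function fetchTargetTask(emojiId: string): Promise<TargetTask> {
  // Assumed endpoint: the server returns a task for this emoticon adding instruction.
  const res = await fetch(`/api/emoji/${encodeURIComponent(emojiId)}/target-task`);
  return res.json();
}

async function addAfterTask(
  emojiId: string,
  completeTask: (task: TargetTask) => Promise<boolean>, // resolves true once the user finishes the task
  addEmoji: (emojiId: string) => void,
): Promise<void> {
  const task = await fetchTargetTask(emojiId);
  if (await completeTask(task)) {
    addEmoji(emojiId); // only add the emoticon once the target task is completed
  }
}
```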
  • The first possible addition method applies to the scenario in which the first user's client has already stored the first emoticon image with which the second user replies to the target message, but the image is not displayed in the first user's emoticon input candidate box.
  • The client can acquire, according to the emoticon adding instruction triggered by the first user, the first emoticon identifier of the first emoticon image, directly obtain the corresponding first emoticon image on the client according to the first emoticon identifier, and add the first emoticon image to the first user's emoticon input candidate box.
  • The first emoticon identifier is a unique identifier of the first emoticon image, and the first emoticon image can be uniquely determined by the first emoticon identifier.
  • the second possible addition method is applied in a scenario where the first user's client does not store the first emoticon image of the second user's reply to the target message, and needs to send an emoticon acquisition request including the first emoticon identifier to the server.
  • The server determines the corresponding first emoticon image according to the first emoticon identifier and sends the first emoticon image to the first user's client; after receiving the first emoticon image, the client adds the first emoticon image to the first user's emoticon input candidate box.
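The two addition paths described above (the emoticon image is already cached on the client, or it must be fetched from the server by its identifier) reduce to a cache-or-fetch lookup. The sketch below assumes a local cache, an endpoint path and the EmojiImage type purely for illustration.

```typescript
// Sketch: resolve the first emoticon image by its identifier, preferring the local cache.
interface EmojiImage { id: string; imageUrl: string }

const localEmojiCache = new Map<string, EmojiImage>();

async function resolveEmojiById(emojiId: string): Promise<EmojiImage> {
  // First addition method: the client already stores the emoticon image.
  const cached = localEmojiCache.get(emojiId);
  if (cached) return cached;

  // Second addition method: send an emoticon acquisition request carrying the identifier;
  // the server returns the corresponding emoticon image. The endpoint is an assumption.
  const res = await fetch(`/api/emoji/${encodeURIComponent(emojiId)}`);
  const emoji: EmojiImage = await res.json();
  localEmojiCache.set(emojiId, emoji); // store the identifier-to-image correspondence
  return emoji;
}
```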
  • the client before the client sends an emoticon acquisition request including the first emoticon identifier to the server, the client first receives the first emoticon identifier sent by the server.
  • The client stores the correspondence between the first emoticon image and the first emoticon identifier, so that when responding to the user's emoticon adding instruction, the client determines the first emoticon identifier according to the emoticon adding instruction and then sends to the server an emoticon acquisition request including the first emoticon identifier.
  • The client may also receive the emoticon text sent by the server according to the first emoticon identifier, and the client may store the correspondence among the first emoticon identifier, the first emoticon image, and the emoticon text.
  • the emoticon text is a text reflecting the meaning of the first emoticon image, for example, if the meaning reflected by the first emoticon image is a smile, the emoticon text is a smile.
  • the language of the emoticon text may be Chinese, English, or Japanese, etc. The embodiment of the present application does not specifically limit the language of the emoticon text.
  • In the embodiments of the present application, the client may receive emoticon text input by the user in the text input box of the input area of the first conversation interface and, according to the correspondence among the first emoticon identifier, the first emoticon image, and the emoticon text, display the first emoticon image corresponding to that emoticon text on the first conversation interface.
  • As an example, if the emoticon text "smile" is input in the text input box of the input area of the first conversation interface, the smile emoticon image corresponding to "smile" is displayed on the first conversation interface.
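Storing the identifier / image / emoticon-text correspondence allows the client to replace typed emoticon text (such as "smile") with the matching emoticon image in the conversation interface. A rough sketch under those assumptions; the map-based registry is illustrative.

```typescript
// Sketch: map emoticon text typed in the input box to the corresponding emoticon image.
interface EmojiRecord { id: string; imageUrl: string; text: string }

const emojiByText = new Map<string, EmojiRecord>(); // e.g. "smile" -> smile emoticon record

function registerEmoji(record: EmojiRecord): void {
  emojiByText.set(record.text, record); // correspondence among identifier, image and text
}

function renderEmojiForText(inputText: string): EmojiRecord | undefined {
  // If the user typed known emoticon text, the conversation interface can display
  // the corresponding emoticon image for it.
  return emojiByText.get(inputText.trim());
}
```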
  • In the embodiments of the present application, an emoticon image can also be taken offline. The client receives from the server an emoticon offline request including the first emoticon identifier, and the client deletes the first emoticon image from the emoticon input candidate box according to the first emoticon identifier, so that the emoticon image is taken offline.
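Taking an emoticon offline, as described above, amounts to handling a server-initiated request that names the emoticon identifier and removing that emoticon from the candidate box. A sketch with assumed message and box shapes:

```typescript
// Sketch: handle an emoticon offline request pushed by the server.
interface EmojiImage { id: string; imageUrl: string }
interface CandidateBox { recentlyUsed: EmojiImage[]; all: EmojiImage[] }
interface EmojiOfflineRequest { emojiId: string } // carries the first emoticon identifier

function handleEmojiOffline(box: CandidateBox, req: EmojiOfflineRequest): CandidateBox {
  // Delete the emoticon image from both areas of the emoticon input candidate box.
  return {
    recentlyUsed: box.recentlyUsed.filter(e => e.id !== req.emojiId),
    all: box.all.filter(e => e.id !== req.emojiId),
  };
}
```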
  • the client first receives the second emoticon identifier sent by the server, and then the client can obtain the corresponding second emoticon image by sending an emoticon acquisition request carrying the second emoticon identifier to the server.
  • The second emoticon identifier is a unique identifier of the second emoticon image, and the second emoticon image can be uniquely determined by the second emoticon identifier.
  • the server determines the corresponding second emoticon image according to the second emoticon identifier in the emoticon acquisition request, and sends the second emoticon image to the client.
  • After receiving the second emoticon image corresponding to the second emoticon identifier, the client adds the second emoticon image to the emoticon input candidate box and displays the second emoticon image in the emoticon input candidate box.
  • The client stores the correspondence between the second emoticon image and the second emoticon identifier, so that when responding to the user's emoticon adding instruction, the client determines the second emoticon identifier according to the emoticon adding instruction and then sends to the server an emoticon acquisition request including the second emoticon identifier.
  • The client may also receive the emoticon text sent by the server according to the second emoticon identifier, and the client may store the correspondence among the second emoticon identifier, the second emoticon image, and the emoticon text.
  • the emoticon text is text reflecting the meaning of the second emoticon image, for example, if the meaning reflected by the second emoticon image is a smile, the emoticon text is a smile.
  • the language of the emoticon text may be Chinese, English, or Japanese, etc. The embodiment of the present application does not specifically limit the language of the emoticon text.
  • In the embodiments of the present application, the client may receive emoticon text input by the user in the text input box of the input area of the first conversation interface and, according to the correspondence among the second emoticon identifier, the second emoticon image, and the emoticon text, display the second emoticon image corresponding to that emoticon text on the first conversation interface.
  • In the embodiments of the present application, since the corresponding second emoticon image is obtained from the server according to the second emoticon identifier, emoticon resources can be updated without relying solely on client version updates, and emoticons can be launched, updated, modified and maintained at any time, shortening the cycle for bringing emoticons online and improving the user experience of using emoticons.
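The dynamic provisioning flow for the second emoticon (the server pushes an identifier, the client requests the image and the emoticon text, then stores the correspondence and shows the emoticon in the candidate box) could be sketched as follows; the push channel, endpoint and record shape are assumptions.

```typescript
// Sketch: provisioning a newly launched emoticon without a client version update.
interface EmojiRecord { id: string; imageUrl: string; text: string }

async function onServerPushedEmojiId(
  emojiId: string,                                  // the second emoticon identifier
  addToCandidateBox: (emoji: EmojiRecord) => void,
): Promise<void> {
  // Emoticon acquisition request carrying the second emoticon identifier (assumed endpoint).
  const res = await fetch(`/api/emoji/${encodeURIComponent(emojiId)}`);
  const emoji: EmojiRecord = await res.json();      // image plus emoticon text
  addToCandidateBox(emoji);                         // display it in the candidate box
}
```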
  • The emoticon image adding method provided by the embodiments of the present application has been introduced in detail above. It can be seen that, in the emoticon image adding method provided by the embodiments of the present application, when the second user replies to the target message in the first conversation interface with the first emoticon image, the first user triggers an emoticon adding instruction for the first emoticon image, and the first emoticon image is added to the first user's emoticon input candidate box. That is to say, the embodiments of the present application can add the first emoticon image, with which the second user replies to the target message, to the first user's emoticon input candidate box, satisfying the first user's need to send a greater variety of emoticons.
  • Based on the emoticon image adding method provided by the above embodiments, an embodiment of the present application further provides an emoticon image adding apparatus.
  • Referring to FIG. 11, which is a structural block diagram of an emoticon image adding apparatus provided by an embodiment of the present application.
  • The emoticon image adding apparatus 1100 provided in this embodiment includes:
  • a first receiving unit 1110, configured to receive an emoticon adding instruction triggered by the first user for the first emoticon image in the first conversation interface, the first emoticon image being the emoticon image with which the second user replies to the target message in the first conversation interface;
  • an adding unit 1120, configured to add the first emoticon image to the first user's emoticon input candidate box according to the emoticon adding instruction.
  • the device also includes:
  • the second receiving unit is configured to receive a selection instruction for the first emoticon image triggered by the first user in the emoticon input candidate box of the second conversation interface;
  • a first sending unit configured to send the first emoticon image to the second conversation interface according to the selection instruction.
  • The emoticon input candidate box includes a recently used emoticon area, and the adding unit is specifically configured to: add the first emoticon image to the first user's recently used emoticon area according to the emoticon adding instruction.
  • The first user's recently used emoticon area accommodates a fixed number of emoticon images, sorted according to the most recent use time;
  • the device also includes:
  • a squeeze-out unit, configured such that, when the first emoticon image is squeezed out of the recently used emoticon area by other recently used emoticons, the first emoticon image no longer exists in the first user's emoticon input candidate box.
  • the device also includes:
  • an authentication unit, configured to obtain, according to the emoticon adding instruction, an authentication result of whether the first user has permission to add the first emoticon image;
  • in response to the authentication result being that the first user has the permission, the adding unit performs the step of adding the first emoticon image to the first user's emoticon input candidate box.
  • the device also includes:
  • an acquisition unit, configured to acquire a target task according to the emoticon adding instruction;
  • in response to the first user completing the target task, the adding unit performs the step of adding the first emoticon image to the first user's emoticon input candidate box.
  • The target task includes one or more of the following: forwarding the target message and sharing a target link.
  • The adding unit is specifically configured to: acquire the first emoticon identifier of the first emoticon image according to the emoticon adding instruction; acquire the corresponding first emoticon image according to the first emoticon identifier; and add the first emoticon image to the first user's emoticon input candidate box.
  • The adding unit is specifically configured to: send an emoticon acquisition request to the server, the emoticon acquisition request including the first emoticon identifier; and receive the first emoticon image sent by the server according to the first emoticon identifier.
  • the device also includes:
  • a first storage unit, configured to store the correspondence between the first emoticon identifier and the first emoticon image.
  • the device also includes:
  • a third receiving unit configured to receive the emoticon text sent by the server according to the first emoticon identifier
  • a second storage unit, configured to store the correspondence among the first emoticon identifier, the first emoticon image, and the emoticon text.
  • the device also includes:
  • a fourth receiving unit configured to receive the emoticon text input by the user in the text input box
  • a display unit configured to display a first emoticon image corresponding to the emoticon text according to the correspondence.
  • the device also includes:
  • a fifth receiving unit configured to receive an emoticon offline request from the server, where the emoticon offline request includes the first emoticon identifier
  • a deleting unit, configured to delete the first emoticon image from the emoticon input candidate box according to the first emoticon identifier.
  • the device also includes:
  • a sixth receiving unit, configured to receive the second emoticon identifier sent by the server;
  • a second sending unit, configured to send an emoticon acquisition request to the server, the emoticon acquisition request including the second emoticon identifier;
  • a seventh receiving unit configured to receive the second emoticon image sent by the server according to the second emoticon identifier.
  • the first receiving unit is specifically configured to:
  • in response to the first user clicking, in the first conversation interface, the first emoticon image replying to the target message, determine that an emoticon adding instruction triggered by the first user for the first emoticon image in the first conversation interface is received; or, in response to the first user moving the mouse control to the area where the first emoticon image replying to the target message is located, or the first user clicking that area, display an emoticon adding control in the area where the first conversation interface is located, and, in response to the first user clicking the emoticon adding control, determine that an emoticon adding instruction triggered by the first user for the first emoticon image in the first conversation interface is received.
  • the embodiment of the present application also provides an emoticon image adding device, and the emoticon image adding device 1200 includes:
  • a processor 1210 and a memory 1220, where the number of processors may be one or more. In some embodiments of the present application, the processor and the memory may be connected through a bus or in other ways.
  • the memory which can include read only memory and random access memory, provides instructions and data to the processor.
  • a portion of the memory may also include NVRAM.
  • the memory stores operating systems and operating instructions, executable modules or data structures, or their subsets, or their extended sets, wherein the operating instructions may include various operating instructions for implementing various operations.
  • the operating system may include various system programs for implementing various basic services and processing hardware-based tasks.
  • the processor controls the operation of the terminal device, and the processor may also be referred to as a CPU.
  • the methods disclosed in the foregoing embodiments of the present application may be applied to, or implemented by, a processor.
  • the processor can be an integrated circuit chip with signal processing capability.
  • each step of the above method can be completed by an integrated logic circuit of hardware in a processor or an instruction in the form of software.
  • the above-mentioned processor may be a general-purpose processor, DSP, ASIC, FPGA or other programmable logic devices, discrete gate or transistor logic devices, and discrete hardware components.
  • Various methods, steps, and logic block diagrams disclosed in the embodiments of the present application may be implemented or executed.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • the steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in a mature storage medium in the field such as random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, register.
  • the storage medium is located in the memory, and the processor reads the information in the memory, and completes the steps of the above method in combination with its hardware.
  • An embodiment of the present application further provides a computer-readable storage medium for storing program code, and the program code is used to execute any implementation manner of the methods in the foregoing embodiments.
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (Read-Only Memory, ROM) or a random access memory (Random Access Memory, RAM), etc.
  • each embodiment in this specification is described in a progressive manner, the same and similar parts of each embodiment can be referred to each other, and each embodiment focuses on the differences from other embodiments.
  • In particular, since the apparatus embodiments are basically similar to the method embodiments, their description is relatively simple; for relevant parts, reference may be made to the description of the method embodiments.
  • the device embodiments described above are only illustrative, and the units and modules described as separate components may or may not be physically separated. In addition, some or all of the units and modules can also be selected according to actual needs to achieve the purpose of the solution of this embodiment. It can be understood and implemented by those skilled in the art without creative effort.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The embodiments of the present application disclose a method, apparatus, device and storage medium for adding an emoticon image. When a second user replies to a target message in a first conversation interface with an emoticon image, a first user can trigger an emoticon adding instruction for that emoticon image, and the emoticon image is added to the first user's emoticon input candidate box. It can be seen that the emoticon image adding method provided by the embodiments of the present application can add the emoticon with which the second user replies to the target message to the first user's emoticon input candidate box, enriching the emoticon images in the first user's emoticon input candidate box and satisfying the first user's need to send more emoticon images.

Description

一种表情图像添加方法、装置、设备和存储介质
本申请要求于2021年07月15日提交中国国家知识产权局、申请号为202110800478.2、发明名称为“一种表情图像添加方法、装置、设备和存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及计算机领域,尤其涉及一种表情图像添加方法、装置、设备和存储介质。
背景技术
随着互联网相关技术的快速发展,用户可以通过多种通讯软件进行通讯交流。在用户进行通讯交流的过程中,会经常使用表情图像来反映交流的内容或情绪。
相关技术中用户能够使用的表情图像较少,不能满足用户发送较多表情图像的需求。
发明内容
为了解决相关技术中用户发送较多表情图像的需求,本申请实施例提供了一种表情图像添加方法、装置、设备和存储介质。
本申请实施例提供了一种表情图像添加方法,所述方法包括:
接收第一用户在第一会话界面中针对第一表情图像触发的表情添加指令,所述第一表情图像为在所述第一会话界面中第二用户针对目标消息回复的表情图像;
根据所述表情添加指令将所述第一表情图像添加到所述第一用户的表情输入候选框中。
本申请实施例提供一种表情图像添加装置,所述装置包括:
第一接收单元,用于接收第一用户在第一会话界面中针对第一表情图像触发的表情添加指令,所述第一表情图像为在所述第一会话界面中第二用户针对目标消息回复的表情图像;
添加单元,用于根据所述表情添加指令将所述第一表情图像添加到所述第一用户的表情输入候选框中。
本申请实施例提供一种处理设备,所述设备包括:处理器和存储器;
所述存储器,用于存储指令;
所述处理器,用于执行所述存储器中的所述指令,执行上述实施例所述的方法。
本申请实施例提供一种计算机可读存储介质,包括指令,当其在计算机上运行时,使得计算机执行上述实施例所述的方法。
本申请实施例提供的表情图像添加方法,当第二用户在第一会话界面中针对目标消息回复了表情图像,第一用户可以针对该表情图像触发表情添加指令,该表情图像被添加至第一用户的表情输入候选框中。由此可见,本申请实施例提供的表情图像添加方法,能够将第二用户针对目标消息进行回复的表情添加至第一用户的表情输入候选框中,丰富第一用户的表情输入候选框中的表情图像,满足第一用户能够发送较多表情图像的需求。
附图说明
为了更清楚地说明本申请实施例或相关技术中的技术方案,下面将对实施例或相关技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请中记载的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其它的附图。
图1为本申请实施例提供的一种表情图像添加方法的流程图;
图2为本申请实施例提供的客户端的显示界面的一种示意图;
图3为本申请实施例提供的客户端的显示界面的另一种示意图;
图4为本申请实施例提供的客户端的显示界面的又一种示意图;
图5为本申请实施例提供的客户端的显示界面的再一种示意图;
图6为本申请实施例提供的客户端的显示界面的还一种示意图;
图7为本申请实施例提供的客户端的显示界面的还一种示意图;
图8为本申请实施例提供的客户端的显示界面的还一种示意图;
图9为本申请实施例提供的客户端的显示界面的还一种示意图;
图10为本申请实施例提供的客户端的显示界面的还一种示意图;
图11为本申请实施例提供的一种表情图像添加装置的结构框图;
图12为本申请实施例提供的一种表情图像添加设备的结构框图。
具体实施方式
为了使本技术领域的人员更好地理解本申请方案,下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
参见图1,该图为本申请实施例提供的一种表情添加方法的流程图。
本申请实施例提供的表情添加方法应用于客户端。这里,客户端可以是终端设备,终端设备为可以进行通讯交流的硬件设备,例如移动手机、平板电脑或PC(Personal Computer,个人计算机)等。客户端也可以是用于进行通讯交流的软件客户端。以下实施例提供的表情添加方法以应用于软件客户端来进行介绍。
本实施例提供的表情添加方法包括如下步骤:
S101,接收第一用户在第一会话界面中针对第一表情图像触发的表情添加指令。
S102,根据所述表情添加指令将所述第一表情图像添加到所述第一用户的表情输入候选框中。
在本申请的实施例中,第一用户为在客户端的第一会话界面中进行通讯交流的用户。第一会话界面为第一用户和第二用户通过客户端进行语音或文字交流的界面。第一会话界面中可以只包括第一用户和第二用户,进行单独的一对一的通讯交流,即第一会话界面为单聊消息界面,第一会话界面中也可以包括至少3名用户,第一用户和第二用户为至少3名用户中的其中两名用户,即第一会话界面可以是群聊消息界面。
参考图2所示,为本申请实施例提供的一种客户端的显示界面的示意 图。客户端的显示区域内,包括基础信息显示区域210、控件显示区域220、会话列表显示区域230和第一会话界面240。
基础信息显示区域210可以包括用户头像显示区域211、用户名显示区域212、内容搜索框213、搜索控件214和操作控件集合215。其中,用户头像显示区域211用于显示当前登录客户端的用户设置的头像。用户名显示区域212用于显示当前登录客户端的用户的用户名或昵称。内容搜索框213用于接收用户输入的关键词或其他内容。搜索控件214用于触发客户端的搜索操作。操作控件集合215可以包括至少一个操作控件,用户可以通过触发操作控件集合中的操作控件触发关闭客户端操作、最小化客户端操作等操作。
控件显示区域220显示至少一个操作控件,控件显示区域220包括消息查看控件221、通讯录查看控件222。其中,消息查看控件221用于在触发后使得客户端在会话列表显示区域230显示消息通知,例如群聊消息通知或单聊消息通知。如图2所示,消息查看控件221处于已经被触发的状态。通讯录查看控件222用于在触发后使得客户端显示通讯录。在图2中,通讯录查看控件222处于未被触发的状态。
会话列表显示区域230包括一个或多个消息通知231,消息通知231包括群聊消息通知和单聊消息通知,单聊消息通知以第一用户对其他用户编辑的昵称作为标识,群聊消息通知以群聊名称作为标识。如图2所示,标识为工作交流群的群聊消息处于已经被触发的状态。
第一会话界面240包括标题显示区域241、消息内容显示区域242和输入区域243。参考图2所示,第一会话界面240为群聊界面,用于显示群聊消息。相应地,标题显示区域241可以用于显示群聊消息对应的群聊名称。消息内容显示区域242可以用于显示至少一条群聊消息2421,也可以用于显示该群聊消息的发送者的相关信息。输入区域243用于接收第一用户想要发送的群聊消息并进行发送。
参考图3所示,在图2所示的第一会话界面240的基础上,输入区域243还包括表情输入控件2431,响应于第一用户触发表情输入控件2431之后,在第一会话界面240所在的区域显示表情输入候选框244,表情输入 候选框244中包括一个或多个表情图像。参考图3所示,表情输入候选框244中包括微笑表情图像2441,哭泣表情图像2442和比心表情图像2443。
以上以图2和图3为例,对客户端的显示界面和第一会话界面进行了具体介绍。由图2或图3可知,在第一会话界面中,第一用户和其他用户可以发送文字消息进行通讯交流,还可以发送表情图像进行通讯交流。
一个或多个用户在第一会话界面240发送消息进行通讯交流,第一用户可以利用表情图像针对第一消息进行回复,回复的表情图像与第一消息位于同一条消息中。其中,第一消息为进行表情图像回复的消息,第一消息可以是文字消息,也可以是语音消息,还可以是表情图像。
作为一种可能的实现方式,客户端为个人计算机中的通讯软件,参考图4所示,图4所示的第一会话界面是基于图2中的第一会话界面。
响应于其他用户发送第一消息2422,第一会话界面240的消息内容显示区域242显示第一消息2422。响应于第一用户将鼠标控件250移动至第一消息2422所在的区域或第一用户利用鼠标控件250点击第一消息2422所在的区域,第一会话界面240显示消息操作控件260。消息操作控件260包括表情输入控件2601和消息转发控件2602。其中,消息转发控件2602用于对第一消息2422进行转发,表情输入控件2601用于对第一消息2422进行表情回复。
继续参考图5所示,图5所示的第一会话界面是基于图4中的第一会话界面。响应于第一用户将鼠标控件250移动至表情输入控件2601所在的区域或第一用户利用鼠标控件250点击表情输入控件2601所在的区域,在第一会话界面240所在的区域显示表情输入候选框244,表情输入候选框244中包括一个或多个表情图像。参考图5所示,表情输入候选框244中包括点赞表情图像2444,好的(OK)表情图像2445和加油表情图像2446。
继续参考图6所示,图6所示的第一会话界面是基于图5中的第一会话界面。第一用户在表情输入候选框244中触发对好的(OK)表情图像2445的选择指令,根据该选择指令,在第一会话界面240的表情回复区域245显示好的(OK)表情图像2445。其中,表情回复区域245,位于第一消息2422所在的区域中,图6中示出的好的(OK)表情图像2445和第一消息 2422处于同一消息中,好的(OK)表情图像2445可以位于目标消息的下方。好的(OK)表情图像2445即为第一用户针对第一消息2422的回复。
在显示第一用户针对第一消息2422进行回复的表情图像时,还可以显示第一用户的用户名或昵称,以在群聊消息中反映是第一用户针对第一消息进行表情图像的回复。
在本申请的实施例中,当第二用户针对第一会话界面中的目标消息进行第一表情图像的回复,并且该第一表情图像可能是第一用户的表情输入候选框中没有的表情图像时,第一用户可以触发该第一表情图像的表情添加指令,客户端就可以根据表情添加指令将该第一表情图像添加到第一用户的表情输入候选框中。其中,第二用户针对第一会话界面的目标消息进行第一表情图像的回复的具体执行步骤与图6中第一用户针对第一消息进行表情回复的具体执行步骤类似,此处不再赘述。其中,目标消息为进行第一表情图像回复的消息,目标消息可以是文字消息,也可以是语音消息,还可以是表情图像。第一表情图像为在第一会话界面中第二用户针对目标消息回复的表情图像。
在本申请的实施例中,接收第一用户在第一会话界面中针对第一表情图像触发的表情添加指令的具体实现可以包括:
响应于第一用户在第一会话界面中点击针对目标消息回复的第一表情图像,确定接收到了第一用户在第一会话界面中针对第一表情图像触发的表情添加指令;
或者,
响应于第一用户将鼠标控件移动至针对目标消息回复的第一表情图像所在的区域或第一用户点击针对目标消息回复的第一表情图像所在的区域,在第一会话界面所在的区域显示表情添加控件,以及响应于第一用户点击所述表情添加控件,确定接收到了第一用户在第一会话界面中针对第一表情图像触发的表情添加指令。
作为一种可能的实现方式,参考图7所示,图7所示的第一会话界面710中第二用户针对目标消息720回复爱心表情图像730,其中,第二用户为图7中所示的员工B,爱心表情图像730为第一表情图像。响应于第一 用户将鼠标控件740移动至针对目标消息720回复的爱心表情图像730所在的区域或第一用户利用鼠标控件740点击针对目标消息720回复的爱心表情图像730所在的区域,在第一会话界面710所在的区域显示表情添加控件750,当第一用户点击表情添加控件750时,触发针对目标消息720回复的爱心表情图像730的表情添加指令。
作为另一种可能的实现方式,参考图7所示,图7所示的第一会话界面710中第二用户针对目标消息720回复爱心表情图像730,其中,第二用户为图7中所示的员工B,爱心表情图像730为第一表情图像。响应于第一用户点击针对目标消息720回复的爱心表情图像730,触发针对目标消息720回复的爱心表情图像730的表情添加指令。同时,第一用户点击针对目标消息720回复的爱心表情图像730时,第一用户也可以针对目标消息720回复爱心表情图像730。
客户端根据第一用户触发的针对目标消息回复的第一表情图像的表情添加指令,将针对目标消息回复的第一表情图像添加到第一用户的表情输入候选框中。
参考图8所示,图8所示的第一会话界面是基于图7中的第一会话界面。在第一用户触发针对目标消息720回复的爱心表情图像730的表情添加指令之后,客户端将爱心表情图像730添加到第一用户的表情输入候选框760中。参考图8所示,表情输入候选框760中包括新添加的针对目标消息720回复的爱心表情图像730。
在本申请的实施例中,当第一用户在第二会话界面中触发显示表情输入候选框的指令时,第二会话界面上显示表情输入候选框。触发显示表情输入候选框的指令可以是第一用户在第二会话界面点击表情输入控件,以在第二会话界面上显示表情输入候选框。表情输入候选框中显示第二用户针对目标图像回复的第一表情图像。响应于第一用户在表情输入候选框中触发的对该第一表情图像的选择指令,客户端根据选择指令将该第一表情图像发送至第二会话界面中。触发对该第一表情图像的选择指令可以是第一用户点击该表情图像。其中,表情输入控件可以是针对第二会话界面上其他用户发送的消息回复的控件,也可以是输入区域进行消息发送的控件。 第二会话界面可以和第一会话界面是同一个会话界面,也可以是不同的会话界面。
在客户端将第一会话界面710中针对目标消息720回复的爱心表情图像730添加到第一用户的表情输入候选框760中之后。表情输入候选框760中包括新添加的针对目标消息720回复的爱心表情图像730。参考图9所示,标识为BBB的单聊消息处于已经被触发的状态。在标识为BBB的单聊消息的第二会话界面910中,第一用户与用户BBB进行单独的一对一的通讯交流,即第二会话界面910为单聊消息界面,第一用户可以触发显示表情输入候选框760的指令,第二会话界面910上显示表情输入候选框。参考图9所示,表情输入候选框760中包括新添加的爱心表情图像730。响应于第一用户点击爱心表情图像730所在的区域触发的对爱心表情图像730的选择指令,客户端根据该选择指令将爱心表情图像730发送到第二会话界面910中。
在本申请的实施例中,表情输入候选框包括最近使用表情区域,最近使用表情区域反映在预设时间段内,第一用户经常选择的表情图像。当第一用户将第二用户针对目标消息回复的第一表情图像添加至表情输入候选框后,该第一表情图像可以显示在最近使用表情区域,也可以显示在全部表情区域。
参考图10所示,在第一会话界面1010显示表情输入候选框1020,表情输入候选框1020包括最近使用表情区域1021和全部使用表情区域1022,第二用户针对目标消息回复的第一表情图像1030显示在最近使用表情区域1021,方便用户对该新添加的表情进行选择。
在本申请的实施例中,最近使用表情区域中可容纳固定数量的表情图像,即最近使用表情区域的表情图像的数量是固定的,并且最近使用表情区域的表情图像按照最近使用时间进行排序,例如最新使用的表情图像可以排在头位置。随着新的表情图像添加至最近使用表情区域,最近使用表情区域的表情图像可以变化,原来在最近使用表情区域进行显示的其他表情图像不在最近使用表情区域继续进行显示。也就是说,随着最近使用的其他表情添加至最近使用表情区域,第一表情图像可以被最近使用的其他 表情挤出最近使用表情区域,最近使用表情区域或表情输入候选框中不再显示该第一表情图像。
由于最近使用表情区域的表情图像的数量是固定的,因此最近使用表情区域显示表情图像的位置的数量也是固定的。最近使用表情区域包括头位置和尾位置,新的表情图像添加至最近使用表情区域,会在头位置进行显示,而之前在尾位置显示的其他表情图像不在最近使用表情区域继续进行显示。
作为一种示例,在最近使用表情区域添加微笑表情图像,微笑表情图像在头位置进行显示,原来在尾位置显示的加油表情图像,不在最近使用表情区域继续进行显示。
在本申请的实施例中,在将第一表情图像添加到第一用户的表情输入候选框之前,还可以根据第一用户触发的表情添加指令对第一用户进行鉴权,以确定第一用户是否具有添加该第一表情图像的权限。若鉴权结果为第一用户具有添加该第一表情图像的权限,则该第一表情图像可以被添加至第一用户的表情输入候选框中,执行将第一表情图像添加到第一用户的表情输入候选框中的步骤。若鉴权结果为第一用户不具有添加该第一表情图像的权限,则该第一表情图像不可以被添加至第一用户的表情输入候选框中。其中,确定第一用户是否具有添加该第一表情图像的权限的条件可以是第一用户所处的部门、第一用户的性别和第一用户的入职时间等。本申请实施例对于权限的条件不进行具体限定,可以根据实际情况自行设计。在对第一用户进行鉴权时,可以是客户端直接进行鉴权,也可以是服务器进行鉴权,并将鉴权结果发送给客户端。
在本申请的实施例中,在将第一表情图像添加到第一用户的表情输入候选框之前,还可以为第一用户设置目标任务,当第一用户执行完成目标任务,即可在第一用户的表情输入候选框中添加表情图像。目标任务是客户端根据第一用户触发的表情添加指令获取的,目标任务可以是转发目标消息,也可以是分享目标链接,本申请实施例对于目标任务的内容不进行具体限定,可以根据实际情况自行设计。其中,目标任务可以是客户端根据表情添加指令自动生成的,也可以是客户端在接收第一用户触发的表情 添加指令后,向服务器发送获取目标任务的请求,服务器根据获取目标任务的请求向客户端发送目标任务。
在本申请的实施例中,客户端在根据第一用户触发的表情添加指令将表情图像添加到第一用户的表情输入候选框中时,有以下两种可能的添加方式:
第一种可能的添加方式应用的场景为第一用户的客户端已经存储第二用户针对目标消息回复的第一表情图像,但是没有显示在第一用户的表情输入候选框中。客户端可以根据第一用户触发的表情添加指令获取第一表情图像的第一表情标识,根据第一表情标识在客户端直接获取对应的第一表情图像,并将第一表情图像添加到第一用户的表情输入候选框中。其中,第一表情标识为第一表情图像的唯一标识,可以通过第一表情标识唯一的确定第一表情图像。
第二种可能的添加方式应用的场景为第一用户的客户端没有存储第二用户针对目标消息回复的第一表情图像,需要向服务器发送包括第一表情标识的表情获取请求。服务器根据第一表情标识,确定对应的第一表情图像,并将该第一表情图像发送给第一用户的客户端,客户端接收到该第一表情图像后,将第一表情图像添加到第一用户的表情输入候选框中。
在本申请的实施例中,在客户端向服务器发送包括第一表情标识的表情获取请求之前,客户端先接收服务器发送的第一表情标识。客户端中存储了第一表情图像和第一表情标识的对应关系,这样客户端在响应于用户的表情添加请求时,根据表情添加请求确定第一表情标识,之后向服务器发送包括第一表情标识的表情获取请求。
在本申请的实施例中,客户端还可以接收服务器根据第一表情标识发送的表情文本,客户端可以存储第一表情标识、第一表情图像以及表情文本之间的对应关系。其中,表情文本为反映第一表情图像含义的文本,例如,第一表情图像反映的含义为微笑,则表情文本为微笑。表情文本的语言可以是中文,也可以是英文,还可以是日文等,本申请实施例不具体限定表情文本的语言。
在本申请的实施例中,客户端可以接收用户在第一会话界面输入区域 的文本输入框输入的表情文本,根据第一表情标识、第一表情图像以及表情文本之间的对应关系,在第一会话界面显示与该表情文本对应的第一表情图像。
作为一种示例,在第一会话界面输入区域的文本输入框输入的表情文本smile,则在第一会话界面显示与smile对应的微笑表情图像。
在本申请的实施例中,表情图像可以进行下线处理,客户端接收来自服务器包括第一表情标识的表情下线请求,客户端根据该第一表情标识从表情输入候选框中删除该表情图像,以对该表情图像进行下线处理。
在本申请的实施例中,客户端先接收服务器发送的第二表情标识,之后客户端可以通过向服务器发送携带有第二表情标识的表情获取请求,以获取相应的第二表情图像。其中,第二表情标识为第二表情图像的唯一标识,可以通过第二表情标识唯一的确定第二表情图像。
在本申请的实施例中,服务器根据表情获取请求中的第二表情标识,确定对应的第二表情图像,并将该第二表情图像发送给客户端。
在本申请的实施例中,客户端接收到第二表情标识对应的第二表情图像后,将第二表情图像添加到表情输入候选框中,并在表情输入候选框中显示该第二表情图像。
在本申请的实施例中,客户端中存储了第二表情图像和第二表情标识的对应关系,这样客户端在响应于用户的表情添加请求时,根据表情添加请求确定第二表情标识,之后向服务器发送包括第二表情标识的表情获取请求。
在本申请的实施例中,客户端还可以接收服务器根据第二表情标识发送的表情文本,客户端可以存储第二表情标识、第二表情图像以及表情文本之间的对应关系。其中,表情文本为反映第二表情图像含义的文本,例如,第二表情图像反映的含义为微笑,则表情文本为微笑。表情文本的语言可以是中文,也可以是英文,还可以是日文等,本申请实施例不具体限定表情文本的语言。
在本申请的实施例中,客户端可以接收用户在第一会话界面输入区域的文本输入框输入的表情文本,根据第二表情标识、第二表情图像以及表 情文本之间的对应关系,在第一会话界面显示与该表情文本对应的第二表情图像。
在本申请的实施例中,根据第二表情标识从服务器获取对应的第二表情图像,无需仅仅依赖客户端的版本更新才能更新表情资源,可以随时进行表情的上线、更新、修改和维护,缩短表情上线的周期,提升客户使用表情的体验。
以上对本申请实施例提供的表情图像添加方法做了详细介绍,由此可见,本申请实施例提供的表情图像添加方法,在第一会话界面中,第二用户利用第一表情图像针对目标消息进行回复,第一用户针对该第一表情图像触发表情添加指令,该第一表情图像被添加至第一用户的表情输入候选框中。也就是说,本申请实施例能够将第二用户针对目标消息进行回复的第一表情图像添加至第一用户的表情输入候选框中,满足第一用户发送多种表情的需求。
基于以上实施例提供的表情图像添加方法,本申请实施例还提供了一种表情图像添加装置。
参见图11,该图为本申请实施例提供的一种表情图像添加装置的结构框图。
本实施例提供的表情图像添加装置1100包括:
第一接收单元1110,用于接收第一用户在第一会话界面中针对第一表情图像触发的表情添加指令,所述第一表情图像为在所述第一会话界面中第二用户针对目标消息回复的表情图像;
添加单元1120,用于根据所述表情添加指令将所述第一表情图像添加到所述第一用户的表情输入候选框中。
可选地,所述装置还包括:
第二接收单元,用于接收所述第一用户在第二会话界面的所述表情输入候选框中触发的对所述第一表情图像的选择指令;
第一发送单元,用于根据所述选择指令将所述第一表情图像发送到所述第二会话界面中。
可选地,所述表情输入候选框包括最近使用表情区域,所述添加单元具体用于:
根据所述表情添加指令将所述第一表情图像添加到所述第一用户的最近使用表情区域中。
可选地,所述第一用户的最近使用表情区域中可容纳固定数量的表情图像,且按照最近使用时间进行排序;
所述装置还包括:
挤出单元,用于当所述第一表情图像被最近使用的其他表情挤出所述最近使用表情区域时,所述第一用户的表情输入候选框中不再存在所述第一表情图像。
可选地,所述装置还包括:
鉴权单元,用于根据所述表情添加指令获取所述第一用户是否具有添加所述第一表情图像的权限的鉴权结果;
响应于所述鉴权结果为所述第一用户具有所述权限,所述添加单元执行所述将所述第一表情图像添加到所述第一用户的表情输入候选框中的步骤。
可选地,所述装置还包括:
获取单元,用于根据所述表情添加指令获取目标任务;
响应于所述第一用户执行完成所述目标任务,所述添加单元执行所述将所述第一表情图像添加到所述第一用户的表情输入候选框中的步骤。
可选地,所述目标任务包括以下其中一种或多种:
转发目标消息和分享目标链接。
可选地,所述添加单元具体用于:
根据所述表情添加指令获取所述第一表情图像的第一表情标识;
根据所述第一表情标识获取对应的第一表情图像;
将所述第一表情图像添加到所述第一用户的表情输入候选框中。
可选地,所述添加单元具体用于:
向服务器发送表情获取请求,所述表情获取请求中包括所述第一表情标识;
接收所述服务器根据所述第一表情标识发送的第一表情图像。
第一表情标识可选地,所述装置还包括:
第一存储单元,用于存储所述第一表情标识与所述第一表情图像之间的对应关系。
可选地,所述装置还包括:
第三接收单元,用于接收所述服务器根据所述第一表情标识发送的表情文本;
第二存储单元,用于存储所述第一表情标识、所述第一表情图像以及表情文本之间的对应关系。
可选地,所述装置还包括:
第四接收单元,用于接收用户在文本输入框输入的所述表情文本;
显示单元,用于根据所述对应关系显示与所述表情文本对应的第一表情图像。
可选地,所述装置还包括:
第五接收单元,用于接收来自所述服务器的表情下线请求,所述表情下线请求包括所述第一表情标识;
删除单元,用于根据所述第一表情标识从所述表情输入候选框中删除所述第一表情图像。
可选地,所述装置还包括:
第六接收单元,用于接收所述服务器发送的第二表情标识;
第二发送单元,用于向所述服务器发送表情获取请求,所述表情获取请求中包括所述第二表情标识;
第七接收单元,用于接收所述服务器根据所述第二表情标识发送的第二表情图像。
可选地,所述第一接收单元具体用于:
响应于第一用户在第一会话界面中点击针对目标消息回复的第一表情图像,确定接收到了第一用户在第一会话界面中针对第一表情图像触发的表情添加指令;
或者,
响应于第一用户将鼠标控件移动至针对目标消息回复的第一表情图像所在的区域或第一用户点击针对目标消息回复的第一表情图像所在的区域,在第一会话界面所在的区域显示表情添加控件,以及响应于第一用户点击所述表情添加控件,确定接收到了第一用户在第一会话界面中针对第一表情图像触发的表情添加指令。
基于以上实施例提供的一种表情图像添加方法,本申请实施例还提供了一种表情图像添加设备,表情图像添加设备1200包括:
处理器1210和存储器1220,处理器的数量可以一个或多个。在本申请的一些实施例中,处理器和存储器可通过总线或其它方式连接。
存储器可以包括只读存储器和随机存取存储器,并向处理器提供指令和数据。存储器的一部分还可以包括NVRAM。存储器存储有操作系统和操作指令、可执行模块或者数据结构,或者它们的子集,或者它们的扩展集,其中,操作指令可包括各种操作指令,用于实现各种操作。操作系统可包括各种系统程序,用于实现各种基础业务以及处理基于硬件的任务。
处理器控制终端设备的操作,处理器还可以称为CPU。
上述本申请实施例揭示的方法可以应用于处理器中,或者由处理器实现。处理器可以是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法的各步骤可以通过处理器中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器可以是通用处理器、DSP、ASIC、FPGA或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成上述方法的步骤。
本申请实施例还提供一种计算机可读存储介质,用于存储程序代码,该程序代码用于执行前述各个实施例的方法中的任意一种实施方式。
当介绍本申请的各种实施例的元件时,冠词“一”、“一个”、“这个”和“所述”都意图表示有一个或多个元件。词语“包括”、“包含”和“具有”都是包括性的并意味着除了列出的元件之外,还可以有其它元件。
需要说明的是,本领域普通技术人员可以理解实现上述方法实施例中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,所述的程序可存储于一计算机可读取存储介质中,该程序在执行时,可包括如上述各方法实施例的流程。其中,所述存储介质可为磁碟、光盘、只读存储记忆体(Read-Only Memory,ROM)或随机存储记忆体(Random Access Memory,RAM)等。
本说明书中的各个实施例均采用递进的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。尤其,对于装置实施例而言,由于其基本相似于方法实施例,所以描述得比较简单,相关之处参见方法实施例的部分说明即可。以上所描述的装置实施例仅仅是示意性的,其中所述作为分离部件说明的单元及模块可以是或者也可以不是物理上分开的。另外,还可以根据实际的需要选择其中的部分或者全部单元和模块来实现本实施例方案的目的。本领域普通技术人员在不付出创造性劳动的情况下,即可以理解并实施。
以上所述仅是本申请的具体实施方式,应当指出,对于本技术领域的普通技术人员来说,在不脱离本申请实施例原理的前提下,还可以做出若干改进和润饰,这些改进和润饰也应视为本申请实施例的保护范围。

Claims (18)

  1. An emoticon image adding method, the method comprising:
    receiving an emoticon adding instruction triggered by a first user for a first emoticon image in a first conversation interface, the first emoticon image being an emoticon image with which a second user replies to a target message in the first conversation interface; and
    adding the first emoticon image to an emoticon input candidate box of the first user according to the emoticon adding instruction.
  2. The method according to claim 1, wherein the method further comprises:
    receiving a selection instruction for the first emoticon image triggered by the first user in the emoticon input candidate box of a second conversation interface; and
    sending the first emoticon image to the second conversation interface according to the selection instruction.
  3. The method according to claim 1, wherein the emoticon input candidate box comprises a recently used emoticon area, and adding the first emoticon image to the emoticon input candidate box of the first user according to the emoticon adding instruction comprises:
    adding the first emoticon image to the recently used emoticon area of the first user according to the emoticon adding instruction.
  4. The method according to claim 3, wherein the recently used emoticon area of the first user accommodates a fixed number of emoticon images, sorted according to the most recent use time;
    the method further comprises:
    when the first emoticon image is squeezed out of the recently used emoticon area by other recently used emoticons, the first emoticon image no longer exists in the emoticon input candidate box of the first user.
  5. The method according to claim 1, wherein, before the step of adding the first emoticon image to the emoticon input candidate box of the first user, the method further comprises:
    obtaining, according to the emoticon adding instruction, an authentication result of whether the first user has permission to add the first emoticon image; and
    in response to the authentication result being that the first user has the permission, performing the step of adding the first emoticon image to the emoticon input candidate box of the first user.
  6. The method according to claim 1, wherein, before the step of adding the first emoticon image to the emoticon input candidate box of the first user, the method further comprises:
    acquiring a target task according to the emoticon adding instruction; and
    in response to the first user completing the target task, performing the step of adding the first emoticon image to the emoticon input candidate box of the first user.
  7. The method according to claim 6, wherein the target task comprises one or more of the following: forwarding the target message and sharing a target link.
  8. The method according to claim 1, wherein adding the first emoticon image to the emoticon input candidate box of the first user according to the emoticon adding instruction comprises:
    acquiring a first emoticon identifier of the first emoticon image according to the emoticon adding instruction;
    acquiring the corresponding first emoticon image according to the first emoticon identifier; and
    adding the first emoticon image to the emoticon input candidate box of the first user.
  9. The method according to claim 8, wherein acquiring the corresponding emoticon image according to the first emoticon identifier comprises:
    sending an emoticon acquisition request to a server, the emoticon acquisition request comprising the first emoticon identifier; and
    receiving the first emoticon image sent by the server according to the first emoticon identifier.
  10. The method according to claim 9, wherein the method further comprises:
    storing a correspondence between the first emoticon identifier and the first emoticon image.
  11. The method according to claim 9, wherein the method further comprises:
    receiving emoticon text sent by the server according to the first emoticon identifier; and
    storing a correspondence among the first emoticon identifier, the first emoticon image, and the emoticon text.
  12. The method according to claim 11, wherein the method further comprises:
    receiving the emoticon text input by the user in a text input box; and
    displaying the first emoticon image corresponding to the emoticon text according to the correspondence.
  13. The method according to claim 9, wherein the method further comprises:
    receiving an emoticon offline request from the server, the emoticon offline request comprising the first emoticon identifier; and
    deleting the first emoticon image from the emoticon input candidate box according to the first emoticon identifier.
  14. The method according to claim 1, wherein the method further comprises:
    receiving a second emoticon identifier sent by the server;
    sending an emoticon acquisition request to the server, the emoticon acquisition request comprising the second emoticon identifier; and
    receiving a second emoticon image sent by the server according to the second emoticon identifier.
  15. The method according to claim 1, wherein receiving the emoticon adding instruction triggered by the first user for the first emoticon image in the first conversation interface comprises:
    in response to the first user clicking, in the first conversation interface, the first emoticon image replying to the target message, determining that the emoticon adding instruction triggered by the first user for the first emoticon image in the first conversation interface is received;
    or,
    in response to the first user moving a mouse control to the area where the first emoticon image replying to the target message is located or the first user clicking the area where the first emoticon image replying to the target message is located, displaying an emoticon adding control in the area where the first conversation interface is located, and, in response to the first user clicking the emoticon adding control, determining that the emoticon adding instruction triggered by the first user for the first emoticon image in the first conversation interface is received.
  16. An emoticon image adding apparatus, the apparatus comprising:
    a first receiving unit, configured to receive an emoticon adding instruction triggered by a first user for a first emoticon image in a first conversation interface, the first emoticon image being an emoticon image with which a second user replies to a target message in the first conversation interface; and
    an adding unit, configured to add the first emoticon image to an emoticon input candidate box of the first user according to the emoticon adding instruction.
  17. A processing device, the device comprising: a processor and a memory;
    the memory being configured to store instructions; and
    the processor being configured to execute the instructions in the memory to perform the method according to any one of claims 1 to 15.
  18. A computer-readable storage medium comprising instructions which, when run on a computer, cause the computer to perform the method according to any one of claims 1 to 15.
PCT/CN2022/104495 2021-07-15 2022-07-08 一种表情图像添加方法、装置、设备和存储介质 WO2023284630A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/882,487 US12056329B2 (en) 2021-07-15 2022-08-05 Method and device for adding emoji, apparatus and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110800478.2A CN114461102A (zh) 2021-07-15 2021-07-15 一种表情图像添加方法、装置、设备和存储介质
CN202110800478.2 2021-07-15

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/882,487 Continuation US12056329B2 (en) 2021-07-15 2022-08-05 Method and device for adding emoji, apparatus and storage medium

Publications (1)

Publication Number Publication Date
WO2023284630A1 true WO2023284630A1 (zh) 2023-01-19

Family

ID=81405467

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/104495 WO2023284630A1 (zh) 2021-07-15 2022-07-08 一种表情图像添加方法、装置、设备和存储介质

Country Status (2)

Country Link
CN (1) CN114461102A (zh)
WO (1) WO2023284630A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12056329B2 (en) 2021-07-15 2024-08-06 Beijing Zitiao Network Technology Co., Ltd. Method and device for adding emoji, apparatus and storage medium
CN114461102A (zh) * 2021-07-15 2022-05-10 北京字跳网络技术有限公司 一种表情图像添加方法、装置、设备和存储介质

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180373683A1 (en) * 2014-04-23 2018-12-27 Klickafy, Llc Clickable emoji
CN111756917A (zh) * 2019-03-29 2020-10-09 上海连尚网络科技有限公司 信息交互方法、电子设备和计算机可读介质
CN114461102A (zh) * 2021-07-15 2022-05-10 北京字跳网络技术有限公司 一种表情图像添加方法、装置、设备和存储介质

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101146056A (zh) * 2007-09-24 2008-03-19 腾讯科技(深圳)有限公司 一种表情图标的显示方法及系统
CN103905293A (zh) * 2012-12-28 2014-07-02 北京新媒传信科技有限公司 一种获取表情信息的方法及装置
CN104618222B (zh) * 2015-01-07 2017-12-08 腾讯科技(深圳)有限公司 一种匹配表情图像的方法及装置
CN107483315B (zh) * 2016-06-07 2020-10-09 腾讯科技(深圳)有限公司 表情获取方法、装置及系统
CN107145270A (zh) * 2017-04-25 2017-09-08 北京小米移动软件有限公司 表情图标排序方法及装置
CN109871165B (zh) * 2019-02-01 2022-03-01 天津字节跳动科技有限公司 表情回应的显示方法、装置、终端设备和服务器

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180373683A1 (en) * 2014-04-23 2018-12-27 Klickafy, Llc Clickable emoji
CN111756917A (zh) * 2019-03-29 2020-10-09 上海连尚网络科技有限公司 信息交互方法、电子设备和计算机可读介质
CN114461102A (zh) * 2021-07-15 2022-05-10 北京字跳网络技术有限公司 一种表情图像添加方法、装置、设备和存储介质

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ANONYMOUS: "How to add emoticons sent by friends on WeChat", 23 February 2018 (2018-02-23), pages 1 - 2, XP093024563, Retrieved from the Internet <URL:https://jingyan.baidu.com/article/bad08e1ed03bd509c8512120.html> [retrieved on 20230216] *
ANONYMOUS: "How to add emoticons sent by others to WeChat", 18 October 2022 (2022-10-18), pages 1 - 9, XP093024576, Retrieved from the Internet <URL:https://www.bkqs.com.cn/content/zpe59mgpy.html> [retrieved on 20230216] *
LIU YONG: "Fun QQ custom emoticons", PCFAN, 1 September 2005 (2005-09-01), pages 16 - 17, XP093024569, ISSN: 1672-528X *

Also Published As

Publication number Publication date
CN114461102A (zh) 2022-05-10

Similar Documents

Publication Publication Date Title
US11095582B2 (en) Systems and methods for supplementing real-time exchanges of instant messages with automatically updateable content
US10129199B2 (en) Ensuring that a composed message is being sent to the appropriate recipient
US9171291B2 (en) Electronic device and method for updating message body content based on recipient changes
US9406049B2 (en) Electronic device and method for updating message recipients based on message body indicators
WO2023284630A1 (zh) 一种表情图像添加方法、装置、设备和存储介质
US8516049B2 (en) Administering instant messaging (‘IM’) chat sessions
JP6501919B2 (ja) 音声チャットモード自己適応方法及び装置
KR20180051556A (ko) 서비스 기능을 구현하는 방법 및 디바이스
US8943147B2 (en) Sending a chat context to a recipient
CN109729005B (zh) 消息处理方法、装置、计算机设备和存储介质
WO2017172427A1 (en) Cross-mode communication
EP2658189B1 (en) Electronic device and method for updating message body content based on recipient changes
US9083693B2 (en) Managing private information in instant messaging
CN114500432A (zh) 会话消息收发方法及装置、电子设备、可读存储介质
US10200338B2 (en) Integrating communication modes in persistent conversations
US20180189017A1 (en) Synchronized, morphing user interface for multiple devices with dynamic interaction controls
WO2023016536A1 (zh) 一种交互方法、装置、设备和存储介质
KR102054728B1 (ko) 이메일 연계 채팅방 관리 장치 및 방법
US12056329B2 (en) Method and device for adding emoji, apparatus and storage medium
CN107222398B (zh) 社交消息控制方法、装置、存储介质和计算机设备
US8214442B2 (en) Facilitating an extended IM session in a secure way
WO2023093325A1 (zh) 添加好友的方法、装置、服务器及存储介质
CN116389401A (zh) 基于业务维度的交流方法、装置、计算机设备及存储介质
CN115639939A (zh) 发起待办流程的方法、装置、电子设备及可读存储介质
CA2810691C (en) Electronic device and method for updating message recipients based on message body indicators

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22841266

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22841266

Country of ref document: EP

Kind code of ref document: A1