CN112817670A - Information display method, device, equipment and storage medium based on session - Google Patents

Information display method, device, equipment and storage medium based on session

Info

Publication number
CN112817670A
Authority
CN
China
Prior art keywords
expression
elements
message
session
interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010780272.3A
Other languages
Chinese (zh)
Other versions
CN112817670B (en)
Inventor
沙莎
肖仙敏
刘立强
陈世玉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010780272.3A priority Critical patent/CN112817670B/en
Publication of CN112817670A publication Critical patent/CN112817670A/en
Application granted granted Critical
Publication of CN112817670B publication Critical patent/CN112817670B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/5866Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Information Transfer Between Computers (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a session-based information display method, apparatus, device, and computer-readable storage medium. The method includes: receiving a session message in a session interface, the session message including text content and a media element; and, when the text content is of a message type conforming to the association logic, displaying the expression element that has the association logic with the text content, together with the media element, in a specified form in the session interface. Through the application, the richness of the displayed expression elements can be improved.

Description

Information display method, device, equipment and storage medium based on session
Technical Field
The present application relates to the field of mobile communications technologies, and in particular, to a method, an apparatus, a device, and a computer-readable storage medium for displaying information based on a session.
Background
With the development of mobile communication technology, in order to better convey emotion during instant-messaging conversations, when a session message sent by a user contains certain keywords that trigger expression elements, the expression elements associated with those keywords are dynamically displayed in the session interface. For example, when the user inputs the session message "happy birthday", expression elements in a "cake" style are presented in the session interface in the form of an Easter-egg emoji rain.
In the related art, when an input session message contains, in addition to the keywords that trigger expression elements, media elements such as expression elements or picture elements, only the expression elements associated with the keywords can be dynamically displayed in the session interface. The displayed expression elements are therefore monotonous, which gives the user a poor experience.
Disclosure of Invention
The embodiment of the application provides a session-based information display method, device and equipment and a computer-readable storage medium, which can improve the richness of expression elements.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides an information display method based on a session, which comprises the following steps:
receiving a session message in a session interface, wherein the session message comprises text content and media elements; and when the text content is a message type conforming to the association logic, displaying the expression elements and the media elements which have the association logic with the text content in a session interface in a specified form.
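The application does not disclose an implementation of the association-logic check; the following is a minimal illustrative sketch (the keyword rules and all names are hypothetical) of matching the text content against trigger keywords and combining the associated expression element with the media elements carried in the session message:

```python
# Hypothetical keyword-to-emoticon association rules; illustrative only,
# not disclosed by the application.
ASSOCIATION_RULES = {
    "happy birthday": "cake",
    "congratulations": "fireworks",
}

def match_association(text: str):
    """Return the emoticon style associated with the text, or None if the
    text is not of a message type conforming to the association logic."""
    lowered = text.lower()
    for keyword, emoticon in ASSOCIATION_RULES.items():
        if keyword in lowered:
            return emoticon
    return None

def display_session_message(text: str, media_elements: list) -> list:
    """If the text matches the association logic, combine the associated
    expression element with the media elements for display; otherwise the
    message is shown plainly with no emoji-rain effect."""
    emoticon = match_association(text)
    if emoticon is None:
        return []
    return [emoticon] + media_elements

display_session_message("Happy birthday!", ["smiley"])  # ['cake', 'smiley']
```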
An embodiment of the present application provides an information display device based on a session, including:
the message receiving module is used for receiving a session message in a session interface, wherein the session message comprises text content and media elements;
and the message display module is used for displaying the expression elements and the media elements which have the association logic with the text content in a session interface in a specified form when the text content is the message type conforming to the association logic.
In the above solution, the apparatus further includes an editing module, where the editing module is configured to, when the media element includes an emoticon, before receiving a conversation message in a conversation interface,
presenting a text editing interface, and presenting a text editing box and an expression selection function item in the text editing interface;
in response to a text editing operation triggered based on the text editing box and an expression selection operation triggered based on the expression selection function item, presenting text content edited by the text editing operation and expression elements selected by the expression selection operation;
and responding to a message sending operation triggered based on the text editing interface, and sending a conversation message containing the edited text and the selected emoticons.
In the above solution, the editing module is further configured to, when the media element includes a picture element, before receiving a session message in a session interface,
presenting a text editing interface, and presenting a text editing box and a picture selection function item in the text editing interface;
in response to a text editing operation triggered based on the text editing box and a picture selecting operation triggered based on the picture selecting function item, presenting text content edited by the text editing operation and picture elements selected by the picture selecting operation;
and responding to a message sending operation triggered based on the text editing interface, and sending a conversation message containing the edited text and the selected picture element.
In the foregoing solution, when the media element includes a first number of picture elements and the first number is greater than a second number, the message display module is further configured to
Presenting the conversation message including the text content in a conversation interface, and
independently presenting a second number of the first number of picture elements in the session message, and overlappingly presenting picture elements of the first number of picture elements except the second number of picture elements.
In the foregoing solution, when the media element includes at least one of a video element and an audio element, the message display module is further configured to
And in a session interface, presenting the text content and the media element by adopting a session message, wherein the text content and the media element form the message content of the session message.
In the above solution, when the media element includes a video element, the apparatus further includes an image capture module, where the image capture module is configured to:
in response to a static image capture instruction for the selected video element, capture the first frame image of the video element and determine the captured first frame image as the video image corresponding to the video element, so that the text content and the video image are presented in a session message in the session interface; or,
in response to a dynamic image capture instruction for the selected video element, capture a dynamic video image corresponding to the video element, determine the dynamic video image as the video image corresponding to the video element, and present the text content and the video image in a session message in the session interface, the dynamic video image being obtained by continuously capturing and combining a plurality of sequential frame images starting from the first frame image of the video element.
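As a rough illustration of the static and dynamic capture modes described above (the application does not specify an implementation; frames are modeled as plain values and all names are hypothetical):

```python
# Illustrative sketch only: frames stand in for decoded video frames.
def capture_static_image(frames: list):
    """Static capture: use the first frame of the video element as the
    video image presented in the session message."""
    return frames[0]

def capture_dynamic_image(frames: list, count: int) -> list:
    """Dynamic capture: combine a run of sequential frames, starting from
    the first frame, into an animated video image."""
    return frames[:count]

frames = ["f0", "f1", "f2", "f3", "f4"]
capture_static_image(frames)      # 'f0'
capture_dynamic_image(frames, 3)  # ['f0', 'f1', 'f2']
```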
In the foregoing solution, the message display module is further configured to display, in the session interface, a plurality of first expression copies of the expression element that has the association logic with the text content, together with a plurality of second expression copies of the media element merged into the plurality of first expression copies;
and to show the moving process of the plurality of first expression copies and the plurality of second expression copies.
In the foregoing solution, when the media element includes at least one of a video element, an audio element and a picture element, the apparatus further includes a detail presenting module, configured to:
receive a trigger operation for a second expression copy during the moving process of the plurality of first expression copies and the plurality of second expression copies;
and, in response to the trigger operation, present a detail page corresponding to the media element and present the content of the media element in the detail page.
In the above scheme, when the media elements are expression elements and the number of the media elements exceeds the target number,
the message display module is also used for combining the expression elements which have the association logic with the text content and the media elements with the target number to obtain combined expression elements;
and displaying the moving process of the combined emoticons in the session interface.
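A minimal sketch of capping the combination at the target number, assuming that media-element emoticons beyond the target number are simply dropped (the application does not specify the selection rule; names are illustrative):

```python
def combine_emoticons(associated: str, media: list, target: int) -> list:
    """Combine the expression element associated with the text content with
    at most `target` of the media-element emoticons from the session message,
    yielding the combined expression element to be animated."""
    return [associated] + media[:target]

combine_emoticons("cake", ["smile", "wink", "laugh", "cry"], 2)
# ['cake', 'smile', 'wink']
```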
In the above scheme, the message display module is further configured to perform superposition combination or parallel combination on the expression elements having the association logic with the text content and the media elements to obtain combined expression elements;
and displaying the bounce process of the combined expression element in the session interface.
In the above scheme, the message display module is further configured to combine the expression element having the association logic with the text content with the media element to obtain a combined expression element;
and displaying the process that the combined expression element moves along the target track of the corresponding target pattern in the session interface.
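The application does not fix a particular target pattern; assuming, for illustration, a heart-shaped pattern, the target track could be sampled as a parametric curve along which the combined expression element is moved frame by frame:

```python
import math

def heart_track(steps: int) -> list:
    """Sample points along a classic parametric heart curve; the combined
    expression element would be drawn at each sampled point in turn.
    The heart shape is an assumed example, not mandated by the patent."""
    points = []
    for i in range(steps):
        t = 2 * math.pi * i / steps
        x = 16 * math.sin(t) ** 3
        y = (13 * math.cos(t) - 5 * math.cos(2 * t)
             - 2 * math.cos(3 * t) - math.cos(4 * t))
        points.append((x, y))
    return points

track = heart_track(60)  # 60 animation positions along the target track
```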
In the above scheme, the message display module is further configured to combine the expression elements having the association logic with the text content and the media elements to obtain combined expression elements;
and in the session interface, showing a plurality of third emotion copies of the combined expression element, and showing the moving process of the third emotion copies.
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the information display method based on the conversation provided by the embodiment of the application when the executable instructions stored in the memory are executed.
The embodiment of the application provides a computer-readable storage medium storing executable instructions which, when executed by a processor, implement the session-based information display method provided by the embodiment of the application.
The embodiment of the application has the following beneficial effects:
When the text content in the session message is of a message type conforming to the association logic, the expression element that has the association logic with the text content is combined with the media element in the session message and displayed in a specified form in the session interface. The expression elements displayed in the session interface thus include both the expression element associated with the text and the media element carried in the session message, which improves the richness of the expression elements. Compared with the existing approach in which only a plain-text message supports information display or triggers an Easter-egg display, the supported message types are richer, the applicable range of the information display is expanded, user stickiness of the product is improved, and the more diversified display requirements of user information sharing are met.
Drawings
FIGS. 1A-1D are schematic diagrams of a session-based information presentation interface provided by an embodiment of the present application;
FIG. 2 is an alternative architectural diagram of a session-based information presentation system according to an embodiment of the present application;
fig. 3 is an alternative structural schematic diagram of an electronic device provided in an embodiment of the present application;
fig. 4 is an alternative flowchart of a session-based information presentation method according to an embodiment of the present application;
FIGS. 5A-5C are schematic diagrams of display interfaces provided by embodiments of the present application;
FIGS. 6A-6D are schematic diagrams of display interfaces provided by embodiments of the present application;
FIGS. 7A-7D are schematic diagrams of display interfaces provided by embodiments of the present application;
FIGS. 8A-8C are schematic diagrams of display interfaces provided by embodiments of the present application;
fig. 9 is an alternative flowchart of a session-based information presentation method according to an embodiment of the present application;
fig. 10 is an alternative flowchart of a session-based information presentation method according to an embodiment of the present application;
fig. 11 is an alternative flowchart of a session-based information presentation method according to an embodiment of the present application;
fig. 12 is an alternative flowchart of a session-based information presentation method according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of a session-based information presentation apparatus according to an embodiment of the present application.
Detailed Description
In order to make the objectives, technical solutions and advantages of the present application clearer, the present application will be described in further detail below with reference to the accompanying drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, the terms "first", "second", and so on are used merely to distinguish between similar objects and do not represent a particular ordering of the objects. It is understood that, where permitted, "first", "second", and so on may be interchanged in a particular order or sequence, so that the embodiments of the application described herein can be practiced in an order other than that illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before further detailed description of the embodiments of the present application, terms and expressions referred to in the embodiments of the present application will be described, and the terms and expressions referred to in the embodiments of the present application will be used for the following explanation.
1) Client: an application program running in the terminal that provides various services, such as a video playing client, an instant messaging client, a live streaming client, and the like.
2) In response to: indicates the condition or state on which a performed operation depends; when the dependent condition or state is satisfied, the one or more performed operations may occur in real time or after a set delay. Unless otherwise specified, there is no restriction on the order in which the operations are performed.
3) Easter-egg emoji rain: when a session message sent by a user contains certain keywords, an Easter-egg emoji rain is triggered in the session interface.
4) Public chat window component (AIO, All In One): users of the QQ mobile client participate in many different types of conversations, such as friend chats, group chats and public accounts. In order to provide a uniform interactive experience, the software provides a chat window component shared by the different conversation types, within which the users' behavior habits, such as input and click operations, can be regarded as consistent.
During a conversation, when a session message sent by a user contains certain keywords that trigger expression elements, the expression elements associated with those keywords are dynamically displayed in the session interface. Referring to FIGS. 1A-1D, which are schematic diagrams of the session-based information display interface provided by the embodiment of the application: in FIG. 1A, the user inputs the session message "happy birthday [emoji]", and in the session interface shown in FIG. 1B, expression elements in the "cake" style corresponding to the text "happy birthday" are dynamically displayed; in FIG. 1C, the user inputs the session message "happy birthday [emoji]", and in the session interface shown in FIG. 1D, the "cake"-style expression elements corresponding to the text "happy birthday" are likewise dynamically displayed. The dynamically displayed expression elements thus contain only the expression elements associated with certain keywords, so the displayed expression elements are monotonous.
In view of this, embodiments of the present application provide a method, an apparatus, a device, and a computer-readable storage medium for displaying information based on a session, so as to improve richness of emoticons.
Referring to fig. 2, fig. 2 is an alternative architecture diagram of the session-based information presentation system 100 according to an embodiment of the present application. In order to support an exemplary application, the terminal 400 (terminals 400-1 and 400-2 are shown as examples) is connected to the server 200 through the network 300; the network 300 may be a wide area network, a local area network, or a combination of the two, and uses wireless links for data transmission.
In practical applications, the terminal 400 may be various types of user terminals such as a smart phone, a tablet computer, a notebook computer, and the like, and may also be a desktop computer, a game console, a television, or a combination of any two or more of these data processing devices; the server 200 may be a single server configured to support various services, may also be configured as a server cluster, may also be a cloud server, and the like.
In actual implementation, a client, such as a video playing client, an instant messaging client, a live broadcast client, etc., is provided on the terminal 400, and when a user opens the client on the terminal 400 for a session, the terminal 400 is configured to edit a session message including text content and media elements in response to an editing operation for the session message, and send the session message to the server 200 in response to a message sending operation for the session message;
the server 200 is configured to determine whether the text content is a message type conforming to the association logic based on the session message, acquire an emoticon having the association logic with the text content when it is determined that the message type corresponding to the text content is the message type conforming to the association logic, and return the acquired emoticon having the association logic with the text content and a media element included in the session message to the terminal 400;
and the terminal 400 is used for displaying the emoticon which has the associated logic with the text content and the media element contained in the session message in a specified form in the session interface.
Referring to fig. 3, fig. 3 is an optional schematic structural diagram of an electronic device 500 provided in the embodiment of the present application. In practical applications, the electronic device 500 may be the terminal 400 or the server 200 in fig. 2; an electronic device implementing the session-based information presentation method of the embodiment of the present application is described by taking the electronic device as the terminal 400 shown in fig. 2 as an example. The electronic device 500 shown in fig. 3 includes: at least one processor 510, a memory 550, at least one network interface 520, and a user interface 530. The various components in the electronic device 500 are coupled together by a bus system 540. It is understood that the bus system 540 is used to enable communication among these components. In addition to a data bus, the bus system 540 includes a power bus, a control bus, and a status signal bus. For clarity of illustration, however, the various buses are all labeled as bus system 540 in fig. 3.
The processor 510 may be an integrated circuit chip having signal processing capabilities, such as a general-purpose processor, a Digital Signal Processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components; the general-purpose processor may be a microprocessor, any conventional processor, or the like.
The user interface 530 includes one or more output devices 531 enabling presentation of media content, including one or more speakers and/or one or more visual display screens. The user interface 530 also includes one or more input devices 532, including user interface components to facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 550 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 550 optionally includes one or more storage devices physically located remote from processor 510.
The memory 550 may comprise volatile memory or nonvolatile memory, and may also comprise both volatile and nonvolatile memory. The nonvolatile memory may be a Read Only Memory (ROM), and the volatile memory may be a Random Access Memory (RAM). The memory 550 described in embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 550 can store data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 551 including system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
a network communication module 552 for communicating to other computing devices via one or more (wired or wireless) network interfaces 520, exemplary network interfaces 520 including: bluetooth, wireless compatibility authentication (WiFi), and Universal Serial Bus (USB), etc.;
a presentation module 553 for enabling presentation of information (e.g., a user interface for operating peripherals and displaying content and information) via one or more output devices 531 (e.g., a display screen, speakers, etc.) associated with the user interface 530;
an input processing module 554 to detect one or more user inputs or interactions from one of the one or more input devices 532 and to translate the detected inputs or interactions.
In some embodiments, the session-based information presentation apparatus provided by the embodiments of the present application may be implemented in software, and fig. 3 illustrates a session-based information presentation apparatus 555 stored in a memory 550, which may be software in the form of programs and plug-ins, and includes the following software modules: a message presentation module 5551 and an emoticon display module 5552, which are logical and thus can be arbitrarily combined or further separated according to the implemented functions.
The functions of the respective modules will be explained below.
In other embodiments, the session-based information presentation apparatus provided in the embodiments of the present application may be implemented in hardware. For example, it may be a processor in the form of a hardware decoding processor programmed to execute the session-based information presentation method provided in the embodiments of the present application; the processor in the form of a hardware decoding processor may be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), or other electronic components.
Next, a description is given of the session-based information presentation method according to the embodiment of the present application, and in actual implementation, the session-based information presentation method according to the embodiment of the present application may be implemented by a server or a terminal alone, or may be implemented by a server and a terminal in cooperation.
Referring to fig. 4, fig. 4 is an alternative flowchart of a session-based information presentation method provided in the embodiment of the present application, and the steps shown in fig. 4 will be described in detail.
Step 101: the terminal receives a session message in the session interface, wherein the session message comprises text content and media elements.
In practical application, a client such as an instant messaging client or a live streaming client is installed on the terminal. When a user opens the client on the terminal to conduct a conversation, a session message is received; the session message may be a message edited by the user in the message editing box, or a session message edited by a conversation partner and delivered by the server.
When editing a session message, the terminal presents the edited session message in response to the editing operation. The session message is composed of text content and media elements, where the media elements include at least one of the following: expression elements, picture elements, audio elements, and video elements.
In some embodiments, when the media element includes an emoticon, before the terminal receives the conversation message, the terminal may obtain the edited conversation message by:
presenting a text editing interface, and presenting a text editing box and an expression selection function item in the text editing interface; in response to a text editing operation triggered based on the text editing box and an expression selection operation triggered based on the expression selection function item, presenting the text content edited by the text editing operation and the expression elements selected by the expression selection operation; and, in response to a message sending operation triggered based on the text editing interface, sending a session message containing the edited text content and the selected emoticons.
Here, the emoticon refers to a pictographic expression element, also called an emoji or "small yellow face" emoticon, which comes in various categories such as animals, fruit and food, facial expressions, plants and nature, zodiac signs and constellations, sports and leisure, celebrations, and characters; for example, a smiling face represents a smile and a cake represents food. When a session message is edited, the selected expression elements may be of the same or different categories, and their number may be one or more.
Referring to figs. 5A to 5C, which are schematic diagrams of display interfaces provided in an embodiment of the present application: in fig. 5A, the text edited in the text editing box A1 through a text editing operation is "happy birthday", and the expression element selected through the expression selection function item A2 is "[emoji]"; based on the message sending operation triggered by the sending function item A3, the session message "happy birthday [emoji]" is received and presented in the session interface shown in fig. 5B. In fig. 5C, when the session message contains a large number of emoticons, the session message "happy birthday [emoji]" shown in fig. 5C is presented.
In some embodiments, when the media element includes a picture element, before the session message is received by the terminal, the terminal may obtain the edited session message by:
presenting a text editing interface, and presenting a text editing box and a picture selection function item in the text editing interface; in response to a text editing operation triggered based on a text editing box and a picture element adding operation triggered based on a picture selection function item, presenting text contents edited by the text editing operation and a picture element selected by the picture element adding operation; and responding to a message sending operation triggered based on the text editing interface, and sending a conversation message containing the edited text content and the selected picture element.
In some embodiments, when the media element includes a first number of picture elements and the first number is greater than the second number, the received conversation message may be presented in a conversation interface of the terminal by:
presenting a conversation message including text content in the conversation interface, independently presenting a second number of picture elements in the first number of picture elements in the conversation message, and presenting picture elements except the second number of picture elements in the first number of picture elements in an overlapping manner.
Here, when a conversation message is edited with a large number of selected picture elements, in order to ensure timely presentation of the conversation message, the first several picture elements may be presented in sequence in the conversation message and the remaining picture elements presented as a stack; when the user triggers a stacked picture element, the terminal responds to the triggering operation by presenting a detail page corresponding to that picture element and presenting the picture details in the detail page.
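The split between individually presented and stacked picture elements described above can be sketched in Python as follows; the cut-off of four individually shown pictures is a hypothetical parameter (the embodiment leaves the second number unspecified):

```python
def layout_pictures(pictures, shown_limit=4):
    """Split picture elements into those presented individually in the
    conversation message and those presented as an overlapped stack."""
    if len(pictures) <= shown_limit:
        return pictures, []
    return pictures[:shown_limit], pictures[shown_limit:]
```

With eight pictures and a limit of four, the first four are shown in sequence and the remaining four go into the stack, matching the fig. 6C example below.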
Referring to fig. 6A to 6D, fig. 6A to 6D are schematic diagrams of a display interface provided in an embodiment of the present application. In fig. 6A, text is edited through a text editing operation triggered in the text editing box B1, and the picture selection function item B2 is used to add selected picture elements; the text edited by the terminal is "happy birthday", and several pictures are added through the picture selection function item B2. Based on a message sending operation triggered by the sending function item B3, a conversation message containing the text content and the selected picture elements is received and presented as shown in fig. 6B. In fig. 6C, when the number of selected pictures is large, for example 8, the first 4 pictures, shown at "a", may be presented in sequence in the conversation message while the remaining 4 pictures are presented as a stack in the manner shown at "B"; when the user triggers the stacked pictures, the terminal responds to the triggering operation by presenting the detail page of the corresponding picture shown in fig. 6D and presenting the picture details in the detail page.
In some embodiments, when the media element includes at least one of a video element or an audio element, the terminal may further obtain the edited conversation message including the text content and the video element or the audio element before presenting the received conversation message in the conversation interface of the terminal, and present the received conversation message in the conversation interface by:
in the session interface, using one session message to present the text content and the media elements, which together constitute the message content of the session message.
Here, one conversation message includes both text content and a video element, or one conversation message includes both text content and an audio element.
Referring to fig. 7A-7B, fig. 7A-7B are schematic diagrams of display interfaces provided by an embodiment of the present application. In fig. 7A, text is edited through a text editing operation triggered in the text editing box C1, and the media addition function item C2 is used to add video elements; the text content edited by the terminal is "happy birthday", and multiple video elements are added through the media addition function item C2. Based on a message sending operation triggered by the sending function item C3, a conversation message containing the text content and the added video elements is received and presented as shown in fig. 7B, where the conversation message contains the text content and two video elements.
In some embodiments, when the media element includes a video element, the video element presented in the session message may be a video image corresponding to the video element. In actual implementation, the terminal may intercept the video image corresponding to the video element by:
in response to a static image intercepting instruction aiming at the selected video element, intercepting a first frame image of the video element, determining the intercepted first frame image as a video image of the corresponding video element, and presenting text content and the video image by adopting a session message in a session interface; or,
and in response to a dynamic image intercepting instruction aiming at the selected video element, intercepting to obtain a dynamic video image of the corresponding video element, determining the dynamic video image as a video image of the corresponding video element, and presenting text content and the video image by adopting a session message in a session interface, wherein the dynamic video image is obtained by continuously intercepting and combining a plurality of sequence frame images based on a first frame image of the video element.
When a dynamic image of the video element is captured, a preset number of frame images may be intercepted starting from the first frame of the video element and combined to obtain a dynamic image (i.e., a GIF image) corresponding to the video element; alternatively, a video clip of preset duration (e.g., 1.5 seconds) may be intercepted and converted into a dynamic image.
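Modeling a video as a list of frames, the two interception modes can be sketched as below; the function name, the preset frame count, and representing the dynamic image as a tuple of frames are illustrative assumptions (actually encoding a GIF is elided):

```python
def capture_preview(frames, dynamic=False, preset_count=15):
    """Capture a preview for a video element: either its static first
    frame, or a dynamic image built from a preset number of sequential
    frames starting at the first frame."""
    if not dynamic:
        return frames[0]
    return tuple(frames[:preset_count])  # stands in for a GIF assembly step
```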
The terminal uploads each video element and its corresponding dynamic image to the COS platform, and assembles each video element, its corresponding dynamic image, and the text content based on their resource identifiers. Fields for the dynamic-image resource and the video-element resource are added to the assembled message to represent the dynamic image and the corresponding video element. The terminal sends the assembled message to a server. The server either returns to the terminal the expression elements logically associated with the text content in the session message together with the video images corresponding to the video elements carried by the session message, so that both are presented in the session interface of the terminal; or the server combines the expression elements associated with the text content with the video images corresponding to the video elements carried in the session message to obtain combined expression elements, and returns the combined expression elements to the terminal, so that the session message containing the text content and the video elements is displayed in the session interface of the terminal and the combined expression elements are dynamically displayed.
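The assembly of text content with video-element and dynamic-image resource identifiers can be sketched as follows; the field names are hypothetical, since the embodiment only states that such fields are added to the assembled message:

```python
def assemble_message(text, uploads):
    """uploads: list of (video_resource_id, dynamic_image_resource_id)
    pairs returned after uploading to the object storage platform."""
    return {
        "text": text,
        "video_elements": [video_id for video_id, _ in uploads],
        "dynamic_images": [image_id for _, image_id in uploads],
    }
```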
When the number of the selected video elements is large, in order to ensure the presentation timeliness of the session message, the video images corresponding to the first video elements can be presented in the session message in sequence, and the video images corresponding to the remaining video elements are presented after being stacked.
Specifically, when the media elements include a third number of video images and the third number is greater than the fourth number, the received conversation message may be presented in a conversation interface of the terminal by:
and presenting a session message comprising text content in the session interface, independently presenting a fourth number of video images in the third number of video images in the session message, and presenting video images except the fourth number of video images in the third number of video images in an overlapping manner.
Referring to fig. 7C-7D, fig. 7C-7D are schematic diagrams of display interfaces provided by an embodiment of the present application. In fig. 7C, when many video images are selected while editing a conversation message, for example 10, the first 5 video images, shown at "a", may be presented in sequence in the conversation message while the remaining 5 are presented as a stack in the manner shown at "B"; when the user triggers a stacked video image, the terminal responds to the triggering operation by presenting the detail page of the corresponding video element shown in fig. 7D and playing the video details in the detail page.
Step 102: when the message type corresponding to the text content is a message type conforming to the association logic, displaying, in a specified form in the conversation interface, the expression elements having the association logic with the text content together with the media elements.
In some embodiments, after receiving the session message, the terminal also presents the session message including the text content and the emoticon. Here, the execution order of presenting the session message relative to step 102 is described. In some embodiments, the session message including the text content and the emoticon may be presented first, and then, after a period of time (which may be set according to actual needs, for example, 3 seconds), step 102 is executed; that is, when the message type corresponding to the text content is a message type conforming to the association logic, the emoticon and media elements having the association logic with the text content are presented in the session interface in a specified form.
In other embodiments, step 102 may be executed first, that is, when the message type corresponding to the text content is a message type conforming to the association logic, the emoticon and media elements having the association logic with the text content are displayed in the session interface in a specified form, and then, after a period of time (which may be set according to actual needs, for example, 2 seconds), the session message including the text content and the emoticon is presented in the session interface.
In still other embodiments, the two may be performed simultaneously: while the session message including the text content and the emoticon is presented in the session interface, when the message type corresponding to the text content is a message type conforming to the association logic, the emoticon and media elements having the association logic with the text content are presented in the session interface in a specified form.
Next, a message type corresponding to the text content will be described. The message type corresponding to the text content, that is, the message type of the session message, may include a message type conforming to the association logic and a message type not conforming to the association logic.
For a message type conforming to the association logic, the text content has an emoticon corresponding to the text content, that is, the text content and the emoticon have the association logic, specifically:
in some embodiments, the text content includes a keyword, and the keyword has one or more corresponding expression elements. Whether an expression element corresponding to the extracted keyword exists can be determined by extracting the keyword from the text content and looking up a mapping table between keywords and expression elements based on the keyword; when one or more expression elements corresponding to the extracted keyword exist, the message type is determined to be a message type conforming to the association logic; otherwise, the message type is determined to be a message type not conforming to the association logic;
in other embodiments, the session message carries a message type identifier used to identify that the message type of the session message conforms to the association logic. That is, after the session message is received, it is parsed to check whether it carries a message type identifier; when it does, the message type is determined to be a message type conforming to the association logic; otherwise, the message type is determined to be a message type not conforming to the association logic.
In some embodiments, when the terminal receives the conversation message, it first needs to determine whether the text content in the conversation message is of a message type conforming to the association logic, for example by determining whether the conversation message contains a keyword that triggers the colored egg expression rain, such as "happy birthday", "miss you", "wealth rolling in", "flourishing and prosperous", "good fortune", "surplus year after year", or "instant success".
In some embodiments, keyword extraction may be performed on text content of the session message to obtain a first keyword corresponding to the session message; matching the first keywords with second keywords corresponding to the candidate expression elements; and when the first keyword is matched with the second keyword, determining that the text content of the conversation message is associated with an expression element, and taking the candidate expression element corresponding to the second keyword as the expression element associated with the text content.
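A minimal sketch of this keyword-to-expression matching; the mapping table contents and names are illustrative examples, not the actual table used by the embodiment:

```python
KEYWORD_TO_EMOTICON = {  # hypothetical mapping table
    "happy birthday": "cake",
    "good fortune": "gold_ingot",
}

def match_emoticon(text):
    """Return the candidate expression element whose keyword occurs in
    the text, or None when the message type does not conform to the
    association logic."""
    for keyword, emoticon in KEYWORD_TO_EMOTICON.items():
        if keyword in text:
            return emoticon
    return None
```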
When the text content in the conversation message is determined to be of a message type conforming to the association logic, the emoticons having the association logic with the text content and the media elements carried in the conversation message are displayed in the conversation interface in a specified form.
The specified form may be that the expression element associated with the text content and the media element carried in the session message are independently displayed in the session interface, or that the expression element associated with the text content and the media element carried in the session message are combined, and the combined expression element and media element are dynamically displayed.
It should be noted that, in actual implementation, the order in which the session message and the expression elements are presented in the session interface may vary: the session message may be presented first, with the expression elements and media elements presented after the session message has been shown for a certain time; or the expression elements and media elements may be presented at the same time as the session message; or the expression elements and media elements may be presented first, with the session message presented after they have been shown for a certain time.
In some embodiments, the combined emoticons and media elements may be dynamically presented as follows:
displaying a plurality of first expression copies of the expression elements and a plurality of second expression copies of the media elements fused among the plurality of first expression copies, and displaying the moving process of the plurality of first expression copies and the plurality of second expression copies in the session interface.
Here, the media element includes at least one of the following elements: expression elements, picture elements, video elements, and audio elements. A first expression copy corresponds to the expression element associated with the text in the conversation message: the corresponding expression element is copied, and its size and style are the same as those of the original expression element. A second expression copy corresponds to a media element carried in the session message: when the media element is an expression element, the second expression copy is obtained by copying the corresponding media element; when the media element is a picture element or a video element, the second expression copy is obtained by copying the picture or the video image corresponding to the media element and then reducing it to a fixed size.
That is, for a picture element, the displayed second expression copy is a thumbnail of the original picture element. During dynamic display, the second expression copy corresponding to each picture element can be of a fixed size; when the picture's aspect ratio does not match that fixed size, the second expression copy can be scaled down in equal proportion, so that the true proportions of the picture are preserved as much as possible and incomplete display of the picture element caused by an improper fill mode is avoided. For a video element, the second expression copy is a thumbnail of the corresponding video image, so its display form and size are the same as those of the copy corresponding to a picture element, and details are not repeated here.
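The equal-proportion reduction of a second expression copy to a fixed size can be sketched as follows; the 96-pixel box is a hypothetical value:

```python
def fit_thumbnail(width, height, box=96):
    """Scale (width, height) down proportionally so the longer side fits
    the fixed box size, preserving the picture's true aspect ratio;
    images already within the box are left unchanged."""
    longest = max(width, height)
    if longest <= box:
        return width, height
    scale = box / longest
    return round(width * scale), round(height * scale)
```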
The fusion here is not a fusion of two expression copies into one; rather, the plurality of first expression copies of the expression element is taken as one whole and the plurality of second expression copies of the media elements as another, and the expression copies of the two wholes are displayed interleaved.
After obtaining the plurality of first expression copies of the expression element and the plurality of second expression copies of the media elements, the terminal drops one expression copy at a time at an average interval determined by the total number of copies and the total duration. The position at which each copy drops on the screen is random, and the copy to drop each time is also selected by a random rule; that is, the drop rule may be that the first expression copies and the second expression copies drop in random order. For example, if the total number of first and second expression copies is n, they can be represented by a linked list of length n in which each expression copy corresponds to one node. Each time, a number m within n is randomly generated, the expression copy at position m is taken out and dropped, and its node is deleted from the linked list; this repeats until the linked list is empty. In this way the first and second expression copies do not drift in a fixed pattern, bringing a different experience to the user.
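The random drop rule above can be sketched in Python, using a plain list in place of the linked list; names are illustrative:

```python
import random

def drop_order(copies, rng=None):
    """Yield expression copies one at a time: each step picks a random
    position m, drops the copy there, and deletes it, until none remain."""
    rng = rng or random.Random()
    pending = list(copies)
    while pending:
        m = rng.randrange(len(pending))
        yield pending.pop(m)
```

Every copy drops exactly once, but in an order that differs from run to run, so the rain never drifts in a fixed pattern.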
During the movement of the plurality of first expression copies and the plurality of second expression copies, the copies can move from top to bottom of the conversation interface along the same or different movement trajectories and at the same or different movement rates, for example falling from top to bottom in a free-fall manner, so as to achieve the effect of the colored egg expression rain; a specific dynamic display form can be seen in fig. 5B-5C.
In some embodiments, when the media element includes at least one of a video element, an audio element, and a picture element, the terminal may also present the content of the media element by:
receiving a trigger operation aiming at a second expression copy in the moving process of the first expression copies and the second expression copies; and in response to the triggering operation, presenting a detail page of the corresponding media element, and presenting the content of the media element in the detail page.
In practical application, function items such as save/download/forward may be presented in the detail page, so as to save, download, or forward the content of the presented media element.
For example, for a second expression copy corresponding to a picture element falling in fig. 6B-6C, when the user triggers the second expression copy, the terminal responds to the triggering operation by presenting the detail page of the corresponding picture shown in fig. 6D, and presents the original image of the picture element in the detail page. For a second expression copy corresponding to a video element falling in fig. 7B-7C, when the user triggers the second expression copy, the terminal responds to the triggering operation by presenting the detail page of the corresponding video element shown in fig. 7D; if the video image is a dynamic image, the dynamic image is played first on the detail page shown in fig. 7D, and once the video has been downloaded locally, the detailed video content continues to play.
In some embodiments, when the media elements are emoticons and the number of media elements exceeds the target number, the terminal may present the emoticons and the media elements having the association logic with the text content in a specified form in the conversation interface by:
combining the expression elements associated with the text content with the target number of media elements in the session message to obtain combined expression elements;
and in the conversation interface, showing the moving process of the combined emoticons.
Here, when too many emoticons are carried in the conversation message, only a target number of them are combined with the expression element associated with the text content. For example, in fig. 5C the conversation message carries 14 emoticons; the first 10 are taken and combined with the expression element associated with the text content, and the moving process of the combined expression elements is shown.
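Capping the carried emoticons at the target number before combination can be sketched as follows (the target of 10 mirrors the fig. 5C example):

```python
def select_for_combination(carried_emoticons, associated_emoticon, target=10):
    """Pair each of at most `target` carried emoticons with the expression
    element associated with the text content, for combined display."""
    return [(associated_emoticon, e) for e in carried_emoticons[:target]]
```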
In some embodiments, the terminal may present, in a specified form in the conversation interface, the expression elements having the association logic with the text content together with the media elements by:
carrying out superposition combination or parallel combination on the expression elements and the media elements to obtain combined expression elements;
and in the session interface, showing the bounce process of the combined expression elements.
When the media element is an expression element, combining the expression element associated with the text content with the expression element carried by the session message to obtain a combined expression element, wherein the combined expression element moves in a bouncing manner in the session interface, and the direction of bouncing can be any, for example, bouncing upwards from the bottom of the session interface until jumping out of the session interface, or bouncing towards the left side from the right side of the session interface until jumping out of the session interface, and the like.
Referring to fig. 8A, fig. 8A is a schematic view of a display interface provided in the embodiment of the present application, in fig. 8A, an expression element in a "cake" style corresponding to a text content of "happy birthday" is combined with an expression element in a "rose" style carried in a session message, and the obtained combined expression element bounces from the upper portion of the session interface downward in a direction indicated by an "arrow" until the combined expression element bounces out of the session interface.
In some embodiments, the terminal may present, in a specified form in the conversation interface, the expression elements having the association logic with the text content together with the media elements by:
combining the expression elements with the media elements to obtain combined expression elements;
and displaying the process that the combined emoticons move along the target tracks of the corresponding target patterns in the conversation interface.
Here, the target pattern may be any pattern, such as a drawn "love heart" pattern, a "like" pattern, a written text pattern, and in the session interface, the combined emoticon moves along the target trajectory corresponding to the target pattern, and the moving process is displayed.
Referring to fig. 8B, fig. 8B is a schematic view of an expression element display interface provided in the embodiment of the present application, and in fig. 8B, an expression element in a "cake" style corresponding to a text content of "happy birthday" is combined with an expression element in a "arrow-through" style carried in a session message, and a process of moving a combined expression element obtained by combining the expression element along a target trajectory corresponding to a "love" pattern is performed.
In some embodiments, the terminal may present, in a specified form in the conversation interface, the expression elements having the association logic with the text content together with the media elements by:
combining the expression elements with the media elements to obtain combined expression elements;
and displaying a plurality of third expression copies of the combined expression element, and displaying the moving process of the plurality of third expression copies.
The expression elements associated with the text are combined with the media elements carried in the session message to obtain a combined expression element, a plurality of third expression copies corresponding to the combined expression element are obtained, and the moving process of the third expression copies is displayed.
Referring to fig. 8C, fig. 8C is a schematic view of a display interface of expression elements provided in the embodiment of the present application, in fig. 8C, expression elements in a "cake" style corresponding to a text content of "happy birthday" are combined with expression elements in a "rose" style carried in a session message to obtain a third expression copy of the combined expression elements, and a plurality of third expression copies can be moved from the top to the bottom on the session interface in the same or different movement trajectories and at the same or different movement rates, for example, moved from top to bottom in a free-fall manner, so as to achieve the effect of color egg expression rain.
Through the above method, when the terminal receives a session message, it determines whether the text content in the session message is of a message type conforming to the association logic; when it is, the expression element having the association logic with the text content is combined with the media elements in the session message and displayed in the session interface in a specified form. In this way, the expression elements presented in the session interface include both the expression element associated with the text content and the media elements carried by the session message, giving the user a degree of DIY freedom: each user can trigger different expression elements, which improves the richness of the expression elements. Moreover, the expression elements and media elements move in various ways in the session interface, so that their sizes and movement trajectories are rich and diverse, which facilitates emotional expression between users and adds interest to the conversation. Whereas existing schemes support information display only for plain text messages, or trigger the colored egg display only from plain text, the information display here is extended to richer message types, broadening the scope of use of information display, improving the user stickiness of the product, and meeting the increasingly diverse display requirements of user information sharing.
Next, an exemplary application of the embodiment of the present application in a practical application scenario will be described.
In practical application, a client is installed on a terminal, such as an instant messaging client like QQ or WeChat, and a user can conduct a conversation based on the client; when a conversation message input by the user contains a keyword that triggers the colored egg expression rain (i.e., text content associated with an expression element), the corresponding colored egg expression rain is triggered in the conversation interface (i.e., the expression element associated with the text content is dynamically displayed). However, the above session-based information presentation method has at least the following problems:
1) The colored egg expression rain triggered by keywords is prefabricated by the system, and its size, motion trajectory, and expression style are monotonous. For example, the colored egg expression rains of QQ and WeChat are all system-prefabricated: the rain is composed of expressions of a fixed size that fall along one or several fixed trajectories at a consistent frequency, lacking variation and making personal emotion difficult to express; especially in group chat, users feel bored after seeing the same colored egg expression rain repeatedly.
2) The trigger mechanism is monotonous. The colored egg expression rain can be triggered only when the conversation message contains a keyword that triggers it; there is no other triggering mode.
In view of this, an embodiment of the present application provides a session-based information presentation method to solve at least the above problems. When a session message edited by a session object is received, it is first determined whether the session message includes text content associated with an expression element (i.e., a trigger word); when it does, the expression element associated with the text content is combined with the media elements in the session message, and the moving process of the combined expression element and media elements is dynamically displayed. In this way, the colored egg expression rain triggered by keywords can simultaneously carry the media elements in the conversation message, giving the user a degree of DIY freedom and increasing the richness of the colored egg expression rain. Each case is explained below.
1. Color egg expression rain carrying expression elements edited by user
In practical applications, the media elements include at least one of: expression elements, picture elements, audio elements, and video elements. The conversation message edited by the user includes both text content that triggers the colored egg expression rain and expression elements, where the expression elements support emoji and QQ's small yellow-face expressions. After receiving the conversation message, the client on the terminal sends it to the server; after processing it, the server sends the conversation message and the corresponding colored egg expression rain back to the client, so that the colored egg expression rain is presented on the client.
Referring to fig. 9, fig. 9 is an alternative flowchart of a session-based information presentation method according to an embodiment of the present application, which will be described with reference to fig. 9.
Step 201: the client receives a conversation message containing the edited text content and the edited emoticons.
In actual implementation, a text editing interface is presented in a session interface of a client, and a text editing box and an expression selection function item are presented in the text editing interface; responding to a text editing operation triggered based on the text editing box and an expression selection operation triggered based on the expression selection function item, and presenting text contents edited by the text editing operation and expression elements edited by the expression selection operation; and receiving a conversation message containing edited text content and edited emoticons in response to a message sending operation triggered based on the text editing interface.
Here, the expression elements in the conversation message support emoji and the QQ yellow-face expressions. After receiving the session message, the client sends the session message to the server for processing.
Step 202: the server preprocesses the session message.
Here, the server preprocesses the received session message and sends the preprocessed session message to the client, so as to present the session message in a session interface of the client.
Step 203: presenting the conversation message in a conversation interface of the client.
Step 204: the server judges whether the conversation message contains text content triggering the colored egg expression rain or not.
Here, the text content that triggers the colored egg expression rain is a trigger word. For example, it is determined whether the conversation message contains trigger words such as happy birthday, want you, rolling wealth, red and red fire, good fortune, surplus year after year, and success in arrival. When it is determined that the conversation message contains a trigger word for the colored egg expression rain, step 205 is executed; when it is determined that the conversation message does not contain such a trigger word, step 203 is executed.
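As a minimal sketch of the trigger-word check described above (the trigger list here mirrors the examples in the text and is an illustrative assumption; a real deployment would load it from configuration):

```python
# Hypothetical server-side check for trigger words that launch the
# colored egg expression rain. The word list is an illustrative assumption.
TRIGGER_WORDS = ["happy birthday", "want you", "good fortune"]

def find_trigger_word(message_text):
    """Return the first trigger word contained in the message, or None."""
    lowered = message_text.lower()
    for word in TRIGGER_WORDS:
        if word in lowered:
            return word
    return None
```

When a trigger word is found, the server proceeds to assemble the colored egg expression rain; otherwise the message is presented as an ordinary session message.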
Step 205: the server sets the number of expression elements in the colored egg expression rain associated with the text content.
Here, the total number and the minimum number of the expression elements in the colored egg expression rain associated with the text need to be set.
Step 206: and the server identifies the emoticons in the conversation message.
Here, the server processes the expression elements in the conversation message to identify whether emoji and QQ expressions exist in it. The recognition rule is as follows: a system emoji is a special character, such as 0xE44B, so only the character itself needs to be recognized; a QQ expression is a string starting with "/", so it is further checked whether the characters that follow match a character string defined for a QQ expression. For example, the smiling-face QQ expression shown in the figure is expressed as /smile. The server and the client agree on these rules in advance.
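The recognition rule above can be sketched as follows (the private-use code point set and the QQ expression name list are illustrative assumptions; the real tables are agreed between server and client):

```python
import re

# Hypothetical recognition rules: system emoji are single special characters
# (the text gives 0xE44B as an example code point), while QQ expressions are
# strings beginning with "/" followed by an agreed name such as "smile".
EMOJI_CODEPOINTS = {0xE44B}       # assumed code points for system emoji
QQ_EXPRESSION_NAMES = {"smile"}   # assumed server/client agreed names

def extract_expressions(text):
    """Return (kind, value) pairs for emoji characters and /name QQ expressions."""
    found = []
    for ch in text:
        if ord(ch) in EMOJI_CODEPOINTS:
            found.append(("emoji", hex(ord(ch))))
    for name in re.findall(r"/(\w+)", text):
        if name in QQ_EXPRESSION_NAMES:
            found.append(("qq", "/" + name))
    return found
```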
Step 207: the server generates a combined emoticon.
Here, the server combines the expression elements associated with the text in the session message with the expression elements carried by the session message to obtain a combined expression element. The combined expression element is represented by a json string of the colored egg expression, whose fields are as follows:
wherein, each field represents different meanings, and the client executes corresponding processing logic according to the fields:
face _ eg _ count: triggering the total number of expression elements contained in the colored egg expression rain at one time by the text content;
default _ face _ min _ count: the minimum number of the expression elements triggered by the text content to be displayed;
default _ face: a string representing a trigger emoticon;
faces: an expression element array in the message;
type: types of expression elements in the message, such as system default emoji, QQ expressions and picture elements;
value: the value of the expression element, whose meaning differs according to the type of the expression element. If the expression element is a system default emoji, the value is a code such as 0xE44B; if it is a QQ expression, the value is a string such as /smile; if it is a picture element, the value is the id of the picture, which corresponds to the unique identifier of the resource on the cos platform, and the client can download the corresponding picture according to this identifier.
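A hypothetical reconstruction of the pushed json string from the field descriptions above (the concrete layout and sample values are assumptions, since the patent shows the string only as a figure):

```python
import json

# Sample payload built from the documented fields; values are illustrative.
payload = json.dumps({
    "face_eg_count": 20,             # total expression elements in one rain
    "default_face_min_count": 5,     # minimum number to display
    "default_face": "0xE44B",        # string representing the trigger expression
    "faces": [
        {"type": "emoji", "value": "0xE44B"},
        {"type": "qq", "value": "/smile"},
        {"type": "image", "value": "cos-resource-id-123"},
    ],
})

def parse_combined_expression(s):
    """Parse the pushed json string into the fields the client acts on."""
    data = json.loads(s)
    return data["face_eg_count"], data["default_face_min_count"], data["faces"]

total, minimum, faces = parse_combined_expression(payload)
```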
Finally, the server pushes the generated combined expression elements to the clients corresponding to the sender and the receiver (i.e., the session objects) for display. Only the users in the session or session group of the sender and the receiver can see the colored egg expression rain, and a user who is not online cannot receive the combined expression elements pushed by the server.
Step 208: and dynamically displaying the combined expression elements by the client.
After receiving the combined expression elements (i.e., the push string of the colored egg expression) pushed by the server, the client parses the json, obtains the number of expression elements to drift down and the required duration, and drops one expression element at an average interval. The position at which each dropped expression element appears on the screen is random, and the expression element to be dropped each time is also selected according to a random rule; that is, under the drop rule, the expression elements associated with the text content in the session message and the expression elements carried by the session message fall down randomly. When too many expression elements are carried, only the first few (for example, the first 10) expression elements are dynamically displayed. The dynamic display form of the combined expression elements can be seen in fig. 5B and 5C.
For example, when the number of expression elements is n, the expression elements are represented by a linked list of length n, where each expression element corresponds to one node in the linked list. Each time, a number m within n is randomly generated, the expression element at position m is taken out and dropped, and that node is deleted from the linked list. This process is repeated until the linked list is empty, so that the expression elements associated with the text in the session message and the expression elements carried by the session message do not drift in a fixed pattern, bringing a different experience to the user.
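The random drop rule just described can be sketched as follows (a Python list stands in for the linked list; the drop itself is abstracted to collecting the order):

```python
import random

def random_drop_order(elements, rng=random):
    """Repeatedly pick a random position m, drop that element, and delete it,
    until the list is empty; returns the resulting drop order."""
    remaining = list(elements)
    dropped = []
    while remaining:
        m = rng.randrange(len(remaining))   # random position in the remaining list
        dropped.append(remaining.pop(m))    # drop it and remove it from the list
    return dropped
```

Every element is dropped exactly once, but the order differs from run to run, so the rain does not drift in a fixed pattern.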
2. Color egg expression rain carries picture elements selected by user
Here, the session message edited by the user includes the text content that triggers the colored egg expression rain and a picture element; that is, the session message is a rich media message. After the client on the terminal receives the session message, it sends the session message to the server, and when the server receives the session message, picture element judgment logic is added to the processing logic of the colored egg expression shown in fig. 9. Specifically, referring to fig. 10, fig. 10 is an optional flow diagram of the session-based information presentation method provided in the embodiment of the present application, which will be described with reference to fig. 10.
Step 301: the client receives a conversation message containing the edited text content and the selected picture element.
In actual implementation, a text editing interface is presented in a session interface of a client, and a text editing box and a picture selection function item are presented in the text editing interface; in response to a text editing operation triggered based on a text editing box and a picture element adding operation triggered based on a picture selection function item, presenting text contents edited by the text editing operation and a picture element selected by the picture element adding operation; in response to a message sending operation triggered based on the text editing interface, a conversation message containing edited text content and the selected picture element is received.
And after receiving the session message, the client sends the session message to the server for processing.
Step 302: the server preprocesses the session message.
Here, the server preprocesses the received session message and sends the preprocessed session message to the client, so as to present the session message in a session interface of the client.
Step 303: presenting the conversation message in a conversation interface of the client.
Step 304: the server judges whether the conversation message contains text content triggering the colored egg expression rain or not.
Here, the text content that triggers the colored egg expression rain is a trigger word. For example, it is determined whether the conversation message contains trigger words such as happy birthday, you want, money source rolling, red and red fire, good fortune, surplus year after year, and success in arrival. When the conversation message contains a trigger word for the colored egg expression rain, step 305 is executed; when it does not, step 303 is executed.
Step 305: the server sets the number of expression elements in the colored egg expression rain associated with the text content.
Here, the total number and the minimum number of expression elements in the colored egg expression rain associated with the text content need to be set.
Step 306: and the server identifies the emoticons in the conversation message.
Here, the server processes the expression elements in the conversation message to identify whether emoji and QQ expressions exist in it. The recognition rule is as follows: a system emoji is a special character, such as 0xE44B, so only the character itself needs to be recognized; a QQ expression is a string starting with "/", so it is further checked whether the characters that follow match a character string defined for a QQ expression. For example, the smiling-face QQ expression shown in the figure is expressed as /smile. The server and the client agree on these rules in advance.
Step 307: the server processes the picture elements in the session message.
Here, the server checks whether the session message contains picture elements; when it does, the pictures are uploaded to the cos platform one by one. Specifically, when the number of pictures is n, they are represented by a linked list of length n, where each picture element corresponds to one node in the linked list. Each time, a number m within n is randomly generated, the picture at position m is uploaded to the cos platform, and that node is deleted from the linked list; this is repeated until the linked list is empty.
Step 308: the server generates a combined emoticon.
The server combines the expression elements associated with the text content in the session message, the expression elements edited in the session message, and the picture elements in the session message to obtain combined expression elements. The pictures in the session message are processed to generate a json string in the same format as that generated for the expression elements; the difference is that the type in the faces array corresponds to image, and the value corresponds to the unique resource identifier on the cos platform (i.e., the picture resource identifier), according to which the client can download the corresponding picture element.
Step 309: and dynamically displaying the combined expression elements by the client.
In actual implementation, a processing process of the client after receiving the combined emoticon (i.e., the push string of the egg emoticon) pushed by the server may be as shown in fig. 11, and fig. 11 is an optional flow diagram of the session-based information presentation method provided in the embodiment of the present application and will be described with reference to fig. 11.
Step 401: the client receives the combined expression elements pushed by the server.
Here, the combined expression elements are characterized by a json string of the colored egg expression.
Step 402: and analyzing the combined expression elements.
Here, the client analyzes the json string of the egg expression corresponding to the combined expression element.
Step 403: and judging whether the combined expression element contains the picture element.
Here, it is necessary to determine whether picture elements exist in the faces, and when picture elements are included, step 404 is executed, otherwise step 407 is executed.
Step 404: and downloading the thumbnail corresponding to the picture according to the picture resource identifier.
Step 405: and saving the thumbnail to the local.
Step 406: and subtracting 1 from the number of pictures.
Through the above steps 404 to 406, after each picture element is downloaded locally, colored egg dropping processing is performed, and each picture element corresponds to one expression in the expression colored egg, so that the combined expression elements are dynamically displayed.
Step 407: and dynamically displaying the combined expression elements.
In the process of dynamically displaying the combined expression elements, the expression elements associated with the text content in the conversation message and the expression elements carried in the conversation message can be dynamically displayed first, and then the picture elements in the conversation message are dynamically displayed. When there are multiple picture elements, the first few pictures are displayed in sequence and the later pictures are displayed in a stacked manner; a user can click a picture to enter a large-picture viewing page to view all the pictures. The specific dynamic display can be seen in figs. 6B-6D.
When the pictures in the conversation message are dynamically displayed, the difference from the emoji and QQ expressions is that the display size of the picture elements is larger and is fixed for each picture element. When the ratio of the picture's length and width is inconsistent with the fixed size, the picture element needs to be compressed in equal proportion, keeping the true proportions of the picture as far as possible and avoiding incomplete picture display caused by an improper filling mode.
Because the displayed picture elements are large, two adjacent picture elements may overlap during the expression drifting process. The client therefore needs to check the x coordinate of the currently drifting picture element, calculate a suitable position according to the x coordinate of the previous picture element, and keep the pictures from overlapping as far as possible, so that the picture elements remain complete and drift at uniform intervals.
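The two picture-handling rules above — proportional compression into a fixed display size, and choosing an x coordinate that avoids the previous picture — can be sketched as follows (the box size and screen width are illustrative assumptions):

```python
# Illustrative constants; real values depend on the client layout.
BOX_W, BOX_H = 120, 120   # assumed fixed display box for picture elements
SCREEN_W = 720            # assumed screen width

def fit_proportionally(width, height):
    """Scale (width, height) into the fixed box while keeping the aspect ratio."""
    scale = min(BOX_W / width, BOX_H / height)
    return round(width * scale), round(height * scale)

def next_x(prev_x, candidate_x, gap=BOX_W):
    """Shift the candidate x right of the previous picture if they would overlap."""
    if abs(candidate_x - prev_x) < gap:
        return (prev_x + gap) % SCREEN_W
    return candidate_x
```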
It should be noted that, for a rich media message combining text and pictures, the number of pictures needs to be controlled; too many pictures make the uploading and downloading time too long, so that the colored egg cannot be shown in time. Among the expression colored eggs, only picture elements can be clicked for interaction; the client determines from the type whether an element is an image and thus whether a user's click is effective. When a picture element is clicked, the client opens a picture previewer, which first displays the thumbnail and then downloads the original image from the cos platform according to the resource identifier for display to the user. The cos platform supports downloading pictures of different sizes under the same resource identifier, so the desired picture type needs to be added when requesting the download.
3. Color egg expression rain carrying video elements edited by user
When the conversation message edited by the user contains text content associated with an expression element and a video element, the triggered expression colored egg rain will finally display the video element in the conversation message, and the user can click the video element to enter a video playing interface to view the played video content, as specifically shown in figs. 7B-7D.
Referring to fig. 12, fig. 12 is an alternative flowchart of a session-based information presentation method according to an embodiment of the present application, which will be described with reference to fig. 12.
Step 501: and the client responds to the text editing operation to acquire the text content edited by the text editing operation.
Step 502: and the client responds to the video adding operation and acquires the video elements edited by the video adding operation.
Step 503: the client receives a session message containing edited text content and video elements in response to the message sending operation.
Here, a text editing interface is presented in a session interface of the client, and a text editing box and a media adding function item are presented in the text editing interface; in response to a text editing operation triggered based on a text editing box and a video element adding operation triggered based on a media adding function item, presenting a text edited by the text editing operation and a video element added by the media element adding operation; in response to a message sending operation triggered based on a text editing interface, a session message containing edited text content and video elements is received.
Step 504: and the client responds to the interception operation aiming at the video element in the session message and acquires the dynamic image corresponding to the video element.
Here, a segment of the selected video in the session message is intercepted; for example, a video segment of 1.5 seconds is intercepted and converted into a dynamic image (i.e., a gif image).
Step 505: and the client uploads each video element and the corresponding dynamic image in the session message to the cos platform.
Step 506: and the client assembles each video, the corresponding dynamic image and the text based on the resource identification of each video and the corresponding dynamic image.
Here, the assembled message is added with a dynamic image resource and video resource field for representing dynamic images and videos. And the client sends the assembled message to the server.
Step 507: and the server combines the assembled messages to obtain the combined expression elements.
Here, the server combines the expression elements associated with the text in the session message with the video elements carried by the session message to obtain combined expression elements, and sends the combined expression elements to each client. The combined expression elements are characterized by json strings of the egg expression, and the fields in json are as follows:
Figure BDA0002619952980000281
the expression rain processing generates a corresponding json string, a video type is added to the original expression element representation basis, when the client detects that the type is video, an additional parameter value2 is added to represent a resource identifier of the video element, the client downloads a corresponding video from a decos platform according to the resource identifier, and the value represents a dynamic image of the video element.
Step 508: presenting the combined emoticon in a session interface of the client.
Step 509: and dynamically displaying the combined expression elements by the client.
The display of the colored egg expression rain downloads the corresponding dynamic image (gif image) according to the resource identifier. When a user clicks the dynamic image, a picture previewer is opened, which preferentially plays the dynamic image while downloading the corresponding video; after the video is downloaded, the played dynamic image is replaced by the played video. Because the colored egg expression rain supports the display of picture elements, the picture can be directly replaced by a dynamic image; the display control supports either a static picture or a dynamic image, and a dynamic image is played automatically.
In the above manner, the expression elements, picture elements, and video elements in the conversation message are carried in the expression rain triggered by the keywords of the conversation message, giving the user a certain customization capability over the colored egg expression rain, so that the colored egg rain can better express the user's emotion.
Continuing with the exemplary structure of the session-based information presentation apparatus 555 provided in the embodiment of the present application implemented as a software module, in some embodiments, as shown in fig. 13, fig. 13 is a schematic structural diagram of the session-based information presentation apparatus provided in the embodiment of the present application, and the software module stored in the session-based information presentation apparatus 555 of the memory 550 may include:
a message receiving module 5551, configured to receive a session message in a session interface, where the session message includes text content and media elements;
and a message display module 5552, configured to display, in a specified form, the emoticon and the media element that have association logic with the text content in a session interface when the text content is a message type that conforms to the association logic.
In some embodiments, the apparatus further comprises an editing module to, when the media element comprises an emoticon, prior to the receiving a conversation message,
presenting a text editing interface, and presenting a text editing box and an expression selection function item in the text editing interface;
in response to a text editing operation triggered based on the text editing box and an expression selection operation triggered based on the expression selection function item, presenting text content edited by the text editing operation and expression elements selected by the expression selection operation;
and responding to a message sending operation triggered based on the text editing interface, and sending a conversation message containing the edited text and the selected emoticons.
In some embodiments, the editing module is further configured to, when the media element comprises a picture element, prior to the receiving a session message,
presenting a text editing interface, and presenting a text editing box and a picture selection function item in the text editing interface;
in response to a text editing operation triggered based on the text editing box and a picture selecting operation triggered based on the picture selecting function item, presenting text content edited by the text editing operation and picture elements selected by the picture selecting operation;
and responding to a message sending operation triggered based on the text editing interface, and sending a conversation message containing the edited text and the selected picture element.
In some embodiments, when the media element comprises a first number of picture elements and the first number is greater than a second number, the message presentation module is further configured to:
present the conversation message including the text content in a conversation interface; and
independently present a second number of picture elements of the first number of picture elements in the session message, and present the picture elements of the first number of picture elements other than the second number in an overlapping manner.
In some embodiments, when the media element comprises at least one of a video element and an audio element, the message presentation module is further configured to
And in a session interface, presenting the text content and the media element by adopting a session message, wherein the text content and the media element form the message content of the session message.
In some embodiments, when the media element comprises a video element, the apparatus further comprises an image capture module to capture the video element
In response to a static image interception instruction for the selected video element, intercepting a first frame image of the video element, and determining the intercepted first frame image as a video image corresponding to the video element, so that the text content and the video image are presented in the session interface by adopting a session message; or,
and in response to a dynamic image intercepting instruction aiming at the selected video element, intercepting to obtain a dynamic video image corresponding to the video element, determining the dynamic video image as a video image corresponding to the video element, presenting the text content and the video image by adopting a session message in the session interface, and continuously intercepting and combining a plurality of sequence frame images by the dynamic video image based on a first frame image of the video element.
In some embodiments, the message presentation module is further configured to present, in the conversation interface, a plurality of first expression copies of the expression element that has association logic with the text content and a plurality of second expression copies of the media element that are merged into the plurality of first expression copies;
and show the moving processes of the plurality of first expression copies and the plurality of second expression copies.
In some embodiments, when the media element comprises at least one of a video element, an audio element, and a picture element, the apparatus further comprises a detail presentation module to present details
Receiving a trigger operation aiming at the second expression copy in the moving process of the first expression copies and the second expression copies;
and responding to the trigger operation, presenting a detail page corresponding to the media element, and presenting the content of the media element in the detail page.
In some embodiments, when the media elements are emoji elements and the number of media elements exceeds a target number,
the message display module is further configured to combine the expression elements with the target number of expression elements to obtain combined expression elements;
and displaying the moving process of the combined emoticons in the session interface.
In some embodiments, the message presentation module is further configured to perform superposition combination or parallel combination on the expression elements having the association logic with the text content and the media elements to obtain a combined expression element;
and displaying the bounce process of the combined expression element in the session interface.
In some embodiments, the message presentation module is further configured to combine the emoticon having the association logic with the text content with the media element to obtain a combined emoticon;
and displaying the process that the combined expression element moves along the target track of the corresponding target pattern in the session interface.
In some embodiments, the message presentation module is further configured to combine the emoticon having the association logic with the text content with the media element to obtain a combined emoticon;
and displaying a plurality of third expression copies of the combined expression element, and displaying the moving process of the plurality of third expression copies.
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the information display method based on the conversation provided by the embodiment of the application when the executable instructions stored in the memory are executed.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the computer device executes the session-based information presentation method described in the embodiment of the present application.
The embodiment of the application provides a computer-readable storage medium storing executable instructions, wherein the executable instructions are stored, and when being executed by a processor, the executable instructions cause the processor to execute the session-based information presentation method provided by the embodiment of the application.
In some embodiments, the computer-readable storage medium may be a memory such as an FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, for example, in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (15)

1. A method for presenting information based on a session, the method comprising:
receiving a session message in a session interface, wherein the session message comprises text content and media elements;
and when the message type corresponding to the text content is the message type conforming to the association logic, displaying the expression elements and the media elements which have the association logic with the text content in the conversation interface in a specified form.
2. The method of claim 1, wherein when the media element comprises an expression element, the method further comprises, prior to receiving a session message in a session interface:
presenting a text editing interface, and presenting a text editing box and an expression selection function item in the text editing interface;
in response to a text editing operation triggered based on the text editing box and an expression selection operation triggered based on the expression selection function item, presenting text content edited by the text editing operation and expression elements selected by the expression selection operation;
and in response to a message sending operation triggered based on the text editing interface, sending a session message containing the edited text content and the selected expression elements.
3. The method of claim 1, wherein when the media element comprises a picture element, prior to receiving a session message in a session interface, the method further comprises:
presenting a text editing interface, and presenting a text editing box and a picture selection function item in the text editing interface;
in response to a text editing operation triggered based on the text editing box and a picture selecting operation triggered based on the picture selecting function item, presenting text content edited by the text editing operation and picture elements selected by the picture selecting operation;
and in response to a message sending operation triggered based on the text editing interface, sending a session message containing the edited text content and the selected picture elements.
4. The method of claim 3, wherein when the media element comprises a first number of picture elements and the first number is greater than a second number, the method further comprises:
presenting the session message including the text content in the session interface; and
presenting, individually, a second number of the first number of picture elements in the session message, and presenting, in an overlapping manner, the picture elements of the first number of picture elements other than the second number of picture elements.
5. The method of claim 1, wherein when the media element comprises at least one of a video element, an audio element, the method further comprises:
in the session interface, presenting the text content and the media element as a session message, wherein the text content and the media element form the message content of the session message.
6. The method of claim 5, wherein when the media element comprises a video element, the method further comprises:
in response to a static image capture instruction for the selected video element, capturing a first frame image of the video element, and determining the captured first frame image as the video image corresponding to the video element, so that the text content and the video image are presented as a session message in the session interface; or,
in response to a dynamic image capture instruction for the selected video element, capturing a dynamic video image corresponding to the video element, determining the dynamic video image as the video image corresponding to the video element, and presenting the text content and the video image as a session message in the session interface, the dynamic video image being obtained by continuously capturing and combining a plurality of sequential frame images starting from the first frame image of the video element.
7. The method of claim 1, wherein presenting the expression elements having the association logic with the text content and the media elements in a specified form in the session interface comprises:
displaying, in the session interface, a plurality of first expression copies of the expression elements having the association logic with the text content, and a plurality of second expression copies of the media elements fused into the plurality of first expression copies; and
displaying the moving process of the plurality of first expression copies and the plurality of second expression copies.
8. The method of claim 7, wherein when the media element comprises at least one of a video element, an audio element, and a picture element, the method further comprises:
receiving a trigger operation for the second expression copies during the movement of the plurality of first expression copies and the plurality of second expression copies;
and in response to the trigger operation, presenting a detail page corresponding to the media element, and presenting the content of the media element in the detail page.
9. The method of claim 1, wherein when the media elements are expression elements and the number of the media elements exceeds a target number, the presenting the expression elements and the media elements having the association logic with the text content in a specified form in the session interface comprises:
combining the expression elements having the association logic with the text content and the target number of media elements to obtain combined expression elements;
and displaying the moving process of the combined expression elements in the session interface.
10. The method of claim 1, wherein presenting the expression elements having the association logic with the text content and the media elements in a specified form in the session interface comprises:
combining, by superposition or in parallel, the expression elements having the association logic with the text content and the media elements, to obtain combined expression elements;
and displaying the bounce process of the combined expression elements in the session interface.
11. The method of claim 1, wherein presenting the expression elements having the association logic with the text content and the media elements in a specified form in the session interface comprises:
combining the expression elements with the text content and the media elements to obtain combined expression elements;
and displaying, in the session interface, the process in which the combined expression elements move along the target track of the corresponding target pattern.
12. The method of claim 1, wherein presenting the expression elements and the media elements having the association logic with the text content in a specified form in the session interface comprises:
combining the expression elements with the text content and the media elements to obtain combined expression elements;
and displaying, in the session interface, a plurality of third expression copies of the combined expression elements, and displaying the moving process of the plurality of third expression copies.
13. An apparatus for session-based information presentation, the apparatus comprising:
the message receiving module is configured to receive a session message in a session interface, wherein the session message comprises text content and media elements;
and the message display module is configured to display, in a specified form in the session interface, the expression elements having the association logic with the text content, together with the media elements, when the message type corresponding to the text content conforms to the association logic.
14. An electronic device, comprising:
a memory for storing executable instructions;
a processor, configured to execute the executable instructions stored in the memory to implement the session-based information presentation method of any one of claims 1 to 12.
15. A computer-readable storage medium storing executable instructions for implementing the session-based information presentation method of any one of claims 1 to 12 when executed by a processor.
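The flow of claim 1 can be illustrated with a minimal sketch. This is an interpretation only: the claims do not specify how the association logic is evaluated, so the keyword table (`ASSOCIATION_TABLE`) and the function name (`present_message`) are hypothetical stand-ins, assuming a simple keyword match between text content and expression elements.

```python
# Hypothetical association logic: a keyword table maps text content to
# expression elements. If any keyword matches, the message "conforms to the
# association logic" and the matched expressions are displayed, in a specified
# (here: animated) form, together with the media elements.
ASSOCIATION_TABLE = {
    "happy birthday": "cake_emoji",
    "congratulations": "confetti_emoji",
}

def present_message(text_content: str, media_elements: list) -> dict:
    """Return what the session interface should display for one session message."""
    matched = [expr for keyword, expr in ASSOCIATION_TABLE.items()
               if keyword in text_content.lower()]
    if matched:  # message type conforms to the association logic
        return {"text": text_content, "media": media_elements,
                "expressions": matched, "animated": True}
    # otherwise the message is shown plainly, without associated expressions
    return {"text": text_content, "media": media_elements,
            "expressions": [], "animated": False}
```

A matching message thus carries both the original media elements and the associated expression elements to the rendering layer, while a non-matching one is passed through unchanged.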
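The picture layout of claim 4 (present a second number of picture elements individually, present the remainder overlapping) reduces to a simple partition of the list of pictures. The helper name `layout_pictures` below is an assumption for illustration, not a name from the patent.

```python
def layout_pictures(picture_elements: list, second_number: int):
    """Partition a first number of picture elements as in claim 4:
    the first `second_number` pictures are presented independently,
    and the remaining ones are presented in an overlapping (stacked) form."""
    independent = picture_elements[:second_number]   # shown one by one
    overlapped = picture_elements[second_number:]    # shown stacked
    return independent, overlapped
```

When the first number does not exceed the second number, the overlapped group is simply empty and all pictures are shown individually.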
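Claims 10 and 11 distinguish two ways of combining an expression element with a media element: superposition (overlay) and parallel (side-by-side) combination. The sketch below models the two modes as a layer stack versus a row; the dict representation and the name `combine_elements` are illustrative assumptions, since the claims leave the rendering representation open.

```python
def combine_elements(expression, media, mode: str = "superposition") -> dict:
    """Combine an expression element with a media element into one
    combined expression element, in one of the two claimed modes."""
    if mode == "superposition":
        # overlay combination: media underneath, expression drawn on top
        return {"layers": [media, expression]}
    if mode == "parallel":
        # parallel combination: elements placed side by side
        return {"row": [expression, media]}
    raise ValueError(f"unknown combination mode: {mode}")
```

The combined element is then what the session interface animates (the bounce or target-track movement described in the claims), rather than the two source elements separately.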
CN202010780272.3A 2020-08-05 2020-08-05 Information display method, device, equipment and storage medium based on session Active CN112817670B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010780272.3A CN112817670B (en) 2020-08-05 2020-08-05 Information display method, device, equipment and storage medium based on session

Publications (2)

Publication Number Publication Date
CN112817670A true CN112817670A (en) 2021-05-18
CN112817670B CN112817670B (en) 2024-05-28

Family

ID=75853116

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010780272.3A Active CN112817670B (en) 2020-08-05 2020-08-05 Information display method, device, equipment and storage medium based on session

Country Status (1)

Country Link
CN (1) CN112817670B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150025882A1 (en) * 2013-07-16 2015-01-22 Samsung Electronics Co., Ltd. Method for operating conversation service based on messenger, user interface and electronic device using the same
CN105049318A (en) * 2015-05-22 2015-11-11 腾讯科技(深圳)有限公司 Message transmitting method and device, and message processing method and device
CN107577513A (en) * 2017-09-08 2018-01-12 北京小米移动软件有限公司 A kind of method, apparatus and storage medium for showing painted eggshell
CN109388297A (en) * 2017-08-10 2019-02-26 腾讯科技(深圳)有限公司 Expression methods of exhibiting, device, computer readable storage medium and terminal
US20190068658A1 (en) * 2017-08-31 2019-02-28 T-Mobile Usa, Inc. Exchanging non-text content in real time text messages
US20190122412A1 (en) * 2017-10-23 2019-04-25 Paypal, Inc. System and method for generating animated emoji mashups
US20190379618A1 (en) * 2018-06-11 2019-12-12 Gfycat, Inc. Presenting visual media
CN111369645A (en) * 2020-02-28 2020-07-03 北京百度网讯科技有限公司 Expression information display method, device, equipment and medium

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113438150A (en) * 2021-07-20 2021-09-24 网易(杭州)网络有限公司 Expression sending method and device
CN113438149A (en) * 2021-07-20 2021-09-24 网易(杭州)网络有限公司 Expression sending method and device
CN113438150B (en) * 2021-07-20 2022-11-08 网易(杭州)网络有限公司 Expression sending method and device
CN115695348A (en) * 2021-07-27 2023-02-03 腾讯科技(深圳)有限公司 Expression display method and device electronic device and storage medium
CN113934349A (en) * 2021-10-28 2022-01-14 北京字跳网络技术有限公司 Interaction method, interaction device, electronic equipment and storage medium
WO2023071606A1 (en) * 2021-10-28 2023-05-04 北京字跳网络技术有限公司 Interaction method and apparatus, electronic device, and storage medium
CN113934349B (en) * 2021-10-28 2023-11-07 北京字跳网络技术有限公司 Interaction method, interaction device, electronic equipment and storage medium
CN114510182A (en) * 2022-01-25 2022-05-17 支付宝(杭州)信息技术有限公司 Data processing method, device, equipment and medium
CN115268712A (en) * 2022-07-14 2022-11-01 北京字跳网络技术有限公司 Method, device, equipment and medium for previewing expression picture
CN115269886A (en) * 2022-08-15 2022-11-01 北京字跳网络技术有限公司 Media content processing method, device, equipment and storage medium
CN115396391A (en) * 2022-08-23 2022-11-25 北京字跳网络技术有限公司 Method, device, equipment and storage medium for presenting session message
CN115396391B (en) * 2022-08-23 2024-05-03 北京字跳网络技术有限公司 Method, apparatus, device and storage medium for session message presentation

Also Published As

Publication number Publication date
CN112817670B (en) 2024-05-28

Similar Documents

Publication Publication Date Title
CN112817670B (en) Information display method, device, equipment and storage medium based on session
CN111294663B (en) Bullet screen processing method and device, electronic equipment and computer readable storage medium
CN102368196B (en) 2016-05-04 Method, terminal and system for editing dynamic pictures in a client content transmission window
US20150213001A1 (en) Systems and Methods for Collection-Based Multimedia Data Packaging and Display
CN112748976B (en) Expression element display method, device, equipment and computer readable storage medium
CN108055593A (en) A kind of processing method of interactive message, device, storage medium and electronic equipment
WO2020187012A1 (en) Communication method, apparatus and device, and group creation method, apparatus and device
CN112748974B (en) Information display method, device, equipment and storage medium based on session
CN113746874B (en) Voice package recommendation method, device, equipment and storage medium
CN106462810A (en) Connecting current user activities with related stored media collections
CN110162667A (en) Video generation method, device and storage medium
US20160275108A1 (en) Producing Multi-Author Animation and Multimedia Using Metadata
CN108737903B (en) Multimedia processing system and multimedia processing method
CN104765761A (en) Media data processing method
CN102801652A (en) Method, client and system for adding contact persons through expression data
CN113973223B (en) Data processing method, device, computer equipment and storage medium
US20240007316A1 (en) Adaptive background in video conferencing
CN107343221B (en) Online multimedia interaction system and method
CN113010733A (en) Information recommendation method and device, electronic equipment and computer-readable storage medium
CN112533032B (en) Video data processing method and device and storage medium
CN116561439A (en) Social interaction method, device, equipment, storage medium and program product
CN112799748B (en) Expression element display method, device, equipment and computer readable storage medium
CN114764289A (en) Expression processing method, device and equipment and computer readable storage medium
CN112748975B (en) Expression element display method, device, equipment and computer readable storage medium
WO2023207439A1 (en) Information sending processing method and apparatus, information receiving processing method and apparatus, electronic device, computer readable storage medium, and computer program product

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40045014

Country of ref document: HK

SE01 Entry into force of request for substantive examination
GR01 Patent grant