CN112817670B - Session-based information display method, apparatus, device, and storage medium
- Publication number: CN112817670B (application number CN202010780272.3A / CN202010780272A)
- Authority: CN (China)
- Prior art keywords: expression, session, message, text content, media
- Legal status: Active
Classifications
- G—PHYSICS > G06—COMPUTING; CALCULATING OR COUNTING > G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/451—Execution arrangements for user interfaces
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/5866—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/02—Input arrangements using manually operated switches, e.g. using keyboards or dials
- G06F3/023—Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
- G06F3/0233—Character input methods
Abstract
The application provides a session-based information display method, apparatus, and device, and a computer-readable storage medium. The method includes: receiving a session message in a session interface, the session message including text content and a media element; and, when the text content is of a message type that conforms to association logic, displaying, in a specified form in the session interface, an expression element that has association logic with the text content together with the media element. The application can improve the richness of the displayed expression elements.
Description
Technical Field
The present application relates to the field of mobile communication technologies, and in particular, to a session-based information display method, apparatus, and device, and a computer-readable storage medium.
Background
With the development of mobile communication technology, in order to better convey emotion during an instant messaging session, when a session message sent by a user contains certain keywords that trigger expression elements, the expression elements associated with those keywords are dynamically displayed in the session interface. For example, when the user inputs the session message "happy birthday", expression elements of a "cake" style are presented in the session interface as an Easter-egg "expression rain" effect.
In the related art, when the input session message contains, in addition to a keyword that triggers an expression element, media elements such as expression elements or picture elements, only the expression element associated with the keyword can be dynamically displayed in the session interface. The displayed expression elements are therefore monotonous, which degrades the user experience.
Disclosure of Invention
The embodiments of the application provide a session-based information display method, apparatus, and device, and a computer-readable storage medium, which can improve the richness of displayed expression elements.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides an information display method based on a session, which comprises the following steps:
receiving a session message in a session interface, the session message including text content and a media element; and when the text content is of a message type that conforms to association logic, displaying, in a specified form in the session interface, an expression element that has association logic with the text content together with the media element.
The embodiment of the application provides an information display device based on a session, which comprises:
The message receiving module is used for receiving a session message in the session interface, wherein the session message comprises text content and media elements;
And the message display module is configured to, when the text content is of a message type that conforms to association logic, display, in a specified form in the session interface, an expression element that has association logic with the text content together with the media element.
In the above solution, the apparatus further includes an editing module, where the editing module is configured to, when the media element includes an expression element, before receiving the session message in the session interface,
Presenting a text editing interface, and presenting a text editing box and expression selection function items in the text editing interface;
Responding to a text editing operation triggered by the text editing box and an expression selection operation triggered by the expression selection function item, and presenting text content edited by the text editing operation and expression elements selected by the expression selection operation;
and responding to a message sending operation triggered based on the text editing interface, and sending a session message containing edited text and selected expression elements.
In the above solution, the editing module is further configured to, when the media element includes a picture element, before receiving the session message in the session interface,
Presenting a text editing interface, and presenting a text editing box and a picture selection function item in the text editing interface;
Responding to a text editing operation triggered by the text editing box and a picture selection operation triggered by the picture selection function item, and presenting text content edited by the text editing operation and picture elements selected by the picture selection operation;
and sending a session message containing edited text and selected picture elements in response to a message sending operation triggered based on the text editing interface.
In the above solution, when the media element includes a first number of picture elements and the first number is greater than the second number, the message display module is further configured to
Presenting the session message including the text content in a session interface, and
and, in the session message, presenting a second number of picture elements of the first number of picture elements individually, and presenting the picture elements other than those second number of picture elements in a stacked form.
In the above solution, when the media element includes at least one of a video element and an audio element, the message display module is further configured to
and presenting, in the session interface, the text content and the media element within a single session message, the text content and the media element forming the message content of the session message.
In the above solution, when the media element includes a video element, the apparatus further includes an image capturing module, where the image capturing module is configured to
capturing, in response to a static image capture instruction for a selected video element, the first frame image of the video element, and using the captured first frame image as the video image corresponding to the video element, so that the text content and the video image are presented within a session message in the session interface; or
capturing, in response to a dynamic image capture instruction for a selected video element, a dynamic video image corresponding to the video element, and using the dynamic video image as the video image corresponding to the video element, so that the text content and the video image are presented within a session message in the session interface, the dynamic video image being obtained by continuously capturing a plurality of sequential frame images starting from the first frame image of the video element.
In the above solution, the message display module is further configured to display, in the session interface, a plurality of first expression copies of the expression element having association logic with the text content, and a plurality of second expression copies of the media element fused in the plurality of first expression copies, and
And displaying the moving process of the plurality of first expression copies and the plurality of second expression copies.
In the above solution, when the media element includes at least one of a video element, an audio element, and a picture element, the apparatus further includes a detail presenting module, where the detail presenting module is configured to
In the moving process of the plurality of first expression copies and the plurality of second expression copies, receiving triggering operation for the second expression copies;
and responding to the triggering operation, presenting a detail page corresponding to the media element, and presenting the content of the media element in the detail page.
In the above scheme, when the media elements are expression elements and the number of the media elements exceeds the target number,
The message display module is further configured to combine the expression element that has association logic with the text content with a target number of the media elements to obtain a combined expression element;
and displaying the moving process of the combined expression element in the session interface.
In the above scheme, the message display module is further configured to superimpose or juxtapose the expression element having association logic with the text content and the media element to obtain a combined expression element;
And displaying the bouncing process of the combined expression element in the session interface.
In the above scheme, the message display module is further configured to combine an expression element having association logic with the text content with the media element to obtain a combined expression element;
And displaying the process that the combined expression element moves along the target track of the corresponding target pattern in the session interface.
In the above scheme, the message display module is further configured to combine an expression element having association logic with the text content with the media element to obtain a combined expression element;
and displaying a plurality of third expression copies of the combined expression element in the session interface, and displaying the moving process of the plurality of third expression copies.
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the session-based information display method provided by the embodiment of the application when executing the executable instructions stored in the memory.
The embodiment of the application provides a computer readable storage medium which stores executable instructions for causing a processor to execute, thereby realizing the session-based information display method provided by the embodiment of the application.
The embodiment of the application has the following beneficial effects:
when the text content in the session message is of a message type that conforms to association logic, the expression element associated with the text content and the media element in the session message are combined and displayed in a specified form in the session interface. The expression elements displayed in the session interface thus include both the expression element associated with the text and the media elements carried in the session message, which improves the richness of the expression elements. Compared with the related art, in which only plain-text messages support this display or trigger the Easter-egg effect, the application range of the information display is expanded, user stickiness of the product is improved, and the display requirements of the information-sharing trend among users are better met.
Drawings
FIGS. 1A-1D are schematic diagrams of a session-based information presentation interface provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of an alternative architecture of a session-based information presentation system according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an alternative electronic device according to an embodiment of the present application;
fig. 4 is a schematic flow chart of an alternative method for presenting session-based information according to an embodiment of the present application;
FIGS. 5A-5C are schematic diagrams of display interfaces according to embodiments of the present application;
FIGS. 6A-6D are schematic diagrams of display interfaces according to embodiments of the present application;
FIGS. 7A-7D are schematic diagrams of display interfaces according to embodiments of the present application;
FIGS. 8A-8C are schematic diagrams illustrating display interfaces according to embodiments of the present application;
FIG. 9 is a schematic flow chart of an alternative method for presenting session-based information according to an embodiment of the present application;
fig. 10 is a schematic flow chart of an alternative method for presenting session-based information according to an embodiment of the present application;
FIG. 11 is a schematic flow chart of an alternative method for presenting session-based information according to an embodiment of the present application;
FIG. 12 is a schematic flow chart of an alternative method for presenting session-based information according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of a session-based information display device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present application, and all other embodiments obtained by those skilled in the art without inventive effort fall within the scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
In the following description, the term "first/second …" is merely used to distinguish similar objects and does not represent a particular ordering of the objects. It is understood that "first/second …" may be interchanged in a particular order or sequence where permitted, so that the embodiments of the application described herein can be implemented in an order other than that illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the application only and is not intended to be limiting of the application.
Before describing embodiments of the present application in further detail, the terms and terminology involved in the embodiments of the present application will be described, and the terms and terminology involved in the embodiments of the present application will be used in the following explanation.
1) Client: an application program running in the terminal to provide various services, such as a video playing client, an instant messaging client, or a live streaming client.
2) In response to: used to indicate the condition or state on which a performed operation depends; when the condition or state is satisfied, one or more operations may be performed in real time or with a set delay. Unless otherwise specified, there is no restriction on the order in which multiple such operations are performed.
3) Easter-egg expression rain: when a session message sent by a user contains certain keywords, an Easter-egg "expression rain" effect is triggered in the session interface.
4) Chat window component: a chat window component shared by different sessions within the software, provided to give users a unified interaction experience; user behaviors such as input and click operations within the component can be regarded as consistent.
During a session, when a session message sent by a user contains certain keywords that trigger expression elements, the expression elements associated with those keywords are dynamically displayed in the session interface. Referring to figs. 1A-1D, which are schematic diagrams of a session-based information display interface provided by an embodiment of the present application: in fig. 1A, the user inputs the session message "happy birthday", and in the session interface shown in fig. 1B, the "cake"-style expression element corresponding to the text "happy birthday" is dynamically displayed; in fig. 1C, the user inputs the session message "happy birthday" followed by an expression element, and in the session interface shown in fig. 1D, the "cake"-style expression element corresponding to the text "happy birthday" is again dynamically displayed. The dynamically displayed expression elements therefore contain only the expression elements associated with certain keywords, so the displayed expression elements are relatively monotonous.
In view of this, embodiments of the present application provide a method, apparatus, device, and computer-readable storage medium for session-based information presentation to improve the richness of expression elements.
Referring to fig. 2, fig. 2 is a schematic diagram of an alternative architecture of the session-based information display system 100 according to an embodiment of the present application. To support an exemplary application, terminals 400 (terminals 400-1 and 400-2 are shown as examples) are connected to the server 200 through a network 300; the network 300 may be a wide area network, a local area network, or a combination of the two, and uses wireless links for data transmission.
In practical applications, the terminal 400 may be various types of user terminals such as a smart phone, a tablet computer, a notebook computer, and the like, and may also be a desktop computer, a game console, a television, or a combination of any two or more of these data processing devices; the server 200 may be a server supporting various services, which is configured separately, may be configured as a server cluster, may be a cloud server, or the like.
In practical implementation, a client, such as a video playing client, an instant messaging client, a live broadcast client, etc., is disposed on the terminal 400, and when a user opens the client on the terminal 400 to perform a session, the terminal 400 is configured to edit a session message including text content and media elements in response to an editing operation for the session message, and send the session message to the server 200 in response to a message sending operation for the session message;
The server 200 is configured to determine, based on the session message, whether the text content is a message type that conforms to the association logic, and when determining that the message type corresponding to the text content is a message type that conforms to the association logic, acquire an expression element that has the association logic with the text content, and return the acquired expression element that has the association logic with the text content and a media element that is included in the session message to the terminal 400;
The terminal 400 is configured to present the media elements contained in the emoticon and the session message associated with the text content in a specified form in the session interface.
Referring to fig. 3, fig. 3 is a schematic diagram of an alternative structure of an electronic device 500 according to an embodiment of the present application. In practical applications, the electronic device 500 may be the terminal 400 or the server 200 in fig. 2; an electronic device implementing the session-based information display method of the embodiment of the present application is described below by taking the terminal 400 shown in fig. 2 as an example. The electronic device 500 shown in fig. 3 includes: at least one processor 510, a memory 550, at least one network interface 520, and a user interface 530. The various components in the electronic device 500 are coupled together by a bus system 540. It is understood that the bus system 540 is used to enable connection and communication between these components. In addition to the data bus, the bus system 540 includes a power bus, a control bus, and a status signal bus. For clarity of illustration, the various buses are labeled as the bus system 540 in fig. 3.
The processor 510 may be an integrated circuit chip with signal processing capabilities, such as a general-purpose processor (for example, a microprocessor or any conventional processor), a digital signal processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The user interface 530 includes one or more output devices 531 that enable presentation of media content, including one or more speakers and/or one or more visual displays. The user interface 530 also includes one or more input devices 532, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 550 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like. Memory 550 may optionally include one or more storage devices physically located remote from processor 510.
The memory 550 includes volatile memory or non-volatile memory, and may include both. The non-volatile memory may be a read-only memory (ROM), and the volatile memory may be a random access memory (RAM). The memory 550 described in the embodiments of the present application is intended to comprise any suitable type of memory.
In some embodiments, memory 550 is capable of storing data to support various operations, examples of which include programs, modules and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 551 including system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and handling hardware-based tasks;
A network communication module 552, configured to reach other computing devices via one or more (wired or wireless) network interfaces 520; exemplary network interfaces 520 include Bluetooth, wireless fidelity (Wi-Fi), universal serial bus (USB), and the like;
A presentation module 553 for enabling presentation of information (e.g., a user interface for operating a peripheral device and displaying content and information) via one or more output devices 531 (e.g., a display screen, speakers, etc.) associated with the user interface 530;
The input processing module 554 is configured to detect one or more user inputs or interactions from one of the one or more input devices 532 and translate the detected inputs or interactions.
In some embodiments, the session-based information presentation apparatus provided in the embodiments of the present application may be implemented in software, and fig. 3 shows the session-based information presentation apparatus 555 stored in the memory 550, which may be software in the form of a program, a plug-in, or the like, including the following software modules: the message receiving module 5551 and the message presenting module 5552 are logical, and thus may be arbitrarily combined or further split according to the implemented functions.
The functions of the respective modules will be described hereinafter.
In other embodiments, the session-based information display apparatus provided by the embodiments of the present application may be implemented in hardware. As an example, the apparatus may be a processor in the form of a hardware decoding processor that is programmed to perform the session-based information display method provided by the embodiments of the present application; for example, a processor in the form of a hardware decoding processor may employ one or more application-specific integrated circuits (ASICs), DSPs, programmable logic devices (PLDs), complex programmable logic devices (CPLDs), field-programmable gate arrays (FPGAs), or other electronic components.
The method for displaying session-based information provided by the embodiment of the present application is described next, and in actual implementation, the method for displaying session-based information provided by the embodiment of the present application may be implemented by a server or a terminal alone, or may be implemented by a server and a terminal in cooperation.
Referring to fig. 4, fig. 4 is a schematic flow chart of an alternative method for presenting session-based information according to an embodiment of the present application, and the steps shown in fig. 4 will be described.
Step 101: the terminal receives a session message in a session interface, the session message including text content and media elements.
In practical applications, a client, such as an instant messaging client or a live streaming client, is installed on the terminal. When a user opens the client on the terminal to conduct a session, a session message is received; the session message may be a message edited by the user in the message editing box, or a session message sent by the server that was edited by the other party in the session.
When editing the session message, the terminal responds to the editing operation to present the edited session message, wherein the session message is composed of text content and media elements, and the media elements comprise at least one of the following: expression elements, picture elements, audio elements, video elements.
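For illustration only, the following is a minimal sketch of how such a session message might be modeled; all type and field names here are assumptions, not terms from the patent.

```typescript
// Hypothetical data model for a session message combining text content and
// media elements; names are illustrative only.
type MediaElement =
  | { kind: "expression"; expressionId: string }              // emoji / sticker
  | { kind: "picture"; url: string; width: number; height: number }
  | { kind: "audio"; url: string; durationMs: number }
  | { kind: "video"; url: string; thumbnailUrl?: string };    // thumbnail = captured video image

interface SessionMessage {
  messageId: string;
  textContent: string;        // e.g. "happy birthday"
  mediaElements: MediaElement[];
  // Optional flag set when the text matches the association logic (assumption).
  matchesAssociationLogic?: boolean;
}
```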
In some embodiments, when the media element comprises an emoticon, the terminal may obtain the composed session message before the terminal receives the session message by:
presenting a text editing interface, and presenting a text editing box and an expression selection function item in the text editing interface; in response to a text editing operation triggered in the text editing box and an expression selection operation triggered by the expression selection function item, presenting the text content edited by the text editing operation and the expression element selected by the expression selection operation; and in response to a message sending operation triggered based on the text editing interface, sending a session message containing the edited text content and the selected expression element.
Here, an expression element refers to a pictographic emoticon, also called an "emoji" or "small yellow face" emoticon, and comes in various categories, such as animals, fruits and food, facial expressions, plants and nature, zodiac signs and constellations, sports and leisure, festivals and celebrations, and characters; for example, a smiling face indicates a smile, and a cake indicates food. When a session message is edited, the selected expression elements may belong to the same or different categories, and their number may be one or more.
Referring to figs. 5A-5C, which are schematic diagrams of display interfaces provided by an embodiment of the present application: in fig. 5A, the text edited by the terminal through a text editing operation triggered in the text editing box A1 is "happy birthday", and an expression element is selected through an expression selection operation triggered by the expression selection function item A2; based on a message sending operation triggered by the sending function item A3, the session message consisting of "happy birthday" and the selected expression element is received and presented in the session interface shown in fig. 5B. In fig. 5C, when the session message contains more expression elements, the session message "happy birthday" followed by multiple expression elements is presented as shown in fig. 5C.
In some embodiments, when the media element comprises a picture element, the terminal may obtain the composed session message before the terminal receives the session message by:
Presenting a text editing interface, and presenting a text editing box and a picture selection function item in the text editing interface; in response to a text editing operation triggered in the text editing box and a picture adding operation triggered by the picture selection function item, presenting the text content edited by the text editing operation and the picture elements selected by the picture adding operation; and in response to a message sending operation triggered based on the text editing interface, sending a session message containing the edited text content and the selected picture elements.
In some embodiments, when the media element includes a first number of picture elements and the first number is greater than the second number, the received session message may be presented in a session interface of the terminal by:
And presenting the session message comprising the text content in the session interface, and independently presenting a second number of picture elements in the first number of picture elements and overlaying and presenting the picture elements except the second number of picture elements in the first number of picture elements in the session message.
Here, when the number of selected picture elements is large while the session message is being edited, in order to ensure timely presentation of the session message, the first several picture elements can be presented one by one in the session message, and the later picture elements are presented as a stack. When the user triggers the stacked picture elements, the terminal, in response to the triggering operation, presents a detail page corresponding to the picture elements and presents the picture details in the detail page.
Referring to figs. 6A-6D, which are schematic diagrams of display interfaces provided by an embodiment of the present application: in fig. 6A, text is edited through a text editing operation triggered in the text editing box B1, and the picture selection function item B2 is used to add selected picture elements; the text edited by the terminal through the text editing operation triggered in the text editing box B1 is "happy birthday", and a plurality of pictures are added via the picture selection function item B2. Based on a message sending operation triggered by the sending function item B3, a session message containing the text content and the selected pictures is received, and the session message containing the text content and the picture elements shown in fig. 6B is presented. In fig. 6C, when too many pictures are selected while editing the session message, for example 8, the first 4 pictures, marked "A", can be presented one by one in the session message, and the remaining 4 pictures are stacked and presented as marked "B". When the user triggers the stacked pictures, the terminal, in response to the triggering operation, presents the detail page of the corresponding picture shown in fig. 6D and presents the picture details in the detail page.
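As a sketch of the stacking behavior described above, assuming a display limit of 4 as in fig. 6C; the limit and function name are illustrative assumptions.

```typescript
// Split a message's picture elements into those shown individually and those
// shown as a stack; tapping the stack would open a detail page.
function partitionPictures<T>(pictures: T[], displayLimit = 4): { shown: T[]; stacked: T[] } {
  return {
    shown: pictures.slice(0, displayLimit),     // presented one by one in the message bubble
    stacked: pictures.slice(displayLimit),      // presented stacked after the shown ones
  };
}
```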
In some embodiments, when the media element includes at least one of a video element or an audio element, the terminal may further obtain the edited session message including the text content and the video element or the audio element before presenting the received session message in the session interface of the terminal, and present the received session message in the session interface by:
In the session interface, a piece of session message is used to present text content and media elements that constitute the message content of the session message.
Here, one conversation message contains both text content and video elements, or one conversation message contains both text content and audio elements.
Referring to figs. 7A-7B, which are schematic diagrams of display interfaces provided by an embodiment of the present application: in fig. 7A, text is edited through a text editing operation triggered in the text editing box C1, and the media adding function item C2 is used to add video elements; the text content edited by the terminal through the text editing operation triggered in the text editing box C1 is "happy birthday", and a plurality of video elements are added via the media adding function item C2. Based on a message sending operation triggered by the sending function item C3, a session message containing the text content and the added video elements is received, and the session message shown in fig. 7B is presented, in which the text content and two video elements form the content of a single session message.
In some embodiments, when the media element includes a video element, the video element presented in the session message may be a video image corresponding to the video element, and in actual implementation, the terminal may intercept the video image corresponding to the video element by:
capturing, in response to a static image capture instruction for the selected video element, the first frame image of the video element, and using the captured first frame image as the video image corresponding to the video element, so that the text content and the video image are presented within a session message in the session interface; or
capturing, in response to a dynamic image capture instruction for the selected video element, a dynamic video image corresponding to the video element, and using the dynamic video image as the video image corresponding to the video element, so that the text content and the video image are presented within a session message in the session interface, the dynamic video image being obtained by continuously capturing and combining a plurality of sequential frame images starting from the first frame image of the video element.
When a dynamic capture is performed on the video element, a preset number of frame images starting from the first frame of the video element can be captured and combined into a dynamic image (that is, a GIF image) corresponding to the video element; alternatively, a video clip of a preset duration (for example, 1.5 seconds) can be captured and converted into a dynamic image.
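The following is a browser-flavored sketch of such a dynamic capture, assuming an HTMLVideoElement source and a hypothetical GIF-encoding helper; encodeGif is not a real library API, only a placeholder for the assembly step.

```typescript
// Grab a fixed number of frames starting from the first frame and hand them
// to a (hypothetical) GIF encoder.
async function captureDynamicImage(
  video: HTMLVideoElement,
  frameCount = 10,
  intervalMs = 150, // roughly 1.5 s of footage at 10 frames
): Promise<Blob> {
  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  const ctx = canvas.getContext("2d")!;
  const frames: ImageData[] = [];

  for (let i = 0; i < frameCount; i++) {
    video.currentTime = (i * intervalMs) / 1000; // seek from the first frame onward
    await new Promise((resolve) => video.addEventListener("seeked", resolve, { once: true }));
    ctx.drawImage(video, 0, 0);
    frames.push(ctx.getImageData(0, 0, canvas.width, canvas.height));
  }
  return encodeGif(frames, intervalMs); // hypothetical GIF assembly step
}

// Placeholder declaration for the assumed encoder.
declare function encodeGif(frames: ImageData[], delayMs: number): Promise<Blob>;
```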
The terminal uploads each video element and its corresponding dynamic image to a COS (cloud object storage) platform, and assembles each video element, its corresponding dynamic image, and the text content based on the resource identifiers of the video element and the dynamic image. Fields for the dynamic image resource and the video element resource are added to the assembled message to represent the dynamic image and the corresponding video element. The terminal sends the assembled message to the server, and the server returns to the terminal the expression element that has association logic with the text content of the session message and the video images corresponding to the video elements carried in the session message, so that they are presented in the session interface of the terminal. Alternatively, the server combines the expression element associated with the text content with the video images corresponding to the video elements carried in the session message to obtain a combined expression element and returns it to the terminal, so that the session message containing the text content and the video elements is displayed in the session interface of the terminal and the combined expression element is dynamically displayed.
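An illustrative shape of the assembled message described above; the field names are assumptions rather than the actual protocol of the patent.

```typescript
// Assembled payload sent to the server after the video elements and their
// dynamic images have been uploaded to object storage.
interface AssembledMessage {
  textContent: string;
  videoResources: Array<{
    videoResourceId: string;        // identifier returned by the upload of the video element
    dynamicImageResourceId: string; // identifier of the captured dynamic image for this video
  }>;
}
```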
When the number of the selected video elements is large, in order to ensure the presentation timeliness of the session message, video images corresponding to the first video elements can be sequentially presented in the session message, and the video images corresponding to the rest video elements are presented after being stacked.
In particular, when the media element includes a third number of video images and the third number is greater than the fourth number, the received session message may be presented in the session interface of the terminal by:
presenting the session message including the text content in the session interface, and, in the session message, presenting a fourth number of video images of the third number of video images individually while presenting the video images other than those fourth number of video images in a stacked form.
Referring to figs. 7C-7D, which are schematic diagrams of display interfaces provided by an embodiment of the present application: in fig. 7C, when too many video images are selected while editing the session message, for example 10, the first 5 images, marked "A", can be presented one by one in the session message, and the remaining 5 images are stacked and presented as marked "B". When the user triggers the stacked video images, the terminal, in response to the triggering operation, presents the detail page of the corresponding video element shown in fig. 7D and plays the video details in the detail page.
Step 102: when the message type corresponding to the text content is a message type that conforms to association logic, the expression element that has association logic with the text content and the media element are displayed in a specified form in the session interface.
In some embodiments, after receiving the session message, the terminal also presents the session message including the text content and the expression elements. The order in which the session message is presented relative to the execution of step 102 can vary. In some embodiments, the session message including the text content and the expression elements may be presented first, and then, after a period of time (settable according to actual requirements, for example 3 seconds), step 102 is performed; that is, when the message type corresponding to the text content conforms to the association logic, the expression element that has association logic with the text content and the media element are displayed in a specified form in the session interface.
In other embodiments, step 102 may be performed first, that is, when the message type corresponding to the text content conforms to the association logic, the expression element associated with the text content and the media element are displayed in a specified form in the session interface, and then, after a period of time (settable according to actual requirements, for example 2 seconds), the session message including the text content and the expression elements is presented in the session interface.
In still other embodiments, presenting the session message and step 102 may be performed simultaneously; that is, while the session message including the text content and the expression elements is presented in the session interface, the expression element that has association logic with the text content and the media element are displayed in a specified form in the session interface. A small sketch of these orderings is shown below.
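This sketch only illustrates the three orderings with configurable delays; the 3-second and 2-second values are just the examples mentioned above, and the function names are assumptions.

```typescript
// Present the session message and the expression/media effect in one of the
// three orderings described above.
function presentWithOrdering(
  order: "message-first" | "effect-first" | "simultaneous",
  showMessage: () => void,
  showEffect: () => void,
): void {
  if (order === "message-first") {
    showMessage();
    setTimeout(showEffect, 3000);      // effect shown a moment after the message
  } else if (order === "effect-first") {
    showEffect();
    setTimeout(showMessage, 2000);     // message shown after the effect starts
  } else {
    showMessage();
    showEffect();                      // both presented at the same time
  }
}
```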
The message type corresponding to the text content will be described next. The message type corresponding to the text content, that is, the message type of the session message, may include a message type conforming to the association logic and a message type not conforming to the association logic.
For a message type conforming to the association logic, the text content has an expression element corresponding to it, that is, the text content and the expression element have the association logic, specifically:
In some embodiments, the text content includes a keyword, and the keyword corresponds to one or more expression elements. By extracting the keyword from the text content and looking it up in a mapping table between keywords and expression elements, it can be determined whether one or more expression elements corresponding to the extracted keyword exist; if so, the message type is determined to be a message type that conforms to the association logic, and otherwise it is determined to be a message type that does not conform to the association logic.
In other embodiments, the session message carries a message type identifier used to indicate that the message type of the session message conforms to the association logic. That is, after the session message is received, it is parsed to check whether it carries the message type identifier; when the identifier is present, the message type is determined to conform to the association logic, and otherwise it is determined not to conform.
In some embodiments, when the terminal receives the session message, it first needs to determine whether the text content in the session message is of a message type that conforms to the association logic, for example, whether the session message contains keywords that trigger the Easter-egg expression rain, such as "happy birthday", "miss you", "wealth rolling in", "prosperous and flourishing", "great luck", "surplus year after year", "instant success", and so on.
In some embodiments, keyword extraction may be performed on the text content of the session message to obtain a first keyword corresponding to the session message; the first keyword is matched against second keywords corresponding to candidate expression elements; and when the first keyword matches a second keyword, it is determined that the text content of the session message is associated with an expression element, and the candidate expression element corresponding to that second keyword is taken as the expression element associated with the text content.
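A minimal sketch of this keyword-to-expression lookup, assuming a hypothetical in-memory mapping table; the keywords and expression identifiers are illustrative only.

```typescript
// Hypothetical mapping between trigger keywords and associated expression ids.
const keywordExpressionMap: Record<string, string[]> = {
  "happy birthday": ["cake"],
  "miss you": ["hug"],
};

// Returns the expression elements associated with the text content, or an
// empty list when the message does not conform to the association logic.
function findAssociatedExpressions(textContent: string): string[] {
  const text = textContent.toLowerCase();
  for (const [keyword, expressions] of Object.entries(keywordExpressionMap)) {
    if (text.includes(keyword)) {
      return expressions; // keyword matched; return the candidate expressions
    }
  }
  return [];
}
```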
When the text content in the session message is determined to be of a message type that conforms to the association logic, the expression element that has association logic with the text content and the media elements carried in the session message are displayed in a specified form in the session interface.
The specified form may be to display the expression element associated with the text content and the media elements carried in the session message independently in the session interface, or to combine the expression element associated with the text content with the media elements carried in the session message and dynamically display the combined result.
It should be noted that, in actual implementation, the order in which the session message is presented in the session interface may or may not coincide with the order in which the expression element and the media elements are presented. For example, the session message may be presented first and the expression element and media elements presented after the session message has been shown for a certain time; or the expression element and media elements may be presented while the session message is presented; or the expression element and media elements may be presented first and the session message presented after they have been shown for a certain time.
In some embodiments, the combined emoticons and media elements may be dynamically presented by:
And displaying a plurality of first expression copies of the expression element and a plurality of second expression copies of the media element fused in the plurality of first expression copies, and displaying the moving process of the plurality of first expression copies and the plurality of second expression copies in the session interface.
Here, the media element includes at least one of the following: an expression element, a picture element, a video element, and an audio element. A first expression copy corresponds to the expression element associated with the text in the session message; it is obtained by copying the corresponding expression element and has the same size and style as the original expression element. A second expression copy corresponds to a media element carried in the session message: when the media element is an expression element, the second expression copy is obtained by copying the corresponding media element; when the media element is a picture element or a video element, the second expression copy is obtained by copying the picture or the video image corresponding to the media element and reducing it to a fixed size.
That is, for picture elements, the displayed second expression copies are thumbnails of the original picture elements. During the dynamic display, the second expression copy corresponding to each picture element can have a fixed size; when the aspect ratio of the picture does not match that fixed size, the picture is scaled down proportionally, so that the true proportions of the picture are preserved as far as possible and incomplete display caused by an unsuitable fill mode is avoided. For video elements, since the second expression copy is a thumbnail of the corresponding video image, it is displayed in the same form and size as for picture elements, which is not repeated here.
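A sketch of the proportional scaling described above; the 96x96 target size is an assumed example, not a value from the patent.

```typescript
// Aspect-preserving fit of a picture or video thumbnail into a fixed copy size.
function fitIntoFixedSize(
  width: number,
  height: number,
  target = { width: 96, height: 96 },
): { width: number; height: number } {
  // Scale down only (never up), keeping the original aspect ratio.
  const scale = Math.min(target.width / width, target.height / height, 1);
  return { width: Math.round(width * scale), height: Math.round(height * scale) };
}
```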
Here, "fused" does not mean that two expression copies are merged into one; rather, the plurality of first expression copies of the expression element are taken as one group and the plurality of second expression copies of the media element as another group, and the copies of the two groups are displayed interleaved with each other.
After the terminal obtains the plurality of first expression copies of the expression element and the plurality of second expression copies of the media element, one expression copy is dropped at an average interval determined by the total number of copies and the total display time. The position at which each copy is dropped on the screen is random, and the copy to be dropped each time is also selected according to a random rule; that is, the drop rule can be that the first and second expression copies fall in random order. For example, if the total number of first and second expression copies is n, they can be represented by a list of n entries, each entry corresponding to one copy. Each time, a random position m within the list is generated, the copy at position m is taken out and dropped, and the corresponding entry is deleted from the list; this repeats until the list is empty. In this way, the first and second expression copies do not fall in a fixed drifting pattern, bringing a different experience to users.
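A sketch of this random drop rule under the stated assumptions: copies are kept in a list, one copy is picked at a random position at each averaged interval and removed, until the list is empty. The drop callback and its animation are assumed.

```typescript
// Randomly ordered drops of expression copies over a total duration.
function scheduleRandomDrops<T>(
  copies: T[],
  totalDurationMs: number,
  drop: (copy: T) => void, // e.g. spawns the copy at a random x position and animates its fall
): void {
  const remaining = [...copies];
  const interval = totalDurationMs / copies.length; // average interval between drops
  const timer = setInterval(() => {
    if (remaining.length === 0) {
      clearInterval(timer);
      return;
    }
    const m = Math.floor(Math.random() * remaining.length); // random position m
    const [copy] = remaining.splice(m, 1);                   // take out and delete the entry
    drop(copy);
  }, interval);
}
```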
During the movement of the plurality of first expression copies and the plurality of second expression copies, they can move from the top of the session interface to the bottom along the same or different movement trajectories and at the same or different speeds; for example, they can fall from top to bottom in a free-fall manner to achieve the effect of Easter-egg expression rain. A specific dynamic display form can be seen in figs. 5B-5C.
In some embodiments, when the media element includes at least one of a video element, an audio element, and a picture element, the terminal may also present the content of the media element by:
In the moving process of the plurality of first expression copies and the plurality of second expression copies, receiving triggering operation aiming at the second expression copies; in response to the triggering operation, rendering a detail page of the corresponding media element, and rendering the content of the media element in the detail page.
In practical applications, the detail page can also present function items such as save, download, and forward, so that the content of the presented media element can be saved, downloaded, or forwarded.
For example, for the second expression copy corresponding to a dropped picture element in figs. 6B-6C, when the user triggers the second expression copy, the terminal, in response to the triggering operation, presents the detail page of the corresponding picture shown in fig. 6D and presents the original image of the picture element in the detail page. For the second expression copy corresponding to a dropped video element in figs. 7B-7C, when the user triggers the second expression copy, the terminal, in response to the triggering operation, presents the detail page of the corresponding video element shown in fig. 7D; if the video image is a dynamic image, the detail page shown in fig. 7D plays the dynamic image first, and once the video has been downloaded locally, playback continues with the full video content.
In some embodiments, when the media element is an emoticon and the number of media elements exceeds the target number, the terminal may present the emoticon and the media element for which associated logic exists with the text content in a specified form in the session interface by:
Combining the expression elements associated with the text content with the target number of media elements in the session message to obtain combined expression elements;
And displaying the moving process of the combined expression element in the session interface.
Here, when the number of expression elements carried in the session message is too large, only a target number of them are combined with the expression element associated with the text content. For example, in fig. 5C, the session message carries 14 expression elements; the first 10 of them are combined with the expression element associated with the text content, and the moving process of the combined expression element is displayed.
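A trivial sketch of capping the carried expression elements at a target number (10 is the example above) before combining; names are illustrative.

```typescript
// Combine the text-associated expression element with at most `targetNumber`
// of the expression elements carried in the session message.
function combineWithTargetNumber(
  associatedExpression: string,
  carriedExpressions: string[],
  targetNumber = 10,
): string[] {
  return [associatedExpression, ...carriedExpressions.slice(0, targetNumber)];
}
```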
In some embodiments, the terminal may present the expression element having association logic with the text content and the media element in a specified form in the session interface by:
carrying out superposition combination or parallel combination on the expression elements and the media elements to obtain combined expression elements;
And in the session interface, displaying the bouncing process of the combined expression element.
When the media element is an expression element, the expression element associated with the text content and the expression element carried by the session message are combined to obtain a combined expression element, and the combined expression element moves in a bouncing manner in the session interface. The bouncing direction can be arbitrary, for example bouncing upward from the bottom of the session interface until it jumps out of the session interface, or bouncing from the right side of the session interface toward the left side until it jumps out of the session interface.
Referring to fig. 8A, fig. 8A is a schematic diagram of a display interface provided by an embodiment of the present application. In fig. 8A, the expression element of a "cake" style corresponding to the text content "happy birthday" and the expression element of a "rose" style carried in the session message are combined, and the resulting combined expression element bounces downward from the upper portion of the session interface in the direction indicated by the arrow until it bounces out of the session interface.
In some embodiments, the terminal may present the expression element having association logic with the text content and the media element in a specified form in the session interface by:
Combining the expression element with the media element to obtain a combined expression element;
in the session interface, a process of moving the combined expression element along a target track corresponding to the target pattern is shown.
Here, the target pattern may be any pattern, such as a drawn "love" pattern, a "like" pattern, and a written character pattern, and in the session interface, the combined expression element moves along a target track corresponding to the target pattern, and the moving process is displayed.
Referring to fig. 8B, fig. 8B is a schematic diagram of a display interface of an expression element provided by an embodiment of the present application. In fig. 8B, the expression element of a "cake" style corresponding to the text content "happy birthday" is combined with the expression element of an "arrow through the heart" style carried in the session message, and the resulting combined expression element moves along the target track corresponding to a "love" pattern.
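As an illustration only, a "love" (heart) shaped target track could be sampled with a standard parametric heart curve; the actual pattern and track used in fig. 8B are not specified beyond the figure, so the curve and its scale below are assumptions.

```python
import math

def heart_track(num_points=120, scale=10.0):
    """Sample points along a heart-shaped target track; the combined
    expression element would be animated through these points in order."""
    points = []
    for i in range(num_points):
        t = 2 * math.pi * i / num_points
        x = scale * 16 * math.sin(t) ** 3
        y = scale * (13 * math.cos(t) - 5 * math.cos(2 * t)
                     - 2 * math.cos(3 * t) - math.cos(4 * t))
        points.append((x, y))
    return points
```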
In some embodiments, the terminal may present the expression element having association logic with the text content and the media element in a specified form in the session interface by:
Combining the expression element with the media element to obtain a combined expression element;
and displaying a plurality of third expression copies of the combined expression element, and displaying the moving process of the plurality of third expression copies.
Here, the expression elements associated with the text and the media elements carried in the session message are combined to obtain a combined expression element, a plurality of third expression copies corresponding to the combined expression element are obtained, and the moving process of the plurality of third expression copies is displayed.
Referring to fig. 8C, fig. 8C is a schematic diagram of a display interface of an expression element provided by an embodiment of the present application. In fig. 8C, the expression element of a "cake" style corresponding to the text content "happy birthday" is combined with the expression element of a "rose" style carried in the session message to obtain a combined expression element, and a plurality of third expression copies of the combined expression element are displayed. The third expression copies can move from the top of the session interface to the bottom along the same or different movement tracks and at the same or different movement rates, for example falling from top to bottom in a free-fall manner, so as to achieve the effect of color egg expression rain.
Through the above manner, when the terminal receives a session message, it first judges whether the text content in the session message is of a message type that conforms to the association logic; when it is, the expression element having association logic with the text content and the media element in the session message are combined and displayed in the session interface in a specified form. The expression elements presented in the session interface therefore include both the expression element associated with the text content and the media elements carried by the session message, which gives users a certain DIY authority: each user can trigger different expression elements, improving the richness of the expression elements. In addition, the expression element and the media element move in the session interface in a variety of ways, so that the sizes and movement tracks of the expression elements are rich and diversified, which facilitates emotional expression among users and increases the interest of the session. Compared with existing schemes in which only a plain text message supports such information display, or only a plain text message triggers a color egg display, the application range of the information display is extended to richer message types, the user stickiness of the product is improved, and the increasingly diversified display requirements of user information sharing are met.
In the following, an exemplary application of the embodiment of the present application in a practical application scenario will be described.
In practical application, a client, such as the QQ or WeChat instant messaging client, runs on the terminal, and a user can hold a session based on the client. When a session message input by the user contains a keyword for triggering the color egg expression rain (i.e. text content associated with an expression element), the corresponding color egg expression rain (i.e. a dynamic display of the expression element associated with the text content) is triggered in the session interface. However, this session-based information presentation scheme has at least the following problems:
1) The color egg expression rain triggered by keywords is prefabricated by the system, and its size, motion track, and expression pattern are relatively fixed. For example, the QQ and WeChat color egg expression rains are all prefabricated by the system: the rain is composed of expressions of a fixed size that fall along one or several fixed tracks at a uniform falling frequency. The display lacks variation and can hardly express personal emotion; especially in group chats, users feel bored when they see the same color egg expression rain repeatedly.
2) The trigger mechanism is single. The prefabricated color egg expression rain can be triggered only when the session message contains the keywords for triggering it, and there is no other triggering mode.
In view of this, an embodiment of the present application provides a session-based information presentation method to solve at least the above problems. When a session message edited by a session object is received, it is first judged whether the session message contains text content (i.e. a trigger word) associated with an expression element; when it is determined that it does, the expression element associated with the text content and the media elements in the session message are combined, and the moving process of the combined expression element and media elements is dynamically displayed. In this way, the keyword-triggered color egg expression rain also carries the media elements in the session message, giving the user a certain DIY authority and increasing the richness of the color egg expression rain. The schemes are explained one by one below.
1. Carrying expression elements edited by users in color egg expression rain
In practical applications, the media element includes at least one of: an expression element, a picture element, an audio element, and a video element. The session message edited by the user includes text content for triggering the color egg expression rain and expression elements, where the expression elements support emoji and the QQ yellow-face expressions. After receiving the session message, the client on the terminal sends the session message to the server; after processing the session message, the server sends the session message and the corresponding color egg expression rain to the client so that they are presented on the client.
Referring to fig. 9, fig. 9 is a schematic flow chart of an alternative method for displaying session-based information according to an embodiment of the present application, which will be described with reference to fig. 9.
Step 201: the client receives a conversation message containing edited text content and edited emoticons.
In actual implementation, a text editing interface is presented in a session interface of the client, and a text editing box and expression selection function items are presented in the text editing interface; responding to a text editing operation triggered by a text editing box and an expression selecting operation triggered by an expression selecting function item, and presenting text content edited by the text editing operation and expression elements edited by the expression selecting operation; and receiving a session message containing edited text content and edited expression elements in response to a message sending operation triggered based on the text editing interface.
Here, the expressions in the conversation message support the yellow-face small expressions in emoji and QQ. After receiving the session message, the client sends the session message to the server for processing.
Step 202: the server pre-processes the session message.
Here, the server performs preprocessing on the received session message, and sends the preprocessed session message to the client, so as to present the session message in a session interface of the client.
Step 203: the session message is presented in a session interface of the client.
Step 204: the server judges whether the session message contains text content triggering the expression rain of the color eggs.
Here, the text content that triggers the color egg expression rain is a trigger word for triggering it; for example, the server judges whether the session message contains trigger words such as "happy birthday", "missing you", "profits rolling in", "red and fiery", "great luck", "surplus year after year", or "instant success". When the session message contains a trigger word for triggering the color egg expression rain, step 205 is executed; when it is determined that the session message does not contain such a trigger word, step 203 is executed.
Step 205: the server sets the number of expression elements in the color egg expression rain associated with the text content.
Here, the total number and the minimum number of expression elements in the text-related color egg expression rain need to be set.
Step 206: and the server identifies the expression elements in the session message.
Here, the server processes the expression elements in the session message to identify whether the session message contains emoji or QQ expressions. The identification rule is as follows: a system emoji expression is a special character, similar to 0xE44B, so it can be recognized from the character itself; a QQ expression starts with "/", and the characters following "/" are checked against the character strings defined for QQ expressions, for example a smile expression is represented as /smile. These rules are agreed between the server and the client.
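A rough sketch of this identification rule; the private-use character range and the QQ expression dictionary below are assumptions, with only 0xE44B and /smile coming from the description above.

```python
QQ_EXPRESSIONS = {"/smile", "/cry"}  # assumed subset of the agreed QQ expression strings

def classify_token(token):
    """Classify a token from the session message as emoji, QQ expression, or plain text."""
    if len(token) == 1 and 0xE000 <= ord(token) <= 0xF8FF:  # assumed private-use range covering 0xE44B
        return "emoji", hex(ord(token))
    if token.startswith("/") and token in QQ_EXPRESSIONS:
        return "qq", token
    return "text", token
```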
Step 207: the server generates a combined expression element.
Here, the server combines the expression elements associated with the text in the conversation message, and the expression elements carried by the conversation message to obtain a combined expression element, wherein the combined expression element is characterized by a json string of the color egg expression, and the fields in the json are as follows:
Wherein the fields represent different meanings, according to which the client performs corresponding processing logic:
face_egg_count: the total number of expression elements contained in one color egg expression rain triggered by the text content;
default_face_min_count: the minimum number of expression elements to be displayed when triggered by the text content;
default_face: a character string representing the trigger expression element;
faces: the array of expression elements in the message;
type: the type of an expression element in the message, such as a default emoji, a QQ expression, or a picture element;
value: the value of an expression element; for a default emoji it is a code such as 0xE44B, for a QQ expression it is a string such as /smile, and for a picture element it is the id of the picture, namely the unique identifier of the resource on the cos platform, from which the client can download the corresponding picture.
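Pulling the fields together, the pushed json string could look roughly like the following sketch; the concrete values and the type names used for emoji and QQ expressions are illustrative assumptions (the description only fixes "image" and, later, "video" as type values).

```python
import json

# Illustrative payload only; field names follow the list above, values are assumptions.
egg_payload = {
    "face_egg_count": 20,         # total expression elements in one color egg expression rain
    "default_face_min_count": 5,  # minimum number of expression elements to display
    "default_face": "0xE44B",     # string representing the trigger expression element
    "faces": [
        {"type": "emoji", "value": "0xE44B"},  # default emoji carried in the message
        {"type": "qq", "value": "/smile"},     # QQ expression carried in the message
    ],
}
egg_json = json.dumps(egg_payload)   # string pushed by the server
parsed = json.loads(egg_json)        # parsed again on the client
```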
Finally, the server pushes the generated combined expression element to the clients corresponding to the sender and the receiver (namely the session objects) for display. All users in the session or session group of the sender and receiver can see the color egg expression rain; if a user in the session is not online, that user cannot receive the combined expression element pushed by the server.
Step 208: the client dynamically displays the combined expression elements.
The client receives the combined expression element (namely the json string of the color egg expression) pushed by the server and parses the json. According to the number of expression elements to be dropped and the display time, it drops one expression element at an average interval; the position of each dropped element on the screen is random, and the element to be dropped each time is selected by a random rule, that is, the expression element associated with the text content in the session message and the expression elements carried by the session message drop in random order. When too many expression elements are carried, only the first few (for example, the first 10) are dynamically displayed. The dynamic display form of the combined expression element can be seen in fig. 5B and 5C.
For example, if the number of expression elements is n, they are represented by a linked list of length n in which each expression element corresponds to one node. Each round a random number m within n is generated, the expression element at position m is taken out and dropped, and its node is deleted from the list; this repeats until the list is empty, so that the expression element associated with the text content and the expression elements carried by the session message do not drift in a fixed pattern, bringing users a different experience.
2. Carrying picture elements selected by users in color egg expression rain
Here, the session message edited by the user includes text content for triggering the color egg expression rain and picture elements, that is, the session message is a rich media message. After the client on the terminal receives the session message, it sends the session message to the server; when the server receives the session message, picture element judgment logic is added to the color egg expression processing logic shown in fig. 9. Specifically, referring to fig. 10, fig. 10 is a schematic flow chart of an alternative session-based information display method provided by an embodiment of the present application, which will be described with reference to fig. 10.
Step 301: the client receives a conversation message containing edited text content and edited emoticons.
In actual implementation, a text editing interface is presented in a session interface of the client, and a text editing box and a picture selection function item are presented in the text editing interface; responding to a text editing operation triggered by a text editing box and a picture element adding operation triggered by a picture selection function item, and presenting text contents edited by the text editing operation and picture elements selected by the picture element adding operation; in response to a message sending operation triggered based on the text editing interface, a conversation message is received that includes edited text content and selected picture elements.
After receiving the session message, the client sends the session message to the server for processing.
Step 302: the server pre-processes the session message.
Here, the server performs preprocessing on the received session message, and sends the preprocessed session message to the client, so as to present the session message in a session interface of the client.
Step 303: the session message is presented in a session interface of the client.
Step 304: the server judges whether the session message contains text content triggering the expression rain of the color eggs.
Here, the text content that triggers the color egg expression rain is a trigger word for triggering it; for example, the server judges whether the session message contains trigger words such as "happy birthday", "missing you", "profits rolling in", "red and fiery", "great luck", "surplus year after year", or "instant success". When the session message contains a trigger word for triggering the color egg expression rain, step 305 is executed; when it is determined that the session message does not contain such a trigger word, step 303 is executed.
Step 305: the server sets the number of expression elements in the color egg expression rain associated with the text content.
Here, the total number and the minimum number of expression elements in the color egg expression rain associated with the text content need to be set.
Step 306: and the server identifies the expression elements in the session message.
Here, the server processes the expression elements in the session message to identify whether the session message contains emoji or QQ expressions. The identification rule is as follows: a system emoji expression is a special character, similar to 0xE44B, so it can be recognized from the character itself; a QQ expression starts with "/", and the characters following "/" are checked against the character strings defined for QQ expressions, for example a smile expression is represented as /smile. These rules are agreed between the server and the client.
Step 307: the server processes the picture elements in the session message.
Here, the server checks whether the session message contains picture elements. When it does, the pictures are uploaded to the cos platform one by one. Specifically, if the number of pictures is n, they are represented by a linked list of length n in which each picture element corresponds to one node; each round a random number m within n is generated, the picture at position m is uploaded to the cos platform, and after each upload the corresponding node is deleted from the list; this repeats until the list is empty.
Step 308: the server generates a combined expression element.
Here, the server combines the expression element associated with the text content in the session message, the expression elements edited in the session message, and the picture elements in the session message to obtain the combined expression element. After the pictures in the session message are processed, the generated json string has the same format as the json string generated when processing expression elements; the main difference is that in the faces array the value corresponding to type is image, and value is the unique identifier of the resource on the cos platform (i.e. the picture resource identifier), from which the client can download the corresponding picture element.
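For a picture element, a single entry in the faces array would then look roughly like this; the description fixes type as image and value as the cos resource identifier, while the identifier string itself is a made-up example.

```python
picture_face = {
    "type": "image",
    "value": "cos-resource-id-123456",  # illustrative cos platform resource identifier
}
```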
Step 309: the client dynamically displays the combined expression elements.
In practical implementation, the processing procedure of the client after receiving the combined expression element (i.e. the push string of the color egg expression) pushed by the server may be referred to fig. 11, and fig. 11 is a schematic flow chart of an alternative session-based information display method provided in the embodiment of the present application, which will be described with reference to fig. 11.
Step 401: the client receives the combined expression element pushed by the server.
Here, the combined expression element is characterized by the json string of the color egg expression.
Step 402: and analyzing the combined expression elements.
Here, the client parses the json string of the color egg expression corresponding to the combined expression element.
Step 403: judging whether the combined expression element contains picture elements or not.
Here, it is necessary to determine whether or not a picture element exists in the faces, and when the picture element is contained, step 404 is executed, otherwise step 407 is executed.
Step 404: and downloading the thumbnail corresponding to the picture according to the picture resource identifier.
Step 405: the thumbnail is saved locally.
Step 406: and carrying out 1 reduction operation on the number of pictures.
Through the above steps 404-406, after each picture element is downloaded locally, the color egg drifting process is performed; each picture element corresponds to one expression in the expression color egg, so that the combined expression element is dynamically displayed.
Step 407: and dynamically displaying the combined expression elements.
In the process of dynamically displaying the combined expression element, the expression element associated with the text content in the session message and the expression elements carried in the session message can be displayed dynamically first, and then the picture elements in the session message are displayed dynamically. When there are multiple picture elements, the first few pictures are displayed in sequence and the later pictures are displayed in a stacked manner; the user can click a picture to enter a large-picture viewing page and view all the pictures. The specific dynamic display can be seen in fig. 6B-6D.
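A condensed sketch of this client-side flow (steps 402-407), where download_thumbnail, save_locally, and show_egg_rain are hypothetical helpers standing in for the client's real download and animation code:

```python
import json

def handle_egg_push(json_string, download_thumbnail, save_locally, show_egg_rain):
    """Parse the pushed color egg json and prepare picture elements before display."""
    payload = json.loads(json_string)                          # step 402: parse the json string
    faces = payload.get("faces", [])
    pictures = [f for f in faces if f.get("type") == "image"]  # step 403: check for picture elements
    remaining = len(pictures)
    for face in pictures:
        thumbnail = download_thumbnail(face["value"])          # step 404: download by picture resource id
        save_locally(thumbnail)                                # step 405: save the thumbnail locally
        remaining -= 1                                         # step 406: decrement the picture count
    if remaining == 0:
        show_egg_rain(faces)                                   # step 407: dynamic display
```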
When the pictures in the session message are dynamically displayed, the displayed picture elements are larger than the displayed system emoji and QQ expressions. When the length-width ratio of a picture does not match the fixed display size, the picture element needs to be compressed in equal proportion, so that the real proportions of the picture are preserved as far as possible and incomplete picture display caused by an improper filling mode is avoided.
Because the displayed picture elements can be relatively large, two adjacent picture elements may overlap during the expression drifting process. The client therefore needs to check the x coordinate of the picture element that is currently drifting and calculate a proper position from the drifting x coordinate of the previous picture element, so that picture elements do not overlap as far as possible, remain fully visible, and drift at uniform intervals.
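Two small sketches of the display geometry just described: equal-ratio compression of a picture element, and choosing a non-overlapping x coordinate from the previous element's position. The gap value and the wrap-around rule are assumptions.

```python
def fit_size(width, height, max_w, max_h):
    """Scale a picture element proportionally so it fits the display box
    without distorting it (equal-ratio compression)."""
    ratio = min(max_w / width, max_h / height, 1.0)
    return int(width * ratio), int(height * ratio)

def next_drift_x(prev_x, prev_width, new_width, screen_width, gap=8):
    """Place the next drifting picture element to the right of the previous one;
    wrap back to the left edge when it would leave the screen."""
    x = prev_x + prev_width + gap
    if x + new_width > screen_width:
        x = 0
    return x
```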
It should be noted that, for such rich media messages combining text and pictures, the number of pictures needs to be controlled; too many pictures lead to overly long upload and download times, so the color egg would not appear in time. Among the expression color eggs, only picture elements respond to clicks, and the client judges whether a click is valid according to whether type is image. When a picture element is clicked, the client opens a picture previewer, displays the thumbnail in the previewer, and then downloads the original picture for the user according to the resource identifier. Since the cos platform supports downloading pictures of different sizes for the same resource identifier, the requested picture size is added when the download is requested.
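A sketch of this click handling, where the previewer and downloader objects are hypothetical client components and the thumbnail key and size parameter names are assumptions:

```python
def on_face_clicked(face, previewer, downloader):
    """Only picture elements respond to clicks; show the cached thumbnail first,
    then fetch the original picture by its cos resource identifier."""
    if face.get("type") != "image":
        return  # clicks on ordinary expression elements are ignored
    previewer.open()
    previewer.show(face["thumbnail_path"])                     # assumed local thumbnail path key
    original = downloader.fetch(face["value"], size="origin")  # assumed download-size parameter
    previewer.show(original)
```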
3. Carrying video elements edited by users in color egg expression rain
When the session message edited by the user contains text content associated with expression elements and video elements, the triggered expression color egg rain displays the video elements in the session message last; the user can click a video element to enter a video playing interface and view the played video content, as shown in fig. 7B-7D.
Referring to fig. 12, fig. 12 is a schematic flow chart of an alternative method for displaying session-based information according to an embodiment of the present application, which will be described with reference to fig. 12.
Step 501: and the client responds to the text editing operation and acquires the text content edited by the text editing operation.
Step 502: and the client responds to the video adding operation and acquires the video elements edited by the video adding operation.
Step 503: the client receives a session message containing edited text content and video elements in response to the messaging operation.
Here, a text editing interface is presented in a session interface of the client, and a text editing box and a media adding function item are presented in the text editing interface; responding to a text editing operation triggered by a text editing box and a video element adding operation triggered by a media adding function item, and presenting the text edited by the text editing operation and the video element added by the media element adding operation; and receiving a session message containing edited text content and video elements in response to a message sending operation triggered based on the text editing interface.
Step 504: and the client responds to the intercepting operation for the video elements in the session message and acquires the dynamic images corresponding to the video elements.
Here, the selected video in the session message is truncated, e.g., a 1.5 second video clip is cut out and converted into a dynamic image (i.e. a GIF image).
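The patent does not name a specific tool for this conversion; as one hedged illustration, a 1.5 second segment could be cut and converted to a GIF with the ffmpeg command line, assuming ffmpeg is available on the device or server.

```python
import subprocess

def clip_to_gif(video_path, gif_path, start=0.0, duration=1.5):
    """Cut a short segment from the selected video and convert it to a dynamic image (GIF)."""
    subprocess.run(
        ["ffmpeg", "-y", "-ss", str(start), "-t", str(duration), "-i", video_path, gif_path],
        check=True,
    )
```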
Step 505: and uploading each video element in the session message and the corresponding dynamic image to the cos platform by the client.
Step 506: and the client assembles each video, the corresponding dynamic image and the text based on the resource identification of each video and the corresponding dynamic image.
Here, the dynamic image resource and video resource fields are correspondingly added in the assembled message to represent dynamic images and videos. The client sends the assembled message to the server.
Step 507: and the server combines the assembled messages to obtain the combined expression element.
Here, the server combines the expression elements related to the text in the session message and the video elements carried by the session message to obtain a combined expression element, and sends the combined expression element to each client. Wherein the combined expression elements are characterized by json strings of color egg expressions, and the fields in json are as follows:
The expression rain processing generates a corresponding json string in which a video type is added on the basis of the original expression element fields. When the client detects that type is video, an additional parameter value2 represents the resource identifier of the video element, according to which the client downloads the corresponding video from the cos platform, while value represents the dynamic image of the video element.
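An illustrative faces entry for a video element, following the field meanings above; the identifier strings are made up.

```python
video_face = {
    "type": "video",
    "value": "cos-gif-resource-id",     # dynamic image (GIF) of the video element
    "value2": "cos-video-resource-id",  # resource identifier used to download the video itself
}
```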
Step 508: and presenting the combined expression element in a session interface of the client.
Step 509: the client dynamically displays the combined expression elements.
Here, the color egg expression rain display downloads the corresponding dynamic image (GIF image) according to the resource identifier. When the user clicks the dynamic image, the image previewer is opened and the dynamic image is played first, while the corresponding video is downloaded at the same time; after the video has been downloaded, the playing dynamic image is replaced by the playing video. Since the display of picture elements is already supported in the color egg expression rain, the picture element is directly replaced by the dynamic image here; a control is set to support either a static picture or a dynamic image, and if it is set to support dynamic images, the dynamic image is played automatically.
Through the above manner, the embodiment of the present application carries the expression elements, picture elements, and video elements of the session message in the expression rain triggered by the keywords of the session message, which gives the user a certain ability to customize the color egg expression rain, so that the color egg rain can better express the user's emotion.
Continuing with the description below of an exemplary architecture of the session-based information presentation apparatus 555 implemented as a software module provided by embodiments of the present application, in some embodiments, as shown in fig. 13, fig. 13 is a schematic structural diagram of the session-based information presentation apparatus provided by embodiments of the present application, the software module stored in the session-based information presentation apparatus 555 of the memory 550 may include:
A message receiving module 5551, configured to receive a session message in a session interface, where the session message includes text content and media elements;
The message display module 5552 is configured to display, in a specified form, an emoticon and the media element that have association logic with the text content in a session interface when the text content is a message type that conforms to the association logic.
In some embodiments, the apparatus further comprises an editing module for, when the media element comprises an emoticon, prior to the receiving the session message,
Presenting a text editing interface, and presenting a text editing box and expression selection function items in the text editing interface;
Responding to a text editing operation triggered by the text editing box and an expression selection operation triggered by the expression selection function item, and presenting text content edited by the text editing operation and expression elements selected by the expression selection operation;
and responding to a message sending operation triggered based on the text editing interface, and sending a session message containing edited text and selected expression elements.
In some embodiments, the editing module is further configured to, when the media element comprises a picture element, before the receiving the session message,
Presenting a text editing interface, and presenting a text editing box and a picture selection function item in the text editing interface;
Responding to a text editing operation triggered by the text editing box and a picture selection operation triggered by the picture selection function item, and presenting text content edited by the text editing operation and picture elements selected by the picture selection operation;
and sending a session message containing edited text and selected picture elements in response to a message sending operation triggered based on the text editing interface.
In some embodiments, when the media element includes a first number of picture elements and the first number is greater than a second number, the message presentation module is further configured to
Presenting the session message including the text content in a session interface, and
And in the session message, independently presenting a second number of picture elements in the first number of picture elements, and overlaying and presenting picture elements except for the second number of picture elements in the first number of picture elements.
In some embodiments, when the media element includes at least one of a video element and an audio element, the message presentation module is further configured to
And in a session interface, presenting the text content and the media element by adopting a session message, wherein the text content and the media element form the message content of the session message.
In some embodiments, when the media element comprises a video element, the apparatus further comprises an image capture module for
Responding to a static image interception instruction for a selected video element, intercepting a first frame image of the video element, and determining the intercepted first frame image as a video image corresponding to the video element so as to present the text content and the video image in the session interface by adopting a session message; or alternatively
And responding to a dynamic image interception instruction aiming at the selected video element, intercepting to obtain a dynamic video image corresponding to the video element, determining the dynamic video image as the video image corresponding to the video element, so as to present the text content and the video image in the session interface by adopting a session message, and continuously intercepting a plurality of sequential frame images based on the first frame image of the video element to obtain the dynamic video image.
In some embodiments, the message presenting module is further configured to present, in the session interface, a plurality of first expression copies of an expression element having association logic with the text content, and a plurality of second expression copies of the media element fused in the plurality of first expression copies, and
And displaying the moving process of the plurality of first expression copies and the plurality of second expression copies.
In some embodiments, when the media element includes at least one of a video element, an audio element, and a picture element, the apparatus further includes a detail rendering module for
In the moving process of the plurality of first expression copies and the plurality of second expression copies, receiving triggering operation for the second expression copies;
and responding to the triggering operation, presenting a detail page corresponding to the media element, and presenting the content of the media element in the detail page.
In some embodiments, when the media element is an emotive element and the number of media elements exceeds a target number,
The message display module is further used for combining the expression elements with the expression elements of the target quantity to obtain combined expression elements;
and displaying the moving process of the combined expression element in the session interface.
In some embodiments, the message display module is further configured to superimpose and combine or juxtapose an expression element having association logic with the text content with the media element to obtain a combined expression element;
And displaying the bouncing process of the combined expression element in the session interface.
In some embodiments, the message display module is further configured to combine an expression element having association logic with the text content with the media element to obtain a combined expression element;
And displaying the process that the combined expression element moves along the target track of the corresponding target pattern in the session interface.
In some embodiments, the message display module is further configured to combine an expression element having association logic with the text content with the media element to obtain a combined expression element;
And displaying a plurality of third expression copies of the combined expression element, and displaying the moving process of the plurality of third expression copies.
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the session-based information display method provided by the embodiment of the application when executing the executable instructions stored in the memory.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions, so that the computer device executes the session-based information presentation method according to the embodiment of the present application.
Embodiments of the present application provide a computer-readable storage medium storing executable instructions that, when executed by a processor, cause the processor to perform the session-based information presentation method provided by the embodiments of the present application.
In some embodiments, the computer readable storage medium may be FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disk, or CD-ROM; or it may be any of various devices including one or any combination of the above memories.
In some embodiments, the executable instructions may be in the form of programs, software modules, scripts, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and they may be deployed in any form, including as stand-alone programs or as modules, components, subroutines, or other units suitable for use in a computing environment.
As an example, the executable instructions may, but need not, correspond to files in a file system, and may be stored as part of a file that holds other programs or data, such as in one or more scripts in a hypertext markup language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
As an example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices located at one site or distributed across multiple sites and interconnected by a communication network.
The foregoing is merely exemplary embodiments of the present application and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and scope of the present application are included in the protection scope of the present application.
Claims (15)
1. A method for session-based information presentation, the method comprising:
receiving a session message in a session interface, wherein the session message comprises text content and at least one media element, and the text content and the at least one media element are presented in the session interface as one session message to form message content of the session message; the at least one media element includes one or more of an expression element, a picture element, an audio element, or a video element;
When the message type corresponding to the text content is a message type conforming to association logic, combining an expression element with the text content and the at least one media element, and dynamically displaying the combined expression element and the at least one media element in the session interface;
Wherein the combining of the emoticon and the at least one media element with the text content presence association logic comprises:
when the at least one media element is at least one expression element, combining the expression element with the text content with associated logic and the at least one expression element;
when the at least one media element is at least one picture element, combining an expression element with associated logic of the text content and the at least one picture element;
when the at least one media element is at least one audio element, combining an expression element with associated logic of the text content and the at least one audio element;
When the at least one media element is at least one video element, combining an expression element with associated logic of the text content and the at least one video element;
When the at least one media element is at least two elements of an expression element, a picture element, an audio element and a video element, the expression element with associated logic with the text content and the at least two element combinations are combined.
2. The method of claim 1, wherein when the at least one media element comprises an emoticon, the method further comprises, prior to receiving a session message in a session interface:
presenting a text editing interface, and presenting a text editing box and expression selection function items in the text editing interface;
Responding to a text editing operation triggered by the text editing box and an expression selection operation triggered by the expression selection function item, and presenting text content edited by the text editing operation and expression elements selected by the expression selection operation;
And responding to a message sending operation triggered based on the text editing interface, and sending a session message containing edited text content and selected expression elements.
3. The method of claim 1, wherein when the at least one media element comprises a picture element, the method further comprises, prior to receiving a session message in a session interface:
presenting a text editing interface, and presenting a text editing box and a picture selection function item in the text editing interface;
Responding to a text editing operation triggered by the text editing box and a picture selection operation triggered by the picture selection function item, and presenting text content edited by the text editing operation and picture elements selected by the picture selection operation;
and sending a session message containing edited text and selected picture elements in response to a message sending operation triggered based on the text editing interface.
4. The method of claim 3, wherein when the at least one media element comprises a first number of picture elements and the first number is greater than a second number, the method further comprises:
Presenting the session message including the text content in a session interface, and
And in the session message, independently presenting a second number of picture elements in the first number of picture elements, and overlaying and presenting picture elements except for the second number of picture elements in the first number of picture elements.
5. The method of claim 1, wherein when the at least one media element comprises a video element, the method further comprises:
Responding to a static image interception instruction for a selected video element, intercepting a first frame image of the video element, and determining the intercepted first frame image as a video image corresponding to the video element so as to present the text content and the video image in the session interface by adopting a session message; or alternatively
And responding to a dynamic image interception instruction aiming at the selected video element, intercepting to obtain a dynamic video image corresponding to the video element, determining the dynamic video image as the video image corresponding to the video element, so as to present the text content and the video image in the session interface by adopting a session message, and continuously intercepting a plurality of sequential frame images based on the first frame image of the video element to obtain the dynamic video image.
6. The method of claim 1, wherein the dynamically presenting the combined expressive element and the at least one media element in the conversation interface comprises:
In the session interface, a plurality of first expression copies of the expression element with associated logic with the text content and a plurality of second expression copies of the at least one media element fused in the plurality of first expression copies are displayed, and
And displaying the moving process of the plurality of first expression copies and the plurality of second expression copies.
7. The method of claim 6, wherein when the at least one media element comprises at least one of a video element, an audio element, and a picture element, the method further comprises:
In the moving process of the plurality of first expression copies and the plurality of second expression copies, receiving triggering operation for the second expression copies;
And responding to the triggering operation, presenting a detail page corresponding to the at least one media element, and presenting the content of the at least one media element in the detail page.
8. The method of claim 1, wherein when the at least one media element is an emoji element and the number of media elements exceeds a target number, the combining the emoji element with the text content presence association logic with the at least one media element comprises:
Combining the expression elements with the text content in association logic and the target number of media elements to obtain combined expression elements;
the dynamically displaying the combined expression element and the at least one media element in the session interface includes:
and displaying the moving process of the combined expression element in the session interface.
9. The method of claim 1, wherein the combining the emotive element with the text content presence association logic with the at least one media element comprises:
carrying out superposition combination or parallel combination on the expression element with the text content and the at least one media element to obtain a combined expression element;
the dynamically displaying the combined expression element and the at least one media element in the session interface includes:
And displaying the bouncing process of the combined expression element in the session interface.
10. The method of claim 1, wherein the combining the emotive element with the text content presence association logic with the at least one media element comprises:
Combining the expression element with the text content and the at least one media element to obtain a combined expression element;
the dynamically displaying the combined expression element and the at least one media element in the session interface includes:
And displaying the process that the combined expression element moves along the target track of the corresponding target pattern in the session interface.
11. The method of claim 1, wherein the combining the emotive element with the text content presence association logic with the at least one media element comprises:
Combining the expression element with the text content and the at least one media element to obtain a combined expression element;
the dynamically displaying the combined expression element and the at least one media element in the session interface includes:
and displaying a plurality of third expression copies of the combined expression element in the session interface, and displaying the moving process of the plurality of third expression copies.
12. A session-based information presentation apparatus, the apparatus comprising:
A message receiving module for receiving a session message in a session interface, wherein the session message comprises text content and at least one media element, and the text content and the at least one media element are presented in the session interface as one session message to form message content of the session message; the at least one media element includes one or more of an expression element, a picture element, an audio element, or a video element;
The message display module is used for combining the expression element with the text content and the at least one media element when the text content is of a message type conforming to the association logic, and dynamically displaying the combined expression element and the at least one media element in the session interface;
the message display module is further configured to combine an expression element having association logic with the text content and the at least one expression element when the at least one media element is the at least one expression element;
The message display module is further configured to combine an expression element having association logic with the text content with at least one picture element when the at least one media element is the at least one picture element;
The message display module is further configured to combine an expression element having association logic with the text content and the at least one audio element when the at least one media element is the at least one audio element;
the message display module is further configured to combine an expression element having association logic with the text content with the at least one video element when the at least one media element is the at least one video element;
The message display module is further configured to combine an expression element having association logic with the text content and a combination of at least two elements when the at least one media element is the combination of at least two elements of the expression element, the picture element, the audio element, and the video element.
13. An electronic device, comprising:
a memory for storing executable instructions;
A processor for implementing the session-based information presentation method of any one of claims 1 to 11 when executing executable instructions stored in the memory.
14. A computer readable storage medium storing executable instructions for implementing the session based information presentation method of any one of claims 1 to 11 when executed by a processor.
15. A computer program product comprising a computer program or instructions which, when executed by a processor, implements the session-based information presentation method of any one of claims 1 to 11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010780272.3A CN112817670B (en) | 2020-08-05 | 2020-08-05 | Information display method, device, equipment and storage medium based on session |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010780272.3A CN112817670B (en) | 2020-08-05 | 2020-08-05 | Information display method, device, equipment and storage medium based on session |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112817670A CN112817670A (en) | 2021-05-18 |
CN112817670B true CN112817670B (en) | 2024-05-28 |
Family
ID=75853116
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010780272.3A Active CN112817670B (en) | 2020-08-05 | 2020-08-05 | Information display method, device, equipment and storage medium based on session |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112817670B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113438150B (en) * | 2021-07-20 | 2022-11-08 | 网易(杭州)网络有限公司 | Expression sending method and device |
CN113438149A (en) * | 2021-07-20 | 2021-09-24 | 网易(杭州)网络有限公司 | Expression sending method and device |
CN115695348A (en) * | 2021-07-27 | 2023-02-03 | 腾讯科技(深圳)有限公司 | Expression display method and device electronic device and storage medium |
CN113934349B (en) * | 2021-10-28 | 2023-11-07 | 北京字跳网络技术有限公司 | Interaction method, interaction device, electronic equipment and storage medium |
CN114510182B (en) * | 2022-01-25 | 2024-09-10 | 支付宝(杭州)信息技术有限公司 | Data processing method, device, equipment and medium |
CN115268712A (en) * | 2022-07-14 | 2022-11-01 | 北京字跳网络技术有限公司 | Method, device, equipment and medium for previewing expression picture |
CN115269886A (en) * | 2022-08-15 | 2022-11-01 | 北京字跳网络技术有限公司 | Media content processing method, device, equipment and storage medium |
CN118282993A (en) * | 2022-08-23 | 2024-07-02 | 北京字跳网络技术有限公司 | Method, apparatus, device and storage medium for session message presentation |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105049318A (en) * | 2015-05-22 | 2015-11-11 | 腾讯科技(深圳)有限公司 | Message transmitting method and device, and message processing method and device |
CN107577513A (en) * | 2017-09-08 | 2018-01-12 | 北京小米移动软件有限公司 | A kind of method, apparatus and storage medium for showing painted eggshell |
CN109388297A (en) * | 2017-08-10 | 2019-02-26 | 腾讯科技(深圳)有限公司 | Expression methods of exhibiting, device, computer readable storage medium and terminal |
CN111369645A (en) * | 2020-02-28 | 2020-07-03 | 北京百度网讯科技有限公司 | Expression information display method, device, equipment and medium |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20150009186A (en) * | 2013-07-16 | 2015-01-26 | 삼성전자주식회사 | Method for operating an conversation service based on messenger, An user interface and An electronic device supporting the same |
US10498775B2 (en) * | 2017-08-31 | 2019-12-03 | T-Mobile Usa, Inc. | Exchanging non-text content in real time text messages |
US11145103B2 (en) * | 2017-10-23 | 2021-10-12 | Paypal, Inc. | System and method for generating animated emoji mashups |
US20190379618A1 (en) * | 2018-06-11 | 2019-12-12 | Gfycat, Inc. | Presenting visual media |
Also Published As
Publication number | Publication date |
---|---|
CN112817670A (en) | 2021-05-18 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
REG | Reference to a national code |
Ref country code: HK Ref legal event code: DE Ref document number: 40045014 Country of ref document: HK |
|
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||