CN115190347A - Message processing method, message processing device, electronic equipment and storage medium - Google Patents

Message processing method, message processing device, electronic equipment and storage medium

Info

Publication number
CN115190347A
Authority
CN
China
Prior art keywords
messages
message
rendering
group
cache
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210629314.2A
Other languages
Chinese (zh)
Other versions
CN115190347B (en)
Inventor
李阳
王辉军
纪伟
高文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202210629314.2A priority Critical patent/CN115190347B/en
Publication of CN115190347A publication Critical patent/CN115190347A/en
Application granted granted Critical
Publication of CN115190347B publication Critical patent/CN115190347B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/4316 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/54 Interprogram communication
    • G06F 9/544 Buffers; Shared memory; Pipes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/433 Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/4788 Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The disclosure provides a message processing method, a message processing apparatus, an electronic device, and a storage medium for live broadcasting. The message processing method may include: after entering a live broadcast page, acquiring messages broadcast from a server; caching the acquired messages; and rendering the cached messages at a preset time interval and displaying them in the live page.

Description

Message processing method, message processing device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for processing a live message, an electronic device, a storage medium, and a program product.
Background
With the development of the internet, live broadcasting has become an increasingly widespread form of entertainment in people's lives. Live broadcasts can be divided into text and picture live broadcasts and video live broadcasts. Video live broadcast refers to watching video delivered in real time over the internet using streaming media technology; because it combines elements such as images, text, and sound, it produces a better effect and has gradually become the mainstream form of internet live broadcasting.
During a video live broadcast, live messages are the most common and most frequently used means of interaction: through live messages, the anchor end and the viewing user end realize interaction between the anchor and the viewers, so live messages (such as user comments and the anchor's speech) are continuously scrolled in a display area of the video picture on the user end. However, because live messages are large in volume and high in frequency, they easily cause performance problems on the browser side.
Disclosure of Invention
The present disclosure provides a message processing method, a message processing apparatus, an electronic device, and a storage medium to solve at least the above-mentioned problems.
According to a first aspect of the embodiments of the present disclosure, a method for processing messages in live broadcasting is provided. The message processing method may include: after entering a live broadcast page, acquiring messages broadcast from a server; caching the acquired messages; and rendering the cached messages at a preset time interval and displaying them in the live page.
Optionally, caching the acquired messages may include: for each acquired message, storing the message at the tail of a cache group, where the cache group has a preset size; and, within the preset time interval, deleting the message at the head of the cache group when storing a message causes the number of messages in the cache group to reach the preset size.
Optionally, rendering the cached messages at the preset time interval may include: storing the messages in the cache group at the tail of a rendering group at the preset time interval, where the rendering group has the same preset size as the cache group; and rendering the messages in the rendering group.
Optionally, the message processing method may further include: in response to receiving a request to view historical messages, acquiring a predetermined number of historical messages from the server; for each historical message, storing the historical message starting from the head of the rendering group; deleting the message at the tail of the rendering group when storing a historical message causes the number of messages in the rendering group to reach the preset size, until the predetermined number of historical messages are stored in the rendering group; and rendering the messages in the rendering group.
Optionally, the message processing method may further include: in response to receiving a request to view the latest messages, storing the messages in the cache group starting from the tail of the rendering group; deleting the message at the head of the rendering group when storing a message causes the number of messages in the rendering group to reach the preset size, until all the messages in the cache group are stored in the rendering group, and then emptying the cache group; and rendering the messages in the rendering group.
Optionally, the message processing method may further include: when historical messages are being viewed, caching an input message in response to the user inputting the message; and/or, when historical messages are not being viewed, directly rendering and displaying an input message in the live page in response to the user inputting the message.
According to a second aspect of the embodiments of the present disclosure, there is provided a message processing apparatus for live broadcasting. The message processing apparatus may include: an acquisition module configured to acquire messages broadcast from a server after a live broadcast page is entered; a caching module configured to cache the acquired messages; and a rendering module configured to render the cached messages at a preset time interval and display them in the live page.
Optionally, the caching module may be configured to: for each acquired message, store the message at the tail of a cache group, where the cache group has a preset size; and, within the preset time interval, delete the message at the head of the cache group when storing a message causes the number of messages in the cache group to reach the preset size.
Optionally, the rendering module may be configured to: store the messages in the cache group at the tail of a rendering group at the preset time interval, where the rendering group has the same preset size as the cache group; and render the messages in the rendering group.
Optionally, the acquisition module may be configured to acquire a predetermined number of historical messages from the server in response to receiving a request to view historical messages; and the rendering module may be configured to store each historical message starting from the head of the rendering group, delete the message at the tail of the rendering group when storing a historical message causes the number of messages in the rendering group to reach the preset size, until the predetermined number of historical messages are stored in the rendering group, and render the messages in the rendering group.
Optionally, the rendering module may be configured to, in response to receiving a request to view the latest messages, store the messages in the cache group starting from the tail of the rendering group, delete the message at the head of the rendering group when storing a message causes the number of messages in the rendering group to reach the preset size, until all the messages in the cache group are stored in the rendering group, then empty the cache group, and render the messages in the rendering group.
Optionally, the message processing apparatus may further include an input module configured to receive user input, where, when historical messages are being viewed, the caching module may cache an input message in response to the user inputting the message; and/or, when historical messages are not being viewed, the rendering module may directly render and display an input message in the live page in response to the user inputting the message.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic apparatus, which may include: at least one processor; at least one memory storing computer-executable instructions, wherein the computer-executable instructions, when executed by the at least one processor, cause the at least one processor to perform a message processing method as described above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium storing instructions that, when executed by at least one processor, cause the at least one processor to perform a message processing method as described above.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product, wherein instructions of the computer program product are executed by at least one processor in an electronic device to perform the message processing method as described above.
The technical solutions provided by the embodiments of the present disclosure bring at least the following beneficial effects:
caching the received messages avoids frequent browser rendering caused by frequently received messages; introducing a cache group and a rendering group limits the number of stored messages, which improves data operation efficiency and reduces storage pressure; and directly rendering messages input by the end user avoids the delay of waiting for the broadcast from the server, which improves the user experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is a flowchart of message processing in a conventional live broadcast;
fig. 2 is a flowchart of a message processing method in live broadcast according to an embodiment of the present disclosure;
figs. 3A and 3B are schematic flow diagrams of storing messages in a cache array according to an embodiment of the present disclosure;
figs. 4A and 4B are schematic flow diagrams of storing messages in a render array according to an embodiment of the present disclosure;
fig. 5 is a flowchart of a message processing method in live broadcast according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a message processing apparatus according to an embodiment of the present disclosure;
fig. 7 is a block diagram of a message processing apparatus in live broadcast according to an embodiment of the present disclosure;
fig. 8 is a block diagram of an electronic device according to an embodiment of the disclosure.
Throughout the drawings, it should be noted that the same reference numerals are used to designate the same or similar elements, features and structures.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of embodiments of the disclosure as defined by the claims and their equivalents. Various specific details are included to aid understanding, but these are merely to be considered exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the disclosure. In addition, descriptions of well-known functions and constructions are omitted for clarity and conciseness.
The terms and words used in the following description and claims are not limited to the written meaning, but are used only by the inventors to achieve a clear and consistent understanding of the disclosure. Accordingly, it should be apparent to those skilled in the art that the following descriptions of the various embodiments of the present disclosure are provided for illustration only and not for the purpose of limiting the disclosure as defined by the appended claims and their equivalents.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
In the related art, as shown in fig. 1, after entering a live page, a small number of the latest chat records are loaded for initialization, a persistent live connection is established with the server, and each message broadcast by the server is added to the bottom (i.e., the newest position) of a chat message array (also called a message list); every change to the message list causes the chat message area on the live page to be re-rendered. However, high-frequency messages, i.e., frequent additions to the chat message array, cause the page to re-render repeatedly and become stuck. In addition, when the end user sends a chat message, the message is added to the chat message array only after the message broadcast by the server is received, and the page is then rendered. Because the user's own message can be rendered, and therefore seen, only after the server has broadcast it, there is a certain delay, which degrades the user experience. Furthermore, when the end user scrolls back through historical chat messages, the retrieved historical records are inserted at the top of the chat message array and, after rendering, are presented at the top (i.e., the oldest position) of the chat area. Since both the historical records and new chat messages are added to the same chat message array, the array grows without bound, occupying too much memory and creating storage pressure.
When the frequency of live chat is high and the volume of chat messages is large, the following problems arise: when the chat frequency is high, for example 50 messages per second, 50 additions are made to the bottom of the chat list within one second, so the browser has to re-render the display roughly every 20 milliseconds and the page becomes stuck; when the volume of chat messages is large, reviewing historical messages or continuously receiving new messages stores too many messages in browser memory and occupies too much of it; when the volume of chat messages is large, every message received while reviewing history or continuously receiving new messages triggers a repaint or reflow, and repainting and reflowing a large number of page nodes takes considerable time, so the page stutters and the user experience is poor; and a message sent by the end user is displayed only after the broadcast message is received, so any broadcast delay degrades the user experience, while high-frequency chat or a large number of online users increases that delay.
In general, live broadcast chat involves high-frequency messages and a large message volume, and in this case browser rendering performance and user experience problems easily arise, which seriously affects usability for the user.
To solve the problems in the prior art, the present disclosure decouples receiving messages from rendering them, uses caching to eliminate the frequent browser rendering caused by high-frequency messages, and reasonably reduces the number of messages stored at the front end without affecting the user experience, thereby improving data operation efficiency and reducing storage pressure. In addition, a message sent by the end user is displayed as early as possible without waiting for the server broadcast, which avoids display delay and lets the user receive timely feedback.
Hereinafter, according to various embodiments of the present disclosure, a method and apparatus of the present disclosure will be described in detail with reference to the accompanying drawings.
Fig. 2 is a flowchart of a message processing method in live broadcast according to an embodiment of the present disclosure. The message processing method is applicable to live broadcast scenarios, especially when the chat frequency is high and the message volume is large.
The message processing method according to the present disclosure may be performed by any electronic device. The electronic device may be at least one of a smartphone, a tablet, a laptop computer, a desktop computer, and the like. The electronic device may be installed with a target application for implementing the message processing method of the present disclosure.
Referring to fig. 2, after a live page is entered, messages broadcast from the server are acquired in step S101. After the user enters the live page on a terminal, the terminal can establish a persistent connection with the server to receive the chat messages broadcast by the server.
In step S102, the acquired messages are cached. Messages broadcast by the server side may be cached in "groups". In the present disclosure, a "group" may denote any form of data that has a certain size and in which items can be arranged in order, such as an array, a sequence, or a queue.
For each message broadcast by the server, the message may be stored at the tail of the cache group. The cache group may be set to a predetermined size, i.e., have a predetermined length.
When storing a message causes the number of messages in the cache group to reach the preset size, the message at the head (i.e., the front) of the cache group may be deleted. For example, assume the cache group is set to store 200 messages and already holds 200 messages; when a new message is stored, it is placed at the tail of the cache group as the latest message and the message at the front of the cache group is deleted. This ensures that only a fixed number of messages are kept in the cache group, thereby reducing storage pressure. How messages are stored in the cache group is described in detail below with reference to figs. 3A and 3B.
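By way of illustration only, the bounded cache group described above could be sketched in TypeScript roughly as follows; the type, constant, and function names (ChatMessage, CACHE_LIMIT, cacheGroup, pushToCache) are assumptions made for this sketch and are not identifiers taken from the patent.

```typescript
// Minimal sketch of a fixed-size cache group (assumed names, not from the patent).
interface ChatMessage {
  id: string;      // message identifier
  user: string;    // sender
  content: string; // message body
}

const CACHE_LIMIT = 200;                // preset size of the cache group
const cacheGroup: ChatMessage[] = [];

// Store each broadcast message at the tail; evict the head when the limit is reached.
function pushToCache(message: ChatMessage): void {
  if (cacheGroup.length >= CACHE_LIMIT) {
    cacheGroup.shift();                 // delete the message at the head of the cache group
  }
  cacheGroup.push(message);             // append the new message at the tail
}
```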
In step S103, the cached messages are rendered at the preset time interval and displayed in the live page.
While the live page is displayed, a timed operation may be set for the cache group, that is, the messages in the cache group may be stored into the rendering group at the preset time interval. The preset time interval may be set to 0.5 seconds, 1 second, and so on. In other words, the cached messages are rendered once every preset interval.
The rendering group is used to trigger a rendering operation whenever the data in it changes. The rendering group may have the same size as the cache group, although its size may also be set differently. Giving the rendering group a fixed length allows messages that no longer need to be displayed to be discarded.
Under normal circumstances (for example, when the user performs no operation on the live page), the chat messages in the live page are updated at the preset interval. In this case, the terminal receives the messages broadcast by the server within the preset time interval and stores them in the cache group, respecting the cache group's maximum-size limit during caching. When the preset time interval expires, the messages stored in the cache group are stored into the rendering group, so that the change to the rendering group triggers the rendering operation. Each time the data in the cache group is stored into the rendering group, the cache group may be emptied, so that the messages cached in the next preset time interval are entirely new. Alternatively, new messages may continue to be cached one by one in the manner described above (for example, if the cached messages did not reach the upper limit of the cache group within one preset time interval, caching of new messages continues after the previously stored messages in the next interval) or may replace old ones (for example, once the cached messages reach the upper limit of the cache group, a new message is cached at the tail of the cache group and the head message is deleted to respect the cache group's maximum-size limit).
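A timed flush of the cache group into the rendering group, as described above, might look like the following sketch; it continues the assumed definitions (ChatMessage, CACHE_LIMIT, cacheGroup) from the previous sketch, and the 1-second interval, renderGroup, renderChatArea, and flushCacheToRender names are likewise assumptions.

```typescript
// Continuing the assumed definitions above: move the cache group into the rendering group on a timer.
const FLUSH_INTERVAL_MS = 1000;         // preset time interval, e.g. 1 second
const renderGroup: ChatMessage[] = [];

// Placeholder for whatever actually updates the chat area of the live page.
function renderChatArea(messages: ChatMessage[]): void {
  // e.g. diff `messages` against the currently displayed list and patch the DOM
}

// Move everything cached during the interval into the rendering group, then clear the cache group.
function flushCacheToRender(): void {
  for (const message of cacheGroup) {
    if (renderGroup.length >= CACHE_LIMIT) {
      renderGroup.shift();              // drop the oldest message at the head of the rendering group
    }
    renderGroup.push(message);          // append at the tail of the rendering group
  }
  cacheGroup.length = 0;                // empty the cache group after the flush
  renderChatArea(renderGroup);          // the change to the rendering group triggers re-rendering
}

setInterval(flushCacheToRender, FLUSH_INTERVAL_MS);
```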
When the user views historical messages, a predetermined number of historical messages may be acquired from the server. For example, when the user slides the message list in the live page upward, indicating that the user wants to view earlier historical messages, a predetermined number of historical messages may be acquired from the server.
For each historical message, the historical message may be stored starting from the head of the rendering group. When storing a historical message causes the number of messages in the rendering group to reach the preset size, the message at the tail of the rendering group may be deleted, until the predetermined number of historical messages are stored in the rendering group. For example, according to the ID of the current head message in the rendering group, a predetermined number of messages preceding that ID may be obtained from the server as the response to the request to view historical messages.
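Prepending fetched history while trimming the tail could be sketched as follows, continuing the assumed definitions above; fetchHistoryBefore and showHistory are hypothetical names, and the server call is an assumption rather than an API named in the patent.

```typescript
// Continuing the same sketch: fetch messages older than the current head and prepend them.
declare function fetchHistoryBefore(
  beforeId: string | undefined,
  count: number
): Promise<ChatMessage[]>;              // hypothetical server call

async function showHistory(count: number): Promise<void> {
  const headId = renderGroup[0]?.id;                        // ID of the current head message
  const history = await fetchHistoryBefore(headId, count);  // assumed to arrive newest-first
  for (const message of history) {
    if (renderGroup.length >= CACHE_LIMIT) {
      renderGroup.pop();                // delete the message at the tail of the rendering group
    }
    renderGroup.unshift(message);       // store the historical message from the head
  }
  renderChatArea(renderGroup);
}
```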
When the user views the latest messages, the messages in the cache group are stored from the tail of the rendering group. For example, when the user performs a downward sliding input on the message list in the live page, indicating that the user wants to view the latest messages, the messages in the cache group may be stored from the tail of the rendering group immediately, without waiting for the preset time interval to expire. For example, the data in the cache group may be stored one by one into the rendering group. When storing a message causes the number of messages in the rendering group to reach the preset size, the message at the head of the rendering group may be deleted, until all the messages in the cache group are stored in the rendering group, and the data in the cache group is then emptied. The process of storing messages in the rendering group is described in detail below with reference to figs. 4A and 4B.
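Jumping to the latest messages can reuse the same cache-to-render flush, triggered by the scroll position instead of the timer; the following sketch continues the definitions above, and the '.chat-list' selector and bottom-detection logic are assumptions for illustration.

```typescript
// Continuing the same sketch: flush at once when the user scrolls back to the latest messages.
const chatList = document.querySelector('.chat-list') as HTMLElement; // assumed selector

chatList.addEventListener('scroll', () => {
  const atBottom =
    chatList.scrollTop + chatList.clientHeight >= chatList.scrollHeight - 1;
  if (atBottom && cacheGroup.length > 0) {
    flushCacheToRender();               // same move as the timed flush, run immediately
  }
});
```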
According to another example of the present disclosure, a message input by the end user himself or herself may be handled differently depending on the situation. When historical messages are being viewed, the input message may be cached in response to the user inputting it. When historical messages are not being viewed (for example, when the latest messages are being viewed), the input message is rendered and displayed directly in the live page in response to the user inputting it, so that the user sees his or her own message as early as possible.
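Handling the end user's own message could then be sketched as below, continuing the same assumed definitions; the viewingHistory flag and onUserSend name are illustrative assumptions.

```typescript
// Continuing the same sketch: handle a message typed and sent by the end user.
let viewingHistory = false;             // assumed flag set while the user is scrolled into history

function onUserSend(message: ChatMessage): void {
  if (viewingHistory) {
    pushToCache(message);               // cache it; it will be shown on a later flush
  } else {
    if (renderGroup.length >= CACHE_LIMIT) {
      renderGroup.shift();
    }
    renderGroup.push(message);          // render immediately, without waiting for the server broadcast
    renderChatArea(renderGroup);
  }
}
```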
According to the embodiments of the present disclosure, caching the received messages avoids frequent browser rendering caused by frequently received messages; introducing a cache group and a rendering group limits the number of stored messages, which improves data operation efficiency and reduces storage pressure; and directly rendering messages input by the end user avoids the delay of receiving the broadcast from the server, which improves the user experience.
Figs. 3A and 3B are schematic flow diagrams of storing messages in a cache array according to an embodiment of the present disclosure. According to this embodiment, new messages broadcast by the server are throttled by introducing a cache array. The cache array may be set to a fixed length, for example, 200 messages.
Referring to fig. 3A, after a new message broadcast by the server is received, it is determined whether adding the message would cause the length of the cache array to exceed the set length. If it would not, the new message is pushed into the cache array from its tail. If it would, i.e., the cache array has reached its upper limit (for example, 200 messages are already stored), the head message is popped from the head of the cache array and discarded, and the new message is pushed in from the tail.
A timed operation may be set for the cache array, that is, the messages in the cache array may be stored into the rendering array at the preset time interval for message rendering. The preset time interval may be set to 0.5 or 1 second, but is not limited thereto.
In addition, when a user request to view the latest messages is detected, the messages in the cache array may be stored into the rendering array.
Referring to fig. 3B, the message caching of fig. 3A may be performed within each preset time interval; when a request by the user to view the latest messages is detected, all the data in the cache array may be added to the rendering array and the cache array may be emptied. For example, when the user performs a sliding operation on the message list in the live page, indicating that the user wants to view the latest messages, all messages in the cache array may be stored from the tail of the rendering array for message rendering. When no user request to view the latest messages is detected, the message caching of fig. 3A is performed.
In general, when the user performs no operation on the live page, all messages in the cache array are rendered at the preset time interval.
According to the embodiments of the present disclosure, decoupling the received messages from the rendered messages prevents the browser from being stuck by rendering many times within a short period due to the high frequency of received messages.
Figs. 4A and 4B are schematic flow diagrams of storing messages in a render array according to an embodiment of the present disclosure. The length of the render array may be set the same as the length of the cache array, e.g., the render array may be set to store 200 messages, so that messages that no longer need to be displayed are discarded.
Referring to fig. 4A, for a new message broadcast by the server, when the new message needs to be added to the render array, it is determined whether adding the message would cause the length of the render array to exceed the set length. If it would not, the message is pushed in from the tail of the render array. If it would, the message is pushed in from the tail of the render array and the excess messages are removed from the head.
Referring to fig. 4B, for a historical message acquired from the server, when the historical message needs to be added to the render array, it is determined whether adding the message would cause the length of the render array to exceed the set length. If it would not, the message is pushed in from the head of the render array. If it would, the message is pushed in from the head of the render array and the excess messages are removed from the tail.
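The two cases of figs. 4A and 4B differ only in which end receives the push and which end is trimmed, so a single helper could cover both; the sketch below continues the assumed definitions above, and insertIntoRenderArray is an illustrative name.

```typescript
// Continuing the same sketch: one helper covering fig. 4A (tail) and fig. 4B (head).
function insertIntoRenderArray(message: ChatMessage, from: 'tail' | 'head'): void {
  if (renderGroup.length >= CACHE_LIMIT) {
    // Trim the opposite end so the render array never exceeds its set length.
    if (from === 'tail') {
      renderGroup.shift();              // fig. 4A: new broadcast message, drop the oldest at the head
    } else {
      renderGroup.pop();                // fig. 4B: historical message, drop the newest at the tail
    }
  }
  if (from === 'tail') {
    renderGroup.push(message);          // append the new message at the tail
  } else {
    renderGroup.unshift(message);       // prepend the historical message at the head
  }
}
```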
After a chat message is added to the render array, the browser can render the chat message area according to the messages in the render array and display the chat messages.
According to the embodiments of the present disclosure, setting a length for the render array prevents it from growing too long, occupying too much memory, and affecting other operations that need memory.
Fig. 5 is a flowchart illustrating a message processing method in live broadcast according to an embodiment of the present disclosure.
Referring to fig. 5, after the live page is entered, the chat list and the timed operation of the cache array may be initialized, and a persistent connection may be established with the server to receive the messages it broadcasts. When a broadcast message is received, it may be stored in the cache array; when the timer of the cache array expires, the data in the cache array may be stored into the render array, and the change to the render array triggers rendering of the message display area of the live page.
Before the timer expires, if a request from the user to view the latest messages is received, all messages in the cache array can be stored into the render array from its tail for message rendering, without waiting for the timer to expire.
For a message input by the end user (i.e., a message sent by the end user), different operations may be performed according to the situation. Specifically, when the end user inputs a message, it may be determined whether the user is currently viewing historical messages. If the user is viewing historical messages, the input message may be stored into the cache array; if the user is not currently viewing historical messages (for example, the user is currently viewing the latest messages, or the live page is periodically updating the chat messages based on the timed operation of the cache array), the input message may be added to the render array and rendered directly, so that the user sees his or her own message as early as possible.
In response to a user request to view historical messages, a predetermined number of historical messages can be acquired from the server, the acquired historical messages are then added to the render array one by one from its head, and the messages in the changed render array are rendered.
Fig. 6 is a schematic structural diagram of a message processing device of a hardware operating environment according to an embodiment of the present disclosure.
As shown in fig. 6, the message processing apparatus 500 may include: a processing component 501, a communication bus 502, a network interface 503, an input-output interface 504, a memory 505, and a power component 506. The communication bus 502 is used to implement connection and communication between these components. The input-output interface 504 may include a video display (such as a liquid crystal display), a microphone and speakers, and a user-interaction interface (such as a keyboard, mouse, or touch-input device), and optionally may also include a standard wired interface and a wireless interface. The network interface 503 may optionally include a standard wired interface and a wireless interface (e.g., a wireless fidelity interface). The memory 505 may be a high-speed random access memory or a stable non-volatile memory. The memory 505 may alternatively be a storage device separate from the aforementioned processing component 501.
Those skilled in the art will appreciate that the configuration shown in fig. 6 does not constitute a limitation of the message processing apparatus 500, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 6, the memory 505, which is a kind of storage medium, may include therein an operating system (such as a MAC operating system), a data storage module, a network communication module, a user interface module, a program, and a database.
In the message processing apparatus 500 shown in fig. 6, the network interface 503 is mainly used for data communication with an external electronic apparatus/terminal; the input-output interface 504 is mainly used for data interaction with a user; and the processing component 501 and the memory 505 may be provided in the message processing apparatus 500, which executes the message processing method provided by the embodiments of the present disclosure by having the processing component 501 call the program stored in the memory 505 and the various APIs provided by the operating system.
The processing component 501 may include at least one processor, and the memory 505 has stored therein a set of computer-executable instructions that, when executed by the at least one processor, perform a message processing method according to an embodiment of the disclosure. However, the above examples are merely exemplary, and the present disclosure is not limited thereto.
The processing component 501 may implement control of the components included in the message processing apparatus 500 by executing a program.
By way of example, the message processing apparatus 500 may be a PC computer, tablet device, personal digital assistant, smartphone, or other device capable of executing the set of instructions described above. Here, the message processing apparatus 500 need not be a single electronic device, but can be any collection of devices or circuits that can individually or jointly execute the above-described instructions (or instruction sets). The message processing apparatus 500 may also be part of an integrated control system or system manager, or may be configured as a portable electronic device that interfaces locally or remotely (e.g., via wireless transmission).
In the message processing apparatus 500, the processing component 501 may include a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a programmable logic device, a special-purpose processor system, a microcontroller, or a microprocessor. By way of example, and not limitation, processing component 501 may also include an analog processor, a digital processor, a microprocessor, a multi-core processor, a processor array, a network processor, and the like.
The processing component 501 may execute instructions or code stored in a memory, wherein the memory 505 may also store data. Instructions and data may also be sent and received over a network via the network interface 503, where the network interface 503 may employ any known transmission protocol.
Memory 505 may be integrated with processing component 501, for example, by having RAM or flash memory disposed within an integrated circuit microprocessor or the like. Further, memory 505 may comprise a stand-alone device, such as an external disk drive, storage array, or any other storage device that may be used by a database system. The memory and processing component 501 may be operatively coupled or may communicate with each other, such as through I/O ports, network connections, etc., so that the processing component 501 can read data stored in the memory 505.
Fig. 7 is a block diagram of a message processing apparatus according to an embodiment of the present disclosure.
Referring to fig. 7, the message processing apparatus 600 may include an acquisition module 601, a buffering module 602, a rendering module 603, and an input module 604. Each module in the message processing apparatus 600 may be implemented by one or more modules, and names of the corresponding modules may vary according to types of the modules. In various embodiments, some modules in the message processing apparatus 600 may be omitted, or additional modules may also be included. Furthermore, modules/elements according to various embodiments of the present disclosure may be combined to form a single entity, and thus may equivalently perform the functions of the respective modules/elements prior to combination.
After the live page is entered, the acquisition module 601 may acquire the messages broadcast from the server.
The caching module 602 may cache the acquired messages.
The rendering module 603 may render the cached messages at the preset time interval and display them in the live page.
Optionally, for each acquired message, the caching module 602 may store the message at the tail of a cache group, where the cache group has a preset size, and, within the preset time interval, delete the message at the head of the cache group when storing a message causes the number of messages stored in the cache group to reach the preset size.
Optionally, the rendering module 603 may store the messages in the cache group at the tail of a rendering group at the preset time interval, where the rendering group has the same preset size as the cache group, and render the messages in the rendering group.
Optionally, in response to receiving a request to view historical messages, the acquisition module 601 may acquire a predetermined number of historical messages from the server, and the rendering module 603 may store each historical message starting from the head of the rendering group, delete the message at the tail of the rendering group when storing a historical message causes the number of messages stored in the rendering group to reach the preset size, until the predetermined number of historical messages are stored in the rendering group, and render the messages in the rendering group.
Optionally, in response to receiving a request to view the latest messages, the rendering module 603 may store the messages in the cache group starting from the tail of the rendering group, delete the message at the head of the rendering group when storing a message causes the number of messages stored in the rendering group to reach the preset size, until all the messages in the cache group are stored in the rendering group, empty the cache group, and render the messages in the rendering group.
Optionally, the input module 604 may receive user input. In the case of viewing historical messages, in response to a user entering a message, the caching module 602 may cache the entered message; in the case where the history message is not viewed, in response to the user inputting the message, the rendering module 603 may directly render and display the inputted message in the live page.
The manner of processing live messages has been described in detail above with reference to figs. 2 to 4B and will not be repeated here.
According to an embodiment of the present disclosure, an electronic device may be provided. Fig. 8 is a block diagram of an electronic device 800 according to an embodiment of the present disclosure, which may include at least one memory 802 and at least one processor 801, the at least one memory 802 storing a set of computer-executable instructions that, when executed by the at least one processor 801, perform a message processing method according to an embodiment of the present disclosure.
The processor 801 may include a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a programmable logic device, a special-purpose processor system, a microcontroller, or a microprocessor. By way of example, and not limitation, processor 801 may also include analog processors, digital processors, microprocessors, multi-core processors, processor arrays, network processors, and the like.
The memory 802, which is a storage medium, may include an operating system, a data storage module, a network communication module, a user interface module, a program for performing the message processing method of the present disclosure, and a database.
The memory 802 may be integrated with the processor 801, for example, a RAM or flash memory may be disposed within an integrated circuit microprocessor or the like. Further, memory 802 may comprise a stand-alone device, such as an external disk drive, storage array, or any other storage device usable by a database system. The memory 802 and the processor 801 may be operatively coupled or may communicate with each other, such as through I/O ports, network connections, etc., so that the processor 801 can read files stored in the memory 802.
In addition, the electronic device 800 may also include a video display (such as a liquid crystal display) and a user interaction interface (such as a keyboard, mouse, touch input device, etc.). All components of the electronic device 800 may be connected to each other via a bus and/or a network.
Those skilled in the art will appreciate that the configuration shown in fig. 8 is not intended to be limiting and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components.
According to an embodiment of the present disclosure, there may also be provided a computer-readable storage medium storing instructions that, when executed by at least one processor, cause the at least one processor to perform a message processing method according to the present disclosure. Examples of the computer-readable storage medium herein include: read-only memory (ROM), random-access programmable read-only memory (PROM), electrically erasable programmable read-only memory (EEPROM), random-access memory (RAM), dynamic random-access memory (DRAM), static random-access memory (SRAM), flash memory, non-volatile memory, CD-ROM, CD-R, CD+R, CD-RW, CD+RW, DVD-ROM, DVD-R, DVD-RW, DVD+RW, DVD-RAM, BD-ROM, BD-R LTH, BD-RE, Blu-ray or optical disk memory, hard disk drive (HDD), solid-state drive (SSD), card memory (such as a multimedia card, a Secure Digital (SD) card, or an eXtreme Digital (XD) card), magnetic tape, a floppy disk, a magneto-optical data storage device, an optical data storage device, a hard disk, a solid-state disk, and any other device configured to store a computer program and any associated data, data files, and data structures in a non-transitory manner and to provide them to a processor or computer so that the processor or computer can execute the computer program. The computer program in the computer-readable storage medium described above can be run in an environment deployed in a computer apparatus such as a client, a host, a proxy device, or a server; further, in one example, the computer program and any associated data, data files, and data structures are distributed across networked computer systems so that they are stored, accessed, and executed in a distributed fashion by one or more processors or computers.
According to an embodiment of the present disclosure, there may also be provided a computer program product, in which instructions are executable by a processor of a computer device to perform the above-mentioned message processing method.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements that have been described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A message processing method in live broadcast is characterized by comprising the following steps:
after entering a live broadcast page, acquiring a message broadcasted from a server;
caching the acquired message;
and rendering the cached messages according to a preset time interval and displaying the messages in a live page.
2. The message processing method according to claim 1, wherein caching the acquired message comprises:
for each acquired message, storing the message from the tail of a cache group, wherein the cache group has a preset size;
and deleting the message at the head of the cache group when, within the preset time interval, storing a message in the cache group causes the number of messages stored in the cache group to reach the preset size.
3. The message processing method according to claim 2, wherein rendering the buffered message at a preset time interval comprises:
storing the messages in the cache group from the tail of a rendering group according to the preset time interval, wherein the rendering group has the same preset size as the cache group; and
rendering the messages in the rendering group.
4. The message processing method according to claim 3, further comprising:
in response to receiving a request to view history messages, obtaining a predetermined number of history messages from a server;
for each history message, storing the history message starting from the head of the rendering group;
when storing a historical message causes the number of messages stored in the rendering group to reach the preset size, deleting the message at the tail of the rendering group, until the predetermined number of historical messages are stored in the rendering group;
and rendering the messages in the rendering group.
5. The message processing method according to claim 3, further comprising:
in response to receiving a request to view the most recent message, storing messages in the cache group starting at the end of the render group;
when storing a message causes the number of messages stored in the rendering group to reach the preset size, deleting the message at the head of the rendering group, until all the messages in the cache group are stored in the rendering group, and emptying the cache group;
and rendering the messages in the rendering group.
6. The message processing method according to claim 1, further comprising:
in a case where historical messages are being viewed, caching an input message in response to the user inputting the message; and/or
in a case where historical messages are not being viewed, directly rendering and displaying an input message in the live page in response to the user inputting the message.
7. A message processing apparatus in a live broadcast, the message processing apparatus comprising:
the acquisition module is configured to acquire the message broadcasted from the server after entering a live broadcast page;
the cache module is configured to cache the acquired message;
and the rendering module is configured to render the cached messages according to a preset time interval and display the messages in the live broadcast page.
8. An electronic device, comprising:
at least one processor;
at least one memory storing computer-executable instructions,
wherein the computer-executable instructions, when executed by the at least one processor, cause the at least one processor to perform the message processing method of any of claims 1 to 6.
9. A computer-readable storage medium storing instructions that, when executed by at least one processor, cause the at least one processor to perform a message processing method as claimed in any one of claims 1 to 6.
10. A computer program product in which instructions are executed by at least one processor in an electronic device to perform a message processing method as claimed in any one of claims 1 to 6.
CN202210629314.2A 2022-05-31 2022-05-31 Message processing method, message processing device, electronic equipment and storage medium Active CN115190347B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210629314.2A CN115190347B (en) 2022-05-31 2022-05-31 Message processing method, message processing device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210629314.2A CN115190347B (en) 2022-05-31 2022-05-31 Message processing method, message processing device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115190347A true CN115190347A (en) 2022-10-14
CN115190347B CN115190347B (en) 2024-01-02

Family

ID=83514152

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210629314.2A Active CN115190347B (en) 2022-05-31 2022-05-31 Message processing method, message processing device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115190347B (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130332782A1 (en) * 2012-06-07 2013-12-12 International Business Machines Corporation Background buffering of content updates
CN106790629A (en) * 2017-01-03 2017-05-31 努比亚技术有限公司 Data synchronization unit and its realize the method for data syn-chronization, client access system
CN107302489A (en) * 2017-06-02 2017-10-27 北京潘达互娱科技有限公司 Message display method and device
CN108965098A (en) * 2017-05-18 2018-12-07 北京京东尚科信息技术有限公司 Based on information push method, device, medium and the electronic equipment being broadcast live online
CN109684575A (en) * 2018-10-30 2019-04-26 平安科技(深圳)有限公司 Processing method and processing device, storage medium, the computer equipment of web data
WO2019144743A1 (en) * 2018-01-25 2019-08-01 阿里巴巴集团控股有限公司 Information display method and apparatus
CN110971596A (en) * 2019-11-25 2020-04-07 广州虎牙科技有限公司 Message monitoring method and device, electronic equipment and machine-readable storage medium
CN111414516A (en) * 2020-03-17 2020-07-14 北京字节跳动网络技术有限公司 Live broadcast room message processing method and device, electronic equipment and storage medium
CN111586437A (en) * 2020-04-08 2020-08-25 天津车之家数据信息技术有限公司 Barrage message processing method, system, computing device and storage medium
WO2021027631A1 (en) * 2019-08-09 2021-02-18 北京字节跳动网络技术有限公司 Image special effect processing method and apparatus, electronic device, and computer-readable storage medium
CN113301363A (en) * 2020-12-29 2021-08-24 阿里巴巴集团控股有限公司 Live broadcast information processing method and device and electronic equipment
CN113347488A (en) * 2021-08-04 2021-09-03 腾讯科技(深圳)有限公司 Video rendering method, device, equipment and storage medium
CN113486273A (en) * 2021-07-09 2021-10-08 上海淇馥信息技术有限公司 Front-end information flow page loading method and device and electronic equipment
WO2022062896A1 (en) * 2020-09-22 2022-03-31 北京达佳互联信息技术有限公司 Livestreaming interaction method and apparatus


Also Published As

Publication number Publication date
CN115190347B (en) 2024-01-02

Similar Documents

Publication Publication Date Title
CN108712457B (en) Method and device for adjusting dynamic load of back-end server based on Nginx reverse proxy
JP2014523568A (en) Efficient conditioning
US20240323458A1 (en) Video playback method and apparatus, device, and readable storage medium
WO2020063008A1 (en) Resource configuration method and apparatus, terminal, and storage medium
CN107197359B (en) Video file caching method and device
CN107423128B (en) Information processing method and system
WO2016202215A1 (en) Method and device for previewing dynamic image, and method and device for displaying expression package
CN111163336B (en) Video resource pushing method and device, electronic equipment and computer readable medium
CN112714329A (en) Display control method and device for live broadcast room, storage medium and electronic equipment
CN116132742A (en) Method for determining video playing speed doubling value, video playing method, device and equipment
EP3200430A1 (en) Advertisement data processing method and router
JP6181291B2 (en) Information transmission based on reading speed
CN113342759B (en) Content sharing method, device, equipment and storage medium
CN111383038A (en) Advertisement display method and device of mobile terminal, mobile terminal and storage medium
US9336319B2 (en) Data file and rule driven synchronous or asynchronous document generation
CN115190347B (en) Message processing method, message processing device, electronic equipment and storage medium
CN113568548A (en) Animation information processing method and apparatus
CN116320648B (en) Bullet screen drawing method and device and electronic equipment
CN109831673B (en) Live broadcast room data processing method, device, equipment and storage medium
CN110798748A (en) Audio and video preloading method and device and electronic equipment
CN109144354B (en) Method and equipment for rotating player view layer
US10425494B2 (en) File size generation application with file storage integration
WO2017024976A1 (en) Display method and apparatus for real-time information
JP7318123B2 (en) Method, system and medium for streaming video content using adaptive buffering
JP6260347B2 (en) Program, information processing apparatus, electronic content display system, and display suppression method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant