CN112929730A - Bullet screen processing method and device, electronic equipment, storage medium and system - Google Patents

Bullet screen processing method and device, electronic equipment, storage medium and system

Info

Publication number
CN112929730A
CN112929730A (application CN202110087959.3A)
Authority
CN
China
Prior art keywords
video
story line
video frame
content
barrage
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110087959.3A
Other languages
Chinese (zh)
Inventor
吴乐宝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd filed Critical Beijing QIYI Century Science and Technology Co Ltd
Priority to CN202110087959.3A priority Critical patent/CN112929730A/en
Publication of CN112929730A publication Critical patent/CN112929730A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302 Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N21/435 Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N21/47 End-user applications
    • H04N21/488 Data services, e.g. news ticker
    • H04N21/4884 Data services, e.g. news ticker, for displaying subtitles
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/84 Generation or processing of descriptive data, e.g. content descriptors

Abstract

Embodiments of the invention provide a bullet screen processing method, a bullet screen processing device, electronic equipment, a storage medium, and a bullet screen processing system, relating to the technical field of video coding. Applied to a client, the bullet screen processing method includes: obtaining streaming media data and bullet screen data of a target video, where the bullet screen data includes bullet screen content and a story line identifier associated with that content; each story line identifier uniquely identifies one node in the story line of the streaming media data, and the story line is determined based on the displayed content of the streaming media data. While the streaming media data is played, the bullet screen content corresponding to the current node is displayed according to the story line identifier. This addresses the poor association between existing bullet screens and video content.

Description

Bullet screen processing method and device, electronic equipment, storage medium and system
Technical Field
The present invention relates to the field of video coding technologies, and in particular, to a bullet screen processing method, apparatus, electronic device, storage medium, and system.
Background
Currently, when watching a video, a user often expresses reactions to the video by sending a barrage (bullet screen), or communicates with other users through the barrage. In the prior art, a bullet screen sent by a user is typically associated with the playing time of the video, and the associated bullet screens are later displayed based on that playing time.
When the playing duration of the video changes (for example, after the video is re-clipped) while the barrage remains unchanged, bullet screens keyed to playing time fall out of step with the video's plot, resulting in poor association between the barrage and the video content.
Disclosure of Invention
An object of the embodiments of the present invention is to provide a bullet screen processing method, device, electronic device, storage medium, and system, so as to improve the association between the bullet screen and the video content. The specific technical scheme is as follows:
in a first aspect of the present invention, there is provided a bullet screen processing method, applied to a client, including:
acquiring streaming media data and barrage data of a target video, wherein the barrage data comprises barrage content and story line identifications associated with the barrage content, one story line identification is used for uniquely identifying one node in a story line of the streaming media data, and the story line is determined based on display content of the streaming media data;
and displaying the barrage content corresponding to the current node according to the story line identification in the process of playing the streaming media data.
In a second aspect of the present invention, there is also provided a bullet screen processing method applied to a server, including:
acquiring streaming media data and barrage data of a target video, wherein the barrage data comprises barrage content and story line identifications associated with the barrage content, one story line identification is used for uniquely identifying one node in a story line in the streaming media data, and the story line is determined based on display content of the streaming media data;
and sending the streaming media data and the barrage data to a client so that the client displays barrage content corresponding to the current node according to the story line identifier in the playing process of the streaming media data.
In a third aspect of the present invention, there is also provided a bullet screen processing apparatus, applied to a client, including:
the system comprises a first unit, a second unit and a third unit, wherein the first unit is used for acquiring streaming media data and barrage data of a target video, the barrage data comprises barrage content and story line identifications related to the barrage content, one story line identification is used for uniquely identifying one node in a story line of the streaming media data, and the story line is determined based on display content of the streaming media data;
and the second unit is used for displaying the barrage content corresponding to the current node according to the story line identifier in the process of playing the streaming media data.
In a fourth aspect of the present invention, there is also provided a bullet screen processing apparatus applied to a server, including:
a third unit, configured to obtain streaming media data and barrage data of a target video, where the barrage data includes barrage content and story line identifiers associated with the barrage content, and one story line identifier is used to uniquely identify one node in a story line in the streaming media data, and the story line is determined based on display content of the streaming media data;
and the fourth unit is used for sending the streaming media data and the barrage data to a client so that the client displays barrage content corresponding to the current node according to the story line identifier in the playing process of the streaming media data.
In a fifth aspect of the present invention, there is also provided an electronic device, including a processor, a communication interface, a memory and a communication bus, where the processor, the communication interface, and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of the first aspect or the method steps of the second aspect when executing the program stored in the memory.
In a sixth aspect of the present invention, there is also provided a computer readable storage medium, on which a computer program is stored, which when executed by a processor implements the method steps of the first aspect, or the method steps of the second aspect.
In a seventh aspect of the present invention, there is also provided a system comprising: a client for implementing the method steps of the first aspect; a server for implementing the method steps of the second aspect.
In the barrage processing method provided by the embodiment of the invention, the client first receives streaming media data and barrage data of a target video, where the barrage data includes barrage content and story line identifiers associated with that content; each story line identifier uniquely identifies one node in the story line of the streaming media data, and the story line is determined based on the displayed content of the streaming media data. While the streaming media data is played, the barrage content corresponding to the current node is displayed according to the story line identifier. Consequently, even when the playing time of the streaming media data changes (for example, when the video is re-clipped as described in the background, changing its playing duration), the story line nodes indicated by the displayed content of the streaming media data do not change, so the barrage content displayed by story line identifier remains synchronized with the currently played streaming media data. This improves the association between the barrage and the video content, and improves the barrage experience.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
Fig. 1 is a flowchart of a bullet screen processing method according to an embodiment of the present invention;
FIG. 2 is an interaction diagram of a client and a server in an embodiment of the present invention;
FIG. 3 is a first diagram of a conventional manner of displaying bullet screens by playing time;
FIG. 4 is a second diagram of a conventional manner of displaying bullet screens by playing time;
FIG. 5 is a first diagram of a manner of displaying bullet screens by story line identifier in an embodiment of the present invention;
FIG. 6 is a second diagram of a manner of displaying bullet screens by story line identifier in an embodiment of the present invention;
FIG. 7 is a flowchart of another bullet screen processing method in the embodiment of the present invention;
FIG. 8 is a block diagram of a framework used in demultiplexing in an embodiment of the invention;
fig. 9 is a schematic block diagram of a bullet screen display device according to an embodiment of the present invention;
fig. 10 is a schematic block diagram of a bullet screen processing device according to another embodiment of the present invention;
fig. 11 is a schematic block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below with reference to the drawings in the embodiments of the present invention. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, an embodiment of the present application provides a bullet screen display method applied to a client, including:
step 101, stream media data and barrage data of a target video are obtained, wherein the barrage data comprise barrage content and story line identifications related to the barrage content, one story line identification is used for uniquely identifying one node in a story line of the stream media data, and the story line is determined based on display content of the stream media data.
In this embodiment of the application, the bullet screen data may be acquired while the target video is playing; for example, the client may be set to receive bullet screen data automatically by default during playback.
Acquisition may also mean acquisition in response to a corresponding request. For example, during playback of the target video the default setting is not to receive bullet screen data automatically; the user inputs a bullet screen on the playing interface, the terminal receives the bullet screen request, and the bullet screen data is received in response to that request. This can be determined according to actual needs and is not limited in the embodiments of the present application.
In addition, in some possible embodiments, the streaming media data and the bullet screen data of the target video may be obtained in advance (e.g., received from the server) before the target video is played. That is, embodiments of the present invention also support obtaining the data before playback, which can be regarded as pre-cached data.
The target video is the video to be played, and may include, but is not limited to, online video or local video. Online video may include live video, recorded video, or cloud-application video; the embodiment of the invention places no limitation on the form. Likewise, there is no particular limitation on the source of recorded video: a video platform may directly provide video content such as television shows and movies, or the finished video may be authored by a user. For example, the target video may be a short video recorded by the user; it may be a video obtained by the user clipping an existing video; or it may be a video obtained by the user secondarily authoring an already-created video, for example by dubbing and/or adding music.
Barrage content may include, but is not limited to, at least one of text, emoticons, pictures, or links (e.g., web addresses); the embodiment of the present invention places no particular limitation on the presentation form of the barrage content.
The story line is unique time-sequence data that represents the plot content of the current video, and a story line identifier identifies a unique node on that story line. Each video frame in the target video has a unique story line identifier; in other words, story line identifiers correspond one-to-one with video frames. Since each frame of the target video represents a different point in the plot, each video frame can be understood as having a uniquely corresponding story line node, each story line node can be represented by one story line identifier, and the story line nodes of a sequence of consecutive video frames form the story line.
The embodiment of the invention also places no particular limitation on the representation of the story line identifier. Illustratively, it may include, but is not limited to, at least one of text or numeric values. This embodiment uses numeric identifiers as story line identifiers for illustration only; in other possible embodiments, other forms of story line identifiers may be used, and however they vary, they are within the protection scope of the embodiments of the present application. Note that when the story line identifier is numeric, the identifiers of any two adjacent video frames in the target video may be continuous or discontinuous (e.g., in a clipped video).
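The behaviour described above (one identifier per frame, with identifiers surviving re-clipping and possibly becoming discontinuous) can be sketched as follows. This is an illustrative model only; the names `VideoFrame`, `assign_storyline_ids`, and `clip` are assumptions, not taken from the patent.

```python
from dataclasses import dataclass
from typing import List, Optional, Set

@dataclass
class VideoFrame:
    index: int                           # position in the original stream
    storyline_id: Optional[int] = None   # unique node on the story line

def assign_storyline_ids(frames: List[VideoFrame]) -> None:
    """Assign a unique numeric story line identifier to each frame, in play order."""
    for node_id, frame in enumerate(frames):
        frame.storyline_id = node_id

def clip(frames: List[VideoFrame], removed: Set[int]) -> List[VideoFrame]:
    """Re-clip the video: frames are dropped, but survivors keep their identifiers."""
    return [f for f in frames if f.storyline_id not in removed]

frames = [VideoFrame(i) for i in range(6)]
assign_storyline_ids(frames)
clipped = clip(frames, {2, 3})
# Surviving identifiers are unchanged, and are now discontinuous across the cut.
print([f.storyline_id for f in clipped])  # [0, 1, 4, 5]
```

Because a barrage keyed to identifier 4 still finds frame 4 after the cut, barrage-to-plot association is preserved where a playing-time key would have drifted.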
And 102, displaying the bullet screen content corresponding to the current node according to the story line identification in the process of playing the streaming media data.
The barrage data includes barrage content and the story line identifiers associated with it, and the association between barrage content and video frames can be determined from those identifiers. Displaying barrage content according to its story line identifier while the streaming media data plays therefore improves the association between the barrage content and the video.
In this embodiment, the barrage content may be displayed by overlaying it on the video interface; for example, the content may enter from one side of the playing interface and move toward the other side until it has been fully displayed. This is not limiting: in other possible embodiments the barrage content may be displayed in other forms, and however the display changes, it is within the protection scope of this embodiment.
In this bullet screen processing method, the client first receives streaming media data and bullet screen data of a target video, where the bullet screen data includes bullet screen content and story line identifiers associated with that content; each story line identifier uniquely identifies one node in the story line of the streaming media data, and the story line is determined based on the displayed content of the streaming media data. While the streaming media data is played, the bullet screen content corresponding to the current node is displayed according to the story line identifier. Consequently, even when the playing time of the streaming media data changes (for example, when the video is re-clipped as described in the background, changing its playing duration), the story line nodes indicated by the displayed content do not change, so the bullet screen content displayed by story line identifier remains synchronized with the currently played streaming media data. This improves the association between the bullet screen and the video content, and improves the bullet screen experience.
In this embodiment of the application, the story line identifier corresponding to a video frame can be obtained through an identification field: a field uniquely corresponding to the video frame that stores the frame's story line identifier. When the video frame is generated, an identification field uniquely corresponding to it is generated, and when the frame's unique story line identifier is generated, that identifier is stored in the frame's identification field. Optionally, displaying the barrage content corresponding to the current node according to the story line identifier during playback of the streaming media data includes:
the client decodes the streaming media data to obtain video frames of the target video, wherein any one of the video frames comprises an identification field, and the identification field carries a story line identification corresponding to the video frame;
and in the process of playing the target video, the client displays the barrage content corresponding to the video frame according to the story line identification in the identification field and the story line identification associated with the barrage content.
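A minimal sketch of this display step, assuming the client keeps barrage content keyed by story line identifier; the names `barrage_by_node` and `barrage_for_frame` are illustrative, not from the patent.

```python
# Barrage content as stored by the client, keyed by story line identifier.
barrage_by_node = {
    7: ["great scene!"],
    9: ["plot twist", "did not see that coming"],
}

def barrage_for_frame(storyline_id: int) -> list:
    """Return the barrage content associated with the current node's identifier.

    The identifier comes from the decoded frame's identification field; matching
    it against the identifiers stored with the barrage content selects what to show.
    """
    return barrage_by_node.get(storyline_id, [])

print(barrage_for_frame(9))  # ['plot twist', 'did not see that coming']
print(barrage_for_frame(8))  # [] -> nothing to display for this frame
```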
The story line identifier may be generated at the time of video encoding, although this is not limiting.
Understandably, before playing the target video, the client decodes its streaming media data; after decoding, it obtains the video frames of the target video. Each video frame includes an identification field carrying that frame's story line identifier. Specifically, the identification field may be set when the streaming media data of the target video is encoded: for example, by the client that uploads the target video when it encodes the streaming media data after acquiring the video, or by the server when it encodes the streaming media data. The client case is described in detail here; the server case is described later in the server-side discussion. After acquiring the target video, the client encodes its streaming media data. During encoding it obtains the video frames of the target video, allocates to each frame a uniquely corresponding blank field as that frame's identification field, sets the story line identifier of each frame according to the video content the frame displays, and stores the identifier in the identification field. For example, when numeric identifiers are used as story line identifiers, the value carried by the identification field may be 0, 1, 3, and so on. This is by way of example only and not by way of limitation.
It should be emphasized that, whether this is executed at the server or at the client, the story line identifier corresponding to a video frame is generated when the video is in its initial state, and the identifier is not changed if the video is subsequently modified. An initial video is one whose identification fields do not yet contain story line identifiers; if a video's identification fields already contain story line identifiers, it is not an initial video, the identifiers need not be regenerated, and the existing identifiers are used.
In this embodiment, generating the story line identifiers corresponding to the video frames may include the following steps: generating a story line identifier for each video frame based on the video content of each frame in the target video;
adding corresponding story line identification in the identification field of each video frame;
and coding the video frame added with the story line identifier to obtain the streaming media data of the target video.
Understandably, when generating the story line identifier corresponding to a video frame, this embodiment determines the video content of each frame in the target video and determines the frame's identifier according to the plot that content expresses. Specifically, identifiers uniquely corresponding to each plot point may be generated sequentially, following the order of the plot across consecutively played video frames. Because a story line identifier corresponds only to the plot of its frame, when the video is clipped and a frame is deleted, the corresponding identifier is deleted with it, and the frames and identifiers remaining after clipping still correspond one to one. In branched videos, because different branch plots play different content, story line identifiers are determined from the frame content of each branch plot; this better captures the association between barrage content and each branch, and avoids disordered barrage display.
The story line identifier is added to the identification field of each video frame, and the frames are encoded to obtain the streaming media data of the target video. In this way the story line identifiers are generated, stored in the identification fields, and encoded with the target video into streaming media data; the client then sends the successfully encoded streaming media data to the server for storage.
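The three steps above (generate identifiers, store them in identification fields, encode) can be sketched as follows, with JSON serialization standing in for real stream encoding; all field and function names are illustrative assumptions, not taken from the patent.

```python
import json

def encode_target_video(frame_contents):
    """Generate a story line identifier per frame, store it in the frame's
    identification field, then 'encode' the frames (serialization stands in
    for a real video codec here)."""
    frames = []
    for node_id, content in enumerate(frame_contents):
        frames.append({
            "identification_field": node_id,  # story line identifier for this frame
            "content": content,
        })
    return json.dumps(frames)

stream = encode_target_video(["scene A", "scene B", "scene C"])

# A client decoding the stream recovers each frame with its identifier intact.
decoded = json.loads(stream)
print([f["identification_field"] for f in decoded])  # [0, 1, 2]
```

The key design point is that the identifier travels inside the frame data itself, so any later redistribution or clipping of the stream carries the identifiers along unchanged.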
Thus, with story line identifiers generated by the above steps, the client can, when playing the target video, quickly determine the identifier uniquely corresponding to each video frame based on the plot its content expresses. No matter how the target video is later clipped or adjusted, the displayed content of each frame does not change; determining story line identifiers from video content, and associating barrage content through those identifiers, therefore improves the association between video frames and barrage content.
It should be noted that the target video may be an initial video directly acquired by the client, or a secondary video that has been forwarded or clipped by a client. When it is an initial video, the client or the server may set the identification fields directly. When it is a secondary video, if identification fields already exist in the video they are used directly; if not, the client sets identification fields for the target video.
After receiving the streaming media data from the server, the client decodes it to obtain the video frames of the target video and the story line identifier corresponding to each frame, and during playback displays the barrage content associated with each story line identifier.
Specifically, one manner of displaying the bullet screen content is as follows.
First, during playback of the target video, the currently playing target video frame is obtained; barrage content is then obtained according to the story line identifier of that frame. In this embodiment, one segment of barrage content is fetched at a time, to avoid the information overload of fetching too much at once. For example, a segment of barrage content is obtained starting from the currently playing target video frame.
Here, a segment of barrage content may cover a span of time: for example, if the currently playing target video frame sits at the 12-minute mark of the whole target video, the barrage content for the 12 to 12.5-minute period is acquired. The time period is only an example and is not limiting; in other feasible embodiments it may be adjusted within a certain range, and any such modification is within the scope of this embodiment.
Alternatively, a segment may cover a range of video frames. For example, if the currently playing target frame is the nth video frame, the barrage content corresponding to frames n through n+1000 is obtained. This is exemplary only and not limiting; in other possible embodiments the fetched range may be adjusted within a certain number of frames.
Note that during playback each video frame is displayed only briefly, and the next frame quickly becomes the new current frame. The target video frame may therefore be whichever frame of the target video is playing at a given moment, and the first target identifier is the story line identifier corresponding to that frame. With short intervals between adjacent frames, fetching barrage content by segment lets the target barrage content associated with the first target identifier be displayed more quickly, avoids re-fetching the barrage content for every frame's identifier, and avoids the latency caused by too many fetches.
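A sketch of segment-wise fetching under the frame-range variant above; the function name is an assumption, and the default window of 1000 nodes is taken from the n to n+1000 example in the text, not a fixed protocol value.

```python
def fetch_barrage_segment(barrage_by_node, start_id, window=1000):
    """Fetch barrage content for story line nodes in [start_id, start_id + window].

    One call covers many upcoming frames, so the client need not query again
    for every individual frame identifier.
    """
    return {node_id: msgs
            for node_id, msgs in barrage_by_node.items()
            if start_id <= node_id <= start_id + window}

store = {5: ["early"], 1200: ["mid"], 2500: ["late"]}
print(fetch_barrage_segment(store, 1000))  # {1200: ['mid']}
```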
In some possible embodiments, the user may also perform the bullet screen sending operation in real time during the playing process of the target video.
Optionally, as shown in fig. 2, the bullet screen processing method further includes:
the method comprises the steps that a client side obtains a video frame corresponding to bullet screen sending operation and a video frame identification corresponding to the video frame under the condition that the bullet screen sending operation is received;
the client sends a barrage sending request to the server, wherein the barrage sending request comprises barrage content and the video frame identifier, and the video frame identifier comprises: at least one of a storyline logo or a time logo.
In a possible implementation manner, the video frame identifier includes a story line identifier, and understandably, in the process of playing the target video, if a barrage sending operation is received, the video frame corresponding to the barrage sending operation and the story line identifier corresponding to the video frame are acquired.
Furthermore, the bullet screen content corresponding to the bullet screen sending operation can be displayed in real time on a video playing interface. And sending a barrage sending request to the server, wherein the barrage sending request comprises barrage content and a story line identifier corresponding to the video frame. Therefore, the server can store the barrage content and the story line identification in a correlated manner, so as to facilitate further subsequent display.
In yet another feasible implementation manner, the video frame identifier includes a time identifier. Understandably, in the process of playing the target video, if a barrage sending operation is received, the barrage content and the time identifier corresponding to the barrage sending operation are acquired, and the client displays the barrage content corresponding to the barrage sending operation in real time on the video playing interface. Meanwhile, the client determines the video frame according to the time identifier, then determines the story line identifier corresponding to that video frame, associates the barrage content with the story line identifier, and sends the barrage content and the story line identifier to the server for storage.
In another feasible implementation manner, the client may further send the time identifier to the server, and the server determines the video frame according to the time identifier, then determines the story line identifier corresponding to the video frame according to the video frame, and further stores the barrage content and the story line identifier in an associated manner, so as to facilitate further subsequent display.
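The two client-side variants above (sending a story line identifier directly, or sending a time identifier for later resolution) can be sketched as a single request builder. This is an illustrative Python sketch; the request field names and the optional time-to-storyline mapping are assumptions, not part of the patent.

```python
# Hypothetical sketch of building a barrage-send request. The video frame
# identifier carried in the request is either a storyline identifier or a
# time identifier; field names here are illustrative assumptions.

def build_barrage_request(content, storyline_id=None, time_id=None,
                          time_to_storyline=None):
    """If only a time identifier is known and a time->storyline mapping is
    available, resolve the storyline identifier on the client; otherwise
    forward the time identifier so the server can resolve it."""
    if storyline_id is None and time_id is not None and time_to_storyline:
        storyline_id = time_to_storyline.get(time_id)
    request = {"content": content}
    if storyline_id is not None:
        request["storyline_id"] = storyline_id
    elif time_id is not None:
        request["time_id"] = time_id
    return request


print(build_barrage_request("great plot!", time_id=720,
                            time_to_storyline={720: 42}))
```

Either way, the server ends up able to store the barrage content keyed by a story line identifier, as described below.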
In some feasible embodiments, when the story line identifier is generated, the story line identifier corresponding to each video frame may be automatically generated according to an ascending order of the video content of each video frame in the playing process. Generating a storyline identifier corresponding to each video frame based on the video content of each video frame in the target video, including:
and automatically generating story line identifications corresponding to the video frames according to the sequence of the video contents of the video frames in the playing process and the ascending sequence. Alternatively, in the presence of a branching video, storyline identifications are determined for video frame content of each of the different branching scenarios.
Understandably, when the numerical identifier is adopted as the storyline identifier, the generated storyline identifier may be 0, 1, 3 or the like, so that the storyline identifier uniquely corresponding to the video content may be quickly generated. This is by way of example only and not by way of limitation.
It is worth emphasizing that, in this embodiment, the sequence of the video content of each video frame in the playing process refers to the initial playing sequence of the video frames. For example, a segment of video includes a first video frame, a second video frame, a third video frame and a fourth video frame in the initial playing sequence. In the prior art, the displayed bullet screen content is determined based on the playing time of the video, so after the video is clipped the correspondence between playing time and bullet screen changes and no longer matches. This may cause the bullet screen to be disconnected from the scenario of the current video, and users who watch later may find the bullet screen perplexing and disturbing.
For example, in a re-clipping scene, the playing time of a video changes. As shown in fig. 3, in the case of a normal, unclipped video, the bullet screen of a user corresponds to a video frame based on the playing time of that video frame, from the beginning of the video to its end; as long as the playing time of the video frames does not change, the bullet screen content is displayed based on the playing time. For example, if the first video frame is played at 8:00, the bullet screen content at time 8:00 is displayed together with the first video frame.
As shown in fig. 4, when the video is clipped, the playing time of the video frames also changes. For example, after the second and third video frames are clipped out, the subsequent fourth video frame is displayed within the time at which the original second and third video frames were played. In the prior art, the barrage content is displayed according to the time sequence; then, as shown in fig. 4, the barrage content originally corresponding to the second and third video frames is still displayed on the interface, now alongside the fourth video frame, producing the misalignment indicated by the arrow in fig. 4, so that the video content and the barrage content no longer correspond synchronously. When the user watches the clipped target video, playback jumps directly from the first video frame to the fourth video frame, yet while the video content of the fourth video frame is displayed, the barrage content of the second and third video frames is shown; the user may not understand these barrages, which causes viewing trouble.
In contrast, the method according to the present embodiment generates a storyline identifier corresponding to each video frame, and the barrage content is displayed according to the storyline identifier. For example, the story line identifier of the first video frame may be 0, that of the second video frame 1, that of the third video frame 2, and that of the fourth video frame 3, so that the story line identifiers corresponding to the target video are 0, 1, 2 and 3. At this time, as shown in fig. 5, when the video needs to be clipped, for example the second and third video frames are clipped and the first and fourth video frames are retained, the story line identifiers corresponding to the target video become 0 and 3, and only the bullet screen content corresponding to 0 and 3 is displayed. In other words, in the video clip scene, after a video frame is clipped, the bullet screen content corresponding to that video frame is no longer displayed. Therefore, no matter whether the target video is clipped or otherwise edited, the correspondence between story line identifiers and video frames is not affected, and the relevance between the video frames and the barrage content determined based on the story line identifiers is improved.
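The fig. 5 clipping example above can be sketched in a few lines of Python. This is an illustrative reconstruction under stated assumptions: the frame labels, the dict-based barrage store, and the function names are invented for the example.

```python
# Sketch of the fig. 5 example: storyline identifiers are assigned in the
# initial playing order; after the second and third frames are clipped,
# only barrage associated with the surviving identifiers (0 and 3) shows.

def assign_storyline_ids(frames):
    # ascending identifiers following the initial playing order
    return {frame: i for i, frame in enumerate(frames)}


def visible_barrage(remaining_ids, barrage_by_storyline):
    # only barrage whose storyline identifier survived the clip is shown
    return {sid: barrage_by_storyline[sid]
            for sid in remaining_ids if sid in barrage_by_storyline}


ids = assign_storyline_ids(["f1", "f2", "f3", "f4"])  # f1->0 ... f4->3
barrage = {0: ["hi"], 1: ["cut scene"], 2: ["also cut"], 3: ["ending"]}
remaining = [ids["f1"], ids["f4"]]                    # frames kept after clipping
print(visible_barrage(remaining, barrage))            # {0: ['hi'], 3: ['ending']}
```

Because the identifiers travel with the frames rather than with the timeline, no re-synchronization is needed after editing.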
In addition, in the application scenario of branching videos, when branching scenarios exist, different branches share the same playing time. Therefore, if the barrage content and the playing content are associated using the existing timeline in a branching-scenario application, cases where the barrage does not correspond to the scenario are likely to occur. For example, the video frame of branch 1 is currently being played, but the displayed barrage content may include, or consist entirely of, barrage corresponding to the video frames of branch 2; this would spoil the plot of branch 2 and also cause trouble to the user, affecting the interactivity of the branching video.
It should be noted that the playing contents of different branching scenarios are different. Therefore, with the bullet screen processing method provided in the embodiment of the present application, as shown in fig. 6, a story line identifier is determined for the video frame content of each branching scenario, and the bullet screen content corresponding to the streaming media data of the currently played branching scenario is displayed based on the story line identifier. As shown in fig. 6, when the video frame corresponding to branch 1 is played, the content of bullet screen 1 corresponding to that video frame is shown based on the story line identifier; when the video frame corresponding to branch 2 is played, the content of bullet screen 2 corresponding to that branch is shown based on the story line identifier. In this way, the association between the barrage content and the branching scenario is better realized, disordered barrage display is avoided, the plot of the other branch is not revealed in advance, the barrage content fits the video content more closely, and the user experience of branching scenarios is improved.
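The branch isolation in fig. 6 can be sketched similarly. This is an illustrative Python sketch; the branch names, the identifier sets per branch, and the barrage texts are all assumptions made up for the example.

```python
# Sketch of fig. 6: each branch's frames carry their own storyline
# identifiers, so filtering barrage by the currently played branch's
# identifiers never leaks barrage across branches, even though both
# branches occupy the same playing time.

branch_storylines = {"branch1": {10, 11}, "branch2": {20, 21}}
barrage = {10: "barrage 1a", 11: "barrage 1b",
           20: "barrage 2a", 21: "barrage 2b"}


def barrage_for_branch(branch):
    # show only the barrage whose storyline identifier belongs to this branch
    return sorted(barrage[sid] for sid in branch_storylines[branch])


print(barrage_for_branch("branch1"))  # only branch 1's barrage is shown
```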
Corresponding to the client side, an embodiment of the present application further provides a bullet screen processing method, applied to a server, please refer to fig. 7, including:
Step 701, a server acquires streaming media data and barrage data of a target video, wherein the barrage data comprises barrage content and story line identifiers associated with the barrage content, one story line identifier is used for uniquely identifying one node in a story line in the streaming media data, and the story line is determined based on display content of the streaming media data;
step 702, the server sends the streaming media data and the barrage data to the client, so that the client displays the barrage content corresponding to the current node according to the story line identifier in the playing process of the streaming media data.
The target video may be understood as a video to be played. The barrage content may be text content or emoticon content, and the storyline identifier may be text or a numerical value. It should be noted that the storyline identifier needs to correspond uniquely to a video frame: each frame of the target video represents a different storyline, so each video frame can be understood as having a uniquely corresponding storyline, and the storyline identifier is an identifier representing that storyline, which may be a text identifier or a numerical identifier. In this embodiment, a numerical identifier is used as the storyline identifier for illustration; this is only an example and not a limitation, and in other possible embodiments other forms of storyline identifier may be used, all of which fall within the protection scope of the embodiments of the present application.
According to the barrage processing method, the streaming media data and the barrage data corresponding to the target video are sent to the client, and the barrage data comprise barrage content and story line identification corresponding to the video frame related to the barrage content, so that the client can decode the streaming media data to obtain the video frame of the target video.
Here, it is worth emphasizing the generation of the storyline identifier corresponding to the video frame. In this embodiment, the storyline identifier is generated at the server; alternatively, it may also be generated at the client. In either case, the storyline identifier corresponding to each video frame is generated when the video is in its initial state, and even if the video is modified subsequently, the storyline identifier is no longer changed. The initial video is a video whose identification field does not yet include a storyline identifier; if the identification field of the video already includes a storyline identifier, the video is not an initial video, the storyline identifier does not need to be regenerated, and the existing storyline identifier is used.
Specifically, the detailed steps of generating the storyline identifier are described by taking generation at the server as an example. After the server obtains the target video, it encodes the streaming media data of the target video. During encoding, the server obtains the multiple video frames of the target video and allocates to each video frame a uniquely corresponding blank field for storing that frame's storyline identifier, which serves as the frame's identification field. The server then sets the storyline identifier corresponding to each video frame according to the displayed video content of that frame and stores the storyline identifier in the identification field of the video frame. For example, when a numerical identifier is used as the storyline identifier, the value carried by the identification field may be 0, 1, or 3, etc. This is by way of example only and not by way of limitation.
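The server-side assignment step above can be sketched as follows. This is a minimal Python model, not the real encoder: the `VideoFrame` class, its `id_field` dict, and the key name `storyline_id` are stand-ins introduced for illustration.

```python
# Sketch of the encoding-time step described above: each video frame gets
# a blank identification field, and a storyline identifier derived from
# the order of the displayed content is written into it.

from dataclasses import dataclass, field


@dataclass
class VideoFrame:
    content: str
    id_field: dict = field(default_factory=dict)  # blank identification field


def set_storyline_ids(frames):
    for i, f in enumerate(frames):          # ascending order of video content
        f.id_field["storyline_id"] = i      # stored in the identification field
    return frames


frames = set_storyline_ids([VideoFrame("opening"), VideoFrame("climax")])
print([f.id_field["storyline_id"] for f in frames])  # [0, 1]
```

In the real system the identification field would be carried inside the encoded stream rather than a Python object, but the assignment logic is the same.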
In some possible embodiments, the server may further receive a bullet screen sending request from the client, and store bullet screen content in the bullet screen sending request and a story line identifier corresponding to a video frame associated with the bullet screen content in an associated manner, which is specifically described below.
As shown in fig. 2, the server receives a bullet screen sending request from the client;
the server acquires the bullet screen content and the video frame identification in the bullet screen sending request, wherein the video frame identification comprises: at least one of a storyline identification or a time identification;
and the server stores the barrage content and the storyline identifier in an associated manner.
In one possible implementation, the video frame identification comprises a storyline identification, and understandably, the client sends a barrage sending request comprising the storyline identification and barrage content to the server. The server stores the bullet screen content in the bullet screen sending request and the story line identifier corresponding to the video frame associated with the bullet screen content in an associated manner, and further updates the bullet screen content in the bullet screen sending request to the existing bullet screen content of the target video according to the story line identifier, so that the streaming media data and the bullet screen data of the target video can be updated, and the updated bullet screen can be conveniently displayed subsequently.
In another feasible implementation manner, the video frame identifier includes a time identifier. Understandably, in the process of playing the target video, if an operation of the user sending a bullet screen is detected, the client acquires the bullet screen content and the time identifier corresponding to the bullet screen sending operation and displays the bullet screen content in real time on the video playing interface. Meanwhile, the client determines the current video frame according to the time identifier, then determines the story line identifier corresponding to the current video frame, associates the bullet screen content with the story line identifier, and sends a bullet screen sending request carrying the bullet screen content and the video frame identifier to the server. Correspondingly, after receiving the barrage sending request, the server stores the barrage content and the video frame identifier in an associated manner.
In another possible implementation, the client may also directly send the time identifier to the server, and the server receives a bullet screen sending request with bullet screen content and time identifier. Therefore, the server side can determine the video frame corresponding to the time identifier according to the corresponding relation between the time identifier and the video frame, and then determine the story line identifier corresponding to the video frame according to the corresponding relation between the video frame and the story line identifier. Or, the server may also preset a correspondence between the time identifier and the story line identifier in advance, so that the server may determine, based on the correspondence, the story line identifier corresponding to the time identifier. Further, the server can store the bullet screen content and the story line identification in an associated manner so as to facilitate further subsequent display.
In summary, when the video frame identifier sent to the server by the client is the story line identifier, the server stores the story line identifier in association with the barrage content in the barrage sending request; when the video frame identification sent to the server by the client is the time identification, the server determines the story line identification corresponding to the time identification according to the time identification, and further stores the barrage content and the story line identification in an associated manner. That is, in the embodiment of the present invention, no matter what video frame identifier is sent by the client, the server may store the storyline identifier and the barrage content in an associated manner.
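The server-side resolution paths summarized above can be sketched as follows. This Python sketch is illustrative only; the two mapping dictionaries (time to frame, time to storyline), the in-memory `db`, and all names are assumptions introduced for the example.

```python
# Sketch of the two server-side resolution paths described above: either
# time -> video frame -> storyline identifier via two stored mappings, or
# a preset direct time -> storyline mapping; then associated storage.

def resolve_storyline(frame_to_storyline, time_id,
                      time_to_frame=None, time_to_storyline=None):
    if time_to_storyline is not None:         # preset direct correspondence
        return time_to_storyline[time_id]
    frame = time_to_frame[time_id]            # time identifier -> video frame
    return frame_to_storyline[frame]          # video frame -> storyline id


def store_barrage(db, content, storyline_id):
    # store the barrage content and the storyline identifier in association
    db.setdefault(storyline_id, []).append(content)


db = {}
sid = resolve_storyline({"frame7": 7}, time_id=300,
                        time_to_frame={300: "frame7"})
store_barrage(db, "nice!", sid)
print(db)  # {7: ['nice!']}
```

Whichever identifier the client sends, the stored record is ultimately keyed by the story line identifier, which is what the display path consumes.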
In some possible implementations, storyline identifications corresponding to video frames may be generated from video content of the video frames in the target video. The method comprises the following specific steps:
generating a storyline identifier corresponding to each video frame based on the video content of each video frame in the target video;
adding corresponding story line identification in the identification field of each video frame;
and coding the video frame added with the story line identifier to obtain the streaming media data of the target video.
Understandably, in the embodiment, when the storyline identifier corresponding to the video frame is generated, the video content of each video frame in the target video is determined, and the storyline identifier corresponding to the video frame is determined according to the storyline expressed by the video content. Specifically, storyline identifiers uniquely corresponding to each storyline may be sequentially generated according to the sequence of storylines of video content of continuously played video frames. Because the storyline identification is only corresponding to the storyline of the video frame, when the video is cut, the video frame is deleted, the storyline identification corresponding to the video frame is correspondingly deleted, and the video frame and the storyline identification which are remained after the cutting are still in one-to-one correspondence. In the branch video, because the playing contents of different branch scenarios are different, the story line identifier is determined according to the video frame content of each branch scenario in different branch scenarios, so that the association relationship between the barrage content and the branch scenarios can be better realized, and the situation of disordered barrage display is avoided.
Adding the story line identification to the identification field of the video frame, and coding each video frame to obtain the streaming media data of the target video, so that the story line identification corresponding to the video frame can be generated, and the story line identification is stored in the identification field so as to code the target video to obtain the streaming media data, and further, the client sends the streaming media data of the target video after the successful coding to the server for storage.
Therefore, by generating the story line identifier through the above steps, the client can, when playing the target video, quickly determine the story line identifier uniquely corresponding to a video frame based on the story line expressed by its video content. No matter whether the target video is clipped or otherwise adjusted during use, the played video content of a given video frame does not change; because the story line identifier is determined based on the video content and the barrage content is associated via the story line identifier, the association between video frames and barrage content is improved.
It should be noted that, in some application scenarios, each video frame of the target video includes an image frame together with an audio frame and a subtitle frame corresponding to that image frame. In this case, since the image content of the image frame constitutes the video content of the combined video frame, the story line identifier is determined based on the image frame. In this embodiment, the generation of the story line identifier is described by taking a target video in FFmpeg format as an example.
Story line identifiers corresponding one-to-one to each image frame are generated based on the image content of the image frames and are recorded as story_line; this name is only an example and not a limitation, and in other feasible embodiments the identifier may take other forms or other names. In a target video in FFmpeg format, a video frame includes an image frame, an audio frame corresponding to the image frame, and a subtitle frame corresponding to the image frame; to obtain the final video frame, the image frame, audio frame and subtitle frame must therefore be combined, i.e., multiplexed. For the sake of distinction, in the embodiment of the present application, the identification field for storing the storyline identifier in the image frame is denoted AVFrame, and the identification field for storing the storyline identifier in the video frame is denoted AVPacket. That is, during multiplexing, the value of story_line in the identification field AVFrame of each image frame is directly assigned to the identification field AVPacket of the corresponding video frame. In this way, the streaming media data can be obtained by encoding each image frame, the audio frame corresponding to the image frame, the subtitle frame corresponding to the image frame, and the story line identifier corresponding to the image frame. The storyline identifier of a video frame and that of its corresponding image frame thus remain consistent, the identifier does not need to be generated again, and it is convenient to use.
During demultiplexing, the framework shown in fig. 7 is used, and the story_line in the identification field AVPacket is directly copied to the identification field AVFrame of the image frame corresponding to the video frame. If the copied story_line value is empty, indicating that no corresponding story line identifier was generated during encoding, story line identifiers are created: the story line identifier of the initial image frame starts at 0, and the value of the identifier gradually increases following the frame sequence of the image frames. In this way, the story line identifier uniquely corresponding to each image frame can be quickly determined, which facilitates quickly determining the association between image frames and bullet screens. Alternatively, in other possible embodiments, the assignment for the initial image frame may also start from a non-zero value; this is only an example and not a limitation.
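The demultiplexing fallback above can be sketched as follows. Note this is a Python model of the copy-or-create logic only, not the real FFmpeg API: the dicts standing in for AVPacket and AVFrame identification fields, and the key name `story_line`, are assumptions for illustration.

```python
# Sketch of the demultiplexing fallback described above: the story_line
# value is copied from each packet's identification field to the decoded
# image frame's field; if no identifier was written at encode time,
# ascending identifiers are created, starting from 0 for the initial frame.

def demux_story_lines(packets):
    frames, next_id = [], 0
    for pkt in packets:
        sid = pkt.get("story_line")
        if sid is None:            # no identifier generated during encoding
            sid = next_id          # create one, ascending by frame sequence
        frames.append({"story_line": sid})
        next_id = sid + 1
    return frames


print(demux_story_lines([{"story_line": 5}, {}, {}]))
```

In this sketch, missing identifiers continue ascending from the last known value, so frames stay in one-to-one correspondence with identifiers either way.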
In summary, in the bullet screen processing method, the story line identifier is added to the identification field of the video frame, and the bullet screen content is played based on the story line identifier. No matter how the video is edited later, the story line identifier corresponding to a video frame does not change, so the story line identifiers do not need to be rearranged and the bullet screen content can be automatically associated and played based on them. This solves problems such as the bullet screen being disjointed from the story in scenarios like video editing and branching stories, where bullet screens previously could not be well supported.
Referring to fig. 9, an embodiment of the present application further provides a bullet screen display device 900, applied to a client, including:
a first unit 901, configured to obtain streaming media data and barrage data of a target video, where the barrage data includes barrage content and story line identifiers associated with the barrage content, where one story line identifier is used to uniquely identify one node in a story line of the streaming media data, and the story line is determined based on display content of the streaming media data;
a second unit 902, configured to display, according to the story line identifier, barrage content corresponding to the current node in the process of playing the streaming media data.
Optionally, the second unit 902 is specifically configured to:
decoding the streaming media data to obtain video frames of the target video, wherein any one of the video frames comprises an identification field, and the identification field carries a story line identification corresponding to the video frame;
and in the process of playing the target video, displaying the barrage content corresponding to the video frame according to the story line identification in the identification field and the story line identification associated with the barrage content.
Optionally, the apparatus further comprises:
the fifth unit is used for acquiring a video frame corresponding to the bullet screen sending operation and a story line identifier corresponding to the video frame under the condition that the bullet screen sending operation is received;
and the sixth unit is used for sending a barrage sending request to the server, wherein the barrage sending request comprises barrage content and a story line identifier corresponding to the video frame.
Optionally, the apparatus further comprises:
a seventh unit, configured to generate, based on video content of each video frame in the target video, a storyline identifier corresponding to each video frame;
the eighth unit is used for adding corresponding story line identifiers in the identifier fields of the video frames;
a ninth unit, configured to encode the video frame to which the story line identifier is added, to obtain the streaming media data of the target video.
Optionally, the seventh unit is specifically configured to:
and automatically generating the story line identifiers corresponding to the video frames in ascending order, according to the sequence in which the video content of each video frame appears during playing.
The bullet screen display device 900 can implement the processes in the bullet screen processing method embodiments, and for avoiding repetition, the details are not repeated here.
Referring to fig. 10, an embodiment of the present application further provides a bullet screen processing apparatus 1000, which is applied to a server, and includes:
a third unit 1001, configured to obtain streaming media data and barrage data of a target video, where the barrage data includes barrage content and story line identifiers associated with the barrage content, and one story line identifier is used to uniquely identify one node in a story line in the streaming media data, and the story line is determined based on display content of the streaming media data;
a fourth unit 1002, configured to send the streaming media data and the barrage data to a client, so that the client displays barrage content corresponding to a current node according to the story line identifier in a playing process of the streaming media data.
Optionally, the apparatus further comprises:
a tenth unit, configured to receive a bullet screen sending request from the client;
an eleventh unit, configured to obtain barrage content in the barrage sending request and a storyline identifier corresponding to a video frame;
and the twelfth unit is used for storing the barrage content in association with the story line identifier.
Optionally, the apparatus further comprises:
a thirteenth unit, configured to generate, based on the video content of each video frame in the target video, a storyline identifier corresponding to each video frame;
a fourteenth unit, configured to add a corresponding storyline identifier to the identifier field of each video frame;
and a fifteenth unit, configured to encode the video frame to which the storyline identifier is added, so as to obtain the streaming media data of the target video.
Optionally, the thirteenth unit is specifically configured to:
and automatically generating the story line identifiers corresponding to the video frames in ascending order, according to the sequence in which the video content of each video frame appears during playing.
The bullet screen processing apparatus 1000 can implement each process in the bullet screen processing method embodiments, and for avoiding repetition, the details are not repeated here.
An embodiment of the present invention further provides an electronic device, as shown in fig. 11, including a processor 1101, a communication interface 1102, a memory 1103, and a communication bus 1104, where the processor 1101, the communication interface 1102, and the memory 1103 complete mutual communication through the communication bus 1104.
A memory 1103 for storing a computer program;
The processor 1101 is configured to implement any one of the bullet screen processing method embodiments described above when executing the program stored in the memory 1103, with the same technical effects achieved; details are not described here again.
An embodiment of the present application further provides a system, including:
the client is used for realizing the bullet screen processing method;
and the server is used for realizing the bullet screen processing method.
The system can implement all the embodiments of any one of the bullet screen processing methods described above, and can achieve the same technical effects, which are not described herein again.
The communication bus mentioned in the above terminal may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the terminal and other equipment.
The Memory may include a Random Access Memory (RAM) or a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In another embodiment of the present invention, a computer-readable storage medium is further provided, where instructions are stored, and when the instructions are executed on a computer, the computer is enabled to execute the bullet screen display method or the bullet screen processing method in any one of the above embodiments.
In another embodiment of the present invention, a computer program product containing instructions is further provided, which when run on a computer, causes the computer to execute the bullet screen display method or the bullet screen processing method described in any of the above embodiments.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (14)

1. A bullet screen processing method is applied to a client and comprises the following steps:
acquiring streaming media data and barrage data of a target video, wherein the barrage data comprises barrage content and story line identifications associated with the barrage content, one story line identification is used for uniquely identifying one node in a story line of the streaming media data, and the story line is determined based on display content of the streaming media data;
and displaying the barrage content corresponding to the current node according to the story line identification in the process of playing the streaming media data.
2. The bullet screen processing method according to claim 1, wherein said displaying bullet screen content corresponding to the current node according to the story line identifier during the playing of the streaming media data comprises:
decoding the streaming media data to obtain video frames of the target video, wherein any one of the video frames comprises an identification field, and the identification field carries a story line identification corresponding to the video frame;
and in the process of playing the target video, displaying the barrage content corresponding to the video frame according to the story line identification in the identification field and the story line identification associated with the barrage content.
3. The bullet screen processing method of claim 1, further comprising:
under the condition of receiving a barrage sending operation, acquiring a video frame corresponding to the barrage sending operation and a video frame identifier corresponding to the video frame;
sending a barrage sending request to a server, wherein the barrage sending request comprises barrage content and a video frame identifier, and the video frame identifier comprises: at least one of a storyline logo or a time logo.
4. The bullet screen processing method according to any one of claims 1 to 3, wherein said method further comprises:
generating a storyline identifier corresponding to each video frame based on the video content of each video frame in the target video;
adding corresponding story line identification in the identification field of each video frame;
and coding the video frame added with the story line identifier to obtain the streaming media data of the target video.
5. The bullet screen processing method according to claim 4, wherein the generating a story line identifier corresponding to each video frame based on the video content of each video frame in the target video comprises:
and automatically generating the story line identifiers corresponding to the video frames in ascending order, according to the sequence in which the video content of each video frame appears during playing.
6. A bullet screen processing method is applied to a server and comprises the following steps:
acquiring streaming media data and barrage data of a target video, wherein the barrage data comprises barrage content and story line identifications associated with the barrage content, one story line identification is used for uniquely identifying one node in a story line in the streaming media data, and the story line is determined based on display content of the streaming media data;
and sending the streaming media data and the barrage data to a client so that the client displays barrage content corresponding to the current node according to the story line identifier in the playing process of the streaming media data.
7. The bullet screen processing method of claim 6, further comprising:
receiving a bullet screen sending request from the client;
acquiring the bullet screen content and the video frame identifier in the bullet screen sending request, wherein the video frame identifier comprises: at least one of a storyline identification or a time identification;
and storing the bullet screen content and the video frame identification in an associated manner.
8. The bullet screen processing method according to claim 6 or 7, wherein said method further comprises:
generating a storyline identifier corresponding to each video frame based on the video content of each video frame in the target video;
adding corresponding story line identification in the identification field of each video frame;
and coding the video frame added with the story line identifier to obtain the streaming media data of the target video.
9. The bullet screen processing method according to claim 8, wherein said generating a story line identifier corresponding to each video frame based on the video content of each video frame in the target video comprises:
and automatically generating the story line identifiers corresponding to the video frames in ascending order, according to the sequence in which the video content of each video frame appears during playing.
10. A bullet screen processing apparatus, comprising:
the system comprises a first unit, a second unit and a third unit, wherein the first unit is used for acquiring streaming media data and barrage data of a target video, the barrage data comprises barrage content and story line identifications related to the barrage content, one story line identification is used for uniquely identifying one node in a story line of the streaming media data, and the story line is determined based on display content of the streaming media data;
and the second unit is used for displaying the barrage content corresponding to the current node according to the story line identifier in the process of playing the streaming media data.
11. A bullet screen processing apparatus, comprising:
a third unit, configured to obtain streaming media data and barrage data of a target video, where the barrage data includes barrage content and story line identifiers associated with the barrage content, and one story line identifier is used to uniquely identify one node in a story line in the streaming media data, and the story line is determined based on display content of the streaming media data;
and the fourth unit is used for sending the streaming media data and the barrage data to a client so that the client displays barrage content corresponding to the current node according to the story line identifier in the playing process of the streaming media data.
12. An electronic device, characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor for carrying out the method steps of any one of claims 1 to 5 or the method steps of any one of claims 6 to 9 when executing a program stored in a memory.
13. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method steps of any of the claims 1 to 5 or carries out the method steps of any of the claims 6 to 9.
14. A system, comprising:
a client for implementing the method steps of any one of claims 1-5;
a server for implementing the method steps of any one of claims 6-9.
CN202110087959.3A 2021-01-22 2021-01-22 Bullet screen processing method and device, electronic equipment, storage medium and system Pending CN112929730A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110087959.3A CN112929730A (en) 2021-01-22 2021-01-22 Bullet screen processing method and device, electronic equipment, storage medium and system


Publications (1)

Publication Number Publication Date
CN112929730A true CN112929730A (en) 2021-06-08

Family

ID=76164759

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110087959.3A Pending CN112929730A (en) 2021-01-22 2021-01-22 Bullet screen processing method and device, electronic equipment, storage medium and system

Country Status (1)

Country Link
CN (1) CN112929730A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105578209A (en) * 2015-12-16 2016-05-11 广州酷狗计算机科技有限公司 Pop-up screen display method and apparatus
CN105898522A (en) * 2016-05-11 2016-08-24 乐视控股(北京)有限公司 Method, device and system for processing barrage information
CN109302638A (en) * 2018-09-29 2019-02-01 传线网络科技(上海)有限公司 Information processing method and device, electronic equipment and storage medium
CN111031399A (en) * 2019-11-25 2020-04-17 上海哔哩哔哩科技有限公司 Bullet screen processing method and system


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114745591A (en) * 2022-04-08 2022-07-12 深圳创维-Rgb电子有限公司 Method, device and equipment for judging video climax fragments and computer storage medium
CN115174957A (en) * 2022-06-27 2022-10-11 咪咕文化科技有限公司 Bullet screen calling method and device, computer equipment and readable storage medium
CN115174957B (en) * 2022-06-27 2023-08-15 咪咕文化科技有限公司 Barrage calling method and device, computer equipment and readable storage medium
CN115348479A (en) * 2022-07-22 2022-11-15 北京奇艺世纪科技有限公司 Video playing problem identification method, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN112929730A (en) Bullet screen processing method and device, electronic equipment, storage medium and system
CN109963162B (en) Cloud directing system and live broadcast processing method and device
US8689255B1 (en) Synchronizing video content with extrinsic data
US20210014574A1 (en) Using Text Data in Content Presentation and Content Search
CN108289159B (en) Terminal live broadcast special effect adding system and method and terminal live broadcast system
US11264057B2 (en) Method of modifying play of an original content form
EP2954693B1 (en) Processing of social media for selected time-shifted multimedia content
CN111246126A (en) Direct broadcasting switching method, system, device, equipment and medium based on live broadcasting platform
CN111083396B (en) Video synthesis method and device, electronic equipment and computer-readable storage medium
JP2007528144A (en) Method and apparatus for generating and detecting a fingerprint functioning as a trigger marker in a multimedia signal
US9215496B1 (en) Determining the location of a point of interest in a media stream that includes caption data
CN108989883B (en) Live broadcast advertisement method, device, equipment and medium
KR20070091328A (en) Distributive system for marking and blocking video and audio content related to video and audio programs
CN103747287A (en) Video playing speed regulation method and system applied to flash
US11778286B2 (en) Systems and methods for summarizing missed portions of storylines
CN110740386B (en) Live broadcast switching method and device and storage medium
US9992524B1 (en) Media-broadcasting system with broadcast schedule simulation feature
US20170064413A1 (en) Digital channel integration system
CN114402572A (en) Using in-band metadata as a basis for accessing reference fingerprints to facilitate content-related actions
CN106792237B (en) Message display method and system
US20120096084A1 (en) Shared media experience distribution and playback
US20240107087A1 (en) Server, terminal and non-transitory computer-readable medium
CN105916011A (en) Video real-time playing method and device
CN108124188B (en) Audio-video system operation method
CN111869225B (en) Information processing apparatus, information processing method, and non-transitory computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210608