CN110708588B - Barrage display method and device, terminal and storage medium - Google Patents

Barrage display method and device, terminal and storage medium Download PDF

Info

Publication number: CN110708588B
Authority: CN (China)
Prior art keywords: segment, video, bullet screen, type, text
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number: CN201910989704.9A
Other languages: Chinese (zh)
Other versions: CN110708588A (en)
Inventor: 余自强
Current Assignee: Tencent Technology Shenzhen Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Original Assignee: Tencent Technology Shenzhen Co Ltd
Events: application filed by Tencent Technology Shenzhen Co Ltd; priority to CN201910989704.9A; publication of CN110708588A; application granted; publication of CN110708588B; anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/4316: Generation of visual interfaces for content selection or interaction involving specific graphical features, for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • H04N 21/44008: Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N 21/440245: Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display, the reformatting operation being performed only on part of the stream, e.g. a region of the image or a time segment
    • H04N 21/4788: Supplemental services communicating with other users, e.g. chatting
    • H04N 21/4884: Data services, e.g. news ticker, for displaying subtitles

Abstract

The application discloses a bullet screen display method, device, terminal and storage medium, belonging to the technical fields of computers and the Internet. The bullet screen display method comprises the following steps: acquiring segment characteristics of a video segment in a target video; predicting the segment type of the video segment according to the segment characteristics; and when the segment type is a preset type and playback reaches the video segment, superimposing a bullet-screen body-protecting text on the playing picture of the video segment. The technical scheme expands the ways in which user bullet screen text can be displayed: the segment type of a video segment is predicted from its segment characteristics, and whether the bullet-screen body-protecting text is displayed is determined according to the prediction result, so that the next video segment can be processed in advance.

Description

Barrage display method and device, terminal and storage medium
Technical Field
The embodiments of the application relate to the technical fields of computers and the Internet, and in particular to a bullet screen display method, device, terminal and storage medium.
Background
Through a video playing application program, a user can watch videos of various titles. During video playback, the terminal can display barrage text input by users in the interface.
In the related art, when a user inputs barrage text while watching a video, the terminal receives the input barrage text and immediately displays it in the interface.
In the related art, barrage text is used only for communication between users, so its function is limited.
Disclosure of Invention
The embodiments of the application provide a bullet screen display method, device, terminal and storage medium, which can solve the technical problem in the related art that barrage text serves only for communication among users and has a single function. The technical scheme is as follows:
in one aspect, an embodiment of the present application provides a bullet screen display method, where the method includes:
acquiring segment characteristics of video segments in a target video;
predicting a segment type of the video segment according to the segment characteristics;
when the segment type of the video segment is a preset type and playback reaches the video segment, superimposing a bullet-screen body-protecting text on the playing picture of the video segment, wherein the bullet-screen body-protecting text comprises a plurality of shielding bullet screens for shielding all or part of a visual subject in the playing picture, and the plurality of shielding bullet screens are scattered over the visual subject at a target density.
In one aspect, an embodiment of the present application provides a bullet screen display device, the device includes:
the segment feature acquisition module is used for acquiring the segment features of the video segments in the target video;
a segment type prediction module for predicting the segment type of the video segment according to the segment characteristics;
the body protecting text display module is used for overlapping and displaying bullet screen body protecting texts on a playing picture of the video clip when the clip type of the video clip is a preset type and is played to the video clip, wherein the bullet screen body protecting texts comprise a plurality of shielding bullet screens used for shielding all or part of visual main bodies in the playing picture, and the shielding bullet screens are dispersedly distributed on the visual main bodies according to the target density.
In one aspect, an embodiment of the present application provides a terminal, where the terminal includes a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the bullet screen display method.
In one aspect, an embodiment of the present application provides a computer-readable storage medium, where at least one instruction, at least one program, a code set, or an instruction set is stored in the storage medium, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement the bullet screen display method.
In one aspect, an embodiment of the present application provides a computer program product, which when running on a computer, causes the computer to execute the bullet screen display method.
According to the technical scheme provided by the embodiments of the application, the segment type of a video segment is predicted from the segment characteristics of the video segment in the target video, and when the segment type belongs to a preset type and playback reaches the video segment, a bullet-screen body-protecting text is superimposed on the playing picture of the video segment, thereby expanding the ways in which user bullet screen text can be displayed. In addition, because the segment type is predicted from the segment characteristics and the display decision is made from the prediction result, the next video segment can be processed correspondingly in advance. Taking a horror video segment as an example, the discomfort and fear a user feels when watching a horror scene can be reduced.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the present application; other drawings can be obtained by those skilled in the art from these drawings without creative effort.
FIG. 1 is a schematic illustration of an implementation environment provided by one embodiment of the present application;
fig. 2 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 3 is a flowchart of a bullet screen display method according to an embodiment of the present application;
FIG. 4 is a schematic illustration of a user interface provided by one embodiment of the present application;
FIG. 5 is a schematic illustration of a bullet screen setting interface provided by one embodiment of the present application;
FIG. 6 is a schematic illustration of a user interface provided by an embodiment of the present application;
fig. 7 is a flowchart of a bullet screen display method according to an embodiment of the present application;
FIG. 8 is a block diagram of a video playback system provided by one embodiment of the present application;
FIG. 9 is a block diagram of a blockchain structure according to an embodiment of the present application;
fig. 10 is a block diagram of a bullet screen display device provided in an embodiment of the present application;
fig. 11 is a block diagram of a bullet screen display device provided in an embodiment of the present application;
fig. 12 is a block diagram of a terminal according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Referring to fig. 1, a schematic diagram of an implementation environment provided by an embodiment of the present application is shown. The embodiment shown in fig. 1 includes at least one terminal 10, a multimedia resource server 20, and a bullet screen information server 30. The terminal 10 is installed with a client served by the multimedia resource server 20 and the bullet screen information server 30. The client may be a multimedia resource client served by the multimedia resource server 20, such as a browser client, a video application client, an audio application client, or a reading client, for example an online video playing client, an online music playing client, or an online reading client. The client may also be a player or reader that provides a playing service without networking and plays multimedia resources stored in the storage space of the terminal 10, or a player or reader that has an independent playing function and can also use the multimedia resource service provided by the multimedia resource server 20. The embodiment of the present application does not limit this.
A user can access the multimedia resource server 20 through a client installed on the terminal 10 to use the multimedia services provided by the multimedia resource server 20. For example, the terminal 10 may access the multimedia resource server 20 through a video application client, or access the web portal of the multimedia resource server 20 through a browser client. While using the multimedia services provided by the multimedia resource server 20, the user may also access the bullet screen information server 30 through a client installed on the terminal 10, obtain from it the barrage information corresponding to the multimedia service currently in use, and display that barrage information through the display interface of the client; the user may also send barrage information to the bullet screen information server 30.
The multimedia resource server 20 is used for providing multimedia services, which may include video services, audio services, picture services, reading services, question-and-answer services, and the like; multimedia resources include, but are not limited to, video, audio, text, and pictures. Taking the multimedia resource server 20 as a video server as an example, the video services it provides may include live video broadcasting, online video playing, video downloading, and the like. The services provided by the multimedia resource server 20 need not be of a single type: a video server may provide not only video services but also other types of multimedia services such as audio services, and an audio server may likewise provide video and other multimedia services. Of course, the multimedia resource server may also provide functions such as forwarding and commenting, which is not specifically limited in this embodiment of the present application. The online video playing service may refer to converting a certain movie into a video data stream and providing the video data stream to the terminal 10 through a video client or a web portal for online playing or offline downloading.
It should be noted that the multimedia resource server 20 may refer to a single server or a server cluster composed of a plurality of servers, and each service may be implemented by the same server or by different servers in the server cluster, which is not specifically limited in this embodiment of the present application.
The bullet screen information server 30 is used for providing bullet screen information services, which may include a multimedia resource retrieval service and a barrage service. The multimedia resource retrieval service can be used in combination with the barrage service: multimedia information is converted so that a multimedia resource corresponds to barrage information in the barrage service. A multimedia information database may be provided to store the information required for this conversion, such as conversion rules and correspondences between items of multimedia information, so that accurate barrage information services can be provided for different platforms or clients. Description information of the multimedia resource itself, such as its playing duration, can also be stored in the multimedia information database. The barrage service means that the bullet screen information server can collect barrage information and provide the barrage information corresponding to the multimedia resource currently played by the client. Specifically, the bullet screen information server 30 may serve a plurality of multimedia resource servers 20, collect and store barrage information sent by users of different platforms and different clients, and provide the collected barrage information to the terminal 10 for display, thereby extending the functions of the multimedia resource servers 20. The collected and stored data is barrage information, which includes at least a user barrage text and may further include one or more of the unique user identifier of the barrage sender, the barrage sending time, and barrage interaction information.
The unique user identifier may be an identifier supported by the bullet screen information server for uniquely identifying a barrage sender. The barrage sending time may be the time point at which the user actually published the barrage content, or the display time point of the barrage content in the multimedia resource, which is not specifically limited in this embodiment of the present application. The barrage interaction information may be other users' evaluations, likes, negative comments, appreciations, gifts, and the like, regarding the barrage content. In addition, the barrage information may carry a multimedia identifier for identifying the multimedia resource corresponding to the barrage information, and may also carry a barrage identifier for uniquely identifying the barrage information, where each barrage identifier corresponds to one item of barrage information.
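The barrage record described above can be sketched as a simple data structure. This is a hypothetical illustration: the field names below are assumptions for readability, not the patent's actual schema.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch of one item of barrage information as described above.
# Field names are illustrative assumptions, not the patent's actual schema.
@dataclass
class BarrageInfo:
    barrage_id: str                     # uniquely identifies this barrage record
    media_id: str                       # identifies the multimedia resource it belongs to
    text: str                           # the user barrage text (the only required content)
    user_id: Optional[str] = None       # unique identifier of the barrage sender
    send_time: Optional[float] = None   # publish time, or in-video display time (seconds)
    interactions: dict = field(default_factory=dict)  # likes, comments, gifts, etc.

msg = BarrageInfo(barrage_id="b001", media_id="v42",
                  text="front high energy", user_id="u7", send_time=312.5)
```

The optional fields default to empty so that a record carrying only the required user barrage text is still valid.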
It should be noted that, the bullet screen information server 30 may refer to a single server or a server cluster composed of a plurality of servers, and each service may be implemented by the same server or by different servers in the server cluster, which is not specifically limited in this embodiment of the present application.
Of course, the multimedia resource server 20 and the bullet screen information server 30 shown in fig. 1 may be arranged in the same server or server cluster.
Of course, the method provided in the embodiment of the present application is not limited to be used in the application scenario shown in fig. 1, and may also be used in other possible application scenarios, and the embodiment of the present application is not limited. The functions that can be implemented by each device in the application scenario shown in fig. 1 will be described in the following method embodiments, and will not be described in detail herein.
In the embodiment of the method, the execution subject of each step may be a terminal. Please refer to fig. 2, which illustrates a schematic structural diagram of a terminal according to an embodiment of the present application. The terminal 10 may include: a main board 110, an external input/output device 120, a memory 130, an external interface 140, a touch system 150, and a power supply 160.
The main board 110 has integrated therein processing elements such as a processor and a controller.
The external input/output device 120 may include a display component (e.g., a display screen), a sound playing component (e.g., a speaker), a sound collecting component (e.g., a microphone), various keys, and the like.
The memory 130 has program codes and data stored therein.
The external interface 140 may include a headset interface, a charging interface, a data interface, and the like.
The touch system 150 may be integrated into a display component or a key of the external input/output device 120, and the touch system 150 is used to detect a touch operation performed by a user on the display component or the key.
The power supply 160 is used to power the various other components in the terminal 10.
In this embodiment, the processor in the main board 110 may generate a user interface (e.g., a video playing interface) by executing or calling the program codes and data stored in the memory, and display the generated user interface (e.g., the video playing interface) through the external output/input device 120. In the process of displaying the user interface (e.g., video playback interface), the touch system 150 may detect a touch operation performed when the user interacts with the user interface (e.g., video playback interface), and respond to the touch operation.
To further illustrate the technical solutions provided by the embodiments of the present application, a detailed description is given below with reference to the accompanying drawings. Although the embodiments of the present application provide the method operation steps shown in the following embodiments or figures, the method may include more or fewer operation steps based on conventional or non-inventive labor. For steps with no necessary logical causal relationship, the execution order is not limited to that provided by the embodiments of the present application.
Please refer to fig. 3, which illustrates a flowchart of a bullet screen display method according to an embodiment of the present application. The method can be applied to the terminal described above, such as to a client of an application program (e.g., a video playing application program) of the terminal. The method comprises the following steps (301-303):
step 301, obtaining segment characteristics of video segments in a target video.
The target video refers to the video being played in the interface displayed by the client. The target video may be of various types; for example, it may be a live video, a recorded video, or an on-demand video. Optionally, to facilitate analysis of the target video, the target video may be divided into a plurality of video segments according to a certain rule. For example, the target video may be divided into a plurality of video segments at fixed time intervals; the length of the time interval is not limited in the embodiment of the present application and may be set according to the actual application scenario. As another example, the target video may be divided into a plurality of video segments according to the playing content, for example, according to key events in the plot.
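The fixed-interval division rule can be sketched as follows. This is a minimal illustration of splitting by time interval, assuming segments start at time zero; the patent does not fix a particular implementation.

```python
def split_into_segments(duration_s, interval_s):
    """Divide a video of duration_s seconds into consecutive
    (start, end) segments of at most interval_s seconds each."""
    segments = []
    start = 0.0
    while start < duration_s:
        end = min(start + interval_s, duration_s)
        segments.append((start, end))
        start = end
    return segments

# A 12-second video split at 5-second intervals; the last segment is shorter:
print(split_into_segments(12.0, 5.0))  # [(0.0, 5.0), (5.0, 10.0), (10.0, 12.0)]
```

Content-based division (by key plot events) would instead take a list of event timestamps as the segment boundaries.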
The segment characteristics refer to the characteristics of a video segment in the target video, such as volume level, picture brightness, plot development, number of characters, and the like. Optionally, the segment characteristics may include audio characteristics, picture characteristics, and plot characteristics. The audio characteristics refer to characteristics of the audio played in the video segment, such as volume and sound frequency; the picture characteristics refer to characteristics of the picture played in the video segment, such as picture brightness and picture contrast; the plot characteristics refer to characteristics of the content played in the video segment, such as the number of characters and the direction of the plot. The embodiment of the present application does not limit this.
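A toy version of such feature extraction can be written directly. This sketch covers only the simplest audio and picture characteristics (loudness, brightness, contrast); plot characteristics would require far heavier analysis, and the specific features and scales here are illustrative assumptions.

```python
import numpy as np

def extract_segment_features(frames, audio):
    """Toy feature vector for one video segment.
    frames: array of shape (n_frames, H, W), grayscale pixel values 0-255.
    audio:  1-D array of audio samples in [-1, 1].
    Returns [volume, brightness, contrast], a simplified stand-in for the
    audio/picture characteristics described above."""
    volume = float(np.sqrt(np.mean(audio ** 2)))  # RMS loudness
    brightness = float(frames.mean())             # average pixel intensity
    contrast = float(frames.std())                # spread of pixel intensities
    return np.array([volume, brightness, contrast])

# A silent, uniformly mid-gray segment: volume 0, brightness 128, contrast 0.
feats = extract_segment_features(np.full((2, 4, 4), 128.0), np.zeros(8))
```

In practice each characteristic would be computed per segment and concatenated into the feature vector consumed by the type prediction model of step 302.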
Step 302, predicting the segment type of the video segment according to the segment characteristics.
The segment type refers to the type to which the content played by the video segment belongs. Optionally, the specific types included in the segment type are determined according to the video subject matter of the target video corresponding to the video segment. The video subject matter refers to the genre category to which the playing content of the target video belongs. Optionally, the video subject matter includes horror, magic, comedy, science fiction, action, martial arts, crime, suspense, adventure, biography, and the like, which is not limited in this embodiment. Illustratively, when the video subject matter is horror, the segment types may include horror segments and non-horror segments; when the video subject matter is adventure, the segment types may include adventure segments and non-adventure segments; when the video subject matter is comedy, the segment types may include comedy segments and non-comedy segments.
In the embodiment of the present application, after the terminal acquires the segment characteristics, it can predict the segment type of the video segment according to the segment characteristics. Optionally, in order to obtain an accurate and fast prediction result for the segment type, predicting the segment type of the video segment according to the segment characteristics includes: inputting the segment characteristics into a type prediction model to obtain a segment type prediction result of the video segment corresponding to the segment characteristics. The type prediction model refers to a model trained on historical segment feature data; the model may be a machine learning model, such as a neural network model or an SVM (Support Vector Machine) model. After the terminal inputs the segment characteristics into the type prediction model, the segment type prediction result of the corresponding video segment can be obtained.
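The train-then-predict flow of the type prediction model can be sketched with a dependency-free stand-in. The patent suggests an SVM or neural network; a nearest-centroid classifier is substituted here purely to keep the sketch short, and the features and labels are toy assumptions.

```python
import numpy as np

# Minimal stand-in for the type prediction model: a nearest-centroid
# classifier trained on historical segment feature data. (The patent
# suggests an SVM or neural network; centroids keep the sketch small
# while showing the same train/predict flow.)
def train(features, labels):
    """Compute one centroid per class from historical feature vectors."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict(model, x):
    """Return the class whose centroid is closest to feature vector x."""
    return min(model, key=lambda c: np.linalg.norm(x - model[c]))

# Toy historical data: [volume, brightness]; 1 = horror segment, 0 = non-horror.
X = np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.8], [0.1, 0.9]])
y = np.array([1, 1, 0, 0])
model = train(X, y)
result = predict(model, np.array([0.85, 0.15]))  # a loud, dark segment
```

A loud, dark segment lands near the horror centroid, so the prediction result marks it as the preset type and step 303 is triggered.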
Step 303, when the segment type of the video segment is a preset type and playback reaches the video segment, superimposing a bullet-screen body-protecting text on the playing picture of the video segment, wherein the bullet-screen body-protecting text comprises a plurality of shielding bullet screens for shielding all or part of a visual subject in the playing picture, and the plurality of shielding bullet screens are scattered over the visual subject at a target density.
The bullet-screen body-protecting text refers to subtitle text with preset content. Optionally, the content of the bullet-screen body-protecting text is determined according to the segment type of the video segment and the video subject matter of the target video. Illustratively, when the video subject matter of the target video is horror and the segment type of the video segment is a horror segment, the content of the bullet-screen body-protecting text includes prompt content, comforting content, and the like, for example wording such as "front high energy" and "bullet screen body protection". Illustratively, when the video subject matter of the target video is comedy and the segment type of the video segment is a comedy segment, the content of the bullet-screen body-protecting text includes prompt content, comment content, and the like, for example wording such as "incoming laughs" and "haha". Optionally, the display duration of the bullet-screen body-protecting text equals the playing duration of the video segment; that is, the bullet-screen body-protecting text is displayed when the video segment starts playing, and its display is cancelled when the video segment finishes playing. Optionally, the bullet-screen body-protecting text is composed of a plurality of shielding bullet screens, where a shielding bullet screen refers to a user barrage text used to shield the playing picture; the content of each shielding bullet screen may be the same or different, which is not limited in the embodiment of the present application.
In the embodiment of the present application, the bullet-screen body-protecting text is displayed superimposed on the playing picture of the video segment, so that both the playing picture of the video segment and the bullet-screen body-protecting text are displayed in the user interface. Optionally, so that displaying the bullet-screen body-protecting text and operating on it do not affect normal playing of the video segment, the user interface includes a first viewing layer and a second viewing layer, where the display level of the first viewing layer is lower than that of the second viewing layer; the playing picture of the video segment is located on the first viewing layer, and the bullet-screen body-protecting text is located on the second viewing layer. Of course, in addition to the bullet-screen body-protecting text, the second viewing layer may include operation controls, such as a control for adjusting the playing speed of the target video, a control for setting the barrage playing function, and a control for adjusting the definition of the target video, which is not limited in the embodiment of the present application.
The bullet screen body-protecting text includes a plurality of shielding bullet screens, and each shielding bullet screen is displayed independently. Each shielding bullet screen comprises one or more characters, and the character content of the shielding bullet screens may be the same or different. The plurality of shielding bullet screens are dispersedly distributed over the visual subject of the playing picture according to a target density, so as to shield all or part of the visual subject. The visual subject of the playing picture may be the central region or the whole of the playing picture, a static or dynamic object in the playing picture, or the background of the playing picture. The target density may be a predetermined density or a dynamically determined density.
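As a hedged illustration of the dispersed distribution described above, the following sketch scatters a given number of shielding bullet screens uniformly at random over the bounding region of the visual subject; the function name and the uniform-random placement policy are assumptions, not taken from the embodiment.

```python
import random

def place_shields(count, region_w, region_h, seed=None):
    """Scatter `count` shielding bullet screens over the visual subject's
    bounding region; positions are drawn uniformly at random (an assumed
    realization of 'dispersed distribution according to a target density')."""
    rng = random.Random(seed)
    return [(rng.uniform(0.0, region_w), rng.uniform(0.0, region_h))
            for _ in range(count)]

# The effective target density then follows as count / (region_w * region_h).
```

A dynamically determined density would simply vary `count` for a fixed region.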
In a possible embodiment, in order to pre-process the unplayed video segment and thereby make the prediction in time, step 301 includes: in the playing process of the target video, extracting, according to a preset segment duration, the video segment that is closest to the current playing time point and has not been played in the target video. The preset segment duration refers to a time interval set according to the actual application scenario; for example, if the preset segment duration is 5 seconds, a video segment is extracted every 5 seconds during the playing of the target video. The extracted video segment is the segment of the target video that is closest to the current playing time point and has not yet been played; for example, if the current playing time point is 14 seconds, the extracted video segment is the segment of the target video between 15 seconds and 20 seconds.
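The segment-extraction rule can be sketched as follows; aligning segments on multiples of the preset duration, and the function name, are illustrative assumptions.

```python
def next_unplayed_segment(current_s, segment_s=5.0):
    """Return (start, end), in seconds, of the segment closest to the
    current playing time point that has not started playing yet.
    Segments are assumed aligned on multiples of segment_s; the segment
    containing the current point is considered already playing."""
    index = int(current_s // segment_s) + 1
    return index * segment_s, (index + 1) * segment_s

# Under this alignment assumption, a 5-second preset duration and a
# current playing point of 14 s yield the segment from 15 s to 20 s.
```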
In another possible implementation, in order to save processing resources of the terminal and the server and to give the user the choice of whether to enable the bullet screen body-protecting text function, the method further includes: receiving an input user bullet screen text during the playing of the target video; and when the user bullet screen text contains a trigger keyword, starting the prediction addition function of the bullet screen body-protecting text for at least one video segment in the target video. A trigger keyword for starting the prediction addition function refers to a word that indicates the trigger meaning, for example, a word such as "open" or "bullet screen shield". In the embodiments of the present application, while watching the video the user can both view user bullet screen texts published by other users and send user bullet screen texts, thereby interacting during viewing. For example, as shown in fig. 4, a target video 41 and a user bullet screen text 42 are displayed in the user interface 40; after the terminal receives the input user bullet screen text 42, the terminal analyzes it, and when the user bullet screen text 42 contains the trigger keyword "open" and/or "bullet screen shield", the prediction addition function of the bullet screen body-protecting text is started for at least one video segment in the target video 41. That is, when the user wants to turn on the prediction addition function, the user can input a user bullet screen text containing the words "open" and/or "bullet screen shield".
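A minimal sketch of the trigger-keyword check, assuming simple substring matching; the keyword list is illustrative (the embodiment gives "open" and "bullet screen shield" only as examples).

```python
TRIGGER_KEYWORDS = ("open", "bullet screen shield")  # illustrative examples

def should_enable_protection(user_barrage_text):
    """Return True when the user's bullet screen text contains any
    trigger keyword, i.e. the prediction addition function should start."""
    text = user_barrage_text.lower()
    return any(keyword in text for keyword in TRIGGER_KEYWORDS)
```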
In another possible embodiment, in order to provide the user with more choices and to prevent misoperation when inputting user bullet screen text, step 301 may further include: receiving a trigger signal of a start button, corresponding to the bullet screen body-protecting text, in a bullet screen setting interface, and starting, according to the trigger signal, the prediction addition function of the bullet screen body-protecting text for at least one video segment in the target video. In the embodiments of the present application, the user interface displaying the target video may include a bullet screen setting button; for example, as shown in fig. 4, the user interface 40 includes a bullet screen setting button 43, and when a trigger signal of the bullet screen setting button 43 is received, the bullet screen setting interface is entered, as shown in fig. 5. The bullet screen setting interface 50 includes a start button 51 corresponding to the prediction addition function of the bullet screen body-protecting text; when a trigger signal corresponding to the start button 51 is received, the prediction addition function can be started for at least one video segment in the target video. In the embodiments of the present application, the user may generate the trigger signal through a click operation, or through voice, a gesture, or the like, which is not limited in the embodiments of the present application.
In summary, according to the technical scheme provided by the embodiment of the application, the clip type of the video clip is predicted according to the clip feature of the video clip in the target video, and when the clip type belongs to the preset type and is played to the video clip, the bullet screen body protecting text is displayed on the playing picture of the video clip in an overlapping manner, so that the manner of displaying the user bullet screen text is expanded. In addition, the technical scheme provided by the embodiment of the application predicts the segment type of the video segment according to the segment characteristics of the video segment, and then determines whether to display the bullet screen protection body text according to the prediction result, so that the next video segment can be processed in advance.
In addition, according to the technical scheme provided by the embodiment of the application, the predictive adding function of the bullet screen protection body text is started for at least one video segment in the target video according to the user bullet screen text containing the trigger keyword, so that the predictive adding function is not started when the user bullet screen text does not contain the trigger keyword, the processing resource of the terminal is saved, meanwhile, the space for the user to independently select whether to start the predictive adding function of the bullet screen text is provided, and the human-computer interaction experience is improved.
The prediction function of the segment type of the video segment can be realized by a neural network model. The neural network model may be a convolutional neural network or a time-series based neural network. The neural network model can be called a type prediction model. The terminal or the server inputs the segment characteristics of the video segments into the type prediction model to obtain segment type prediction results of the segment types of the video segments corresponding to the segment characteristics; the type prediction model is a model obtained by training a plurality of groups of sample data in a training set by adopting an error back propagation algorithm, wherein each group of sample data comprises a sample fragment characteristic and a sample fragment type corresponding to the sample fragment characteristic.
In one possible embodiment, in order to simplify the prediction of the segment type of the video segment, the segment feature includes an audio feature, and the step 302 includes: predicting the probability that the segment type of the video segment belongs to a preset type according to the audio characteristics; and when the probability is greater than a preset threshold value, determining the type of the video clip as a preset type.
The audio features refer to characteristics of the audio played in the video segment, such as its volume and frequency. Illustratively, the audio features include at least any one of: zero crossing rate, short-time energy, spectral centroid, and mel frequency. The zero crossing rate is the rate at which the time-domain waveform of the audio of the video segment crosses the horizontal axis representing the zero level; the short-time energy is the energy of the audio of the video segment; the spectral centroid is the average point of the spectral energy distribution of an audio frame of the video segment; and the mel frequency is a converted auditory frequency calculated from the frequency of the audio of the video segment. Assuming that the frequency of the audio of the video segment is f and the mel frequency is denoted f_mel, the mel frequency is calculated by the standard mel-scale conversion:

f_mel = 2595 × log10(1 + f / 700)
Illustratively, in order to facilitate predicting the probability that the segment type of the video segment belongs to the preset type, the predicting the probability that the segment type of the video segment belongs to the preset type according to the audio feature includes: and inputting the audio features into a type prediction model to obtain the probability that the segment type of the video segment corresponding to the audio features belongs to a preset type. The type prediction model refers to a model trained based on historical audio feature data, and the model may be a Machine learning model, such as a neural network model, an SVM (Support Vector Machine) model, and the like. After the audio characteristics are input into the type prediction model by the terminal, the segment type prediction result of the video segment corresponding to the audio characteristics can be obtained, namely the probability that the video segment belongs to the preset type. Optionally, in this embodiment of the application, a value range of the probability is 0 to 1.
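The audio features listed above can be computed from raw samples as in the sketch below; the mel conversion uses the standard mel-scale formula, while the per-feature implementations are simplified illustrations rather than the embodiment's exact definitions.

```python
import math

def mel_frequency(f_hz):
    """Standard mel-scale conversion of a frequency in Hz."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def zero_crossing_rate(samples):
    """Fraction of adjacent sample pairs whose signs differ."""
    pairs = list(zip(samples, samples[1:]))
    crossings = sum(1 for a, b in pairs if (a >= 0) != (b >= 0))
    return crossings / max(len(pairs), 1)

def short_time_energy(samples):
    """Sum of squared sample amplitudes."""
    return sum(s * s for s in samples)
```

A feature vector built from these values would then be fed to the type prediction model.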
Optionally, in order to generate the bullet screen body-protecting text in a targeted manner, the method further includes: when the probability that the segment type of the video segment is the preset type is greater than a preset threshold, generating the bullet screen body-protecting text according to the probability, where the density of the bullet screen body-protecting text is positively correlated with the probability. The preset threshold is a value preset according to the actual application scenario. For example, to obtain a more accurate prediction result and avoid unnecessary display of the bullet screen body-protecting text caused by prediction errors, the preset threshold may be set to a relatively large value; for example, when the probability ranges from 0 to 1, the preset threshold may be 0.95, that is, the bullet screen body-protecting text is generated according to the probability only when the probability that the segment type of the video segment is the preset type is greater than 0.95. In the embodiments of the present application, the density of the bullet screen body-protecting text is positively correlated with the probability: the larger the probability, the greater the density, and the smaller the probability, the lower the density, where the density of the bullet screen body-protecting text refers to the number of bullet screen body-protecting texts displayed simultaneously in the user interface.
For example, to facilitate calculating the number of bullet screen body-protecting texts that need to be generated, generating the bullet screen body-protecting text according to the probability includes: acquiring the actual number of bullet screens in the video segment and the maximum number of bullet screens supported by the video segment; determining the number of shielding bullet screens according to the probability, the actual number of bullet screens, and the maximum number of bullet screens, where the number of shielding bullet screens is positively correlated with the probability; and generating the bullet screen body-protecting text according to the number of shielding bullet screens. In the embodiments of the present application, in order to give the user more control, the maximum number of bullet screens supported by the video segment may be freely set by the user. Optionally, the maximum number of bullet screens corresponds to an initial value, for example 30, and the user may change the initial value to a desired value, for example 20. The actual number of bullet screens refers to the number of user bullet screen texts currently displayed simultaneously in the user interface. After the terminal acquires the actual number of bullet screens and the maximum number of bullet screens supported by the video segment, the number of shielding bullet screens can be determined according to these two values and the probability; the number of shielding bullet screens is positively correlated with the probability: the larger the probability, the larger the number of shielding bullet screens, and the smaller the probability, the smaller the number of shielding bullet screens.
After the terminal determines the number of the shielding bullet screens, bullet screen protection body texts are generated according to the number of the shielding bullet screens, namely the number of the bullet screen protection body texts is the number of the shielding bullet screens.
Illustratively, the number of shielding bullet screens is calculated as follows:

c = ⌈p × (d − u)⌉

wherein c represents the number of shielding bullet screens, p represents the probability that the segment type of the video segment is the preset type, d represents the maximum number of bullet screens supported by the video segment, u represents the actual number of bullet screens in the video segment, and u is less than or equal to d.
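Under the assumption that the shielding count is the probability-weighted share of the remaining bullet screen slots, c = ⌈p × (d − u)⌉ — a hedged reading chosen so that c grows with p and guard texts plus actual texts never exceed the supported maximum — the computation can be sketched as:

```python
import math

def shielding_count(p, d, u):
    """Number of shielding bullet screens for probability p, maximum
    supported bullet screens d, and actual bullet screens u (u <= d).
    Assumed form: ceil(p * (d - u))."""
    if not (0.0 <= p <= 1.0 and 0 <= u <= d):
        raise ValueError("invalid inputs")
    return math.ceil(p * (d - u))
```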
Optionally, in order to improve the interface display effect and avoid concentrated bullet screen guard texts in a part of the user interface, in the embodiment of the application, the terminal generates the bullet screen guard texts, and randomly and uniformly inserts the bullet screen guard texts into the user bullet screen texts displayed in the video clip. For example, as shown in fig. 6, the user barrage text displayed in the user interface 60 includes barrage guard text 61 and actual barrage text 62 generated according to actual input of the user, the barrage guard text 61 is uniformly inserted into the actual barrage text 62 at random, and the sum of the number of the barrage guard texts 61 and the number of the actual barrage text 62 is the maximum number of barrages supported by the video segment.
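The random uniform interleaving can be sketched as a shuffle of the combined list; treating "randomly and uniformly inserts" as a uniform random permutation is an assumption.

```python
import random

def interleave_guard_texts(actual_texts, guard_texts, seed=None):
    """Merge guard texts into the actual user bullet screen texts in a
    uniformly random order, so the guard texts are not concentrated in
    one part of the user interface."""
    rng = random.Random(seed)
    merged = list(actual_texts) + list(guard_texts)
    rng.shuffle(merged)
    return merged
```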
Optionally, in order to adjust the playing effect of the video segment in multiple respects and meet multiple requirements of the user, the method further includes: when the probability that the segment type of the video segment is the preset type is greater than the preset threshold, lowering the playing volume of the video segment according to the probability. For explanations of the preset threshold, the probability, and the like, reference is made to the foregoing optional embodiments; details are not repeated here.
Illustratively, in order to facilitate adjusting the playing volume of the video segment, lowering the playing volume of the video segment according to the probability includes: lowering the playing volume of the video segment by a target amplitude according to the probability, where the target amplitude is positively correlated with the probability. The target amplitude is the amount by which the playing volume of the video segment is lowered. In the embodiments of the present application, the target amplitude is positively correlated with the probability that the segment type of the video segment is the preset type: the larger the probability, the larger the target amplitude, and the smaller the probability, the smaller the target amplitude.
Illustratively, the target amplitude is calculated as follows:

h = v × p

wherein h is the target amplitude by which the playing volume of the video segment is lowered, v is the current playing volume, and p is the probability that the segment type of the video segment is the preset type.
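Assuming the target amplitude is proportional to both the current volume and the probability, h = v × p — an assumed form consistent with the stated positive correlation — lowering the volume can be sketched as:

```python
def lowered_volume(v, p):
    """Return the playing volume after lowering it by the target
    amplitude h = v * p (assumed form); a higher probability p yields
    a larger reduction."""
    h = v * p
    return v - h
```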
In summary, according to the technical scheme provided by the embodiment of the application, the probability that the segment type of the video segment belongs to the preset type is predicted according to the audio feature in the segment feature, and when the probability is greater than the preset threshold value, the number of the bullet screen body protecting texts is determined according to the probability, the actual number of the bullet screens in the video segment and the maximum number of the bullet screens supported by the video segment, so that the prediction of the segment type of the video segment is simplified, and different numbers of bullet screen body protecting texts can be generated according to different probabilities.
In addition, according to the technical scheme provided by the embodiments of the present application, when the probability that the segment type of the video segment is the preset type is greater than the preset threshold, the playing volume of the video segment is lowered according to the probability, so that the playing effect of the video segment is adjusted in multiple respects and multiple requirements of the user are met.
In a possible implementation manner, referring to fig. 7 in combination, the bullet screen display method provided in the embodiments of the present application may include the following steps:
step 701, acquiring a target video; the target video refers to the video currently being played;
step 702, judging whether a video subject to which a target video belongs is a preset subject; if yes, executing the following step 703, otherwise, ending the process; the video material refers to the material type to which the playing content of the target video belongs, and the preset material refers to the preset video material type;
step 703, receiving a trigger signal corresponding to a start button of the bullet screen protection body text;
step 704, according to the trigger signal, starting a prediction adding function of the bullet screen protection body text;
step 705, receiving a user barrage text containing a start keyword; the start keyword is used for triggering the starting of the prediction addition function of the bullet screen body-protecting text;
step 706, acquiring a video clip according to the user barrage text; the video clip is extracted from the target video, is closest to the current playing time point and is not played;
step 707, extracting segment characteristics of the video segments; the segment characteristics refer to characteristics of video segments in the target video;
step 708, inputting the segment characteristics into a type prediction model to obtain the probability that the segment type of the video segment belongs to a preset type; the type prediction model refers to a model obtained by training based on historical segment feature data, and the model may be a Machine learning model, such as a neural network model, an SVM (Support Vector Machine) model, and the like;
step 709, judging whether the probability is greater than a preset threshold value; if yes, go to step 710; if not, ending the flow;
step 710, calculating the number of bullet screen protective body texts of the video clip and the reduction amplitude of the playing volume of the video clip according to the probability;
and 711, displaying the bullet screen protective body text in the user interface corresponding to the video clip according to the number, and reducing the playing volume of the video clip according to the reduction amplitude.
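Steps 708 through 711 can be strung together as in the hedged sketch below; the prediction model is passed in as a callable, and the count and volume formulas are assumed forms consistent with the stated positive correlations, not definitive.

```python
import math

def process_segment(features, predict, threshold=0.95,
                    max_barrage=30, actual_barrage=20, volume=100.0):
    """Run steps 708-711 for one video segment: predict the probability,
    compare it with the preset threshold, then compute the number of
    shielding bullet screens and the lowered playing volume."""
    p = predict(features)                                   # step 708
    if p <= threshold:                                      # step 709
        return None                                         # flow ends
    count = math.ceil(p * (max_barrage - actual_barrage))   # step 710 (assumed)
    new_volume = volume * (1.0 - p)                         # steps 710-711 (assumed)
    return count, new_volume
```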
In an exemplary embodiment, the method may be implemented in a video playing system implemented based on a block chain. Referring to the video playing system shown in fig. 8, the video playing system 100 refers to a system for performing data sharing between nodes, the video playing system may include a plurality of nodes 101, and the plurality of nodes 101 may refer to respective clients in the video playing system. Each node 101 may receive input information while operating normally and maintain shared data within the video playback system based on the received input information. In order to ensure information intercommunication in the video playing system, information connection can exist between each node in the video playing system, and information transmission can be carried out between the nodes through the information connection. For example, when any node in the video playing system receives input information, other nodes in the video playing system acquire the input information according to a consensus algorithm, and store the input information as data in shared data, so that the data stored on all nodes in the video playing system are consistent. Optionally, the input information includes, in the above embodiment: at least one of a clip type of the video clip and a bullet screen body text corresponding to the video clip.
Each node in the video playing system has a corresponding node identifier, and each node can store the node identifiers of the other nodes in the video playing system, so that a generated block can be broadcast to the other nodes according to their node identifiers. Each node may maintain a node identifier list as shown in the following table, storing node names and node identifiers in correspondence. A node identifier may be an IP (Internet Protocol) address or any other information that can identify the node; Table 1 takes IP addresses as an example.
Table 1

Node name    Node identifier
Node 1       117.114.151.174
Node 2       117.116.189.145
Node N       119.123.789.258
Each node in the video playing system stores one and the same block chain. Referring to fig. 9, the block chain is composed of a plurality of blocks. The starting block includes a block header and a block body; the block header stores an input information characteristic value, a version number, a timestamp, and a difficulty value, and the block body stores the input information. The next block takes the starting block as its parent block and likewise includes a block header and a block body; its block header stores the input information characteristic value of the current block, the block header characteristic value of the parent block, the version number, the timestamp, the difficulty value, and the like. In this way, the block data stored in each block in the block chain is associated with the block data stored in its parent block, which ensures the security of the input information in the blocks.
When each block in the block chain is generated, referring to fig. 9, when the node where the block chain is located receives the input information, the input information is verified, after the verification is completed, the input information is stored in the memory pool, and the hash tree for recording the input information is updated; and then, updating the updating time stamp to the time when the input information is received, trying different random numbers, and calculating the characteristic value for multiple times, so that the calculated characteristic value can meet the following formula:
SHA256(SHA256(version+prev_hash+merkle_root+ntime+nbits+x))<TARGET
wherein, SHA256 is a characteristic value algorithm used for calculating a characteristic value; version is version information of the relevant block protocol in the block chain; prev _ hash is a block head characteristic value of a parent block of the current block; merkle _ root is a characteristic value of the input information; ntime is the update time of the update timestamp; nbits is the current difficulty, is a fixed value within a period of time, and is determined again after exceeding a fixed time period; x is a random number; TARGET is a feature threshold, which can be determined from nbits.
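The double-SHA256 check above can be illustrated as follows; the string concatenation of header fields and the hex-prefix target are simplifications of a real byte-level encoding and numeric comparison against TARGET.

```python
import hashlib

def block_hash(version, prev_hash, merkle_root, ntime, nbits, nonce):
    """SHA256(SHA256(...)) over the concatenated header fields."""
    header = f"{version}{prev_hash}{merkle_root}{ntime}{nbits}{nonce}".encode()
    return hashlib.sha256(hashlib.sha256(header).digest()).hexdigest()

def mine(version, prev_hash, merkle_root, ntime, nbits, target_prefix="0"):
    """Try nonces x = 0, 1, 2, ... until the hash satisfies the target
    (modelled here as a required hex prefix rather than '< TARGET')."""
    nonce = 0
    while not block_hash(version, prev_hash, merkle_root,
                         ntime, nbits, nonce).startswith(target_prefix):
        nonce += 1
    return nonce
```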
Therefore, when the random number meeting the formula is obtained through calculation, the information can be correspondingly stored, and the block head and the block main body are generated to obtain the current block. And then, the node where the block chain is located respectively sends the newly generated blocks to other nodes in the video playing system where the newly generated blocks are located according to the node identifications of the other nodes in the video playing system, the newly generated blocks are verified by the other nodes, and the newly generated blocks are added to the block chain stored in the newly generated blocks after the verification is completed.
And for any node in the block chain, the node stores at least one of the segment type of the video segment and the bullet screen body protecting text corresponding to the video segment into the block chain by adopting a consensus mechanism.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to fig. 10, a block diagram of a bullet screen display apparatus according to an embodiment of the present application is shown. The apparatus has the function of implementing the above method examples, and the function may be implemented by hardware or by hardware executing corresponding software. The apparatus may be a terminal or may be provided in a terminal. The apparatus 800 may include: a segment feature acquisition module 810, a segment type prediction module 820, and a body-protecting text display module 830.
A segment characteristic obtaining module 810, configured to obtain segment characteristics of a video segment in the target video.
A segment type prediction module 820, configured to predict a segment type of the video segment according to the segment characteristics.
The body-protecting text display module 830 is configured to, when the segment type of the video segment is a preset type and the video segment is played to, superimpose and display a bullet screen body-protecting text on a playing picture of the video segment, where the bullet screen body-protecting text includes a plurality of shielding bullet screens for shielding all or part of the visual main body in the playing picture, and the shielding bullet screens are distributed in a dispersed manner on the visual main body according to the target density.
Optionally, the segment type prediction module 820 is configured to: inputting the segment characteristics into a type prediction model to obtain a segment type prediction result of the segment type of the video segment corresponding to the segment characteristics, wherein the type prediction model is obtained based on historical segment characteristics training.
Optionally, the segment features comprise audio features, and the segment type prediction module 820 is configured to: inputting the audio features into a type prediction model, and predicting the probability that the segment type of the video segment belongs to the preset type; and when the probability is greater than a preset threshold value, determining the type of the video clip as the preset type.
Optionally, the audio features comprise at least any one of: zero crossing rate, short-term energy, spectral centroid and mel frequency; wherein the zero crossing rate is a probability that a time domain waveform of the audio of the video clip passes through a horizontal axis representing a zero level, the short-time energy is an energy of the audio of the video clip, the spectral centroid is an average point of a spectral energy distribution of an audio frame of the video clip, and the mel frequency is a converted auditory frequency calculated from a frequency of the audio of the video clip.
Optionally, as shown in fig. 11, the apparatus 800 further includes: the body-protecting text generation module 840 is configured to generate the bullet screen body-protecting text according to the probability when the segment type of the video segment is that the probability of the preset type is greater than the preset threshold, where the density of the bullet screen body-protecting text and the size of the probability are in a positive correlation relationship.
Optionally, as shown in fig. 11, the body-protecting text generating module 840 is configured to: acquiring the actual number of barrages in the video clip and the maximum number of barrages supported by the video clip; determining the number of shielding bullet screens according to the probability, the actual bullet screen number and the maximum bullet screen number, wherein the number of shielding bullet screens and the size of the probability are in positive correlation; and generating the bullet screen protection body text according to the shielding bullet screen quantity.
Optionally, as shown in fig. 11, the apparatus 800 further includes: and the playing volume adjusting module 850 is configured to, when the probability that the clip type of the video clip is the preset type is greater than the preset threshold, decrease the playing volume of the video clip according to the probability.
Optionally, as shown in fig. 11, the volume adjusting module 850 is configured to: and turning down the playing volume of the video clip by a target amplitude according to the probability, wherein the target amplitude and the probability are in positive correlation.
Optionally, as shown in fig. 11, the apparatus 800 further includes: a bullet screen text receiving module 860, configured to receive an input user bullet screen text in the playing process of the target video; and the prediction function starting module is used for starting a prediction adding function of the bullet screen protection body text for at least one video segment in the target video when the user bullet screen text contains the trigger keyword.
Optionally, the segment characteristic obtaining module 810 is configured to: and in the playing process of the target video, extracting a video clip which is closest to the current playing time point and is not played in the target video according to the preset clip duration.
Optionally, as shown in fig. 11, the apparatus 800 may be implemented as any node in a block chain system, and the apparatus 800 further includes: a storage module 870, where the storage module 870 is configured to store at least one of the segment type of the video segment and the bullet screen body-protecting text corresponding to the video segment into the block chain by using a consensus mechanism.
In summary, according to the technical scheme provided by the embodiment of the application, the clip type of the video clip is predicted according to the clip feature of the video clip in the target video, and when the clip type belongs to the preset type and is played to the video clip, the bullet screen body protecting text is displayed on the playing picture of the video clip in an overlapping manner, so that the manner of displaying the user bullet screen text is expanded. In addition, the technical scheme provided by the embodiment of the application predicts the segment type of the video segment according to the segment characteristics of the video segment, and then determines whether to display the bullet screen protection body text according to the prediction result, so that the next video segment can be processed in advance.
In addition, according to the technical solution provided by the embodiments of the present application, the predictive adding function of the body-protecting bullet screen text is started for at least one video segment in the target video only when the user bullet screen text contains a trigger keyword. When the user bullet screen text does not contain a trigger keyword, the predictive adding function is not started, which saves processing resources of the terminal, gives the user room to independently choose whether to enable the function, and improves the human-computer interaction experience.
It should be noted that, when the apparatus provided in the foregoing embodiments implements its functions, the division into the functional modules described above is merely illustrative. In practical applications, the functions may be assigned to different functional modules as needed; that is, the internal structure of the apparatus may be divided into different functional modules to implement all or part of the functions described above. In addition, the apparatus embodiments and the method embodiments provided above belong to the same concept; for details of the specific implementation process of the apparatus, refer to the method embodiments, which are not repeated here.
Referring to fig. 12, a block diagram of a terminal 1000 according to an embodiment of the present application is shown. The terminal 1000 can be an electronic device such as a mobile phone, a tablet computer, a game console, an electronic book reader, a multimedia playing device, or a wearable device. The terminal 1000 is used to implement the bullet screen display method provided in the above embodiments, and may be the terminal 10 in the implementation environment shown in fig. 1. Specifically:
In general, the terminal 1000 includes a processor 1001 and a memory 1002.
The processor 1001 may include one or more processing cores, for example a 4-core processor or an 8-core processor. The processor 1001 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 1001 may also include a main processor and a coprocessor, where the main processor, also referred to as a CPU (Central Processing Unit), processes data in the awake state, and the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 1001 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 1001 may further include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
The memory 1002 may include one or more computer-readable storage media, which may be non-transitory. The memory 1002 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 1002 is used to store at least one instruction, at least one program, a code set, or an instruction set, configured to be executed by one or more processors to implement the bullet screen display method described above.
In some embodiments, terminal 1000 can also optionally include: a peripheral interface 1003 and at least one peripheral. The processor 1001, memory 1002 and peripheral interface 1003 may be connected by a bus or signal line. Various peripheral devices may be connected to peripheral interface 1003 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1004, touch screen display 1005, camera 1006, audio circuitry 1007, positioning components 1008, and power supply 1009.
Those skilled in the art will appreciate that the configuration shown in fig. 12 is not intended to be limiting and that terminal 1000 can include more or fewer components than shown, or combine certain components, or employ a different arrangement of components.
In an exemplary embodiment, a computer-readable storage medium is also provided, which stores at least one instruction, at least one program, a code set, or an instruction set that, when executed by a processor, implements the bullet screen display method described above.
Optionally, the computer-readable storage medium may include: a Read Only Memory (ROM), a Random Access Memory (RAM), a Solid State Drive (SSD), or an optical disc. The Random Access Memory may include a resistive Random Access Memory (ReRAM) and a Dynamic Random Access Memory (DRAM).
In an exemplary embodiment, a computer program product is also provided, which when executed by a processor is configured to implement the bullet screen display method described above.
It should be understood that reference to "a plurality" herein means two or more. "And/or" describes the association relationship of the associated objects and means that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. In addition, the step numbers described herein only exemplarily show one possible execution sequence of the steps; in some other embodiments, the steps may also be executed out of the numbered sequence, for example, two steps with different numbers may be executed simultaneously, or in the reverse of the order shown in the figure, which is not limited by the embodiments of the present application.
The above description is only exemplary of the present application and should not be taken as limiting the present application, and any modifications, equivalents, improvements and the like that are made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (14)

1. A bullet screen display method is characterized by comprising the following steps:
receiving input user bullet screen text during playback of a target video;
in a case that the user bullet screen text contains a trigger keyword, acquiring segment features of a video segment in the target video;
predicting a segment type of the video segment according to the segment features;
when the segment type of the video segment is a preset type and playback reaches the video segment, displaying body-protecting bullet screen text superimposed on a playing picture of the video segment, wherein the body-protecting bullet screen text comprises a plurality of shielding bullet screens for shielding all or part of a visual subject in the playing picture, and the shielding bullet screens are dispersedly distributed over the visual subject according to a target density.
2. The method of claim 1, wherein the predicting the segment type of the video segment according to the segment features comprises:
inputting the segment features into a type prediction model to obtain a prediction result of the segment type of the video segment corresponding to the segment features;
wherein the type prediction model is a model obtained by training with multiple groups of sample data in a training set using an error back-propagation algorithm, and each group of sample data comprises a sample segment feature and a sample segment type corresponding to the sample segment feature.
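For illustration only (not part of the claims), a degenerate one-layer version of such a type prediction model, fitted by gradient descent (the simplest case of error back-propagation), might look like the following; the feature encoding and all names are assumptions:

```python
import math

def train_type_predictor(samples, lr=0.5, epochs=3000):
    """Fit a logistic-regression stand-in for the type prediction model.

    samples: list of (feature_vector, label) pairs, where label is 1 when
    the sample segment type is the preset type and 0 otherwise.
    """
    dim = len(samples[0][0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted probability
            err = p - y                      # output error, propagated back
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict_probability(x, w, b):
    """Probability that a segment with features x belongs to the preset type."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))
```

A practical model would be a deeper network over richer segment features, but the error signal would be propagated back through the layers in the same way.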
3. The method of claim 2, wherein the segment features comprise audio features, and the inputting the segment features into the type prediction model to obtain the prediction result of the segment type of the video segment corresponding to the segment features comprises:
inputting the audio features into the type prediction model, and predicting a probability that the segment type of the video segment belongs to the preset type;
when the probability is greater than a preset threshold, determining the segment type of the video segment to be the preset type.
4. The method of claim 3, wherein the audio features comprise at least one of: a zero-crossing rate, short-time energy, a spectral centroid, and a Mel frequency;
wherein the zero-crossing rate is the rate at which the time-domain waveform of the audio of the video segment crosses the horizontal axis representing the zero level, the short-time energy is the energy of the audio of the video segment, the spectral centroid is the average point of the spectral energy distribution of an audio frame of the video segment, and the Mel frequency is an auditory frequency converted from the frequency of the audio of the video segment.
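For illustration only (not part of the claims), the first three audio features defined above can be computed from a frame of audio samples roughly as follows; the function names and the naive DFT are assumptions made for brevity:

```python
import math

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ,
    i.e. how often the waveform crosses the zero level."""
    crossings = sum(
        1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0)
    )
    return crossings / (len(frame) - 1)

def short_time_energy(frame):
    """Sum of squared sample amplitudes over the frame."""
    return sum(s * s for s in frame)

def spectral_centroid(frame, sample_rate):
    """Magnitude-weighted mean frequency, using a naive O(n^2) DFT."""
    n = len(frame)
    mags, freqs = [], []
    for k in range(n // 2):
        re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
        freqs.append(k * sample_rate / n)
    total = sum(mags)
    return sum(f * m for f, m in zip(freqs, mags)) / total if total else 0.0
```

A production system would use an FFT and windowed frames; the Mel frequency conversion is a further logarithmic mapping of frequency and is omitted here.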
5. The method of claim 3, further comprising:
when the probability that the segment type of the video segment is the preset type is greater than the preset threshold, generating the body-protecting bullet screen text according to the probability, wherein the target density of the body-protecting bullet screen text is positively correlated with the probability.
6. The method of claim 5, wherein the generating the body-protecting bullet screen text according to the probability comprises:
acquiring the actual number of bullet screens in the video segment and the maximum number of bullet screens supported by the video segment;
determining the number of shielding bullet screens according to the probability, the actual number of bullet screens, and the maximum number of bullet screens, wherein the number of shielding bullet screens is positively correlated with the probability;
generating the body-protecting bullet screen text according to the number of shielding bullet screens.
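For illustration only (not part of the claims), one possible mapping that satisfies claim 6, in which the shielding count grows with the probability and never exceeds the remaining bullet screen capacity, is:

```python
def shielding_bullet_count(probability: float, actual_count: int,
                           max_count: int) -> int:
    """Fill part of the remaining bullet screen capacity in proportion
    to the predicted probability (an assumed, illustrative mapping)."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be in [0, 1]")
    if max_count < actual_count:
        raise ValueError("max_count must be at least actual_count")
    return round(probability * (max_count - actual_count))
```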
7. The method of claim 3, further comprising:
when the probability that the segment type of the video segment is the preset type is greater than the preset threshold, reducing the playing volume of the video segment according to the probability.
8. The method of any one of claims 1 to 7, wherein the acquiring the segment features of the video segment in the target video comprises:
during playback of the target video, extracting the video segment that is closest to the current playback time point and has not yet been played in the target video, according to a preset segment duration.
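For illustration only (not part of the claims), extracting the nearest not-yet-played segment on a fixed grid of preset-duration segments could be sketched as follows (assuming the segment currently being played counts as played):

```python
def next_segment_bounds(current_time: float, segment_duration: float,
                        video_duration: float):
    """Return (start, end) in seconds of the not-yet-played segment
    closest to the current playback point, or None at the video's end."""
    index = int(current_time // segment_duration) + 1  # next segment on grid
    start = index * segment_duration
    if start >= video_duration:
        return None
    return (start, min(start + segment_duration, video_duration))
```

Predicting on this next segment while the current one plays is what allows the body-protecting bullet screen text to be prepared in advance.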
9. The method of any one of claims 1 to 7, wherein the method is applied in any node of a blockchain system, and the method further comprises:
storing at least one of the segment type of the video segment and the body-protecting bullet screen text corresponding to the video segment into a blockchain by using a consensus mechanism.
10. A bullet screen display device, characterized in that the device comprises:
a bullet screen text receiving module, configured to receive input user bullet screen text during playback of a target video;
a segment feature acquisition module, configured to acquire segment features of a video segment in the target video in a case that the user bullet screen text contains a trigger keyword;
a segment type prediction module, configured to predict a segment type of the video segment according to the segment features;
a body-protecting text display module, configured to display body-protecting bullet screen text superimposed on a playing picture of the video segment when the segment type of the video segment is a preset type and playback reaches the video segment, wherein the body-protecting bullet screen text comprises a plurality of shielding bullet screens for shielding all or part of a visual subject in the playing picture, and the shielding bullet screens are dispersedly distributed over the visual subject according to a target density.
11. The apparatus of claim 10, wherein the segment features comprise audio features, and the segment type prediction module is configured to:
input the audio features into a type prediction model, and predict a probability that the segment type of the video segment belongs to the preset type;
when the probability is greater than a preset threshold, determine the segment type of the video segment to be the preset type.
12. The apparatus of claim 11, further comprising:
a body-protecting text generation module, configured to generate the body-protecting bullet screen text according to the probability when the probability that the segment type of the video segment is the preset type is greater than the preset threshold, wherein the target density of the body-protecting bullet screen text is positively correlated with the probability.
13. A terminal, characterized in that the terminal comprises a processor and a memory, wherein at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the memory, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by the processor to implement the bullet screen display method according to any one of claims 1 to 9.
14. A computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement the bullet screen display method according to any one of claims 1 to 9.
CN201910989704.9A 2019-10-17 2019-10-17 Barrage display method and device, terminal and storage medium Active CN110708588B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910989704.9A CN110708588B (en) 2019-10-17 2019-10-17 Barrage display method and device, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN110708588A CN110708588A (en) 2020-01-17
CN110708588B true CN110708588B (en) 2021-10-26

Family

ID=69201577

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910989704.9A Active CN110708588B (en) 2019-10-17 2019-10-17 Barrage display method and device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN110708588B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111277910B (en) * 2020-03-07 2022-03-22 咪咕互动娱乐有限公司 Bullet screen display method and device, electronic equipment and storage medium
CN112307260A (en) * 2020-10-30 2021-02-02 北京字节跳动网络技术有限公司 Video identification method, video identification device, electronic equipment and computer readable storage medium
CN115484465B (en) * 2021-05-31 2024-03-15 上海幻电信息科技有限公司 Bullet screen generation method and device, electronic equipment and storage medium
CN114491152B (en) * 2021-12-02 2023-10-31 南京硅基智能科技有限公司 Method for generating abstract video, storage medium and electronic device
CN114489882B (en) * 2021-12-16 2023-05-19 成都鲁易科技有限公司 Method and device for realizing dynamic skin of browser and storage medium
CN114511359B (en) * 2022-02-17 2022-11-22 北京优酷科技有限公司 Display method, device, equipment and medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102224499A (en) * 2008-11-21 2011-10-19 汤姆森特许公司 Technique for customizing content
CN104050422A (en) * 2014-06-10 2014-09-17 腾讯科技(深圳)有限公司 Method and device for displaying information content
CN106454490A (en) * 2016-09-21 2017-02-22 天脉聚源(北京)传媒科技有限公司 Method and device for smartly playing video
CN107147957A (en) * 2017-04-19 2017-09-08 北京小米移动软件有限公司 Video broadcasting method and device
CN108495184A (en) * 2018-02-06 2018-09-04 北京奇虎科技有限公司 A kind of method and apparatus for adding barrage for video
CN109040824A (en) * 2018-08-28 2018-12-18 百度在线网络技术(北京)有限公司 Method for processing video frequency, device, electronic equipment and readable storage medium storing program for executing
CN109218800A (en) * 2018-06-29 2019-01-15 努比亚技术有限公司 A kind of barrage information display method, terminal and computer readable storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070022202A1 (en) * 2005-07-22 2007-01-25 Finkle Karyn S System and method for deactivating web pages
WO2017026837A1 (en) * 2015-08-12 2017-02-16 Samsung Electronics Co., Ltd. Method for masking content displayed on electronic device
US20180246695A1 (en) * 2017-02-27 2018-08-30 Hosea Taylor Individually customized automated media content filtering


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Commenting on YouTube Videos: From Guatemalan Rock; Thelwall M; Journal of the American Society for Information Science & Technology; 20121231; entire document *
A perspective on the current development of danmaku (bullet screens); Wang Zhuoer; New Media; 20170131; entire document *
On the cultural innovation of danmaku film and television; Ma Ruohong; Contemporary Art Observation; 20171031; p. 035 *


Similar Documents

Publication Publication Date Title
CN110708588B (en) Barrage display method and device, terminal and storage medium
US11417341B2 (en) Method and system for processing comment information
US10826949B2 (en) Distributed control of media content item during webcast
US11736749B2 (en) Interactive service processing method and system, device, and storage medium
CN112399194B (en) Live data processing method and device, computer and readable storage medium
US20170168660A1 (en) Voice bullet screen generation method and electronic device
US20230285854A1 (en) Live video-based interaction method and apparatus, device and storage medium
US9607088B2 (en) Method and apparatus for detecting multimedia content change, and resource propagation system
CN110909241A (en) Information recommendation method, user identification recommendation method, device and equipment
CN110807009A (en) File processing method and device
EP3528151A1 (en) Method and apparatus for user authentication
CN105005612A (en) Music file acquisition method and mobile terminal
CN113573128A (en) Audio processing method, device, terminal and storage medium
US9875242B2 (en) Dynamic current results for second device
US20200177593A1 (en) Generating a custom blacklist for a listening device based on usage
CN111918140B (en) Video playing control method and device, computer equipment and storage medium
US11442606B2 (en) User interface interaction method and system
CN111031354B (en) Multimedia playing method, device and storage medium
CN117319340A (en) Voice message playing method, device, terminal and storage medium
CN113343984A (en) Bullet screen prompting method and device, electronic equipment and storage medium
CN114827701A (en) Multimedia information interaction method and device, electronic equipment and storage medium
CN108174308B (en) Video playing method, video playing device, storage medium and electronic equipment
WO2023029862A1 (en) Bullet-screen comment display method and apparatus, and device and storage medium
CN112245911B (en) Method and device for issuing game program, storage medium and computer equipment
US11570523B1 (en) Systems and methods to enhance interactive program watching

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40020923

Country of ref document: HK

SE01 Entry into force of request for substantive examination
GR01 Patent grant