CN112995694A - Video display method and device, electronic equipment and storage medium - Google Patents

Video display method and device, electronic equipment and storage medium Download PDF

Info

Publication number
CN112995694A
CN112995694A (application CN202110384039.8A)
Authority
CN
China
Prior art keywords
video
cut
special effect
address
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110384039.8A
Other languages
Chinese (zh)
Other versions
CN112995694B (en)
Inventor
熊名男
辛彦哲
宋毓韬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN202110384039.8A priority Critical patent/CN112995694B/en
Publication of CN112995694A publication Critical patent/CN112995694A/en
Application granted granted Critical
Publication of CN112995694B publication Critical patent/CN112995694B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/23418Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/23424Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/24Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth, upstream requests
    • H04N21/2407Monitoring of transmitted content, e.g. distribution time, number of downloads
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window

Abstract

The embodiment of the disclosure provides a video display method and apparatus, an electronic device, and a storage medium. During playback of a current video, if a cut-in video is detected, video special effect information is determined from the cut-in video, where the video special effect information represents a video special effect matched to the video content of the cut-in video; the video special effect information is fused into the cut-in video to obtain a special effect video; and the special effect video is rendered into the current video for display. By detecting the cut-in video in real time and automatically adding and displaying a video special effect matched to its content, breaking cut-in videos are processed and displayed automatically during a live broadcast, without manual intervention, which improves the real-time playback of the breaking video as well as its playback and publicity effect during the live broadcast.

Description

Video display method and device, electronic equipment and storage medium
Technical Field
The embodiment of the disclosure relates to the technical field of computer and network communication, and in particular, to a video display method and apparatus, an electronic device, and a storage medium.
Background
The cloud director platform is a software platform for generating and editing videos in real time and distributing them to live broadcast platforms; through it, functions such as online directing, production, and distribution of live videos can be achieved conveniently and quickly.
At present, during a live video broadcast, when a breaking video needs to be cut into the original live video, for example important news, a disaster warning, or advertisement information targeted at the real-time live content, the cut-in is typically still performed manually.
However, because manually editing video information and adding video special effects is time-consuming, the breaking video cannot be cut into the original live video in time. This affects its real-time playback, reduces the broadcasting and publicity effect of emergencies with high real-time requirements, and weakens the reach of the live video.
Disclosure of Invention
The embodiment of the disclosure provides a video display method and device, electronic equipment and a storage medium, so as to solve the problem that the playing real-time performance of a burst video is affected because the burst video cannot be timely cut into an original live video.
In a first aspect, an embodiment of the present disclosure provides a video display method, including:
in the process of playing a current video, if a cut-in video is detected, determining video special effect information according to the cut-in video, wherein the video special effect information is used for representing a video special effect matched with video content of the cut-in video; fusing the video special effect information into the cut-in video to obtain a special effect video; rendering the special effect video into the current video for display.
In a second aspect, an embodiment of the present disclosure provides a video display apparatus, including:
the video cut-in processing module is used for determining video special effect information according to a cut-in video if the cut-in video is detected in the process of playing a current video, wherein the video special effect information is used for representing a video special effect matched with the video content of the cut-in video;
the generating module is used for fusing the video special effect information into the cut-in video to obtain a special effect video;
and the display module is used for rendering the special effect video to the current video for display.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executing the computer-executable instructions stored by the memory causes the at least one processor to perform the video display method as set forth in the first aspect above and in various possible designs of the first aspect.
In a fourth aspect, the present disclosure provides a computer-readable storage medium, in which computer-executable instructions are stored, and when a processor executes the computer-executable instructions, the video display method according to the first aspect and various possible designs of the first aspect are implemented.
In a fifth aspect, embodiments of the present disclosure provide a computer program product comprising a computer program that, when executed by a processor, implements a video display method as described above in the first aspect and various possible designs of the first aspect.
According to the video display method and apparatus, electronic device, and storage medium provided by this embodiment, during playback of a current video, if a cut-in video is detected, video special effect information is determined from the cut-in video, where the video special effect information represents a video special effect matched to the video content of the cut-in video; the video special effect information is fused into the cut-in video to obtain a special effect video; and the special effect video is rendered into the current video for display. By detecting the cut-in video in real time and automatically adding and displaying a video special effect matched to its content, breaking cut-in videos are processed and displayed automatically during a live broadcast, without manual intervention, which improves the real-time playback of the breaking video as well as its playback and publicity effect during the live broadcast.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present disclosure, and for those skilled in the art, other drawings can be obtained according to the drawings without inventive exercise.
Fig. 1 is an application scene diagram of a video display method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a cut-in video according to an embodiment of the present disclosure;
fig. 3 is a first schematic flowchart of a video display method according to an embodiment of the present disclosure;
fig. 4 is a second flowchart illustrating a video display method according to an embodiment of the disclosure;
FIG. 5 is a schematic diagram of a process for obtaining a cut-in video according to an embodiment of the present disclosure;
fig. 6 is a block diagram of a video display device according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure;
Fig. 8 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are some, but not all embodiments of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
The following explains an application scenario of the embodiment of the present application:
fig. 1 is an application scene diagram of a video display method provided in an embodiment of the present application. The video display method provided in the embodiment of the present disclosure may be applied to a live video directing scene. As shown in fig. 1, in this application scene, a camera unit acquires video data and sends it to a cloud server; a directing user is in communication connection with the cloud server through a terminal device on which a cloud director platform client is installed, so as to acquire and edit the video data; finally, the edited output video is distributed to a live broadcast platform, so that users of the live broadcast platform can access it to view the output video. During directing, when an unplanned emergency event occurs, such as a temporarily inserted advertisement, an earthquake, or a fire alarm, the directing user needs to cut the event into the current live video in the form of a cut-in video. Fig. 2 is a schematic diagram of a cut-in video provided by an embodiment of the present disclosure. As shown in fig. 2, the cut-in video is displayed on an upper layer of the current video; to reduce its occlusion of the current video, it is usually displayed in an edge or corner area of the current video. However, when the cut-in video is, for example, a temporarily inserted advertisement, its display effect is poor precisely because it sits at an edge or corner of the current video; and if it contains only static text or pictures, it cannot attract the user's attention and is easily ignored, so it does not play an ideal role in popularizing and displaying the advertisement information.
For the above reasons, in general, a cut-in video with a video special effect is formed by adding a dynamic special effect to information corresponding to an emergency event, such as a temporarily inserted advertisement, so as to improve the presentation effect of the cut-in video, enable the information in the cut-in video to draw sufficient attention of a user, and improve the transmission effect of the information.
In the prior art, to add a video special effect that improves the expressive power of the cut-in video and conforms to its content, a common method is as follows: after the directing user acquires the cut-in video corresponding to an emergency, a video special effect is added to it based on manual experience to form a video with a special effect, which is then added to the current video through the director platform for display. However, manually adding special effects to the cut-in video wastes time and labor and affects its real-time playback, which reduces the broadcasting and publicity effect of emergencies with high real-time requirements and weakens the expressive force of the cut-in video.
The embodiment of the disclosure provides a video display method, which is used for automatically detecting a cut-in video and adding a video special effect matched with the content of the cut-in video to realize automatic generation and broadcasting guidance of the cut-in video to be specially treated so as to solve the problems.
Referring to fig. 3, fig. 3 is a first schematic flow chart of a video display method according to an embodiment of the present disclosure. The method of the embodiment can be applied to a cloud server, a cloud director platform runs in the cloud server, and the video display method comprises the following steps:
s101: in the process of playing the current video, if the cut-in video is detected, determining video special effect information according to the cut-in video, wherein the video special effect information is used for representing a video special effect matched with the video content of the cut-in video.
Illustratively, a current video is stored on the cloud director platform run by the cloud server, and the cloud server plays the current video according to a directing task, wherein the current video is a video of a live-broadcast nature, such as a live program, an evening gala, or a conference. Specifically, while the cloud server plays the current video, the directing user may connect to the cloud director platform through a cloud director client running on the terminal device and display the current video in real time through the terminal device.
Illustratively, the cut-in video is a video for showing information corresponding to an emergency event, for example, an urgent fire or earthquake alert, or temporarily inserted advertisement information. More specifically, suppose the cut-in video is a temporarily inserted advertisement and the current video is a live gala performance: during the live performance, the audience applauds during an interaction segment related to product A, and the advertisement video of product A is inserted at that moment so that the advertisement display effect is better. The inserted advertisement video of product A is a cut-in video. The emergency events represented by cut-in videos are therefore sudden and random in nature, and the real-time requirement on cut-in display is high.
further, the cut-in video is detected through a preset detection channel in the cloud server. The detection channel is, for example, a functional module implemented by a software program, and can detect a certain network address, and when a video exists at the network address, the video is regarded as a cut-in video. Further, exemplarily, if the cut-in video is detected, video feature extraction is performed on the cut-in video to obtain feature information, a video category is determined according to the feature information, and further, for the cut-in video of the video category, a corresponding video special effect is matched for the cut-in video, that is, video special effect information is determined. The process can be realized through a preset cut-in video special effect model, the cut-in video special effect model can be a classifier model based on a neural network and is used for classifying different videos according to the video contents of the videos and matching corresponding video special effects according to the video contents, and the process is not repeated.
Optionally, in this embodiment, before step S101, the method further includes: acquiring sample data, wherein the sample data comprises a special effect video and an original video which are matched with each other; training the initial cut-in video special effect model according to the sample data to obtain a trained cut-in video special effect model; the trained cut-in video special effect model is used for representing the mapping relation between the cut-in video and the video special effect information.
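The training step above could be sketched as follows; a nearest-centroid classifier is used purely as a simple stand-in for the neural-network model, and the sample format (feature vector, effect label) is an assumption for illustration.

```python
from collections import defaultdict

# Sketch only: a nearest-centroid classifier stands in for the
# neural-network cut-in video special effect model; the sample format
# (feature_vector, effect_label) is an illustrative assumption.

def train_effect_model(samples):
    """Learn a mapping from cut-in-video features to special effect labels."""
    sums, counts = {}, defaultdict(int)
    for vec, label in samples:
        if label not in sums:
            sums[label] = [0.0] * len(vec)
        sums[label] = [a + b for a, b in zip(sums[label], vec)]
        counts[label] += 1
    centroids = {lbl: [v / counts[lbl] for v in s] for lbl, s in sums.items()}

    def predict(vec):
        # Pick the label whose centroid is closest to the feature vector.
        def sq_dist(c):
            return sum((a - b) ** 2 for a, b in zip(vec, c))
        return min(centroids, key=lambda lbl: sq_dist(centroids[lbl]))
    return predict
```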
Illustratively, the cut-in video comprises image data and audio data, the feature information comprises image features and audio features, feature extraction is performed on the cut-in video to obtain feature information, and the video category is determined according to the feature information, and the method comprises the following steps: carrying out feature extraction on image data in the cut-in video to obtain image features, and carrying out feature extraction on audio data in the cut-in video to obtain audio features; and determining the video category according to the image characteristics and the audio characteristics.
In the embodiment, by respectively extracting the features of the image data and the audio data in the cut-in video, more content information in the cut-in video can be obtained, and the accuracy of matching the video special effect of the cut-in video is improved.
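As a rough sketch of extracting and combining the image and audio features described above, assuming trivial placeholder extractors (mean brightness and mean energy) in place of real embedding networks:

```python
# Placeholder extractors only: mean brightness and mean energy stand in
# for real image/audio feature extraction networks.

def image_features(frames):
    """frames: list of frames, each a flat list of pixel intensities."""
    return [sum(frame) / len(frame) for frame in frames]

def audio_features(samples):
    """samples: list of PCM sample values; returns overall energy."""
    return [sum(s * s for s in samples) / len(samples)]

def fused_features(frames, samples):
    """Concatenate image and audio features into one feature vector,
    so the classifier can use content information from both modalities."""
    return image_features(frames) + audio_features(samples)
```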
S102: and fusing the video special effect information into the cut-in video to obtain the special effect video.
Exemplarily, after the video special effect information is determined, it is fused into the cut-in video according to a preset special effect generation method to obtain the special effect video, that is, the cut-in video carrying the video special effect. More specifically, the video special effect information may include modifications to the size and position of the cut-in video, and may also include a special effect template added to the cut-in video. The special effect video contains the content information of the cut-in video, and because it carries a dynamic special effect, that content information can be displayed better, improving the display effect.
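The fusion step might be sketched like this; the field names (scale, position, template) are illustrative assumptions, not the patent's data model.

```python
from dataclasses import dataclass, field

# Sketch of fusing special effect information into the cut-in video;
# field names (scale, position, template) are illustrative assumptions.

@dataclass
class EffectInfo:
    scale: float = 1.0          # size modification
    position: tuple = (0, 0)    # position modification
    template: str = ""          # special effect template to add

@dataclass
class Video:
    width: int
    height: int
    overlays: list = field(default_factory=list)

def fuse(video: Video, fx: EffectInfo) -> Video:
    """Apply effect information to a cut-in video, yielding the special effect video."""
    out = Video(int(video.width * fx.scale), int(video.height * fx.scale),
                list(video.overlays))
    if fx.template:
        out.overlays.append(fx.template)
    return out
```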
S103: and rendering the special effect video to the current video for displaying.
Specifically, after the special effect video is obtained, it is rendered into the current video to generate an output video in which the current video and the special effect video are displayed simultaneously. The cloud server then distributes the output video to the live broadcast platform, so that network users can watch it by accessing the live broadcast platform through their terminal devices.
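As an illustration of the rendering step, treating each frame as a 2-D pixel grid and pasting the special effect frame at a chosen offset (a deliberately simplified stand-in for real video compositing):

```python
# Simplified stand-in for real video compositing: each frame is a 2-D
# grid of pixel values, and the special-effect frame is pasted onto the
# current-video frame at a (row, col) offset.

def composite(base_frame, fx_frame, row, col):
    """Return a new output frame with fx_frame rendered onto base_frame."""
    out = [list(r) for r in base_frame]
    for i, fx_row in enumerate(fx_frame):
        for j, pixel in enumerate(fx_row):
            out[row + i][col + j] = pixel
    return out
```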
In the embodiment, in the process of playing the current video, if the cut-in video is detected, video special effect information is determined according to the cut-in video, wherein the video special effect information is used for representing a video special effect matched with the video content of the cut-in video; fusing video special effect information into a cut-in video to obtain a special effect video; and rendering the special effect video to the current video for displaying. By detecting the cut-in video in real time and automatically adding and displaying the video special effect matched with the video content of the cut-in video, the automatic processing and displaying of the burst cut-in video in the video live broadcasting process are realized, manual intervention is not needed, the playing real-time performance of the burst video is improved, and the playing effect and the propaganda effect of the burst video in the live broadcasting process are improved.
Fig. 4 is a second schematic flowchart of a video display method according to an embodiment of the disclosure. In this embodiment, steps S101 to S103 are further refined; the video display method includes:
s201: and acquiring a video access address through a preset video address channel.
S202: based on the video access address, determine whether a cut-in video is detected.
Illustratively, the video address channel is a functional module implemented by a software program, and the functional module can output a video access address, and more specifically, the video address channel can receive the video access address sent by other terminal devices, so as to obtain the video access address. In a specific implementation manner, a director user inputs a specific video access address through a cloud director client operating in a terminal device, and the terminal device sends the video access address to a cloud server to be acquired by a video address channel of the cloud server, so as to acquire the video access address.
Further, in the video access address, if a valid video exists, it is determined that a cut-in video is detected, otherwise, it is determined that no cut-in video is detected.
In one possible implementation manner, detecting an incoming video according to a video access address includes:
if the video access address is an effective video address, determining that cut-in video is detected, wherein the effective video address is used for representing a network address capable of accessing and obtaining video data; and if the video access address is an invalid video address, determining that the cut-in video is not detected.
In the embodiment, the problem that the wrong cut-in video is obtained due to the fact that the video access address uploaded by the terminal device is wrong is solved by judging the validity of the video access address.
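A minimal validity check might look like the following; the accepted scheme list is an assumption, and a production check would also probe the address (for example with an HTTP HEAD request inspecting the Content-Type) to confirm that video data is actually served there.

```python
from urllib.parse import urlparse

# Assumption: these schemes are the ones expected to serve video data.
VIDEO_SCHEMES = {"http", "https", "rtmp", "rtsp"}

def is_valid_video_address(address: str) -> bool:
    """Return True if the address looks like a network address that could
    serve video data; a real check would additionally probe the URL."""
    parsed = urlparse(address)
    return parsed.scheme in VIDEO_SCHEMES and bool(parsed.netloc)
```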
In another possible implementation manner, before obtaining the video access address of the cut-in video through a preset video address channel, the method further includes: and acquiring the currently detected video through a preset video detection channel. According to the video access address, detecting the cut-in video, comprising:
and if the currently detected video meets the preset requirement, sending the address of the video detection channel to a video address channel, and executing the step of obtaining the video access address of the cut-in video through the preset video address channel.
Fig. 5 is a schematic diagram of a process of obtaining a cut-in video according to an embodiment of the present disclosure. As shown in fig. 5, a video monitor is in communication connection with the cloud server running the cloud director platform; the video monitor collects video data, and the video detection channel obtains and identifies the video data collected by the video monitor. After determining that this video data meets a preset requirement, the detection channel sends the access address of the video monitor to the video address channel, so that the cloud server obtains the cut-in video through the video address channel.
More specifically, the video detection channel is a functional module for detecting video content; for example, it is in communication connection with a video monitor and processes the content the monitor detects, such as performing video feature recognition and video feature classification. Because the video monitor is usually in a continuous working state, most of the collected video data is meaningless. After the content detected by the video monitor is processed through the video detection channel, valuable video images are identified, for example an intruding stranger or special actions or expressions of the live audience. That is, if the video detected by the video monitor meets a preset requirement, it is determined to be a cut-in video, the address of the video detection channel is sent to the video address channel, and the step of obtaining the video access address of the cut-in video through the preset video address channel is executed; in other words, after the video address channel obtains the address of the video detection channel, it obtains through that address the video images, collected by the video monitor, that meet the preset requirement.
In the embodiment, a large number of videos detected by a video monitor are processed and screened through a preset video detection channel, so that valuable video data are obtained, automatic cut-in video obtaining is achieved, a director does not need to manually input a video access address through terminal equipment to obtain the cut-in video, the automation degree of the cloud director platform for obtaining the cut-in video and the quality of the cut-in video are improved, and the timeliness of displaying the cut-in video is further improved because the cut-in video does not need to be manually screened.
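The screening flow above might be sketched like this; the per-frame scores and threshold are assumptions standing in for real feature recognition of monitor footage.

```python
# Sketch of the video detection channel: per-frame scores stand in for
# real feature recognition; the threshold is the "preset requirement".

def detection_channel(frame_scores, threshold, address_channel, monitor_addr):
    """Screen monitor frames; on a match, forward the monitor address to
    the video address channel and report that a cut-in video exists."""
    for score in frame_scores:
        if score >= threshold:
            address_channel.append(monitor_addr)
            return True
    return False
```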
S203: in the process of playing the current video, if the cut-in video is detected, feature extraction is carried out on the cut-in video to obtain feature information, and the video category is determined according to the feature information.
S204: the method comprises the steps of obtaining video content of a current video, and determining video special effect information according to the video content of the current video and video types.
For example, to improve video expressiveness, the video special effect needs to match both the cut-in video itself and the content currently being played. If the special effect of the cut-in video does not match the video content of the current video, viewers may be confused about, or misunderstand, the meaning the cut-in video expresses and the moment at which it appears. This hinders their understanding of the information the cut-in video actually intends to convey, may even create a sense of incongruity, and weakens the expressive force of the cut-in video.
In the step of the embodiment, after the cloud server extracts and classifies the features of the cut-in video and determines the video category, the video special effect information is determined according to the video category of the cut-in video and the video content of the current video.
S205: and fusing the video special effect information into the cut-in video to obtain the special effect video.
S206: and acquiring the video content of the current video.
S207: and determining display parameters according to the video content of the current video.
S208: and rendering the special effect video to the current video according to the display parameters, generating an output video, and displaying the output video on the director window.
Optionally, the display parameters include one or more of: display position, display size, and dynamic display trajectory. After the special effect video is obtained and before it is displayed in the current video, its display parameters also need to be determined, such as the display position, display size, and dynamic display trajectory. To reduce the occlusion of the current video by the special effect video and improve its expressive force, the display parameters need to be determined according to the content of the current video. Specifically, for example, each frame image of the current video is divided into an important content area and a non-important content area according to the video content, and information such as the position and size at which the special effect video is displayed within the non-important area is determined as the display parameters.
In the embodiment, the display parameters are determined according to the video content of the current video, so that the special effect video can be displayed at a proper position in the current video, the effective information of the current video is prevented from being shielded, and the expressive force of the special effect video can be improved.
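The important/non-important division described above can be sketched as follows. The per-cell importance grid is an illustrative assumption; a real system might derive it from saliency estimation or subject detection, which the disclosure does not specify.

```python
# Hypothetical sketch of steps S206-S208: choose display parameters so the
# special effect lands in a non-important area of the current frame. Importance
# is a toy per-cell score here, not the disclosure's actual content analysis.

def least_important_cell(frame_grid):
    # frame_grid: 2-D list of per-cell importance scores in [0, 1].
    cells = [(r, c) for r, row in enumerate(frame_grid) for c, _ in enumerate(row)]
    return min(cells, key=lambda rc: frame_grid[rc[0]][rc[1]])

def display_parameters(frame_grid, cell_w=320, cell_h=180):
    row, col = least_important_cell(frame_grid)
    return {
        "position": (col * cell_w, row * cell_h),  # top-left pixel of chosen cell
        "size": (cell_w, cell_h),                  # keep the effect inside one cell
    }

grid = [[0.9, 0.8, 0.2],   # e.g. the main subject occupies the left of the frame
        [0.7, 0.6, 0.3]]
print(display_parameters(grid))  # effect placed over the least important cell
```

Because the parameters are recomputed from the current video's content, the effect avoids covering the areas a viewer is most likely watching.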
Optionally, after step S208, the present embodiment further includes:
S209: and acquiring a stopping instruction, wherein the stopping instruction is used for instructing and controlling the special effect video to stop displaying on the current video.
S210: and controlling the special effect video to stop displaying on the current video according to the stop instruction.
Exemplarily, after the special effect video is rendered into the current video and the output video is generated, the terminal device side can display the output video in real time through the cloud director client. If a director user on the terminal device side is unsatisfied with the video special effect in the special effect video, a stop instruction can be sent to the cloud server through the terminal device, and upon acquiring the stop instruction the cloud server stops displaying the special effect video on the current video. This prevents an unsuitable video special effect match for the cut-in video from affecting the live broadcast of the current video, avoids live broadcast accidents, and improves the expressiveness of the live video.
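The server-side handling of steps S209 and S210 can be sketched as follows. The class, the message format, and the composition function are illustrative assumptions, not the cloud server's actual interfaces.

```python
# Hypothetical sketch: the cloud server reacts to a stop instruction from the
# director client by dropping the special effect from the composited output.

class DirectorSession:
    def __init__(self):
        self.effect_active = False

    def start_effect(self):
        self.effect_active = True

    def handle_message(self, message):
        # A stop instruction from the terminal device halts the effect overlay.
        if message.get("type") == "stop_effect":
            self.effect_active = False

    def compose_frame(self, current_frame, effect_frame):
        # Render the effect only while no stop instruction has been received.
        return (current_frame, effect_frame) if self.effect_active else (current_frame,)

session = DirectorSession()
session.start_effect()
session.handle_message({"type": "stop_effect"})
print(session.compose_frame("live", "fx"))  # effect dropped after the stop instruction
```

The current video keeps playing throughout; only the overlaid special effect is removed, matching the live-broadcast safety goal described above.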
In this embodiment, steps S203 to S205 are implemented in the same manner as steps S101 and S102 in the embodiment shown in fig. 3 of this application, and are not described here again.
Fig. 6 is a block diagram of a video display apparatus according to an embodiment of the present disclosure, corresponding to the video display method of the foregoing embodiment. For ease of illustration, only portions that are relevant to embodiments of the present disclosure are shown.
Referring to fig. 6, the video display apparatus 3 includes:
the determining module 31 is configured to, in a process of playing a current video, determine video special effect information according to a cut-in video if the cut-in video is detected, where the video special effect information is used to represent a video special effect matched with video content of the cut-in video;
the generating module 32 is configured to fuse the video special effect information into the cut-in video to obtain a special effect video;
and the display module 33 is configured to render the special effect video into the current video for display.
In an embodiment of the present disclosure, the determining module 31 is specifically configured to: performing feature extraction on the cut-in video to obtain feature information, and determining the video category according to the feature information; and determining video special effect information according to the video category.
In one embodiment of the present disclosure, image data and audio data are included in the cut-in video, and the feature information includes image features and audio features; when the determining module 31 performs feature extraction on the cut-in video to obtain feature information, and determines a video category according to the feature information, it is specifically configured to: carrying out feature extraction on image data in the cut-in video to obtain image features, and carrying out feature extraction on audio data in the cut-in video to obtain audio features; and determining the video category according to the image characteristics and the audio characteristics.
In an embodiment of the present disclosure, when determining the video special effect information according to the video category, the determining module 31 is specifically configured to: acquiring video content of a current video; and determining the video special effect information according to the video content and the video category of the current video.
In an embodiment of the present disclosure, the determining module 31 is further configured to: acquire a video access address through a preset video address channel; and determine, based on the video access address, whether a cut-in video is detected.
In an embodiment of the present disclosure, when detecting the cut-in video according to the video access address, the determining module 31 is specifically configured to: if the video access address is an effective video address, determine that the cut-in video is detected, wherein the effective video address is used for representing a network address from which video data can be accessed and obtained; and if the video access address is an invalid video address, determine that the cut-in video is not detected.
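The effective/invalid address distinction can be sketched as follows. The reachability probe and the known-stream set are assumptions; a real director server might instead issue a network request or attempt to open the stream to confirm that video data is accessible.

```python
# Hypothetical sketch: a video access address is effective when video data can
# actually be fetched from it, so detection of a cut-in video reduces to an
# address validity check.

from urllib.parse import urlparse

KNOWN_STREAMS = {"rtmp://example.com/live/cutin1"}  # assumed reachable addresses

def is_effective_video_address(address):
    parsed = urlparse(address)
    if not parsed.scheme or not parsed.netloc:
        return False                     # not even a well-formed network address
    return address in KNOWN_STREAMS      # stand-in for "video data is accessible"

def detect_cut_in(address):
    # An effective address means a cut-in video is detected; otherwise it is not.
    return is_effective_video_address(address)

print(detect_cut_in("rtmp://example.com/live/cutin1"))  # True
print(detect_cut_in("not-an-address"))                  # False
```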
In an embodiment of the present disclosure, the determining module 31, before acquiring the video access address of the cut-in video through the preset video address channel, is further configured to: acquiring a currently detected video through a preset video detection channel; and if the currently detected video meets the preset requirement, sending the address of the video detection channel to a video address channel, and executing the step of obtaining the video access address of the cut-in video through the preset video address channel.
In an embodiment of the present disclosure, the display module 33 is specifically configured to: acquiring video content of a current video; determining display parameters according to the video content of the current video; and rendering the special effect video to the current video according to the display parameters, generating an output video, and displaying the output video on the director window.
In one embodiment of the present disclosure, the display parameters include one or more of the following: displaying position, displaying size and dynamically displaying track.
In an embodiment of the present disclosure, the display module 33 is further configured to: acquiring a stopping instruction, wherein the stopping instruction is used for instructing and controlling the special effect video to stop displaying on the current video; and controlling the special effect video to stop displaying on the current video according to the stop instruction.
In an embodiment of the present disclosure, the determining module 31 is further configured to: acquiring sample data, wherein the sample data comprises a special effect video and an original video which are matched with each other; training the initial cut-in video special effect model according to the sample data to obtain a trained cut-in video special effect model; the trained cut-in video special effect model is used for representing the mapping relation between the cut-in video and the video special effect information.
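Training the cut-in video special effect model from matched sample pairs can be sketched as follows. The nearest-centroid "model" and the scalar video feature are illustrative assumptions; the disclosure leaves the model architecture open.

```python
# Hypothetical sketch: learn a mapping from a cut-in video feature to video
# special effect information using sample pairs of mutually matched special
# effect video and original video.

from collections import defaultdict

def train(samples):
    # samples: list of (original_video_feature, special_effect_info) pairs.
    buckets = defaultdict(list)
    for feature, effect in samples:
        buckets[effect].append(feature)
    # Each effect's centroid summarizes the original videos it was matched with.
    return {effect: sum(f) / len(f) for effect, f in buckets.items()}

def predict(model, feature):
    # Map a new cut-in video feature to the effect with the nearest centroid,
    # i.e. the learned mapping between cut-in video and special effect info.
    return min(model, key=lambda effect: abs(model[effect] - feature))

model = train([(0.1, "subtle-fade"), (0.2, "subtle-fade"),
               (0.8, "confetti-burst"), (0.9, "confetti-burst")])
print(predict(model, 0.85))  # confetti-burst
```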
The device provided in this embodiment may be used to implement the technical solution of the above method embodiment, and the implementation principle and technical effect are similar, which are not described herein again.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 7, the electronic device 4 includes at least one processor 401 and a memory 402; the memory 402 stores computer-executable instructions; the at least one processor 401 executes the computer-executable instructions stored by the memory 402 to cause the at least one processor 401 to perform the video display method in the embodiments shown in figs. 2 to 5.
The processor 401 and the memory 402 are connected by a bus 403.
For the relevant descriptions and effects of the steps in the embodiments corresponding to figs. 2 to 5, reference may be made to the foregoing descriptions, and details are not repeated here.
Referring to fig. 8, a schematic structural diagram of an electronic device 900 suitable for implementing the embodiments of the present disclosure is shown, where the electronic device 900 may be a terminal device or a server. The terminal device may include, but is not limited to, mobile terminals such as a mobile phone, a notebook computer, a digital broadcast receiver, a personal digital assistant (PDA), a tablet computer (PAD), a portable multimedia player (PMP), and a vehicle-mounted terminal (e.g., a vehicle navigation terminal), and fixed terminals such as a digital TV and a desktop computer. The electronic device shown in fig. 8 is only an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in fig. 8, the electronic device 900 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 901, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 902 or a program loaded from a storage means 908 into a Random Access Memory (RAM) 903. In the RAM 903, various programs and data necessary for the operation of the electronic apparatus 900 are also stored. The processing apparatus 901, the ROM902, and the RAM 903 are connected to each other through a bus 904. An input/output (I/O) interface 905 is also connected to bus 904.
Generally, the following devices may be connected to the I/O interface 905: input devices 906 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 907 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 908 including, for example, magnetic tape, hard disk, etc.; and a communication device 909. The communication device 909 may allow the electronic apparatus 900 to perform wireless or wired communication with other apparatuses to exchange data. While fig. 8 illustrates an electronic device 900 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 909, or installed from the storage device 908, or installed from the ROM 902. The computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure when executed by the processing apparatus 901.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the methods shown in the above embodiments.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a unit does not in some cases constitute a limitation of the unit itself, for example, the first retrieving unit may also be described as a "unit for retrieving at least two internet protocol addresses".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In a first aspect, according to one or more embodiments of the present disclosure, there is provided a video display method including:
in the process of playing a current video, if a cut-in video is detected, determining video special effect information according to the cut-in video, wherein the video special effect information is used for representing a video special effect matched with video content of the cut-in video; fusing the video special effect information into the cut-in video to obtain a special effect video; rendering the special effect video into the current video for display.
According to one or more embodiments of the present disclosure, the determining video special effect information according to the cut-in video includes: performing feature extraction on the cut-in video to obtain feature information, and determining the video category according to the feature information; and determining the video special effect information according to the video category.
According to one or more embodiments of the present disclosure, image data and audio data are included in the cut-in video, and the feature information includes image features and audio features; performing feature extraction on the cut-in video to obtain feature information, and determining the video category according to the feature information, wherein the method comprises the following steps: performing feature extraction on the image data in the cut-in video to obtain image features, and performing feature extraction on the audio data in the cut-in video to obtain audio features; and determining the video category according to the image feature and the audio feature.
According to one or more embodiments of the present disclosure, determining the video special effect information according to the video category includes: acquiring the video content of the current video; and determining video special effect information according to the video content of the current video and the video category.
According to one or more embodiments of the present disclosure, the method further comprises: acquiring a video access address through a preset video address channel; and determining, according to the video access address, whether the cut-in video is detected.
According to one or more embodiments of the present disclosure, detecting the cut-in video according to the video access address includes: if the video access address is an effective video address, determining that the cut-in video is detected, wherein the effective video address is used for representing a network address capable of accessing and obtaining video data; and if the video access address is an invalid video address, determining that the cut-in video is not detected.
According to one or more embodiments of the present disclosure, before the obtaining the video access address of the cut-in video through the preset video address channel, the method further includes: acquiring a currently detected video through a preset video detection channel; and if the currently detected video meets the preset requirement, sending the address of the video detection channel to the video address channel, and executing the step of obtaining the video access address of the cut-in video through the preset video address channel.
According to one or more embodiments of the disclosure, rendering the special effects video into the current video for display comprises: acquiring the video content of the current video; determining display parameters according to the video content of the current video; rendering the special effect video to the current video according to the display parameters, generating an output video, and displaying the output video on a program guide window.
According to one or more embodiments of the present disclosure, the display parameters include one or more of: displaying position, displaying size and dynamically displaying track.
According to one or more embodiments of the present disclosure, the method further comprises: obtaining a stopping instruction, wherein the stopping instruction is used for instructing and controlling the special effect video to stop displaying on the current video; and controlling the special effect video to stop displaying on the current video according to the stop instruction.
According to one or more embodiments of the present disclosure, the method further comprises: acquiring sample data, wherein the sample data comprises a special effect video and an original video which are matched with each other; training the initial cut-in video special effect model according to the sample data to obtain a trained cut-in video special effect model; the trained cut-in video special effect model is used for representing the mapping relation between the cut-in video and the video special effect information.
In a second aspect, according to one or more embodiments of the present disclosure, there is provided a video display apparatus including:
the determining module is used for determining video special effect information according to a cut-in video if the cut-in video is detected in the process of playing a current video, wherein the video special effect information is used for representing a video special effect matched with the video content of the cut-in video;
the generating module is used for fusing the video special effect information into the cut-in video to obtain a special effect video;
and the display module is used for rendering the special effect video to the current video for display.
According to one or more embodiments of the present disclosure, the determining module is specifically configured to: performing feature extraction on the cut-in video to obtain feature information, and determining the video category according to the feature information; and determining the video special effect information according to the video category.
According to one or more embodiments of the present disclosure, image data and audio data are included in the cut-in video, and the feature information includes image features and audio features; the determining module is specifically configured to, when performing feature extraction on the cut-in video to obtain feature information and determining a video category according to the feature information: performing feature extraction on the image data in the cut-in video to obtain image features, and performing feature extraction on the audio data in the cut-in video to obtain audio features; and determining the video category according to the image feature and the audio feature.
According to one or more embodiments of the present disclosure, when determining the video special effect information according to the video category, the determining module is specifically configured to: acquiring the video content of the current video; and determining video special effect information according to the video content of the current video and the video category.
According to one or more embodiments of the present disclosure, the determining module is further configured to: acquire a video access address through a preset video address channel; and determine, according to the video access address, whether the cut-in video is detected.
According to one or more embodiments of the present disclosure, when detecting the cut-in video according to the video access address, the determining module is specifically configured to: if the video access address is an effective video address, determining that the cut-in video is detected, wherein the effective video address is used for representing a network address capable of accessing and obtaining video data; and if the video access address is an invalid video address, determining that the cut-in video is not detected.
According to one or more embodiments of the present disclosure, before the determining module obtains the video access address of the cut-in video through a preset video address channel, the determining module is further configured to: acquiring a currently detected video through a preset video detection channel; and if the currently detected video meets the preset requirement, sending the address of the video detection channel to the video address channel, and executing the step of obtaining the video access address of the cut-in video through the preset video address channel.
According to one or more embodiments of the present disclosure, the display module is specifically configured to: acquiring the video content of the current video; determining display parameters according to the video content of the current video; rendering the special effect video to the current video according to the display parameters, generating an output video, and displaying the output video on a program guide window.
According to one or more embodiments of the present disclosure, the display parameters include one or more of: displaying position, displaying size and dynamically displaying track.
According to one or more embodiments of the present disclosure, the display module is further configured to: obtaining a stopping instruction, wherein the stopping instruction is used for instructing and controlling the special effect video to stop displaying on the current video; and controlling the special effect video to stop displaying on the current video according to the stop instruction.
According to one or more embodiments of the present disclosure, the determining module is further configured to: acquiring sample data, wherein the sample data comprises a special effect video and an original video which are matched with each other; training the initial cut-in video special effect model according to the sample data to obtain a trained cut-in video special effect model; the trained cut-in video special effect model is used for representing the mapping relation between the cut-in video and the video special effect information.
In a third aspect, according to one or more embodiments of the present disclosure, there is provided an electronic device including: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executing the computer-executable instructions stored by the memory causes the at least one processor to perform the video display method as set forth in the first aspect above and in various possible designs of the first aspect.
In a fourth aspect, according to one or more embodiments of the present disclosure, there is provided a computer-readable storage medium having stored therein computer-executable instructions that, when executed by a processor, implement the video display method according to the first aspect and various possible designs of the first aspect.
In a fifth aspect, embodiments of the present disclosure provide a computer program product comprising a computer program that, when executed by a processor, implements a video display method as described above in the first aspect and various possible designs of the first aspect.
The foregoing description is merely an illustration of the preferred embodiments of the present disclosure and of the technical principles employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to technical solutions formed by the particular combinations of the features described above, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by interchanging the above features with (but not limited to) features having similar functions disclosed in this disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (15)

1. A video display method, comprising:
in the process of playing a current video, if a cut-in video is detected, determining video special effect information according to the cut-in video, wherein the video special effect information is used for representing a video special effect matched with video content of the cut-in video;
fusing the video special effect information into the cut-in video to obtain a special effect video;
rendering the special effect video into the current video for display.
2. The method of claim 1, wherein determining video special effect information from the cut-in video comprises:
performing feature extraction on the cut-in video to obtain feature information, and determining the video category according to the feature information;
and determining the video special effect information according to the video category.
3. The method of claim 2, wherein the cut-in video comprises image data and audio data, and wherein the feature information comprises image features and audio features; performing feature extraction on the cut-in video to obtain feature information, and determining the video category according to the feature information, wherein the method comprises the following steps:
performing feature extraction on the image data in the cut-in video to obtain image features, and performing feature extraction on the audio data in the cut-in video to obtain audio features;
and determining the video category according to the image feature and the audio feature.
4. The method of claim 2, wherein determining the video special effect information according to the video category comprises:
acquiring the video content of the current video;
and determining video special effect information according to the video content of the current video and the video category.
5. The method of claim 1, further comprising:
acquiring a video access address through a preset video address channel;
and determining, according to the video access address, whether the cut-in video is detected.
6. The method of claim 5, wherein detecting the cut-in video according to the video access address comprises:
if the video access address is a valid video address, determining that the cut-in video is detected, wherein the valid video address represents a network address from which video data can be accessed and obtained;
and if the video access address is an invalid video address, determining that the cut-in video is not detected.
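Claims 5-6 decide that a cut-in video exists only when its access address is a valid network address. A minimal sketch, assuming a well-formedness check suffices (a real implementation would also probe whether video data is actually retrievable from the address):

```python
from urllib.parse import urlparse

# Sketch of claims 5-6: treat the cut-in video as "detected" only when the
# access address parses as a network address with a recognized scheme.

def cut_in_video_detected(video_access_address: str) -> bool:
    parsed = urlparse(video_access_address)
    return parsed.scheme in ("http", "https", "rtmp") and bool(parsed.netloc)
```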
7. The method of claim 5, further comprising, before obtaining the video access address of the cut-in video through the preset video address channel:
acquiring a currently detected video through a preset video detection channel;
and if the currently detected video meets the preset requirement, sending the address of the video detection channel to the video address channel, and executing the step of obtaining the video access address of the cut-in video through the preset video address channel.
8. The method of any of claims 1-7, wherein rendering the special effects video into the current video for display comprises:
acquiring the video content of the current video;
determining display parameters according to the video content of the current video;
rendering the special effect video to the current video according to the display parameters, generating an output video, and displaying the output video on a program guide window.
9. The method of claim 8, wherein the display parameters include one or more of: displaying position, displaying size and dynamically displaying track.
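Claims 8-9 determine display parameters (position, size, motion track) from the current video before rendering the special effect video. A sketch under assumed conventions: the bottom-right placement and quarter-width sizing are illustrative choices, not values claimed by the patent.

```python
from dataclasses import dataclass

# Sketch of claims 8-9: derive overlay display parameters from the current
# frame dimensions. Placement and sizing rules here are hypothetical.

@dataclass
class DisplayParams:
    x: int
    y: int
    width: int
    height: int

def choose_display_params(frame_w: int, frame_h: int) -> DisplayParams:
    w = frame_w // 4   # overlay occupies a quarter of the frame width
    h = frame_h // 4
    # Anchor the overlay at the bottom-right corner of the current video.
    return DisplayParams(x=frame_w - w, y=frame_h - h, width=w, height=h)
```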
10. The method according to any one of claims 1-7, further comprising:
obtaining a stop instruction, wherein the stop instruction is used for instructing the special effect video to stop being displayed on the current video;
and controlling the special effect video to stop displaying on the current video according to the stop instruction.
11. The method according to any one of claims 1-7, further comprising:
acquiring sample data, wherein the sample data comprises a special effect video and an original video which are matched with each other;
training the initial cut-in video special effect model according to the sample data to obtain a trained cut-in video special effect model; the trained cut-in video special effect model is used for representing the mapping relation between the cut-in video and the video special effect information.
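Claim 11 trains a model on paired special effect / original videos so that it maps a cut-in video to video special effect information. A toy sketch, assuming the "training" is reduced to learning a majority-vote lookup from video category to effect label; a real model would be a learned network:

```python
from collections import Counter, defaultdict

# Toy stand-in for the claim-11 training step: learn, from sample pairs,
# which special effect is most often matched with each video category.

def train_effect_model(samples):
    """samples: iterable of (video_category, effect_label) pairs."""
    votes = defaultdict(Counter)
    for category, effect in samples:
        votes[category][effect] += 1
    # The "trained model" is a category -> most-frequent-effect mapping.
    return {c: counts.most_common(1)[0][0] for c, counts in votes.items()}
```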
12. A video display apparatus, comprising:
the video cut-in processing module is used for determining video special effect information according to a cut-in video if the cut-in video is detected in the process of playing a current video, wherein the video special effect information is used for representing a video special effect matched with the video content of the cut-in video;
the generating module is used for fusing the video special effect information into the cut-in video to obtain a special effect video;
and the display module is used for rendering the special effect video to the current video for display.
13. An electronic device, comprising: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executes the computer-executable instructions stored in the memory, causing the at least one processor to perform the video display method of any one of claims 1 to 11.
14. A computer-readable storage medium having computer-executable instructions stored thereon which, when executed by a processor, implement the video display method of any one of claims 1 to 11.
15. A computer program product comprising a computer program which, when executed by a processor, implements the method of any one of claims 1 to 11.
CN202110384039.8A 2021-04-09 2021-04-09 Video display method and device, electronic equipment and storage medium Active CN112995694B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110384039.8A CN112995694B (en) 2021-04-09 2021-04-09 Video display method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN112995694A true CN112995694A (en) 2021-06-18
CN112995694B CN112995694B (en) 2022-11-22

Family

ID=76339686

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110384039.8A Active CN112995694B (en) 2021-04-09 2021-04-09 Video display method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112995694B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113840152A (en) * 2021-07-15 2021-12-24 阿里巴巴达摩院(杭州)科技有限公司 Live broadcast key point processing method and device
CN113923391A (en) * 2021-09-08 2022-01-11 荣耀终端有限公司 Method, apparatus, storage medium, and program product for video processing

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102263906A (en) * 2011-07-26 2011-11-30 北京数码视讯科技股份有限公司 Method and device for processing video images
CN108289159A (en) * 2017-05-25 2018-07-17 广州华多网络科技有限公司 A kind of terminal live streaming special efficacy add-on system, method and terminal live broadcast system
CN109474844A (en) * 2017-09-08 2019-03-15 腾讯科技(深圳)有限公司 Video information processing method and device, computer equipment
CN110049371A (en) * 2019-05-14 2019-07-23 北京比特星光科技有限公司 Video Composition, broadcasting and amending method, image synthesizing system and equipment
CN110611776A (en) * 2018-05-28 2019-12-24 腾讯科技(深圳)有限公司 Special effect processing method, computer device and computer storage medium
CN111464827A (en) * 2020-04-20 2020-07-28 玉环智寻信息技术有限公司 Data processing method and device, computing equipment and storage medium
CN111770375A (en) * 2020-06-05 2020-10-13 百度在线网络技术(北京)有限公司 Video processing method and device, electronic equipment and storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102263906A (en) * 2011-07-26 2011-11-30 北京数码视讯科技股份有限公司 Method and device for processing video images
CN108289159A (en) * 2017-05-25 2018-07-17 广州华多网络科技有限公司 A kind of terminal live streaming special efficacy add-on system, method and terminal live broadcast system
CN109474844A (en) * 2017-09-08 2019-03-15 腾讯科技(深圳)有限公司 Video information processing method and device, computer equipment
US20200195980A1 (en) * 2017-09-08 2020-06-18 Tencent Technology (Shenzhen) Company Limited Video information processing method, computer equipment and storage medium
CN110611776A (en) * 2018-05-28 2019-12-24 腾讯科技(深圳)有限公司 Special effect processing method, computer device and computer storage medium
CN110049371A (en) * 2019-05-14 2019-07-23 北京比特星光科技有限公司 Video Composition, broadcasting and amending method, image synthesizing system and equipment
CN111464827A (en) * 2020-04-20 2020-07-28 玉环智寻信息技术有限公司 Data processing method and device, computing equipment and storage medium
CN111770375A (en) * 2020-06-05 2020-10-13 百度在线网络技术(北京)有限公司 Video processing method and device, electronic equipment and storage medium



Similar Documents

Publication Publication Date Title
CN110134880B (en) Comment data providing method, comment data displaying method, comment data providing device, comment data displaying device, electronic equipment and storage medium
CN112995694B (en) Video display method and device, electronic equipment and storage medium
CN111399729A (en) Image drawing method and device, readable medium and electronic equipment
CN110177295B (en) Subtitle out-of-range processing method and device and electronic equipment
CN109684589B (en) Client comment data processing method and device and computer storage medium
US11924500B2 (en) Information interaction method and device, electronic apparatus, and computer readable storage medium
CN111562895A (en) Multimedia information display method and device and electronic equipment
CN110930220A (en) Display method, display device, terminal equipment and medium
CN114071179A (en) Live broadcast preview method, device, equipment, program product and medium
CN111459601A (en) Data processing method and device, electronic equipment and computer readable medium
US20240119073A1 (en) Information processing method and apparatus, device and readable storage medium
CN114679628B (en) Bullet screen adding method and device, electronic equipment and storage medium
CN114205635A (en) Live comment display method, device, equipment, program product and medium
CN113253885A (en) Target content display method, device, equipment, readable storage medium and product
US20240048665A1 (en) Video generation method, video playing method, video generation device, video playing device, electronic apparatus and computer-readable storage medium
CN111107381A (en) Live broadcast room bullet screen display method, storage medium, equipment and system
CN110414625B (en) Method and device for determining similar data, electronic equipment and storage medium
CN109871465B (en) Time axis calculation method and device, electronic equipment and storage medium
CN112000251A (en) Method, apparatus, electronic device and computer readable medium for playing video
CN113891135B (en) Multimedia data playing method and device, electronic equipment and storage medium
CN115047999A (en) Interface switching method and device, electronic equipment, storage medium and program product
CN110366002B (en) Video file synthesis method, system, medium and electronic device
CN114238805A (en) Information processing method, device, equipment, medium and product based on information flow
CN114430491A (en) Live broadcast-based data processing method and device
CN110389805B (en) Information display method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant