CN111629253A - Video processing method and device, computer readable storage medium and electronic equipment - Google Patents

Video processing method and device, computer readable storage medium and electronic equipment

Info

Publication number
CN111629253A
CN111629253A
Authority
CN
China
Prior art keywords
video
target virtual
attribute information
information
processing method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010528616.1A
Other languages
Chinese (zh)
Inventor
黄业龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202010528616.1A priority Critical patent/CN111629253A/en
Publication of CN111629253A publication Critical patent/CN111629253A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4784Supplemental services, e.g. displaying phone caller identification, shopping application receiving rewards

Abstract

The embodiments of the invention relate to a video processing method and apparatus, a computer-readable storage medium and an electronic device, in the technical field of Internet live video broadcasting. The method comprises the following steps: when it is monitored that a presentation behavior event exists in a current live broadcast room, determining a target virtual article corresponding to the presentation behavior event; acquiring attribute information of the target virtual article; and generating a recorded video according to the attribute information and the current live video, so that when the recorded video is replayed, a virtual effect animation corresponding to the target virtual article is obtained according to the attribute information of the target virtual article and the virtual effect animation is played. The embodiments of the invention solve the prior-art problem that, when a user replays a recorded video of a live broadcast, the special effects of the gifts appearing in the video cannot be displayed again.

Description

Video processing method and device, computer readable storage medium and electronic equipment
Technical Field
The embodiments of the invention relate to the technical field of Internet live video broadcasting, and in particular to a video processing method, a video processing apparatus, a computer-readable storage medium and an electronic device.
Background
In recent years, with the development of the Internet, pan-entertainment network live broadcasting has gradually come into public view. Network live broadcasting is popular because of its real-time, low-latency feedback and multi-dimensional interaction, which make the audience more immersed. The content of network live broadcasting is all-encompassing, and while watching a live broadcast a user can interact with the live broadcast room by giving gifts, thereby encouraging the anchor.
Existing gift-giving in a live broadcast room generally takes one of two forms, depending on the user's terminal. When the live broadcast is watched on a personal computer, the gifts are arranged as icons below the user interface of the live broadcast room; the user can browse the gifts by clicking a page-turning button and click one to give it, and after the gift is given, information such as the special effect and blessing words corresponding to the gift is displayed in the live broadcast room.
However, when the user replays a recorded video of the live broadcast, the special effects of the gifts in the video cannot be displayed again.
Therefore, it is desirable to provide a new video processing method and apparatus.
It is to be noted that the information disclosed in the above background section is only for enhancing the understanding of the background of the present invention, and therefore may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present invention is directed to a video processing method, a video processing apparatus, a computer-readable storage medium, and an electronic device, which overcome, at least to some extent, the problem that the special effects of gifts appearing in a video cannot be displayed again when a user replays a recorded video of a live broadcast, due to the limitations and disadvantages of the related art.
According to an aspect of the present disclosure, there is provided a video processing method including:
when it is monitored that a presentation behavior event exists in a current live broadcast room, determining a target virtual article corresponding to the presentation behavior event;
acquiring attribute information of the target virtual article;
and generating a recorded video according to the attribute information and the current live video, so that when the recorded video is replayed, a virtual effect animation corresponding to the target virtual article is obtained according to the attribute information of the target virtual article, and the virtual effect animation is played.
In an exemplary embodiment of the present disclosure, the attribute information includes identification information of the target virtual article, user identification information of the user gifting the target virtual article, and a descriptive statement corresponding to the target virtual article;
the recorded video is played through a preset player, and the virtual effect animation is stored in the preset player.
In an exemplary embodiment of the present disclosure, generating a recorded video according to the attribute information and a current live video includes:
creating a video file with a preset data format, and adding an information header to the video file according to the preset data format;
writing the attribute information of the target virtual article into a data frame of a video file added with an information header, and writing the current live video into a video frame and an audio frame of the video file added with the information header;
and generating the recorded video according to the data frame, the video frame and the audio frame.
In an exemplary embodiment of the present disclosure, the video processing method further includes:
acquiring an anchor portrait corresponding to the current live broadcast room according to the live broadcast identification of the current live broadcast room;
calculating name information and summary information of the recorded video according to the anchor portrait and the current live video;
and adding the name information into a live video list, and taking the abstract information as a cover page of the recorded video in the live video list.
According to an aspect of the present disclosure, there is provided a video processing method including:
acquiring a recorded video and decoding the recorded video to obtain a video frame, an audio frame and a data frame;
playing the video frame and the audio frame through a preset video player, and judging whether the data frame comprises attribute information of a target virtual article generated by a presentation behavior event in a live broadcast process;
if the attribute information of the target virtual article exists in the data frame, acquiring a virtual effect animation corresponding to the target virtual article according to the attribute information;
and calling an animation playing module in the preset video player to play the virtual effect animation.
In an exemplary embodiment of the present disclosure, the obtaining of the virtual effect animation corresponding to the target virtual article according to the attribute information includes:
and acquiring a virtual effect animation corresponding to the identification information from a preset special effect configuration module of the video player according to the identification information of the target virtual article included in the attribute information.
In an exemplary embodiment of the present disclosure, invoking an animation playing module in the preset video player to play the virtual effect animation includes:
and calling an animation playing module in the preset video player to play the virtual effect animation, the user identification information which is contained in the attribute information and is used for presenting the target virtual article, and the description sentence corresponding to the target virtual article.
According to an aspect of the present disclosure, there is provided a video processing apparatus including:
the virtual article determining module is used for determining a target virtual article corresponding to a present behavior event when the present behavior event is monitored to exist in the current live broadcast room;
the attribute information acquisition module is used for acquiring the attribute information of the target virtual article;
and the recorded video generating module is used for generating a recorded video according to the attribute information and the current live video, so that when the recorded video is replayed, a virtual effect animation corresponding to the target virtual article is obtained according to the attribute information of the target virtual article, and the virtual effect animation is played.
According to an aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a video processing method as recited in any one of the above.
According to an aspect of the present disclosure, there is provided an electronic device including:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform any one of the video processing methods described above via execution of the executable instructions.
According to the video processing method and apparatus provided by the embodiments of the invention, on the one hand, when it is monitored that a presentation behavior event exists in a current live broadcast room, a target virtual article corresponding to the presentation behavior event is determined; the attribute information of the target virtual article is then acquired; finally, a recorded video is generated according to the attribute information and the current live video, so that when the recorded video is replayed, the virtual effect animation corresponding to the target virtual article is obtained according to the attribute information of the target virtual article and the virtual effect animation is played, which solves the prior-art problem that the special effects of gifts appearing in the video cannot be displayed again when a user replays a recorded video of a live broadcast; on another hand, because the recorded video is generated from the attribute information and the current live video, and the corresponding virtual effect animation is obtained from the attribute information only at replay time, the heavy system burden of writing the virtual effect animation directly into the video file is avoided and data-writing efficiency is improved; on yet another hand, the loss of definition caused by directly recording the virtual effect animation is avoided, which further improves the user experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 schematically shows a flow chart of a video processing method according to an exemplary embodiment of the present invention.
Fig. 2 schematically shows a flow chart of another video processing method according to an exemplary embodiment of the present invention.
Fig. 3 schematically shows a flow chart of another video processing method according to an exemplary embodiment of the present invention.
Fig. 4 schematically shows a flow chart of another video processing method according to an exemplary embodiment of the present invention.
Fig. 5 schematically shows a flow chart of another video processing method according to an exemplary embodiment of the present invention.
Fig. 6 schematically shows a block diagram of a video processing apparatus according to an exemplary embodiment of the present invention.
Fig. 7 schematically shows a block diagram of another video processing apparatus according to an exemplary embodiment of the present invention.
Fig. 8 schematically illustrates an electronic device for implementing the above-described video processing method according to an exemplary embodiment of the present invention.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the invention.
Furthermore, the drawings are merely schematic illustrations of the invention and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The present example embodiment first provides a video processing method, which may be used to record a current live video. The method may be run on a device terminal, a server cluster, a cloud server or the like; of course, those skilled in the art may also run the method of the invention on other platforms as needed, which is not particularly limited in this example embodiment. Referring to fig. 1, the video processing method may include the following steps:
s110, when it is monitored that a present behavior event exists in a current live broadcast room, determining a target virtual article corresponding to the present behavior event;
s120, acquiring attribute information of the target virtual article;
and S130, generating a recorded video according to the attribute information and the current live video, so that when the recorded video is replayed, a virtual effect animation corresponding to the target virtual article is obtained according to the attribute information of the target virtual article, and the virtual effect animation is played.
In the video processing method, on the one hand, when it is monitored that a presentation behavior event exists in the current live broadcast room, the target virtual article corresponding to the presentation behavior event is determined; the attribute information of the target virtual article is then acquired; finally, a recorded video is generated according to the attribute information and the current live video, so that when the recorded video is replayed, the virtual effect animation corresponding to the target virtual article is obtained according to the attribute information of the target virtual article and played, which solves the prior-art problem that the special effect (virtual effect animation) of a gift appearing in the video cannot be displayed again when the recorded video of a live broadcast is replayed; on another hand, because the recorded video is generated from the attribute information and the current live video, and the corresponding virtual effect animation is obtained from the attribute information only at replay time, the heavy system burden of writing the virtual effect animation directly into the video file is avoided and data-writing efficiency is improved; on yet another hand, the loss of definition caused by directly recording the virtual effect animation is avoided, which further improves the user experience.
Hereinafter, each step involved in the video processing method according to the exemplary embodiments of the present invention will be explained in detail with reference to the drawings.
First, the aims of the exemplary embodiments of the present invention are explained. During the anchor's live broadcast, the video is recorded through special recording software and the received gift-related information is written into data frames of the video file; when the video is played back through a special player, the data frames are decoded to obtain the gift-sending information at the corresponding time points, and the corresponding gift special effects are played according to that gift-sending information.
Next, terms involved in the exemplary embodiments of the present invention are explained as follows:
mp4 file: an existing video format file is composed of video frames, audio frames and data frames, the data frames are empty under normal conditions, and when an ordinary player plays an mp4 file, pictures and sounds can be viewed only by processing the video frames and the audio frames by default.
Special recording software: the ordinary recording software only records the audio and video frames, the data frames are default to be empty, and the related information is written into the video file mp4 as the data frames through the special recording software.
The audio and video recording and writing module: the main broadcast picture and sound code is written into mp4 file.
The gift-related information writing module: during the video recording process, the anchor receives the gift-sending information and plays the special effect of the gift, and only the information related to the gift (gift id, the nickname of the gift-sender, blessing words, etc.) is written into the mp4 file as a data frame.
Video frame: mp4 for storing data of video pictures.
Audio frame: mp4 for storing sound data.
Data frame: mp4 for storing data of user-defined information.
The special player comprises: the special player can play videos and gift special effects, and ordinary videos only can see video pictures and hear sound, but cannot see the gift special effects when playing mp4 files recorded by the invention.
The audio and video playing module: and (4) carrying out sum decoding on the video frames and the audio frames in the mp4 file, and playing video pictures and sound.
The gift special effect playing module: and decoding the data frames in the mp4 file to obtain salutation related information. And then searching for corresponding animation files gif/webp and the like in the configuration according to the gift id to play the animation, and displaying the nickname and blessing words of the gift giver.
A gift special effect configuration module: and storing a file of the gift animation information, and searching the corresponding gift animation information through the gift id.
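By way of illustration only, the following Python sketch shows one possible form of such a gift special effect configuration module: a small JSON mapping from gift id to animation information, plus a lookup helper. The JSON layout, file contents and function names are assumptions made for this example and are not prescribed by the embodiment.

```python
import json

# Hypothetical configuration content: gift id -> gift animation information.
# The real configuration module only needs to allow lookup by gift id.
GIFT_EFFECT_CONFIG = json.loads("""
{
  "1001": {"name": "rocket",    "animation": "effects/rocket.webp"},
  "1002": {"name": "submarine", "animation": "effects/submarine.gif"}
}
""")

def lookup_gift_animation(gift_id: int) -> dict:
    """Look up the gift animation information stored in the player by gift id."""
    info = GIFT_EFFECT_CONFIG.get(str(gift_id))
    if info is None:
        raise KeyError(f"no animation configured for gift id {gift_id}")
    return info

# Example: the rocket's animation file is resolved locally, inside the player.
print(lookup_gift_animation(1001)["animation"])  # effects/rocket.webp
```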
Hereinafter, steps S110 to S130 will be explained in detail.
In step S110, when it is monitored that a presentation behavior event exists in the current live broadcast room, the target virtual article corresponding to the presentation behavior event is determined.
In the present example embodiment, during recording of the current live video, when it is monitored that a presentation behavior event from an audience member exists in the current live broadcast room, the target virtual article corresponding to the presentation behavior event, that is, the gift given by the audience member (for example a rocket, a submarine, billows and the like), needs to be determined. It should be added that the current live video may be recorded by clicking a recording button on the live platform, or may be recorded in other ways, for example by another device, which is not limited in this example.
In step S120, the attribute information of the target virtual article is acquired.
In this exemplary embodiment, the attribute information may include identification information of the target virtual article, user identification information of the user giving the target virtual article, and a descriptive statement corresponding to the target virtual article, that is, the ID of the gift, the nickname of the user giving the gift, the blessing words of the gift, and so on. For example, when a viewer gives a rocket, the corresponding attribute information is the ID of the rocket (the ID is fixed: the same gift has the same ID within a live broadcast room or live broadcast platform), the nickname of the viewer giving the rocket (for example, Little Red Hat or Big Grey Wolf), and the blessing words corresponding to the rocket; one gift may correspond to one blessing word or to several, and the viewer may select the blessing words according to his or her needs. Further, after the gift has been given, the virtual effect animation corresponding to the gift, such as the various special effects of the rocket launching and flying, may be displayed on the live broadcast screen.
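As a concrete illustration of this attribute information, the following Python sketch defines a minimal record for one gifted virtual article and serializes it into a byte payload of the kind that can be carried in a data frame. The field names and the use of JSON as the serialization format are assumptions made for this example, not requirements of the embodiment.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class GiftAttributes:
    """Attribute information of one target virtual article (gift)."""
    gift_id: int          # identification information of the target virtual article
    sender_nickname: str  # user identification information of the gifting viewer
    blessing: str         # descriptive statement (blessing words) chosen by the viewer

def encode_attributes(attrs: GiftAttributes) -> bytes:
    """Serialize the attribute information into a payload for a data frame."""
    return json.dumps(asdict(attrs), ensure_ascii=False).encode("utf-8")

def decode_attributes(payload: bytes) -> GiftAttributes:
    """Recover the attribute information from a data-frame payload."""
    return GiftAttributes(**json.loads(payload.decode("utf-8")))

# Example: the viewer "Little Red Hat" gives a rocket with a chosen blessing.
rocket = GiftAttributes(gift_id=1001, sender_nickname="Little Red Hat",
                        blessing="Wishing the anchor a happy birthday")
assert decode_attributes(encode_attributes(rocket)) == rocket
```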
In step S130, a recorded video is generated according to the attribute information and the current live video, so that when the recorded video is replayed, the virtual effect animation corresponding to the target virtual article is obtained according to the attribute information of the target virtual article, and the virtual effect animation is played.
In this example embodiment, first, generating a recorded video according to the attribute information and the current live video may specifically include: creating a video file with a preset data format, and adding an information header to the video file according to the preset data format; writing the attribute information of the target virtual article into a data frame of a video file added with an information header, and writing the current live video into a video frame and an audio frame of the video file added with the information header; and generating the recorded video according to the data frame, the video frame and the audio frame.
Specifically, in order to facilitate recording of the current live video and to let users later obtain the relevant information of the recorded video while it is played, a video file with a preset data format may first be created, and an information header is added to the video file according to the preset data format. The video file with the preset data format is an mp4 file, but it may also be another video file format that has data frames, which is not limited in this example. Further, the information header may include the width and height of the video image, the sampling frequency of the audio frames, the video frame rate and bit rate, and so on, and may also include other information, which is not particularly limited in this example. After the information header has been added, the attribute information of the target virtual article can be written into the data frames of the video file to which the information header has been added, the current live video can be written into the video frames and audio frames of that video file, and the recorded video is then generated from the data frames, video frames and audio frames.
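The following Python sketch outlines the control flow just described: create the file, add the information header, and interleave video frames, audio frames and gift data frames. The writer object used here (with write_header, write_video_frame, write_audio_frame, write_data_frame and close methods) is a hypothetical abstraction standing in for whatever muxer the recording software actually uses, not a real library API; encode_attributes is the helper sketched above.

```python
def record_live_video(live_stream, gift_events, writer):
    """Interleave the live audio/video with gift attribute data frames in one file.

    live_stream yields capture-ordered packets with .kind, .timestamp and .data;
    gift_events.pop_ready(t) returns GiftAttributes received up to timestamp t.
    Both are hypothetical interfaces used only to keep the sketch self-contained.
    """
    # Create the video file in the preset data format and add the information
    # header: image width/height, audio sampling frequency, frame rate, bit rate.
    writer.write_header(width=1280, height=720, sample_rate=44100, fps=30)

    for packet in live_stream:
        if packet.kind == "video":
            writer.write_video_frame(packet.timestamp, packet.data)
        elif packet.kind == "audio":
            writer.write_audio_frame(packet.timestamp, packet.data)

        # Only the attribute information of the target virtual article is written,
        # as a data frame -- never the virtual effect animation itself.
        for gift in gift_events.pop_ready(packet.timestamp):
            writer.write_data_frame(packet.timestamp, encode_attributes(gift))

    writer.close()  # the recorded video now consists of video, audio and data frames
```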
Secondly, after the recorded video has been generated, when the recorded video is replayed, the virtual effect animation corresponding to the target virtual article can be obtained according to the attribute information of the target virtual article and the virtual effect animation can be played; the recorded video is played through a preset player, and the virtual effect animation is stored in the preset player. It should be emphasized that when a target virtual article (gift) appears, only the attribute information of the target virtual article is written into the recorded video as a data frame; the virtual effect animation corresponding to the target virtual article does not need to be written into the recorded video. Only at replay time is the virtual animation corresponding to the identification information acquired from the preset player, according to the identification information in the attribute information, and then played. In the embodiments of the invention, the current live video and the target virtual article do not need to be sent to a server and processed by the server to obtain the recorded video; instead, the recording is completed at the live broadcast end, and only the recorded video needs to be uploaded to the server and stored there, so that other users can obtain the recorded video from the server and play it. Moreover, because the virtual effect animation is stored locally in the preset player, it does not need to be loaded separately during playback, which would otherwise cause playback delay.
Fig. 2 schematically illustrates another video processing method according to an exemplary embodiment of the present invention. Referring to fig. 2, the video processing method may further include steps S210 to S230. Wherein:
in step S210, an anchor portrait corresponding to the current live broadcast room is obtained according to the live broadcast identifier of the current live broadcast room.
In step S220, the name information and the summary information of the recorded video are calculated according to the anchor portrait and the current live video.
In step S230, the name information is added to a live video list, and the summary information is used as a cover page of the recorded video in the live video list.
Hereinafter, steps S210 to S230 will be explained in detail. First, the anchor portrait corresponding to the current live broadcast room can be obtained according to the live broadcast identifier (which may also be a live broadcast room identifier) of the current live broadcast room; the anchor portrait may include the anchor's name, image, brief introduction and so on. Then, the name information and summary information of the recorded video are calculated from the anchor portrait and the video frames of the current live video. A specific calculation process may be: inputting the anchor portrait and the current live video into a deep neural network learning model to obtain the name information and summary information of the recorded video; the deep neural network learning model may be, for example, a bidirectional long short-term memory network, or another model, which is not particularly limited in this example. Finally, the obtained name information can be added to the live video list, so that the user can quickly find the video he or she wants to watch according to the name information; the summary information can also be used as the cover of the recorded video in the live video list, and when the user clicks the cover, the cover serves as an entry to the recorded video, through which the corresponding recorded video resource is obtained and watched.
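As a non-authoritative sketch of this naming/summarization step, the snippet below shows one way the call could be wrapped, assuming the bidirectional long short-term memory network (or any other model) is exposed behind a single predict() method. The model object, its input features and its outputs are purely illustrative placeholders; the embodiment does not prescribe a concrete interface.

```python
def describe_recording(anchor_portrait: dict, live_video_frames: list, model):
    """Compute the name information and summary (cover) of the recorded video."""
    features = {
        "anchor_name": anchor_portrait.get("name"),    # from the anchor portrait
        "anchor_intro": anchor_portrait.get("intro"),
        "frames": live_video_frames,                   # frames of the current live video
    }
    name_info, summary_info = model.predict(features)  # e.g. a BiLSTM-based model
    return name_info, summary_info                     # name -> live video list, summary -> cover
```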
Fig. 3 schematically illustrates another video processing method according to an exemplary embodiment of the present invention, which may be used to play a recorded video recorded from a live broadcast in which viewers gave gifts. Referring to fig. 3, the video processing method may include steps S310 to S340. Wherein:
in step S310, a recorded video is obtained and decoded to obtain a video frame, an audio frame, and a data frame.
In this example embodiment, when a user wants to watch a certain recorded video, the entry of the recorded video may be reached from the recorded-video list through the cover of the recorded video, and the recorded video is then obtained through that entry and decoded to obtain the video frames, audio frames and data frames. It should be noted that the recorded video may be watched on any device that supports playback of the corresponding video file format, including, but not limited to, a mobile phone, a personal computer, a tablet computer, and the like.
In step S320, the preset video player plays the video frame and the audio frame, and determines whether the data frame includes attribute information of the target virtual item generated by a presentation behavior event in a live broadcast process.
In this exemplary embodiment, after the audio frames and video frames have been obtained, a preset video player (an mp4 video player) may be called to play the video frames and audio frames, and it may be determined whether the data frames include the attribute information. It should be noted that playing the video and audio frames and checking for the attribute information take place simultaneously; since the current live video is recorded from the beginning of the live broadcast and the recorded video has a video header, checking for the attribute information does not delay the playing of the virtual animation. Meanwhile, during playback of the recorded video, the embodiments of the invention focus on whether the gift's special effect can be displayed rather than on the exact moment the gift was given, so even if the gift-giving moment appears slightly later than it did in the live broadcast, the playback effect is not affected.
In step S330, if the attribute information of the target virtual article exists in the data frame, a virtual effect animation corresponding to the target virtual article is obtained according to the attribute information.
In this example embodiment, when attribute information exists in a data frame, the virtual effect animation corresponding to the identification information may be acquired from the special effect configuration module of the preset video player according to the identification information of the target virtual article included in the attribute information. Specifically, each target virtual article and its corresponding virtual effect animation are stored in the gift special effect configuration module as a mapping, so that the corresponding virtual effect animation can be obtained directly from the gift identification in the attribute information and then played.
In step S340, an animation playing module in the preset video player is called to play the virtual effect animation.
In this example embodiment, the animation playing module in the preset video player may be called to play the virtual effect animation together with the user identification information, included in the attribute information, of the user giving the target virtual article, and the descriptive statement corresponding to the target virtual article. In this way, the prior-art problem that the special effects of the gifts appearing in the video cannot be displayed again when a user replays a recorded video of a live broadcast is solved, which further improves the viewing experience. Moreover, because the virtual effect animation is stored locally in the preset player, it does not need to be loaded separately during playback, which would otherwise cause playback delay.
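A minimal sketch of this call is shown below. GiftAttributes and lookup_gift_animation are the helpers sketched earlier; display_animation and display_caption stand in for whatever rendering facilities the animation playing module of the preset player actually provides and are hypothetical names.

```python
def play_gift_effect(attrs, display_animation, display_caption):
    """Play the virtual effect animation together with the giver's nickname and blessing.

    attrs is the GiftAttributes record decoded from a data frame; the two display_*
    callables are placeholders for the player's rendering functions.
    """
    animation = lookup_gift_animation(attrs.gift_id)   # from the special effect configuration
    display_animation(animation["animation"])           # e.g. a local gif/webp file
    display_caption(f"{attrs.sender_nickname}: {attrs.blessing}")
```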
Hereinafter, the video processing method according to the exemplary embodiments of the present invention will be further explained with reference to fig. 4 and 5.
First, referring to fig. 4, the video processing method (for recording the current live video) may include the following steps:
step S401, during the anchor's live broadcast, recording is started by clicking the recording button; the recording may be operated at the anchor end or at the user end;
step S402, creating an empty mp4 file, writing the information header according to the mp4 file format standard, and waiting for the audio/video frames and data frames to be written;
step S403, continuously writing the video frames and audio frames of the anchor's live video stream into the mp4 file; if a gift-sending message is received during this process, the gift-sending information (gift id, nickname of the gift sender, blessing words and the like) is also written into the mp4 file as a data frame;
step S404, clicking the end-recording button, after which no more audio and video data are written into the mp4 file;
step S405, obtaining the final mp4 file; playing the file with an ordinary player shows only the video picture and sound, while playing it with the special player according to the invention shows the video picture and sound together with the gift special effects.
Next, referring to fig. 5, the video processing method (for playing the recorded video) may include the following steps:
step S501, a recorded-video list is shown on the video website, with each video cover serving as an entry; when a cover is clicked, the video is played with the special player according to the invention;
step S502, decoding the file according to the mp4 standard format to obtain the video frames, audio frames and data frames;
step S503, displaying the video picture and playing the sound according to the video frames and audio frames, and displaying the gift's special effect and blessing words according to the gift-sending information in the data frames combined with the gift configuration information;
step S504, when the mp4 file has been fully decoded and played, playback ends.
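Putting these playback steps together, the sketch below dispatches decoded frames by type: video and audio frames go to the player, and any non-empty data frame carrying gift attribute information triggers the special-effect path described above. The reader and player objects are hypothetical interfaces; decode_attributes and play_gift_effect are the helpers sketched earlier.

```python
def play_recorded_video(reader, player):
    """Play a recorded video with the special player: picture, sound and gift effects.

    reader yields decoded frames with .kind, .timestamp and .payload; player exposes
    show_video / play_audio plus the display callables used below. Both are assumed
    interfaces used only to keep the sketch self-contained.
    """
    for frame in reader:
        if frame.kind == "video":
            player.show_video(frame.timestamp, frame.payload)
        elif frame.kind == "audio":
            player.play_audio(frame.timestamp, frame.payload)
        elif frame.kind == "data" and frame.payload:
            # A non-empty data frame carries the attribute information of a gift.
            attrs = decode_attributes(frame.payload)
            play_gift_effect(attrs, player.display_animation, player.display_caption)
    # step S504: when all frames have been decoded and played, playback ends.
```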
The video processing method provided by the embodiments of the invention enables a user, when watching the recorded anchor video, to see the special effects of the gifts received during the live broadcast, restoring the gift-giving atmosphere of the live broadcast and improving the viewing experience.
An exemplary embodiment of the present invention further provides a video processing apparatus, which may include a virtual article determination module 610, an attribute information acquisition module 620, and a recorded video generation module 630, as shown in fig. 6. Wherein:
the virtual article determining module 610 may be configured to determine, when it is monitored that a present behavior event exists in a current live broadcast room, a target virtual article corresponding to the present behavior event;
the attribute information obtaining module 620 may be configured to obtain attribute information of the target virtual article;
the recorded video generating module 630 may be configured to generate a recorded video according to the attribute information and the current live video, so that when the recorded video is replayed, a virtual effect animation corresponding to the target virtual object is obtained according to the attribute information of the target virtual object, and the virtual effect animation is played.
In an exemplary embodiment of the present disclosure, the attribute information includes identification information of the target virtual article, identification information of the audience member gifting the target virtual article, and a descriptive sentence corresponding to the target virtual article;
the recorded video is played through a preset player, and the virtual effect animation is stored in the preset player.
In an exemplary embodiment of the present disclosure, generating a recorded video according to the attribute information and a current live video includes:
creating a video file with a preset data format, and adding an information header to the video file according to the preset data format;
writing the attribute information of the target virtual article into a data frame of a video file added with an information header, and writing the current live video into a video frame and an audio frame of the video file added with the information header;
and generating the recorded video according to the data frame, the video frame and the audio frame.
In an exemplary embodiment of the present disclosure, the video processing apparatus further includes:
the anchor portrait acquisition module can be used for acquiring an anchor portrait corresponding to the current live broadcast room according to the live broadcast identification of the current live broadcast room;
the information calculation module can be used for calculating the name information and the abstract information of the recorded video according to the anchor portrait and the current live video;
and the information adding module can be used for adding the name information to a live video list and taking the abstract information as a cover page of the recorded video in the live video list.
Still another video processing apparatus according to an exemplary embodiment of the present invention is provided, and as shown in fig. 7, the video processing apparatus may include a recorded video obtaining module 710, an attribute information determining module 720, a virtual effect animation obtaining module 730, and a virtual effect animation playing module 740. Wherein:
the recorded video acquiring module 710 may be configured to acquire a recorded video and decode the recorded video to obtain a video frame, an audio frame, and a data frame;
the attribute information determining module 720 may be configured to play the video frame and the audio frame through a preset video player, and determine whether the data frame includes attribute information of a target virtual item generated by a presentation behavior event in a live broadcast process;
the virtual effect animation obtaining module 730 may be configured to, if attribute information of the target virtual article exists in the data frame, obtain a virtual effect animation corresponding to the target virtual article according to the attribute information;
the virtual animation playing module 740 may be configured to call an animation playing module in the preset video player to play the virtual animation.
In an exemplary embodiment of the present disclosure, the obtaining of the virtual effect animation corresponding to the target virtual article according to the attribute information includes:
and acquiring a virtual effect animation corresponding to the identification information from a preset special effect configuration module of the video player according to the identification information of the target virtual article included in the attribute information.
In an exemplary embodiment of the present disclosure, invoking an animation playing module in the preset video player to play the virtual effect animation includes:
and calling an animation playing module in the preset video player to play the virtual effect animation, the user identification information which is contained in the attribute information and is used for presenting the target virtual article, and the description sentence corresponding to the target virtual article.
The specific details of each module in the video processing apparatus have been described in detail in the corresponding video processing method, and therefore are not described herein again.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the invention. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Moreover, although the steps of the methods of the present invention are depicted in the drawings in a particular order, this does not require or imply that the steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
In an exemplary embodiment of the present invention, there is also provided an electronic device capable of implementing the above method.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or program product. Thus, various aspects of the invention may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," "module" or "system."
An electronic device 800 according to this embodiment of the invention is described below with reference to fig. 8. The electronic device 800 shown in fig. 8 is only an example and should not bring any limitations to the function and scope of use of the embodiments of the present invention.
As shown in fig. 8, electronic device 800 is in the form of a general purpose computing device. The components of the electronic device 800 may include, but are not limited to: the at least one processing unit 810, the at least one memory unit 820, a bus 830 connecting various system components (including the memory unit 820 and the processing unit 810), and a display unit 840.
Wherein the storage unit stores program code that is executable by the processing unit 810 to cause the processing unit 810 to perform steps according to various exemplary embodiments of the present invention as described in the above section "exemplary methods" of the present specification. For example, the processing unit 810 may perform step S110 as shown in fig. 1: when it is monitored that a presentation behavior event exists in a current live broadcast room, determining a target virtual article corresponding to the presentation behavior event; step S120: acquiring attribute information of the target virtual article; step S130: and generating a recorded video according to the attribute information and the current live video, so that when the recorded video is replayed, a virtual effect animation corresponding to the target virtual article is obtained according to the attribute information of the target virtual article, and the virtual effect animation is played.
The processing unit 810 may further perform step S310 as shown in fig. 3: acquiring a recorded video and decoding the recorded video to obtain a video frame, an audio frame and a data frame; step S320: playing the video frame and the audio frame through a preset video player, and judging whether the data frame comprises attribute information of a target virtual article generated by a presentation behavior event in a live broadcast process; step S330: if the attribute information of the target virtual article exists in the data frame, acquiring a virtual effect animation corresponding to the target virtual article according to the attribute information; step S340: and calling an animation playing module in the preset video player to play the virtual effect animation.
The storage unit 820 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM) 8201 and/or a cache memory unit 8202, and may further include a read-only memory unit (ROM) 8203.
The storage unit 820 may also include a program/utility 8204 having a set (at least one) of program modules 8205, such program modules 8205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 830 may be any of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 800 may also communicate with one or more external devices 900 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 800, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 800 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 850. Also, the electronic device 800 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 860. As shown, the network adapter 860 communicates with the other modules of the electronic device 800 via the bus 830. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 800, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiment of the present invention can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to make a computing device (which can be a personal computer, a server, a terminal device, or a network device, etc.) execute the method according to the embodiment of the present invention.
In an exemplary embodiment of the present invention, there is also provided a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, aspects of the invention may also be implemented in the form of a program product comprising program code means for causing a terminal device to carry out the steps according to various exemplary embodiments of the invention described in the above section "exemplary methods" of the present description, when said program product is run on the terminal device.
According to an embodiment of the invention, a program product for implementing the above method may employ a portable compact disc read-only memory (CD-ROM) that includes the program code and may be run on terminal equipment, such as a personal computer. However, the program product of the present invention is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
Furthermore, the above-described figures are merely schematic illustrations of processes involved in methods according to exemplary embodiments of the invention, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.

Claims (10)

1. A video processing method, characterized in that the video processing method comprises:
when it is monitored that a presentation behavior event exists in a current live broadcast room, determining a target virtual article corresponding to the presentation behavior event;
acquiring attribute information of the target virtual article;
and generating a recorded video according to the attribute information and the current live video, so that when the recorded video is replayed, a virtual effect animation corresponding to the target virtual article is obtained according to the attribute information of the target virtual article, and the virtual effect animation is played.
2. The video processing method according to claim 1, wherein the attribute information includes identification information of the target virtual item, user identification information gifting the target virtual item, and a descriptive statement corresponding to the target virtual item;
the recorded video is played through a preset player, and the virtual effect animation is stored in the preset player.
3. The video processing method of claim 1, wherein generating a recorded video from the attribute information and a current live video comprises:
creating a video file with a preset data format, and adding an information header to the video file according to the preset data format;
writing the attribute information of the target virtual article into a data frame of a video file added with an information header, and writing the current live video into a video frame and an audio frame of the video file added with the information header;
and generating the recorded video according to the data frame, the video frame and the audio frame.
4. The video processing method of claim 1, wherein the video processing method further comprises:
acquiring an anchor portrait corresponding to the current live broadcast room according to the live broadcast identification of the current live broadcast room;
calculating name information and summary information of the recorded video according to the anchor portrait and the current live video;
and adding the name information into a live video list, and taking the abstract information as a cover page of the recorded video in the live video list.
5. A video processing method, characterized in that the video processing method comprises:
acquiring a recorded video and decoding the recorded video to obtain a video frame, an audio frame and a data frame;
playing the video frame and the audio frame through a preset video player, and judging whether the data frame comprises attribute information of a target virtual article generated by a presentation behavior event in a live broadcast process;
if the attribute information of the target virtual article exists in the data frame, acquiring a virtual effect animation corresponding to the target virtual article according to the attribute information;
and calling an animation playing module in the preset video player to play the virtual effect animation.
6. The video processing method according to claim 5, wherein obtaining the virtual effect animation corresponding to the target virtual item according to the attribute information comprises:
and acquiring, according to the identification information of the target virtual article included in the attribute information, the virtual effect animation corresponding to the identification information from a special effect configuration module of the preset video player.
7. The video processing method of claim 5, wherein invoking an animation playing module in the preset video player to play the virtual effect animation comprises:
and calling an animation playing module in the preset video player to play the virtual effect animation together with the identification information, contained in the attribute information, of the user presenting the target virtual article and the descriptive statement corresponding to the target virtual article.
8. A video processing apparatus, characterized in that the video processing apparatus comprises:
the virtual article determining module is used for determining a target virtual article corresponding to a presentation behavior event when it is monitored that the presentation behavior event exists in the current live broadcast room;
the attribute information acquisition module is used for acquiring the attribute information of the target virtual article;
and the recorded video generating module is used for generating a recorded video according to the attribute information and the current live video, so that when the recorded video is replayed, a virtual effect animation corresponding to the target virtual article is obtained according to the attribute information of the target virtual article, and the virtual effect animation is played.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the video processing method of any one of claims 1 to 7.
10. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the video processing method of any of claims 1-7 via execution of the executable instructions.
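
The claims above leave the "preset data format" open. As a concrete illustration of the recording side described in claims 1 to 3, the following is a minimal Python sketch that writes an information header followed by tagged frames, placing the gift attribute information into dedicated data frames next to the video and audio frames. The tag layout, the MAGIC signature and every function name here are assumptions made for this sketch, not details taken from the patent.

```python
import json
import struct

# Hypothetical tag types for a minimal tagged container; the claims do not fix
# a concrete "preset data format", so every value and name here is illustrative.
TAG_AUDIO, TAG_VIDEO, TAG_DATA = 0x08, 0x09, 0x12
MAGIC = b"RECV1"  # assumed file signature used as the information header


def write_header(f):
    # Add the information header according to the (assumed) preset data format.
    f.write(MAGIC)


def write_tag(f, tag_type, timestamp_ms, payload):
    # Each frame is stored as: 1-byte type, 4-byte timestamp, 4-byte length, payload.
    f.write(struct.pack(">BII", tag_type, timestamp_ms, len(payload)))
    f.write(payload)


def write_gift_data_frame(f, timestamp_ms, gift_id, user_id, description):
    # The attribute information of the target virtual article goes into its own
    # data frame instead of being rendered into the picture, so the player can
    # rebuild the effect animation on replay.
    attrs = {"gift_id": gift_id, "user_id": user_id, "desc": description}
    write_tag(f, TAG_DATA, timestamp_ms, json.dumps(attrs).encode("utf-8"))


# Usage: interleave encoded live audio/video with gift data frames as gifting
# behavior events are monitored in the live broadcast room.
with open("recorded_video.bin", "wb") as f:
    write_header(f)
    write_tag(f, TAG_VIDEO, 0, b"...encoded video frame...")
    write_tag(f, TAG_AUDIO, 0, b"...encoded audio frame...")
    write_gift_data_frame(f, 1500, gift_id=1001, user_id="viewer_42",
                          description="sent a rocket")
```

Because the effect is described by data rather than burned into the pixels, a later replay can regenerate the animation from the same attribute information.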
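
Claim 4 only states that name information and summary information are derived from the anchor portrait and the live video. The snippet below is a loose, hypothetical reading of that step (a title built from the anchor's nickname and the current date, and an early frame reused as the cover); none of these field names come from the patent.

```python
from datetime import date


def build_listing_entry(anchor_profile, live_frames):
    # Hypothetical helper: derive the recording's name information from the
    # anchor portrait (profile) and use an early live frame as the cover
    # (summary information) shown in the live video list.
    name = f"{anchor_profile['nickname']} live replay {date.today():%Y-%m-%d}"
    cover = live_frames[0] if live_frames else anchor_profile.get("avatar")
    return {"name": name, "cover": cover}


# Example with placeholder data.
entry = build_listing_entry({"nickname": "AnchorA", "avatar": "avatar.png"},
                            ["frame0.jpg"])
print(entry)
```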
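
For the playback side of claims 5 to 7, the sketch below demultiplexes the same hypothetical container into video, audio and data frames, and when a data frame carries gift attributes it looks up the effect animation by the gift's identification information in a stand-in for the player's special effect configuration module. EFFECT_CONFIG, the .svga path and the print call that stands in for the animation playing module are all assumptions (Python 3.8+).

```python
import json
import struct

TAG_AUDIO, TAG_VIDEO, TAG_DATA = 0x08, 0x09, 0x12
MAGIC = b"RECV1"

# Stands in for the special effect configuration module of the preset video
# player: a mapping from gift identification information to a locally stored
# effect animation.
EFFECT_CONFIG = {1001: "effects/rocket.svga"}


def read_frames(path):
    # Demultiplex the recorded file into (type, timestamp, payload) frames.
    with open(path, "rb") as f:
        assert f.read(len(MAGIC)) == MAGIC, "missing information header"
        while header := f.read(9):
            tag_type, timestamp_ms, size = struct.unpack(">BII", header)
            yield tag_type, timestamp_ms, f.read(size)


def replay(path):
    for tag_type, timestamp_ms, payload in read_frames(path):
        if tag_type in (TAG_VIDEO, TAG_AUDIO):
            continue  # hand these frames to the player's audio/video decoders
        if tag_type == TAG_DATA:
            attrs = json.loads(payload)
            animation = EFFECT_CONFIG.get(attrs["gift_id"])
            if animation:
                # A real animation playing module would overlay the effect
                # together with the gifting user's id and the description.
                print(f"{timestamp_ms} ms: play {animation} "
                      f"for {attrs['user_id']}: {attrs['desc']}")


replay("recorded_video.bin")
```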
CN202010528616.1A 2020-06-11 2020-06-11 Video processing method and device, computer readable storage medium and electronic equipment Pending CN111629253A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010528616.1A CN111629253A (en) 2020-06-11 2020-06-11 Video processing method and device, computer readable storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN111629253A 2020-09-04

Family

ID=72273288

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010528616.1A Pending CN111629253A (en) 2020-06-11 2020-06-11 Video processing method and device, computer readable storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111629253A (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104853223A (en) * 2015-04-29 2015-08-19 小米科技有限责任公司 Video stream intercutting method and terminal equipment
CN106293723A (en) * 2016-08-03 2017-01-04 北京金山安全软件有限公司 Method, device and equipment for manufacturing live broadcast cover
CN106506448A (en) * 2016-09-26 2017-03-15 北京小米移动软件有限公司 Live display packing, device and terminal
CN107995482A (en) * 2016-10-26 2018-05-04 腾讯科技(深圳)有限公司 The treating method and apparatus of video file
CN106998477A (en) * 2017-04-05 2017-08-01 腾讯科技(深圳)有限公司 The front cover display methods and device of live video
CN108769562A (en) * 2018-06-29 2018-11-06 广州酷狗计算机科技有限公司 The method and apparatus for generating special efficacy video
US10404923B1 (en) * 2018-10-29 2019-09-03 Henry M. Pena Real time video special effects system and method
CN109561351A (en) * 2018-12-03 2019-04-02 网易(杭州)网络有限公司 Network direct broadcasting back method, device and storage medium
CN110324643A (en) * 2019-04-24 2019-10-11 网宿科技股份有限公司 A kind of video recording method and system
CN110996118A (en) * 2019-12-20 2020-04-10 北京达佳互联信息技术有限公司 Cover synthesis method, device, server and storage medium
CN111031393A (en) * 2019-12-26 2020-04-17 广州酷狗计算机科技有限公司 Video playing method, device, terminal and storage medium

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114173173A (en) * 2020-09-10 2022-03-11 腾讯数码(天津)有限公司 Barrage information display method and device, storage medium and electronic equipment
CN112672177A (en) * 2020-12-15 2021-04-16 创盛视联数码科技(北京)有限公司 Video live broadcast processing method and device and electronic equipment
CN113179446A (en) * 2021-04-26 2021-07-27 北京字跳网络技术有限公司 Video interaction method and device, electronic equipment and storage medium
US11924516B2 (en) 2021-04-26 2024-03-05 Beijing Zitiao Network Technology Co, Ltd. Video interaction method and apparatus, electronic device, and storage medium
CN113490045A (en) * 2021-06-30 2021-10-08 北京百度网讯科技有限公司 Special effect adding method, device and equipment for live video and storage medium
CN113490045B (en) * 2021-06-30 2024-03-22 北京百度网讯科技有限公司 Special effect adding method, device, equipment and storage medium for live video
CN113596493B (en) * 2021-07-26 2023-03-10 腾讯科技(深圳)有限公司 Interactive special effect synchronization method and related device
CN113596493A (en) * 2021-07-26 2021-11-02 腾讯科技(深圳)有限公司 Interactive special effect synchronization method and related device
CN113694519A (en) * 2021-08-27 2021-11-26 上海米哈游璃月科技有限公司 Method and device for processing applique effect, storage medium and electronic equipment
CN113694519B (en) * 2021-08-27 2023-10-20 上海米哈游璃月科技有限公司 Applique effect processing method and device, storage medium and electronic equipment
CN113694522B (en) * 2021-08-27 2023-10-24 上海米哈游璃月科技有限公司 Method and device for processing crushing effect, storage medium and electronic equipment
CN113694518B (en) * 2021-08-27 2023-10-24 上海米哈游璃月科技有限公司 Freezing effect processing method and device, storage medium and electronic equipment
CN113694518A (en) * 2021-08-27 2021-11-26 上海米哈游璃月科技有限公司 Freezing effect processing method and device, storage medium and electronic equipment
CN113694522A (en) * 2021-08-27 2021-11-26 上海米哈游璃月科技有限公司 Method and device for processing crushing effect, storage medium and electronic equipment

Similar Documents

Publication Title
CN111629253A (en) Video processing method and device, computer readable storage medium and electronic equipment
CN107770626B (en) Video material processing method, video synthesizing device and storage medium
US11943486B2 (en) Live video broadcast method, live broadcast device and storage medium
US11012486B2 (en) Personalized video playback
US11025967B2 (en) Method for inserting information push into live video streaming, server, and terminal
JP6467554B2 (en) Message transmission method, message processing method, and terminal
CN110267113B (en) Video file processing method, system, medium, and electronic device
CN109474843A (en) The method of speech control terminal, client, server
CN109729429B (en) Video playing method, device, equipment and medium
CN112714329B (en) Display control method and device for live broadcasting room, storage medium and electronic equipment
CN111163330A (en) Live video rendering method, device, system, equipment and storage medium
JP2017538328A (en) Promotion information processing method, apparatus, device, and computer storage medium
CN111800661A (en) Live broadcast room display control method, electronic device and storage medium
CN111901695A (en) Video content interception method, device and equipment and computer storage medium
WO2021218981A1 (en) Method and apparatus for generating interaction record, and device and medium
CN114979531A (en) Double-recording method for android terminal to support real-time voice recognition
CN110366002B (en) Video file synthesis method, system, medium and electronic device
CN113411627A (en) Data pushing method and device, readable storage medium and electronic equipment
CN113885741A (en) Multimedia processing method, device, equipment and medium
CN108614656B (en) Information processing method, medium, device and computing equipment
CN114341866A (en) Simultaneous interpretation method, device, server and storage medium
CN109815408B (en) Method and device for pushing information
CN110392313B (en) Method, system, medium and electronic device for displaying specific voice comments
CN108540840B (en) Content output method and device, medium and computing equipment
EP3389049B1 (en) Enabling third parties to add effects to an application

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20200904)