CN113630618A - Video processing method, device and system - Google Patents

Video processing method, device and system

Info

Publication number
CN113630618A
CN113630618A
Authority
CN
China
Prior art keywords
video
service
attribute information
live broadcast
broadcast
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110901970.9A
Other languages
Chinese (zh)
Other versions
CN113630618B (en)
Inventor
刘瑞洲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Bilibili Technology Co Ltd
Original Assignee
Shanghai Bilibili Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Bilibili Technology Co Ltd
Priority to CN202110901970.9A
Publication of CN113630618A
Application granted
Publication of CN113630618B
Legal status: Active (current)
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/231Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N21/23109Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion by placing content in organized collections, e.g. EPG data repository
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4332Content storage operation, e.g. storage operation in response to a pause request, caching operations by placing content in organized collections, e.g. local EPG data repository
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting

Abstract

The application provides a video processing method, a video processing device and a video processing system. The video processing method includes the following steps: in response to a recording start instruction of an anchor, acquiring a push stream file, pushed by a distribution server, that is generated by the anchor during live broadcasting; creating segment videos based on the push stream file, and determining video attribute information of the segment videos; storing the segment videos and the video attribute information to a service storage space; and, when the anchor closes the live broadcast, creating and publishing a recorded broadcast service video based on the segment videos and the video attribute information in the service storage space.

Description

Video processing method, device and system
Technical Field
The present application relates to the field of computer technologies, and in particular, to a video processing method, apparatus, and system.
Background
With the development of Internet technology, more and more services are moving online, and as services grow richer, more service types become available to users. In general, after a service user initiates a service through a service platform, other users can participate in the service online. In order to let users who missed the service, or who took part and want to look back at it, trace the service afterwards, the service platform usually re-collects the video stream involved in the service after the service is closed and then creates a recorded broadcast video from that stream, so that users can still learn the content involved in the service after it has closed. In the prior art, however, after the service platform acquires the video stream, the stream must be downloaded and transcoded, which consumes considerable time, and the related notifications and user interaction information generated while the service was running cannot be traced back at all.
Disclosure of Invention
In view of this, the present application provides a video processing method. The application also relates to a video processing apparatus, a video processing system, a computing device and a computer-readable storage medium, which solve the prior-art problem that creating a traceable video takes a long time.
According to a first aspect of embodiments of the present application, there is provided a video processing method, including:
in response to a recording start instruction of an anchor, acquiring a push stream file, pushed by a distribution server, that is generated by the anchor during live broadcasting;
creating a segment video based on the push stream file, and determining video attribute information of the segment video;
storing the segment video and the video attribute information to a service storage space;
and, when the anchor closes the live broadcast, creating and publishing a recorded broadcast service video based on the segment video and the video attribute information in the service storage space.
According to a second aspect of embodiments of the present application, there is provided a video processing apparatus including:
an acquisition module configured to respond to a recording start instruction of an anchor and acquire a push stream file, pushed by a distribution server, that is generated by the anchor during live broadcasting;
a determining module configured to create a segment video based on the push stream file and determine video attribute information of the segment video;
a storage module configured to store the segment video and the video attribute information to a service storage space;
and a creating module configured to create and publish a recorded broadcast service video based on the segment video and the video attribute information in the service storage space when the anchor closes the live broadcast.
According to a third aspect of embodiments of the present application, there is provided a video processing system including:
the system comprises an anchor end, a recording end, a service end and a content distribution network;
the anchor end is configured to collect a push stream file generated by an anchor during live broadcasting and send the push stream file to the content distribution network;
the content distribution network is configured to push the push stream file to the recording end when the anchor has enabled the recording function;
the recording end is configured to create a segment video based on the push stream file and determine video attribute information of the segment video; store the segment video and the video attribute information to a service storage space; and, when the anchor closes the live broadcast, send the segment video and the video attribute information in the service storage space to the service end;
and the service end is configured to create and publish a recorded broadcast service video based on the segment video and the video attribute information.
According to a fourth aspect of embodiments of the present application, there is provided a computing device comprising a memory, a processor and computer instructions stored on the memory and executable on the processor, the processor implementing the steps of the video processing method when executing the instructions.
According to a fifth aspect of embodiments of the present application, there is provided a computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the video processing method.
In the video processing method provided by the application, in response to a recording start request of an anchor, a push stream file generated by the anchor during live broadcasting and pushed by a distribution server is acquired; a segment video corresponding to the current stage can be created directly from the push stream file, and its video attribute information determined; the video attribute information and the segment video are then stored in a service storage space, and by repeating this cycle the method records while the live broadcast runs. When the anchor closes the live broadcast, all segment videos and video attribute information can be extracted directly from the service storage space to create and publish the recorded broadcast service video. The recorded broadcast service video can therefore be generated without a long wait and published promptly, so that users who missed the live broadcast, or who want to trace back through it, can watch the video in time; the running stage of the live broadcast service connects quickly with the publishing stage of the recorded broadcast service, improving the users' participation experience.
Drawings
Fig. 1 is a schematic diagram of a video processing method according to an embodiment of the present application;
fig. 2 is a flowchart of a video processing method according to an embodiment of the present application;
fig. 3 is a schematic diagram of playing back a recorded broadcast service video according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a video processing system according to an embodiment of the present application;
fig. 6 is a schematic flowchart of a video processing system applied in a live broadcast scene according to an embodiment of the present application;
Fig. 7 is a block diagram of a computing device according to an embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. The present application, however, can be implemented in many ways other than those described herein, and those skilled in the art can make similar modifications without departing from the spirit of the application; the application is therefore not limited to the specific implementations disclosed below.
The terminology used in the one or more embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the present application. As used in one or more embodiments of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present application refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It will be understood that, although the terms first, second, etc. may be used herein in one or more embodiments of the present application to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, a first aspect may be termed a second aspect, and, similarly, a second aspect may be termed a first aspect, without departing from the scope of one or more embodiments of the present application. The word "if" as used herein may be interpreted as "upon", "when", or "in response to determining", depending on the context.
In the present application, a video processing method is provided, and the present application relates to a video processing apparatus, a video processing system, a computing device, and a computer readable storage medium, which are described in detail in the following embodiments one by one.
In practical applications, with the development of live broadcast services, live broadcasting has become one of users' important forms of entertainment, and audiences can watch game matches, sports events, cultural programs and the like through live broadcasts. In order to give audiences a better viewing experience, after an anchor ends a broadcast, the preceding live video is usually turned into a recorded broadcast video so that audiences can browse it again. However, generating the recorded broadcast video requires not only downloading and transcoding the video but also a long creation time, so audiences cannot see the recorded broadcast video quickly; the handover period between the live broadcast stage and the recorded broadcast stage is long, which considerably affects the users' participation experience.
In view of this, referring to the schematic diagram shown in fig. 1, in order to generate recorded broadcast video quickly in a live broadcast scenario, during the anchor's live broadcast a push stream file distributed by a live CDN (Content Delivery Network) is obtained through a recording cluster, together with the virtual gift information, barrage (bullet-screen comment) information, minute-level data index information and the like involved in the live broadcast; the video stream in the push stream file is split into short segment videos, and after each segment video is created it is stored, together with the above data, in Redis; this cycle repeats until the anchor ends the broadcast. After the anchor ends the broadcast, no video downloading or transcoding is needed: the live video, virtual gift information, barrage information and minute-level data index information can be extracted directly from Redis, and the barrage index information and recorded broadcast video are generated and published based on the video segment, barrage segment and index segment information. The recorded broadcast video can thus be created and published within a short time after the broadcast ends, achieving a fast handover between the live broadcast stage and the recorded broadcast stage and improving the user experience.
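As a rough illustration of the final assembly step described above, the sketch below concatenates stored segment videos in sequence-index order once the anchor ends the broadcast. The key layout (`live:{id}:seg:{index}` for segment blobs, a JSON attribute list under `live:{id}:attrs`) and the use of a plain dict in place of Redis are illustrative assumptions, not details taken from the patent.

```python
import json

def assemble_recorded_video(store: dict, live_id: int) -> bytes:
    """Concatenate stored segment videos in sequence-index order.

    `store` is a dict standing in for the Redis service storage space.
    Real assembly would emit a playlist or remux into a container
    rather than concatenating raw bytes.
    """
    attr_list = [json.loads(a) for a in store.get(f"live:{live_id}:attrs", [])]
    attr_list.sort(key=lambda a: a["sequence_index"])  # restore segment order
    return b"".join(store[f"live:{live_id}:seg:{a['sequence_index']}"]
                    for a in attr_list)
```

Because the attribute records, not the blobs, carry the ordering, segments can be written to storage in whatever order they finish.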
In the video processing method provided by the application, in response to a recording start request of an anchor, a push stream file generated by the anchor during live broadcasting and pushed by a distribution server is acquired; a segment video corresponding to the current stage can be created directly from the push stream file, and its video attribute information determined; the video attribute information and the segment video are then stored in a service storage space, and by repeating this cycle the method records while the live broadcast runs. When the anchor closes the live broadcast, all segment videos and video attribute information can be extracted directly from the service storage space to create and publish the recorded broadcast service video. The recorded broadcast service video can therefore be generated without a long wait and published promptly, so that users who missed the live broadcast, or who want to trace back through it, can watch the video in time; the running stage of the live broadcast service connects quickly with the publishing stage of the recorded broadcast service, improving the users' participation experience.
Fig. 2 shows a flowchart of a video processing method according to an embodiment of the present application, which specifically includes the following steps:
step S202, responding to a recording starting instruction of a main broadcast, and acquiring a stream pushing file generated by the main broadcast in a live broadcasting process and pushed by a distribution server.
Specifically, the push stream file refers to a push stream file generated in the running process of live broadcast started by the anchor, and the file supports live broadcast running; correspondingly, the distribution server specifically refers to a CDN (content delivery network) that interfaces with the anchor, that is, a content distribution network node, and since the content distribution network is a node that performs resource scheduling, load balancing, and the like, the content distribution network will continuously receive the push stream file pushed by the anchor; and under the condition that the anchor starts a live broadcast recording function, the content distribution network pushes a stream pushing file generated in the live broadcast process to the server.
Based on this, in order to be able to generate the recorded broadcast service video quickly after the live broadcast is closed, so as to realize that the running stage of the live broadcast is quickly connected with the stage of generating the recorded broadcast service video after the live broadcast is closed, the stream pushing file pushed by the distribution server can be directly acquired under the condition that the live broadcast is in the running state, the purpose of creating the recorded broadcast service video while running is realized, and thus the time for creating the recorded broadcast service video is saved.
Further, the acquisition of the push stream file generated during the live broadcast service may be triggered by a request from the anchor; in this embodiment, a specific implementation is as follows:
receiving a live broadcast start request submitted by the anchor for the live broadcast service; and, when the live broadcast start request includes the recording start instruction, acquiring the push stream file, pushed by the distribution server, that is generated by the anchor during live broadcasting.
Specifically, the live broadcast start request is the request submitted by an anchor to the live broadcast platform when the anchor wants to start broadcasting; the live broadcast is started and run after the request is received. In a live broadcast scenario, after the live broadcast platform server receives the start request submitted by the anchor, it starts the anchor's live broadcast service according to that request. Correspondingly, the recording start instruction is an instruction to record the live broadcast content after the live broadcast is started.
Based on this, receiving a live broadcast start request submitted by the anchor for the live broadcast service indicates that the live broadcast needs to be started so that other users can participate, and the server starts the live broadcast accordingly. When the start request contains a recording start instruction, the anchor needs a recorded broadcast service video to be created after the live broadcast is closed; the recording service is then started according to the instruction, i.e. the push stream file generated while the anchor's live broadcast service runs is collected.
In practical applications, after a push stream file is generated during the live broadcast service, it is sent to the content distribution network (CDN) for load balancing, scheduling and other operations, and push stream files are generated continuously as the service runs. Collection can therefore be based on the content distribution network: after a push stream file is generated and sent to the CDN, it can be extracted from the CDN according to the recording instruction for subsequent video processing. This proceeds as the live broadcast service runs, so push stream files are collected while the service is running, allowing the recorded broadcast service video to be created conveniently and quickly afterwards.
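The start-request handling just described can be sketched as a single dispatch function: the live broadcast always starts, and collection of push stream files additionally starts only when the request carries the recording instruction. The field names (`room_id`, `record`) are hypothetical, chosen purely to illustrate the flow.

```python
def handle_live_start(request: dict) -> dict:
    """Start the live broadcast and, if the start request carries a
    recording start instruction, also begin collecting push stream
    files from the content distribution network.

    Returns which actions were taken, for the caller to act on.
    """
    return {
        "live_started": True,  # the broadcast starts unconditionally
        # recording is opt-in via the (assumed) "record" flag
        "recording_started": bool(request.get("record", False)),
    }
```

Keeping recording opt-in matches the flexibility point below: the anchor decides per broadcast whether a recorded video should be produced.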
In summary, by providing the anchor with an interface for submitting the recording request, the subsequent creation of the recorded broadcast service video can be selectively turned on or off according to the anchor's needs, giving the anchor a more flexible choice and further improving the anchor's participation experience.
Step S204, creating a segment video based on the push stream file, and determining video attribute information of the segment video.
Specifically, on the basis of the obtained push stream file, in order to allow the recorded broadcast service video to be created quickly later, segment videos may be created from the push stream file. That is, to facilitate storage and the creation of the recorded broadcast service video, the video stream carried by the acquired push stream file is split into segment videos of shorter duration; after each segment video is created, its video attribute information is determined. The video attribute information is the information corresponding to the attributes of each segment video, including but not limited to its start time information, duration information, file size information and the like.
Further, when creating the segment videos, considering that the push stream file is generated continuously, the video stream carried by the push stream file needs to be integrated before segment videos can be created; this is done with a preset processing strategy. In this embodiment, a specific implementation is as follows:
parsing the push stream file to obtain the video stream corresponding to the anchor; and segmenting the video stream according to a preset segmentation strategy, and generating the segment videos according to the segmentation result.
Specifically, the video stream is the stream carried by the push stream file; in a live broadcast scenario, for example, it consists of the video clips of the anchor's live content contained in the push stream file. Correspondingly, the preset segmentation strategy is the strategy for creating segment videos: since a live broadcast service runs for a long time, in order to improve the efficiency of later creating the recorded broadcast service video, multiple segment videos can be created from the video stream, i.e. the stream is segmented according to the preset strategy to obtain the segment videos.
Based on this, when creating segment videos from the push stream file, because push stream files are generated continuously while the live broadcast service runs and each segment video has a set length, the creation of one segment video can be completed whenever the length of the video stream carried by the push stream files reaches the set length. By analogy, the video stream carried by all the push stream files of the live broadcast service, from opening to closing, is split into multiple segment videos, and each segment video can be processed as soon as it is obtained. Subsequent processing of the segment videos therefore proceeds continuously, which effectively reduces the time for later creating the recorded broadcast service video and improves its distribution efficiency.
In a specific implementation, because the video stream carried by a single push stream file may be short, segmenting it directly would produce a large number of segment videos and increase the resource processing pressure. The segmentation strategy provided in this embodiment therefore first splices the parsed video streams and then cuts them by the segment length, i.e. a longer video stream is formed first and then segmented, so that multiple segment videos are obtained from the segmentation result for convenient subsequent processing.
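A minimal sketch of this splice-then-slice strategy: incoming stream chunks are buffered (spliced) until the accumulated duration reaches a set segment length, at which point one segment video is cut. The 60-second segment length and the `(seconds, payload)` chunk shape are assumptions for illustration, not values from the patent.

```python
SEGMENT_SECONDS = 60  # assumed fixed segment length, not specified by the patent

def slice_stream(chunks):
    """Splice (seconds, payload) chunks and cut fixed-length segments.

    Returns the list of completed segments plus the leftover tail,
    which waits to be spliced with chunks from the next push stream file.
    """
    segments, buf, buffered = [], [], 0.0
    for seconds, payload in chunks:
        buf.append(payload)
        buffered += seconds
        if buffered >= SEGMENT_SECONDS:   # one segment is complete
            segments.append(b"".join(buf))
            buf, buffered = [], 0.0
    return segments, b"".join(buf)
```

Carrying the tail forward is what keeps short push stream files from producing a flood of tiny segments.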
In summary, segmenting the parsed video stream with the preset segmentation strategy not only ensures that the segment videos have similar attributes, but also lays a foundation for later creating the recorded broadcast service video; working in segments improves the efficiency of that later creation.
Furthermore, since the live broadcast platform can serve multiple anchors starting live broadcast services at the same time, after segment videos are created from an anchor's push stream file, in order to later generate the recorded broadcast service video for that anchor's live broadcast service, the video attribute information of each segment video must also be determined, so that the segment video and its attribute information are stored together and the target object of the created recorded broadcast service video can be determined when it is generated. In this embodiment, a specific implementation is as follows:
parsing the segment video to obtain its start time information, video duration information, video space occupation information and video sequence index information; determining a storage space identifier according to the storage location of the live broadcast attribute information in the service storage space; and creating the video attribute information of the segment video based on the start time information, the video duration information, the video space occupation information, the video sequence index information and the storage space identifier.
Specifically, the start time information specifically refers to start time information of each sliced video; the video duration information specifically refers to the playing duration information of each segmented video; the video space occupation information specifically refers to file size information of each segmented video; the video sequence index information specifically refers to information corresponding to the front sequence and the back sequence of each segmented video, and when the video sequence index information is convenient for subsequently creating the recorded broadcast service video, the problem of the recorded broadcast service video playing sequence caused by the disordered segmented videos is avoided. Correspondingly, the live broadcast attribute information specifically refers to information related to the live broadcast service, and includes, but is not limited to, a unique identifier corresponding to the current execution of the live broadcast service, a unique identifier corresponding to the live broadcast service, timestamp information, and the like. It should be noted that the live broadcast attribute information is generated when the live broadcast service is created and is pre-stored in the service storage space, and the subsequently generated segment video and the video attribute information are merged and written into the service storage space in the same storage location as long as they are associated with the anchor, thereby facilitating subsequent management and use.
Based on this, after the segment videos are created from the stream pushing file, in order to enable the recorded broadcast service video to be successfully created subsequently, each segment video may be analyzed to obtain its start time information, video duration information, video space occupation information, and video sequence index information; meanwhile, a storage space identifier may be determined according to the storage location of the live broadcast attribute information in the service storage space. The start time information, video duration information, video space occupation information, video sequence index information, and storage space identifier are then integrated to generate the video attribute information corresponding to each segment video, for use when the recorded broadcast service video is created.
In summary, the video attribute information of each segment video is obtained by analyzing the segment video, so that not only can the creation of the recorded broadcast service video be conveniently performed in a subsequent pertinence manner, but also the success of the creation of the recorded broadcast service video can be ensured, and the release processing of the recorded broadcast service video can be ensured to be completed in a short time.
Step S206, storing the fragment video and the video attribute information to a service storage space.
Specifically, after the fragment video and the video attribute information of the fragment video are determined, because the live broadcast service is still in an operating state, the fragment video and the video attribute information corresponding to the fragment video can be temporarily stored in a service storage space corresponding to the live broadcast service, so that the video attribute information and the fragment video can be directly extracted from the service storage space to create the recorded broadcast service video after the live broadcast service is finished; the service storage space may be implemented by using a storage unit Redis, or by selecting another storage system according to an actual application scenario, which is not limited herein.
For example, user A submits a start-broadcast request for a live game L in the APP of platform B. When it is determined that the request carries a recording instruction submitted by user A for recording the live content, user A needs the upcoming live content to be recorded. In order to quickly generate a recorded broadcast video (a historical video of the live content of game L) after user A's broadcast ends, the stream pushing file generated after the broadcast starts is continuously acquired from the live CDN, to which user A's client pushes its stream. Segment videos are then created from the stream pushing file with 30 min as one piece: because each video stream carried by a stream pushing file is short, the video streams are spliced in sequence as stream pushing files are continuously received, and whenever the spliced video reaches 30 min it is cut off as one segment video. This continues, and the creation of segment videos stops only when user A's broadcast ends.
When each segment video is created, in order to improve the efficiency of creating recorded and broadcast videos, the attribute information of each segment video can be determined at this time, so that storage in combination with the segment video in the following process is facilitated. Based on this, after obtaining a video segment, the video attribute information of the video segment can be determined, and by analyzing the video segment, the Start time (Start time) of the video segment is ST1, Duration (playback time length) is D1, File Size (video File Size) is FS1, and index (sequential index of video segments) is I1; meanwhile, a Bucket (storage space identifier) for storing the segmented video is determined to be B1, then the video attribute information corresponding to the segmented video can be determined by integrating the parameters, and by analogy, each segmented video is processed by adopting the same strategy, so that the recorded and broadcast video can be conveniently stored and created subsequently. Furthermore, after the creation of the segment video and the determination of the video attribute information corresponding to the segment video are completed, the segment video and the video attribute information corresponding to the segment video can be written into the storage unit Redis according to the storage space identifier B1, so that the related information corresponding to the live broadcast of the user a is stored together, and the recorded and broadcast video can be created subsequently.
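Writing a segment's record into the service storage space under the shared storage space identifier can be sketched as follows. The hash-per-bucket key layout and the `FakeRedis` stand-in are illustrative assumptions; a real deployment would use an actual Redis client exposing the same `hset`/`hgetall` calls:

```python
import json

class FakeRedis:
    """In-memory stand-in for the Redis storage unit, for illustration only."""
    def __init__(self):
        self.store = {}
    def hset(self, key, field, value):
        self.store.setdefault(key, {})[field] = value
    def hgetall(self, key):
        return dict(self.store.get(key, {}))

def store_segment(client, bucket, attrs):
    """Write one segment's video attribute record (start time, duration,
    file size, index) into the bucket's hash, keyed by the sequence index."""
    client.hset(f"live:{bucket}:segments", str(attrs["index"]), json.dumps(attrs))

client = FakeRedis()
store_segment(client, "B1", {"start_time": 0, "duration": 1800, "file_size": 40, "index": 1})
store_segment(client, "B1", {"start_time": 1800, "duration": 1800, "file_size": 55, "index": 2})
```

Keying by the sequence index means the records can be read back in order when the recorded broadcast video is assembled, regardless of the order in which they were written.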
Step S208, under the condition that the anchor broadcast is closed, a recorded broadcast service video is created and released based on the fragment video and the video attribute information in the service storage space.
Specifically, while the live broadcast service is in an execution state, the above processes are continuously repeated, so that all the segment videos produced during the running of the live broadcast service are stored. When the live broadcast service is closed, this indicates that the anchor has actively stopped the live broadcast service; in order to provide the recorded broadcast service video to the anchor's audience in time, the segment videos and their corresponding video attribute information can be directly extracted from the service storage space, the recorded broadcast service video corresponding to the live broadcast service initiated by the anchor this time is created by combining them, and the recorded broadcast service video is released. The recorded broadcast service video specifically refers to the video content produced during the running of the live broadcast service; in a live broadcast scene, after the anchor stops broadcasting, the video generated from the content of the finished broadcast is the recorded broadcast service video.
In practical application, when the recorded broadcast service video is released, because the video is created for the live broadcast service initiated by the anchor, the recorded broadcast service video can be released in the carrier running the live broadcast service so that users can conveniently watch it; both the live broadcast service that can be participated in through the carrier and the recorded broadcast service video are associated with the anchor. For example, in a live broadcast scene, the recorded broadcast service video is released in the anchor's live broadcast room for users who watched the broadcast to review.
In specific implementation, when the live broadcast service is closed, the last segment video being created may not yet be complete, that is, it does not satisfy the segmentation policy; if this segment video were discarded, the integrity of the subsequently created recorded broadcast service video could not be guaranteed, so the incomplete segment video is likewise retained and stored together with its video attribute information.
Further, since the recorded broadcast service video to be released is not only related to the anchor, but also related to the live broadcast service initiated by the anchor, considering that the server may create a plurality of recorded broadcast service videos for different anchors at the same time, the recorded broadcast service videos can be created by combining with the live broadcast attribute information, in this embodiment, the specific implementation manner is as in step S2082 to step S2084:
step S2082, generating live broadcast attribute information according to the anchor information carried in the live broadcast starting request, and storing the live broadcast attribute information to the service storage space.
Specifically, the anchor information refers to information capable of identifying the anchor, which may be a unique identifier corresponding to the anchor, the anchor's password, the anchor's identity information, or the like, without limitation here. Correspondingly, the live broadcast attribute information refers to information that needs to be generated and used when the live broadcast service is created; through it, the recorded broadcast service video can be conveniently bound to both the live broadcast service and the anchor. The segment videos and the video attribute information are stored at the same location in the service storage space as the live broadcast attribute information, that is, the three share the same storage identifier, which facilitates their extraction and use when the recorded broadcast service video is created.
Further, the process of creating live attribute information based on the anchor information is as follows:
analyzing the live broadcast starting request to obtain the anchor information; generating a timestamp and a live broadcast identifier based on the anchor information, and reading a live broadcast address identifier corresponding to the anchor information; and integrating the timestamp, the live broadcast identification and the live broadcast address identification to generate the live broadcast attribute information.
Specifically, the timestamp specifically refers to the time when the live broadcast service is started, the live broadcast identifier specifically refers to a unique identifier corresponding to the live broadcast service, and the live broadcast address identifier specifically refers to a persistent identifier corresponding to the live broadcast service initiated by the anchor. In a live broadcast scene, the timestamp specifically refers to the time when the user starts live broadcast, the live broadcast identifier specifically refers to the unique identifier of the current broadcast, and the live broadcast address identifier specifically refers to the live broadcast room ID of the user.
Based on this, when a start-broadcast request submitted by the anchor for the live broadcast service is received, the anchor information can be determined from the request. Because this start of the live broadcast service is a new round of the service, the timestamp and the live broadcast identifier corresponding to this round are created based on the anchor information; meanwhile, since the live broadcast platform stores the relevant information of the anchor and has uniquely allocated the live broadcast address identifier, that identifier can be read directly. The timestamp, the live broadcast identifier, and the live broadcast address identifier are then integrated to obtain the live broadcast attribute information corresponding to the live broadcast service initiated by the anchor this time, which facilitates the subsequent targeted creation of the recorded broadcast service video.
Further, after the live broadcast attribute information is obtained, the live broadcast attribute information can be stored in a service storage space, and when the fragmented video and the video attribute information are stored subsequently, the live broadcast attribute information is generated for the live broadcast service of this time, and the fragmented video and the video attribute information corresponding to the fragmented video are also related to the live broadcast service of this time, so that the fragmented video and the video attribute information corresponding to the fragmented video can be directly stored in a storage position corresponding to the live broadcast attribute information, and the subsequent unified calling and use are facilitated.
In summary, by combining the timestamp, the live broadcast identifier and the live broadcast address identifier to form the live broadcast attribute information, the recorded broadcast service video associated with the anchor can be created in a targeted manner, and the relationship between the internal information and the video can be established, so that the targeted property can be maintained when the recorded broadcast service video is released.
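The integration of a timestamp, a per-broadcast unique identifier, and the persistent live address identifier described above can be sketched as follows. The UUID-based live key and the dict-based room lookup are assumptions of this illustration, standing in for the platform's own ID allocation:

```python
import time
import uuid

def create_live_attributes(anchor_id, room_lookup):
    """Generate live broadcast attribute information for a new broadcast:
    a start timestamp, a unique identifier for this broadcast session, and
    the anchor's persistent live address identifier read from the platform."""
    return {
        "timestamp": int(time.time()),      # time the broadcast was started
        "live_key": uuid.uuid4().hex,       # unique ID of this broadcast session
        "room_id": room_lookup[anchor_id],  # persistent live room identifier
    }

attrs = create_live_attributes("ID_1", {"ID_1": "ID_123456"})
```

Note that the timestamp and live key are regenerated for every broadcast, while the room identifier is stable across broadcasts, matching the distinction the method draws between the per-session and persistent identifiers.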
Step S2084, based on the fragment video, the video attribute information and the live broadcast attribute information in the service storage space, creating and releasing the recorded broadcast service video.
Specifically, after the live broadcast attribute information, the segment videos, and the corresponding video attribute information have been stored in the service storage space, once the live broadcast service is closed, the segment videos related to the live broadcast service, their corresponding video attribute information, and the live broadcast attribute information can be directly extracted from the service storage space, and the recorded broadcast service video can be created by combining the three and then released.
In practical application, the closing of the live broadcast service is determined by the anchor's control: when the anchor submits a close request, the live broadcast service is closed immediately and the processing operation of creating the recorded broadcast service video is triggered, so that the creation and release of the recorded broadcast service video are responded to at the moment the live broadcast service is closed, reducing the time consumed.
Further, in the process of creating and releasing the recorded broadcast service video, considering that the live broadcast platform may be connected to many anchors and that different anchors may initiate different live broadcast services, the release interface corresponding to the anchor is also selected when the video is released. In this embodiment, the specific implementation manner is as follows:
extracting the segmented video, the video attribute information and the live broadcast attribute information corresponding to the anchor broadcast from the service storage space; creating the recorded broadcast service video according to the fragment video, the video attribute information and the live broadcast attribute information; and determining a release interface corresponding to the anchor based on the live broadcast attribute information, and calling the release interface to release the recorded broadcast service video.
Specifically, the release interface is an interface corresponding to the anchor, and the release interface corresponds to the live address identifier, so as to ensure that the released recorded broadcast service video is associated with the live broadcast service. Based on this, under the condition that the live broadcast service is closed, a closing request submitted by the anchor broadcast aiming at the live broadcast service is described, at this time, in order to be capable of creating and releasing the recorded broadcast service video in time, the segment video, the video attribute information and the live broadcast attribute information corresponding to the anchor broadcast can be extracted from the service storage space, then the recorded broadcast service video is created based on the segment video, the video attribute information and the live broadcast attribute information, and the recorded broadcast service video is limited to be created aiming at the closed live broadcast service initiated by the anchor broadcast through the live broadcast attribute information and the video attribute information; and then, in order to ensure that the recorded broadcast service video is published at the publishing address associated with the anchor, determining a unique publishing interface corresponding to the anchor based on the live broadcast attribute information when the anchor initiates the live broadcast service, and then calling the publishing interface to complete the publishing processing of the recorded broadcast service video.
Following the above example, when user A starts broadcasting, the live broadcast platform may be providing live broadcast services for different users at the same time; so that other users can conveniently watch the live content of different users, different live broadcast rooms are set for different anchors. Based on this, when user A starts broadcasting, the corresponding Room ID is determined to be ID_123456 based on user A's identifier ID_1; the timestamp of the current broadcast is TS1; and the Live Key (unique identifier of this broadcast) is LK1. After the live broadcast information is obtained, it is written into the storage unit Redis according to the storage space identifier B1.
Further, when user A submits a close-broadcast request, in order to quickly generate the recorded broadcast video, the segment videos corresponding to user A's broadcast, together with their video attribute information and the live broadcast information, are extracted from the storage location corresponding to the storage space identifier B1 in the storage unit Redis. The obtained segment videos are segment video 1, segment video 2, ..., segment video n; the video attribute information of segment video 1 is Start time ST1, Duration D1, File Size FS1, and index I1; that of segment video 2 is Start time ST2, Duration D2, File Size FS2, and index I2; ...; that of segment video n is Start time STn, Duration Dn, File Size FSn, and index In.
Furthermore, after the corresponding information is obtained, the recorded broadcast video can be spliced from the segment videos and their video attribute information, with the splicing order following the sequence index of the segment videos; the length of the spliced recorded broadcast video is the total duration of segment video 1 through segment video n, and its file size is likewise their total size. After the recorded broadcast video is obtained, in order for other users to watch user A's previous live content more quickly, a recorded-video playback entry corresponding to user A can be generated based on the live broadcast information; that is, the recorded broadcast video is released in user A's live broadcast room, so that other users who enter the room outside user A's live time can watch the recorded broadcast video of user A. When user A has not started broadcasting again after the recorded broadcast video is released, the content displayed in user A's live broadcast room is shown in (a) in fig. 3.
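The splicing just described, ordered by the sequence index with the total duration and file size accumulated from the segment records, can be sketched as:

```python
def assemble_recorded_video(segments):
    """Concatenate segment records into one recorded-video descriptor.

    Each segment is a dict with index / duration / file_size, as produced
    during the live broadcast; sorting by the sequence index restores the
    correct playback order even if the records were stored out of order.
    """
    ordered = sorted(segments, key=lambda s: s["index"])
    return {
        "order": [s["index"] for s in ordered],            # splicing order
        "duration": sum(s["duration"] for s in ordered),   # total play length
        "file_size": sum(s["file_size"] for s in ordered), # total file size
    }
```

This operates only on the metadata; the actual media concatenation (e.g. handing the ordered file list to a muxer) would follow the same ordering.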
In summary, the recorded broadcast service video is created by combining the live broadcast attribute information, the segment video and the corresponding video attribute information thereof, so that the release of the recorded broadcast service video is ensured to be targeted, and other users can browse and watch conveniently, thereby further improving the user participation experience.
In addition, when creating the recorded broadcast service video, considering that an overly long recorded broadcast service video may be inconvenient for other users to watch, and that loading an entire recorded broadcast service video at once would consume more network resources, the recorded broadcast service video can be created in segments. In this embodiment, the specific implementation manner is as follows:
creating an initial recorded broadcast service video according to the fragment video, the video attribute information and the live broadcast attribute information; under the condition that the playing time of the initial recorded broadcast service video is greater than a preset time threshold, segmenting the initial recorded broadcast service video into at least two intermediate recorded broadcast service videos; and taking the at least two intermediate recorded broadcast service videos as the recorded broadcast service videos.
Specifically, the initial recorded broadcast service video refers to a recorded broadcast service video to be released, which is created by combining the fragmented video and the corresponding video attribute information thereof, and the live broadcast attribute information; correspondingly, the at least two intermediate recorded broadcast service videos specifically refer to each recorded broadcast service video obtained after the initial recorded broadcast service video is segmented.
Based on this, after the initial recorded broadcast service video is created from the segment videos, the video attribute information, and the live broadcast attribute information, in order to make it convenient for other users to browse, it can be judged whether the playing duration of the initial recorded broadcast service video is greater than a preset duration threshold. If not, its playing duration is not overly long, and the initial recorded broadcast service video can be released directly as the recorded broadcast service video.
If so, the playing duration of the initial recorded broadcast service video is too long, and releasing it directly would be inconvenient for other users to watch; the initial recorded broadcast service video can therefore be segmented into two or more intermediate recorded broadcast service videos, and all of the obtained intermediate recorded broadcast service videos are released as the recorded broadcast service video.
In practical application, under the condition that at least two intermediate recorded broadcast service videos are taken as recorded broadcast service videos, if the recorded broadcast service videos are published, a plurality of intermediate recorded broadcast service videos need to be published, and in order to facilitate watching of a user, a corresponding stage identifier can be added in each intermediate recorded broadcast service video, so that the user can know the sequence of playing the intermediate recorded broadcast service videos. Meanwhile, the playing time of the segmented intermediate recorded and played service video can be set according to the actual application scene, and the embodiment is not limited at all and only needs to be convenient for the user to watch.
In sum, the recorded broadcast service video is selectively created by adopting the play duration judgment mode, so that the play duration of the recorded broadcast service video can be controlled, the problem that the user cannot watch conveniently due to the fact that the play duration is too long can be solved, and the watching experience of the user is improved.
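The threshold check and segmentation above can be sketched as a planning function; the particular threshold and part length are assumptions set per application scenario, as the text notes, and each part carries a stage identifier so viewers know the playback order:

```python
def split_recorded_video(total_duration, threshold, part_length):
    """Plan the release of an initial recorded video (durations in seconds).

    If the total playback time is within the threshold the video is
    released as a single part; otherwise it is cut into intermediate
    videos of at most `part_length`, each tagged with a stage identifier.
    """
    if total_duration <= threshold:
        return [("part-1", total_duration)]
    parts, offset, stage = [], 0.0, 1
    while offset < total_duration:
        length = min(part_length, total_duration - offset)
        parts.append((f"part-{stage}", length))  # stage identifier + length
        offset += length
        stage += 1
    return parts
```

The last part may be shorter than `part_length`, mirroring how the final live segment may not fill its slicing window.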
In practical application, when other users watch the recorded broadcast service video, they want not only the video content itself but also the interaction information produced by other participating users during the running of the live broadcast service, which improves the watching experience. With this requirement of participating users in mind, the corresponding barrage information and virtual article information may also be recorded while the segment videos are created. In this embodiment, a specific implementation manner is as follows:
acquiring a barrage file and a virtual article file corresponding to the fragment video, and establishing an association relation between the barrage file and the virtual article file and the fragment video; and storing the bullet screen file and the virtual article file to the business storage space based on the incidence relation.
Specifically, the barrage file refers to the file of barrage information for each segment video; it stores all barrage messages, with their timestamps, that relate to the segment video within its duration, so that the corresponding barrage can be displayed to participating users when the recorded broadcast service video is played. Correspondingly, the virtual article file refers to the file of virtual articles for each segment video; it stores the virtual gifts, with their timestamps, presented to the anchor by other participating users within the segment video's duration, so that the gift-giving situation can likewise be displayed to participating users when the recorded broadcast service video is played.
In specific implementation, because a plurality of segment videos are continuously created in the execution process of the live broadcast service, and each segment video corresponds to a barrage file and a virtual article file, in order to facilitate the subsequent creation of recorded broadcast service videos, the association relationship between each segment video and the corresponding barrage file and virtual article file can be pre-established, so that the method can be used when the recorded broadcast service videos are played.
Based on this, during the creation of each segment video, its barrage file and virtual article file can be acquired at the same time, and the association relationship between these files and the segment video is established, so that each segment video is associated with its corresponding barrage and virtual articles; the barrage file and the virtual article file are then also written into the service storage space based on this association relationship. After the recorded broadcast service video is created and released, when a user wants to watch the barrage and/or virtual articles, the corresponding file can be called to generate them, and, combined with the timestamps, they are displayed to the user at the set moments while the user watches the recorded broadcast service video.
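Building one segment's barrage file from the broadcast-wide barrage stream can be sketched as follows; timestamps are rebased to the segment's start so each message can be shown at its original moment during playback. The field names (`ts`, `text`, `offset`) are assumptions of this illustration:

```python
def barrage_for_segment(all_barrage, seg_start, seg_duration):
    """Select the barrage entries that fall inside one segment's time
    window, rebasing each timestamp relative to the segment's start."""
    end = seg_start + seg_duration
    return [
        {"offset": b["ts"] - seg_start, "text": b["text"]}
        for b in all_barrage
        if seg_start <= b["ts"] < end
    ]
```

A virtual article file can be built the same way, with gift records in place of barrage messages; both files are then written to the service storage space alongside the segment they are associated with.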
Further, when a participating user has a viewing requirement, in order to conveniently display the recorded broadcast service video and the corresponding barrage and virtual article to the user, the video may be sent to the participating user by combining the corresponding barrage file and virtual article file, in this embodiment, the specific implementation manner is as follows:
under the condition that a watching request submitted by a viewer of the anchor for the recorded broadcast service video is received, extracting the barrage file and the virtual article file from the service storage space; and generating a recorded broadcast video packet based on the recorded broadcast service video, the barrage file, and the virtual article file, and sending the recorded broadcast video packet to the viewer.
In practical application, after the recorded broadcast service video is released, other participating users can view it at the corresponding live broadcast address once the live broadcast service is closed. If a participating user who took part in the live broadcast service submits a watching request for the recorded broadcast service video, then in order to improve that user's participation experience, the barrage file and the virtual article file can be directly extracted from the service storage space, combined with the recorded broadcast service video into a recorded broadcast video packet, and sent to the participating user, so that while watching the recorded broadcast service video the user can simultaneously see the barrage and virtual articles sent by other users during the running of the live broadcast service.
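Assembling the recorded broadcast video packet can then merge the per-segment barrage files back onto the recorded video's single timeline. This sketch assumes each segment's barrage offsets are relative to that segment's own start, so they are shifted by the playback time accumulated before the segment:

```python
def build_playback_package(recorded_video, segment_barrage_files, segment_durations):
    """Bundle the recorded video with one merged barrage timeline.

    segment_barrage_files[i] holds the barrage entries of segment i
    (offsets relative to the segment start); segment_durations[i] is that
    segment's playback length in seconds.
    """
    timeline, base = [], 0.0
    for barrage, duration in zip(segment_barrage_files, segment_durations):
        timeline.extend(
            {"offset": base + b["offset"], "text": b["text"]} for b in barrage
        )
        base += duration  # shift the next segment by the time played so far
    return {"video": recorded_video, "barrage": timeline}
```

The virtual article files would be merged onto the same timeline in the same way before the packet is sent to the viewer's client.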
Following the above example, during user A's live broadcast the audience sends a large number of barrage messages and virtual gifts, and when each segment video is created, the barrage file and the virtual gift file corresponding to that segment video's time interval are also stored in the storage unit Redis. When a request from user B to watch the recorded broadcast video of user A's broadcast is received, the barrage file and the virtual gift file can be extracted from the storage unit Redis, combined with the recorded broadcast video into a recorded broadcast video packet, and sent to user B's client. When watching the recorded broadcast video, user B can simultaneously see the barrage and virtual gifts sent by other users during the live broadcast; the recorded broadcast content watched by user B is shown in (b) in FIG. 3.
In conclusion, by combining the barrage file and the virtual gift file to create the recorded broadcast video package, the participating users can conveniently know the participation degree of other participating users in the running process of the live broadcast service, and the watching experience of the users is effectively improved.
In addition, the recorded broadcast service video may be watched by a plurality of other participating users at the same time after being released, and in order to improve the participating experience of the user watching the recorded broadcast service video, a barrage sending interface and a virtual article sending interface may be provided at the interface watching the recorded broadcast service video, that is, the user may also send a barrage and present a virtual gift when watching the recorded broadcast service video, so that the user is more convenient to participate in the service, and the participating experience of the user is improved.
The video processing method provided by the application responds to a recording start request of an anchor, acquires a push stream file generated by the anchor in a live broadcasting process and pushed by a distribution server, can directly create a fragment video corresponding to the current stage based on the push stream file at the moment, determines the video attribute information of the fragment video, then stores the video attribute information and the fragment video into a service storage space, and can realize the purpose of recording while live broadcasting through continuous periodic circulation; when the anchor broadcast is closed, all the fragment videos and the video attribute information can be directly extracted from the service storage space to create the recorded broadcast service video for publishing, so that the recorded broadcast service video can be generated without waiting for a long time, video publishing can be timely carried out, a user who misses the live broadcast or has a backtracking requirement can watch the video timely, the running stage of the live broadcast service is quickly butted with the video publishing stage of the recorded broadcast service, and the participation experience of the user is improved.
Corresponding to the above method embodiment, the present application further provides an embodiment of a video processing apparatus, and fig. 4 shows a schematic structural diagram of a video processing apparatus provided in an embodiment of the present application. As shown in fig. 4, the apparatus includes:
an obtaining module 402, configured to, in response to a recording start instruction of an anchor, obtain a stream pushing file, which is pushed by a distribution server and generated by the anchor in a live broadcast process;
a determining module 404 configured to create a segment video based on the stream pushing file and determine video attribute information of the segment video;
a storage module 406, configured to store the sliced video and the video attribute information to a service storage space;
a creating module 408 configured to create and publish a recorded broadcast service video based on the segment video and the video attribute information in the service storage space when the anchor closes the live broadcast.
In an optional embodiment, the obtaining module 402 is further configured to:
receiving a live broadcast starting request submitted by the anchor for a live broadcast service; and under the condition that the live broadcast starting request comprises the recording starting instruction, acquiring the push stream file which is pushed by the distribution server and generated by the anchor in the live broadcast process.
In an optional embodiment, the video processing apparatus further includes:
the information storage module is configured to generate live broadcast attribute information according to anchor information carried in the live broadcast starting request and store the live broadcast attribute information to the service storage space;
accordingly, the creation module 408 is further configured to: create and release the recorded broadcast service video based on the fragment video, the video attribute information and the live broadcast attribute information in the service storage space.
In an optional embodiment, the information storage module is further configured to:
analyzing the live broadcast starting request to obtain the anchor information; generating a timestamp and a live broadcast identifier based on the anchor information, and reading a live broadcast address identifier corresponding to the anchor information; and integrating the timestamp, the live broadcast identification and the live broadcast address identification to generate the live broadcast attribute information.
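For illustration, the assembly of the live broadcast attribute information from the parsed anchor information can be sketched as below; the field names and the dictionary-based lookup of the live broadcast address identifier are assumptions.

```python
import time
import uuid

def build_live_attrs(anchor_info, address_table):
    """Integrate a timestamp, a live broadcast identifier and the live broadcast
    address identifier into the live broadcast attribute information."""
    return {
        "timestamp": int(time.time()),                    # when the broadcast started
        "live_id": uuid.uuid4().hex,                      # unique id of this broadcast
        "address_id": address_table[anchor_info["uid"]],  # anchor's room address
    }
```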
In an optional embodiment, the determining module 404 is further configured to:
analyzing the stream pushing file to obtain a video stream corresponding to the anchor; and carrying out fragment processing on the video stream according to a preset fragment strategy, and generating the fragment video according to a fragment processing result.
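A fixed-size slicing rule is one plausible form of the preset fragment strategy; the sketch below treats the parsed video stream as a list of frames, which is a simplification for illustration.

```python
def slice_stream(frames, frames_per_segment=3):
    """Cut the parsed video stream into fixed-size fragments; the last
    fragment may be shorter if the stream ends mid-segment."""
    return [frames[i:i + frames_per_segment]
            for i in range(0, len(frames), frames_per_segment)]
```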
In an optional embodiment, the determining module 404 is further configured to:
analyzing the segmented video to obtain the starting time information, the video duration information, the video space occupation information and the video sequence index information of the segmented video; determining a storage space identifier according to the storage position of the live broadcast attribute information in the service storage space; and creating the video attribute information of the segmented video based on the starting time information, the video duration information, the video space occupation information, the video sequence index information and the storage space identification.
In an optional embodiment, the information storage module is further configured to:
extracting the segmented video, the video attribute information and the live broadcast attribute information corresponding to the anchor broadcast from the service storage space; creating the recorded broadcast service video according to the fragment video, the video attribute information and the live broadcast attribute information; and determining a release interface corresponding to the anchor based on the live broadcast attribute information, and calling the release interface to release the recorded broadcast service video.
In an optional embodiment, the information storage module is further configured to:
creating an initial recorded broadcast service video according to the fragment video, the video attribute information and the live broadcast attribute information; under the condition that the playing time of the initial recorded broadcast service video is greater than a preset time threshold, segmenting the initial recorded broadcast service video into at least two intermediate recorded broadcast service videos; and taking the at least two intermediate recorded broadcast service videos as the recorded broadcast service videos.
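The duration-threshold splitting described above can be sketched as follows; representing each intermediate recorded broadcast service video by its (start, end) span in seconds is an illustrative simplification.

```python
def split_long_video(duration, threshold, part_len):
    """If the initial recorded video exceeds the threshold, split it into
    intermediate videos of at most part_len seconds; otherwise keep it whole."""
    if duration <= threshold:
        return [(0, duration)]
    spans, start = [], 0
    while start < duration:
        spans.append((start, min(start + part_len, duration)))
        start += part_len
    return spans
```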
In an optional embodiment, the video processing apparatus further includes:
the establishing module is configured to acquire a barrage file and a virtual article file corresponding to the sliced video and establish an association relationship between the barrage file and the virtual article file and the sliced video; and store the barrage file and the virtual article file to the service storage space based on the association relationship.
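One way to establish the association relationship is to map each time-stamped barrage or virtual article event to the segment whose time span contains it; the event structure below is an assumption for illustration.

```python
def associate_events(segment_attrs, events):
    """Associate each event (carrying a timestamp 't') with the sequence
    index of the segment whose time span contains it."""
    relation = {a["index"]: [] for a in segment_attrs}
    for event in events:
        for a in segment_attrs:
            if a["start"] <= event["t"] < a["start"] + a["duration"]:
                relation[a["index"]].append(event)
                break
    return relation
```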
In an optional embodiment, the video processing apparatus further includes:
a sending module configured to extract the barrage file and the virtual article file in the service storage space when receiving a viewing request submitted by a viewer of the anchor for the recorded broadcast service video; and generate a recorded broadcast video packet based on the recorded broadcast service video, the barrage file and the virtual article file and send the recorded broadcast video packet to the audience.
The video processing device provided by the application responds to a recording start request of an anchor by acquiring the push stream file that the distribution server pushes and that the anchor generates during the live broadcast. A segment video corresponding to the current stage can then be created directly from the push stream file, its video attribute information determined, and both stored in the service storage space; repeating this cycle periodically achieves recording while live broadcasting. When the anchor closes the live broadcast, all segment videos and their video attribute information can be extracted directly from the service storage space to create and publish the recorded broadcast service video. The recorded broadcast service video is therefore generated without a long wait: the video can be published in time, users who missed the live broadcast or want to review it can watch it promptly, and the running stage of the live broadcast service connects quickly to the video publishing stage of the recorded broadcast service, improving the participation experience of the user.
The above is a schematic scheme of a video processing apparatus of the present embodiment. It should be noted that the technical solution of the video processing apparatus belongs to the same concept as the technical solution of the video processing method, and details that are not described in detail in the technical solution of the video processing apparatus can be referred to the description of the technical solution of the video processing method.
Corresponding to the above method embodiment, the present application further provides an embodiment of a video processing system, and fig. 5 shows a schematic structural diagram of a video processing system provided in an embodiment of the present application. As shown in fig. 5, the system 500 includes:
an anchor terminal 510, a recording terminal 520, a service terminal 530 and a content distribution network 540;
the anchor terminal 510 is configured to collect a push stream file generated by an anchor in a live broadcast process, and send the push stream file to the content distribution network 540;
the content distribution network 540 is configured to push the stream pushing file to the recording end 520 if it is determined that the anchor has started the recording function;
the recording end 520 is configured to create a segment video based on the stream pushing file and determine video attribute information of the segment video; storing the fragment video and the video attribute information to a service storage space; sending the fragmented video and the video attribute information in the service storage space to the service end 530 when the anchor closes the live broadcast;
the service end 530 is configured to create and distribute a recorded broadcast service video based on the segment video and the video attribute information.
Specifically, the anchor terminal 510 is the terminal device held by the anchor; the recording end 520 is a cluster that creates segment videos and supports the service end 530 in creating the recorded broadcast service video; correspondingly, the service end 530 is the end that creates and distributes the recorded broadcast service video.
Optionally, the content distribution network 540 is configured to determine whether a recording instruction submitted by the anchor for live broadcast is received; and if so, sending the stream pushing file to the recording end.
Specifically, the content distribution network 540 is the CDN that interfaces with the anchor terminal 510. Since the content distribution network is the node that performs resource scheduling, load balancing and similar operations, it continuously receives the stream pushing file that the anchor terminal 510 pushes via RTMP; the decision of whether to record a video can therefore be made at the content distribution network, and if recording is needed, the stream pushing file is pushed directly to the recording end 520 for processing.
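The decision made at the content distribution network can be sketched as a simple tee: every chunk goes to normal distribution, and is additionally forwarded to the recording end when recording is enabled. The sink objects with an `append` method stand in for network pushes and are assumptions.

```python
def route_chunk(chunk, recording_enabled, distribute_sink, record_sink):
    """CDN node: always distribute the chunk; also forward it to the
    recording end when the anchor has started the recording function."""
    distribute_sink.append(chunk)
    if recording_enabled:
        record_sink.append(chunk)
```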
Optionally, the service end 530 is further configured to generate live broadcast attribute information according to anchor information carried in the live broadcast start request, and store the live broadcast attribute information in the service storage space; correspondingly, the creating and releasing a recorded broadcast service video based on the segment video and the video attribute information in the service storage space includes: creating and releasing the recorded broadcast service video based on the fragment video, the video attribute information and the live broadcast attribute information in the service storage space.
Optionally, the service end 530 is further configured to parse the live broadcast start request to obtain the anchor information; generating a timestamp and a live broadcast identifier based on the anchor information, and reading a live broadcast address identifier corresponding to the anchor information; and integrating the timestamp, the live broadcast identification and the live broadcast address identification to generate the live broadcast attribute information.
Optionally, the recording end 520 is further configured to parse the stream pushing file to obtain a video stream corresponding to the anchor; and carrying out fragment processing on the video stream according to a preset fragment strategy, and generating the fragment video according to a fragment processing result.
Optionally, the recording end 520 is further configured to analyze the segmented video to obtain start time information, video duration information, video space occupation information, and video sequence index information of the segmented video; determining a storage space identifier according to the storage position of the live broadcast attribute information in the service storage space; and creating the video attribute information of the segmented video based on the starting time information, the video duration information, the video space occupation information, the video sequence index information and the storage space identification.
Optionally, the service end 530 is further configured to extract the segment video, the video attribute information, and the live broadcast attribute information corresponding to the anchor from the service storage space; creating the recorded broadcast service video according to the fragment video, the video attribute information and the live broadcast attribute information; and determining a release interface corresponding to the anchor based on the live broadcast attribute information, and calling the release interface to release the recorded broadcast service video.
Optionally, the service end 530 is further configured to create an initial recorded broadcast service video according to the segment video, the video attribute information, and the live broadcast attribute information; under the condition that the playing time of the initial recorded broadcast service video is greater than a preset time threshold, segmenting the initial recorded broadcast service video into at least two intermediate recorded broadcast service videos; and taking the at least two intermediate recorded broadcast service videos as the recorded broadcast service videos.
Optionally, the service end 530 is further configured to obtain a barrage file and a virtual article file corresponding to the sliced video, and establish an association relationship between the barrage file and the virtual article file and the sliced video; and store the barrage file and the virtual article file to the service storage space based on the association relationship.
Optionally, the service end 530 is further configured to, upon receiving a viewing request submitted by a viewer of the anchor for the recorded broadcast service video, extract the barrage file and the virtual article file from the service storage space; and generate a recorded broadcast video packet based on the recorded broadcast service video, the barrage file and the virtual article file and send the recorded broadcast video packet to the audience.
The video processing system provided by the application generates the recorded broadcast service video without a long wait: the video can be published in time, users who missed the live broadcast or want to review it can watch it promptly, and the running stage of the live broadcast service connects quickly to the video publishing stage of the recorded broadcast service, thereby improving the participation experience of the user.
The above is a schematic scheme of a video processing system of the present embodiment. It should be noted that the technical solution of the video processing system and the technical solution of the video processing method belong to the same concept, and details that are not described in detail in the technical solution of the video processing system can be referred to the description of the technical solution of the video processing method.
The following description will further describe the video processing system with reference to fig. 6 by taking an application of the video processing system provided in the present application in a live scene as an example. Fig. 6 shows a schematic flowchart of a video processing system applied in a live scene according to an embodiment of the present application, which specifically includes the following steps:
step S602, the live broadcast cluster receives a live broadcast starting request submitted by a user through a user side.
Step S604, the live broadcast cluster generates live broadcast information according to the user broadcast information carried in the live broadcast starting request.
The live broadcast information includes the ID of the live broadcast room opened by the user, a timestamp, and a unique identifier of this live broadcast; meanwhile, the live broadcast cluster delivers the live broadcast information to a DataBus for convenient use in subsequent recording.
Step S606, the recording cluster pulls the live broadcast information in the live broadcast cluster in each fixed period based on a polling mechanism, and stores the live broadcast information in a storage unit Redis.
Specifically, to ensure that the live broadcast is recorded effectively later, the recording cluster pulls the live broadcast information periodically, so that during subsequent recording the segment videos belonging to the same user are stored in the same storage location, which is convenient for use and management.
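One polling tick of step S606 can be sketched as below; the fetch callable and the per-room key layout are assumptions, chosen so that segments of the same broadcast later share one storage location.

```python
def poll_tick(fetch_live_info, cache):
    """One polling cycle of the recording cluster: pull the current live
    broadcast information and cache it under a stable per-room key."""
    for info in fetch_live_info():
        cache[f"live:{info['room_id']}"] = info  # stable per-room key
```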
Step S608, the user starts live broadcasting through the user side, and the user side collects the push stream file and sends the push stream file to the live broadcast CDN through the RTMP.
Step S610, the live CDN sends the push stream file to the recording cluster when determining that the user starts the recording function.
Step S612, the recording cluster receives the stream pushing file, creates a segment video based on the stream pushing file, and determines video attribute information corresponding to the segment video.
Specifically, the video attribute information includes a storage space identifier corresponding to a storage location of the segment video in the storage unit Redis, a start time of the segment video, a playback time length, a size of the recording file, and a sequential index of the segment video.
Step S614, the recording cluster stores the fragment video and the video attribute information corresponding to the fragment video to a storage unit Redis.
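A possible key layout for step S614 is shown below against a plain dictionary standing in for the storage unit Redis; the key scheme is an assumption for illustration.

```python
def store_segment(store, live_id, index, data, attrs):
    """Persist one segment and its attribute record under per-broadcast keys,
    so all segments of one broadcast can be found in one place at close time."""
    store[f"live:{live_id}:seg:{index}"] = data
    store.setdefault(f"live:{live_id}:attrs", []).append(attrs)
```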
Step S616, the recording cluster reads the segment video and the corresponding video attribute information and live broadcast information in the storage unit Redis and sends the segment video and the corresponding video attribute information and live broadcast information to the live broadcast cluster when determining that the user closes the live broadcast.
Step S618, the live broadcast cluster creates a recorded broadcast video based on the segment video and the corresponding video attribute information and the live broadcast information.
Step S620, the live broadcast cluster reads the barrage, the virtual gifts, and the recorded broadcast video related to the live broadcast process of the user, and publishes them through the publishing interface corresponding to the user.
Specifically, when the recorded broadcast video is released, considering that a watching user may want to see how other users participated, the barrage and the virtual gifts involved in the user's live broadcast process can be read and released together with the recorded broadcast video, so that other users can see the barrage content and the virtual gift presenting situation from the live broadcast.
In conclusion, the recorded broadcast service video can be generated without a long wait, so the video can be published in time and watched promptly by users who missed the live broadcast or want to review it; the running stage of the live broadcast service connects quickly with the video publishing stage of the recorded broadcast service, improving the participation experience of the user.
Fig. 7 illustrates a block diagram of a computing device 700 provided according to an embodiment of the present application. The components of the computing device 700 include, but are not limited to, memory 710 and a processor 720. Processor 720 is coupled to memory 710 via bus 730, and database 750 is used to store data.
Computing device 700 also includes access device 740, which enables computing device 700 to communicate via one or more networks 760. Examples of such networks include the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or a combination of communication networks such as the internet. Access device 740 may include one or more of any type of network interface, wired or wireless, e.g., a Network Interface Card (NIC), an IEEE 802.11 Wireless Local Area Network (WLAN) wireless interface, a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth interface, a Near Field Communication (NFC) interface, and so forth.
In one embodiment of the application, the above-described components of the computing device 700 and other components not shown in fig. 7 may also be connected to each other, for example, by a bus. It should be understood that the block diagram of the computing device architecture shown in FIG. 7 is for purposes of example only and is not limiting as to the scope of the present application. Those skilled in the art may add or replace other components as desired.
Computing device 700 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), mobile phone (e.g., smartphone), wearable computing device (e.g., smartwatch, smartglasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 700 may also be a mobile or stationary server.
Wherein the processor 720 implements the steps of the video processing method when executing the instructions.
The above is an illustrative scheme of a computing device of the present embodiment. It should be noted that the technical solution of the computing device and the technical solution of the video processing method belong to the same concept, and details that are not described in detail in the technical solution of the computing device can be referred to the description of the technical solution of the video processing method.
An embodiment of the present application further provides a computer readable storage medium, which stores computer instructions, and when the instructions are executed by a processor, the instructions implement the steps of the video processing method as described above.
The above is an illustrative scheme of a computer-readable storage medium of the present embodiment. It should be noted that the technical solution of the storage medium belongs to the same concept as the technical solution of the above-mentioned video processing method, and details that are not described in detail in the technical solution of the storage medium can be referred to the description of the technical solution of the above-mentioned video processing method.
The foregoing description of specific embodiments of the present application has been presented. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The computer instructions comprise computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in the jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
It should be noted that, for the sake of simplicity, the above-mentioned method embodiments are described as a series of acts or combinations, but those skilled in the art should understand that the present application is not limited by the described order of acts, as some steps may be performed in other orders or simultaneously according to the present application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The preferred embodiments of the present application disclosed above are intended only to aid in the explanation of the application. They are not exhaustive and do not limit the application to the precise embodiments described; obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the application and its practical applications, thereby enabling others skilled in the art to best understand and utilize the application. The application is limited only by the claims and their full scope and equivalents.

Claims (15)

1. A video processing method, comprising:
responding to a recording starting instruction of an anchor, and acquiring a stream pushing file which is pushed by a distribution server and generated by the anchor in a live broadcast process;
creating a fragment video based on the stream pushing file, and determining video attribute information of the fragment video;
storing the fragment video and the video attribute information to a service storage space;
and under the condition that the anchor broadcast closes the live broadcast, creating a recorded broadcast service video based on the fragment video and the video attribute information in the service storage space and releasing the recorded broadcast service video.
2. The video processing method according to claim 1, wherein the obtaining, in response to a recording start instruction of an anchor, a push stream file generated by the anchor in a live broadcast process and pushed by a distribution server comprises:
receiving a live broadcast starting request submitted by the anchor for a live broadcast service;
and under the condition that the live broadcast starting request comprises the recording starting instruction, acquiring the push stream file which is pushed by the distribution server and generated by the anchor broadcast in the live broadcast process.
3. The video processing method according to claim 2, wherein before the step of creating and publishing a recorded broadcast service video based on the segment video and the video attribute information in the service storage space is executed, the method further comprises:
generating live broadcast attribute information according to anchor information carried in the live broadcast starting request, and storing the live broadcast attribute information to the service storage space;
correspondingly, the creating and releasing a recorded broadcast service video based on the segment video and the video attribute information in the service storage space includes:
and creating and releasing the recorded broadcast service video based on the fragment video, the video attribute information and the live broadcast attribute information in the service storage space.
4. The video processing method according to claim 3, wherein the generating live broadcast attribute information according to anchor information carried in the live broadcast start request includes:
analyzing the live broadcast starting request to obtain the anchor information;
generating a timestamp and an anchor identification based on the anchor information, and reading a live broadcast address identification corresponding to the anchor information;
and integrating the timestamp, the anchor identification and the live broadcast address identification to generate the live broadcast attribute information.
5. The video processing method of claim 1, wherein the creating a fragment video based on the stream pushing file comprises:
analyzing the stream pushing file to obtain a video stream corresponding to the anchor;
and carrying out fragment processing on the video stream according to a preset fragment strategy, and generating the fragment video according to a fragment processing result.
6. The video processing method according to claim 3, wherein said determining video attribute information of the sliced video comprises:
analyzing the segmented video to obtain the starting time information, the video duration information, the video space occupation information and the video sequence index information of the segmented video;
determining a storage space identifier according to the storage position of the live broadcast attribute information in the service storage space;
and creating the video attribute information of the segmented video based on the starting time information, the video duration information, the video space occupation information, the video sequence index information and the storage space identification.
7. The video processing method according to claim 3, wherein the creating and publishing the recorded broadcast service video based on the segment video, the video attribute information, and the live broadcast attribute information in the service storage space comprises:
extracting the segmented video, the video attribute information and the live broadcast attribute information corresponding to the anchor broadcast from the service storage space;
creating the recorded broadcast service video according to the fragment video, the video attribute information and the live broadcast attribute information;
and determining a release interface corresponding to the anchor based on the live broadcast attribute information, and calling the release interface to release the recorded broadcast service video.
8. The video processing method of claim 7, wherein the creating the recorded broadcast service video according to the fragment video, the video attribute information, and the live broadcast attribute information comprises:
creating an initial recorded broadcast service video according to the fragment video, the video attribute information and the live broadcast attribute information;
under the condition that the playing time of the initial recorded broadcast service video is greater than a preset time threshold, segmenting the initial recorded broadcast service video into at least two intermediate recorded broadcast service videos;
and taking the at least two intermediate recorded broadcast service videos as the recorded broadcast service videos.
9. The video processing method according to any of claims 1 to 8, wherein before the step of creating and publishing a recorded broadcast service video based on the sliced video and the video attribute information in the service storage space is executed, the method further comprises:
acquiring a barrage file and a virtual article file corresponding to the fragment video, and establishing an association relation between the barrage file and the virtual article file and the fragment video;
and storing the barrage file and the virtual article file to the service storage space based on the association relationship.
10. The video processing method according to claim 9, wherein after the step of creating and publishing a recorded broadcast service video based on the segment video and the video attribute information in the service storage space is executed, the method further comprises:
under the condition that a watching request submitted by a viewer of the anchor program aiming at the recorded and broadcast service video is received, extracting the barrage file and the virtual article file from the service storage space;
and generating a recorded broadcast video packet based on the recorded broadcast service video, the barrage file and the virtual article file and sending the recorded broadcast video packet to the audience.
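Claims 9 and 10 describe keying auxiliary files to the segmented video and bundling them at playback time; a hedged sketch (all field names and the dict-based storage are invented for illustration):

```python
# Hypothetical sketch of claims 9-10: associate bullet screen and
# virtual item files with a segmented video in storage, then bundle
# them with the recorded video when a viewer requests playback.
def store_auxiliary(storage: dict, segment_id: str, bullets, items):
    # Establish the association by keying both files to the segment.
    entry = storage.setdefault(segment_id, {})
    entry["bullets"] = bullets  # bullet screen (danmaku) comments
    entry["items"] = items      # virtual items / gifts

def build_playback_packet(storage: dict, segment_id: str, recorded_video):
    # On a viewing request, extract the associated files and
    # generate the recorded broadcast video packet.
    aux = storage.get(segment_id, {})
    return {
        "video": recorded_video,
        "bullets": aux.get("bullets", []),
        "items": aux.get("items", []),
    }
```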
11. A video processing apparatus, comprising:
an acquisition module configured to, in response to a recording start instruction of an anchor, acquire a stream-push file generated by the anchor during live broadcast and pushed by a distribution server;
a determining module configured to create a segmented video based on the stream-push file and determine video attribute information of the segmented video;
a storage module configured to store the segmented video and the video attribute information in a service storage space;
and a creating module configured to create and publish a recorded broadcast service video based on the segmented video and the video attribute information in the service storage space when the anchor closes the live broadcast.
12. A video processing system, comprising:
an anchor end, a recording end, a service end and a content distribution network;
wherein the anchor end is configured to collect a stream-push file generated by an anchor during live broadcast and send the stream-push file to the content distribution network;
the content distribution network is configured to push the stream-push file to the recording end when the anchor has started the recording function;
the recording end is configured to create a segmented video based on the stream-push file and determine video attribute information of the segmented video; store the segmented video and the video attribute information in a service storage space; and, when the anchor closes the live broadcast, send the segmented video and the video attribute information in the service storage space to the service end;
and the service end is configured to create and publish a recorded broadcast service video based on the segmented video and the video attribute information.
13. The video processing system of claim 12, wherein the content distribution network is configured to determine whether a recording instruction submitted by the anchor for a live broadcast has been received, and if so, to send the stream-push file to the recording end.
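The conditional forwarding in claims 12 and 13 is a routing decision at the content distribution network; a minimal sketch, assuming recording state is tracked per anchor (class and attribute names are hypothetical, and the string destinations stand in for real endpoints):

```python
# Hypothetical sketch of claims 12-13: the content distribution
# network forwards the stream-push file to the recording end only
# when the anchor has enabled recording for the live broadcast.
class ContentDistributionNetwork:
    def __init__(self):
        self.recording_enabled = set()  # anchors with recording on

    def on_record_instruction(self, anchor_id: str):
        # Claim 13: a recording instruction was received for this anchor.
        self.recording_enabled.add(anchor_id)

    def route(self, anchor_id: str, stream_push_file: bytes):
        destinations = ["viewers"]  # live distribution always happens
        if anchor_id in self.recording_enabled:
            destinations.append("recording_end")
        return destinations
```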
14. A computing device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the processor implements the steps of the method of any one of claims 1-10 when executing the instructions.
15. A computer-readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the method of any one of claims 1 to 10.
CN202110901970.9A 2021-08-06 2021-08-06 Video processing method, device and system Active CN113630618B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110901970.9A CN113630618B (en) 2021-08-06 2021-08-06 Video processing method, device and system

Publications (2)

Publication Number Publication Date
CN113630618A true CN113630618A (en) 2021-11-09
CN113630618B CN113630618B (en) 2024-02-13

Family

ID=78383195


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105721811A (en) * 2015-05-15 2016-06-29 乐视云计算有限公司 Live video recording method and system
CN106412677A (en) * 2016-10-28 2017-02-15 北京奇虎科技有限公司 Generation method and device of playback video file
US20170118495A1 (en) * 2015-10-23 2017-04-27 Disney Enterprises, Inc. Methods and Systems for Dynamically Editing, Encoding, Posting and Updating Live Video Content
CN107948669A (en) * 2017-12-22 2018-04-20 成都华栖云科技有限公司 Based on CDN fast video production methods
CN109561351A (en) * 2018-12-03 2019-04-02 网易(杭州)网络有限公司 Network direct broadcasting back method, device and storage medium
CN109831676A (en) * 2019-03-18 2019-05-31 北京奇艺世纪科技有限公司 A kind of video data handling procedure and device
CN109874061A (en) * 2019-03-22 2019-06-11 北京奇艺世纪科技有限公司 A kind of processing method of live video, device and electronic equipment
CN110351506A (en) * 2019-07-17 2019-10-18 视联动力信息技术股份有限公司 A kind of video recording method, device, electronic equipment and readable storage medium storing program for executing
CN111447455A (en) * 2018-12-29 2020-07-24 北京奇虎科技有限公司 Live video stream playback processing method and device and computing equipment
CN111954077A (en) * 2020-08-24 2020-11-17 上海连尚网络科技有限公司 Video stream processing method and device for live broadcast
CN111954078A (en) * 2020-08-24 2020-11-17 上海连尚网络科技有限公司 Video generation method and device for live broadcast

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114466201A (en) * 2022-02-21 2022-05-10 上海哔哩哔哩科技有限公司 Live stream processing method and device
CN114466201B (en) * 2022-02-21 2024-03-19 上海哔哩哔哩科技有限公司 Live stream processing method and device
CN115633194A (en) * 2022-12-21 2023-01-20 易方信息科技股份有限公司 Live broadcast playback method and device, computer equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant