CN111031333A - Video processing method, device, system and storage medium - Google Patents

Video processing method, device, system and storage medium

Info

Publication number
CN111031333A
Authority
CN
China
Prior art keywords
screen recording
video
live broadcast
broadcast room
recording video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911214569.7A
Other languages
Chinese (zh)
Other versions
CN111031333B (en)
Inventor
耿振健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Reach Best Technology Co Ltd
Original Assignee
Reach Best Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Reach Best Technology Co Ltd filed Critical Reach Best Technology Co Ltd
Priority to CN201911214569.7A priority Critical patent/CN111031333B/en
Publication of CN111031333A publication Critical patent/CN111031333A/en
Application granted granted Critical
Publication of CN111031333B publication Critical patent/CN111031333B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/2187 Live feed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/432 Content retrieval operation from a local storage medium, e.g. hard-disk
    • H04N21/4325 Content retrieval operation from a local storage medium, e.g. hard-disk by playing back content from the storage medium
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433 Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4334 Recording operations

Abstract

The present disclosure relates to a video processing method, apparatus, system and storage medium. The method includes: receiving live broadcast information of a target live broadcast room, the live broadcast information including identity information of the target live broadcast room; screening screen recording videos on a platform according to the identity information of the target live broadcast room to obtain screen recording videos associated with the identity information of the target live broadcast room; and synthesizing the associated screen recording videos to obtain a live playback video. As a result, when a user later wants to watch a live playback, the user is no longer limited to fragmented clips of an anchor's live broadcast and can watch a complete live playback video.

Description

Video processing method, device, system and storage medium
Technical Field
The present disclosure relates to the field of video processing, and in particular, to a video processing method, apparatus, system, and storage medium.
Background
With the existing live broadcast screen recording function, a live broadcast video recorded by a user can be published as that user's own video work. However, in a single live broadcast room, many users may record many videos at the same time. These videos are published on the platform without any association between them, and each captures only a fragment of the anchor's live broadcast. As a result, when a user later wants to watch the live playback, only fragmented portions of the anchor's live broadcast can be watched.
Disclosure of Invention
The present disclosure provides a video processing method, apparatus, system and storage medium, so as to at least solve the problem in the related art that, when a user wants to watch a live playback at a later time, only fragmented portions of an anchor's live broadcast can be watched. The technical solution of the present disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided a video processing method, including:
receiving live broadcast information of a target live broadcast room; the live broadcast information comprises identity information of the target live broadcast room;
screening screen recording videos on a platform according to the identity information of the target live broadcast room to obtain screen recording videos associated with the identity information of the target live broadcast room;
and synthesizing the associated screen recording videos to obtain a live playback video.
According to a second aspect of the embodiments of the present disclosure, there is provided a video processing method, including:
responding to a screen recording triggering operation of a user on a target live broadcast room, and generating a screen recording video containing identity information of the target live broadcast room;
and uploading the screen recording video to a platform.
According to a third aspect of the embodiments of the present disclosure, there is provided a video processing apparatus including:
a receiving module configured to receive live broadcast information of a target live broadcast room, the live broadcast information including identity information of the target live broadcast room;
a screening module configured to screen screen recording videos on a platform according to the identity information of the target live broadcast room to obtain screen recording videos associated with the identity information of the target live broadcast room;
and a synthesis module configured to synthesize the associated screen recording videos to obtain a live playback video.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a video processing apparatus including:
a screen recording video generation module configured to respond to a screen recording trigger operation of a user on a target live broadcast room and generate a screen recording video containing identity information of the target live broadcast room;
and an upload module configured to upload the screen recording video to a platform.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a video processing system, comprising a client and a server;
the client is configured to respond to a screen recording trigger operation of a user on a target live broadcast room, generate a screen recording video containing identity information of the target live broadcast room, and upload the screen recording video to a platform;
the server is configured to receive live broadcast information of the target live broadcast room, the live broadcast information including identity information of the target live broadcast room; screen the screen recording videos on the platform according to the identity information of the target live broadcast room to obtain the screen recording videos associated with the identity information of the target live broadcast room; and synthesize the associated screen recording videos to obtain a live playback video.
According to a sixth aspect of embodiments of the present disclosure, there is provided a storage medium having instructions that, when executed by a processor of a server, enable the server to perform the video processing method according to the first aspect.
According to a seventh aspect of embodiments of the present disclosure, there is provided a storage medium, wherein instructions, when executed by a processor of a client, enable the client to perform the video processing method according to the second aspect.
According to an eighth aspect of embodiments of the present disclosure, there is provided a computer program product comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, enables the computer program product to perform the video processing method according to at least one of the first and second aspects.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
and screening the screen recording video on the platform according to the identity information of the target live broadcast room to obtain the screen recording video associated with the identity information of the target live broadcast room, and synthesizing the associated screen recording video, so that dispersed and repeated videos on the platform can be processed to obtain a live broadcast playback video. Therefore, when a user wants to watch live playback in a later period, the user can not watch the live process of some anchor fragments any more, and can watch a complete live playback video.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is a diagram illustrating an application environment for a video processing method according to an exemplary embodiment.
Fig. 2 is a flow diagram illustrating a video processing method according to an example embodiment.
Fig. 3 is a flow diagram illustrating a video processing method according to an example embodiment.
Fig. 4 is a flow diagram illustrating a video processing method according to an example embodiment.
Fig. 5 is a flow diagram illustrating a video processing method according to an example embodiment.
Fig. 6 is a flow diagram illustrating a video processing method according to an example embodiment.
Fig. 7 is a block diagram illustrating a video processing apparatus according to an example embodiment.
Fig. 8 is a block diagram illustrating a client according to an example embodiment.
Fig. 9 is an internal block diagram of a computer program product shown in accordance with an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The video processing method provided by the present disclosure can be applied to the application environment shown in fig. 1. The video processing system comprises a client 11 and a server 12. The client 11 communicates with the server 12 via a network. The client 11 responds to a screen recording trigger operation of a user on the target live broadcast room, generates a screen recording video containing identity information of the target live broadcast room, and uploads the screen recording video to the platform. The server 12 receives live broadcast information of the target live broadcast room; the live broadcast information comprises identity information of the target live broadcast room; the target live broadcast room itself also runs on a client 11. The server 12 screens the screen recording videos on the platform according to the identity information of the target live broadcast room to obtain the screen recording videos associated with the identity information of the target live broadcast room, and synthesizes the associated screen recording videos to obtain a live playback video. The client 11 may be, but is not limited to, a personal computer, a notebook computer, a smartphone, a tablet computer, or the like, and the server 12 may be implemented as an independent server or as a server cluster composed of multiple servers.
Fig. 2 is a flowchart illustrating a video processing method according to an exemplary embodiment. As shown in fig. 2, the video processing method is applied to the server in fig. 1 and includes the following steps.
In step S21, live broadcast information of a target live broadcast room is received; the live broadcast information comprises identity information of the target live broadcast room.
The target live broadcast room is any live broadcast room on the platform, and each live broadcast room has identity information that distinguishes it from other live broadcast rooms.
In step S22, the screen recording videos on the platform are screened according to the identity information of the target live broadcast room to obtain the screen recording videos associated with the identity information of the target live broadcast room.
Screen recording refers to a process in which a client captures the display content of all or part of a display screen and encodes the captured content as a video stream to obtain a video file; a screen recording video is the video file obtained by screen recording. A client (of either an anchor or a viewer) can publish a recorded live video on the platform. When the client publishes a video recorded in the target live broadcast room, it also packages the identity information of the target live broadcast room with the video and sends them to the platform together, so every screen recording video on the platform contains identity information. For example, suppose there are 6 screen recording videos: the identity information of the 1st screen recording video is anchor A, the 2nd is anchor B, the 3rd is anchor A, the 4th is anchor C, the 5th is anchor A, and the 6th is anchor B. If the identity information of the target live broadcast room is anchor A, screening the 6 screen recording videos on the platform according to this identity information yields the 1st, 3rd and 5th screen recording videos as the screen recording videos associated with the identity information of the target live broadcast room.
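The screening by identity information can be viewed as a simple metadata filter over the videos uploaded to the platform. Below is a minimal sketch of such a filter; the data structure and field names are illustrative assumptions, not part of the disclosure.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ScreenRecording:
    video_id: int
    room_id: str    # identity information of the live broadcast room packaged with the upload
    start: float    # screen recording start time (seconds since epoch)
    end: float      # screen recording end time

def filter_by_room(recordings: List[ScreenRecording], target_room_id: str) -> List[ScreenRecording]:
    """Keep only the screen recordings whose packaged identity information
    matches the target live broadcast room."""
    return [r for r in recordings if r.room_id == target_room_id]

# Mirrors the example in the text: videos 1, 3 and 5 belong to anchor A.
videos = [
    ScreenRecording(1, "A", 0, 0), ScreenRecording(2, "B", 0, 0),
    ScreenRecording(3, "A", 0, 0), ScreenRecording(4, "C", 0, 0),
    ScreenRecording(5, "A", 0, 0), ScreenRecording(6, "B", 0, 0),
]
print([r.video_id for r in filter_by_room(videos, "A")])  # -> [1, 3, 5]
```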
In step S23, the associated screen recording videos are synthesized to obtain a live playback video.
Because the associated screen recording videos are all videos associated with anchor A, the 1st, 3rd and 5th screen recording videos are synthesized to obtain the live playback video of anchor A.
According to the above video processing method, the screen recording videos on the platform are screened according to the identity information of the target live broadcast room to obtain the screen recording videos associated with the identity information of the target live broadcast room, and the associated screen recording videos are synthesized, so that scattered and duplicated videos on the platform can be processed into a live playback video. Therefore, when a user later wants to watch a live playback, the user is no longer limited to fragmented clips of an anchor's live broadcast and can watch a complete live playback video.
Fig. 3 is a flowchart illustrating a video processing method according to an exemplary embodiment. As shown in fig. 3, the video processing method is applied to the server in fig. 1 and includes the following steps:
In step S31, live broadcast information of a target live broadcast room is received; the live broadcast information comprises the identity information of the target live broadcast room as well as the live broadcast start time and live broadcast end time of the target live broadcast room.
The content of step S31 in the present disclosure is the same as the content of step S21 in the above embodiment, and is not described again here.
In step S32, the screen recording videos on the platform are screened according to the identity information of the target live broadcast room and the live broadcast start time and live broadcast end time of the target live broadcast room, to obtain the screen recording videos associated with the identity information of the target live broadcast room.
The target live broadcast room needs to upload its live broadcast start time and live broadcast end time to the server. The server may screen the screen recording videos on the platform to obtain the screen recording videos associated with the identity information of the target live broadcast room in either of the following two ways:
firstly, screen recording videos on a platform can be screened according to identity information of a target live broadcast room, a part of videos are screened out, then remaining videos are screened according to live broadcast start time and live broadcast end time of the target live broadcast room, and finally screen recording videos related to the identity information of the target live broadcast room are obtained.
For example, there are 6 screen recording videos: the identity information of the 1st screen recording video is anchor A, the 2nd is anchor B, the 3rd is anchor A, the 4th is anchor C, the 5th is anchor A, and the 6th is anchor B. Suppose the identity information of the target live broadcast room is anchor A, the live broadcast start time of the target live broadcast room is 10:00:00 on July 10, 2019, and the live broadcast end time is 12:00:00 on July 10, 2019. Screening the 6 screen recording videos on the platform according to the identity information of the target live broadcast room yields the 1st, 3rd and 5th screen recording videos. Each screen recording video also has a screen recording start time and a screen recording end time; for example, the 1st screen recording video runs from 10:05:00 to 10:50:00 on July 10, 2019, the 3rd runs from 10:08:00 to 10:55:10 on July 10, 2019, and the 5th runs from 10:08:00 to 10:55:10 on July 9, 2019. The server then screens the 1st, 3rd and 5th screen recording videos again according to the live broadcast start time and live broadcast end time of the target live broadcast room and finds that the third of them does not fall within the live broadcast start time and end time of the target live broadcast room, so the 5th screen recording video is filtered out. Finally, the screen recording videos associated with the identity information of the target live broadcast room are the 1st and 3rd screen recording videos.
Second, the screen recording videos on the platform can be screened in a single pass directly according to the identity information, the live broadcast start time and the live broadcast end time of the target live broadcast room, directly obtaining the screen recording videos associated with the identity information of the target live broadcast room.
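The second, single-pass scheme amounts to combining the identity filter with a time-window test. The sketch below reuses the ScreenRecording structure from the earlier sketch; interpreting "within the live broadcast start and end time" as requiring the recording interval to overlap the live window is an assumption.

```python
def filter_by_room_and_window(recordings, target_room_id, live_start, live_end):
    """Single pass over ScreenRecording items (see the earlier sketch): keep the
    recordings of the target live broadcast room whose [start, end] interval
    overlaps the live broadcast window [live_start, live_end]."""
    return [
        r for r in recordings
        if r.room_id == target_room_id and r.start < live_end and r.end > live_start
    ]
```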
In step S33, the associated screen recording videos are synthesized according to the timestamps in the associated screen recording videos, to obtain a live playback video.
During synthesis, for any two screen recording videos to be synthesized, if the screen recording end time of the first screen recording video is later than the screen recording start time of the second screen recording video, deduplication is performed on the first or the second screen recording video according to the screen recording end time of the first screen recording video and the screen recording start time of the second screen recording video.
Additionally or alternatively, for any two screen recording videos to be synthesized, the video frames of the first screen recording video and the second screen recording video are compared; if the same video frames exist in both the first and the second screen recording videos, deduplication is performed on the first or the second screen recording video. Whether video frames are similar can be determined with an image processing method, for example an image recognition model based on a convolutional neural network in deep learning.
For the first approach, synthesis with time-based deduplication: as described in step S32, the screen recording start time of the first screen recording video is 10:05:00 on July 10, 2019 and its screen recording end time is 10:50:00 on July 10, 2019, while the screen recording start time of the second screen recording video is 10:08:00 on July 10, 2019 and its screen recording end time is 10:55:10 on July 10, 2019. Since the first and second screen recording videos overlap in the time dimension, the overlapping frame data can be deduplicated when the two videos are synthesized; the video obtained by combining the first and second screen recording videos then spans from 10:05:00 to 10:55:10 on July 10, 2019. If there are more associated screen recording videos, all of them are synthesized according to the scheme in step S33 to obtain a live playback video whose start time and end time are close to the anchor's live broadcast start time and end time.
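A minimal sketch of the time-based deduplication: when two recordings overlap, the frames of the second recording that fall before the end of the first are dropped before concatenation. Modeling a recording as a sorted list of (timestamp, frame) pairs is an illustrative simplification, not how the disclosure stores video data.

```python
from typing import Any, List, Tuple

Frame = Tuple[float, Any]  # (timestamp in seconds, frame payload)

def merge_with_time_dedup(first: List[Frame], second: List[Frame]) -> List[Frame]:
    """Concatenate two overlapping recordings so that each instant appears once:
    frames of the second recording that are not later than the end of the first
    recording are treated as duplicates and dropped."""
    if not first:
        return list(second)
    first_end = first[-1][0]
    deduped_second = [(t, f) for (t, f) in second if t > first_end]
    return list(first) + deduped_second
```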
For the second approach, synthesis with frame-comparison deduplication: for any two screen recording videos to be synthesized, each video frame of the first screen recording video is compared with each video frame of the second screen recording video. If some video frames in the first screen recording video are the same as some video frames in the second screen recording video, deduplication is required: either the first screen recording video is deduplicated (the duplicated video frames are deleted from the first screen recording video) or the second screen recording video is deduplicated (the duplicated video frames are deleted from the second screen recording video).
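One common way to detect identical or near-identical frames, in the spirit of the frame comparison described above, is a coarse perceptual hash. The sketch below uses OpenCV to decode frames and an average-hash comparison; the hash size and distance threshold are arbitrary illustrative choices, and the disclosure itself only says that an image processing method such as a convolutional neural network model may be used.

```python
import cv2
import numpy as np

def average_hash(frame, hash_size=8):
    """Downscale the frame to a tiny grayscale thumbnail and threshold it
    against its mean, producing a compact binary signature."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (hash_size, hash_size), interpolation=cv2.INTER_AREA)
    return (small > small.mean()).flatten()

def frame_hashes(path):
    """Average-hash of every decoded frame of the video file at `path`."""
    cap = cv2.VideoCapture(path)
    hashes = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hashes.append(average_hash(frame))
    cap.release()
    return hashes

def duplicated_frame_indices(first_path, second_path, max_distance=2):
    """Indices of frames in the second video that also appear in the first video
    (Hamming distance between hashes at or below the threshold); these are the
    frames a deduplication step could delete."""
    first_hashes = frame_hashes(first_path)
    duplicates = []
    for i, h in enumerate(frame_hashes(second_path)):
        if any(np.count_nonzero(h != fh) <= max_distance for fh in first_hashes):
            duplicates.append(i)
    return duplicates
```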
During synthesis, for any two screen recording videos to be synthesized, if the screen recording end time of the first screen recording video is earlier than or equal to the screen recording start time of the second screen recording video, transition processing is performed on the first screen recording video and the second screen recording video.
Transition processing is a method that blends the two discontinuous videos to be synthesized so that the splice between them does not appear too abrupt.
In the present disclosure, as described in step S32, if the screen recording start time of the first screen recording video is 10:05:00 on July 10, 2019 and its screen recording end time is 10:50:00 on July 10, 2019, while the screen recording start time of the second screen recording video is 10:51:00 on July 10, 2019 and its screen recording end time is 10:55:10 on July 10, 2019, then there is a gap in the time dimension between the first and second screen recording videos, and transition processing is needed. Specifically: using FFmpeg (FFmpeg is a set of open source computer programs that can be used to record and convert digital audio and video and turn them into streams), the first screen recording video A is split into A1 and A2, where A2 is the content of the last n seconds of A and A1 is the remaining content; the second screen recording video B is likewise split into B1 and B2, where B1 is the content of the first n seconds of B and B2 is the remaining part; A2 and B1 are mixed together with a video filter to form a video AB, which serves as the transition effect; and the three videos A1-AB-B2 are then hard-concatenated with FFmpeg.
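A sketch of that transition step, driving the ffmpeg command-line tool from Python: A is split into A1 and A2 (last n seconds), B into B1 (first n seconds) and B2, A2 and B1 are blended into a transition clip AB, and A1, AB and B2 are concatenated. The choice of a cross-fade (the xfade filter) for the mixing step and the dropping of audio are illustrative assumptions; the disclosure only states that a video filter mixes A2 and B1 before the hard concatenation.

```python
import subprocess

def run(cmd):
    """Run an ffmpeg/ffprobe command, raising if it fails."""
    subprocess.run(cmd, check=True)

def probe_duration(path):
    """Duration of a media file in seconds, obtained via ffprobe."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        check=True, capture_output=True, text=True)
    return float(out.stdout.strip())

def concat_with_transition(a, b, output, n=2.0):
    """Splice video `a` before video `b` with an n-second transition, following
    the A1 / A2 / B1 / B2 scheme described in the text. Audio handling is
    omitted; both inputs are assumed to share resolution and frame rate."""
    dur_a = probe_duration(a)
    # A1: everything except the last n seconds of A; A2: the last n seconds of A.
    run(["ffmpeg", "-y", "-i", a, "-t", str(dur_a - n), "-an", "a1.mp4"])
    run(["ffmpeg", "-y", "-ss", str(dur_a - n), "-i", a, "-an", "a2.mp4"])
    # B1: the first n seconds of B; B2: the rest of B.
    run(["ffmpeg", "-y", "-i", b, "-t", str(n), "-an", "b1.mp4"])
    run(["ffmpeg", "-y", "-ss", str(n), "-i", b, "-an", "b2.mp4"])
    # AB: mix A2 and B1 into a single transition clip (here, a cross-fade).
    run(["ffmpeg", "-y", "-i", "a2.mp4", "-i", "b1.mp4",
         "-filter_complex", f"[0:v][1:v]xfade=transition=fade:duration={n}:offset=0[v]",
         "-map", "[v]", "ab.mp4"])
    # Hard-concatenate A1, AB and B2 into the final spliced video.
    run(["ffmpeg", "-y", "-i", "a1.mp4", "-i", "ab.mp4", "-i", "b2.mp4",
         "-filter_complex", "[0:v][1:v][2:v]concat=n=3:v=1:a=0[v]",
         "-map", "[v]", output])
```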
According to the above video processing method, the screen recording videos on the platform are screened according to the identity information of the target live broadcast room and the live broadcast start time and live broadcast end time of the target live broadcast room, to obtain the screen recording videos that are associated with the identity information of the target live broadcast room and belong to this live broadcast, and the associated screen recording videos are synthesized, so that scattered and duplicated videos on the platform can be processed into a live playback video. Therefore, when a user later wants to watch a live playback, the user is no longer limited to fragmented clips of an anchor's live broadcast and can watch a complete live playback video.
Fig. 4 is a flowchart illustrating a video processing method according to an exemplary embodiment. As shown in fig. 4, the video processing method is applied to the client in fig. 1 and includes the following steps:
in step S41, in response to a screen recording trigger operation of the user on the target live broadcast room, a screen recording video including identity information of the target live broadcast room is generated.
A client (of either the anchor or a viewer) can publish the recorded live video on the platform. When publishing a video recorded in the target live broadcast room, the client also packages the identity information of the target live broadcast room with the video and sends them to the platform together, so that every screen recording video on the platform contains identity information.
In step S42, the screen recording video is uploaded to a platform.
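On the client side, packaging the identity information with the video can be as simple as attaching the room identifier (and, in the later embodiment, the live broadcast start and end times) as metadata fields of the upload request. The endpoint URL and field names below are hypothetical placeholders, not part of the disclosure.

```python
import requests

def upload_screen_recording(video_path, room_id, live_start=None, live_end=None,
                            upload_url="https://platform.example.com/upload"):  # hypothetical endpoint
    """Upload a screen recording together with the identity information of the
    target live broadcast room, so the platform can later associate it."""
    metadata = {"room_id": room_id}   # identity information packaged with the video
    if live_start is not None and live_end is not None:
        metadata.update({"live_start": live_start, "live_end": live_end})
    with open(video_path, "rb") as f:
        resp = requests.post(upload_url, data=metadata,
                             files={"video": (video_path, f, "video/mp4")})
    resp.raise_for_status()
    return resp
```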
In one embodiment, the method further comprises:
sending the live broadcast information of the target live broadcast room to a server, so that the server screens the screen recording videos on the platform according to the identity information of the target live broadcast room to obtain the screen recording videos associated with the identity information of the target live broadcast room, and synthesizes the associated screen recording videos to obtain a live playback video;
wherein the live broadcast information comprises identity information of the target live broadcast room.
The live broadcast information may further comprise the live broadcast start time and the live broadcast end time of the target live broadcast room.
According to the above video processing method, the client packages the identity information of the target live broadcast room with the video and sends them to the platform, so that every video on the platform carries identity information; the server can then synthesize the screen recording videos according to the identity information of each screen recording video and obtain the live playback video associated with a given anchor.
Fig. 5 is a flowchart illustrating a video processing method according to an exemplary embodiment. As shown in fig. 5, the video processing method is applied to the system of fig. 1 and includes the following steps:
in step S51, the client generates a screen recording video including the identity information of the target live broadcast room in response to a screen recording trigger operation of the user on the target live broadcast room, and uploads the screen recording video to the platform.
In step S52, the server receives live broadcast information of the target live broadcast room; and the live broadcast information comprises identity information of the target live broadcast room.
In step S53, the server screens the screen recording video on the platform according to the identity information of the target live broadcast room, so as to obtain the screen recording video associated with the identity information of the target live broadcast room.
In step S54, the server synthesizes the associated screen recording videos to obtain a live playback video.
The descriptions of the steps in the embodiment of the present disclosure are the same as the descriptions in the embodiment shown in fig. 2, and are not repeated here.
According to the above video processing method, the screen recording videos on the platform are screened according to the identity information of the target live broadcast room to obtain the screen recording videos associated with the identity information of the target live broadcast room, and the associated screen recording videos are synthesized, so that scattered and duplicated videos on the platform can be processed into a live playback video. Therefore, when a user later wants to watch a live playback, the user is no longer limited to fragmented clips of an anchor's live broadcast and can watch a complete live playback video.
Fig. 6 is a flowchart illustrating a video processing method according to an exemplary embodiment. As shown in fig. 6, the video processing method is applied to the system of fig. 1 and includes the following steps:
in step S61, the client generates a screen recording video including the identity information of the target live broadcast room in response to a screen recording trigger operation of the user on the target live broadcast room, and uploads the screen recording video to the platform.
In step S62, the server receives live broadcast information of the target live broadcast room; the live broadcast information comprises identity information of the target live broadcast room, and live broadcast starting time and live broadcast ending time of the target live broadcast room.
In step S63, the server screens the screen recording video on the platform according to the identity information of the target live broadcast room, the live broadcast start time and the live broadcast end time of the target live broadcast room, and obtains the screen recording video associated with the identity information of the target live broadcast room.
In step S64, the server synthesizes the associated screen recording video according to the timestamp in the associated screen recording video, so as to obtain a live playback video.
The descriptions of the steps in the embodiment of the present disclosure are the same as the descriptions in the embodiment shown in fig. 3, and are not repeated here.
According to the above video processing method, the screen recording videos on the platform are screened according to the identity information of the target live broadcast room and the live broadcast start time and live broadcast end time of the target live broadcast room, to obtain the screen recording videos that are associated with the identity information of the target live broadcast room and belong to this live broadcast, and the associated screen recording videos are synthesized, so that scattered and duplicated videos on the platform can be processed into a live playback video. Therefore, when a user later wants to watch a live playback, the user is no longer limited to fragmented clips of an anchor's live broadcast and can watch a complete live playback video.
Fig. 7 is a block diagram illustrating a video processing apparatus according to an example embodiment. Referring to fig. 7, the video processing apparatus includes a receiving module 71, a screening module 72, and a synthesis module 73.
The receiving module 71 is configured to receive live broadcast information of a target live broadcast room; the live broadcast information comprises identity information of the target live broadcast room.
The screening module 72 is configured to screen the screen recording videos on the platform according to the identity information of the target live broadcast room to obtain the screen recording videos associated with the identity information of the target live broadcast room.
The synthesis module 73 is configured to synthesize the associated screen recording videos to obtain a live playback video.
In an exemplary embodiment, the live broadcast information includes a live broadcast start time and a live broadcast end time of the target live broadcast room; the screening module 72 is configured to screen the screen recording videos on the platform according to the identity information of the target live broadcast room and the live broadcast start time and live broadcast end time of the target live broadcast room, to obtain the screen recording videos associated with the identity information of the target live broadcast room.
In an exemplary embodiment, the synthesis module 73 is configured to synthesize the associated screen recording videos according to the timestamps in the associated screen recording videos, to obtain a live playback video.
In an exemplary embodiment, the video processing apparatus further includes a duplicate removal processing module configured to, for any two screen recording videos to be synthesized, perform duplicate removal processing on the first or the second screen recording video according to the screen recording end time of the first screen recording video and the screen recording start time of the second screen recording video if the screen recording end time of the first screen recording video is later than the screen recording start time of the second screen recording video; and a transition processing module configured to, for any two screen recording videos to be synthesized, perform transition processing on the first and the second screen recording videos if the screen recording end time of the first screen recording video is earlier than or equal to the screen recording start time of the second screen recording video.
In an exemplary embodiment, the video processing apparatus further includes a duplicate removal processing module configured to, for any two screen recording videos to be synthesized, compare the video frames of the first screen recording video and the second screen recording video, and perform duplicate removal processing on the first or the second screen recording video if the same video frames exist in both.
With regard to the video processing apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 8 is a block diagram illustrating a video processing apparatus according to an example embodiment. Referring to fig. 8, the video processing apparatus includes a screen recording video generation module 81 and an upload module 82.
The screen recording video generation module 81 is configured to respond to a screen recording trigger operation of a user on a target live broadcast room and generate a screen recording video containing identity information of the target live broadcast room.
The upload module 82 is configured to perform uploading the screen-recorded video to a platform.
In an exemplary embodiment, the video processing apparatus further includes a sending module configured to send live broadcast information of the target live broadcast room to a server, so that the server screens the screen recording videos on the platform according to the identity information of the target live broadcast room to obtain the screen recording videos associated with the identity information of the target live broadcast room, and synthesizes the associated screen recording videos to obtain a live playback video;
wherein the live broadcast information comprises identity information of the target live broadcast room.
In an exemplary embodiment, the live broadcast information includes a live broadcast start time and a live broadcast end time of the target live broadcast room.
With regard to the video processing apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
In an exemplary embodiment, a block diagram of a video processing system is shown, with reference to fig. 1, the video processing system comprising a client 11 and a server 12.
The client 11 is configured to respond to a screen recording trigger operation of a user on a target live broadcast room, generate a screen recording video containing identity information of the target live broadcast room, and upload the screen recording video to a platform;
the server 12 is configured to perform receiving live broadcast information of the target live broadcast room; the live broadcast information comprises identity information of the target live broadcast room; screening the screen recording video on the platform according to the identity information of the target live broadcast room to obtain the screen recording video related to the identity information of the target live broadcast room; and synthesizing the associated screen recording videos to obtain a live playback video.
In an exemplary embodiment, the live information includes a live start time and a live end time of the target live room; the server 12 is configured to perform screening on the screen recording video on the platform according to the identity information of the target live broadcast room, the live broadcast start time and the live broadcast end time of the target live broadcast room, so as to obtain the screen recording video associated with the identity information of the target live broadcast room.
In an exemplary embodiment, the server 12 is configured to perform the composition of the associated screen recorded video according to a timestamp in the associated screen recorded video, resulting in a live playback video.
The specific manner in which the client 11 and the server 12 perform operations with respect to the system in the above-described embodiment has been described in detail in the embodiment related to the method, and will not be elaborated here.
In exemplary embodiments, there is also provided a storage medium comprising instructions, such as a memory comprising instructions, wherein any reference to memory, storage, database or other medium used in embodiments provided by the present disclosure may comprise non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
In one embodiment, a computer program product is provided, which may be a server; its internal structure may be as shown in fig. 9. The computer program product includes a processor, a memory, a network interface, and a database connected by a system bus. The processor of the computer program product is configured to provide computing and control capabilities. The memory of the computer program product comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer program product is used for storing data. The network interface of the computer program product is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a video processing method.
Those skilled in the art will appreciate that the architecture shown in fig. 9 is merely a block diagram of portions of an architecture consistent with the present disclosure and not intended to limit the computer program product to which the present disclosure may be applied, and that a particular computer program product may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A video processing method, comprising:
receiving live broadcast information of a target live broadcast room; the live broadcast information comprises identity information of the target live broadcast room;
screening screen recording videos on a platform according to the identity information of the target live broadcast room to obtain screen recording videos associated with the identity information of the target live broadcast room;
and synthesizing the associated screen recording videos to obtain a live playback video.
2. The video processing method according to claim 1, wherein the live broadcast information comprises a live broadcast start time and a live broadcast end time of the target live broadcast room;
wherein the screening screen recording videos on the platform according to the identity information of the target live broadcast room to obtain screen recording videos associated with the identity information of the target live broadcast room comprises:
screening the screen recording videos on the platform according to the identity information of the target live broadcast room and the live broadcast start time and live broadcast end time of the target live broadcast room, to obtain the screen recording videos associated with the identity information of the target live broadcast room.
3. The video processing method according to claim 2, wherein the synthesizing the associated screen recording videos to obtain a live playback video comprises:
and synthesizing the associated screen recording video according to the timestamp in the associated screen recording video to obtain the live playback video.
4. The video processing method according to any of claims 1-3, wherein the method further comprises:
for any two screen recording videos to be synthesized, if the screen recording end time of a first screen recording video is later than the screen recording start time of a second screen recording video, performing duplicate removal processing on the first screen recording video or the second screen recording video according to the screen recording end time of the first screen recording video and the screen recording start time of the second screen recording video;
for any two screen recording videos to be synthesized, if the screen recording end time of a first screen recording video is earlier than or equal to the screen recording start time of a second screen recording video, performing transition processing on the first screen recording video and the second screen recording video;
or,
for any two screen recording videos to be synthesized, comparing video frames of the first screen recording video and the second screen recording video;
and if the same video frames exist in the first screen recording video and the second screen recording video, performing duplicate removal processing on the first screen recording video or the second screen recording video.
5. A video processing method, comprising:
responding to a screen recording triggering operation of a user on a target live broadcast room, and generating a screen recording video containing identity information of the target live broadcast room;
and uploading the screen recording video to a platform.
6. The video processing method of claim 5, wherein the method further comprises:
sending the live broadcast information of the target live broadcast room to a server, so that the server screens the screen recording videos on a platform according to the identity information of the target live broadcast room to obtain the screen recording videos associated with the identity information of the target live broadcast room, and synthesizes the associated screen recording videos to obtain a live playback video;
and the live broadcast information comprises identity information of the target live broadcast room.
7. A video processing apparatus, comprising:
a receiving module configured to receive live broadcast information of a target live broadcast room, the live broadcast information comprising identity information of the target live broadcast room;
a screening module configured to screen screen recording videos on a platform according to the identity information of the target live broadcast room to obtain screen recording videos associated with the identity information of the target live broadcast room;
and a synthesis module configured to synthesize the associated screen recording videos to obtain a live playback video.
8. A video processing apparatus, comprising:
a screen recording video generation module configured to respond to a screen recording trigger operation of a user on a target live broadcast room and generate a screen recording video containing identity information of the target live broadcast room;
and an upload module configured to upload the screen recording video to a platform.
9. A video processing system comprising a client and a server;
the client is configured to respond to a screen recording trigger operation of a user on a target live broadcast room, generate a screen recording video containing identity information of the target live broadcast room, and upload the screen recording video to a platform;
the server is configured to receive live broadcast information of the target live broadcast room, the live broadcast information comprising identity information of the target live broadcast room; screen the screen recording videos on the platform according to the identity information of the target live broadcast room to obtain the screen recording videos associated with the identity information of the target live broadcast room; and synthesize the associated screen recording videos to obtain a live playback video.
10. A storage medium in which instructions, when executed by a processor of a server, enable the server to perform the video processing method of any of claims 1 to 4, or the video processing method of any of claims 5 to 6.
CN201911214569.7A 2019-12-02 2019-12-02 Video processing method, device, system and storage medium Active CN111031333B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911214569.7A CN111031333B (en) 2019-12-02 2019-12-02 Video processing method, device, system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911214569.7A CN111031333B (en) 2019-12-02 2019-12-02 Video processing method, device, system and storage medium

Publications (2)

Publication Number Publication Date
CN111031333A true CN111031333A (en) 2020-04-17
CN111031333B CN111031333B (en) 2022-04-22

Family

ID=70207748

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911214569.7A Active CN111031333B (en) 2019-12-02 2019-12-02 Video processing method, device, system and storage medium

Country Status (1)

Country Link
CN (1) CN111031333B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101202895A (en) * 2007-09-18 2008-06-18 深圳市同洲电子股份有限公司 Method and system for playback of live program
US20130094830A1 (en) * 2011-10-17 2013-04-18 Microsoft Corporation Interactive video program providing linear viewing experience
US20160064035A1 (en) * 2014-08-27 2016-03-03 International Business Machines Corporation Multi-source video input
WO2018082634A1 (en) * 2016-11-04 2018-05-11 优酷网络技术(北京)有限公司 Interaction data processing method and device of live-streaming video
CN109842804A (en) * 2017-11-24 2019-06-04 腾讯科技(深圳)有限公司 Processing method and server, the computer storage medium of audio, video data
CN108305632A (en) * 2018-02-02 2018-07-20 深圳市鹰硕技术有限公司 A kind of the voice abstract forming method and system of meeting
CN108235141A (en) * 2018-03-01 2018-06-29 北京网博视界科技股份有限公司 Live video turns method, apparatus, server and the storage medium of fragmentation program request
CN110392281A (en) * 2018-04-20 2019-10-29 腾讯科技(深圳)有限公司 Image synthesizing method, device, computer equipment and storage medium
CN109309843A (en) * 2018-07-25 2019-02-05 北京达佳互联信息技术有限公司 Video distribution method, terminal and server

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112788270A (en) * 2020-12-31 2021-05-11 平安养老保险股份有限公司 Video backtracking method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN111031333B (en) 2022-04-22

Similar Documents

Publication Publication Date Title
US9514783B2 (en) Video editing with connected high-resolution video camera and video cloud server
KR102082816B1 (en) Method for improving the resolution of streaming files
CN108989885B (en) Video file transcoding system, segmentation method, transcoding method and device
JP6570646B2 (en) Audio video file live streaming method, system and server
CN108683826B (en) Video data processing method, video data processing device, computer equipment and storage medium
US8855471B2 (en) Media generation system
CN116647628A (en) Remote cloud-based video production system in an environment with network delay
JP2016536945A (en) Video providing method and video providing system
US20100199151A1 (en) System and method for producing importance rate-based rich media, and server applied to the same
US11164604B2 (en) Video editing method and apparatus, computer device and readable storage medium
CN106657090B (en) Multimedia stream processing method and device and embedded equipment
US9928876B2 (en) Recording medium recorded with multi-track media file, method for editing multi-track media file, and apparatus for editing multi-track media file
US11200915B2 (en) Method for capturing and recording high-definition video and audio output as broadcast by commercial streaming service providers
CN108200482A (en) A kind of cross-platform high resolution audio and video playback method, system and client
CN111031333B (en) Video processing method, device, system and storage medium
US20220377121A1 (en) Distributed network recording system with synchronous multi-actor recording
KR20210064353A (en) Image synthesis method, apparatus, computer device and readable storage medium
US20220377126A1 (en) Distributed network recording system with multi-user audio manipulation and editing
JP7169456B2 (en) Video playback speed control method, device, device and storage medium
US11005908B1 (en) Supporting high efficiency video coding with HTTP live streaming
CN110401845B (en) First screen playing method and device, computer equipment and storage medium
JP6275906B1 (en) Program and method for reproducing moving image content, and system for distributing and reproducing moving image content
WO2020177468A1 (en) Method and apparatus for controlling content presentation, and server and storage medium
CN114615522B (en) Low-delay streaming media transcoding and distributing processing method
RU2690163C2 (en) Information processing device and information processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant