CN110740346A - Video data processing method, device, server, terminal and storage medium - Google Patents

Video data processing method, device, server, terminal and storage medium

Info

Publication number
CN110740346A
Authority
CN
China
Prior art keywords
video
frame
client
composite
playing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911012307.2A
Other languages
Chinese (zh)
Other versions
CN110740346B (en)
Inventor
耿振健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN201911012307.2A priority Critical patent/CN110740346B/en
Publication of CN110740346A publication Critical patent/CN110740346A/en
Application granted granted Critical
Publication of CN110740346B publication Critical patent/CN110740346B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/23424Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/239Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests
    • H04N21/2393Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests involving handling client requests
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4884Data services, e.g. news ticker for displaying subtitles

Abstract

The present disclosure provides a video data processing method, device, server, terminal and storage medium, relating to the technical field of network video. After receiving a video confluence play request sent by a client, the server replaces the picture data of a designated area of the corresponding video frame in a second video with the picture data of the video frame of a first video to form a composite video frame, generates a composite video stream from the composite video frames obtained by frame-by-frame synthesis, and sends it to the client, so that the client plays the first video and the second video in different areas of the video playing picture according to the received composite video stream. The client can thus play the two videos simultaneously using a single player, without using two mutually stacked players to play the first video and the second video respectively, which saves resource consumption on the client.

Description

Video data processing method, device, server, terminal and storage medium
Technical Field
The present disclosure relates to the field of network video technologies, and in particular, to video data processing methods, apparatuses, servers, terminals, and storage media.
Background
With the development of internet technology, people have become popular to watch short videos and live videos online through a network. Through the video playing client, the anchor can upload the short video recorded by the anchor to the video playing platform, share the short video with the user through the video playing platform, and the anchor can also carry out live broadcast through the video playing platform. The user can watch the short video shared by the anchor through a video playing client on the terminal, or enter a live broadcast room of the anchor to watch the live broadcast video of the anchor.
Generally speaking, after a user selects a short video to watch, the video playing client sends a video playing request to a server of the video playing platform, and the server sends the short video specified by the request to the video playing client as a video stream, so that the video playing client can play the short video.
At present, a video playing client cannot play short videos and live videos simultaneously.
Disclosure of Invention
The embodiments of the present disclosure provide a video data processing method and apparatus, a server, a terminal and a storage medium, which are used to solve the problem in the prior art that short videos and live videos cannot be played simultaneously.
In a first aspect, an embodiment of the present disclosure provides a video data processing method, applied to a server, the method including:
receiving a video confluence playing request sent by a client, wherein the video confluence playing request indicates that a first video and a second video are to be played simultaneously;
synthesizing the video frames of the first video and the video frames of the second video frame by frame to obtain composite video frames, wherein each composite video frame is obtained by replacing the picture data of a designated area of the corresponding video frame in the second video with the picture data of the video frame of the first video;
generating a composite video stream from the composite video frame;
and sending the composite video stream to the client.
According to the video data processing method provided by the embodiment of the present disclosure, after a video confluence play request sent by a client is received, the picture data of the video frame of a first video is used to replace the picture data of a designated area of the corresponding video frame in a second video to form a composite video frame; a composite video stream is generated from the composite video frames obtained by frame-by-frame synthesis and sent to the client, enabling the client to play the first video and the second video in different areas of the video playing picture according to the received composite video stream. The client can therefore play the two videos simultaneously using a single player, without using two mutually stacked players to play the first video and the second video respectively, which saves resource consumption on the client.
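As a purely illustrative sketch (not part of the claimed method), the per-frame replacement step can be modeled as copying the first video's pre-scaled frame into the designated area of the second video's frame. The frame representation and the `composite_frame` helper below are hypothetical:

```python
def composite_frame(first_frame, second_frame, region):
    """Overlay the first video's (pre-scaled) frame onto the designated
    area of the second video's frame. Frames are row-major pixel grids;
    region = (top, left, height, width) in pixels."""
    top, left, h, w = region
    out = [row[:] for row in second_frame]  # copy; leave the original intact
    for y in range(h):
        out[top + y][left:left + w] = first_frame[y][:w]
    return out

# Toy example: a 3x4 "first video" frame composited into the top-left
# corner of a 6x8 "second video" frame (1 = first video, 0 = second).
second = [[0] * 8 for _ in range(6)]
first = [[1] * 4 for _ in range(3)]
frame = composite_frame(first, second, (0, 0, 3, 4))
```

A real server would perform the same replacement on decoded YUV or RGB planes before re-encoding the composite frame.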
In some possible implementations, synthesizing the video frames of the first video and the video frames of the second video frame by frame to obtain composite video frames includes:
synthesizing the video frames of the first video and the video frames of the second video frame by frame according to a preset position proportion relationship.
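Under one assumed reading of the preset position proportion relationship, it specifies the designated area's offset and size as fractions of the second video's frame; the `region_from_proportion` helper below is a hypothetical sketch of that mapping:

```python
def region_from_proportion(frame_w, frame_h, prop):
    """prop = (x_ratio, y_ratio, w_ratio, h_ratio): position and size of
    the designated area as fractions of the second video's frame size."""
    xr, yr, wr, hr = prop
    return (int(frame_h * yr), int(frame_w * xr),   # top, left
            int(frame_h * hr), int(frame_w * wr))   # height, width

# e.g. the first video occupies a quarter-size area in the top-left corner
r = region_from_proportion(1280, 720, (0.0, 0.0, 0.25, 0.25))
print(r)  # -> (0, 0, 180, 320)
```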
In some possible implementations, after generating a composite video stream from the composite video frames, the method further includes:
adding the audio frames of the first video and the audio frames of the second video, respectively, to the composite video stream.
In the above method, the audio frames of the first video and the audio frames of the second video are respectively added to the composite video stream, so that the client can play the audio frames of the first video or the audio frames of the second video according to a received selection instruction.
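One hedged way to picture this: the composite stream carries a single video track plus two tagged audio tracks, and the client decodes only the selected one. The packet model below is a hypothetical simplification, not the actual container format:

```python
def mux_two_audio_tracks(video_frames, first_audio, second_audio):
    """Interleave composite video frames with both audio tracks; each
    packet is tagged with a track id so the client can pick one."""
    packets = []
    for i, v in enumerate(video_frames):
        packets.append(("video", i, v))
        if i < len(first_audio):
            packets.append(("audio-first", i, first_audio[i]))
        if i < len(second_audio):
            packets.append(("audio-second", i, second_audio[i]))
    return packets

def client_decode(packets, chosen_track):
    """Play video packets plus only the audio packets of the chosen track."""
    return [p for p in packets if p[0] in ("video", chosen_track)]

stream = mux_two_audio_tracks(["v0", "v1"], ["a0", "a1"], ["b0", "b1"])
played = client_decode(stream, "audio-first")
```

In practice this corresponds to carrying alternate audio tracks in the stream container and selecting one at decode time.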
In some possible implementations, the first video is a live video and the second video is a short video, or the first video is a short video and the second video is a live video.
The method provides a more efficient browsing experience for the user: the user can watch a live video while watching short videos through the client, or watch short videos while watching a live video, which improves viewing efficiency.
In some possible implementations, the first video is a live video, the second video is a short video, and before the picture data of the video frame of the first video is used to replace the picture data of the designated area of the corresponding video frame in the second video, the method further includes:
acquiring bullet screen data of the first video;
adding the bullet screen data to the picture data of the video frames of the first video.
According to the method, the bullet screen (barrage) data is added to the picture data of the live video on the server side, so that the bullet screen data can be displayed in the correct position during confluence playing; meanwhile, the client does not need to separately obtain the bullet screen data and merge it with the picture data of the live video, which saves program resources and data transmission resources of the client.
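A minimal sketch of one sub-step: selecting which bullet screen comments belong on a given live-video frame before they are drawn into its picture data. The item format, timestamps, and the 3-second on-screen duration are assumptions for illustration:

```python
def barrage_for_frame(barrage_items, frame_ts, on_screen_s=3.0):
    """Return the bullet screen texts visible at frame timestamp frame_ts.
    Each item is (start_ts, text); a comment stays visible on_screen_s."""
    return [text for start, text in barrage_items
            if start <= frame_ts < start + on_screen_s]

items = [(0.0, "hello"), (2.5, "nice"), (10.0, "late")]
visible = barrage_for_frame(items, 2.6)
print(visible)  # -> ['hello', 'nice']
```

The selected texts would then be rendered onto the live video frame's picture data before the confluence step.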
In some possible implementations, the first video is a live video, the second video is a short video, and before receiving the video confluence play request sent by the client, the method further includes:
if it is determined that the anchor of the second video is live while the client is playing the second video, notifying the client to display video confluence prompt information.
In this method, when the client plays the short video of a certain anchor, the server automatically detects whether the anchor is live; if so, the client is notified to display the video confluence prompt information, so that the user can choose, according to his or her own preference, whether to play the anchor's short video and live video in confluence.
In some possible implementations, after sending the composite video stream to the client, the method further includes:
if a single video playing request sent by the client is received, sending to the client, according to the video playing time carried in the single video playing request, a video stream of the first video or the second video that the single video playing request indicates should be played.
In this method, if the user chooses to play only one of the videos at the client while the two videos are being played in confluence, the server sends the video frames starting from the playing time to the client as a video stream, according to the video playing time carried in the single video playing request, so that the single video connects accurately with the video played in confluence and playback continues without gaps.
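The gapless switchover can be sketched as mapping the play time carried in the request to the first frame index the server should stream; the frame-rate handling and rounding below are illustrative assumptions:

```python
def resume_frame_index(play_time_s, fps):
    """Map the play time carried in a single video playing request to the
    index of the first frame to send, so that the single video continues
    exactly where the confluence playback stopped."""
    return round(play_time_s * fps)

# e.g. the client stopped confluence playback at 12.5 s of a 30 fps video
idx = resume_frame_index(12.5, 30)
print(idx)  # -> 375
```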
In a second aspect, an embodiment of the present disclosure provides a video data processing method, applied to a client, the method including:
in response to a received video confluence operation, sending a video confluence playing request to a server, wherein the video confluence playing request indicates that a first video and a second video are to be played simultaneously;
receiving a composite video stream sent by the server, wherein the composite video stream is generated by the server from composite video frames, and each composite video frame is obtained by replacing the picture data of a designated area of the corresponding video frame in the second video with the picture data of the video frame of the first video;
and playing the composite video stream.
In this method, the client responds to the user's video confluence operation by sending a video confluence playing request to the server, so that the server sends a composite video stream to the client, and the client receives and plays it. Since the composite video stream is a single video stream, the client can play it using a single player, achieving the effect of playing two videos simultaneously with one player; the first video and the second video do not need to be played with two mutually stacked players, which saves resource consumption on the client.
In some possible implementations, the video frames of the first video and the video frames of the second video in the composite video frame are arranged according to a preset position proportion relationship.
In some possible implementations, playing the composite video stream includes:
during playing of the composite video stream, playing the audio frames of the first video or the audio frames of the second video in the composite video stream according to the audio selected by the user.
In this method, the user can flexibly choose at the client to play the audio frames of the first video or the audio frames of the second video.
In some possible implementations, the first video is a live video and the second video is a short video, or the first video is a short video and the second video is a live video.
The method provides a more efficient browsing experience for the user: the user can watch a live video while watching short videos through the client, or watch short videos while watching a live video, which improves viewing efficiency.
In some possible implementations, the second video is a short video;
before sending a video confluence playing request to the server in response to the received video confluence operation, the method further includes:
during playing of the second video, displaying video confluence prompt information according to a notification from the server.
In this method, the client can display the video confluence prompt information according to the notification from the server, so that the user can choose, according to his or her own preference, whether to play the short video and the live video in confluence.
In some possible implementations, playing the composite video stream further includes:
during playing of the composite video stream, in response to the user clicking the designated area, sending a single video playing request to the server, wherein the single video playing request indicates that the first video is to be played and carries the video playing time of the first video; or
during playing of the composite video stream, in response to the user clicking a close-video key, sending a single video playing request to the server, wherein the single video playing request indicates that the second video is to be played and carries the video playing time of the second video.
In a third aspect, an embodiment of the present disclosure provides a video data processing apparatus, including:
a request receiving unit, configured to receive a video confluence playing request sent by a client, wherein the video confluence playing request indicates that a first video and a second video are to be played simultaneously;
a video merging unit, configured to synthesize the video frames of the first video and the video frames of the second video frame by frame to obtain composite video frames and to generate a composite video stream from the composite video frames, wherein each composite video frame is obtained by replacing the picture data of a designated area of the corresponding video frame in the second video with the picture data of the video frame of the first video;
and a video sending unit, configured to send the composite video stream to the client.
In some possible implementations, the video merging unit is further configured to:
add the audio frames of the first video and the audio frames of the second video, respectively, to the composite video stream, so that the client plays the audio frames of the first video or the audio frames of the second video according to the user's selection.
In some possible implementations, the first video is a live video and the second video is a short video, or the first video is a short video and the second video is a live video.
In some possible implementations, the first video is a live video, the second video is a short video, and the video merging unit is further configured to:
acquire bullet screen data of the first video;
add the bullet screen data to the picture data of the video frames of the first video.
In some possible implementations, the first video is a live video, the second video is a short video, and the apparatus further includes:
a notification sending unit, configured to notify the client to display video confluence prompt information if it is determined that the anchor of the second video is live while the client is playing the second video.
In some possible implementations, the video sending unit is further configured to:
if a single video playing request sent by the client is received, send to the client, according to the video playing time carried in the single video playing request, a video stream of the first video or the second video that the single video playing request indicates should be played.
In a fourth aspect, an embodiment of the present disclosure provides a video data processing apparatus, including:
a request sending unit, configured to send a video confluence playing request to a server in response to a received video confluence operation, wherein the video confluence playing request indicates that a first video and a second video are to be played simultaneously;
a video receiving unit, configured to receive a composite video stream sent by the server, wherein the composite video stream is generated by the server from composite video frames, and each composite video frame is obtained by replacing the picture data of a designated area of the corresponding video frame in the second video with the picture data of the video frame of the first video;
and a video playing unit, configured to play the composite video stream.
In some possible implementations, the video playing unit is configured to:
during playing of the composite video stream, play the audio frames of the first video or the audio frames of the second video in the composite video stream according to the audio selected by the user.
In some possible implementations, the first video is a live video and the second video is a short video, or the first video is a short video and the second video is a live video.
In some possible implementations, the second video is a short video, and the apparatus further includes:
a confluence prompt unit, configured to display video confluence prompt information, according to a notification from the server, during playing of the second video.
In some possible implementations, the video playing unit is further configured to:
during playing of the composite video stream, in response to the user clicking the designated area, send a single video playing request to the server, wherein the single video playing request indicates that the first video is to be played and carries the video playing time of the first video; or
during playing of the composite video stream, in response to the user clicking a close-video key, send a single video playing request to the server, wherein the single video playing request indicates that the second video is to be played and carries the video playing time of the second video.
In a fifth aspect, an embodiment of the present disclosure provides a server, including one or more processors and a memory for storing instructions executable by the processors;
wherein the processor is configured to execute the instructions to perform the steps of:
receiving a video confluence playing request sent by a client, wherein the video confluence playing request indicates that a first video and a second video are to be played simultaneously;
synthesizing the video frames of the first video and the video frames of the second video frame by frame to obtain composite video frames, wherein each composite video frame is obtained by replacing the picture data of a designated area of the corresponding video frame in the second video with the picture data of the video frame of the first video;
generating a composite video stream from the composite video frame;
and sending the composite video stream to the client.
In some possible implementations, the processor further performs:
adding the audio frames of the first video and the audio frames of the second video, respectively, to the composite video stream, so that the client plays the audio frames of the first video or the audio frames of the second video according to the user's selection.
In some possible implementations, the first video is a live video and the second video is a short video, or the first video is a short video and the second video is a live video.
In some possible implementations, the first video is a live video, the second video is a short video, and the processor further performs:
acquiring bullet screen data of the first video;
adding the bullet screen data to the picture data of the video frames of the first video.
In some possible implementations, the first video is a live video, the second video is a short video, and the processor further performs:
if it is determined that the anchor of the second video is live while the client is playing the second video, notifying the client to display video confluence prompt information.
In some possible implementations, the processor further performs:
if a single video playing request sent by the client is received, sending to the client, according to the video playing time carried in the single video playing request, a video stream of the first video or the second video that the single video playing request indicates should be played.
In a sixth aspect, an embodiment of the present disclosure provides a terminal, including one or more processors and a memory for storing instructions executable by the processors;
wherein the processor is configured to execute the instructions to perform the steps of:
in response to a received video confluence operation, sending a video confluence playing request to a server, wherein the video confluence playing request indicates that a first video and a second video are to be played simultaneously, the first video being a live video and the second video a short video, or the first video being a short video and the second video a live video;
receiving a composite video stream sent by the server, wherein the composite video stream is generated by the server from composite video frames, and each composite video frame is obtained by replacing the picture data of a designated area of the corresponding video frame in the second video with the picture data of the video frame of the first video;
and playing the composite video stream.
In some possible implementations, the processor specifically performs:
during playing of the composite video stream, playing the audio frames of the first video or the audio frames of the second video in the composite video stream according to the audio selected by the user.
In some possible implementations, the first video is a live video and the second video is a short video, or the first video is a short video and the second video is a live video.
In some possible implementations, the second video is a short video, and the processor further performs:
during playing of the second video, displaying video confluence prompt information according to a notification from the server.
In some possible implementations, the processor specifically performs:
during playing of the composite video stream, in response to the user clicking the designated area, sending a single video playing request to the server, wherein the single video playing request indicates that the first video is to be played and carries the video playing time of the first video; or
during playing of the composite video stream, in response to the user clicking a close-video key, sending a single video playing request to the server, wherein the single video playing request indicates that the second video is to be played and carries the video playing time of the second video.
In a seventh aspect, an embodiment of the present disclosure provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the video data processing method according to the first or the second aspect.
For the technical effects brought by any of the implementations of the third to seventh aspects, reference may be made to the technical effects of the corresponding implementations of the first or the second aspect, which are not repeated here.
Drawings
In order to illustrate the technical solutions in the embodiments of the present disclosure more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present disclosure; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic view of an application scenario of a video data processing method according to an embodiment of the present disclosure;
Fig. 2 is an interaction flowchart of a video data processing method provided by an embodiment of the present disclosure;
Fig. 3 is a schematic diagram of a process for generating a composite video frame according to an embodiment of the present disclosure;
Fig. 4 is a schematic diagram of an audio selection interface provided by an embodiment of the present disclosure;
Fig. 5 is an interaction flowchart of another video data processing method provided by an embodiment of the present disclosure;
Fig. 6 is an interaction flowchart of another video data processing method provided by an embodiment of the present disclosure;
Fig. 7 is a schematic flowchart of a video data processing method according to an embodiment of the present disclosure;
Fig. 8 is a schematic flowchart of another video data processing method provided by an embodiment of the present disclosure;
Fig. 9 is a block diagram of a video data processing apparatus according to an embodiment of the present disclosure;
Fig. 10 is a block diagram of another video data processing apparatus provided by an embodiment of the present disclosure;
Fig. 11 is a block diagram of another video data processing apparatus according to an embodiment of the present disclosure;
Fig. 12 is a block diagram of another video data processing apparatus according to an embodiment of the present disclosure;
Fig. 13 is a block diagram of a server according to an embodiment of the present disclosure;
Fig. 14 is a block diagram of a terminal provided by an embodiment of the present disclosure.
Detailed Description
In order to make the purposes, technical solutions and advantages of the present disclosure clearer, the present disclosure will be described in further detail below with reference to the accompanying drawings. It should be understood that the described embodiments are only some, rather than all, of the embodiments of the present disclosure.
Some terms in the embodiments of the present disclosure are explained below to facilitate understanding by those skilled in the art.
(1) Client: an application program capable of running on terminal electronic equipment such as a smart phone, a tablet computer or a computer.
(2) Live video: video that is interactively broadcast over the network using the internet and streaming media technology; it is one of the mainstream expression modes of current internet media, and live broadcasting is an emerging way of online social interaction. The anchor can use independently controllable audio and video acquisition equipment to capture audio and video and generate a live video, which is uploaded to a server through the network; the server then sends the live video to the client of each user watching the live broadcast.
(3) Short video: a type of internet content transmission mode, referring to high-frequency pushed video content that can be played by a client and is suitable for watching while on the move or during short leisure breaks. Because the content is short, a short video can be released individually or as part of a series column. An anchor who live-broadcasts video on a new media platform generally also records short videos, which the server of the new media platform can send to users for sharing.
(4) The terms "first", "second", etc. in the embodiments of the present disclosure are used to distinguish similar objects and are not necessarily used to describe a particular sequence or chronological order.
The present disclosure is described in further detail with reference to the figures and the embodiments.
Fig. 1 is a schematic view of an application scenario of video data processing methods provided by an embodiment of the present disclosure, in the application scenario, terminals of multiple users (such as terminals 101 to 103 shown in fig. 1) are connected to a server 300 through a network 200, where the terminals may be devices with communication functions, such as a mobile phone, a palm computer, a PC, and a player, and may also be devices simulated by a virtual machine or a simulator, the network 200 may be a wired network or a wireless network, the server 300 may be a server of a certain video playing platform, the terminals may be terminals of a live main broadcast in the process of live broadcasting, and may also be terminals of users who are watching live broadcasting.
The anchor's terminal 103 uploads the live video recorded by the anchor to the server 300 in real time through the network 200, and may also upload short videos recorded by the anchor to the server 300 so as to share them with users through the server. The users' terminals 101 and 102 have clients installed that can play short videos and live videos. A user can enter the anchor's live broadcast room through the client to watch the live video being played, and the server 300 sends the anchor's live video as a live video stream to each terminal of the users watching the live broadcast, such as the terminal 101 and the terminal 102. The user can also watch short videos shared by the anchor through the client. After a user clicks a short video, the client sends a video play request for that short video to the server 300, and the server 300 sends the short video specified by the request to the client as a video stream, so that the client plays it.
At present, if a user wants to watch a live video while watching a short video, or watch a short video while watching a live video, the client cannot meet this requirement and play the short video and the live video at the same time.
To solve the above problem, embodiments of the present disclosure provide a method, an apparatus, a server, a terminal, and a storage medium for processing video data. After receiving a video merge play request sent by a client, the server replaces the picture data of a designated area of the corresponding video frame in a second video with the picture data of each video frame of a first video to form a composite video frame, generates a composite video stream from the composite video frames obtained by this frame-by-frame synthesis, and sends the composite video stream to the client. The client then plays the first video and the second video in different areas of one video playing picture according to the received composite video stream. In this way, the client can play two videos simultaneously using a single player, instead of playing the first video and the second video with two players stacked on top of each other, which saves resource consumption on the client.
The application scenario in fig. 1 is merely an example of an application scenario for implementing the embodiments of the present application; the embodiments of the present application are not limited to this scenario.
Fig. 2 shows an interaction diagram of a video data processing method provided by an embodiment of the present disclosure, which can be performed by the server 300 shown in fig. 1 and a client installed on a user's terminal.
In one embodiment, as shown in fig. 2, a video data processing method provided by an embodiment of the present disclosure includes the following steps:
in step S201, the client receives a video merging operation of the user.
The video merge operation is an operation input when the user wants to watch a first video and a second video simultaneously. The first video may be a live video and the second video a short video, or the first video may be a short video and the second video a live video.
In one alternative embodiment, the user may trigger the video merge operation while watching the short video. For example, assuming the second video is the short video, the user sends a video play request for the second video to the server through the client, and the server sends the video stream of the second video to the client.
In another alternative embodiment, the user may trigger the video merge operation before watching the short video. For example, still assuming the second video is the short video, the user may click the summary display area of the second video to watch it.
Similarly, the user may also trigger the video merging operation in the process of watching the live video, which is not described herein again.
Step S202, the client sends a video merge play request to the server.
In response to the received video merge operation, the client sends a video merge play request to the server, where the video merge play request indicates that the first video and the second video are to be played simultaneously.
In step S203, the server composites the video frames of the first video and the video frames of the second video frame by frame to obtain composite video frames.
The server receives the video merge play request sent by the client and synthesizes the video frames of the first video and the video frames of the second video frame by frame to obtain composite video frames, that is, it merges the color-coded (YUV) data of the two videos. If the frame rates of the first video and the second video are the same, the frames can be paired directly; if the frame rates differ, the frame rate of one video can be dynamically adjusted according to the decoding frame rate of the first video or the second video, ensuring that no obvious frame rate difference exists between the two videos.
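The frame-rate alignment described above can be sketched as follows — a minimal illustration, not the patent's actual implementation, assuming each decoded frame carries a presentation timestamp in milliseconds. For every frame of the higher-rate video, the nearest frame of the other video (by timestamp) is reused, so the two streams stay in step without a visible rate mismatch:

```python
def pair_frames(base_ts, other_ts):
    """For each presentation timestamp of the base (higher-rate) video, pick
    the index of the other video's frame whose timestamp is closest, so the
    two streams can be composited frame by frame."""
    pairs = []
    j = 0
    for t in base_ts:
        # advance while the next frame of the other video is at least as close to t
        while j + 1 < len(other_ts) and abs(other_ts[j + 1] - t) <= abs(other_ts[j] - t):
            j += 1
        pairs.append(j)
    return pairs

# second video at ~30 fps (33 ms apart), first video at ~15 fps (66 ms apart)
base = [i * 33 for i in range(6)]    # 0, 33, 66, 99, 132, 165
other = [i * 66 for i in range(3)]   # 0, 66, 132
print(pair_frames(base, other))      # each 30 fps frame reuses the nearest 15 fps frame
```

With these inputs every 15 fps frame is simply held for two 30 fps output frames, which is the "no obvious frame rate difference" behaviour the paragraph describes.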
A composite video frame is obtained by replacing the picture data of a designated area of a video frame in the second video with the picture data of the corresponding video frame of the first video. The composite video frame can be obtained by synthesizing the video frame of the first video and the video frame of the second video according to a preset position proportion relation. For example, the position proportion of the video frame of the first video to the video frame of the second video may be 1:3: when the display screen is divided into four equal parts, the video frame of the first video is positioned at the upper right corner of the display screen and occupies one quarter of its area, and the video frame of the second video occupies the remaining three quarters.
Illustratively, for any video frame of the first video and the corresponding video frame of the second video, as shown in fig. 3, the server cuts out the picture data of the designated area of the second video's frame and then tiles the picture data of the corresponding frame of the first video into that area, resulting in a composite video frame.
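The cut-and-tile step above can be sketched as a region replacement — a toy example, assuming frames are plain rows of luma samples and the first video's frame has already been scaled to the quarter-area region at the upper right:

```python
def composite_frame(first_frame, second_frame):
    """Replace the top-right quarter of the second video's frame with the
    (pre-scaled) frame of the first video, per the 1:3 area ratio above."""
    h, w = len(second_frame), len(second_frame[0])
    fh, fw = h // 2, w // 2                 # quarter-area region: half height, half width
    out = [row[:] for row in second_frame]  # copy so the source frame is untouched
    for y in range(fh):
        for x in range(fw):
            out[y][w - fw + x] = first_frame[y][x]
    return out

second = [[0] * 4 for _ in range(4)]  # 4x4 frame of the second video
first = [[9] * 2 for _ in range(2)]   # 2x2 frame of the first video, pre-scaled
print(composite_frame(first, second))
```

In a real pipeline the same replacement would be applied to the Y, U, and V planes of each YUV frame; the single plane here is just to keep the sketch short.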
In step S204, the server generates a composite video stream according to the composite video frame.
In step S205, the server sends the composite video stream to the client.
The server sends the composite video stream to the client that sent the video merge play request.
Step S206, the client plays the composite video stream.
The client receives the composite video stream sent by the server and starts the player to play the video picture of the composite video stream.
According to the video data processing method provided by the embodiment of the present disclosure, after receiving a video merge play request sent by a client, the server replaces the picture data of a designated area of the corresponding video frame in a second video with the picture data of the video frame of a first video to form a composite video frame, generates a composite video stream from the composite video frames obtained by frame-by-frame synthesis, and sends it to the client, so that the client plays the first video and the second video in different areas of one video playing picture according to the received composite video stream. The client can thus play two videos simultaneously using a single player, instead of playing the first video and the second video with two stacked players, which saves resource consumption on the client.
In one alternative embodiment, in order to allow the user to freely choose which video to listen to, the server may add both the audio frames of the first video and the audio frames of the second video to the composite video stream, so that the client plays the audio frames of the first video or of the second video according to the user's selection.
For example, while playing the composite video stream, the client may present options corresponding to the different audio tracks in a settings area or settings menu as shown in fig. 4. After the settings menu is expanded, the user sees options for the first video and the second video: if the user selects the first video, the client plays the audio frames of the first video; if the user selects the second video, the client plays the audio frames of the second video.
In another alternative embodiment, the server may add only the audio frames of the first video or only those of the second video to the composite video stream, according to the user's selection at the client.
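The first of these two audio options can be sketched as follows — a toy model, not a real container format, where each packet is tagged with a track name (the names `audio_first`/`audio_second` are illustrative, not from the patent) so both tracks travel in the one composite stream and the client decodes only the selected one:

```python
def mux_composite(video_frames, first_audio, second_audio):
    """Tag packets by track so both audio tracks ride in one composite stream."""
    stream = [("video", f) for f in video_frames]
    stream += [("audio_first", a) for a in first_audio]    # first video's audio
    stream += [("audio_second", a) for a in second_audio]  # second video's audio
    return stream

def client_play(stream, chosen_track):
    """The client renders every video packet but only the selected audio track."""
    video = [p for track, p in stream if track == "video"]
    audio = [p for track, p in stream if track == chosen_track]
    return video, audio

s = mux_composite(["v0", "v1"], ["a0"], ["b0"])
print(client_play(s, "audio_second"))
```

Switching the settings-menu selection then amounts to calling `client_play` with the other track name; no new request to the server is needed in this variant.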
In one embodiment, assuming the first video is a live video and the second video is a short video, the server can obtain the bullet screen data of the first video from a bullet screen server or from each terminal sending bullet screen messages. After obtaining the bullet screen data of the first video, the server adds the bullet screen data to the picture data of the first video's frames, and then uses this picture data to replace the picture data of the designated area of the corresponding frames in the second video, so as to obtain composite video frames.
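The "add bullet screen data to the picture data" step above can be sketched as stamping a rendered text bitmap onto the first video's frame before it is composited — a minimal illustration, assuming the bullet message has already been rasterized into a small 0/1 glyph bitmap (real implementations would render text with a font library):

```python
def overlay_bullet(frame, glyph, top, left, luma=255):
    """Stamp a rendered bullet-screen glyph bitmap (1 = text pixel) onto a
    frame at the given position, writing bright luma where the text is."""
    out = [row[:] for row in frame]
    for dy, glyph_row in enumerate(glyph):
        for dx, on in enumerate(glyph_row):
            if on:
                out[top + dy][left + dx] = luma
    return out

frame = [[0] * 4 for _ in range(3)]  # a dark 4x3 frame of the first video
glyph = [[1, 0], [1, 1]]             # a toy 2x2 "glyph" standing in for rendered text
print(overlay_bullet(frame, glyph, top=1, left=1))
```

Because the overlay happens server-side, the bullet screen scrolls inside the first video's quarter-area picture once the frame is composited, with no extra work on the client.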
Considering that a user may come to prefer watching only one of the videos while watching the first video and the second video simultaneously, the client may provide controls in the merged video picture for watching the first video alone or closing it. For example, a transparent film layer may be set over the picture area of the first video to receive the user's click operation: if the user clicks the picture area of the first video, this is taken as an operation instructing to play the first video; if the user clicks the close button, this is taken as an operation instructing to close the first video and play the second video.
Specifically, while playing the composite video stream, if the user clicks the designated area, such as the picture area of the first video, the client responds by sending a single video play request to the server. The single video play request indicates that the first video is to be played and carries the video play time of the first video, where the video play time refers to the current playing position of the first video and is generally marked by a timestamp.
While playing the composite video stream, if the user clicks the close video button, the client responds by sending a single video play request to the server. This single video play request indicates that the second video is to be played and carries the video play time of the second video, i.e. the current playing position of the second video, usually indicated by a timestamp. The server receives the single video play request sent by the client and, according to the video play time carried in the request, sends to the client the video stream of the second video that the request indicates should be played. Here the video stream of the second video comprises the video frame at the video play time and the frames following it. The client plays the second video according to the newly received video stream.
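The server side of this resume-from-timestamp behaviour can be sketched as a simple slice over timestamped frames — an illustration under the assumption that frames are stored as `(timestamp_ms, frame)` pairs:

```python
def slice_from(play_time_ms, timed_frames):
    """Server side of a single video play request: return the frame at the
    carried video play time and every frame after it, so playback resumes
    where the merged picture left off."""
    return [f for ts, f in timed_frames if ts >= play_time_ms]

frames = [(0, "f0"), (40, "f1"), (80, "f2"), (120, "f3")]
print(slice_from(80, frames))  # resumes from the 80 ms frame onward
```

The same helper serves both branches: the carried play time comes from the first video when the user clicks its picture area, and from the second video when the user clicks the close video button.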
For easier understanding, the following describes the video data processing process through two specific embodiments. In one embodiment, as shown in fig. 5, a video data processing method provided by an embodiment of the present disclosure includes the following steps:
In step S501, the client receives a video play operation of the user.
The video play operation indicates that the second video, which is a short video, is to be played.
Step S502, the client sends a video playing request to the server.
In step S503, the server obtains the video data of the second video indicated to be played by the video playing request.
The video data of the second video is stored in the server or in a database of a video playing platform connected to the server. The video data of the second video comprises the video frames of the second video, and a plurality of consecutive video frames of the second video form the video stream of the second video.
Step S504, the server sends the video stream of the second video to the client.
In step S505, the client plays the video stream of the second video.
Step S506, in the process of playing the second video by the client, the server monitors that the anchor of the second video is live.
Wherein the anchor of the second video refers to an anchor that uploads the second video.
Step S507, the server sends a live broadcast notification to the client, and notifies the client that the anchor of the second video is live broadcast.
Step S508, the client displays the video merge prompt information according to the notification from the server.
In step S509, the client receives the video merging operation of the user.
In step S510, the client sends a video merge play request to the server.
The video merge play request indicates that the first video and the second video are to be played simultaneously, where the first video is the anchor's live video.
In step S511, the server composites the video frames of the first video and the video frames of the second video frame by frame to obtain composite video frames.
In a composite video frame, the video frame of the first video and the video frame of the second video are arranged according to the preset position proportion relation.
In step S512, the server generates a composite video stream from the composite video frames, the audio frames of the first video, and the audio frames of the second video.
In step S513, the server sends the composite video stream to the client.
In step S514, the client plays the composite video stream.
This method provides a brand-new live broadcast exposure mode and a more efficient browsing experience, in which a live video display is inserted into the scene of playing a short video.
In step S515, the client receives an operation of clicking the designated area by the user.
While playing the composite video stream, the client monitors whether the user clicks the designated area, which may be the picture area of the first video.
In step S516, the client sends a single video playing request to the server.
The single video play request indicates that the first video is to be played and carries the video play time of the first video, which indicates the current playing position of the first video.
In step S517, the server obtains the video data of the first video that the single video play request indicates should be played.
The server can obtain the video data of the first video from the anchor's terminal. The video data of the first video comprises the video frames of the first video starting from the video play time, and a plurality of consecutive video frames of the first video starting from the video play time form the video stream of the first video.
In step S518, the server sends the video stream of the first video to the client.
In step S519, the client plays the video stream of the first video.
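The server's role in steps S510 through S518 can be condensed into a small dispatcher — a toy sketch whose message field names (`type`, `video`, `play_time`) are illustrative, not the patent's wire format:

```python
def server_handle(request, first_stream, second_stream, composite_stream):
    """Toy dispatcher for the requests exchanged in steps S510-S518."""
    if request["type"] == "merge_play":
        # step S513: answer a video merge play request with the composite stream
        return composite_stream
    if request["type"] == "single_play":
        # steps S516-S518: resume the requested video from its carried play time
        stream = first_stream if request["video"] == "first" else second_stream
        return [f for ts, f in stream if ts >= request["play_time"]]
    raise ValueError("unknown request type")

first = [(0, "L0"), (40, "L1"), (80, "L2")]   # timestamped live-video frames
second = [(0, "S0"), (40, "S1")]              # timestamped short-video frames
composite = ["c0", "c1", "c2"]
print(server_handle({"type": "merge_play"}, first, second, composite))
print(server_handle({"type": "single_play", "video": "first", "play_time": 40},
                    first, second, composite))
```

A real server would of course stream packets incrementally rather than return lists, but the branching mirrors the two request types the embodiment defines.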
In another embodiment, as shown in fig. 6, the video data processing method provided by an embodiment of the present disclosure includes the following steps:
In step S601, the client receives a video play operation of the user.
The video play operation indicates that the second video, which is a short video, is to be played.
Step S602, the client sends a video playing request to the server.
In step S603, the server obtains the video data of the second video indicated to be played by the video playing request.
In step S604, the server sends the video stream of the second video to the client.
In step S605, the client plays the video stream of the second video.
Step S606, in the process of playing the second video by the client, the server monitors that the anchor of the second video is live.
Wherein the anchor of the second video refers to an anchor that uploads the second video.
Step S607, the server sends a live broadcast notification to the client, and notifies the client that the anchor of the second video is live broadcast.
In step S608, the client displays the video merge prompt information according to the notification from the server.
In step S609, the client receives the video merge operation of the user.
In step S610, the client sends a video merge play request to the server.
The video merge play request indicates that the first video and the second video are to be played simultaneously, where the first video is the anchor's live video.
In step S611, the server composites the video frames of the first video and the video frames of the second video frame by frame to obtain composite video frames.
In step S612, the server generates a composite video stream from the composite video frames, the audio frames of the first video, and the audio frames of the second video.
In step S613, the server sends the composite video stream to the client.
In step S614, the client plays the composite video stream.
In step S615, the client receives an operation of the user to click the close video button.
The close video button is used to close the first video.
In step S616, the client sends a single video playing request to the server.
The single video play request indicates that the second video is played.
In step S617, the server obtains the video data of the second video indicated to be played by the single video playing request.
The video data of the second video comprises video frames starting from the video playing time in the second video. Starting from the video playing time, a plurality of continuous video frames of the second video form a video stream of the second video.
In step S618, the server sends the video stream of the second video to the client.
Step S619, the client plays the video stream of the second video.
It should be noted that the application scenarios described in the embodiment of the present disclosure are for more clearly illustrating the technical solutions of the embodiment of the present disclosure, and do not constitute a limitation on the technical solutions provided in the embodiment of the present disclosure, and as a new application scenario appears, a person skilled in the art may know that the technical solutions provided in the embodiment of the present disclosure are also applicable to similar technical problems.
In an embodiment of the present disclosure, a video data processing method executed by a server is shown in fig. 7 and includes the following steps:
Step S701, receiving a video merge play request sent by a client.
The video merge play request indicates that a first video and a second video are to be played simultaneously, where the first video is a live video and the second video is a short video, or the first video is a short video and the second video is a live video.
In one alternative embodiment, while the client plays the second video, the server monitors whether the anchor of the second video is live; if it determines that the anchor of the second video is live, it notifies the client to display the video merge prompt information.
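This monitoring-and-notify step can be sketched as follows — a minimal illustration, assuming the server keeps a session map from each client to the anchor whose short video it is playing (the structures and the `show_merge_prompt` notice name are assumptions, not the patent's design):

```python
def on_anchor_live(live_anchors, sessions):
    """When the server learns an anchor went live, notify every client that
    is currently playing one of that anchor's short videos."""
    notices = []
    for client_id, short_video_author in sessions.items():
        if short_video_author in live_anchors:
            notices.append((client_id, "show_merge_prompt"))
    return notices

# client c1 is watching a short video by anchorA, who has just gone live
print(on_anchor_live({"anchorA"}, {"c1": "anchorA", "c2": "anchorB"}))
```

Each notified client then displays the merge prompt, and only if the user accepts does a video merge play request follow.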
Step S702, compositing the video frames of the first video and the video frames of the second video frame by frame to obtain composite video frames.
Before replacing the picture data of the designated area of the corresponding video frame in the second video with the picture data of a video frame of the first video, the server may first obtain the bullet screen data of the first video and add it to the picture data of the first video's frames.
Step S703, generating a composite video stream from the composite video frames.
When generating the composite video stream, the audio frames of the first video and of the second video may both be added to the composite video stream, so that the client can play the audio frames of the first video or of the second video according to the user's selection; alternatively, only the audio frames of the first video or of the second video may be added to the composite video stream according to the user's selection.
Step S704, sending the composite video stream to the client.
Optionally, while the client plays the merged video, if a single video play request sent by the client is received: if the request indicates that the first video is to be played, the video stream of the first video is sent to the client according to the video play time of the first video carried in the request; if the request indicates that the second video is to be played, the video stream of the second video is sent to the client according to the video play time of the second video carried in the request.
In an embodiment of the present disclosure, as shown in fig. 8, a video data processing method executed by a client includes the following steps:
Step S801, in response to the received video merge operation, sending a video merge play request to the server.
The video merge play request indicates that a first video and a second video are to be played simultaneously, where the first video is a live video and the second video is a short video, or the first video is a short video and the second video is a live video.
Optionally, while playing the second video, if the client receives a notification sent by the server that the anchor is live, the client displays the video merge prompt information according to the notification. If the user chooses, according to the displayed prompt, to play the two videos merged, the client responds to the received video merge operation by sending a video merge play request to the server.
Step S802, receiving the composite video stream returned by the server.
The composite video stream is generated by the server from composite video frames, which are obtained by replacing the picture data of a designated area of the corresponding video frame in the second video with the picture data of a video frame of the first video.
Step S803, the composite video stream is played.
When playing the composite video stream, the audio frames of the first video or of the second video in the composite video stream may be played according to the audio track selected by the user.
While playing the composite video stream, if it is detected that the user clicks the designated area, a single video play request is sent to the server in response; the single video play request indicates that the first video is to be played and carries the video play time of the first video.
While playing the composite video stream, if it is detected that the user clicks the close video button, a single video play request is sent to the server in response; this single video play request indicates that the second video is to be played and carries the video play time of the second video.
Based on the same inventive concept as the video data processing method shown in fig. 7, the present disclosure also provides a video data processing apparatus. As shown in fig. 9, the video data processing apparatus includes:
a request receiving unit 91, configured to receive a video merge play request sent by a client, where the video merge play request indicates that a first video and a second video are to be played simultaneously;
a video merging unit 92, configured to composite the video frames of the first video and the video frames of the second video frame by frame to obtain composite video frames, and to generate a composite video stream from the composite video frames, where a composite video frame is obtained by replacing the picture data of a designated area of the corresponding video frame in the second video with the picture data of a video frame of the first video;
a video sending unit 93, configured to send the composite video stream to the client.
In one possible implementation, the video merging unit 92 is further configured to:
add the audio frames of the first video and the audio frames of the second video to the composite video stream, respectively, so that the client plays the audio frames of the first video or of the second video according to the user's selection.
In one possible implementation, the first video is a live video and the second video is a short video, or the first video is a short video and the second video is a live video.
In one possible implementation, the first video is a live video, the second video is a short video, and the video merging unit 92 is further configured to:
obtain the bullet screen data of the first video;
add the bullet screen data to the picture data of the video frames of the first video.
In one possible implementation, the first video is a live video, the second video is a short video, and, as shown in fig. 10, the apparatus further includes:
a notification sending unit 94, configured to notify the client to display the video merge prompt information if it is determined that the anchor of the second video is live while the client is playing the second video.
In one possible implementation, the video sending unit 93 is further configured to:
if a single video play request sent by the client is received, send to the client the video stream of the first video or of the second video that the single video play request indicates should be played, according to the video play time carried in the request.
Based on the same inventive concept as the video data processing method shown in fig. 8, the present disclosure also provides a video data processing apparatus. As shown in fig. 11, the video data processing apparatus includes:
a request sending unit 111, configured to send a video merge play request to the server in response to the received video merge operation, where the video merge play request indicates that a first video and a second video are to be played simultaneously;
a video receiving unit 112, configured to receive the composite video stream returned by the server, where the composite video stream is generated by the server from composite video frames, each obtained by the server replacing the picture data of a designated area of the corresponding video frame in the second video with the picture data of a video frame of the first video;
a video playing unit 113, configured to play the composite video stream.
In one possible implementation, the video playing unit 113 may be further configured to:
when playing the composite video stream, play the audio frames of the first video or of the second video in the composite video stream according to the audio track selected by the user.
In one possible implementation, the first video is a live video and the second video is a short video, or the first video is a short video and the second video is a live video.
In one possible implementation, the second video is a short video, and, as shown in fig. 12, the apparatus may further include a merge prompt unit 114, configured to present the video merge prompt information according to the server's notification while the second video is playing.
In one possible implementation, the video playing unit 113 may be further configured to:
while playing the composite video stream, send a single video play request to the server in response to the user clicking the designated area, where the single video play request indicates that the first video is to be played and carries the video play time of the first video; or
while playing the composite video stream, send a single video play request to the server in response to the user clicking the close video button, where the single video play request indicates that the second video is to be played and carries the video play time of the second video.
According to the video data processing apparatus provided by the embodiment of the present disclosure, after a video merge play request sent by a client is received, the picture data of a video frame of a first video replaces the picture data of a designated area of the corresponding video frame in a second video to form a composite video frame; a composite video stream is generated from the composite video frames obtained by frame-by-frame synthesis and sent to the client, so that the client plays the first video and the second video in different areas of one video playing picture according to the received composite video stream. The client can thus play two videos simultaneously using a single player, instead of playing the first video and the second video with two stacked players, which saves resource consumption on the client.
Based on the same inventive concept as the video data processing method shown in fig. 7, an embodiment of the present disclosure further provides a server, for example the server 300 shown in fig. 1. Fig. 13 shows a block diagram of one possible server, which, as shown in fig. 13, may include a memory 1301 and a processor 1302.
The memory 1301 may mainly include a program storage area, which may store an operating system, application programs required for at least one function, and the like, and a data storage area, which may store data created according to the use of the server, such as short videos, bullet screen data, and the like.
The memory 1301 may be a volatile memory, such as a random-access memory (RAM); the memory 1301 may also be a non-volatile memory, such as, but not limited to, a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); or the memory 1301 may be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 1301 may also be a combination of the above.
The processor 1302 may include one or more central processing units (CPUs), graphics processing units (GPUs), digital signal processors (DSPs), or the like.
In the embodiment of the present disclosure, the memory 1301 and the processor 1302 are connected through a bus 1303, which is shown by a thick line in fig. 13; the connection manner between other components is only schematically illustrated and is not limited in the embodiment of the present disclosure.
Specifically, the processor 1302 is configured to implement the following steps when executing the computer program stored in the memory 1301:
receiving a video merge play request sent by a client, where the video merge play request indicates that a first video and a second video are to be played simultaneously;
compositing the video frames of the first video and the video frames of the second video frame by frame to obtain composite video frames, where a composite video frame is obtained by replacing the picture data of the designated area of the corresponding video frame in the second video with the picture data of the video frame of the first video;
generating a composite video stream from the composite video frames;
sending the composite video stream to the client.
In one possible implementation, the processor 1302 further performs:
adding the audio frames of the first video and the audio frames of the second video to the composite video stream, respectively, so that the client plays the audio frames of the first video or of the second video according to the user's selection.
In one possible implementation, the first video is a live video and the second video is a short video, or the first video is a short video and the second video is a live video.
In one possible implementation, the first video is a live video, the second video is a short video, and the processor 1302 further performs:
obtaining the bullet screen data of the first video;
adding the bullet screen data to the picture data of the video frames of the first video.
In one possible implementation, the first video is a live video, the second video is a short video, and the processor 1302 further performs:
if it is determined that the anchor of the second video is live while the client is playing the second video, notifying the client to display the video merge prompt information.
In one possible implementation, the processor 1302 further performs:
if a single video play request sent by the client is received, sending to the client the video stream of the first video or of the second video that the request indicates should be played, according to the video play time carried in the request.
Based on the same inventive concept as the video data processing method shown in fig. 8, the present disclosure also provides a terminal, for example, the terminal 101 or the terminal 102 shown in fig. 1, on which a client capable of playing short videos and live videos is installed. As shown in fig. 14, the terminal may include a Radio Frequency (RF) circuit 1401, a power source 1402, a processor 1403, a memory 1404, an input unit 1405, a display unit 1406, a camera 1407, a communication interface 1408, a Wireless Fidelity (WiFi) module 1409, and the like. It can be understood by those skilled in the art that the structure shown in fig. 14 does not constitute a limitation of the terminal device; the terminal device provided in the present disclosure may include more or fewer components than those shown in the figure, combine some components, or arrange the components differently.
The following describes the various components of the terminal in detail with reference to fig. 14:
the RF circuit 1401 may be used for receiving and transmitting data during communication or conversation, and particularly, the RF circuit 1401 transmits live data transmitted from a server to the processor 1403 for processing, and transmits uplink data to a base station, and in general, the RF circuit 1401 includes, but is not limited to, an antenna, at least amplifiers, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like.
The wireless communications may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and the like.
WiFi is a short-distance wireless transmission technology, and the terminal can connect to an Access Point (AP) through the WiFi module 1409, thereby gaining access to a data network. The WiFi module 1409 may be used for receiving and sending data during communication; for example, the terminal may receive a video stream sent by a server through the WiFi module 1409.
The terminal may be physically connected to other devices via the communication interface 1408. Optionally, the communication interface 1408 is connected to the communication interface of the other device through a cable, so as to implement data transmission between the terminal and the other device.
Although fig. 14 shows communication modules such as the RF circuit 1401, the WiFi module 1409, and the communication interface 1408, it is understood that at least one of the above components, or another communication module (such as a Bluetooth module) for realizing communication, exists in the terminal to perform data transmission.
For example, when the terminal is a mobile phone, the terminal may include the RF circuit 1401, and may further include the WiFi module 1409; when the terminal is a computer, the terminal may include the communication interface 1408 and may further include the WiFi module 1409; when the terminal is a tablet computer, the terminal may include the WiFi module.
The input unit 1405 may be used to receive numeric or character information input by a user and generate key signal inputs related to user settings and function control of the terminal.
Optionally, the input unit 1405 may include a touch panel 1453 and other input devices 1454.
The touch panel 1453, also referred to as a touch screen, may collect touch operations of a user (for example, operations of the user on or near the touch panel 1453 using any suitable object or accessory such as a finger, a stylus, etc.) and implement corresponding operations according to a preset program, such as user selection of a gift, etc. Alternatively, the touch panel 1453 may include two parts, i.e., a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts the touch information into touch point coordinates, and sends the touch point coordinates to the processor 1403, and can receive and execute commands sent by the processor 1403. In addition, the touch panel 1453 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave.
Optionally, the other input devices 1454 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 1406 may be used to display information input by a user or information provided to the user and various menus of the terminal. The display unit 1406 is a display system of the terminal, and is used for presenting interfaces, such as live pictures, and implementing human-computer interaction.
The display unit 1406 may include a display panel 1461. Alternatively, the Display panel 1461 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-emitting diode (OLED), or the like.
Further, the touch panel 1453 can cover the display panel 1461. When the touch panel 1453 detects a touch operation on or near it, the operation is transmitted to the processor 1403 to determine the type of the touch event, and the processor 1403 then provides a corresponding visual output on the display panel 1461 according to the type of the touch event.
Although in fig. 14 the touch panel 1453 and the display panel 1461 are implemented as two separate components to realize the input and output functions of the terminal, in some embodiments the touch panel 1453 and the display panel 1461 may be integrated to realize the input and output functions of the terminal.
The camera 1407 is used for realizing the shooting function of the terminal, i.e., shooting pictures or videos. For example, the anchor may shoot live video through the camera 1407. The camera 1407 can also be used to implement a scanning function of the terminal, to scan an object such as a two-dimensional code or barcode.
The terminal also includes a power supply 1402 (such as a battery) for powering the various components. Optionally, the power source 1402 may be logically connected to the processor 1403 through a power management system, so as to implement functions of managing charging, discharging, power consumption, and the like through the power management system.
The memory 1404 may be either volatile or nonvolatile memory, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited to such. The memory 1404 is used for storing the computer programs executed by the processor 1403.
The processor 1403 may include one or more central processing units (CPUs), graphics processing units (GPUs), digital processing units, or the like.
In particular, the processor 1403 is configured to implement the following steps when invoking the computer program stored in the memory 1404:
in response to a received video confluence operation, sending a video confluence playing request to a server, wherein the video confluence playing request indicates that a first video and a second video are played simultaneously; the first video is a live video and the second video is a short video, or the first video is a short video and the second video is a live video;
receiving a composite video stream sent by the server, wherein the composite video stream is generated by the server from composite video frames, and each composite video frame is obtained by replacing the picture data of a designated area of the corresponding video frame of the second video with the picture data of the video frame of the first video;
and playing the composite video stream.
In one possible implementation, the processor 1403 specifically performs:
in the process of playing the composite video stream, playing the audio frame of the first video or the audio frame of the second video in the composite video stream according to the audio track selected by the user.
In one possible implementation, the first video is a live video and the second video is a short video, or the first video is a short video and the second video is a live video.
In one possible implementation, the second video is a short video, and the processor 1403 further performs:
and in the process of playing the second video, displaying the video confluence prompt information according to the notification of the server.
In one possible implementation, the processor 1403 specifically performs:
in the process of playing the composite video stream, in response to an operation of the user clicking a designated area, sending a single video playing request to the server, wherein the single video playing request indicates that the first video is played and carries the video playing time of the first video; or
in the process of playing the composite video stream, in response to an operation of the user clicking a video close button, sending a single video playing request to the server, wherein the single video playing request indicates that the second video is played and carries the video playing time of the second video.
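On the client side, the single video playing request described above could be built as in the sketch below; the JSON shape and field names are illustrative assumptions, not a wire format defined by the patent:

```python
import json

def build_single_play_request(video_id: str, playback_position: float) -> str:
    """Build the single video playing request sent when the user clicks the
    designated area (keep the first video) or the close button (keep the
    second video), carrying the current playing time of that video."""
    return json.dumps({
        "type": "single_video_play",
        "video_id": video_id,            # "first" or "second"
        "play_time": playback_position,  # seconds into the merged playback
    })
```

The server uses the carried `play_time` to resume the single video stream from the same position, so switching out of confluence playback is seamless.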
Although not shown, the terminal may further include at least one sensor, an audio circuit, and the like, which are not described in detail herein.
The embodiments of the present disclosure further provide a computer storage medium, which stores computer-executable instructions used to implement any of the video data processing methods described in the embodiments of the present disclosure.
In some possible embodiments, various aspects of the methods provided by the present disclosure may also be implemented in the form of a program product including program code. When the program product is run on a computer device, the program code causes the computer device to perform the steps of the methods according to the various exemplary embodiments of the present disclosure described above in this specification; for example, the computer device may perform any of the video data processing methods described in the embodiments of the present disclosure.
More specific examples (a non-exhaustive list) of the readable storage medium include an electrical connection having one or more wires, a portable diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications can be made in the present disclosure without departing from the spirit and scope of the disclosure. Thus, if such modifications and variations of the present disclosure fall within the scope of the claims of the present disclosure and their equivalents, the present disclosure is intended to include such modifications and variations as well.

Claims (10)

1. A video data processing method, applied to a server, the method comprising:
receiving a video confluence playing request sent by a client, wherein the video confluence playing request indicates that a first video and a second video are played simultaneously;
synthesizing the video frame of the first video and the video frame of the second video frame by frame to obtain a composite video frame, wherein the composite video frame is obtained by replacing the picture data of a designated area of the corresponding video frame of the second video with the picture data of the video frame of the first video;
generating a composite video stream from the composite video frame;
and sending the composite video stream to the client.
2. The method according to claim 1, wherein synthesizing the video frame of the first video and the video frame of the second video frame by frame to obtain a composite video frame comprises:
synthesizing the video frame of the first video and the video frame of the second video frame by frame according to a preset position proportion relationship.
3. The method of claim 1, wherein after generating a composite video stream from the composite video frames, the method further comprises:
adding the audio frame of the first video and the audio frame of the second video to the composite video stream, respectively.
4. The method of claim 1, wherein the first video is a live video and the second video is a short video, or the first video is a short video and the second video is a live video.
5. A video data processing method, applied to a client, the method comprising:
in response to a received video confluence operation, sending a video confluence playing request to a server, wherein the video confluence playing request indicates that a first video and a second video are played simultaneously;
receiving a composite video stream sent by the server, wherein the composite video stream is generated by the server from composite video frames, and each composite video frame is obtained by replacing the picture data of a designated area of the corresponding video frame of the second video with the picture data of the video frame of the first video;
and playing the composite video stream.
6. A video data processing apparatus, comprising:
a request receiving unit, wherein the request receiving unit is used for receiving a video confluence playing request sent by a client, and the video confluence playing request indicates that a first video and a second video are played simultaneously;
a video merging unit, wherein the video merging unit is used for synthesizing the video frame of the first video and the video frame of the second video frame by frame to obtain a composite video frame and generating a composite video stream from the composite video frames, and the composite video frame is obtained by replacing the picture data of a designated area of the corresponding video frame of the second video with the picture data of the video frame of the first video;
and the video sending unit is used for sending the composite video stream to the client.
7. A video data processing apparatus, comprising:
a request sending unit, wherein the request sending unit is used for sending a video confluence playing request to a server in response to a received video confluence operation, and the video confluence playing request indicates that a first video and a second video are played simultaneously;
a stream receiving unit, wherein the stream receiving unit is used for receiving a composite video stream sent by the server, the composite video stream is generated by the server from composite video frames, and each composite video frame is obtained by replacing the picture data of a designated area of the corresponding video frame of the second video with the picture data of the video frame of the first video;
and the video playing unit is used for playing the composite video stream.
8. A server, comprising one or more processors and a memory storing instructions executable by the processors;
wherein the processor is configured to execute the instructions to implement the video data processing method of any of claims 1-4.
9. A terminal, comprising one or more processors and a memory for storing instructions executable by the processors;
wherein the processor is configured to execute the instructions to implement the video data processing method of claim 5.
10. A computer-readable storage medium, in which a computer program is stored, wherein the computer program, when executed by a processor, implements the video data processing method according to any of claims 1-4 or claim 5.
CN201911012307.2A 2019-10-23 2019-10-23 Video data processing method, device, server, terminal and storage medium Active CN110740346B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911012307.2A CN110740346B (en) 2019-10-23 2019-10-23 Video data processing method, device, server, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911012307.2A CN110740346B (en) 2019-10-23 2019-10-23 Video data processing method, device, server, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN110740346A true CN110740346A (en) 2020-01-31
CN110740346B CN110740346B (en) 2022-04-22

Family

ID=69270935

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911012307.2A Active CN110740346B (en) 2019-10-23 2019-10-23 Video data processing method, device, server, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN110740346B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112565693A (en) * 2020-11-30 2021-03-26 广东荣文科技集团有限公司 Method, system and equipment for monitoring video on demand
CN112637624A (en) * 2020-12-14 2021-04-09 广州繁星互娱信息科技有限公司 Live stream processing method, device, equipment and storage medium
CN115209222A (en) * 2022-06-15 2022-10-18 深圳市锐明技术股份有限公司 Video playing method and device, electronic equipment and readable storage medium
CN115484469A (en) * 2021-06-15 2022-12-16 北京字节跳动网络技术有限公司 Wheat connecting system, method, device, equipment and storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015093113A1 (en) * 2013-12-16 2015-06-25 三菱電機株式会社 Video processing apparatus and video displaying apparatus
CN105430424A (en) * 2015-11-26 2016-03-23 广州华多网络科技有限公司 Video live broadcast method, device and system
CN106658145A (en) * 2016-12-27 2017-05-10 北京奇虎科技有限公司 Live data processing method and device
US20170187657A1 (en) * 2015-12-28 2017-06-29 Facebook, Inc. Systems and methods to selectively combine video streams
CN107018448A (en) * 2017-03-23 2017-08-04 广州华多网络科技有限公司 Data processing method and device
CN108055552A (en) * 2017-12-13 2018-05-18 广州虎牙信息科技有限公司 Direct broadcasting room barrage methods of exhibiting, device and corresponding terminal
WO2018095174A1 (en) * 2016-11-22 2018-05-31 广州华多网络科技有限公司 Control method, device, and terminal apparatus for synthesizing video stream of live streaming room
CN108462883A (en) * 2018-01-08 2018-08-28 平安科技(深圳)有限公司 A kind of living broadcast interactive method, apparatus, terminal device and storage medium
CN108900859A (en) * 2018-08-17 2018-11-27 广州酷狗计算机科技有限公司 Live broadcasting method and system
CN109151594A (en) * 2018-09-27 2019-01-04 广州虎牙信息科技有限公司 Direct playing and playback video broadcasting method, device and electronic equipment
US20190082207A1 (en) * 2016-03-31 2019-03-14 Viacom International Inc. Device, System, and Method for Hybrid Media Content Distribution

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015093113A1 (en) * 2013-12-16 2015-06-25 三菱電機株式会社 Video processing apparatus and video displaying apparatus
CN105430424A (en) * 2015-11-26 2016-03-23 广州华多网络科技有限公司 Video live broadcast method, device and system
US20170187657A1 (en) * 2015-12-28 2017-06-29 Facebook, Inc. Systems and methods to selectively combine video streams
US20190082207A1 (en) * 2016-03-31 2019-03-14 Viacom International Inc. Device, System, and Method for Hybrid Media Content Distribution
WO2018095174A1 (en) * 2016-11-22 2018-05-31 广州华多网络科技有限公司 Control method, device, and terminal apparatus for synthesizing video stream of live streaming room
CN106658145A (en) * 2016-12-27 2017-05-10 北京奇虎科技有限公司 Live data processing method and device
CN107018448A (en) * 2017-03-23 2017-08-04 广州华多网络科技有限公司 Data processing method and device
CN108055552A (en) * 2017-12-13 2018-05-18 广州虎牙信息科技有限公司 Direct broadcasting room barrage methods of exhibiting, device and corresponding terminal
CN108462883A (en) * 2018-01-08 2018-08-28 平安科技(深圳)有限公司 A kind of living broadcast interactive method, apparatus, terminal device and storage medium
CN108900859A (en) * 2018-08-17 2018-11-27 广州酷狗计算机科技有限公司 Live broadcasting method and system
CN109151594A (en) * 2018-09-27 2019-01-04 广州虎牙信息科技有限公司 Direct playing and playback video broadcasting method, device and electronic equipment

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112565693A (en) * 2020-11-30 2021-03-26 广东荣文科技集团有限公司 Method, system and equipment for monitoring video on demand
CN112637624A (en) * 2020-12-14 2021-04-09 广州繁星互娱信息科技有限公司 Live stream processing method, device, equipment and storage medium
CN115484469A (en) * 2021-06-15 2022-12-16 北京字节跳动网络技术有限公司 Wheat connecting system, method, device, equipment and storage medium
CN115484469B (en) * 2021-06-15 2024-01-09 北京字节跳动网络技术有限公司 Wheat connecting system, method, device, equipment and storage medium
CN115209222A (en) * 2022-06-15 2022-10-18 深圳市锐明技术股份有限公司 Video playing method and device, electronic equipment and readable storage medium
CN115209222B (en) * 2022-06-15 2024-02-09 深圳市锐明技术股份有限公司 Video playing method and device, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
CN110740346B (en) 2022-04-22

Similar Documents

Publication Publication Date Title
CN110740346B (en) Video data processing method, device, server, terminal and storage medium
CN110536177B (en) Video generation method and device, electronic equipment and storage medium
JP6324625B2 (en) Live interactive system, information transmission method, information reception method and apparatus
CN110597774B (en) File sharing method, system, device, computing equipment and terminal equipment
US8789094B1 (en) Optimizing virtual collaboration sessions for mobile computing devices
EP3242462B1 (en) Cooperative control method for user equipment, user equipment, and communication system
CN111050203B (en) Video processing method and device, video processing equipment and storage medium
CN108924464B (en) Video file generation method and device and storage medium
US20180221762A1 (en) Video generation system, control device, and processing device
CN106803993B (en) Method and device for realizing video branch selection playing
WO2017181796A1 (en) Program interaction system, method, client and back-end server
CN104363476A (en) Online-live-broadcast-based team-forming activity method, device and system
CN110418207B (en) Information processing method, device and storage medium
CN102918835A (en) Controllable device companion data
CN106028092A (en) Television screenshot sharing method and device
CN105117216A (en) Management method and device for notification bar of mobile terminal and mobile terminal
CN110830813B (en) Video switching method and device, electronic equipment and storage medium
US11351467B2 (en) Information processing apparatus and game image distributing method
CN111246225B (en) Information interaction method and device, electronic equipment and computer readable storage medium
CN113810732B (en) Live content display method, device, terminal, storage medium and program product
CN113271472B (en) Game live broadcast method and device, electronic equipment and readable storage medium
TW201327202A (en) Cooperative provision of personalized user functions using shared and personal devices
CN114025180A (en) Game operation synchronization system, method, device, equipment and storage medium
WO2020238840A1 (en) Standalone program run method, apparatus, device, and storage medium
CN113596555B (en) Video playing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant