WO2021135334A1 - Method and apparatus for processing live streaming content, and system - Google Patents

Method and apparatus for processing live streaming content, and system

Info

Publication number
WO2021135334A1
WO2021135334A1 (PCT/CN2020/112856; CN2020112856W)
Authority
WO
WIPO (PCT)
Prior art keywords
video stream
live
original video
additional information
live broadcast
Prior art date
Application number
PCT/CN2020/112856
Other languages
English (en)
Chinese (zh)
Inventor
王云
Original Assignee
广州华多网络科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广州华多网络科技有限公司 filed Critical 广州华多网络科技有限公司
Publication of WO2021135334A1 publication Critical patent/WO2021135334A1/fr

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/2187 Live feed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/231 Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/235 Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/438 Interfacing the downstream path of the transmission network originating from a server, e.g. retrieving MPEG packets from an IP network
    • H04N21/4383 Accessing a communication channel
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442 Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213 Monitoring of end-user related data
    • H04N21/44218 Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program

Definitions

  • This application relates to the field of live broadcast, and in particular to a method, device, and system for processing live broadcast content.
  • Short videos have emerged and become integrated into people's lives.
  • Watching short videos on various short-video platforms has become one of the common ways for people to entertain themselves and pass the time.
  • Some short-video content consists of highlight moments recorded while a host is broadcasting live.
  • Producing such highlights usually requires the live broadcast platform to sift through the live content of high-quality hosts, locate the highlight moments in their broadcasts, and then publish them to a third-party short-video platform.
  • However, the live broadcast picture of a host on the live broadcast platform often includes not only the host's original picture obtained through the camera, but also advertisements such as stickers and subtitles added by the host himself, or the copyright mark of the live broadcast platform.
  • These advertisements and logos must appear in the host's live broadcast picture on the live broadcast platform, but must not appear in the highlight short videos published to a third-party short-video platform.
  • If a short video is generated directly from the selected host's live picture and posted to a third-party short-video platform, the generated pictures usually carry these advertisements and logos.
  • As a result, the highlight short videos posted on the third-party short-video platform also include the advertisements and logos.
  • the present application provides a method, device, live broadcast system, equipment, and computer-readable storage medium for processing live broadcast content.
  • In a first aspect, a method for processing live content is provided, including: receiving an original video stream sent by a host client through a video channel and live broadcast additional information sent through a signaling channel; synthesizing the original video stream and the additional information into a live video stream according to the additional information, and sending the live video stream to viewer clients; and saving specified image frames of the original video stream based on an evaluation index of user behavior.
  • In a second aspect, a method for processing live content is provided, including: acquiring an original video stream, and sending the original video stream and live broadcast additional information to a server, so that the server synthesizes them into a live video stream, sends the live video stream to viewer clients, and saves specified image frames of the original video stream based on an evaluation index of user behavior, where the original video stream is sent through a video channel and the additional information is sent through a signaling channel.
  • a live broadcast system includes an anchor client, a server, and an audience client;
  • the anchor client is used to obtain the original video stream
  • the server is configured to synthesize the original video stream and the additional information of the live broadcast into a live video stream, and send the live video stream to the viewer client;
  • the audience client is used to receive and display the live video stream.
  • an apparatus for processing live content including:
  • the receiving module is used to receive the original video stream sent by the host client through the video channel and the live broadcast additional information sent through the signaling channel;
  • a synthesis module for synthesizing the original video stream and the live broadcast additional information into a live video stream according to the live broadcast additional information
  • the transmission module is used to send the live video stream to the audience client;
  • the saving module is configured to save the designated image frame of the original video stream based on the evaluation index of the user behavior.
  • an apparatus for processing live content including:
  • the acquisition module is used to acquire the original video stream
  • the sending module is configured to send the original video stream and the live broadcast additional information to the server respectively, so that the server synthesizes the original video stream and the additional information into the live video stream according to the additional information signaling, sends the live video stream to the viewer clients, and saves the specified image frames of the original video stream based on the evaluation index of user behavior, wherein the original video stream is sent through a video channel and the additional information is sent through a signaling channel.
  • In this application, the host client sends the original video stream and the live broadcast additional information to the server through different transmission channels, and the server, rather than the pushing host client, performs the synthesis of the additional information and the original video stream into the live video stream.
  • As a result, viewer clients can watch the live video stream without perceiving any difference, while the clean original video stream can be shared to other video platforms in a timely and convenient manner.
  • Fig. 1a is a schematic diagram of a picture of an original video stream collected in a live broadcast scenario according to an exemplary embodiment of the present application;
  • Fig. 1b is a schematic diagram of a synthesized live broadcast picture according to an exemplary embodiment of the present application;
  • Fig. 2 is a flowchart of a method for processing live content according to an exemplary embodiment of the present application;
  • Fig. 3 is a flowchart of another method for processing live content according to an exemplary embodiment of the present application;
  • Fig. 3a is an application example in an application scenario of the present application;
  • Fig. 4 is a schematic diagram of an apparatus for processing live content according to an exemplary embodiment of the present application;
  • Fig. 5 is a schematic diagram of another apparatus for processing live content according to an exemplary embodiment of the present application;
  • Fig. 6 is a schematic diagram of an electronic device according to an exemplary embodiment of the present application.
  • Although the terms first, second, third, etc. may be used in this application to describe various information, the information should not be limited by these terms; the terms are only used to distinguish information of the same type from one another.
  • For example, without departing from the scope of this application, first information may also be referred to as second information, and similarly, second information may also be referred to as first information.
  • Depending on the context, the word "if" as used herein can be interpreted as "when", "upon", or "in response to determining".
  • As noted above, the live broadcast picture of a host on the live broadcast platform often includes not only the host's original picture obtained through the camera, or the picture the host captures from a terminal display interface, but also stickers and subtitles added by the host himself.
  • Fig. 1a shows the host's original picture obtained by the camera: no additional live broadcast information has been added, and only the image captured by the camera is displayed.
  • Fig. 1b shows the picture the host actually uses for the live broadcast for viewers to watch: it includes text, image advertisements, and copyright identification information added on top of the original camera picture.
  • These advertisements and logos in the live broadcast picture shown in Fig. 1b are required by the live broadcast platform, so they must appear in the host's live picture on the platform, but must not appear in the highlight short videos published to a third-party short-video platform.
  • This application provides a method for processing live content, in which the server synthesizes the original video stream sent by the host client and the live broadcast additional information into the live video stream, sends the live video stream to the viewer clients, and saves specified image frames of the original video stream based on an evaluation index of user behavior.
  • In this way, the live video stream synthesized from the original video stream and the additional information is used for the live broadcast, while the saved specified image frames of the original video stream can be published to a third-party video platform.
  • Fig. 2 is a flowchart of a method for processing live content shown in an exemplary embodiment of the application. As shown in Fig. 2, the method includes the following steps:
  • S201: Receive the original video stream sent by the host client through the video channel and the live broadcast additional information sent through the signaling channel;
  • S202: Synthesize the original video stream and the live broadcast additional information into a live video stream according to the additional information, and send the live video stream to the viewer clients;
  • S203: Save the specified image frames of the original video stream based on the evaluation index of user behavior.
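As a rough illustration of steps S201 to S203, the server-side flow can be sketched in Python. The `Frame` and `Overlay` types and the way compositing is represented are invented stand-ins; the patent does not define data formats or an implementation language:

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

# Illustrative types only; the patent does not specify wire formats.
@dataclass
class Overlay:
    kind: str      # e.g. "subtitle", "sticker", "watermark"
    payload: str   # text content or picture URL
    x: int
    y: int

@dataclass
class Frame:
    index: int
    pixels: bytes
    overlays: List[Overlay] = field(default_factory=list)

def process_live_content(
    original_frames: List[Frame],
    additional_info: List[Overlay],
    is_highlight: Callable[[int], bool],
) -> Tuple[List[Frame], List[Frame]]:
    """S201-S203 on the server: composite the signaling-channel overlays
    onto every frame for viewers, and separately keep the untouched
    original frames that the user-behavior index marks as highlights."""
    live_stream: List[Frame] = []
    saved_clean: List[Frame] = []
    for frame in original_frames:
        # S202: synthesize original frame + additional information.
        live_stream.append(Frame(frame.index, frame.pixels, list(additional_info)))
        # S203: save the clean original frame of a highlight moment.
        if is_highlight(frame.index):
            saved_clean.append(frame)
    return live_stream, saved_clean
```

The key point the sketch captures is that the composited stream and the saved highlight frames are two separate outputs: viewers only ever see the composited frames, while the saved frames never carry the overlays.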
  • The original image frames sent by the host client can be image frames captured in real time by the camera, showing whatever the host wants to broadcast; for example, during the live broadcast the host can capture himself or some pictures of the surrounding environment through the camera.
  • The original image frames can also be captured from the terminal display interface; for example, when the host is broadcasting a game or a movie, the game interface or movie playback interface displayed on a mobile or fixed device can be captured as the original image frames.
  • the live broadcast additional information signaling carries live broadcast additional information.
  • the live broadcast additional information may indicate subtitles and copyright identifications added to the original image frames, or texture pictures added manually by the host; these subtitles and pictures may be advertisement information.
  • the live broadcast additional information includes attribute information and location information.
  • the attribute information describes fixed attributes of the additional information: for subtitles it can be the text style, such as font, size, color, and background color; for pictures it can be the URL of the picture and its zoom factor.
  • the location information is the coordinates at which the additional information is added to the image frames of the original video stream.
  • the original video stream and the live broadcast additional information are then synthesized into the live video stream; specifically, they can be combined according to the attribute information and the location information.
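To make the attribute/location split concrete, a signaling-channel payload might look like the following sketch. The field names (`attributes`, `position`, `zoom`, etc.) and the use of JSON are assumptions for illustration; the patent does not specify an encoding:

```python
import json

# Hypothetical signaling payloads. Attribute information holds the fixed
# properties (text style, or picture URL plus zoom factor); location
# information holds the coordinates on the original image frame.
subtitle_signal = {
    "type": "subtitle",
    "attributes": {"font": "sans-serif", "size": 24,
                   "color": "#FFFFFF", "background": "#000000"},
    "position": {"x": 40, "y": 620},
    "text": "Welcome to the live room",
}

sticker_signal = {
    "type": "sticker",
    "attributes": {"url": "https://example.com/ad.png", "zoom": 0.5},
    "position": {"x": 1100, "y": 40},
}

def encode_signal(signal: dict) -> bytes:
    """Serialize one item of additional information for the signaling channel."""
    return json.dumps(signal, ensure_ascii=False).encode("utf-8")

def decode_signal(raw: bytes) -> dict:
    """Recover the attribute and location information on the server side."""
    return json.loads(raw.decode("utf-8"))
```

Because the additional information travels as structured signaling rather than being burned into the pixels, the server can apply it when compositing for viewers while leaving the video-channel frames untouched.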
  • the specified image frames are the original image frames determined by the server based on the evaluation index of user behavior.
  • the evaluation index may include at least one of the number of gifts and the activity of the public screen (chat), though other indicators can also be used; which parameter to choose as the evaluation index can be determined according to the characteristics of the specified image frames that need to be selected. For example, the number of gifts or the public-screen activity can serve as the evaluation index, or other parameters can be used instead.
  • the server receives the original video stream sent by the host client and sends the synthesized live video stream to the audience clients for viewing.
  • audiences can enter the live broadcast rooms of the hosts they like according to their personal preferences.
  • as the number of bullet-screen comments and gifts increases, or as the value of the gifts rises, the popularity ("heat") of the host's live broadcast room also rises.
  • when the server sends live video frames to the audience clients, it takes the real-time audience bullet screens, the number of audience gifts, the current room heat, and so on as evaluation indicators, collects real-time statistics for the host's live room corresponding to the current live video frames, and determines from these indicators whether the current live video frames belong to a highlight moment.
  • a highlight moment means that if, within a certain time period, the number of audience bullet screens, the number of audiences giving gifts, and the current popularity of the live broadcast room reach certain thresholds, the live video frames within that period are determined to constitute a highlight moment.
  • the specified image frames can be the original image frames corresponding to the live video frames within a highlight period, that is, the original image frames of the highlight moment. Because the server judges in real time whether the current moment is a highlight, it starts recording the received original image frames when it judges that a highlight has begun, and stops recording, saving the recorded material as a video file, when it judges that the current moment is no longer a highlight.
  • the format of the video file can be, for example, MP4 or AVI, and the server keeps judging whether the current moment is a highlight; once it judges that a new highlight has begun, it records and saves again. Since the original image frames of a highlight moment do not carry additional live broadcast information such as subtitles, copyright marks, or stickers manually added by the host, they can be published to other video platforms.
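The threshold-based highlight detection described above can be sketched as follows. The metric names (`barrage`, `gifts`, `heat`) and threshold values are illustrative assumptions; the patent leaves the concrete indicators and limits open:

```python
from typing import Dict, List, Tuple

def is_highlight(window: Dict[str, int], thresholds: Dict[str, int]) -> bool:
    """A time window counts as a highlight when every tracked metric
    (bullet-screen count, gift count, room heat, ...) meets its threshold."""
    return all(window.get(name, 0) >= limit for name, limit in thresholds.items())

def highlight_periods(
    windows: List[Dict[str, int]], thresholds: Dict[str, int]
) -> List[Tuple[int, int]]:
    """Walk per-window statistics in time order: open a recording when a
    highlight begins and close it when the metrics fall back below the
    thresholds. Returns half-open [start, end) window-index ranges."""
    periods: List[Tuple[int, int]] = []
    start = None
    for i, window in enumerate(windows):
        if is_highlight(window, thresholds):
            if start is None:
                start = i          # highlight begins: start recording
        elif start is not None:
            periods.append((start, i))   # highlight ended: save segment
            start = None
    if start is not None:
        periods.append((start, len(windows)))
    return periods
```

This mirrors the start/stop recording behavior in the text: each returned range corresponds to one saved video file of clean original frames.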
  • the process by which the server stores the specified files can be as follows: the server stores the original video stream in segments according to the time order in which it is received, and judges whether the saved segments contain specified image frames.
  • the judging can be performed in real time or asynchronously with the saving. For example, each time a file is saved, the server can query whether the file's time span includes a highlight moment. If it does, the server determines that original image frames of a highlight moment exist in the file, and merges and archives the file with the other files containing highlight frames.
  • if the file's time span does not include a highlight moment, the server determines that no highlight frames exist in the file, and the file can be deleted directly to avoid occupying storage space. Since the files archived in this way all contain original image frames of highlight moments, the server does not need to filter them again, and users can directly share these files to other video platforms when needed.
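The archive-or-delete decision for time-ordered segment files reduces to an interval-overlap test, sketched below. File names and the (start, end) representation are assumptions for illustration:

```python
from typing import List, Tuple

Span = Tuple[float, float]  # (start_time, end_time) of a saved segment file

def archive_or_delete(
    segment_files: List[Tuple[str, Span]], highlights: List[Span]
) -> Tuple[List[str], List[str]]:
    """Decide the fate of each segment file: files whose time span overlaps
    any highlight period are kept for merging and archiving; the rest are
    deleted to free storage, as described in the text above."""
    kept: List[str] = []
    deleted: List[str] = []
    for name, (start, end) in segment_files:
        # Half-open intervals [a, b) overlap iff a < d and c < b.
        overlaps = any(h_start < end and start < h_end
                       for h_start, h_end in highlights)
        (kept if overlaps else deleted).append(name)
    return kept, deleted
```

Running this once per saved file (or periodically over a batch) matches the asynchronous judging described above.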
  • the specified image frames may be distributed to the current live broadcast platform or to a third-party video platform.
  • the user can obtain the saved highlight video files from the server and edit them, for example by trimming them appropriately or adding special effects, and then distribute the processed video files to a third-party video platform for its viewers to watch these highlights, or to the current live broadcast platform for its own viewers.
  • Fig. 3 is a flowchart of a method for processing live content shown in an exemplary embodiment of the application. As shown in Fig. 3, the method includes the following steps:
  • S301: Acquire the original video stream;
  • S302: Send the original video stream and the live broadcast additional information signaling to the server respectively, so that the server synthesizes the original video stream and the additional information into the live video stream according to the additional information signaling, sends the live video stream to the viewer clients, and saves the specified image frames of the original video stream based on the evaluation index of user behavior.
  • This application also provides a live broadcast system, which includes an anchor client, a server, and an audience client;
  • the anchor client is used to obtain the original video stream
  • the server is configured to synthesize the original video stream and the additional information of the live broadcast into a live video stream, and send the live video stream to the viewer client;
  • the specified image frame of the original video stream is saved based on the evaluation index of the user behavior.
  • the host client can also locally synthesize the original image frames and the live broadcast additional information into local live image frames, so that the host can watch the live picture locally.
  • the server can send the saved files of specified image frames directly to a third-party video platform, or the user can obtain the files from the server and publish them, directly or after editing, to a third-party video platform for its viewers to watch.
  • As shown in Fig. 3a, the live broadcast system includes an anchor client 301a, several viewer clients 302a, a media server 303a, a synthesis server 304a, a live video stream database 305a, and an original video stream database 306a.
  • the host enters the live broadcast room through the anchor client 301a; the camera collects the original video stream, and the client pushes the collected stream to the media server 303a in the form of streaming media data.
  • the additional live broadcast information added for the viewers is sent to the synthesis server 304a in the form of signaling (for example, through a channel used for sending control signaling).
  • after receiving the original video stream, the media server 303a forwards it to the synthesis server 304a.
  • the synthesis server 304a synthesizes the original video stream and the live broadcast additional information to generate the live video stream for the viewer clients to watch.
  • the live video stream seen by the viewers is thus a video picture with subtitles, copyright identification, and other information attached.
  • the synthesis server 304a stores the synthesized live video stream in the live video stream database 305a, and the original video stream in the original video stream database 306a.
  • the original video stream can be stored in multiple folders according to the time sequence of the received file.
  • the live video stream is distributed by the synthesis server 304a to each viewer client 302a in real time.
  • for the video data stored in the original video stream database 306a, it can be judged, in real time or at a suitable time depending on the need, whether the data contains image frames of highlight moments.
  • the judging can be implemented by means of AI image recognition.
  • the judging can also be based on evaluation indicators that analyze user behavior. The folders found to contain image frames of highlight moments are archived; the folders that do not contain such frames are deleted.
  • the original video stream stored in the original video stream database 306a can be distributed to other video platforms at a suitable time, or the user can retrieve it from the synthesis server 304a through a client, so that the user can share it to other video platforms.
  • when the synthesis server 304a pushes the synthesized live video stream, it need not push the stream to the anchor client 301a, because the anchor client 301a can obtain the locally collected original video stream as well as the additional live broadcast information such as subtitles and pictures, and is therefore itself able to synthesize the live video stream. After completing the synthesis of the local live video stream, the anchor client 301a can preview the live broadcast effect locally, which makes the preview process more efficient.
  • this application also provides an embodiment of an apparatus for processing live content.
  • FIG. 4 is an apparatus 400 for processing live content shown in an exemplary embodiment of this application, including:
  • the receiving module 401 is configured to receive the original video stream sent by the host client through the video channel and the live broadcast additional information sent through the signaling channel;
  • the synthesis module 402 is configured to synthesize the original video stream and the live broadcast additional information into a live video stream according to the live broadcast additional information;
  • the transmission module 403 is configured to send the live video stream to the audience client;
  • the saving module 404 is configured to save the designated image frame of the original video stream based on the evaluation index of the user behavior.
  • FIG. 5 shows an apparatus 500 for processing live content according to an exemplary embodiment of this application, including:
  • the obtaining module 501 is used to obtain the original video stream
  • the sending module 502 is configured to send the original video stream and the live broadcast additional information to the server respectively, so that the server synthesizes the original video stream and the additional information into the live video stream according to the additional information signaling, sends the live video stream to the viewer clients, and saves the specified image frames of the original video stream based on the evaluation index of user behavior, wherein the original video stream is sent through a video channel and the additional information is sent through a signaling channel.
  • the embodiments of the apparatus for processing live content in this application can be applied to devices.
  • the device embodiments can be implemented by software, or can be implemented by hardware or a combination of software and hardware.
  • Taking software implementation as an example, as a logical device the apparatus is formed by the processor of the device where it is located reading the corresponding computer program instructions from non-volatile memory into memory.
  • Fig. 6 shows a hardware structure diagram of the device in which the apparatus for processing live content of this application is located; besides the processor, memory, network interface, and non-volatile memory shown in Fig. 6, the device usually includes other hardware according to the actual function of the apparatus, which will not be repeated here.
  • the non-volatile memory is used to store executable instructions of the processor, and the processor is configured to execute the instructions to implement the method for processing live content described in any of the foregoing embodiments.
  • the present application also provides a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the method for processing live content described in any of the above embodiments is implemented.
  • for the relevant parts, reference can be made to the description of the method embodiments.
  • the device embodiments described above are merely illustrative.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units.
  • some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of the present application; those of ordinary skill in the art can understand and implement this without creative work.

Abstract

A method for processing live streaming content is provided. The method includes: receiving an original video stream sent by a host client through a video channel and live streaming additional information sent by the host client through a signaling channel; synthesizing, according to the additional information, the original video stream and the additional information into a live video stream, and sending the live video stream to an audience client; and storing specified image frames of the original video stream based on an evaluation index of user behavior. The audience client can watch the live video stream without perceiving any difference, and the original video stream can be conveniently shared to other video platforms in a timely manner.
PCT/CN2020/112856 2019-12-31 2020-09-01 Method and apparatus for processing live streaming content, and system WO2021135334A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911407509.7A CN111083515B (zh) 2019-12-31 2019-12-31 Method, apparatus and system for processing live broadcast content
CN201911407509.7 2019-12-31

Publications (1)

Publication Number Publication Date
WO2021135334A1 true WO2021135334A1 (fr) 2021-07-08

Family

ID=70320557

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/112856 WO2021135334A1 (fr) 2019-12-31 2020-09-01 Method and apparatus for processing live streaming content, and system

Country Status (2)

Country Link
CN (1) CN111083515B (fr)
WO (1) WO2021135334A1 (fr)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111083515B (zh) * 2019-12-31 2021-07-23 广州华多网络科技有限公司 Method, apparatus and system for processing live broadcast content
CN112084369A (zh) * 2020-08-03 2020-12-15 广州数说故事信息科技有限公司 Highlight-moment mining method and model based on live video streaming
CN112087669B (zh) * 2020-08-07 2023-03-10 广州方硅信息技术有限公司 Method and apparatus for giving virtual gifts, and electronic device
CN113490001A (zh) * 2020-11-28 2021-10-08 青岛海信电子产业控股股份有限公司 Audio/video data sharing method, server, device and medium
CN112954374B (zh) * 2021-01-28 2023-05-23 广州虎牙科技有限公司 Video data processing method and apparatus, electronic device and storage medium
CN113691877A (zh) * 2021-08-27 2021-11-23 余浪 Live broadcast method and apparatus
CN113873296A (zh) * 2021-09-24 2021-12-31 上海哔哩哔哩科技有限公司 Video stream processing method and apparatus
CN115022654B (zh) * 2022-05-18 2024-01-19 北京达佳互联信息技术有限公司 Video editing method and apparatus in a live broadcast scenario

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050232610A1 (en) * 2004-04-16 2005-10-20 Gateway, Inc. User automated content deletion
CN103686450A (zh) * 2013-12-31 2014-03-26 广州华多网络科技有限公司 Video processing method and system
CN105872580A (zh) * 2016-04-15 2016-08-17 广州酷狗计算机科技有限公司 Method and apparatus for recording live video
CN106131591A (zh) * 2016-06-30 2016-11-16 广州华多网络科技有限公司 Live broadcast method, apparatus and terminal
US20170134595A1 (en) * 2015-11-11 2017-05-11 Vivint, Inc. Automated image album
CN106792122A (zh) * 2017-02-20 2017-05-31 北京金山安全软件有限公司 Automatic video recording method and apparatus, and terminal
CN108289159A (zh) * 2017-05-25 2018-07-17 广州华多网络科技有限公司 System and method for adding special effects in terminal live broadcast, and terminal live broadcast system
CN111083515A (zh) * 2019-12-31 2020-04-28 广州华多网络科技有限公司 Method, apparatus and system for processing live broadcast content

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9681160B2 (en) * 2011-06-22 2017-06-13 Tout Inc. Method and apparatus for automatically associating media segments with broadcast media streams
CN105282617A (zh) * 2014-06-12 2016-01-27 李英元 Video-on-demand system capable of differentiating on-screen identifiers
CN105245801A (zh) * 2015-09-24 2016-01-13 天脉聚源(北京)科技有限公司 Method for transmitting interactive signals of a television interaction system
CN108696474A (zh) * 2017-04-05 2018-10-23 杭州登虹科技有限公司 Communication method for multimedia transmission
CN108521584B (zh) * 2018-04-20 2020-08-28 广州虎牙信息科技有限公司 Interactive information processing method and apparatus, host-side device and medium
CN110198456B (zh) * 2019-04-26 2023-02-07 腾讯科技(深圳)有限公司 Live-broadcast-based video pushing method and apparatus, and computer-readable storage medium
CN110392226A (zh) * 2019-06-19 2019-10-29 视联动力信息技术股份有限公司 Live broadcast implementation method and apparatus


Also Published As

Publication number Publication date
CN111083515B (zh) 2021-07-23
CN111083515A (zh) 2020-04-28


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20910248; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20910248; Country of ref document: EP; Kind code of ref document: A1)