WO2017185798A1 - Method and Device for Multimedia File Sharing - Google Patents

Method and Device for Multimedia File Sharing

Info

Publication number
WO2017185798A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
video data
audio data
decoding
audio
Prior art date
Application number
PCT/CN2016/113211
Other languages
English (en)
French (fr)
Inventor
张龙华
向建中
薄景仁
林强生
Original Assignee
广州视睿电子科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广州视睿电子科技有限公司
Publication of WO2017185798A1

Classifications

    • H  ELECTRICITY
    • H04  ELECTRIC COMMUNICATION TECHNIQUE
    • H04N  PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00  Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40  Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43  Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/438  Interfacing the downstream path of the transmission network originating from a server, e.g. retrieving MPEG packets from an IP network
    • H04N 21/4385  Multiplex stream processing, e.g. multiplex stream decrypting
    • H04N 21/436  Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
    • H04N 21/434  Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams, extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
    • H04N 21/4341  Demultiplexing of audio and video streams
    • H04N 21/439  Processing of audio elementary streams
    • H04N 21/4398  Processing of audio elementary streams involving reformatting operations of audio signals
    • H04N 21/44  Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N 21/44012  Processing of video elementary streams involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • H04N 21/4402  Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N 21/440218  Processing of video elementary streams involving reformatting operations of video signals by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4

Definitions

  • the present invention relates to the field of file sharing, and in particular, to a method and device for sharing multimedia files.
  • multimedia files such as video and audio
  • audio and video files need to be shared with other users in real time while they are playing on one user's machine.
  • the current ways of sharing multimedia files mainly involve splitting the audio/video file into small segments on the server side, often also transcoding it into several different resolutions so that clients on different networks can load it, or capturing the screen while the file is playing.
  • the former still requires sending the file to the server first and relies on the server's processing power, while the latter requires additional desktop-capture and encoding operations to achieve sharing. Therefore, existing multimedia file sharing methods place high demands on hardware, and their real-time performance needs further improvement.
  • a method for multimedia file sharing comprising the steps of:
  • a device for multimedia file sharing comprising:
  • a data separation module configured to separate the preset size multimedia data into audio data and video data from a preset play time of the multimedia file according to the shared play command
  • a type parsing module configured to parse the decoding type of the decoder corresponding to the audio data and the video data, respectively, and send the decoding type to the receiving end;
  • a split output module configured to split the audio data and the video data starting from the preset play time into two output channels; wherein one channel of the audio data and the video data is output to a local decoder for decoding, and the other channel of the audio data and the video data is sent to the receiving end.
  • in the above method and device for multimedia file sharing, multimedia data of a preset size is separated into audio data and video data, starting from the preset play time of the multimedia file, according to the shared play command; the decoding types of the decoders respectively corresponding to the audio data and the video data are parsed and sent to the receiving end; and the audio data and the video data starting from the preset playing time are split into two output channels, of which one channel of the audio data and the video data is output to a local decoder for decoding, and the other channel of the audio data and the video data is sent to the receiving end.
  • a method for multimedia file sharing comprising the steps of:
  • receiving one channel of the audio data and the video data sent by the transmitting end, where the transmitting end splits the audio data and the video data starting from a preset playing time into two output channels;
  • the decoded data obtained by decoding the audio data and the video data are rendered and played.
  • a device for multimedia file sharing comprising:
  • a type receiving module configured to receive a decoding type of a decoder corresponding to the audio data and the video data respectively parsed by the sending end;
  • a branch receiving module configured to receive one channel of the audio data and the video data sent by the sending end, where the sending end splits the audio data and the video data starting from a preset playing time into two output channels;
  • a data decoding module configured to decode the received audio data and the video data by using a decoder of the decoding type
  • a rendering module configured to render and play the decoded data obtained by decoding the audio data and the video data.
  • in the above method and device for multimedia file sharing, the decoding types of the decoders respectively corresponding to the audio data and the video data, as parsed by the sending end, are received; one channel of the audio data and the video data sent by the sending end, which splits the audio data and the video data starting from the preset playing time into two output channels, is received; the received audio data and video data are decoded separately by decoders of the received decoding types; and the decoded data obtained by decoding the audio data and the video data are rendered and played.
  • in this way, the beneficial effect can be achieved of playing, in real time, the same multimedia data shared by the transmitting end while the transmitting end plays the multimedia data according to the shared play command.
  • FIG. 1 is a flow chart of a method for multimedia file sharing of an embodiment running on a transmitting end
  • FIG. 2 is a structural diagram of an apparatus for multimedia file sharing of an embodiment running on a transmitting end
  • FIG. 3 is a flow chart of a method for multimedia file sharing of an embodiment running on a receiving end
  • FIG. 4 is a structural diagram of an apparatus for multimedia file sharing of an embodiment running on a receiving end.
  • the present invention is applicable to scenarios in which a sharing end (i.e., a transmitting end) shares a multimedia file with a playing end (i.e., a receiving end), and is particularly suitable for scenarios in online education in which a transmitting end shares a multimedia file with a receiving end.
  • an implementation manner of a method for multimedia file sharing running on a sending end includes the following steps:
  • S110 Separate the preset size multimedia data into audio data and video data from a preset play time of the multimedia file according to the shared play command.
  • the shared play command includes a preset play time.
  • the shared play command is obtained at the sender.
  • the multimedia file can be audio and video data.
  • the preset playback time is the default or user-set playback time acquired by the sender.
  • the default playing time is time zero on the time axis; the playing time set by the user may also be an intermediate time on the time axis or any other time.
  • the multimedia file includes multimedia data of the length of time of the entire timeline.
  • the preset size can be a preset time length, such as the length of time of one frame (1/12 second), or the length of time of the preset number of frames.
  • the preset size can also be one frame or a preset multi-frame.
  • the preset size can also be the minimum size, maximum size, median value, or other value of a packet.
  • the transmitting end separates the preset size multimedia data into audio data and video data through the splitter.
  • in one embodiment, before step S110, the method further includes the step of:
  • acquiring a shared play command that includes the preset playing time. In this way, the shared play command is received at the transmitting end and controls playback at both the transmitting end and the receiving end, so no separate shared play command needs to be set for playback at the receiving end.
  • S130 Parse the decoding type of the decoder corresponding to the audio data and the video data, respectively, and send the decoding type to the receiving end.
  • at the transmitting end, the decoding types of the decoders respectively corresponding to the audio data and the video data are parsed and sent to the receiving end.
  • the receiving end therefore does not need to parse the decoding types of the decoders again, which improves the real-time performance of sharing.
  • S150 Split the audio data and the video data starting from the preset playing time into two output channels; wherein one channel of the audio data and the video data is output to a local decoder for decoding, and the other channel of the audio data and the video data is sent to the receiving end.
  • the audio data and the video data from the preset playback time are split into two outputs.
  • one channel of the audio data and the video data is output to a local decoder for decoding, so that the multimedia data is played at the transmitting end; the other channel of the audio data and the video data is sent to the receiving end, so that the receiving end receives and plays the same multimedia data.
  • in this way, the beneficial effect can be achieved of sharing with the receiving end, in real time, the same multimedia data as is being played, while the transmitting end plays the multimedia data according to the shared play command.
  • the multimedia file is divided into multimedia data of a preset size.
  • at the transmitting end, the position from which the multimedia file is shared is determined by the preset playing time.
  • playback-control actions at the receiving end, such as pausing or dragging the playback position, are all driven by the audio data and video data sent over.
  • in other words, when the transmitting end drags the playback position, the audio data and the video data parsed by the transmitting end are the data after the drag; the receiving end simply plays whatever data it receives, so it appears as if the playback position had been dragged there as well.
  • in the above method for multimedia file sharing, multimedia data of a preset size is separated into audio data and video data, starting from the preset play time of the multimedia file, according to the shared play command; the decoding types of the decoders respectively corresponding to the audio data and the video data are parsed and sent to the receiving end; and the audio data and the video data starting from the preset playing time are split into two output channels, of which one channel of the audio data and the video data is output to the local decoder for decoding, and the other channel of the audio data and the video data is transmitted to the receiving end.
  • in one embodiment, the step of splitting the audio data and the video data into two output channels, i.e., step S150, is performed by splitting the audio data and the video data into two output channels through a tee (three-way) filter.
  • further, before step S150, the method includes the step of registering two tee filters, one of which splits the audio data into two output channels and the other of which splits the video data into two output channels.
  • in one embodiment, within the step of splitting the audio data and the video data into two output channels, i.e., step S150, the step of sending the other channel of the audio data and the video data to the receiving end includes outputting the other channel of the audio data and the video data to a buffer, and sending, by the buffer, the audio data and the video data to the receiving end.
  • the buffer includes an audio buffer and a video buffer.
  • the audio buffer and the video buffer respectively receive the audio data and the video data.
  • the buffer sends the audio data and the video data to a receiving end.
  • the audio buffer and the video buffer respectively send the audio data and the video data to a receiving end through an audio transmission channel and a video transmission channel.
  • the transmitted audio data and the video data are audio data and video data obtained by separating multimedia data of the preset size starting from the preset playing time of the multimedia file.
  • the method for sharing the multimedia file has good real-time performance. Transmitting through the buffer can further improve real-time performance.
  • an implementation manner of an apparatus for sharing multimedia files on a transmitting end includes:
  • the data separation module 110 is configured to separate the preset size multimedia data into audio data and video data from a preset play time of the multimedia file according to the shared play command.
  • the shared play command includes a preset play time.
  • the shared play command is obtained at the sender.
  • the multimedia file can be audio and video data.
  • the preset playback time is the default or user-set playback time acquired by the sender.
  • the default playing time is time zero on the time axis; the playing time set by the user may also be an intermediate time on the time axis or any other time.
  • the multimedia file includes multimedia data of the length of time of the entire timeline.
  • the preset size can be a preset time length, such as the length of time of one frame (1/12 second), or the length of time of the preset number of frames.
  • the preset size can also be one frame or a preset multi-frame.
  • the preset size can also be the minimum size, maximum size, median value, or other value of a packet.
  • the transmitting end separates the multimedia data of the preset size into audio data and video data through a splitter.
  • in one embodiment, the device further includes:
  • a command acquisition module, configured to acquire a shared play command including the preset playing time. In this way, the shared play command is received at the transmitting end and controls playback at both the transmitting end and the receiving end, so no separate shared play command needs to be set for playback at the receiving end.
  • the type parsing module 130 is configured to parse the decoding type of the decoder corresponding to the audio data and the video data, respectively, and send the decoding type to the receiving end.
  • at the transmitting end, the decoding types of the decoders respectively corresponding to the audio data and the video data are parsed and sent to the receiving end.
  • the receiving end therefore does not need to parse the decoding types of the decoders again, which improves the real-time performance of sharing.
  • the split output module 150 is configured to split the audio data and the video data starting from the preset play time into two output channels; wherein one channel of the audio data and the video data is output to a local decoder for decoding, and the other channel of the audio data and the video data is sent to the receiving end.
  • the audio data and the video data from the preset playback time are split into two outputs.
  • one channel of the audio data and the video data is output to a local decoder for decoding, so that the multimedia data is played at the transmitting end; the other channel of the audio data and the video data is sent to the receiving end, so that the receiving end receives and plays the same multimedia data.
  • in this way, the beneficial effect can be achieved of sharing with the receiving end, in real time, the same multimedia data as is being played, while the transmitting end plays the multimedia data according to the shared play command.
  • the multimedia file is divided into multimedia data of a preset size.
  • at the transmitting end, the position from which the multimedia file is shared is determined by the preset playing time.
  • playback-control actions at the receiving end, such as pausing or dragging the playback position, are all driven by the audio data and video data sent over.
  • in other words, when the transmitting end drags the playback position, the audio data and the video data parsed by the transmitting end are the data after the drag; the receiving end simply plays whatever data it receives, so it appears as if the playback position had been dragged there as well.
  • in the above device for multimedia file sharing, the data separation module 110 separates multimedia data of a preset size into audio data and video data, starting from the preset play time of the multimedia file, according to the shared play command; the type parsing module 130 parses the decoding types of the decoders respectively corresponding to the audio data and the video data and sends them to the receiving end; and the split output module 150 splits the audio data and the video data starting from the preset playing time into two output channels, of which one channel of the audio data and the video data is output to a local decoder for decoding, and the other channel of the audio data and the video data is transmitted to the receiving end.
  • in this way, the beneficial effect can be achieved of sharing with the receiving end, in real time, the same multimedia data as is being played, while the transmitting end plays the multimedia data according to the shared play command.
  • in one embodiment, the split output module 150 is configured to split the audio data and the video data into two output channels through a tee (three-way) filter.
  • the device further includes:
  • a registration module, for registering two tee filters, one of which splits the audio data into two output channels and the other of which splits the video data into two output channels.
  • in one embodiment, the split output module 150 includes:
  • a branch output unit 151 (not shown), configured to output the other channel of the audio data and the video data to a buffer.
  • the buffer includes an audio buffer and a video buffer.
  • the audio buffer and the video buffer respectively receive the audio data and the video data.
  • a buffer sending unit 153 (not shown), configured to cause the buffer to send the audio data and the video data to the receiving end.
  • the audio buffer and the video buffer respectively send the audio data and the video data to a receiving end through an audio transmission channel and a video transmission channel.
  • the transmitted audio data and the video data are audio data and video data obtained by separating multimedia data of the preset size starting from the preset playing time of the multimedia file.
  • the method for sharing the multimedia file has good real-time performance. Transmitting through the buffer can further improve real-time performance.
  • an embodiment of a method for multimedia file sharing, corresponding to the method running on the sending end and itself running on a receiving end, includes the following steps:
  • S310 Receive the decoding types of the decoders respectively corresponding to the audio data and the video data, as parsed by the transmitting end.
  • specifically, the receiving end receives the decoding types of the decoders respectively corresponding to the audio data and the video data, which the transmitting end parses after separating multimedia data of a preset size into audio data and video data, starting from the preset playing time of the multimedia file, according to the shared play command.
  • S330 Receive one channel of the audio data and the video data sent by the sending end, where the sending end splits the audio data and the video data starting from a preset playing time into two output channels.
  • specifically, a tee (three-way) filter at the transmitting end splits the audio data and the video data starting from the preset playing time into two output channels.
  • the transmitted channel of audio data and video data is sent through a buffer at the transmitting end. Therefore, in step S330, what is received is the channel of the audio data and the video data that the tee filter at the transmitting end splits into two output channels, starting from the preset playing time, and that is sent through the buffer.
  • S350 Decode the received audio data and the video data by using a decoder of the decoding type.
  • since the decoding types parsed by the transmitting end have been received, the receiving end does not need to parse the decoding types again and decodes directly with decoders of those types, which improves the real-time performance of the method.
  • S370 Render and play the decoded data obtained by decoding the audio data and the video data.
  • the receiving end does not need to control the playback of the multimedia data; it can directly render and play the multimedia data according to the shared play command of the sending end.
  • in this way, the beneficial effect can be achieved of playing, in real time, the same multimedia data shared by the transmitting end while the transmitting end plays the multimedia data according to the shared play command.
  • in one embodiment, step S370 consists of rendering and playing the decoded data obtained by decoding the audio data and the video data until a command to stop playback is obtained.
  • step S370 includes:
  • S371 Render and play the decoded data obtained by decoding the audio data and the video data.
  • in this way, the receiving end can also control stopping the playback of the multimedia data.
  • the receiving end still does not need to issue playback-control commands related to starting the shared playback or to the playback progress.
  • in another embodiment, step S375 consists of stopping, according to the command to stop playback, the reception of the decoding types and the subsequent steps. In this way, resources are saved by avoiding continuing to receive the decoding types, audio data and video data when playback is no longer needed.
  • an embodiment of a device for multimedia file sharing according to the present invention, corresponding to the device running on the transmitting end and itself running on a receiving end, includes:
  • the type receiving module 310 is configured to receive a decoding type of a decoder corresponding to the audio data and the video data respectively parsed by the transmitting end.
  • specifically, the receiving end receives the decoding types of the decoders respectively corresponding to the audio data and the video data, which the transmitting end parses after separating multimedia data of a preset size into audio data and video data, starting from the preset playing time of the multimedia file, according to the shared play command.
  • the branch receiving module 330 is configured to receive one channel of the audio data and the video data sent by the transmitting end, where the transmitting end splits the audio data and the video data starting from the preset playing time into two output channels.
  • specifically, a tee (three-way) filter at the transmitting end splits the audio data and the video data starting from the preset playing time into two output channels.
  • the transmitted channel of audio data and video data is sent through a buffer at the transmitting end. Therefore, the branch receiving module 330 is specifically configured to receive the channel of the audio data and the video data that the tee filter at the transmitting end splits into two output channels, starting from the preset playing time, and that is sent through the buffer.
  • the data decoding module 350 is configured to separately decode the received audio data and the video data by using a decoder of the decoding type.
  • since the decoding types parsed by the transmitting end have been received, the receiving end does not need to parse the decoding types again and decodes directly with decoders of those types, which improves the real-time performance of the device.
  • the rendering play module 370 is configured to render and play the decoded data obtained by decoding the audio data and the video data.
  • the receiving end does not need to control the playback of the multimedia data; it can directly render and play the multimedia data according to the shared play command of the sending end, thereby playing, in real time, the same multimedia data shared by the sending end while the sending end plays the multimedia data according to the shared play command.
  • in the above device for multimedia file sharing, the type receiving module 310 receives the decoding types of the decoders respectively corresponding to the audio data and the video data, as parsed by the sending end; the branch receiving module 330 receives one channel of the audio data and the video data sent by the sending end, where the sending end splits the audio data and the video data starting from the preset playing time into two output channels; the data decoding module 350 decodes the received audio data and video data separately by using decoders of the received decoding types; and the rendering and playing module 370 renders and plays the decoded data obtained by decoding the audio data and the video data.
  • in this way, the beneficial effect can be achieved of playing, in real time, the same multimedia data shared by the transmitting end while the transmitting end plays the multimedia data according to the shared play command.
  • the rendering play module 370 is configured to render and play the decoded data obtained by decoding the audio data and the video data until a command to stop playing is obtained.
  • the rendering play module 370 includes:
  • a rendering playing unit 371, configured to render and play the decoded data obtained by decoding the audio data and the video data
  • the acquiring unit 373 is configured to acquire a command to stop playing
  • a stop execution unit 375, configured to stop rendering and playing the decoded data according to the command to stop playback.
  • in this way, the receiving end can also control stopping the playback of the multimedia data.
  • the receiving end still does not need to issue playback-control commands related to starting the shared playback or to the playback progress.
  • in another embodiment, the stop execution unit 375 is configured to stop, according to the command to stop playback, the reception of the decoding types and the subsequent steps. In this way, resources are saved by avoiding continuing to receive the decoding types, audio data and video data when playback is no longer needed.

Abstract

A method and device for multimedia file sharing running on a transmitting end: according to a shared play command, multimedia data of a preset size is separated into audio data and video data, starting from a preset playing time of a multimedia file; the decoding types of the decoders respectively corresponding to the audio data and the video data are parsed and sent to a receiving end; and the audio data and the video data starting from the preset playing time are split into two output channels, of which one channel of the audio data and the video data is output to a local decoder for decoding, and the other channel of the audio data and the video data is sent to the receiving end. In this way, the beneficial effect can be achieved of sharing with the receiving end, in real time, the same multimedia data as is being played, while the transmitting end plays the multimedia data according to the shared play command.

Description

Method and Device for Multimedia File Sharing
Technical Field
The present invention relates to the field of file sharing, and in particular to a method and device for multimedia file sharing.
Background
With the rapid development of information technology, and especially the move from the Internet to the mobile Internet, ways of living, working and learning that cross time and space have been created, the way knowledge is acquired has changed fundamentally, and online education was born out of this. Online education frees teaching and learning from the constraints of time, space and location, and makes the channels for acquiring knowledge more flexible and diverse.
In the course of online education, multimedia files such as video and audio frequently need to be shared, so that while an audio/video file is playing on one user's machine it is shared with other users in real time. At present, however, multimedia files are mainly shared either by splitting the audio/video file into many small segments on the server side, often also transcoding it into several different resolutions so that clients on different networks can load it, or by capturing the screen while the file is playing. The former still requires sending the file to the server first and relies on the server's processing power; the latter requires additional desktop-capture and encoding operations before sharing can be achieved. Existing multimedia file sharing methods therefore place high demands on hardware, and their real-time performance needs further improvement.
Summary
On this basis, it is necessary to provide a method and device for multimedia file sharing with good real-time performance.
A method for multimedia file sharing includes the steps of:
separating, according to a shared play command, multimedia data of a preset size into audio data and video data, starting from a preset playing time of a multimedia file;
parsing the decoding types of the decoders respectively corresponding to the audio data and the video data, and sending them to a receiving end; and
splitting the audio data and the video data starting from the preset playing time into two output channels, wherein one channel of the audio data and the video data is output to a local decoder for decoding, and the other channel of the audio data and the video data is sent to the receiving end.
A device for multimedia file sharing includes:
a data separation module, configured to separate, according to a shared play command, multimedia data of a preset size into audio data and video data, starting from a preset playing time of a multimedia file;
a type parsing module, configured to parse the decoding types of the decoders respectively corresponding to the audio data and the video data and send them to a receiving end; and
a split output module, configured to split the audio data and the video data starting from the preset playing time into two output channels, wherein one channel of the audio data and the video data is output to a local decoder for decoding, and the other channel of the audio data and the video data is sent to the receiving end.
In the above method and device for multimedia file sharing, multimedia data of a preset size is separated into audio data and video data, starting from the preset playing time of the multimedia file, according to the shared play command; the decoding types of the decoders respectively corresponding to the audio data and the video data are parsed and sent to the receiving end; and the audio data and the video data starting from the preset playing time are split into two output channels, of which one channel is output to a local decoder for decoding and the other channel is sent to the receiving end. In this way, the beneficial effect can be achieved of sharing with the receiving end, in real time, the same multimedia data as is being played, while the transmitting end plays the multimedia data according to the shared play command.
A method for multimedia file sharing includes the steps of:
receiving the decoding types of the decoders respectively corresponding to audio data and video data, as parsed by a sending end;
receiving one channel of the audio data and the video data sent by the sending end, where the sending end splits the audio data and the video data starting from a preset playing time into two output channels;
decoding the received audio data and the video data separately by using decoders of the received decoding types; and
rendering and playing the decoded data obtained by decoding the audio data and the video data.
A device for multimedia file sharing includes:
a type receiving module, configured to receive the decoding types of the decoders respectively corresponding to audio data and video data, as parsed by a sending end;
a branch receiving module, configured to receive one channel of the audio data and the video data sent by the sending end, where the sending end splits the audio data and the video data starting from a preset playing time into two output channels;
a data decoding module, configured to decode the received audio data and the video data separately by using decoders of the received decoding types; and
a rendering and playing module, configured to render and play the decoded data obtained by decoding the audio data and the video data.
In the above method and device for multimedia file sharing, the decoding types of the decoders respectively corresponding to the audio data and the video data, as parsed by the sending end, are received; one channel of the audio data and the video data sent by the sending end, which splits the audio data and the video data starting from the preset playing time into two output channels, is received; the received audio data and video data are decoded separately by decoders of the received decoding types; and the decoded data obtained by decoding the audio data and the video data are rendered and played. The beneficial effect can thereby be achieved of playing, in real time, the same multimedia data shared by the sending end while the sending end plays the multimedia data according to the shared play command.
Brief Description of the Drawings
FIG. 1 is a flowchart of a method for multimedia file sharing according to an embodiment, running on a transmitting end;
FIG. 2 is a structural diagram of a device for multimedia file sharing according to an embodiment, running on a transmitting end;
FIG. 3 is a flowchart of a method for multimedia file sharing according to an embodiment, running on a receiving end;
FIG. 4 is a structural diagram of a device for multimedia file sharing according to an embodiment, running on a receiving end.
Detailed Description
To facilitate understanding of the present invention, the invention is described more fully below with reference to the accompanying drawings, in which preferred embodiments of the invention are shown. The invention may, however, be embodied in many different forms and is not limited to the embodiments described herein. Rather, these embodiments are provided so that the disclosure of the invention will be thorough and complete.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the art to which the invention belongs. The terminology used in the specification of the invention is for the purpose of describing particular embodiments only and is not intended to limit the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The present invention is applicable to scenarios in which a sharing end (i.e., a transmitting end) shares a multimedia file with a playing end (i.e., a receiving end), and is particularly suitable for scenarios in online education in which a transmitting end shares a multimedia file with a receiving end.
As shown in FIG. 1, an embodiment of a method for multimedia file sharing according to the invention, running on a transmitting end, includes the following steps.
S110: Separate, according to a shared play command, multimedia data of a preset size into audio data and video data, starting from a preset playing time of a multimedia file.
The shared play command includes the preset playing time and is obtained at the transmitting end. The multimedia file may be audio/video data. The preset playing time is a default or user-set playing time obtained by the transmitting end: the default playing time is time zero on the time axis, while a user-set playing time may also be an intermediate time on the time axis or any other time.
The multimedia file contains multimedia data spanning the entire time axis. The preset size may be a preset time length, such as the duration of one frame (1/12 second) or the duration of a preset number of frames. The preset size may also be one frame or a preset number of frames, or the minimum size, maximum size, median or another value of a data packet.
Specifically, the transmitting end separates the multimedia data of the preset size into audio data and video data through a splitter.
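Purely as an illustration, and not part of the original disclosure, the following is a minimal sketch of such a splitter step in Python, assuming the PyAV library (an FFmpeg binding) plays the role of the splitter; the file name and starting time are hypothetical:

    import av  # PyAV, assumed available; acts as the "splitter" here

    def split_from(path, start_seconds):
        """Yield audio and video packets of the multimedia file,
        starting from the preset playing time."""
        container = av.open(path)
        # Seek to the preset playing time before demuxing;
        # the offset is in microseconds when no stream is given.
        container.seek(int(start_seconds * 1_000_000))
        audio = container.streams.audio[0]
        video = container.streams.video[0]
        for packet in container.demux(audio, video):
            if packet.dts is None:               # skip flush packets
                continue
            yield packet.stream.type, packet     # 'audio' or 'video', plus the data

    # Hypothetical usage: one packet at a time serves as the "preset size".
    # for kind, pkt in split_from("lesson.mp4", 0.0):
    #     ...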
In one embodiment, before step S110, the method further includes the step of:
acquiring a shared play command that includes the preset playing time. In this way, the shared play command is received at the transmitting end and controls playback at both the transmitting end and the receiving end, so no separate shared play command needs to be set for playback at the receiving end.
S130: Parse the decoding types of the decoders respectively corresponding to the audio data and the video data, and send them to the receiving end.
The transmitting end parses the decoding types of the decoders respectively corresponding to the audio data and the video data and sends the decoding types to the receiving end; the receiving end therefore does not need to parse the decoding types again, which improves the real-time performance of sharing.
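As a hedged sketch only: with the same assumed PyAV container, the decoder (codec) types could be read from the streams and pushed to the receiving end before any media data; the send_control callable and the JSON message layout are illustrative assumptions, not the patent's wording:

    import json

    def send_decoder_types(container, send_control):
        """Parse the decoding type of each stream and send it to the
        receiving end ahead of the audio/video data."""
        audio_codec = container.streams.audio[0].codec_context.name  # e.g. 'aac'
        video_codec = container.streams.video[0].codec_context.name  # e.g. 'h264'
        send_control(json.dumps({"audio": audio_codec,
                                 "video": video_codec}).encode("utf-8"))
        return audio_codec, video_codec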
S150: Split the audio data and the video data starting from the preset playing time into two output channels, wherein one channel of the audio data and the video data is output to a local decoder for decoding, and the other channel of the audio data and the video data is sent to the receiving end.
In this way, the audio data and the video data starting from the preset playing time are split into two output channels. One channel of the audio data and the video data is output to a local decoder for decoding, so that the multimedia data is played at the transmitting end; the other channel of the audio data and the video data is sent to the receiving end, so that the receiving end can receive and play the same multimedia data. Ultimately, the beneficial effect is achieved of sharing with the receiving end, in real time, the same multimedia data as is being played, while the transmitting end plays the multimedia data according to the shared play command.
In this embodiment, the multimedia file is divided into multimedia data of the preset size, and the position from which the multimedia file is shared is determined at the transmitting end by the preset playing time. As a result, playback-control actions at the receiving end, such as pausing or dragging the playback position, are all driven by the audio data and video data sent over. In other words, when the transmitting end drags the playback position, the audio data and video data parsed by the transmitting end are the data after the drag; the receiving end simply plays whatever data it receives, so it appears as if the playback position had been dragged there as well.
In the above method for multimedia file sharing, multimedia data of a preset size is separated into audio data and video data, starting from the preset playing time of the multimedia file, according to the shared play command; the decoding types of the decoders respectively corresponding to the audio data and the video data are parsed and sent to the receiving end; and the audio data and the video data starting from the preset playing time are split into two output channels, of which one channel is output to a local decoder for decoding and the other channel is sent to the receiving end. In this way, the beneficial effect can be achieved of sharing with the receiving end, in real time, the same multimedia data as is being played, while the transmitting end plays the multimedia data according to the shared play command.
In one embodiment, the step of splitting the audio data and the video data into two output channels, i.e., step S150, is performed by splitting the audio data and the video data into two output channels through a tee (three-way) filter.
Further, before step S150, the method includes the step of registering two tee filters, one of which splits the audio data into two output channels and the other of which splits the video data into two output channels.
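The following toy class, offered only as an assumption about what a software tee filter could look like, copies every packet it is given to two sinks: one feeding the local decoder and one feeding the send buffer. The sink callables are hypothetical names, not part of the patent.

    class Tee:
        """Minimal tee (three-way) filter: one input, two identical outputs."""
        def __init__(self, local_sink, remote_sink):
            self.local_sink = local_sink     # e.g. the local decoder
            self.remote_sink = remote_sink   # e.g. the buffer towards the receiving end

        def push(self, packet):
            self.local_sink(packet)
            self.remote_sink(packet)

    # "Registering" the two tee filters, one per stream (names are hypothetical):
    # audio_tee = Tee(decode_audio_locally, enqueue_audio_for_sending)
    # video_tee = Tee(decode_video_locally, enqueue_video_for_sending)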
In one embodiment, within the step of splitting the audio data and the video data into two output channels, i.e., step S150, the step of sending the other channel of the audio data and the video data to the receiving end includes:
S151: Output the other channel of the audio data and the video data to a buffer.
The buffer includes an audio buffer and a video buffer, which receive the audio data and the video data respectively.
S153: The buffer sends the audio data and the video data to the receiving end.
The audio buffer and the video buffer send the audio data and the video data to the receiving end through an audio transmission channel and a video transmission channel, respectively.
Since the transmitted audio data and video data are obtained by separating multimedia data of the preset size starting from the preset playing time of the multimedia file, this method of multimedia file sharing has good real-time performance; sending through the buffer can improve the real-time performance further.
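A sketch, under stated assumptions, of the buffered transmission described above: one queue acts as the audio or video buffer and a background thread drains it into a TCP connection, with a simple length-prefixed framing that the patent itself does not prescribe. Addresses and ports in the usage note are assumptions.

    import queue
    import socket
    import struct
    import threading

    def make_channel(host, port, maxsize=256):
        """One transmission channel = one buffer plus one sender thread."""
        buf = queue.Queue(maxsize=maxsize)           # the audio or video buffer
        sock = socket.create_connection((host, port))

        def pump():
            while True:
                data = buf.get()
                if data is None:                     # sentinel: stop sending
                    sock.close()
                    break
                sock.sendall(struct.pack("!I", len(data)) + data)

        threading.Thread(target=pump, daemon=True).start()
        return buf

    # Hypothetical usage:
    # audio_buf = make_channel("receiver.example", 9001)   # audio transmission channel
    # video_buf = make_channel("receiver.example", 9002)   # video transmission channel
    # audio_buf.put(bytes(packet))                         # PyAV packets expose their raw bytes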
As shown in FIG. 2, an embodiment of a device for multimedia file sharing according to the invention, running on a transmitting end, includes:
a data separation module 110, configured to separate, according to a shared play command, multimedia data of a preset size into audio data and video data, starting from a preset playing time of a multimedia file.
The shared play command includes the preset playing time and is obtained at the transmitting end. The multimedia file may be audio/video data. The preset playing time is a default or user-set playing time obtained by the transmitting end: the default playing time is time zero on the time axis, while a user-set playing time may also be an intermediate time on the time axis or any other time.
The multimedia file contains multimedia data spanning the entire time axis. The preset size may be a preset time length, such as the duration of one frame (1/12 second) or the duration of a preset number of frames. The preset size may also be one frame or a preset number of frames, or the minimum size, maximum size, median or another value of a data packet.
Specifically, the transmitting end separates the multimedia data of the preset size into audio data and video data through a splitter.
In one embodiment, the device further includes:
a command acquisition module, configured to acquire a shared play command that includes the preset playing time. In this way, the shared play command is received at the transmitting end and controls playback at both the transmitting end and the receiving end, so no separate shared play command needs to be set for playback at the receiving end.
The type parsing module 130 is configured to parse the decoding types of the decoders respectively corresponding to the audio data and the video data, and send them to the receiving end.
The transmitting end parses the decoding types of the decoders respectively corresponding to the audio data and the video data and sends the decoding types to the receiving end; the receiving end therefore does not need to parse the decoding types again, which improves the real-time performance of sharing.
The split output module 150 is configured to split the audio data and the video data starting from the preset playing time into two output channels, wherein one channel of the audio data and the video data is output to a local decoder for decoding, and the other channel of the audio data and the video data is sent to the receiving end.
In this way, the audio data and the video data starting from the preset playing time are split into two output channels. One channel of the audio data and the video data is output to a local decoder for decoding, so that the multimedia data is played at the transmitting end; the other channel of the audio data and the video data is sent to the receiving end, so that the receiving end can receive and play the same multimedia data. Ultimately, the beneficial effect is achieved of sharing with the receiving end, in real time, the same multimedia data as is being played, while the transmitting end plays the multimedia data according to the shared play command.
In this embodiment, the multimedia file is divided into multimedia data of the preset size, and the position from which the multimedia file is shared is determined at the transmitting end by the preset playing time. As a result, playback-control actions at the receiving end, such as pausing or dragging the playback position, are all driven by the audio data and video data sent over. In other words, when the transmitting end drags the playback position, the audio data and video data parsed by the transmitting end are the data after the drag; the receiving end simply plays whatever data it receives, so it appears as if the playback position had been dragged there as well.
In the above device for multimedia file sharing, the data separation module 110 separates multimedia data of a preset size into audio data and video data, starting from the preset playing time of the multimedia file, according to the shared play command; the type parsing module 130 parses the decoding types of the decoders respectively corresponding to the audio data and the video data and sends them to the receiving end; and the split output module 150 splits the audio data and the video data starting from the preset playing time into two output channels, of which one channel is output to a local decoder for decoding and the other channel is sent to the receiving end. In this way, the beneficial effect can be achieved of sharing with the receiving end, in real time, the same multimedia data as is being played, while the transmitting end plays the multimedia data according to the shared play command.
In one embodiment, the split output module 150 is configured to split the audio data and the video data into two output channels through a tee (three-way) filter.
Further, the device includes:
a registration module, configured to register two tee filters, one of which splits the audio data into two output channels and the other of which splits the video data into two output channels.
In one embodiment, the split output module 150 includes:
a branch output unit 151 (not shown), configured to output the other channel of the audio data and the video data to a buffer, wherein the buffer includes an audio buffer and a video buffer, which receive the audio data and the video data respectively; and
a buffer sending unit 153 (not shown), configured to cause the buffer to send the audio data and the video data to the receiving end.
The audio buffer and the video buffer send the audio data and the video data to the receiving end through an audio transmission channel and a video transmission channel, respectively.
Since the transmitted audio data and video data are obtained by separating multimedia data of the preset size starting from the preset playing time of the multimedia file, this way of sharing the multimedia file has good real-time performance; sending through the buffer can improve the real-time performance further.
As shown in FIG. 3, an embodiment of a method for multimedia file sharing according to the invention, corresponding to the method running on the transmitting end and itself running on a receiving end, includes the following steps.
S310: Receive the decoding types of the decoders respectively corresponding to the audio data and the video data, as parsed by the transmitting end.
Specifically, the receiving end receives the decoding types of the decoders respectively corresponding to the audio data and the video data, which the transmitting end parses after separating multimedia data of a preset size into audio data and video data, starting from the preset playing time of the multimedia file, according to the shared play command.
S330: Receive one channel of the audio data and the video data sent by the transmitting end, where the transmitting end splits the audio data and the video data starting from the preset playing time into two output channels.
Specifically, a tee (three-way) filter at the transmitting end splits the audio data and the video data starting from the preset playing time into two output channels, and the transmitted channel of audio data and video data is sent through a buffer at the transmitting end. Step S330 therefore specifically consists of receiving the channel of the audio data and the video data that the tee filter at the transmitting end splits into two output channels, starting from the preset playing time, and that is sent through the buffer.
S350: Decode the received audio data and the video data separately by using decoders of the received decoding types.
Since the decoding types parsed by the transmitting end have been received, the receiving end does not need to parse the decoding types again and can decode directly with decoders of those types, which improves the real-time performance of the method.
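For illustration only: assuming PyAV on the receiving end as well, a decoder can be created directly from the codec name received from the transmitting end, so the incoming byte stream never has to be probed locally. The length-prefixed framing matches the hypothetical sender sketch above, and a real implementation would loop until each payload is fully read.

    import struct
    import av

    def receive_and_decode(sock, codec_name):
        """Build a decoder of the received type and decode incoming packets."""
        ctx = av.CodecContext.create(codec_name, "r")    # "r" = decoder
        while True:
            header = sock.recv(4)
            if not header:
                break
            (length,) = struct.unpack("!I", header)
            payload = sock.recv(length)                  # sketch: assumes one recv suffices
            for packet in ctx.parse(payload):            # re-packetise the raw bytes
                for frame in ctx.decode(packet):
                    yield frame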
S370: Render and play the decoded data obtained by decoding the audio data and the video data.
The receiving end does not need to control the playback of the multimedia data; it can directly render and play the multimedia data according to the shared play command of the transmitting end, thereby achieving the beneficial effect of playing, in real time, the same multimedia data shared by the transmitting end while the transmitting end plays the multimedia data according to the shared play command.
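A minimal sketch of this render-and-play loop follows; the present callable stands in for whatever video renderer is actually used (an assumption), audio would be handled the same way on its own thread, and the stop event anticipates the stop-playback command discussed below.

    import threading

    stop_playing = threading.Event()   # set when a stop-playback command arrives

    def render_loop(frames, present):
        """Render and play decoded frames as they arrive, with no playback
        control beyond what the transmitting end dictates."""
        for frame in frames:
            if stop_playing.is_set():
                break
            present(frame.to_ndarray(format="rgb24"))   # hand the image to the display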
In the above method for multimedia file sharing, the decoding types of the decoders respectively corresponding to the audio data and the video data, as parsed by the transmitting end, are received; one channel of the audio data and the video data sent by the transmitting end, which splits the audio data and the video data starting from the preset playing time into two output channels, is received; the received audio data and video data are decoded separately by decoders of the received decoding types; and the decoded data obtained by decoding the audio data and the video data are rendered and played. The beneficial effect can thereby be achieved of playing, in real time, the same multimedia data shared by the transmitting end while the transmitting end plays the multimedia data according to the shared play command.
In one embodiment, the step of rendering and playing the decoded data obtained by decoding the audio data and the video data, i.e., step S370, consists of rendering and playing the decoded data obtained by decoding the audio data and the video data until a command to stop playback is obtained.
Specifically, step S370 includes:
S371: Render and play the decoded data obtained by decoding the audio data and the video data;
S373: Acquire a command to stop playback;
S375: Stop rendering and playing the decoded data according to the command to stop playback.
In this way, stopping the playback of the multimedia data can also be controlled at the receiving end, while the receiving end still does not need to issue playback-control commands related to starting the shared playback or the playback progress.
In another embodiment, step S375 consists of stopping, according to the command to stop playback, the reception of the decoding types and the subsequent steps. This saves resources by avoiding continuing to receive the decoding types, audio data and video data when playback is no longer needed.
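A sketch of this alternative, where the stop command tears down reception as well so that no further decoding types or media data arrive; the event is the one from the render-loop sketch above, and the socket objects are the hypothetical channels from the earlier sketches.

    def stop_sharing(stop_playing, *sockets):
        """On a stop-playback command, stop rendering and stop receiving."""
        stop_playing.set()            # render loops exit at the next frame
        for sock in sockets:          # control, audio and video channels
            try:
                sock.close()          # no more decoding types, audio or video data
            except OSError:
                pass

    # Hypothetical usage:
    # stop_sharing(stop_playing, control_sock, audio_sock, video_sock)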
As shown in FIG. 4, an embodiment of a device for multimedia file sharing according to the invention, corresponding to the device running on the transmitting end and itself running on a receiving end, includes:
a type receiving module 310, configured to receive the decoding types of the decoders respectively corresponding to the audio data and the video data, as parsed by the transmitting end.
Specifically, the receiving end receives the decoding types of the decoders respectively corresponding to the audio data and the video data, which the transmitting end parses after separating multimedia data of a preset size into audio data and video data, starting from the preset playing time of the multimedia file, according to the shared play command.
The branch receiving module 330 is configured to receive one channel of the audio data and the video data sent by the transmitting end, where the transmitting end splits the audio data and the video data starting from the preset playing time into two output channels.
Specifically, a tee (three-way) filter at the transmitting end splits the audio data and the video data starting from the preset playing time into two output channels, and the transmitted channel of audio data and video data is sent through a buffer at the transmitting end. The branch receiving module 330 is therefore specifically configured to receive the channel of the audio data and the video data that the tee filter at the transmitting end splits into two output channels, starting from the preset playing time, and that is sent through the buffer.
The data decoding module 350 is configured to decode the received audio data and the video data separately by using decoders of the received decoding types.
Since the decoding types parsed by the transmitting end have been received, the receiving end does not need to parse the decoding types again and can decode directly with decoders of those types, which improves the real-time performance of the device.
The rendering and playing module 370 is configured to render and play the decoded data obtained by decoding the audio data and the video data.
The receiving end does not need to control the playback of the multimedia data; it can directly render and play the multimedia data according to the shared play command of the transmitting end, thereby achieving the beneficial effect of playing, in real time, the same multimedia data shared by the transmitting end while the transmitting end plays the multimedia data according to the shared play command.
In the above device for multimedia file sharing, the type receiving module 310 receives the decoding types of the decoders respectively corresponding to the audio data and the video data, as parsed by the transmitting end; the branch receiving module 330 receives one channel of the audio data and the video data sent by the transmitting end, where the transmitting end splits the audio data and the video data starting from the preset playing time into two output channels; the data decoding module 350 decodes the received audio data and video data separately by using decoders of the received decoding types; and the rendering and playing module 370 renders and plays the decoded data obtained by decoding the audio data and the video data. The beneficial effect can thereby be achieved of playing, in real time, the same multimedia data shared by the transmitting end while the transmitting end plays the multimedia data according to the shared play command.
In one embodiment, the rendering and playing module 370 is configured to render and play the decoded data obtained by decoding the audio data and the video data until a command to stop playback is obtained.
Specifically, the rendering and playing module 370 includes:
a rendering and playing unit 371, configured to render and play the decoded data obtained by decoding the audio data and the video data;
a stop acquisition unit 373, configured to acquire a command to stop playback; and
a stop execution unit 375, configured to stop rendering and playing the decoded data according to the command to stop playback.
In this way, stopping the playback of the multimedia data can also be controlled at the receiving end, while the receiving end still does not need to issue playback-control commands related to starting the shared playback or the playback progress.
In another embodiment, the stop execution unit 375 is configured to stop, according to the command to stop playback, the reception of the decoding types and the subsequent steps. This saves resources by avoiding continuing to receive the decoding types, audio data and video data when playback is no longer needed.
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be noted that those of ordinary skill in the art may make various modifications and improvements without departing from the concept of the invention, all of which fall within the scope of protection of the invention. The scope of protection of this patent shall therefore be determined by the appended claims.

Claims (10)

  1. A method for multimedia file sharing, comprising the steps of:
    separating, according to a shared play command, multimedia data of a preset size into audio data and video data, starting from a preset playing time of a multimedia file;
    parsing the decoding types of the decoders respectively corresponding to the audio data and the video data, and sending them to a receiving end; and
    splitting the audio data and the video data starting from the preset playing time into two output channels, wherein one channel of the audio data and the video data is output to a local decoder for decoding, and the other channel of the audio data and the video data is sent to the receiving end.
  2. The method for multimedia file sharing according to claim 1, wherein the step of splitting the audio data and the video data into two output channels is performed by splitting the audio data and the video data into two output channels through a tee (three-way) filter.
  3. The method for multimedia file sharing according to claim 1, wherein, in the step of splitting the audio data and the video data into two output channels, the step of sending the other channel of the audio data and the video data to the receiving end comprises:
    outputting the other channel of the audio data and the video data to a buffer; and
    sending, by the buffer, the audio data and the video data to the receiving end.
  4. A device for multimedia file sharing, comprising:
    a data separation module, configured to separate, according to a shared play command, multimedia data of a preset size into audio data and video data, starting from a preset playing time of a multimedia file;
    a type parsing module, configured to parse the decoding types of the decoders respectively corresponding to the audio data and the video data and send them to a receiving end; and
    a split output module, configured to split the audio data and the video data starting from the preset playing time into two output channels, wherein one channel of the audio data and the video data is output to a local decoder for decoding, and the other channel of the audio data and the video data is sent to the receiving end.
  5. The device for multimedia file sharing according to claim 4, wherein the split output module is configured to split the audio data and the video data into two output channels through a tee (three-way) filter.
  6. The device for multimedia file sharing according to claim 4, wherein the split output module comprises:
    a branch output unit, configured to output the other channel of the audio data and the video data to a buffer; and
    a buffer sending unit, configured to cause the buffer to send the audio data and the video data to the receiving end.
  7. A method for multimedia file sharing, comprising the steps of:
    receiving the decoding types of the decoders respectively corresponding to audio data and video data, as parsed by a sending end;
    receiving one channel of the audio data and the video data sent by the sending end, where the sending end splits the audio data and the video data starting from the preset playing time into two output channels;
    decoding the received audio data and the video data separately by using decoders of the received decoding types; and
    rendering and playing the decoded data obtained by decoding the audio data and the video data.
  8. The method for multimedia file sharing according to claim 7, wherein the step of rendering and playing the decoded data obtained by decoding the audio data and the video data consists of rendering and playing the decoded data obtained by decoding the audio data and the video data until a command to stop playback is obtained.
  9. A device for multimedia file sharing, comprising:
    a type receiving module, configured to receive the decoding types of the decoders respectively corresponding to audio data and video data, as parsed by a sending end;
    a branch receiving module, configured to receive one channel of the audio data and the video data sent by the sending end, where the sending end splits the audio data and the video data starting from a preset playing time into two output channels;
    a data decoding module, configured to decode the received audio data and the video data separately by using decoders of the received decoding types; and
    a rendering and playing module, configured to render and play the decoded data obtained by decoding the audio data and the video data.
  10. The device for multimedia file sharing according to claim 9, wherein the rendering and playing module is configured to render and play the decoded data obtained by decoding the audio data and the video data until a command to stop playback is obtained.
PCT/CN2016/113211 2016-04-29 2016-12-29 Method and device for multimedia file sharing WO2017185798A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610286921.8 2016-04-29
CN201610286921.8A CN105959778B (zh) 2016-04-29 2016-04-29 Method and device for multimedia file sharing

Publications (1)

Publication Number Publication Date
WO2017185798A1 true WO2017185798A1 (zh) 2017-11-02

Family

ID=56913569

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/113211 WO2017185798A1 (zh) 2016-04-29 2016-12-29 多媒体文件分享的方法及装置

Country Status (2)

Country Link
CN (1) CN105959778B (zh)
WO (1) WO2017185798A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105959778B (zh) * 2016-04-29 2019-06-11 广州视睿电子科技有限公司 Method and device for multimedia file sharing
CN108632718B (zh) * 2018-04-11 2021-09-21 维沃移动通信有限公司 Audio sharing method and system
CN111355960B (zh) * 2018-12-21 2021-05-04 北京字节跳动网络技术有限公司 Method and device for synthesizing a video file, mobile terminal and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101426443B1 (ko) * 2007-04-25 2014-08-05 엘지전자 주식회사 Mobile communication terminal and operation method thereof
CN104159139A (zh) * 2014-08-25 2014-11-19 小米科技有限责任公司 Multimedia synchronization method and device
CN104301767A (zh) * 2014-09-29 2015-01-21 四川长虹电器股份有限公司 Method for playing video on a mobile phone in synchronization with a television
CN104581367A (zh) * 2015-01-04 2015-04-29 华为技术有限公司 Method and device for sharing multimedia content
CN104821929A (zh) * 2014-03-21 2015-08-05 腾讯科技(北京)有限公司 Multimedia data sharing method and terminal
CN105142009A (zh) * 2015-07-31 2015-12-09 深圳Tcl数字技术有限公司 Audio and video playback control method and device
CN105959778A (zh) * 2016-04-29 2016-09-21 广州视睿电子科技有限公司 Method and device for multimedia file sharing

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100744785B1 (ko) * 2005-11-21 2007-08-02 엘지전자 주식회사 Data service method via a BIFS channel, and digital multimedia broadcasting receiving terminal
CN202077141U (zh) * 2011-05-17 2011-12-14 郭荣玉 Digital surveillance network application system
CN103491333B (zh) * 2013-09-11 2017-01-04 江苏中科梦兰电子科技有限公司 Video splitting method applied to one-to-many video broadcasting

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101426443B1 (ko) * 2007-04-25 2014-08-05 엘지전자 주식회사 Mobile communication terminal and operation method thereof
CN104821929A (zh) * 2014-03-21 2015-08-05 腾讯科技(北京)有限公司 Multimedia data sharing method and terminal
CN104159139A (zh) * 2014-08-25 2014-11-19 小米科技有限责任公司 Multimedia synchronization method and device
CN104301767A (zh) * 2014-09-29 2015-01-21 四川长虹电器股份有限公司 Method for playing video on a mobile phone in synchronization with a television
CN104581367A (zh) * 2015-01-04 2015-04-29 华为技术有限公司 Method and device for sharing multimedia content
CN105142009A (zh) * 2015-07-31 2015-12-09 深圳Tcl数字技术有限公司 Audio and video playback control method and device
CN105959778A (zh) * 2016-04-29 2016-09-21 广州视睿电子科技有限公司 Method and device for multimedia file sharing

Also Published As

Publication number Publication date
CN105959778B (zh) 2019-06-11
CN105959778A (zh) 2016-09-21

Similar Documents

Publication Publication Date Title
US10930318B2 (en) Gapless video looping
CN109168078B (zh) 一种视频清晰度切换方法及装置
US9282287B1 (en) Real-time video transformations in video conferences
EP3562163B1 (en) Audio-video synthesis method and system
US7669206B2 (en) Dynamic redirection of streaming media between computing devices
US20170324792A1 (en) Dynamic track switching in media streaming
KR101614862B1 (ko) 멀티미디어 비디오 데이터의 송신, 수신 방법 및 대응되는 장치
US10623454B2 (en) System and method for multimedia redirection for cloud desktop conferencing
US20150244658A1 (en) System and method for efficiently mixing voip data
WO2017185798A1 (zh) Method and device for multimedia file sharing
US11727940B2 (en) Autocorrection of pronunciations of keywords in audio/videoconferences
WO2018103696A1 (zh) Media file playback method, server, client and system
US20120169929A1 (en) Method And Apparatus For Processing A Video Signal
WO2017071428A1 (zh) Fast-forward and rewind processing method and terminal
CN111541905B (zh) Live-streaming method and device, computer equipment and storage medium
CN106604115B (zh) Video playback control device and method
WO2019118890A1 (en) Method and system for cloud video stitching
US9813462B2 (en) Unified dynamic executable media playlists across connected endpoints
CN104853224B (zh) Video data processing method and device
US20190036838A1 (en) Delivery of Multimedia Components According to User Activity
JP7395766B2 (ja) Httpを介した動的適応ストリーミングのための方法および装置
US11924523B2 (en) Method and system for midstream filtering of audio and video content
WO2015196571A1 (zh) Multimedia playback method and system, server and storage medium
Maunero Design And Development Of An Architecture For Real-time Enhancement Of TV Streams
AU2020256339A1 (en) Generating media programs configured for seamless playback

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16900302

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 16900302

Country of ref document: EP

Kind code of ref document: A1