CN115086282A - Video playing method, device and storage medium - Google Patents

Video playing method, device and storage medium

Info

Publication number
CN115086282A
CN115086282A
Authority
CN
China
Prior art keywords
data
video
audio
media slice
decoding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110281970.3A
Other languages
Chinese (zh)
Inventor
李兴广
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Xunlei Network Technology Co Ltd
Original Assignee
Shenzhen Xunlei Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Xunlei Network Technology Co Ltd filed Critical Shenzhen Xunlei Network Technology Co Ltd
Priority to CN202110281970.3A
Publication of CN115086282A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302 Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456 Structuring of content by decomposing the content in the time domain, e.g. in time segments

Abstract

The application discloses a video playing method, a video playing device, and a storage medium. In this way, audio data and video data can be processed separately, enhancing the ability to edit audio and video; and because the media data is sliced, playback is smoother.

Description

Video playing method, device and storage medium
Technical Field
The present application relates to the field of audio and video data processing technologies, and in particular, to a video playing method, device, and storage medium.
Background
With the development of network technology, the demand for playing audio and video at the web end keeps growing. At present, the main schemes for playing audio and video at the web end are the Flash plug-in, the HTML5 video tag, and server-side decoding.
The Flash plug-in cannot support H.265, and mainstream browsers no longer update the plug-in and are gradually withdrawing support for it; the video coding types that the HTML5 video tag can decode are limited, and it cannot support H.265, for example; after decoding at the server side, the data volume is too large and occupies excessive bandwidth; in addition, in audio/video currently played at the web end, the audio and the video are bound together, so the audio or the video cannot be edited or switched independently.
Disclosure of Invention
The technical problem mainly solved by the present application is to provide a video playing method, device, and storage medium that can process audio data and video data separately and enhance the ability to edit audio and video.
To solve this technical problem, the application adopts the following technical scheme: a video playing method comprising obtaining current media slice data, decapsulating and decoding the current media slice data to obtain video data and audio data, and synchronously playing the audio data and the video data.
An index file of video file slice positions is obtained, and media slice data are obtained sequentially according to the index file.
After the index file of video file slice positions is obtained, a first thread and a second thread are created, so that media slice data are obtained through the first thread and the obtained current media slice data are decapsulated and decoded through the second thread.
The first thread and the second thread work in parallel, so that while the second thread decapsulates and decodes the current media slice data, the first thread obtains the next media slice data according to the index file.
A first object and a second object are created; the current media slice data is decapsulated into video encoded data and audio encoded data through the first object, and the video encoded data and the audio encoded data are decoded into video data and audio data through the second object.
The video encoded data is an H265-encoded video file, and decapsulating and decoding the current media slice data comprises decapsulating the media slice data into video encoded data and audio encoded data through demux.js, and decoding the video encoded data and the audio encoded data into video data and audio data through ffmpeg.wasm.
The video file is a TS video file, and decoding the video encoded data and the audio encoded data into video data and audio data through the second object comprises decoding and converting the video encoded data of the media slice data into yuv data using WebAssembly, and drawing the yuv data into pictures using yuv-canvas.
Decoding the video encoded data and the audio encoded data into video data and audio data through the second object further comprises decoding the audio encoded data of the media slice data into audio data using the Web Audio API.
The first object and the second object operate in parallel, so that while the second object decodes the video encoded data and audio encoded data decapsulated from the current media slice into video data and audio data, the first object decapsulates the next media slice data.
Synchronously playing the audio data and the video data comprises controlling the video data to follow the timestamps of the audio data when rendering pictures.
The beneficial effects of this application are as follows. Different from the prior art, the method obtains current media slice data, decapsulates and decodes it to obtain video data and audio data, and plays the audio data and video data synchronously. Audio/video data can thus be separated into audio data and video data; in particular, for multi-language videos the audio or the video can be switched and edited independently, which enhances the ability to edit audio and video.
Drawings
Fig. 1 is a schematic flowchart of an embodiment of a video playing method according to the present application;
Fig. 2 is a schematic flowchart of another embodiment of a video playing method according to the present application;
Fig. 3 is a schematic structural diagram of a video playback device in an embodiment of the present application;
Fig. 4 is a schematic structural diagram of a computer-readable storage medium in an embodiment of the present application;
Fig. 5 is a schematic structural diagram of a video playback device in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions, and effects of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments.
In order to facilitate understanding of the technical solutions provided in the embodiments of the present application, the technical terms used in the embodiments of the present application will be briefly described below.
Coding formats: there are typically two coding formats, a video coding format and an audio coding format. The video coding format, also called a video coding specification, compresses original video by compression coding, because raw video data is very large and inconvenient to transmit and store. The video coding format defines the specification of video data during storage and transmission; common video compression formats are H.264 and H.265.
The audio coding format, also called an audio coding specification, compresses the original audio data and thereby defines the specification of audio data during storage and transmission. Common audio compression formats are AAC and MP3.
The packaging format (also called a container) packs compression-encoded video data and audio data into one file, such as TS, AVI, RMVB, or MP4.
It should be noted that the method provided in the embodiments of the present application may be applied to any web-end scenario and is not limited to a web-end player. For ease of understanding, application to a web-end player is taken as an example below.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating an embodiment of a video playing method according to the present application, where the video playing method includes the following steps:
step S101: acquiring current media slice data;
The Web end sends a network data request to the server to obtain the current media slice data. Media slicing divides the media data into multiple slices: when a user watches one segment, that segment is loaded, giving a seamless playback experience, and when the user pauses or leaves, loading stops, which saves considerable bandwidth. Specifically, the media slice data is encoded and encapsulated audio/video data. There are many encapsulation formats, e.g., MP4, MKV, RMVB, TS, FLV, and AVI; their function is to put the compression-encoded audio and video data together according to a certain format.
Step S102: decapsulating and decoding current media slice data to obtain video data and audio data;
The acquired media slice data is decapsulated and decoded to obtain video data and audio data. Decapsulation is the inverse process of encapsulation: the information in the packet header is processed and the service data in the payload is extracted, separating the encapsulated input data into audio compression-encoded data and video compression-encoded data. Decoding turns the compressed video/audio data back into uncompressed raw video/audio data. Decoding can be soft or hard: soft decoding has the CPU decode the video in software and then calls the GPU to render and composite it before display on the screen, while hard decoding performs the video decoding task independently on a dedicated daughter-card device, without the aid of the CPU.
Step S103: synchronously playing the audio data and the video data.
The video data and audio data obtained by decapsulating and decoding the media slice data are played synchronously, which completes video playback at the Web end.
With the above method, audio/video data can be separated into audio data and video data; in particular, for multi-language videos the audio or the video can be switched and edited independently, enhancing the ability to edit audio and video. And because the media data is sliced, a segment is loaded only when the user watches it, giving a seamless playback experience; when the user pauses or leaves, loading stops, saving considerable bandwidth.
In one embodiment, the media slice data is decapsulated by the demux.js class library into video encoded data and audio encoded data, and decoding is implemented by the wasm soft decoder produced by packaging ffmpeg, which decodes the video encoded data and the audio encoded data into video data and audio data; this soft decoding has relatively high CPU usage.
Specifically, decoding the media slice data includes decoding and converting the video encoded data of the media slice data into yuv data using WebAssembly, and rendering the yuv data into pictures using yuv-canvas. WebAssembly is a bytecode standard: it runs in the browser as bytecode on a virtual machine, and compilers such as Emscripten can compile strongly typed languages such as C++, Golang, Rust, and Kotlin into WebAssembly bytecode (.wasm files). So WebAssembly is not assembly. In a browser that does not support the required containers and codecs, an efficient video decoding module (C/C++ code) is compiled into WebAssembly, and an RTP media stream is decoded into yuv data in real time. The web end converts the yuv video data into rgb data using WebGL and then draws the video picture on a canvas; an audio/video timestamp for each decoded frame is generated during decoding, so the web end can achieve accurate audio/video synchronization during playback. Specifically, decoding the media slice data also includes decoding the audio encoded data of the media slice data into audio data using the Web Audio API and playing it.
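For illustration, the following TypeScript sketch draws one decoded 4:2:0 frame using the yuv-canvas and yuv-buffer packages; the frame dimensions and the idea that a wasm decoder hands back separate y/u/v planes are assumptions for the example, not details taken from this application.

```typescript
// Minimal sketch, assuming the yuv-canvas and yuv-buffer npm packages
// (type declarations for them are assumed available).
import YUVCanvas from 'yuv-canvas';
import YUVBuffer from 'yuv-buffer';

const canvas = document.getElementById('player') as HTMLCanvasElement;
const sink = YUVCanvas.attach(canvas); // uses WebGL when available, 2D canvas otherwise

// Describe a hypothetical 1280x720 4:2:0 frame: chroma planes are half-size.
const format = YUVBuffer.format({
  width: 1280,
  height: 720,
  chromaWidth: 640,
  chromaHeight: 360,
});

// Called once per frame the (hypothetical) wasm decoder emits.
function drawDecodedFrame(y: Uint8Array, u: Uint8Array, v: Uint8Array): void {
  const frame = YUVBuffer.frame(
    format,
    { bytes: y, stride: 1280 }, // luma plane
    { bytes: u, stride: 640 },  // chroma U plane
    { bytes: v, stride: 640 },  // chroma V plane
  );
  sink.drawFrame(frame); // converts yuv to rgb and paints the canvas
}
```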
Specifically, synchronously playing the audio data and the video data includes controlling the video data to follow the timestamps of the audio data when rendering pictures. After the audio data is fed into a player built on the Web Audio API, the timestamp of the currently playing audio can be obtained through the Web Audio API, and video frames are synchronized against this timestamp as the time reference: if the current video frame is behind, it is rendered immediately; if it is ahead, rendering is delayed.
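The rule just described (audio is the master clock: late video frames render immediately, early ones wait) can be sketched as follows; the frame-queue shape and the draw callback are assumptions for the example.

```typescript
// Hedged sketch of audio-clock-driven video sync.
interface DecodedFrame {
  pts: number;       // presentation timestamp, in seconds
  draw: () => void;  // e.g. wraps the yuv-canvas drawFrame call above
}

function startSyncLoop(audioCtx: AudioContext, queue: DecodedFrame[]): void {
  const tick = (): void => {
    const audioClock = audioCtx.currentTime; // the running Web Audio clock
    // Video on time or lagging behind the audio: render immediately.
    while (queue.length > 0 && queue[0].pts <= audioClock) {
      queue.shift()!.draw();
    }
    // A frame with pts > audioClock is early: leave it queued (i.e. delay it).
    requestAnimationFrame(tick);
  };
  requestAnimationFrame(tick);
}
```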
The present application is explained step by step below, taking a Web player playing an MPEG-TS (TS) video file as an example. Referring to fig. 2, fig. 2 is a schematic flowchart of a video playing method according to another embodiment of the present application; the method includes the following steps:
s201: the Web player creates a worker thread;
A thread is the smallest unit that the operating system can schedule for computation. It is contained within a process and is the actual unit of execution in the process. A thread is a single sequential flow of control within a process; multiple threads can run concurrently in a process, each performing a different task. The role of a Worker thread is to create a multi-threaded environment: the main thread can create Worker threads and assign tasks to them. While the main thread runs, the Worker threads run in the background without interfering with it; when a Worker thread finishes its computing task, it returns the result to the main thread. The advantage is that computation-intensive or high-latency tasks are carried by Worker threads, so the main thread (usually responsible for UI interaction) stays fluid and is not blocked or slowed down. In this embodiment, the web player initializes and creates two worker threads, httpWorker and demuxWorker. httpWorker is responsible for network data requests, i.e., the Web player uses it to obtain media slice data from the server; demuxWorker is responsible for decapsulating and decoding the media slice data obtained by the Web player. Specifically, on initialization demuxWorker creates two objects, demuxer and decode, to decapsulate and decode the obtained media slice data. demuxer decapsulates the media slice data through the demux.js class library to obtain video encoded data and audio encoded data; decode decodes the video encoded data and audio encoded data, where decoding is commonly done by soft decoding (ffmpeg) or hard decoding (MediaCodec, MediaPlayer). Soft decoding has the CPU decode the video in software, while hard decoding hands the video data originally processed by the CPU over to the GPU. Hard decoding is very efficient, reduces CPU load, and has low power consumption and little heat generation; however, because hard decoding arrived later and software and driver support for it is limited, compatibility problems are common, and hard decoding is also less capable for filters, subtitles, and picture quality. Soft decoding must perform a large amount of computation on the video data, so it places high demands on CPU performance, and the heavy computation brings problems such as low conversion efficiency and high heat generation; however, soft decoding needs no special hardware support, so compatibility is very good, and it supports rich filters, subtitles, and picture-processing optimizations, achieving better picture effects. In this embodiment, decode implements decoding through the wasm soft decoder generated by packaging ffmpeg.
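A minimal main-thread sketch of this two-worker arrangement follows; the worker script names and message shapes are assumptions for illustration, not taken from this application.

```typescript
// Main thread: httpWorker fetches slices, demuxWorker decapsulates and decodes.
const httpWorker = new Worker('httpWorker.js');
const demuxWorker = new Worker('demuxWorker.js');

httpWorker.onmessage = (e: MessageEvent<ArrayBuffer>) => {
  // Forward the fetched TS slice; listing the buffer as a transferable moves
  // it between threads instead of copying it.
  demuxWorker.postMessage({ type: 'slice', data: e.data }, [e.data]);
};

demuxWorker.onmessage = (e: MessageEvent) => {
  // Decoded audio/video arrives here for playback on the main thread.
};

// Kick off the first network request.
httpWorker.postMessage({ type: 'fetchSlice', index: 0 });
```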
Step S203: request the TS video slice based on a range request.
The server generates an m3u8 file based on the byte positions of the file slices, but does not generate the slice files themselves. Specifically, an m3u8 file is an m3u file in UTF-8 encoding. An m3u file records a plain-text index: when it is opened, playback software does not play the file itself but looks up the network addresses of the corresponding audio/video files from the index and plays them online. This can be used for multi-bitrate adaptation: the client automatically selects a file with a bitrate suited to the network bandwidth, ensuring smooth streaming. Specifically, httpWorker requests the slice data of the TS file according to the range parameters recorded in the m3u8 file, via the event agent and postMessage.
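As a sketch of such a range request (parsing of the m3u8 index itself is omitted; the byte offsets are assumed to come from it):

```typescript
// Fetch one TS slice by HTTP Range request.
async function fetchSlice(url: string, start: number, end: number): Promise<ArrayBuffer> {
  const resp = await fetch(url, {
    headers: { Range: `bytes=${start}-${end}` }, // inclusive byte range
  });
  if (resp.status !== 206) {
    // 206 Partial Content is what a server that honours the range returns.
    throw new Error(`range request failed with status ${resp.status}`);
  }
  return resp.arrayBuffer();
}
```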
Step S205: decapsulate and decode the obtained TS file slice.
After obtaining a TS file slice, the Web player sends it into the demuxWorker thread, where demuxer decapsulates it into audio and video. The audio data is sent to the player built on the Web Audio API, through which the timestamp of the currently playing audio can be obtained; video frames are synchronized against this timestamp as the time reference. The video is decoded and converted into yuv data by WebAssembly, and the yuv data is drawn into pictures using yuv-canvas, yielding a yuv video set. The web end converts the yuv video data into rgb data using WebGL and then draws the video picture on a canvas; an audio/video timestamp for each decoded frame is generated during decoding, so the web end can achieve accurate audio/video synchronization during playback.
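The audio side of this step can be sketched as follows; the interleaved Float32 sample layout is an assumption about the decoder's output, not a detail from this application.

```typescript
// Hand decoded PCM to the Web Audio API, scheduled on the audio clock.
function playPcm(
  ctx: AudioContext,
  pcm: Float32Array,   // interleaved samples (assumed layout)
  channels: number,
  sampleRate: number,
  startAt: number,     // start time on the AudioContext clock, in seconds
): void {
  const frames = pcm.length / channels;
  const buffer = ctx.createBuffer(channels, frames, sampleRate);
  for (let ch = 0; ch < channels; ch++) {
    const plane = buffer.getChannelData(ch);
    for (let i = 0; i < frames; i++) {
      plane[i] = pcm[i * channels + ch]; // de-interleave one channel
    }
  }
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(ctx.destination);
  source.start(startAt);
}
```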
Step S207: synchronously play the audio data and the video data.
After the audio data is fed into the player built on the Web Audio API, the timestamp of the currently playing audio can be obtained through the Web Audio API, and video frames are synchronized against this timestamp as the time reference: if the current video frame is behind, it is rendered immediately; if it is ahead, rendering is delayed.
Further, loading of the next TS slice file can be triggered by the change in the number of frames in the yuv video set: when the number of frames in the set falls below a certain threshold, the next TS slice file is requested, ensuring that the Web player plays audio and video continuously.
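A sketch of that low-water-mark check (the threshold value and message shape are assumptions):

```typescript
const LOW_WATER_MARK = 15; // frames left in the yuv set before prefetching

function maybeRequestNextSlice(
  frameQueue: unknown[],   // the yuv video set
  httpWorker: Worker,
  nextSliceIndex: number,
): number {
  if (frameQueue.length < LOW_WATER_MARK) {
    httpWorker.postMessage({ type: 'fetchSlice', index: nextSliceIndex });
    return nextSliceIndex + 1; // advance so the same slice is not requested twice
  }
  return nextSliceIndex;
}
```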
In the above embodiment, the m3u8 file based on file byte slice positions is generated at the server, but no specific slice files are generated; since the m3u8 file records a plain-text index, the server does not need to keep a large number of slice files, which saves space. After obtaining TS file slice data, the Web player decapsulates and decodes it to obtain video data and audio data and then plays them synchronously, so TS video files and H265-format video can be played directly at the web end; when a played TS video seeks, playback can start directly from the corresponding slice, reducing waiting time and improving responsiveness. In this scheme the player is built on the Web Audio API, and an API for controlling the player can be exposed externally, making it easy to customize UI controls. After the TS file data is separated into audio data and video data, either can be processed independently: for multi-language videos the audio can be switched on its own, without switching the audio and video as a whole; and the picture quality, speed, and so on of the audio and video can be controlled, enhancing the ability to edit audio and video.
Referring to fig. 3, fig. 3 is a schematic structural diagram of a video playback device according to an embodiment of the present disclosure. Specifically, the video playing apparatus includes an obtaining module 31 for obtaining media slice data generated by the server, a parsing module 32 for decapsulating and decoding the media slice data, and a playing module 33 for playing video data and audio data.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present disclosure. The computer-readable storage medium 41 of the embodiments of the present application stores instructions/program data 42 which, when executed, implement the methods provided by any embodiment of the video playing method of the present application and any non-conflicting combination thereof. The instructions/program data 42 may form a program file stored in the storage medium 41 in the form of a software product, so as to enable a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods in the embodiments of the present application. The aforementioned storage medium 41 includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk, or terminal devices such as a computer, a server, a mobile phone, or a tablet.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a video playback device in an embodiment of the present application. In this embodiment, the video playback device 50 includes a processor 51.
The processor 51 may also be referred to as a CPU (Central Processing Unit). The processor 51 may be an integrated circuit chip having signal processing capabilities. The processor 51 may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. A general purpose processor may be a microprocessor or the processor 51 may be any conventional processor or the like.
Video playback device 50 may further include a memory (not shown) for storing instructions and data necessary for processor 51 to operate.
The processor 51 is configured to execute instructions to implement the methods provided in any embodiment of the video playing method of the present application and any non-conflicting combination thereof.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes performed by the present specification and drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A video playback method, comprising:
acquiring an index file of a video file slice position to sequentially acquire media slice data according to the index file;
acquiring current media slice data, and performing decapsulation and decoding processing on the current media slice data to obtain video data and audio data;
and synchronously playing the audio data and the video data.
2. The video playing method according to claim 1, wherein the obtaining current media slice data and performing de-encapsulation and decoding processing on the current media slice data comprises:
creating a first thread and a second thread, so as to acquire media slice data by using the first thread, and performing decapsulation and decoding processing on the acquired current media slice data by using the second thread.
3. The video playback method according to claim 2, wherein the first thread and the second thread operate in parallel, such that the second thread decapsulates and decodes the acquired current media slice data while the first thread acquires the next media slice data according to the index file.
4. The video playback method of claim 1,
the de-encapsulating and decoding processing of the current media slice data comprises:
creating a first object and a second object;
decapsulating the current media slice data into video encoded data and audio encoded data by the first object;
decoding the encoded video data and the encoded audio data into video data and audio data through the second object.
5. The video playing method of claim 4, wherein the video encoded data is an H265-encoded video file, and the decapsulating and decoding of the current media slice data comprises:
decapsulating the media slice data into video encoded data and audio encoded data through demux.js; and
decoding the video encoded data and the audio encoded data into video data and audio data through ffmpeg.wasm.
6. The video playing method according to claim 4, wherein the video file is a TS video file, and the decoding the video encoding data and the audio encoding data into video data and audio data by the second object comprises:
decoding and converting the video coding data of the media slice data into yuv data by using Web Assembly, and drawing the yuv data into pictures by using yuv-canvas.
7. The video playback method according to claim 4, wherein the first object and the second object operate in parallel, such that the second object decodes the video encoded data and the audio encoded data decapsulated from the current media slice into video data and audio data while the first object decapsulates the next media slice data.
8. The video playback method of claim 1,
the synchronized playing of the audio data and the video data comprises:
and controlling the video data to synchronize the time stamp of the audio data to play the picture.
9. A video player comprising a processor for executing instructions to implement the video playing method of any one of claims 1-8.
10. A computer-readable storage medium for storing instructions/program data executable to implement the video playback method of any of claims 1-8.
CN202110281970.3A 2021-03-16 2021-03-16 Video playing method, device and storage medium Pending CN115086282A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110281970.3A CN115086282A (en) 2021-03-16 2021-03-16 Video playing method, device and storage medium

Publications (1)

Publication Number Publication Date
CN115086282A (en) 2022-09-20

Family

ID=83246302

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110281970.3A Pending CN115086282A (en) 2021-03-16 2021-03-16 Video playing method, device and storage medium

Country Status (1)

Country Link
CN (1) CN115086282A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination