CN105142037A - Distributed transcoded audio and video synthesis method and system - Google Patents
- Publication number
- CN105142037A (application number CN201510574975.XA)
- Authority
- CN
- China
- Prior art keywords
- video
- audio
- coded format
- coding
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
- H04N21/8547—Content authoring involving timestamps for synchronizing content
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/236—Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
- H04N21/2368—Multiplexing of audio and video streams
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/434—Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams, extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
- H04N21/4341—Demultiplexing of audio and video streams
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/439—Processing of audio elementary streams
- H04N21/4398—Processing of audio elementary streams involving reformatting operations of audio signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
- H04N21/440218—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
Abstract
The invention discloses a distributed transcoding audio and video synthesis method and system. The method comprises the steps of: obtaining an audio-video file; demultiplexing the audio-video file to obtain an audio file encoded in a first coding format and a video file encoded in a second coding format; encoding the audio file in a third coding format and the video file in a fourth coding format, obtaining respectively an audio file encoded in the third coding format and a video file encoded in the fourth coding format; encapsulating the third-format audio file and the fourth-format video file each in its own container; and multiplexing the encapsulated audio and video files. The third coding format differs from the first coding format, and the fourth coding format differs from the second coding format. By encoding and encapsulating the audio and video files in this way, the invention allows them to be decoded in the time order of the stream, speeding up audio-video synchronization.
Description
Technical field
The present application relates to audio and video transcoding technology, and in particular to a distributed transcoding audio and video synthesis method and system.
Background technology
At present, in order to use or analyze an audio-video file, it is often necessary to convert it from one coding format to another. During transcoding, however, the audio and video are separated and transcoded independently. When the audio and video are then re-synthesized, video transcoding takes longer than audio transcoding, so audio or video data easily pile up; that is, common transcoding approaches may cause choppy playback, loss of audio-video synchronization, or excessive consumption of player resources. For example, when converting an audio-video file from the H.264 format to the H.265 format, audio-video desynchronization arises; and because the H.265 coding algorithm is complex and computationally intensive, transcoding is slow, making the desynchronization problem even more pronounced.
Summary of the invention
In view of this, the technical problem to be solved by this application is to provide a distributed transcoding audio and video synthesis method and system, addressing the problems of common transcoding approaches in the prior art: choppy playback, loss of audio-video synchronization, and excessive consumption of player resources.
To solve the above technical problems, the application provides the following technical solutions:
The invention provides a distributed transcoding audio and video synthesis method, characterized by comprising: obtaining an audio-video file; demultiplexing the audio-video file to obtain an audio file encoded in a first coding format and a video file encoded in a second coding format; encoding the audio file of the first coding format in a third coding format and the video file of the second coding format in a fourth coding format, to obtain respectively an audio file encoded in the third coding format and a video file encoded in the fourth coding format; encapsulating the audio file of the third coding format and the video file of the fourth coding format each in its own container; and multiplexing the encapsulated audio and video files; wherein the third coding format differs from the first coding format, and the fourth coding format differs from the second coding format.
The invention also provides a distributed transcoding audio and video synthesis system, characterized by comprising: an audio-video module, a demultiplexing module, an audio encoding module, a video encoding module, an audio encapsulation module, a video encapsulation module and a multiplexing module; wherein the audio-video module is coupled with the demultiplexing module and provides the audio-video file; the demultiplexing module is coupled with the audio-video module, the audio encoding module and the video encoding module, and demultiplexes the audio-video file to obtain an audio file encoded in a first coding format and a video file encoded in a second coding format; the audio encoding module is coupled with the demultiplexing module and the audio encapsulation module, and encodes the audio file of the first coding format in a third coding format to obtain an audio file encoded in the third coding format; the video encoding module is coupled with the demultiplexing module and the video encapsulation module, and encodes the video file of the second coding format in a fourth coding format to obtain a video file encoded in the fourth coding format; the audio encapsulation module is coupled with the audio encoding module and the multiplexing module, and encapsulates the audio file of the third coding format; the video encapsulation module is coupled with the video encoding module and the multiplexing module, and encapsulates the video file of the fourth coding format; and the multiplexing module is coupled with the audio encapsulation module and the video encapsulation module, and multiplexes the encapsulated audio and video files.
Compared with the prior art, the method and system described in this application achieve the following effects:
First, by packetizing and encapsulating the audio and video separately, the invention allows audio-video decoding to proceed in the time order of the stream.
Second, the solution multiplexes the audio-video file by comparing frame timestamps, which consumes fewer resources and speeds up audio-video synchronization.
Accompanying drawing explanation
The accompanying drawings described here are provided for further understanding of the application and form part of it; the schematic embodiments of the application and their description serve to explain the application and do not improperly limit it. In the drawings:
Fig. 1 is a flowchart of the distributed transcoding audio and video synthesis method of the invention;
Fig. 2 is another flowchart of the distributed transcoding audio and video synthesis method of the invention;
Fig. 3 is a schematic structural diagram of the distributed transcoding audio and video synthesis system of the invention;
Fig. 4 is another schematic structural diagram of the distributed transcoding audio and video synthesis system of the invention.
Embodiment
Certain terms are used throughout the specification and claims to refer to particular components. Those skilled in the art will appreciate that hardware manufacturers may refer to the same component by different names. This specification and the claims distinguish components not by name but by function. "Comprising", as used throughout the specification and claims, is an open term and should be interpreted as "including but not limited to". "Substantially" means within an acceptable error range within which a person skilled in the art can solve the technical problem and essentially achieve the stated technical effect. Furthermore, "coupled" here includes any direct or indirect means of electrical coupling; thus, if a first device is described as coupled to a second device, the first device may be electrically coupled to the second device directly, or indirectly via other devices or coupling means. The description that follows sets out preferred embodiments of the application; it is intended to illustrate the general principles of the application and not to limit its scope, which is defined by the appended claims.
Embodiment 1
Figure 1 shows a specific embodiment of the distributed transcoding audio and video synthesis method of this application. The method of this embodiment comprises the following steps:
Step 101: obtain an audio-video file;
Step 102: demultiplex the audio-video file to obtain an audio file encoded in a first coding format and a video file encoded in a second coding format;
Step 103: encode the audio file of the first coding format in a third coding format to obtain an audio file encoded in the third coding format;
Step 104: encode the video file of the second coding format in a fourth coding format to obtain a video file encoded in the fourth coding format;
Step 105: encapsulate the audio file of the third coding format in its own container;
Step 106: encapsulate the video file of the fourth coding format in its own container;
Step 107: multiplex the encapsulated audio and video files.
The third coding format differs from the first coding format, and the fourth coding format differs from the second coding format.
When the audio-video file contains only an audio file, only the audio file is encoded, in the third coding format, and encapsulated in a container.
When the audio-video file contains only a video file, only the video file is encoded, in the fourth coding format, and encapsulated in a container.
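The seven steps above can be sketched as a single pipeline function. The following is a minimal Python sketch, not from the patent: the names `demux`, `encode_audio`, `encode_video`, `wrap_audio`, `wrap_video` and `mux` are hypothetical stand-ins for real demultiplexer, encoder, container and multiplexer components, passed in as parameters.

```python
def transcode_and_synthesize(av_file, third_fmt, fourth_fmt,
                             demux, encode_audio, encode_video,
                             wrap_audio, wrap_video, mux):
    """Sketch of steps 101-107: demultiplex, re-encode both streams,
    encapsulate each in its own container, then multiplex."""
    audio, video = demux(av_file)                  # steps 101-102
    audio_3 = encode_audio(audio, third_fmt)       # step 103
    video_4 = encode_video(video, fourth_fmt)      # step 104
    packaged_audio = wrap_audio(audio_3)           # step 105
    packaged_video = wrap_video(video_4)           # step 106
    return mux(packaged_audio, packaged_video)     # step 107
```

Passing toy stand-ins for the components makes the data flow visible; a real implementation would plug in actual codec and container libraries at each parameter.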
Embodiment 2
To describe the invention in more detail, Figure 2 shows another specific embodiment of the distributed transcoding audio and video synthesis method of this application. The multiplexing of the encapsulated audio-video file described in Embodiment 1 further comprises the following steps:
Step 201: extract the decode time stamp (DTS) of the current audio frame, then proceed to step 203;
Step 202: extract the decode time stamp of the current video frame, then proceed to step 204;
Step 203: add the audio inter-frame interval to the DTS of the current audio frame to obtain the DTS of the next audio frame;
Step 204: add the video inter-frame interval to the DTS of the current video frame to obtain the DTS of the next video frame;
Step 205: compare the DTS of the next audio frame with the DTS of the next video frame; if the audio DTS is less, proceed to step 206; otherwise, proceed to step 207;
Step 206: request the next frame of audio data from the audio file and append it to the composite file;
Step 207: request the next frame of video data from the video file and append it to the composite file.
Here, requesting an audio frame means parsing the current frame according to its container to obtain the position of the frame data and extracting one frame's worth of data; frames are extracted from the audio file in frame order. Requesting video data proceeds similarly.
The audio-video file obtained in this way is ordered by decode time, so the decoding end can more easily synchronize and play the stream. Decoding at the playback end consumes fewer resources than an ordinary stream does, and when the stream is played online, synchronization can be achieved after downloading fewer frames than an ordinary stream would require.
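Steps 201-207 amount to a two-way merge of the audio and video streams ordered by decode time stamp. The following Python sketch is one illustrative reading of the algorithm; the frame lists and the fixed per-stream inter-frame intervals are assumptions for the example, not part of the patent text.

```python
def interleave_by_dts(audio_frames, video_frames,
                      audio_interval, video_interval):
    """Merge the two streams into one composite list in decode-time
    order (steps 201-207). Each stream's next-frame DTS advances by
    that stream's fixed inter-frame interval; when the DTS values are
    equal, video is taken, matching the 'not less than' branch of
    step 205."""
    composite = []
    ai = vi = 0
    # DTS of each stream's *next* frame (steps 203-204, from DTS 0).
    audio_dts, video_dts = audio_interval, video_interval
    while ai < len(audio_frames) and vi < len(video_frames):
        if audio_dts < video_dts:               # step 205
            composite.append(audio_frames[ai])  # step 206: take audio
            ai += 1
            audio_dts += audio_interval         # step 203
        else:
            composite.append(video_frames[vi])  # step 207: take video
            vi += 1
            video_dts += video_interval         # step 204
    # Drain whichever stream still has frames.
    composite.extend(audio_frames[ai:])
    composite.extend(video_frames[vi:])
    return composite
```

With an audio interval half the video interval, roughly two audio frames land between consecutive video frames, which is exactly the decode-time ordering the paragraph above describes.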
Embodiment 3
To describe the invention in more detail, Figure 3 shows another specific embodiment of the distributed transcoding audio and video synthesis method of this application. The method of this embodiment comprises the following steps:
Step 301: obtain an audio-video file;
Step 302: demultiplex the audio-video file to obtain an audio file encoded in Audio Coding 3 (AC3) and a video file encoded in H.264;
Step 303: encode the AC3-encoded audio file in Advanced Audio Coding (AAC) to obtain an AAC-encoded audio file;
Step 304: encode the H.264-encoded video file in H.265 to obtain an H.265-encoded video file;
Step 305: encapsulate the AAC-encoded audio file in its own container;
Step 306: encapsulate the H.265-encoded video file in its own container;
Step 307: multiplex the encapsulated audio and video files.
The steps of Embodiment 2 can then be used to multiplex the audio-video file; they are not repeated here.
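For the concrete codecs of this embodiment, the inter-frame intervals that drive the Embodiment 2 comparison follow directly from the codec parameters: an AAC frame typically carries 1024 samples, so the audio interval is 1024 divided by the sample rate, while the video interval is the reciprocal of the frame rate. The sample rate (48 kHz) and frame rate (25 fps) below are illustrative assumptions, not values stated in the patent.

```python
# Hypothetical stream parameters (not stated in the patent).
AAC_SAMPLES_PER_FRAME = 1024   # typical AAC frame length in samples
SAMPLE_RATE_HZ = 48000         # assumed audio sample rate
VIDEO_FPS = 25                 # assumed video frame rate

# Inter-frame intervals fed to the DTS comparison of Embodiment 2.
audio_interval_ms = 1000 * AAC_SAMPLES_PER_FRAME / SAMPLE_RATE_HZ
video_interval_ms = 1000 / VIDEO_FPS

print(round(audio_interval_ms, 1))  # prints 21.3
print(round(video_interval_ms, 1))  # prints 40.0
```

Under these assumptions roughly two AAC frames are interleaved per H.265 frame, which is why the composite file alternates short runs of audio frames with single video frames.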
Embodiment 4
To describe the invention in more detail, Figure 4 shows another specific embodiment of the distributed transcoding audio and video synthesis system of this application, comprising: an audio-video module 401, a demultiplexing module 402, an audio encoding module 403, a video encoding module 404, an audio encapsulation module 405, a video encapsulation module 406 and a multiplexing module 407; wherein:
The audio-video module 401 is coupled with the demultiplexing module 402 and provides the audio-video file.
The demultiplexing module 402 is coupled with the audio-video module 401, the audio encoding module 403 and the video encoding module 404; it demultiplexes the audio-video file to obtain an audio file encoded in a first coding format and a video file encoded in a second coding format.
The audio encoding module 403 is coupled with the demultiplexing module 402 and the audio encapsulation module 405; it encodes the audio file of the first coding format in a third coding format to obtain an audio file encoded in the third coding format.
The video encoding module 404 is coupled with the demultiplexing module 402 and the video encapsulation module 406; it encodes the video file of the second coding format in a fourth coding format to obtain a video file encoded in the fourth coding format.
The audio encapsulation module 405 is coupled with the audio encoding module 403 and the multiplexing module 407; it encapsulates the audio file of the third coding format.
The video encapsulation module 406 is coupled with the video encoding module 404 and the multiplexing module 407; it encapsulates the video file of the fourth coding format.
The multiplexing module 407 is coupled with the audio encapsulation module 405 and the video encapsulation module 406; it multiplexes the encapsulated audio and video files.
The third coding format differs from the first coding format, and the fourth coding format differs from the second coding format.
The multiplexing of the audio-video file by the multiplexing module 407 further comprises: extracting the decode time stamps of the current audio frame and the current video frame; adding the audio inter-frame interval to the DTS of the current audio frame to obtain the DTS of the next audio frame; adding the video inter-frame interval to the DTS of the current video frame to obtain the DTS of the next video frame; and comparing the two: if the DTS of the next audio frame is less than that of the next video frame, the next frame of audio data is requested from the audio file and appended to the composite file; otherwise, the next frame of video data is requested from the video file and appended to the composite file.
When the audio-video file provided by the audio-video module 401 contains only an audio file, the audio encoding module 403 encodes only the audio file, in the third coding format, and the audio encapsulation module 405 encapsulates it in a container.
When the audio-video file provided by the audio-video module 401 contains only a video file, the video encoding module 404 encodes only the video file, in the fourth coding format, and the video encapsulation module 406 encapsulates it in a container.
The first coding format is AC3, the second coding format is H.264, the third coding format is AAC, and the fourth coding format is H.265.
As the above embodiments show, the beneficial effects of this application are:
First, by packetizing and encapsulating the audio and video separately, the invention allows audio-video decoding to proceed in the time order of the stream.
Second, the solution multiplexes the audio-video file by comparing frame timestamps, which consumes fewer resources and speeds up audio-video synchronization.
Those skilled in the art will appreciate that embodiments of the application may be provided as a method, an apparatus, or a computer program product. Accordingly, the application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM and optical storage) containing computer-usable program code.
The foregoing illustrates and describes some preferred embodiments of the application. As noted above, however, it is to be understood that the application is not limited to the forms disclosed herein, should not be regarded as excluding other embodiments, and may be used in various other combinations, modifications and environments; it may be changed, within the scope of the inventive concept described herein, in accordance with the above teachings or the skill or knowledge of the related art. Changes and modifications made by those skilled in the art that do not depart from the spirit and scope of the application shall fall within the protection scope of the appended claims.
Claims (10)
1. A distributed transcoding audio and video synthesis method, characterized by comprising:
obtaining an audio-video file;
demultiplexing the audio-video file to obtain an audio file encoded in a first coding format and a video file encoded in a second coding format;
encoding the audio file of the first coding format in a third coding format, and encoding the video file of the second coding format in a fourth coding format, to obtain respectively an audio file encoded in the third coding format and a video file encoded in the fourth coding format;
encapsulating the audio file of the third coding format and the video file of the fourth coding format each in its own container;
multiplexing the encapsulated audio and video files;
wherein the third coding format differs from the first coding format, and the fourth coding format differs from the second coding format.
2. The distributed transcoding audio and video synthesis method according to claim 1, characterized in that multiplexing the encapsulated audio-video file further comprises:
extracting the decode time stamp of the current audio frame and the decode time stamp of the current video frame;
adding the audio inter-frame interval to the decode time stamp of the current audio frame to obtain the decode time stamp of the next audio frame;
adding the video inter-frame interval to the decode time stamp of the current video frame to obtain the decode time stamp of the next video frame;
comparing whether the decode time stamp of the next audio frame is less than that of the next video frame; if so, requesting the next frame of audio data from the audio file and appending it to the composite file; otherwise, requesting the next frame of video data from the video file and appending it to the composite file.
3. The distributed transcoding audio and video synthesis method according to claim 2, characterized in that, when the audio-video file contains only an audio file, only the audio file is encoded, in the third coding format, and encapsulated in a container.
4. The distributed transcoding audio and video synthesis method according to claim 2, characterized in that, when the audio-video file contains only a video file, only the video file is encoded, in the fourth coding format, and encapsulated in a container.
5. The distributed transcoding audio and video synthesis method according to claim 3 or 4, characterized in that the first coding format is Audio Coding 3, the second coding format is H.264, the third coding format is Advanced Audio Coding, and the fourth coding format is H.265.
6. A distributed transcoding audio and video synthesis system, characterized by comprising: an audio-video module, a demultiplexing module, an audio encoding module, a video encoding module, an audio encapsulation module, a video encapsulation module and a multiplexing module; wherein:
the audio-video module is coupled with the demultiplexing module and provides the audio-video file;
the demultiplexing module is coupled with the audio-video module, the audio encoding module and the video encoding module, and demultiplexes the audio-video file to obtain an audio file encoded in a first coding format and a video file encoded in a second coding format;
the audio encoding module is coupled with the demultiplexing module and the audio encapsulation module, and encodes the audio file of the first coding format in a third coding format to obtain an audio file encoded in the third coding format;
the video encoding module is coupled with the demultiplexing module and the video encapsulation module, and encodes the video file of the second coding format in a fourth coding format to obtain a video file encoded in the fourth coding format;
the audio encapsulation module is coupled with the audio encoding module and the multiplexing module, and encapsulates the audio file of the third coding format;
the video encapsulation module is coupled with the video encoding module and the multiplexing module, and encapsulates the video file of the fourth coding format;
the multiplexing module is coupled with the audio encapsulation module and the video encapsulation module, and multiplexes the encapsulated audio and video files;
wherein the third coding format differs from the first coding format, and the fourth coding format differs from the second coding format.
7. The distributed transcoding audio and video synthesis system according to claim 6, characterized in that the multiplexing of the audio-video file by the multiplexing module further comprises: extracting the decode time stamp of the current audio frame and the decode time stamp of the current video frame; adding the audio inter-frame interval to the decode time stamp of the current audio frame to obtain the decode time stamp of the next audio frame; adding the video inter-frame interval to the decode time stamp of the current video frame to obtain the decode time stamp of the next video frame; comparing whether the decode time stamp of the next audio frame is less than that of the next video frame; if so, requesting the next frame of audio data from the audio file and appending it to the composite file; otherwise, requesting the next frame of video data from the video file and appending it to the composite file.
8. The distributed transcoding audio and video synthesis system according to claim 7, characterized in that, when the audio-video file provided by the audio-video module contains only an audio file, the audio encoding module encodes only the audio file, in the third coding format, and the audio encapsulation module encapsulates it in a container.
9. The distributed transcoding audio and video synthesis system according to claim 7, characterized in that, when the audio-video file provided by the audio-video module contains only a video file, the video encoding module encodes only the video file, in the fourth coding format, and the video encapsulation module encapsulates it in a container.
10. The distributed transcoding audio and video synthesis system according to claim 8 or 9, characterized in that the first coding format is Audio Coding 3, the second coding format is H.264, the third coding format is Advanced Audio Coding, and the fourth coding format is H.265.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510574975.XA CN105142037B (en) | 2015-09-10 | 2015-09-10 | A kind of distributed trans-coding audio-video synthetic method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510574975.XA CN105142037B (en) | 2015-09-10 | 2015-09-10 | A kind of distributed trans-coding audio-video synthetic method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105142037A true CN105142037A (en) | 2015-12-09 |
CN105142037B CN105142037B (en) | 2019-07-19 |
Family
ID=54727221
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510574975.XA Active CN105142037B (en) | 2015-09-10 | 2015-09-10 | A kind of distributed trans-coding audio-video synthetic method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105142037B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101247526A (en) * | 2008-03-18 | 2008-08-20 | 天津大学 | Sound volume equalization regulation and its application method based on digital television code stream |
US20120207225A1 (en) * | 2011-02-10 | 2012-08-16 | Media Excel Korea Co., Ltd. | Audio and video synchronizing method in transcoding system |
CN103647970A (en) * | 2013-12-02 | 2014-03-19 | 天脉聚源(北京)传媒科技有限公司 | Audio and video synchronization method and system for distributed transcoding |
CN103873888A (en) * | 2012-12-12 | 2014-06-18 | 深圳市快播科技有限公司 | Live broadcast method of media files and live broadcast source server |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107343218A (en) * | 2017-05-24 | 2017-11-10 | 广东小天才科技有限公司 | A kind of method and device of Video coding |
CN108882010A (en) * | 2018-06-29 | 2018-11-23 | 深圳市九洲电器有限公司 | A kind of method and system that multi-screen plays |
CN110858923A (en) * | 2018-08-24 | 2020-03-03 | 北京字节跳动网络技术有限公司 | Method and device for generating segmented media file and storage medium |
CN110858923B (en) * | 2018-08-24 | 2022-09-06 | 北京字节跳动网络技术有限公司 | Method and device for generating segmented media file and storage medium |
CN109640162A (en) * | 2018-12-25 | 2019-04-16 | 北京数码视讯软件技术发展有限公司 | Code stream conversion method and system |
CN109640162B (en) * | 2018-12-25 | 2021-05-14 | 北京数码视讯软件技术发展有限公司 | Code stream conversion method and system |
WO2022135105A1 (en) * | 2020-12-21 | 2022-06-30 | 展讯半导体(成都)有限公司 | Video dubbing method and apparatus for functional machine, terminal device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN105142037B (en) | 2019-07-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105142037A (en) | Distributed transcoded audio and video synthesis method and system | |
CN105474309B (en) | The device and method of high efficiency object metadata coding | |
JP6729382B2 (en) | Transmission device, transmission method, reception device, and reception method | |
CN109088887A (en) | A kind of decoded method and device of Streaming Media | |
RU2010142914A (en) | METHODS, DEVICES AND SYSTEM FOR PARALLEL CODING AND DECODING OF VIDEO INFORMATION | |
CN105049920B (en) | A kind of method for recording and device of multimedia file | |
CN102301730A (en) | Method, device and system for transmitting and processing multichannel AV | |
MX2008000122A (en) | Method and apparatus for encoding and decoding an audio signal. | |
CA2578190C (en) | Device and method for generating a coded multi-channel signal and device and method for decoding a coded multi-channel signal | |
BR112016008787B1 (en) | METHOD FOR DECODING AND ENCODING A DOWNMIX MATRIX, METHOD FOR PRESENTING AUDIO CONTENT, ENCODER AND DECODER FOR A DOWNMIX MATRIX, AUDIO ENCODER AND AUDIO DECODER | |
CN108965971A (en) | MCVF multichannel voice frequency synchronisation control means, control device and electronic equipment | |
CN101243490A (en) | Method and apparatus for encoding and decoding an audio signal | |
CN105049904B (en) | A kind of playing method and device of multimedia file | |
US20230362224A1 (en) | Systems and methods for encoding and decoding | |
CN103686203A (en) | Video transcoding method and device | |
MXPA06010867A (en) | Audio bitstream format in which the bitstream syntax is described by an ordered transveral of a tree hierarchy data structure. | |
CN105915493A (en) | Audio and video real-time transmission method and device and audio and video real-time playing method and device | |
EP3288025A1 (en) | Transmission device, transmission method, reception device, and reception method | |
EP2276192A2 (en) | Method and apparatus for transmitting/receiving multi - channel audio signals using super frame | |
JP2023081933A (en) | Transmitter, transmission method, receiver and reception method | |
CN103237259A (en) | Audio-channel processing device and audio-channel processing method for video | |
CN102819851A (en) | Method for implementing sound pictures by using computer | |
JP6724782B2 (en) | Transmission device, transmission method, reception device, and reception method | |
CN103177725B (en) | Method and device for transmitting aligned multichannel audio frequency | |
US8613038B2 (en) | Methods and apparatus for decoding multiple independent audio streams using a single audio decoder |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information | Address after: Floor 6, Shouxiang Science and Technology Building, No. 51 Xueyuan Road, Beijing 100191; Applicant after: Baofeng Group Co., Ltd. Address before: Floor 6, Shouxiang Science and Technology Building, No. 51 Xueyuan Road, Beijing 100191; Applicant before: Beijing Baofeng Technology Co., Ltd. |
COR | Change of bibliographic data | ||
GR01 | Patent grant | ||
GR01 | Patent grant |