KR20130029235A - Method for transcoding streaming vedio file into streaming vedio file in real-time - Google Patents

Method for transcoding streaming vedio file into streaming vedio file in real-time Download PDF

Info

Publication number
KR20130029235A
Authority
KR
South Korea
Prior art keywords
frame
video
streaming
output
header
Prior art date
Application number
KR1020110092508A
Other languages
Korean (ko)
Inventor
황재형
김현성
양지훈
정인협
김성원
임종철
Original Assignee
주식회사 어니언텍
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 어니언텍
Priority to KR1020110092508A
Publication of KR20130029235A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/102: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/124: Quantisation
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23: Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/233: Processing of audio elementary streams
    • H04N 21/2335: Processing of audio elementary streams involving reformatting operations of audio signals, e.g. by converting from one coding standard to another
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23: Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/234: Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N 21/23418: Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/25: Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N 21/266: Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
    • H04N 21/2662: Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities

Abstract

PURPOSE: A method for converting a video file in real time and streaming it is provided, which converts the format of a file to a bit rate suitable for the current network environment and a resolution suitable for the terminal before streaming the file. CONSTITUTION: A media acceleration server receives a frame stream from a streaming server (330). The media acceleration server decodes each video frame and encodes it according to a generated header (340). When the size of the encoded video frame matches the preset frame data size, the media acceleration server streams the output frame (350, 380). [Reference numerals] (310) Receiving the header stream; (320) Analyzing the header, generating the header of the output file, and streaming it; (330) Receiving the frame stream; (340) Decoding to a video frame and encoding according to the generated header; (350) Comparing with the preset frame size; (360) Re-encoding after adjusting the quantization coefficient; (370) 0 padding; (380) Streaming the output frame; (390) Final frame?; (AA) Start; (BB) Under; (CC) Over; (DD) Same; (EE) No; (FF) Yes; (GG) End

Description

Method for converting a streaming video file in real time and streaming it {Method for transcoding streaming vedio file into streaming vedio file in real-time}

The present invention relates to a method of transcoding a video file and, more particularly, to a method of converting a video file streamed from a streaming server to a resolution suitable for a terminal or a bit rate suitable for the network environment and streaming the result to the terminal.

Streaming is a method of transmitting and playing multimedia files such as audio and video. Before streaming technology appeared, a multimedia file had to be downloaded to the hard disk before it could be played, which took a long time, and a certain amount of free disk space had to be secured. With streaming, however, the receiver plays the multimedia file as it is transmitted, so there is no need to wait, and the technique is now widely used. Streaming technology is disclosed in, for example, Patent Registration No. 563659, Patent Registration No. 820350, Patent Registration No. 1019594, and Patent Publication No. 2009-0049385.

Even if a streaming file is optimized for real-time playback, data transmission may be delayed depending on the network environment, causing brief pauses during playback. To solve this problem, Patent Registration No. 563659 periodically sends a test packet to the subscriber to measure network status, receives the reply, and thereby monitors the network status toward the subscriber; it then determines the transcoding rate of the video file to be streamed at a level corresponding to the monitored status and transcodes and transmits the video file accordingly.

Meanwhile, there are various container formats for multimedia files, and streaming service is available only from a dedicated streaming server for each container format, with no compatibility between them. Local-use containers such as AVI cannot be truly streamed and instead use a download-pending method that downloads a predetermined portion in advance before playing. To solve this problem, Patent Registration No. 820350 uses the frame-indexing hint information present inside an MP4 file so that a VOD streaming server can also stream AVI, WMV, MP2, and other formats that carry no hint information.

Streaming playback, which was conventionally done mainly on computers connected to the wired Internet, is now widely done on smartphones and tablet PCs over wireless Internet through mobile networks such as Wi-Fi, WiBro, and 3G, and streaming over mobile networks has increased dramatically. As a result, communication networks are often overloaded; not only do multimedia files fail to play smoothly, but voice calls are also affected. This is because, in the case of wireless Internet over a mobile communication network, data communication and voice communication use the same physical channel, so an overload in data communication affects voice communication.

Meanwhile, multimedia files provided by streaming servers such as YouTube (www.youtube.com) and GomTV (www.gomtv.com) are commonly transmitted at the resolution of the original file regardless of the receiving terminal (computer, smartphone, tablet, etc.), and the receiver adjusts the resolution. In other words, even if a computer user changes the size of the playback window in the multimedia player, the resolution of the streamed file does not change; the multimedia player merely zooms in or out to fit the window size.

However, in the case of wireless Internet through a mobile communication network, most connected terminals use display devices inferior to those of general computers, so the resolution of the original file from the streaming server is often higher than the terminal can display, and bandwidth is wasted. In addition, even when the network environment is poor, such as when many users are connected, data is transmitted over the mobile communication network at the bit rate of the file sent from the streaming server, which causes playback to stall or voice calls to be interrupted during multimedia playback.

In this situation, the approach of Patent Registration No. 563659, which periodically sends a test packet to the subscriber to measure network status and receives a reply in order to monitor the network status toward the subscriber, inevitably has limits. It is unrealistic for a streaming server such as YouTube to periodically monitor the status of each country's mobile communication network and adjust the bit rate of the streaming video file accordingly.

The present invention has been made in view of the above problems, and an object thereof is to provide a method of converting a video file streamed from a streaming server to a resolution suitable for the connected terminal, a bit rate suitable for the current network environment, or a video file format that the terminal can process, and streaming the result. Another object of the present invention is to provide a method of converting a video file streamed from a streaming server into a streaming file having a constant bit rate per video frame type and streaming it to the terminal.

According to an embodiment of the present invention, a method of converting a streamed video file in real time and streaming it comprises: a first step in which a media acceleration server analyzes the header of an input video file streamed from a streaming server, generates the header of an output video file according to preset conditions including at least one of output bit rate, frames per second, resolution, and file format, and streams the header to the terminal; and a second step in which the media acceleration server decodes the video frame data from the frame data of the incoming input video file to generate decoded video frame data, encodes the decoded video frame data according to the settings of the generated header to generate output video frame data, streams the output video frame data to the terminal, and streams the audio frame data to the terminal as it is.

According to another exemplary embodiment of the present invention, a method of converting a streamed video file in real time and streaming it comprises: a first step in which a media acceleration server analyzes the header of an input video file streamed from a streaming server, generates the header of an output video file according to preset conditions including at least one of output bit rate, frames per second, resolution, and file format, and streams the header to the terminal; and a second step in which the media acceleration server decodes the video frame data from the frame data of the incoming input video file to generate decoded video frame data, encodes the decoded video frame data according to the settings of the generated header to generate output video frame data and streams it to the terminal, and decodes the audio frame data to generate decoded audio frame data and encodes the decoded audio frame data to a predetermined size to generate output audio frame data that is streamed to the terminal.

In the first step, the media acceleration server preferably generates the header of the output video file by determining the types of the video frames in a predetermined order, regardless of the frame types specified in the header of the input video file. In addition, the video frame data generated by encoding in the second step preferably has the same size for each frame type. When encoding a video frame in the second step, the media acceleration server determines the frame type as specified in the header of the output video file and encodes accordingly, regardless of the frame type (I, P, or B frame) of the input video frame.

The size of the video frame data may be determined by dividing the output bit rate by the number of frames per second and multiplying the result by the rate for the frame type. Here, the rate for each frame type refers to the ratio of the number of bits allocated to that frame type.

If, as a result of encoding a video frame in the second step, the size of the encoded video frame data exceeds the size defined for that frame type, the quantization coefficient is adjusted and the frame is re-encoded. Likewise, if the size of the encoded audio frame data exceeds the predetermined size, the quantization coefficient may be adjusted and the frame re-encoded.

If, as a result of encoding a video frame in the second step, the size of the encoded video frame data is smaller than the size predetermined for that frame type, redundancy data is inserted into the remaining portion of the predetermined size. Likewise, when the size of the encoded audio frame data is smaller than the predetermined size for the audio frame, redundancy data may be inserted into the remaining portion.

When generating the header of the output video file in the first step, the media acceleration server determines the order of the output video frames and audio frames and reflects it in the header. The order is preferably set in ascending order of order value, where the video frame order value is the sequence number of the video frame to be output divided by the number of frames per second, and the audio frame order value is the sequence number of the audio frame to be output multiplied by the number of samples per frame and divided by the sample rate. When streaming the output video frame data and audio frame data in the second step, the frame data is output in the determined order. In one embodiment, the number of samples per frame of the audio frame is 1024. In some embodiments, the output frames per second may be set equal to the frames per second of the streamed input video file. The resolution may be determined according to the resolution of the display device of the terminal connected to the media acceleration server.

According to the present invention, the video file streamed from the streaming server is converted to a resolution suitable for the connected terminal, a bit rate suitable for the current network environment, or a file format that the terminal can process before being streamed, which reduces the resources consumed by the terminal and the load on the network.

In addition, since the video file streamed from the streaming server is converted into a streaming file having a constant bit rate per video frame type before being streamed to the terminal, the network load can be kept uniform over time during multimedia playback.

FIG. 1 is a conceptual diagram illustrating a media acceleration server connected to a streaming server and a terminal through the Internet and a mobile communication network.
FIG. 2 is a schematic block diagram showing the internal configuration of a media acceleration server.
FIG. 3 is a flowchart illustrating the operation of a media acceleration server according to an embodiment of the present invention.
FIG. 4 is a flowchart illustrating the operation of a media acceleration server according to another embodiment of the present invention.
FIG. 5 is a conceptual diagram illustrating the frame order determination method when the frames per second of the input file and of the output file are the same.
FIG. 6 is a conceptual diagram illustrating the frame order determination method when the frames per second of the input file and of the output file are different.

Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings.

FIG. 1 is a conceptual diagram illustrating a media acceleration server connected to a streaming server and a terminal through the Internet and a mobile communication network.

The terminal 300 refers to an information terminal device, such as a mobile phone, smartphone, or tablet PC, that can access the Internet through a mobile communication network and receive and play streaming video. The terminal 300 accesses the Internet via the mobile communication network and receives video data from a streaming server 200 such as YouTube. In the present invention, the streaming data received from the streaming server 200 over the Internet is transcoded in the media acceleration server 100 and then provided to the terminal 300 through the mobile communication network. In FIG. 1 the media acceleration server 100 is, for convenience, shown at the front end of the mobile communication network, but it may exist inside the mobile communication network. Also, when the terminal 300 accesses the streaming server 200, the connection may bypass the media acceleration server 100, with the streaming data passing through the media acceleration server 100 only while streaming is being received.

FIG. 2 is a schematic block diagram illustrating the internal configuration of the media acceleration server 100.

The media acceleration server 100 includes a video input unit 110 that receives the video data streamed from the streaming server 200 over the Internet, a transcoder 120 that transcodes the streamed video data to a different resolution or bit rate according to predetermined conditions, and a video output unit 130 that transmits the transcoded video data to the terminal 300 through a mobile communication network. The media acceleration server 100 may further include a parameter setting unit 140 that receives information on the network environment, terminal specifications, and the like from a mobile communication switching system, or that sets input parameters for transcoding based on input from a server administrator.

Next, the operation of the media acceleration server according to an embodiment of the present invention will be described with reference to FIG. 3.

In a streaming video file, the header stream is transmitted first. The header stream includes information such as the video frame size, the frames per second, information about each video frame and audio frame, and the number of audio channels. When the media acceleration server 100 has received the entire header stream from the streaming server 200 (step 310), it generates the header of the new file using the information in the header stream and the preset conditions (step 320). The preset conditions may include the output bit rate, a bit rate reduction (e.g., 80% of the bit rate of the input file), the output resolution (horizontal and vertical pixels of the video frame), the frames per second (FPS), the frequency of I and P frames, and the video file format to be output. These conditions may be determined from the specifications of the terminal (e.g., display resolution, maximum playable frames per second), the current state of the mobile communication network (e.g., the number of terminals connected to the base station serving the terminal, the bit error rate), or the statistical state of the mobile communication network at the current time and the terminal's location, or they may be set manually by the administrator.
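To make the role of these conditions concrete, the sketch below shows one way such presets might be held in memory. It is a minimal illustration only; the field names and the example values are assumptions for illustration, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class PresetConditions:
    """Illustrative transcoding presets (hypothetical names)."""
    output_bitrate_bps: int   # fixed target bit rate, or derived via the reduction ratio
    bitrate_reduction: float  # e.g. 0.8 means 80% of the input file's bit rate
    output_width: int         # horizontal pixels, bounded by the terminal's display
    output_height: int        # vertical pixels
    output_fps: int           # bounded by the terminal's maximum playable FPS
    gop_pattern: str          # frequency of I and P frames, e.g. "IPP"
    container: str            # output video file format, e.g. "mp4"

# Example: values that might be derived from terminal specs and network state.
preset = PresetConditions(30_000, 0.8, 480, 320, 30, "IPP", "mp4")
```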

The media acceleration server 100 determines the size and the frame count of the streaming video file to be output according to the header information of the input streaming video and the preset conditions. The size of the video file is the header size plus the sizes of the audio and video frames. In the present invention, video frames preferably have the same size for each frame type (I frame, P frame); for example, all I frames consist of 500 bits and all P frames of 100 bits. The size of the video frame data is determined by dividing the output bit rate by the number of frames per second and multiplying the result by the rate for the frame type, where the rate for each frame type is the ratio of the number of bits allocated to that type. For example, if the frame rate is 30 frames per second, the output bit rate is 30 kbps, the ratio of bits allocated to an I frame is 80%, and the ratio allocated to each P frame is 10% (with the pattern IPP repeated), then the size of an I frame is (30 kbps / 30) × 0.8 = 800 bits and the size of a P frame is (30 kbps / 30) × 0.1 = 100 bits. The number of output video frames is the number of input frames × (output FPS / input FPS), rounded up.
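The arithmetic above is mechanical, and the following sketch reproduces it, including the worked example (function names are illustrative):

```python
import math

def frame_sizes(output_bitrate_bps: int, fps: int, type_ratios: dict) -> dict:
    """Bits per frame for each frame type: (output bit rate / FPS) x per-type ratio."""
    bits_per_frame_slot = output_bitrate_bps / fps
    return {ftype: round(bits_per_frame_slot * ratio) for ftype, ratio in type_ratios.items()}

def output_frame_count(input_frames: int, output_fps: int, input_fps: int) -> int:
    """Number of output video frames: input frames x (output FPS / input FPS), rounded up."""
    return math.ceil(input_frames * output_fps / input_fps)

# Worked example from the text: 30 fps, 30 kbps, I frames 80%, each P frame 10%.
sizes = frame_sizes(30_000, 30, {"I": 0.8, "P": 0.1})
assert sizes == {"I": 800, "P": 100}
```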

It is also desirable to configure the video frames so that the I and P frames always appear in the same pattern, for example an I frame followed by two P frames, as in IPPIPP…. With this configuration, once the number of video frames is known, the total size of the video frames can easily be determined. The repetition ratio of I and P frames is preferably determined in advance, but it may also be determined dynamically from the type of video, the network environment, and the terminal specifications. Once the repetition ratio is determined, the media acceleration server 100 generates the header of the output video file by assigning the frame types in the predetermined order, regardless of the frame types specified in the header of the input video file. That is, even if the video frames of the input file arrive as IPPPPIPPIPPP…, they are encoded and output as IPPIPPIPPIPP….
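Because the pattern is fixed, both the frame-type sequence and the total video payload can be computed before any frame is encoded, which is the point of this design. A short sketch under the same assumed sizes:

```python
def frame_type_sequence(pattern: str, n_frames: int) -> list:
    """Repeat the fixed pattern (e.g. 'IPP') to cover n_frames, regardless of input types."""
    return [pattern[i % len(pattern)] for i in range(n_frames)]

def total_video_bits(pattern: str, n_frames: int, sizes: dict) -> int:
    """Total size of all video frames, computable before any frame is encoded."""
    return sum(sizes[t] for t in frame_type_sequence(pattern, n_frames))

# 12 frames at IPP with I = 800 and P = 100 bits: 4 x 800 + 8 x 100 = 4000 bits.
assert total_video_bits("IPP", 12, {"I": 800, "P": 100}) == 4000
```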

When the media acceleration server 100 generates the header of the output video file, it determines the order of the video frames and audio frames to be output and reflects it in the header. Preferably, the order is the ascending order of the order values, where the video frame order value is the sequence number of the video frame to be output divided by the frames per second of the output video, and the audio frame order value is the sequence number of the audio frame to be output multiplied by the number of samples per audio frame and divided by the sample rate. This will be described in detail later with reference to FIGS. 5 and 6.

Regarding header generation, in the case of the mp4 file format, for example, the video tkhd (Track Header) field, which includes information such as image size, position, and playback time, may be changed to reflect the changed resolution. The stts (Decoding Time to Sample) field, which indicates the interval between frames, may be modified to change the sample duration when the input file and the output file have different FPS values. The stsz (Sample Size) field, which indicates the size of each frame, may be changed to hold the sizes of the frames expected to be output, and in the stco (Chunk Offset) field, which indicates the starting address in the file where each chunk begins, the portion representing the start offset of each chunk may be changed according to the new frame sizes. The audio stco field may likewise be changed to match the video stsz values. In addition, other fields, such as meta information for seeking, may be changed appropriately.
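Since every frame size is fixed per type, the stsz entries and stco offsets can likewise be filled in before encoding. The sketch below illustrates this with plain arithmetic under simplifying assumptions (one frame per chunk, whole-byte sizes, a known start offset for the media payload); it does not use a real MP4 muxing library, and the helper name is hypothetical.

```python
def build_stsz_and_stco(frame_types, sizes_bytes, payload_start):
    """stsz: per-frame sample sizes; stco: byte offset where each chunk (assumed
    here to hold one frame) starts in the file, derived from the fixed sizes."""
    stsz = [sizes_bytes[t] for t in frame_types]
    stco, offset = [], payload_start
    for size in stsz:
        stco.append(offset)
        offset += size
    return stsz, stco

# Illustrative byte sizes (I = 100, P = 13) with the payload starting at 4096:
stsz, stco = build_stsz_and_stco(["I", "P", "P", "I"], {"I": 100, "P": 13}, 4096)
# stsz == [100, 13, 13, 100]; stco == [4096, 4196, 4209, 4222]
```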

After the entire header stream has been received, the frame stream is received from the streaming server 200 (step 330). If the analysis of the header stream and the generation of the new header stream are not yet complete, the received frame stream is preferably stored in a buffer. Once the new header stream has been generated, in step 340 each video frame is decoded and then encoded according to the newly generated header, that is, according to the frame size, frame type, resolution, bit rate, and frames per second determined for that frame. In this embodiment, the audio frames are passed through in order without being decoded or encoded.

In step 350, the size (number of bits) of the encoded video frame data is compared with the frame data size (number of bits) preset for the corresponding frame type. If the size of the encoded video frame data exceeds the size defined for the frame type, the quantization coefficient is adjusted and the frame is re-encoded (step 360).

If, on the other hand, the size of the encoded video frame data is smaller than the size determined for the frame type, arbitrary bits are inserted into the remaining portion of the predetermined size (step 370). That is, if the frame is an I frame whose defined size is 800 bits and the encoded data is 780 bits, redundancy data is inserted into the remaining 20 bits. For video formats that can only be played when 0s or 1s are inserted, a playable bit pattern may be inserted instead. Redundancy data is likewise inserted into the remaining portion when, after re-encoding with an adjusted quantization coefficient in step 360, the encoded video frame data is smaller than the size for the frame type. If the size of the encoded video frame data equals the preset size, neither step 360 nor step 370 needs to be performed.
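Steps 340 to 370 thus form a small control loop around the encoder. The following is a minimal sketch of that loop, assuming a hypothetical encode(frame, header, qp) function that returns the encoded bytes; it is not tied to any particular codec API, the QP range is an H.264-style assumption, and sizes are counted in bytes here for simplicity where the patent counts bits.

```python
def encode_to_fixed_size(frame, header, target_size, encode, initial_qp=26, max_qp=51):
    """Encode a decoded frame to exactly target_size bytes: re-encode with a
    coarser quantizer while the result is too large (step 360), then pad any
    shortfall with redundancy data (step 370)."""
    qp = initial_qp
    data = encode(frame, header, qp)        # step 340: encode per the new header
    while len(data) > target_size and qp < max_qp:
        qp += 1                             # step 360: adjust the quantization coefficient
        data = encode(frame, header, qp)    # ... and re-encode
    if len(data) > target_size:
        raise ValueError("frame does not fit even at the coarsest quantizer")
    return data + b"\x00" * (target_size - len(data))  # step 370: 0 padding
```

In the embodiment of FIG. 4 described below, the same loop would apply unchanged to audio frames.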

The media acceleration server 100 outputs the output frames thus generated in the predetermined order (step 380). Steps 330 to 380 are repeated until the last frame (step 390), so that the entire streaming input video is transcoded into the desired form and streamed to the terminal 300.

FIG. 4 is a flowchart illustrating the operation of a media acceleration server according to another embodiment of the present invention, in which transcoding is performed not only on the video frames but also on the audio frames.

Since steps 410 to 430 and steps 480 to 490 of FIG. 4 are the same as steps 310 to 330 and steps 380 to 390 of FIG. 3, detailed description is omitted. However, when the header is analyzed in step 420 and a new header is generated, the header must be generated to match the audio frames to be output.

In this embodiment, in step 440, not only the video frames but also the audio frames are decoded and then encoded according to the set conditions (that is, according to the newly generated header). In addition, steps 450 to 470, which in FIG. 3 were not applied to the audio frames, are applied to the audio frames in the embodiment of FIG. 4 in the same manner as to the video frames: if the size (number of bits) of an audio frame after encoding is larger than the predetermined size, the quantization coefficient is adjusted and the frame is re-encoded, and if it is smaller, redundancy data is inserted into the remaining portion.

Next, the method of determining the output order of the audio and video frames will be described with reference to FIGS. 5 and 6. FIG. 5 is a conceptual diagram illustrating the frame order determination method when the frames per second of the input file and of the output file are the same, and FIG. 6 is a conceptual diagram illustrating the method when they are different.

The order of the video frames and audio frames is the ascending order of the order values: the video frame order value is the sequence number of the video frame to be output divided by the frames per second of the output video, and the audio frame order value is the sequence number of the audio frame to be output multiplied by the number of samples per audio frame and divided by the sample rate.

FIG. 5 illustrates the case where both the input video file and the output video file have 30 frames per second, and FIG. 6 the case where the input video file has 30 frames per second and the output video file 24. In both FIGS. 5 and 6, the audio has a sampling rate of 32 kHz and 1024 samples per frame.

Accordingly, in the case of FIG. 5, the order value of a video frame is its sequence number divided by 30 (equivalently, multiplied by about 0.033); for example, the order value of frame #1 is 0.033 and that of frame #2 is 0.066. In the case of FIG. 6, the order value of a video frame is its sequence number divided by 24 (equivalently, multiplied by about 0.041); for example, the order value of frame #1 is 0.041, that of frame #2 is 0.082, and so on.

The order value of an audio frame is its sequence number multiplied by 1024/32000 (= 0.032); for example, the order value of frame #1 is 0.032 and that of frame #2 is 0.064.

Regardless of the order values, however, it is desirable to always make the first frame a video frame, so that it serves as a reference point for the video.

In a video file, the audio and the video may have different lengths: the video may end while the audio continues to play, or vice versa. Even in this case, the frame order is determined by the calculated order values. In FIGS. 5 and 6, the audio is longer than the video, so audio frames occupy the tail of the newly generated streaming file.
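In effect, the ordering rule merges the two frame timelines by their order values, with the first slot reserved for a video frame. A minimal sketch, with illustrative names and the sample parameters of FIGS. 5 and 6:

```python
def interleave(n_video, out_fps, n_audio, samples_per_frame=1024, sample_rate=32_000):
    """Order values: video frame k -> k / out_fps; audio frame k ->
    k * samples_per_frame / sample_rate. Frames are emitted in ascending
    order value, except that the first frame is always a video frame."""
    video = [("V", k, k / out_fps) for k in range(1, n_video + 1)]
    audio = [("A", k, k * samples_per_frame / sample_rate) for k in range(1, n_audio + 1)]
    merged = sorted(video + audio, key=lambda f: f[2])
    first_video = next(i for i, f in enumerate(merged) if f[0] == "V")
    merged.insert(0, merged.pop(first_video))  # reference point: video frame first
    return [(kind, seq) for kind, seq, _ in merged]

# FIG. 6 case (24 fps output, 32 kHz audio): A1 (0.032) precedes V1 (0.041) by
# order value, but V1 is still forced to the front:
print(interleave(4, 24, 4))
# [('V', 1), ('A', 1), ('A', 2), ('V', 2), ('A', 3), ('V', 3), ('A', 4), ('V', 4)]
```

If one stream is longer than the other, its remaining frames simply trail at the end of the merged sequence, matching the behavior described above.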

While the present invention has been described with reference to exemplary embodiments, the invention is not limited to the disclosed embodiments. Within the scope of the present invention, the components may be selectively combined and operated in one or more combinations. In addition, although each component may be implemented as independent hardware, some or all of the components may be selectively combined and implemented as a computer program having program modules that perform some or all of their functions in one or more pieces of hardware. The codes and code segments constituting such a computer program can readily be inferred by those skilled in the art. Such a computer program may be stored in a computer-readable storage medium and read and executed by a computer, thereby implementing embodiments of the present invention. The storage medium of the computer program may include semiconductor recording media, magnetic recording media, optical recording media, carrier-wave media, and the like. Such a program may also be provided in a form downloadable through a communication network.

The terms "comprise," "include," and "have" used above mean that the corresponding component may be present unless specifically stated otherwise, and should therefore be construed not as excluding other components but as possibly further including them.

The foregoing description merely illustrates the technical idea of the present invention, and those skilled in the art may make various changes and modifications without departing from its essential characteristics. Accordingly, the embodiments disclosed herein are intended to illustrate rather than limit the technical idea of the present invention, and the scope of that idea is not limited by these embodiments. The scope of protection of the present invention should be interpreted according to the following claims, and all technical ideas within their equivalent scope should be construed as falling within the scope of the present invention.

100: media acceleration server,
110: video input unit,
120: transcoder,
130: video output unit,
140: parameter setting unit,
200: streaming server,
300: terminal.

Claims (21)

1. A method of converting a streamed video file in real time and streaming it, the method comprising:
a first step in which a media acceleration server analyzes the header of an input video file streamed from a streaming server, generates the header of an output video file according to preset conditions including at least one of output bit rate, frames per second, resolution, and file format, and streams the header to a terminal; and
a second step in which the media acceleration server decodes the video frame data from the frame data of the incoming input video file to generate decoded video frame data, encodes the decoded video frame data according to the settings of the generated header to generate output video frame data, streams the output video frame data to the terminal, and streams the audio frame data to the terminal as it is.

2. The method of claim 1, wherein the video frame data generated by encoding in the second step has the same size for each frame type.

3. The method of claim 2, wherein in the first step the media acceleration server generates the header of the output video file by determining the types of the video frames in a predetermined order, regardless of the frame types specified in the header of the input video file.

4. The method of claim 3, wherein in the second step, when encoding a video frame, the media acceleration server determines the frame type as specified in the header of the output video file and encodes accordingly, regardless of the frame type of the input video frame.

5. The method of claim 3, wherein the size of the video frame data is determined by dividing the output bit rate by the number of frames per second and multiplying the result by the rate for the frame type, the rate for each frame type being the ratio of the number of bits allocated to that frame type.

6. The method of claim 3, wherein if, as a result of encoding a video frame in the second step, the size of the encoded video frame data exceeds the size defined for the frame type, the quantization coefficient is adjusted and the frame is re-encoded.

7. The method of claim 6, wherein if, as a result of encoding a video frame in the second step, the size of the encoded video frame data is smaller than the size determined for the frame type, redundancy data is inserted into the remaining portion of the predetermined size.

8. The method of claim 1, wherein, when generating the header of the output video file in the first step, the media acceleration server determines the order of the video frames and audio frames to be output and reflects it in the header;
wherein the order is set in ascending order of order value, the video frame order value being the sequence number of the video frame to be output divided by the number of frames per second, and the audio frame order value being the sequence number of the audio frame to be output multiplied by the number of samples per frame and divided by the sample rate; and
wherein, when streaming the output video frame data and the audio frame data in the second step, the frame data is output in the determined order.

9. The method of claim 8, wherein the number of samples per frame of the audio frame is 1024.

10. The method of claim 1, wherein the frames per second is the same as the frames per second of the streamed input video file.

11. The method of claim 1, wherein the resolution is determined according to the resolution of the display device of the terminal connected to the media acceleration server.
12. A method of converting a streamed video file in real time and streaming it, the method comprising:
a first step in which a media acceleration server analyzes the header of an input video file streamed from a streaming server, generates the header of an output video file according to preset conditions including at least one of output bit rate, frames per second, resolution, and file format, and streams the header to a terminal; and
a second step in which the media acceleration server decodes the video frame data from the frame data of the incoming input video file to generate decoded video frame data, encodes the decoded video frame data according to the settings of the generated header to generate output video frame data and streams it to the terminal, and decodes the audio frame data to generate decoded audio frame data, encodes the decoded audio frame data to a predetermined size to generate output audio frame data, and streams it to the terminal.

13. The method of claim 12, wherein in the first step the media acceleration server generates the header of the output video file by determining the types of the video frames in a predetermined order, regardless of the frame types specified in the header of the input video file, and wherein the video frame data generated by encoding in the second step has the same size for each frame type.

14. The method of claim 13, wherein in the second step, when encoding a video frame, the media acceleration server determines the frame type as specified in the header of the output video file and encodes accordingly, regardless of the frame type of the input video frame.

15. The method of claim 13, wherein the size of the video frame data is determined by dividing the output bit rate by the number of frames per second and multiplying the result by the rate for the frame type, the rate for each frame type being the ratio of the number of bits allocated to that frame type.

16. The method of claim 13, wherein if, as a result of encoding a video frame in the second step, the size of the encoded video frame data exceeds the size defined for the frame type, the quantization coefficient is adjusted and the frame is re-encoded, and if, as a result of encoding an audio frame in the second step, the size of the encoded audio frame data exceeds the predetermined size, the quantization coefficient is adjusted and the frame is re-encoded.

17. The method of claim 16, wherein if, as a result of encoding a video frame in the second step, the size of the encoded video frame data is smaller than the size predetermined for the frame type, redundancy data is inserted into the remaining portion of the predetermined size, and if, as a result of encoding an audio frame in the second step, the size of the encoded audio frame data is smaller than the predetermined size, arbitrary bits are inserted into the remaining portion of the predetermined size.

18. The method of claim 12, wherein, when generating the header of the output video file in the first step, the media acceleration server determines the order of the video frames and audio frames to be output and reflects it in the header;
wherein the order is set in ascending order of order value, the video frame order value being the sequence number of the video frame to be output divided by the number of frames per second, and the audio frame order value being the sequence number of the audio frame to be output multiplied by the number of samples per frame and divided by the sample rate; and
wherein, when streaming the output video frame data and the audio frame data in the second step, the frame data is output in the determined order.

19. The method of claim 18, wherein the number of samples per frame of the audio frame is 1024.

20. The method of claim 12, wherein the frames per second is the same as the frames per second of the streamed input video file.

21. The method of claim 12, wherein the resolution is determined according to the resolution of the display device of the terminal connected to the media acceleration server.

KR1020110092508A 2011-09-14 2011-09-14 Method for transcoding streaming vedio file into streaming vedio file in real-time KR20130029235A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020110092508A KR20130029235A (en) 2011-09-14 2011-09-14 Method for transcoding streaming vedio file into streaming vedio file in real-time

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020110092508A KR20130029235A (en) 2011-09-14 2011-09-14 Method for transcoding streaming vedio file into streaming vedio file in real-time

Publications (1)

Publication Number Publication Date
KR20130029235A (en) 2013-03-22

Family

ID=48179258

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020110092508A KR20130029235A (en) 2011-09-14 2011-09-14 Method for transcoding streaming vedio file into streaming vedio file in real-time

Country Status (1)

Country Link
KR (1) KR20130029235A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015050290A1 (en) * 2013-10-02 2015-04-09 주식회사 요쿠스 Video providing service method

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015050290A1 (en) * 2013-10-02 2015-04-09 주식회사 요쿠스 Video providing service method

Similar Documents

Publication Publication Date Title
KR101868280B1 (en) Information processing apparatus, information processing method, and computer-readable recording medium
US11076187B2 (en) Systems and methods for performing quality based streaming
US10298985B2 (en) Systems and methods for performing quality based streaming
US8929441B2 (en) Method and system for live streaming video with dynamic rate adaptation
US9042449B2 (en) Systems and methods for dynamic transcoding of indexed media file formats
US8606966B2 (en) Network adaptation of digital content
US9521469B2 (en) Carriage of quality information of content in media formats
US20110029606A1 (en) Server apparatus, content distribution method, and program
CN112752115B (en) Live broadcast data transmission method, device, equipment and medium
CN102065339A (en) Method and system for playing audio and video media stream
KR20130005873A (en) Method and apparatus for receiving contents in broadcast system
US11743535B2 (en) Video fragment file processing
EP3096524A1 (en) Communication apparatus, communication data generation method, and communication data processing method
KR101397551B1 (en) Dynamic and Adaptive Streaming System over HTTP
US20150350300A1 (en) Content server and content distribution method
US9264720B2 (en) Apparatus and method for multimedia service
KR101251312B1 (en) Method for handling video seek request in video transcoding server
KR20130029235A (en) Method for transcoding streaming vedio file into streaming vedio file in real-time
WO2014112187A1 (en) Content server, content delivery method, content delivery system, client device, and content acquisition method
KR20140086801A (en) Realtime content transcoding method, apparatus and system, and realtime content reception method and apparatus
EP2566171A1 (en) Method for adapting the segment size in transcoded multimedia streams
JP2017175597A (en) Moving image distribution system, distribution server, receiver, and program

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right