KR20170039916A - Cloud server, media play device, computer program for processing execution of application - Google Patents

Cloud server, media play device, computer program for processing execution of application

Info

Publication number
KR20170039916A
Authority
KR
South Korea
Prior art keywords
sound
chunks
image
cloud server
application
Prior art date
Application number
KR1020150139089A
Other languages
Korean (ko)
Inventor
박수호
김강태
김현숙
Original Assignee
주식회사 케이티
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 케이티 filed Critical 주식회사 케이티
Priority to KR1020150139089A priority Critical patent/KR20170039916A/en
Publication of KR20170039916A publication Critical patent/KR20170039916A/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/236Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42203Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] sound input device, e.g. microphone

Abstract

The cloud server that processes execution of an application includes a receiving unit that receives a request for the application from a media playback device, an application driving unit that runs the application in response to the request, a frame capture unit that captures the application's running screen at a predetermined number of frames per second, a sound collecting unit that collects the sound of the application, an image encoding unit that encodes the captured frames into image chunks, a sound encoding unit that encodes the collected sound into sound chunks, and a transmission unit that transmits the image chunks and sound chunks to the media playback device.

Description

CLOUD SERVER, MEDIA PLAY DEVICE, COMPUTER PROGRAM FOR PROCESSING EXECUTION OF APPLICATION

BACKGROUND OF THE INVENTION

1. Field of the Invention

[0001] The present invention relates to a cloud server, a media playback apparatus, and a computer program for processing the execution of an application.

Cloud computing is a technology that processes information on a separate computer connected over the Internet rather than on the user's own computer. By storing data on an external server through a cloud computing service, the user can keep the data safe and is freed from storage-space constraints. However, if a cloud server is hacked, personal information may be leaked, and if the cloud server fails, the data cannot be used.

In connection with such cloud computing, Korean Patent Laid-Open Publication No. 2015-0083476 discloses a method and system for providing cloud services.

To transmit data to a user terminal through a cloud computing service, two methods are used: the VCS (Virtual Cluster Switching) method, which transmits a stream using H.264 encoding, and the ICS (Image Cloud Streaming) method, which captures the screen and encodes it into PNG-format image files. However, the VCS method supports a lower rate of concurrent connections than the ICS method, and the ICS method has the drawback that it can provide only visual output through the virtualization service.

The present invention provides a cloud server, a media playback apparatus, and a computer program for processing an application that enable bidirectional services using sound to be provided through streaming of image data and sound data. The present invention also provides a cloud server, a media playback apparatus, and a computer program for processing an application that can be applied to a cloud WebApp virtualization service by using a sound sync mechanism over ICS in a bidirectional service. Further, the present invention provides a cloud server, a media playback apparatus, and a computer program for processing an application that are applicable not only to IPTV but also to cloud UI services using ICS-based virtualization on smartphones and the like. It is to be understood, however, that the technical scope of the present invention is not limited to the above-described technical problems, and other technical problems may exist.

According to an aspect of the present invention, there is provided a cloud server comprising: a receiving unit that receives a request for an application from a media playback apparatus; an application driving unit that runs the application in response to the request; a frame capturing unit that captures the running screen of the application at a predetermined number of frames per second; a sound collecting unit that collects the sound of the application; an image encoding unit that encodes the captured frames into image chunks; a sound encoding unit that encodes the collected sound into sound chunks; and a transmitting unit that transmits the image chunks and the sound chunks to the media playback apparatus.

According to another aspect of the present invention, there is provided a media playback apparatus comprising: an input unit that receives a command for an application; a transmitting unit that transmits a request for the command to a cloud server; a receiving unit that receives image chunks and sound chunks for the application from the cloud server; a decoding unit that decodes the image chunks and the sound chunks; and an output unit that outputs the decoded image chunks and sound chunks. The image chunks are encoded from frames captured by the cloud server, the sound chunks are encoded from the sound of the application collected by the cloud server, and the captured frames are captured by the cloud server from the running screen of the application at a predetermined number of frames per second.

According to another embodiment of the present invention, there is provided a computer program comprising a sequence of instructions that, when executed by a computing device of a cloud server, causes the cloud server to: receive a request for an application from a media playback apparatus; run the application in response to the request; capture the running screen of the application at a predetermined number of frames per second; collect the sound of the application; encode the captured frames into image chunks; encode the collected sound into sound chunks; and transmit the image chunks and the sound chunks to the media playback apparatus.

The above-described solutions are merely exemplary and should not be construed as limiting the present invention. In addition to the exemplary embodiments described above, further embodiments may be described in the drawings and the detailed description of the invention.

According to one of the above-mentioned aspects of the present invention, there are provided a cloud server, a media playback apparatus, and a computer program for processing an application that provide an interactive service using audio through streaming of image data and sound data. It is also possible to provide a cloud server, a media playback apparatus, and a computer program for processing an application that can be applied to a cloud WebApp virtualization service using a sound sync mechanism over ICS in a bidirectional service. Further, it is possible to provide a cloud server, a media playback apparatus, and a computer program for processing applications that are applicable not only to IPTV but also to cloud UI services using ICS-based virtualization on smartphones and the like.

FIG. 1 is a configuration diagram of an application execution processing system according to an embodiment of the present invention.
FIG. 2 is a configuration diagram of a media playback apparatus according to an embodiment of the present invention.
FIG. 3 is a flowchart of a method of processing application execution in a media playback apparatus according to an embodiment of the present invention.
FIG. 4 is a configuration diagram of a cloud server according to an embodiment of the present invention.
FIG. 5 is an exemplary diagram for explaining a process of encoding image chunks and sound chunks in a cloud server according to an embodiment of the present invention.
FIG. 6 is an exemplary diagram for explaining a process of decoding image chunks and sound chunks in a media playback apparatus according to an embodiment of the present invention.
FIG. 7 is an exemplary diagram for explaining a process of generating a sync value in a media playback apparatus according to an embodiment of the present invention.
FIG. 8 is an exemplary diagram for explaining a process of changing a predetermined number of frames per second based on a sync value according to an embodiment of the present invention.
FIG. 9 is a flowchart of a method of processing application execution in a cloud server according to an embodiment of the present invention.

Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those skilled in the art may easily practice the invention. The present invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. In order to clearly describe the present invention, parts not related to the description are omitted, and like parts are denoted by like reference characters throughout the specification.

Throughout the specification, when a part is said to be "connected" to another part, this includes not only being "directly connected" but also being "electrically connected" with another element in between. Also, when a part is said to "include" an element, this means that the part may further include other elements rather than excluding them, unless specifically stated otherwise, and does not preclude the presence or addition of one or more other features, integers, steps, operations, components, parts, or combinations thereof.

In this specification, the term "unit" includes a unit realized by hardware, a unit realized by software, and a unit realized by using both. Further, one unit may be implemented using two or more pieces of hardware, and two or more units may be implemented by one piece of hardware.

In this specification, some of the operations or functions described as being performed by a terminal or device may instead be performed by a server connected to that terminal or device. Similarly, some of the operations or functions described as being performed by a server may also be performed by a terminal or device connected to that server.

Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings.

FIG. 1 is a configuration diagram of an application execution processing system according to an embodiment of the present invention. Referring to FIG. 1, the application execution processing system 1 may include a media playback apparatus 110 and a cloud server 120. The media playback apparatus 110 and the cloud server 120 shown in FIG. 1 illustrate exemplary components that can be controlled by the application execution processing system 1.

Each component of the application execution processing system 1 of FIG. 1 is typically connected via a network. For example, as shown in FIG. 1, the media playback apparatus 110 may be connected to the cloud server 120 through a network, either simultaneously or at intervals.

A network refers to a connection structure in which information can be exchanged between nodes such as terminals and servers. Examples of such a network include Wi-Fi, Bluetooth, the Internet, a LAN, a wireless LAN, a WAN, a PAN, 3G, 4G, 5G, and LTE.

The media playback apparatus 110 may receive a command for an application from a user and transmit a request for the command to the cloud server 120.

When the media playback apparatus 110 receives image chunks and sound chunks for the application from the cloud server 120, it may decode and output them. At this time, the media playback apparatus 110 may receive the image chunks and sound chunks from the cloud server 120 in a predetermined arrangement, in which the image chunks are arranged in the order they were encoded and the sound chunk corresponding to each image chunk is placed between the image chunks.

The media playback apparatus 110 may generate a sync value based on the playback error time between the output sound chunks and image chunks. When the media playback apparatus 110 transmits the generated sync value to the cloud server 120, it may then receive image chunks and sound chunks at a frame rate that the cloud server 120 has changed based on the transmitted sync value.

The media playback apparatus 110 may be, for example, a PCS (Personal Communication System), GSM (Global System for Mobile communications), PDC (Personal Digital Cellular), PHS (Personal Handyphone System), PDA (Personal Digital Assistant), IMT (International Mobile Telecommunication)-2000, CDMA (Code Division Multiple Access)-2000, W-CDMA, WiBro (Wireless Broadband Internet), 3G, 4G, or 5G terminal, a smartphone, a laptop, a general PC, and the like.

Examples of the media playback apparatus 110 also include all kinds of TV devices that use an Internet line, such as IPTV (Internet Protocol Television), Smart TV, and Connected TV.

When the cloud server 120 receives a request for an application from the media playback apparatus 110, it can run the application in response to the request.

The cloud server 120 can capture the running screen of the application at a predetermined number of frames per second and collect the sound of the application. For example, the cloud server 120 may collect the PCM (Pulse Code Modulation) data generated between captured frames.

The cloud server 120 may encode the captured frames into image chunks and encode the collected sound into sound chunks. For example, the cloud server 120 may encode the collected PCM data. As another example, the cloud server 120 may encode the collected sound at a constant bit rate (CBR). At this time, each image chunk and each sound chunk may be generated with a different size. That is, since an image chunk and a sound chunk in the present invention mean the image data of one frame and the sound data of a predetermined interval, respectively, they differ from chunks produced by equal-size division.
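As a minimal sketch of this chunking model (the class and function names are ours, not the patent's), each image chunk carries one encoded frame and each sound chunk carries the sound data collected between that frame and the next, so chunk sizes naturally vary:

```python
from dataclasses import dataclass

@dataclass
class ImageChunk:
    index: int
    data: bytes  # one encoded frame (e.g. PNG); size varies per frame

@dataclass
class SoundChunk:
    index: int
    data: bytes  # PCM collected between frame i and frame i+1

def make_chunks(frames, pcm_segments):
    """Pair each captured frame with the PCM segment collected after it."""
    image_chunks = [ImageChunk(i, f) for i, f in enumerate(frames)]
    sound_chunks = [SoundChunk(i, p) for i, p in enumerate(pcm_segments)]
    return image_chunks, sound_chunks
```

Note that nothing forces the byte lengths to be equal: a complex frame yields a larger image chunk, and a longer capture interval yields a larger sound chunk.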

The cloud server 120 may transmit the image chunks and sound chunks to the media playback apparatus 110. For example, the cloud server 120 may arrange the image chunks in the order they were encoded, place the corresponding sound chunks between the arranged image chunks, and transmit them to the media playback apparatus 110 in that arrangement.

When the cloud server 120 receives a sync value from the media playback apparatus 110, it can change the predetermined number of frames per second based on the received sync value. Here, the sync value may be generated based on the playback error time between the sound chunks and the image chunks output by the media playback apparatus 110. For example, the cloud server 120 may decrease the frame rate if the sync value is larger than a set value.

The cloud server 120 may store a sequence of instructions that, when executed by a computing device, receives a request for an application from the media playback apparatus 110, runs the application in response to the request, captures the running screen of the application at a predetermined number of frames per second, collects the sound of the application, encodes the captured frames into image chunks, encodes the collected sound into sound chunks, and transmits the image chunks and sound chunks to the media playback apparatus 110.

FIG. 2 is a configuration diagram of a media playback apparatus according to an embodiment of the present invention. Referring to FIG. 2, the media playback apparatus 110 may include an input unit 210, a transmitting unit 220, a receiving unit 230, a decoding unit 240, an output unit 250, and a sync value generating unit 260.

The input unit 210 can receive a command for an application.

The transmitting unit 220 may transmit a request for the command to the cloud server 120. In addition, when the sync value generating unit 260 generates a sync value based on the playback error time between the output sound chunks and image chunks, the transmitting unit 220 may transmit the generated sync value to the cloud server 120.

The receiving unit 230 may receive image chunks and sound chunks for the application from the cloud server 120. An image chunk may be encoded from a frame captured by the cloud server 120, where the captured frame is the running screen of the application captured by the cloud server 120 at a predetermined number of frames per second. The sound chunks are collected and encoded by the cloud server 120: the collected sound includes the PCM (Pulse Code Modulation) data generated between the captured frames, and the sound chunks may be encoded from that PCM data by the cloud server 120. Each image chunk and sound chunk may be of a different size.

At this time, the receiving unit 230 may receive the image chunks and sound chunks from the cloud server 120 in a predetermined arrangement. In the predetermined arrangement, the image chunks are arranged in the order they were encoded, and the corresponding sound chunks are placed between the arranged image chunks. For example, the receiving unit 230 can receive image chunks and sound chunks in the order: image chunk 1, sound chunk 1, image chunk 2, sound chunk 2, and so on.
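The arrangement described above can be sketched as a simple interleaving (a minimal illustration of the ordering only, not the patent's wire format):

```python
def interleave(image_chunks, sound_chunks):
    """Order chunks as image 1, sound 1, image 2, sound 2, ...
    so each sound chunk follows the image chunk it belongs to."""
    ordered = []
    for img, snd in zip(image_chunks, sound_chunks):
        ordered.append(("image", img))
        ordered.append(("sound", snd))
    return ordered
```

The receiver can then walk the list in order, decoding each image chunk and buffering each sound chunk as it arrives.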

The receiving unit 230 can also receive image chunks and sound chunks at a frame rate changed by the cloud server 120 based on the transmitted sync value.

The decoding unit 240 can decode image chunks and sound chunks.

The output unit 250 can output decoded image chunks and sound chunks.

The sync value generating unit 260 can generate a sync value based on the playback error time between the output sound chunks and image chunks.

FIG. 3 is a flowchart of a method of processing application execution in a media playback apparatus according to an embodiment of the present invention. The method performed by the media playback apparatus 110 according to the embodiment shown in FIG. 3 includes steps processed over time in the application execution processing system 1 according to the embodiment shown in FIG. 1. Therefore, the content already described with respect to the application execution processing system 1 of FIGS. 1 and 2, even if omitted below, also applies to the method of processing application execution by the media playback apparatus 110 according to the embodiment shown in FIG. 3.

In step S310, the media playback apparatus 110 may receive a command for the application. In step S320, the media playback apparatus 110 may transmit a request for the command to the cloud server 120. In step S330, the media playback apparatus 110 may receive image chunks and sound chunks for the application from the cloud server 120. At this time, the media playback apparatus 110 may receive the image chunks and sound chunks in a predetermined arrangement from the cloud server 120. The image chunks are encoded from the frames captured by the cloud server 120, the sound chunks are encoded from the sound of the application collected by the cloud server 120, and each image chunk and sound chunk may be of a different size. In the predetermined arrangement, the image chunks are arranged in the order they were encoded, and the corresponding sound chunks are placed between the arranged image chunks. In step S340, the media playback apparatus 110 may decode the image chunks and sound chunks. In step S350, the media playback apparatus 110 may output the decoded image chunks and sound chunks.

Although not shown in FIG. 3, the method may further include the media playback apparatus 110 generating a sync value based on the playback error time between the output sound chunks and image chunks, transmitting the generated sync value to the cloud server 120, and receiving image chunks and sound chunks at a frame rate changed by the cloud server 120 based on the transmitted sync value.

In the above description, steps S310 to S350 may be further divided into additional steps or combined into fewer steps, according to embodiments of the present invention. Also, some of the steps may be omitted as necessary, and the order of the steps may be changed.

FIG. 4 is a configuration diagram of a cloud server according to an embodiment of the present invention. Referring to FIG. 4, the cloud server 120 may include a receiving unit 410, an application driving unit 420, a frame capturing unit 430, a sound collecting unit 440, an image encoding unit 450, a sound encoding unit 460, a chunk arranging unit 470, a transmitting unit 480, and a frame adjusting unit 490.

The receiving unit 410 may receive a request for the application from the media playback apparatus 110. In addition, the receiving unit 410 may receive a sync value from the media playback apparatus 110.

The application driving unit 420 may drive the application according to a request from the media playback apparatus 110.

The frame capturing unit 430 can capture the running screen of the application at a predetermined frame rate. For example, at n frames per second, the frame capturing unit 430 may capture the running screen of the application at a period of 1000/n msec.

The sound collecting unit 440 can collect the sound of the application. For example, the sound collecting unit 440 may collect the PCM (Pulse Code Modulation) data generated between the captured frames, at a PCM sample rate of 44,056 Hz.
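At that sample rate, the amount of PCM data per sound chunk follows directly from the frame rate (a back-of-the-envelope sketch; the helper name is ours):

```python
SAMPLE_RATE_HZ = 44_056  # PCM sample rate given in the description

def pcm_samples_per_frame(fps):
    """Number of PCM samples collected during one 1000/fps msec
    capture period (truncated to a whole sample)."""
    return SAMPLE_RATE_HZ // fps
```

At 25 fps this gives 1,762 samples per sound chunk; halving the frame rate roughly doubles the amount of PCM per chunk, which is why chunk sizes vary with the chosen frame rate.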

The image encoding unit 450 may encode the captured frame into an image chunk. At this time, image chunks may be generated with different sizes.

The sound encoding unit 460 can encode the collected sound into sound chunks. For example, when the PCM data is collected in the sound collection unit 440, the sound encoding unit 460 can encode the collected PCM data into sound chunks. At this time, the sound chunks may be generated with different sizes. In another example, the sound encoding unit 460 may encode the collected sound to a constant bit rate (CBR).

The chunk arranging unit 470 can arrange the image chunks in the order they were encoded and place the corresponding sound chunks between the arranged image chunks. For example, the chunk arranging unit 470 can place a sound chunk, encoded in an audio format such as MP3, between the first image chunk and the second image chunk to which it corresponds. At this time, the chunk arranging unit 470 may designate separators as character strings such as "boundary_png, --boundary_sound" so that the media playback apparatus 110 can easily parse the stream.
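One way such a separator scheme could look is sketched below; the exact separator strings and framing are assumptions extrapolated from the quoted example, not the patent's actual wire format:

```python
def multiplex(image_chunks, sound_chunks):
    """Serialize interleaved chunks, prefixing each with a string
    separator so the media playback device can split the stream.
    Separator strings here are illustrative assumptions."""
    out = bytearray()
    for img, snd in zip(image_chunks, sound_chunks):
        out += b"--boundary_png\r\n" + img + b"\r\n"
        out += b"--boundary_sound\r\n" + snd + b"\r\n"
    return bytes(out)
```

Because every chunk is delimited by a distinct string, the receiver can recover the image/sound interleaving without knowing chunk sizes in advance, much like parsing a MIME multipart body.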

The transmitting unit 480 can transmit the image chunks and sound chunks to the media playback apparatus 110. For example, when the chunk arranging unit 470 has arranged the image chunks and sound chunks, the transmitting unit 480 can transmit them to the media playback apparatus 110 in that arrangement.

When the receiving unit 410 receives a sync value from the media playback apparatus 110, the frame adjusting unit 490 can change the preset number of frames per second based on the received sync value. Here, the sync value may be generated based on the playback error time between the sound chunks and the image chunks output by the media playback apparatus 110. For example, if the sync value is larger than a set value, the frame adjusting unit 490 may decrease the frame rate.

FIG. 5 is an exemplary diagram for explaining a process of encoding image chunks and sound chunks in a cloud server according to an embodiment of the present invention. Referring to FIG. 5, the cloud server 120 may capture the running screen of an application at a preset frame rate (fps, frames per second) using the ICS method.

For example, the cloud server 120 may generate a sound chunk by collecting and encoding PCM data at a period of 1000 / n msec (500). At this time, the cloud server 120 can encode PCM data using a constant bit rate (CBR) in the same frame period.

Here, for a frame rate of n fps the capture period is 1000/n msec, and a set of usable fps values may be predetermined in order to improve the performance of the ICS transmission scheme. The set of fps values can be defined, for example, as FPS = {2, 4, 5, 10, 25, ...}. These set values can be chosen in advance and indexed as FPS = {F1, F2, F3, ...}, where F1 < F2 < F3 < ....
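The indexed fps set and its capture periods can be sketched as follows (the concrete values come from the example set above; the helper name is ours):

```python
FPS_SET = [2, 4, 5, 10, 25]  # F1 < F2 < F3 < ... from the example set

def capture_period_msec(fps):
    """Capture period for a frame rate of n fps: 1000/n msec."""
    return 1000 / fps
```

So F5 = 25 fps corresponds to a 40 msec capture period, while the lowest rate F1 = 2 fps corresponds to 500 msec.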

FIG. 6 is an exemplary diagram for explaining a process of decoding image chunks and sound chunks in a media playback apparatus according to an embodiment of the present invention. Referring to FIG. 6, the media playback apparatus 110 may decode an image chunk 600, for example Ga 601, to output an image. Also, the media playback apparatus 110 may output sound by decoding sound chunks 610, for example [Sa, Sb] 611 and 612. In this case, unlike images, the sound must be buffered to a specified size, since there must be no discontinuities in playback. Also, since an image is output through the frame buffer, a certain amount of delay is unavoidable. Because such mismatches can occur in the decoding of sound and images, a process for preventing mismatch during decoding will be described in detail with reference to FIG. 7.

FIG. 7 is an exemplary diagram for explaining a process of generating a sync value in a media playback apparatus according to an embodiment of the present invention. Referring to FIG. 7, a playback error time may occur between the sound chunks and the image chunks output from the media playback apparatus 110.

For example, if the image chunks 700 and the sound chunks 710 placed between them fall out of sync and a sound chunk 710 is output first, then, because the sound chunks 710 are encoded at a constant bit rate, that sound chunk and the following sound chunks may all remain out of sync with the image chunks 700.

As another example, if the size of a sound chunk 720 is large, the sound chunk 720 may be output before the corresponding image chunk 700 and fall out of sync with the image chunks. In this case, the size of the sound chunks 720 must be reduced to restore sync. Therefore, when the current frame rate is Fi, lowering it to F(i-1), for example, allows the image chunks and sound chunks to be synchronized.
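Stepping the frame rate down from Fi to F(i-1) within the indexed set might look like this (a sketch assuming the example set given earlier; the function name is ours):

```python
FPS_SET = [2, 4, 5, 10, 25]  # F1 < F2 < ... < F5

def step_down(current_fps):
    """Move from F_i to F_(i-1); stay at the lowest rate if already there."""
    i = FPS_SET.index(current_fps)
    return FPS_SET[max(i - 1, 0)]
```

Clamping at F1 reflects that the frame rate can only be lowered as far as the smallest predefined value in the set.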

FIG. 8 is an exemplary diagram for explaining a process of changing a predetermined number of frames per second based on a sync value according to an embodiment of the present invention. Referring to FIGS. 7 and 8, when a playback error time occurs between the image chunks and sound chunks output from the media playback apparatus 110, the cloud server 120 changes the preset frame rate based on the sync value received from the media playback apparatus 110.

At this time, the cloud server 120 can change the number of frames per second based on a sync value generated using the formula of equation 730.

For example, suppose that a sync value is generated for every 10 consecutive data chunks, each data chunk consisting of an image chunk and a sound chunk, in order to determine the current playback state. Applying the formula, the media playback apparatus 110 measures the playback error time between the image chunk and the sound chunk of data chunk 1 (801), then the playback error time between the image chunk and the sound chunk of data chunk 2 (802), and so on through data chunk 10 (821), and generates a sync value from these playback error times. The cloud server 120 can then receive the sync value generated by the media playback apparatus 110 and change the predetermined number of frames per second based on the received sync value. Alternatively, the cloud server 120 may receive the playback error time of each data chunk from the media playback apparatus 110, generate the sync value directly, and change the predetermined number of frames per second based on the generated sync value.
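The sync-value formula itself appears only as an image in the source publication, so the following is purely an illustrative stand-in, not the patent's formula: one plausible aggregation is the mean absolute playback error over the 10-chunk window.

```python
def sync_value(error_times_msec):
    """Illustrative stand-in for the formula of equation 730 (an
    assumption; the real formula is not reproduced in this text):
    mean absolute playback error, in msec, over a window of 10
    consecutive data chunks."""
    window = error_times_msec[-10:]
    return sum(abs(e) for e in window) / len(window)
```

Whatever the actual formula, the resulting scalar is compared against a set value on the server side: if it is larger, the frame adjusting unit lowers the frame rate.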

FIG. 9 is a flowchart of a method of processing application execution in a cloud server according to an embodiment of the present invention. The method performed by the cloud server 120 according to the embodiment shown in FIG. 9 includes steps processed over time in the application execution processing system 1 according to the embodiment shown in FIG. 1. Therefore, the content already described with respect to the application execution processing system 1 of FIGS. 1 to 8, even if omitted below, also applies to the method of processing application execution by the cloud server 120 according to the embodiment shown in FIG. 9.

In step S910, the cloud server 120 may receive a request for the application from the media playback apparatus 110. In step S920, the cloud server 120 can run the application according to the request. In step S930, the cloud server 120 may capture the running screen of the application at a predetermined number of frames per second. In step S940, the cloud server 120 may collect the sound of the application; for example, the cloud server 120 may collect the PCM (Pulse Code Modulation) data generated between captured frames. In step S950, the cloud server 120 may encode the captured frames into image chunks; at this time, each image chunk may be generated with a different size. In step S960, the cloud server 120 may encode the collected sound into sound chunks; for example, the cloud server 120 may encode the collected PCM data, and may encode the collected sound at a constant bit rate (CBR). At this time, each sound chunk may be generated with a different size. In step S970, the cloud server 120 may transmit the image chunks and sound chunks to the media playback apparatus 110.

Although not shown in FIG. 9, the method may further include, by the cloud server 120, arranging the image chunks in the encoded order, arranging the sound chunks so as to correspond to the arranged image chunks, and transmitting the image chunks and the sound chunks to the media playback apparatus 110 based on the arrangement.
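That arrangement step might look like the following sketch, where a chunk is a simple index/payload dictionary (a hypothetical representation): image chunks are put in encoded order and each is paired with its corresponding sound chunk before transmission.

```python
def arrange_chunks(image_chunks, sound_chunks):
    """Order image chunks by their encoding index and attach the sound
    chunk with the same index, yielding a transmission schedule."""
    ordered = sorted(image_chunks, key=lambda c: c["index"])  # encoded order
    sound_by_index = {c["index"]: c for c in sound_chunks}
    return [(img, sound_by_index.get(img["index"])) for img in ordered]

# Chunks may come out of the encoders out of order.
imgs = [{"index": 1, "data": b"I1"}, {"index": 0, "data": b"I0"}]
snds = [{"index": 0, "data": b"S0"}, {"index": 1, "data": b"S1"}]
schedule = arrange_chunks(imgs, snds)
```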

Although not shown in FIG. 9, the method may further include receiving a sync value from the media playback apparatus 110 and changing the preset number of frames per second based on the received sync value. At this time, the sync value is generated based on the playback error time between the sound chunks and the image chunks output by the media playback apparatus 110, and when the sync value is larger than a set value, the preset number of frames per second may be lowered.
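The frame-rate adjustment can be sketched as below. The 100 ms threshold, 5 fps step, and 10 fps floor are illustrative assumptions; the description only says the rate is lowered when the sync value exceeds the set value.

```python
def adjust_fps(current_fps, sync_value_ms, threshold_ms=100.0,
               step=5, min_fps=10):
    """Lower the preset frames-per-second when the sync value (playback
    error time between sound and image chunks) exceeds the set value.
    Threshold, step and floor are assumed tuning parameters."""
    if sync_value_ms > threshold_ms:
        return max(min_fps, current_fps - step)
    return current_fps

# A 150 ms playback error trips the threshold and lowers 30 fps to 25.
new_fps = adjust_fps(30, 150.0)
```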

In the above description, steps S910 to S970 may be further divided into additional steps or combined into fewer steps, according to embodiments of the present invention. Also, some of the steps may be omitted as necessary, and the order between the steps may be changed.

The method of processing the driving of an application by the media playback apparatus and the cloud server described with reference to FIGS. 1 to 9 may be implemented in the form of a computer program stored in a medium executed by a computer, or in the form of a recording medium including instructions executable by a computer. Computer-readable media can be any available media that can be accessed by a computer and include both volatile and nonvolatile media, and removable and non-removable media. In addition, computer-readable media may include both computer storage media and communication media. Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Communication media typically include computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media.

The foregoing description of the present invention is for illustration, and those of ordinary skill in the art to which the present invention pertains will understand that various changes and modifications may be made without departing from the spirit or essential characteristics of the present invention. It is therefore to be understood that the embodiments described above are illustrative in all aspects and not restrictive. For example, each component described as a single entity may be implemented in a distributed manner, and components described as distributed may likewise be implemented in a combined form.

The scope of the present invention is defined by the appended claims rather than by the detailed description above, and all changes or modifications derived from the meaning and scope of the claims and their equivalents are to be construed as being included within the scope of the present invention.

110: Media player
120: Cloud server
210:
220:
230: Receiver
240:
250: Output section
260: Sync value generator
410:
420: application driver
430: frame capture unit
440: sound collecting unit
450: Image encoding unit
460: sound encoding section
470: chunk placement section
480:
490:

Claims (17)

1. A cloud server for processing an application, comprising:
A receiving unit for receiving a request for an application from the media player;
An application driver for driving the application according to the request;
A frame capture unit capturing a driving picture of the application at a predetermined number of frames per second;
A sound collector for collecting sounds of the application;
An image encoding unit encoding the captured frame into an image chunk;
A sound encoding unit for encoding the collected sound into a sound chunk; and
A transmitting unit for transmitting the image chunk and the sound chunk to the media player.
The cloud server according to claim 1,
Each of the image chunks is generated differently in size,
Wherein each of the sound chunks is generated differently in size.
The cloud server according to claim 1,
The sound collecting unit collects PCM (Pulse Code Modulation) data generated between the captured frames,
Wherein the sound encoding unit is configured to encode the collected PCM data.
The cloud server according to claim 1,
further comprising a chunk arrangement unit for arranging the image chunks in an encoded order and arranging the sound chunks corresponding to the arranged image chunks.
5. The cloud server according to claim 4,
Wherein the transmitting unit is configured to transmit the image chunk and the sound chunk to the media player based on the arrangement.
The cloud server according to claim 1,
And the sound encoding unit is configured to encode the collected sound with a constant bit rate (CBR).
The cloud server according to claim 1,
wherein the receiving unit receives a sync value from the media playback apparatus,
the cloud server further comprising a frame adjusting unit for changing the predetermined number of frames per second based on the received sync value.
8. The cloud server according to claim 7,
Wherein the sync value is generated based on a reproduction error time between the sound chunks and the image chunks output by the media reproduction apparatus.
8. The cloud server according to claim 7,
Wherein the frame adjusting unit is configured to lower the predetermined number of frames per second when the sync value is larger than the set value.
A media playback apparatus for operating an application in cooperation with a cloud server, comprising:
An input unit for receiving a command for an application;
A transmitting unit for transmitting a request for the command to the cloud server;
A receiving unit for receiving image chunks and sound chunks for the application from the cloud server;
A decoding unit decoding the image chunks and the sound chunks; And
An output unit for outputting the decoded image chunks and sound chunks,
Wherein the image chunk is encoded from a frame captured by the cloud server,
Wherein the sound chunk is a sound of the application collected and encoded by the cloud server,
Wherein the captured frame is captured by the cloud server in a predetermined number of frames per second.
11. The media playback apparatus according to claim 10,
Wherein each of the image chunks is generated differently in size.
11. The media playback apparatus according to claim 10,
The collected sound includes PCM (Pulse Code Modulation) data generated between the captured frames,
Wherein the sound chunks are encoded from the PCM data by the cloud server.
11. The media playback apparatus according to claim 10,
Wherein the receiving unit is configured to receive the image chunks and the sound chunks based on a predetermined arrangement from the cloud server.
14. The media playback apparatus according to claim 13,
Wherein the predetermined arrangement is arranged in the order in which the image chunks are encoded, and the sound chunks corresponding to the arranged image chunks are arranged.
11. The media playback apparatus according to claim 10,
further comprising a sync value generator for generating a sync value based on a reproduction error time between the output sound chunks and the output image chunks.
16. The media playback apparatus according to claim 15,
Wherein the transmitting unit transmits the generated sync value to the cloud server,
Wherein the receiving unit is configured to receive the image chunks and the sound chunks of the number of frames per second changed based on the transmitted sync value from the cloud server.
A computer program stored in a medium for processing the driving of an application in a cloud server, wherein the computer program, when executed by a computing device of the cloud server, causes the cloud server to:
receive a request for an application from a media playback apparatus;
drive the application according to the request;
capture a driving screen of the application at a predetermined number of frames per second;
collect the sound of the application;
encode the captured frames into image chunks;
encode the collected sound into sound chunks; and
transmit the image chunks and the sound chunks to the media playback apparatus.
KR1020150139089A 2015-10-02 2015-10-02 Cloud server, media play device, computer program for processing execution of application KR20170039916A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150139089A KR20170039916A (en) 2015-10-02 2015-10-02 Cloud server, media play device, computer program for processing execution of application

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150139089A KR20170039916A (en) 2015-10-02 2015-10-02 Cloud server, media play device, computer program for processing execution of application

Publications (1)

Publication Number Publication Date
KR20170039916A true KR20170039916A (en) 2017-04-12

Family

ID=58580431

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150139089A KR20170039916A (en) 2015-10-02 2015-10-02 Cloud server, media play device, computer program for processing execution of application

Country Status (1)

Country Link
KR (1) KR20170039916A (en)

Similar Documents

Publication Publication Date Title
KR101868280B1 (en) Information processing apparatus, information processing method, and computer-readable recording medium
CN106686438B (en) method, device and system for synchronously playing audio images across equipment
US20150074241A1 (en) Method, terminal, and server for implementing fast playout
JP5859694B2 (en) Method and apparatus for supporting content playout
JPWO2017138387A1 (en) Information processing apparatus and information processing method
EP3448021A1 (en) Video encoding and decoding method and device
JP2006050604A (en) Method and apparatus for flexibly adjusting buffer amount when receiving av data depending on content attribute
CN103329521A (en) Methods, apparatuses and computer program products for pausing video streaming content
US10003626B2 (en) Adaptive real-time transcoding method and streaming server therefor
CN105556922B (en) DASH in network indicates adaptive
JP6354262B2 (en) Video encoded data transmitting apparatus, video encoded data transmitting method, video encoded data receiving apparatus, video encoded data receiving method, and video encoded data transmitting / receiving system
KR20190112780A (en) Data Buffering Methods, Network Devices, and Storage Media
KR101472032B1 (en) Method of treating representation switching in HTTP streaming
KR20230030589A (en) Streaming of Media Data Containing an Addressable Resource Index Track with Switching Sets
CN110996122B (en) Video frame transmission method, device, computer equipment and storage medium
US9571790B2 (en) Reception apparatus, reception method, and program thereof, image capturing apparatus, image capturing method, and program thereof, and transmission apparatus, transmission method, and program thereof
KR20130024785A (en) Data processing apparatus and control method thereof, and storage medium
JP5135147B2 (en) Video file transmission server and operation control method thereof
CN104333765A (en) Processing method and device of video live streams
KR20170039916A (en) Cloud server, media play device, computer program for processing execution of application
KR101603976B1 (en) Method and apparatus for concatenating video files
JP6400163B2 (en) Reception device, reception method, transmission device, transmission method, and program
KR101642112B1 (en) Modem bonding system and method for sending and receiving real time multimedia at mobile network
JP2014135728A (en) Video transmission system and video transmission method
US20160014181A1 (en) Content transfer method, content transfer apparatus and content receiving apparatus