KR20150101902A - Digital device and method of controlling thereof

Info

Publication number
KR20150101902A
Authority
KR
South Korea
Prior art keywords
audio
audio data
type
data
output
Prior art date
Application number
KR1020140130810A
Other languages
Korean (ko)
Inventor
로버트 야그트
슈레쉬 아루무감
아누팜 카울
스티브 윈스턴
사일레쉬 라차바투니
Original Assignee
LG Electronics Inc.
Application filed by LG Electronics Inc.
Priority to US15/121,977 (US20170078737A1)
Priority to PCT/KR2014/011359 (WO2015129992A1)
Publication of KR20150101902A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

Disclosed herein are a digital device and a control method thereof. A digital device according to an embodiment of the present invention comprises a pulse audio module that receives, from an application, audio data of a first type and audio data of a second type different from the first type; an audio processing unit; and an audio output unit. The pulse audio module reports the reception of the first- and second-type audio data to the audio processing unit, the audio processing unit controls the pulse audio module so as to adjust the output of the first- and second-type audio data based on a policy related to the first and second types, and the audio output unit outputs at least one of the first- and second-type audio data based on the adjustment result of the pulse audio module.

Description

Technical Field

The present invention relates to a digital device and a control method thereof.

Mobile devices such as smart phones and tablet PCs have attracted attention alongside standing devices such as personal computers (PCs) and televisions (TVs). Fixed devices and mobile devices originally developed in their own distinct domains, but the boundary between the two has become blurred with the recent boom in digital convergence.

In addition, as digital devices evolve and their usage environments change, users' expectations rise, and there is growing demand for a wide variety of fast services and applications.

Meanwhile, as the functions of digital devices become more diverse, audio data corresponding to a plurality of contents frequently has to be output simultaneously, and the importance of the audio processing unit is therefore increasing.

It is an object of the present invention to provide a display device having an audio processing unit capable of controlling audio data corresponding to a plurality of contents when the audio data must be output simultaneously.

Another object of the present invention is to provide a control method of a digital device capable of controlling audio data in accordance with a user's intention when audio data corresponding to a plurality of contents must be output simultaneously.

The technical problems to be solved by the present invention are not limited to those described above, and other technical problems not mentioned herein will be clearly understood by those skilled in the art from the following description.

This document discloses various embodiments of digital devices and of processing methods in such digital devices.

A digital device according to an embodiment of the present invention includes a pulse audio module for receiving, from an application, audio data of a first type and audio data of a second type different from the first type; an audio processing unit; and an audio output unit. The pulse audio module notifies the audio processing unit of the reception of the audio data of the first and second types, the audio processing unit controls the pulse audio module to adjust the output of the first and second types of audio data based on a policy related to the first and second types of audio data, and the audio output unit outputs at least one of the first and second types of audio data based on the adjustment result of the pulse audio module.

According to another aspect of the present invention, there is provided a digital device comprising: a pulse audio module for receiving audio data of a first type from an application; a TV service processing unit for receiving audio data of a second type; an audio processing unit; and an audio output unit. The pulse audio module notifies the audio processing unit of the reception of the audio data of the first type, and the TV service processing unit notifies the audio processing unit of the reception of the audio data of the second type. The audio processing unit controls the pulse audio module and the TV service processing unit to adjust the output of the first and second types of audio data based on a policy related to the first and second types of audio data, and the audio output unit outputs at least one of the first and second types of audio data based on the adjustment results of the pulse audio module and the TV service processing unit.

According to another aspect of the present invention, there is provided a method of controlling a digital device, the method comprising: receiving, in a pulse audio module, audio data of a first type and audio data of a second type different from the first type from an application; notifying, by the pulse audio module, an audio processing unit of the reception of the audio data of the first and second types; controlling, by the audio processing unit, the pulse audio module to adjust the output of the first and second types of audio data based on a policy related to the first and second types of audio data; and outputting, through an audio output unit, at least one of the first and second types of audio data based on the adjustment result.
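To make the control flow above concrete, the following is a minimal illustrative sketch of the notify-then-adjust interaction between the pulse audio module and the audio processing unit. It is not the device's actual implementation: the class names, the audio type labels ("media", "voice", "ringtone"), and the gain values in the example policy table are hypothetical placeholders chosen only to show how a policy could map a set of simultaneously active audio types to per-stream output adjustments.

```python
# Hypothetical sketch of the notification/policy flow described above; not the actual Web OS code.

class AudioProcessor:
    """Applies an output policy when audio data of several types must be output at once."""

    # Example policy (assumed values): when both types are active, the lower-priority
    # stream is attenuated (ducked) or muted so the higher-priority stream stays audible.
    POLICY = {
        ("media", "voice"): {"media": 0.2, "voice": 1.0},        # duck media during voice feedback
        ("media", "ringtone"): {"media": 0.0, "ringtone": 1.0},  # mute media while a ring tone plays
    }

    def __init__(self, pulse_audio_module):
        self.pulse = pulse_audio_module
        self.active = set()

    def notify_reception(self, audio_type):
        """Called by the pulse audio module when it starts receiving audio data of a given type."""
        self.active.add(audio_type)
        self._apply_policy()

    def notify_end(self, audio_type):
        self.active.discard(audio_type)
        self._apply_policy()

    def _apply_policy(self):
        key = tuple(sorted(self.active))
        gains = self.POLICY.get(key, {t: 1.0 for t in self.active})
        for audio_type, gain in gains.items():
            self.pulse.set_stream_gain(audio_type, gain)   # control the pulse audio module


class PulseAudioModule:
    """Receives audio data from applications and adjusts its output per the processor's decision."""

    def __init__(self):
        self.gains = {}

    def set_stream_gain(self, audio_type, gain):
        self.gains[audio_type] = gain

    def receive(self, audio_type, samples, processor):
        processor.notify_reception(audio_type)   # 1. report reception to the audio processing unit
        gain = self.gains.get(audio_type, 1.0)   # 2. the processor has set a gain according to policy
        return [s * gain for s in samples]       # 3. adjusted data goes on to the audio output unit


# Example: media audio is ducked once voice-recognition audio arrives.
pulse = PulseAudioModule()
processor = AudioProcessor(pulse)
pulse.receive("media", [0.5, 0.5], processor)           # media alone plays at full volume
pulse.receive("voice", [0.8], processor)
print(pulse.receive("media", [0.5, 0.5], processor))    # media now ducked to 20%
```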

The technical solutions obtained by the present invention are not limited to the above-mentioned solutions, and other solutions not mentioned herein will be clearly understood by those skilled in the art from the following description.

The effects of the present invention are as follows.

According to an embodiment of the present invention, there is provided a display device having an audio processor capable of controlling audio data corresponding to a plurality of contents when they are to be output simultaneously.

According to another embodiment of the present invention, there is provided a control method of a digital device capable of controlling audio data in accordance with a user's intention when audio data corresponding to a plurality of contents must be simultaneously output.

The effects obtainable by the present invention are not limited to the above-mentioned effects, and other effects not mentioned herein will be clearly understood by those skilled in the art from the following description.

FIG. 1 is a schematic diagram illustrating a service system including a digital device according to an embodiment of the present invention;
FIG. 2 is a block diagram illustrating a digital device according to an embodiment of the present invention;
FIG. 3 is a block diagram illustrating a digital device according to another embodiment of the present invention;
FIG. 4 is a block diagram illustrating a digital device according to another embodiment of the present invention;
FIG. 5 is a block diagram illustrating a detailed configuration of the control unit of FIGS. 2 to 4 according to an embodiment of the present invention;
FIG. 6 illustrates input means connected to the digital device of FIGS. 2 to 4 according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating a Web OS architecture according to an embodiment of the present invention;
FIG. 8 is a diagram illustrating an architecture of a Web OS device according to an embodiment of the present invention;
FIG. 9 is a diagram illustrating a graphic composition flow in a Web OS device according to an embodiment of the present invention;
FIG. 10 is a diagram illustrating a media server according to an embodiment of the present invention;
FIG. 11 is a block diagram illustrating a configuration of a media server according to an embodiment of the present invention;
FIG. 12 is a diagram illustrating a relationship between a media server and a TV service according to an embodiment of the present invention;
FIG. 13 is a block diagram illustrating a method of processing audio data in a digital device according to an embodiment of the present invention;
FIG. 14 is a diagram illustrating a method for activating a voice recognition function in a digital device according to an embodiment of the present invention;
FIG. 15 is a diagram illustrating an example of the operation of the audio processing unit when the voice recognition function is activated in a digital device according to an embodiment of the present invention;
FIG. 16 is a diagram illustrating another example of the operation of the audio processing unit when the voice recognition function is activated in a digital device according to an embodiment of the present invention;
FIG. 17 is a diagram illustrating another example of the operation of the audio processing unit when the voice recognition function is activated in a digital device according to an embodiment of the present invention;
FIG. 18 is a diagram illustrating another example of the operation of the audio processing unit when the voice recognition function is activated in a digital device according to an embodiment of the present invention;
FIG. 19 is a diagram illustrating an example of the operation of the audio processing unit when an event related to a ring tone application occurs in a digital device according to an embodiment of the present invention;
FIG. 20 is a diagram illustrating an example of the operation of the audio processing unit when an event related to an alert notification application occurs in a digital device according to an embodiment of the present invention;
FIG. 21 is a diagram illustrating another example of the operation of the audio processing unit when an event related to an alert notification application occurs in a digital device according to an embodiment of the present invention.

Hereinafter, various embodiments of a digital device and a control method thereof according to the present invention will be described in detail with reference to the accompanying drawings.

The suffixes "module" and "unit" for components used in this specification are given only for ease of description, and the two may be used interchangeably as needed. Also, even when a component is described with an ordinal number such as "first" or "second," the component is not limited by such terms or ordinal numbers.

Although the terms used in this specification have been selected, in consideration of the functions involved, from general terms widely used in connection with the technical idea of the present invention, they may vary depending on the intentions or customs of those skilled in the art or on the emergence of new technologies. In certain cases, some terms have been arbitrarily selected by the applicant, and their meanings are described in the relevant part of the description. Accordingly, each term should be interpreted based not simply on its name but on its actual meaning and on the contents described throughout this specification.

It is to be noted that the contents of the present specification and / or drawings are not intended to limit the scope of the present invention.

The term "digital device" as used herein refers to a device that transmits, receives, processes, and outputs data, content, service, And includes all devices that perform at least one or more. The digital device can be paired or connected (hereinafter, referred to as 'pairing') with another digital device, an external server, or the like through a wire / wireless network, Can be transmitted / received. At this time, if necessary, the data may be appropriately converted before the transmission / reception. The digital device may be a standing device such as a network TV, a Hybrid Broadcast Broadband TV (HBBTV), a Smart TV, an IPTV (Internet Protocol TV), a PC (Personal Computer) And a mobile device or handheld device such as a PDA (Personal Digital Assistant), a smart phone, a tablet PC, a notebook, and the like. In order to facilitate understanding of the present invention and to facilitate the description of the present invention, FIG. 2, which will be described later, describes a digital TV, and FIG. 3 illustrates and describes a mobile device as an embodiment of a digital device. In addition, the digital device described in this specification may be a configuration having only a panel, a configuration such as a set-top box (STB), a device, a system, etc. and a set configuration .

The term "wired / wireless network" as used herein collectively refers to communication networks that support various communication standards or protocols for pairing and / or data transmission / reception between digital devices or digital devices and external servers. Such a wired / wireless network includes all of the communication networks to be supported by the standard now or in the future, and is capable of supporting one or more communication protocols therefor. Such a wired / wireless network includes, for example, a USB (Universal Serial Bus), a Composite Video Banking Sync (CVBS), a Component, an S-Video (Analog), a DVI (Digital Visual Interface) A communication standard or protocol for a wired connection such as an RGB or a D-SUB, a Bluetooth standard, a radio frequency identification (RFID), an infrared data association (IrDA), an ultra wideband (UWB) (ZigBee), DLNA (Digital Living Network Alliance), WLAN (Wi-Fi), Wibro (Wireless broadband), Wimax (World Interoperability for Microwave Access), HSDPA (High Speed Downlink Packet Access) A long term evolution (LTE-Advanced), and Wi-Fi direct, and a communication standard or protocol for the network.

In addition, when the term is simply referred to as a digital device in this specification, the meaning may mean a fixed device or a mobile device depending on the context, and may be used to mean both, unless specifically stated otherwise.

Meanwhile, the digital device is an intelligent device that supports, for example, a broadcast receiving function, a computer function, and at least one external input, and can support e-mail, web browsing, banking, games, applications, and the like. In addition, the digital device may include an interface for at least one input or control means (hereinafter, "input means") such as a handwriting input device, a touch screen, and the like.

In addition, the digital device can use a standardized general-purpose OS (Operating System); in particular, the digital device described in this specification uses a Web OS as an embodiment. Therefore, the digital device can add, delete, amend, and update various services or applications on a general-purpose OS kernel or a Linux kernel, through which a more user-friendly environment can be constructed and provided.

Meanwhile, the above-described digital device can receive and process an external input. Here, the external input refers to an input means or a digital device that is connected to the digital device through a wired/wireless network and can transmit/receive data through it. Examples of the external input include an HDMI (High Definition Multimedia Interface) device, a game device such as a PlayStation or an Xbox, a smart phone, a tablet PC, pocket photo devices such as a digital camera, a printing device, a smart TV, a Blu-ray device, and the like.

In addition, the term "server" as used herein refers to a digital device or system that supplies data to, or receives data from, a digital device (that is, a client), and may also be referred to as a processor. Examples of the server include a portal server providing a web page, web content, or a web service; an advertising server providing advertising data; a content server providing content; an SNS server providing a social network service (SNS); a service server provided by a manufacturer; an MVPD (Multichannel Video Programming Distributor) providing a video on demand (VoD) or streaming service; and a service server providing a pay service.

In addition, although the following description refers only to an application for convenience of explanation, the meaning may include not only the application but also a service, depending on the context.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Hereinafter, the present invention will be described in detail with reference to the accompanying drawings.

FIG. 1 is a schematic diagram illustrating a service system including a digital device according to an exemplary embodiment of the present invention.

Referring to FIG. 1, the service system includes a content provider 10, a service provider 20, a network provider 30, and a Home Network End User (HNED) 40. Here, the HNED 40 includes, for example, a client 100, i.e., a digital device according to the present invention.

The content provider 10 produces and provides various contents. As shown in FIG. 1, the content provider 10 may include a terrestrial broadcaster, a cable SO (System Operator) or MSO (Multiple System Operator), a satellite broadcaster, various Internet broadcasters, private content providers, and the like. Meanwhile, the content provider 10 can produce and provide various services, applications, and the like in addition to broadcast content.

The service provider 20 service-packetizes the content produced by the content provider 10 and provides it to the HNED 40. For example, the service provider 20 packages at least one of a first terrestrial broadcast, a second terrestrial broadcast, a cable MSO, a satellite broadcast, various Internet broadcasts, applications, and the like, and provides the package to the HNED 40.

The service provider 20 provides services to the client 100 in a unicast or multicast manner. Meanwhile, the service provider 20 can simultaneously transmit data to a plurality of pre-registered clients 100, and the Internet Group Management Protocol (IGMP) or the like can be used for this purpose.
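As background for the IGMP-based simultaneous delivery mentioned above, the short sketch below shows how a client can join a multicast group with standard sockets; it is the act of joining the group that makes the host emit an IGMP membership report so the network forwards the stream to it. The group address and port are arbitrary example values, not ones defined by this document.

```python
import socket
import struct

# Hypothetical multicast group and port; a real client learns these from the service provider.
GROUP, PORT = "239.1.1.1", 5004

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Joining the group makes the host's IP stack send an IGMP membership report,
# so the network starts forwarding the multicast stream to this client.
membership = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)

data, sender = sock.recvfrom(2048)   # receive one multicast datagram
print(len(data), "bytes from", sender)
```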

The above-described content provider 10 and service provider 20 may be the same entity. For example, the content provider 10 may also perform the functions of the service provider 20 by packaging the content it produces into a service and providing it to the HNED 40, and vice versa.

The network provider 30 provides a network for exchanging data between the content provider 10 and / or the service provider 20 and the client 100.

The client 100 is a consumer device belonging to the HNED 40, and can, for example, construct a home network through the network provider 30 to receive data, and can also transmit/receive data related to various services and applications such as VoD and streaming.

Meanwhile, the content provider 10 and/or the service provider 20 in the service system can use conditional access or content protection means to protect the transmitted content. Accordingly, the client 100 can use processing means such as a cable card (or POD: Point of Deployment) or a DCAS (Downloadable CAS) in response to the conditional access or content protection.

In addition, the client 100 can use an interactive (two-way) service through the network. Accordingly, the client 100 may itself perform the role or function of a content provider, and the service provider 20 may receive data from the client and transmit it to another client or the like.

In FIG. 1, the content provider 10 and/or the service provider 20 may be a server that provides a service described later in this specification. In this case, the server may also own or include the network provider 30 as needed. The service or service data includes not only internal services or applications but also services or applications received from the outside, and may mean services or application data for the Web OS-based client 100.

FIG. 2 is a block diagram illustrating a digital device according to an exemplary embodiment of the present invention.

The digital device described herein corresponds to the client 100 of FIG. 1 described above.

The digital device 200 includes a network interface unit 201, a TCP/IP manager 202, a service delivery manager 203, an SI decoder 204, a demultiplexer (demux) 205, an audio decoder 206, a video decoder 207, a display A/V and OSD module 208, a service control manager 209, a service discovery manager 210, an SI & metadata database 211, a metadata manager 212, a service manager 213, a UI manager 214, and the like.

The network interface unit 201 transmits/receives IP packets or IP datagrams (hereinafter, IP packet(s)) through a network. For example, the network interface unit 201 can receive services, applications, content, and the like from the service provider 20 of FIG. 1 through the network.

The TCP/IP manager 202 takes part in the delivery of the IP packets received by the digital device 200 and the IP packets transmitted by the digital device 200, that is, in packet delivery between a source and a destination. The TCP/IP manager 202 classifies the received packet(s) so as to correspond to an appropriate protocol and outputs the classified packet(s) to the service delivery manager 203, the service discovery manager 210, the service control manager 209, the metadata manager 212, and the like.

The service delivery manager 203 is responsible for controlling the received service data. For example, the service delivery manager 203 may use RTP/RTCP when controlling real-time streaming data. When the real-time streaming data is transmitted using RTP, the service delivery manager 203 parses the received data packets according to RTP and transmits the parsed packets to the demultiplexer 205, or stores them in the SI & metadata database 211 under the control of the service manager 213. The service delivery manager 203 then feeds back network reception information to the server providing the service using RTCP.
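To make the RTP parsing step concrete, the sketch below unpacks the fixed 12-byte RTP header defined in RFC 3550 so that the sequence number, timestamp, and payload can be handed onward. It is only an illustration of the packet layout, not the service delivery manager's actual code, and it ignores RTP header extensions.

```python
import struct

def parse_rtp(packet: bytes) -> dict:
    """Parse the fixed 12-byte RTP header (RFC 3550) and return header fields plus payload."""
    if len(packet) < 12:
        raise ValueError("too short for an RTP packet")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    csrc_count = b0 & 0x0F
    header_len = 12 + 4 * csrc_count          # skip any contributing-source (CSRC) identifiers
    return {
        "version": b0 >> 6,
        "padding": (b0 >> 5) & 1,
        "extension": (b0 >> 4) & 1,
        "marker": b1 >> 7,
        "payload_type": b1 & 0x7F,
        "sequence": seq,                      # used to reorder / detect lost packets
        "timestamp": ts,                      # used for playout timing
        "ssrc": ssrc,
        "payload": packet[header_len:],       # handed on toward the demultiplexer/decoders
    }
```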

The demultiplexer 205 demultiplexes the received packets into audio, video, SI (System Information) data, and transmits them to the audio / video decoder 206/207 and the SI decoder 204, respectively.

The SI decoder 204 decodes the demultiplexed SI data, that is, service information such as PSI (Program Specific Information), PSIP (Program and System Information Protocol), DVB-SI (Digital Video Broadcasting-Service Information), and DTMB/CMMB (Digital Television Terrestrial Multimedia Broadcasting / Coding Mobile Multimedia Broadcasting) information. The SI decoder 204 may also store the decoded service information in the SI & metadata database 211. The stored service information can be read out and used by a corresponding component, for example, upon a user's request.

The audio/video decoders 206/207 decode the demultiplexed audio data and video data, respectively. The decoded audio and video data are provided to the user through the display unit 208.

The application manager may include, for example, the UI manager 214 and the service manager 213, and may perform the functions of a controller of the digital device 200. In other words, the application manager can manage the overall state of the digital device 200, provide a user interface (UI), and manage the other managers.

The UI manager 214 provides a GUI (Graphic User Interface) / UI for a user using an OSD (On Screen Display) or the like, and receives a key input from a user to perform a device operation according to the input. For example, the UI manager 214 receives the key input regarding the channel selection from the user, and transmits the key input signal to the service manager 213.

The service manager 213 controls the manager associated with the service such as the service delivery manager 203, the service discovery manager 210, the service control manager 209, and the metadata manager 212.

In addition, the service manager 213 generates a channel map and controls channel selection using the generated channel map according to a key input received from the UI manager 214. The service manager 213 receives the service information from the SI decoder 204 and sets the audio/video PID (Packet Identifier) of the selected channel in the demultiplexer 205. The PID thus set can be used in the demultiplexing process described above; accordingly, the demultiplexer 205 filters the audio data, video data, and SI data using the PID (PID filtering or section filtering).
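As an illustration of the PID filtering the demultiplexer performs, the sketch below scans 188-byte MPEG-2 transport stream packets and keeps only those whose 13-bit PID matches the audio/video PIDs set by the service manager. The PID values shown are hypothetical examples; real PIDs come from the decoded service information.

```python
TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def pid_of(ts_packet: bytes) -> int:
    """Extract the 13-bit PID from bytes 1-2 of a transport stream packet."""
    return ((ts_packet[1] & 0x1F) << 8) | ts_packet[2]

def filter_by_pid(stream: bytes, wanted_pids):
    """Yield only the TS packets whose PID is in wanted_pids (PID filtering)."""
    for offset in range(0, len(stream) - TS_PACKET_SIZE + 1, TS_PACKET_SIZE):
        packet = stream[offset:offset + TS_PACKET_SIZE]
        if packet[0] != SYNC_BYTE:
            continue                      # lost sync; a real demultiplexer would resynchronize
        if pid_of(packet) in wanted_pids:
            yield packet

# Hypothetical PIDs that the service manager might set for the selected channel.
AUDIO_PID, VIDEO_PID = 0x101, 0x100
# selected_packets = list(filter_by_pid(ts_buffer, {AUDIO_PID, VIDEO_PID}))
```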

The service discovery manager 210 provides information necessary for selecting a service provider providing the service. Upon receiving a signal regarding channel selection from the service manager 213, the service discovery manager 210 searches for the service using the information.

The service control manager 209 is responsible for the selection and control of services. For example, the service control manager 209 uses IGMP or RTSP when a user selects a live broadcasting service similar to an existing broadcasting system, and uses RTSP to perform service selection and control when a service such as VOD is selected. The RTSP protocol may provide a trick mode for real-time streaming. In addition, the service control manager 209 can initialize and manage a session through the IMS gateway 250 using an IMS (IP Multimedia Subsystem) and SIP (Session Initiation Protocol). The above protocols are one embodiment, and other protocols may be used depending on the implementation.
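Since RTSP is a text-based protocol similar to HTTP, the service selection and control described above can be pictured as the request sequence sketched below (DESCRIBE, SETUP, PLAY). The server name, URL, ports, and session handling are simplified placeholders; a real client would, for example, reuse the session identifier returned in the SETUP response rather than hard-coding one.

```python
import socket

# Hypothetical VOD server and content URL, for illustration only.
SERVER, PORT = "vod.example.com", 554
URL = f"rtsp://{SERVER}/movie1"

def rtsp_request(sock, method, cseq, extra_headers=""):
    """Send one RTSP/1.0 request and return the raw response text."""
    request = f"{method} {URL} RTSP/1.0\r\nCSeq: {cseq}\r\n{extra_headers}\r\n"
    sock.sendall(request.encode())
    return sock.recv(4096).decode(errors="replace")

with socket.create_connection((SERVER, PORT)) as s:
    print(rtsp_request(s, "DESCRIBE", 1, "Accept: application/sdp\r\n"))
    print(rtsp_request(s, "SETUP", 2, "Transport: RTP/AVP;unicast;client_port=5004-5005\r\n"))
    # A real client would reuse the Session header returned by SETUP; "12345678" is a placeholder.
    print(rtsp_request(s, "PLAY", 3, "Session: 12345678\r\nRange: npt=0-\r\n"))
```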

The metadata manager 212 manages the metadata associated with the service and stores the metadata in the SI & metadata database 211.

The SI & meta data database 211 stores the service information decoded by the SI decoder 204, the meta data managed by the meta data manager 212, and the information necessary for selecting a service provider provided by the service discovery manager 210 do. In addition, the SI & meta data database 211 may store set-up data for the system and the like.

The SI & meta data database 211 may be implemented using a non-volatile RAM (NVRAM) or a flash memory.

Meanwhile, the IMS gateway 250 is a gateway that collects functions necessary for accessing the IMS-based IPTV service.

FIG. 3 is a block diagram illustrating a digital device according to another embodiment of the present invention.

While FIG. 2 described above illustrates a fixed device as an example of the digital device, FIG. 3 shows a mobile device as another embodiment of the digital device.

Referring to FIG. 3, the mobile device 300 includes a wireless communication unit 310, an A/V (Audio/Video) input unit 320, a user input unit 330, a sensing unit 340, an output unit 350, a memory 360, an interface unit 370, a control unit 380, a power supply unit 390, and the like.

Hereinafter, each component will be described in detail.

The wireless communication unit 310 may include one or more modules that enable wireless communication between the mobile device 300 and the wireless communication system or between the mobile device and the network in which the mobile device is located. For example, the wireless communication unit 310 may include a broadcast receiving module 311, a mobile communication module 312, a wireless Internet module 313, a short range communication module 314, and a location information module 315 .

The broadcast receiving module 311 receives broadcast signals and/or broadcast-related information from an external broadcast management server through a broadcast channel. Here, the broadcast channel may include a satellite channel and a terrestrial channel. The broadcast management server may refer to a server that generates and transmits broadcast signals and/or broadcast-related information, or a server that receives previously generated broadcast signals and/or broadcast-related information and transmits them to the terminal. The broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and a broadcast signal in which a data broadcast signal is combined with a TV broadcast signal or a radio broadcast signal.

The broadcast-related information may mean information related to a broadcast channel, a broadcast program, or a broadcast service provider. The broadcast-related information may also be provided through a mobile communication network. In this case, it may be received by the mobile communication module 312.

The broadcast-related information may exist in various forms, for example, in the form of an EPG (Electronic Program Guide) or an ESG (Electronic Service Guide).

The broadcast receiving module 311 may receive digital broadcast signals using a digital broadcasting system such as, for example, ATSC, DVB-T (Digital Video Broadcasting-Terrestrial), DVB-S (Satellite), MediaFLO (Media Forward Link Only), or ISDB-T (Integrated Services Digital Broadcast-Terrestrial). Of course, the broadcast receiving module 311 may be adapted not only to the above-described digital broadcasting systems but also to other broadcasting systems.

The broadcast signal and / or broadcast related information received through the broadcast receiving module 311 may be stored in the memory 360.

The mobile communication module 312 transmits and receives radio signals to at least one of a base station, an external terminal, and a server on a mobile communication network. The wireless signal may include various types of data depending on a voice signal, a video call signal, or a text / multimedia message transmission / reception.

The wireless Internet module 313 may be embedded or external to the mobile device 300, including a module for wireless Internet access. WLAN (Wi-Fi), Wibro (Wireless broadband), Wimax (World Interoperability for Microwave Access), HSDPA (High Speed Downlink Packet Access) and the like can be used as wireless Internet technologies.

The short-range communication module 314 is a module for short-range communication. Bluetooth, RFID (Radio Frequency Identification), IrDA (Infrared Data Association), UWB (Ultra Wideband), ZigBee, RS-232, RS-485, and the like can be used as short-range communication technologies.

The position information module 315 is a module for acquiring position information of the mobile device 300, and may be a GPS (Global Position System) module.

The A/V input unit 320 is for inputting audio and/or video signals, and may include a camera 321, a microphone 322, and the like. The camera 321 processes image frames such as still images or moving images obtained by an image sensor in the video communication mode or the photographing mode. The processed image frames can be displayed on the display unit 351.

The image frames processed by the camera 321 may be stored in the memory 360 or transmitted to the outside via the wireless communication unit 310. At least two cameras 321 may be provided depending on the use environment.

The microphone 322 receives an external sound signal by a microphone in a communication mode, a recording mode, a voice recognition mode, or the like, and processes it as electrical voice data. The processed voice data can be converted into a form that can be transmitted to the mobile communication base station through the mobile communication module 312 in the case of the communication mode, and output. The microphone 322 may be implemented with various noise reduction algorithms for eliminating noise generated in receiving an external sound signal.

The user input unit 330 generates input data for a user to control the operation of the terminal. The user input unit 330 may include a key pad, a dome switch, a touch pad (static pressure / static electricity), a jog wheel, a jog switch, and the like.

The sensing unit 340 senses the current state of the mobile device 300, such as the open/closed state of the mobile device 300, the position of the mobile device 300, whether the user is in contact with it, and the orientation of the mobile device, and generates a sensing signal for controlling the operation of the mobile device 300. For example, when the mobile device 300 is moved or tilted, the sensing unit can sense the position, tilt, and the like of the mobile device. It can also sense whether power is supplied from the power supply unit 390, whether the interface unit 370 is connected to an external device, and the like. Meanwhile, the sensing unit 340 may include a proximity sensor 341 including NFC (Near Field Communication).

The output unit 350 is for generating output related to the visual, auditory, or tactile senses, and may include a display unit 351, a sound output module 352, an alarm unit 353, and a haptic module 354.

The display unit 351 displays (outputs) information processed by the mobile device 300. For example, when the mobile device is in a call mode, it displays a UI or GUI associated with the call. When the mobile device 300 is in the video communication mode or the photographing mode, it displays the photographed and/or received image, or a UI or GUI.

The display unit 351 may include at least one of a liquid crystal display (LCD), a thin film transistor-liquid crystal display (TFT LCD), an organic light-emitting diode (OLED) display, a flexible display, and a three-dimensional (3D) display.

Some of these displays may be transparent or light transmissive so that they can be seen through. This can be referred to as a transparent display, and a typical example of the transparent display is TOLED (Transparent OLED) and the like. The rear structure of the display portion 351 may also be of a light transmission type. With this structure, the user can see an object located behind the terminal body through the area occupied by the display unit 351 of the terminal body.

There may be two or more display units 351 depending on the implementation of the mobile device 300. For example, in the mobile device 300, a plurality of display units may be arranged on one surface, spaced apart or formed integrally, or may be disposed on different surfaces, respectively.

When the display unit 351 and a sensor for sensing a touch operation (hereinafter, "touch sensor") form a mutual layer structure (hereinafter, "touch screen"), the display unit 351 can also be used as an input device in addition to an output device. The touch sensor may have the form of, for example, a touch film, a touch sheet, or a touch pad.

The touch sensor may be configured to convert a change in a pressure applied to a specific portion of the display portion 351 or a capacitance generated in a specific portion of the display portion 351 into an electrical input signal. The touch sensor can be configured to detect not only the position and area to be touched but also the pressure at the time of touch.

If there is a touch input to the touch sensor, the corresponding signal (s) is sent to the touch controller. The touch controller processes the signal (s) and transmits corresponding data to the controller 380. Thus, the control unit 380 can know which area of the display unit 351 is touched or the like.

A proximity sensor 341 may be disposed in the interior area of the mobile device or in proximity to the touch screen. The proximity sensor refers to a sensor that detects the presence or absence of an object approaching a predetermined detection surface or a nearby object without mechanical contact using the force of an electromagnetic field or infrared rays. The proximity sensor has a longer life span than the contact sensor and its utilization is also high.

Examples of the proximity sensor include a transmission-type photoelectric sensor, a direct-reflection-type photoelectric sensor, a mirror-reflection-type photoelectric sensor, a high-frequency oscillation proximity sensor, a capacitive proximity sensor, a magnetic proximity sensor, and an infrared proximity sensor. When the touch screen is electrostatic, it is configured to detect the proximity of a pointer based on the change of the electric field caused by the approach of the pointer. In this case, the touch screen (touch sensor) may be classified as a proximity sensor.

Hereinafter, for convenience of explanation, the act of recognizing that the pointer is positioned over the touch screen without contacting it is referred to as a "proximity touch," and the act of actually bringing the pointer into contact with the touch screen is referred to as a "contact touch." The position at which the pointer is proximity-touched on the touch screen means the position at which the pointer vertically corresponds to the touch screen when the pointer is proximity-touched.

The proximity sensor detects a proximity touch and a proximity touch pattern (e.g., a proximity touch distance, a proximity touch direction, a proximity touch speed, a proximity touch time, a proximity touch position, a proximity touch movement state, and the like). Information corresponding to the detected proximity touch operation and the proximity touch pattern may be output on the touch screen.

The sound output module 352 can output audio data received from the wireless communication unit 310 or stored in the memory 360 in a call-signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, and the like. The sound output module 352 also outputs sound signals associated with functions performed on the mobile device 300 (e.g., a call-signal reception tone, a message reception tone, etc.). The sound output module 352 may include a receiver, a speaker, a buzzer, and the like.

The alarm unit 353 outputs a signal for notifying the occurrence of an event of the mobile device 300. Examples of events that occur in the mobile device include reception of a call signal, reception of a message, input of a key signal, and touch input. The alarm unit 353 may output a signal for notifying the occurrence of an event in a form other than a video signal or an audio signal, for example, by vibration. Since the video signal or the audio signal may also be output through the display unit 351 or the sound output module 352, the display unit 351 and the sound output module 352 may be classified as a part of the alarm unit 353.

The haptic module 354 generates various tactile effects that the user can feel. A typical example of the haptic effect generated by the haptic module 354 is vibration, and the intensity and pattern of the vibration generated by the haptic module 354 are controllable. For example, different vibrations may be synthesized and output, or output sequentially. In addition to vibration, the haptic module 354 can generate various other effects, such as the effect of a pin arrangement moving vertically against the contacted skin surface, the spraying or suction force of air through an injection or suction port, a brush against the skin surface, contact with an electrode, and the effect of reproducing a cool or warm feeling using an element capable of absorbing or generating heat. The haptic module 354 can be implemented not only to transmit a tactile effect through direct contact but also to allow the user to feel a tactile effect through the muscular sense of a finger or an arm. Two or more haptic modules 354 may be provided according to the configuration of the mobile device 300.

The memory 360 may store a program for the operation of the control unit 380 and temporarily store input / output data (e.g., phone book, message, still image, moving picture, etc.). The memory 360 may store data on vibration and sound of various patterns outputted when a touch is input on the touch screen.

The memory 360 may include at least one type of storage medium among a flash memory type, a hard disk type, a multimedia card micro type, a card-type memory (for example, SD or XD memory), a RAM (Random Access Memory), an SRAM (Static Random Access Memory), a ROM (Read-Only Memory), an EEPROM (Electrically Erasable Programmable Read-Only Memory), a PROM (Programmable Read-Only Memory), a magnetic disk, and an optical disk. The mobile device 300 may also operate in association with web storage that performs the storage function of the memory 360 on the Internet.

The interface unit 370 serves as a pathway to all external devices connected to the mobile device 300. The interface unit 370 receives data or power from an external device and transfers it to each component in the mobile device 300, or transmits data in the mobile device 300 to an external device. For example, a wired/wireless headset port, an external charger port, a wired/wireless data port, a memory card port, a port for connecting a device having an identification module, an audio I/O (Input/Output) port, a video I/O port, an earphone port, and the like may be included in the interface unit 370.

The identification module is a chip that stores various kinds of information for authenticating the usage right of the mobile device 300, and includes a UIM (User Identification Module), a SIM (Subscriber Identity Module), a USIM (Universal Subscriber Identity Module), and the like. A device having an identification module (hereinafter, "identification device") can be manufactured in a smart card format, and accordingly the identification device can be connected to the terminal through the port.

When the mobile device 300 is connected to an external cradle, the interface unit 370 may serve as a path through which power from the cradle is supplied to the mobile device 300, or as a path through which various command signals input by the user at the cradle are transferred to the mobile device. The various command signals or the power input from the cradle may operate as signals for recognizing that the mobile device is correctly mounted on the cradle.

The control unit 380 typically controls the overall operation of the mobile device 300. The control unit 380 performs related control and processing, for example, for voice call, data communication, video call, and the like. The control unit 380 may include a multimedia module 381 for multimedia playback. The multimedia module 381 may be implemented in the control unit 380 or separately from the control unit 380. The control unit 380 can perform pattern recognition processing for recognizing handwriting input or drawing input performed on the touch-screen as characters and images, respectively.

The power supply unit 390 receives external power and internal power under the control of the controller 380 and supplies power necessary for operation of the respective components.

The various embodiments described herein may be implemented in a recording medium readable by a computer or similar device using, for example, software, hardware, or a combination thereof.

According to a hardware implementation, the embodiments described herein may be implemented using at least one of ASICs (Application Specific Integrated Circuits), DSPs (Digital Signal Processors), DSPDs (Digital Signal Processing Devices), PLDs (Programmable Logic Devices), FPGAs (Field Programmable Gate Arrays), processors, controllers, micro-controllers, microprocessors, and electrical units for performing other functions. In some cases, the embodiments described herein may be implemented by the control unit 380 itself.

According to a software implementation, embodiments such as the procedures and functions described herein may be implemented with separate software modules. Each of the software modules may perform one or more of the functions and operations described herein. Software code may be implemented as a software application written in a suitable programming language. The software code is stored in the memory 360 and can be executed by the control unit 380.

FIG. 4 is a block diagram illustrating a digital device according to another embodiment of the present invention.

Another example of the digital device 400 includes a broadcast receiving unit 405, an external device interface unit 435, a storage unit 440, a user input interface unit 450, a control unit 470, a display unit 480, an audio output unit 485, a power supply unit 490, and a photographing unit (not shown). The broadcast receiving unit 405 may include at least one tuner 410, a demodulator 420, and a network interface unit 430. In some cases, the broadcast receiving unit 405 may include the tuner 410 and the demodulator 420 but not the network interface unit 430, or vice versa. Although not shown, the broadcast receiving unit 405 may include a multiplexer to multiplex a signal demodulated by the demodulator 420 after passing through the tuner 410 and a signal received through the network interface unit 430. In addition, although not shown, the broadcast receiving unit 405 may include a demultiplexer to demultiplex the multiplexed signal, the demodulated signal, or the signal that has passed through the network interface unit 430.

The tuner 410 tunes a channel selected by the user or all pre-stored channels of an RF (Radio Frequency) broadcast signal received through the antenna, and receives the RF broadcast signal. In addition, the tuner 410 converts the received RF broadcast signal into an intermediate frequency (IF) signal or a baseband signal.

For example, if the received RF broadcast signal is a digital broadcast signal, the signal is converted into a digital IF signal (DIF). If the received RF broadcast signal is an analog broadcast signal, the signal is converted into an analog baseband image or a voice signal (CVBS / SIF). That is, the tuner 410 can process both a digital broadcast signal and an analog broadcast signal. The analog baseband video or audio signal (CVBS / SIF) output from the tuner 410 can be directly input to the controller 470.

In addition, the tuner 410 can receive RF broadcast signals of a single carrier or a multiple carrier. Meanwhile, the tuner 410 sequentially tunes and receives RF broadcast signals of all the broadcast channels stored through the channel storage function among the RF broadcast signals received through the antenna, converts the RF broadcast signals into intermediate frequency signals or baseband signals (DIF: Digital Intermediate Frequency or baseband signal).

The demodulator 420 may receive and demodulate the digital IF signal (DIF) converted by the tuner 410 and perform channel decoding. For this purpose, the demodulator 420 may include a trellis decoder, a de-interleaver, and a Reed-Solomon decoder, or may include a convolution decoder, a de-interleaver, and a Reed-Solomon decoder, and the like.

The demodulation unit 420 may perform demodulation and channel decoding, and then output a stream signal TS. At this time, the stream signal may be a signal in which a video signal, a voice signal, or a data signal is multiplexed. For example, the stream signal may be an MPEG-2 TS (Transport Stream) multiplexed with an MPEG-2 standard video signal, a Dolby AC-3 standard audio signal, or the like.

The stream signal output from the demodulator 420 may be input to the control unit 470. The control unit 470 performs demultiplexing, video/audio signal processing, and the like, and then outputs video through the display unit 480 and audio through the audio output unit 485.

The external device interface unit 435 provides an interface environment between the digital device 400 and various external devices. To this end, the external device interface unit 435 may include an A/V input/output unit (not shown) or a wireless communication unit (not shown).

The external device interface unit 435 can be connected by wire or wirelessly to an external device such as a DVD (Digital Versatile Disk) player, a Blu-ray player, a game device, a camera, a camcorder, a computer (notebook), a tablet PC, a smart phone, a cloud, or the like. The external device interface unit 435 transmits signals including data such as video, images, and audio input through the connected external device to the control unit 470 of the digital device. The control unit 470 can control the processed video, images, audio, and the like to be output to the connected external device. To this end, the external device interface unit 435 may further include an A/V input/output unit (not shown) or a wireless communication unit (not shown).

The A/V input/output unit may include a USB terminal, a CVBS (Composite Video Banking Sync) terminal, a component terminal, an S-Video (analog) terminal, a DVI (Digital Visual Interface) terminal, an HDMI (High Definition Multimedia Interface) terminal, an RGB terminal, a D-SUB terminal, and the like.

The wireless communication unit can perform short-range wireless communication with another digital device. The digital device 400 can be networked with other digital devices according to a communication protocol such as Bluetooth, RFID (Radio Frequency Identification), IrDA (Infrared Data Association), UWB (Ultra Wideband), ZigBee, or DLNA (Digital Living Network Alliance).

Also, the external device interface unit 435 may be connected to the set-top box STB through at least one of the various terminals described above to perform input / output operations with the set-top box STB.

Meanwhile, the external device interface unit 435 may receive an application or an application list in an adjacent external device, and may transmit the received application or application list to the control unit 470 or the storage unit 440.

The network interface unit 430 provides an interface for connecting the digital device 400 to a wired/wireless network including the Internet. The network interface unit 430 may include, for example, an Ethernet terminal for connection with a wired network, and may use communication standards such as WLAN (Wireless LAN, Wi-Fi), Wibro (Wireless broadband), Wimax (World Interoperability for Microwave Access), and HSDPA (High Speed Downlink Packet Access) for connection with a wireless network.

The network interface unit 430 can transmit or receive data to or from another user or another digital device via the connected network or another network linked to the connected network. In particular, some of the content data stored in the digital device 400 can be transmitted to a selected user or a selected digital device among other users or other digital devices previously registered in the digital device 400.

Meanwhile, the network interface unit 430 can access a predetermined web page through the connected network or another network linked to the connected network. That is, it is possible to access a predetermined web page through a network and transmit or receive data with the server. In addition, content or data provided by a content provider or a network operator may be received. That is, it can receive content and related information of a movie, an advertisement, a game, a VOD, a broadcast signal, and the like provided from a content provider or a network provider through a network. In addition, it can receive update information and an update file of firmware provided by the network operator. It may also transmit data to the Internet or a content provider or network operator.

In addition, the network interface unit 430 can select and receive a desired application from the open applications through the network.

The storage unit 440 may store a program for each signal processing and control in the control unit 470 or may store a signal-processed video, audio, or data signal.

The storage unit 440 may also perform a function of temporarily storing video, audio, or data signals input from the external device interface unit 435 or the network interface unit 430. The storage unit 440 can store information on a predetermined broadcast channel through a channel memory function.

The storage unit 440 may store an application or application list input from the external device interface unit 435 or the network interface unit 430.

In addition, the storage unit 440 may store various platforms described later.

The storage unit 440 may include at least one type of storage medium among, for example, a flash memory type, a hard disk type, a multimedia card micro type, a card-type memory (for example, SD or XD memory), a RAM, and a ROM (e.g., EEPROM). The digital device 400 may reproduce a content file (a moving image file, a still image file, a music file, a document file, an application file, etc.) stored in the storage unit 440 and provide it to the user.

Although FIG. 4 illustrates an embodiment in which the storage unit 440 is provided separately from the control unit 470, the present invention is not limited thereto. In other words, the storage unit 440 may be included in the control unit 470.

The user input interface unit 450 transfers a signal input by the user to the controller 470 or a signal from the controller 470 to the user.

For example, the user input interface unit 450 may receive and process control signals for power on/off, channel selection, screen setting, and the like from a remote control device 500 according to various communication methods such as RF (Radio Frequency) communication and infrared (IR) communication, or may process a control signal from the control unit 470 so that it is transmitted to the remote control device 500.

In addition, the user input interface unit 450 can transmit a control signal input from a local key (not shown) such as a power key, a channel key, a volume key, and a set value to the controller 470.

The user input interface unit 450 can transmit a control signal input from a sensing unit (not shown), which senses a user's gesture, to the control unit 470, or can transmit a signal from the control unit 470 to the sensing unit (not shown). Here, the sensing unit (not shown) may include a touch sensor, an audio sensor, a position sensor, a motion sensor, and the like.

The control unit 470 can demultiplex a stream input through the tuner 410, the demodulator 420, or the external device interface unit 435, or can process the demultiplexed signals, to generate and output a signal for video or audio output.

The video signal processed by the control unit 470 may be input to the display unit 480 and displayed as an image corresponding to the video signal. The video signal processed by the control unit 470 may also be input to an external output device through the external device interface unit 435.

The audio signal processed by the control unit 470 may be output as sound through the audio output unit 485. The audio signal processed by the control unit 470 may also be input to an external output device through the external device interface unit 435.

Although not shown in FIG. 4, the control unit 470 may include a demultiplexing unit, an image processing unit, and the like.

The control unit 470 can control the overall operation of the digital device 400. For example, the control unit 470 may control the tuner 410 to tune to an RF broadcast corresponding to a channel selected by the user or a previously stored channel.

The control unit 470 can control the digital device 400 according to a user command input through the user input interface unit 450 or according to an internal program. In particular, the control unit 470 can access the network and allow the user to download a desired application or application list to the digital device 400.

For example, the control unit 470 controls the tuner 410 so that a signal of the channel selected according to a predetermined channel selection command received through the user input interface unit 450 is input, and processes the video, audio, or data signal of the selected channel. The control unit 470 allows the channel information or the like selected by the user to be output through the display unit 480 or the audio output unit 485 together with the processed video or audio signal.

As another example, in accordance with an external device video playback command received through the user input interface unit 450, the control unit 470 allows a video signal or an audio signal input from an external device, such as a camera or a camcorder, through the external device interface unit 435 to be output through the display unit 480 or the audio output unit 485.

Meanwhile, the control unit 470 can control the display unit 480 to display an image, for example, a broadcast image input through the tuner 410, an external input image input through the external device interface unit 435, an image input through the network interface unit, or an image stored in the storage unit 440. At this time, the image displayed on the display unit 480 may be a still image or a moving image, and may be a 2D image or a 3D image.

In addition, the control unit 470 can control reproduction of content. The content in this case may be content stored in the digital device 400, received broadcast content, or external input content input from the outside. The content may be at least one of a broadcast image, an external input image, an audio file, a still image, a connected web screen, and a document file.

Meanwhile, when an application view item is entered, the control unit 470 can control an application or a list of applications downloadable from inside the digital device 400 or from an external network to be displayed.

In addition to various user interfaces, the control unit 470 can control an application downloaded from an external network to be installed and driven. Furthermore, the control unit 470 can control the display unit 480 to display an image related to an application being executed, according to the user's selection.

Although not shown in the drawing, a channel browsing processing unit for generating a thumbnail image corresponding to a channel signal or an external input signal may be further provided.

The channel browsing processing unit receives a stream signal (TS) output from the demodulation unit 420 or a stream signal output from the external device interface unit 435, extracts an image from the input stream signal, and generates a thumbnail image. The generated thumbnail image may be input to the control unit 470 as it is or after being encoded. In addition, the generated thumbnail image may be encoded in a stream form and input to the control unit 470. The control unit 470 may display a thumbnail list having a plurality of thumbnail images on the display unit 480 using the input thumbnail images. The thumbnail images in this thumbnail list can be updated sequentially or simultaneously, and accordingly the user can conveniently grasp the contents of a plurality of broadcast channels.

The display unit 480 converts the video signal, the data signal, and the OSD signal processed by the control unit 470, or the video signal and the data signal received from the external device interface unit 435, into R, G, and B signals, respectively, to generate a driving signal.

The display unit 480 may be a PDP, an LCD, an OLED, a flexible display, a 3D display, or the like.

Meanwhile, the display unit 480 may be configured as a touch screen and used as an input device in addition to the output device.

The audio output unit 485 receives a signal processed by the control unit 470, for example, a stereo signal, a 3.1-channel signal, or a 5.1-channel signal, and outputs it as sound. The audio output unit 485 may be implemented with various types of speakers.

In order to detect a gesture of the user, a sensing unit (not shown) having at least one of a touch sensor, a voice sensor, a position sensor, and a motion sensor may be further provided in the digital device 400. A signal sensed by the sensing unit (not shown) may be transmitted to the control unit 470 through the user input interface unit 450.

On the other hand, a photographing unit (not shown) for photographing a user may be further provided. The image information photographed by the photographing unit (not shown) may be input to the control unit 470.

The control unit 470 may detect the gesture of the user by combining the images photographed by the photographing unit (not shown) or the sensed signals from the sensing unit (not shown).

The power supply unit 490 supplies the corresponding power to the digital device 400.

In particular, the power supply unit 490 can supply power to the control unit 470, which can be implemented in the form of a system on chip (SoC), the display unit 480 for displaying an image, and the audio output unit 485 for audio output.

To this end, the power supply unit 490 may include a converter (not shown) for converting AC power into DC power. Meanwhile, for example, when the display unit 480 is implemented as a liquid crystal panel having a plurality of backlight lamps, the power supply unit 490 may further include an inverter (not shown) capable of PWM (Pulse Width Modulation) operation for luminance variation or dimming driving.

The remote control device 500 transmits the user input to the user input interface unit 450. To this end, the remote control device 500 can use Bluetooth, RF (radio frequency) communication, infrared (IR) communication, UWB (Ultra Wideband), ZigBee, or the like.

Also, the remote control device 500 can receive the video, audio, or data signal output from the user input interface unit 450 and display it on the remote control device 500 or output sound or vibration.

The digital device 400 may be a digital broadcast receiver capable of processing digital broadcast signals of a fixed or mobile ATSC scheme or a DVB scheme.

In addition, the digital device according to the present invention may omit some of the components shown in FIG. 4 or may further include components not shown, as necessary. Meanwhile, unlike the above, the digital device may not have a tuner and a demodulation unit, and may receive and reproduce content through the network interface unit or the external device interface unit.

FIG. 5 is a block diagram illustrating a detailed configuration of the control unit of FIGS. 2 to 4 according to an embodiment of the present invention.

An example of the control unit may include a demultiplexer 510, an image processor 520, an OSD generator 540, a mixer 550, a frame rate converter (FRC) 555, and a formatter 560. The control unit may further include a voice processing unit (not shown) and a data processing unit (not shown).

The demultiplexer 510 demultiplexes an input stream. For example, the demultiplexer 510 may demultiplex an input MPEG-2 TS into video, audio, and data signals. Here, the stream signal input to the demultiplexer 510 may be a stream signal output from a tuner, a demodulation unit, or an external device interface unit.

The image processing unit 520 performs image processing of the demultiplexed video signal. To this end, the image processing unit 520 may include a video decoder 525 and a scaler 535.

The video decoder 525 decodes the demultiplexed video signal, and the scaler 535 scales the resolution of the decoded video signal so that it can be output on the display unit.

The video decoder 525 may support various standards. For example, the video decoder 525 performs the function of an MPEG-2 decoder when the video signal is encoded in the MPEG-2 standard, and performs the function of an H.264 decoder when the video signal is encoded in the H.264 standard, for example, according to the DMB (Digital Multimedia Broadcasting) scheme.

Meanwhile, the video signal decoded by the image processing unit 520 is input to the mixer 550.

The OSD generation unit 540 generates OSD data according to a user input or by itself. For example, the OSD generation unit 540 generates data for displaying various data, in graphic or text form, on the screen of the display unit 480 based on a control signal of the user input interface unit. The generated OSD data includes various data such as a user interface screen of the digital device, various menu screens, widgets, icons, and viewing rate information. The OSD generation unit 540 may also generate data for displaying a caption of a broadcast image or broadcast information based on an EPG.

The mixer 550 mixes the OSD data generated by the OSD generation unit 540 and the video signal processed by the image processing unit, and provides the mixed signal to the formatter 560. Since the decoded video signal and the OSD data are mixed, the OSD is displayed overlaid on the broadcast image or the external input image.

The frame rate converter (FRC) 555 converts the frame rate of an input image. For example, the frame rate converter 555 may convert a frame rate of an input 60 Hz image into a frame rate of 120 Hz or 240 Hz in accordance with the output frequency of the display unit. There are various methods for converting the frame rate as described above. For example, when converting the frame rate from 60 Hz to 120 Hz, the frame rate converter 555 may insert the same first frame between a first frame and a second frame, or may insert a third frame predicted from the first frame and the second frame. As another example, when converting the frame rate from 60 Hz to 240 Hz, the frame rate converter 555 may insert three identical frames or three predicted frames between existing frames. Meanwhile, when no separate frame conversion is performed, the frame rate converter 555 may be bypassed.
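The frame repetition case mentioned above can be pictured with a short sketch. The following C fragment is only a minimal illustration of converting a 60 Hz sequence to 120 Hz by repeating each frame once; the function name, buffer layout, and frame size parameter are assumptions for illustration and are not part of the disclosed frame rate converter 555.

```c
#include <string.h>
#include <stdint.h>
#include <stddef.h>

/* Minimal sketch: double the frame rate by writing every input frame
 * twice into the output buffer (frame repetition, no prediction). */
void frc_60_to_120(const uint8_t *in_frames, uint8_t *out_frames,
                   int frame_count, size_t frame_bytes)
{
    for (int i = 0; i < frame_count; i++) {
        const uint8_t *src = in_frames  + (size_t)i * frame_bytes;
        uint8_t       *dst = out_frames + (size_t)(2 * i) * frame_bytes;

        memcpy(dst, src, frame_bytes);               /* original frame */
        memcpy(dst + frame_bytes, src, frame_bytes); /* repeated frame */
    }
}
```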

The formatter 560 changes the output of the frame rate converter 555 to match the output format of the display unit. For example, the formatter 560 may output R, G, and B data signals, and these R, G, and B data signals may be output as low voltage differential signals (LVDS) or mini-LVDS. When the output of the frame rate converter 555 is a 3D video signal, the formatter 560 may configure and output a 3D format according to the output format of the display unit, thereby supporting a 3D service through the display unit.

Meanwhile, a voice processing unit (not shown) in the control unit can perform voice processing of the demultiplexed voice signal. Such a voice processing unit (not shown) may support processing of various audio formats. For example, even when a voice signal is encoded in a format such as MPEG-2, MPEG-4, AAC, HE-AAC, AC-3, or BSAC, a corresponding decoder may be provided to process it.

In addition, the voice processing unit (not shown) in the control unit can process bass, treble, volume control, and the like.

A data processing unit (not shown) in the control unit can perform data processing of the demultiplexed data signal. For example, the data processing unit can decode the demultiplexed data signal even when it is encoded. Here, the encoded data signal may be EPG information including broadcast information such as a start time and an end time of a broadcast program broadcast on each channel.

On the other hand, the above-described digital device is an example according to the present invention, and each component can be integrated, added, or omitted according to specifications of a digital device actually implemented. That is, if necessary, two or more components may be combined into one component, or one component may be divided into two or more components. In addition, the functions performed in each block are intended to illustrate the embodiments of the present invention, and the specific operations and devices thereof do not limit the scope of rights of the present invention.

Meanwhile, the digital device may be a video signal processing device that performs signal processing on an image stored in the device or an input image. Other examples of the video signal processing device include a set-top box (STB), a DVD player, a Blu-ray player, a game device, a computer, and the like.

FIG. 6 is a diagram illustrating input means coupled to the digital device of FIGS. 2 through 4 according to one embodiment of the present invention.

A front panel (not shown) or a control means (input means) provided on the digital device 600 is used to control the digital device 600.

The control means includes, as a user interface device (UID), a remote controller 610, a keyboard 630, a pointing device 620, a touch pad, and the like, which are implemented mainly for the purpose of controlling the digital device 600, and may also include a control means dedicated to an external input device connected to the digital device 600. In addition, the control means may include a mobile device such as a smart phone or a tablet PC that controls the digital device 600 through mode switching or the like, although its original purpose is not to control the digital device 600. In the following description, a pointing device will be described as an example, but the present invention is not limited thereto.

The input means may employ at least one of communication protocols such as Bluetooth, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra Wideband (UWB), ZigBee, and Digital Living Network Alliance (DLNA), as needed, to communicate with the digital device.

The remote controller 610 is a conventional input means provided with various key buttons necessary for controlling the digital device 600.

The pointing device 620 may include a gyro sensor or the like to implement a pointer corresponding to the user's motion on the screen of the digital device 600, and transmits a corresponding control command. Such a pointing device 620 may be called by various names such as a magic remote controller or a magic controller.

Since the digital device 600 is an intelligent integrated digital device that provides various services such as a web browser, an application, and a social network service (SNS), beyond conventional broadcasting alone, it is not easy to control it with the conventional remote controller 610 alone; therefore, the keyboard 630 is implemented similarly to a PC keyboard to complement the control means and to provide input convenience for text and the like.

The control means such as the remote controller 610, the pointing device 620, and the keyboard 630 may be provided with a touch pad as needed, so as to serve more convenient and various control purposes such as text input, pointer movement, and the like.

The digital device described in this specification uses a Web OS as its OS and/or platform. Hereinafter, processing such as a configuration or an algorithm based on the Web OS may be performed in the control unit of the digital device. Here, the control unit is used as a broad concept including the control units of FIGS. 2 to 5 described above. Accordingly, in the following description, hardware and components including related software, firmware, and the like for processing services, applications, content, and the like related to the Web OS in the digital device are referred to as a controller.

Such a Web OS-based platform is intended to enhance development independence and functional expandability by integrating services, applications, and the like based on, for example, a Luna-service bus, and can also increase application development productivity. In addition, multi-tasking can be supported by efficiently utilizing system resources through Web OS processes and resource management.

Meanwhile, the Web OS platform described in this specification can be used not only in fixed devices such as PCs, TVs, and set-top boxes (STBs) but also in mobile devices such as mobile phones, smart phones, tablet PCs, notebooks, and wearable devices.

The structure of software for digital devices has conventionally been a monolithic structure based on a single process and multi-threading, which has had difficulty in solving problems and has been dependent on the market as a closed product. In pursuit of new platform-based development, cost innovation through chip-set replacement, and efficient development of UI applications and external applications, layering and componentization have since been pursued, resulting in a layered structure and an add-on structure for add-ons, single-source products, and open applications. More recently, the software structure has provided a modular architecture of functional units, a Web Open API (Application Programming Interface) for an eco-system, and a Native Open API for a game engine, and accordingly a multi-process structure based on a service structure is being produced.

FIG. 7 is a diagram illustrating a Web OS architecture according to an embodiment of the present invention.

Referring to FIG. 7, the architecture of the Web OS platform will be described below.

The platform can be roughly divided into a kernel, a system library-based Web OS core platform, an application, and a service.

The architecture of the Web OS platform has a layered structure, with the OS at the lowest layer, the system library(s) at the next layer, and the applications at the uppermost layer.

First, the lowest layer includes a Linux kernel as an OS layer, and can include Linux as an OS of the digital device.

Above the OS layer, a BSP (Board Support Package)/HAL (Hardware Abstraction Layer) layer, a Web OS core modules layer, a service layer, a Luna-Service bus layer, and a Native Developer Kit (NDK)/QT layer are provided, and an application layer is provided in the uppermost layer.

Meanwhile, some of the layers of the above-described Web OS layer structure may be omitted, and a plurality of layers may be combined into one layer, or one layer may be divided into a plurality of layers.

The Web OS core module layer may include an LSM (Luna Surface Manager) for managing surface windows and the like, a SAM (System & Application Manager) for managing the execution and execution state of applications, and a WAM (Web Application Manager) for managing web applications and the like based on WebKit.

The LSM manages application windows displayed on the screen. The LSM manages the display hardware (Display HW), provides a buffer for rendering content necessary for applications, and composites the rendering results of a plurality of applications so that they can be output to the screen.

The SAM manages various conditional execution policies of the system and the application.

Meanwhile, the WAM is based on the Enyo Framework, and can be regarded as a basic framework for web applications.

An application uses a service via the Luna-service bus; a new service can be registered on the bus, and an application can find and use the service it needs.

The service layer may include various service level services such as TV service and Web OS service. Meanwhile, the Web OS service may include a media server, a Node.JS, and the like. In particular, the Node.JS service supports, for example, javascript.

A Web OS service can communicate over the bus with a Linux process that implements function logic, and can be largely divided into four parts: a TV process and services migrated from an existing TV to the Web OS, services that are differentiated for each manufacturer, Web OS common services, and Node.js services that are developed in JavaScript and used through Node.js.

The application layer can include all applications that can be supported in a digital device, such as a TV application, a showcase application, a native application, a Web application, and the like.

Applications on the Web OS can be divided into a Web application, a PDK (Palm Development Kit) application, a QML (Qt Meta Language or Qt Modeling Language) application, and the like, according to the implementation method.

The web application is based on the WebKit engine and is executed on the WAM runtime. Such a web application may be based on the Enyo framework, or may be developed and executed based on general HTML5, CSS (Cascading Style Sheets), and JavaScript.

The PDK application includes a native application developed in C/C++ based on a PDK provided for third-party or external developers. The PDK refers to a set of development libraries and tools provided so that a third party, such as a game developer, can develop a native application (C/C++). For example, a PDK application can be used to develop an application whose performance is critical.

The QML application is a Qt-based native application, and includes basic applications provided with the Web OS platform, such as the card view, the home dashboard, and the virtual keyboard. Here, QML is a markup language in script form rather than C++.

Meanwhile, the native application refers to an application that is developed in C/C++, compiled, and executed in binary form. Such a native application has the advantage of fast execution speed.

FIG. 8 is a diagram illustrating an architecture of a Web OS device according to an embodiment of the present invention.

FIG. 8 is a block diagram based on the runtime of the Web OS device, which can be understood with reference to the layered structure of FIG. 7.

The following description will be made with reference to FIGS. 7 and 8.

Referring to FIG. 8, services and applications and WebOS core modules are included on the system OS (Linux) and system libraries, and communication between them can be done via the Luna-Service bus.

Node.js services based on HTML5, CSS, and JavaScript, such as e-mail, contacts, and calendar; Web OS services such as logging, backup, file notify, database (DB), activity manager, system policy, audio daemon (AudioD), update, and media server; TV services such as Electronic Program Guide (EPG), Personal Video Recorder (PVR), data broadcasting, voice recognition, Now on, Notification, and search; CP services such as Auto Content Recognition (ACR), Contents List Browser (CBOX), wfdd, DMR, Remote Application, download, and Sony Philips Digital Interface Format (SPDIF); native applications such as PDK applications and QML applications; and UI-related TV applications and Web applications based on the Enyo framework are processed via the Luna-Service bus through the Web OS core modules such as the SAM, WAM, and LSM described above. Meanwhile, in the above, the TV applications and Web applications are not necessarily Enyo-framework-based or UI-related.

The CBOX can manage the list and metadata of contents of external devices connected to the TV, such as USB, DLNA, and cloud. Meanwhile, the CBOX can output a content listing of various content containers, such as USB, DMS, DVR, and cloud, as an integrated view. In addition, the CBOX can show various types of content listings, such as pictures, music, and video, and can manage the metadata thereof. In addition, the CBOX can output the contents of an attached storage in real time. For example, when a storage device such as a USB device is plugged in, the CBOX should be able to immediately output a content list of the storage device. At this time, a standardized method for processing the content listing may be defined. In addition, the CBOX can accommodate various connection protocols.

The SAM is intended to improve module complexity and to enhance scalability. For example, the existing System Manager handled various functions such as system UI, window management, web application runtime, and UX constraint processing in a single process, so the implementation complexity was large; the SAM separates the main functions to reduce this complexity and clarifies the implementation interface between the functions.

The LSM supports independent development and integration of system UX implementations such as the card view and the launcher, and supports easy response to changes in product requirements. The LSM makes multi-tasking possible by utilizing hardware resources (HW resources) when compositing a plurality of application screens, such as in an app-in-app case, and can provide a window management mechanism.

The LSM supports implementation of a system UI based on QML and improves development productivity. QML UX, based on MVC, can easily construct views for layouts and UI components, and code for handling user input can be developed easily. Meanwhile, the interface between QML and the Web OS component is achieved via a QML extension plug-in, and the graphic operation of an application may be based on a Wayland protocol, a Luna-service call, and the like.

The LSM is an abbreviation of Luna Surface Manager, as described above, and functions as an application window compositor.

The LSM composites independently generated applications, UI components, and the like on the screen and outputs the result. In this regard, when components such as a Recents application, a showcase application, and a launcher application render their respective contents, the LSM, as a compositor, defines the output area, the interworking method, and the like. In other words, the LSM as a compositor handles graphic composition, focus management, and input events. At this time, the LSM receives events, focus, and the like from an input manager, and such an input manager may include an HID such as a remote controller, a mouse and keyboard, a joystick, a game pad, an application remote, and a pen touch.

As such, the LSM supports multiple window models, and owing to the nature of the system UI, it can be performed simultaneously in all applications. In this regard, the LSM can also support various functions such as launcher, Recents, setting, notification, system keyboard, volume UI, search, finger gesture, voice recognition (STT (Speech to Text), TTS (Text to Speech), etc.), pattern gesture (camera, MRCU (Mobile Radio Control Unit)), live menu, ACR (Auto Content Recognition), and the like.

FIG. 9 is a diagram illustrating a graphic composition flow in a Web OS device according to an embodiment of the present invention.

Referring to FIG. 9, the graphic composition processing can be performed through a web application manager 910 responsible for the UI process, a WebKit 920 responsible for the web process, an LSM 930, and a graphic manager (GM) 940.

When web application-based graphic data (or an application) is generated as a UI process in the web application manager 910, the generated graphic data is transferred to the LSM 930 if it is not a full-screen application. Meanwhile, the web application manager 910 receives the application generated by the WebKit 920 in order to share the GPU (Graphic Processing Unit) memory for graphic management between the UI process and the web process, and transfers it to the LSM 930 if it is not a full-screen application as described above. In the case of a full-screen application, the LSM 930 may be bypassed and the application may be directly transmitted to the graphic manager 940.

The LSM 930 transmits the received UI application to a Wayland compositor via a Wayland surface, and the Wayland compositor appropriately processes it and transfers it to the graphic manager. The graphic data transferred from the LSM 930 is passed to the graphic manager compositor via, for example, the LSM GM surface of the graphic manager 940.

On the other hand, the full-screen application is passed directly to the graphics manager 940 without going through the LSM 930, as described above, and this application is processed in the graphics manager compositor via the WAM GM surface.

The graphic manager processes all graphic data in the Web OS device. The graphic manager receives not only the data passed through the LSM GM surface described above and the data passed through the WAM GM surface, but also graphic data passed through a GM surface, such as that of a data broadcasting application or a caption application, and processes all of the received graphic data so as to be appropriately output on the screen. Here, the functions of the GM compositor are the same as or similar to those of the compositor described above.

FIG. 10 is a diagram for explaining a media server according to an embodiment of the present invention, FIG. 11 is a block diagram illustrating a configuration of the media server according to an embodiment of the present invention, and FIG. 12 is a diagram illustrating a relationship between the media server and a TV service according to an embodiment of the present invention.

The media server supports the execution of various multimedia in the digital device and manages the necessary resources. The media server can efficiently use the hardware resources required for media playback. For example, the media server requires audio/video hardware resources for multimedia execution, and can manage the resource usage status so that the resources are utilized efficiently. In general, a fixed device having a larger screen than a mobile device requires more hardware resources when executing multimedia, and requires fast encoding/decoding and transmission of a large amount of data. Meanwhile, in addition to streaming and file-based playback, the media server must be able to handle tasks such as broadcasting, recording and tuning, recording simultaneously with viewing, and simultaneously displaying the sender and recipient screens during a video call. However, since hardware resources such as an encoder, a decoder, a tuner, and a display engine are limited, it is difficult to execute a plurality of tasks at the same time, and, for example, the usable scenarios may therefore be restricted or processed in a limited manner.

The media server can enhance system stability because, for example, a pipeline in which an error occurs during media playback can be removed and regenerated on a per-pipeline basis, so that other media playback is not affected even when such an error occurs. Such a pipeline is a chain connecting unit functions such as decoding, analysis, and output when media playback is requested, and the necessary unit functions may vary depending on the media type and the like.

The media server may also have extensibility; for example, a new type of pipeline can be added without affecting the existing implementation. As an example, the media server may accommodate a camera pipeline, a video conference (Skype) pipeline, a third-party pipeline, and the like.

The media server can process general media playback and TV task execution as separate services, because the interface of the TV service is different from that of media playback. For example, the media server may support operations such as 'setchannel', 'channelup', 'channeldown', 'channeltuning', and 'recordstart' in relation to the TV service, and may support operations such as 'play', 'pause', and 'stop' in relation to general media playback; since the two support different operations from each other, they can be processed as separate services.

The media server can control or integrally manage the resource management function. Allocation and reclamation of hardware resources in the device are performed integrally in the media server; in particular, the TV service process transfers the currently running tasks and the resource allocation status to the media server. The media server secures resources and generates a pipeline each time media is executed, and, based on the resource status occupied by each pipeline, permits execution according to the priority (e.g., policy) of the media execution request or requests resource reclamation from other pipelines. Here, the predefined execution priority and the resource information necessary for a specific request are managed by a policy manager, and the resource manager can communicate with the policy manager to process resource allocation, reclamation, and the like.
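The priority-based allocation and reclamation described in the preceding paragraph can be pictured with a short sketch. This is a hedged illustration only: the structure names, the fixed decoder count, and the reclamation rule below are assumptions made for illustration, not the actual interfaces of the resource manager or policy manager.

```c
#include <stdbool.h>
#include <stddef.h>

#define MAX_PIPELINES 8

typedef struct {
    int  id;
    int  priority;   /* higher value = more important, per the policy */
    bool active;
} pipeline_t;

static pipeline_t pipelines[MAX_PIPELINES];
static int free_decoders = 2;    /* assumed number of hardware decoders */

/* Try to allocate a decoder for a new request; if none are free,
 * reclaim one from the lowest-priority active pipeline that the
 * priority comparison allows, otherwise deny the request. */
bool allocate_decoder(pipeline_t *req)
{
    if (free_decoders > 0) {
        free_decoders--;
        req->active = true;
        return true;
    }

    pipeline_t *victim = NULL;
    for (int i = 0; i < MAX_PIPELINES; i++) {
        if (pipelines[i].active &&
            pipelines[i].priority < req->priority &&
            (victim == NULL || pipelines[i].priority < victim->priority))
            victim = &pipelines[i];
    }
    if (victim == NULL)
        return false;             /* request denied by the policy */

    victim->active = false;       /* resource reclaimed from victim */
    req->active = true;
    return true;
}
```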

The media server may hold an identifier (ID) for every playback-related operation. For example, the media server may issue a command by indicating a particular pipeline based on the identifier. When playback of two or more media is requested, the media server may separate them into respective pipelines.

The media server may be responsible for playback of the HTML 5 standard media.

In addition, the media server may follow the TV restructuring scope regarding whether the TV pipeline is handled as a separate service process. The media server can be designed regardless of the TV restructuring scope; however, if the TV is not a separate service process, the entire TV may need to be re-executed when a problem occurs in a specific task.

The media server is also referred to as uMS, that is, a micro media server. Here, the media player is a media client, which may mean, for example, a WebKit for an HTML5 video tag, a camera, a TV, Skype, a second screen, and the like.

In the media server, management of micro resources such as a resource manager, a policy manager, and the like is a core function. In this regard, the media server also controls the playback control role for the web standard media content. In this regard, the media server may also manage pipeline controller resources.

Such a media server supports, for example, extensibility, reliability, efficient resource usage, and the like.

In other words, the uMS, that is, the media server, manages and controls overall the use of resources, such as audio/video hardware, for appropriate processing in the Web OS device, for example, for a cloud game, an MVPD (pay service), a camera preview, a second screen, Skype, and the like, thereby enabling efficient use. Meanwhile, each resource uses, for example, a pipeline, and the media server can manage and control overall the generation, deletion, and use of pipelines for resource management.

Here, a pipeline may be generated when media related to a task starts a series of operations, such as parsing of a request, decoding of a stream, and video output. For example, with respect to a TV service or application, watching, recording, channel tuning, and the like are each handled individually, with their resource usage controlled through a pipeline generated according to the corresponding request.

The processing structure and the like of the media server will be described in more detail with reference to FIG.

Referring to FIG. 10, an application or service is connected to the media server 1020 via the Luna-service bus 1010, and the media server 1020 is in turn connected to, and manages, the pipelines generated via the Luna-service bus 1010.

The application or service can have various clients depending on its characteristics and can exchange data with the media server 1020 or the pipeline through it.

The client includes, for example, a uMedia client (web kit) and a RM (resource manager) client (C / C ++) for connection with the media server 1020.

The application including the uMedia client is connected to the media server 1020, as described above. More specifically, the uMedia client corresponds to, for example, a video object to be described later, and such a client uses the media server 1020 for the operation of video according to a request or the like.

Here, the video operation relates to a video state, and may include state data regarding loading, unloading, play (playback or reproduce), pause, stop, and the like. Each operation or state of the video can be processed through individual pipeline generation. Accordingly, the uMedia client transmits the state data related to the video operation to the pipeline manager 1022 in the media server.

The pipeline manager 1022 obtains information on the resources of the current device through data communication with the resource manager 1024, and requests resource allocation corresponding to the state data of the uMedia client. At this time, the pipeline manager 1022 or the resource manager 1024 controls resource allocation through data communication with the policy manager 1026 when necessary in connection with the resource allocation. For example, when the resource manager 1024 has no resources to allocate in response to a request of the pipeline manager 1022, appropriate resource allocation or the like may be performed according to a priority comparison by the policy manager 1026.

Meanwhile, the pipeline manager 1022 requests the media pipeline controller 1028 to generate a pipeline for the operation requested by the uMedia client, with respect to the resources allocated according to the resource allocation of the resource manager 1024.

The media pipeline controller 1028 generates the necessary pipelines under the control of the pipeline manager 1022. The generated pipelines may include not only a media pipeline and a camera pipeline, as shown, but also pipelines related to playback, pause, suspension, and the like. The pipelines may also include pipelines for HTML5, Web CP, smartshare playback, thumbnail extraction, NDK, Cinema, MHEG (Multimedia and Hypermedia Information Coding Experts Group), and the like.

In addition, the pipeline may include, for example, a service-based pipeline (its own pipeline) and a URI-based pipeline (media pipeline).

Referring to FIG. 10, an application or service including an RM client may not be directly connected to the media server 1020. This is because the application or service may process the media directly by itself. In other words, when the application or service processes the media directly, it may not pass through the media server. However, in this case, a uMS connector is needed to manage resources for pipeline generation and use. Meanwhile, when a resource management request for direct media processing of the application or service is received, the uMS connector communicates with the media server 1020 including the resource manager 1024. To this end, the media server 1020 also needs to have a uMS connector.

Accordingly, the application or service can respond to the request of the RM client by receiving resource management from the resource manager 1024 through the uMS connector. Such an RM client can handle services such as native CP, TV service, second screen, Flash player, YouTube Media Source Extensions (MSE), cloud gaming, and Skype. In this case, as described above, the resource manager 1024 can appropriately manage the resources through data communication with the policy manager 1026 when necessary for resource management.

Meanwhile, the URI-based pipeline is performed through the media server 1020, rather than processing the media directly as in the case of the RM client described above. Such URI-based pipelines may include a player factory, a GStreamer, a streaming plug-in, a DRM (Digital Rights Management) plug-in pipeline, and the like.

On the other hand, the interface method between application and media services may be as follows.

First, there is a method of using a service in a web application. This is a method of making a Luna call using the Palm Service Bridge (PSB), or a method of using Cordova, in which the display is extended with a video tag. In addition, there may be a method of using the HTML5 standard for a video tag or media element.

Second, there is a method of using a service in the PDK.

Alternatively, there is a method of using the service in an existing CP. This can be used to extend an existing platform plug-in based on Luna for backward compatibility.

Finally, there is a method of interfacing in a non-Web OS environment. In this case, the Luna bus can be called directly to interface.

Seamless change is handled by a separate module (e.g., TVWIN), which is a process for displaying a TV on the screen first, without the Web OS, before or during Web OS boot. Since the boot time of the Web OS is long, this module is used to provide the basic functions of the TV service first, for a quick response to the user's power-on request. In addition, the module is a part of the TV service process and supports seamless change providing quick boot and basic TV functions, as well as a factory mode. The module may also serve to switch from the non-Web OS mode to the Web OS mode.

Referring to FIG. 11, a processing structure of a media server is shown.

In FIG. 11, the solid-line boxes represent process components, and the dashed boxes represent internal processing modules within a process. The solid-line arrows represent inter-process calls, that is, Luna service calls, and the dashed arrows may represent notifications such as register/notify or data flows.

A service, a web application, or a PDK application (hereinafter, 'application') is connected to various service processing components via the Luna-service bus, and is operated or controlled through them.

The data processing path differs depending on the type of application. For example, when the application includes image data related to a camera sensor, the image data is transmitted to the camera processing unit 1130 and processed. At this time, the camera processing unit 1130 processes the image data of the received application, including a gesture module, a face detection module, and the like. Here, when the use of a pipeline or the like is required, the camera processing unit 1130 can generate a pipeline through the media server processing unit 1110 and process the corresponding data.

Alternatively, when the application includes audio data, the audio processing unit (AudioD) 1140 and the audio module (PulseAudio) 1150 can process the audio. For example, the audio processing unit 1140 processes the audio data received from the application and transmits it to the audio module 1150. At this time, the audio processing unit 1140 may include an audio policy manager to determine the processing of the audio data. The processed audio data is handled by the audio module 1150. Meanwhile, the application may notify the audio module 1150 of data related to the audio data processing, and the audio module 1150 can perform the notification in the associated pipeline. The audio module 1150 includes an ALSA (Advanced Linux Sound Architecture).
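For the path in which an application hands already-decoded PCM audio toward the PulseAudio layer, a minimal sketch using the PulseAudio simple API is given below for reference. The stream names, sample format, and the one-second silence buffer are illustrative assumptions; the embodiment does not prescribe this exact API usage.

```c
#include <pulse/simple.h>
#include <pulse/error.h>
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Assumed format: already-decoded 16-bit stereo PCM at 44.1 kHz. */
    pa_sample_spec spec = {
        .format   = PA_SAMPLE_S16LE,
        .rate     = 44100,
        .channels = 2,
    };
    int err;

    pa_simple *s = pa_simple_new(NULL, "notification-app", PA_STREAM_PLAYBACK,
                                 NULL, "notification", &spec, NULL, NULL, &err);
    if (!s) {
        fprintf(stderr, "pa_simple_new() failed: %s\n", pa_strerror(err));
        return 1;
    }

    static int16_t silence[44100 * 2];   /* one second of stereo silence */
    if (pa_simple_write(s, silence, sizeof(silence), &err) < 0)
        fprintf(stderr, "pa_simple_write() failed: %s\n", pa_strerror(err));

    pa_simple_drain(s, &err);            /* wait until playback finishes */
    pa_simple_free(s);
    return 0;
}
```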

Alternatively, when the application includes or processes content to which DRM is applied, the content data is transmitted to the DRM service processing unit 1160, and the DRM service processing unit 1160 generates a DRM instance to process the DRM-applied content data. Meanwhile, the DRM service processing unit 1160 may be connected to a DRM pipeline in the media pipeline through the Luna-service bus in order to process the DRM-applied content data.

Hereinafter, processing in the case where the application is media data or TV service data (e.g., broadcast data) will be described.

FIG. 12 shows only the media server processing unit and the TV service processing unit in FIG. 11 described above in more detail.

Therefore, the following description will be made with reference to FIGS. 11 and 12.

First, when the application includes TV service data, it is processed in the TV service processing unit 1120/1220.

The TV service processing unit 1120 includes at least one of a DVR/channel manager, a broadcasting module, a TV pipeline manager, a TV resource manager, a data broadcasting module, an audio setting module, a path manager, and the like. Meanwhile, in FIG. 12, the TV service processing unit 1220 may include a TV broadcast handler, a TV broadcast interface, a service processing unit, TV middleware (TV MW), a path manager, and a BSP (e.g., NetCast). Here, the service processing unit may be a module including, for example, a TV pipeline manager, a TV resource manager, a TV policy manager, a USM connector, and the like.

In this specification, the TV service processing unit may have the configuration of FIG. 11 or FIG. 12, or a combination of both, in which some components are omitted or other components not shown are added.

The TV service processing unit 1120/1220 transmits DVR- or channel-related data to the DVR/channel manager based on the attribute or type of the TV service data received from the application, and then generates and processes a TV pipeline through the TV pipeline manager. Meanwhile, when the attribute or type of the TV service data is broadcast content data, the TV service processing unit 1120 generates and processes a TV pipeline through the TV pipeline manager in order to process the corresponding data via the broadcasting module.

Alternatively, a JSON (JavaScript Object Notation) file or a file written in C is processed by the TV broadcast handler and transmitted to the TV pipeline manager through the TV broadcast interface to generate and process a TV pipeline. In this case, the TV broadcast interface may transmit the data or file that has passed through the TV broadcast handler to the TV pipeline manager based on the TV service policy, and the data or file may be referred to in pipeline generation.

Meanwhile, the TV pipeline manager can be controlled by the TV resource manager in generating one or more pipelines in response to a TV pipeline generation request from a processing module, a manager, or the like in the TV service. Meanwhile, the TV resource manager can be controlled by the TV policy manager in order to request the status and allocation of the resources allocated for the TV service according to the TV pipeline generation request of the TV pipeline manager, and communicates with the media server processing unit 1110/1210 through the uMS connector. The resource manager in the media server processing unit 1110/1210 transfers the status and allocation availability of the resources for the current TV service according to the request of the TV resource manager. For example, if the resource manager in the media server processing unit 1110/1210 confirms that all the resources for the TV service have already been allocated, it can notify the TV resource manager that all the resources are currently allocated. At this time, together with the notification, the resource manager in the media server processing unit may remove a predetermined TV pipeline, according to a priority or a predetermined criterion, from among the TV pipelines pre-allocated for the TV service, and may request or allow generation of a TV pipeline for the requested TV service. Alternatively, the TV resource manager may appropriately remove, add, or control TV pipelines according to the status report of the resource manager in the media server processing unit 1110/1210.

Meanwhile, the BSP supports, for example, backward compatibility with existing digital devices.

The TV pipelines generated in this way can operate appropriately under the control of the path manager during processing. The path manager can determine and control the processing path or process of the pipelines by considering not only the TV pipelines but also the operation of the pipelines generated by the media server processing unit 1110/1210.

Next, when the application includes media data rather than TV service data, it is processed by the media server processing unit 1110/1210. Here, the media server processing unit 1110/1210 includes a resource manager, a policy manager, a media pipeline manager, a media pipeline controller, and the like. Meanwhile, the pipelines generated under the control of the media pipeline manager and the media pipeline controller can be of various types, such as a camera preview pipeline, a cloud game pipeline, and a media pipeline. Meanwhile, the media pipeline may include a streaming protocol, an auto/static GStreamer, DRM, and the like, the processing flow of which can be determined under the control of the path manager. The specific processing in the media server processing unit 1110/1210 follows the description of FIG. 10 above and will not be repeated here.

In this specification, the resource manager in the media server processing unit 1110/1210 can perform resource management on, for example, a counter basis.

Hereinafter, various embodiment(s) of a method of processing audio data in a digital device according to an embodiment of the present invention will be described in more detail with reference to the accompanying drawings.

FIG. 13 is a block diagram illustrating a configuration of a digital device according to an embodiment of the present invention. Referring to FIG. 13, a solid-line arrow between components indicates an inter-process call, that is, a Luna service call, and a dashed arrow indicates a notification such as register/notify or a data flow.

The application 1310 may be connected to various service processing configurations via a luna-service bus and may be operated or controlled through a connected service processing configuration.

The application 1310 includes audio data, and may include an application 1311 related to system sounds, an application 1312 related to alerts, an application 1313 related to ringtones, an application 1314 related to notifications, an application 1315 related to media, an application 1316 related to TTS (Text to Speech), an application 1317 related to flash audio (Flash), and the like. However, this is merely an example, and fewer or more applications may be included in the application 1310. It is also assumed that the digital device has implemented the functions necessary for the operation of these applications.

The application 1311 related to system sounds refers to an application related to, for example, a sound output when a specific key provided on the remote control devices 610, 620, and 630 is selected, or a sound of a specific application output to the display unit of the digital device.

The application 1312 related to alerts may be an application associated with a high-priority system sound.

The application 1313 related to ringtones may be an application related to a ringing tone of a call event received through a call application.

The application 1314 related to notifications may be an application related to a notification sound other than the alert sound, or a notification sound for notifying the occurrence of a specific event.

The application 1316 related to TTS (Text to Speech) is an application related to audio data for outputting, by voice, a guidance message displayed through the display unit of the digital device, and is mainly used when the voice recognition function is activated.

The application 1317 associated with flash audio (Flash) may be an application related to audio data streamed to Adobe Flash.

Depending on the user's selection or the occurrence of a specific event, the TV service processing unit 1330 or the pulse audio module 1340 can receive audio data from a specific application. For example, audio data to be processed through a hardware decoder can be received by the TV service processing unit 1330, and PCM audio data, that is, audio data that is already decoded or does not need to be processed through a hardware decoder, can be received by the pulse audio module 1340. Which of the TV service processing unit 1330 and the pulse audio module 1340 receives the audio data from the specific application can be controlled by the audio processing unit (AudioD) 1320. The TV service processing unit 1330 may include a DASS (DSP Audio Sink Server) for hardware control of audio data. The pulse audio module 1340 may include an ALSA (Advanced Linux Sound Architecture), which is an interface for outputting audio data.
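Since the pulse audio module 1340 uses ALSA as its output interface, a minimal ALSA playback sketch is given below for reference. The device name "default", the PCM parameters, and the silence buffer are assumptions made only for illustration and do not represent the actual configuration of the pulse audio module 1340.

```c
#include <alsa/asoundlib.h>
#include <stdint.h>

int main(void)
{
    snd_pcm_t *pcm;
    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return 1;

    /* Assumed format: 16-bit little-endian stereo PCM at 44.1 kHz,
     * interleaved access, 0.5 s of allowed latency. */
    if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           2, 44100, 1, 500000) < 0)
        return 1;

    static int16_t buf[44100 * 2];       /* one second of silence        */
    snd_pcm_writei(pcm, buf, 44100);     /* size is in frames, not bytes */

    snd_pcm_drain(pcm);
    snd_pcm_close(pcm);
    return 0;
}
```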

The TV service processing unit 1330 and the pulse audio module 1340 can notify the audio processing unit 1320 of audio data received from the application 1310. [ The audio processing unit 1320 may control the output of the audio data by controlling the TV service processing unit 1330 or the pulse audio module 1340 by applying a policy related to the audio data according to the notification.

The policy related to the audio data is used to determine which type of audio data should be preferentially output, based on the priority of each type of audio data, when audio data corresponding to each of a plurality of contents exist at the same time, how the output volume level of each audio data should be controlled, and to which port of the audio output unit 1350 specific audio data should be output. The policy related to the audio data may be stored in advance in a memory (not shown) in the digital device. Meanwhile, the audio processing unit 1320 may control the pulse audio module 1340 so that the audio data received from the application 1311 related to system sounds is output through the audio output unit 1350 without applying the policy.
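One way to picture such a policy is as a small table mapping each audio type to a priority, a duck level applied to other streams, and an output port. The following C sketch is purely illustrative; the audio types, numeric priorities, duck percentages, and port assignments are assumptions and do not represent the policy actually stored in the device.

```c
#include <stddef.h>

typedef enum {
    AUDIO_MEDIA, AUDIO_SYSTEM_SOUND, AUDIO_NOTIFICATION,
    AUDIO_RINGTONE, AUDIO_ALERT, AUDIO_TTS
} audio_type_t;

typedef enum { PORT_TV_SPEAKER, PORT_HEADPHONE, PORT_SPDIF } audio_port_t;

typedef struct {
    audio_type_t type;
    int          priority;        /* higher value wins when types coexist  */
    int          duck_others_pct; /* output level forced on other streams  */
    audio_port_t port;
} audio_policy_t;

static const audio_policy_t policy[] = {
    { AUDIO_MEDIA,        10, 100, PORT_TV_SPEAKER },
    { AUDIO_NOTIFICATION, 50,  30, PORT_TV_SPEAKER }, /* duck media to 30% */
    { AUDIO_TTS,          70,   0, PORT_TV_SPEAKER }, /* mute media        */
    { AUDIO_ALERT,        90, 100, PORT_TV_SPEAKER },
};

/* Pick the entry with the highest priority among currently active audio
 * types; the audio processing unit would then instruct the pulse audio
 * module according to the chosen entry. */
const audio_policy_t *resolve(const audio_type_t *active, int n)
{
    const audio_policy_t *best = NULL;
    for (int i = 0; i < n; i++)
        for (size_t j = 0; j < sizeof(policy) / sizeof(policy[0]); j++)
            if (policy[j].type == active[i] &&
                (best == NULL || policy[j].priority > best->priority))
                best = &policy[j];
    return best;
}
```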

The audio processing unit 1320 can adjust the input volume level and/or the output volume level of specific audio data. The audio processing unit 1320 may control the input volume level by controlling the input source of the specific audio data, and may adjust the output volume level output through the audio output unit 1350 while maintaining the input volume level.
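The distinction between the input volume level and the output volume level can be illustrated by scaling the PCM samples of one source before mixing, while the volume of the sink (e.g., the TV speaker) is set separately. The following fragment is a minimal sketch under that assumption; the function name and the percent-based gain parameter are illustrative.

```c
#include <stdint.h>

/* Scale one source's samples by an input volume level (in percent)
 * before the streams are mixed; the output (sink) volume is assumed
 * to be handled elsewhere and is unaffected by this step. */
void apply_input_volume(int16_t *samples, int count, int level_pct)
{
    for (int i = 0; i < count; i++) {
        int32_t v = (int32_t)samples[i] * level_pct / 100;
        if (v >  32767) v =  32767;      /* clamp to the 16-bit range */
        if (v < -32768) v = -32768;
        samples[i] = (int16_t)v;
    }
}
```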

The audio output unit 1350 may include a plurality of ports capable of outputting audio data in addition to a port 1351 connected to a TV speaker (internal speaker). For example, the audio output unit 1350 may include the port 1351 connected to the TV speaker, a port 1352 connected to an external speaker, a port 1353 connected to a headphone, an optical output port (SPDIF) 1354, a port 1355 connected to another output device, and the like. The audio output unit 1350 outputs the audio data received from the TV service processing unit 1330 and/or the pulse audio module 1340.

Hereinafter, with reference to FIG. 14 to FIG. 21, examples of policies related to audio data applied in the audio processing unit 1320 will be described for each scenario.

FIG. 14 is a diagram for explaining a method of activating a voice recognition function in a digital device according to an embodiment of the present invention. However, the method of activating the voice recognition function shown in FIG. 14 is merely an example, and the present invention is not limited thereto.

Referring to FIG. 14(a), the user can activate the voice recognition function by using the remote controller 1420 to select a preset area of the display unit 1410 of the display device 1400, or by selecting an icon of a specific application output to the display unit of the display device 1400.

Referring to FIG. 14B, the user can activate the voice recognition function by selecting a hot key corresponding to the voice recognition function provided in the remote controller 1420. The user can release the pressing of the hotkey after a predetermined word or sentence is uttered while the hotkey is pressed.

Referring to FIG. 14C, the user can activate the voice recognition function by pressing a specific button corresponding to the voice recognition function provided in the headphone 1430 paired with the display device 1400. The user can release the pressing of the specific button after a predetermined word or sentence is uttered while the specific button is pressed.

FIG. 15 is a diagram for explaining an example of the operation of the audio processing unit when the voice recognition function is activated in the digital device according to an embodiment of the present invention.

Video data corresponding to predetermined content is output to the display unit 1510 of the digital device 1500, and first audio data corresponding to the content is output through the TV speaker of the digital device 1500. In one example, it is assumed that the content corresponds to a live broadcast signal.

When the voice recognition function is activated by the user, a voice recognition application may be executed, and a first GUI 1530 corresponding to the execution screen of the voice recognition application may be output to the display unit 1510. When the voice recognition function is activated, the pulse audio module 1340 can receive, from the notification application 1315, second audio data indicating that voice recording for voice recognition has begun. The pulse audio module 1340 can notify the audio processing unit 1320 of the reception of the second audio data.

When the audio data corresponding to the broadcast signal and the audio data related to voice recognition coexist, the audio processing unit 1320 can adjust the output of the first and second audio data based on the related policy. For example, when the audio data related to the voice recognition function has a higher priority than the audio data corresponding to the broadcast signal, the audio processing unit 1320 controls the pulse audio module 1340 so that the output volume level of the second audio data corresponds to the volume level set for the TV speaker, and controls the TV service processing unit 1330 so that the input volume level of the first audio data is reduced to a preset level based on the volume level set for the TV speaker. According to an embodiment, when the first audio data is also processed through the pulse audio module 1340, the audio processing unit 1320 may control the pulse audio module 1340 in adjusting the output of the first audio data. In a state in which the output volume level of the first audio data and the output volume level of the second audio data are adjusted, the first audio data and the second audio data may be mixed by a mixer (not shown) and output simultaneously.

Accordingly, the output volume level of the first audio data output through the TV speaker and the output volume level of the second audio data may be different from each other. That is, when the voice recognition function is activated, the output volume of the second audio data, which indicates that voice recording for voice recognition has started, is made larger than the output volume of the first audio data corresponding to the broadcast signal, so that the user can recognize that the voice recognition function has been activated and that a predetermined word or sentence should now be uttered.
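The behavior described above, in which the broadcast audio is reduced while the voice-recognition prompt is mixed in at the speaker volume, can be sketched as a simple duck-and-mix step. The function name and the duck percentage are assumptions for illustration; the mixer (not shown) is not limited to this form.

```c
#include <stdint.h>

/* Duck the broadcast stream (first audio data) to duck_pct of its level,
 * add the prompt stream (second audio data) at full level, and clamp the
 * mixed result to the 16-bit sample range. */
void mix_with_ducking(const int16_t *broadcast, const int16_t *prompt,
                      int16_t *out, int count, int duck_pct)
{
    for (int i = 0; i < count; i++) {
        int32_t b = (int32_t)broadcast[i] * duck_pct / 100;
        int32_t v = b + prompt[i];
        if (v >  32767) v =  32767;
        if (v < -32768) v = -32768;
        out[i] = (int16_t)v;
    }
}
```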

When the second audio data is output through the TV speaker, the user utters a desired word or sentence. When the user's utterance is completed, the pulse audio module 1340 can receive, from the notification application 1315, third audio data indicating that the voice recording is completed. The audio processing unit 1320 can process the third audio data in the same manner as the second audio data in relation to the first audio data.

The audio processing unit 1320 can control the output volume level of the first audio data to remain lower than its original output volume level while the user is uttering a desired word or sentence.

According to an embodiment, when the voice recognition function is activated, the output level of the first audio data corresponding to the broadcast signal may be set to zero. For example, the audio processing unit 1320 may control the input source corresponding to the first audio data so that the input volume level is adjusted to zero.

According to an embodiment, when the pulse audio module 1340 receives specific audio data from the TTS application 1316 in a state in which the voice recognition function is activated, the audio processing unit 1320 may control the pulse audio module 1340 so that the input volume level of the first audio data is adjusted to zero and only the specific audio data related to the TTS application is output through the TV speaker. Accordingly, the TV speaker can output only the audio data related to the TTS application without outputting the first audio data. When the reception of the audio data from the TTS application 1316 is completed, the audio processing unit 1320 may control the pulse audio module 1340 so that the first audio data is output again at the volume level set for the TV speaker.

FIG. 16 is a diagram for explaining another example of the operation of the audio processing unit when the voice recognition function is activated in the digital device according to an embodiment of the present invention. The contents overlapping with those described above with reference to FIG. 15 will not be described again, and the differences will be mainly described below.

Video data 1610 corresponding to predetermined content is output to the display unit 1610 of the digital device 1600, and the first audio data corresponding to the content is output through the TV speaker of the digital device 1600. In this example, it is assumed that the content is a file whose playback can be stopped or paused, such as an MP3 file or a video file.

When the voice recognition function is activated by the user, the voice recognition application may be executed and the first GUI 1630 corresponding to the execution screen of the voice recognition application may be output to the display unit 1610.

When the voice recognition function is activated by the user, playback of the content may be paused while voice recording for voice recognition is in progress. Accordingly, the audio processing unit 1320 may control the pulse audio module 1340 so that only the second audio data and the third audio data are output at the volume level set for the TV speaker; that is, the TV speaker may output only the second audio data and the third audio data without outputting the first audio data.
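
The pause-or-duck decision can be summarized as below; this is a sketch only, with hypothetical callables standing in for the actual pause, volume and playback controls, which the disclosure does not name.

    def on_recording_started(content_is_pausable, pause_playback, duck_broadcast,
                             play_notification):
        """Handle the start of voice recording for voice recognition."""
        if content_is_pausable:
            pause_playback()     # mp3/video file: the first audio data stops entirely
        else:
            duck_broadcast()     # a live broadcast cannot be paused, so it is reduced
        play_notification()      # second/third audio data at the set speaker volume

    if __name__ == "__main__":
        log = []
        on_recording_started(True,
                             pause_playback=lambda: log.append("pause"),
                             duck_broadcast=lambda: log.append("duck"),
                             play_notification=lambda: log.append("cue"))
        print(log)               # ['pause', 'cue']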

FIG. 17 is a diagram for explaining another example of the operation of the audio processing unit when the voice recognition function is activated in the digital device according to an embodiment of the present invention. The parts overlapping with those described above with reference to FIGS. 15 and 16 will not be described again, and the differences will be mainly described below.

Video data 1710 corresponding to predetermined content is output to the display unit 1710 of the digital device 1700, and the first audio data corresponding to the content is output through the headphone 1740 of the digital device 1700. In one example, it is assumed that the content corresponds to a live broadcast signal.

When the voice recognition function is activated by the user, the voice recognition application may be executed and the first GUI 1730 corresponding to the execution screen of the voice recognition application may be output to the display unit 1710.

When the first audio data is being output through the headphone 1740, the first audio data and the second audio data may be mixed by a mixer (not shown) and output simultaneously through the headphone 1740. Also, the first audio data and the third audio data may be mixed by a mixer (not shown) and simultaneously output through the headphone 1740.
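
As a rough illustration of what "mixed by a mixer" implies, the following naive software mixer sums two PCM streams after applying per-stream gains and clips the result. Real mixing happens inside the audio server or hardware; the function name and gain values are assumptions made for the sketch.

    def mix(first, second, first_gain=0.25, second_gain=1.0):
        """Mix two equal-rate PCM streams (samples in -1.0 .. 1.0) into one."""
        length = max(len(first), len(second))
        out = []
        for i in range(length):
            a = first[i] * first_gain if i < len(first) else 0.0
            b = second[i] * second_gain if i < len(second) else 0.0
            out.append(max(-1.0, min(1.0, a + b)))  # clip to the valid sample range
        return out

    if __name__ == "__main__":
        broadcast = [0.4, 0.4, 0.4, 0.4]   # first audio data, ducked by first_gain
        cue = [0.8, -0.8]                  # second audio data (recording-started cue)
        print(mix(broadcast, cue))         # the cue dominates while it lasts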

According to an embodiment, when the content is not a live broadcast signal but a file whose playback can be stopped or paused, playback of the content may be paused when the voice recognition function is activated by the user, and the audio processing unit 1320 may control the pulse audio module 1340 so that only the second audio data and the third audio data are output at the volume level set for the headphone 1740.

FIG. 18 is a view for explaining another example of the operation of the audio processing unit when the voice recognition function is activated in the digital device according to an embodiment of the present invention.

The display unit 1810 of the digital device 1800 may output video data corresponding to two or more pieces of content by virtually dividing the screen. For example, video data 1821 corresponding to first content may be output to a first area of the display unit 1810, and video data 1822 corresponding to second content may be output to a second area of the display unit 1810. In one example, it is assumed that the first content corresponds to a live broadcast signal and the second content corresponds to video/audio data streamed via Adobe Flash. It is also assumed that the audio data corresponding to the first content is output through the headphone 1840 and the audio data corresponding to the second content is output through the TV speaker.

When the user activates the voice recognition function by pressing a specific button related to the voice recognition function provided on the headphone 1840, the voice recognition application is executed and the first GUI 1830 corresponding to the execution screen of the voice recognition application may be output to the first area of the display unit 1810 in which the video data 1821 corresponding to the first content is output. This is because the digital device 1800 is used by a plurality of users and only the user using the headphone 1840 wants to activate the voice recognition function.

The audio processing unit 1320 may output the second audio data as it is without changing the output level of the second audio data.

The audio processing unit 1320 controls the TV service processing unit 1330 such that the input volume level of the first audio data is reduced to a preset level based on the volume level set for the headphone 1840, and the third audio data, which indicates that the voice recording for voice recognition has started, is output at the volume level set for the headphone 1840. That is, in a state where the output volume level of the first audio data and the output volume level of the third audio data are adjusted, the first audio data and the third audio data may be mixed by the mixer and output simultaneously through the headphone 1840. When the audio processing unit 1320 receives, from the notification application 1315, fourth audio data indicating that the voice recording for voice recognition is completed, the fourth audio data may also be processed in the same manner as the third audio data.
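
The per-port behaviour in this split-screen case can be sketched as follows; only streams routed to the port on which voice recognition was triggered are touched. The Route class, stream names and duck ratio are illustrative assumptions, not the disclosed implementation.

    from dataclasses import dataclass

    @dataclass
    class Route:
        stream: str
        port: str       # e.g. "headphone" or "tv_speaker"
        volume: float

    def on_voice_recognition(routes, active_port, port_volume, duck_ratio=0.2):
        """Duck content on the active port and play the cue there at its set volume."""
        for r in routes:
            if r.port != active_port:
                continue                             # other users' audio is untouched
            if r.stream == "recognition_cue":
                r.volume = port_volume               # cue at the level set for that port
            else:
                r.volume = port_volume * duck_ratio  # duck content sharing that port

    if __name__ == "__main__":
        routes = [Route("live_broadcast", "headphone", 0.7),
                  Route("flash_stream", "tv_speaker", 0.8),
                  Route("recognition_cue", "headphone", 0.0)]
        on_voice_recognition(routes, active_port="headphone", port_volume=0.7)
        print(routes)  # the headphone broadcast is ducked; the TV-speaker stream is unchanged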

According to an embodiment, when the first content is not a live broadcast signal but a file whose playback can be stopped or paused, if the voice recognition function is activated by the user, playback of the first content may be paused, and the audio processing unit 1320 may control the pulse audio module 1340 such that only the third audio data and the fourth audio data are output at the volume level set for the headphone 1840.

FIG. 19 is a view for explaining an example of the operation of the audio processing unit when an event related to the ring tone application occurs in the digital device according to an embodiment of the present invention.

Video data 1920 corresponding to predetermined content is output to the display unit 1910 of the digital device 1900, and the first audio data corresponding to the content is output through the TV speaker of the digital device 1900. In one example, it is assumed that the content corresponds to a live broadcast signal.

For example, when a telephone event is generated through the call application, the call application may be executed and a second GUI 1930 corresponding to the execution screen of the call application may be output to the display unit 1910. The second GUI 1930 may include a message indicating that a telephone event has occurred and menus for answering or declining the call. The pulse audio module 1340 may then receive, from the ring tone application 1313, second audio data corresponding to a telephone ring tone. The pulse audio module 1340 may notify the audio processing unit 1320 of the reception of the second audio data.

The audio processing unit 1320 may adjust the output of the first and second audio data based on the associated policy when the audio data corresponding to the broadcast signal and the audio data related to the ring tone application coexist. For example, when the audio data related to the ring tone application has a higher priority than the audio data corresponding to the broadcast signal, the audio processing unit 1320 sets the output volume level of the second audio data to the volume level set for the TV speaker, and controls the TV service processing unit 1330 such that the input volume level of the first audio data is reduced to a predetermined level based on the volume level set for the TV speaker. Alternatively, when the audio data corresponding to the broadcast signal and the audio data related to the ring tone application have the same priority, the first and second audio data may be mixed and output at the volume level set for the TV speaker.
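
The duck-or-mix decision for the ring tone can be reduced to a small priority lookup, sketched below. The priority values and type names are assumptions for illustration; the actual policy values are not specified in the disclosure.

    PRIORITY = {"broadcast": 1, "ring_tone": 3}   # illustrative values only

    def decide(current_type, incoming_type):
        """Return 'duck' when the incoming audio outranks the current one, else 'mix'."""
        if PRIORITY[incoming_type] > PRIORITY[current_type]:
            return "duck"   # lower the broadcast input, play the ring tone at set volume
        return "mix"        # same priority: mix both at the volume set for the TV speaker

    if __name__ == "__main__":
        print(decide("broadcast", "ring_tone"))   # 'duck'
        print(decide("ring_tone", "ring_tone"))   # 'mix'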

According to an embodiment, when the user selects the answer menu 1931 in the second GUI 1930, the audio processing unit 1320 may control the input source corresponding to the first audio data so that its input volume level is adjusted to zero.

In addition, according to an embodiment, when the content is not a live broadcast signal but a file whose playback can be stopped or paused, playback of the content may be paused when the audio data related to the ring tone application occurs.

FIG. 20 is a view for explaining an example of the operation of the audio processing unit when an event related to the alert notification application occurs in the digital device according to an embodiment of the present invention.

The display unit 2010 of the digital device 2000 displays a screen in which predetermined content is stopped or paused, or a screen unrelated to content that includes audio data. A headphone 2030 is connected to a specific port included in the audio output unit 1350 of the digital device 2000. It is assumed that no audio data is currently being output through the headphone 2030 or the TV speaker.

For example, when a specific event related to the alert notification application occurs, the alert notification application may be executed and a third GUI 2020 corresponding to the execution screen of the alert notification application may be output to the display unit 2010. The third GUI 2020 may include a message describing the contents of the specific event. The pulse audio module 1340 may receive, from the alert notification application 1312, first audio data corresponding to an alert sound. The pulse audio module 1340 may notify the audio processing unit 1320 of the reception of the first audio data.

Based on the policy related to the alert notification application, when the headphone 2030 is connected to the port of the audio output unit 1350 and no specific content including audio data is currently being played back, the audio processing unit 1320 may control the pulse audio module 1340 so that the first audio data is output simultaneously through the TV speaker and the headphone 2030. In this case, the first audio data may be output through the TV speaker according to the volume level set for the TV speaker, and output through the headphone 2030 according to the volume level set for the headphone 2030.
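
When nothing else is playing, the alert is simply duplicated to every connected output port at that port's own volume setting, roughly as sketched below; the OutputPort class and port names are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class OutputPort:
        name: str
        set_volume: float
        connected: bool

    def route_alert(ports, nothing_playing):
        """Return {port name: playback volume} for the alert sound."""
        if not nothing_playing:
            return {}    # handled by a different policy branch (see FIG. 21)
        return {p.name: p.set_volume for p in ports if p.connected}

    if __name__ == "__main__":
        ports = [OutputPort("tv_speaker", 0.8, True),
                 OutputPort("headphone", 0.4, True)]
        print(route_alert(ports, nothing_playing=True))
        # {'tv_speaker': 0.8, 'headphone': 0.4}: heard on both, each at its own set volume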

FIG. 21 is a view for explaining another example of the operation of the audio processing unit when an event related to the alert notification application occurs in the digital device according to an embodiment of the present invention. The portions overlapping with those described above with reference to FIG. 20 will not be described again, and the differences will be mainly described below.

Video data 2120 corresponding to predetermined content is output to the display unit 2110 of the digital device 2100, and the first audio data corresponding to the content is output through the headphone 2140 connected to the digital device 2100.

For example, when a specific event related to the alert notification application occurs, the alert notification application may be executed and a third GUI 2130 corresponding to the execution screen of the alert notification application may be output to the display unit 2110. The third GUI 2130 may include a message describing the contents of the specific event. The pulse audio module 1340 may receive, from the alert notification application 1312, second audio data corresponding to an alert sound. The pulse audio module 1340 may notify the audio processing unit 1320 of the reception of the second audio data.

Based on the policy related to the alert notification application, when the headphone 2140 is connected to the port of the audio output unit 1350 and the first audio data is being output through the headphone 2140, the audio processing unit 1320 may control the pulse audio module 1340 so that the second audio data is output through the headphone 2140. In this case, the second audio data may be output only to the headphone 2140 and not to the TV speaker.

The audio processing unit 1320 may adjust the output of the first and second audio data based on the related policy when audio data related to the alert notification application exists. For example, the audio processing unit 1320 may adjust the output volume level of the second audio data to be greater than the output volume level of the first audio data, or may adjust the output volume level of the second audio data to be equal to the output volume level of the first audio data. According to an embodiment, when the content is a file whose playback can be stopped or paused, playback of the content may be paused. Also, according to an embodiment, the output volume level of the second audio data is kept no greater than the volume level set for the headphone 2140, thereby preventing the user from being startled.
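
The "never louder than the headphone setting" rule can be expressed as a simple clamp, sketched below; the boost factor is an illustrative assumption rather than a value given in the disclosure.

    def alert_volume(content_volume, headphone_set_volume, boost=1.5):
        """Alert louder than the content, but capped at the headphone's set volume."""
        return min(content_volume * boost, headphone_set_volume)

    if __name__ == "__main__":
        print(alert_volume(content_volume=0.5, headphone_set_volume=0.4))   # 0.4 (capped)
        print(alert_volume(content_volume=0.25, headphone_set_volume=0.5))  # 0.375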

As described above, according to the embodiments of the present invention, when the audio data corresponding to a plurality of contents is simultaneously output, the audio data can be controlled according to the user's intention.

The digital device and the content processing method in the digital device disclosed in this specification are not limited to the configurations and methods of the embodiments described above; rather, all or some of the embodiments may be selectively combined so that various modifications can be made.

Meanwhile, the operating method of the digital device disclosed in this specification can be implemented as processor-readable code on a recording medium readable by a processor included in the digital device. The processor-readable recording medium includes all kinds of recording devices in which data that can be read by the processor is stored. Examples of the processor-readable recording medium include ROM (Read Only Memory), RAM (Random Access Memory), CD-ROM, magnetic tape, floppy disks, and optical data storage devices, and the medium may also be implemented in the form of a carrier wave. In addition, the processor-readable recording medium may be distributed over network-connected computer systems so that the processor-readable code can be stored and executed in a distributed fashion.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Further, such modifications should not be understood as separate from the technical idea of the present invention.

201: network interface unit 202: TCP/IP manager
203: service delivery manager 204: SI decoder
205: demultiplexer 206: audio decoder
207: video decoder 208:
209: service control manager 210: service discovery manager
211: SI & metadata database 212: metadata manager
213: service manager 214: UI manager

Claims (15)

A pulse audio module for receiving audio data of a first type from an application and audio data of a second type different from the first type;
An audio processing unit; And
And an audio output unit,
The pulse audio module notifies the audio processing unit of the reception of the first and second types of audio data,
Wherein the audio processing unit controls the pulse audio module to adjust output of the first and second types of audio data based on policies related to the first and second types of audio data,
Wherein the audio output unit outputs at least one of the first and second types of audio data based on the adjustment result of the pulse audio module.
The digital device according to claim 1,
Wherein the policy relates to at least one of a volume level of each of the first and second types of audio data based on a priority of each type of audio data and a port of the audio output unit through which each of the first and second types of audio data is to be output.
The digital device according to claim 1,
Wherein, when the audio data of the first type among the first and second types of audio data corresponds to a TTS (Text To Speech) type, the audio output unit outputs only the audio data of the first type.
The digital device according to claim 3,
Wherein, when the reception of the audio data of the first type is completed, the pulse audio module notifies the audio processing unit that the reception of the audio data of the first type is completed, and the audio output unit outputs the audio data of the second type.
The digital device according to claim 1,
Wherein, when the audio data of the first type is related to a voice recognition function, the audio processing unit controls the pulse audio module to adjust the output of the audio data of the second type based on the volume level set in the audio output unit.
The digital device according to claim 1,
Wherein the first and second types of audio data are decoded audio data or PCM audio data.
The digital device according to claim 1,
Wherein the audio output unit includes a TV speaker and a headset, and
Wherein, when any one of the first and second types of audio data corresponds to an alert notification, audio data corresponding to the alert notification is output through both the TV speaker and the headset.
The digital device according to claim 1,
Wherein the audio processing unit adjusts the output of the first and second types of audio data based on the policy, or adjusts the volume level set in the audio output unit based on the policy.
A pulse audio module for receiving audio data of a first type from an application;
A TV service processing unit for receiving audio data of a second type from an application;
An audio processing unit; And
And an audio output unit,
The pulse audio module notifies the audio processing unit of the reception of the audio data of the first type,
The TV service processing unit notifies the audio processing unit of the reception of the audio data of the second type,
Wherein the audio processing unit controls the pulse audio module and the TV service processing unit to adjust the output of the first and second types of audio data based on policies related to the first and second types of audio data,
Wherein the audio output unit outputs at least one of the first and second types of audio data based on the adjustment result of each of the pulse audio module and the TV service processing unit.
The digital device according to claim 9,
Wherein the TV service processing unit includes a hardware decoder, and
Wherein the audio data of the second type is decoded using the hardware decoder.
The digital device according to claim 9,
Wherein, when the audio data of the first type is related to a voice recognition function, the audio processing unit adjusts the output of the audio data of the second type based on the volume level set in the audio output unit.
The digital device according to claim 11,
Wherein a volume level of the audio data of the first type output through the audio output unit is different from a volume level of the audio data of the second type output through the audio output unit.
In a pulse audio module, receiving audio data of a first type from an application and audio data of a second type that is different from the first type;
In the pulse audio module, notifying the audio processing unit of the reception of the first and second types of audio data;
Controlling, in the audio processing unit, the pulse audio module to adjust the output of the first and second types of audio data based on the policies associated with the first and second types of audio data;
And outputting at least one of the first and second types of audio data through the audio output unit based on the adjustment result.
The method according to claim 13,
Wherein the policy relates to a volume level of each of the first and second types of audio data based on a priority of each type of audio data and to a port of the audio output unit through which each of the first and second types of audio data is to be output.
The method according to claim 13,
Wherein the outputting through the audio output unit comprises:
Outputting only the audio data of the first type when the audio data of the first type among the first and second types of audio data corresponds to a TTS (Text To Speech) type.
KR1020140130810A 2014-02-27 2014-09-30 Digital device and method of controlling thereof KR20150101902A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US15/121,977 US20170078737A1 (en) 2014-02-27 2014-11-25 Digital device and control method therefor
PCT/KR2014/011359 WO2015129992A1 (en) 2014-02-27 2014-11-25 Digital device and control method therefor

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201461945756P 2014-02-27 2014-02-27
US61/945,756 2014-02-27

Publications (1)

Publication Number Publication Date
KR20150101902A true KR20150101902A (en) 2015-09-04

Family

ID=54242920

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020140130810A KR20150101902A (en) 2014-02-27 2014-09-30 Digital device and method of controlling thereof

Country Status (1)

Country Link
KR (1) KR20150101902A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20220086976A (en) * 2020-12-17 2022-06-24 주식회사 엘지유플러스 Settop terminal and operating method thereof


Similar Documents

Publication Publication Date Title
KR101567832B1 (en) Digital device and method for controlling the same
KR102413328B1 (en) Main speaker, sub speaker and system comprising main speaker and sub speaker
KR101632221B1 (en) Digital device and method for processing service thereof
KR20160023089A (en) Digital device and method for controlling the same
KR20160062417A (en) Multimedia device and method for controlling the same
KR20170031370A (en) Mobile terminal and method for controlling the same
KR20160127452A (en) Display device, and controlling method thereof
KR20170028104A (en) Display device and method for controlling the same
KR20150101369A (en) Digital device and method of processing video data thereof
KR20160116910A (en) Digital device and method of processing application data thereof
KR20170024860A (en) Digital device and method for processing data the same
KR20170090102A (en) Digital device and method for controlling the same
KR102396035B1 (en) Digital device and method for processing stt thereof
KR20170087307A (en) Display device and method for controlling the same
KR20170126645A (en) Digital device and controlling method thereof
US20170078737A1 (en) Digital device and control method therefor
KR20170138788A (en) Digital device and controlling method thereof
KR20170092408A (en) Digital device and method for controlling the same
KR20170022612A (en) Display device and method for controlling the same
KR20160048430A (en) Digital device and method of processing data thereof
KR102443319B1 (en) Digital device and method for controlling the same
KR102158698B1 (en) Digital device and method for controlling the same
KR20150101902A (en) Digital device and method of controlling thereof
KR20170059094A (en) Digital device and method for controlling the same
KR20160127438A (en) Display device and method for controlling the same

Legal Events

Date Code Title Description
WITN Withdrawal due to no request for examination