WO2015129992A1 - Digital device and method for controlling the same - Google Patents

Digital device and method for controlling the same

Info

Publication number
WO2015129992A1
WO2015129992A1 (PCT/KR2014/011359)
Authority
WO
WIPO (PCT)
Prior art keywords
audio
audio data
data
type
output
Prior art date
Application number
PCT/KR2014/011359
Other languages
English (en)
Korean (ko)
Inventor
Robert Jagt (야그트로버트)
Suresh Arumugam (아루무감슈레쉬)
Anupam Kaul (카울아누팜)
Steve Winston (원스턴스티브)
Sailesh Rachabathuni (라차바투니사일레쉬)
Original Assignee
LG Electronics Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020140130810A (published as KR20150101902A)
Priority claimed from KR1020140131941A (published as KR20150101904A)
Application filed by LG Electronics Inc.
Priority to US15/121,977 (published as US20170078737A1)
Publication of WO2015129992A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/482 End-user interface for program selection
    • H04N21/4828 End-user interface for program selection for searching program descriptors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42203 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] sound input device, e.g. microphone
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application

Definitions

  • the present invention relates to a digital device and a control method thereof.
  • the present invention has been made to address the above situation, and an object of the present invention is to provide a digital device having an audio processor capable of simultaneously controlling audio data corresponding to a plurality of contents.
  • another object of the present invention is to provide a control method of a digital device capable of controlling the audio data according to the user's intention when audio data corresponding to a plurality of contents must be output simultaneously.
  • another object of the present invention is to allow a user to quickly search the contents of every application currently installed on the device by using a search application.
  • another object of the present invention is to provide a list of at least one content item that matches text information input by the user through the search application, so that the user can easily and quickly use the desired content.
  • another object of the present invention is to omit the intermediate steps and immediately execute the specific application, so that content can be used as soon as the user selects it from the content list obtained by the search application (a deep-link sketch follows below).
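  • as an illustration of this deep-linking behaviour, the following TypeScript sketch launches the owning application directly at the selected content via a deep-link URI; the URI scheme, the SearchResult shape, and the launch callback are illustrative assumptions, not the device's actual launch API.

```typescript
// hypothetical shape of one entry in the search-result list
interface SearchResult { appId: string; contentId: string; title: string }

function deepLinkUri(r: SearchResult): string {
  // e.g. "app://com.example.vod/play?contentId=abc123" (made-up scheme)
  return `app://${r.appId}/play?contentId=${encodeURIComponent(r.contentId)}`;
}

function onResultSelected(r: SearchResult, launch: (uri: string) => void): void {
  // no intermediate app home screen: jump straight to the content
  launch(deepLinkUri(r));
}

// usage with a stand-in launcher
onResultSelected(
  { appId: "com.example.vod", contentId: "abc123", title: "Some Movie" },
  (uri) => console.log(`launching ${uri}`),
);
```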
  • this disclosure describes various embodiments of a digital device and of processing methods in the digital device.
  • in one embodiment, a digital device includes a pulse audio module for receiving, from an application, audio data of a first type and audio data of a second type different from the first type; an audio processor; and an audio output unit.
  • the pulse audio module notifies the audio processor of the reception of the first and second types of audio data, and the audio processor, based on a policy associated with the first and second types of audio data,
  • controls the pulse audio module to adjust the output of the audio data, and the audio output unit outputs at least one of the first and second types of audio data based on the result of the adjustment by the pulse audio module.
  • in another embodiment, a digital device includes a pulse audio module for receiving the first type of audio data from an application,
  • a TV service processor for receiving the second type of audio data from the application,
  • an audio processor, and an audio output unit.
  • the pulse audio module notifies the audio processor of the reception of the first type of audio data,
  • and the TV service processor notifies the audio processor of the reception of the second type of audio data.
  • the audio processor controls the pulse audio module and the TV service processor to adjust the output of the first and second types of audio data, respectively, based on a policy associated with the two types.
  • the audio output unit outputs at least one of the first and second types of audio data based on the respective adjustment results of the pulse audio module and the TV service processor.
  • a control method of a digital device comprises: receiving, in the pulse audio module, audio data of a first type and audio data of a second type different from the first type from an application; notifying, by the pulse audio module, an audio processor of the reception of the first and second types of audio data; controlling, by the audio processor, the pulse audio module to adjust the output of the audio data based on a policy associated with the first and second types; and outputting at least one of the first and second types of audio data through an audio output unit based on the adjustment result (a minimal arbitration sketch follows below).
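  • the arbitration described above can be illustrated with the following TypeScript sketch; the audio type names, the action values, and the policy table are illustrative assumptions rather than the device's actual policy.

```typescript
type AudioType = "media" | "tv" | "voice" | "ringtone" | "alert";
type Action = "mix" | "duck" | "pause";

// what to do with audio that is already playing when a new type arrives
const policy: Partial<Record<AudioType, Partial<Record<AudioType, Action>>>> = {
  voice:    { media: "pause", tv: "duck" }, // voice recognition takes priority
  ringtone: { media: "duck", tv: "duck" },
  alert:    { media: "mix", tv: "mix" },
};

class AudioProcessor {
  private active = new Set<AudioType>();

  // called by the pulse audio module / TV service processor on reception
  notifyReceived(incoming: AudioType): Array<{ type: AudioType; action: Action }> {
    const adjustments = Array.from(this.active).map((playing) => ({
      type: playing,
      action: policy[incoming]?.[playing] ?? "mix",
    }));
    this.active.add(incoming);
    return adjustments; // the modules apply these to their output streams
  }

  notifyStopped(type: AudioType): void {
    this.active.delete(type);
  }
}

// usage: TV audio is playing when a voice-recognition stream arrives
const processor = new AudioProcessor();
processor.notifyReceived("tv");
console.log(processor.notifyReceived("voice")); // [{ type: "tv", action: "duck" }]
```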
  • according to an embodiment, a display device including an audio processor for simultaneously controlling audio data corresponding to a plurality of contents may be provided.
  • according to another embodiment, a control method of a digital device capable of controlling the audio data according to the user's intention is provided.
  • according to another embodiment, a user can quickly search the contents of every application installed on the device using a search application.
  • FIG. 1 is a diagram schematically illustrating a service system including a digital device according to an embodiment of the present invention.
  • FIG. 2 is a block diagram illustrating a digital device according to an embodiment of the present invention.
  • FIG. 3 is a block diagram illustrating a digital device according to another embodiment of the present invention.
  • FIG. 4 is a block diagram illustrating a digital device according to another embodiment of the present invention.
  • FIG. 5 is a block diagram illustrating a detailed configuration of the controller of FIGS. 2 to 4 according to an embodiment of the present invention.
  • FIG. 6 is a diagram illustrating input means connected to the digital device of FIGS. 2 to 4 according to one embodiment of the present invention.
  • FIG. 7 is a diagram illustrating a Web OS architecture according to an embodiment of the present invention.
  • FIG. 8 is a diagram illustrating the architecture of a Web OS device according to one embodiment of the present invention.
  • FIG. 9 is a diagram illustrating a graphic composition flow in a Web OS device according to one embodiment of the present invention.
  • FIG. 10 is a diagram illustrating a media server according to one embodiment of the present invention.
  • FIG. 11 is a block diagram illustrating a configuration of a media server according to an embodiment of the present invention.
  • FIG. 12 is a diagram illustrating a relationship between a media server and a TV service, according to an exemplary embodiment.
  • FIG. 13 is a block diagram illustrating a method of processing audio data in a digital device according to an embodiment of the present invention.
  • FIG. 14 is a diagram for describing a method for activating a voice recognition function in a digital device according to one embodiment of the present invention.
  • FIG. 15 is a diagram illustrating an example of an operation of an audio processor when a voice recognition function is activated in a digital device according to an embodiment of the present invention.
  • FIG. 16 is a diagram illustrating another example of an operation of an audio processor when a voice recognition function is activated in a digital device according to an embodiment of the present invention.
  • FIG. 17 is a diagram illustrating another example of an operation of an audio processor when a voice recognition function is activated in a digital device according to an embodiment of the present invention.
  • FIG. 18 illustrates another example of an operation of an audio processor when a voice recognition function is activated in a digital device according to an embodiment of the present invention.
  • FIG. 19 illustrates an example of an operation of an audio processor when an event related to a ringtone application occurs in a digital device according to an embodiment of the present invention.
  • FIG. 20 illustrates an example of an operation of an audio processor when an event associated with an alert notification application occurs in a digital device according to an embodiment of the present invention.
  • FIG. 21 illustrates another example of an operation of an audio processor when an event related to an alert notification application occurs in a digital device according to an embodiment of the present invention.
  • FIG. 22 is a block diagram illustrating in detail a configuration module of a digital device according to another embodiment of the present invention.
  • FIG. 23 illustrates a deep linking function supported by a digital device according to an embodiment of the present invention.
  • FIG. 24 is a diagram for explaining an example of receiving text using a search application in a digital device according to one embodiment of the present invention.
  • FIG. 25 illustrates an example in which a digital device receives web application information and content information from a meta server according to an embodiment of the present disclosure.
  • FIG. 26 illustrates an example in which a digital device executes a deep linking function using a list received from a meta server, according to an embodiment of the present invention.
  • FIG. 27 is a diagram for explaining an example of using a search application in a digital device according to one embodiment of the present invention.
  • FIG. 28 is a diagram for explaining an example of processing deep linking data in a digital device according to one embodiment of the present invention.
  • FIGS. 29 and 30 are diagrams for describing an example in which a digital device generates a search list using a snapshot image according to an embodiment of the present invention.
  • FIG. 31 is a flowchart illustrating a control method of a digital device according to an embodiment of the present invention.
  • the term "digital device" described herein includes, for example, any device that performs at least one of transmitting, receiving, processing, and outputting data, content, services, applications, and the like.
  • the digital device may be paired or connected (hereinafter, 'paired') with another digital device, an external server, or the like through a wired/wireless network, and may transmit/receive predetermined data therethrough. If necessary, the data may be appropriately converted before transmission/reception.
  • the digital device includes, for example, a standing device such as a network television (TV), a hybrid broadcast broadband TV (HBBTV), a smart television (TV), an internet protocol television (IPTV), a personal computer (PC), or the like.
  • the digital device also includes mobile devices such as a Personal Digital Assistant (PDA), a smart phone, a tablet PC, a notebook computer, and the like.
  • for convenience of description, a digital TV is illustrated in FIG. 2 and a mobile device in FIG. 3, each of which is described later to aid understanding of the present invention.
  • the digital device described herein may be a configuration having only a panel, or may be a set configuration including, for example, a set-top box (STB), another device, or a system.
  • wired / wireless network refers to a communication network supporting various communication standards or protocols for pairing and / or transmitting and receiving data between digital devices or digital devices and external servers.
  • wired/wireless networks herein include all communication networks currently supported, or to be supported in the future, by the relevant standards, and one or more communication protocols therefor.
  • such wired/wireless networks may be formed by a network for wired connection, such as Universal Serial Bus (USB), Composite Video Banking Sync (CVBS), Component, S-Video (analog), Digital Visual Interface (DVI), High Definition Multimedia Interface (HDMI), RGB, or D-SUB, and a communication standard or protocol therefor, or by a network for wireless connection, such as Bluetooth, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra Wideband (UWB), ZigBee, Digital Living Network Alliance (DLNA), Wireless LAN (WLAN, Wi-Fi), Wireless broadband (Wibro), World Interoperability for Microwave Access (Wimax), High Speed Downlink Packet Access (HSDPA), Long Term Evolution / LTE-Advanced (LTE/LTE-A), or Wi-Fi Direct, and a communication standard or protocol therefor.
  • when referred to herein simply as a digital device, the term may mean a fixed device or a mobile device depending on the context, and includes both unless specifically noted otherwise.
  • the digital device is, for example, an intelligent device that supports a broadcast receiving function, a computer function, at least one external input, and the like, and can support e-mail, web browsing, banking, games, applications, and the like through the above-described wired/wireless network.
  • the digital device may include an interface for supporting at least one input or control means (hereinafter, 'input means') such as a handwriting input device, a touch screen, or a spatial remote controller.
  • the digital device may use a standardized general-purpose operating system (OS); in particular, the digital device described in this specification uses a Web OS. Accordingly, the digital device can add, delete, modify, and update various services or applications on a general-purpose OS kernel or a Linux kernel, through which a more user-friendly environment can be constructed and provided.
  • the above-described digital device may receive and process an external input.
  • here, the external input includes an external input device, that is, any device that is connected to the digital device through a wired/wireless network, transmits/receives data to/from it, and can serve as an input means for the digital device.
  • for example, the external input may be a device connected through a High-Definition Multimedia Interface (HDMI), a game device such as a PlayStation or an Xbox, a smartphone, a tablet PC, a pocket photo printer, or another digital device such as a printing device, a smart TV, or a Blu-ray device.
  • the term 'server' refers to a digital device or system that supplies data to, or receives data from, the above-mentioned digital device (that is, a client); a server may also be referred to as a processor.
  • examples of the server include a portal server providing a web page, web content, or a web service, an advertising server providing advertising data, a content server providing content, an SNS server providing a social network service (SNS), a service server provided by a manufacturer, a server providing a video on demand (VOD) or streaming service, a multichannel video programming distributor (MVPD), and a service server providing a pay service.
  • in addition, where this specification refers only to an application, the term may, depending on the context, include a service as well.
  • FIG. 1 is a diagram schematically illustrating a service system including a digital device according to an embodiment of the present invention.
  • the service system includes a content provider 10, a service provider 20, a network provider 30, and a home network end user (HNED) 40.
  • the HNED 40 comprises, for example, a client 100, i.e., a digital device according to the present invention.
  • the content provider 10 produces and provides various contents. As shown in FIG. 1, the content provider 10 may include a terrestrial broadcast sender, a cable SO (System Operator) or MSO (Multiple SO), a satellite broadcast sender, various Internet broadcast senders, individual content providers, and the like. The content provider 10 may also produce and provide various services or applications in addition to broadcast content.
  • the service provider 20 service-packetizes the content produced by the content provider 10 and provides it to the HNED 40.
  • for example, the service provider 20 may package at least one of the contents produced by a first terrestrial broadcaster, a second terrestrial broadcaster, a cable MSO, a satellite broadcaster, various Internet broadcasters, applications, and the like, and provide the package to the HNED 40.
  • the service provider 20 provides a service to the client 100 in a uni-cast or multi-cast manner.
  • the service provider 20 may transmit data to a plurality of pre-registered clients 100; for this purpose, the Internet Group Management Protocol (IGMP) may be used (a multicast-join sketch follows below).
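  • at the network layer, joining a multicast group is what produces the IGMP membership report; the following hedged Node/TypeScript sketch shows a client joining a group, with the group address and port being made-up examples.

```typescript
import * as dgram from "node:dgram";

const socket = dgram.createSocket({ type: "udp4", reuseAddr: true });

socket.on("message", (chunk, rinfo) => {
  // chunk would carry the multicast service stream (e.g. RTP packets)
  console.log(`received ${chunk.length} bytes from ${rinfo.address}`);
});

socket.bind(5004, () => {
  socket.addMembership("239.1.2.3"); // issues the IGMP join for this group
});
```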
  • the content provider 10 and the service provider 20 described above may be the same entity.
  • that is, the content provider 10 may package its own content as a service and provide it to the HNED 40, thereby also performing the function of the service provider 20, or vice versa.
  • the network provider 30 provides a network for data exchange between the content provider 10 or / and the service provider 20 and the client 100.
  • the client 100 may establish a home network to receive data through the network provider 30, and may also transmit/receive data for various services or applications such as VOD and streaming.
  • the content provider 10 and / or the service provider 20 in the service system may use conditional access or content protection means to protect the transmitted content.
  • in response, the client 100 may use processing means such as a CableCARD (or point of deployment, POD) or a downloadable CAS (DCAS) for the conditional access or content protection.
  • the client 100 may also use a bidirectional service through the network. In that case, the client 100 may itself perform the role or function of a content provider, and the service provider 20 may receive content from it and transmit that content to another client.
  • the content provider 10 and / or the service provider 20 may be a server that provides a service described later herein.
  • such a server may, as necessary, also own or include the network provider 30.
  • the service or service data includes not only the externally received services or applications described above but also internal services or applications; such services or applications may mean service or application data for the Web OS-based client 100.
  • FIG. 2 is a block diagram illustrating a digital device according to an embodiment of the present invention.
  • the digital device described herein corresponds to the client 100 of FIG. 1.
  • the digital device 200 includes a network interface 201, a TCP/IP manager 202, a service delivery manager 203, an SI decoder 204, a demux (demultiplexer) 205, an audio decoder 206, a video decoder 207, a display A/V and OSD module 208, a service control manager 209, a service discovery manager 210, an SI & metadata DB 211, a metadata manager 212, a service manager 213, and a UI manager 214.
  • the network interface unit 201 transmits/receives IP packets (Internet Protocol packets) or IP datagrams (hereinafter, IP packet(s)) through an access network.
  • for example, the network interface unit 201 may receive services, applications, content, and the like from the service provider 20 of FIG. 1 through the network.
  • the TCP/IP manager 202 is involved in packet delivery between source and destination for the IP packets received by, and transmitted from, the digital device 200.
  • the TCP/IP manager 202 classifies the received packet(s) according to the appropriate protocol and outputs the classified packet(s) to the service delivery manager 203, the service discovery manager 210, the service control manager 209, or the metadata manager 212 (a dispatch sketch follows below).
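  • the classify-and-route step can be sketched as follows; the Packet shape and the protocol tags are assumptions made for clarity, not the device's internal structures.

```typescript
type Protocol = "RTP" | "RTCP" | "SAP" | "RTSP" | "HTTP";
interface Packet { protocol: Protocol; payload: Uint8Array }
interface PacketHandler { handle(p: Packet): void }

class TcpIpManager {
  // e.g. RTP/RTCP -> service delivery manager, SAP -> service discovery manager
  constructor(private routes: Partial<Record<Protocol, PacketHandler>>) {}

  dispatch(p: Packet): void {
    this.routes[p.protocol]?.handle(p); // drop packets with no registered handler
  }
}

// usage: route RTP packets to a stand-in service delivery manager
const manager = new TcpIpManager({
  RTP: { handle: (p) => console.log(`deliver ${p.payload.length} bytes`) },
});
manager.dispatch({ protocol: "RTP", payload: new Uint8Array(188) });
```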
  • the service delivery manager 203 is in charge of controlling the received service data.
  • the service delivery manager 203 may use RTP / RTCP when controlling real-time streaming data.
  • the service delivery manager 203 parses the received data packets according to RTP and transmits the parsed packets to the demultiplexer 205, or stores them under the control of the service manager 213.
  • the service delivery manager 203 feeds back the network reception information to a server that provides a service using RTCP.
  • the demultiplexer 205 demultiplexes the received packets into audio, video, SI (System Information) data, and the like, and transmits them to the audio/video decoders 206/207 and the SI decoder 204, respectively.
  • the SI decoder 204 decodes the demultiplexed SI data, that is, service information such as Program Specific Information (PSI), Program and System Information Protocol (PSIP), Digital Video Broadcasting-Service Information (DVB-SI), and Digital Television Terrestrial Multimedia Broadcasting / Coding Mobile Multimedia Broadcasting (DTMB/CMMB) information.
  • the SI decoder 204 may store the decoded service information in the SI & metadata database 211. The stored service information may be read and used by the corresponding configuration, for example, at the request of a user.
  • the audio / video decoder 206/207 decodes each demultiplexed audio data and video data.
  • the decoded audio data and video data are provided to the user through the display unit 208.
  • the application manager may include, for example, the UI manager 214 and the service manager 213 and perform a control function of the digital device 200.
  • the application manager may manage the overall state of the digital device 200, provide a user interface (UI), and manage other managers.
  • the UI manager 214 provides a graphical user interface (GUI)/UI for the user using an OSD (On Screen Display) or the like, and receives key input from the user to perform device operations according to the input. For example, upon receiving a key input related to channel selection from the user, the UI manager 214 transmits the key input signal to the service manager 213.
  • the service manager 213 controls a manager associated with a service such as a service delivery manager 203, a service discovery manager 210, a service control manager 209, and a metadata manager 212.
  • the service manager 213 generates a channel map and controls the channel selection using the generated channel map according to the key input received from the UI manager 214.
  • the service manager 213 receives service information from the SI decoder 204 and sets the audio / video packet identifier (PID) of the selected channel to the demultiplexer 205.
  • the PID set in this way is used in the demultiplexing process described above. That is, the demultiplexer 205 filters (PID or section filtering) the audio data, video data, and SI data using the PID (a PID-filter sketch follows below).
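  • a minimal TypeScript sketch of PID filtering on an MPEG-2 transport stream is shown below; the TS packet layout (188 bytes, sync byte 0x47, 13-bit PID spanning bytes 1-2) follows the MPEG-2 standard, while the channel-map entry is a made-up example.

```typescript
const TS_PACKET_SIZE = 188;

function pidOf(pkt: Uint8Array): number {
  // 13-bit packet identifier: low 5 bits of byte 1 plus all of byte 2
  return ((pkt[1] & 0x1f) << 8) | pkt[2];
}

// example entry, as the service manager might set it for the selected channel
const selected = { videoPid: 0x1011, audioPid: 0x1100 };

function filterByPid(stream: Uint8Array): { video: Uint8Array[]; audio: Uint8Array[] } {
  const out = { video: [] as Uint8Array[], audio: [] as Uint8Array[] };
  for (let i = 0; i + TS_PACKET_SIZE <= stream.length; i += TS_PACKET_SIZE) {
    const pkt = stream.subarray(i, i + TS_PACKET_SIZE);
    if (pkt[0] !== 0x47) continue; // skip data not aligned on a sync byte
    const pid = pidOf(pkt);
    if (pid === selected.videoPid) out.video.push(pkt);
    else if (pid === selected.audioPid) out.audio.push(pkt);
  }
  return out;
}
```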
  • the service discovery manager 210 provides information necessary to select a service provider that provides a service. Upon receiving a signal regarding channel selection from the service manager 213, the service discovery manager 210 searches for a service using the information.
  • the service control manager 209 is responsible for selecting and controlling services. For example, it uses IGMP or RTSP when the user selects a live broadcasting service similar to conventional broadcasting, and uses RTSP when the user selects a service such as VOD.
  • the RTSP protocol may provide a trick mode for real time streaming.
  • the service control manager 209 may initialize and manage a session through the IMS gateway 250 using an IP Multimedia Subsystem (IMS) or a Session Initiation Protocol (SIP).
  • the protocols are one embodiment, and other protocols may be used depending on implementation.
  • the metadata manager 212 manages metadata associated with the service and stores the metadata in the SI & metadata database 211.
  • the SI & metadata database 211 stores service information decoded by the SI decoder 204, metadata managed by the metadata manager 212, and information necessary to select a service provider provided by the service discovery manager 210. do.
  • the SI & metadata database 211 can store set-up data and the like for the system.
  • the SI & metadata database 211 may be implemented using non-volatile memory (NVRAM), flash memory, or the like.
  • the IMS gateway 250 is a gateway that collects functions necessary for accessing an IMS-based IPTV service.
  • FIG. 3 is a block diagram illustrating a digital device according to another embodiment of the present invention.
  • FIG. 3 illustrates a mobile device as another embodiment of the digital device.
  • the mobile device 300 may include a wireless communication unit 310, an A/V input unit 320, a user input unit 330, a sensing unit 340, an output unit 350, a memory 360, an interface unit 370, a controller 380, a power supply unit 390, and the like.
  • the wireless communication unit 310 may include one or more modules that enable wireless communication between the mobile device 300 and the wireless communication system or between the mobile device and the network in which the mobile device is located.
  • the wireless communication unit 310 may include a broadcast receiving module 311, a mobile communication module 312, a wireless internet module 313, a short range communication module 314, a location information module 315, and the like.
  • the broadcast receiving module 311 receives a broadcast signal and / or broadcast related information from an external broadcast management server through a broadcast channel.
  • the broadcast channel may include a satellite channel and a terrestrial channel.
  • the broadcast management server may mean a server that generates and transmits a broadcast signal and / or broadcast related information or a server that receives a previously generated broadcast signal and / or broadcast related information and transmits the same to a terminal.
  • the broadcast signal may include not only a TV broadcast signal, a radio broadcast signal, and a data broadcast signal, but also a broadcast signal having a data broadcast signal combined with a TV broadcast signal or a radio broadcast signal.
  • the broadcast related information may mean information related to a broadcast channel, a broadcast program, or a broadcast service provider.
  • the broadcast related information may also be provided through a mobile communication network. In this case, it may be received by the mobile communication module 312.
  • the broadcast related information may exist in various forms, for example, in the form of an electronic program guide (EPG) or an electronic service guide (ESG).
  • the broadcast receiving module 311 may receive digital broadcast signals using a digital broadcasting system such as ATSC, DVB-T (Digital Video Broadcasting-Terrestrial), DVB-S (Satellite), MediaFLO (Media Forward Link Only), DVB-H (Handheld), or ISDB-T (Integrated Services Digital Broadcast-Terrestrial).
  • the broadcast receiving module 311 may be configured to be suitable for not only the above-described digital broadcasting system but also other broadcasting systems.
  • the broadcast signal and / or broadcast related information received through the broadcast receiving module 311 may be stored in the memory 360.
  • the mobile communication module 312 transmits and receives a radio signal with at least one of a base station, an external terminal, and a server on a mobile communication network.
  • the wireless signal may include various types of data according to transmission and reception of a voice signal, a video call signal, or a text / multimedia message.
  • the wireless internet module 313 may include a module for wireless internet access and may be embedded or external to the mobile device 300.
  • Wireless Internet technologies may include Wireless LAN (Wi-Fi), Wireless Broadband (Wibro), World Interoperability for Microwave Access (Wimax), High Speed Downlink Packet Access (HSDPA), and the like.
  • the short range communication module 314 refers to a module for short range communication.
  • short range communication technologies may include Bluetooth, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra Wideband (UWB), ZigBee, RS-232, RS-485, and the like.
  • the location information module 315 may be a module for acquiring location information of the mobile device 300, and may use a Global Position System (GPS) module as an example.
  • the A/V input unit 320 is for inputting audio and/or video signals, and may include a camera 321 and a microphone 322.
  • the camera 321 processes image frames such as still images or moving images obtained by the image sensor in the video call mode or the imaging mode.
  • the processed image frame may be displayed on the display unit 351.
  • the image frame processed by the camera 321 may be stored in the memory 360 or transmitted to the outside through the wireless communication unit 310. Two or more cameras 321 may be provided depending on the use environment.
  • the microphone 322 receives an external sound signal by a microphone in a call mode, a recording mode, a voice recognition mode, etc., and processes the external sound signal into electrical voice data.
  • the processed voice data may be converted into a form transmittable to the mobile communication base station through the mobile communication module 312 and output in the call mode.
  • the microphone 322 may be implemented with various noise removing algorithms for removing noise generated in the process of receiving an external sound signal.
  • the user input unit 330 generates input data for the user to control the operation of the terminal.
  • the user input unit 330 may include a key pad, a dome switch, a touch pad (constant voltage / capacitance), a jog wheel, a jog switch, and the like.
  • the sensing unit 340 senses the current state of the mobile device 300, such as its open/closed state, its location, the presence or absence of user contact, its orientation, and its acceleration/deceleration, and generates a sensing signal for controlling the operation of the mobile device 300. For example, when the mobile device 300 is moved or tilted, its position or tilt may be sensed. Whether the power supply unit 390 is supplying power and whether the interface unit 370 is coupled to an external device may also be sensed.
  • the sensing unit 340 may include a proximity sensor 341, including one supporting near field communication (NFC).
  • the output unit 350 generates output related to the visual, auditory, or tactile senses, and may include a display unit 351, a sound output module 352, an alarm unit 353, a haptic module 354, and the like.
  • the display unit 351 displays (outputs) information processed by the mobile device 300. For example, when the mobile device is in the call mode, the UI or GUI related to the call is displayed. When the mobile device 300 is in a video call mode or a shooting mode, the mobile device 300 displays a captured image and / or a received image, UI, or GUI.
  • the display unit 351 may include at least one of a liquid crystal display (LCD), a thin film transistor liquid crystal display (TFT-LCD), an organic light-emitting diode (OLED) display, a flexible display, and a three-dimensional (3D) display.
  • Some of these displays can be configured to be transparent or light transmissive so that they can be seen from the outside. This may be referred to as a transparent display.
  • a representative example of the transparent display is the TOLED (Transparent OLED).
  • the rear structure of the display unit 351 may also be configured as a light transmissive structure. With this structure, the user can see the object located behind the terminal body through the area occupied by the display unit 351 of the terminal body.
  • two or more display units 351 may exist.
  • a plurality of display units may be spaced apart or integrally disposed on one surface of the mobile device 300, or may be disposed on different surfaces.
  • when the display unit 351 and a sensor for detecting a touch motion (hereinafter, a touch sensor) form a mutual layer structure (hereinafter, a touch screen), the display unit 351 can be used as an input device in addition to an output device.
  • the touch sensor may have, for example, a form of a touch film, a touch sheet, a touch pad, or the like.
  • the touch sensor may be configured to convert a change in pressure applied to a specific portion of the display unit 351 or capacitance generated at a specific portion of the display unit 351 into an electrical input signal.
  • the touch sensor may be configured to detect not only the position and area of the touch but also the pressure at the touch.
  • when there is a touch input to the touch sensor, the corresponding signal(s) are sent to the touch controller.
  • the touch controller processes the signal (s) and then transmits the corresponding data to the controller 380.
  • the controller 380 may thereby determine which area of the display unit 351 was touched (see the hit-test sketch below).
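  • that final resolution step can be sketched as a simple hit test; the region list is an illustrative assumption.

```typescript
interface TouchPoint { x: number; y: number; pressure: number }
interface Region { name: string; x: number; y: number; w: number; h: number }

// made-up on-screen regions the controller might know about
const regions: Region[] = [
  { name: "volume-slider", x: 0, y: 400, w: 80, h: 320 },
  { name: "channel-list", x: 100, y: 0, w: 500, h: 720 },
];

function hitTest(p: TouchPoint): Region | undefined {
  return regions.find(
    (r) => p.x >= r.x && p.x < r.x + r.w && p.y >= r.y && p.y < r.y + r.h,
  );
}

console.log(hitTest({ x: 40, y: 500, pressure: 0.7 })?.name); // "volume-slider"
```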
  • the proximity sensor 341 may be disposed in an inner region of the mobile device surrounded by the touch screen or near the touch screen.
  • the proximity sensor refers to a sensor that detects the presence or absence of an object approaching a predetermined detection surface or an object present in the vicinity without using a mechanical contact by using an electromagnetic force or infrared rays.
  • Proximity sensors have a longer life and higher utilization than touch sensors.
  • examples of the proximity sensor include a transmissive photoelectric sensor, a direct-reflective photoelectric sensor, a mirror-reflective photoelectric sensor, a high-frequency oscillation proximity sensor, a capacitive proximity sensor, a magnetic proximity sensor, and an infrared proximity sensor.
  • when the touch screen is capacitive, it is configured to detect the proximity of the pointer by the change in the electric field as the pointer approaches.
  • in this case, the touch screen (touch sensor) may be classified as a proximity sensor.
  • the act of bringing the pointer close to the touch screen without contact, so that the pointer is recognized as being located on the touch screen, is referred to as a "proximity touch", and the act of actually bringing the pointer into contact with the touch screen is referred to as a "contact touch."
  • the position of a proximity touch on the touch screen is the position at which the pointer is perpendicular to the touch screen during the proximity touch.
  • the proximity sensor detects a proximity touch and a proximity touch pattern (for example, a proximity touch distance, a proximity touch direction, a proximity touch speed, a proximity touch time, a proximity touch position, and a proximity touch movement state).
  • information corresponding to the sensed proximity touch operation and proximity touch pattern may be output on the touch screen (a simple classification sketch follows below).
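  • the following sketch shows how a proximity touch might be distinguished from a contact touch, and how one pattern element (proximity touch speed) could be derived; the normalized capacitance thresholds are invented for illustration.

```typescript
interface Sample { capacitance: number; x: number; y: number; t: number } // t in ms

const CONTACT_LEVEL = 0.8;   // assumed normalized level at actual contact
const PROXIMITY_LEVEL = 0.2; // assumed level at which a hover registers

function classify(s: Sample): "contact touch" | "proximity touch" | "none" {
  if (s.capacitance >= CONTACT_LEVEL) return "contact touch";
  if (s.capacitance >= PROXIMITY_LEVEL) return "proximity touch";
  return "none";
}

function proximitySpeed(a: Sample, b: Sample): number {
  // one pattern element: hover movement speed in px per ms
  return Math.hypot(b.x - a.x, b.y - a.y) / (b.t - a.t);
}
```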
  • the sound output module 352 may output audio data received from the wireless communication unit 310 or stored in the memory 360 in a call signal reception, a call mode or a recording mode, a voice recognition mode, a broadcast reception mode, and the like.
  • the sound output module 352 may output a sound signal related to a function (eg, a call signal reception sound, a message reception sound, etc.) performed by the mobile device 300.
  • the sound output module 352 may include a receiver, a speaker, a buzzer, and the like.
  • the alarm unit 353 outputs a signal for notifying occurrence of an event of the mobile device 300. Examples of events occurring in the mobile device include call signal reception, message reception, key signal input, and touch input.
  • the alarm unit 353 may output a signal for notifying the occurrence of an event by vibration, in addition to a video signal or an audio signal.
  • the video signal or the audio signal may also be output through the display unit 351 or the sound output module 352, so the display unit 351 and the sound output module 352 may be classified as part of the alarm unit 353.
  • the haptic module 354 generates various tactile effects that a user can feel. Vibration is a representative example of the haptic effect generated by the haptic module 354.
  • the intensity and pattern of vibration generated by the haptic module 354 can be controlled. For example, different vibrations may be synthesized and output or may be sequentially output.
  • besides vibration, the haptic module 354 can generate various other tactile effects, such as a pin array moving vertically against the contacted skin surface, a jetting or suction force of air through a jet or suction port, grazing of the skin surface, contact of an electrode, an electrostatic force, and the effect of reproducing a cold/warm sensation using an element that can absorb or generate heat.
  • the haptic module 354 may not only deliver the haptic effect through direct contact but also be implemented so that the user feels the haptic effect through the muscle sense of a finger or an arm.
  • two or more haptic modules 354 may be provided depending on the configuration of the mobile device 300.
  • the memory 360 may store a program for the operation of the controller 380 and may temporarily store input / output data (for example, a phone book, a message, a still image, a video, etc.).
  • the memory 360 may store data regarding vibration and sound of various patterns output when a touch input on the touch screen is performed.
  • the memory 360 may include at least one type of storage medium among a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (for example, SD or XD memory), RAM (Random Access Memory), SRAM (Static Random Access Memory), ROM (Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), PROM (Programmable Read-Only Memory), magnetic memory, a magnetic disk, and an optical disk.
  • the mobile device 300 may operate in association with web storage that performs a storage function of the memory 360 on the Internet.
  • the interface unit 370 serves as a path to all external devices connected to the mobile device 300.
  • the interface unit 370 receives data from an external device, receives power, transfers the power to each component inside the mobile device 300, or transmits data within the mobile device 300 to the external device.
  • the interface unit 370 may include a wired/wireless headset port, an external charger port, a wired/wireless data port, a memory card port, a port for connecting a device with an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like.
  • the identification module is a chip that stores various kinds of information for authenticating the usage rights of the mobile device 300, and may include a user identity module (UIM), a subscriber identity module (SIM), a universal subscriber identity module (USIM), and the like.
  • a device equipped with an identification module (hereinafter, an 'identification device') may be manufactured in the form of a smart card; the identification device can therefore be connected to the mobile device 300 through a port.
  • when the mobile device 300 is connected to an external cradle, the interface unit 370 may serve as a path through which power from the cradle is supplied to the mobile device 300, or as a path through which various command signals input by the user at the cradle are transmitted to the mobile device. The various command signals or the power input from the cradle may operate as signals for recognizing that the mobile device is correctly mounted in the cradle.
  • the controller 380 typically controls the overall operation of the mobile device 300.
  • the controller 380 performs, for example, related control and processing for voice call, data communication, video call, and the like.
  • the controller 380 may include a multimedia module 381 for multimedia playback.
  • the multimedia module 381 may be implemented in the controller 380 or may be implemented separately from the controller 380.
  • the controller 380 may perform a pattern recognition process for recognizing a writing input or a drawing input performed on the touch screen as characters or images, respectively.
  • the power supply unit 390 receives external power and internal power under the control of the controller 380 and supplies the power required for the operation of each component.
  • the embodiments described herein may be implemented using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, and other electrical units for performing functions. In some cases, such embodiments may be implemented by the controller 380 itself.
  • embodiments such as the procedures and functions described herein may be implemented as separate software modules.
  • Each of the software modules may perform one or more functions and operations described herein.
  • Software code may be implemented in software applications written in a suitable programming language.
  • the software code may be stored in the memory 360 and executed by the controller 380.
  • FIG. 4 is a block diagram illustrating a digital device according to another embodiment of the present invention.
  • the digital device 400 may include a broadcast receiver 405, an external device interface 435, a storage unit 440, a user input interface 450, a controller 470, a display 480, an audio output unit 485, a power supply unit 490, and a photographing unit (not shown).
  • the broadcast receiver 405 may include at least one tuner 410, a demodulator 420, and a network interface unit 430. However, in some cases, the broadcast receiver 405 may include a tuner 410 and a demodulator 420, but may not include the network interface 430, or vice versa.
  • the broadcast receiver 405 may also include a multiplexer to multiplex a signal demodulated by the demodulator 420 via the tuner 410 with a signal received through the network interface 430.
  • in addition, the broadcast receiver 405 may include a demultiplexer to demultiplex the multiplexed signal, the demodulated signal, or the signal that passed through the network interface unit 430.
  • the tuner 410 receives an RF broadcast signal by tuning a channel selected by a user or all previously stored channels among radio frequency (RF) broadcast signals received through an antenna.
  • the tuner 410 also converts the received RF broadcast signal into an intermediate frequency (IF) signal or a baseband signal.
  • if the received RF broadcast signal is a digital broadcast signal, it is converted into a digital IF signal (DIF); an analog broadcast signal is converted into an analog baseband video or audio signal (CVBS/SIF). That is, the tuner 410 can process both digital and analog broadcast signals.
  • the analog baseband video or audio signal CVBS / SIF output from the tuner 410 may be directly input to the controller 470.
  • the tuner 410 may receive RF broadcast signals of a single carrier or multiple carriers. Meanwhile, the tuner 410 may sequentially tune and receive the RF broadcast signals of all the broadcast channels stored through the channel memory function among the RF broadcast signals received through the antenna, and convert them into intermediate-frequency or baseband signals (DIFs) (a scan-loop sketch follows below).
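  • the channel-memory scan can be sketched as follows; the Tuner interface is a hypothetical stand-in for the tuner hardware driver, not an actual API.

```typescript
interface Tuner {
  tune(frequencyHz: number): Uint8Array; // captured RF for that channel
  toDif(rf: Uint8Array): Uint8Array;     // RF -> digital IF / baseband (DIF)
}

function scanStoredChannels(tuner: Tuner, storedHz: number[]): Map<number, Uint8Array> {
  const difByChannel = new Map<number, Uint8Array>();
  for (const f of storedHz) {
    const rf = tuner.tune(f); // sequentially tune each stored channel
    difByChannel.set(f, tuner.toDif(rf));
  }
  return difByChannel;
}
```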
  • the demodulator 420 may receive and demodulate the digital IF signal DIF converted by the tuner 410 and perform channel decoding.
  • for example, the demodulator 420 may include a trellis decoder, a de-interleaver, and a Reed-Solomon decoder, or alternatively a convolutional decoder, a de-interleaver, and a Reed-Solomon decoder.
  • the demodulator 420 may output a stream signal TS after performing demodulation and channel decoding.
  • the stream signal may be a signal multiplexed with a video signal, an audio signal, or a data signal.
  • the stream signal may be an MPEG-2 Transport Stream (TS) multiplexed with an MPEG-2 standard video signal, a Dolby AC-3 standard audio signal, and the like.
  • the stream signal output from the demodulator 420 may be input to the controller 470.
  • the controller 470 may control demultiplexing, image/audio signal processing, and the like, and may control image output through the display 480 and audio output through the audio output unit 485.
  • the external device interface unit 435 provides an interfacing environment between the digital device 400 and various external devices.
  • the external device interface unit 435 may include an A/V input/output unit (not shown) or a wireless communication unit (not shown).
  • the external device interface unit 435 may be connected, via wire or wirelessly, to an external device such as a digital versatile disk (DVD) player, a Blu-ray player, a game device, a camera, a camcorder, a computer (laptop), a tablet PC, a smartphone, a Bluetooth device, or a cloud service.
  • the external device interface unit 435 transmits a signal including data such as an image, video, and audio input through the connected external device to the controller 470 of the digital device.
  • the external device interface unit 435 may also output image, video, and audio data signals processed by the controller 470 to the connected external device.
  • so that the video and audio signals of an external device can be input to the digital device 400, the A/V input/output unit may include a USB terminal, a CVBS (Composite Video Banking Sync) terminal, a component terminal, an S-video terminal (analog), a DVI (Digital Visual Interface) terminal, an HDMI (High Definition Multimedia Interface) terminal, an RGB terminal, a D-SUB terminal, and the like.
  • the wireless communication unit may perform short range wireless communication with another digital device.
  • for example, the digital device 400 may be networked with other digital devices according to a communication protocol such as Bluetooth, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra Wideband (UWB), ZigBee, or Digital Living Network Alliance (DLNA).
  • the external device interface unit 435 may be connected to a set-top box (STB) through at least one of the various terminals described above to perform input/output operations with the set-top box.
  • the external device interface unit 435 may receive an application or an application list from an adjacent external device and transmit it to the controller 470 or the storage unit 440.
  • the network interface unit 430 provides an interface for connecting the digital device 400 to a wired / wireless network including an internet network.
  • for connection with a wired network, the network interface unit 430 may include, for example, an Ethernet terminal; for connection with a wireless network, it may use, for example, the WLAN (Wi-Fi), Wibro (Wireless broadband), Wimax (World Interoperability for Microwave Access), and HSDPA (High Speed Downlink Packet Access) communication standards.
  • the network interface unit 430 may transmit or receive data with another user or another digital device through the connected network or another network linked to the connected network.
  • for example, some of the content data stored in the digital device 400 may be transmitted to a selected user or a selected digital device among other users or other digital devices registered in advance in the digital device 400.
  • the network interface unit 430 may access a predetermined web page through a connected network or another network linked to the connected network. That is, by accessing a predetermined web page through the network, it is possible to send or receive data with the server.
  • content or data provided by a content provider or a network operator may be received. That is, content such as a movie, an advertisement, a game, a VOD, a broadcast signal, and related information provided from a content provider or a network provider may be received through a network.
  • the network interface unit 430 may also select and receive a desired application, from among applications open to the public, through the network.
  • the storage unit 440 may store a program for processing and controlling each signal in the controller 470, or may store a signal-processed video, audio, or data signal.
  • the storage unit 440 may perform a function for temporarily storing an image, audio, or data signal input from the external device interface unit 435 or the network interface unit 430.
  • the storage unit 440 may store information about a predetermined broadcast channel through a channel storage function.
  • the storage unit 440 may store an application or an application list input from the external device interface unit 435 or the network interface unit 430.
  • the storage unit 440 may store various platforms described below.
  • the storage unit 440 may include at least one type of storage medium among, for example, a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (for example, SD or XD memory), RAM, and ROM (EEPROM, etc.).
  • the digital device 400 may reproduce and provide a content file (video file, still image file, music file, document file, application file, etc.) stored in the storage unit 440 to the user.
  • FIG. 4 illustrates an embodiment in which the storage unit 440 is provided separately from the control unit 470, but the present invention is not limited thereto. In other words, the storage unit 440 may be included in the control unit 470.
  • the user input interface unit 450 transmits a signal input by the user to the controller 470 or transmits a signal of the controller 470 to the user.
  • the user input interface unit 450 may receive and process control signals for power on/off, channel selection, screen setting, and the like from the remote control device 500, according to various communication methods such as RF communication or infrared (IR) communication, or may transmit control signals of the controller 470 to the remote control device 500.
  • the user input interface unit 450 may transmit to the controller 470 a control signal input from a local key (not shown), such as a power key, a channel key, a volume key, or a setting key.
  • the user input interface unit 450 may also transmit to the controller 470 a control signal input from a sensing unit (not shown) that senses a user's gesture, or transmit a signal of the controller 470 to the sensing unit (not shown).
  • the sensing unit may include a touch sensor, a voice sensor, a position sensor, an operation sensor, and the like.
  • the controller 470 demultiplexes the stream input through the tuner 410, the demodulator 420, or the external device interface unit 435, or processes the demultiplexed signals, to generate and output signals for video or audio output.
  • the image signal processed by the controller 470 may be input to the display unit 480 and displayed as an image corresponding to the image signal.
  • the image signal processed by the controller 470 may be input to the external output device through the external device interface 435.
• the audio signal processed by the controller 470 may be output as audio through the audio output unit 485.
  • the voice signal processed by the controller 470 may be input to the external output device through the external device interface 435.
  • controller 470 may include a demultiplexer, an image processor, and the like.
  • the controller 470 may control overall operations of the digital device 400.
  • the controller 470 may control the tuner 410 to control tuning of an RF broadcast corresponding to a channel selected by a user or a pre-stored channel.
  • the controller 470 may control the digital device 400 by a user command or an internal program input through the user input interface 450. In particular, it is possible to connect to the network so that the user can download the desired application or application list into the digital device 400.
• the controller 470 controls the tuner 410 so that a signal of a channel selected according to a predetermined channel selection command received through the user input interface 450 is input, and processes the video, audio, or data signal of the selected channel.
  • the controller 470 allows the channel information selected by the user to be output through the display unit 480 or the audio output unit 485 together with the processed video or audio signal.
• according to an external device image playback command received through the user input interface unit 450, the controller 470 may output a video signal or an audio signal, which is provided from an external device, for example, a camera or a camcorder, and input through the external device interface unit 435, through the display unit 480 or the audio output unit 485.
• the controller 470 may control the display unit 480 to display an image, for example, a broadcast image input through the tuner 410, an external input image input through the external device interface unit 435, an image input through the network interface unit, or an image stored in the storage unit 440.
  • the image displayed on the display unit 480 may be a still image or a video, and may be a 2D image or a 3D image.
  • the controller 470 may control to reproduce the content.
  • the content may be content stored in the digital device 400, received broadcast content, or external input content input from the outside.
  • the content may be at least one of a broadcast image, an external input image, an audio file, a still image, a connected web screen, and a document file.
• the controller 470 may control to display an application or a list of applications that are stored in the digital device 400 or downloadable from an external network.
• the controller 470 may control to install and run an application downloaded from an external network, along with various user interfaces. In addition, according to a user's selection, an image related to an application to be executed may be controlled to be displayed on the display unit 480.
  • a channel browsing processor may be further provided to generate a thumbnail image corresponding to the channel signal or the external input signal.
• the channel browsing processor receives a stream signal TS output from the demodulator 420 or a stream signal output from the external device interface 435, extracts an image from the input stream signal, and generates a thumbnail image.
  • the generated thumbnail image may be input as it is or encoded to the controller 470.
  • the generated thumbnail image may be encoded in a stream form and input to the controller 470.
  • the controller 470 may display a thumbnail list including a plurality of thumbnail images on the display unit 480 using the input thumbnail image. Meanwhile, the thumbnail images in the thumbnail list may be updated sequentially or simultaneously. Accordingly, the user can easily grasp the contents of the plurality of broadcast channels.
• the display unit 480 converts an image signal, a data signal, or an OSD signal processed by the controller 470, or an image signal, a data signal, or the like received from the external device interface unit 435, into R, G, and B signals to generate a drive signal.
  • the display unit 480 may be a PDP, an LCD, an OLED, a flexible display, a 3D display, or the like.
  • the display unit 480 may be configured as a touch screen and used as an input device in addition to the output device.
• the audio output unit 485 receives a signal processed by the controller 470, for example, a stereo signal, a 3.1-channel signal, or a 5.1-channel signal, and outputs it as audio. The audio output unit 485 may be implemented as various types of speakers.
• a sensing unit including at least one of a touch sensor, a voice sensor, a position sensor, and a motion sensor may be further provided in the digital device 400.
• the signal detected by the sensing unit may be transmitted to the controller 470 through the user input interface unit 450.
  • a photographing unit (not shown) for photographing the user may be further provided. Image information photographed by a photographing unit (not shown) may be input to the controller 470.
• the controller 470 may detect a user's gesture by using, individually or in combination, an image photographed by the photographing unit (not shown) and a signal sensed by the sensing unit (not shown).
• the power supply unit 490 supplies power throughout the digital device 400. In particular, the power supply unit 490 may supply power to the controller 470, which may be implemented in the form of a System on Chip (SoC), to the display unit 480 for displaying an image, and to the audio output unit 485 for audio output.
  • the power supply unit 490 may include a converter (not shown) for converting AC power into DC power.
• for example, when the display unit 480 is implemented as a liquid crystal panel including a plurality of backlight lamps, the power supply unit 490 may further include an inverter (not shown) capable of a pulse width modulation (PWM) operation for variable brightness or dimming driving.
  • the remote control device 500 transmits the user input to the user input interface unit 450.
• the remote control device 500 may use a communication method such as Bluetooth, Radio Frequency (RF) communication, Infrared (IR) communication, Ultra Wideband (UWB), or ZigBee.
  • the remote control device 500 may receive an image, an audio or a data signal output from the user input interface unit 450, display it on the remote control device 500, or output a voice or vibration.
  • the digital device 400 described above may be a digital broadcast receiver capable of processing a fixed or mobile ATSC or DVB digital broadcast signal.
• the digital device according to the present invention may omit some of the illustrated components or, conversely, may further include components not shown.
• as another example, the digital device may not include a tuner and a demodulator, and may receive and play content through the network interface unit or the external device interface unit.
  • FIG. 5 is a block diagram illustrating a detailed configuration of the controller of FIGS. 2 to 4 according to an embodiment of the present invention.
• the control unit may include a demultiplexer 510, an image processor 520, an OSD generator 540, a mixer 550, a frame rate converter (FRC) 555, and a formatter 560.
  • controller may further include a voice processor and a data processor.
  • the demultiplexer 510 demultiplexes an input stream.
• the demultiplexer 510 may demultiplex an input MPEG-2 TS into video, audio, and data signals.
  • the stream signal input to the demultiplexer 510 may be a stream signal output from a tuner, a demodulator, or an external device interface unit.
• the image processor 520 performs image processing of the demultiplexed image signal. To this end, the image processor 520 may include an image decoder 525 and a scaler 535. The image decoder 525 decodes the demultiplexed video signal, and the scaler 535 scales the resolution of the decoded video signal so that the display unit can output it.
  • the image decoder 525 may support various standards.
• for example, the video decoder 525 may perform the function of an MPEG-2 decoder when the video signal is encoded in the MPEG-2 standard, and may perform the function of an H.264 decoder when the video signal is encoded in the Digital Multimedia Broadcasting (DMB) method or the H.264 standard.
• the video signal decoded by the image processor 520 is input to the mixer 550.
• the OSD generator 540 generates OSD data according to a user input or by itself. For example, the OSD generator 540 generates data for displaying various data in the form of a graphic or text on the screen of the display unit 480 based on a control signal of the user input interface.
  • the generated OSD data includes various data such as a user interface screen of the digital device, various menu screens, widgets, icons, viewing rate information, and the like.
  • the OSD generator 540 may generate data for displaying broadcast information based on subtitles or EPGs of a broadcast image.
• the mixer 550 mixes the OSD data generated by the OSD generator 540 and the image signal processed by the image processor, and provides the result to the formatter 560. Since the decoded video signal and the OSD data are mixed, the OSD is overlaid and displayed on the broadcast video or the external input video.
  • the frame rate converter (FRC) 555 converts a frame rate of an input video.
  • the frame rate converter 555 may convert the frame rate of the input 60Hz image to have a frame rate of, for example, 120Hz or 240Hz according to the output frequency of the display unit.
• various methods may exist for converting the frame rate. For example, when converting the frame rate from 60 Hz to 120 Hz, the frame rate converter 555 may insert the same first frame between the first frame and the second frame, or may insert a third frame predicted from the first frame and the second frame.
• as another example, when converting the frame rate from 60 Hz to 240 Hz, the frame rate converter 555 may insert three identical or predicted frames between existing frames. On the other hand, when no separate frame conversion is performed, the frame rate converter 555 may be bypassed.
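• purely as an illustration, the two 60 Hz to 120 Hz strategies above can be sketched as follows in TypeScript; the `Frame` type and the averaging predictor are simplifying assumptions, not the converter's actual algorithm.

```typescript
// A "frame" is reduced to a number for illustration only.
type Frame = number;

// 60 Hz -> 120 Hz by repeating each frame once (same-frame insertion).
function repeatFrames(input: Frame[]): Frame[] {
  return input.flatMap((f) => [f, f]);
}

// 60 Hz -> 120 Hz by inserting a frame predicted from its neighbors
// (here simply the average of two adjacent frames).
function interpolateFrames(input: Frame[]): Frame[] {
  const out: Frame[] = [];
  for (let i = 0; i < input.length; i++) {
    out.push(input[i]);
    if (i + 1 < input.length) out.push((input[i] + input[i + 1]) / 2);
  }
  return out;
}

console.log(repeatFrames([1, 2, 3]));      // [1, 1, 2, 2, 3, 3]
console.log(interpolateFrames([1, 2, 3])); // [1, 1.5, 2, 2.5, 3]
```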
  • the formatter 560 changes the output of the input frame rate converter 555 to match the output format of the display unit.
• the formatter 560 may output R, G, and B data signals, and these R, G, and B data signals may be output as low voltage differential signals (LVDS) or mini-LVDS.
  • the formatter 560 may support a 3D service through the display by configuring the output in a 3D form according to the output format of the display.
  • the voice processing unit (not shown) in the controller may perform voice processing of the demultiplexed voice signal.
  • the voice processor (not shown) may support processing of various audio formats. For example, even when a voice signal is encoded in a format such as MPEG-2, MPEG-4, AAC, HE-AAC, AC-3, BSAC, etc., a decoder corresponding thereto may be provided.
• the voice processing unit (not shown) in the controller may process bass, treble, volume control, and the like.
  • the data processor in the control unit may perform data processing of the demultiplexed data signal.
  • the data processor may decode the demultiplexed data signal even when it is encoded.
  • the encoded data signal may be EPG information including broadcast information such as a start time and an end time of a broadcast program broadcasted in each channel.
  • each component may be integrated, added, or omitted according to the specifications of the digital device actually implemented. That is, as needed, two or more components may be combined into one component or one component may be subdivided into two or more components.
• the function performed in each block is for explaining an embodiment of the present invention, and the specific operation or device thereof does not limit the scope of the present invention.
  • the digital device may be an image signal processing device that performs signal processing of an image stored in the device or an input image.
• examples thereof may include a set-top box (STB) excluding the display unit 480 and the audio output unit 485 shown in FIG. 4, the above-described DVD player, a Blu-ray player, a game device, a computer, and the like.
  • FIG. 6 is a diagram illustrating input means connected to the digital device of FIGS. 2 to 4 according to one embodiment of the present invention.
• in order to control the digital device 600, a front panel (not shown) provided on the digital device 600 or a control means (input means) is used.
• the control means includes user interface devices (UIDs) capable of wired or wireless communication, such as the remote control 610, the keyboard 630, the pointing device 620, and a touch pad, which are implemented mainly for the purpose of controlling the digital device 600, and may also include a control means dedicated to an external input device connected to the digital device 600.
• the control means also includes mobile devices, such as a smart phone and a tablet PC, that can control the digital device 600 through mode switching or the like, even though controlling the digital device 600 is not their main purpose.
  • a pointing device is described as an embodiment, but is not limited thereto.
• the input means may employ, as necessary, at least one communication protocol, such as Bluetooth, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra Wideband (UWB), ZigBee, Digital Living Network Alliance (DLNA), or RS, to communicate with the digital device.
  • the remote controller 610 refers to conventional input means equipped with various key buttons necessary for controlling the digital device 600.
• the pointing device 620 is equipped with a gyro sensor and implements a pointer corresponding to the screen of the digital device 600 based on a user's movement, pressure, rotation, and the like, and transmits a predetermined control command to the digital device 600.
  • the pointing device 620 may be named by various names such as a magic remote controller and a magic controller.
• the keyboard 630 was implemented because, as the digital device 600 evolved from providing only conventional broadcasts into an intelligent integrated digital device providing various services such as a web browser, applications, and a social network service (SNS), text input was no longer easy with conventional input means; it is implemented similarly to the keyboard of a PC to facilitate the input of text.
• the control means, such as the remote control 610, the pointing device 620, and the keyboard 630, may be provided with a touch pad, if necessary, and may be used for more convenient and various control purposes, such as text input, pointer movement, and enlargement/reduction of pictures or videos.
  • the digital device described in the present specification uses a Web OS as an OS and / or a platform.
  • a process such as a configuration or an algorithm based on Web OS may be performed by the controller of the above-described digital device.
• here, the controller broadly encompasses the controllers of FIGS. 2 to 5. Therefore, hereinafter, the hardware and components, including the related software, firmware, and the like, that process Web OS-based services, applications, content, and the like in the digital device are collectively referred to as the controller.
  • Such a Web OS based platform is intended to enhance development independence and functionality scalability by integrating services and applications based on, for example, a luna-service bus, and to develop applications based on a web application framework. Productivity can also be increased. In addition, multi-tasking can be supported by efficiently utilizing system resources through Web OS processes and resource management.
• the Web OS platform described in the present specification may be used not only for fixed devices such as PCs, TVs, and STBs, but also for mobile devices such as mobile phones, smart phones, tablet PCs, notebooks, and wearable devices.
• the architecture of software for conventional digital devices was a monolithic structure that solved problems dependent on the market, was based on a single process and multi-threading technology, and, as a closed product, had difficulty accommodating external applications. Afterwards, new platform-based development was pursued; through cost innovation by chip-set replacement and efficient development of UIs and external applications, layering and componentization were carried out, resulting in a three-layered structure and an add-on structure for a single-source product and open applications.
• more recently, the software architecture has been further developed to provide a modular architecture of functional units, a Web Open API (Application Programming Interface) for the eco-system, and a modular design for a Native Open API for, for example, a game engine; accordingly, it has become a multi-process structure based on a service structure.
  • FIG. 7 is a diagram illustrating a Web OS architecture according to an embodiment of the present invention.
  • the platform can be largely classified into a kernel, a system library-based Web OS core platform, an application, a service, and the like.
  • the architecture of the Web OS platform is a layered structure, with the OS at the bottom layer, system library (s) at the next layer, and applications at the top.
  • the lowest layer may include a Linux kernel as an OS layer and include Linux as an OS of the digital device.
• above the OS layer, a Board Support Package (BSP)/Hardware Abstraction Layer (HAL) layer, a Web OS core modules layer, a service layer, a Luna-Service bus layer, and an Enyo framework/NDK/QT layer are sequentially located, and the application layer is located at the top layer.
  • some layers in the above-described Web OS layer structure may be omitted, and a plurality of layers may be one layer or conversely, one layer may have a plurality of layer structures.
• the Web OS core modules layer may include a Luna Surface Manager (LSM) that manages surface windows and the like, a System & Application Manager (SAM) that manages the execution and execution states of applications, and a Web Application Manager (WAM) for managing web applications based on WebKit.
  • the LSM manages an application window displayed on the screen.
• the LSM manages display hardware, provides a buffer that renders the content required by applications, and composites the rendering results of a plurality of applications and outputs them on the screen.
  • the SAM manages performance policies for various conditions of systems and applications.
• the Web OS may be viewed as treating a web application as its basic application, and the web application is based on the Enyo framework.
• an application uses services through the Luna-service bus; a service may be newly registered on the bus, and an application may find and use the service it needs.
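• as a minimal illustrative sketch of such bus-based service registration and lookup (the actual luna-service bus API is not disclosed in this document, so all names below are assumptions):

```typescript
type Handler = (payload: unknown) => unknown;

class ServiceBus {
  private services = new Map<string, Handler>();

  // A service registers itself on the bus under a URI-like name.
  register(name: string, handler: Handler): void {
    this.services.set(name, handler);
  }

  // An application finds and calls the service it needs by name.
  call(name: string, payload: unknown): unknown {
    const handler = this.services.get(name);
    if (!handler) throw new Error(`service not found: ${name}`);
    return handler(payload);
  }
}

const bus = new ServiceBus();
bus.register("com.example.audio/setVolume", (p) => ({ returnValue: true, volume: p }));
console.log(bus.call("com.example.audio/setVolume", 12)); // { returnValue: true, volume: 12 }
```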
  • the service layer may include services of various service levels, such as a TV service and a Web OS service.
  • the Web OS service may include a media server, Node.JS, and the like.
  • the Node.JS service supports, for example, JavaScript.
• a Web OS service may be implemented as a Linux process that implements function logic and communicates over the bus. Such services can be largely divided into four parts: the TV process, services migrated from the existing TV to the Web OS or differentiated by manufacturer, the Web OS common services, and the Node.js services developed in JavaScript and used through Node.js.
  • the application layer may include all applications that can be supported by a digital device, such as a TV application, a showcase application, a native application, and a web application.
  • Applications on the Web OS may be classified into a web application, a Palm Development Kit (PDK) application, a Qt Meta Language or Qt Modeling Language (QML) application, and the like according to an implementation method.
  • the web application is based on the WebKit engine and runs on the WAM Runtime. Such web applications may be based on the Enyo framework, or may be developed and executed based on general HTML5, Cascading Style Sheets (CSS), or JavaScript.
  • the PDK application includes a third-party or native application developed in C / C ++ based on a PDK provided for an external developer.
• the PDK refers to a development library and a set of tools provided to enable a third party, such as a game developer, to develop a native application in C/C++.
  • a PDK application can be used for the development of applications whose performance is important.
  • the QML application is a Qt-based native application, and includes a basic application provided with the Web OS platform such as a card view, a home dashboard, a virtual keyboard, and the like.
• QML is a script-style markup language used instead of C++.
  • the native application refers to an application that is developed in C / C ++, compiled, and executed in a binary form.
  • Such a native application has an advantage in that its execution speed is fast.
  • FIG. 8 is a diagram illustrating the architecture of a Web OS device according to one embodiment of the present invention.
  • FIG. 8 is a block diagram based on runtime of a Web OS device, which can be understood with reference to the layered structure of FIG. 7.
  • services and applications and Web OS core modules are included on a system OS (Linux) and system libraries, and communication therebetween may be via a luna-service bus.
• referring to FIG. 8, the Web OS services include Node.js services based on HTML5, CSS, and JavaScript, such as e-mail, contacts, and calendar, and Web OS services such as logging, backup, file notify, database (DB), activity manager, system policy, audio daemon (AudioD), update, and media server. The TV services include an Electronic Program Guide (EPG), a Personal Video Recorder (PVR), data broadcasting, and the like; the CP services include voice recognition, Now on, Notification, search, ACR (Auto Content Recognition), CBOX (Contents List Browser), wfdd, DMR, Remote Application, Download, SDPIF (Sony Philips Digital Interface Format), and the like; and the native applications include PDK applications, the browser, QML applications, and the like.
  • Enyo framework-based UI-related TV applications and Web applications are processed through Web OS core modules such as SAM, WAM, and LSM described above through the Luna-Service Bus.
  • TV applications and Web applications may not necessarily be Enyo framework based or UI related.
  • CBOX can manage the list and metadata of the content of external devices such as USB, DLNA, cloud, etc. connected to the TV. Meanwhile, the CBOX may output content listings of various content containers such as USB, DMS, DVR, cloud, etc. in an integrated view. In addition, the CBOX can display various types of content listings such as pictures, music, and videos, and manage its metadata. In addition, the CBOX may output the contents of the attached storage in real-time. For example, the CBOX should be able to immediately output a content list of the storage device when the storage device such as USB is plugged in. In this case, a standardized method for processing the content listing may be defined. In addition, CBOX can accommodate a variety of connection protocols.
• the SAM is intended to reduce module complexity and enhance scalability.
• the existing System Manager handled multiple functions, such as the system UI, window management, the web application runtime, and the handling of constraints on the UX, within one process, so its implementation complexity was large; by separating the main functions and clarifying the interfaces between them, the implementation complexity can be reduced.
  • LSM supports the development and integration of system UX implementations, such as card views and launcher, independently, and makes it easy to respond to changes in product requirements.
• when compositing a plurality of application screens, such as App-on-App, the LSM can provide a window management mechanism for multi-tasking that makes the most of the hardware (HW) resources, for multi-window, for 21:9 screens, and the like.
  • LSM supports the implementation of system UI based on QML and improves its development productivity.
  • QML UX is based on MVC, which makes it easy to compose views of layouts and UI components, and to easily develop code to handle user input.
• the interface between QML and the Web OS components is made through a QML extension plug-in, and the graphic operations of an application may be based on the Wayland protocol, luna-service calls, and the like.
  • LSM stands for Luna Surface Manager and functions as an application window compositor.
  • the LSM allows you to synthesize independently developed applications, UI components, etc. on the screen.
• the LSM, as a compositor, defines an output area, an interworking method, and the like.
  • the compositor LSM handles graphics compositing, focus management, input events, and the like.
  • the LSM receives an event, focus, and the like from an input manager.
  • the input manager may include a HID such as a remote controller, a mouse & a keyboard, a joystick, a game pad, an application remote, a pen touch, and the like.
• the LSM supports multiple window models; for example, owing to the system UI, windows may be executed simultaneously in all applications.
• for example, the LSM may support system UI input related to voice recognition (NLP (Natural Language Processing) and the like), pattern gestures (MRCU (Mobile Radio Control Unit) and the like), the Live menu, ACR (Auto Content Recognition), and the like.
  • FIG. 9 is a diagram illustrating a graphic composition flow in a Web OS device according to an embodiment of the present invention.
• the graphic composition processing may be performed through a web application manager 910 in charge of a UI process, a WebKit 920 in charge of a web process, an LSM 930, and a graphic manager (GM) 940.
• the generated graphic data is transferred to the LSM 930 when the graphic data does not belong to a full-screen application.
• the web application manager 910 receives an application generated by the WebKit 920, for example through a GPU (Graphic Processing Unit) memory shared for graphic management between the UI process and the web process, and, if the application is not a full-screen application as described above, passes it to the LSM 930. In the case of a full-screen application, the LSM 930 may be bypassed; in this case, the application may be transferred directly to the graphic manager 940.
• the LSM 930 transmits the received UI application to a Wayland compositor via a Wayland surface, and the Wayland compositor appropriately processes the application and passes it toward the graphic manager.
  • the graphic data delivered from the LSM 930 is delivered to the graphic manager compositor via, for example, the LSM GM surface of the graphic manager 940.
  • the full-screen application is delivered directly to the graphic manager 940 without passing through the LSM 930, which is processed by the graphic manager compositor via the WAM GM surface.
• the graphic manager processes all graphic data in the Web OS device; it receives not only the data that passed through the LSM GM surface and the WAM GM surface described above, but also graphic data that passed through GM surfaces, such as those of a data broadcasting application and a caption application, and processes all of it so that it is properly displayed on the screen.
  • the function of the GM compositor is the same as or similar to that of the compositor described above.
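• the routing described above (full-screen applications bypassing the LSM) can be sketched, for illustration only, as follows; the path labels are shorthand for the surfaces named above, not actual API identifiers.

```typescript
interface App {
  name: string;
  fullScreen: boolean;
}

function routeGraphicData(app: App): string[] {
  const path = ["WebKit(web process)", "WAM(UI process)"];
  if (!app.fullScreen) path.push("LSM(Wayland compositor)", "LSM GM surface");
  else path.push("WAM GM surface"); // LSM bypassed for full-screen apps
  path.push("GraphicManager(compositor)", "screen");
  return path;
}

console.log(routeGraphicData({ name: "menu", fullScreen: false }));
console.log(routeGraphicData({ name: "movie", fullScreen: true }));
```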
  • FIG. 10 is a diagram illustrating a media server according to an embodiment of the present invention.
• FIG. 11 is a block diagram illustrating a configuration of a media server according to an embodiment of the present invention, and FIG. 12 is a diagram illustrating a relationship between a media server and a TV service according to an embodiment of the present invention.
  • the media server supports the execution of various multimedia in the digital device and manages necessary resources.
  • the media server can efficiently use hardware resources required for media play.
  • the media server requires audio / video hardware resources in order to execute multimedia, and can efficiently utilize the resource usage status.
  • fixed devices with larger screens than mobile devices require more hardware resources to run multimedia and require faster encoding / decoding and graphics data delivery due to the large amount of data.
• for example, the media server should be able to perform broadcasting, recording, and tuning tasks, to record simultaneously with viewing, or to simultaneously display the sender and receiver screens during a video call.
• hardware resources such as encoders, decoders, tuners, and display engines exist in limited numbers on a chip-set basis, making it difficult to execute multiple tasks at the same time; the media server therefore coordinates their use so that input requests can be processed.
• the media server can enhance system stability by, for example, removing, pipeline by pipeline, a playback pipeline in which an error occurred during media playback and restarting it, so that even such an error does not affect other media playback.
• a pipeline is a chain connecting the respective unit functions, such as decoding, analysis, and output, when media playback is requested, and the required unit functions may vary according to the media type.
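• as a hedged sketch (an assumed structure, not the actual uMS implementation), a pipeline can be modeled as a chain of unit functions whose composition depends on the media type:

```typescript
type Stage = (data: string) => string;

const parse: Stage = (d) => `parsed(${d})`;
const decode: Stage = (d) => `decoded(${d})`;
const output: Stage = (d) => `output(${d})`;

function buildPipeline(mediaType: "file" | "broadcast"): Stage[] {
  // Required unit functions may vary according to the media type.
  return mediaType === "broadcast" ? [parse, decode, output] : [decode, output];
}

function run(pipeline: Stage[], data: string): string {
  return pipeline.reduce((acc, stage) => stage(acc), data);
}

console.log(run(buildPipeline("broadcast"), "ts-stream"));
// output(decoded(parsed(ts-stream)))
```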
  • Media servers can have extensibility, for example, adding new types of pipelines without affecting existing implementations.
  • the media server may accommodate a camera pipeline, a video conference pipeline, a third-party pipeline, and the like.
  • the media server can handle normal media playback and TV task execution as separate services because the interface of the TV service is different from the media playback case.
• for example, the media server supports operations such as 'setchannel', 'channelup', 'channeldown', 'channeltuning', and 'recordstart' in relation to the TV service, and operations such as 'play', 'pause', and 'stop' in relation to general media playback; different operations are supported for the two, and they can be treated as separate services.
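• for illustration, the two groups of operations can be modeled as separate service interfaces; the operation names come from the text above, while the interface shapes themselves are assumptions:

```typescript
// Shown for contrast: the TV service exposes channel/recording operations...
interface TvService {
  setchannel(ch: number): void;
  channelup(): void;
  channeldown(): void;
  channeltuning(freq: number): void;
  recordstart(): void;
}

// ...while general media playback exposes a different set of operations.
interface MediaPlayback {
  play(): void;
  pause(): void;
  stop(): void;
}

const player: MediaPlayback = {
  play: () => console.log("play"),
  pause: () => console.log("pause"),
  stop: () => console.log("stop"),
};

player.play();
```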
  • the media server may control or integrate management of resource management functions.
  • the allocation and retrieval of hardware resources in the device are integrated in the media server.
  • the TV service process transmits the running task and resource allocation status to the media server.
• the media server allocates resources and executes a pipeline as each media item runs, and, upon a request for media execution, may allow execution by priority (e.g., policy) based on the resource status occupied by each pipeline and reclaim the resources of other pipelines.
  • predefined execution priority and required resource information for a specific request are managed by a policy manager, and the resource manager may communicate with the policy manager to process resource allocation and retrieval.
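• the priority-based allocation and reclaim just described can be sketched as follows; the numeric priorities and the two-unit resource budget are illustrative assumptions, not disclosed policy values.

```typescript
interface PipelineInfo {
  id: string;
  priority: number;  // higher wins under the assumed policy
  resources: number; // e.g. decoder units held
}

class ResourceManager {
  private total = 2; // limited hardware resources on a chip-set basis
  private allocated: PipelineInfo[] = [];

  private used(): number {
    return this.allocated.reduce((s, p) => s + p.resources, 0);
  }

  allocate(req: PipelineInfo): boolean {
    while (this.used() + req.resources > this.total) {
      // Policy: reclaim resources from the lowest-priority pipeline.
      const victim = [...this.allocated].sort((a, b) => a.priority - b.priority)[0];
      if (!victim || victim.priority >= req.priority) return false; // request denied
      this.allocated = this.allocated.filter((p) => p.id !== victim.id);
    }
    this.allocated.push(req);
    return true;
  }
}

const rm = new ResourceManager();
console.log(rm.allocate({ id: "watch", priority: 1, resources: 1 }));  // true
console.log(rm.allocate({ id: "record", priority: 1, resources: 1 })); // true
console.log(rm.allocate({ id: "call", priority: 5, resources: 1 }));   // true (reclaims "watch")
```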
• the media server may hold an identifier (ID) for all operations related to playback. For example, the media server may issue instructions to a particular pipeline based on the identifier. The media server may issue separate commands to the pipelines for more than one media playback.
  • the media server may be responsible for playback of HTML 5 standard media.
  • the media server may follow the TV restructuring scope of the separate service processing of the TV pipeline.
  • the media server may be designed and implemented regardless of the TV restructuring scope. If the TV is not serviced separately, the media server may need to be re-executed when there is a problem with a specific task.
  • the media server is also referred to as uMS, or micro media server.
• here, the media player is a media client and may mean, for example, WebKit for the HTML5 video tag, a camera, the TV, Skype, 2nd Screen, and the like.
• in the media server, the management of micro resources, by means of a resource manager, a policy manager, and the like, is a core function.
• the media server also plays a playback control role for web standard media content.
  • the media server may also manage pipeline controller resources.
  • Such media servers support, for example, extensibility, reliability, efficient resource usage, and the like.
• in other words, the uMS, that is, the media server, manages and controls the overall use of resources so that they are properly processed within the system, for Web OS device functions such as a cloud game, an MVPD (pay service, etc.), a camera preview, a second screen, and Skype. Meanwhile, each resource is used, for example, through a pipeline, and the media server can manage and control the creation, deletion, and use of pipelines for resource management.
• here, a pipeline is created when media related to a request starts a sequence of tasks, such as parsing the request, decoding the stream, and outputting video.
• for example, in relation to a TV service, watching, recording, channel tuning, and the like are each processed under the control of resource usage through a pipeline generated according to the corresponding request.
• referring to FIG. 10, an application or service is connected to a media server 1020 via a luna-service bus 1010, and the media server 1020 is connected, through the luna-service bus 1010, to the pipelines generated for it.
• the application or service may have various clients according to its characteristics and may exchange data with the media server 1020 or the pipeline through these clients.
  • the client includes, for example, a uMedia client (web kit) and a resource manager (RM) client (C / C ++) for connecting to the media server 1020.
  • the application including the uMedia client is connected to the media server 1020 as described above. More specifically, the uMedia client corresponds to, for example, a video object to be described later, and the client uses the media server 1020 for operation of the video by request.
• the video operation relates to a video state; loading, unloading, play (playback, or reproduce), pause, stop, and the like may all be state data related to the video operation.
  • Each operation or state of such video can be handled through the creation of a separate pipeline.
  • the uMedia client sends state data related to the video operation to the pipeline manager 1022 in the media server.
  • the pipeline manager 1022 obtains information on a resource of the current device through data communication with the resource manager 1024 and requests allocation of a resource corresponding to the state data of the uMedia client.
  • the pipeline manager 1022 or the resource manager 1024 controls resource allocation through data communication with the policy manager 1026 when necessary in relation to the resource allocation. For example, when the resource manager 1024 has no or insufficient resources to allocate according to the request of the pipeline manager 1022, appropriate resource allocation may be performed according to the request according to the priority comparison of the policy manager 1026. Can be.
  • the pipeline manager 1022 requests the media pipeline controller 1028 to generate a pipeline for an operation according to the request of the uMedia client for the allocated resource according to the resource allocation of the resource manager 1024.
  • the media pipeline controller 1028 generates the required pipeline under the control of the pipeline manager 1022.
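• a hedged end-to-end sketch of this flow (client state data reaching the pipeline manager, resources being requested from the resource manager, and a pipeline being created by the pipeline controller) follows; all class and method names are assumptions for illustration:

```typescript
class MediaPipelineController {
  create(kind: string): string {
    return `pipeline<${kind}>`;
  }
}

class PipelineManager {
  constructor(
    private resources: { acquire: (n: number) => boolean },
    private controller: MediaPipelineController,
  ) {}

  // State data from a uMedia client triggers resource allocation and
  // pipeline creation for the corresponding operation.
  request(state: string): string {
    if (!this.resources.acquire(1)) throw new Error("resource allocation failed");
    return this.controller.create(state); // e.g. a "play" pipeline
  }
}

const manager = new PipelineManager({ acquire: () => true }, new MediaPipelineController());
console.log(manager.request("play")); // pipeline<play>
```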
• as shown, the generated pipelines may include not only a media pipeline and a camera pipeline but also pipelines related to playback, pause, stop, and the like.
  • the pipeline may include a pipeline for HTML5, Web CP, smartshare playback, thumbnail extraction, NDK, cinema, Multimedia and Hypermedia Information coding Experts Group (MHEG), and the like.
  • the pipeline may include, for example, a service-based pipeline (own pipeline) and a URI-based pipeline (media pipeline).
  • an application or service including an RM client may not be directly connected to the media server 1020. This is because an application or service may handle media directly. In other words, when an application or service directly processes media, it may not go through a media server. However, at this time, resource management is required for pipeline creation and its use, and for this purpose, the uMS connector functions. Meanwhile, when the uMS connector receives a resource management request for direct media processing of the application or service, the uMS connector communicates with the media server 1020 including the resource manager 1024. To this end, the media server 1020 should also be equipped with a uMS connector.
  • the application or service may respond to the request of the RM client by receiving resource management of the resource manager 1024 through the uMS connector.
• RM clients can handle services such as a native CP, TV services, second screens, Flash players, YouTube Media Source Extensions (MSE), cloud games, and Skype.
  • the resource manager 1024 may manage the resource through the data communication with the policy manager 1026 as necessary for resource management.
  • the URI-based pipeline is made through the media server 1020, rather than directly processing the media as described above.
  • a URI-based pipeline may include a player factory, a Gstreamer, a streaming plug-in, a digital rights management plug-in pipeline, and the like.
  • an interface method between an application and media services may be as follows.
• one is a method of using a service through the PDK interface.
  • a method of using a service in the existing CP can be used to extend existing platform plug-ins based on Luna for backward compatibility.
• seamless change is handled by a separate module (e.g., TVWIN), which is a process for first displaying the TV on the screen without the Web OS before or during Web OS boot and then handing it over seamlessly.
• this is used to provide the basic functions of the TV service for a fast response to the user's power-on request, because the Web OS has a slow boot time.
  • the module also supports seamless change, factory mode, and the like, which provide fast boot and basic TV functions as part of the TV service process.
  • the module may be responsible for switching from the non-web OS mode to the web OS mode.
  • a processing structure of the media server is shown.
• the solid-line boxes may indicate process configurations, and the dotted-line boxes may indicate internal processing modules within a process. The solid arrows may indicate inter-process calls, that is, Luna service calls, and the dashed arrows may indicate notifications, such as register/notify, or data flow.
  • a service or web application or PDK application (hereinafter referred to as an "application") is connected to various service processing configurations via a luna-service bus through which the application operates or is controlled.
  • the data processing path depends on the type of application. For example, when the application is image data related to a camera sensor, the application is transmitted to the camera processor 1130 and processed. In this case, the camera processor 1130 processes image data of the received application, including a gesture, a face detection module, and the like. For example, when the data is required to be selected by the user or to automatically use the pipeline, the camera processor 1130 may generate a pipeline through the media server processor 1110 and process the corresponding data.
  • the audio may be processed through the audio processor 1140 and the audio module 1150.
  • the audio processor 1140 processes the audio data received from the application and transmits the audio data to the audio module 1150.
  • the audio processor 1140 may include an audio policy manager to determine the processing of the audio data.
• the audio data thus processed is handled by the audio module 1150.
• the application may notify the audio module 1150 of data related to audio data processing, and the related pipeline may likewise notify the audio module 1150.
  • the audio module 1150 includes an advanced Linux sound architecture (ALSA).
• when the application includes or processes content to which digital rights management (hereinafter, DRM) is applied, the corresponding content data is transmitted to the DRM service processor 1160, and the DRM service processor 1160 generates a DRM instance to process the DRM-applied content data.
• the DRM service processor 1160 may also be connected, through the Luna-service bus, to a DRM pipeline in the media pipeline to process the DRM-applied content data.
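• the type-based routing described above (camera data to the camera processor 1130, audio to the audio processor 1140 and audio module 1150, DRM content to the DRM service processor 1160) can be sketched as a simple dispatch; the data shapes below are assumptions:

```typescript
type AppData =
  | { kind: "camera"; frame: string }
  | { kind: "audio"; samples: string }
  | { kind: "drm"; content: string };

// Each kind of application data is routed to its processing module.
function dispatch(data: AppData): string {
  switch (data.kind) {
    case "camera": return `cameraProcessor(1130) <- ${data.frame}`;
    case "audio":  return `audioProcessor(1140) -> audioModule(1150) <- ${data.samples}`;
    case "drm":    return `drmServiceProcessor(1160) <- ${data.content}`;
  }
}

console.log(dispatch({ kind: "drm", content: "movie.enc" }));
```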
  • the following describes processing when the application is media data or TV service data (e.g., broadcast data).
  • FIG. 12 illustrates only the media server processing unit and the TV service processing unit in FIG. 11 described above in more detail.
  • the TV service processor 1120 may include, for example, at least one or more of a DVR / channel manager, a broadcasting module, a TV pipeline manager, a TV resource manager, a data broadcasting module, an audio setting module, a path manager, and the like.
• the TV service processor 1220 may include, for example, a TV broadcast handler, a TV broadcast interface, a service processor, TV middleware, a path manager, and a BSP (e.g., NetCast).
  • the service processor may mean, for example, a module including a TV pipeline manager, a TV resource manager, a TV policy manager, a USM connector, and the like.
  • the TV service processor may have a configuration as shown in FIG. 11 or 12 or a combination thereof, and some components may be omitted or some components not shown may be added.
• the TV service processor 1120/1220 transmits DVR- or channel-related data to the DVR/channel manager based on the attribute or type of the TV service data received from the application, and then requests the TV pipeline manager to create and process a TV pipeline. Meanwhile, when the attribute or type of the TV service data is broadcast content data, the TV service processor 1120 creates and processes a TV pipeline through the TV pipeline manager in order to process the corresponding data through the broadcast module.
• a JSON (JavaScript Object Notation) file or a file written in C is processed by the TV broadcast handler and transmitted to the TV pipeline manager through the TV broadcast interface, so that a TV pipeline is created and processed.
  • the TV broadcast interface unit may transmit data or files that have passed through the TV broadcast handler to the TV pipeline manager based on the TV service policy and refer to them when generating the pipeline.
  • the TV pipeline manager may be controlled by the TV resource manager in generating one or more pipelines in response to a TV pipeline generation request from a processing module or manager in a TV service.
• the TV resource manager may be controlled by the TV policy manager to request the status and allocation of the resources allocated to the TV service according to a TV pipeline creation request of the TV pipeline manager, and may communicate data with the media server processor 1110/1210 through the uMS connector.
  • the resource manager in the media server processor 1110/1210 transmits a status and resource allocation of a resource for a current TV service at the request of the TV resource manager. For example, as a result of checking the resource manager in the media server processor 1110/1210, if all resources for the TV service are already allocated, the TV resource manager may notify that all resources are currently allocated.
• together with the notification, the resource manager in the media server processing unit may remove a predetermined TV pipeline, according to a priority or a predetermined criterion, from among the TV pipelines previously allocated for the TV service, and may request or allocate the creation of a TV pipeline for the requested TV service. Alternatively, the TV resource manager may appropriately remove, add, or establish a TV pipeline in accordance with the status report of the resource manager in the media server processor 1110/1210.
  • the BSP supports backward compatibility with existing digital devices, for example.
  • the TV pipelines thus generated may be properly operated under the control of the path manager during the processing.
  • the path manager may determine or control the processing path or process of the pipelines in consideration of not only the TV pipeline but also the operation of the pipeline generated by the media server processor 1110/1210.
  • the media server processor 1110/1210 includes a resource manager, a policy manager, a media pipeline manager, a media pipeline controller, and the like.
  • a pipeline generated under the control of the media pipeline manager and the media pipeline controller can be variously created such as a camera preview pipeline, a cloud game pipeline, and a media pipeline.
  • the media pipeline may include a streaming protocol, an auto / static gstreamer, a DRM, and the like, which may be determined according to a path manager's control.
• the detailed processing procedure in the media server processor 1110/1210 is similar to that described above with reference to FIG. 10 and will not be repeated herein.
• the resource manager in the media server processor 1110/1210 may manage resources on a counter basis, for example.
  • FIG. 13 is a block diagram illustrating a method of processing audio data in a digital device according to an embodiment of the present invention.
  • some of the modules of the digital device of FIG. 13 may be added or changed.
• the arrows between the respective configuration modules may indicate inter-process calls, that is, Luna service calls, notifications such as register/notify, or data flow.
  • the application 1310 may be connected to various service processing components through a luna-service bus, and may operate or be controlled through the connected service processing components.
• the applications 1310 include audio data and, depending on the type of audio data, may be classified into an application 1311 related to system sounds, an application 1312 related to alerts, an application 1313 related to ringtones, an application 1314 related to notifications, an application 1315 related to media, an application 1316 related to text to speech (TTS), an application 1317 related to flash audio (Flash), and the like. However, this is merely an example, and fewer or more application types may be included in the applications 1310.
• it is assumed that functions necessary for the operation of these applications are implemented in the digital device.
• the application 1311 related to system sound may be, for example, an application related to a selection sound of a specific key provided on the remote controllers 610, 620, and 630, a selection sound of a specific application output on the display unit of the digital device, or a notification sound generated when a specific function of the digital device is executed.
• the application 1312 related to alerts may be, for example, an application related to high-priority system sounds.
  • the application 1313 related to ringtones may be, for example, an application related to a notification sound of a call event received through a call application.
• the application 1314 related to notifications may be, for example, an application related to a notification sound, other than the above-described alert, for notifying the occurrence of a specific event.
• the application 1316 related to text to speech (TTS) is, for example, an application related to audio data for outputting, as a voice, a guide message output through the display unit of the digital device, and may be used mainly when the voice recognition function is activated.
• the application 1317 related to flash audio may be, for example, an application related to audio data streamed via Adobe Flash.
  • the TV service processor 1330 or the pulsed audio module 1340 may receive audio data from a specific application.
• for example, audio data that needs to be processed through a hardware decoder may be received by the TV service processor 1330, and PCM audio data, that is, audio data that is already decoded or does not need to be processed through a hardware decoder, may be received by the pulse audio module 1340.
• whether audio data from a specific application is to be received by the TV service processor 1330 or the pulse audio module 1340 may be controlled by the audio processor 1320.
  • the TV service processor 1330 may include a DSP Audio Sink Server (DASS) for hardware processing (eg, decoding) audio data.
  • the pulsed audio module 1340 may include an advanced Linux sound architecture (ALSA), which is an interface for outputting audio data.
  • the TV service processor 1330 and the pulsed audio module 1340 may notify the audio processor 1320 when the audio data is received from the application 1310.
  • the audio processor 1320 may control the output of the audio data by controlling the TV service processor 1330 or the pulsed audio module 1340 by applying a policy related to the audio data according to the notification.
• the policy related to audio data may define, when there is audio data corresponding to each of a plurality of contents, which type of audio data should be preferentially output based on the priority of each type of audio data, how the input volume level and/or the output volume level of specific audio data should be controlled, and to which port of the audio output unit 1350 specific audio data should be output.
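• a minimal sketch of such a policy table follows; the audio types mirror FIG. 13, while the priorities, ducking levels, and port names are illustrative assumptions rather than disclosed values:

```typescript
type AudioType =
  | "system" | "alert" | "ringtone" | "notification"
  | "media" | "tts" | "flash";

interface AudioPolicyEntry {
  priority: number;     // higher is output preferentially
  duckOthersTo: number; // volume level imposed on lower-priority audio (0..1)
  port: "tv-speaker" | "external-speaker" | "headphone" | "spdif" | "bluetooth";
}

const policy: Record<AudioType, AudioPolicyEntry> = {
  alert:        { priority: 5, duckOthersTo: 0.0, port: "tv-speaker" },
  tts:          { priority: 4, duckOthersTo: 0.0, port: "tv-speaker" },
  notification: { priority: 3, duckOthersTo: 0.3, port: "tv-speaker" },
  ringtone:     { priority: 3, duckOthersTo: 0.3, port: "tv-speaker" },
  system:       { priority: 2, duckOthersTo: 1.0, port: "tv-speaker" },
  media:        { priority: 1, duckOthersTo: 1.0, port: "tv-speaker" },
  flash:        { priority: 1, duckOthersTo: 1.0, port: "tv-speaker" },
};

// When two streams compete, the higher-priority type plays at full level and
// the other is ducked to the level the winner's policy imposes.
function resolve(a: AudioType, b: AudioType): { [k: string]: number } {
  const [hi, lo] = policy[a].priority >= policy[b].priority ? [a, b] : [b, a];
  return { [hi]: 1.0, [lo]: policy[hi].duckOthersTo };
}

console.log(resolve("media", "notification")); // { notification: 1, media: 0.3 }
```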
  • the memory (not shown) in the digital device may prestore the policy associated with the audio data.
• the audio processor 1320 may also control the pulse audio module 1340 such that audio data received from the application 1311 related to system sound is output directly through the audio output unit 1350, without applying a policy, for speedy processing.
  • the audio processor 1320 may adjust the input volume level and / or the output volume level of the specific audio data.
• the audio processor 1320 may adjust the input volume level by controlling the input source of specific audio data, or may adjust the output volume level output through the audio output unit 1350 while maintaining the input volume level as it is.
• the audio output unit 1350 may include a plurality of ports for outputting audio data in addition to the port 1351 connected to the TV speaker (internal speaker).
  • the audio output unit 1350 may include a port 1351 connected to a TV speaker, a port 1352 connected to an external speaker, a port 1353 connected to a headphone, an optical output port (SPDIF) 1354, and a Bluetooth speaker. Connected port 1355 and the like.
  • the audio output unit 1350 outputs audio data received from the TV service processor 1330 and / or the pulsed audio module 1340.
  • FIG. 14 is a diagram for describing a method for activating a voice recognition function in a digital device according to one embodiment of the present invention.
  • the activation method of the voice recognition function shown in FIG. 14 is merely an example, and the present invention is not limited thereto.
• referring to FIG. 14, a user may activate the voice recognition function by selecting a preset area of the display unit 1410 of the display device 1400 using the remote control device 1420, or by selecting an icon of a specific application output on the display unit 1410 of the display device 1400.
  • the user may activate the voice recognition function by selecting a hot key corresponding to the voice recognition function included in the remote controller 1420.
  • the user may release a press of the hot key after uttering a predetermined word or sentence while pressing the hot key.
  • the user may activate a voice recognition function by pressing a specific button corresponding to the voice recognition function included in the headphones 1430 paired with the display device 1400.
  • the user may release the pressing of the specific button after uttering a predetermined word or sentence while pressing the specific button.
• FIG. 15 is a diagram illustrating an example of an operation of an audio processor when a voice recognition function is activated in a digital device according to an embodiment of the present invention.
• video data 1520 corresponding to predetermined content is output on the display unit 1510 of the digital device 1500.
  • the first audio data corresponding to the content is output through the TV speaker of the digital device 1500.
  • the content corresponds to a live broadcast signal.
• when the voice recognition function is activated by the user, the voice recognition application may be executed, and a first GUI 1530 corresponding to the execution screen of the voice recognition application may be output on the display unit 1510.
• the pulse audio module 1340 may receive, from the application 1314 related to notifications, second audio data indicating that voice recording for voice recognition has started. The pulse audio module 1340 may notify the audio processor 1320 of the reception of the second audio data.
• the audio processor 1320 may adjust the output of the first and second audio data based on a policy related thereto. For example, when audio data related to the voice recognition function has a higher priority than audio data corresponding to a broadcast signal, the audio processor 1320 may control the pulse audio module 1340 so that the output volume level of the second audio data is set to the volume level set for the TV speaker, and may control the TV service processor 1330 so that the input volume level of the first audio data is reduced to a preset level based on the volume level set for the TV speaker.
• depending on the embodiment, the audio processor 1320 may control the pulse audio module 1340 in adjusting the output of the first audio data.
• the first audio data and the second audio data may be mixed by a mixer (not shown) and simultaneously output through the TV speaker.
• the output volume level of the first audio data output through the TV speaker may be different from the output volume level of the second audio data. That is, when the voice recognition function is activated, the output volume of the first audio data corresponding to the broadcast signal may be lowered and the output volume of the second audio data, which indicates that voice recording for voice recognition has started, may be made relatively high, so that the user can recognize that the voice recognition function has been activated and that the utterance of a predetermined word or sentence should be started.
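• for illustration, this ducking behavior can be sketched as follows, assuming normalized volume levels between 0 and 1; the concrete levels are assumptions, not disclosed values:

```typescript
interface Stream { name: string; level: number } // level 0..1

function onVoiceRecognitionStart(broadcast: Stream, notification: Stream): Stream[] {
  const speakerLevel = 1.0;          // volume level set in the TV speaker
  notification.level = speakerLevel; // second audio data at the speaker level
  broadcast.level = 0.2;             // first audio data reduced to a preset level
  return [broadcast, notification];  // mixed and output simultaneously
}

console.log(onVoiceRecognitionStart(
  { name: "broadcast", level: 1.0 },
  { name: "rec-started", level: 0.0 },
));
// [ { name: 'broadcast', level: 0.2 }, { name: 'rec-started', level: 1 } ]
```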
• meanwhile, the pulse audio module 1340 may receive, from the application 1314 related to notifications, third audio data indicating that the voice recording for voice recognition is completed.
• the audio processor 1320 may process the third audio data, in relation to the first audio data, in the same manner as the second audio data.
• that is, the audio processor 1320 may control the output volume level of the first audio data to be lower than the output volume level of the third audio data.
• depending on the embodiment, the output volume level of the first audio data corresponding to the broadcast signal may be set to zero.
  • the audio processor 1320 may adjust the input volume level to 0 by controlling the input source corresponding to the first audio data.
• alternatively, the audio processor 1320 may control the pulse audio module 1340 to adjust the input volume level of the first audio data to 0 and to output, through the TV speaker, only specific audio data related to the TTS application. Accordingly, the TV speaker may output only the audio data related to the TTS application without outputting the first audio data.
  • Thereafter, the audio processor 1320 may control the pulse audio module 1340 so that the first audio data is output again at the volume level set for the TV speaker.
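  • By way of illustration only, the ducking sequence described above can be sketched as follows. This is a minimal sketch, not the actual interface of the audio processor 1320 or the pulse audio module 1340; the class, the fields, and the priority values are all assumptions.

```typescript
// Illustrative sketch of the duck-and-restore sequence described above.
// All names (AudioProcessorSketch, Stream, ...) are hypothetical.

type AudioType = "broadcast" | "voiceRecognitionNotice";

interface Stream {
  type: AudioType;
  volume: number; // 0.0 .. 1.0
}

class AudioProcessorSketch {
  // Higher number = higher priority (assumed policy values).
  private static PRIORITY: Record<AudioType, number> = {
    broadcast: 1,
    voiceRecognitionNotice: 2,
  };

  private savedVolume = new Map<Stream, number>();

  constructor(private speakerSetVolume: number, private duckLevel: number) {}

  /** Called when a higher-priority stream (e.g. the recording-started notice) arrives. */
  onNewStream(incoming: Stream, playing: Stream[]): void {
    for (const s of playing) {
      if (AudioProcessorSketch.PRIORITY[incoming.type] >
          AudioProcessorSketch.PRIORITY[s.type]) {
        this.savedVolume.set(s, s.volume);
        s.volume = this.duckLevel; // duck the broadcast audio
      }
    }
    incoming.volume = this.speakerSetVolume; // notice plays at the set level
  }

  /** Called when the higher-priority stream ends: restore ducked streams. */
  onStreamEnded(playing: Stream[]): void {
    for (const s of playing) {
      const saved = this.savedVolume.get(s);
      if (saved !== undefined) {
        s.volume = saved;
        this.savedVolume.delete(s);
      }
    }
  }
}

// Usage: the broadcast ducks while the notice plays, then recovers.
const broadcast: Stream = { type: "broadcast", volume: 0.8 };
const notice: Stream = { type: "voiceRecognitionNotice", volume: 0 };
const proc = new AudioProcessorSketch(0.8, 0.2);
proc.onNewStream(notice, [broadcast]); // broadcast.volume -> 0.2
proc.onStreamEnded([broadcast]);       // broadcast.volume -> 0.8
```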
  • FIG. 16 is a diagram illustrating another example of an operation of an audio processor when a voice recognition function is activated in a digital device according to an embodiment of the present invention. Content duplicating the description of FIG. 15 will not be described again; the following description focuses on differences.
  • Video data corresponding to predetermined content is output on the display unit 1610 of the digital device 1600.
  • the first audio data corresponding to the content is output through the TV speaker of the digital device 1600.
  • The content is a file whose playback can be stopped or paused, such as an MP3 file or a video file.
  • When the voice recognition function is activated by the user, the voice recognition application may be executed, and the first GUI 1630 corresponding to the execution screen of the voice recognition application may be output on the display unit 1610.
  • In this case, the audio processor 1320 may control the pulse audio module 1340 such that only the second audio data and the third audio data are output at the volume level set for the TV speaker; that is, the TV speaker may output only the second audio data and the third audio data without outputting the first audio data.
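  • The difference between FIG. 15 (a live broadcast) and FIG. 16 (a pausable file) reduces to one branch: a live signal can only be ducked or muted, whereas a pausable file may simply be paused. A minimal sketch of that branch, with all names assumed:

```typescript
type ContentKind = "liveBroadcast" | "pausableFile"; // e.g. MP3 or video file

interface ContentSession {
  kind: ContentKind;
  paused: boolean;
  inputVolume: number; // 0.0 .. 1.0
}

// Assumed handler for the moment a voice-recognition notice must play.
function onVoiceRecognitionStart(session: ContentSession): void {
  if (session.kind === "pausableFile") {
    session.paused = true;   // FIG. 16: simply stop/pause playback
  } else {
    session.inputVolume = 0; // FIG. 15: a live signal can only be muted/ducked
  }
}
```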
  • FIG. 17 is a view for explaining another example of an operation of an audio processor when a voice recognition function is activated in a digital device according to an embodiment of the present invention. Portions overlapping with those described above with reference to FIGS. 15 and 16 will not be described again, and the following description will focus on differences.
  • Video data corresponding to predetermined content is output on the display unit 1710 of the digital device 1700.
  • First audio data corresponding to the content is output through the headphone 1740 of the digital device 1700.
  • the content corresponds to a live broadcast signal.
  • When the voice recognition function is activated by the user, the voice recognition application may be executed, and the first GUI 1730 corresponding to the execution screen of the voice recognition application may be output on the display unit 1710.
  • the first audio data and the second audio data may be mixed by a mixer (not shown) and simultaneously output through the headphone 1740.
  • the first audio data and the third audio data may also be mixed by a mixer (not shown) and output simultaneously through the headphones 1740.
  • The audio processor 1320 may control the pulse audio module 1340 to output only the second audio data and the third audio data at the volume level set for the headphone 1740.
  • FIG. 18 illustrates another example of an operation of an audio processor when a voice recognition function is activated in a digital device according to an embodiment of the present invention.
  • The display unit 1810 of the digital device 1800 may output video data corresponding to two or more contents through virtual screen division. For example, video data 1821 corresponding to the first content may be output to a first area of the display unit 1810, and video data 1822 corresponding to the second content may be output to a second area of the display unit 1810.
  • The first content corresponds to a live broadcast signal, and the second content corresponds to video/audio data streamed via Adobe Flash.
  • audio data corresponding to the first content is output through the headphone 1840 and audio data corresponding to the second content is output through the TV speaker.
  • When the voice recognition function is activated by the user pressing a specific button related to the voice recognition function included in the headphone 1840, the voice recognition application may be executed, and the first GUI 1830 corresponding to the execution screen of the voice recognition application may be output to the first area of the display unit 1810 where the first video data is being output.
  • When the digital device 1800 is used by a plurality of users, outputting the audio data related to the voice recognition function only to the user who activated the function through the headphone 1840 may satisfy the intention of all of the users.
  • Accordingly, the audio processor 1320 may leave the output of the second audio data unadjusted so that it continues to be output at the volume level set for the TV speaker.
  • On the other hand, the audio processor 1320 may control the TV service processor 1330 to reduce the input volume level of the first audio data to a preset level based on the volume level set for the headphone 1840, and may control the pulse audio module 1340 so that the third audio data, received from the notification application 1315 and informing that voice recording for voice recognition has started, is output at the volume level set for the headphone 1840. That is, the first audio data and the third audio data may be mixed by the mixer while their output volume levels are adjusted, and simultaneously output through the headphone 1840.
  • When the audio processor 1320 receives, from the notification application 1315, fourth audio data informing that the voice recording for voice recognition is completed, the fourth audio data may also be processed, in relation to the first audio data, in the same manner as the third audio data.
  • In this case, the audio processor 1320 may control the pulse audio module 1340 such that only the third audio data and the fourth audio data are output at the volume level set for the headphone 1840.
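  • The split-screen scenario of FIG. 18 is essentially per-sink routing: the voice-recognition notices follow the output device of the user who triggered them, and audio on the other output is left untouched. A hypothetical sketch (the sink names and stream shapes are assumptions):

```typescript
type Sink = "tvSpeaker" | "headphone";

interface RoutedStream {
  name: string;
  sink: Sink;
  volume: number; // 0.0 .. 1.0
}

// Route a notice to the sink of the user who activated voice recognition,
// duck only the content already on that sink, and leave other sinks untouched.
function routeNotice(
  notice: RoutedStream,
  initiatorSink: Sink,
  playing: RoutedStream[],
  duckLevel: number,
): void {
  notice.sink = initiatorSink;
  for (const s of playing) {
    if (s.sink === initiatorSink) s.volume = duckLevel; // e.g. the first audio data
    // streams on the other sink (e.g. the second audio data) keep their volume
  }
}
```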
  • FIG. 19 illustrates an example of an operation of an audio processor when an event related to a ringtone application occurs in a digital device according to an embodiment of the present invention.
  • Video data 1920 corresponding to predetermined content is output to the display unit 1910 of the digital device 1900.
  • the first audio data corresponding to the content is output through the TV speaker of the digital device 1900.
  • the content corresponds to a live broadcast signal.
  • When a call event occurs through a call application, the call application may be executed and a second GUI 1930 corresponding to an execution screen of the call application may be output on the display unit 1910.
  • The second GUI 1930 may include a message indicating that a call event has occurred and menus for answering or rejecting the call.
  • The pulse audio module 1340 may receive second audio data corresponding to a telephone ring tone from the ringtone application 1313. The pulse audio module 1340 may notify the audio processor 1320 of the reception of the second audio data.
  • The audio processor 1320 may adjust the output of the first and second audio data based on a policy associated with audio data corresponding to a broadcast signal and audio data related to a ringtone application. For example, when the audio data related to the ringtone application has a higher priority than the audio data corresponding to the broadcast signal, the audio processor 1320 may control the pulse audio module 1340 to set the output volume level of the second audio data to the volume level set for the TV speaker, and may control the TV service processor 1330 to reduce the input volume level of the first audio data to a predetermined level based on the volume level set for the TV speaker.
  • the first and second audio data may be mixed and output at a volume level set in the TV speaker.
  • Alternatively, the audio processor 1320 may set the input volume level to zero by controlling the input source corresponding to the first audio data.
  • playback of the content may be paused when audio data related to the ringtone application occurs.
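  • Across FIGS. 15 to 19 the audio processor 1320 applies the same pattern: compare the priority of the interrupting audio type with that of the current audio type and pick an action. One way to picture such a policy is a declarative table; the categories, priorities, and actions below are illustrative assumptions, not values taken from this specification.

```typescript
type Category = "broadcast" | "ringtone" | "voiceRecognitionNotice" | "tts";
type Action = "mixAtSetVolume" | "duckOthers" | "muteOthers" | "pauseOthers";

interface PolicyEntry {
  priority: number; // higher interrupts lower
  action: Action;
}

// Hypothetical policy table in the spirit of the embodiments above.
const POLICY: Record<Category, PolicyEntry> = {
  broadcast:              { priority: 1, action: "mixAtSetVolume" },
  ringtone:               { priority: 3, action: "duckOthers" },   // FIG. 19
  voiceRecognitionNotice: { priority: 3, action: "duckOthers" },   // FIG. 15
  tts:                    { priority: 4, action: "muteOthers" },   // TTS-only output
};

function resolve(interrupting: Category, current: Category): Action | "ignore" {
  return POLICY[interrupting].priority > POLICY[current].priority
    ? POLICY[interrupting].action
    : "ignore";
}

console.log(resolve("ringtone", "broadcast")); // "duckOthers"
```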
  • FIG. 20 illustrates an example of an operation of an audio processor when an event associated with an alert notification application occurs in a digital device according to an embodiment of the present invention.
  • The display unit 2010 of the digital device 2000 displays a screen in which predetermined content is in a stopped or paused state, or a screen unrelated to content including audio data.
  • the headphone 2030 is connected to a specific port included in the audio output unit 1350 of the digital device 2000. At this time, it is assumed that no audio data is output to the headphone 2030 or the TV speaker.
  • When a specific event related to an alert notification application occurs, the alert notification application may be executed and a third GUI 2020 corresponding to an execution screen of the alert notification application may be output on the display unit 2010.
  • the third GUI 2020 may include a message describing the content of the specific event.
  • The pulse audio module 1340 may receive first audio data corresponding to an alert notification sound from the alert notification application 1312. The pulse audio module 1340 may notify the audio processor 1320 of the reception of the first audio data.
  • Based on a policy related to the alert notification application, when the headphone 2030 is connected to a port of the audio output unit 1350 and no specific content including audio data is currently being played, the audio processor 1320 may control the pulse audio module 1340 to simultaneously output the first audio data through both the TV speaker and the headphone 2030.
  • In this case, the first audio data may be output through the TV speaker at the volume level set for the TV speaker and through the headphone 2030 at the volume level set for the headphone 2030.
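  • FIG. 20 thus describes a fan-out rule: when nothing is playing, the alert sound is sent to every connected output, each at that output's own set volume. A minimal sketch under the same assumptions as the earlier ones:

```typescript
interface Output {
  name: string;      // e.g. "tvSpeaker", "headphone2030"
  setVolume: number; // volume level configured for this output
  play(data: ArrayBuffer, volume: number): void;
}

// When no content is playing, send the alert to every connected output,
// each at its own configured volume (FIG. 20).
function playAlertWhenIdle(
  alert: ArrayBuffer,
  outputs: Output[],
  anythingPlaying: boolean,
): void {
  if (anythingPlaying) return; // the non-idle case is handled as in FIG. 21
  for (const out of outputs) out.play(alert, out.setVolume);
}
```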
  • FIG. 21 illustrates another example of an operation of an audio processor when an event related to an alert notification application occurs in a digital device according to an embodiment of the present invention. Portions overlapping with those described above with reference to FIG. 20 will not be described again, and the following description will focus on differences.
  • Video data 2120 corresponding to predetermined content is output to the display unit 2110 of the digital device 2100.
  • the first audio data corresponding to the content is output through the headphone 2140 connected to the digital device 2100.
  • When a specific event related to an alert notification application occurs, the alert notification application may be executed and a third GUI 2130 corresponding to an execution screen of the alert notification application may be output on the display unit 2110.
  • The third GUI 2130 may include a message describing the content of the specific event.
  • The pulse audio module 1340 may receive second audio data corresponding to an alert notification sound from the alert notification application 1312. The pulse audio module 1340 may notify the audio processor 1320 of the reception of the second audio data.
  • Based on a policy related to the alert notification application, when the headphone 2140 is connected to a port of the audio output unit 1350 and the first audio data corresponding to the content currently being played is being output through the headphone 2140, the audio processor 1320 may control the pulse audio module 1340 to output the second audio data through the headphone 2140.
  • the second audio data may be output only to the headphone 2140 and not to the TV speaker.
  • When there is audio data related to the alert notification application, the audio processor 1320 may adjust the output of the first and second audio data based on a policy related thereto. For example, the audio processor 1320 may adjust the output volume level of the second audio data to be greater than, or equal to, the output volume level of the first audio data. According to an embodiment, when the content is a file whose playback can be stopped or paused, playback of the content may be paused. In addition, according to an embodiment, the output volume level of the second audio data may be kept from exceeding the volume level set for the headphone 2140, thereby preventing the user from being startled.
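  • The last point above, that the alert should never exceed the volume level already set for the headphone 2140, is a simple clamp, sketched below with assumed names:

```typescript
// Clamp the alert volume: it may be raised relative to the content,
// but never exceeds the level the user set for the headphone (FIG. 21).
function alertVolume(
  contentVolume: number,
  boost: number,
  headphoneSetVolume: number,
): number {
  return Math.min(contentVolume + boost, headphoneSetVolume);
}

console.log(alertVolume(0.5, 0.3, 0.7)); // 0.7 — capped at the set level
```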
  • As described above, when audio data corresponding to a plurality of contents should be output at the same time, the audio data may be controlled according to the user's intention.
  • The digital device 2200 may include a user interface 2210, a communication module 2220, a storage module 2230, a display module 2240, a controller 2250, and the like.
  • the user interface 2210 may receive text data or image data from a user.
  • the user interface 2210 may generate a window and a graphical user interface for receiving text data or image data from a user.
  • The user interface 2210 may receive a specific signal through a touch panel connected to the display module 2240 of the digital device 2200, and may also receive a specific signal by using an infrared signal received through a sensor module (not shown).
  • The communication module 2220 may transmit data to and receive data from an external server or an external device.
  • Here, the wired/wireless network collectively refers to communication networks supporting various communication standards or protocols, and includes all communication networks that are currently supported or will be supported in the future, together with one or more communication protocols therefor.
  • Such wired/wireless networks may be formed by, for example, networks for wired connection, such as Universal Serial Bus (USB), Composite Video Banking Sync (CVBS), Component, S-Video (analog), Digital Visual Interface (DVI), High Definition Multimedia Interface (HDMI), RGB, and D-SUB, and communication standards or protocols therefor, and by networks for wireless connection, such as Bluetooth, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra Wideband (UWB), ZigBee, Digital Living Network Alliance (DLNA), Wireless LAN (WLAN/Wi-Fi), Wireless broadband (WiBro), World Interoperability for Microwave Access (WiMAX), High Speed Downlink Packet Access (HSDPA), Long Term Evolution/LTE-Advanced (LTE/LTE-A), and Wi-Fi Direct, and communication standards or protocols therefor.
  • the storage module 2230 may store web application information and deep linking data.
  • the storage module may map deep linking data supported for each web application to a web application name and store the mapping.
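  • The mapping kept by the storage module 2230, from a web application name to the deep linking data that application supports, can be pictured as a keyed store. The shapes below are assumptions for illustration only:

```typescript
// Hypothetical shape of the deep linking data kept by the storage module 2230.
interface DeepLinkEntry {
  appId: string;        // e.g. a base URL such as "en.wikipedia.org"
  linkTemplate: string; // e.g. "/wiki/{title}"
}

const deepLinkStore = new Map<string, DeepLinkEntry[]>();

// One web application name mapped to the deep linking data it supports.
deepLinkStore.set("WIKIPEDIA", [
  { appId: "en.wikipedia.org", linkTemplate: "/wiki/{title}" },
]);

function lookup(appName: string): DeepLinkEntry[] {
  return deepLinkStore.get(appName) ?? [];
}

console.log(lookup("WIKIPEDIA")[0].linkTemplate); // "/wiki/{title}"
```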
  • the display module 2240 may process and display at least one deep linking data received by the digital device according to an embodiment of the present invention on a screen.
  • the at least one deep linking data may be extracted from the deep linking data stored in the storage module 2230.
  • The controller 2250 generally manages the functions of at least one of the modules illustrated in FIG. 22, such as the user interface 2210, the communication module 2220, the storage module 2230, and the display module 2240. This is described in more detail below with reference to FIGS. 23 to 31.
  • FIG. 23 illustrates a deep linking function supported by a digital device according to an embodiment of the present invention.
  • The controller of the digital device receives text, queries a meta server for web application information and content information matching the text, receives from the meta server a plurality of web application information and content information matching the text, extracts the deep linking data matching the received plurality of web application information from the storage module, displays on the screen a list of the extracted deep linking data arranged by category, and, when a signal selecting at least one deep linking data from the list is received, may control to display the content of the web application corresponding to the selected deep linking data on the screen.
  • The deep linking data is data provided to support the deep linking function, and the deep linking function may mean a function allowing a user to directly access content located at a lower level of an application, including a web application.
  • Conventionally, a controller of a digital device executes a map application 2320 providing location information when the user selects it on a home screen 2310; when the user searches for a specific place (2330) in the map application 2320, the controller provides a search list 2340 associated with the specific place, and when the user selects a specific item in the search list 2340, controls to display the corresponding place on the screen (2350).
  • In contrast, when text 2370 associated with a specific place is input into a search application on a home screen 2360, the controller of the digital device may determine that the input text is a search for a specific place, and may directly display on the screen a map image 2380 indicating the matching place located closest to the current location of the digital device.
  • a user using a digital device that provides a deep linking function may increase convenience in using an application by eliminating unnecessary processes of entering a lower level of the application.
  • FIG. 24 is a diagram for explaining an example of receiving text using a search application in a digital device according to one embodiment of the present invention.
  • the controller of the digital device may control to display a GUI 2410 for receiving text on a screen when a user executes a search application.
  • the controller may control to query the meta server 2430 for web application information and content information matching the text.
  • The meta server 2430 maps and stores at least one web application information and the content information provided by the web application; the web application information includes an application ID (AppID), and the AppID may consist of a Uniform Resource Locator (URL) address.
  • FIG. 25 illustrates an example in which a digital device receives web application information and content information from a meta server according to an embodiment of the present disclosure.
  • The controller of the digital device receives text from a user using a search application, and may query the meta server 2510 for web application information and content information matching the text.
  • The meta server 2510 may extract the content information matching the text from among all the content information included in the received at least one web application information.
  • The meta server 2510 may transmit, to the digital device, a list 2520 in which the at least one web application information and the content information matching the text among the contents included in the at least one web application are mapped to each other.
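  • The exchange of FIG. 25 is a query/response over the network: the device sends the text and the meta server returns a list pairing web application information with the matching content information. The interfaces and the endpoint below are illustrative assumptions, not the actual meta server protocol:

```typescript
// Assumed shapes for the meta server exchange of FIG. 25.
interface WebAppInfo {
  appId: string; // per the description above, an AppID may be a URL address
  name: string;
}

interface ContentInfo {
  title: string;
  deepLinkPath: string;
}

interface MatchList {
  entries: Array<{ app: WebAppInfo; contents: ContentInfo[] }>;
}

// Hypothetical endpoint; the real meta server address is not given here.
async function queryMetaServer(text: string): Promise<MatchList> {
  const res = await fetch(
    `https://meta.example.com/search?q=${encodeURIComponent(text)}`,
  );
  if (!res.ok) throw new Error(`meta server error: ${res.status}`);
  return (await res.json()) as MatchList;
}
```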
  • FIG. 26 illustrates an example in which a digital device executes a deep linking function using a list received from a meta server, according to an embodiment of the present invention.
  • The controller of the digital device may receive, from the meta server 2610, at least one web application information and the content information of at least one web application matching text input by the user, access the storage module of the digital device to extract the deep linking data matching the received at least one web application information, and control to display on the screen a list 2620 in which the extracted deep linking data are arranged by web application category.
  • When specific deep linking data 2625 is selected from the list 2620, the controller may control to display the content 2630 of the web application corresponding to the selected deep linking data 2625 on the screen.
  • By designing as shown in FIGS. 24 to 26, a user can easily and quickly receive a deep linking service from a digital device in which a web application is installed.
  • FIG. 27 is a diagram for explaining an example of using a search application in a digital device according to one embodiment of the present invention.
  • The controller of the digital device 2700 may control to display a favorite web application list 2710 on the home screen at all times or when a specific function key signal is received.
  • In addition, the controller may control to display on the screen a GUI 2720 allowing the user to receive a deep linking service through the search application.
  • the user may input text into the GUI 2720 by using the voice recognition function 2730 or by using an external input means such as a keyboard 2740.
  • FIG. 28 is a diagram for explaining an example of processing deep linking data in a digital device according to one embodiment of the present invention.
  • The controller of the digital device receives text, queries the meta server for web application information and content information matching the text, receives from the meta server a plurality of web application information and content information matching the text, extracts the deep linking data matching the received plurality of web application information from the storage module, displays on the screen a list of the extracted deep linking data arranged by category, and, when a signal selecting at least one deep linking data from the list is received, may control to display the content of the web application corresponding to the selected deep linking data on the screen.
  • The meta server maps and stores at least one web application information and the content information provided by the web application; the web application information includes an application ID (AppID), and the AppID may consist of a Uniform Resource Locator (URL) address.
  • For example, en.wikipedia.org is a URL address for outputting the main page of WIKIPEDIA, and en.wikipedia.org/wiki/Automobile is a URL address for displaying the Automobile page, one of a plurality of pages of WIKIPEDIA. That is, en.wikipedia.org/wiki/Automobile may be a URL address including information corresponding to the deep linking data of the web application.
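  • Following the WIKIPEDIA example, a deep link is simply the application's base URL (its AppID) combined with a generated parameter. A minimal sketch (the "/wiki/" path convention is taken from the example above; the function is hypothetical):

```typescript
// Build a deep link from an AppID (a base URL, per the description above)
// and a page parameter, following the WIKIPEDIA example.
function buildDeepLink(appId: string, page: string): string {
  return `https://${appId}/wiki/${encodeURIComponent(page)}`;
}

console.log(buildDeepLink("en.wikipedia.org", "Automobile"));
// -> "https://en.wikipedia.org/wiki/Automobile"
```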
  • When the controller of the digital device extracts the deep linking data from the storage module, the controller generates a parameter corresponding to the deep linking data, and may browse the content included in a specific web application by using the web application information and the generated parameter.
  • the parameter may be generated in a different language for each of a plurality of web applications stored in the meta server.
  • the controller of the digital device may transmit a signal for requesting execution of a specific web application to the application manager, and the signal may include an application ID and a parameter of the web application. Accordingly, the application manager may output specific content in a specific web application directly on the screen by using the web application ID and parameters included in the signal.
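  • The execution-request signal carries exactly the two items named above, the web application ID and the parameter. A hypothetical sketch of that hand-off (the field and method names are assumptions):

```typescript
// Hypothetical execution-request signal sent to the application manager.
interface LaunchSignal {
  appId: string;                  // web application ID
  params: Record<string, string>; // parameter generated from the deep linking data
}

interface ApplicationManager {
  launch(signal: LaunchSignal): void;
}

// The application manager uses the ID and parameter in the signal to show
// the specific content of the web application directly on the screen.
function requestDeepLinkedContent(
  mgr: ApplicationManager,
  appId: string,
  page: string,
): void {
  mgr.launch({ appId, params: { page } });
}
```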
  • FIGS. 29 and 30 are diagrams for describing an example in which a digital device generates a search list using a snapshot image according to an embodiment of the present invention.
  • The controller of the digital device 2900 receives text, queries the meta server for web application information and content information matching the text, receives from the meta server a plurality of web application information and content information matching the text, extracts the deep linking data matching the received plurality of web application information from the storage module, and displays on the screen a list 2920 in which the extracted deep linking data are arranged by category; when a signal selecting at least one deep linking data is received from the list 2920, the controller may control to display the content of the web application corresponding to the selected deep linking data on the screen.
  • The list 2920 may include a snapshot image of the content provided by each web application matching the text, and when the user selects at least one snapshot image from the list, the controller may control to display the content corresponding to the selected snapshot image on the screen.
  • That is, the controller may control to display on the screen the input text information 2910 and the search list 2920 including the web application information and content information matching the text information.
  • When a specific snapshot image 2925 is selected, the controller may control to display the content 3030 corresponding to the selected snapshot image 2925 on the screen.
  • the controller may control to display the web application information 3040 for providing the content and the additional information 3050 of the content together with the content 3030 on the screen.
  • FIG. 31 is a flowchart illustrating a control method of a digital device according to an embodiment of the present invention.
  • A control method of a digital device according to an embodiment of the present invention may be implemented by: receiving text input (S3110); querying the meta server for web application information and content information matching the text (S3120); receiving a plurality of web application information and content information matching the text from the meta server (S3130); extracting the deep linking data matching the received plurality of web application information from the storage module (S3140); displaying on a screen a list of the extracted deep linking data arranged by category (S3150); and, when a signal selecting at least one deep linking data is received from the list, displaying the content of the web application corresponding to the selected deep linking data on the screen (S3160). A detailed description of each step is the same as described above, and a repeated description is omitted.
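  • Taken together, steps S3110 to S3160 form a single pipeline. The sketch below shows only the control flow; every helper is a stand-in for the meta server query, the storage-module lookup, and the on-screen display described above.

```typescript
// Control-flow sketch of steps S3110–S3160. All helpers are stand-ins.
type AppMatch = { appId: string; appName: string; contentTitle: string };
type DeepLink = { category: string; url: string };

async function queryMetaServerStub(text: string): Promise<AppMatch[]> { // S3120–S3130
  return [{ appId: "en.wikipedia.org", appName: "WIKIPEDIA", contentTitle: text }];
}

function extractDeepLinks(matches: AppMatch[]): DeepLink[] {            // S3140
  return matches.map((m) => ({
    category: "reference",
    url: `https://${m.appId}/wiki/${encodeURIComponent(m.contentTitle)}`,
  }));
}

async function deepLinkSearch(text: string): Promise<void> {            // S3110: text input
  const matches = await queryMetaServerStub(text);
  const links = extractDeepLinks(matches);

  const byCategory = new Map<string, DeepLink[]>();                     // S3150: list by category
  for (const l of links) {
    const bucket = byCategory.get(l.category) ?? [];
    bucket.push(l);
    byCategory.set(l.category, bucket);
  }
  console.log("S3150: categories", [...byCategory.keys()]);

  const selected = links[0];                                            // stand-in for user selection
  console.log("S3160: display", selected.url);                          // S3160: show content
}

deepLinkSearch("Automobile");
```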
  • The digital device disclosed herein and the content processing method in the digital device are not limited to the configurations and methods of the above-described embodiments; rather, all or some of the embodiments may be selectively combined so that various modifications may be made.
  • the method of operating a digital device disclosed in the present specification may be embodied as processor readable codes on a processor readable recording medium included in the digital device.
  • The processor-readable recording medium includes all kinds of recording devices storing data that can be read by the processor. Examples of processor-readable recording media include read only memory (ROM), random access memory (RAM), CD-ROM, magnetic tape, floppy disks, optical data storage devices, and the like, and also include implementation in the form of a carrier wave.
  • the processor-readable recording medium can also be distributed over network coupled computer systems so that the processor-readable code is stored and executed in a distributed fashion.
  • the present invention relates to digital devices and control methods thereof and has industrial applicability.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present invention relates to a digital device and a control method therefor. A digital device according to an embodiment of the present invention comprises: a pulse audio module for receiving, from an application, first-type audio data and second-type audio data different from the first-type audio data; an audio processing unit; and an audio output unit, wherein the pulse audio module notifies the audio processing unit of the reception of the first-type and second-type audio data, the audio processing unit causes the pulse audio module to adjust an output of the first-type and second-type audio data on the basis of a policy associated with the first and second audio data, and the audio output unit outputs at least one of the first-type and second-type audio data on the basis of a result of the adjustment by the pulse audio module.
PCT/KR2014/011359 2014-02-27 2014-11-25 Dispositif numérique et son procédé de commande WO2015129992A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/121,977 US20170078737A1 (en) 2014-02-27 2014-11-25 Digital device and control method therefor

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US201461945756P 2014-02-27 2014-02-27
US201461945769P 2014-02-27 2014-02-27
US61/945,769 2014-02-27
US61/945,756 2014-02-27
KR10-2014-0130810 2014-09-30
KR10-2014-0131941 2014-09-30
KR1020140130810A KR20150101902A (ko) 2014-02-27 2014-09-30 디지털 디바이스 및 이의 제어 방법
KR1020140131941A KR20150101904A (ko) 2014-02-27 2014-09-30 디지털 디바이스 및 그 제어 방법

Publications (1)

Publication Number Publication Date
WO2015129992A1 true WO2015129992A1 (fr) 2015-09-03

Family

ID=54009276

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2014/011359 WO2015129992A1 (fr) 2014-02-27 2014-11-25 Dispositif numérique et son procédé de commande

Country Status (1)

Country Link
WO (1) WO2015129992A1 (fr)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR940011402U (ko) * 1992-10-23 1994-05-27 엘지전자주식회사 음성인식 텔레비젼 수상기의 합성 음량 자동 조절장치
KR19990013620A (ko) * 1997-07-07 1999-02-25 니시무로타이조 인텔리전트 디지털 텔레비전 수신기
KR20070067425A (ko) * 2005-12-23 2007-06-28 삼성전자주식회사 오디오 출력 제어 장치 및 방법
US20100104255A1 (en) * 2008-10-28 2010-04-29 Jaekwan Yun System and method for orchestral media service
US20130163959A1 (en) * 2011-12-21 2013-06-27 Samsung Electronics Co., Ltd. Content playing apparatus and control method thereof

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107957830A (zh) * 2016-10-14 2018-04-24 富士通株式会社 开发支持系统、开发支持设备、响应控制方法及响应控制设备
CN107957830B (zh) * 2016-10-14 2021-04-23 富士通株式会社 开发支持系统、开发支持设备、响应控制方法及响应控制设备
CN113905027A (zh) * 2021-12-10 2022-01-07 南昌航天广信科技有限责任公司 网络信号优先传输方法、系统、计算机和可读存储介质


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14884042

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 15121977

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14884042

Country of ref document: EP

Kind code of ref document: A1