KR20160148875A - Display device and controlling method thereof - Google Patents

Display device and controlling method thereof

Info

Publication number
KR20160148875A
Authority
KR
South Korea
Prior art keywords
content
metadata
controller
data
service
Prior art date
Application number
KR1020150085633A
Other languages
Korean (ko)
Inventor
설성운
서관희
Original Assignee
엘지전자 주식회사
Priority date
Filing date
Publication date
Application filed by 엘지전자 주식회사 (LG Electronics Inc.)
Priority to KR1020150085633A
Priority to EP16811933.7A (EP3311582A4)
Priority to PCT/KR2016/006376 (WO2016204520A1)
Priority to US15/184,620 (US20160373828A1)
Publication of KR20160148875A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/4728 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for selecting a Region Of Interest [ROI], e.g. for requesting a higher resolution version of a selected region
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/482 End-user interface for program selection

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention relates to a display device and a control method thereof, in which content including at least one object is displayed on a main screen of a display device, metadata for the content is generated for each object, and the content is displayed again based on the generated metadata, wherein the metadata includes at least one of a content classification category, a time stamp, pointing coordinates, an enlargement level, and tag information of the object.

Description

DISPLAY DEVICE AND CONTROL METHOD THEREOF

The present invention relates to a display device and a control method thereof, and more particularly, to a display device and a control method thereof in which, when content is displayed on a main screen and a screen enlargement function for a specific object is executed, metadata is generated at the same time, and the content is then displayed again based on the generated metadata so that the screen enlargement function for the specific object is executed automatically.

Display devices such as smart TVs have recently come into wide use. As the market for premium personalized content expands and the variety of content types grows, there is an increasing need for users to view images in the manner they want.

In the prior art, when a user executed a screen enlargement function to enlarge a specific object, the enlarged image had to be stored separately in order to view it again. This was inconvenient for the user, because storing such images separately requires a large amount of additional storage space.

An embodiment of the present invention provides a display device, and a control method therefor, that generates metadata when content is displayed and, when the content is displayed again, automatically executes the screen enlargement function for a specific object based on that metadata.

Another object of the present invention is to provide a display device and a control method thereof that can receive content and metadata from a broadcaster or a content producer, so that the content can be viewed from the viewpoint and line of sight of the content creator.

Another aspect of the present invention is to provide a display device and a control method thereof that can perform a screen enlargement function on content in the manner desired by the user, by modifying the metadata in the time domain.

Another embodiment of the present invention is to provide a display device and a control method thereof that can provide personalized content to a user by mapping object keywords to object tags.

Another embodiment of the present invention is to provide a display device and a control method thereof that can reduce the need for separate content storage space by providing only the content and the metadata corresponding to it.

According to an embodiment of the present invention, a method of controlling a display device includes: displaying content including at least one object on a main screen of the display device; generating metadata for the content for each object; and displaying the content again based on the generated metadata, wherein the metadata includes at least one of a content classification category, a time stamp, pointing coordinates, an enlargement level, and tag information of the object.
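
As an illustration of the metadata fields enumerated above, the following is a minimal TypeScript sketch. All names here (ObjectMetadata, recordZoom, and the field names) are hypothetical choices for illustration; the patent only enumerates the fields themselves.

```typescript
// Hypothetical per-object metadata record covering the enumerated fields:
// classification category, time stamp, pointing coordinates, enlargement
// level, and tag information.
interface ObjectMetadata {
  category: string;                    // content classification category
  timestampMs: number;                 // playback position of the zoom event
  pointing: { x: number; y: number };  // pointing coordinates of the object
  zoomLevel: number;                   // enlargement level (e.g. 2 for 2x)
  tags: string[];                      // tag information for the object
}

// Record one metadata entry per object whenever a zoom is executed.
function recordZoom(
  store: Map<string, ObjectMetadata[]>,
  objectId: string,
  entry: ObjectMetadata,
): void {
  const entries = store.get(objectId) ?? [];
  entries.push(entry);
  store.set(objectId, entries);
}
```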

According to another embodiment of the present invention, a display device includes: an interface module for transmitting data to and receiving data from an external server; a controller that receives content including at least one object through the interface module, displays the received content on a main screen of the display device, generates metadata for the content for each object, and displays the content again based on the generated metadata; a memory for storing at least one of the displayed content and the generated metadata; and a display module for displaying the content on the main screen according to a control command from the controller, wherein the metadata includes at least one of a content classification category, a time stamp, pointing coordinates, an enlargement level, and tag information of the object.

According to an embodiment of the present invention, when content is displayed, metadata for the content is generated for each object, and the content is displayed again based on the metadata, so that the screen enlargement function is executed automatically and user convenience is improved.
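
Continuing the hypothetical ObjectMetadata sketch above, the re-display step could look roughly like this: when playback reaches a position for which metadata was recorded, the stored zoom is re-applied automatically. The 100 ms matching tolerance and the zoom callback are assumptions made for illustration.

```typescript
// Re-apply a stored zoom when playback reaches the recorded time stamp.
function applyStoredZoom(
  entries: ObjectMetadata[],
  positionMs: number,
  zoom: (x: number, y: number, level: number) => void,
): void {
  // Tolerate small timing jitter between recording and playback.
  const hit = entries.find(e => Math.abs(e.timestampMs - positionMs) < 100);
  if (hit) {
    zoom(hit.pointing.x, hit.pointing.y, hit.zoomLevel);
  }
}
```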

According to another embodiment of the present invention, content and metadata are received from a broadcaster or a content producer, and the content can be viewed from the viewpoint and line of sight of the content creator, thereby improving user convenience.

According to another embodiment of the present invention, by modifying the metadata in the time domain, the user can have the screen enlargement function performed on the content in a desired manner, thereby improving user convenience.

According to another embodiment of the present invention, personalized content can be provided to the user by mapping object keywords to object tags, thereby improving user convenience.
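
The keyword-to-tag mapping could be sketched as follows, again reusing the hypothetical ObjectMetadata record from above; the matching rule (case-insensitive set membership) is an assumption, not something the patent specifies.

```typescript
// Select the metadata entries whose tags match the user's keywords.
function matchesInterest(entry: ObjectMetadata, keywords: string[]): boolean {
  const tags = new Set(entry.tags.map(t => t.toLowerCase()));
  return keywords.some(k => tags.has(k.toLowerCase()));
}

// Usage: keep only the zoom events involving objects the user cares about.
// const personalized = entries.filter(e => matchesInterest(e, ["player7"]));
```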

According to another embodiment of the present invention, by providing only the content and the metadata corresponding to it, the need for separate content storage space is reduced, thereby improving user convenience.

FIG. 1 is a schematic diagram illustrating a service system including a digital device according to an embodiment of the present invention.
FIG. 2 is a block diagram illustrating a digital device according to an embodiment of the present invention.
FIG. 3 is a block diagram illustrating a digital device according to another embodiment of the present invention.
FIG. 4 is a block diagram illustrating a digital device according to another embodiment of the present invention.
FIG. 5 is a block diagram illustrating a detailed configuration of the control unit of FIGS. 2 to 4 according to an embodiment of the present invention.
FIG. 6 is a diagram illustrating an input means coupled to the digital device of FIGS. 2 to 4 according to an embodiment of the present invention.
FIG. 7 is a diagram illustrating a webOS architecture according to an embodiment of the present invention.
FIG. 8 is a diagram illustrating an architecture of a webOS device according to an embodiment of the present invention.
FIG. 9 is a diagram illustrating a graphic composition flow in a webOS device according to an embodiment of the present invention.
FIG. 10 is a diagram illustrating a media server according to an embodiment of the present invention.
FIG. 11 is a block diagram illustrating a configuration of a media server according to an embodiment of the present invention.
FIG. 12 is a diagram illustrating a relationship between a media server and a TV service according to an embodiment of the present invention.
FIG. 13 is a diagram illustrating a control method of a remote controller for controlling any one of the video display devices according to embodiments of the present invention.
FIG. 14 is an internal block diagram of a remote control device for controlling any one of the video display devices according to embodiments of the present invention.
FIG. 15 is a configuration diagram of a multimedia device according to an embodiment of the present invention.
FIG. 16 is a flowchart of a method of controlling a multimedia device according to an embodiment of the present invention.
FIG. 17 is a diagram illustrating a case where a specific area enlargement mode according to an embodiment of the present invention is activated.
FIG. 18 is a diagram illustrating the pointer shape being changed when a specific area enlargement mode according to an embodiment of the present invention is activated.
FIG. 19 is a diagram illustrating control of the screen when a specific area enlargement mode is activated according to an embodiment of the present invention.
FIG. 20 is a diagram illustrating moving a specific point on an enlarged screen to a pointer when a specific area enlargement mode according to an embodiment of the present invention is activated.
FIG. 21 is a diagram illustrating screen control using a remote controller when a specific area enlargement mode is activated according to an embodiment of the present invention.
FIG. 22 is a diagram illustrating automatic execution of a specific area enlargement mode in conjunction with EPG information according to an embodiment of the present invention.
FIG. 23 is a diagram illustrating execution of a specific area enlargement mode and a time shift function in connection with each other according to an embodiment of the present invention.
FIG. 24 is a diagram illustrating switching between a full screen and an enlarged screen according to an embodiment of the present invention.
FIG. 25 is a diagram illustrating enlargement of several points in a screen according to an embodiment of the present invention.
FIG. 26 is a diagram illustrating enlargement by selecting various points on a screen according to an embodiment of the present invention.
FIG. 27 is a diagram illustrating how a mismatch between the remote control coordinates and the input video coordinates is resolved according to an embodiment of the present invention.
FIG. 28 is a diagram illustrating how the case where a specific area to be enlarged is out of the video output range is resolved according to an embodiment of the present invention.
FIG. 29 is a flowchart illustrating a method of dividing the screen into a predetermined number of sections when outputting video data, and enlarging the selected section when a user selects one of the divided sections, according to an embodiment of the present invention.
FIG. 30 is a diagram illustrating the controller dividing the screen into four, nine, or sixteen sections according to user selection when outputting video data according to an embodiment of the present invention.
FIG. 31 is a diagram illustrating a process for adjusting the magnification factor during execution of a specific area enlargement mode according to an embodiment of the present invention.
FIG. 32 is a diagram illustrating a process for selecting an enlarged area during execution of a specific area enlargement mode according to an embodiment of the present invention.
FIG. 33 is a diagram illustrating a process for disabling an associated indicator during execution of a specific area enlargement mode according to an embodiment of the present invention.
FIG. 34 is a diagram illustrating a process for redisplaying an associated indicator that has disappeared during a specific area enlargement mode according to an embodiment of the present invention.
FIG. 35 is a block diagram illustrating a display device according to an embodiment of the present invention.
FIG. 36 is a flowchart of a method of controlling a display device according to an embodiment of the present invention.
FIG. 37 is a flowchart of a method of controlling a display device according to an embodiment of the present invention.
FIG. 38 is a diagram illustrating generation of metadata when a specific object is specified by a pointer according to an embodiment of the present invention.
FIG. 39 is a diagram illustrating metadata generation for each object over time when a specific object is specified by a pointer according to an embodiment of the present invention.
FIG. 40 is a diagram illustrating transmission of metadata from a broadcasting station and a content producer according to an embodiment of the present invention.
FIG. 41 is a diagram illustrating a screen displayed differently according to an embodiment of the present invention.
FIG. 42 is a diagram illustrating editing of metadata according to an embodiment of the present invention.
FIG. 43 is a diagram illustrating icon generation for each object using metadata according to an embodiment of the present invention.
FIG. 44 is a diagram illustrating an embodiment of personalization using metadata according to an embodiment of the present invention.
FIG. 45 is a diagram illustrating identification of family members using a specific application according to an embodiment of the present invention.

Hereinafter, the present invention will be described in detail with reference to the drawings.

The suffix "module" and " part "for components used in the following description are given merely for ease of description, and the" module "and" part "

Meanwhile, the video display device described in the present invention is an intelligent video display device that adds, for example, a computer support function to a broadcast receiving function. While remaining faithful to the broadcast receiving function, it is equipped with an Internet function and can be provided with an interface that is more convenient to use, such as a handwriting input device, a touch screen, or a spatial remote controller. Also, with the support of a wired or wireless Internet function, it can connect to the Internet and a computer and perform functions such as e-mail, web browsing, banking, or games. A standardized general-purpose OS can be used for these various functions.

Therefore, the video display device described in the present invention can freely add or delete various applications on, for example, a general-purpose OS kernel, so that various user-friendly functions can be performed. More specifically, the video display device may be, for example, a network TV, an HBBTV, a smart TV, or the like, and may also be applied to a smartphone in some cases.

The above and other features and advantages of the present invention will become more apparent from the following detailed description taken in conjunction with the accompanying drawings.

The terms used in the present invention are selected, as far as possible, from general terms that are currently in wide use, taking the functions of the present invention into account; however, these terms may vary depending on the intention of a person skilled in the art, custom, or the emergence of new technologies. In certain cases, a term may be arbitrarily chosen by the applicant, in which case its meaning is described in the corresponding part of the description. Therefore, the terminology used herein should be interpreted based on the meaning of the term and the overall contents of this specification, rather than on the mere name of the term.

The term " digital device " as used herein refers to a device that transmits, receives, processes, and outputs data, content, service, And includes all devices that perform at least one or more. The digital device can be paired or connected (hereinafter, referred to as 'pairing') with another digital device, an external server, or the like through a wire / wireless network, Can be transmitted / received. At this time, if necessary, the data may be appropriately converted before the transmission / reception. The digital device may be a standing device such as a network TV, a Hybrid Broadcast Broadband TV (HBBTV), a Smart TV, an IPTV (Internet Protocol TV), a PC (Personal Computer) And a mobile device or handheld device such as a PDA (Personal Digital Assistant), a smart phone, a tablet PC, a notebook, and the like. In order to facilitate understanding of the present invention and to facilitate the description of the present invention, FIG. 2, which will be described later, describes a digital TV, and FIG. 3 illustrates and describes a mobile device as an embodiment of a digital device. In addition, the digital device described in this specification may be a configuration having only a panel, a configuration such as a set-top box (STB), a device, a system, etc. and a set configuration .

The term " wired / wireless network " as used herein collectively refers to communication networks that support various communication standards or protocols for pairing and / or data transmission / reception between digital devices or digital devices and external servers. Such a wired / wireless network includes all of the communication networks to be supported by the standard now or in the future, and is capable of supporting one or more communication protocols therefor. Such a wired / wireless network includes, for example, a USB (Universal Serial Bus), a Composite Video Banking Sync (CVBS), a Component, an S-Video (Analog), a DVI (Digital Visual Interface) A communication standard or protocol for a wired connection such as an RGB or a D-SUB, a Bluetooth standard, a radio frequency identification (RFID), an infrared data association (IrDA), an ultra wideband (UWB) (ZigBee), DLNA (Digital Living Network Alliance), WLAN (Wi-Fi), Wibro (Wireless broadband), Wimax (World Interoperability for Microwave Access), HSDPA (High Speed Downlink Packet Access) A long term evolution (LTE-Advanced), and Wi-Fi direct, and a communication standard or protocol for the network.

In addition, when this specification simply refers to a digital device, the term may mean a fixed device or a mobile device depending on the context, and means both unless specifically stated otherwise.

Meanwhile, a digital device is an intelligent device that supports, for example, a broadcast receiving function, a computer function or support, and at least one external input, and can support e-mail, web browsing, banking, games, applications, and the like through the wired/wireless network described above. In addition, the digital device may have at least one input or control means (hereinafter, an 'input means'), such as a handwriting input device, a touch screen, or a spatial remote controller.

In addition, the digital device can use a standardized general-purpose OS (Operating System); in particular, the digital device described in this specification uses a webOS as an embodiment. Therefore, the digital device can handle adding, deleting, amending, and updating various services or applications on a general-purpose OS kernel or a Linux kernel, through which a more user-friendly environment can be constructed and provided.

Meanwhile, the above-described digital device can receive and process an external input. The external input includes any external input device, that is, any input means or digital device that is connected to the above-described digital device through the wired/wireless network and can transmit/receive data through it. For example, the external input includes game devices such as an HDMI (High Definition Multimedia Interface) device, a PlayStation, or an Xbox, a smartphone, a tablet PC, a printing device such as a pocket photo printer, and digital devices such as a digital camera, a smart TV, and a Blu-ray device.

In addition, the term "server" as used herein refers to a digital device or system that supplies data to or receives data from a digital device, that is, a client, and may be referred to as a processor do. The server provides a Web server, a portal server, and advertising data for providing a web page, a web content or a web content or a web service, An advertising server, a content server for providing content, an SNS server for providing a social network service (SNS), a service server provided by a manufacturer, a video on demand (VoD) server, A service server providing a Multichannel Video Programming Distributor (MVPD) for providing a streaming service, a pay service, and the like.

In addition, in the following description, for convenience of explanation, only applications are mentioned, but depending on the context the meaning may include not only applications but also services. An application may also refer to a web application according to the webOS platform.

Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings.

FIG. 1 is a schematic diagram illustrating a service system including a digital device according to an embodiment of the present invention.

Referring to FIG. 1, the service system includes a content provider 10, a service provider 20, a network provider 30, and a Home Network End User (HNED) (customer) 40. Here, the HNED 40 includes, for example, a client 100, i.e., a digital device according to the present invention.

The content provider 10 produces and provides various content. As shown in FIG. 1, the content provider 10 may include a terrestrial broadcaster, a cable SO (System Operator) or MSO (Multiple SO), a satellite broadcaster, various Internet broadcasters, personal content providers, and the like. Meanwhile, the content provider 10 can produce and provide various services, applications, and the like in addition to broadcast content.

The service provider 20 service-packetizes the content produced by the content provider 10 and provides it to the HNED 40. For example, the service provider 20 packages for service at least one of the content produced by a first terrestrial broadcaster, a second terrestrial broadcaster, a cable MSO, a satellite broadcaster, various Internet broadcasters, applications, and the like, and provides it to the HNED 40.

The service provider 20 provides services to the client 100 in a unicast or multicast manner. Meanwhile, the service provider 20 can transmit data to a plurality of clients 100 registered in advance at once; for this purpose, the IGMP (Internet Group Management Protocol) can be used.

The above-described content provider 10 and service provider 20 may be the same entity. For example, the content produced by the content provider 10 may be service-packaged and provided to the HNED 40 by the content provider itself, which thereby also performs the function of the service provider 20, or vice versa.

The network provider 30 provides a network for exchanging data between the content provider 10 and/or the service provider 20 and the client 100.

The client 100 is a consumer belonging to the HNED 40, and may construct a home network through the network provider 30 to receive data and to transmit/receive data related to various services and applications such as VoD and streaming.

Meanwhile, the content provider 10 and/or the service provider 20 in the service system can use conditional access or content protection means for protecting the transmitted content. Accordingly, the client 100 can use processing means such as a cable card (or POD: Point of Deployment) or DCAS (Downloadable CAS) in response to such conditional access or content protection.

In addition, the client 100 can use a two-way service through the network. Accordingly, the client 100 may itself perform the function or role of a content provider, and the service provider 20 may receive content from the client and transmit it to another client or the like.

In FIG. 1, the content provider 10 and/or the service provider 20 may be a server that provides a service described later in this specification. In this case, the server may also own or include the network provider 30 as needed. The service or service data includes not only services or applications received from the outside but also internal services or applications, and such services or applications may be service or application data for the webOS-based client 100.

FIG. 2 is a block diagram illustrating a digital device according to an embodiment of the present invention.

The digital device described herein corresponds to the client 100 of FIG. 1 described above.

The digital device 200 includes a network interface 201, a TCP/IP manager 202, a service delivery manager 203, an SI decoder 204, a demultiplexer (demux) 205, an audio decoder 206, a video decoder 207, a display A/V and OSD module 208, a service control manager 209, a service discovery manager 210, an SI & metadata database 211, a metadata manager 212, a service manager 213, a UI manager 214, and the like.

The network interface unit 201 transmits/receives IP packets or IP datagrams (hereinafter, IP packet(s)) through an accessed network. For example, the network interface unit 201 can receive services, applications, content, and the like from the service provider 20 of FIG. 1 through the network.

The TCP/IP manager 202 takes part in packet delivery between a source and a destination for the IP packets received by the digital device 200 and the IP packets transmitted by the digital device 200. The TCP/IP manager 202 classifies the received packet(s) according to the appropriate protocol and outputs the classified packet(s) to the service delivery manager 203, the service discovery manager 210, the service control manager 209, the metadata manager 212, and the like.
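
A rough sketch of this classification-and-dispatch step is shown below; the manager names used as keys follow the description above, but the dispatcher itself and its shape are assumptions for illustration, not the patent's implementation.

```typescript
// Minimal dispatcher: classified packets are handed to the matching manager.
type PacketHandler = (payload: Uint8Array) => void;

class TcpIpManagerSketch {
  private handlers = new Map<string, PacketHandler>();

  register(manager: string, handler: PacketHandler): void {
    this.handlers.set(manager, handler);
  }

  // e.g. dispatch("service-delivery", p) or dispatch("metadata", p)
  dispatch(manager: string, payload: Uint8Array): void {
    this.handlers.get(manager)?.(payload);
  }
}
```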

The service delivery manager 203 is responsible for controlling received service data. For example, the service delivery manager 203 may use RTP/RTCP when controlling real-time streaming data. When real-time streaming data is transmitted using RTP, the service delivery manager 203 parses the received data packets according to the RTP and transmits them to the demultiplexer 205, or stores them in the SI & metadata database 211 under the control of the service manager 213. Then, the service delivery manager 203 feeds back the network reception information to the server providing the service using RTCP.

The demultiplexer 205 demultiplexes the received packets into audio data, video data, and SI (System Information) data, and transmits them to the audio/video decoders 206/207 and the SI decoder 204, respectively.

The SI decoder 204 decodes the demultiplexed SI data, that is, service information such as PSI (Program Specific Information), PSIP (Program and System Information Protocol), DVB-SI (Digital Video Broadcasting-Service Information), and DTMB/CMMB (Digital Television Terrestrial Multimedia Broadcasting / Coding Mobile Multimedia Broadcasting) information. Also, the SI decoder 204 may store the decoded service information in the SI & metadata database 211. The stored service information can be read out and used by a corresponding configuration, for example, at a user's request.

The audio/video decoders 206/207 decode the demultiplexed audio data and video data, respectively. The decoded audio data and video data are provided to the user through the display unit 208.

The application manager may include, for example, the UI manager 214 and the service manager 213, and may perform the function of a controller of the digital device 200. In other words, the application manager can manage the overall state of the digital device 200, provide a user interface (UI), and manage other managers.

The UI manager 214 provides a GUI (Graphic User Interface) / UI for a user using an OSD (On Screen Display) or the like, and receives a key input from a user to perform a device operation according to the input. For example, the UI manager 214 transmits the key input signal to the service manager 213 upon receipt of a key input from the user regarding channel selection.

The service manager 213 controls the manager associated with the service such as the service delivery manager 203, the service discovery manager 210, the service control manager 209, and the metadata manager 212.

The service manager 213 also generates a channel map, and controls the selection of a channel using the generated channel map according to the key input received from the UI manager 214. The service manager 213 receives service information from the SI decoder 204 and sets the audio/video PID (Packet Identifier) of the selected channel in the demultiplexer 205. The PID thus set can be used in the demultiplexing process described above. Accordingly, the demultiplexer 205 filters the audio data, video data, and SI data using the PID (PID filtering or section filtering).
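
As an illustration of the PID filtering just described, the sketch below keeps only the transport packets whose PID belongs to the selected channel. The types and names are assumptions; real demultiplexers operate on 188-byte MPEG transport packets, typically in hardware or driver code.

```typescript
// A transport packet reduced to the two fields this sketch needs.
interface TsPacket { pid: number; payload: Uint8Array; }

class PidFilterSketch {
  private selected = new Set<number>();

  // Set by the service manager from the decoded service information.
  setPids(audioPid: number, videoPid: number, siPid: number): void {
    this.selected = new Set([audioPid, videoPid, siPid]);
  }

  // Pass through only the packets of the selected channel.
  filter(packets: TsPacket[]): TsPacket[] {
    return packets.filter(p => this.selected.has(p.pid));
  }
}
```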

The service discovery manager 210 provides information necessary for selecting a service provider that provides the service. Upon receiving a signal regarding channel selection from the service manager 213, the service discovery manager 210 searches for the service using the information.

The service control manager 209 is responsible for the selection and control of services. For example, the service control manager 209 uses IGMP or RTSP when the user selects a live broadcasting service similar to an existing broadcasting system, and uses RTSP to select and control a service such as VOD (Video on Demand). The RTSP protocol may provide a trick mode for real-time streaming. Also, the service control manager 209 can initialize and manage a session through the IMS gateway 250 using an IMS (IP Multimedia Subsystem) and SIP (Session Initiation Protocol). These protocols are one embodiment, and other protocols may be used depending on the implementation.

The metadata manager 212 manages the metadata associated with the service and stores the metadata in the SI & metadata database 211.

The SI & meta data database 211 stores service information decoded by the SI decoder 204, meta data managed by the meta data manager 212, and information necessary for selecting a service provider provided by the service discovery manager 210 . In addition, the SI & meta data database 211 may store set-up data for the system and the like.

The SI & meta data database 211 may be implemented using a non-volatile RAM (NVRAM) or a flash memory.

Meanwhile, the IMS gateway 250 is a gateway that collects functions necessary for accessing the IMS-based IPTV service.

FIG. 3 is a block diagram illustrating a digital device according to another embodiment of the present invention.

While FIG. 2 described above illustrates a standing device as an embodiment of the digital device, FIG. 3 illustrates a mobile device as another embodiment of the digital device.

Referring to FIG. 3, the mobile device 300 includes a wireless communication unit 310, an A/V (Audio/Video) input unit 320, a user input unit 330, a sensing unit 340, an output unit 350, a memory 360, an interface unit 370, a control unit 380, a power supply unit 390, and the like.

Hereinafter, each component will be described in detail.

The wireless communication unit 310 may include one or more modules that enable wireless communication between the mobile device 300 and a wireless communication system, or between the mobile device and a network in which the mobile device is located. For example, the wireless communication unit 310 may include a broadcast receiving module 311, a mobile communication module 312, a wireless Internet module 313, a short-range communication module 314, and a location information module 315.

The broadcast receiving module 311 receives broadcast signals and/or broadcast-related information from an external broadcast management server through a broadcast channel. Here, the broadcast channel may include a satellite channel and a terrestrial channel. The broadcast management server may refer to a server that generates and transmits broadcast signals and/or broadcast-related information, or a server that receives previously generated broadcast signals and/or broadcast-related information and transmits them to a terminal. The broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and also a broadcast signal in which a data broadcast signal is combined with a TV broadcast signal or a radio broadcast signal.

The broadcast-related information may mean information related to a broadcast channel, a broadcast program, or a broadcast service provider. The broadcast-related information may also be provided through a mobile communication network. In this case, it may be received by the mobile communication module 312.

The broadcast-related information may exist in various forms, for example, in the form of an EPG (Electronic Program Guide) or an ESG (Electronic Service Guide).

The broadcast receiving module 311 may receive digital broadcast signals using digital broadcasting systems such as ATSC, DVB-T (Digital Video Broadcasting-Terrestrial), DVB-S (Satellite), MediaFLO (Media Forward Link Only), DVB-H (Handheld), and ISDB-T (Integrated Services Digital Broadcast-Terrestrial). Of course, the broadcast receiving module 311 may be adapted not only to the above-described digital broadcasting systems but also to other broadcasting systems.

The broadcast signal and / or broadcast related information received through the broadcast receiving module 311 may be stored in the memory 360.

The mobile communication module 312 transmits and receives radio signals to and from at least one of a base station, an external terminal, and a server on a mobile communication network. The radio signal may include various types of data according to transmission/reception of a voice call signal, a video call signal, or a text/multimedia message.

The wireless Internet module 313 includes a module for wireless Internet access, and may be built into or external to the mobile device 300. WLAN (Wi-Fi), Wibro (Wireless broadband), Wimax (World Interoperability for Microwave Access), HSDPA (High Speed Downlink Packet Access), and the like can be used as wireless Internet technologies.

The short-range communication module 314 is a module for short-range communication. Bluetooth, RFID (Radio Frequency Identification), IrDA (Infrared Data Association), UWB (Ultra Wideband), ZigBee, RS-232, RS-485, and the like can be used as short-range communication technologies.

The position information module 315 is a module for acquiring position information of the mobile device 300, and may be a GPS (Global Positioning System) module.

The A/V input unit 320 is for inputting audio and/or video signals, and may include a camera 321, a microphone 322, and the like. The camera 321 processes image frames such as still images or moving images obtained by an image sensor in a video call mode or a photographing mode. The processed image frames can be displayed on the display unit 351.

The image frames processed by the camera 321 may be stored in the memory 360 or transmitted to the outside through the wireless communication unit 310. At least two cameras 321 may be provided depending on the use environment.

The microphone 322 receives an external sound signal in a call mode, a recording mode, a voice recognition mode, or the like, and processes it into electrical voice data. In the call mode, the processed voice data can be converted into a form transmittable to a mobile communication base station through the mobile communication module 312 and output. Various noise reduction algorithms may be implemented in the microphone 322 to remove noise generated in the process of receiving an external sound signal.

The user input unit 330 generates input data for the user's operation control of the terminal. The user input unit 330 may include a keypad, a dome switch, a touch pad (resistive/capacitive), a jog wheel, a jog switch, and the like.

The sensing unit 340 senses the current state of the mobile device 300, such as the open/closed state of the mobile device 300, the position of the mobile device 300, whether the user is in contact with it, and the orientation and acceleration/deceleration of the mobile device, and generates a sensing signal for controlling the operation of the mobile device 300. For example, when the mobile device 300 is moved or tilted, the sensing unit can sense the position, slope, and the like of the mobile device. It can also sense whether power is supplied from the power supply unit 390, whether the interface unit 370 is connected to an external device, and so on. Meanwhile, the sensing unit 340 may include a proximity sensor 341 including NFC (Near Field Communication).

The output unit 350 is for generating output related to the visual, auditory, or tactile senses, and may include a display unit 351, a sound output module 352, an alarm unit 353, a haptic module 354, and the like.

The display unit 351 displays (outputs) information processed by the mobile device 300. For example, when the mobile device is in a call mode, it displays a UI or GUI associated with the call. When the mobile device 300 is in a video call mode or a photographing mode, it displays the photographed and/or received video, or the UI and GUI.

The display unit 351 may include at least one of a liquid crystal display (LCD), a thin film transistor-liquid crystal display (TFT LCD), an organic light-emitting diode (OLED) display, a flexible display, and a three-dimensional (3D) display.

Some of these displays may be of a transparent type or a light-transmissive type so that the outside can be seen through them. These can be referred to as transparent displays, and a typical example of a transparent display is the TOLED (Transparent OLED). The rear structure of the display unit 351 may also be of a light-transmissive type. With this structure, the user can see an object located behind the terminal body through the area occupied by the display unit 351 of the terminal body.

There may be two or more display units 351 depending on the implementation form of the mobile device 300. For example, in the mobile device 300, a plurality of display units may be arranged on one surface, spaced apart or integrally, or may be arranged on different surfaces.

When the display unit 351 and a sensor for sensing a touch operation (hereinafter, a 'touch sensor') form a mutual layer structure (hereinafter, a 'touch screen'), the display unit 351 can also be used as an input device in addition to an output device. The touch sensor may have the form of, for example, a touch film, a touch sheet, a touch pad, or the like.

The touch sensor may be configured to convert a change in the pressure applied to a specific portion of the display unit 351, or a change in the capacitance generated at a specific portion of the display unit 351, into an electrical input signal. The touch sensor can be configured to detect not only the touched position and area but also the pressure at the time of the touch.

If there is a touch input to the touch sensor, the corresponding signal(s) are sent to a touch controller. The touch controller processes the signal(s) and then transmits the corresponding data to the control unit 380. In this way, the control unit 380 can know which area of the display unit 351 has been touched.
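
The touch path just described (raw sensor change, then touch controller, then control unit 380) could be sketched as follows; the event shape and the threshold are illustrative assumptions, not part of the patent.

```typescript
// Convert a raw pressure/capacitance change into a touch event and forward it.
interface TouchInput { x: number; y: number; pressure: number; }

class TouchControllerSketch {
  constructor(private forwardToControlUnit: (e: TouchInput) => void) {}

  handleRawChange(x: number, y: number, delta: number): void {
    if (delta > 0) {
      // The control unit can then resolve which display area was touched.
      this.forwardToControlUnit({ x, y, pressure: delta });
    }
  }
}
```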

A proximity sensor 341 may be disposed in an interior area of the mobile device surrounded by the touch screen, or near the touch screen. The proximity sensor refers to a sensor that detects, without mechanical contact, the presence or absence of an object approaching a predetermined detection surface, or an object existing nearby, using the force of an electromagnetic field or infrared rays. The proximity sensor has a longer life span than a contact sensor, and its utility is also higher.

Examples of the proximity sensor include a transmissive photoelectric sensor, a direct reflective photoelectric sensor, a mirror reflective photoelectric sensor, a high-frequency oscillation proximity sensor, a capacitive proximity sensor, a magnetic proximity sensor, and an infrared proximity sensor. When the touch screen is electrostatic, it is configured to detect the proximity of the pointer by the change in the electric field caused by the approach of the pointer. In this case, the touch screen (touch sensor) may be classified as a proximity sensor.

Hereinafter, for convenience of explanation, the act of bringing the pointer close to the touch screen without contact so that the pointer is recognized as being positioned on the touch screen is referred to as a "proximity touch", and the act of actually bringing the pointer into contact with the touch screen is referred to as a "contact touch". The position at which a proximity touch is made with the pointer on the touch screen means the position at which the pointer vertically corresponds to the touch screen when the pointer is proximity-touched.

The proximity sensor detects a proximity touch and a proximity touch pattern (e.g., a proximity touch distance, a proximity touch direction, a proximity touch speed, a proximity touch time, a proximity touch position, a proximity touch movement state, and the like). Information corresponding to the detected proximity touch operation and the proximity touch pattern may be output on the touch screen.
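
The proximity/contact distinction drawn above can be expressed as a small classifier, shown below; the 10 mm detection range is an arbitrary illustrative value, and the function is a sketch rather than the sensor's actual logic.

```typescript
type TouchKind = "contact" | "proximity" | "none";

// Classify a sensed hover distance into contact touch, proximity touch, or none.
function classifyTouch(distanceMm: number, detectionRangeMm = 10): TouchKind {
  if (distanceMm <= 0) return "contact";                   // pointer touches the screen
  if (distanceMm <= detectionRangeMm) return "proximity";  // pointer hovers nearby
  return "none";
}
```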

The sound output module 352 can output audio data received from the wireless communication unit 310 or stored in the memory 360 in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, and the like. The sound output module 352 also outputs sound signals associated with functions performed on the mobile device 300 (e.g., a call signal reception tone, a message reception tone, etc.). The sound output module 352 may include a receiver, a speaker, a buzzer, and the like.

The alarm unit 353 outputs a signal for notifying the occurrence of an event of the mobile device 300. Examples of events occurring in the mobile device include call signal reception, message reception, key signal input, and touch input. The alarm unit 353 may also output a signal for notifying the occurrence of an event in a form other than a video signal or an audio signal, for example, by vibration. Since the video signal or the audio signal may also be output through the display unit 351 or the sound output module 352, these may be classified as part of the alarm unit 353.

The haptic module 354 generates various tactile effects that the user can feel. A typical example of the haptic effect generated by the haptic module 354 is vibration. The intensity and pattern of the vibration generated by the haptic module 354 are controllable. For example, different vibrations may be synthesized and output, or output sequentially. In addition to vibration, the haptic module 354 can generate various other tactile effects, such as a pin arrangement moving vertically against the contacted skin surface, a jetting or suction force of air through a jet or suction port, grazing of the skin surface, contact with an electrode, an effect by electrostatic force, and an effect of reproducing a sense of cold or warmth using an element capable of absorbing or generating heat. The haptic module 354 can be implemented not only to transmit a tactile effect through direct contact but also to allow the user to feel a tactile effect through the muscular sensation of a finger or an arm. Two or more haptic modules 354 may be provided according to the configuration of the mobile device 300.

The memory 360 may store a program for the operation of the control unit 380 and temporarily store input / output data (e.g., phone book, message, still image, moving picture, etc.). The memory 360 may store data related to vibration and sound of various patterns output during touch input on the touch screen.

The memory 360 may include at least one type of storage medium among a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (e.g., SD or XD memory), a RAM (Random Access Memory), an SRAM (Static Random Access Memory), a ROM (Read-Only Memory), an EEPROM (Electrically Erasable Programmable Read-Only Memory), a PROM (Programmable Read-Only Memory), a magnetic memory, a magnetic disk, and an optical disk. The mobile device 300 may also operate in association with web storage that performs the storage function of the memory 360 on the Internet.

The interface unit 370 serves as a pathway to all external devices connected to the mobile device 300. The interface unit 370 receives data from an external device, receives power and transfers it to each component in the mobile device 300, or transmits data in the mobile device 300 to an external device. For example, a wired/wireless headset port, an external charger port, a wired/wireless data port, a memory card port, a port for connecting a device having an identification module, an audio I/O (Input/Output) port, a video I/O port, an earphone port, and the like may be included in the interface unit 370.

The identification module is a chip that stores various pieces of information for authenticating the usage rights of the mobile device 300, and may include a User Identity Module (UIM), a Subscriber Identity Module (SIM), a Universal Subscriber Identity Module (USIM), and the like. A device having an identification module (hereinafter, an 'identification device') can be manufactured in a smart card format. Accordingly, the identification device can be connected to the mobile device 300 through a port.

When the mobile device 300 is connected to an external cradle, the interface unit 370 may be a passage through which power from the cradle is supplied to the mobile device 300, or a passage through which various command signals input by the user at the cradle are transmitted to the mobile device. The various command signals or the power input from the cradle may operate as a signal for recognizing that the mobile device has been correctly mounted on the cradle.

The control unit 380 typically controls the overall operation of the mobile device 300. For example, the control unit 380 performs the control and processing related to voice calls, data communication, video calls, and the like. The control unit 380 may include a multimedia module 381 for multimedia playback. The multimedia module 381 may be implemented within the control unit 380 or separately from the control unit 380. The control unit 380 can perform pattern recognition processing for recognizing handwriting input or drawing input performed on the touch screen as characters and images, respectively.

The power supply unit 390 receives external power and internal power under the control of the controller 380 and supplies power necessary for operation of the respective components.

The various embodiments described herein may be implemented in a recording medium readable by a computer or similar device using, for example, software, hardware, or a combination thereof.

According to a hardware implementation, the embodiments described herein may be implemented using at least one of ASICs (application specific integrated circuits), DSPs (digital signal processors), DSPDs (digital signal processing devices), PLDs (programmable logic devices), FPGAs (field programmable gate arrays), processors, controllers, micro-controllers, microprocessors, and electrical units for performing other functions. In some cases, the embodiments described herein may be implemented by the control unit 380 itself.

According to a software implementation, embodiments such as the procedures and functions described herein may be implemented as separate software modules. Each of the software modules may perform one or more of the functions and operations described herein. Software code may be implemented as a software application written in a suitable programming language. Here, the software code is stored in the memory 360 and can be executed by the control unit 380.

FIG. 4 is a block diagram illustrating a digital device according to another embodiment of the present invention.

Another example of the digital device 400 includes a broadcast receiving unit 405, an external device interface unit 435, a storage unit 440, a user input interface unit 450, a control unit 470, a display unit 480, an audio output unit 485, a power supply unit 490, and a photographing unit (not shown). The broadcast receiving unit 405 may include at least one tuner 410, a demodulator 420, and a network interface 430. In some cases, the broadcast receiving unit 405 may include the tuner 410 and the demodulator 420 but not the network interface 430, or vice versa. Although not shown, the broadcast receiving unit 405 may include a multiplexer to multiplex the signal demodulated by the demodulator 420 via the tuner 410 with the signal received via the network interface 430. Likewise, although not shown, the broadcast receiving unit 405 may include a demultiplexer to demultiplex the multiplexed signal, the demodulated signal, or the signal passed through the network interface 430.

The tuner 410 tunes to the channel selected by the user, or to all pre-stored channels, among the RF (Radio Frequency) broadcast signals received through an antenna, and receives the RF broadcast signal. In addition, the tuner 410 converts the received RF broadcast signal into an intermediate frequency (IF) signal or a baseband signal.

For example, if the received RF broadcast signal is a digital broadcast signal, it is converted into a digital IF signal (DIF); if it is an analog broadcast signal, it is converted into an analog baseband video or audio signal (CVBS/SIF). That is, the tuner 410 can process both digital broadcast signals and analog broadcast signals. The analog baseband video or audio signal (CVBS/SIF) output from the tuner 410 can be directly input to the control unit 470.

In addition, the tuner 410 can receive single-carrier or multi-carrier RF broadcast signals. Meanwhile, the tuner 410 can sequentially tune to and receive the RF broadcast signals of all the broadcast channels stored through a channel storage function, among the RF broadcast signals received through the antenna, and convert them into intermediate frequency signals or baseband signals (DIF: Digital Intermediate Frequency or baseband signal).

The demodulator 420 may receive and demodulate the digital IF signal (DIF) converted by the tuner 410 and perform channel decoding. For this purpose, the demodulator 420 may include a trellis decoder, a de-interleaver, and a Reed-Solomon decoder, or alternatively a convolution decoder, a de-interleaver, and a Reed-Solomon decoder, and the like.

The demodulator 420 may perform demodulation and channel decoding, and then output a stream signal (TS). At this time, the stream signal may be a signal in which a video signal, an audio signal, or a data signal is multiplexed. For example, the stream signal may be an MPEG-2 TS (Transport Stream) in which an MPEG-2 standard video signal, a Dolby AC-3 standard audio signal, and the like are multiplexed.

The stream signal output from the demodulator 420 may be input to the control unit 470. The control unit 470 controls demultiplexing, video/audio signal processing, and the like, and controls the output of video through the display unit 480 and of audio through the audio output unit 485.

The external device interface unit 435 provides an interface environment between the digital device 400 and various external devices. To this end, the external device interface unit 435 may include an A/V input/output unit (not shown) or a wireless communication unit (not shown).

The external device interface unit 435 can be connected by wire or wirelessly to external devices such as a DVD (Digital Versatile Disk) player, a Blu-ray player, a game device, a camera, a camcorder, a computer (notebook), a tablet PC, a smartphone, a Bluetooth device, or the cloud. The external device interface unit 435 transmits signals containing data such as images, video, and audio input through the connected external device to the control unit 470 of the digital device. The control unit 470 can control the processed image, video, audio, and the like to be output to the connected external device. To this end, the external device interface unit 435 may further include an A/V input/output unit (not shown) or a wireless communication unit (not shown).

The A/V input/output unit may include a USB terminal, a CVBS (Composite Video Banking Sync) terminal, a component terminal, an S-video terminal (analog), a DVI (Digital Visual Interface) terminal, an HDMI (High Definition Multimedia Interface) terminal, an RGB terminal, a D-SUB terminal, and the like, so that video and audio signals from an external device can be input to the digital device 400.

The wireless communication unit can perform short-range wireless communication with another digital device. The digital device 400 can be connected to other digital devices according to communication protocols such as Bluetooth, RFID (Radio Frequency Identification), IrDA (Infrared Data Association), UWB (Ultra Wideband), ZigBee, and DLNA (Digital Living Network Alliance).

Also, the external device interface unit 435 may be connected to a set-top box (STB) through at least one of the various terminals described above to perform input/output operations with the set-top box.

Meanwhile, the external device interface unit 435 may receive an application or an application list in an adjacent external device, and may transmit the received application or application list to the control unit 470 or the storage unit 440.

The network interface unit 430 provides an interface for connecting the digital device 400 to a wired/wireless network including the Internet. The network interface unit 430 may include, for example, an Ethernet terminal for connection to a wired network, and may use communication standards such as WLAN (Wireless LAN, Wi-Fi), Wibro (Wireless broadband), Wimax (World Interoperability for Microwave Access), and HSDPA (High Speed Downlink Packet Access) for connection to a wireless network.

The network interface unit 430 can transmit or receive data to or from another user or another digital device via the connected network, or via another network linked to the connected network. In particular, some of the content data stored in the digital device 400 can be transmitted to a user or digital device selected from among other users or other digital devices registered in advance with the digital device 400.

Meanwhile, the network interface unit 430 can access a predetermined web page through the connected network or another network linked to the connected network. That is, it can access a predetermined web page through a network and transmit or receive data with the corresponding server. In addition, it can receive content or data provided by a content provider or a network operator; that is, it can receive content such as movies, advertisements, games, VOD, and broadcast signals, as well as related information, provided by a content provider or a network provider through a network. It can also receive firmware update information and update files provided by the network operator, and can transmit data to the Internet, a content provider, or a network operator.

In addition, the network interface unit 430 can select and receive a desired application, from among applications open to the public, through the network.

The storage unit 440 may store a program for each signal processing and control in the control unit 470 or may store a signal-processed video, audio, or data signal.

The storage unit 440 may also temporarily store video, audio, or data signals input from the external device interface unit 435 or the network interface unit 430. In addition, the storage unit 440 can store information on a predetermined broadcast channel through a channel memory function.

The storage unit 440 may store an application or an application list input from the external device interface unit 435 or the network interface unit 430.

In addition, the storage unit 440 may store various platforms described later.

The storage unit 440 may include at least one type of storage medium among a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (for example, SD or XD memory), a RAM, and a ROM (EEPROM or the like). The digital device 400 may reproduce a content file (a moving image file, a still image file, a music file, a document file, an application file, etc.) stored in the storage unit 440 and provide it to the user.

Although FIG. 4 illustrates an embodiment in which the storage unit 440 is provided separately from the control unit 470, the present invention is not limited thereto. In other words, the storage unit 440 may be included in the control unit 470.

The user input interface unit 450 transfers a signal input by the user to the controller 470 or a signal from the controller 470 to the user.

For example, the user input interface unit 450 may receive and process control signals for power on/off, channel selection, screen setting, and the like from the remote control device 500 according to various communication methods such as RF communication and infrared (IR) communication, and may transmit a control signal from the control unit 470 to the remote control device 500.

In addition, the user input interface unit 450 can transmit a control signal input from a local key (not shown), such as a power key, a channel key, a volume key, or a setting key, to the controller 470.

The user input interface unit 450 transmits a control signal input from a sensing unit (not shown) that senses a gesture of the user to the control unit 470, or transmits a signal from the control unit 470 to the sensing unit (not shown). Here, the sensing unit may include a touch sensor, a voice sensor, a position sensor, an operation sensor, and the like.

The control unit 470 demultiplexes the stream input through the tuner 410, the demodulation unit 420, or the external device interface unit 435, or processes the demultiplexed signals, to generate and output a signal for video or audio output.

The video signal processed by the control unit 470 may be input to the display unit 480 and displayed as an image corresponding to the video signal. The video signal processed by the controller 470 may also be input to an external output device through the external device interface unit 435.

The audio signal processed by the control unit 470 may be output as audio through the audio output unit 485. The audio signal processed by the control unit 470 may also be input to an external output device through the external device interface unit 435.

Although not shown in FIG. 4, the control unit 470 may include a demultiplexing unit, an image processing unit, and the like.

The control unit 470 can control the overall operation of the digital device 400. For example, the controller 470 may control the tuner 410 to tune to an RF broadcast corresponding to a channel selected by the user or a previously stored channel.

The control unit 470 can control the digital device 400 by a user command input through the user input interface unit 450 or by an internal program. In particular, the control unit 470 can access the network and allow the user to download a desired application or application list into the digital device 400.

For example, the control unit 470 controls the tuner 410 so that a signal of the channel selected according to a predetermined channel selection command received through the user input interface unit 450 is input, and processes the video, audio, or data signal of the selected channel. The control unit 470 enables the channel information selected by the user to be output through the display unit 480 or the audio output unit 485 together with the processed video or audio signal.

As another example, according to an external device video playback command received through the user input interface unit 450, the control unit 470 can control a video signal or an audio signal from an external device, such as a camera or a camcorder, input through the external device interface unit 435 to be output through the display unit 480 or the audio output unit 485.

Meanwhile, the control unit 470 can control the display unit 480 to display an image. For example, the control unit 470 can control a broadcast image input through the tuner 410, an external input image input through the external device interface unit 435, an image input through the network interface unit, or an image stored in the storage unit 440 to be displayed on the display unit 480. At this time, the image displayed on the display unit 480 may be a still image or a moving image, and may be a 2D image or a 3D image.

In addition, the control unit 470 can control to reproduce the content. The content at this time may be the content stored in the digital device 400, or the received broadcast content, or an external input content input from the outside. The content may be at least one of a broadcast image, an external input image, an audio file, a still image, a connected web screen, and a document file.

Meanwhile, when entering an application view item, the control unit 470 can control display of a list of applications that can be downloaded from within the digital device 400 or from an external network.

The control unit 470 can control installation and execution of an application downloaded from the external network, together with various user interfaces. In addition, the control unit 470 can control an image related to the executed application to be displayed on the display unit 480 according to the user's selection.

Although not shown in the drawing, a channel browsing processing unit for generating a thumbnail image corresponding to a channel signal or an external input signal may be further provided.

The channel browsing processing unit receives the stream signal (TS) output from the demodulation unit 420 or the stream signal output from the external device interface unit 435, and extracts an image from the input stream signal to generate a thumbnail image. The generated thumbnail image can be input to the controller 470 as it is, or can be encoded in a stream form and then input to the controller 470. The control unit 470 may display a thumbnail list having a plurality of thumbnail images on the display unit 480 using the input thumbnail images. The thumbnail images in this thumbnail list can be updated sequentially or simultaneously, so that the user can easily grasp the contents of a plurality of broadcast channels.

The display unit 480 converts the image signal, data signal, and OSD signal processed by the control unit 470, or the video signal and data signal received from the external device interface unit 435, into R, G, and B signals, respectively, thereby generating driving signals.

The display unit 480 may be a PDP, an LCD, an OLED, a flexible display, a 3D display, or the like.

Meanwhile, the display unit 480 may be configured as a touch screen and used as an input device in addition to the output device.

The audio output unit 485 receives a signal processed by the control unit 470, for example, a stereo signal, a 3.1-channel signal, or a 5.1-channel signal, and outputs it as audio. The audio output unit 485 may be implemented as various types of speakers.

In order to detect a gesture of the user, a sensing unit (not shown) having at least one of a touch sensor, a voice sensor, a position sensor, and an operation sensor may be further provided in the digital device 400. A signal sensed by the sensing unit (not shown) may be transmitted to the controller 470 through the user input interface unit 450.

On the other hand, a photographing unit (not shown) for photographing a user may be further provided. The image information photographed by the photographing unit (not shown) may be input to the control unit 470.

The control unit 470 may detect the gesture of the user by combining the images photographed by the photographing unit (not shown) or the sensed signals from the sensing unit (not shown).

The power supply unit 490 supplies the corresponding power to the digital device 400.

In particular, the power supply unit 490 can supply power to the control unit 470, which can be implemented in the form of a system on chip (SoC), to the display unit 480 for displaying images, and to the audio output unit 485 for audio output.

To this end, the power supply unit 490 may include a converter (not shown) for converting AC power into DC power. Meanwhile, for example, when the display unit 480 is implemented as a liquid crystal panel having a plurality of backlight lamps, the power supply unit 490 may further include an inverter (not shown) capable of PWM (Pulse Width Modulation) operation for variable luminance or dimming driving.

The remote control device 500 transmits the user input to the user input interface unit 450. To this end, the remote control device 500 can use Bluetooth, RF (radio frequency) communication, infrared (IR) communication, UWB (Ultra Wideband), ZigBee, or the like.

Also, the remote control device 500 can receive the video, audio, or data signal output from the user input interface unit 450 and display it on the remote control device 500 or output sound or vibration.

The digital device 400 may be a digital broadcast receiver capable of processing digital broadcast signals of a fixed or mobile ATSC scheme or a DVB scheme.

In addition, the digital device according to the present invention may omit some of the components shown in FIG. 4, or may further include components not shown in FIG. 4. Unlike the above, the digital device may not include a tuner and a demodulator, and may receive and reproduce content through the network interface unit or the external device interface unit.

FIG. 5 is a block diagram illustrating a detailed configuration of the control unit of FIGS. 2 to 4 according to an embodiment of the present invention.

An example of the control unit may include a demultiplexer 510, an image processor 520, an OSD generator 540, a mixer 550, a frame rate converter (FRC) 555, and a formatter 560. The control unit may further include a voice processing unit (not shown) and a data processing unit (not shown).

The demultiplexer 510 demultiplexes the input stream. For example, the demultiplexer 510 may demultiplex an input MPEG-2 TS into video, audio, and data signals. Here, the stream signal input to the demultiplexer 510 may be a stream signal output from a tuner, a demodulator, or an external device interface.

The image processing unit 520 performs image processing of the demultiplexed video signal. To this end, the image processing unit 520 may include a video decoder 525 and a scaler 535.

The video decoder 525 decodes the demultiplexed video signal, and the scaler 535 scales the resolution of the decoded video signal so that it can be output on the display unit.

The video decoder 525 may support various standards. For example, the video decoder 525 performs the function of an MPEG-2 decoder when the video signal is encoded in the MPEG-2 standard, and performs the function of an H.264 decoder when the video signal is encoded in the DMB (Digital Multimedia Broadcasting) or H.264 format.
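To make the standard-dependent behavior above concrete, the following TypeScript sketch dispatches to a decoder implementation by codec. It is only an illustration of the described behavior; the type and class names are hypothetical and not part of this specification.

```typescript
// Hypothetical codec-based decoder dispatch; names are illustrative only.
type Codec = "mpeg2" | "h264";

interface VideoDecoderUnit {
  decode(payload: Uint8Array): Uint8Array; // placeholder: returns raw frames
}

class Mpeg2Decoder implements VideoDecoderUnit {
  decode(payload: Uint8Array): Uint8Array {
    // ... MPEG-2 decoding would happen here ...
    return payload;
  }
}

class H264Decoder implements VideoDecoderUnit {
  decode(payload: Uint8Array): Uint8Array {
    // ... H.264 decoding would happen here ...
    return payload;
  }
}

// The video decoder acts as an MPEG-2 decoder for MPEG-2 streams and as an
// H.264 decoder for DMB/H.264 streams, as described above.
function selectDecoder(codec: Codec): VideoDecoderUnit {
  switch (codec) {
    case "mpeg2":
      return new Mpeg2Decoder();
    case "h264":
      return new H264Decoder();
  }
}
```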

Meanwhile, the video signal decoded by the image processing unit 520 is input to the mixer 550.

The OSD generation unit 540 generates OSD data according to a user input or by itself. For example, the OSD generation unit 540 generates data for displaying various data in graphic or text form on the screen of the display unit 480 based on a control signal from the user input interface unit. The generated OSD data includes various data such as a user interface screen of the digital device, various menu screens, widgets, icons, and viewing rate information. The OSD generation unit 540 may also generate data for displaying broadcast information based on captions of a broadcast image or an EPG.

The mixer 550 mixes the OSD data generated by the OSD generator 540 and the image signal processed by the image processor, and provides the mixed image to the formatter 560. Since the decoded video signal and the OSD data are mixed, the OSD is overlaid on the broadcast image or the external input image.

A frame rate conversion unit (FRC) 555 converts the frame rate of an input image. For example, the frame rate conversion unit 555 may convert a 60 Hz input frame rate to 120 Hz or 240 Hz in accordance with the output frequency of the display unit. There are various methods for converting the frame rate. For example, when converting the frame rate from 60 Hz to 120 Hz, the frame rate conversion unit 555 may insert the same first frame between the first frame and the second frame, or may insert a third frame predicted from the first frame and the second frame. As another example, when converting the frame rate from 60 Hz to 240 Hz, the frame rate conversion unit 555 may insert three identical frames or three predicted frames between existing frames. Meanwhile, when no frame conversion is performed, the frame rate conversion unit 555 may be bypassed.
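As an illustration of the frame-repetition method described above, the sketch below converts an input frame sequence to an integer multiple of its rate by inserting copies of each frame; predicted-frame interpolation is not modeled, and the bypass path is noted in a comment. This is a minimal sketch under assumed inputs, not the unit 555 itself.

```typescript
// A minimal sketch of frame rate conversion by frame repetition. Frames are
// treated as opaque values; predicted-frame interpolation is not modeled.
function convertFrameRate<Frame>(
  frames: Frame[],
  inputHz: number,
  outputHz: number,
): Frame[] {
  if (outputHz === inputHz) return frames; // bypass: no conversion performed
  if (outputHz % inputHz !== 0) {
    throw new Error("this sketch only handles integer rate multiples");
  }
  const repeat = outputHz / inputHz; // 60 -> 120 Hz repeats each frame twice
  const out: Frame[] = [];
  for (const frame of frames) {
    for (let i = 0; i < repeat; i++) {
      out.push(frame); // insert the same frame between existing frames
    }
  }
  return out;
}
```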

The formatter 560 converts the output of the frame rate conversion unit 555 to match the output format of the display unit. For example, the formatter 560 may output R, G, and B data signals, which may be output as low voltage differential signals (LVDS) or mini-LVDS. When the output of the frame rate conversion unit 555 is a 3D video signal, the formatter 560 may configure and output a 3D format matched to the output format of the display unit, thereby supporting a 3D service through the display unit.

On the other hand, the voice processing unit (not shown) in the control unit can perform the voice processing of the demultiplexed voice signal. Such a voice processing unit (not shown) may support processing various audio formats. For example, even when a voice signal is coded in a format such as MPEG-2, MPEG-4, AAC, HE-AAC, AC-3, or BSAC, a corresponding decoder can be provided.

In addition, the voice processing unit (not shown) in the control unit can process bass, treble, volume control, and the like.

A data processing unit (not shown) in the control unit can perform data processing of the demultiplexed data signal. For example, the data processing unit can decode the demultiplexed data signal even when it is coded. Here, the encoded data signal may be EPG information including broadcast information such as a start time and an end time of a broadcast program broadcasted on each channel.

On the other hand, the above-described digital device is an example according to the present invention, and each component can be integrated, added, or omitted according to specifications of a digital device actually implemented. That is, if necessary, two or more components may be combined into one component, or one component may be divided into two or more components. In addition, the functions performed in each block are intended to illustrate the embodiments of the present invention, and the specific operations and devices thereof do not limit the scope of rights of the present invention.

Meanwhile, the digital device may be a video signal processing device that performs signal processing on an image stored in the device or an input image. Other examples of the video signal processing device include a set-top box (STB), a DVD player, a Blu-ray player, a game device, a computer, and the like.

FIG. 6 is a diagram illustrating input means coupled to the digital device of FIGS. 2 through 4 according to one embodiment of the present invention.

A front panel (not shown) or a control means (input means) provided on the digital device 600 is used to control the digital device 600.

The control means includes a remote controller 610, a keyboard 630, a pointing device 620, a touch pad, and the like, which are user interface devices (UIDs) mainly implemented for the purpose of controlling the digital device 600, but may also include control means dedicated to an external input device connected to the digital device 600. In addition, the control means may include a mobile device, such as a smart phone or a tablet PC, that controls the digital device 600 through mode switching or the like, even though it is not primarily intended for controlling the digital device 600. In the following description, a pointing device will be described as an example, but the present invention is not limited thereto.

The input means may employ at least one of communication protocols such as Bluetooth, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra Wideband (UWB), ZigBee, and Digital Living Network Alliance (DLNA), as needed, to communicate with the digital device.

The remote controller 610 is a conventional input device having various key buttons necessary for controlling the digital device 600.

The pointing device 620, equipped with a gyro sensor or the like, implements a pointer on the screen of the digital device 600 corresponding to the user's movement, pressure, rotation, and the like, and transmits a predetermined control command to the digital device 600. Such a pointing device 620 may be called by various names, such as a magic remote controller or a magic controller.

Since the digital device 600 is an intelligent integrated digital device that provides various services such as a web browser, applications, and social network services (SNS), beyond conventional broadcasting, it is not easy to control it with the conventional remote controller 610 alone; thus, the keyboard 630 is implemented similarly to a PC keyboard to complement the input means and provide input convenience for text and the like.

Meanwhile, control means such as the remote controller 610, the pointing device 620, and the keyboard 630 may be provided with a touch pad as required, and may be used for more convenient and various control purposes such as text input and pointer movement.

The digital device described in this specification uses a web OS as its OS and/or platform. Hereinafter, processing such as web OS-based configurations or algorithms may be performed in the control unit of the above-described digital device or the like. Here, the control unit is used as a broad concept that includes the control units of FIGS. 2 to 5 described above. Accordingly, in order to process services, applications, content, and the like related to the web OS in the digital device, the hardware and components including related software, firmware, and the like are collectively named and described as a controller.

Such a web OS-based platform is intended to enhance development independence and functional expandability by integrating services, applications, and the like based on, for example, a Luna-service bus, and can increase application development productivity. Also, multi-tasking can be supported by efficiently utilizing system resources through web OS process and resource management.

Meanwhile, the web OS platform described in this specification can be used not only in fixed devices such as a PC, a TV, and a set-top box (STB), but also in mobile devices such as mobile phones, smart phones, tablet PCs, notebooks, and wearable devices.

The structure of software for digital devices was previously a monolithic structure, dependent on conventional problem solving and the market, based on a single process and closed products built on multi-threading. Since then, it has pursued new platform-based development, cost innovation through chip-set replacement, and more efficient development of UI applications and external applications; layering and componentization were achieved, yielding a three-layer structure together with an add-on structure for add-ons, single-source products, and open applications. More recently, the software architecture has been providing a modular architecture in functional units, a Web Open API (Application Programming Interface) for an eco-system, and a native open API for, for example, a game engine, and accordingly a multi-process structure based on a service structure is being created.

FIG. 7 is a diagram illustrating a web OS architecture according to an embodiment of the present invention.

Referring to FIG. 7, the architecture of the Web OS platform will be described as follows.

The platform can be largely divided into a kernel, a web OS core platform based on system libraries, applications, and services.

The architecture of the web OS platform is a layered structure, with the OS at the lowest layer, the system library(s) at the next layer, and applications at the top.

First, the lowest layer includes a Linux kernel as an OS layer, and can include Linux as an OS of the digital device.

Above the OS layer, a BSP (Board Support Package)/HAL (Hardware Abstraction Layer) layer, a web OS core modules layer, a service layer, a Luna-service bus layer, and a Native Developer Kit (NDK)/QT layer are provided, with the application layer at the top.

Meanwhile, some layers of the above-described web OS layer structure may be omitted, and a plurality of layers may be one layer, or one layer may be a plurality of layer structures.

The web OS core module layer is composed of an LSM (Luna Surface Manager) for managing surface windows and the like, a SAM (System & Application Manager) for managing the execution and execution status of applications, and a WAM (Web Application Manager) for managing web applications based on WebKit.

The LSM manages the application windows displayed on the screen. The LSM manages the display hardware (Display HW), provides a buffer for rendering the content required by applications, and composites the rendering results of a plurality of applications so that they can be output on the screen.

The SAM manages various conditional execution policies of the system and the application.

WAM, on the other hand, is based on the Enyo Framework, which can be regarded as a basic application for web applications.

An application uses a service via the Luna-service bus: a new service can be registered on the bus, and an application can find and use the service it needs.
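The following sketch illustrates how an application might find and use a service over the Luna-service bus. It is modeled on the request style of the webOSTV.js client library; the service URI, method, and parameters are hypothetical examples, not services defined by this specification.

```typescript
// Hedged sketch of a Luna-service bus call, modeled on the webOSTV.js-style
// request API. The service URI, method, and parameters are hypothetical.
declare const webOS: {
  service: {
    request(
      uri: string,
      options: {
        method: string;
        parameters?: Record<string, unknown>;
        onSuccess?: (response: unknown) => void;
        onFailure?: (error: unknown) => void;
      },
    ): void;
  };
};

webOS.service.request("luna://com.example.sampleservice", {
  method: "getStatus",
  parameters: { subscribe: false },
  onSuccess: (response) => console.log("service replied:", response),
  onFailure: (error) => console.error("luna call failed:", error),
});
```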

The service layer may include services of various service levels, such as a TV service and a web OS service. Meanwhile, the web OS service may include a media server, Node.js, and the like. In particular, the Node.js service supports, for example, JavaScript.

The web OS services communicate over the bus as Linux processes that implement function logic. They can be largely divided into four parts: services migrated to the web OS from the TV process and the existing TV, services differentiated by manufacturer, common services of the manufacturer, and Node.js services that are developed in JavaScript and used through Node.js.

The application layer may include all applications that can be supported in a digital device, such as a TV application, a showcase application, a native application, a web application, and the like.

Applications on the web OS can be divided into a web application, a PDK (Palm Development Kit) application, a QML (Qt Meta Language or Qt Modeling Language) application, and the like, depending on the implementation method.

The web application is based on the WebKit engine and is executed on the WAM runtime. These web applications can be based on the Enyo framework, or can be developed and executed based on plain HTML5, CSS (Cascading Style Sheets), and JavaScript.

The PDK application includes a native application developed in C/C++ based on the PDK provided for third-party or external developers. The PDK refers to a development library and tool set provided so that a third party (e.g., a game developer) can develop a native application in C/C++. For example, a PDK application can be used to develop an application whose performance is critical.

The QML application is a Qt-based native application and includes a basic application provided with a web OS platform such as a card view, a home dashboard, a virtual keyboard, and the like. Here, QML is a mark-up language in the form of a script instead of C ++.

In the meantime, the native application is an application that is developed and compiled in C / C ++ and executed in a binary form. The native application has a high speed of execution.

FIG. 8 is a diagram illustrating an architecture of a web OS device according to an embodiment of the present invention.

FIG. 8 is a block diagram based on the runtime of the web OS device, which can be understood with reference to the layered structure of FIG. 7.

The following description will be made with reference to FIGS. 7 and 8.

Referring to FIG. 8, services and applications and WebOS core modules are included on the system OS (Linux) and system libraries, and communication between them can be done via the Luna-Service bus.

Referring to FIG. 8, the following are processed through the web OS core modules such as the aforementioned SAM, WAM, and LSM via the Luna-service bus: Node.js services based on HTML5, CSS, and JavaScript, such as e-mail, contact, and calendar; web OS services such as logging, backup, file notify, database (DB), activity manager, system policy, audio daemon (AudioD), update, and media server; TV services such as EPG (Electronic Program Guide), PVR (Personal Video Recorder), and data broadcasting; CP services such as voice recognition, Now on, Notification, search, ACR (Auto Content Recognition), CBOX (Contents List Browser), wfdd, DMR, Remote Application, download, and SPDIF (Sony Philips Digital Interface Format); native applications such as PDK applications and QML applications; and Enyo framework-based TV UI-related applications and web applications. Meanwhile, the TV applications and web applications above are not necessarily Enyo framework-based or UI-related.

The CBOX can manage the list and metadata of the contents of external devices, such as USB storage, DLNA devices, and cloud storage, connected to the TV. Meanwhile, the CBOX can output the content listings of various content containers such as USB, DMS, DVR, and cloud storage as one integrated view. The CBOX can also display various types of content listings, such as pictures, music, and video, and manage the corresponding metadata. In addition, the CBOX can output the contents of attached storage in real time. For example, when a storage device such as a USB device is plugged in, the CBOX must be able to immediately output the content list of the storage device. At this time, a standardized method for processing the content listing may be defined. In addition, the CBOX can accommodate various connection protocols.
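As a rough illustration of the integrated view described above, the sketch below merges the content listings of several containers into one list. The interfaces are assumptions made for illustration only; a real CBOX would also push listing updates as soon as a storage device is attached.

```typescript
// Illustrative-only types for an integrated content listing across containers.
interface ContentItem {
  title: string;
  kind: "picture" | "music" | "video";
  metadata: Record<string, string>;
}

interface ContentContainer {
  name: string; // e.g. "usb", "dms", "dvr", "cloud"
  list(): ContentItem[];
}

// Merge the per-container listings into one integrated view; a real CBOX
// would also push updates immediately when a storage device is plugged in.
function integratedView(containers: ContentContainer[]): ContentItem[] {
  return containers.flatMap((container) => container.list());
}
```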

The SAM is intended to improve module complexity and enhance scalability. For example, the existing system manager handled various functions such as system UI, window management, web application runtime, and UX constraint processing in a single process, resulting in high implementation complexity; to solve this, the SAM separates the main functions and clarifies the interfaces between functions, thereby lowering the implementation complexity.

The LSM supports independent development and integration of system UX implementations, such as the card view and the launcher, and supports easy response to changes in product requirements. The LSM makes multi-tasking possible by utilizing hardware (HW) resources when compositing a plurality of application screens, such as app-on-app, and can provide a window management mechanism for multi-window and the like.

The LSM supports implementation of a system UI based on QML and improves its development productivity. Based on MVC, QML UX can easily construct views for layouts and UI components, and code for handling user input can easily be developed. Meanwhile, the interface between QML and web OS components is achieved via a QML extension plug-in, and the graphic operations of an application may be based on the Wayland protocol, Luna-service calls, and the like.

LSM is an abbreviation of Luna Surface Manager, as described above, which functions as an Application Window Compositor.

LSM synthesizes independently generated applications, UI components, etc. on the screen and outputs them. In this regard, when components such as recents applications, showcase applications, launcher applications, etc. render their own contents, the LSM defines the output area, interworking method, etc. as a compositor. In other words, the compositor LSM handles graphics synthesis, focus management, input events, and so on. At this time, the LSM receives events, focus, etc. from the input manager. These input managers can include HIDs such as remote controllers, mouse & keyboards, joysticks, game pads, application remotes, and pen touches.

As such, the LSM supports multiple window models, which, due to the nature of the system UI, can be performed simultaneously in all applications. In this regard, the LSM can also support system UI functions such as the launcher, Recents, setting, notification, system keyboard, volume UI, search, finger gestures, voice recognition (STT (Sound to Text), TTS, etc.), pattern gestures (camera, MRCU (Mobile Radio Control Unit)), live menu, ACR (Auto Content Recognition), and the like.

FIG. 9 is a diagram illustrating a graphic composition flow in a web OS device according to an embodiment of the present invention.

Referring to FIG. 9, graphic composition processing can be performed through a web application manager 910 responsible for the UI process, a WebKit 920 responsible for the web process, an LSM 930, and a graphics manager (GM) 940.

When web application-based graphic data (or an application) is generated as a UI process in the web application manager 910, the generated graphic data is transferred to the LSM 930 if it is not a full-screen application. Meanwhile, the web application manager 910 receives the application generated in the WebKit 920, in order to share the GPU (Graphic Processing Unit) memory for graphic management between the UI process and the web process, and transfers it to the LSM 930 if it is not a full-screen application. In the case of a full-screen application, the LSM 930 may be bypassed, and the application may be transmitted directly to the graphics manager 940.

The LSM 930 transmits the received UI application to a Wayland compositor via a Wayland surface, and the Wayland compositor appropriately processes it and transmits it to the graphics manager. The graphic data transferred from the LSM 930 is passed to the graphics manager compositor via, for example, the LSM GM surface of the graphics manager 940.

On the other hand, the full-screen application is passed directly to the graphics manager 940 without going through the LSM 930, as described above, and this application is processed in the graphics manager compositor via the WAM GM surface.

The graphics manager processes all the graphic data in the web OS device: it receives not only the data passed through the LSM GM surface and the data passed through the WAM GM surface described above, but also graphic data passed through GM surfaces of, for example, a data broadcasting application and a caption application, and appropriately outputs the received graphic data on the screen. Here, the functions of the GM compositor are the same as or similar to those of the compositor described above.

FIG. 10 is a diagram for explaining a media server according to an embodiment of the present invention, FIG. 11 is a block diagram illustrating a configuration of a media server according to an embodiment of the present invention, and FIG. 12 is a diagram illustrating the relationship between a media server and a TV service according to an embodiment of the present invention.

The media server supports the execution of various multimedia in the digital device and manages the necessary resources. The media server can efficiently use the hardware resources required for media play. For example, the media server requires audio/video hardware resources for multimedia execution, and can manage the current resource usage status so that resources are utilized efficiently. In general, a fixed device having a larger screen than a mobile device needs more hardware resources to execute multimedia, and must encode/decode and transmit a large amount of data at high speed. Meanwhile, the media server handles, in addition to streaming and file-based playback, tasks such as broadcasting, recording, and tuning, recording simultaneously with viewing, and simultaneously displaying the sender and recipient screens during a video call. However, because the media server is limited in hardware resources such as encoders, decoders, tuners, and display engines, it is difficult to execute a plurality of tasks at the same time; for example, the media server restricts the usage scenario or processes a task by receiving a user selection as input.

The media server can be robust in terms of system stability because, for example, a pipeline in which an error occurs during media playback can be removed and restarted on a per-pipeline basis, so that even if such an error occurs, other media playback is not affected. Such a pipeline is a chain that links unit functions such as decoding, analysis, and output when media playback is requested, and the required unit functions may change according to the media type and the like.

The media server may have extensibility, for example, adding a new type of pipeline without affecting existing implementations. As an example, the media server may accommodate a camera pipeline, a video conference (Skype) pipeline, a third-party pipeline, and the like.

The media server can process general media playback and TV task execution as separate services, because the interface of the TV service is different from that of media playback. The media server supports operations such as 'setchannel', 'channelup', 'channeldown', 'channeltuning', and 'recordstart' in connection with the TV service, and supports operations such as 'play', 'pause', and 'stop' in connection with general media playback; it can support these mutually different operations and process them as separate services.
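The separation can be pictured as two distinct service interfaces, one per operation set, each registered on the bus as its own service. The interface names below are assumptions for illustration; the operation names are the ones listed above.

```typescript
// Two operation sets handled as separate services (interface names assumed).
interface TvService {
  setchannel(channelId: string): void;
  channelup(): void;
  channeldown(): void;
  channeltuning(frequency: number): void;
  recordstart(): void;
}

interface MediaPlaybackService {
  play(): void;
  pause(): void;
  stop(): void;
}
// Each interface would be registered on the bus as its own service, so TV
// tasks and general media playback are processed independently.
```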

The media server can control or integrally manage the resource management function. The allocation and recall of hardware resources in the device are performed integrally by the media server; in particular, the TV service process transfers its running tasks and resource allocation status to the media server. The media server secures resources and executes a pipeline each time media is executed, and, based on the resource status occupied by each pipeline, permits execution according to the priority (e.g., policy) of the media execution request and performs resource recall from other pipelines. Here, predefined execution priorities and the resource information necessary for a specific request are managed by a policy manager, and the resource manager can communicate with the policy manager to process resource allocation, recall, and the like.
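A minimal sketch of the allocation flow described above, under assumed data structures: the resource manager tracks the resources occupied by each pipeline and, when capacity runs short, consults a priority value (standing in for the policy manager's decision) to recall a lower-priority pipeline before granting the new request.

```typescript
// A minimal sketch of priority-based allocation and recall (data structures
// assumed): when capacity runs short, the lowest-priority pipeline is
// recalled, standing in for the policy manager's decision.
interface Pipeline {
  id: string;
  priority: number; // higher value = higher priority (assumption)
  resources: number; // units of hardware resource held
}

class ResourceManagerSketch {
  constructor(private capacity: number, private pipelines: Pipeline[] = []) {}

  private used(): number {
    return this.pipelines.reduce((sum, p) => sum + p.resources, 0);
  }

  allocate(request: Pipeline): boolean {
    while (this.used() + request.resources > this.capacity) {
      const victim = [...this.pipelines].sort(
        (a, b) => a.priority - b.priority,
      )[0];
      // Deny the request unless it outranks the lowest-priority pipeline.
      if (victim === undefined || victim.priority >= request.priority) {
        return false;
      }
      this.pipelines = this.pipelines.filter((p) => p.id !== victim.id);
    }
    this.pipelines.push(request);
    return true;
  }
}
```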

The media server may hold an identifier (ID) for every playback-related operation. For example, the media server may issue a command indicating a particular pipeline based on its identifier. For playback of two or more media, the media server may separate them into two pipelines.

The media server may be responsible for playback of HTML5 standard media.

In addition, the media server may follow a TV restructuring scope in which the TV pipeline becomes a separate service process; the media server itself can be designed regardless of the TV restructuring scope. If the TV is not a separate service process, the entire TV may need to be re-executed when there is a problem with a specific task.

The media server is also referred to as uMS, that is, a micro media server. Here, the media player is a media client, which may mean, for example, a WebKit for an HTML5 video tag, a camera, a TV, Skype, a second screen, and the like.

In the media server, the management of micro resources, by means of a resource manager, a policy manager, and the like, is a core function. In this regard, the media server also performs the playback control role for web standard media content, and may manage pipeline controller resources as well.

Such a media server supports, for example, extensibility, reliability, efficient resource usage, and the like.

In other words, the uMS, that is, the media server, comprehensively manages and controls the use of resources within the web OS device, such as for cloud games, MVPD (pay services and the like), camera preview, second screen, and Skype, so as to enable efficient use. Meanwhile, each resource is used through, for example, a pipeline, and the media server can manage and control the generation, deletion, and use of pipelines for resource management as a whole.

Here, a pipeline refers to, for example, the series of operations, such as request parsing, decoding of the stream, and video output, that begins when media related to a task starts a continuous process. For example, with respect to a TV service or application, watching, recording, channel tuning, and the like are each handled individually, with their resource usage controlled, through a pipeline generated according to the corresponding request.

The processing structure and the like of the media server will be described in more detail with reference to FIG.

Referring to FIG. 10, an application or service is connected to the media server 1020 via the Luna-service bus 1010, and the media server 1020 is connected to, and manages, the generated pipelines, again via the Luna-service bus 1010.

The application or service can have various clients depending on its characteristics and can exchange data with the media server 1020 or the pipeline through it.

The clients include, for example, a uMedia client (WebKit) and an RM (resource manager) client (C/C++) for connection with the media server 1020.

The application including the uMedia client is connected to the media server 1020, as described above. More specifically, the uMedia client corresponds to, for example, a video object to be described later, and the client uses the media server 1020 for the operation of video by a request or the like.

Here, the video operation relates to the video state, and the state data may include loading, unloading, play (playback, or reproduce), pause, stop, and the like. Each operation or state of the video can be processed through individual pipeline generation. Accordingly, the uMedia client sends the state data associated with the video operation to the pipeline manager 1022 in the media server.
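The following sketch shows the shape of such a client: each video operation is forwarded as state data to the pipeline manager, which can then create an individual pipeline for it. The class and message layout are assumptions made for illustration, not the actual uMedia client API.

```typescript
// Assumed message shape: each video operation is forwarded as state data to
// the pipeline manager for individual pipeline generation.
type VideoOperation = "load" | "unload" | "play" | "pause" | "stop";

interface PipelineManager {
  submit(state: { mediaId: string; operation: VideoOperation }): void;
}

class UMediaClientSketch {
  constructor(
    private mediaId: string,
    private manager: PipelineManager,
  ) {}

  request(operation: VideoOperation): void {
    this.manager.submit({ mediaId: this.mediaId, operation });
  }
}
```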

The pipeline manager 1022 obtains information on the resources of the current device through data communication with the resource manager 1024, and requests resource allocation corresponding to the state data of the uMedia client. At this time, the pipeline manager 1022 or the resource manager 1024 controls the resource allocation through data communication with the policy manager 1026 when necessary. For example, when the resource manager 1024 has no resources to allocate in response to a request of the pipeline manager 1022, appropriate resource allocation may be performed according to a priority comparison by the policy manager 1026.

Meanwhile, the pipeline manager 1022 requests the media pipeline controller 1028 to generate a pipeline for the operation requested by the uMedia client, with respect to the resources allocated by the resource manager 1024.

The media pipeline controller 1028 generates the necessary pipelines under the control of the pipeline manager 1022. As shown, the generated pipelines may include not only media pipelines and camera pipelines, but also pipelines related to playback, pause, suspension, and the like. The pipelines may include pipelines for HTML5, web CP, Smartshare playback, thumbnail extraction, NDK, Cinema, MHEG (Multimedia and Hypermedia Information Coding Experts Group), and the like.

In addition, the pipeline may include, for example, a service-based pipeline (its own pipeline) and a URI-based pipeline (media pipeline).

Referring to FIG. 10, an application or service including an RM client may not be directly connected to the media server 1020, because the application or service may process the media directly. In other words, if the application or service processes the media directly, it need not pass through the media server. However, in that case, a uMS connector is needed for resource management for pipeline creation and use. When a resource management request for direct media processing of the application or service is received, the uMS connector communicates with the media server 1020, including the resource manager 1024. To this end, the media server 1020 must also be equipped with a uMS connector.

Accordingly, the application or service can respond to the request of the RM client by receiving resource management from the resource manager 1024 through the uMS connector. Such RM clients can handle services such as native CP, TV service, second screen, Flash player, YouTube Media Source Extensions (MSE), cloud gaming, and Skype. In this case, as described above, the resource manager 1024 can appropriately manage the resources through data communication with the policy manager 1026 when necessary.

Meanwhile, the URI-based pipeline is performed through the media server 1020, rather than processing the media directly as in the case of the RM client described above. Such URI-based pipelines may include a player factory, a GStreamer, a streaming plug-in, a DRM (Digital Rights Management) plug-in pipeline, and the like.

Meanwhile, the interface methods between an application and the media services may be as follows.

The first is a method of interfacing with a service in a web application: a Luna call is made using the Palm Service Bridge (PSB), or Cordova is used, which extends the display with video tags. In addition, there may be a method of using the HTML5 standard for video tags or media elements; a sketch of this method is given after the list of methods below.

The second is a method of interfacing with a service using the PDK.

The third is a method of using a service in an existing CP; existing platform plug-ins can be extended based on Luna, for example for backward compatibility.

Finally, there is a method of interfacing in a non-web OS case; in this case, the Luna bus can be called directly to interface.
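As a sketch of the HTML5 video-tag method mentioned in the first case above, a web application can simply create a standard media element and let the platform route playback (for example, through the media server) without calling the bus itself; the content URL below is a placeholder.

```typescript
// A web application using the HTML5 media element; the content URL below is
// a placeholder. Playback is routed by the platform (e.g., through the
// media server) without the application calling the bus directly.
const video = document.createElement("video");
video.src = "http://example.com/sample.mp4"; // hypothetical media URI
document.body.appendChild(video);

// Standard HTML5 operations map onto playback control.
video.play().catch((error) => console.error("playback failed:", error));
// video.pause(); video.currentTime = 0; // pause / seek work the same way
```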

A seamless change is handled by a separate module (e.g., TVWIN), which is the process for first displaying the TV on the screen without web OS, before or during web OS boot, and handling it seamlessly. Because the boot time of web OS is long, this module is used to provide the basic functions of the TV service first, for a quick response to the user's power-on request. The module is part of the TV service process and supports a quick boot, a seamless change providing basic TV functions, and a factory mode. The module may also serve to switch from the non-web OS mode to the web OS mode.

Referring to FIG. 11, a processing structure of a media server is shown.

In FIG. 11, the solid-line boxes represent process components, and the dashed-line boxes represent internal processing modules within a process. The solid arrows represent inter-process calls, that is, Luna-service calls, and the dotted arrows may represent notifications, such as register/notify, or data flows.

A service or a web application or a PDK application (hereinafter referred to as an application) is connected to various service processing components via a luna-service bus, through which an application is operated or controlled.

The data processing path differs according to the type of application. For example, when the application handles image data related to a camera sensor, the image data is transmitted to the camera processing unit 1130 and processed. The camera processing unit 1130, which includes a gesture module, a face detection module, and the like, processes the image data of the received application. Here, the camera processing unit 1130 can generate a pipeline through the media server processing unit 1110 and process the data through that pipeline, if the user so desires.

Alternatively, when the application includes audio data, the audio processing unit (AudioD) 1140 and the audio module (PulseAudio) 1150 process the audio. For example, the audio processing unit 1140 processes the audio data received from the application and transmits it to the audio module 1150. At this time, the audio processing unit 1140 may include an audio policy manager to determine the processing of the audio data. The processed audio data is then handled by the audio module 1150. Meanwhile, the application may notify the audio module 1150 of data related to the audio data processing, and the associated pipeline may likewise notify the audio module 1150. The audio module 1150 includes an ALSA (Advanced Linux Sound Architecture).

Alternatively, when the application includes or processes DRM-protected content, the content data is transmitted to the DRM service processing unit 1160, and the DRM service processing unit 1160 generates a DRM instance and processes the DRM-protected content data. Meanwhile, to process the DRM-protected content data, the DRM service processing unit 1160 may be connected, through the Luna-service bus, to a DRM pipeline in the media pipeline.

Hereinafter, processing in the case where the application is media data or TV service data (e.g., broadcast data) will be described.

FIG. 12 shows only the media server processing unit and the TV service processing unit in FIG. 11 described above in more detail.

Therefore, the following description will be made with reference to FIGS. 11 and 12.

First, when the application includes TV service data, it is processed in the TV service processing unit 1120/1220.

The TV service processing unit 1120 includes at least one of a DVR/channel manager, a broadcasting module, a TV pipeline manager, a TV resource manager, a data broadcasting module, an audio setting module, and a path manager. Meanwhile, referring to FIG. 12, the TV service processing unit 1220 includes a TV broadcast handler, a TV broadcast interface, a service processing unit, TV middleware (TV MW), a path manager, and a BSP (for example, NetCast). Here, the service processing unit may be a module including, for example, a TV pipeline manager, a TV resource manager, a TV policy manager, a USM connector, and the like.

In this specification, the TV service processing unit may have the configuration of FIG. 11 or FIG. 12, or a combination of both; some components may be omitted, and other components not shown may be added.

The TV service processing unit 1120/1220 transmits the data to the DVR/channel manager when, based on the attribute or type of the TV service data received from the application, the data relates to a DVR (Digital Video Recorder) or a channel, and then generates and processes a TV pipeline through the TV pipeline manager. Meanwhile, when the attribute or type of the TV service data is broadcast content data, the TV service processing unit 1120 generates and processes a TV pipeline through the TV pipeline manager for processing the data via the broadcasting module.

Alternatively, a JSON (JavaScript Object Notation) file or a file written in C is processed by the TV broadcast handler and transmitted to the TV pipeline manager through the TV broadcast interface to generate and process a TV pipeline. In this case, the TV broadcast interface may transmit the data or file that has passed through the TV broadcast handler to the TV pipeline manager based on the TV service policy, and the data or file may be referred to when creating the pipeline.
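As a purely hypothetical illustration of such a JSON file, the sketch below builds a channel-related payload and serializes it. None of the field names are defined by this specification; they are assumptions for illustration only.

```typescript
// Hypothetical channel-related payload; no field name here is defined by
// this specification.
const channelRequest = {
  type: "channel",
  operation: "setchannel",
  channelId: "1-1",
  source: "terrestrial",
};

// Serialized form, as it might cross the TV broadcast interface.
const payload: string = JSON.stringify(channelRequest);
console.log(payload);
```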

Meanwhile, the TV pipeline manager generates one or more pipelines in response to a TV pipeline creation request from a processing module or manager in the TV service, under the control of the TV resource manager. The TV resource manager, in turn, is controlled by the TV policy manager to request the status and allocation of the resources assigned to the TV service according to the TV pipeline creation request of the TV pipeline manager, and communicates with the resource manager in the media server processing unit 1110/1210 through the uMS connector. The resource manager in the media server processing unit 1110/1210 returns the status and availability of resources for the current TV service in response to the request of the TV resource manager. For example, if the resource manager in the media server processing unit 1110/1210 determines that all the resources for the TV service have already been allocated, it can notify the TV resource manager that all resources are currently in use. At this time, together with the notification, the resource manager in the media server processing unit may remove a certain TV pipeline, according to a priority or a predetermined criterion, from among the TV pipelines pre-allocated for the TV service, and may request or allocate generation of a pipeline for the requested TV service. Alternatively, the TV resource manager may appropriately remove, add, or otherwise control TV pipelines according to the status report of the resource manager in the media server processing unit 1110/1210.

Meanwhile, the BSP supports, for example, backward compatibility with existing digital devices.

The TV pipelines thus generated can be appropriately operated according to the control of the path manager in the process. The path manager can determine and control the processing path or process of the pipelines by considering not only the TV pipeline but also the operation of the pipeline generated by the media server processing unit 1110/1210.

Next, when the application includes media data rather than TV service data, it is processed by the media server processing unit 1110/1210. Here, the media server processing unit 1110/1210 includes a resource manager, a policy manager, a media pipeline manager, a media pipeline controller, and the like. Meanwhile, the pipelines generated under the control of the media pipeline manager and the media pipeline controller can be of various kinds, such as a camera preview pipeline, a cloud game pipeline, and a media pipeline. The media pipeline may include a streaming protocol, an auto/static GStreamer, a DRM, and the like, the processing flow of which can be determined according to the control of the path manager. The specific processing in the media server processing unit 1110/1210 follows the description of FIG. 10 above and is not repeated here.

In this specification, the resource manager in the media server processing unit 1110/1210 can perform resource management with, for example, a counter base.

FIG. 13 is a diagram illustrating a method of controlling a remote controller that controls any one of the video display devices according to the embodiments of the present invention.

FIG. 13(a) illustrates that a pointer 205 corresponding to the remote control device 200 is displayed on the display unit 180.

The user can move or rotate the remote control device 200 up and down, left and right (FIG. 13(b)), and back and forth (FIG. 13(c)). The pointer 205 displayed on the display unit 180 of the video display device moves in correspondence with the motion of the remote control device 200. Since the pointer 205 is moved according to movement in 3D space, as shown in the figure, such a remote control device 200 can be called a space remote controller.

FIG. 13(b) illustrates that when the user moves the remote control device 200 to the left, the pointer 205 displayed on the display unit 180 of the video display device also moves to the left correspondingly.

Information on the motion of the remote control device 200, sensed through a sensor of the remote control device 200, is transmitted to the video display device. The video display device can calculate the coordinates of the pointer 205 from the information about the motion of the remote control device 200, and can display the pointer 205 so as to correspond to the calculated coordinates.
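A minimal sketch of the coordinate calculation described above, assuming the remote reports motion deltas from its gyro sensor: the display device scales the deltas by a gain and clamps the pointer to the screen bounds. The gain value and report format are assumptions for illustration.

```typescript
// Assumed motion report: per-axis deltas sensed by the remote's gyro sensor.
interface MotionReport {
  dx: number; // left/right delta
  dy: number; // up/down delta
}

const GAIN = 10; // pixels per unit of motion (illustrative value)

// Scale the deltas and clamp the pointer to the screen bounds.
function updatePointer(
  pointer: { x: number; y: number },
  motion: MotionReport,
  screen: { width: number; height: number },
): void {
  pointer.x = Math.min(screen.width, Math.max(0, pointer.x + motion.dx * GAIN));
  pointer.y = Math.min(screen.height, Math.max(0, pointer.y + motion.dy * GAIN));
}
```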

FIG. 13(c) illustrates a case where the user moves the remote control device 200 away from the display unit 180 while pressing a specific button on the remote control device 200. Thereby, the selected area in the display unit 180 corresponding to the pointer 205 can be zoomed in and displayed enlarged. Conversely, when the user moves the remote control device 200 closer to the display unit 180, the selected area in the display unit 180 corresponding to the pointer 205 can be zoomed out and displayed reduced. Alternatively, the selected area may be zoomed out when the remote control device 200 moves away from the display unit 180, and zoomed in when the remote control device 200 approaches the display unit 180.

Meanwhile, when the specific button on the remote control device 200 is pressed, recognition of up, down, left, and right movement can be excluded. That is, when the remote control device 200 moves away from or approaches the display unit 180, the up, down, left, and right movements are not recognized, and only the forward and backward movements are recognized. In a state where the specific button on the remote control device 200 is not pressed, only the pointer 205 moves in accordance with the up, down, left, and right movements of the remote control device 200.

On the other hand, the moving speed and moving direction of the pointer 205 may correspond to the moving speed and moving direction of the remote control device 200.

The pointer in the present specification means an object displayed on the display unit 180 in correspondence with the operation of the remote control device 200. Therefore, objects of various shapes other than the arrow shape shown in the figure are possible as the pointer 205; for example, the concept may include a point, a cursor, a prompt, a thick outline, and so on. The pointer 205 may be displayed corresponding to a point on the horizontal axis or the vertical axis of the display unit 180, or may be displayed corresponding to a plurality of points, such as a line or a surface.

FIG. 14 is an internal block diagram of a remote controller that controls any one of the video display devices according to the embodiments of the present invention.

Referring to FIG. 14, the remote control device 200 includes a wireless communication unit 225, a user input unit 235, a sensor unit 240, an output unit 250, a power supply unit 260, a storage unit 270, and a control unit 280.

The wireless communication unit 225 transmits / receives a signal to / from any one of the video display devices according to the above-described embodiments of the present invention. Of the video display devices according to the embodiments of the present invention, one video display device 100 will be described as an example.

In this embodiment, the remote control device 200 may include an RF module 221 that can transmit and receive signals to and from the image display device 100 according to the RF communication standard. Also, the remote control device 200 may include an IR module 223 capable of transmitting and receiving signals with the image display device 100 according to the IR communication standard.

In this embodiment, the remote control device 200 transmits a signal containing information on the motion of the remote control device 200 to the image display device 100 through the RF module 221.

Also, the remote control device 200 can receive a signal transmitted by the video display device 100 through the RF module 221. In addition, the remote control device 200 can transmit commands regarding power on/off, channel change, volume change, and the like to the image display device 100 through the IR module 223 as necessary.

The user input unit 235 may include a keypad, a button, a touch pad, or a touch screen. The user can input a command related to the image display device 100 to the remote control device 200 by operating the user input unit 235. When the user input unit 235 has hard key buttons, the user can input a command related to the image display device 100 to the remote control device 200 by pushing a hard key button. When the user input unit 235 has a touch screen, the user can input a command related to the image display device 100 to the remote control device 200 by touching a soft key on the touch screen. In addition, the user input unit 235 may include various types of input means that the user can operate, such as a scroll key and a jog key, and this embodiment does not limit the scope of the present invention.

The sensor unit 240 may include a gyro sensor 241 or an acceleration sensor 243.

The gyro sensor 241 can sense information on the motion of the remote control device 200.

For example, the gyro sensor 241 can sense information on the operation of the remote control device 200 based on the x, y, and z axes. The acceleration sensor 243 can sense information on the moving speed of the remote control device 200 and the like. Meanwhile, a distance measuring sensor may be additionally provided, so that the distance to the display unit 180 can be sensed.

The output unit 250 may output an image or voice signal corresponding to the operation of the user input unit 235 or corresponding to the signal transmitted from the image display device 100. The user can recognize whether the user input unit 235 is operated or whether the video display device 100 is controlled through the output unit 250.

For example, the output unit 250 may include an LED module 251 that lights up when the user input unit 235 is operated or when a signal is transmitted to or received from the image display device 100 through the wireless communication unit 225, a vibration module 253 for generating vibration, an acoustic output module 255 for outputting sound, or a display module 257 for outputting an image.

The power supply unit 260 supplies power to the remote control device 200. The power supply unit 260 can reduce the power consumption by stopping the power supply when the remote controller 200 is not moving for a predetermined time. The power supply unit 260 may resume power supply when a predetermined key provided in the remote control device 200 is operated.

The storage unit 270 may store various types of programs, application data, and the like necessary for the control or operation of the remote control device 200. If the remote control device 200 transmits and receives signals wirelessly with the image display device 100 through the RF module 221, the remote control device 200 and the image display device 100 transmit and receive signals through a predetermined frequency band. The control unit 280 of the remote control device 200 may store, in the storage unit 270, information on a frequency band in which signals can be wirelessly transmitted to and received from the image display device 100 paired with the remote control device 200, and may refer to that information.

The control unit 280 controls various matters related to the control of the remote control device 200. The control unit 280 transmits a signal corresponding to a predetermined key operation of the user input unit 235, or a signal corresponding to the motion of the remote control device 200 sensed by the sensor unit 240, to the image display device 100.

FIG. 15 is a configuration diagram of a multimedia device according to an embodiment of the present invention.

Referring to FIG. 15, a multimedia device 1500 according to an exemplary embodiment of the present invention includes a tuner 1510, a communication module 1520, a controller 1530, a display module 1540, a memory 1550, an EPG signal processing module 1560, and the like. Of course, it is also within the scope of the present invention to modify, delete, or add some modules according to the needs of those skilled in the art. Further, the multimedia device 1500 corresponds to, for example, a television or a set top box (STB). Further, a person skilled in the art can supplement FIG. 15 with reference to FIG. 2 and the like.

The tuner 1510 receives a broadcast signal, an audio decoder (not shown) decodes the audio data included in the received broadcast signal, and a video decoder (not shown) decodes the video data included in the received broadcast signal.

The display module 1540 displays the decoded video data in the first area, and the interface module (or the communication module 1520) receives at least one or more commands from the external device.

The controller 1530 controls at least one of the tuner 1510, the display module 1540, and the interface module. Further, the controller 1530 executes a specific area enlargement mode in accordance with at least one or more commands received from the external device. The controller 1530 also displays, in a second area within the first area, video data corresponding to the video data of the first area; the second area includes an indicator, and the video data displayed in the first area is changed according to at least one of the position or the size of the indicator.

According to another embodiment of the present invention, the above-described process can be applied to the video data stored in the memory 1550 instead of the broadcast signal. In addition, the controller 1530 automatically executes the specific area enlargement mode in accordance with the category information of the received broadcast signal. The category information of the broadcast signal is designed to be processed by the EPG signal processing module 1560.

The above-described indicator is implemented, for example, as a graphic image of a guide box guiding a specific area to be enlarged or already enlarged. This will be described in more detail below with reference to FIG. 19.

The controller 1530 converts the coordinate information of the pointer, which moves according to the motion of the external device, according to the video data of the received broadcast signal. For example, the coordinate information of the pointer is scaled by 0.66 when the resolution information of the video data of the received broadcast signal corresponds to HD (High Definition), scaled by 1 when the resolution information corresponds to FHD (Full High Definition), and scaled by 2 when the resolution information corresponds to UHD (Ultra High Definition). This will be described in more detail below with reference to FIG. 27.

If the enlargement or reduction ratio of the video data displayed in the first area is changed according to at least one command received from the external device after the specific area enlargement mode is executed, the size of the indicator in the second area is automatically changed. This will be described in more detail below with reference to FIG. 31.

If a specific area to be enlarged is recognized in the first area in accordance with at least one command received from the external device after the specific area enlargement mode is executed, the center point of the indicator is automatically changed. This will be described in more detail below with reference to FIG. 32.

The controller 1530 controls the video data and the indicator in the second area to disappear according to at least one or more commands received from the external device, or after a predetermined time has elapsed since the specific area enlargement mode was executed. This will be described in more detail below with reference to FIG. 33.

Further, the controller 1530 displays a graphic image that indicates the execution of the specific area enlargement mode after the video data and the indicator in the second area have all disappeared, and the graphic image includes information indicating the enlargement magnification. Then, the controller 1530 is designed to display the video data and the indicator in the second area again according to a command for selecting the graphic image. This will be described in more detail below with reference to FIG. 34.

The position or size of the indicator is changed based on, for example, information obtained from a touch sensor or a motion sensor of the external device. The external device can be designed, for example, with reference to Figs. 6, 13, and 14 described above. More specifically, for example, the external device corresponds to a remote controller or a mobile device including at least one of an RF (Radio Frequency) module and an IR (Infrared) module.

The above-mentioned first area corresponds to, for example, the entire screen of the television, and the second area corresponds to, for example, a partial area included in the first area. In this regard, it will be described later in detail with reference to FIG. 19.

FIG. 16 is a flowchart of a method of controlling a multimedia device according to an embodiment of the present invention. Of course, those skilled in the art can supplement FIG. 16 with reference to the above description of FIG. 15.

The multimedia device according to an embodiment of the present invention decodes video data received from the outside or stored in a memory (S1610), displays the decoded video data in a first area (S1620), and receives at least one or more commands from an external device (S1630). The multimedia device corresponds to, for example, a television or an STB.

Further, the multimedia device executes a specific area enlargement mode according to at least one command received from the external device (S1640), and displays video data corresponding to the video data in the second area in the first area (S1650).

The second area includes an indicator, and the video data displayed in the first area is changed according to at least one of the position and the size of the indicator. In this regard, it will be described later in detail with reference to FIG. 19.

FIG. 17 is a diagram illustrating a case where a specific area enlargement mode according to an embodiment of the present invention is activated.

Referring to FIG. 17, when the controller 1530 receives a command for activating the specific area enlargement mode from the external remote controller 1740 through the communication module 1520, a notification that the specific area enlargement mode has been activated and a pointer 1714 for selecting a specific point to be enlarged are displayed in the first area 1710.

When the controller 1530 receives, from the external remote controller through the communication module 1520, a command specifying a specific point to be enlarged in the first area 1710 using the pointer, the controller 1530 specifies the specific point corresponding to the command, enlarges the area including the specified specific point, and displays the enlarged area.

Referring again to FIG. 17, when the controller 1530 receives a command for deactivating the specific area enlargement mode from the external remote controller 1740 through the communication module 1520, the controller 1530 deactivates the specific area enlargement mode, and the pointer 1724 indicating that a specific point can be selected disappears from the first area 1720.

Referring further to FIG. 17, the controller 1530 receives, from the external remote controller 1740 through the communication module 1520, a command for selecting a specific portion of the first area using the pointer 1734, and displays in advance the specific area 1732 to be enlarged corresponding to the received command. Therefore, there is an advantage in that the user can confirm in advance which region will be enlarged.

FIG. 18 is a diagram illustrating a pointer shape being changed when a specific area enlarging mode according to an embodiment of the present invention is activated.

Referring to FIG. 18, when a command for activating the specific area enlargement mode 1840 is received from the external remote controller 1830, the controller 1530 changes the pointer shape from the original shape (a first graphic image) to another shape (a second graphic image).

For example, if the magnification ratio is increased in the specific area enlargement mode, the controller 1530 changes the pointer shape from the original shape 1810 to the + shape 1820.

When the enlargement ratio is reduced in the specific area enlargement mode, the controller 1530 changes the shape of the pointer to -shaped.

Therefore, according to the embodiment of the present invention, when the specific area enlargement mode is activated, the pointer changes to a magnifying glass shape, and the pointer shape changes according to the increase and decrease of the enlargement ratio, so that the user can intuitively see whether the enlargement ratio is increasing or decreasing, which improves user convenience.

FIG. 19 is a diagram illustrating control of a screen when a specific area enlargement mode is activated according to an embodiment of the present invention. Of course, it is also possible to simply name it "enlargement mode" instead of "specific area enlargement mode" for convenience.

First, a display device according to an embodiment of the present invention displays content on a main screen 1910, and enters an enlargement mode according to an enlargement input request received from a remote controller.

The display device displays a window 1920 containing the content displayed on the main screen 1910, and displays, in the displayed window 1920, an indicator 1930 for selecting a specific area of the displayed content.

The display device enlarges the selected specific area of the displayed content and displays the enlarged specific area on the main screen 1910. The content corresponds to, for example, a moving picture.

Of course, for convenience of explanation, the main screen 1910 may be referred to as a first area and the window 1920 may be referred to as a second area. There is no limitation on the shape or size of the window 1920.

More specifically, for example, when video data included in a broadcast signal is output on the main screen 1910 and an input signal (for example, a press of an OK button for a predetermined time or longer) is received from the remote controller 1940, the video data is also displayed in the window 1920. When the specific area enlargement mode is executed for the first time, the same video data is output to the main screen 1910 and the window 1920. The video data displayed on the main screen 1910 and the video data displayed in the window 1920 are the same and differ only in size.

Further, an indicator 1930 is displayed in the window 1920, and the indicator 1930 is used by the user to select a specific area to be enlarged. For example, as shown in FIG. 19, the indicator 1930 may be implemented as a graphic image of a guide box that guides a specific area to be enlarged or already enlarged, but it is also possible to adopt a graphic image of another shape, all of which fall within the scope of the present invention.

The window 1920 may be referred to as a total window, and the indicator 1930 may be referred to as a local window. The area specified by the indicator 1930 is enlarged and output to the main screen 1910. When the user first enters the specific area enlargement mode, the original video data is output to the main screen 1910; when a specific area is selected using the window 1920 and the indicator 1930, the enlarged video data is displayed on the main screen 1910 in place of the original video data. Further, for example, the position of the indicator 1930 changes in accordance with the change of the position of the pointer 1932. Also, even after a specific area to be enlarged is determined in the specific area enlargement mode, the window 1920 keeps displaying the original video data, only reduced in size. Meanwhile, the main screen 1910 displays video data in which only the specific area is enlarged (by a factor greater than 1, for example 1.2, 1.5, or 2.0 times) rather than the original video data.
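As an illustration of the relationship just described, the following sketch (in Python, with purely illustrative names; the patent does not specify an implementation) computes which region of the original video an indicator selects for a given center point and enlargement ratio. The main screen then renders that region scaled back up to full size, while the window keeps showing the whole frame reduced.

    # A minimal sketch, not the actual implementation: map an indicator
    # centered at (center_x, center_y) with a given enlargement ratio to the
    # region of the original video that is enlarged on the main screen.
    def crop_rect_for_indicator(center_x, center_y, ratio, video_w, video_h):
        crop_w = video_w / ratio   # a 2.0x ratio selects half the width
        crop_h = video_h / ratio   # and half the height of the frame
        return (center_x - crop_w / 2, center_y - crop_h / 2, crop_w, crop_h)

    # Example: a 2.0x enlargement centered in a 1920 x 1080 frame selects the
    # middle 960 x 540 region, which the main screen scales back to 1920 x 1080.
    print(crop_rect_for_indicator(960, 540, 2.0, 1920, 1080))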

For example, the controller 1530 brightens the area inside the indicator 1930 in the window 1920 and darkens the area outside the indicator 1930, so that the user can easily distinguish the area selected for enlargement.

Also, a ratio display bar including an enlargement button and a reduction button for changing the screen enlargement ratio is present on the main screen 1910. The controller 1530 receives, from the external device 1940, a command for selecting one of the reduction button 1922 and the enlargement button 1924 using the pointer 1932, and adjusts the size of the indicator 1930 to a preset ratio corresponding thereto. The controller 1530 changes the shape of the pointer 1932 differently according to the command for selecting the reduction button 1922 or the enlargement button 1924. The ratio display/control bar including the reduction button 1922 and the enlargement button 1924 is called a second indicator and is distinguishable from the indicator 1930 described above.

Here, the minimum value of the ratio in accordance with the selection of the reduction button 1922 may be 1, and the maximum value of the ratio in accordance with the selection of the enlargement button 1924 may be 5. The maximum value of the enlargement ratio is not limited to 5 and is adjustable. If the enlargement ratio is less than 1, the image displayed on the screen is reduced.

For example, when the controller 1530 receives a command to select the shrink button 1922, it changes the shape of the pointer from the original shape 1932 to a - shape. When the controller 1530 receives the command for selecting the enlargement button, it changes the shape of the pointer from the original shape 1932 to the + shape.

Further, the window 1920 can be implemented as a PIP screen, and the size of the PIP screen is designed to be adjustable. For example, when the controller 1530 receives, from the external device (for example, a remote controller), a command for clicking an edge or corner portion of the PIP screen and moving it from a first point to a second point different from the first point, the size of the PIP screen can be adjusted.

Further, the controller 1530 can change the position of the PIP screen.

For example, when a command for clicking a first point of the PIP screen with the pointer and moving it from the first point to a different second point in the first area is received from the external remote controller, the controller 1530 can change the position of the PIP screen. The PIP screen described here corresponds to the window 1920 shown in FIG. 19.

For example, if the window 1920 continues to be displayed, it may be inconvenient for the user to watch the video data being played back. Accordingly, after a predetermined time (for example, three seconds) has elapsed, the controller 1530 switches the window and the indicator to a hidden state, and, upon receiving a preset command from the external device 1940, displays them again as a PIP screen.

The controller 1530 also switches the window and the indicator to the hidden state when the pointer 1932 is located at the right, left, upper, or lower boundary line of the second area 1920, and, upon receiving a specific command from the external device 1940, displays the window 1920 again within the full screen 1910 as a PIP screen.

The controller 1530 moves the indicator 1930 using the pointer 1932 and changes the video data displayed on the main screen 1910 in accordance with the change in the position of the indicator 1930. The video data in the area specified by the indicator 1930 and the video data enlarged and displayed on the main screen 1910 are the same and differ only in size, as will be apparent to those skilled in the art. More specifically, for example, when the indicator 1930 in the window 1920 contains only a specific object, the main screen 1910 also displays video data containing only that specific object, which differs from the video data within the indicator only in size.

Therefore, by displaying the changed position and size of the indicator 1930 in real time, there is a technical effect that the enlarged specific area of the original video data can be checked more quickly.

To summarize, when the specific area enlargement mode is executed, the original video data is output to both the main screen 1910 and the window 1920; however, the window 1920 displays the video data reduced in size only. For specific area enlargement, the pointer 1932 may be located within the main screen 1910 or within the window 1920, and the specific area to be enlarged is confirmed with the pointer 1932 as its center point.

However, if the specific area to be enlarged is confirmed, the main screen 1910 displays video data in which a specific area is enlarged instead of the original video data. Further, due to the enlargement ratio adjustment or the like, the enlarged video data displayed on the main screen 1910 can be designed to return to the original video data. After the original video data is displayed again on the main screen 1910, a specific area to be enlarged by selecting any point in the main screen 1910 can be newly designated. Of course, it is also within the scope of the present invention to newly designate a specific area to be enlarged by using the indicator 1930 in the window 1920.

Further, when the enlargement/reduction ratio is adjusted using the external device 1940 or the like while enlarged video data is output on the main screen 1910, the size of the indicator 1930 in the window 1920 is designed to change automatically. Therefore, the user can easily check in real time which portion of the window 1920 the video data enlarged or reduced on the main screen 1910 corresponds to.

The second indicators 1922 and 1924 shown in Fig. 19 are used for setting the magnification level. Further, the content output to the main screen 1910 is received through a tuner or via an external device. The external device corresponds to at least one of STB, PC, or cellular phone, for example.

The size of the indicator 1930 is automatically changed according to the magnification level selected through the second indicators 1922 and 1924.

Further, although not shown in FIG. 16, a person skilled in the art will understand, with reference to FIG. 19 described above, that the method may further comprise receiving a first enlargement level for enlarging the displayed content, displaying the indicator with a first display size based on the received first enlargement level, receiving a second enlargement level for enlarging the displayed content, and displaying the indicator with a second display size based on the received second enlargement level.

The window 1920 includes, for example, a Picture In Picture (PIP) window.

Moving the window 1920 within the main screen 1910 also falls within the scope of the present invention, and moving the indicator 1930 in order to select another specific area of the content displayed in the window 1920 also falls within the scope of the present invention.

The indicator 1930 moves according to the pointer 1932 signal received from the remote controller 1940 and the size of the indicator 1930 is changed according to the wheel signal received from the remote controller 1940.

The display device increases the size of the indicator 1930 in accordance with a reduced magnification level 1922, and decreases the size of the indicator 1930 in accordance with an increased magnification level 1924.

The indicator 1930 is implemented, for example, as a graphic image of a guide box that guides a specific area to be enlarged or already enlarged.

The method further comprises converting the coordinate information of the pointer, which moves according to the motion of the remote controller, according to the video data of the content output on the main screen of FIG. 19. For example, the coordinate information of the pointer is scaled by 0.66 when the resolution information of the video data of the content corresponds to HD (High Definition), scaled by 1 when the resolution information corresponds to FHD (Full High Definition), and scaled by 2 when the resolution information corresponds to UHD (Ultra High Definition). In this regard, it will be described in more detail below with reference to FIG. 27.

The display device controls the window 1920 and the indicator 1930 to disappear according to at least one or more commands received from the remote controller 1940, or after a predetermined time has elapsed since the enlargement mode was executed. After the window and the indicator have disappeared, a graphic image is displayed to indicate that the enlargement mode is being executed, and the graphic image includes information indicating the enlargement magnification. The window 1920 and the indicator 1930 are displayed again in response to a command for selecting the graphic image. In this regard, it will be described later in detail with reference to FIGS. 31 to 34.

FIG. 20 illustrates moving a specific point on an enlarged screen with a pointer when a specific area enlargement mode according to an embodiment of the present invention is activated.

Referring to FIG. 20, the controller 1530 displays the area specified by the indicator 2030 on the full screen in the first area 2010. When a specific point 2012 of the entire screen is specified using the pointer, the center point of the area specified by the indicator 2030 is moved from the existing center point 2014 to the specified point 2012, a new enlarged area is generated, and the generated new enlarged area is displayed on the full screen.

Further, according to another embodiment of the present invention, it is possible to select the center point of a specific area to be enlarged in the second area 2020, or to select it in the first area 2010. When the center point of the specific area to be enlarged is selected using the first area 2010, the enlarged area can be finely adjusted. When the center point is selected using the second area 2020, there is an advantage that the specific area can be changed while checking the original video data as a whole.

FIG. 21 is a diagram illustrating screen control using a remote controller when a specific area enlargement mode is activated according to an embodiment of the present invention. As described above, the multimedia device (TV or STB) according to an embodiment of the present invention is controlled by an external device, and the external device corresponds to a remote controller or a mobile device. Although FIG. 21 illustrates a remote controller as an example of an external device, the scope of the present invention is not limited to a remote controller alone.

According to one embodiment of the present invention, the external remote controller 2140 includes a wheel key 2142, a direction key 2144, and a volume key 2146.

The controller 1530 adjusts the screen enlargement ratio according to the operation of the wheel key 2142 when receiving a specific command corresponding to the operation of the wheel key 2142 from the external remote controller 2140.

For example, when the controller 1530 receives a specific command corresponding to an input for rotating the wheel in the upward direction of the wheel key 2142 from the external remote controller, the controller 1530 increases the screen enlargement ratio. The controller 1530 reduces the screen enlargement ratio when receiving a specific command corresponding to an input for rotating the wheel in the downward direction of the wheel key 2142 from the external remote controller.

The user can change the screen enlargement ratio from 1 to 5 times with the wheel key of the remote controller, the ratio changing by 0.2 times each time the wheel key is moved by one unit. The screen enlargement ratio is designed not to be fixed but to be changeable by user setting.
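A minimal sketch of this wheel-based adjustment, assuming the figures quoted above (steps of 0.2x per wheel unit, clamped to the 1x to 5x range); the function name and structure are illustrative, not taken from the patent:

    MIN_RATIO, MAX_RATIO, STEP = 1.0, 5.0, 0.2

    def adjust_ratio(current_ratio, wheel_units):
        # wheel_units > 0: wheel rotated upward (zoom in); < 0: downward (zoom out)
        new_ratio = current_ratio + wheel_units * STEP
        return max(MIN_RATIO, min(MAX_RATIO, new_ratio))

    print(adjust_ratio(1.0, 3))    # 1.6 (three wheel units up)
    print(adjust_ratio(4.9, 2))    # 5.0 (clamped at the maximum)
    print(adjust_ratio(1.1, -2))   # 1.0 (clamped at the minimum)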

The controller 1530 adjusts the screen enlargement ratio according to the operation of the volume key 2146 when receiving a specific command corresponding to the operation of the volume key 2146 from the external remote controller 2140.

For example, when the controller 1530 receives a specific command corresponding to the + portion of the volume key 2146 from the external remote controller, the controller 1530 increases the screen enlargement ratio. When the controller 1530 receives a specific command corresponding to the - portion of the volume key 2146 from the external remote controller, the controller 1530 reduces the screen enlargement ratio.

When the controller 1530 receives a specific command corresponding to the operation of the direction key 2144 from the external remote controller 2140, the controller 1530 moves the center point of the area specified by the indicator 2130 to a specific point according to the operation of the direction key 2144, generates enlarged video data around the specific point, and displays the generated enlarged video data in the first area 2110.

When the enlargement ratio and position of the specific area are changed using the keys of the external remote controller, the position and size of the indicator 2130 in the second area 2120 corresponding to the PIP screen are changed in association therewith. According to another embodiment of the present invention, the external remote controller 2150 includes a volume key 2156, a channel key 2152, and a touch pad 2154. Of course, the external remote controller 2150 can also be controlled by a motion sensor or a voice recognition sensor.

When the controller 1530 receives a specific command corresponding to the operation of the volume key 2156 from the external remote controller 2150, the controller 1530 adjusts the screen enlargement ratio according to the operation of the volume key 2156.

For example, when the controller 1530 receives a specific command corresponding to the upward direction portion of the volume key 2156 from the external remote controller, the controller 1530 increases the screen enlargement ratio. The controller 1530 decreases the screen enlargement ratio when receiving a specific command corresponding to the downward portion of the volume key 2156 from the external remote controller.

The controller 1530 adjusts the screen enlargement ratio according to the operation of the channel key 2152 when receiving a specific command corresponding to the operation of the channel key 2152 from the external remote controller 2150.

For example, when the controller 1530 receives a specific command corresponding to the UP (UP) direction portion of the channel key 2152 from the external remote controller, the controller 1530 increases the screen enlargement ratio. The controller 1530 reduces the screen enlargement ratio when receiving a specific command corresponding to the downward (DOWN) portion of the channel key 2152 from the external remote controller.

When the controller 1530 receives a specific command corresponding to the operation of the touch pad 2154 from the external remote controller 2150, the controller 1530 moves the center point of the area specified by the indicator 2130 from the existing center point to a specific point according to the operation of the touch pad 2154, generates enlarged video data around the specific point, and displays the generated enlarged video data in the first area 2110.

FIG. 22 is a diagram illustrating automatic execution of a specific area enlargement mode in conjunction with EPG information according to an embodiment of the present invention.

As shown in FIG. 22, the EPG signal processing module 1560 extracts category information (for example, genre information and the like) from a broadcast signal including an EPG signal and analyzes the extracted category. Here, the category corresponds to, for example, sports, news, documentary, movie, drama, entertainment, talk show, and the like.

The controller 1530 automatically executes the specific area enlargement mode when the information included in the broadcast signal corresponds to a specific category.

For example, the controller 1530 automatically activates the specific area enlargement mode when the currently outputting broadcast program (video data) corresponds to a category such as sports, news, or the like.

In addition, the controller 1530 automatically switches the specific area enlargement mode to the off state when the currently output broadcast program (video data) is an adult video, violent content, or other content that cannot be viewed by minors.

Therefore, according to one embodiment of the present invention, by designing the specific area enlargement mode to be automatically turned on or off according to the category information (genre information) of the video data, the time required to enter the mode can be minimized, or the possibility of abuse can be reduced.

FIG. 23 is a diagram showing the execution of a specific area enlargement mode and a time shift function in connection with each other according to an embodiment of the present invention.

Here, the time shift function is a function that enables the user to watch a program missed while watching TV in real time. For example, the memory 1550 is designed to automatically store the currently outputted broadcast program for a predetermined period of time without receiving an explicit save command from the user. Here, the memory 1550 corresponds to, for example, an external hard disk, an external USB memory, or a memory built in the multimedia device.

The controller 1530 displays a bar 2310 indicating the reproduction time points at the lower end of the first area 2300 in which the video data is displayed. For example, when the genre information of the video data corresponds to sports, the controller 1530 displays markings at a time point 2312 at which a goal was scored and at a time point 2314 which people watched the most. The specific time points 2312 and 2314 can be collected through EPG information or a web search. When one of the specific time points is selected, the specific area enlargement mode is designed to be executed automatically.

According to another embodiment of the present invention, the controller 1530 retrieves at least one piece of video data stored in the memory 1550 and reproduces only the sections of the retrieved video data in which the specific area enlargement function was executed.

For example, in video data in which a specific singer group composed of nine members including a first singer and a second singer is singing, the user may be interested only in the sections in which the first singer and the second singer sing. Unlike the conventional time shift function, the controller 1530 stores information on the sections in which the specific area enlargement function was executed in the memory 1550 together with the video data.

The controller 1530 searches the memory 1550 for a section in which the specific area enlargement function has been executed and reproduces only the searched section.

Therefore, according to the present invention, the sections in which the specific area enlargement function was executed are automatically searched and only the searched sections are reproduced, so that the user does not need to reproduce the entire video data.

According to another embodiment of the present invention, the controller 1530 divides the entire screen based on the number of times the specific area enlargement function was executed in the video data temporarily stored in the memory. Each divided screen is designed to output the video data for which the specific area enlargement function was executed, that is, the video data enlarged only in the specific area. For example, when the specific area enlargement function was executed nine times in total for one piece of video data (broadcast program), the controller 1530 displays nine divided screens and reproduces the sections in which the specific area enlargement function was executed on the nine divided screens.

Therefore, there is an advantage that the section in which the specific area enlargement function is executed can be confirmed more quickly.

FIG. 24 is a diagram illustrating switching between a full screen and a zoom screen according to an embodiment of the present invention.

As shown in FIG. 24, when the controller 1530 receives a specific command from the external remote controller, the controller 1530 swaps the video signal sent to the first area and the video signal sent to the second area, and displays them.

Specifically, the video signal to be transmitted to the first area corresponds to the enlarged video data of the specific area, and the video signal to be transmitted to the second area corresponds to the original video data whose size is reduced.

Therefore, in the upper diagram 2410 of FIG. 24, the video data in which the specific area is enlarged is displayed on the main screen, and the original video data, reduced in size only, is displayed on the PIP screen. Specifically, the PIP screen displays the entire screen reduced at a certain ratio, together with the position of the enlarged area within the entire screen.

In the lower figure 2420 of FIG. 24, the full screen is displayed on the main screen, and the screen on which the specific area is enlarged is displayed on the PIP screen.

Therefore, according to the embodiment of the present invention, there is an advantage that the original video data and the enlarged video data can be selectively switched to the full screen or PIP screen as needed.

FIG. 25 is a view showing enlargement of several points in a screen according to an embodiment of the present invention.

Referring to FIG. 25, when the controller 1530 receives, within a predetermined time, a command specifying a plurality of points in the first area 2510 from an external remote controller through the communication module while the specific area enlargement mode is activated, PIP screens are automatically generated and displayed for each of the specified points.

For example, when the controller 1530 receives, from the external remote controller through the communication module, a command for selecting three specific points in the first area 2510 within 3 seconds while in the specific area enlargement mode, the controller 1530 displays a first PIP screen 2520, a second PIP screen 2530, and a third PIP screen 2540. Each PIP screen includes video data magnified around one of the three specific points.

According to an embodiment of the present invention, when a user wishes to view various points in a single screen, a plurality of points can be specified and displayed on a PIP screen in a specific portion of the screen.

In this case, when multiple people are at different positions on a single screen, it is possible to identify the multiple people at the same time, to identify each specific individual, and to see details of the clothes, watches, and accessories worn by each identified person, so that user convenience is improved.

FIG. 26 is a view showing enlargement by selecting various points on a screen according to an embodiment of the present invention. FIG. 26 is an embodiment similar to FIG. 25, so the differences will mainly be described, and those skilled in the art can supplement FIG. 26 with reference to FIG. 25.

For example, when the controller 1530 enters the specific area enlargement mode and receives, from the external remote controller through the communication module, a command for selecting three specific points within three seconds, the controller 1530 reduces the size of the first area 2610 (for example, to 80%) and displays a first sub screen 2620, a second sub screen 2630, and a third sub screen 2640 in the area excluding the first area 2610. Each sub screen includes video data magnified around one of the three specific points.

Compared with FIG. 25, FIG. 26 shows a solution to the problem of the original video data being blocked by the PIP screens. In particular, depending on the number of sub screens 2620, 2630, and 2640, the size of the first area 2610 from which the original video data is output also differs.

FIG. 27 is a diagram for solving the case where the remote control coordinates and the input video coordinates do not match, according to an embodiment of the present invention. In implementing another embodiment of the present invention, the technical problems described with reference to FIG. 27 and the following figures should be solved.

As shown in FIG. 27, the remote control coordinates may be 1920 x 1080 in a two-dimensional plane 2710, and the coordinates of the video signal may be 3840 x 2160 in a two-dimensional plane 2720. Here, the coordinates are not fixed and can change depending on the resolution of the input video signal and the device; the numbers of coordinates are relative rather than absolute values. The resolution refers to how many pixels are contained in one screen and is expressed as the number of horizontal pixels multiplied by the number of vertical pixels. That is, when the resolution is 1920 x 1080, the number of horizontal pixels is 1920 and the number of vertical pixels is 1080, which is represented by two-dimensional plane coordinates.

For example, due to the mismatch between the coordinates of the remote controller and the video signal, even if the user selects a point P of x = 1440, y = 270 on the currently displayed screen using the remote controller, the controller 1530 recognizes a point P' of x = 720, y = 135 as selected.

Therefore, a difference occurs between the coordinates the user intended and the coordinates the controller 1530 recognizes.

Here, when transmitting data to the display device, the external remote controller transmits data including the coordinate information of the remote controller. The external remote controller and the display device are connected by wireless communication, including RF communication and IR communication. In addition, the external remote controller can be a mobile device, including a smart phone or a tablet PC.

The controller 1530 scales the coordinate information of the external remote controller according to the video signal information of the contents.

Specifically, the controller 1530 detects changed video signal information when the video signal information of the content is changed, and scales the plane coordinates of the external remote controller received from the external remote controller based on the detected video signal information.

For example, if the remote control coordinates are 1920 x 1080 and the video signal resolution information of the content is 720p HD at 1280 x 720, the controller 1530 scales the received remote control coordinates to 1280 x 720 based on the video signal information. When the resolution is HD, the scaling factor is 0.66.

If the video signal resolution information of the content is FHD of 1920 x 1080, the controller 1530 scales based on the video signal information. When the resolution is FHD, the remote control coordinates and the video signal information coordinates are the same, so that the scaling factor is 1.

If the video signal resolution information of the content is UHD at 3840 x 2160, the controller 1530 scales the received remote control coordinates to 3840 x 2160 based on the video signal information. When the resolution is UHD, the scaling factor is 2.
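The scaling described in the preceding paragraphs can be summarized in a short sketch. Assuming remote control coordinates of 1920 x 1080 as above, the scaling factor simply maps the received pointer coordinates onto the coordinate space of the decoded video (names are illustrative, not from the patent):

    # A minimal sketch of the resolution-dependent pointer scaling.
    SCALE_BY_RESOLUTION = {
        (1280, 720): 0.66,   # HD
        (1920, 1080): 1.0,   # FHD: remote and video coordinates already match
        (3840, 2160): 2.0,   # UHD
    }

    def scale_pointer(remote_x, remote_y, video_resolution):
        factor = SCALE_BY_RESOLUTION[video_resolution]
        return remote_x * factor, remote_y * factor

    # The mismatch example of FIG. 27: the user aims at (1440, 270) on a UHD
    # frame, but the remote reports (720, 135); scaling by 2 recovers the intent.
    print(scale_pointer(720, 135, (3840, 2160)))   # (1440.0, 270.0)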

FIG. 28 illustrates solving the case where a specific area to be enlarged is out of the video output range according to an embodiment of the present invention.

As shown in the upper diagram 2810 of FIG. 28, when a specific area is enlarged with the point where the pointer is located as the center point, a problem arises in that the enlarged area includes a region not present in the original video data.

Therefore, as shown in the lower diagram 2820 of FIG. 28, the center point is moved to another point 2824 instead of the point 2822 where the pointer is located, and the specific area is then enlarged. Compared with the upper diagram 2810 of FIG. 28, this has the advantage that only an area included in the original video data is enlarged.
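One plausible way to realize this correction is to clamp the center point so that the crop around it stays entirely inside the frame; this is only a sketch consistent with FIG. 28, not the patent's stated algorithm:

    # Shift the center point just far enough that an enlargement of the given
    # ratio stays entirely within the original video frame.
    def clamp_center(cx, cy, ratio, video_w, video_h):
        crop_w, crop_h = video_w / ratio, video_h / ratio
        cx = max(crop_w / 2, min(video_w - crop_w / 2, cx))
        cy = max(crop_h / 2, min(video_h - crop_h / 2, cy))
        return cx, cy

    # A pointer near the top-right corner of a 1920 x 1080 frame at 2x zoom is
    # pulled back so the 960 x 540 crop remains fully inside the frame.
    print(clamp_center(1900, 50, 2.0, 1920, 1080))   # (1440.0, 270.0)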

FIG. 29 is a diagram illustrating dividing a screen into a predetermined number of sub screens when outputting video data, and enlarging the selected screen when the user selects one of the divided screens, according to an embodiment of the present invention.

As shown in the upper diagram 2910 of FIG. 29, when the controller 1530 receives a specific command from the external remote controller, it divides one screen into nine screens when outputting the video data. When the controller 1530 receives, from the external remote controller, a command for selecting a specific screen 2912 from among the divided screens, the controller 1530 enlarges and reproduces only the video data corresponding to the selected specific screen 2912.

As shown in the lower diagram of FIG. 29, the controller 1530 reduces the original video data at a predetermined ratio and displays it in the second area 2924, and enlarges and displays the video data of the selected specific area in the first area 2920. Also, as described above, an indicator 2922 guiding the enlarged specific area is displayed together in the second area 2924.

FIG. 30 is a diagram showing that the controller divides the screen into four, nine, or sixteen sections according to user selection when outputting video data, according to an embodiment of the present invention.

Referring to FIG. 30, when the controller 1530 receives a specific command 3020 from the external remote controller 3010 while arbitrary video data is being output, the controller 1530 divides the screen into four parts 3030, nine parts 3040, or sixteen parts 3050 and displays them. The number of divided screens may be set to a predetermined default value, or may be selectable by the user. By referring to the divided screens, the user can select a specific area of the video data to be enlarged.
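Since 4, 9, and 16 are perfect squares, the divided screens form a 2 x 2, 3 x 3, or 4 x 4 grid. A minimal sketch of computing the sub-screen rectangles follows (illustrative names, not from the patent):

    import math

    # Return (x, y, width, height) for each cell of an n x n grid, where
    # divisions is a perfect square such as 4, 9, or 16.
    def grid_rects(screen_w, screen_h, divisions):
        n = math.isqrt(divisions)
        assert n * n == divisions, "divisions must be a perfect square"
        cell_w, cell_h = screen_w // n, screen_h // n
        return [(col * cell_w, row * cell_h, cell_w, cell_h)
                for row in range(n) for col in range(n)]

    for rect in grid_rects(1920, 1080, 9):
        print(rect)   # nine 640 x 360 cells, row by row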

FIG. 31 illustrates a process for adjusting the magnification factor during execution of a specific area enlargement mode, in accordance with an embodiment of the present invention. It is also within the scope of the present invention for those skilled in the art to implement some other embodiments in conjunction with FIG. 31 with reference to the previous figures.

Referring to FIG. 31, when the specific area enlargement mode is executed and the specific area to be enlarged is determined, the video data of the area specified by the indicator 3130 in the second area 3120 is enlarged and displayed in the first area 3110. As described above, the video data in the indicator 3130 and the video data in the first area 3110 correspond to each other and are the same, differing only in size.

Further, it is possible to transmit, to the multimedia device (for example, a TV or STB), a command to further enlarge or reduce the video data being displayed in the first area 3110 by using the external device 3100. For example, as shown in FIG. 31, a command to enlarge the video data in the first area 3110 is generated through a specific button of the external device 3100 and is transmitted to the multimedia device.

Therefore, as shown in FIG. 31, enlarged video data is output in the first area 3111 as compared with the previous first area 3110. It is a feature of the present invention that at least one of the position or the size of the indicator 3131 in the second area 3121 is automatically changed so as to correspond to the video data in the first area 3111.

FIG. 32 illustrates a process for selecting an enlarged area during execution of a specific area enlargement mode, in accordance with an embodiment of the present invention. It is also within the scope of the present invention for those skilled in the art to implement some embodiments other than those shown in FIG. 32 with reference to the previous figures.

Referring to FIG. 32, at the time when the specific area enlargement mode is first executed, the same video data is output to the first area 3210 and the second area 3220. The position of the indicator 3230 in the second area 3220 is designed to correspond to the specific area last selected in the previously executed specific area enlargement mode.

In this situation, assume that a specific region to be enlarged is selected using the external device 3200.

Therefore, as shown in FIG. 32, the original video data continues to be output to the second area 3221, as in the previous second area 3220, while the enlarged video data of the specific area is displayed in the first area 3211. However, at least one of the position or the size of the indicator 3231 in the second area 3221 is automatically changed so as to correspond to the video data output in the first area 3211, as shown in FIG. 32. With such a design, there is an advantage that it is possible to more easily and quickly confirm which particular area of the original video data is being enlarged and watched.

FIG. 33 illustrates a process for hiding the associated indicator during execution of a specific area enlargement mode, in accordance with an embodiment of the present invention. It is also within the scope of the present invention for a person skilled in the art to implement some embodiments different from those of FIG. 33 with reference to the previous drawings.

Referring to FIG. 33, after the specific area enlargement mode is executed, the original video data is output to the second area 3330, and video data in which only the specific area is enlarged is displayed in the first area 3310. In addition, the size and position of the indicator 3320 are changed in association with the first area 3310.

However, due to the video data and the indicator 3320 in the second area 3330, some of the enlarged video data is hidden. To solve this problem, when a preset time (for example, 3 to 5 seconds) has elapsed or when the multimedia device receives a specific command from the external device 3300, the first area 3311 continues to display the video data in which the specific area is enlarged, while the second area 3321 is designed not to display the indicator and the original video data, in contrast to the previous second area 3320. Therefore, the user can view only the video data in which the specific area is enlarged in the first area 3311.

FIG. 34 illustrates a process of redisplaying the related indicator that has disappeared during execution of a specific area enlargement mode, according to an embodiment of the present invention. It is also within the scope of the present invention for those skilled in the art to implement some embodiments other than those of FIG. 34 with reference to the previous drawings. In particular, FIG. 34 assumes the situation of FIG. 33, described immediately above.

Referring to FIG. 34, as in FIG. 33, video data in which a specific area is enlarged is displayed in the first area 3410; however, a second indicator 3440 is displayed in the first area 3410. In particular, the second indicator 3440 includes information indicating how many times the video data being output in the first area 3410 is magnified compared with the original video data.

If the second indicator 3440 is selected using the external device 3400, the second area 3430 outputs the original video data again, and the indicator 3420 corresponding to the first area 3411 is displayed again.

FIG. 35 is a block diagram illustrating a display device according to an embodiment of the present invention. The present invention may include all of the embodiments of the preceding figures.

Referring to FIG. 35, the display device 3500 includes an interface module 3510, a controller 3520, a display module 3530, a memory 3540, and an EPG signal processing module 3550.

The interface module 3510 transmits and receives data to and from an external server.

The controller 3520 receives content including at least one object through the interface module 3510, displays the received content on the main screen of the display device, generates metadata for the content for each object, and displays the content again based on the metadata.

The controller 3520 displays the content again by reflecting the magnification level based on the time stamp and the pointing coordinates.

A detailed description thereof will be given later with reference to FIG.

The controller 3520 receives the content and metadata from the external server through the interface module 3510 and displays the content on the main screen based on the received metadata. Here, the metadata further includes a camera tag.

A detailed description thereof will be given later with reference to FIG.

The controller 3520 moves the enlargement position with the pointer according to the time stamp of the generated metadata, changes the enlargement level, and modifies the metadata based on at least one of the moved enlargement position and the changed enlargement level.

A detailed description thereof will be given later with reference to FIG.

When the controller 3520 receives an input for selecting a specific object displayed on the main screen with the pointer, the controller 3520 executes object tracking for the specific object, stores metadata corresponding to the tracked specific object in the memory, extracts a specific object tag from the stored metadata, and generates an icon for each object based on the extracted specific object tag.

A detailed description thereof will be given later with reference to FIG.

The controller 3520 extracts object keywords from at least one of voice app information and EPG information received through the interface module 3510, extracts object tags from the metadata stored together with the content displayed on the main screen, and maps the extracted object keywords to the extracted object tags.

A detailed description thereof will be given later with reference to FIG.

The controller 3520 identifies the user based on voice information received through the interface module 3510, extracts an object keyword for each identified user, extracts object tags from the metadata stored together with the content displayed on the main screen, and maps the extracted object keyword to the extracted object tag.

A detailed description thereof will be given later with reference to FIG.

The display module 3530 displays the content on the main screen in accordance with a control command from the controller 3520.

The memory 3540 stores at least one of the content to be displayed and the generated metadata. Here, the metadata includes at least one of a content classification category, a time stamp, a pointing coordinate, an enlargement level, and tag information of an object.

The EPG signal processing module 3550 extracts category information (e.g., genre information) from a broadcast signal including an EPG signal and analyzes the extracted category. Here, the category corresponds to, for example, sports, news, documentary, movie, drama, entertainment, talk show, and the like.

FIG. 36 is a flowchart of a method of controlling a display device according to an embodiment of the present invention. The method is carried out by the controller 3520.

First, the content including at least one object is displayed on the main screen of the display device (S3610).

Metadata about the content is generated for each object (S3620).

The content is displayed again based on the generated metadata (S3630).

FIG. 37 is a flowchart of a method of controlling a display device according to an embodiment of the present invention. The method is carried out by the controller 3520.

First, the content including at least one object is displayed on the main screen of the display device (S3710).

Metadata for the content is generated for each object (S3720).

The enlargement position is moved according to the pointer and the enlargement level is changed in accordance with the time stamp of the generated metadata (S3730).

The metadata is modified based on at least one of the moved enlargement position and the changed enlargement level (S3740).

The content is displayed again based on the modified metadata (S3750).

Upon receiving an input selecting a specific object displayed on the main screen with a pointer, object tracking for the specific object is performed (S3760).

The metadata corresponding to the specific object being tracked is stored in the memory (S3770).

A specific object tag is extracted from the stored metadata (S3780).

An icon is generated for each object based on the extracted specific object tag (S3790).

FIG. 38 is a diagram illustrating generation of metadata when a specific object is specified by a pointer according to an embodiment of the present invention.

As shown in the main screen 3810, the controller 3520 generates the metadata 3820 by specifying a specific object with the pointer 3812.

For example, the metadata 3820 contains the string SOCCER_20150324_131711_0720_1200_07_OBJ2.

Analyzing the metadata contents: Metadata = Recording Tag + Time Stamp + Pointing Coordinate + Zooming Level + Object Tag.

Recording Tag is a recording classification tag. The recording classification tag is a tag that distinguishes whether a content genre is a sport, a movie, a documentary, a drama, or the like.

Time Stamp is the time at which the object is specified with the pointer. In the case of 20150324_131711 as in the metadata 3820, it means 13:17:11 on March 24, 2015.

Pointing Coordinate means the pointing coordinate information at the time of the pointing zoom control. In the metadata 3820, the pointing coordinate 3812 is x coordinate 720 and y coordinate 1200.

Zooming Level refers to the screen enlargement level at the time when the object is specified by the pointer. In the metadata 3820, the zooming level is 7.

Object Tag refers to the tag information of a specific object when that object is specified by the pointer. In the metadata 3820, the object tag is OBJ2.
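A minimal sketch of how such a record could be split into its fields follows. The underscore-delimited layout is taken from the example above; the helper name is an assumption for illustration.

```python
def parse_metadata(record: str) -> dict:
    """Split e.g. 'SOCCER_20150324_131711_0720_1200_07_OBJ2' into its fields."""
    tag, date, time, x, y, zoom, obj = record.split("_")
    return {
        "recording_tag": tag,                      # content genre, e.g. SOCCER
        "timestamp": f"{date}_{time}",             # March 24, 2015, 13:17:11
        "pointing_coordinate": (int(x), int(y)),   # (720, 1200)
        "zooming_level": int(zoom),                # 7
        "object_tag": obj,                         # OBJ2
    }

print(parse_metadata("SOCCER_20150324_131711_0720_1200_07_OBJ2"))
```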

The controller 3520 generates metadata 3820 and stores it in the memory 3540 whenever an event occurs in which an object is clicked with the pointer.

The controller 3520 generates a metadata file based on at least one metadata entry. The metadata file has the same name as the content file, with the file extension mtd, and includes at least one metadata entry.

The controller 3520 can open the metadata file, edit its contents, and then re-save it under the same or a different name.

When the content file is reproduced again, the controller 3520 reproduces it based on the metadata associated with the content file, so that the content reflecting the magnification level at a specific time and position can be displayed on the main screen.
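As an illustration, a replay loop over such a .mtd file might look like the sketch below. The layout of one metadata record per line is an assumption; the patent only fixes the extension and the shared file name.

```python
def replay_with_metadata(content_name: str) -> None:
    """Replay content, applying the stored zoom and position at each time stamp."""
    with open(content_name + ".mtd") as f:    # metadata file shares the content's name
        records = [line.strip() for line in f if line.strip()]
    # for a single content the recording tag is constant,
    # so a lexical sort orders the records by time stamp
    for record in sorted(records):
        tag, date, time, x, y, zoom, obj = record.split("_")
        # at this date/time, the player would enlarge the frame
        # around (x, y) by the stored zooming level
        print(f"{date} {time}: zoom x{int(zoom)} at ({int(x)}, {int(y)}) on {obj}")
```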

FIG. 39 is a diagram illustrating metadata generation for each object based on time when a specific object is specified by a pointer according to an embodiment of the present invention.

When a specific object is specified by a pointer, as on the main screen 3910, the controller 3520 generates metadata for the specific object at each point in time. When there are a plurality of objects, the controller 3520 generates metadata for each object on a time basis.

For example, for OBJ1, the metadata SOCCER_20150324_131705_0320_0210_10_OBJ1 is generated. Broken down, this metadata reads: soccer_March 24, 2015_13:17:05_(X, Y) = (320, 210)_zoom level 10_OBJ1.

For OBJ2, the metadata SOCCER_20150324_131711_0720_1901_07_OBJ2 is generated. Broken down, this metadata reads: soccer_March 24, 2015_13:17:11_(X, Y) = (720, 1901)_zoom level 7_OBJ2.

For OBJ3, the metadata SOCCER_20150324_131714_3620_1201_05_OBJ3 is generated. Broken down, this metadata reads: soccer_March 24, 2015_13:17:14_(X, Y) = (3620, 1201)_zoom level 5_OBJ3.

As in the graph 3920, the controller 3520 generates metadata for each object on a time basis and stores the metadata in the memory 3540.

Accordingly, the controller 3520 can reproduce the content on a time basis in consideration of the time stamps of the metadata, generate metadata classified per object, and display the content individually for each object based on that metadata.

If the first object disappears from the main screen and then reappears, the question arises of how to identify the first object and assign it the same object tag as before.

The controller 3520 determines whether the object is the same based on at least one of the face shape, uniform number, and uniform color of the object.

More specifically, the controller 3520 determines that the object is the same object if the similarity of the face shape is equal to or greater than a predetermined threshold, the uniform number is the same, and the uniform color is the same.

For example, if the similarity of the face shape is 80% or more, the uniform number is equally 10, and the uniform color is equally blue, the controller 3520 determines that the object is the same object and assigns it the same object tag as the one previously given.
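A sketch of this identity test under the stated thresholds follows. The face-similarity score is left abstract, since the patent does not specify a face-matching algorithm; the function name is hypothetical.

```python
def is_same_object(face_similarity: float,
                   uniform_number_a: int, uniform_number_b: int,
                   uniform_color_a: str, uniform_color_b: str,
                   threshold: float = 0.80) -> bool:
    """Return True when a reappearing object should keep its previous tag."""
    return (face_similarity >= threshold
            and uniform_number_a == uniform_number_b
            and uniform_color_a == uniform_color_b)

# e.g. 80% face similarity, both wearing number 10 in blue -> same object
print(is_same_object(0.80, 10, 10, "blue", "blue"))  # True
```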

According to the present invention, when the first object is set to Park Ji-sung, the second object to Rooney, and the third object to Ronaldo, metadata is generated for each object during content reproduction and stored in the memory 3540. When the content is reproduced based on the metadata, Park Ji-sung, Rooney, and Ronaldo can each be watched separately, thus improving user convenience.

In addition, the controller 3520 analyzes the recording classification tag and the object tag of the metadata to classify the content.

For example, when the recording classification tag is SOCCER, the controller 3520 classifies the content as sports. If the recording classification tag is DRAMA, the controller 3520 classifies the content as a drama.
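A table-driven classification of the recording tag could look like this sketch; the category table beyond SOCCER and DRAMA is an illustrative assumption.

```python
CATEGORY_BY_TAG = {
    "SOCCER": "sports",
    "DRAMA": "drama",
    "MOVIE": "movie",        # assumed additional tags
    "DOCU": "documentary",
}

def classify(record: str) -> str:
    """Classify a metadata record by its leading recording classification tag."""
    recording_tag = record.split("_", 1)[0]
    return CATEGORY_BY_TAG.get(recording_tag, "unknown")

print(classify("SOCCER_20150324_131711_0720_1200_07_OBJ2"))  # sports
```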

FIG. 40 is a diagram illustrating transmission of metadata from a broadcasting station and a manufacturer according to an embodiment of the present invention.

When the broadcasting station 4010 transmits the entire image and the metadata to the display device, the display device 3500 stores the received entire image and metadata in the memory 3540.

FIG. 41 is a view illustrating screens displayed differently according to camera metadata according to an embodiment of the present invention.

The controller 3520 receives the content and metadata from the external server through the interface module 3510 and displays the content on the main screen based on the received metadata. Here, the metadata further includes a camera tag.

Here, Metadata = Camera Tag + Time Stamp + Pointing Coordinate + Zooming Level + Object Tag.

The camera tag includes camera information for a predetermined area.

For example, in the case of a soccer game, camera 1 captures the full view of the stadium, camera 2 captures the left goalpost, and camera 3 captures the right goalpost.

The broadcasting company does not transmit all the video data shot by cameras #1, #2, and #3 to the display device. Instead, the content and the metadata corresponding to it, describing the order edited by the broadcasting station such as camera #1 - #2 - #1 - #3, are transmitted to the display device.

The controller 3520 displays the content reflecting the camera metadata and the zoom magnification.

For example, the main screen 4110 displays a game image captured by camera 3, the main screen 4120 displays the soccer field image captured by camera 1, and the main screen 4130 displays the right goalpost image captured by camera 2.

The controller 3520 can edit out only the game image captured by camera 3 on the basis of the camera metadata and display it on the main screen. In addition, when the user wishes to view the game from the position of a defender, the controller 3520 can edit out only the right goalpost image captured by camera 2 on the basis of the camera metadata and display it on the main screen.
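A sketch of how the controller might cut a per-camera view out of the broadcaster's edit order follows. The (camera tag, time stamp) pair per segment is a simplification of the camera-tag metadata formula above; the values are illustrative.

```python
# each entry: (camera_tag, timestamp) in the order edited by the broadcaster
edit_order = [("CAM1", "131700"), ("CAM2", "131705"),
              ("CAM1", "131710"), ("CAM3", "131715")]

def segments_for_camera(order: list, camera_tag: str) -> list:
    """Keep only the segments shot by one camera, e.g. the right goalpost."""
    return [ts for cam, ts in order if cam == camera_tag]

print(segments_for_camera(edit_order, "CAM2"))  # ['131705']
```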

According to the present invention, content and metadata are received from a broadcaster or a content producer, and the content can be viewed from the viewpoint and perspective of the content creator, thereby improving user convenience. In addition, the broadcasting company can reduce the amount of data to be transmitted, since only the original content and the metadata need to be transmitted, without separately transmitting contents edited according to each consumer's preference.

FIG. 42 is a view illustrating editing of metadata according to an embodiment of the present invention.

The controller 3520 moves the enlarged position to the pointer according to the time stamp of the generated metadata, changes the enlargement level, and modifies the metadata based on at least one of the moved enlargement position and the changed enlargement level.

For example, as on the main screen 4210, the controller 3520 pauses the displayed content, moves the enlargement position from the first point 4212 to the second point 4214 in accordance with the movement of the pointer, and changes the magnification from 2x to 3x.

The metadata before modification is SOCCER_20150324_131711_0600_0600_02_OBJ2.

The modified metadata is SOCCER_20150324_131711_1000_0500_03_OBJ2.

Therefore, when the metadata before and after modification are compared, it can be seen that the coordinate value is changed from (600, 600) to (1000, 500), and the enlargement magnification is changed from 2 to 3.
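The edit above amounts to rewriting the pointing-coordinate and zoom fields of the record. A minimal sketch follows; the helper name and the zero-padded field widths are assumptions inferred from the examples.

```python
def modify_metadata(record: str, x: int, y: int, zoom: int) -> str:
    """Overwrite the pointing coordinate and zooming level in a metadata record."""
    parts = record.split("_")
    # fields: [tag, date, time, x, y, zoom, object_tag]
    parts[3], parts[4], parts[5] = f"{x:04d}", f"{y:04d}", f"{zoom:02d}"
    return "_".join(parts)

before = "SOCCER_20150324_131711_0600_0600_02_OBJ2"
print(modify_metadata(before, 1000, 500, 3))
# SOCCER_20150324_131711_1000_0500_03_OBJ2
```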

When the controller 3520 receives a wheel input from the remote controller, the controller 3520 can adjust the magnification level in accordance with the wheel rotation direction.

For example, when the wheel is rolled forward, the magnification level is increased, and when the wheel is rolled backward, the magnification level is decreased.

FIG. 43 is a diagram illustrating icon generation for each object using metadata according to an embodiment of the present invention.

When the controller 3520 receives an input selecting a specific object displayed on the main screen with a pointer, the controller 3520 executes object tracking for the specific object, stores metadata corresponding to the tracked specific object in the memory, extracts a specific object tag from the stored metadata, and generates an icon for each object based on the extracted specific object tag.

For example, as on the main screen 4310, when the controller 3520 receives an input selecting a specific object such as OBJ1 with the pointer 4312, it executes object tracking for OBJ1.

The controller 3520 stores the metadata corresponding to the tracked OBJ1 in the memory 3540, extracts a specific object tag such as OBJ1 from the stored metadata, and generates the icon 4320 for OBJ1 based on the extracted object tag.

For OBJ2 and OBJ3, the controller 3520 can execute the same process.

Upon receiving an input selecting a specific object such as OBJ1 with a pointer, the controller 3520 displays the content based on the metadata corresponding to OBJ1.

For example, when OBJ1 is Park Ji-sung, OBJ2 is Rooney, and OBJ3 is Ronaldo, the controller 3520 displays the portion of the entire content that tracks Park Ji-sung.
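A sketch of selecting only the segments that track one object follows, filtering the stored records by their trailing object tag; the helper name is hypothetical.

```python
def segments_for_object(records: list, object_tag: str) -> list:
    """Return the time stamps at which the given object was tracked."""
    return [r.split("_")[1] + "_" + r.split("_")[2]
            for r in records if r.endswith("_" + object_tag)]

records = ["SOCCER_20150324_131705_0320_0210_10_OBJ1",
           "SOCCER_20150324_131711_0720_1901_07_OBJ2",
           "SOCCER_20150324_131714_3620_1201_05_OBJ3"]
print(segments_for_object(records, "OBJ1"))  # ['20150324_131705']
```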

FIG. 44 is a diagram illustrating personalization using metadata according to an embodiment of the present invention.

The controller 3520 extracts an object keyword from at least one of the voice app information and EPG information received through the interface module 3510, extracts an object tag from the metadata stored together with the content displayed on the main screen, and maps the extracted object keyword to the extracted object tag.

For example, as on the main screen 4410, a specific user sets the object keywords to Park Ji-sung, Rooney, and Ronaldo, and specifies Park Ji-sung, Rooney, and Ronaldo with the pointer on the main screen.

As in the box 4420, the controller 3520 extracts object keywords such as Park Ji-sung, Rooney, and Ronaldo from the voice app information and EPG information received through the interface module 3510 and from an analysis of the channels preferred by the user.

As in the box 4430, the controller 3520 extracts object tags such as OBJ1, OBJ2, and OBJ3 from the metadata stored with the content when the content is displayed.

As in the box 4420 and the box 4430, the controller 3520 maps the extracted object keywords to the extracted object tags.
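One way to realize this mapping is a simple dictionary pairing keywords with tags in the order both were extracted; pairing by order is an assumption, since the patent leaves the matching rule open.

```python
object_keywords = ["Park Ji-sung", "Rooney", "Ronaldo"]  # from voice app / EPG
object_tags = ["OBJ1", "OBJ2", "OBJ3"]                   # from stored metadata

# map each extracted keyword to the corresponding extracted tag
keyword_to_tag = dict(zip(object_keywords, object_tags))
print(keyword_to_tag["Rooney"])  # OBJ2
```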

According to the present invention, when the user's favorite player tendency is analyzed and the second object is mapped to Park Ji-sung, the third object to Rooney, and the first object to Ronaldo, metadata is generated for each object during content reproduction and stored in the memory 3540. If the content is reproduced based on the metadata, Park Ji-sung, Rooney, and Ronaldo can each be viewed separately, thereby improving user convenience.

FIG. 45 is a diagram illustrating the identification of family members using a specific application according to an embodiment of the present invention.

The controller 3520 identifies the user based on the voice information received through the interface module 3510, extracts an object keyword for each identified user, extracts an object tag from the metadata stored together with the content displayed on the main screen, and maps the extracted object keyword to the extracted object tag.

For example, as in the box 4510, the controller 3520 identifies users such as the father, the mother, the user himself, and the younger brother based on the voice information received through the interface module 3510, and extracts an object keyword for each identified user.

That is, when the user is the father, the object keyword becomes the father.

The controller 3520 stores the extracted object keyword in the memory 3540.

As in the box 4520, the controller 3520 extracts object tags such as OBJ1, OBJ2, OBJ3, and OBJ4 from the metadata stored with the content displayed on the main screen.

As in the box 4510 and the box 4520, the controller 3520 maps the extracted object keywords to the extracted object tags.
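Analogously, the per-user mapping could be sketched as follows; speaker identification itself is left abstract, and the particular user-to-tag assignment is illustrative, matching the example in the next paragraph.

```python
# users identified from voice information become the object keywords
identified_users = ["self", "father", "younger brother", "mother"]
object_tags = ["OBJ1", "OBJ2", "OBJ3", "OBJ4"]  # illustrative assignment

user_to_tag = dict(zip(identified_users, object_tags))
print(user_to_tag["father"])  # OBJ2
```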

According to the present invention, when the voice tones of the user's family members are analyzed and the first object is mapped to the user himself, the second object to the father, the third object to the younger brother, and the fourth object to the mother, metadata is generated for each object and stored in the memory 3540. When the content is reproduced based on the metadata, each of the father, the mother, the user, and the younger brother can be watched separately, thereby improving user convenience.

According to an embodiment of the present invention, when content is displayed, metadata on the content is generated for each object and the content is displayed again based on the metadata, thereby improving user convenience.

According to another embodiment of the present invention, content and metadata are received from a broadcaster or a content producer, and the content can be viewed from the viewpoint and perspective of the content creator, thereby improving user convenience.

According to another embodiment of the present invention, by modifying the metadata on the time axis, the user can apply the screen enlargement function to the content in a desired manner, thereby improving user convenience.

According to another embodiment of the present invention, personalized contents can be provided to a user by mapping an object keyword and an object tag, thereby improving user convenience.

According to another embodiment of the present invention, by providing only the content and the metadata corresponding to the content, separate content storage space can be saved, thereby improving user convenience.

The image display apparatus and the operation method thereof according to the present invention are not limited to the configurations and methods of the embodiments described above; all or some of the embodiments may be selectively combined so that various modifications can be made.

Meanwhile, the operation method of the video display device of the present invention can be implemented as processor-readable code on a recording medium readable by a processor included in the video display device. The processor-readable recording medium includes all kinds of recording apparatuses in which data that can be read by the processor is stored. Examples of the processor-readable recording medium include a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, and an optical data storage device, and the medium may also be implemented in the form of a carrier wave such as transmission over the Internet. In addition, the processor-readable recording medium may be distributed over network-connected computer systems, so that the processor-readable code can be stored and executed in a distributed fashion.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it is to be understood that the invention is not limited to the disclosed exemplary embodiments, and it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the present invention.

3500: Display device
3510: Interface module
3520: Controller
3530: Display Module
3540: Memory
3550: EPG signal processing module

Claims (10)

A method of controlling a display device, the method comprising:
displaying content including at least one object on a main screen of the display device;
generating metadata for the content for each object; and
displaying the content again based on the generated metadata,
wherein the metadata includes at least one of a content classification category, a time stamp, a pointing coordinate, an enlargement level, and tag information of the object.
The method of claim 1, wherein displaying the content again comprises:
displaying the content again by reflecting the magnification level on the basis of the time stamp and the pointing coordinate.
The method of claim 1, further comprising:
modifying the generated metadata.
4. The method of claim 3, wherein modifying the metadata comprises:
moving the enlargement position according to a pointer and changing the enlargement level in accordance with a time stamp of the generated metadata; and
modifying the metadata based on at least one of the moved enlargement position and the changed enlargement level.
The method according to claim 1, further comprising:
performing object tracking on a specific object upon receiving an input selecting the specific object displayed on the main screen with a pointer;
storing metadata corresponding to the specific object being tracked in a memory;
extracting a specific object tag from the stored metadata; and
generating an icon for each object based on the extracted specific object tag.
A display device comprising:
an interface module for transmitting and receiving data to and from an external server;
a controller for receiving content including at least one object through the interface module, displaying the received content on a main screen of the display device, generating metadata for the content for each object, and displaying the content again based on the generated metadata;
a memory for storing at least one of the displayed content and the generated metadata; and
a display module for displaying the content on the main screen according to a control command from the controller,
wherein the metadata includes at least one of a content classification category, a time stamp, a pointing coordinate, an enlargement level, and tag information of the object.
7. The apparatus of claim 6, wherein the controller receives content and metadata from the external server through the interface module and displays the content on the main screen based on the received metadata.
8. The apparatus of claim 7, wherein the metadata further includes a camera tag.
9. The apparatus of claim 6, wherein the controller extracts an object keyword from at least one of the voice app information and the EPG information received through the interface module, extracts an object tag from the metadata stored together with the content displayed on the main screen, and maps the extracted object keyword to the extracted object tag.
10. The apparatus of claim 6, wherein the controller identifies a user based on voice information received through the interface module, extracts an object keyword for each identified user, extracts an object tag from the metadata stored together with the content displayed on the main screen, and maps the extracted object keyword to the extracted object tag.

KR1020150085633A 2015-06-17 2015-06-17 Display device and controlling method thereof KR20160148875A (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
KR1020150085633A KR20160148875A (en) 2015-06-17 2015-06-17 Display device and controlling method thereof
EP16811933.7A EP3311582A4 (en) 2015-06-17 2016-06-15 Display device and operating method thereof
PCT/KR2016/006376 WO2016204520A1 (en) 2015-06-17 2016-06-15 Display device and operating method thereof
US15/184,620 US20160373828A1 (en) 2015-06-17 2016-06-16 Display device and operating method thereof

Publications (1)

Publication Number Publication Date
KR20160148875A true KR20160148875A (en) 2016-12-27

Family

ID=57736668

Country Status (1)

Country Link
KR (1) KR20160148875A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109426478A (en) * 2017-08-29 2019-03-05 三星电子株式会社 Method and apparatus for using the display of multiple controller controlling electronic devicess
CN109426478B (en) * 2017-08-29 2024-03-12 三星电子株式会社 Method and apparatus for controlling display of electronic device using multiple controllers
US11367283B2 (en) 2017-11-01 2022-06-21 Samsung Electronics Co., Ltd. Electronic device and control method thereof
