KR20160148875A - Display device and controlling method thereof - Google Patents
- Publication number
- KR20160148875A (application number KR1020150085633A)
- Authority
- KR
- South Korea
- Prior art keywords
- content
- metadata
- controller
- data
- service
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/4728—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for selecting a Region Of Interest [ROI], e.g. for requesting a higher resolution version of a selected region
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/47205—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/482—End-user interface for program selection
Landscapes
- Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Databases & Information Systems (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present invention relates to a display device and a control method thereof, in which content including at least one object is displayed on a main screen of the display device, metadata for the content is generated for each object, and the content is displayed again based on the generated metadata, wherein the metadata includes at least one of a content classification category, a time stamp, a pointing coordinate, an enlargement level, and tag information of the object.
Description
The present invention relates to a display device and a control method thereof, and more particularly, to a display device and a control method thereof in which, when content is displayed on a main screen and a screen enlargement function for a specific object is executed, metadata is generated at the same time, and the content is later displayed again based on the generated metadata so that the screen enlargement function for the specific object is executed automatically.
Recently, many users have come to use display devices such as smart TVs. As the market for high-priced personalized content expands and the variety of content types increases, there is a growing need for users to view images in the manner they desire.
In the prior art, when a user executed a screen enlargement function for a specific object, the enlarged image had to be stored separately in order to view it again. This caused inconvenience to the user, since a separate large storage space was required to store such images.
An object of an embodiment of the present invention is to provide a display device, and a control method therefor, that generates metadata when content is displayed and, by displaying the content again on the basis of that metadata, automatically executes a screen enlargement function for a specific object.
Another object of the present invention is to provide a display device and a control method thereof that can receive content and metadata from a broadcaster or a content producer, so that the content can be viewed from the viewpoint and line of sight of the content creator.
Another object of the present invention is to provide a display device and a control method thereof that can execute a screen enlargement function on content in a manner desired by the user by modifying the metadata on a time-domain basis.
Another object of the present invention is to provide a display device and a control method thereof that can provide personalized content to a user by mapping an object keyword and an object tag.
Another object of the present invention is to provide a display device and a control method thereof that can reduce the need for separate content storage space by providing only the content and the metadata corresponding to the content.
According to an embodiment of the present invention, a method of controlling a display device includes: displaying content including at least one object on a main screen of the display device; generating metadata for the content for each object; and displaying the content again based on the generated metadata, wherein the metadata includes at least one of a content classification category, a time stamp, a pointing coordinate, an enlargement level, and tag information of the object.
According to another embodiment of the present invention, a display device includes: an interface module for transmitting and receiving data to and from an external server; a controller for receiving content including at least one object through the interface module, displaying the received content on a main screen of the display device, generating metadata for the content for each object, and displaying the content again based on the generated metadata; a memory for storing at least one of the displayed content and the generated metadata; and a display module for displaying the content on the main screen in accordance with a control command from the controller, wherein the metadata includes at least one of a content classification category, a time stamp, a pointing coordinate, an enlargement level, and tag information of the object.
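For illustration only, the per-object metadata and its replay can be sketched as below. All field and function names are hypothetical; the embodiments do not prescribe a concrete data format.

```typescript
// Hypothetical sketch of the per-object metadata described above.
interface ObjectMetadata {
  category: string;                              // content classification category (e.g., "sports")
  timestamp: number;                             // playback position (seconds) when the object was specified
  pointingCoordinate: { x: number; y: number };  // pointer position on the main screen
  enlargementLevel: number;                      // zoom factor applied to the object (e.g., 2 for 2x)
  tag: string;                                   // tag information identifying the object
}

// When the content is displayed again, the controller can look up the most
// recent entry at or before the current playback position and re-apply the
// stored enlargement at the stored coordinate.
function currentEnlargement(entries: ObjectMetadata[], position: number): ObjectMetadata | undefined {
  return entries
    .filter((e) => e.timestamp <= position)
    .sort((a, b) => b.timestamp - a.timestamp)[0];
}
```

In this reading, only the original content and a small list of such entries need to be stored, which is how the embodiments avoid storing a separately enlarged copy of the video.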
According to an embodiment of the present invention, when content is displayed, metadata for the content is generated for each object, and the content is displayed again based on the metadata so that the screen enlargement function for a specific object is executed automatically, thereby improving user convenience.
According to another embodiment of the present invention, content and metadata are received from a broadcaster or a content producer, and the content can be viewed from the viewpoint and line of sight of the content creator, thereby improving user convenience.
According to another embodiment of the present invention, by modifying the metadata on a time-domain basis, the user can execute a screen enlargement function on the content in a desired manner, thereby improving user convenience.
According to another embodiment of the present invention, personalized content can be provided to a user by mapping an object keyword and an object tag, thereby improving user convenience.
According to another embodiment of the present invention, by providing only the content and the metadata corresponding to the content, the need for separate content storage space can be reduced, thereby improving user convenience.
FIG. 1 is a schematic diagram illustrating a service system including a digital device according to an exemplary embodiment of the present invention.
FIG. 2 is a block diagram illustrating a digital device according to an exemplary embodiment of the present invention.
FIG. 3 is a block diagram illustrating a digital device according to another embodiment of the present invention.
FIG. 4 is a block diagram illustrating a digital device according to another embodiment of the present invention.
FIG. 5 is a block diagram illustrating a detailed configuration of the control unit of FIGS. 2 to 4 according to an embodiment of the present invention.
FIG. 6 is a diagram illustrating an input means coupled to the digital device of FIGS. 2 to 4 according to an embodiment of the present invention.
FIG. 7 is a diagram illustrating a Web OS architecture according to an embodiment of the present invention.
FIG. 8 is a diagram illustrating the architecture of a Web OS device according to an exemplary embodiment of the present invention.
FIG. 9 is a diagram illustrating a graphic composition flow in a Web OS device according to an embodiment of the present invention.
FIG. 10 is a diagram illustrating a media server according to an embodiment of the present invention.
FIG. 11 is a block diagram illustrating a configuration of a media server according to an embodiment of the present invention.
FIG. 12 is a diagram illustrating the relationship between a media server and a TV service according to an embodiment of the present invention.
FIG. 13 is a diagram illustrating a control method of a remote controller for controlling any one of the video display devices according to the embodiments of the present invention.
FIG. 14 is an internal block diagram of a remote control device for controlling any one of the video display devices according to the embodiments of the present invention.
FIG. 15 is a configuration diagram of a multimedia device according to an embodiment of the present invention.
FIG. 16 is a flowchart of a method of controlling a multimedia device according to an embodiment of the present invention.
FIG. 17 is a diagram illustrating a case where a specific area enlargement mode according to an embodiment of the present invention is activated.
FIG. 18 is a diagram illustrating the pointer shape being changed when a specific area enlargement mode according to an embodiment of the present invention is activated.
FIG. 19 is a diagram illustrating control of a screen when a specific area enlargement mode is activated according to an embodiment of the present invention.
FIG. 20 is a diagram illustrating moving a specific point on an enlarged screen with a pointer when a specific area enlargement mode according to an embodiment of the present invention is activated.
FIG. 21 is a diagram illustrating screen control using a remote controller when a specific area enlargement mode is activated according to an embodiment of the present invention.
FIG. 22 is a diagram illustrating automatic execution of a specific area enlargement mode in conjunction with EPG information according to an embodiment of the present invention.
FIG. 23 is a diagram illustrating the execution of a specific area enlargement mode and a time shift function in connection with each other according to an embodiment of the present invention.
FIG. 24 is a diagram illustrating switching between a full screen and a zoomed screen according to an embodiment of the present invention.
FIG. 25 is a diagram illustrating enlargement of several points in a screen according to an embodiment of the present invention.
FIG. 26 is a diagram illustrating enlargement by selecting various points on a screen according to an embodiment of the present invention.
FIG. 27 is a diagram illustrating a solution for the case where the remote control coordinates and the input video coordinates do not match according to an embodiment of the present invention.
FIG. 28 is a diagram illustrating a solution for the case where a specific area to be enlarged is out of the video output range according to an embodiment of the present invention.
FIG. 29 is a flowchart illustrating a method of dividing a screen into a predetermined number of divisions when outputting video data according to an embodiment of the present invention, and enlarging the selected screen when a user selects one of the divided screens.
FIG. 30 is a diagram showing that the controller divides the screen into four, nine, or sixteen divisions according to user selection when outputting video data according to an embodiment of the present invention.
FIG. 31 illustrates a process for adjusting the magnification factor during execution of a specific area enlargement mode according to an embodiment of the present invention.
FIG. 32 illustrates a process for selecting an enlarged area during execution of a specific area enlargement mode according to an embodiment of the present invention.
FIG. 33 illustrates a process for disabling an associated indicator during execution of a specific area enlargement mode according to an embodiment of the present invention.
FIG. 34 illustrates a process for redisplaying an associated indicator that has disappeared during a specific area enlargement mode according to an embodiment of the present invention.
FIG. 35 is a block diagram illustrating a display device according to an embodiment of the present invention.
FIG. 36 is a flowchart of a method of controlling a display device according to an embodiment of the present invention.
FIG. 37 is a flowchart of a method of controlling a display device according to an embodiment of the present invention.
FIG. 38 is a diagram illustrating generation of metadata when a specific object is specified by a pointer according to an embodiment of the present invention.
FIG. 39 is a diagram illustrating metadata generation for each object based on time when a specific object is specified by a pointer according to an embodiment of the present invention.
FIG. 40 is a diagram illustrating transmission of metadata from a broadcasting station and a manufacturer according to an embodiment of the present invention.
FIG. 41 is a diagram illustrating a screen displayed differently according to an embodiment of the present invention.
FIG. 42 is a diagram illustrating editing of metadata according to an embodiment of the present invention.
FIG. 43 is a diagram illustrating icon generation for each object using metadata according to an embodiment of the present invention.
FIG. 44 is a diagram illustrating an embodiment of personalization using metadata according to an embodiment of the present invention.
FIG. 45 is a diagram illustrating the identification of family members using a specific application according to an embodiment of the present invention.
Hereinafter, the present invention will be described in detail with reference to the drawings.
The suffix "module" and " part "for components used in the following description are given merely for ease of description, and the" module "and" part "
Meanwhile, the video display device described in the present invention is an intelligent video display device that adds a computer support function to a broadcast receiving function, for example, and is equipped with an Internet function while being faithful to a broadcast receiving function, Or an interface that is more convenient to use than a space remote controller or the like. Also, it can be connected to the Internet and a computer by the support of a wired or wireless Internet function, and can perform functions such as e-mail, web browsing, banking or game. A standardized general-purpose OS can be used for these various functions.
Therefore, the video display device described in the present invention can freely add or delete various applications, for example, on a general-purpose OS kernel, so that various user-friendly functions can be performed. More specifically, the video display device may be, for example, a network TV, an HBBTV, a smart TV, or the like, and may be applied to a smartphone according to circumstances.
The terms used in the present invention are selected from general terms that are currently in wide use, in consideration of their function in the present invention, but they may vary depending on the intention or custom of a person skilled in the art or the emergence of new technologies. Also, in certain cases there may be a term chosen arbitrarily by the applicant, in which case its meaning will be described in the corresponding part of the description. Therefore, the terminology used herein should be interpreted based on the meaning of the term and the entire contents of the specification, rather than simply on the name of the term.
The term "digital device" as used herein refers to any device that performs at least one of transmitting, receiving, processing, and outputting data, content, services, applications, and the like. The digital device can be paired or connected (hereinafter, 'pairing') with another digital device, an external server, or the like through a wired/wireless network, through which data can be transmitted/received. At this time, if necessary, the data may be appropriately converted before transmission/reception. Digital devices include standing devices such as a network TV, an HBBTV (Hybrid Broadcast Broadband TV), a smart TV, an IPTV (Internet Protocol TV), and a PC (Personal Computer), and mobile or handheld devices such as a PDA (Personal Digital Assistant), a smartphone, a tablet PC, and a notebook computer. To facilitate understanding of the present invention and the description, FIG. 2, described later, illustrates a digital TV and FIG. 3 illustrates a mobile device as embodiments of the digital device. In addition, the digital device described in this specification may be a configuration having only a panel, or a set configuration together with a set-top box (STB), another device, a system, or the like.
The term "wired/wireless network" as used herein collectively refers to communication networks that support various communication standards or protocols for pairing and/or data transmission/reception between digital devices or between a digital device and an external server. Such wired/wireless networks include all communication networks supported by current or future standards, and can support one or more communication protocols for them. Examples include communication standards or protocols for wired connection, such as USB (Universal Serial Bus), CVBS (Composite Video Banking Sync), Component, S-Video (analog), DVI (Digital Visual Interface), HDMI (High Definition Multimedia Interface), RGB, and D-SUB, and communication standards or protocols for wireless connection, such as Bluetooth, RFID (Radio Frequency Identification), IrDA (Infrared Data Association), UWB (Ultra Wideband), ZigBee, DLNA (Digital Living Network Alliance), WLAN (Wi-Fi), Wibro (Wireless broadband), Wimax (World Interoperability for Microwave Access), HSDPA (High Speed Downlink Packet Access), LTE/LTE-Advanced (Long Term Evolution), and Wi-Fi Direct.
In addition, when this specification refers simply to a digital device, the term may mean a standing device or a mobile device depending on the context, and means both unless specifically stated otherwise.
Meanwhile, a digital device is an intelligent device that supports, for example, a broadcast receiving function, a computer function or support, at least one external input, and the like, and can support e-mail, web browsing, banking, games, applications, and so on through a wired/wireless network. In addition, the digital device may include at least one input or control means (hereinafter, 'input means') such as an input device or a touch screen.
In addition, the digital device can use a standardized general-purpose OS (Operating System); in particular, the digital device described in this specification uses a Web OS as an embodiment. Therefore, the digital device can handle adding, deleting, amending, and updating various services or applications on a general-purpose OS kernel or a Linux kernel, through which a more user-friendly environment can be constructed and provided.
Meanwhile, the above-described digital device can receive and process an external input. Here, the external input includes an external input device, that is, any input means or digital device that is connected to the above digital device through a wired/wireless network and can transmit/receive and process data through it. For example, the external input may include devices such as an HDMI (High-Definition Multimedia Interface) device, a game device such as a PlayStation or an Xbox, a smartphone, a tablet PC, a pocket photo device, a digital camera, a printing device, a smart TV, and a Blu-ray device.
In addition, the term "server" as used herein refers to a digital device or system that supplies data to, or receives data from, a digital device, that is, a client, and may also be referred to as a processor. Examples of the server include a web server or portal server providing a web page, web content, or a web service, an advertising server providing advertising data, a content server providing content, an SNS server providing a social network service (SNS), a service server provided by a manufacturer, a VoD (Video on Demand) or streaming server providing a streaming service, and a service server providing a pay service of an MVPD (Multichannel Video Programming Distributor) and the like.
In addition, where only an application is mentioned in the following description for convenience of explanation, the meaning may, based on the context, include not only the application but also a service. Further, the application may refer to a web application according to the Web OS platform.
FIG. 1 is a schematic diagram illustrating a service system including a digital device according to an exemplary embodiment of the present invention.
Referring to FIG. 1, the service system includes a
The
The
The
The above-described
The
The
Meanwhile, the
In addition, the
In FIG. 1, the
FIG. 2 is a block diagram illustrating a digital device according to an exemplary embodiment of the present invention.
The digital device described herein corresponds to the
The
The
The TCP /
The
The
The
The audio /
The application manager may include, for example, the
The
The
The
The
The
The
The SI &
The SI &
Meanwhile, the
FIG. 3 is a block diagram illustrating a digital device according to another embodiment of the present invention.
While FIG. 2 described above illustrates a standing device as an example of a digital device, FIG. 3 shows a mobile device as another embodiment of a digital device.
Referring to FIG. 3, the
Hereinafter, each component will be described in detail.
The
The
The broadcast-related information may mean information related to a broadcast channel, a broadcast program, or a broadcast service provider. The broadcast-related information may also be provided through a mobile communication network. In this case, it may be received by the
The broadcast-related information may exist in various forms, for example, in the form of an EPG (Electronic Program Guide) or an ESG (Electronic Service Guide).
The
The broadcast signal and / or broadcast related information received through the
The
The
The short-
The
The A /
The image frame processed by the
The
The user input unit 330 generates input data for user's operation control of the terminal. The user input unit 330 may include a key pad, a dome switch, a touch pad (static pressure / static electricity), a jog wheel, a jog switch, and the like.
The
The
The
The
Some of these displays may be transparent or light transmissive so that they can be seen through. This can be referred to as a transparent display, and a typical example of the transparent display is TOLED (Transparent OLED) and the like. The rear structure of the
There may be two or
The
The touch sensor may be configured to convert a change in a pressure applied to a specific portion of the
If there is a touch input to the touch sensor, the corresponding signal (s) is sent to the touch controller. The touch controller processes the signal (s) and transmits corresponding data to the
A
Examples of the proximity sensor include a transmission type photoelectric sensor, a direct reflection type photoelectric sensor, a mirror reflection type photoelectric sensor, a high frequency oscillation type proximity sensor, a capacitive proximity sensor, a magnetic proximity sensor, and an infrared proximity sensor. When the touch screen is electrostatic, it is configured to detect the proximity of the pointer by the change in the electric field according to the proximity of the pointer. In this case, the touch screen (touch sensor) may be classified as a proximity sensor.
Hereinafter, for convenience of explanation, the act of recognizing that the pointer is positioned over the touch screen without the pointer contacting the touch screen is referred to as a "proximity touch," and the act of actually bringing the pointer into contact with the touch screen is referred to as a "contact touch." The position at which a proximity touch is made with the pointer on the touch screen means the position at which the pointer vertically corresponds to the touch screen when the pointer makes the proximity touch.
The proximity sensor detects a proximity touch and a proximity touch pattern (e.g., a proximity touch distance, a proximity touch direction, a proximity touch speed, a proximity touch time, a proximity touch position, a proximity touch movement state, and the like). Information corresponding to the detected proximity touch operation and the proximity touch pattern may be output on the touch screen.
The
The
The
The
The
The
The identification module is a chip for storing various information for authenticating the usage right of the
The
The
The
The various embodiments described herein may be implemented in a recording medium readable by a computer or similar device using, for example, software, hardware, or a combination thereof.
According to a hardware implementation, the embodiments described herein may be implemented using at least one of ASICs (application specific integrated circuits), DSPs (digital signal processors), DSPDs (digital signal processing devices), PLDs (programmable logic devices), FPGAs (field programmable gate arrays), processors, controllers, micro-controllers, microprocessors, and electrical units for performing other functions. In some cases, the embodiments described herein may be implemented by the controller itself.
According to a software implementation, embodiments such as the procedures and functions described herein may be implemented as separate software modules. Each of the software modules may perform one or more of the functions and operations described herein. Software code may be implemented as a software application written in a suitable programming language. Here, the software code may be stored in the memory and executed by the controller.
FIG. 4 is a block diagram illustrating a digital device according to another embodiment of the present invention.
Another example of the
The
For example, if the received RF broadcast signal is a digital broadcast signal, the signal is converted into a digital IF signal (DIF). If the received RF broadcast signal is an analog broadcast signal, the signal is converted into an analog baseband image or a voice signal (CVBS / SIF). That is, the
In addition, the
The
The
The stream signal output from the
The external
The external
The A/V input/output unit may include a USB terminal, a CVBS (Composite Video Banking Sync) terminal, a component terminal, an S-video (analog) terminal, a DVI (Digital Visual Interface) terminal, an HDMI (High Definition Multimedia Interface) terminal, an RGB terminal, a D-SUB terminal, and the like.
The wireless communication unit can perform short-range wireless communication with another digital device. The
Also, the external
Meanwhile, the external
The
The
Meanwhile, the
In addition, the
The
The
The
In addition, the
The
FIG. 4 illustrates an embodiment in which the
The user
For example, the user
In addition, the user
The user
The
The video signal processed by the
The audio signal processed by the
Although not shown in FIG. 4, the
The
The
For example, the
The
On the other hand, the
In addition, the
On the other hand, when entering the application view item, the
The
Although not shown in the drawing, a channel browsing processing unit for generating a channel signal or a thumbnail image corresponding to an external input signal may be further provided.
The channel browsing processing unit receives a stream signal TS output from the
The
The
Meanwhile, the
The audio output unit 485 receives a signal processed by the
In order to detect the gesture of the user, a sensing unit (not shown) having at least one of a touch sensor, a voice sensor, a position sensor, and an operation sensor may be further included in the
On the other hand, a photographing unit (not shown) for photographing a user may be further provided. The image information photographed by the photographing unit (not shown) may be input to the
The
The power supply unit 490 supplies the corresponding power to the
Particularly, it is possible to supply power to a
To this end, the power supply unit 490 may include a converter (not shown) for converting AC power to DC power. Meanwhile, for example, when the
The
Also, the
The
In addition, unlike what is shown in FIG. 4, the digital device according to the present invention may omit some of the illustrated components or may further include components not illustrated. On the other hand, unlike the above, the digital device may not include a tuner and a demodulator, and may instead receive and reproduce content through the network interface unit or the external device interface unit.
FIG. 5 is a block diagram illustrating a detailed configuration of the control unit of FIGS. 2 to 4 according to an embodiment of the present invention.
An example of the control unit includes a
The
The
The video decoder 425 decodes the demultiplexed video signal, and the
The
On the other hand, the video signal decoded by the
The
The
A frame rate conversion unit (FRC) 555 converts the frame rate of an input image. For example, the frame rate conversion unit 555 may convert an input frame rate of 60 Hz into an output frame rate of 120 Hz or 240 Hz according to the output frequency of the display unit, for instance by inserting additional frames between the original frames.
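Purely as an illustration of the frame-insertion idea behind frame rate conversion, the sketch below doubles a frame sequence's rate by repeating each frame; actual FRC hardware typically synthesizes interpolated frames via motion estimation rather than duplicating them, and all names here are hypothetical.

```typescript
// Minimal sketch: double the frame rate by repeating each frame.
// Real frame rate converters usually insert interpolated frames
// (motion estimation/compensation) instead of duplicates.
type Frame = Uint8Array; // one decoded picture; pixel layout unspecified here

function doubleFrameRate(frames: Frame[]): Frame[] {
  const out: Frame[] = [];
  for (const f of frames) {
    out.push(f); // original frame
    out.push(f); // inserted frame (a repeat here; interpolated in practice)
  }
  return out; // e.g., a 60 Hz input becomes a 120 Hz output
}
```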
The
On the other hand, the voice processing unit (not shown) in the control unit can perform the voice processing of the demultiplexed voice signal. Such a voice processing unit (not shown) may support processing various audio formats. For example, even when a voice signal is coded in a format such as MPEG-2, MPEG-4, AAC, HE-AAC, AC-3, or BSAC, a corresponding decoder can be provided.
In addition, the voice processing unit (not shown) in the control unit can process bass, treble, volume control, and the like.
A data processing unit (not shown) in the control unit can perform data processing of the demultiplexed data signal. For example, the data processing unit can decode the demultiplexed data signal even when it is coded. Here, the encoded data signal may be EPG information including broadcast information such as a start time and an end time of a broadcast program broadcasted on each channel.
On the other hand, the above-described digital device is an example according to the present invention, and each component can be integrated, added, or omitted according to specifications of a digital device actually implemented. That is, if necessary, two or more components may be combined into one component, or one component may be divided into two or more components. In addition, the functions performed in each block are intended to illustrate the embodiments of the present invention, and the specific operations and devices thereof do not limit the scope of rights of the present invention.
Meanwhile, the digital device may be a video signal processing device that performs signal processing on an image stored in the device or an input image. Other examples of the video signal processing device include a set-top box (STB), a DVD player, a Blu-ray player, a game device, a computer, and the like.
FIG. 6 is a diagram illustrating input means coupled to the digital device of FIGS. 2 through 4 according to one embodiment of the present invention.
A front panel (not shown) or a control means (input means) provided on the
The control means includes a
The input means may employ, as needed, at least one of communication protocols such as Bluetooth, RFID (Radio Frequency Identification), IrDA (Infrared Data Association), UWB (Ultra Wideband), ZigBee, and DLNA (Digital Living Network Alliance) to communicate with the digital device.
The
The
Since the
On the other hand, the control means such as the
The digital device described in this specification uses a Web OS as its OS and/or platform. Hereinafter, processing such as Web OS-based configurations or algorithms can be performed in the control unit of the above-described digital device. Here, the control unit is used as a broad concept including the control units of FIGS. 2 to 5 described above. Accordingly, in the following, the hardware and components, including related software, firmware, and the like, for processing Web OS-based services, applications, content, and the like in the digital device are referred to as the controller.
Such a Web OS-based platform is intended to enhance development independence and functional expandability by integrating services, applications, and the like based on, for example, a Luna-service bus, and can also increase application development productivity. In addition, multitasking can be supported by efficiently utilizing system resources and the like through Web OS process and resource management.
Meanwhile, the Web OS platform described in this specification can be used not only in standing devices such as a PC, a TV, and a set-top box (STB) but also in mobile devices such as mobile phones, smartphones, tablet PCs, notebooks, and wearable devices.
The structure of software for digital devices was conventionally a monolithic structure based on a single process and a closed product using multi-threading, solving problems case by case and depending on the market. Since then, development has moved to new platform-based development, pursuing cost innovation through chipset replacement and efficiency in UI application and external application development, and, through layering and componentization, to a layered structure and an add-on structure for add-ons, a single-source product, and open applications. More recently, the software architecture has been providing a modular architecture of functional units, a Web Open API (Application Programming Interface) for an eco-system, and a modular design for a game engine and a Native Open API, and accordingly a multi-process structure based on a service structure is being created.
FIG. 7 is a diagram illustrating a Web OS architecture according to an embodiment of the present invention.
Referring to FIG. 7, the architecture of the Web OS platform will be described as follows.
The platform can be largely divided into a kernel, a system library based Web OS core platform, an application, and a service.
The architecture of the Web OS platform is a layered structure, with the OS at the lowest layer, the system library(s) at the next layer, and the applications at the top.
First, the lowest layer includes a Linux kernel as an OS layer, and can include Linux as an OS of the digital device.
Above the OS layer are provided a BSP (Board Support Package)/HAL (Hardware Abstraction Layer) layer, a Web OS core modules layer, a service layer, a Luna-Service Bus layer, and a Native Developer Kit/QT layer, with the application layer at the top.
Meanwhile, some layers of the above-described web OS layer structure may be omitted, and a plurality of layers may be one layer, or one layer may be a plurality of layer structures.
The Web OS core module layer may be composed of an LSM (Luna Surface Manager) for managing surface windows and the like, a SAM (System & Application Manager) for managing the execution and execution status of applications, and a WAM (Web Application Manager) for managing web applications and the like based on WebKit.
The LSM manages the application windows displayed on the screen. The LSM manages the display hardware (Display HW), provides a buffer for rendering the content required by applications, and composes the rendering results of a plurality of applications so that they can be output on the screen.
The SAM manages various conditional execution policies of the system and the application.
WAM, on the other hand, is based on the Enyo Framework, which can be regarded as a basic application for web applications.
An application uses a service via the Luna-service bus: a new service can be registered on the bus, and an application can find and use the service it needs.
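The register-then-call pattern on a service bus can be modeled as follows. This is a schematic sketch only: the bus object, URIs, and method names are hypothetical stand-ins, not the actual Web OS API.

```typescript
// Schematic model of a service bus: services register under a URI,
// and applications look them up and call methods by name.
type Handler = (payload: unknown) => unknown;

class ServiceBus {
  private services = new Map<string, Map<string, Handler>>();

  // A service registers its methods on the bus under a URI.
  register(uri: string, methods: Record<string, Handler>): void {
    this.services.set(uri, new Map(Object.entries(methods)));
  }

  // An application finds a registered service and calls one of its methods.
  call(uri: string, method: string, payload: unknown): unknown {
    const handler = this.services.get(uri)?.get(method);
    if (!handler) throw new Error(`no such service/method: ${uri}/${method}`);
    return handler(payload);
  }
}

// Usage with a hypothetical audio service:
const bus = new ServiceBus();
bus.register("luna://example.audio", {
  setVolume: (p) => ({ returnValue: true, volume: (p as { volume: number }).volume }),
});
console.log(bus.call("luna://example.audio", "setVolume", { volume: 11 }));
```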
The service layer may include various service level services such as TV service and Web OS service. Meanwhile, the web OS service may include a media server, a Node.JS, and the like. In particular, the Node.JS service supports, for example, javascript.
A Web OS service can communicate over the bus with a Linux process that implements its function logic. Such services can be divided into four parts: the TV process and services migrated from the existing TV to the Web OS, services that are differentiated for each manufacturer, the manufacturer's common services, and Node.js services that are developed in JavaScript and used through Node.js.
The application layer may include all applications that can be supported in a digital device, such as a TV application, a showcase application, a native application, a web application, and the like.
An application on the Web OS can be divided into a Web application, a PDK (Palm Development Kit) application, a QT (Qt Meta Language or Qt Modeling Language) application and the like depending on an implementation method.
The web application is based on the WebKit engine and is executed on the WAM runtime. Such web applications can be based on the Enyo framework, or can be developed and run based on plain HTML5, CSS (Cascading Style Sheets), and JavaScript.
The PDK application includes native applications developed in C/C++ based on the PDK provided for third-party or external developers. The PDK refers to a development library and tool set provided so that a third party, such as a game developer, can develop a native application in C/C++. For example, the PDK can be used to develop applications whose performance is critical.
The QML application is a Qt-based native application and includes a basic application provided with a web OS platform such as a card view, a home dashboard, a virtual keyboard, and the like. Here, QML is a mark-up language in the form of a script instead of C ++.
In the meantime, the native application is an application that is developed and compiled in C / C ++ and executed in a binary form. The native application has a high speed of execution.
FIG. 8 is a diagram illustrating the architecture of a Web OS device according to an exemplary embodiment of the present invention.
FIG. 8 is a block diagram based on the runtime of the Web OS device, which can be understood with reference to the layered structure of FIG. 7.
The following description will be made with reference to FIGS. 7 and 8.
Referring to FIG. 8, services and applications and WebOS core modules are included on the system OS (Linux) and system libraries, and communication between them can be done via the Luna-Service bus.
These include Web OS services such as Node.js services based on HTML5, CSS, and JavaScript (e-mail, contacts, calendar, etc.), logging, backup, file notify, database (DB), activity manager, system policy, audio daemon (AudioD), update, and the media server; TV services such as EPG (Electronic Program Guide), PVR (Personal Video Recorder), data broadcasting, voice recognition, Now on, Notification, and search; CP services such as ACR (Auto Content Recognition), CBOX (Contents List Browser), wfdd, DMR, Remote Application, download, and SDPIF (Sony Philips Digital Interface Format); native applications such as PDK applications and QML applications; and TV UI-related applications and web applications based on the Enyo framework. These are processed via the Luna-service bus through the Web OS core modules such as the aforementioned SAM, WAM, and LSM. Meanwhile, in the above, the TV applications and web applications are not necessarily Enyo-framework-based or UI-related.
CBOX can manage the list and metadata of external device contents such as USB, DLNA, cloud etc. connected to TV. Meanwhile, the CBOX can output a content listing of various content containers such as a USB, a DMS, a DVR, a cloud, etc. to an integrated view. In addition, CBOX can display various types of content listings such as pictures, music, video, and manage the metadata. In addition, the CBOX can output the contents of the attached storage in real-time. For example, when a storage device such as a USB is plugged in, the CBOX must be able to immediately output the content list of the storage device. At this time, a standardized method for processing the content listing may be defined. In addition, CBOX can accommodate various connection protocols.
The SAM is intended to improve modularity and scalability. For example, while the existing system manager handled various functions such as the system UI, window management, the web application runtime, and UX constraint processing in a single process, the SAM separates these main functions to reduce the large implementation complexity, and clarifies the implementation interface by clarifying the responsibility of each function.
The LSM supports independent development and integration of system UX implementations such as the card view and the launcher, and makes it easy to respond to changes in product requirements. The LSM can make multitasking possible by making the most of hardware resources (HW resources) when synthesizing a plurality of application screens, such as app-in-app, and can provide a window management mechanism.
The LSM supports the implementation of a system UI based on QML and improves its development productivity. Based on MVC, QML UX can easily construct views for layouts and UI components, and code for handling user input can be developed easily. Meanwhile, the interface between QML and the Web OS components is achieved via a QML extension plug-in, and the graphic operations of an application can be based on the Wayland protocol, Luna-service calls, and the like.
LSM is an abbreviation of Luna Surface Manager, as described above, which functions as an Application Window Compositor.
LSM synthesizes independently generated applications, UI components, etc. on the screen and outputs them. In this regard, when components such as recents applications, showcase applications, launcher applications, etc. render their own contents, the LSM defines the output area, interworking method, etc. as a compositor. In other words, the compositor LSM handles graphics synthesis, focus management, input events, and so on. At this time, the LSM receives events, focus, etc. from the input manager. These input managers can include HIDs such as remote controllers, mouse & keyboards, joysticks, game pads, application remotes, and pen touches.
As such, the LSM supports multiple window models, and these can run simultaneously in all applications due to the nature of the system UI. In this regard, the LSM can also support various functions and inputs such as the launcher, recents, settings, notification, the system keyboard, the volume UI, search, finger gestures, voice recognition (STT (Speech to Text), TTS (Text to Speech), etc.), pattern gestures (camera, MRCU (Mobile Radio Control Unit)), the Live menu, ACR (Auto Content Recognition), and the like.
FIG. 9 is a diagram illustrating a graphic composition flow in a Web OS device according to an embodiment of the present invention.
Referring to FIG. 9, the graphic composition processing includes a
When the web application-based graphic data (or application) is generated as a UI process in the
The
On the other hand, the full-screen application is passed directly to the
The graphics manager processes all the graphic data in the Web OS device, including the data passed through the LSM GM surface described above and the data passed through the WAM GM surface, as well as GM-surface data from the data broadcasting application, the caption application, and the like, and outputs the received graphic data appropriately on the screen. Here, the functions of the GM compositor are the same as or similar to those of the compositor described above.
FIG. 10 is a view for explaining a media server according to an embodiment of the present invention, FIG. 11 is a configuration block diagram of a media server according to an embodiment of the present invention, and FIG. 12 is a diagram illustrating the relationship between a media server and a TV service according to an embodiment of the present invention.
The media server supports the execution of various multimedia in the digital device and manages the necessary resources. The media server can efficiently use the hardware resources required for media playback; for example, it requires audio/video hardware resources for multimedia execution and can manage the resource usage status to utilize them efficiently. In general, a standing device with a screen larger than that of a mobile device needs more hardware resources to execute multimedia, and must encode/decode and transmit large amounts of data at high speed. Meanwhile, in addition to streaming and file-based playback, the media server performs tasks such as broadcasting, recording, and tuning, recording simultaneously with viewing, and simultaneously displaying the sender and recipient screens during a video call. However, the media server is limited by hardware resources such as encoders, decoders, tuners, and display engines, so it is difficult to execute many tasks at the same time; for example, it restricts the usage scenario or processes tasks by receiving a user selection as input.
The media server can enhance system stability because, for example, a pipeline in which an error occurs during media playback can be removed and restarted on a per-pipeline basis, so that such an error does not affect other media playback. Such a pipeline is a chain that links unit functions such as decoding, analysis, and output when media playback is requested, and the necessary unit functions may vary according to the media type and the like.
The media server may have extensibility, for example, adding a new type of pipeline without affecting existing implementations. As an example, the media server may accommodate a camera pipeline, a video conference (Skype) pipeline, a third-party pipeline, and the like.
The media server can process general media playback and TV task execution as separate services, because the interface of the TV service is different from that of media playback. The media server supports operations such as 'setchannel', 'channelup', 'channeldown', 'channeltuning', and 'recordstart' in connection with the TV service, and supports operations such as 'play', 'pause', and 'stop' in connection with general media playback; these different sets of operations can be supported and processed as separate services.
The media server can control or integrally manage the resource management function. The allocation and recall of hardware resources in the device are performed integrally by the media server; in particular, the TV service process transfers the running tasks and the resource allocation status to the media server. The media server obtains resources and executes a pipeline each time media is executed; based on the resource status occupied by each pipeline, it permits execution according to the priority (e.g., policy) of the media execution request and performs resource recall from other pipelines. Here, predefined execution priorities and the resource information necessary for a specific request are managed by the policy manager, and the resource manager can communicate with the policy manager to process resource allocation, recall, and the like.
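The priority-based allocation and recall just described can be sketched roughly as below. The specific policy shown (recall resources from the lowest-priority pipeline first) is an assumption for illustration, not the actual Web OS policy, and all names are hypothetical.

```typescript
// Rough sketch of priority-based resource allocation with recall.
// The eviction policy (lowest-priority pipelines recalled first) is assumed.
interface Pipeline { id: string; priority: number; unitsHeld: number; }

class ResourceManager {
  private pipelines: Pipeline[] = [];
  constructor(private freeUnits: number) {}

  acquire(id: string, priority: number, units: number): boolean {
    // Recall resources from lower-priority pipelines until the request fits.
    while (this.freeUnits < units) {
      const victim = this.pipelines
        .filter((p) => p.priority < priority)
        .sort((a, b) => a.priority - b.priority)[0];
      if (!victim) return false; // nothing lower-priority to recall; deny request
      this.freeUnits += victim.unitsHeld;
      this.pipelines = this.pipelines.filter((p) => p !== victim);
    }
    this.freeUnits -= units;
    this.pipelines.push({ id, priority, unitsHeld: units });
    return true;
  }
}
```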
The media server may have an identifier (ID) for every operation related to playback. For example, the media server may issue a command indicating a particular pipeline based on the identifier. For the playback of two or more media, the media server may separate them into two pipelines.
The media server may be responsible for playback of the HTML 5 standard media.
In addition, the media server may handle the TV pipeline as a separate service process according to the TV restructuring scope. The media server can be designed regardless of the TV restructuring scope; if the TV is not a separate service process, the entire TV may need to be re-executed when there is a problem with a specific task.
The media server is also referred to as uMS, i.e., a micro media server. Here, the media player corresponds to a media client, which can mean, for example, a WebKit for an HTML5 video tag, a camera, a TV, Skype, a second screen, and the like.
In the media server, the management of micro resources, through components such as the resource manager and the policy manager, is a core function. In this regard, the media server also controls the playback of web standard media content. In addition, the media server may manage pipeline controller resources.
Such a media server supports, for example, extensibility, reliability, efficient resource usage, and the like.
In other words, the uMS, i.e., the media server, manages and controls the use of resources in the Web OS device, such as for cloud games, MVPDs (pay services), camera previews, second screens, Skype, and the like, in an overall manner so as to enable efficient use and proper processing. Meanwhile, each resource uses, for example, a pipeline, and the media server can manage and control the generation, deletion, and use of pipelines for resource management as a whole.
Here, a pipeline is generated, for example, when media related to a task starts a series of operations such as parsing of the request, decoding of the stream, and video output. For example, with respect to a TV service or application, watching, recording, channel tuning, and the like are each handled individually under the control of resource usage through a pipeline generated according to the corresponding request.
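A pipeline as a chain of unit functions (parse, decode, output) can be modeled minimally as below; the stage names, types, and placeholder bodies are illustrative only, since the real unit functions depend on the media type.

```typescript
// Minimal model of a media pipeline as a chain of unit functions.
type Stage<I, O> = (input: I) => O;

function chain<A, B, C>(first: Stage<A, B>, second: Stage<B, C>): Stage<A, C> {
  return (input) => second(first(input));
}

// Hypothetical unit functions for one playback request.
const parse: Stage<string, { codec: string; data: string }> =
  (request) => ({ codec: "h264", data: request });      // parse the request
const decode: Stage<{ codec: string; data: string }, string[]> =
  (stream) => [`frame(${stream.data})`];                // placeholder decode
const output: Stage<string[], void> =
  (frames) => frames.forEach((f) => console.log("display", f)); // video output

// Each task (watching, recording, ...) would get its own such chain.
const playbackPipeline = chain(chain(parse, decode), output);
playbackPipeline("file:///movie.ts");
```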
The processing structure and the like of the media server will be described in more detail with reference to FIG. 10.
Referring to FIG. 10, an application or service is connected to the media server 1020 via a Luna-service bus 1010, and the media server 1020 is in turn connected to, and manages, the pipelines generated through the Luna-service bus.
The application or service can have various clients depending on its characteristics and can exchange data with the media server 1020 or the pipeline through it.
The client includes, for example, a uMedia client (WebKit) and an RM (resource manager) client (C/C++) for connection with the media server 1020.
The application including the uMedia client is connected to the media server 1020, as described above. More specifically, the uMedia client corresponds to, for example, a video object to be described later, and the client uses the media server 1020 for the operation of video by a request or the like.
Here, the video operations relate to the video state, and may include state data for loading, unloading, play (playback or reproduce), pause, stop, and the like. Each such video operation or state can be processed through individual pipeline generation. Accordingly, the uMedia client sends the state data associated with the video operation to the media server.
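The state data a uMedia client might send can be sketched as a small message type; the text does not specify a payload format, so the shape below is assumed purely for illustration.

```typescript
// Assumed shape of the state data a uMedia client sends to the media server.
type VideoOp = "load" | "unload" | "play" | "pause" | "stop";

interface VideoStateMessage {
  mediaId: string; // identifier issued for this playback session
  op: VideoOp;     // requested operation / state transition
  uri?: string;    // source URI, needed for "load"
}

// Each operation can then be processed through individual pipeline handling.
function sendVideoOp(
  send: (msg: VideoStateMessage) => void,
  mediaId: string,
  op: VideoOp,
  uri?: string
): void {
  send({ mediaId, op, ...(uri ? { uri } : {}) });
}
```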
The
On the other hand, the
The
In addition, the pipeline may include, for example, a service-based pipeline (its own pipeline) and a URI-based pipeline (media pipeline).
Referring to FIG. 10, an application or service including an RM client may not be directly connected to the media server 1020. This is because the application or service may directly process the media. In other words, if the application or service directly processes the media, it may not pass through the media server. However, at this time, uMS connectors need to manage resources for pipeline creation and use. Meanwhile, when receiving a resource management request for direct media processing of the application or service, the uMS connector communicates with the media server 1020 including the
Accordingly, the application or service can respond to the request of the RM client by receiving the resource management of the
On the other hand, a URI-based pipeline is executed through the media server 1020, instead of the media being processed directly as with the RM client described above. Such URI-based pipelines may include a player factory, a GStreamer, a streaming plug-in, a DRM (Digital Rights Management) plug-in pipeline, and the like.
On the other hand, the interface method between application and media services may be as follows.
First, there is a method of interfacing with a service in a web application (a code sketch of this method follows the list of methods below). This is a way of making a Luna call using the PalmServiceBridge (PSB), or of using Cordova, which extends the display with video tags. In addition, there may be a method using the HTML5 standard for video tags or media elements.
Second, there is a method of interfacing with a service using the PDK.
Third, there is a method of using a service from an existing CP; existing platform plug-ins can be extended based on Luna for backward compatibility.
Finally, there is a method of interfacing in a non-Web OS environment. In this case, the interface can be made by calling the Luna bus directly.
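As an illustration of the first method, a Luna call from a web application conventionally goes through the PalmServiceBridge object provided by the Web OS web runtime. The sketch below assumes that runtime environment, and the service URI shown is hypothetical.

```typescript
// Sketch of a Luna call from a web app via the PalmServiceBridge,
// as provided by the Web OS web runtime. The service URI is hypothetical.
declare class PalmServiceBridge {
  onservicecallback: (response: string) => void;
  call(uri: string, paramsJson: string): void;
}

function lunaCall(uri: string, params: object): Promise<unknown> {
  return new Promise((resolve) => {
    const bridge = new PalmServiceBridge();
    bridge.onservicecallback = (response) => resolve(JSON.parse(response));
    bridge.call(uri, JSON.stringify(params));
  });
}

// Usage with a hypothetical service:
lunaCall("luna://com.example.service/getStatus", {}).then((r) => console.log(r));
```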
Seamless change is handled by a separate module (e.g., TVWIN), which is the process for first displaying the TV on the screen without the Web OS before or during Web OS boot. It is used to provide the basic functions of the TV service first, for a quick response to the user's power-on request, because the boot time of the Web OS is long. In addition, the module is part of the TV service process and supports seamless change, which provides quick boot and basic TV functions, as well as factory mode and the like. The module may also handle switching from the non-Web OS mode to the Web OS mode.
Referring to FIG. 11, a processing structure of a media server is shown.
In FIG. 11, the solid-line boxes represent process components, and the dashed boxes represent internal processing modules of a process. The solid arrows represent inter-process calls, that is, Luna service calls, and the dashed arrows may represent notifications such as register/notify or data flows.
A service or a web application or a PDK application (hereinafter referred to as an application) is connected to various service processing components via a luna-service bus, through which an application is operated or controlled.
The data processing path differs according to the type of application. For example, when the application handles image data related to the camera sensor, the image data is transmitted to the
Alternatively, when the application includes audio data, the audio processing unit (AudioD) 1140 and the audio module (PulseAudio) 1150 can process the audio. For example, the audio processing unit 1140 processes the audio data received from the application and transmits it to the audio module (PulseAudio) 1150.
Alternatively, when the application includes or processes DRM-attached content, the DRM service processing unit 1170 transmits the content data to the DRM
Hereinafter, processing in the case where the application is media data or TV service data (e.g., broadcast data) will be described.
FIG. 12 shows only the media server processing unit and the TV service processing unit in FIG. 11 described above in more detail.
Therefore, the following description will be made with reference to FIGS. 11 and 12.
First, when the application includes TV service data, it is processed in the TV service processing unit 1120/1220.
In FIG. 11, the TV service processing unit 1120 includes at least one of a DVR/channel manager, a broadcasting module, a TV pipeline manager, a TV resource manager, a data broadcasting module, an audio setting module, and a path manager. In FIG. 12, the TV service processing unit 1220 includes a TV broadcast handler, a TV broadcast interface, a service processing unit, TV middleware (TV MW), a path manager, and a BSP (for example, NetCast). Here, the service processing unit may be a module including, for example, a TV pipeline manager, a TV resource manager, a TV policy manager, a USM connector, and the like.
In this specification, the TV service processing unit may have the configuration shown in FIG. 11 or FIG. 12, or a combination of both, in which some components are omitted or components not shown are added.
Based on the attribute or type of the TV service data received from the application, the TV service processing unit 1120/1220 transmits the data to the DVR/channel manager when it is DVR (Digital Video Recorder) or channel-related data, and then to the TV pipeline manager to generate and process a TV pipeline. When the attribute or type of the TV service data is broadcast content data, the TV service processing unit 1120 generates and processes a TV pipeline through the TV pipeline manager so that the data is processed via the broadcasting module.
Alternatively, a JSON (JavaScript Object Notation) file or a file written in C is processed by the TV broadcast handler and transmitted to the TV pipeline manager through the TV broadcast interface to generate and process a TV pipeline. In this case, the TV broadcast interface may transmit the data or file that has passed through the TV broadcast handler to the TV pipeline manager based on the TV service policy, to be referred to when creating the pipeline.
The TV pipeline manager can be controlled by the TV resource manager when generating one or more pipelines in response to a TV pipeline creation request from a processing module or manager within the TV service. The TV resource manager, in turn, can be controlled by the TV policy manager to request the status and allocation of resources assigned to the TV service, and communicates with the media server processing unit 1110/1210 through the uMS connector. The resource manager in the media server processing unit 1110/1210 reports the status and allocation availability of resources for the current TV service in response to the TV resource manager's request. For example, if the resource manager in the media server processing unit 1110/1210 finds that all resources for the TV service are already allocated, it can notify the TV resource manager that all resources are currently in use. In that case, together with the notification, a predetermined TV pipeline may be removed according to a priority or predetermined criterion among the TV pipelines previously allocated for the TV service, and generation of the new TV pipeline may then be requested or allocated. Alternatively, the TV resource manager may remove, add, or otherwise control TV pipelines according to the status report of the resource manager in the media server processing unit 1110/1210.
Meanwhile, the BSP supports, for example, backward compatibility with existing digital devices.
The TV pipelines thus generated can be appropriately operated according to the control of the path manager in the process. The path manager can determine and control the processing path or process of the pipelines by considering not only the TV pipeline but also the operation of the pipeline generated by the media server processing unit 1110/1210.
Next, when the application includes media data rather than TV service data, it is processed by the media server processing unit 1110/1210. Here, the media server processing unit 1110/1210 includes a resource manager, a policy manager, a media pipeline manager, a media pipeline controller, and the like. The pipelines generated under the control of the media pipeline manager and the media pipeline controller can be of various kinds, such as a camera preview pipeline, a cloud game pipeline, and a media pipeline. The media pipeline may include a streaming protocol, an auto/static gstreamer, and DRM, which can be determined according to the control of the path manager. For the specific processing in the media server processing unit 1110 and/or 1210, refer to the description of FIG. 10 above; it is not repeated here.
In this specification, the resource manager in the media server processing unit 1110/1210 can perform resource management on, for example, a counter basis.
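For illustration, here is a minimal sketch of counter-based resource management under assumed names (the actual uMS resource manager interface is not specified in this document): each resource type carries a usage counter against a fixed capacity, and exhaustion triggers a notification so a policy layer can free a lower-priority pipeline and retry.

```typescript
// Hypothetical counter-based resource manager sketch; not the webOS API.
type OnExhausted = (resource: string) => void;

class CounterResourceManager {
  private counters = new Map<string, { used: number; capacity: number }>();

  constructor(private onExhausted: OnExhausted) {}

  register(resource: string, capacity: number): void {
    this.counters.set(resource, { used: 0, capacity });
  }

  // Returns true when the resource is granted; false when every unit is
  // already allocated (the caller may then remove a lower-priority
  // pipeline and retry, as described above).
  acquire(resource: string): boolean {
    const c = this.counters.get(resource);
    if (!c) throw new Error(`unknown resource: ${resource}`);
    if (c.used >= c.capacity) {
      this.onExhausted(resource);
      return false;
    }
    c.used += 1;
    return true;
  }

  release(resource: string): void {
    const c = this.counters.get(resource);
    if (c && c.used > 0) c.used -= 1;
  }
}

// Usage: a TV pipeline requests a tuner; when none remain, the policy
// layer is notified so it can tear down a lower-priority pipeline.
const rm = new CounterResourceManager((r) => console.log(`all ${r}s allocated`));
rm.register("tuner", 1);
console.log(rm.acquire("tuner")); // true
console.log(rm.acquire("tuner")); // false, triggers the notification
```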
FIG. 13 is a diagram illustrating a control method of a remote controller for controlling any one of the video display devices according to embodiments of the present invention.
FIG. 13A illustrates that a pointer corresponding to the remote controller is displayed on the display unit. The user can move or rotate the remote controller up and down, left and right, and back and forth, and the pointer displayed on the display unit moves in accordance with the motion of the remote controller. FIG. 13B illustrates that, when the user moves the remote controller to the left, for example, the pointer on the display unit also moves to the left correspondingly. Information on the motion of the remote controller, detected by its sensors, is transmitted to the display device, which calculates the coordinates of the pointer from the motion information and displays the pointer at the calculated coordinates. FIG. 13C illustrates a case where the user moves the remote controller away from or toward the display unit while pressing a specific button of the remote controller, whereby the area corresponding to the pointer can be zoomed in or out. On the other hand, when the specific button in the remote controller is pressed, the recognition of up/down and left/right movement may be suspended so that only the change in distance is recognized. The moving speed and moving direction of the pointer correspond to those of the remote controller. The pointer in the present specification means an object displayed on the display unit in accordance with the operation of the remote controller, and objects of various shapes are therefore possible.
FIG. 14 is an internal block diagram of a remote controller for controlling any one of the video display devices according to embodiments of the present invention.
Referring to FIG. 14, the remote controller includes a wireless communication unit, a user input unit, a sensor unit, an output unit, a power supply unit, a storage unit, and a controller. The wireless communication unit transmits and receives signals to and from the display device, for example through an RF module and/or an IR module. The user input unit, which may be composed of a keypad, buttons, a touch pad, a touch screen, or the like, receives the user's operation. The sensor unit, which may include a gyro sensor and an acceleration sensor, senses information about the motion of the remote controller. The output unit outputs a video or audio signal or a vibration corresponding to the operation of the user input unit or to a signal transmitted from the display device. The power supply unit supplies power to the remote controller and may cut off the power when the remote controller has not moved for a predetermined time, to reduce power consumption. The storage unit stores the programs and application data necessary for the control or operation of the remote controller. The controller controls all matters related to the control of the remote controller; for example, it transmits a signal corresponding to the operation of the user input unit, or a signal corresponding to the motion sensed by the sensor unit, to the display device through the wireless communication unit.
FIG. 15 is a configuration diagram of a multimedia device according to an embodiment of the present invention.
Referring to FIG. 15, a multimedia device according to an embodiment of the present invention includes a receiving module that receives video data, a decoder that decodes the video data, an interface module that communicates with an external device, a memory, a display module, and a controller that controls these modules. The controller decodes video data received from the outside or stored in the memory, displays the decoded video data in a first area, and, when a specific area enlargement mode is executed according to at least one command received from the external device, displays in a second area the reduced video data together with an indicator identifying the specific area to be enlarged.
According to another embodiment of the present invention, the above-described process can equally be applied to video data stored in the memory, rather than video data received from the outside.
The above-described indicator is implemented, for example, as a graphic image of a guide box indicating the specific area to be enlarged or already enlarged. This will be described later in more detail.
The controller changes the video data displayed in the first area according to at least one of the position and the size of the indicator. If the magnification or reduction ratio of the video data displayed in the first area is changed according to at least one command received from the external device after the specific area enlargement mode is executed, the controller automatically changes the size of the indicator in the second area; the details will be described later. If the specific area to be enlarged is changed in the first area according to at least one command received from the external device after the specific area enlargement mode is executed, the controller automatically changes the center point of the indicator in the second area; this will also be described later in more detail.
The controller can also switch the video data displayed in the first area and the second area with each other according to a predetermined command. Further, the controller changes the position or size of the indicator based on information received from the external device.
The position or size of the indicator is changed based on, for example, information obtained from a touch sensor or a motion sensor of the external device. The external device can be designed, for example, with reference to Figs. 6, 13, and 14 described above. More specifically, for example, the external device corresponds to a remote controller or a mobile device including at least one of an RF (Radio Frequency) module and an IR (Infrared) module.
The above-mentioned first area corresponds to, for example, the entire screen of the television, and the second area corresponds to, for example, a part of the area included in the first area. This will be described later in detail.
FIG. 16 is a flowchart of a method of controlling a multimedia device according to an embodiment of the present invention. Those skilled in the art may supplement FIG. 16 with reference to the foregoing description.
The multimedia device according to an embodiment of the present invention decodes video data received from the outside or stored in a memory (S1610), displays the decoded video data in a first area (S1620), and receives at least one command from an external device (S1630). The multimedia device corresponds to, for example, a television or an STB.
Further, the multimedia device executes a specific area enlargement mode according to the at least one command received from the external device (S1640), and displays, in a second area, video data corresponding to the video data of the first area (S1650).
The second area includes an indicator, and the video data displayed in the first area is changed according to at least one of the position and the size of the indicator. This will be described later in detail.
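As a rough sketch of this relationship, under assumed names (the document does not specify an API): the indicator rectangle, expressed in original-video coordinates, can be treated as the source region that is cropped and scaled to fill the first area, which is why a smaller indicator corresponds to a larger enlargement ratio.

```typescript
// Hypothetical geometry sketch: the indicator is the source region shown
// enlarged in the first area.
interface Rect { x: number; y: number; w: number; h: number; }

// Effective enlargement ratio implied by the indicator's size.
function effectiveZoom(frame: Rect, indicator: Rect): number {
  return Math.min(frame.w / indicator.w, frame.h / indicator.h);
}

const frame: Rect = { x: 0, y: 0, w: 1920, h: 1080 };
const indicator: Rect = { x: 600, y: 300, w: 960, h: 540 };
console.log(effectiveZoom(frame, indicator)); // 2 -> the first area shows a 2x view
```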
FIG. 17 is a diagram illustrating a case where the specific area enlargement mode according to an embodiment of the present invention is activated.
Referring to FIG. 17, when a command for activating the specific area enlargement mode is received from the external device, the controller executes the mode. When the specific area enlargement mode is activated, the controller may display a notification message indicating that the mode has been entered, together with a guide describing how to adjust the enlargement ratio, for example with the wheel key of the remote controller. Further, referring to FIG. 17, the controller changes the shape of the pointer to indicate that the specific area enlargement mode is active.
FIG. 18 is a diagram illustrating the pointer shape being changed when the specific area enlargement mode according to an embodiment of the present invention is activated.
Referring to FIG. 18, when a command for activating the specific area enlargement mode is received from the external device, the controller changes the pointer to a magnifying-glass shape. For example, if the enlargement ratio is increased in the specific area enlargement mode, the shape of the magnifying-glass pointer is changed to indicate the increased ratio. When the enlargement ratio is reduced in the specific area enlargement mode, the shape of the pointer is changed in the opposite way to indicate the reduced ratio.
Therefore, according to an embodiment of the present invention, when the specific area enlargement mode is activated, the pointer changes to a magnifying-glass shape, and the pointer shape further changes as the enlargement ratio increases or decreases, so that the user intuitively knows whether the enlargement ratio has changed, which improves user convenience.
FIG. 19 is a diagram illustrating control of a screen when a specific area enlargement mode is activated according to an embodiment of the present invention. Of course, it is also possible to simply name it "enlargement mode" instead of "specific area enlargement mode" for convenience.
First, a display device according to an embodiment of the present invention displays content on a first area corresponding to the main screen. When the specific area enlargement mode is executed, the display device displays a second area, which contains an indicator, together with the first area. To enlarge a selected specific area of the displayed content, the display device magnifies the selected specific area and displays it on the first area, while the second area shows the reduced original content together with the indicator marking the enlarged area. Of course, for convenience of explanation, the first area is described below as the main screen and the second area as a PIP-type sub screen, but the present invention is not limited thereto.
More specifically, for example, when video data included in a broadcast signal is output on the main screen, the specific area enlargement mode can be executed according to a command received from the external device. Further, an enlarge button for entering the mode may be displayed on the screen, and the controller executes the specific area enlargement mode when the enlarge button is selected by the pointer. The controller then displays the second area containing the indicator. For example, the controller displays the reduced original video data in the second area. Also, a ratio display bar including an enlargement button and a reduction button for changing the screen enlargement ratio is displayed on the screen. Here, the minimum value of the ratio selectable through the ratio display bar corresponds to, for example, 1x, and the maximum value corresponds to, for example, 5x.
For example, when the enlargement button is selected, the controller increases the enlargement ratio, and when the reduction button is selected, the controller decreases it. Further, the controller enlarges the area specified by the indicator and displays it in the first area. The controller moves the indicator within the second area according to commands received from the external device. For example, the controller moves the enlarged area of the first area in accordance with the movement of the indicator. For example, if the indicator is moved to another position in the second area, the area of the original video data corresponding to the new position is enlarged and displayed in the first area. The controller automatically adjusts the size of the indicator when the enlargement ratio is changed. The controller also indicates, within the second area, the position of the currently enlarged area relative to the entire screen. Therefore, by displaying the changed position and size of the indicator in the second area, the user can easily recognize which portion of the entire screen is currently enlarged.
To summarize, when the specific area enlargement mode is executed, the original video data is output to both the first area and the second area. However, once the specific area to be enlarged is confirmed, the controller outputs the enlarged video data of the specific area to the first area, while continuing to output the reduced original video data, together with the indicator, to the second area. Further, when the enlargement/reduction ratio is adjusted by using the ratio display bar or the wheel key of the remote controller, the controller changes the enlargement ratio of the video data displayed in the first area accordingly. The indicator in the second area marks the area that is enlarged in the first area. The size of the indicator is automatically changed in accordance with the adjusted enlargement ratio.
Further, although not shown in FIG. 16, those skilled in the art may refer to FIG. 19 described above: the control method may further comprise receiving a first enlargement level for enlarging the displayed content, and enlarging the selected specific area of the displayed content based on the received first enlargement level.
The control method may further comprise displaying the indicator in the second area. Moving the indicator according to the pointer changes the specific area that is enlarged in the first area. The method may further comprise changing the enlargement level according to a command received from the remote controller. The controller increases the size of the indicator when the enlargement level is decreased, and decreases the size of the indicator when the enlargement level is increased. The enlarged area in the first area is updated in real time as the indicator is moved.
The method further comprises the step of converting the coordinate information of the pointer, which moves according to the motion of the remote controller, in accordance with the video data of the content output on the main screen: if the resolution information of the video data of the content corresponds to HD (High Definition), scaling the coordinate information of the pointer by 0.66; if the resolution information corresponds to FHD (Full High Definition), scaling the coordinate information of the pointer by 1; and if the resolution information corresponds to UHD (Ultra High Definition), scaling the coordinate information of the pointer by 2. This will be described in more detail below with reference to FIG. 27.
The controller controls the display module to display the enlarged area based on the converted coordinate information.
FIG. 20 illustrates moving the enlarged area according to the pointer when the specific area enlargement mode according to an embodiment of the present invention is activated.
Referring to FIG. 20, the controller moves the area to be enlarged on the main screen in accordance with the movement of the pointer, so that the specific point indicated by the pointer becomes the center of the enlarged screen. Further, according to another embodiment of the present invention, it is possible to select the center point of the specific area to be enlarged in the second area by using the pointer.
FIG. 21 is a diagram illustrating screen control using a remote controller when the specific area enlargement mode is activated according to an embodiment of the present invention. As described above, the multimedia device (TV or STB) according to an embodiment of the present invention is controlled by an external device, and the external device corresponds to a remote controller or a mobile device. Although FIG. 21 illustrates a remote controller as an example of the external device, the scope of the present invention is not limited to a remote controller alone.
According to one embodiment of the present invention, the external device includes a wheel key and direction keys, and transmits corresponding commands to the multimedia device. The controller changes the enlargement ratio according to the manipulation of the wheel key of the remote controller. For example, when the wheel key is rolled in one direction the controller increases the enlargement ratio, and when it is rolled in the other direction the controller decreases it.
The user can change the screen enlargement ratio from 1x to 5x with the wheel key of the remote controller, and the ratio changes by 0.2x each time the wheel key is moved by one unit. The screen enlargement ratio is designed not to be fixed but to be changeable by user setting.
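A small worked example of the wheel arithmetic above, assuming the 0.2x step and the 1x–5x bounds stated there (function names are illustrative):

```typescript
// Map wheel movement to an enlargement ratio: 0.2x per unit, clamped to 1x-5x.
function zoomRatioFromWheel(current: number, units: number, step = 0.2): number {
  const next = current + units * step;
  // Round to one decimal place to avoid floating-point drift.
  return Math.min(5, Math.max(1, Math.round(next * 10) / 10));
}

console.log(zoomRatioFromWheel(1.0, 3)); // 1.6
console.log(zoomRatioFromWheel(4.8, 2)); // 5 (clamped to the maximum)
```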
The controller moves the indicator in the second area according to the movement of the pointer. For example, when the pointer is placed on the indicator and a movement input is received, the controller moves the indicator to the corresponding position. When the indicator is moved, the area of the original video data corresponding to the new position of the indicator is enlarged and displayed in the first area. The position and size of the indicator are changed based on information obtained from the touch sensor or motion sensor of the external device. When the direction keys of the remote controller are manipulated, the controller also moves the indicator in the corresponding direction. For example, when the left direction key is pressed, the indicator moves to the left and the enlarged area displayed in the first area moves correspondingly. The controller may additionally move the enlarged area in accordance with the motion of the remote controller. For example, when the user moves the remote controller, the pointer moves according to the motion and the enlarged area follows the pointer. The controller thus allows both the enlargement ratio and the enlarged position to be controlled with a single remote controller.
FIG. 22 is a diagram illustrating automatic execution of the specific area enlargement mode in conjunction with EPG information according to an embodiment of the present invention.
As shown in FIG. 22, the EPG signal processing module extracts genre information about the video data from a received EPG signal. The controller automatically executes or terminates the specific area enlargement mode based on the extracted genre information. For example, the controller automatically executes the specific area enlargement mode when the genre of the video data corresponds to sports, and automatically turns the mode off for other genres. In addition, the controller may automatically execute the specific area enlargement mode only for categories set in advance by the user.
Therefore, according to one embodiment of the present invention, by designing the specific area enlargement mode to be automatically turned on or off according to the category information (genre information) of the video data, the time required to enter the mode can be minimized and the possibility of erroneous operation can be reduced.
FIG. 23 is a diagram showing the execution of the specific area enlargement mode and a time shift function in connection with each other according to an embodiment of the present invention.
Here, the time shift function is a function that enables the user to watch portions of a program missed while watching TV in real time. For example, the controller stores the broadcast video data in the memory for a predetermined time, so that the user can move back to and reproduce a missed section.
The controller stores, together with the time-shifted video data, information about the sections in which the specific area enlargement mode was executed. According to another embodiment of the present invention, the controller automatically searches the stored video data for the sections in which the specific area enlargement function was executed. For example, in video data in which a specific singer group composed of nine members including a first singer and a second singer is singing, the user may be interested only in the sections in which the first singer and the second singer sing. Unlike the conventional time shift function, the controller of the present invention can search for only those sections in which the specific area enlargement function was executed. The controller then reproduces only the searched sections.
Therefore, according to the present invention, the sections in which the specific area enlargement function was executed are automatically searched and only the searched sections are reproduced, so that the user does not need to reproduce the entire video data.
According to another embodiment of the present invention, the controller visually marks the sections in which the specific area enlargement function was executed.
Therefore, there is an advantage that the section in which the specific area enlargement function is executed can be confirmed more quickly.
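A minimal sketch of this section search, assuming a hypothetical event log that records when the enlargement mode was turned on and off during time-shift recording:

```typescript
// Collect the intervals during which the enlargement mode was active so that
// only those intervals are played back. Event log shape is an assumption.
interface Interval { startSec: number; endSec: number; }

function zoomedSections(
  events: { timeSec: number; zoomActive: boolean }[]
): Interval[] {
  const sections: Interval[] = [];
  let start: number | null = null;
  for (const e of events) {
    if (e.zoomActive && start === null) start = e.timeSec;
    if (!e.zoomActive && start !== null) {
      sections.push({ startSec: start, endSec: e.timeSec });
      start = null;
    }
  }
  return sections;
}

const log = [
  { timeSec: 0, zoomActive: false },
  { timeSec: 120, zoomActive: true },  // e.g., the first singer's part begins
  { timeSec: 180, zoomActive: false },
  { timeSec: 300, zoomActive: true },
  { timeSec: 360, zoomActive: false },
];
console.log(zoomedSections(log));
// [ { startSec: 120, endSec: 180 }, { startSec: 300, endSec: 360 } ]
// -> play back only these sections
```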
FIG. 24 is a diagram illustrating switching between a full screen and a zoom screen according to an embodiment of the present invention.
As shown in FIG. 24, when a predetermined command is received from the external device, the controller switches the video signals transmitted to the first area and the second area with each other.
Specifically, the video signal to be transmitted to the first area corresponds to the enlarged video data of the specific area, and the video signal to be transmitted to the second area corresponds to the original video data whose size is reduced.
Therefore, in the upper figure 2410 of Fig. 24, the video data in which the specific area is enlarged is displayed on the main screen, and the original video data in which only the size is reduced is displayed on the PIP screen. Specifically, the PIP screen displays the positions of the entire screen reduced in a certain ratio and the area enlarged in the entire screen.
In the lower figure 2420 of FIG. 24, the full screen is displayed on the main screen, and the screen on which the specific area is enlarged is displayed on the PIP screen.
Therefore, according to the embodiment of the present invention, there is an advantage that the original video data and the enlarged video data can be selectively switched to the full screen or PIP screen as needed.
FIG. 25 is a view showing enlargement of several points in a screen according to an embodiment of the present invention.
Referring to FIG. 25, the controller receives selection of a plurality of points in the displayed video data through the external device. For example, when the user selects a plurality of points with the pointer, the controller enlarges the area around each selected point and displays each enlarged area on a separate PIP screen.
According to an embodiment of the present invention, when a user wishes to view various points in a single screen, a plurality of points can be specified and displayed on a PIP screen in a specific portion of the screen.
In this case, when multiple people are at different positions on a single screen, it is possible to identify them at the same time, to identify each specific individual, and to see details such as the clothes, watches, and accessories worn by the identified person, so user convenience is improved.
FIG. 26 is a view showing enlargement by selecting various points on a screen according to an embodiment of the present invention. FIG. 26 is an embodiment similar to FIG. 25, so the differences will mainly be described; those skilled in the art can supplement FIG. 26 with reference to FIG. 25.
For example, when the user selects a plurality of points to be enlarged, the controller displays the original video data in one region of the screen and displays the enlarged video data of the selected points in separate sub-screens 2620, 2630, and 2640 arranged so as not to cover the original video data. Compared with FIG. 25, FIG. 26 shows a solution to the problem of the original video data being blocked by the PIP screens. In particular, the size of the region in which the original video data is displayed is adjusted according to the number of sub-screens 2620, 2630, and 2640.
FIG. 27 is a diagram for solving the case where the remote control coordinates and the input video coordinates do not match, according to an embodiment of the present invention. In implementing the embodiments of the present invention, the technical problems described with reference to FIG. 27 and the following figures should be solved.
As shown in FIG. 27, the remote control coordinates are 1920 x 1080 in a two-dimensional coordinate system, whereas the input video signal may have a different resolution. For example, due to the mismatch between the coordinates of the remote controller and the video signal, even if the user selects the P point of x = 1440 and y = 270 on the currently displayed screen by using the remote controller, the display device may recognize a different point of the video signal. Therefore, a difference occurs between the coordinates the user intended and the coordinates the display device recognizes, and the wrong area may be enlarged.
Here, when transmitting data to the display device, the external remote controller transmits data including its coordinate information. The external remote controller and the display device are connected by wireless communication, which includes RF communication and IR communication. In addition, the external remote controller can be a mobile device, including a smart phone or a tablet PC.
The controller solves this problem by converting the coordinate information received from the remote controller in accordance with the resolution information of the video data of the content. Specifically, the controller scales the coordinate information of the remote controller based on the ratio between the reference resolution of the remote control coordinates (1920 x 1080) and the resolution of the video signal.
For example, if the remote control coordinates are 1920 x 1080 and the video signal resolution information of the content is 720p HD with 1280 x 720, the controller scales the coordinate information of the remote controller by a factor of 0.66. If the video signal resolution information of the content is FHD of 1920 x 1080, the controller scales the coordinate information by a factor of 1. If the video signal resolution information of the content is UHD of 3840 x 2160, the controller scales the coordinate information by a factor of 2.
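A sketch of this conversion (names hypothetical), using the three scale factors given above for a remote controller whose coordinates are defined on a 1920 x 1080 plane:

```typescript
// Convert remote-controller coordinates to video coordinates by a
// per-resolution scale factor, as described above.
type Resolution = "HD" | "FHD" | "UHD";

const SCALE: Record<Resolution, number> = {
  HD: 0.66, // 1280 x 720  ~= 1920 x 1080 * 0.66
  FHD: 1,   // 1920 x 1080 -> identical plane
  UHD: 2,   // 3840 x 2160  = 1920 x 1080 * 2
};

function remoteToVideo(x: number, y: number, res: Resolution) {
  return { x: Math.round(x * SCALE[res]), y: Math.round(y * SCALE[res]) };
}

// The P point example above: (1440, 270) on the remote plane maps to
// approximately (950, 178) on a 720p video signal.
console.log(remoteToVideo(1440, 270, "HD")); // { x: 950, y: 178 }
```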
FIG. 28 illustrates solving the case where the specific area to be enlarged is out of the video output range according to an embodiment of the present invention.
As shown in the upper diagram 2810 of FIG. 28, when a specific area is enlarged with the point where the pointer is located as the center point, a problem arises in that the enlarged area includes a region not contained in the original video data. Therefore, as shown in the lower diagram 2820 of FIG. 28, the controller moves the center point to another position so that the enlarged area falls entirely within the original video data, and then enlarges the specific area.
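A minimal sketch of one way to implement this correction (names and geometry conventions are assumptions, not the patent's code): clamp the requested center so that the cropped source window stays fully inside the frame.

```typescript
// Shift the zoom center so the source window never leaves the original video.
function clampZoomCenter(
  cx: number, cy: number,         // requested center (pointer position)
  frameW: number, frameH: number, // original video size
  zoom: number                    // enlargement ratio, e.g. 2 means 2x
) {
  const halfW = frameW / zoom / 2; // half the size of the cropped region
  const halfH = frameH / zoom / 2;
  return {
    x: Math.min(frameW - halfW, Math.max(halfW, cx)),
    y: Math.min(frameH - halfH, Math.max(halfH, cy)),
  };
}

// A pointer at the corner of a 1920 x 1080 frame with 2x zoom is moved so
// the 960 x 540 source window stays inside the frame.
console.log(clampZoomCenter(1900, 1060, 1920, 1080, 2)); // { x: 1440, y: 810 }
```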
FIG. 29 is a diagram illustrating a method of dividing the screen into a predetermined number of regions when outputting video data according to an embodiment of the present invention, and enlarging the selected region when the user selects one of the divided regions.
As shown in the upper diagram 2910 of FIG. 29, when the video data is output, the controller divides the screen into a predetermined number of regions. As shown in the lower diagram of FIG. 29, when the user selects one of the divided regions, the controller enlarges the selected region and displays it on the full screen.
FIG. 30 is a diagram showing that the controller divides the screen into four, nine, or sixteen regions according to user selection when outputting video data according to an embodiment of the present invention.
Referring to FIG. 30, when the user selects the number of divisions, the controller divides the screen into four, nine, or sixteen regions accordingly, and the user can select any one of the divided regions to enlarge it.
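A short sketch of the grid computation implied here, assuming equal n x n cells (illustrative names):

```typescript
// Split the screen into an n x n grid (4, 9, or 16 regions) and return the
// rectangle of each region; the selected rectangle is enlarged to full screen.
interface Region { x: number; y: number; w: number; h: number; }

function divideScreen(width: number, height: number, divisions: 4 | 9 | 16): Region[] {
  const n = Math.sqrt(divisions); // 2, 3, or 4 per side
  const regions: Region[] = [];
  for (let row = 0; row < n; row++) {
    for (let col = 0; col < n; col++) {
      regions.push({
        x: (col * width) / n,
        y: (row * height) / n,
        w: width / n,
        h: height / n,
      });
    }
  }
  return regions;
}

// Nine-way division of a 1920 x 1080 screen; selecting index 4 enlarges
// the center region.
const cells = divideScreen(1920, 1080, 9);
console.log(cells[4]); // { x: 640, y: 360, w: 640, h: 360 }
```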
FIG. 31 illustrates a process for adjusting the enlargement ratio during execution of the specific area enlargement mode, according to an embodiment of the present invention. It is also within the scope of the present invention for those skilled in the art to implement other embodiments in conjunction with FIG. 31 with reference to the previous figures.
Referring to FIG. 31, when the specific area enlargement mode is executed and the specific area to be enlarged is determined, the video data of the area specified by the indicator is enlarged and output to the first area, while the reduced original video data and the indicator are output to the second area. Further, it is possible to transmit to the multimedia device, for example a TV or an STB, an instruction to further enlarge or reduce the video data being displayed in the first area, for example by using the wheel key of the remote controller or the ratio display bar. Therefore, as shown in FIG. 31, the enlarged video data is output in the first area at the adjusted ratio, and the size of the indicator in the second area is automatically changed in accordance with the adjusted ratio.
FIG. 32 illustrates a process for selecting an enlarged area during execution of the specific area enlargement mode, according to an embodiment of the present invention. It is also within the scope of the present invention for those skilled in the art to implement other embodiments in conjunction with FIG. 32 with reference to the previous figures.
Referring to FIG. 32, the same video data is output to the first area and the second area immediately after the specific area enlargement mode is executed, before the specific area to be enlarged is determined. In this situation, it is assumed that a specific region to be enlarged is selected by using the pointer or the indicator. Therefore, as shown in FIG. 32, the original video data is continuously output to the second area, whereas the enlarged video data of the selected specific area is output to the first area.
FIG. 33 illustrates a process for disabling the related indicator during execution of the specific area enlargement mode, according to an embodiment of the present invention. It is also within the scope of the present invention for those skilled in the art to implement embodiments different from those of FIG. 33 with reference to the previous drawings.
Referring to FIG. 33, after the specific area enlargement mode is executed, the original video data is output to the second area together with the indicator, while the enlarged video data is output to the first area. However, because the second area and the indicator overlap the video data, part of the video may be obscured; therefore, the indicator and the second area can be made to disappear according to a predetermined command or after a predetermined time.
FIG. 34 illustrates a process of redisplaying the related indicator that has disappeared during the execution of the specific area enlargement mode, according to an embodiment of the present invention. It is also within the scope of the present invention for those skilled in the art to implement other embodiments with reference to the previous drawings. In particular, FIG. 34 assumes the situation of FIG. 33, which was described immediately before.
FIG. 34 is similar to FIG. 33, except that the indicator and the second area that had disappeared are displayed again. If the user moves the pointer or inputs a predetermined command while the indicator and the second area are hidden, the controller displays them again.
FIG. 35 is a block diagram illustrating a display device according to an embodiment of the present invention, which encompasses the embodiments of the foregoing figures.
Referring to FIG. 35, the display device 3500 includes an interface module 3510, a controller 3520, a display module 3530, a memory 3540, and an EPG signal processing module 3550.
The interface module 3510 receives content including at least one object from an external server, and receives commands from an external device such as a remote controller.
The controller 3520 displays the received content on the main screen of the display device, generates metadata for the content for each object, and displays the content again based on the generated metadata.
The display module 3530 displays the content on the main screen according to a control command from the controller 3520.
A detailed description thereof will be given later with reference to FIG. 38.
The controller 3520 moves the enlarged position to the pointer according to the time stamp of the generated metadata, and changes the enlargement level.
A detailed description thereof will be given later with reference to FIG. 42.
The controller 3520 modifies the metadata based on at least one of the moved enlarged position and the changed enlargement level.
A detailed description thereof will be given later with reference to FIG. 42.
Upon receiving an input for selecting a specific object displayed on the main screen with the pointer, the controller 3520 performs object tracking on the specific object and stores the metadata corresponding to the tracked object in the memory 3540.
A detailed description thereof will be given later with reference to FIG. 39.
The controller 3520 extracts a specific object tag from the stored metadata and generates an icon for each object based on the extracted object tag.
A detailed description thereof will be given later with reference to FIG. 43.
Upon receiving content and metadata from an external server through the interface module 3510, the controller 3520 displays the content on the main screen based on the received metadata.
A detailed description thereof will be given later with reference to FIG. 40.
The controller 3520 extracts an object keyword from at least one of voice application information and EPG information received through the interface module 3510, extracts an object tag from the metadata stored together with the content displayed on the main screen, and maps the extracted object keyword to the extracted object tag.
The memory 3540 stores at least one of the displayed content and the generated metadata.
The EPG signal processing module 3550 processes a received EPG signal, from which genre information and object keywords can be extracted.
FIG. 36 is a flowchart of a method of controlling a display device according to an embodiment of the present invention. The method is carried out by the controller 3520 of the display device 3500.
First, the content including at least one object is displayed on the main screen of the display device (S3610).
Metadata about the content is generated for each object (S3620).
The content is displayed again based on the generated metadata (S3630).
FIG. 37 is a flowchart of a method of controlling a display device according to another embodiment of the present invention. The method is carried out by the controller 3520 of the display device 3500.
First, the content including at least one object is displayed on the main screen of the display device (S3710).
Metadata for the content is generated for each object (S3720).
The enlarged position is moved to the pointer according to the time stamp of the generated metadata, and the enlargement level is changed (S3730).
The metadata is modified based on at least one of the moved enlarged position and the changed enlargement level (S3740).
The content is displayed again based on the modified metadata (S3750).
Upon receiving an input for selecting a specific object displayed on the main screen as a pointer, object tracking for a specific object is performed (S3760).
The metadata corresponding to the specific object to be tracked is stored in the memory (S3770).
A specific object is extracted from the stored metadata (S3780).
An icon is generated for each object based on the extracted specific object (S3790).
FIG. 38 is a diagram illustrating generation of metadata when a specific object is specified by a pointer according to an embodiment of the present invention.
As shown in FIG. 38, when a specific object displayed on the main screen is specified by the pointer, the controller generates metadata for the content. For example, the controller generates metadata including the genre of the content, the time at which the object was specified, the pointing coordinates, the enlargement level, and the tag of the specified object.
Analyzing the metadata content: Metadata = Recording Tag + Time Stamp + Pointing Coordinate + Zooming Level + Object Tag.
The Recording Tag is a recording classification tag that distinguishes whether the content genre is sports, a movie, a documentary, a drama, or the like.
The Time Stamp is the time at which the object is specified by the pointer. In the case of 20150324-131711, it means 13:17:11 on March 24, 2015.
The Pointing Coordinate means the pointing coordinate information at the time of the pointing zoom control. For example, 0720_1901 means the coordinates (X, Y) = (720, 1901).
The Zooming Level refers to the screen enlargement level at the time when the object is specified by the pointer. For example, 07 means a 7x enlargement level.
The Object Tag refers to the tag information of the specific object specified by the pointer. For example, OBJ2 identifies the second object.
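A minimal sketch of parsing this underscore-delimited format, e.g. SOCCER_20150324_131711_0720_1901_07_OBJ2 (field names are illustrative; the patent defines only the string layout):

```typescript
// Parse "RecordingTag_Date_Time_X_Y_Zoom_ObjectTag" into named fields.
interface ObjectMetadata {
  recordingTag: string; // content genre, e.g. SOCCER
  timeStamp: string;    // YYYYMMDD_HHMMSS
  x: number;            // pointing X coordinate
  y: number;            // pointing Y coordinate
  zoomLevel: number;    // enlargement level when the object was specified
  objectTag: string;    // identifier of the specified object
}

function parseMetadata(s: string): ObjectMetadata {
  const [recordingTag, date, time, x, y, zoom, objectTag] = s.split("_");
  return {
    recordingTag,
    timeStamp: `${date}_${time}`,
    x: Number(x),
    y: Number(y),
    zoomLevel: Number(zoom),
    objectTag,
  };
}

console.log(parseMetadata("SOCCER_20150324_131711_0720_1901_07_OBJ2"));
// { recordingTag: 'SOCCER', timeStamp: '20150324_131711',
//   x: 720, y: 1901, zoomLevel: 7, objectTag: 'OBJ2' }
```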
The controller stores the generated metadata in the memory together with the content. The metadata is thus accumulated for the content on a per-object basis. When the content file is reproduced again, the controller reads the stored metadata and reproduces the content with the enlargement applied at the same time points, coordinates, and levels as when the metadata was generated.
FIG. 39 is a diagram illustrating metadata generation for each object, based on time, when specific objects are specified by the pointer according to an embodiment of the present invention.
As shown in FIG. 39, when a specific object is specified by the pointer, the controller generates metadata for each object based on the time at which the object was specified.
For example, metadata of SOCCER_20150324_131705_0320_0210_10_OBJ1 is generated for OBJ1. In the case of OBJ1, the metadata means soccer _ March 24, 2015, 13:17:05 _ (X, Y) = (320, 210) _ 10x zoom _ OBJ1.
For OBJ2, metadata of SOCCER_20150324_131711_0720_1901_07_OBJ2 is generated. In the case of OBJ2, the metadata means soccer _ March 24, 2015, 13:17:11 _ (X, Y) = (720, 1901) _ 7x zoom _ OBJ2.
Metadata of SOCCER_20150324_131714_3620_1201_05_OBJ3 is generated for OBJ3. In the case of OBJ3, the metadata means soccer _ March 24, 2015, 13:17:14 _ (X, Y) = (3620, 1201) _ 5x zoom _ OBJ3.
As shown in FIG. 39, metadata is generated and accumulated for each object over time. Accordingly, the controller can reproduce the content separately for each object based on the accumulated metadata.
If the first object disappears from the main screen and then reappears, how to specify the first object again and assign it the same object tag as before becomes a problem.
The controller solves this by performing object tracking on the specified object. More specifically, the controller analyzes the characteristics of the object, and when an object having the same characteristics reappears on the main screen, the controller assigns it the same object tag as before. For example, the controller can keep tracking a specific player even when the player temporarily leaves the screen during a soccer game.
According to the present invention, when the first object is set to Park Ji-sung, the second object to Rooney, and the third object to Ronaldo, metadata is generated for each object at the time of content reproduction and stored in the memory; if the content is then reproduced based on the metadata, each player can be viewed separately, thereby improving user convenience.
In addition, the controller can utilize the recording classification tag of the metadata. For example, when the recording classification tag is SOCCER, the controller can classify and manage the content as soccer content.
FIG. 40 is a diagram illustrating transmission of metadata from a broadcasting station or a manufacturer according to an embodiment of the present invention.
When the content and the metadata for the content are received from a broadcasting station or a manufacturer through the interface module, the controller displays the content on the main screen based on the received metadata.
FIG. 41 is a view illustrating a screen displayed differently according to camera tags, according to an embodiment of the present invention.
The metadata received from the broadcasting station may further include a camera tag.
Metadata = Camera Tag + Time Stamp + Pointing Coordinate + Zooming Level + Object Tag.
The camera tag includes camera information for a predetermined area.
For example, in the case of a soccer game, the broadcasting station shoots the game with a plurality of cameras covering different areas of the stadium, and the camera tag identifies the camera that captured the corresponding video.
The broadcasting company does not transmit all the video data shot by every camera; instead, it transmits the original content together with the metadata.
The display device receives the content and the metadata through the interface module. For example, the controller displays the screen differently according to the camera tag included in the metadata. The controller can thereby reproduce the content from the camera viewpoint intended by the content creator.
According to the present invention, content and metadata are received from a broadcaster or a content producer, and the content can be viewed from the viewpoint and the eyes of the content creator, thereby improving user convenience. In addition, the broadcasting company only needs to transmit the original content and the metadata, without separately transmitting content edited according to consumers' preferences, and can therefore reduce the amount of data to be transmitted.
FIG. 42 is a view illustrating editing of metadata according to an embodiment of the present invention.
The controller can modify the generated metadata according to a user input received from the external device. For example, as shown in FIG. 42, the pointing coordinates and the enlargement level of specific metadata can be modified.
The metadata before modification is SOCCER_20150324_131711_0600_0600_02_OBJ2.
The modified metadata is SOCCER_20150324_131711_1000_0500_03_OBJ2.
Therefore, when the metadata before and after modification are compared, it can be seen that the coordinate value is changed from (600, 600) to (1000, 500), and the enlargement magnification is changed from 2 to 3.
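A small sketch of such an edit, assuming the underscore-delimited layout and zero-padded field widths seen in the example above:

```typescript
// Rewrite the coordinate and zoom fields of a metadata string while
// keeping the other fields intact.
function editMetadata(s: string, x: number, y: number, zoom: number): string {
  const parts = s.split("_"); // [tag, date, time, X, Y, zoom, objectTag]
  parts[3] = String(x).padStart(4, "0");
  parts[4] = String(y).padStart(4, "0");
  parts[5] = String(zoom).padStart(2, "0");
  return parts.join("_");
}

console.log(editMetadata("SOCCER_20150324_131711_0600_0600_02_OBJ2", 1000, 500, 3));
// "SOCCER_20150324_131711_1000_0500_03_OBJ2"
```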
When the user manipulates the wheel key of the remote controller, the controller changes the enlargement level. For example, when the wheel is pushed forward, the enlargement level is increased, and when the wheel is pulled back, the enlargement level is decreased.
FIG. 43 is a diagram illustrating icon generation for each object using metadata according to an embodiment of the present invention.
When the content is reproduced, the controller extracts the object tags from the metadata stored together with the content. For example, as shown in FIG. 43, the object tags OBJ1, OBJ2, and OBJ3 are extracted from the stored metadata. The controller generates an icon for each object based on the extracted object tags. For the object tags OBJ1, OBJ2, and OBJ3, the controller generates three corresponding icons. Upon receiving an input for selecting a specific object, such as OBJ1, with the pointer, the controller reproduces only the sections of the content related to the selected object. For example, when OBJ1 is Park Ji-sung, OBJ2 is Rooney, and OBJ3 is Ronaldo, the controller generates an icon for each player, so that the user can select an icon and view the enlarged scenes of the desired player.
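A brief sketch of this grouping (illustrative names; the icon rendering itself is not specified in the document): metadata entries are bucketed by object tag, one icon per bucket, and a bucket's time stamps drive the selective playback.

```typescript
// Group stored metadata entries by object tag: one icon per object, and the
// grouped time stamps identify the sections to reproduce for that object.
interface Entry { timeStamp: string; objectTag: string; }

function iconsByObject(entries: Entry[]): Map<string, string[]> {
  const icons = new Map<string, string[]>();
  for (const e of entries) {
    const times = icons.get(e.objectTag) ?? [];
    times.push(e.timeStamp);
    icons.set(e.objectTag, times);
  }
  return icons;
}

const stored: Entry[] = [
  { timeStamp: "20150324_131705", objectTag: "OBJ1" },
  { timeStamp: "20150324_131711", objectTag: "OBJ2" },
  { timeStamp: "20150324_131714", objectTag: "OBJ3" },
];
// Three icons are generated; choosing OBJ1 replays only its sections.
console.log(iconsByObject(stored).get("OBJ1")); // [ '20150324_131705' ]
```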
FIG. 44 is a diagram illustrating an embodiment of personalization using metadata according to an embodiment of the present invention.
The controller extracts an object keyword for each user and maps the extracted object keyword to an object tag extracted from the metadata stored together with the content displayed on the main screen. For example, as shown in FIG. 44, the controller analyzes the user's viewing tendency to determine the user's favorite players. When the object keyword for a user corresponds to a specific player, the controller maps the keyword to the object tag of that player. The controller generates metadata for each object at the time of content reproduction and stores it in the memory 3540. Based on the mapped object tags, the controller reproduces, for each user, the sections of the content corresponding to that user's preference.
According to the present invention, when the user's favorite-player tendency is analyzed so that the second object is mapped to Park Ji-sung, the third object to Rooney, and the first object to Ronaldo, metadata is generated for each object at the time of content reproduction and stored in the memory 3540; if the content is then reproduced based on the metadata, Park Ji-sung, Rooney, and Ronaldo can each be viewed separately, thereby improving user convenience.
FIG. 45 is a diagram illustrating the identification of family members using a specific application in accordance with an embodiment of the present invention.
The controller identifies each family member by using a specific application, for example a voice application that analyzes the tone of each user's voice. For example, as shown in FIG. 45, the controller extracts an object keyword for each identified user.
That is, when the user is the father, the object keyword becomes the father.
The controller extracts an object tag from the metadata stored together with the content displayed on the main screen, and maps the extracted object keyword to the extracted object tag. For example, the object keyword "father" is mapped to the object tag of the object corresponding to the father. In the same manner, the object keywords of the other family members are mapped to their respective object tags.
According to the present invention, when the tones of the family members' voices are analyzed so that the first object is mapped to the first person, the second object to the father, the third object to the younger brother, and the fourth object to the mother, metadata is generated for each object at the time of content reproduction and stored in the memory; if the content is then reproduced based on the metadata, each family member can be viewed separately, thereby improving user convenience.
According to an embodiment of the present invention, when content is displayed, metadata for the content is generated for each object, and the content is displayed again based on the metadata, so that an object-specific screen enlargement function can be provided and user convenience is improved.
According to another embodiment of the present invention, content and metadata are received from a broadcaster or a content producer, and the content can be viewed from the viewpoint and the eyes of the content creator, thereby improving user convenience.
According to another embodiment of the present invention, by modifying the metadata on the time axis, the user can apply the screen enlargement function to the content in a desired manner, thereby improving user convenience.
According to another embodiment of the present invention, personalized content can be provided to the user by mapping object keywords to object tags, thereby improving user convenience.
According to another embodiment of the present invention, since only the content and the metadata corresponding to the content need to be provided, the storage space for separately edited content can be reduced, thereby improving user convenience.
The image display apparatus and the operation method thereof according to the present invention are not limited to the configurations and methods of the embodiments described above; all or some of the embodiments may be selectively combined so that various modifications can be made.
Meanwhile, the operation method of the video display device of the present invention can be implemented as processor-readable code on a processor-readable recording medium included in the video display device. The processor-readable recording medium includes all kinds of recording devices in which data that can be read by the processor is stored. Examples of the processor-readable recording medium include a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, and an optical data storage device, and it may also be implemented in the form of a carrier wave such as transmission over the Internet. In addition, the processor-readable recording medium may be distributed over network-connected computer systems so that processor-readable code can be stored and executed in a distributed fashion.
While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, the invention is not limited to the disclosed embodiments; it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the present invention.
3500: Display device
3510: Interface module
3520: Controller
3530: Display Module
3540: Memory
3550: EPG signal processing module
Claims (10)
Displaying content including at least one object on a main screen of a display device;
Generating metadata for the content for each object; And
Displaying the content again based on the generated metadata,
Wherein the metadata includes at least one of a content classification category, a time stamp, a pointing coordinate, an enlargement level, and tag information of the object.
And displaying the content again by reflecting the enlargement level on the basis of the time stamp and the pointing coordinate.
And modifying the generated metadata.
Moving the enlarged position to the pointer according to the time stamp of the generated metadata and changing the enlargement level; And
Modifying the metadata based on at least one of the moved enlarged position and the changed enlargement level.
Performing object tracking on a specific object upon receiving an input selecting the specific object displayed on the main screen with a pointer;
Storing metadata corresponding to the specific object being tracked in a memory;
Extracting the specific object tag from the stored metadata; And
Generating an icon for each object based on the extracted specific object tag.
A controller that receives content including at least one object through an interface module, displays the received content on a main screen of the display device, generates metadata for the content for each object, and displays the content again based on the generated metadata;
A memory for storing at least one of the displayed content and the generated metadata; And
A display module for displaying the content on the main screen according to a control command from the controller,
Wherein the metadata includes at least one of a content classification category, a time stamp, a pointing coordinate, an enlargement level, and tag information of the object.
Wherein the controller receives content and metadata from an external server through the interface module, and displays the content on the main screen based on the received metadata.
Wherein the metadata further includes a camera tag.
Wherein the controller extracts an object keyword from at least one of voice application information and EPG information received through the interface module, extracts an object tag from the metadata stored together with the content displayed on the main screen, and maps the extracted object keyword to the extracted object tag.
Wherein the controller extracts an object keyword for each identified user, extracts an object tag from the metadata stored together with the content displayed on the main screen, and maps the extracted object keyword to the extracted object tag.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150085633A KR20160148875A (en) | 2015-06-17 | 2015-06-17 | Display device and controlling method thereof |
EP16811933.7A EP3311582A4 (en) | 2015-06-17 | 2016-06-15 | Display device and operating method thereof |
PCT/KR2016/006376 WO2016204520A1 (en) | 2015-06-17 | 2016-06-15 | Display device and operating method thereof |
US15/184,620 US20160373828A1 (en) | 2015-06-17 | 2016-06-16 | Display device and operating method thereof |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150085633A KR20160148875A (en) | 2015-06-17 | 2015-06-17 | Display device and controlling method thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
KR20160148875A true KR20160148875A (en) | 2016-12-27 |
Family
ID=57736668
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020150085633A KR20160148875A (en) | 2015-06-17 | 2015-06-17 | Display device and controlling method thereof |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR20160148875A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109426478A (en) * | 2017-08-29 | 2019-03-05 | 三星电子株式会社 | Method and apparatus for using the display of multiple controller controlling electronic devicess |
US11367283B2 (en) | 2017-11-01 | 2022-06-21 | Samsung Electronics Co., Ltd. | Electronic device and control method thereof |
2015-06-17: KR application KR1020150085633A filed; published as KR20160148875A (en); legal status unknown.
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109426478A (en) * | 2017-08-29 | 2019-03-05 | 三星电子株式会社 | Method and apparatus for using the display of multiple controller controlling electronic devicess |
CN109426478B (en) * | 2017-08-29 | 2024-03-12 | 三星电子株式会社 | Method and apparatus for controlling display of electronic device using multiple controllers |
US11367283B2 (en) | 2017-11-01 | 2022-06-21 | Samsung Electronics Co., Ltd. | Electronic device and control method thereof |