WO2009122372A2 - Display managing system - Google Patents

Display managing system

Info

Publication number
WO2009122372A2
Authority
WO
WIPO (PCT)
Prior art keywords
region
display
sub
user
area
Prior art date
Application number
PCT/IB2009/051385
Other languages
English (en)
Other versions
WO2009122372A3 (fr)
Inventor
Elmo M. A. Diederiks
Freddy Snijder
Paul R. Simons
Robert L. Blake
Steffen Reymann
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V. filed Critical Koninklijke Philips Electronics N.V.
Publication of WO2009122372A2 publication Critical patent/WO2009122372A2/fr
Publication of WO2009122372A3 publication Critical patent/WO2009122372A3/fr


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/44 Receiver circuitry for the reception of television signals according to analogue transmission standards
    • H04N 5/445 Receiver circuitry for the reception of television signals according to analogue transmission standards for displaying additional information
    • H04N 5/45 Picture in picture, e.g. displaying simultaneously another television channel in a region of the screen
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/41 Structure of client; Structure of client peripherals
    • H04N 21/426 Internal components of the client; Characteristics thereof
    • H04N 21/42607 Internal components of the client; Characteristics thereof for processing the incoming bitstream
    • H04N 21/4263 Internal components of the client; Characteristics thereof for processing the incoming bitstream involving specific tuning arrangements, e.g. two tuners
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/4316 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/44008 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/4402 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N 21/440263 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the spatial resolution, e.g. for displaying on a connected PDA
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/443 OS processes, e.g. booting an STB, implementing a Java virtual machine in an STB or power management in an STB
    • H04N 21/4438 Window management, e.g. event handling following interaction with the user interface
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/45 Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N 21/462 Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
    • H04N 21/4622 Retrieving content or additional data from different sources, e.g. from a broadcast channel and the Internet
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/482 End-user interface for program selection

Definitions

  • the present invention relates to a display managing system to be used in conjunction with a display device including a display region for managing the division of the display region into two or more sub-regions where one or more broadcast channels are displayed.
  • Picture-in-picture (PiP) is an established method of combining TV views from multiple feeds: one program channel is displayed on the full screen while another program channel is displayed in a smaller inset picture.
  • Sound is usually from the main program only.
  • Picture in Picture requires at least two independent tuners or signal sources to supply the large and the smaller picture.
  • Two-tuner PiP TVs have a second tuner built in, but a single-tuner PiP TV requires an external signal source, which may be an external tuner, VCR, DVD player, or a cable box with composite video outputs.
  • Picture in Picture is often used to watch one program while waiting for another to start, or advertisements to finish.
  • Picture and Picture is a related feature showing two programs side-by-side on the screen, with the sound from one program being played through the speakers, and the sound from the other being sent to headphones.
  • PAP: Picture and Picture.
  • the present invention relates to a display managing system to be used in conjunction with a display device including a display region for managing the division of the display region into two or more sub-regions where one or more broadcast channels are displayed, comprising: at least one processor for operating the display region, the operating including displaying a first broadcast channel on the display region, and a display area definer for defining at least a first sub-region from the display region, wherein the at least first defined sub-region and the remaining portion of the display region act as independent display sub-regions, the at least one processor being adapted for displaying the incoming broadcast channels at the respective display sub-regions.
  • a new picture in picture format is provided that doesn't require rescaling. It furthermore offers a solution to deal with the multiple audio streams of these channels. Accordingly, this allows users to create their own combination of TV content and have it presented in a personalized manner. For example, a user may wish to embed live football scores running along the bottom of the display while playing a DVD. This text will be readable as it will be displayed in its native format and will not suffer from the problems associated with rescaling. Also, it is now possible to mix recorded and live broadcast material.
  • the display managing system further comprises a scroll bar and an input unit operated by a user, the scroll bar being adapted to allow the user to scroll between the independent display sub-regions, the input unit being adapted to: receive instructions from the user for a selection of a given display sub-region, and receive instructions from the user for a selection of a new broadcast channel at the selected display sub-region for replacing the previous broadcast channel displayed at the display sub-region.
  • the display area definer is further adapted to define at least a second sub-region from said independent display sub-regions, the input unit further being adapted to receive instructions from the user for moving a broadcast channel being displayed at one of the display sub-regions to the second sub-region.
  • the display area definer for defining at least one sub-region is a manual input means operated by a user.
  • the manual input means comprises a free moving cursor displayed on the display region where the at least one sub-region is defined via at least two marking points selected by the user.
  • the display area definer for defining at least one sub-region comprises a selection mechanism for automatically detecting a pattern from the broadcast currently being displayed and, based on the detected pattern, defining the at least a first sub-region.
  • the selection mechanism includes at least one automatic pattern recognition means, and these pattern recognition means include one of the following: a face detection mechanism, where the detected pattern includes face contour lines; a text detection mechanism for detecting textual matter, where the detection is based on the textual matter changing up to a pre-defined amount for a given number of frames; or a combination of both the face and text detection mechanisms.
  • an automatic sub-region selection mechanism e.g. creates one sub-region for a news reporter and one sub-region for the headline news being displayed at the bottom of the display.
  • the user may e.g. then decide whether he accepts the automatically selected sub-regions or not.
  • the selection mechanism is operated based on an input from the user, where the input indicates a preferred pattern to be detected.
  • the display area definer for defining at least a first sub-region from the display region is a remote device comprising a processor adapted to analyze the A/V signals of the broadcast channels and, based on the analysis, pre-define the at least a first sub-region, the pre-defined at least first sub-region being associated as metadata with the A/V signals of the broadcast channels.
  • the processing power that might be required may be provided externally instead of at the display device side.
  • the display device does not need to be highly advanced and can thus be cheaper.
  • the display area definer for defining at least a first sub-region from the display region is a remote device, where at the remote device side the at least a first sub-region is pre-defined by hand and associated as metadata with the A/V signals of the broadcast channels.
  • the display device further includes at least one speaker, the at least one processor further being adapted to determine the distribution of audio in response to the location, or the area, or both the location and the area of the at least first sub-region on the display region.
  • the audio output will reflect the size of the sub-region (i.e. the smaller the sub-region, the lower the audio volume) and the location (e.g. upper left corner) of the sub-region. This will thus appear more natural to the user of the display device.
  • the present invention relates to a display device comprising said display managing system.
  • the present invention relates to a method of managing a display device including a display region for dividing the display region into two or more sub-regions where one or more broadcast channels are displayed, comprising: displaying a first broadcast channel on the display region, and defining at least a first sub-region from the display region, wherein the at least first defined sub-region and the remaining portion of the display region act as independent display sub-regions such that the incoming broadcast channels are to be displayed at the respective display sub-regions.
  • the step of defining said at least first sub-region from the display region comprises: selecting a hold-mode for initiating said defining, marking said at least first sub-region from the display region, selecting a second sub-region within the first sub-region, selecting a picture-in-picture (PiP) mode for transferring the broadcast shown in the first sub-region to the second sub-region, and selecting a second broadcast channel within the first sub-region.
  • the method further comprises selecting a moving-mode for moving the sub-regions from one location to another location within the display region.
  • the present invention relates to a computer program product for instructing a processing unit to execute the method steps when the product is run on a computer.
  • Fig. 1 shows a display managing system according to the present invention
  • Fig. 2 depicts graphically one embodiment of a method according to the present invention
  • FIG. 3 depicts graphically another embodiment of Fig. 2,
  • Fig. 4 depicts graphically another embodiment of Fig. 2 and 3
  • Fig. 5 depicts graphically a multiple tuner setup
  • Fig. 6 depicts an Internet streaming handler
  • Fig. 7 shows an embodiment of how screen area definitions are performed by the display area definer
  • Fig. 8 shows another embodiment of screen area definitions using face detection mechanism
  • Fig. 9 shows two sub-regions within a screen area
  • Fig. 10 shows a flowchart of a method according to the present invention.
  • Figure 1 shows a display managing system 100 according to the present invention to be used in conjunction with a display device 103 including a display region for managing the division of the display region into two or more sub-regions where one or more broadcast channels are displayed.
  • the display managing system 100 comprises at least one processor (P) 101 and a display area definer (D A D) 102.
  • the at least one processor (P) 101 is adapted to operate the display region of the display device 103 by initially displaying a first broadcast channel on the display region.
  • the display area definer (D A D) 102 is adapted to define at least a first sub-region 107-109 from the display region, where the at least first defined sub-region 107-109 and the remaining portion 110 of the display region act then as independent display sub-regions.
  • the at least one processor (P) 101 operates the system 100 such that video of the incoming broadcast channels is displayed at the respective display sub-regions.
  • the system 100 further comprises a scroll bar (S B) 104 and an input unit (I U) 105 operated by a user 106, the scroll bar being adapted to allow the user 106 to scroll between the independent display sub-regions 107-110.
  • the input unit (I U) 105 is adapted to receive instructions from the user 106 for a selection of a given display sub-region, and receive instructions from the user 106 for a selection of a new broadcast channel at the selected display sub-region for replacing the previous broadcast channel displayed at the display sub-region.
  • the way of receiving instructions from the user 106 may be via a remote control, where the user e.g. presses the appropriate button on the remote control while selecting the at least one sub-region 107-109, or by touching the display area directly if the display area is provided with a touch-sensitive interface, or via a speech recognition system, etc. This will be discussed in more detail later.
  • the selection of the at least one first sub-region 107-109 is realized by scaling multiple pieces of the incoming video signals and positioning these at certain screen positions. To be able to do this in 'real-time', a fast memory is needed to store the (processed) videos. Next, these multiple pieces of processed broadcast signal are combined into a new single broadcast signal that is displayed on the screen. Also, a video bus or similar means may be provided for allowing the broadcast signal to be transferred from the one processor (P) 101 to memory and the other way around, as well as from one processor to another processor (this also includes getting the broadcast signal to the processor(s) in the first place).
  • the different tasks are distributed between the various processors, such that one processor is implemented as a video scale processor (that is optimized for scaling and clipping video), one is implemented as a video re-combination processor (that is optimized for positioning video on screen and combining multiple pieces of video into one video signal), and one processor is implemented to keep track of what needs to be shown on screen (it governs the other processors based on the user input).
  • multiple tuners are provided for obtaining and supplying the different pieces of broadcast to the processor(s).
  • IP (Internet Protocol)
  • a process is provided for downloading/streaming multiple pieces of video, and separating these so as to provide them to the above-mentioned at least one processor (P) 101.
  • Figure 2a-k depicts graphically one embodiment of a method according to the present invention. It should be noted that the figures illustrate the steps as performed by a user, but they are not shown in "real-time". Thus, in a real-life scenario, the picture in Figs. 2a-k would be changing while the user is performing the selection steps.
  • the system 100 may be used in television devices with multiple tuners. This could be a TV, or a video recorder (e.g. HDD or DVD) having access to multiple tuners. As an alternative to tuners, the TV could make use of Internet streams. Moreover, the TV could also display other content in the defined screen areas (such as Teletext, recorded video, Internet streams, or regular Internet browser content).
  • the system 100 may be applied in web-based streaming applications, such as Joost, Youtube or 'Uitzending gemist'.
  • the application would make use of multiple (live) streams, while user interaction takes place in the application environment.
  • This could be a browser window, Apple Frontrow, or Windows MediaCenter.
  • Figure 2a shows a TV broadcast that is being displayed on a display device, e.g. a TV, computer monitor or any type of a display device comprising a display region 201.
  • the broadcast is a "live” broadcast from a news channel.
  • Figure 2b depicts where the user has selected a "hold-mode", e.g. by pressing a dedicated 'hold' button on the remote control.
  • Fig. 2c depicts where the user moves a moving cursor to the lower-left corner, to prepare marking a first sub-region on the display region 201.
  • Figure 2d shows where the user has selected a first sub-region defined via the dotted line 203. This may e.g. be realized by holding a sub-region selection button on the remote control while selecting the area.
  • the region may of course have all kinds of shapes and is not limited to rectangular shape as shown here.
  • Moving the cursor can be done with the standard arrow buttons or a free moving cursor control such as uWand, a Wii-like controller, a trackball, a touchpad, a touch screen remote control, and the like.
  • if the display screen is provided with a touch-sensitive interface, the cursor might be replaced by the viewer's finger or other means for interacting with the display region 201.
  • the user might e.g. move the finger to the lower-left corner, move it upwards towards the upper-right corner and then release the finger from the display region 201.
  • in Fig. 2e the user has confirmed the selected first sub-region 203 by e.g. pressing the appropriate command button on the remote control (this could be the 'OK' button), or by releasing the sub-region selection button, or by releasing the finger from the display region 201 in case the display region 201 is provided with a touch-sensitive interface.
  • the dotted line 203 might change from being "rough” to being "fine", just to illustrate to the user that he has confirmed the selected first sub-region 203.
  • Figure 2f depicts where the user selects the bottom portion covering the text matter (the headlines that are continuously running), i.e. the user selects that he wants to 'hold' the text matter.
  • Figure 2g shows where the user has selected another broadcast channel in the selected first sub-region 203, e.g. the soap "ER", while the bottom portion 204 shows the live news headlines and acts as another display region.
  • the first sub-region 203 acts as a display region that is independent from the remaining portion 204 of the display region 201, i.e. both the first sub-region 203 and the remaining portion 204 act as independent display sub-regions.
  • the scenario depicted in Figs. 2h and 2i is more or less a repetition of the preceding figures: in Fig. 2h the user creates a second sub-region 205 within the first sub-region 203. This may be done similarly as disclosed here above, i.e. either using the moving cursor or, via the touch-sensitive interface, using the finger.
  • the user "moves" the broadcast channel shown in the first sub-region 203 to the second sub-region 205. This may be done by moving the cursor to the second sub-region 205 and subsequently selecting "menu" on the remote control.
  • the menu displayed may e.g. include the following functions: “move”, “make pip”, “delete” and “delete all”.
  • by selecting "make pip", the broadcast channel shown in the first sub-region 203 is moved to the second sub-region 205.
  • Fig. 2i shows where the "ER” is broadcast in both the first 203 and the second 205 sub-regions. Now, the user can zap the broadcast on the first sub-region 203 to other channels.
  • Fig. 2j depicts the scenario where the user has selected a third broadcast channel in the first sub-region 203 replacing the "ER".
  • the first sub-region 203 acts as a second viewing region and the second sub-region 205 acts as a third viewing region.
  • Figure 2k depicts where the user has selected the function "move" from the menu display and marked, e.g. using the cursor, the new location the user intends to move to. Assuming the user is interested in moving the bottom sub-region 204, by moving the cursor to the sub-region 204 and selecting the function "move" from the menu, the sub-region 204 may be moved to a new location. After moving it to the new location, e.g. at the upper part of the display, the user can confirm the new location by e.g. pressing or releasing the appropriate button on the remote control.
  • Fig. 3 depicts graphically another embodiment of Fig. 2, where a screen divider is provided that the user can move. Once the screen divider is at the desired position the user presses a confirmation button (this could be the 'OK' button). Subsequently the user needs to indicate which screen area to hold by selecting one of the 2 or 4 areas that are defined by the screen divider and pressing a confirmation button (this could be the 'OK' button) to put the selected screen area on 'hold'.
  • the screen divider can be moved with the standard arrow buttons or a free moving cursor control (the uWand, a Wii-like controller, a trackball, a touchpad, a touch screen remote control, etcetera). Selecting a screen area is done in a similar way, using a jumping highlight (in case the arrow keys are used) or by selecting a screen area with a free moving cursor. The user can press the hold button again to set another screen area to hold.
  • the user can select a screen divider and move it, select one of the new 2 or 4 areas that are defined by the screen divider, and press a confirmation button (this could be the 'OK' button) to put the selected screen area on 'hold'.
  • if the user selects a screen area that was previously set on hold, it is assumed that the user wants to move this screen area. Now the user can move the screen area. If the user presses the menu button when a previously defined area is selected, the user can select 'move', 'make PiP' or 'delete', where the latter results in removing this screen area (in case of multiple screen areas the user can also select 'delete all'). In one embodiment, use is made of pre-defined screen areas that the user can select.
  • the user can move through the different screen areas and press a confirmation button (this could be the 'OK' button) to put the selected screen area on 'hold'. Browsing through the pre-defined screen areas can be done using a so-called jumping highlight with the standard arrow buttons, or a free moving cursor control to point at the areas on the screen (the uWand, a Wii-like controller, a trackball, a touchpad, a touch screen remote control, etcetera).
  • the user can press the hold button again to select another pre-defined screen area to hold.
  • if the user selects a screen area that was previously set on hold, it is assumed that the user wants to move this screen area.
  • the cursor changes to a 'move' cursor and now the user can move this area on the screen.
  • if the user presses the menu button while the cursor is on a previously defined area, the user can select 'move', 'make PiP' or 'delete', where the latter results in removing this screen area (in case of multiple screen areas the user can also select 'delete all').
  • Figure 4 depicts graphically another embodiment of Figs. 2 and 3 where templates are used as an alternative option to the above embodiments.
  • a template basically consists of a form with different screen areas. Within each screen area the user can zap to a different TV channel. The user presses a dedicated button (a 'hold' button) and subsequently the TV offers the first template. The user can browse through the templates using the standard arrow buttons (e.g. left and right for previous and next). Once a template is selected the user can confirm this template by pressing a confirmation button (this could be the 'OK' button).
  • the user can browse through the different screen areas in the template and select one area using a so-called jumping highlight and the standard arrow buttons or a free moving cursor control to point at the areas on the screen (the uWand, a Wii-like controller, a trackball, a touchpad, a touch screen remote control, etcetera). Once the user has selected an area he can zap in that selected area.
  • the user can press the hold button again to select another screen area to zap in.
  • the user can move through the different screen areas. If the user presses the menu button the user can select the option to change the template or delete all areas.
  • the audio needs to be mashed up as well. This may be done nearly completely automatically. Hence it is proposed to only render the audio of the area that the user has selected for zapping. If the user wants it differently, the user can press the 'hold' button, then select the area of which he wants to have the audio, and subsequently press the (un)mute button. Alternatively the user can press the menu button and select the 'audio' option to set the audio volume of the selected area. In any case it is wise to show feedback of the audio settings using on-screen icons.
  • the audio of large areas is rendered at a higher volume; the audio of small areas is rendered at a lower volume
  • depending on the relative position of the selected areas, the areas at the left-hand side are rendered at a higher volume on the left-hand speaker and at a lower volume on the right-hand speaker; the areas at the right-hand side are rendered at a lower volume on the left-hand speaker and at a higher volume on the right-hand speaker; the areas that are positioned in the middle are rendered equally loud on both speakers.
  • the user may store a screen layout as a template. This may be realized by e.g. pressing the 'hold' button, pressing the 'menu' button and then selecting the 'store' option.
  • the TV stores the screen area layout, the channels that are displayed in each area and the audio settings.
  • the template is given a default name (e.g. MyScreen01).
  • the user should also be able to retrieve these stored templates. This can be done by pressing the 'hold' button, pressing the 'menu' button and then selecting the 'retrieve' option. The user can then select a layout from a list of stored templates.
  • Figure 5 depicts graphically an example of a multiple tuner setup.
  • the display device, e.g. the TV, should preferably be provided with multiple tuners and a display controller that can assign screen areas to different tuners and that can scale streaming video.
  • the TV could use an Internet connection to access Internet streams.
  • Still, a display controller would be needed that can assign screen areas to different streams and that can scale streaming video.
  • the TV needs a smart audio mixer (and possibly storage to store the screen layouts).
  • Figure 6 depicts an Internet streaming handler. In addition to extracting content such as news tickers from the live broadcast, an additional embodiment could source the information to be displayed from the Internet and dynamically create a local news ticker based upon the preferences of the user.
  • the user can select their desired information from a series of Internet sources (typically using RSS technology) and these are automatically retrieved and updated. Using techniques described earlier, it is possible for the user to place the news-ticker at their desired location.
  • Figure 7 shows an embodiment of how screen area definitions are performed by the display area definer (D A D) 102. In Fig. 2, the screen area definitions were performed manually by the user of the display device.
  • the screen area definitions are performed automatically using content analysis, where broadcast content is analyzed so as to detect typical screen areas that the user might want to select.
  • the text recognition is used for detecting general textual information. This might be achieved by detecting text at the lower part of the screen, since the bottom part of the screen is often used to display general textual information. The most straightforward example is the tickertape on news channels, but also music channels provide text boxes with background information.
  • the analysis is reduced to the bottom half of the screen to speed up the analysis, discarding the top half of the screen 501 as depicted in Fig. 7b. The analysis is now focused on detecting (scrolling) text matter in the lower half. Text that does not change too often is discarded.
  • Fig. 7c depicts where a text has been detected 502. Subsequently, a bounding box is defined, as depicted in Fig. 7d. This bounding box is used to define the screen area. This might take into account small, apparently unused screen areas.
  • Figure 8 shows another embodiment of screen area definitions using a face detection mechanism. This embodiment may of course be combined with the embodiment in Fig. 7. As shown here, it is proposed to detect relatively large faces, as this is typical for e.g. news readers. First a rough analysis is done for flesh color as depicted in Fig. 8b 601. All smaller areas that might be detected (not shown here) will preferably be discarded; only areas above a minimum size (screen area percentage) are considered to be useful. To increase the reliability of the detection, face feature detection can be applied (detection of eyes and mouth 602) as shown in Fig. 8c.
  • a bounding box 603 is defined.
  • This bounding box is used to define the screen area.
  • small apparently unused screen areas are preferably taken into account. For instance, if the bounding box is very close to the top of the screen, the top part of the screen is added to the pre-defined screen area.
  • another way to make pre-defined screen areas available is to make use of metadata, where the screen areas are not defined locally, but remotely. This may be done by automatic content analysis on a central server (for instance because the required processing power exceeds the available processing power on the TV). It can also be pre-defined by hand. This would allow more creative freedom for the content provider. These pre-defined screen areas are subsequently made available to the TV.
  • this information may be embedded in the broadcast stream as digital information: for instance incorporated in the first few (unused) lines of the TV image (much like teletext).
  • the TV could have an Internet connection to access the pre-defined areas stored on a central Internet server.
  • the digital file could typically have an xml-like format, for instance:
  • the name of the area is defined, the top-left coordinate (in coor1), the bottom-right coordinate (in coor2) and the z-axis ordering (in layer, where the highest number is on top). Additionally, pre-defined areas can be inserted that display Internet content.
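  • purely as an illustration, such a file might look like the sketch below; the element and attribute names (screen-areas, area, name, coor1, coor2, layer) and the pixel coordinates are assumptions based on the fields described above, not the actual format used:

        <!-- illustrative sketch only; names, coordinates and resolution are assumed -->
        <screen-areas>
          <area name="Area1" coor1="0,0" coor2="1920,880" layer="1"/>     <!-- main picture -->
          <area name="Area2" coor1="0,880" coor2="1920,1080" layer="2"/>  <!-- ticker area at the bottom -->
        </screen-areas>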
  • the audio needs to be defined as well. In one embodiment, this is proposed as follows:
  • the distribution of the audio over the left and right speakers is defined. For instance the audio of "Area1" is distributed evenly over the two speakers, for "Area2" 80% of the audio is sent to the left speaker and 20% to the right speaker, and the audio of "Area3" is muted.
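  • a hedged sketch of how such an audio definition could be expressed under the same assumed xml-like conventions is given below; the element and attribute names (audio, left, right, mute) are illustrative assumptions, not the actual syntax:

        <!-- illustrative sketch only; attribute names and percentage notation are assumed -->
        <audio>
          <area name="Area1" left="50" right="50"/>   <!-- distributed evenly over the two speakers -->
          <area name="Area2" left="80" right="20"/>   <!-- mostly on the left speaker -->
          <area name="Area3" mute="true"/>            <!-- muted -->
        </audio>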
  • the display device calculates how to mix the audio.
  • the audio of "Area1" may be set at 100% volume for both the left speaker and the right speaker.
  • the audio of Area2 is set at 100% on the left speaker and 25% on the right speaker.
  • the audio to both speakers needs to be mixed on a frequency level and adjusted to the maximum volume output. From here on, the volume can be further adjusted (tuned down) by the user using the volume control.
  • the content is typically defined by means of a channel reference. This could be the channel number programmed on e.g. the TV, for instance channel 1. However this has the disadvantage that all templates need to be redefined if the channels are changed on the TV. Alternatively, the channel can be defined using the channel name as used in the metadata available in the broadcast stream. This however has the disadvantage that a template is useless if this metadata is unavailable.
  • a third option is to define the channel on a central server indicated by a URL. This requires the TV to have an Internet connection. The URL will provide the channel frequency for that channel in the location where the TV is located (this might be detected automatically using the IP address, or the region setting in the TV).
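  • for illustration only, the three ways of referencing a channel described above might be written as in the sketch below; the element names, the channel name "NewsChannel" and the URL are hypothetical placeholders rather than values defined by the invention:

        <!-- illustrative sketch only; element names, channel name and URL are hypothetical -->
        <area name="Area1">
          <channel-number>1</channel-number>                                  <!-- preset programmed on the TV -->
        </area>
        <area name="Area2">
          <channel-name>NewsChannel</channel-name>                            <!-- name from the broadcast metadata -->
        </area>
        <area name="Area3">
          <channel-url>http://example.com/channels/newschannel</channel-url>  <!-- resolved by a central server -->
        </area>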
  • the areas may also contain Internet content. This might look as follows:
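  • the exact format of such an Internet-content area is not reproduced here; a hedged sketch under the same assumed xml-like conventions, with a placeholder RSS feed URL, could be:

        <!-- illustrative sketch only; attribute names and the RSS URL are hypothetical -->
        <area name="Ticker" coor1="0,1000" coor2="1920,1080" layer="2">
          <internet-content type="rss">http://example.com/news/rss</internet-content>
        </area>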
  • the templates may in one embodiment be available on the TV. Alternatively they may be made available for download embedded in the broadcast stream as digital information: for instance incorporated in the first few (unused) lines of the TV image (much like teletext).
  • the information may be embedded digitally in the audio, for instance by means of frequency modulation.
  • the information may also be provided via a secondary channel.
  • the display device, e.g. the TV, might have an Internet connection to access the templates stored on a central Internet server.
  • the templates can also be stored by the user. Again this can be done locally using a local storage medium embedded in the TV, but they can also be stored on a central server if the TV has an Internet connection. Moreover, they can be stored on a portable medium such as a USB storage device and exchanged with other users.
  • Figure 10 shows a flowchart of a method according to the present invention for managing a display device including a display region for dividing the display region into two or more sub-regions where one or more broadcast channels are displayed.
  • in a first step, a first broadcast channel is displayed on the display region.
  • in a second step (S2) 1002, at least a first sub-region is defined from the display region.
  • the at least first defined sub-region and the remaining portion of the display region act as independent display sub-regions such that the incoming broadcast channels are to be displayed at the respective display sub-regions.
  • the defining of said at least first sub-region from the display region comprises: selecting a hold-mode for initiating said defining (S3) 1003, marking said at least first sub-region from the display region (S4) 1004, selecting a second sub-region within the first sub-region (S5) 1005, selecting a picture-in-picture (PiP) mode for transferring the broadcast shown in the first sub-region to the second sub-region (S6) 1006, and selecting a second broadcast channel within the first sub-region (S7) 1007.
  • the method further comprises selecting a moving-mode for moving the sub-regions from one location to another location within the display region (S8) 1008.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Control Of Indicators Other Than Cathode Ray Tubes (AREA)

Abstract

The present invention relates to a display managing system to be used in conjunction with a display device including a display region, for managing the division of the display region into two or more sub-regions where one or more broadcast channels are displayed. A processor is used for operating the display region, the operating including displaying a first broadcast channel on the display region. A display area definer is provided for defining at least a first sub-region from the display region. The at least first defined sub-region and the remaining portion of the display region act as independent display sub-regions. The processor operates the display device so as to display the incoming broadcast channels at the respective display sub-regions.
PCT/IB2009/051385 2008-04-04 2009-04-02 Système de gestion d'affichage WO2009122372A2 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP08154097 2008-04-04
EP08154097.3 2008-04-04

Publications (2)

Publication Number Publication Date
WO2009122372A2 true WO2009122372A2 (fr) 2009-10-08
WO2009122372A3 WO2009122372A3 (fr) 2009-11-26

Family

ID=40785554

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2009/051385 WO2009122372A2 (fr) 2008-04-04 2009-04-02 Système de gestion d'affichage

Country Status (1)

Country Link
WO (1) WO2009122372A2 (fr)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2894849A4 (fr) * 2012-10-11 2015-07-22 Zte Corp Procédé de mise en oeuvre de visualisation en mode écran divisé de programmes de télévision, boîtier décodeur et système de télévision
EP3131254A1 (fr) * 2015-08-11 2017-02-15 Lg Electronics Inc. Terminal mobile et son procédé de commande
CN111273883A (zh) * 2020-01-20 2020-06-12 北京远特科技股份有限公司 多操作系统的同屏显示方法、装置和终端设备
CN112298198A (zh) * 2019-07-23 2021-02-02 丰田自动车株式会社 车辆位置感测系统

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5453796A (en) * 1994-06-28 1995-09-26 Thomson Consumer Electronics, Inc. Signal swap apparatus for a television receiver having an HDTV main picture signal processor and an NTSC Pix-in-Pix signal processor
US20070229706A1 (en) * 2006-03-28 2007-10-04 Junichiro Watanabe Information reading apparatus

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5453796A (en) * 1994-06-28 1995-09-26 Thomson Consumer Electronics, Inc. Signal swap apparatus for a television receiver having an HDTV main picture signal processor and an NTSC Pix-in-Pix signal processor
US20070229706A1 (en) * 2006-03-28 2007-10-04 Junichiro Watanabe Information reading apparatus

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2894849A4 (fr) * 2012-10-11 2015-07-22 Zte Corp Procédé de mise en oeuvre de visualisation en mode écran divisé de programmes de télévision, boîtier décodeur et système de télévision
US9456169B2 (en) 2012-10-11 2016-09-27 Zte Corporation Method for implementing split-screen viewing of television programs, set-top box, and television system
EP3131254A1 (fr) * 2015-08-11 2017-02-15 Lg Electronics Inc. Terminal mobile et son procédé de commande
CN106454499A (zh) * 2015-08-11 2017-02-22 Lg电子株式会社 移动终端及其控制方法
US9788074B2 (en) 2015-08-11 2017-10-10 Lg Electronics Inc. Mobile terminal and method for controlling the same
CN106454499B (zh) * 2015-08-11 2019-06-04 Lg电子株式会社 移动终端及其控制方法
CN112298198A (zh) * 2019-07-23 2021-02-02 丰田自动车株式会社 车辆位置感测系统
CN111273883A (zh) * 2020-01-20 2020-06-12 北京远特科技股份有限公司 多操作系统的同屏显示方法、装置和终端设备

Also Published As

Publication number Publication date
WO2009122372A3 (fr) 2009-11-26

Similar Documents

Publication Publication Date Title
CN111343490B (zh) 显示设备及内容推荐方法
US9137476B2 (en) User-defined home screen for ultra high definition (UHD) TV
US9250927B2 (en) Digital receiver and method for controlling the same
US9544653B2 (en) Web-browsing method, and image display device using same
US9582504B2 (en) Method for providing playlist, remote controller applying the same, and multimedia system
EP2461577A1 (fr) Procédé de contrôle d'affichage d'écran et affichage l'utilisant
EP2566175A1 (fr) Procédé de fourniture de liste de dispositif externe et dispositif d'affichage d'image
CN111405333A (zh) 显示设备和频道控制方法
JP2008054065A (ja) 情報処理装置及びその制御方法
JP2007266800A (ja) 情報再生装置
US20120060187A1 (en) Method for providing channel list and display apparatus applying the same
US11589113B2 (en) Smart start-up of television
CN111726673B (zh) 一种频道切换方法及显示设备
CN111669634A (zh) 一种视频文件预览方法及显示设备
WO2009122372A2 (fr) Système de gestion d'affichage
JP2012209829A (ja) 番組表示制御装置
US9204079B2 (en) Method for providing appreciation object automatically according to user's interest and video apparatus using the same
CN113259733B (zh) 一种显示设备
KR20230029438A (ko) 디스플레이 장치 및 디스플레이 장치의 제어 방법
KR101880458B1 (ko) 디지털 기기 및 디지털 기기에서의 컨텐츠 처리 방법
JP7207307B2 (ja) 情報処理装置、情報処理方法、プログラム
WO2011016473A1 (fr) Dispositif d'affichage d'informations de contenu
KR101341465B1 (ko) 방송 단말기 및 방송 단말기의 데이터 객체 표시 방법
JP2009043194A (ja) コンテンツ表示装置
KR102169057B1 (ko) 방송수신장치 및 그 장치의 제어방법, 정보제공장치의 제어방법 및 컴퓨터 판독가능 기록매체

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09727524

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09727524

Country of ref document: EP

Kind code of ref document: A2