GB2409391A - Language support for set-top boxes using web pages - Google Patents

Language support for set-top boxes using web pages

Info

Publication number
GB2409391A
Authority
GB
United Kingdom
Prior art keywords
image
text
file
web page
embedded web
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB0329443A
Other versions
GB0329443D0 (en)
Inventor
David Evans
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
YES TELEVISION PLC
Original Assignee
YES TELEVISION PLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by YES TELEVISION PLC filed Critical YES TELEVISION PLC
Priority to GB0329443A priority Critical patent/GB2409391A/en
Publication of GB0329443D0 publication Critical patent/GB0329443D0/en
Priority to CNA2004800418360A priority patent/CN1926539A/en
Priority to PCT/GB2004/005265 priority patent/WO2005059773A1/en
Priority to GB0614262A priority patent/GB2427810A/en
Publication of GB2409391A publication Critical patent/GB2409391A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/957Browsing optimisation, e.g. caching or content distillation
    • G06F16/9577Optimising the visualization of content, e.g. distillation of HTML documents

Abstract

A method of displaying an image comprising creating an embedded web page and attaching objects to the embedded web page. The objects are served as images rather than as text, thereby allowing text in unusual fonts to be displayed. This allows items such as set-top boxes to display information such as menus in different languages without requiring the necessary character/language generating software.

Description

A METHOD AND APPARATUS FOR DISPLAYING AN IMAGE
The present invention relates to the display of an image on a screen, for example menus for the selection of programmes on a television.
Services are available whereby programmes, including films, television programmes, music and other content, are supplied to a viewer when the viewer wants to watch them; this is referred to as video on demand (VOD). In this way, the viewer does not have to schedule his time to fit in with the time a programme is transmitted in a broadcast, and he does not have to wait a few minutes for the beginning of the next transmission of the programme, as occurs in some other digital systems. Also, the viewer can interrupt a programme, and later resume watching it without having to record it. These are all major advantages of VOD.
VOD can be supplied via a cable television network, via a telephone network using broadband technology such as ADSL, or via a wireless network. VOD can also be implemented for other diverse platforms and technologies, such as satellite and terrestrial TV. A decoder is usually required, in the form of a set-top box (STB) which decodes the VOD signals for use in a television set. The decoder is sometimes incorporated into the television set.
When using a VOD system, a huge choice of programmes may be available, including all or many of the broadcast television programmes for that day, recently released films, older films and archived television programmes. Indeed, as the VOD system is based wholly on meta-data abstraction of the available content, the number of possible categories is effectively unlimited.
For example, the archive of television programmes might include every episode of "Friends" from every series. This means that the VOD system must have an advanced user interface system giving menus from which the viewer can select what he wants to watch.
The VOD system will typically be available across territories, meaning that menus must be available in different languages. This is normally achievable without great difficulty because many countries use the Roman alphabet. However, other countries use other alphabets and writing systems, and converting from one to the other poses technical difficulties because the decoders do not support a full range of characters. Thus, the text displayed, for example in menus, is often meaningless because of this lack of support for characters in other alphabets and writing systems.
Examples of menus are shown in Figures 1 to 10. In Figure 1, most of the initial screen will normally show an extract from a film or programme which is available on the system, or a still picture which relates to what is available on the system. In this case, an excerpt of a film is being shown. Superimposed on that image is a menu consisting of three options which may be selected by the viewer: (1) - movies, which can be selected if the viewer wishes to watch a movie; (2) - music, which may be selected if the viewer wishes to listen to music; and (3) - entertainment, which may be selected by the viewer to watch other channel content such as television programmes, or any source of digital programming or web programming that can be delivered to the decoder. Of course, other types of content could also be included.
It will be noted that the text in the menu includes drop shadowing. This means that the text characters are visible regardless of the background since, although most of each character is white, each has a dark shadow or outline. Thus, when the background is dark, the white stands out against the background, and when the background is light, the dark shadow or outline of the drop shadowing effect keeps the character visible.
In the prior art, the text in the menus is typed in and rendered as an image on the screen, being processed by the server on every single request. This is inflexible, and every time that the same text is to be added to a picture, the text must be re-entered, and the image calculations required to render the image in a particular location and at a particular size need to be redone. This is wasteful of resources.
To give an example of how the menu system may be navigated, if a viewer selects (1) - movies, he is taken to a new menu, shown in Figure 2, in which he has a choice of four different options, (1) - premiers, (2) - new films, (3) - a directory of films, and (4) - search whereby the viewer can search in various ways to find a film that he wishes to watch. If he selects (4) - search, he is taken to the menu shown in Figure 3 in which he can search by any one of four indicated categories - by actor, by certificate rating, by era or by genre. If he selects the search by genre, he is taken to the menu shown in Figure 4.
In the menu in Figure 4, six genres are available: (1) - action and thriller, (2) - classic films, (3) - comedy, (4) - drama, (5) - horror and science fiction, and (6) - lifestyle. In this example, the viewer selects (1) - action and thriller. He is then taken to a new menu shown in Figure 5, which firstly indicates that there are 13 items found in the search, of which six are new. The menu gives him the option of (1) - view all, (2) - refine search, (3) - remove last, and (4) - display new. In this case, the viewer wishes to refine the search and selects the second option. A new menu appears, which is shown in Figure 6, in which the viewer can narrow the search in the following ways: (1) by actor, (2) by certificate, (3) by era, and (4) by genre.
In this case, the viewer has selected the films by actor, and Figure 7 shows a resulting menu in which films can be chosen by actor.
In another search, the actor Harrison Ford is available at this point, and when he is selected, a new menu is displayed which is shown in Figure 8. There is one item which matches the search. Information on this film can be viewed, as is shown in Figure 9, and if further selected, a summary of the film is shown, as shown in Figure 10. The film can then be rented or previewed as required.
It will be appreciated from this summary of a menu-driven selection system that there is a whole range of different text items which appear in the menus. Also, many of these pieces of text recur, but each time the server generates a new menu, it is necessary for it to recalculate the image of the text, resulting in the use of a considerable amount of computing resources.
The invention seeks to reduce the technical difficulties in displaying characters from other writing systems.
It is also desirable to display text in drop shadowing, as explained above. However, some set-top boxes cannot support drop shadowing, particularly as they do not have a mechanism which allows real-time drop shadowing of textual elements. Also, in those set-top boxes it is not possible to use a font point size other than an HTML font size (1-7). The limitations of some set-top boxes make it difficult to reliably provide text in the most appropriate form to everybody.
According to one aspect of the invention, a method of displaying an image comprises creating a web page with embedded coding or scripting ('an embedded web page'), such as a JSP page, and attaching objects to the page, the objects being served as images. It is preferred that the embedded web page is created with one or more server hooks to which the or each object is attached. The object may be created by turning an object source into an image. The image can be saved in memory as an image file for retrieval later on. The image file is preferably named according to the characteristics of the image, so that, if the image is of text, the characteristics can include its typeface, width, height, point size, drop shadowing enabled, alignment and colour.
Before creating an image, a check may be made to see whether or not an object source has previously been turned into an image and saved as an image file. If it has, the address or file name at which the file is located can be returned.
Requests may be received, and images served, via a web server, and the requests may originate from a client device. The images are served to the client device for display, and it is preferred to use TCP/IP for all aspects of this method.
According to a second aspect of the invention, an apparatus for displaying an image comprises a request source which requests the transmission of an image for display; a processor which creates an embedded web page to which objects can be attached, and a server which turns an object source into an image for attachment to the embedded web page as an object.
It is preferred that the apparatus further include a memory for saving the image in the form of an image file. The file name of the image may be allocated using an allocating device which allocates a file name or address for the image file according to the characteristics of the image, including, where the image is text, any one or more of the typeface, width, height, point size, drop shadowing enabled, alignment and colour of the image.
Preferably the apparatus further comprises a web server via which the request source makes its request. The apparatus will normally include a client device which is the request source. Normally the client device is a decoder.
An embodiment of this invention will now be described by way of example only with reference to the accompanying drawings, in which: Figures 1 to 9 show nine menus of a known VOD system; Figure 10 is a view giving details of a target film; Figure 11 is a block diagram showing the structure of the VOD system according to the present invention; and Figure 12 is a view showing a further menu.

Referring now to Figure 11, a video on demand (VOD) system (1) is shown including a main server (2) which is arranged to serve digital signals as required, as well as menu images providing information on various forms of content, such as, for example, digital video and music. This is served via a web-server (3) to a decoder (4) held by the viewer, normally in the form of a set-top box, but which could be incorporated into a television or other receiving device. The decoder (4) is required to receive digital images from the main server and to decode them into signals which can be handled by a television or other receiving device. The decoder (4) might also have other functions, such as decrypting signals such that only authorised viewers may view the information from the main server (2). The decoder (4) also allows interaction between the viewer and the main server (2), whereby information can be passed back to the main server in order to control the serving of the signals from the server (2). For example, when the viewer makes menu selections, these selections must be passed back to the main server, or if the viewer wishes to pause the serving of a programme, then the pause signal must be passed back to the main server (2). In this case, the signals sent by the decoder (4) to the main server (2) use TCP/IP.
A servlet engine (5) is located between the main server (2) and the web server (3).
The operation of the VOD system (1) will now be described. When a viewer makes a selection, for example in Figure 1 where the viewer selects (1) - movies, the decoder (4), which constitutes a client in the system, creates a request in TCP/IP which is sent to the webserver (3), by which the request is directed to the servlet engine (5). The servlet engine (5) examines the request, and data is passed to the main server where calculations take place, and the result of the calculation is returned to the servlet engine (5). On the basis of the return data, the servlet picks an embedded web page, best described as an HTML-styled page with some server hooks to which objects may be attached. In this case, since movies have been selected, the HTML-styled page that will be selected will be that shown in Figure 2, with server hooks at various locations. Each server hook corresponds to a text location. Therefore, text location (1) might correspond to "Premiers", text location (2) might correspond to "New", a third text location might correspond to "Directory", and a fourth text location might correspond to "Search". Although the objects located at each hook appear to the user to be text, in fact they are images. The system takes text in the appropriate font, point size and other appropriate characteristics, and turns it into an image, and it is this image which is displayed. This means that fonts and characters which are not supported by the decoder (4) can be displayed, since all set-top boxes can handle images, such as gif images. There may be other server hooks and text locations within the page, but in this case, only four are used. If the text has not previously been turned into an image with the parameters that have been used, such as scaling, then the text will now be turned into an image and stored, for example on disk, and the servlet engine (5) serves the embedded web page with the images of the text to the decoder (4) via the web server (3) in TCP/IP.
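To illustrate the text-to-image step, the following is a minimal sketch of how a server might render a piece of text, with optional drop shadowing, into an image file using the standard Java 2D and ImageIO APIs. The class and method names are hypothetical and the patent does not disclose the actual rendering code; the sketch also writes PNG for simplicity, whereas the system described here produces gif files.

import java.awt.Color;
import java.awt.Font;
import java.awt.FontMetrics;
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class TextImageRenderer {

    /**
     * Renders the given text into an image file. A dark copy of the text is
     * drawn first, slightly offset, so that a drop shadow appears behind the
     * white foreground characters and the text is legible on any background.
     */
    public static void renderTextImage(String text, String typeface, int pointSize,
                                       int width, int height, boolean shadowOn,
                                       File outputFile) throws IOException {
        BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = image.createGraphics();
        g.setRenderingHint(RenderingHints.KEY_TEXT_ANTIALIASING,
                           RenderingHints.VALUE_TEXT_ANTIALIAS_ON);
        g.setFont(new Font(typeface, Font.PLAIN, pointSize));
        FontMetrics metrics = g.getFontMetrics();
        int x = 0;
        int y = metrics.getAscent(); // baseline of the first line of text

        if (shadowOn) {
            g.setColor(Color.BLACK);      // drop shadow, offset down and right
            g.drawString(text, x + 2, y + 2);
        }
        g.setColor(Color.WHITE);          // white foreground characters
        g.drawString(text, x, y);
        g.dispose();

        // The decoder only ever receives an ordinary image, so any script the
        // server's fonts cover (Chinese, Arabic, and so on) can be displayed.
        ImageIO.write(image, "png", outputFile);
    }
}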
An example of a JSP call is now given:

Sample JSP call:

<html>
<ozone:lookupTextToImage text="button6_VALUE" width="553" height="53" typeface="Andale Mono WT T Eval" pointSize="22" shadowOn="true" horizontalAlignment="1" renderMode="scale" alt="button6_VALUE" border="0" align="left"/>
</html>

The embedded web page is processed in the servlet engine (5) and the result is returned to the webserver (3) as HTML. The result looks like this:

Resultant HTML:

<HTML>
<IMG SRC='/ozoneimages/autogenerated/Home/1/pace3875/scale39092444459126145true151025_hc24452317_sc5f717247ff13.gif' BORDER='0' ALIGN='left'/>
</HTML>

The servlet engine (5) does not actually serve the image to the decoder immediately.
The image is saved on the webserver. The servlet engine (5) allocates an address or file name of the image. The decoder then sends a request to the webserver (3) for the image which is then returned.
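Although the patent does not disclose the implementation of the ozone tag library, a custom tag of this kind could be written against the standard JSP tag-handler API along the following lines. This is only a sketch: the class name and the lookupOrRender helper are assumptions, and the real library may work quite differently. The tag would also be declared in a tag library descriptor so that it can be used in the embedded web page as shown above.

import java.io.IOException;
import javax.servlet.jsp.JspException;
import javax.servlet.jsp.tagext.TagSupport;

/**
 * Hypothetical handler for an <ozone:lookupTextToImage .../> tag. It obtains
 * (or creates) the image for the requested text and writes an ordinary
 * <IMG> element into the HTML returned to the webserver (3).
 */
public class LookupTextToImageTag extends TagSupport {

    private String text;
    private String typeface;
    private int pointSize;
    private int width;
    private int height;
    private boolean shadowOn;

    public void setText(String text) { this.text = text; }
    public void setTypeface(String typeface) { this.typeface = typeface; }
    public void setPointSize(int pointSize) { this.pointSize = pointSize; }
    public void setWidth(int width) { this.width = width; }
    public void setHeight(int height) { this.height = height; }
    public void setShadowOn(boolean shadowOn) { this.shadowOn = shadowOn; }

    public int doStartTag() throws JspException {
        try {
            // lookupOrRender stands in for the cache lookup described below:
            // it returns the unique file name of the image, rendering and
            // saving it first if it does not already exist.
            String fileName = lookupOrRender(text, typeface, pointSize, width, height, shadowOn);
            pageContext.getOut().print(
                "<IMG SRC='/ozoneimages/autogenerated/" + fileName + "' BORDER='0' ALIGN='left'/>");
        } catch (IOException e) {
            throw new JspException(e);
        }
        return SKIP_BODY; // the tag has no body content to evaluate
    }

    private String lookupOrRender(String text, String typeface, int pointSize,
                                  int width, int height, boolean shadowOn) throws IOException {
        return "placeholder.gif"; // see the cache sketch later in the description
    }
}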
On subsequent occasions, if the client, or any other client of the VOD system (1), makes a request, the request is again sent by the servlet engine (5) to the main server (2) where it is processed. The result is returned to the servlet engine (5), and the servlet engine (5) picks an embedded web page. If the text images have previously been displayed, then they will be stored on disk, and rather than creating new images, the servlet engine (5) merely returns the file names to the client, which can then request them directly, thereby eliminating the need to carry out calculations to create the images each time they are requested. The stored images of text will not always be the same. Various characteristics of the text may be different in different circumstances. For example, in a particular use of a piece of text, there are characteristic variables such as height, width, typeface, font or point size, horizontal alignment, scale, and whether or not drop shadowing is enabled. Thus, if a particular use of a piece of text involves a point size of 22, a later request for the same text, but with a point size of 24, will require the image to be created and saved separately on disk with a different file name.
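The reuse described here might be sketched as follows, building on the renderer shown earlier: a file name is derived from the text and its characteristics, the disk is checked for an existing file of that name, and the image is rendered and saved only if it is not already present. The ImageCache class and its simplified name-building logic are assumptions; the patent's own name-building code is reproduced later in the description.

import java.io.File;
import java.io.IOException;

public class ImageCache {

    private final File imageDirectory; // e.g. the /ozoneimages/autogenerated/ area on the webserver

    public ImageCache(File imageDirectory) {
        this.imageDirectory = imageDirectory;
    }

    /**
     * Returns the file name of the image for the given text, rendering and
     * saving it only if it has not previously been generated with exactly
     * these characteristics.
     */
    public String lookupOrRender(String text, String typeface, int pointSize,
                                 int width, int height, boolean shadowOn) throws IOException {
        String fileName = buildFileName(text, typeface, pointSize, width, height, shadowOn);
        File imageFile = new File(imageDirectory, fileName);
        if (!imageFile.exists()) {
            // Rendered once only; later requests with the same text and
            // characteristics simply reuse the stored file.
            TextImageRenderer.renderTextImage(text, typeface, pointSize,
                                              width, height, shadowOn, imageFile);
        }
        return fileName;
    }

    // Simplified stand-in for the createFileName method shown later: every
    // characteristic that affects the image contributes to the name.
    private String buildFileName(String text, String typeface, int pointSize,
                                 int width, int height, boolean shadowOn) {
        return "scale" + text.length() + typeface.hashCode() + pointSize + shadowOn
                + width + height + "_hc" + text.hashCode() + ".png";
    }
}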
In a system where there are likely to be a large number of clients, storing the images rather than recalculating them each time they are required will significantly reduce the amount of computer processing required, relying instead on data storage. Thus the system can be operated with fewer processing resources.
In addition, because the text is served to the client in the form of images, the decoder (4) does not need to support international writing systems. For example, if a menu is produced in Chinese, provided that the main server (2) and servlet engine (5) are equipped to support Chinese characters, it does not matter whether or not the decoder supports them, since it will receive these characters in image form. Thus, the decoder (4) requires less computing power and memory, and will therefore be cheaper to manufacture, as well as better able to deliver full international character sets.
If a decoder must support a large internationalised font, the font must be stored in its memory, and it will be very large, probably in excess of several megabits. Having such a large memory for an internationalised font, most of which will not be used in any particular device, is an unnecessary technical burden. In addition, decoders are limited in what they can display and what features they have available. If a feature such as drop shadowing is required, and older decoder devices do not incorporate this feature, only those people with newer decoders will be able to view text in drop shadow format. Thus, it is better to serve the text in image form so that features like this can be used.
Every image that is generated receives its own unique file name. For example, the following file name:

scale22909294445-2302756true155353_hc1030621598_sc41646d696e697374726174696f6e2053797374656d73.gif

is a reference to a generated image (reproduced in the original document), which would have been generated with these parameters:

<ozone:lookupTextToImage text="button2_VALUE" width="553" height="53" typeface="Andale Mono WT T Eval" pointSize="22" shadowOn="true" horizontalAlignment="1" renderMode="scale" alt="button1_VALUE" border="0" align="left"/>

where the server held the text "Concierge" which was referenced (hooked into) by the button1_VALUE. The appearance of this term can be seen in Figure 12, which is a menu giving three options: (1) - Tour Guide; (2) - Concierge; and (3) - Personal Photographer. Each of the labels corresponds to a hook which causes the decoder to request the appropriate object from the webserver. "Concierge" is requested for insertion in the second available space or hook.
In summary, the text "Concierge" is data generated in the form of an image. The HTML pages are live and are connected to that data by means of the unique file name for the image. The main server (2) and/or the servlet engine (5) manage the connections between the data and the HTML pages in an operationally efficient manner.
The unique file name for the image is based on all the attributes that can be specified.
Therefore, if the colour changes, or the alignment changes, a new image is created with a new file name. The file name is created by compounding the attributes. The string hash code of the text is added, and in case there are any hash code clashes, the remainder of the file name is made up of a sample of the text itself which has been converted into hexadecimal, up to the maximum file name length available on the operating system. The string is converted to hexadecimal so that foreign-language characters, such as Chinese, can be safely written to disk. Outputting a Chinese Java string as a file name can fail on many operating systems that only allow a limited set of characters in file names. An example of how the file name is generated is given in the following computer code:

/**
 * Determines the filename to use.
 * We can't just splat it out, since the filesystem may not be able to handle
 * unicode filenames. This version turns all characters into hex values.
 * The type of utility creating the file is prepended.
 *
 * @param inString the input string, normally the text that we're going to render to an image.
 * @return (String) The string turned into hex values
 */
private String createFileName(String inString) {
    if (inString == null) inString = "";
    // note: don't put the hashcode first as it may give a negative value, which gets
    // confused with command line options when deleting in unix.
    StringBuffer retval = new StringBuffer(getOperationName());
    retval.append(inString.length());
    retval.append(font.hashCode());
    retval.append(colour.hashCode());
    retval.append(shadowOn);
    retval.append(hAlignment);
    retval.append(width);
    retval.append(height);
    retval.append("_hc");
    retval.append(inString.hashCode());
    retval.append("_sc");
    // convert the text to be displayed into hex, and use it as part of the file name.
    for (int i = 0; i < inString.length(); i++) {
        retval.append(Integer.toHexString((int) inString.charAt(i)));
        // Can only handle file names of a certain length!
        if (retval.length() >= _MAX_FILE_NAME_LENGTH) {
            break; // filename has hit its max so let's get outta here
        }
    }
    return retval.toString();
}

When an image is created or calculated, the servlet engine (5) starts with a width, height, font, font size, and text which is to be applied to the available space. The text will be wrapped from one line to another, if necessary. Manual breaks can be forced by inserting the appropriate break character, which is required for internationalised text, such as Chinese, where there is no white space between words. If the text does not fit, then its font size is reduced until it fits, or a point is reached at which the text would be too small to be read, in which case a blanking image is returned.
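The shrink-to-fit behaviour described in the preceding paragraph could be implemented roughly as follows, ignoring line wrapping for brevity. The minimum legible point size and the helper names are assumptions, not values taken from the patent.

import java.awt.Font;
import java.awt.FontMetrics;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class TextFitter {

    private static final int MINIMUM_LEGIBLE_POINT_SIZE = 8; // assumed threshold

    /**
     * Reduces the point size until the text fits the available width, or
     * returns -1 to indicate that a blanking image should be served instead.
     */
    public static int fitPointSize(String text, String typeface,
                                   int requestedPointSize, int availableWidth) {
        // A one-pixel scratch image is used only to obtain font metrics.
        BufferedImage scratch = new BufferedImage(1, 1, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = scratch.createGraphics();
        try {
            for (int size = requestedPointSize; size >= MINIMUM_LEGIBLE_POINT_SIZE; size--) {
                FontMetrics metrics = g.getFontMetrics(new Font(typeface, Font.PLAIN, size));
                if (metrics.stringWidth(text) <= availableWidth) {
                    return size; // the text fits on one line at this size
                }
            }
        } finally {
            g.dispose();
        }
        return -1; // too small to be read: the caller returns a blanking image
    }
}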
Each image is rendered only once on the server, and so if the same text and effect is asked for again, the server recognises the existence of the image stored in memory, and merely returns a reference to it in the form of the unique file name or address. This is made easier by the fact that the server uses a unique file name based on the text and its characteristics.
It is also possible to create animated gifs to give various other additional effects, such as scrolling text.
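By way of illustration only, scrolling text could be produced by rendering the same text at a series of horizontal offsets, one image per frame, and then assembling the frames with a GIF encoder. The encoding step is not shown, and the frame count and scroll step below are arbitrary assumptions.

import java.awt.Color;
import java.awt.Font;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.util.ArrayList;
import java.util.List;

public class ScrollingTextFrames {

    /**
     * Draws the text at a different horizontal offset in each frame; played in
     * sequence, for example as an animated gif, the text appears to scroll
     * from right to left across the available space.
     */
    public static List<BufferedImage> createFrames(String text, Font font,
                                                   int width, int height, int frameCount) {
        List<BufferedImage> frames = new ArrayList<BufferedImage>();
        for (int i = 0; i < frameCount; i++) {
            BufferedImage frame = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
            Graphics2D g = frame.createGraphics();
            g.setFont(font);
            g.setColor(Color.WHITE);
            int x = width - (i * width / frameCount); // shift a little further left each frame
            int y = g.getFontMetrics().getAscent();
            g.drawString(text, x, y);
            g.dispose();
            frames.add(frame);
        }
        return frames;
    }
}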

Claims (22)

  1. A method of displaying an image comprising creating an embedded web page and attaching objects to the embedded web page, the objects being served as images.
  2. A method according to claim 1, wherein the embedded web page is created with one or more server hooks to which the or each object is attached.
  3. A method according to claim 1 or 2, wherein the embedded web page is a JavaServer Pages (JSP) page.
  4. A method according to any one of the preceding claims, wherein the or each object is created by turning an object source into an image.
  5. A method according to claim 4, wherein the image is saved in memory as an image file.
  6. A method according to claim 5, wherein the image file is named according to characteristics of the image.
  7. A method according to claim 6, wherein, where the image is of text, the characteristics include any one or more of typeface, width, height, point size, drop shadowing enabled, alignment and colour.
  8. A method according to any one of claims 5 to 7, further comprising checking whether or not an object source has previously been turned into an image and saved as an image file.
  9. A method according to claim 8, wherein, if the object source has previously been turned into an image and saved as an image file, returning the address or file name at which the file is located.
  10. A method according to any one of claims 1 to 6, wherein the image is of text.
  11. A method according to any one of the preceding claims, further comprising receiving requests and serving images via a webserver.
  12. A method according to any one of the preceding claims, further comprising creating a request in a client device.
  13. A method according to any one of the preceding claims, further comprising receiving images in a client device for display.
  14. A method according to any one of the preceding claims, using TCP/IP.
  15. Apparatus for displaying an image comprising: a request source which requests the transmission of an image for display; a processor which creates an embedded web page to which objects can be attached; and a server which turns an object source into an image for attachment to the embedded web page as an object.
  16. Apparatus according to claim 15, wherein the embedded web page is a JavaServer Pages (JSP) page.
  17. Apparatus for displaying an image according to claim 15 or 16, further comprising a memory for saving the image in the form of an image file.
  18. Apparatus according to claim 17, further comprising a file name or address allocating device which allocates a file name or address for the image file according to the characteristics of the image.
  19. Apparatus according to claim 18, wherein the file name or address allocating device allocates file names or addresses according to any one or more of the typeface, width, height, point size, drop shadowing enabled, alignment and colour of the image, where the image is text.
  20. Apparatus according to any one of claims 15 to 19, further comprising a webserver via which the request source makes its request.
  21. Apparatus according to any one of claims 15 to 20, further comprising a client device which is the request source.
  22. Apparatus according to claim 21, wherein the client device is a decoder.
GB0329443A 2003-12-19 2003-12-19 Language support for set-top boxes using web pages Withdrawn GB2409391A (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
GB0329443A GB2409391A (en) 2003-12-19 2003-12-19 Language support for set-top boxes using web pages
CNA2004800418360A CN1926539A (en) 2003-12-19 2004-12-15 Method and device for displaying image
PCT/GB2004/005265 WO2005059773A1 (en) 2003-12-19 2004-12-15 A method and apparatus for displaying an image
GB0614262A GB2427810A (en) 2003-12-19 2004-12-15 A method and apparatus for displaying an image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB0329443A GB2409391A (en) 2003-12-19 2003-12-19 Language support for set-top boxes using web pages

Publications (2)

Publication Number Publication Date
GB0329443D0 GB0329443D0 (en) 2004-01-28
GB2409391A true GB2409391A (en) 2005-06-22

Family

ID=30776104

Family Applications (2)

Application Number Title Priority Date Filing Date
GB0329443A Withdrawn GB2409391A (en) 2003-12-19 2003-12-19 Language support for set-top boxes using web pages
GB0614262A Withdrawn GB2427810A (en) 2003-12-19 2004-12-15 A method and apparatus for displaying an image

Family Applications After (1)

Application Number Title Priority Date Filing Date
GB0614262A Withdrawn GB2427810A (en) 2003-12-19 2004-12-15 A method and apparatus for displaying an image

Country Status (3)

Country Link
CN (1) CN1926539A (en)
GB (2) GB2409391A (en)
WO (1) WO2005059773A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101504648A (en) * 2008-11-14 2009-08-12 北京搜狗科技发展有限公司 Method and apparatus for showing web page resources

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11328057A (en) * 1998-05-13 1999-11-30 Yazaki Corp Internet terminal equipment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6964009B2 (en) * 1999-10-21 2005-11-08 Automated Media Processing Solutions, Inc. Automated media delivery system
HK1024380A2 (en) * 2000-03-28 2000-08-25 Lawrence Wai Ming Mo Internet-based font server
US8392827B2 (en) * 2001-04-30 2013-03-05 International Business Machines Corporation Method for generation and assembly of web page content
WO2003034304A1 (en) * 2001-10-18 2003-04-24 Acenet Co., Ltd. System and method for providing updatable e-mail
US7844909B2 (en) * 2002-01-03 2010-11-30 International Business Machines Corporation Dynamically rendering a button in a hypermedia content browser

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11328057A (en) * 1998-05-13 1999-11-30 Yazaki Corp Internet terminal equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Tool to Aid Translation of Web Pages into Different National Languages", IBM Technical Disclosure Bulletin 1998, US *

Also Published As

Publication number Publication date
GB2427810A (en) 2007-01-03
CN1926539A (en) 2007-03-07
GB0329443D0 (en) 2004-01-28
GB0614262D0 (en) 2006-08-30
WO2005059773A1 (en) 2005-06-30

Similar Documents

Publication Publication Date Title
US10587930B2 (en) Interactive user interface for television applications
US11765424B2 (en) Systems and methods for providing blackout recording and summary information
US9338385B2 (en) Identifying ancillary information associated with an audio/video program
US8225367B2 (en) Systems and methods for dynamic conversion of web content to an interactive walled garden program
US6204842B1 (en) System and method for a user interface to input URL addresses from captured video frames
JP4955544B2 (en) Client / server architecture and method for zoomable user interface
US6665687B1 (en) Composite user interface and search system for internet and multimedia applications
US10409445B2 (en) Rendering of an interactive lean-backward user interface on a television
US6928652B1 (en) Method and apparatus for displaying HTML and video simultaneously
US7360233B2 (en) Broadcast carousel system access for remote home communication terminal
US20060085829A1 (en) Broadcast content delivery systems and methods
US8726325B2 (en) Method and apparatus for scheduling delivery of video and graphics
US7533406B2 (en) Systems and methods for generating a walled garden program for substantially optimized bandwidth delivery
US20060168639A1 (en) Interactive television system with partial character set generator
JP2003515983A (en) Managing electronic content from different sources
US20020067428A1 (en) System and method for selecting symbols on a television display
JP2002514865A (en) System and method for providing a plurality of program services in a television system
GB2409391A (en) Language support for set-top boxes using web pages
WO2001010118A1 (en) Providing interactive links in tv programming
JP2005286798A (en) Image contents distribution system
US9060188B2 (en) Methods and systems for logging information
US9578396B2 (en) Method and device for providing HTML-based program guide service in a broadcasting terminal, and recording medium therefor
KR20140067784A (en) Apparatus for receiving augmented broadcast, method and system for receiving augmented broadcast contents

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)