WO2008142472A1 - System and method to consume web content using television set - Google Patents
- Publication number
- WO2008142472A1 (PCT/IB2007/001396)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- providing
- top box
- set top
- internet
- user
- Prior art date
Links
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/16—Analogue secrecy systems; Analogue subscription systems
- H04N7/173—Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal
- H04N7/17309—Transmission or handling of upstream communications
- H04N7/17318—Direct or substantially direct transmission and handling of requests
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/73—Querying
- G06F16/738—Presentation of query results
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/21—Server components or server architectures
- H04N21/222—Secondary servers, e.g. proxy server, cable television Head-end
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
- H04N21/2343—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
- H04N21/234318—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by decomposing into objects, e.g. MPEG-4 objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/254—Management at additional data server, e.g. shopping server, rights management server
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/4312—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
- H04N21/4316—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/45—Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
- H04N21/462—Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
- H04N21/4622—Retrieving content or additional data from different sources, e.g. from a broadcast channel and the Internet
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/47205—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/478—Supplemental services, e.g. displaying phone caller identification, shopping application
- H04N21/4782—Web browsing, e.g. WebTV
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/482—End-user interface for program selection
- H04N21/4828—End-user interface for program selection for searching program descriptors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/63—Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
- H04N21/643—Communication protocols
- H04N21/64322—IP
Definitions
- the invention relates to the Digital Video Broadcasting (DVB) television ecosystem. More particularly, it relates to an interactive television infrastructure that enables TV users to ask, search, choose and consume internet content using a TV set, a remote control or a voice recognition system, and a decoder or set top box.
- the invention relates to a client server infrastructure (like in Figure 1) comprising - as client - a hybrid decoder/set-top box receiver allowing broadcast reception (e.g. cable, satellite, terrestrial or IP) as well as access to the Internet through a broadband IP connection, and - as server - a typical Internet application server (web server and J2EE or equivalent application server) managing the access to the multitude of web-based applications, databases and services.
- the set top boxes could be substituted by a mini-pc.
- the invention also relates to a user interface for a consumer electronic product such as a television.
- the presented user interface allows a bimodal coexistence of the broadcasting ecosystem with an IP interactive environment (like in Figure 3).
- An additional interactive window enables searching web content, displaying retrieved information and functions as images or objects inside a virtual environment, making navigation compatible with the television (image based and fluent) and zapping (scrolling) paradigms, and making the presented web content consumable using a normal remote control.
- the invention relates to a system to retrieve a list of data and related services or events on this data from the internet, making this content interactive and consumable by TV users.
- Data can be common TV channels available within the broadcasting environment or any other content available within any web-based database. Interactivity with this data ranges from searching for and choosing the right information to scrolling through the selected items and consuming them.
- Television viewing is a popular form of entertainment. Developments in television and digital video technologies allow viewers to watch a wide variety of high- resolution content and to record programs for later viewing.
- the Internet offers users an alternative source for many types of information, such as movies, music, commercial products, weather, financial information, etc.
- users can access any type of web-based content at any time. Nonetheless, users must often interrupt their television viewing in order to access the Internet via a computing device. Additionally, accessing a preferred type of web-based content regularly can require users to spend time repeating navigation steps.
- Video on the Internet is often in the same format as what is delivered over a satellite or digital cable connection: MPEG-2, the same format used on DVDs. In other words, the same Internet connection used to view this web page could be used to bring full-size, DVD-quality video.
- some phone companies have acquired cable TV licenses and are delivering "IPTV” — digital television carried over phone lines, using Internet technology. It's indistinguishable from cable or satellite, except that the line coming into your living room looks like a phone cord, not a TV cable.
- IPTV IP Television
- There's IPTV so the technology exists to use the Internet infrastructure to carry television.
- a larger one is Microsoft's Media Center Extender set-top box. It connects your television to your PC, so you can not only watch TV networks, channels and broadcast shows; you can also access the music, photos, and video that are on your computer.
- The streaming media technology market is still chaotic today. If an Internet user clicks on a video to watch it, there is no guarantee that he can watch it as easily as he can watch a TV show after turning on a TV, usually due to the lack of local resource allocation. Video streaming is still a comparatively new technology. A video encoded with one technology usually cannot be played by another player, due to the lack of content adaptation and usage awareness techniques. The technical architecture and low cost requirements of the set top box do not allow the integration of different streaming players/viewers inside a unique set top box.
- the invention seeks to provide an improved system for the consumption of web content using a TV set, and preferably seeks to mitigate, alleviate or eliminate one or more of the above disadvantages singly or in any combination.
- an interactive television system allowing TV users to search, choose and consume internet content using a TV set, a remote control and a set top box, where the user does not need to read text to completely understand the retrieved data and to interact with it, but rather need only look at a three-dimensional animated landscape containing the complete answer in graphical form.
- Content can be classical TV content: channels, past programmes, shows, movies, news, live broadcast programs or future broadcast offers. This content is made available by searching in real time inside internet based electronic program guides (EPG).
- EPG electronic program guides
- Past shows and productions produced by TV networks are made available through the Internet network and consumed through a streaming flow.
- Present programs are the traditional broadcasting offers, distributed and consumed through a broadcasting medium, while future programs could be linked to services or events to preset recording commands.
- Searching on channel type enables the implementation of advanced favourite channel lists based for example on the language or the geographic location of the TV network.
- Searching on live programs' content enables the implementation of other favourite channel lists based on the content of the single programmes, presenting for example only channels showing tennis games, thriller movies, news, etc. Interacting with and selecting a single live program tunes the TV set to that channel without entering any predetermined channel number.
- Content can also be classical internet content (movies, music clips, any kind of products, auctions, additional content to single broadcasting events, electronic program guides) or any kind of user generated content (movies, music clips, texts or images). This content is made available searching real-time inside internet-based databases or applications.
- the interaction with the presented content is made compatible with the TV watching paradigms by transforming it into images (2D or 3D objects) inside 2D or 3D virtual environments, allowing the user easy and intuitive navigation (scrolling) and consumption (selection).
- a virtual world or environment is created and presented to the user that appropriately represents the requested data as virtual objects in simulated realistic form. By performing these actions, the user is effectively interacting with the internet data returned by the query.
- the three-dimensional environment is presented to a user on the TV set screen.
- the user can interact with the presented scene by a single click using a remote control.
- interaction with the presented scene brings additional results from the original data query into the three- dimensional view.
- the system comprises the translation of interactions with the graphical environments into the correct API or web service calls according to the predetermined parameters stored in the system.
- the graphical user interface acts as a logical interface between the user's cognitive understanding of the presented content and the complexity of the URL knowledge necessary to interact with the presented objects.
- the presentation of and interaction with web content occurs in a bimodal user interface allowing the user to perform tasks while continuing to watch conventional broadcast content.
- the search and interaction process is implemented inside a bimodal user interface allowing the coexistence of the broadcasting technology with the on demand internet paradigms.
- the present invention describes an example interface that facilitates channel surfing and browsing while enabling web-content delivery and consumption within the same user interface.
- the system comprises a transcoding component enabling cross translation of multimedia formats: from text query result sets to graphical environments, from text to sounds, from one video streaming format to another, although the video trans-coding method used is not part of the presented invention.
- a channel list is generated, for example, based on a real-time database search operation (e.g. thriller movies, tennis games, news, ..).
- An interface displays at least a portion of the channel list and allows for user-selection of a channel from the channel list so that the selected channel may be tuned.
- the presented list of objects could be generated based on a real-time search result of movies to be consumed in streaming, or music clips, or books to be bought, or any other web-based available content to be consumed.
- Figure 1 is a diagram illustrating the technical architecture of the presented system
- Figure 2 illustrates the trans-coding features of the invented system
- Figure 3 illustrates a typical broadcasting television user interface.
- Figure 4 illustrates a bimodal (IP + broadcasting) user interface allowing interaction with web content while consuming broadcast television.
- Figure 5 is a flowchart of a method for translating an Internet search query and its related result set into a graphical virtual environment according to an exemplary embodiment of the present invention.
- Figure 6 is a flowchart of a method for transparently translating the interaction with a graphical user interface into internet web service calls for the consumption of the presented content.
- Figure 7 illustrates an example for the presentation of some menu items in a graphical form.
- Figure 8 is a flowchart of a method for translating a scrolling operation into a navigation process according to an exemplary embodiment of the present invention.
- Figure 9 illustrates the graphical representation of a result set of movies responding to a real-time search request.
- Figure 10 illustrates the graphical representation of a result set of books responding to a real-time search request inside a virtual library.
- F2: RESULT-SET = F2(API-URL, SEARCH-CRITERIA)
- F4: OBJECT-ID = F4(SCENE-ID, DATAKIND-ID)
- the invention concerns a method for searching, browsing and consuming web-based content within a broadcasting environment ( Figure 1).
- the system includes a set-top box device ( Figure 1 - 030) comprising at least a processor and a memory accessible to the processor.
- the system includes a client computer program embedded within the memory of the set top box and executable by its processor.
- the client computer program comprises instructions to input search criteria, and to display, browse (surf or navigate) and interact with displayed data in a graphical form (vectors).
- the computer program also comprises a streaming player able to decode at least one streaming format.
- the set-top box device is connected to the IP Internet network through a broadband connection ( Figure 1 - 025).
- the set-top box device is also connected to at least one broadcasting medium (Figure 1 - 035) (cable, satellite, terrestrial or IP) allowing conventional broadcast consumption of live television programs.
- the system also includes a server hardware and software platform ( Figure 1 - 020) interfacing the invented architecture to the Internet ( Figure 1 - 010).
- a software platform and a relational database run inside the server, managing all the necessary parameters, allowing graphical environment parameters to be translated into API or web service calls and allowing result sets of database retrieval operations to be translated into 2D or 3D virtual environments.
- the server, with its predetermined parameters, acts as a unique interface towards the multitude of web services, applications and databases, hiding this complexity from the client application and therefore from the TV user.
- the server platform also includes a trans-coding component (smart edge) enabling the selected content to be transformed into a multimedia format in order to be consumable by TV users (Figure 2 - steps 030, 035, 040): from a list of items to a graphical environment, from text to sounds, from video streaming in one format to video streaming in another format.
- a trans-coding component smart edge
- The start flow is a common broadcast TV screen 350, showing a common show received through a broadcasting medium 035.
- the user starts the client program running inside a set top box, opening an interactive window 455 beside the broadcasting window 350, implementing the physical coexistence of passive broadcast television consumption with active interaction with IP-based internet content.
- The start flow could also be a predefined user interface already initialized with the two presented windows 350 and 455: one dedicated to the broadcasting environment (Figure 4 - 350), the other dedicated to the presentation of a real-time retrieval operation of web-based content (Figure 4 - 455). Focusing attention on the interactive window 455, the user can input search criteria, starting a retrieval operation of web content (Figure 5), and can navigate (surf) inside the displayed content, interacting with single displayed items (Figure 6).
Search and presentation flow
- the process begins at step 510 where the user inputs search criteria using the remote control 045.
- Search criteria are given to the system using the T9 protocol (numbers from 0 to 9), in a similar way as SMS messages are written on mobile phones. Search criteria could also be given to the system by future available voice recognition systems.
- the program running inside the set top box detects the search criteria and sends in step 520 a request to the Internet server 020 through the IP connection 025.
- the content the user is searching for depends on the user's location inside the navigational virtual environments. To search for books the user must enter a book store, to search for TV channels the user must enter a virtual EPG, and so on.
- the user's physical location inside the virtual environments is translated into a SCENE-ID, then used by the server together with the search criteria to determine what the user is searching for.
- Each request is characterized by a function internal descriptor (SCENE-ID), the interaction identifier (FUNCTION-ID) and the search criteria.
- SCENE-ID function internal descriptor
- FUNCTION-ID interaction identifier
- a server program uses this information to determine the kind of request, and therefore the name or address of the web service to be called, using the formula F1.
- the name of the program for each different combination is automatically retrieved by interpreting predetermined parameters stored inside an internal database, avoiding the programming activities necessary to extend the system to different external data sources, thus making the system extendible to any kind of available data.
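The parameter-table lookup described above can be sketched as follows. This is a hypothetical illustration, not the patent's implementation; the table contents, the `SCENE-ID`/`FUNCTION-ID` values and the endpoint names are all invented for the example, and a real deployment would read the table from the server's internal relational database.

```python
# Hypothetical sketch of formula F1: the server resolves the web service
# to call from the request's SCENE-ID and FUNCTION-ID via a configured
# parameter table, so new data sources are added by configuration,
# not by programming. All names here are illustrative.

SERVICE_TABLE = {
    # (scene_id, function_id) -> web-service endpoint name
    ("BOOK_STORE", "SEARCH"): "books_search_api",
    ("VIRTUAL_EPG", "SEARCH"): "epg_search_api",
}

def resolve_service(scene_id: str, function_id: str) -> str:
    """F1: determine the name/address of the web service to be called."""
    try:
        return SERVICE_TABLE[(scene_id, function_id)]
    except KeyError:
        raise ValueError(f"no service configured for {scene_id}/{function_id}")

# Example: a search issued while the user stands inside the virtual EPG
print(resolve_service("VIRTUAL_EPG", "SEARCH"))  # -> epg_search_api
```

Keeping F1 as pure table data is what makes the system "extendible to any kind of available data": a new external source only needs a new table row.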
- the parameters allowing the correct execution of the above described formula by internal algorithms are manually stored and maintained during the configuration phase.
- In step 540 the server platform therefore dynamically redirects a specific request to an external open existing web API (a software system designed to support interoperable machine-to-machine interaction over a network) using standard Web Services or proprietary protocols.
- In step 550 the called service 010 runs the database query and returns the information to the server, normally in XML format or any other readable format.
- a typical result set of a search operation comprises a list of items, where for each selected item the following attributes are defined: an external item-ID, a name or a description, one or more URL resources to be called while interacting with the item, and possibly one or more image URLs associated with the selected item.
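The attributes listed above could be modeled and parsed as in the following sketch. The XML element and attribute names are assumptions for illustration only; the patent specifies the attributes of each item but not the wire format.

```python
# Minimal sketch: parse an XML result set into item records carrying the
# attributes named by the text (item-ID, description, service URLs,
# image URLs). Element names are assumed, not from the patent.
import xml.etree.ElementTree as ET
from dataclasses import dataclass, field

@dataclass
class ResultItem:
    item_id: str
    description: str
    service_urls: list = field(default_factory=list)
    image_urls: list = field(default_factory=list)

def parse_result_set(xml_text: str) -> list:
    root = ET.fromstring(xml_text)
    items = []
    for el in root.findall("item"):
        items.append(ResultItem(
            item_id=el.get("id"),
            description=el.findtext("description", ""),
            service_urls=[u.text for u in el.findall("url")],
            image_urls=[u.text for u in el.findall("image")],
        ))
    return items

xml = """<resultset>
  <item id="42"><description>Tennis final</description>
    <url>http://example.org/play/42</url>
    <image>http://example.org/img/42.jpg</image></item>
</resultset>"""
items = parse_result_set(xml)
print(items[0].item_id, items[0].description)  # -> 42 Tennis final
```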
- step 560 the result set is received by the server 020.
- Step 570, based on the parameters stored in the server, gives a 2D or 3D aspect to each selected item, placing each object in an appropriate virtual environment and assigning to each object the defined services or events available for the users to interact with.
- the algorithms that transform received result sets into virtual environments are based on two different formulae, one for the determination of the landscape (F3) and a second one for the determination of the object shape to assign to each returned item (F4).
- F3 uses the USER-ID as a parameter, enabling the definition of user preferences, so that the system allows different users to interact with the same content through different graphical representations.
- Formula F3 uses the same FUNCTION-ID parameter as formula F1 to allow the physical representation of different data sources inside different graphical landscapes (SCENE-ID).
- Formula F4 gives each selected item a shape, assigning a SHAPE-ID as a result of a combination of the SCENE-ID and a new DATAKIND-ID parameter.
- the DATAKIND-ID is any kind of data descriptor returned by the called web service inside the result set. Examples of DATAKIND-ID could be the sex of a person in the case of a people database, the product type (book, disc, ..) of a shop, or any other externally available item descriptor.
- the OBJECT-ID resulting from formula F4 corresponds with a 3D model that is displayed as a 3D object inside the landscape.
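Like F1, formulae F3 and F4 can be read as parameter-table lookups, sketched below. This is a hedged illustration under assumed table contents; the actual scene, shape and datakind values are not given in the patent.

```python
# Hypothetical sketch of formulae F3 and F4 as table lookups:
# F3 picks the landscape (SCENE-ID) from FUNCTION-ID and USER-ID;
# F4 picks the object shape from SCENE-ID and the item's DATAKIND-ID.
# All table entries are invented for illustration.

LANDSCAPES = {  # F3: (function_id, user_id) -> scene_id
    ("SEARCH_BOOKS", "user1"): "LIBRARY_3D",
    ("SEARCH_MOVIES", "user1"): "CINEMA_3D",
}

SHAPES = {  # F4: (scene_id, datakind_id) -> object/shape id
    ("LIBRARY_3D", "book"): "SHAPE_BOOK",
    ("CINEMA_3D", "movie"): "SHAPE_POSTER",
}

def f3(function_id: str, user_id: str) -> str:
    """F3: landscape determination, personalized per user."""
    return LANDSCAPES[(function_id, user_id)]

def f4(scene_id: str, datakind_id: str) -> str:
    """F4: shape/OBJECT-ID assignment for each returned item."""
    return SHAPES[(scene_id, datakind_id)]

scene = f3("SEARCH_BOOKS", "user1")
print(scene, f4(scene, "book"))  # -> LIBRARY_3D SHAPE_BOOK
```

Because USER-ID is an input to F3, two users running the same query can legitimately land in different landscapes, which is the personalization the text describes.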
- Received data (item-ID, descriptor, detail URLs, image URLs) are temporarily cached inside internal cache databases just for the time necessary to allow the user a later interaction. Any other information is permanently stored in the server databases.
- Step 580 sends the generated graphical information to the set top box 030 in a streaming format through the IP broadband connection 025.
- a typical graphical scene descriptor comprises: scene-ID, scene parameters, a vector of 3D objects, and a vector of available function-IDs.
- the typical 3D object comprises points and coordinates, and an image.
- the software running inside the set top box translates graphical information (vectors and descriptors) into a visible virtual environment through generic 3D engine calls, drawing a virtual environment inside the interactive IP window 455 of the TV screen. If a single displayed object is associated with an image file, the client program 030 running inside the set top box reads the image directly from the internet using the received original URL and places the image as a texture over the 2D/3D model using typical computer graphics warp algorithms.
- An example of a graphically generated list of some selected movies is given in Figure 9, where each presented object represents a single movie. Another example is illustrated in Figure 10, where the result set of a books search is depicted as galleries, which can be explored following a physical metaphor.
- the user can navigate or browse the single data items in the same way he normally performs zapping or channel surfing (for example pushing the P+ button to advance).
- Interacting with a presented item means effectively interacting with an item returned by the search query, and thus with internet data.
- the client program automatically selects and highlights single presented items while navigating inside the presented virtual environments. The user can start the associated service or event with a single click of a confirmation button of the remote control 045 (for example the OK button). If more services or events are available for the selected item/object, these options are displayed and can be highlighted and chosen (for example using the + or - buttons of the remote control).
- the client program running inside the set top box sends the selected service request to the Internet server 020.
- Selected objects could be TV channels.
- the selection of a TV channel starts the selected broadcast transmission in the main window without any call to the internet server, simply redirecting the selected broadcast flow, received through the broadcasting medium 035, to the screen.
- This method implements an innovative form of channel tuning based on a search retrieval operation, without the need to enter fixed numbers with the remote control 045.
- Selected objects could be menu items.
- the interaction with menu items is automatically translated into new search queries as described by the flowchart in Figure 5, without the need to manually enter search criteria, as each menu item implicitly contains a predefined search criteria string. Consequently, the selection of a menu item results in the 2D/3D presentation of objects resulting from a real-time inquiry of a specific database.
- This method allows the system to transparently generate search requests, and therefore new item presentations, through simple one-click interactions with displayed objects.
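The one-click translation of a menu item into a full search request could look like the sketch below. The menu contents and key names are assumptions; only the idea that each menu item carries an implicit search criteria string comes from the text.

```python
# Hypothetical sketch: each menu item stores a predefined search-criteria
# string, so clicking it re-enters the Figure 5 search flow with no
# manual text entry. Menu entries are illustrative.

MENU = {
    "TV_CHANNEL_CUBE": {"scene_id": "VIRTUAL_EPG", "criteria": ""},
    "MOVIE_CUBE": {"scene_id": "VIRTUAL_EPG", "criteria": "genre=movie"},
}

def on_menu_click(item_key: str) -> dict:
    """Translate a one-click menu selection into a new search request."""
    entry = MENU[item_key]
    return {
        "scene_id": entry["scene_id"],
        "function_id": "SEARCH",          # re-enters the search flow
        "criteria": entry["criteria"],    # the implicit, predefined string
    }

print(on_menu_click("MOVIE_CUBE")["criteria"])  # -> genre=movie
```

Selecting the "Movie" cube of Figure 7 would thus amount to submitting a pre-filled query, yielding the filtered channel list of Figure 9.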
- An example of a graphical presentation of some menu items inside a virtual environment is illustrated in Figure 7. Interacting with the "TV channel" cube will generate the presentation of all available TV channels, while interacting with the "Movie" cube will dynamically display in window 455 the subset of TV channels showing movies, as illustrated in Figure 9.
- Menu items are stored inside an internal database at system 020. Menu items could be automatically generated from user preferences and past search queries, or by system managers, allowing the system to dynamically grow and change following user behaviours and preferences.
- Selected objects could be consumable objects physically available somewhere over the Internet network.
- a server program receives the request and translates it into a program call or a redirection to the item's associated URL previously cached in step 570.
- the determination of the correct URL to call is done using predetermined server parameters following the formula F4, where the parameters item-ID (selected item) and function-ID (requested service or event) are sent to the server inside the interaction request.
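The cache-then-resolve step can be sketched as follows. The function and cache names are hypothetical; the text only states that item URLs cached at step 570 are looked up from the (item-ID, function-ID) pair sent by the client.

```python
# Hypothetical sketch of the server-side interaction step: URLs for each
# item are cached during step 570, then resolved from the client's
# (item-ID, function-ID) pair when the user clicks an object.

URL_CACHE: dict = {}  # (item_id, function_id) -> URL

def cache_item_urls(item_id: str, urls_by_function: dict) -> None:
    """Step 570: temporarily cache the service URLs of a presented item."""
    for function_id, url in urls_by_function.items():
        URL_CACHE[(item_id, function_id)] = url

def resolve_interaction(item_id: str, function_id: str) -> str:
    """Translate an interaction request into the cached URL to invoke."""
    return URL_CACHE[(item_id, function_id)]

cache_item_urls("42", {"PLAY": "http://example.org/stream/42"})
print(resolve_interaction("42", "PLAY"))  # -> http://example.org/stream/42
```

The cache is intentionally short-lived, matching the text's note that received data are kept only long enough to allow a later interaction.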
- Step 660 redirects to, and thus invokes, the selected URL, generating the consumption of content related to the selected item: the streaming of a movie/music clip (step 660), the real-time translation of text into sound (step 675), the display of text on the screen (step 670), or the operation intrinsic to the API of the called web resource or service (buy, movie play, related detail text, ..) (step 650).
- the streaming flow could be displayed inside the main broadcasting window 350, or inside the interactive window 455.
- This method implements an innovative way to build a television on demand environment using internet-available multimedia content and transforming the consumption of non-multimedia content into a TV-compatible experience (Figure 2 - steps 230, 235, 240).
- the user controls the navigation, thus the item's scroll, by single-click commands using the remote control 045.
- These could be the common P+, P- burtons 20 normally used while zapping television channels (Figure 9 step 810).
- step 810 depending on an environment parameter the system decides, whether to rotate the scene (step 820) or whether to simulate a user's 25 advancement in the scene (step 830).
- the system decides whether to start an automatic and passive endless movement (rotation or advancement) or to make a single object's scroll, thus rotating and highlighting the next displayed object or advancing and highlighting to the next displayed object.
- Movements and rotations simulate a movie- like animation on the screen compatible with the watching paradigms of the television. Therefore, because the search results are depicted in an environment with which the user is immediately comfortable, this presentation allows a useful dialog between the user and the presented content, by using natural movement and exploration functions of a virtual three-dimensional world.
- the cited scene parameters are send from the server to the client as described in Figure 5 step 580.
- The server platform 020 also comprises a multimedia format trans-coding block (Figure 2).
- This is a smart platform enabling the processing and insertion of personalized content for end users (Figure 2).
- Search result sets are translated in real time into graphical environments.
- Texts are translated into sounds.
- Video streaming trans-coding (step 240) changes the coding of a selected stream so that it can be decoded and played by the unique player included in the client program.
- The server logic of the system 020 acts as a trans-coding platform, applying on the fly different trans-rating and trans-coding techniques and different physical encapsulations to fit different IP networks. This feature allows a single player/viewer to be implemented inside the STB while the platform integrates differently encoded content types.
- The best way to carry out the invention is to define an end-to-end open infrastructure and to develop any form of software able to freely deliver web-based content to the TV, making it consumable on demand by TV users.
- Figures 9 and 10 illustrate the final result obtained from an application prototype applying the invented method to the interaction with a movie database and a book-shop database.
- Figure 10 also emphasizes the bimodality of the proposed television user interface, where the interaction with a web-based virtual library occurs over a conventional broadcasting consumption window in a picture-in-picture modality.
- One architectural advantage of the proposed invention is to enable the delivery of web content to TV users without the need for any PC. This is done by transforming the set top box into a networked hybrid device, enabling connection to the IP Internet network while still receiving and decoding DVB digital broadcasting signals.
- An advantage of the proposed application is to simplify the consumption of web content within an open infrastructure and platform where the content is not imposed, chosen and predefined by a legacy system, content provider or TV operator, but is freely chosen from a multitude of internet applications, databases and services.
- An advantage of the proposed application is to simplify the consumption of TV content (channels) by implementing search features over an internet-based electronic program guide.
- Figure 9 shows an example of a graphical television channel surfing and tuning process, while Figure 10 shows how this process occurs while keeping the broadcasting window active on the TV screen, thus while watching television.
- An advantage of the proposed user interface is to integrate, in a unique bimodal interface, the passive consumption of TV content (broadcasting) with the active interaction with web-based content, within an immersive multimedia experience.
- The software form, the programming language, the operating system, the standard user interface, the 3D engine used, and the development techniques and tools of this software infrastructure are not important, because the invented method concerns only the form of dialog between a TV user and any accessible web-based database.
- The user formulates an inquiry and the software (developed applying the invention) redirects the query to a specific database and translates the list of data resulting from the inquiry into an animated and controllable 2D or 3D image of a simulated virtual space.
- When the user needs any form of scrolling through the data resulting from the inquiry, the software performs an appropriate movement in the 3D space, simulating a navigation or scroll operation within the result set.
- TV content could be an internet-based electronic program guide.
- TV content could be past-aired TV shows and programmes stored by TV networks inside internet-based databases.
- Web content could be all internet-based content that users may be interested in consuming on the TV set: movies and multimedia content not distributed today by broadcasting TV networks, auctions, books, music clips, any kind of product offered on the internet, additional content for single broadcasting events, electronic program guides (EPG) or any kind of user-generated content (movies, clips, texts or images).
- The TV user experience could be transformed into an on-demand environment, making enhanced TV a reality.
Abstract
The presented method realizes a converged communication system at home, with the TV set as the main networked device for integrated content search, delivery and consumption. This method delivers online content from the Internet to the TV, thus seamlessly integrating into one delivery channel the traditional consumption of TV broadcasting with the searching, browsing and consumption of multimedia and non-multimedia online resources.
Description
SYSTEM AND METHOD TO CONSUME WEB CONTENT USING TELEVISION SET
TECHNICAL FIELD
The invention relates to the Digital Video Broadcasting (DVB) television ecosystem. More particularly, it relates to an interactive television infrastructure that enables TV users to ask, search, choose and consume internet content using a TV set, a remote control or a voice recognition system, and a decoder or set top box. The invention relates to a client-server infrastructure (as in Figure 1) comprising, as client, a hybrid decoder/set-top-box receiver allowing broadcast reception (e.g. cable, satellite, terrestrial or IP) as well as access to the Internet through a broadband IP connection, and, as server, a typical Internet application server (web server and J2EE or equivalent application server) managing the access to the multitude of web-based applications, databases and services. In the presented architecture the set top box could be substituted by a mini-PC.
The invention also relates to a user interface for a consumer electronic product such as a television. The presented user interface allows a bimodal coexistence of the broadcasting ecosystem with an IP interactive environment (as in Figure 4). An additional interactive window enables searching web content, displaying the retrieved information and functions as images or objects inside a virtual environment, making the navigation compatible with the television (image-based and fluent) and zapping (scrolling) paradigms, and making the presented web content consumable using a normal remote control. In particular, the invention relates to a system to retrieve a list of data, and related services or events on this data, from the internet, making this content interactive and consumable by TV users. Data can be common TV channels available within the broadcasting environment or any other content available within any web-based database. Interactivity with this data ranges from searching for and choosing the right information to scrolling through the selected items and consuming them.
BACKGROUND ART
Television viewing is a popular form of entertainment. Developments in television and digital video technologies allow viewers to watch a wide variety of high- resolution content and to record programs for later viewing.
While television programming provides a variety of information and content, it is largely bound by program scheduling. The Internet offers users an alternative source for many types of information, such as movies, music, commercial products, weather, financial information, etc. Moreover, users can access any type of web-based content at any time. Nonetheless, users must often interrupt their television viewing in order to access the Internet via a computing device. Additionally, regularly accessing a preferred type of web-based content can require users to spend time repeating navigation steps.
The increasing complexity of consumer electronic products such as televisions, and of the systems in which these products are incorporated (e.g., cable and satellite television systems with hundreds of channels), makes it more difficult for users to use the products and take full advantage of the functionality that they provide. Operations of these products that in the past were relatively straightforward and simple have become more difficult. For example, it can be difficult to channel "surf" or "browse" to find programs of interest when a television receives hundreds of channels. In addition, with the advent of digital channels, even the task of simply tuning to a channel can involve entering a channel number and can take five or more key presses on the keypad of a remote control or a front panel. Thus, surfing from one channel to another by entering different channel numbers is time-consuming and prone to error, since so many numbers must be entered to surf to a series of different channels. When there are hundreds of channels, each possibly involving multiple channel numbers, even remembering which channels to surf to can be a difficult task.
These problems become more urgent if, in addition to existing TV channels, new kinds of content are made available to TV users, particularly if web content is made available within the television ecosystem.
There is constant growth of Internet content that users would like to consume using the TV set. TV and radio networks have already started putting more content on the Web (audio, video, etc.), allowing people to choose what they want to watch using PCs and Web browsers. TV networks are doing what millions of other Web sites already do: offering content for people to choose from. Much content distributed today over the Internet takes the form of text, whose interaction does not fit well with television interaction paradigms.
There's a good chance you have two connections coming into your home: one for television and one for the Internet. Your TV set is hooked up to one connection, and your cable/satellite provider gives you a list of channels to watch. A limited list. Your computer is hooked up to the other connection; your provider (maybe even the same company that provides your TV signal) lets you access an unlimited "list" of Web sites.
Video on the Internet is likely in the same format as what you get over a satellite or digital cable connection. It's called MPEG-2, and it's the same one used on DVDs as well. In other words, that same Internet connection you're using to view this Web page could be used to bring you full-size, DVD-quality video. In fact, some phone companies have acquired cable TV licenses and are delivering "IPTV" — digital television carried over phone lines, using Internet technology. It's indistinguishable from cable or satellite, except that the line coming into your living room looks like a phone cord, not a TV cable.
There's IPTV, so the technology exists to use the Internet infrastructure to carry television. There are faster and faster data pipes coming into your home. There's incredibly cheap storage; there are services that will let you download movies to watch on your PC. There are Media Center PCs that let you watch and record television shows on your computer. There are legacy systems that let you interact with selected content in a predefined way.
Those are small steps to the on-demand finish line. A larger one is Microsoft's Media Center Extender set-top box. It connects your television to your PC, so you can not only watch TV networks, channels and broadcast shows; you can also access the music, photos, and video that are on your computer.
Concerning usability, in today's television environment the capacity to perform search operations over general content is almost unavailable, while user satisfaction with the paradigms for moving between channels diminishes as the number of available channels grows. TV usage today is still a frustrating experience: TV has its own timing, program guides are difficult to find, search features are fragmented, video recording is difficult to programme, and broadcasters' video-on-demand offers are difficult to find and buy.
Distributing multimedia content over Internet networks is based on streaming technology. The streaming media technology market is still chaotic today. If an Internet user clicks on a video to watch it, there is no guarantee that he can watch it as easily as a TV show after turning on a TV, mostly due to the lack of local resource allocation. Video streaming is still a comparatively new technology. A video encoded with one technology usually cannot be played by another player, due to the lack of content adaptation and usage-awareness techniques. The technical architecture and low-cost requirements of the set top box do not allow different streaming players/viewers to be integrated inside a unique set top box.
SUMMARY OF THE INVENTION
Accordingly, the invention seeks to provide an improved system for the consumption of web content using a TV set, and preferably seeks to mitigate, alleviate or eliminate one or more of the above disadvantages, singly or in any combination.
According to a first aspect of the invention, there is an interactive television system allowing TV users to search, choose and consume internet content using a TV set, a remote control and a set top box, where the user does not need to read text to completely understand the retrieved data and to interact with it, but need only look at a three-dimensional animated landscape containing the complete answer in graphical form.
Presented, therefore, is a computer system and method for transforming data, technical environments and paradigms, making possible the coexistence of broadcast television consumption with interaction with internet services within a unique user interface. Content can be classical TV content: channels, past programmes, shows, movies, news, live broadcasting programs or future broadcasting offers. This content is made available by searching in real time inside internet-based electronic program guides (EPG). Past shows and productions produced by TV networks are made available through the Internet and consumed through a streaming flow. Present programs are the traditional broadcasting offers, distributed and consumed through a broadcasting medium, while future programs could be linked to services or events to preset recording commands. Searching on channel type enables the implementation of advanced favourite channel lists based, for example, on the language or geographic location of the TV network. Searching on live programs' content enables other favourite channel lists based on the content of the single programmes, presenting for example only channels showing tennis games, thriller movies, news, etc. Interacting with and selecting a single live program tunes the TV set to that channel without entering any predetermined channel number.
Content can also be classical internet content (movies, music clips, any kind of products, auctions, additional content for single broadcasting events, electronic program guides) or any kind of user-generated content (movies, music clips, texts or images). This content is made available by searching in real time inside internet-based databases or applications.
The interaction with the presented content is made compatible with TV watching paradigms by transforming it into images (2D or 3D objects) inside 2D or 3D virtual environments, allowing the user an easy and intuitive navigation (scrolling) and consumption (selection). From the user's data query, a virtual world or environment is created and presented to the user, appropriately representing the requested data as virtual objects in a simulated realistic form. By performing these actions, the user is effectively interacting with the internet data returned by the query.
In some embodiments the three-dimensional environment is presented to a user on the TV set screen. In other embodiments, the user can interact with the presented scene by a single click using a remote control. In other embodiments, interaction with the presented scene brings additional results from the original data query into the three-dimensional view.
According to a second aspect of the invention, the system comprises the translation of interactions with graphical environments into the correct API or web service calls according to predetermined parameters stored in the system. The graphical user interface acts as a logical interface between the user's cognitive understanding of the presented content and the complexity of the URL knowledge necessary to interact with the presented objects. According to a third aspect of the invention, the presentation of and interaction with web content occurs in a bimodal user interface allowing tasks to be performed while continuing to watch conventional broadcast content. The search and interaction process is implemented inside a bimodal user interface allowing the coexistence of broadcasting technology with on-demand internet paradigms. The present invention describes an example interface that facilitates channel surfing and browsing while enabling web-content delivery and consumption within the same user interface.
According to another aspect of the invention, the system comprises a trans-coding component enabling a cross-translation of multimedia formats: from a text query result set to graphical environments, from text to sounds, from one video streaming format to another, although the video trans-coding method used is not part of the presented invention.
By way of example, a channel list is generated based on a real-time database search operation (e.g. thriller movies, tennis games, news, etc.). An interface displays at least a portion of the channel list and allows for user selection of a channel from the channel list so that the selected channel may be tuned.
By way of another example, the presented list of objects could be generated from a real-time search result of movies to be consumed in streaming, or music clips, or books to be bought, or any other available web-based content to be consumed. These and other features and advantages will be better and more completely understood by referring to the following detailed description of example embodiments in conjunction with the drawings.
BRIEF DESCRIPTION OF DRAWINGS
Figure 1 is a diagram illustrating the technical architecture of the presented system.
Figure 2 illustrates the trans-coding features of the invented system.
Figure 3 illustrates a typical broadcasting television user interface.
Figure 4 illustrates a bimodal (IP + broadcasting) user interface allowing interaction with web content while consuming broadcast television.
Figure 5 is a flowchart of a method for translating an Internet search query and its related result set into a graphical virtual environment according to an exemplary embodiment of the present invention.
Figure 6 is a flowchart of a method for transparently translating the interaction with a graphical user interface into internet web service calls for the consumption of the presented content.
Figure 7 illustrates an example of the presentation of some menu items in graphical form.
Figure 8 is a flowchart of a method for translating a scrolling operation into a navigation process according to an exemplary embodiment of the present invention.
Figure 9 illustrates the graphical representation of a result set of movies responding to a real-time search request.
Figure 10 illustrates the graphical representation of a result set of books responding to a real-time search request inside a virtual library.
LIST OF PHYSICAL COMPONENTS REFERENCES
010 Internet public APIs, web resources or services
020 Web Server / Application Server layer
025 Broadband IP connection
030 TV set + Set Top Box + client program
035 Broadcasting medium
040 TV networks
045 Remote control
210 Algorithms for the translation of a user request in an API address
230 Algorithms for the translation of a query result set in a graphical environment
235 Algorithms for text to sound conversion
240 Video trans-coding algorithms
350 Passive broadcasting window
455 Active interactive IP window
LIST OF FORMULAE
F1: API-URL = F1(SCENE-ID, FUNCTION-ID)
F2: RESULT-SET = F2(API-URL, SEARCH-CRITERIA)
F3: SCENE-ID = F3(USER-ID, FUNCTION-ID)
F4: OBJECT-ID = F4(SCENE-ID, DATAKIND-ID)
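Purely as an illustrative sketch (the description specifies only the formulae's inputs and outputs, not their implementation), F1, F3 and F4 can be read as lookups over the predetermined parameter tables stored in the server, with F2 representing the remote query itself. All keys and values below are hypothetical:

```python
# Hypothetical parameter tables; in the described system these live in
# the server's internal relational database (component 020).
API_URL_TABLE = {   # F1: (SCENE-ID, FUNCTION-ID) -> web-service URL
    ("epg", "search"): "http://example.org/epg/search",
    ("bookshop", "search"): "http://example.org/books/search",
}
SCENE_TABLE = {     # F3: (USER-ID, FUNCTION-ID) -> SCENE-ID (landscape)
    ("user42", "search"): "bookshop",
}
OBJECT_TABLE = {    # F4: (SCENE-ID, DATAKIND-ID) -> OBJECT-ID (3D model)
    ("bookshop", "book"): "book_model",
    ("epg", "channel"): "cube_model",
}

def f1(scene_id, function_id):
    """F1: resolve the web-service URL for a given scene and function."""
    return API_URL_TABLE[(scene_id, function_id)]

def f3(user_id, function_id):
    """F3: choose the landscape (scene) for a user and function."""
    return SCENE_TABLE[(user_id, function_id)]

def f4(scene_id, datakind_id):
    """F4: choose the 3D model shape for a returned item."""
    return OBJECT_TABLE[(scene_id, datakind_id)]
```

F2, calling the resolved URL with the search criteria, would be an ordinary web-service request and is omitted here.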
DETAILED DESCRIPTION OF THE INVENTION
The invention concerns a method for searching, browsing and consuming web-based content within a broadcasting environment (Figure 1). The system includes a set-top box device (Figure 1 - 030) comprising at least a processor and a memory accessible to the processor. The system includes a client computer program embedded within the memory of the set top box and executable by its processor. The client computer program comprises instructions to input search criteria, and to display, browse (surf or navigate) and interact with displayed data in graphical form (vectors). The computer program also comprises a streaming player able to decode at least a single streaming format. The set-top box device is connected to the IP Internet network through a broadband connection (Figure 1 - 025). The set-top box device is also connected to at least one broadcasting medium (cable, satellite, terrestrial or IP) (Figure 1 - 035) allowing conventional broadcast consumption of live television programs.
The system also includes a server hardware and software platform (Figure 1 - 020) interfacing the invented architecture to the Internet (Figure 1 - 010). A software platform and a relational database run inside the server, managing all the necessary parameters, allowing graphical environment parameters to be translated into API or web service calls and allowing the result sets of database retrieval operations to be translated into 2D or 3D virtual environments. The server, with its predetermined parameters, acts as a unique interface towards the multitude of web services, applications and databases, hiding this complexity from the client application and therefore from the TV user.
The server platform also includes a trans-coding component (smart edge) enabling the selected content to be transformed into a multimedia format so as to be consumable by TV users (Figure 2 - 030, 035, 040): from a list of items to a graphical environment, from text to sounds, from a video stream in one format to a video stream in another format.
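A minimal sketch of how such a trans-coding component might decide what processing a stream needs before delivery to the single player embedded in the STB. The codec names, the target format and the function names are assumptions for illustration, not taken from the description:

```python
# Hypothetical single format understood by the STB's embedded player.
STB_FORMAT = "mpeg2"

def plan_transcoding(source_format, source_bitrate, link_bitrate):
    """Return the processing steps needed before streaming to the STB.

    - trans-coding when the source codec differs from the STB format;
    - trans-rating when the source bitrate exceeds the IP link capacity.
    """
    steps = []
    if source_format != STB_FORMAT:
        steps.append(("transcode", source_format, STB_FORMAT))
    if source_bitrate > link_bitrate:
        steps.append(("transrate", source_bitrate, link_bitrate))
    return steps or [("passthrough",)]
```

The actual conversion would be delegated to a media pipeline; only the dispatch decision is sketched here.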
The start flow is a common broadcasting TV window 350 on the screen, showing a common show received through a broadcasting medium 035.
The user starts the client program running inside the set top box, opening an interactive window 455 beside the broadcasting window 350 and thereby implementing the physical coexistence of passive broadcast television consumption with active interaction with IP-based internet content.
The start flow could also be a predefined user interface already initialized with the two presented windows 350 and 455: one dedicated to the broadcasting environment (Figure 4 - 350), the other dedicated to the presentation of a real-time retrieval operation of web-based content (Figure 4 - 455). Focusing attention on the interactive window 455, the user can input search criteria, starting a retrieval operation of web content (Figure 5), and can navigate (surf) inside the displayed content, interacting with single displayed items (Figure 6).
Search and presentation flow
The logical process of the software functions necessary to search, retrieve and present web content is described in the main functional block diagram of Figure 5.
The process begins at step 510, where the user inputs search criteria using the remote control 045. Search criteria are given to the system using the T9 protocol (numbers from 0 to 9), in a similar way to how SMS messages are written on mobile phones. Search criteria could also be given to the system by voice recognition systems as they become available.
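As an illustration of this style of numeric text entry, the following sketch decodes simple multi-tap key groups from a remote-control keypad. The actual input protocol of the set top box is not specified, so the keypad mapping and the function are hypothetical:

```python
# Standard phone-keypad letter layout (keys 2-9).
KEYPAD = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
          "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}

def decode_multitap(sequence):
    """Decode space-separated key groups, e.g. '8 44 33' -> 'the'.

    Each group is one key pressed repeatedly; the number of presses
    selects the letter printed on that key.
    """
    out = []
    for group in sequence.split():
        letters = KEYPAD[group[0]]
        out.append(letters[(len(group) - 1) % len(letters)])
    return "".join(out)
```

T9 proper adds dictionary-based prediction (one press per letter); multi-tap is shown here only because it is self-contained.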
The program running inside the set top box detects the search criteria and, in step 520, sends a request to the Internet server 020 through the IP connection 025. The content the user is searching for depends on the user's location inside the navigational virtual environments. To search for books the user must enter a book store; to search for TV channels the user must enter a virtual EPG, and so on. The user's physical location inside the virtual environments is translated into a SCENE-ID, which is then used by the server together with the search criteria to determine what the user is searching for.
Each request is characterized by a function internal descriptor (SCENE-ID), the interaction identifier (FUNCTION-ID) and the search criteria. In step 530 a server program uses this information to determine the kind of request and therefore the name or address of the web service to be called, using formula F1. The name of the program for each different combination is automatically retrieved by interpreting predetermined parameters stored inside an internal database, avoiding the programming activities otherwise necessary to extend the system to different external data sources, thus making the system extendible to any kind of available data. The parameters allowing the correct execution of the above-described formula by internal algorithms are manually stored and maintained during the configuration phase.
In step 540 the server platform therefore dynamically redirects a specific request to an external open existing web API (a software system designed to support interoperable machine-to-machine interaction over a network) using standard Web Services or proprietary protocols. In step 550 the called service 010 runs the database query and returns the information to the server, normally in XML format or any other readable format. A typical result set of a search operation comprises a list of items, where for each selected item the following attributes are defined: an external item-ID, a name or description, one or more URL resources to be called while interacting with the item, and possibly one or more image URLs associated with the selected item. In step 560 the result set is received by the server 020. Step 570, based on the parameters stored in the server, gives a 2D or 3D aspect to each selected item, placing each object in an appropriate virtual environment and assigning to each object the defined services or events available to users interacting with it. The algorithms that transform received result sets into virtual environments are based on two different formulae, one for the determination of the landscape (F3) and a second for the determination of the object shape to assign to each returned item (F4). F3 uses the USER-ID as a parameter, enabling the definition of user preferences so that the system allows different users to interact with the same content through different graphical representations. Formula F3 uses the same FUNCTION-ID parameter as formula F1, allowing the physical representation of different data sources inside different graphical landscapes (SCENE-ID). Formula F4 gives each selected item a shape, assigning a SHAPE-ID as a result of a combination of the SCENE-ID and a new DATAKIND-ID parameter. The DATAKIND-ID is any kind of data descriptor returned by the called web service inside the result set. Examples of DATAKIND-ID could be the sex of a person in the case of a people database, the product type (book, disc, etc.) of a shop, or any other externally available item descriptor. The OBJECT-ID resulting from formula F4 corresponds to a 3D model that is displayed as a 3D object inside the landscape.
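Steps 560-570 can be sketched as follows: the server parses the returned XML result set into items and gives each one a shape through an F4-style lookup. The XML element names, the shape table and the function name are illustrative assumptions, since the description fixes only the item attributes (item-ID, description, interaction URLs, image URLs):

```python
import xml.etree.ElementTree as ET

# Hypothetical F4 table: (SCENE-ID, DATAKIND-ID) -> 3D-model identifier.
SHAPES = {("movies", "movie"): "poster_panel"}

# Hypothetical XML as it might be returned by the called web service 010.
SAMPLE = """<resultset>
  <item id="m1" kind="movie">
    <name>Example Movie</name>
    <url>http://example.org/play/m1</url>
    <image>http://example.org/cover/m1.jpg</image>
  </item>
</resultset>"""

def build_scene(xml_text, scene_id):
    """Translate a result set into a scene descriptor for the client."""
    items = []
    for el in ET.fromstring(xml_text).findall("item"):
        items.append({
            "item_id": el.get("id"),
            "name": el.findtext("name"),
            "urls": [u.text for u in el.findall("url")],
            "images": [i.text for i in el.findall("image")],
            # F4: shape chosen from the scene and the item's data kind.
            "object_id": SHAPES[(scene_id, el.get("kind"))],
        })
    return {"scene_id": scene_id, "objects": items}
```

The resulting descriptor mirrors the scene structure later sent to the set top box in step 580.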
Received data (item-ID, descriptor, detail URLs, image URLs) are temporarily cached inside internal cache databases, just for the time necessary to allow the user a later interaction. Any other information is permanently stored in the server databases.
Step 580 sends the generated graphical information to the set top box 030 in a streaming format through the IP broadband connection 025. A typical graphical scene descriptor comprises: a scene-ID, scene parameters, a vector of 3D objects and a vector of available function-IDs. A typical 3D object comprises points and coordinates, image URLs, and the external item-ID used to recognize items to interact with.
In step 590, the software running inside the set top box translates the graphical information (vectors and descriptors) into a visible virtual environment through generic 3D engine calls, drawing a virtual environment inside the interactive IP window 455 of the TV screen. If a single displayed object is associated with an image file, the client program 030 running inside the set top box reads the image directly from the internet using the received original URL and places the image as a texture over the 2D/3D model using typical computer graphics warp algorithms. An example of a graphically generated list of some selected movies is given in Figure 9, where each presented object represents a single movie. Another example is illustrated in Figure 10, where the result set of a book search is depicted as galleries, which can be explored following a physical metaphor.
Using the remote control 045 the user can navigate or browse the single data items in the same way he normally performs zapping or channel surfing (for example pushing the P+ button to advance).
The logical process of the software functions necessary to navigate inside the displayed 2D or 3D virtual environments is described in the main flow diagram of Figure 8.
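The navigation decision of Figure 8 (steps 810-830), where an environment parameter selects rotation or advancement and a further choice selects endless movement or a single-object scroll, can be sketched as follows. The parameter and function names are assumptions for illustration:

```python
def navigate(scene, key, continuous=False):
    """Handle a P+/P- press: return the action and the new highlight index.

    scene["movement"] plays the role of the environment parameter that
    chooses between rotating the scene and advancing through it.
    """
    step = 1 if key == "P+" else -1
    action = "rotate" if scene["movement"] == "rotation" else "advance"
    if continuous:
        # Automatic, passive endless movement: the highlight stays put.
        return action + "_endless", scene["highlight"]
    # Single-object scroll: move the highlight to the next/previous object.
    n = len(scene["objects"])
    return action, (scene["highlight"] + step) % n
```

A wrap-around modulo keeps the scroll behaviour consistent with circular channel zapping.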
The logical process of the software functions necessary to interact with a retrieved list of objects is described in the main flow diagram of Figure 6.
Interaction Flow
Interacting with a presented item means effectively interacting with an item returned by the search query, thus with internet data. In step 610 the client program automatically selects and highlights single presented items while navigating inside the presented virtual environments. The user can start the associated service or event with a single click of a confirmation button on the remote control 045 (for example the OK button). If more services or events are available for the selected item/object, these options are displayed and can be highlighted and chosen (for example using the + or - buttons of the remote control).
In step 620 the client program running inside the set top box sends the selected service request to the Internet server 020.
Selected objects could be TV channels. The selection of a TV channel starts the selected broadcast transmission in the main window without any call to the internet server, simply redirecting the selected broadcasting flow, received through the broadcasting medium 035, to the screen. This method provides an innovative way to implement channel tuning based on a search retrieval operation, without the need to enter fixed numbers with the remote control 045.
Selected objects could be menu items. The interaction with menu items is automatically translated into new search queries as described by the flowchart of Figure 5, without the need to manually enter search criteria, as each menu item implicitly contains a predefined search criteria string. Consequently, the selection of a menu item results in the 2D/3D presentation of the objects returned by a real-time inquiry of a specific database. This method transparently generates search requests, and therefore new item presentations, through simple one-click interactions with displayed objects. An example of a graphical presentation of some menu items inside a virtual environment is illustrated in Figure 7. Interacting with the "TV channel" cube generates the presentation of all available TV channels, while interacting with the "Movie" cube dynamically displays in window 455 the subset of TV channels showing movies, as illustrated in Figure 9. Menu items are stored in an internal database at system 020. Menu items could be automatically generated from user preferences and past search queries, or by system managers, allowing the system to dynamically grow and change following user behaviours and preferences.
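The menu-item mechanism above can be sketched in a few lines. The mapping contents and labels are illustrative assumptions; the point is only that a one-click interaction carries a predefined search criteria string the user never sees:

```python
# Hypothetical menu-item table, standing in for the internal database at 020.
MENU_ITEMS = {
    "TV channel": "type:channel",
    "Movie":      "type:channel genre:movie",
}

def on_menu_click(label):
    """Translate a one-click menu interaction into a search request,
    following the same flow a typed query would take (Figure 5)."""
    criteria = MENU_ITEMS[label]  # predefined, invisible to the user
    return {"action": "search", "criteria": criteria}

# Clicking the "Movie" cube produces a ready-made search request.
req = on_menu_click("Movie")
```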
Selected objects could be consumable objects physically available somewhere over the Internet network.
In step 630 a server program receives the request and translates it into a program call or a redirection to the item's associated URL previously cached in step 570. The correct URL to call is determined using predetermined server parameters following formula F4, where the parameters item-ID (the selected item) and function-ID (the requested service or event) are sent to the server inside the interaction request.
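Formula F4 itself is not reproduced in this excerpt, so the sketch below only illustrates the mechanism: URLs cached in step 570 are kept keyed per item, and the pair (item-ID, function-ID) carried by the interaction request resolves to the URL to invoke. All names and URLs are hypothetical:

```python
# Stand-in for the URLs cached in step 570, one entry per returned item
# and per service/event offered for it.
URL_CACHE = {
    ("movie-42", "stream"):  "http://example.com/stream/42",
    ("movie-42", "details"): "http://example.com/details/42",
}

def resolve_request(item_id, function_id):
    """Translate an interaction request (item-ID, function-ID) into the
    previously cached URL the server will call or redirect to."""
    return URL_CACHE[(item_id, function_id)]

url = resolve_request("movie-42", "stream")
```

The client therefore never needs to know any external URL; it only ever sends the two identifiers.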
The server then redirects to, thus invokes, the selected URL, generating the consumption of content related to the selected item: the streaming of a movie or music clip (step 660), the real-time translation of text into sound (step 675), the display of text on the screen (step 670), or the operation intrinsic to the API of the called web resource or service (buy, movie play, related detail text, ...) (step 650).
The streaming flow could be displayed inside the main broadcasting window 350, or inside the interactive window 455. This method implements an innovative way to build a television-on-demand environment using multimedia content available on the internet, transforming the consumption of non-multimedia content into a TV-compatible experience (Figure 2 - steps 230, 235, 240).
Navigation and surfing flow
The logical process of the software functions necessary to navigate (data scrolling) inside the displayed 2D or 3D virtual environments is described in the main flow diagram of Figure 8. While navigating inside the presented virtual environments the user is effectively scrolling the data returned by the search query.
The user controls the navigation, thus the scrolling of items, by single-click commands using the remote control 045. These could be the common P+, P- buttons normally used while zapping television channels (Figure 9 step 810).
To overcome traditional 3D navigation problems, navigation inside the displayed virtual environments is not free, but is imposed and limited by parameters stored in the scene. In step 810, depending on an environment parameter, the system decides whether to rotate the scene (step 820) or to simulate the user's advancement in the scene (step 830).
Depending on a second parameter, the system decides whether to start an automatic, passive endless movement (rotation or advancement) or to perform a single object's scroll, rotating or advancing to the next displayed object and highlighting it. Movements and rotations simulate a movie-like animation on the screen, compatible with the watching paradigms of television.
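The two-parameter decision of steps 810-830 can be sketched as follows. The parameter names are illustrative assumptions; the source only states that both values are stored in the scene and sent by the server:

```python
def navigate(scene_params, highlighted):
    """Decide the guided movement for one remote-control click (step 810):
    rotate vs. advance, and endless movement vs. a single-object scroll."""
    # First parameter: rotate the scene (820) or advance through it (830).
    mode = "rotate" if scene_params["rotate"] else "advance"
    # Second parameter: passive endless movement, or one step to the next
    # displayed object, which then becomes the highlighted one.
    if scene_params["continuous"]:
        return mode, "endless", highlighted
    return mode, "single-step", highlighted + 1

# Usage: a non-continuous rotating scene steps the highlight forward by one.
move = navigate({"rotate": True, "continuous": False}, highlighted=0)
```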
Therefore, because the search results are depicted in an environment with which the user is immediately comfortable, this presentation allows a useful dialog between the user and the presented content, by using natural movement and exploration functions of a virtual three-dimensional world.
The cited scene parameters are sent from the server to the client as described in Figure 5, step 580.
Trans-coding features
In order to allow the end user to transparently consume different multimedia content formats, the server platform 020 also comprises a multimedia trans-coding block (Figure 2). This is a smart platform enabling the processing and insertion of personalized content for end users. In step 570 / 230 search result sets are translated in real time into graphical environments, in step 235 texts are translated into sounds, and a video streaming trans-coding step 240 changes the coding of a selected stream so that it can be decoded and played by a unique player included in the client program. In this scenario, the server logic of system 020 acts as a trans-coding platform, applying different trans-rating and trans-coding techniques on the fly and different physical encapsulations to fit different IP networks. This feature allows a single player/viewer to be implemented inside the STB, allowing the platform to be integrated with differently encoded content types.
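The trans-coding block of step 240 can be sketched as a simple normalization step. The target format and codec names are purely illustrative, as the source does not name any specific codec:

```python
# Hypothetical single format the STB's unique player can decode.
TARGET_FORMAT = "mpeg2-ts"

def transcode(stream):
    """Re-encode an incoming stream into the one format the client player
    decodes, so a single player/viewer suffices inside the STB."""
    if stream["format"] == TARGET_FORMAT:
        return stream  # already playable: pass through unchanged
    # A real platform would also trans-rate and change the physical
    # encapsulation here to fit the target IP network.
    return {"format": TARGET_FORMAT, "payload": stream["payload"]}

# Usage: any source format is normalized before reaching the client.
out = transcode({"format": "flv", "payload": b"..."})
```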
BEST MODE FOR CARRYING OUT THE INVENTION
The following description is presented solely for the purpose of disclosing how the present invention may be made and used. The scope of the invention is defined by the claims.
The best way to carry out the invention is to define an end-to-end open infrastructure and to develop any form of software able to freely deliver web-based content to TV, making it consumable in an on-demand way by TV users.
Figures 9 and 10 illustrate the final result obtained from an application prototype applying the invented method for the interaction with a movie database and a book-shop database. Figure 10 also emphasizes the bimodality of the proposed television user interface, where the interaction with a web-based virtual library occurs over a conventional broadcasting consumption window in a picture-in-picture modality. One architectural advantage of the proposed invention is to enable the delivery of web content to TV users without the need for any PC. This is done by transforming the set top box into a networked hybrid device, enabling the connection to the IP Internet network while still receiving and decoding DVB digital broadcasting signals.
An advantage of the proposed application is to simplify the consumption of web content within an open infrastructure and platform where the content is not imposed, chosen or predefined by a legacy system, content provider or TV operator, but is freely chosen from a multitude of internet applications, databases and services.
An advantage of the proposed application is to simplify the consumption of TV content (channels) by implementing search features over an internet-based electronic program guide. Figure 9 shows an example of a graphical television channel surfing and tuning process, while Figure 10 shows how this process occurs while keeping the broadcasting window active on the TV screen, thus while watching television.
An advantage of the proposed user interface is to integrate in a unique and bimodal interface the passive consumption of TV content (broadcasting) with the active interaction with web-based content within an immersive multimedia experience. The software form, the programming language, the operating system, the standard user interface, the 3D engine used, and the development techniques and tools of this software infrastructure are not important, because the invented method concerns only the form of dialog between a TV user and any accessible web-based database. The user formulates an inquiry, and the software (developed applying the invention) redirects the query to a specific database and translates the list of data resulting from the inquiry into an animated and controllable 2D or 3D image of a simulated virtual space. When the user needs any form of scrolling through the data resulting from the inquiry, the software performs an appropriate movement in the 3D space, simulating a navigation or scroll operation within the result set.
INDUSTRIAL APPLICABILITY
The primary commercial application of this method is the development of a software platform able to deliver and distribute TV content and web content in an on-demand way to TV users. TV content could be an internet-based electronic program guide. TV content could be previously aired TV shows and programmes stored by TV networks inside internet-based databases. Web content could be any internet-based content that users may wish to consume using the TV set: movies and multimedia content not distributed today by broadcasting TV networks, auctions, books, music clips, any kind of product on offer on the internet, additional content for single broadcasting events, electronic program guides (EPG) or any kind of user-generated content (movies, clips, texts or images).
Another commercial application of this method is to boost the development of advanced networked set top boxes, enabling TV users to buy a retail product outside legacy systems and to consume on-demand content within a television environment.
Under the strict application of the invented method, the TV user experience could be transformed into an on-demand environment, making enhanced TV a reality.
Claims
1. A method to search and display web-content in a digital TV environment, comprising :
(a) providing a set top box which is connected to the IP broadband network (Internet) as well as to one or more broadcasting medium (cable, satellite, terrestrial, IP), (b) providing a remote control or a voice recognition system which sends commands to said set top box,
(c) providing a character input means which a TV user can use to query the system,
(d) providing a TV set which is operatively connected to said set top box,
(e) providing a software client program which is running inside said set top box, (f) providing an Internet application server acting as an interface between said set top box and the Internet network, (g) providing a server software platform (series of software programs and databases) running inside said Internet application server, (h) providing one or more web APIs provided by third parties, (i) providing a result set of a generic search query based on the said search criteria, (j) providing an internet based Electronic Program Guide,
whereby a TV user inputs said character input using said remote control, the client program detects the character input, the client program sends a request to said Internet application server, said server software platform transforms the received request into an API call using predetermined internally stored parameters, the API runs a database query, said result set is returned to the Internet application server, the server software platform transforms the result set in real time into a graphical list of 3D models inside a virtual environment using predetermined internally stored parameters, the software platform caches different URLs relating to each returned data item for later interaction, the server software platform sends graphical and descriptor information to the client program, the client program displays said graphical environment on said TV screen and - if requested - reads images from received URLs and places the read images on the 3D objects, and whereby a TV user can search and visualize web content in an easy and intuitive way, without the use of PCs or keyboards, without worrying about the URLs of the multitude of web resources and without recurring to any text reading operation, and whereby the system transforms the interaction with non-multimedia web-based content into a multimedia experience, making it consumable in a TV-compatible way within said TV set using said remote control, and whereby the system simplifies the distribution and delivery of web content to TV users, and whereby the complexity to generate 3D graphic models and virtual environments, and the knowledge of the external Internet URLs of all the necessary internet available
APIs to be called lies in said server software platform only.
2. A method to search and display live TV content in a digital TV environment, comprising :
(a) providing a set top box which is connected to the IP broadband network (Internet) as well as to one or more broadcasting medium (cable, satellite, terrestrial, IP),
(b) providing a remote control or a voice recognition system which sends commands to said set top box,
(c) providing a character input means which a TV user can use to query the system, (d) providing a TV set which is operatively connected to said set top box,
(e) providing a software client program which is running inside said set top box,
(f) providing an Internet application server,
(g) providing a server software platform (series of software programs and databases) running inside said Internet application server, (h) providing one or more web Electronic Program Guide APIs (software system designed to support interoperable Machine to Machine interaction over a network) provided by third parties, whereby a TV user inputs said character input using said remote control, the client program detects the character input, the client program sends a request to said Internet application server, said server software platform transforms the received request into an API call using predetermined internally stored parameters, the API runs a database query, said result set is returned to the Internet application server, the server software platform transforms the result set in real time into a graphical list of 3D models inside a virtual environment using predetermined internally stored parameters, the software platform caches different URLs relating to each returned data item for later interaction, the server software platform sends graphical and descriptor information to the client program, the client program displays said graphical environment on said TV screen and, when requested, reads images from received URLs and places the read images on the 3D objects, and whereby a TV user can search and visualize TV channels whose information stored inside said Electronic Program Guide matches said character input, and whereby the system simplifies the search and the finding of TV channels of interest.
3. A computer implemented method to interact with and consume web-content in a digital TV environment, comprising:
(a) providing a set top box which is connected to the IP broadband network (Internet) as well as to one or more broadcasting medium (cable, satellite, terrestrial, IP),
(b) providing a remote control or a voice recognition system which sends commands to said set top box, (c) providing a TV set which is connected to said set top box,
(d) providing a software client program which is running inside said set top box,
(e) providing an Internet application server,
(f) providing a server software platform (series of software programs and databases) running inside said Internet application server, (g) providing one or more web APIs (software system designed to support interoperable Machine to Machine interaction over a network) provided by third parties, whereby a TV user interacts with said 3D objects of claim 1 or of claim 2, using said remote control, said client program detects the request, the client program redirects the request to said Internet application server, said server software platform transforms the request into said API or URL call using a previously cached URL or API address, the server platform calls the API or URL, said API or URL redirects some multimedia output to the client program, which generates some perceptible multimedia output on the TV set, and
whereby the user interaction with menu options starts a new search request as of claim 1 or 2 with search criteria transparent to the user, and whereby the user interaction with a displayed object of claim 1 simplifies the consumption of internet content related to a multitude of web applications and services in an intuitive way (video streaming, text display, music play, ...), or activates a service related to said object (reservation, buy request, ...), and whereby the interaction with said 3D models of claim 2 (TV channels) simplifies the tuning of the selected TV channel without having to enter predefined channel numbers with said remote control.
4. A computer implemented method to scroll web-content in a digital TV environment, comprising:
(a) providing a set top box which is connected to the IP broadband network (Internet) as well as to one or more broadcasting medium (cable, satellite, terrestrial, IP), (b) providing a remote control or a voice recognition system which sends commands to said set top box,
(c) providing a TV set which is connected to said set top box
(d) providing a software client program which is running inside said set top box
(e) providing an Internet application server, (f) providing a server software platform (series of software programs and databases) running inside said Internet application server (g) providing one or more web APIs (software system designed to support interoperable Machine to Machine interaction over a network) provided by third parties,
whereby a TV user interacts with said virtual environment of claim 1 or 2, using said remote control, said client program changes (moving or rotating) the graphical environment point of view and changes the graphical representation of the virtual environment on the said TV screen, simulating a navigation inside the scene, thus a scroll operation of the displayed 3D objects of claim 1 (web content) or of claim 2 (TV channels), and whereby the navigation is guided from predetermined parameters stored in the scene overcoming traditional 3D navigation problems, and whereby the scrolling of the said selected 3D objects of claim 1 or of claim 2 occurs in an intuitive way simulating natural movements inside a scene.
5. A computer implemented method to allow the bimodal consumption of broadcasting live television and web content, comprising:
(a) providing a set top box which is connected to the IP broadband network (Internet) as well as to one or more broadcasting medium (cable, satellite, terrestrial, IP), (b) providing a remote control or a voice recognition system which sends commands to said set top box,
(c) providing a TV set which is connected to said set top box,
(e) providing a software client program which is running inside said set top box,
(f) providing an Internet application server,
(g) providing a server software platform (series of software programs and databases) running inside said Internet application server,
(h) providing one or more web APIs (software system designed to support interoperable
Machine to Machine interaction over a network) provided by third parties, whereby a user interacts with a window as described in claims 1 or 2 or 3 or 4 while a second window on the same said TV set displays a live TV program received by a broadcasting medium, and whereby a TV user can search, choose and consume web content as described in claims 1 or 3 or 4 while watching live broadcasting television, and whereby a TV user can interact with TV content as described in claims 2 or 3 or 4, while watching live broadcasting television, and whereby the user can switch from passive broadcasting TV consumption to the IP-based web content consumption paradigm of anything, anytime, anywhere, and whereby the system seamlessly integrates (from the user's perspective) in one delivery channel the traditional consumption of TV broadcasting with browsing and search of multimedia online resources.
6. A computer implemented method to allow the consumption of different multimedia streaming files through a unique bimodal user interface, comprising:
(a) providing a set top box which is connected to the IP broadband network (Internet) as well as to one or more broadcasting medium (cable, satellite, terrestrial, IP),
(b) providing a remote control or a voice recognition system which sends commands to said set top box, (c) providing a TV set which is connected to said set top box,
(d) providing a software client program which is running inside said set top box,
(e) providing one or more streaming players which is/are running inside said set top box,
(f) providing an Internet application server, (g) providing a server software platform (series of software programs and databases) running inside said Internet application server, (h) providing one or more web APIs (software system designed to support interoperable
Machine to Machine interaction over a network) provided by third parties, (i) providing a multimedia trans-coding platform,
whereby a TV user interacts as described in claim 3 with said 3D objects of claim 1 or 2, and, if said called API of claim 3 generates a streaming flow, this is redirected to said multimedia trans-coding platform, the multimedia trans-coding platform transforms the input data into a different multimedia format and redirects the output streaming flow to said client program, the client program displays it on said TV set, and whereby multimedia content in a multitude of formats can be displayed on said TV set by said one or more streaming players.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/IB2007/001396 WO2008142472A1 (en) | 2007-05-18 | 2007-05-18 | System and method to consume web content using television set |
EP07734695A EP2168378A1 (en) | 2007-05-18 | 2007-05-18 | System and method to consume web content using television set |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/IB2007/001396 WO2008142472A1 (en) | 2007-05-18 | 2007-05-18 | System and method to consume web content using television set |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2008142472A1 true WO2008142472A1 (en) | 2008-11-27 |
Family
ID=39273587
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IB2007/001396 WO2008142472A1 (en) | 2007-05-18 | 2007-05-18 | System and method to consume web content using television set |
Country Status (2)
Country | Link |
---|---|
EP (1) | EP2168378A1 (en) |
WO (1) | WO2008142472A1 (en) |
US10789959B2 (en) | 2018-03-02 | 2020-09-29 | Apple Inc. | Training speaker recognition models for digital assistants |
US10789945B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Low-latency intelligent automated assistant |
US10791176B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US10810274B2 (en) | 2017-05-15 | 2020-10-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
US10818288B2 (en) | 2018-03-26 | 2020-10-27 | Apple Inc. | Natural assistant interaction |
US10892996B2 (en) | 2018-06-01 | 2021-01-12 | Apple Inc. | Variable latency device coordination |
US10909331B2 (en) | 2018-03-30 | 2021-02-02 | Apple Inc. | Implicit identification of translation payload with neural machine translation |
US10928918B2 (en) | 2018-05-07 | 2021-02-23 | Apple Inc. | Raise to speak |
US10984780B2 (en) | 2018-05-21 | 2021-04-20 | Apple Inc. | Global semantic word embeddings using bi-directional recurrent neural networks |
US11010550B2 (en) | 2015-09-29 | 2021-05-18 | Apple Inc. | Unified language modeling framework for word prediction, auto-completion and auto-correction |
US11025565B2 (en) | 2015-06-07 | 2021-06-01 | Apple Inc. | Personalized prediction of responses for instant messaging |
US11023513B2 (en) | 2007-12-20 | 2021-06-01 | Apple Inc. | Method and apparatus for searching using an active ontology |
US11069336B2 (en) | 2012-03-02 | 2021-07-20 | Apple Inc. | Systems and methods for name pronunciation |
US11145294B2 (en) | 2018-05-07 | 2021-10-12 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
US11204787B2 (en) | 2017-01-09 | 2021-12-21 | Apple Inc. | Application integration with a digital assistant |
US11217255B2 (en) | 2017-05-16 | 2022-01-04 | Apple Inc. | Far-field extension for digital assistant services |
US11231904B2 (en) | 2015-03-06 | 2022-01-25 | Apple Inc. | Reducing response latency of intelligent automated assistants |
US11281993B2 (en) | 2016-12-05 | 2022-03-22 | Apple Inc. | Model and ensemble compression for metric learning |
US11301477B2 (en) | 2017-05-12 | 2022-04-12 | Apple Inc. | Feedback analysis of a digital assistant |
US11314370B2 (en) | 2013-12-06 | 2022-04-26 | Apple Inc. | Method for extracting salient dialog usage from live data |
US11350253B2 (en) | 2011-06-03 | 2022-05-31 | Apple Inc. | Active transport based notifications |
US11386266B2 (en) | 2018-06-01 | 2022-07-12 | Apple Inc. | Text correction |
US11495218B2 (en) | 2018-06-01 | 2022-11-08 | Apple Inc. | Virtual assistant operation in multi-device environments |
US11587559B2 (en) | 2015-09-30 | 2023-02-21 | Apple Inc. | Intelligent device identification |
EP4207788A4 (en) * | 2020-08-25 | 2024-03-06 | Lg Electronics Inc | Display device and method for providing content using same |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1998043183A1 (en) * | 1997-03-25 | 1998-10-01 | Sony Electronics, Inc. | Integrated search of electronic program guide, internet and other information resources |
EP1463332A1 (en) * | 2003-03-25 | 2004-09-29 | Broadcom Corporation | Media processing system supporting different media formats via server-based transcoding |
US20060136383A1 (en) * | 2004-12-20 | 2006-06-22 | Alcatel | Method and system enabling Web content searching from a remote set-top control interface or device |
EP1722551A2 (en) * | 1996-12-10 | 2006-11-15 | United Video Properties, Inc. | Internet television program guide system |
EP1731992A2 (en) * | 2005-06-09 | 2006-12-13 | Samsung Electronics Co., Ltd. | Apparatus and Method for Inputting Characters using Circular Key Arrangement |
2007
- 2007-05-18 EP EP07734695A patent/EP2168378A1/en not_active Withdrawn
- 2007-05-18 WO PCT/IB2007/001396 patent/WO2008142472A1/en active Application Filing
Non-Patent Citations (3)
Title |
---|
"Zukunft entwickeln - Arbeit erfinden", ECULTURE TRENDS 2006, 25 February 2007 (2007-02-25), pages 1 - 13, XP002478848, Retrieved from the Internet <URL:http://web.archive.org/web/20070225223803/http://eculturefactory.de/download/eCT-Abstracts.pdf> [retrieved on 20080422] * |
PORETTI G ET AL: "An entertaining way to access Web content", ENTERTAINMENT COMPUTING - ICEC 2004. THIRD INTERNATIONAL CONFERENCE. PROCEEDINGS (LECTURE NOTES IN COMPUT. SCI. VOL.3166) SPRINGER-VERLAG BERLIN, GERMANY, 2004, pages 518 - 521, XP002478847, ISBN: 3-540-22947-7 * |
PORETTI GIACOMO, SOLLBERGER ALBERTO: "An entertaining way to access web content", 10 May 2005 (2005-05-10), pages 1 - 6, XP002478846, Retrieved from the Internet <URL:http://web.archive.org/web/20050510205615/www.3denter.com/html/Paper.pdf> [retrieved on 20080416] * |
Cited By (174)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9646614B2 (en) | 2000-03-16 | 2017-05-09 | Apple Inc. | Fast, language-independent method for user authentication by voice |
US10318871B2 (en) | 2005-09-08 | 2019-06-11 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US11023513B2 (en) | 2007-12-20 | 2021-06-01 | Apple Inc. | Method and apparatus for searching using an active ontology |
US10381016B2 (en) | 2008-01-03 | 2019-08-13 | Apple Inc. | Methods and apparatus for altering audio output signals |
US9626955B2 (en) | 2008-04-05 | 2017-04-18 | Apple Inc. | Intelligent text-to-speech conversion |
US9865248B2 (en) | 2008-04-05 | 2018-01-09 | Apple Inc. | Intelligent text-to-speech conversion |
US10108612B2 (en) | 2008-07-31 | 2018-10-23 | Apple Inc. | Mobile device having human language translation capability with positional feedback |
US9412392B2 (en) | 2008-10-02 | 2016-08-09 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US10643611B2 (en) | 2008-10-02 | 2020-05-05 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US11348582B2 (en) | 2008-10-02 | 2022-05-31 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US10681299B2 (en) | 2009-05-06 | 2020-06-09 | T-Jat Systems 2006 Ltd. | Device and method for providing services to a user of a TV set |
US10110845B2 (en) | 2009-05-06 | 2018-10-23 | T-Jat Systems 2006 Ltd. | Device and method for providing services to a user of a TV set |
US10795541B2 (en) | 2009-06-05 | 2020-10-06 | Apple Inc. | Intelligent organization of tasks items |
US11080012B2 (en) | 2009-06-05 | 2021-08-03 | Apple Inc. | Interface for a virtual digital assistant |
US10475446B2 (en) | 2009-06-05 | 2019-11-12 | Apple Inc. | Using context information to facilitate processing of commands in a virtual assistant |
US10283110B2 (en) | 2009-07-02 | 2019-05-07 | Apple Inc. | Methods and apparatuses for automatic speech recognition |
US9548050B2 (en) | 2010-01-18 | 2017-01-17 | Apple Inc. | Intelligent automated assistant |
US10706841B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Task flow identification based on user intent |
US11423886B2 (en) | 2010-01-18 | 2022-08-23 | Apple Inc. | Task flow identification based on user intent |
US9633660B2 (en) | 2010-02-25 | 2017-04-25 | Apple Inc. | User profiling for voice input processing |
US10049675B2 (en) | 2010-02-25 | 2018-08-14 | Apple Inc. | User profiling for voice input processing |
US10692504B2 (en) | 2010-02-25 | 2020-06-23 | Apple Inc. | User profiling for voice input processing |
US10417405B2 (en) | 2011-03-21 | 2019-09-17 | Apple Inc. | Device access using voice authentication |
US9262612B2 (en) | 2011-03-21 | 2016-02-16 | Apple Inc. | Device access using voice authentication |
US10102359B2 (en) | 2011-03-21 | 2018-10-16 | Apple Inc. | Device access using voice authentication |
US11120372B2 (en) | 2011-06-03 | 2021-09-14 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US10241644B2 (en) | 2011-06-03 | 2019-03-26 | Apple Inc. | Actionable reminder entries |
US11350253B2 (en) | 2011-06-03 | 2022-05-31 | Apple Inc. | Active transport based notifications |
US10706373B2 (en) | 2011-06-03 | 2020-07-07 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US9798393B2 (en) | 2011-08-29 | 2017-10-24 | Apple Inc. | Text correction processing |
US10241752B2 (en) | 2011-09-30 | 2019-03-26 | Apple Inc. | Interface for a virtual digital assistant |
US11069336B2 (en) | 2012-03-02 | 2021-07-20 | Apple Inc. | Systems and methods for name pronunciation |
US9953088B2 (en) | 2012-05-14 | 2018-04-24 | Apple Inc. | Crowd sourcing information to fulfill user requests |
US10079014B2 (en) | 2012-06-08 | 2018-09-18 | Apple Inc. | Name recognition system |
US9971774B2 (en) | 2012-09-19 | 2018-05-15 | Apple Inc. | Voice-based media searching |
US9620104B2 (en) | 2013-06-07 | 2017-04-11 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US9582608B2 (en) | 2013-06-07 | 2017-02-28 | Apple Inc. | Unified ranking with entropy-weighted information for phrase-based semantic auto-completion |
US9633674B2 (en) | 2013-06-07 | 2017-04-25 | Apple Inc. | System and method for detecting errors in interactions with a voice-based digital assistant |
US9966060B2 (en) | 2013-06-07 | 2018-05-08 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US9966068B2 (en) | 2013-06-08 | 2018-05-08 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US10657961B2 (en) | 2013-06-08 | 2020-05-19 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US10769385B2 (en) | 2013-06-09 | 2020-09-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
US11048473B2 (en) | 2013-06-09 | 2021-06-29 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
US10185542B2 (en) | 2013-06-09 | 2019-01-22 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
US10176167B2 (en) | 2013-06-09 | 2019-01-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
US11314370B2 (en) | 2013-12-06 | 2022-04-26 | Apple Inc. | Method for extracting salient dialog usage from live data |
US10169329B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Exemplar-based natural language processing |
US9842101B2 (en) | 2014-05-30 | 2017-12-12 | Apple Inc. | Predictive conversion of language input |
US10417344B2 (en) | 2014-05-30 | 2019-09-17 | Apple Inc. | Exemplar-based natural language processing |
US11257504B2 (en) | 2014-05-30 | 2022-02-22 | Apple Inc. | Intelligent assistant for home automation |
US10657966B2 (en) | 2014-05-30 | 2020-05-19 | Apple Inc. | Better resolution when referencing to concepts |
US10699717B2 (en) | 2014-05-30 | 2020-06-30 | Apple Inc. | Intelligent assistant for home automation |
US10497365B2 (en) | 2014-05-30 | 2019-12-03 | Apple Inc. | Multi-command single utterance input method |
US10714095B2 (en) | 2014-05-30 | 2020-07-14 | Apple Inc. | Intelligent assistant for home automation |
US10083690B2 (en) | 2014-05-30 | 2018-09-25 | Apple Inc. | Better resolution when referencing to concepts |
US10904611B2 (en) | 2014-06-30 | 2021-01-26 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US11516537B2 (en) | 2014-06-30 | 2022-11-29 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US10659851B2 (en) | 2014-06-30 | 2020-05-19 | Apple Inc. | Real-time digital assistant knowledge updates |
WO2016003509A1 (en) * | 2014-06-30 | 2016-01-07 | Apple Inc. | Intelligent automated assistant for tv user interactions |
US9338493B2 (en) | 2014-06-30 | 2016-05-10 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US9668024B2 (en) | 2014-06-30 | 2017-05-30 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US11838579B2 (en) | 2014-06-30 | 2023-12-05 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US10431204B2 (en) | 2014-09-11 | 2019-10-01 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US9818400B2 (en) | 2014-09-11 | 2017-11-14 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US10789041B2 (en) | 2014-09-12 | 2020-09-29 | Apple Inc. | Dynamic thresholds for always listening speech trigger |
US10453443B2 (en) | 2014-09-30 | 2019-10-22 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US9668121B2 (en) | 2014-09-30 | 2017-05-30 | Apple Inc. | Social reminders |
US9986419B2 (en) | 2014-09-30 | 2018-05-29 | Apple Inc. | Social reminders |
US10127911B2 (en) | 2014-09-30 | 2018-11-13 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US10438595B2 (en) | 2014-09-30 | 2019-10-08 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US9646609B2 (en) | 2014-09-30 | 2017-05-09 | Apple Inc. | Caching apparatus for serving phonetic pronunciations |
US10074360B2 (en) | 2014-09-30 | 2018-09-11 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US10390213B2 (en) | 2014-09-30 | 2019-08-20 | Apple Inc. | Social reminders |
US9886432B2 (en) | 2014-09-30 | 2018-02-06 | Apple Inc. | Parsimonious handling of word inflection via categorical stem + suffix N-gram language models |
US11231904B2 (en) | 2015-03-06 | 2022-01-25 | Apple Inc. | Reducing response latency of intelligent automated assistants |
US9865280B2 (en) | 2015-03-06 | 2018-01-09 | Apple Inc. | Structured dictation using intelligent automated assistants |
US10311871B2 (en) | 2015-03-08 | 2019-06-04 | Apple Inc. | Competing devices responding to voice triggers |
US9886953B2 (en) | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation |
US10567477B2 (en) | 2015-03-08 | 2020-02-18 | Apple Inc. | Virtual assistant continuity |
US11087759B2 (en) | 2015-03-08 | 2021-08-10 | Apple Inc. | Virtual assistant activation |
US10529332B2 (en) | 2015-03-08 | 2020-01-07 | Apple Inc. | Virtual assistant activation |
US9721566B2 (en) | 2015-03-08 | 2017-08-01 | Apple Inc. | Competing devices responding to voice triggers |
US9899019B2 (en) | 2015-03-18 | 2018-02-20 | Apple Inc. | Systems and methods for structured stem and suffix language models |
US9842105B2 (en) | 2015-04-16 | 2017-12-12 | Apple Inc. | Parsimonious continuous-space phrase representations for natural language processing |
US10083688B2 (en) | 2015-05-27 | 2018-09-25 | Apple Inc. | Device voice control for selecting a displayed affordance |
US11127397B2 (en) | 2015-05-27 | 2021-09-21 | Apple Inc. | Device voice control |
US10127220B2 (en) | 2015-06-04 | 2018-11-13 | Apple Inc. | Language identification from short strings |
US10356243B2 (en) | 2015-06-05 | 2019-07-16 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
US10101822B2 (en) | 2015-06-05 | 2018-10-16 | Apple Inc. | Language input correction |
US10186254B2 (en) | 2015-06-07 | 2019-01-22 | Apple Inc. | Context-based endpoint detection |
US11025565B2 (en) | 2015-06-07 | 2021-06-01 | Apple Inc. | Personalized prediction of responses for instant messaging |
US10255907B2 (en) | 2015-06-07 | 2019-04-09 | Apple Inc. | Automatic accent detection using acoustic models |
US11500672B2 (en) | 2015-09-08 | 2022-11-15 | Apple Inc. | Distributed personal assistant |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US9697820B2 (en) | 2015-09-24 | 2017-07-04 | Apple Inc. | Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks |
US11010550B2 (en) | 2015-09-29 | 2021-05-18 | Apple Inc. | Unified language modeling framework for word prediction, auto-completion and auto-correction |
US10366158B2 (en) | 2015-09-29 | 2019-07-30 | Apple Inc. | Efficient word encoding for recurrent neural network language models |
US11587559B2 (en) | 2015-09-30 | 2023-02-21 | Apple Inc. | Intelligent device identification |
US11526368B2 (en) | 2015-11-06 | 2022-12-13 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10049668B2 (en) | 2015-12-02 | 2018-08-14 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10354652B2 (en) | 2015-12-02 | 2019-07-16 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US10446143B2 (en) | 2016-03-14 | 2019-10-15 | Apple Inc. | Identification of voice inputs providing credentials |
US9934775B2 (en) | 2016-05-26 | 2018-04-03 | Apple Inc. | Unit-selection text-to-speech synthesis based on predicted concatenation parameters |
US9972304B2 (en) | 2016-06-03 | 2018-05-15 | Apple Inc. | Privacy preserving distributed evaluation framework for embedded personalized systems |
US10249300B2 (en) | 2016-06-06 | 2019-04-02 | Apple Inc. | Intelligent list reading |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple, Inc. | Intelligent automated assistant for media exploration |
US11069347B2 (en) | 2016-06-08 | 2021-07-20 | Apple Inc. | Intelligent automated assistant for media exploration |
US10354011B2 (en) | 2016-06-09 | 2019-07-16 | Apple Inc. | Intelligent automated assistant in a home environment |
US10067938B2 (en) | 2016-06-10 | 2018-09-04 | Apple Inc. | Multilingual word prediction |
US11037565B2 (en) | 2016-06-10 | 2021-06-15 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10192552B2 (en) | 2016-06-10 | 2019-01-29 | Apple Inc. | Digital assistant providing whispered speech |
US10733993B2 (en) | 2016-06-10 | 2020-08-04 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10490187B2 (en) | 2016-06-10 | 2019-11-26 | Apple Inc. | Digital assistant providing automated status report |
US10509862B2 (en) | 2016-06-10 | 2019-12-17 | Apple Inc. | Dynamic phrase expansion of language input |
US10269345B2 (en) | 2016-06-11 | 2019-04-23 | Apple Inc. | Intelligent task discovery |
US10297253B2 (en) | 2016-06-11 | 2019-05-21 | Apple Inc. | Application integration with a digital assistant |
US10089072B2 (en) | 2016-06-11 | 2018-10-02 | Apple Inc. | Intelligent device arbitration and control |
US11152002B2 (en) | 2016-06-11 | 2021-10-19 | Apple Inc. | Application integration with a digital assistant |
US10942702B2 (en) | 2016-06-11 | 2021-03-09 | Apple Inc. | Intelligent device arbitration and control |
US10521466B2 (en) | 2016-06-11 | 2019-12-31 | Apple Inc. | Data driven natural language event detection and classification |
US10580409B2 (en) | 2016-06-11 | 2020-03-03 | Apple Inc. | Application integration with a digital assistant |
US10474753B2 (en) | 2016-09-07 | 2019-11-12 | Apple Inc. | Language identification using recurrent neural networks |
US10043516B2 (en) | 2016-09-23 | 2018-08-07 | Apple Inc. | Intelligent automated assistant |
US10553215B2 (en) | 2016-09-23 | 2020-02-04 | Apple Inc. | Intelligent automated assistant |
US11281993B2 (en) | 2016-12-05 | 2022-03-22 | Apple Inc. | Model and ensemble compression for metric learning |
US10593346B2 (en) | 2016-12-22 | 2020-03-17 | Apple Inc. | Rank-reduced token representation for automatic speech recognition |
US11204787B2 (en) | 2017-01-09 | 2021-12-21 | Apple Inc. | Application integration with a digital assistant |
US10332518B2 (en) | 2017-05-09 | 2019-06-25 | Apple Inc. | User interface for correcting recognition errors |
US10417266B2 (en) | 2017-05-09 | 2019-09-17 | Apple Inc. | Context-aware ranking of intelligent response suggestions |
US10755703B2 (en) | 2017-05-11 | 2020-08-25 | Apple Inc. | Offline personal assistant |
US10847142B2 (en) | 2017-05-11 | 2020-11-24 | Apple Inc. | Maintaining privacy of personal information |
US10726832B2 (en) | 2017-05-11 | 2020-07-28 | Apple Inc. | Maintaining privacy of personal information |
US10395654B2 (en) | 2017-05-11 | 2019-08-27 | Apple Inc. | Text normalization based on a data-driven learning network |
US10410637B2 (en) | 2017-05-12 | 2019-09-10 | Apple Inc. | User-specific acoustic models |
US11301477B2 (en) | 2017-05-12 | 2022-04-12 | Apple Inc. | Feedback analysis of a digital assistant |
US10789945B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Low-latency intelligent automated assistant |
US10791176B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US11405466B2 (en) | 2017-05-12 | 2022-08-02 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US10482874B2 (en) | 2017-05-15 | 2019-11-19 | Apple Inc. | Hierarchical belief states for digital assistants |
US10810274B2 (en) | 2017-05-15 | 2020-10-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
US10403278B2 (en) | 2017-05-16 | 2019-09-03 | Apple Inc. | Methods and systems for phonetic matching in digital assistant services |
US10311144B2 (en) | 2017-05-16 | 2019-06-04 | Apple Inc. | Emoji word sense disambiguation |
US10303715B2 (en) | 2017-05-16 | 2019-05-28 | Apple Inc. | Intelligent automated assistant for media exploration |
US11217255B2 (en) | 2017-05-16 | 2022-01-04 | Apple Inc. | Far-field extension for digital assistant services |
US10657328B2 (en) | 2017-06-02 | 2020-05-19 | Apple Inc. | Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling |
US10445429B2 (en) | 2017-09-21 | 2019-10-15 | Apple Inc. | Natural language understanding using vocabularies with compressed serialized tries |
US10755051B2 (en) | 2017-09-29 | 2020-08-25 | Apple Inc. | Rule-based natural language processing |
US10636424B2 (en) | 2017-11-30 | 2020-04-28 | Apple Inc. | Multi-turn canned dialog |
US10733982B2 (en) | 2018-01-08 | 2020-08-04 | Apple Inc. | Multi-directional dialog |
US10733375B2 (en) | 2018-01-31 | 2020-08-04 | Apple Inc. | Knowledge-based framework for improving natural language understanding |
US10789959B2 (en) | 2018-03-02 | 2020-09-29 | Apple Inc. | Training speaker recognition models for digital assistants |
US10592604B2 (en) | 2018-03-12 | 2020-03-17 | Apple Inc. | Inverse text normalization for automatic speech recognition |
US10818288B2 (en) | 2018-03-26 | 2020-10-27 | Apple Inc. | Natural assistant interaction |
US10909331B2 (en) | 2018-03-30 | 2021-02-02 | Apple Inc. | Implicit identification of translation payload with neural machine translation |
US11145294B2 (en) | 2018-05-07 | 2021-10-12 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
US10928918B2 (en) | 2018-05-07 | 2021-02-23 | Apple Inc. | Raise to speak |
US10984780B2 (en) | 2018-05-21 | 2021-04-20 | Apple Inc. | Global semantic word embeddings using bi-directional recurrent neural networks |
US10403283B1 (en) | 2018-06-01 | 2019-09-03 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US11495218B2 (en) | 2018-06-01 | 2022-11-08 | Apple Inc. | Virtual assistant operation in multi-device environments |
US10892996B2 (en) | 2018-06-01 | 2021-01-12 | Apple Inc. | Variable latency device coordination |
US10984798B2 (en) | 2018-06-01 | 2021-04-20 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US11386266B2 (en) | 2018-06-01 | 2022-07-12 | Apple Inc. | Text correction |
US10684703B2 (en) | 2018-06-01 | 2020-06-16 | Apple Inc. | Attention aware virtual assistant dismissal |
US11009970B2 (en) | 2018-06-01 | 2021-05-18 | Apple Inc. | Attention aware virtual assistant dismissal |
US10496705B1 (en) | 2018-06-03 | 2019-12-03 | Apple Inc. | Accelerated task performance |
US10944859B2 (en) | 2018-06-03 | 2021-03-09 | Apple Inc. | Accelerated task performance |
US10504518B1 (en) | 2018-06-03 | 2019-12-10 | Apple Inc. | Accelerated task performance |
CN111491178A (en) * | 2019-01-29 | 2020-08-04 | 国家新闻出版广电总局广播电视规划院 | Method, system and electronic equipment for television program scene interaction |
CN111491178B (en) * | 2019-01-29 | 2022-06-17 | 国家广播电视总局广播电视规划院 | Method, system and electronic equipment for television program scene interaction |
CN109862376A (en) * | 2019-02-28 | 2019-06-07 | 广州华多网络科技有限公司 | Live content delivery system, method, apparatus, list server and storage medium |
EP4207788A4 (en) * | 2020-08-25 | 2024-03-06 | Lg Electronics Inc | Display device and method for providing content using same |
Also Published As
Publication number | Publication date |
---|---|
EP2168378A1 (en) | 2010-03-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2008142472A1 (en) | System and method to consume web content using television set | |
US20170223420A1 (en) | Multimedia systems, methods and applications | |
US7600243B2 (en) | User interface methods and systems for device-independent media transactions | |
CN103430136B (en) | Graphics-tile-based expansion of a program guide | |
US20060236344A1 (en) | Media transaction system | |
US20060242681A1 (en) | Method and system for device-independent media transactions | |
US20110289460A1 (en) | Hierarchical display of content | |
US20090265422A1 (en) | Method and apparatus for providing and receiving user interface | |
US20130179787A1 (en) | Rendering of an Interactive Lean-Backward User Interface on a Television | |
CN106489150A (en) | System and method for identifying and saving a portion of a media asset | |
US7987484B2 (en) | Managing media content with a self-organizing map | |
WO2012083006A1 (en) | Browser integration for a content system | |
EP1987484A2 (en) | Systems and methods for placing advertisements | |
KR20170129398A (en) | Digital device and controlling method thereof | |
KR20110047768A (en) | Apparatus and method for displaying multimedia contents | |
US20120023521A1 (en) | Providing regional content information to a user device by using content information received from a content provider | |
JP2014534513A (en) | Method and user interface for classifying media assets | |
WO2012088307A1 (en) | Method for customizing the display of descriptive information about media assets | |
US9277285B2 (en) | Broadcasting method and system with variable audio/video program menu | |
US20120023520A1 (en) | Delivering regional content information from a content information source to a user device | |
JP2009500877A (en) | Method and system for device independent media transactions | |
US20210243504A1 (en) | Surf mode for streamed content | |
KR20180038273A (en) | Digital device and controlling method thereof | |
KR20060017892A (en) | Apparatus for accessing and processing data for television contents | |
US20120023408A1 (en) | Providing regional content information to a user device by using identifiers for content information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | EP: The EPO has been informed by WIPO that EP was designated in this application |
Ref document number: 07734695 Country of ref document: EP Kind code of ref document: A1 |
DPE2 | Request for preliminary examination filed before expiration of 19th month from priority date (pct application filed from 20040101) | ||
NENP | Non-entry into the national phase |
Ref country code: DE |
WWE | Wipo information: entry into national phase |
Ref document number: 2007734695 Country of ref document: EP |