CN110741652A - Display device with intelligent user interface - Google Patents
- Publication number
- CN110741652A CN110741652A CN201980000619.3A CN201980000619A CN110741652A CN 110741652 A CN110741652 A CN 110741652A CN 201980000619 A CN201980000619 A CN 201980000619A CN 110741652 A CN110741652 A CN 110741652A
- Authority
- CN
- China
- Prior art keywords
- scene
- display device
- video content
- command
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
- H04N21/8549—Creating video summaries, e.g. movie trailer
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/422—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
- H04N21/42203—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] sound input device, e.g. microphone
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/482—End-user interface for program selection
- H04N21/4828—End-user interface for program selection for searching program descriptors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/84—Generation or processing of descriptive data, e.g. content descriptors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/845—Structuring of content, e.g. decomposing content into time segments
- H04N21/8456—Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
Abstract
A display device includes user input circuitry for receiving user commands and a display for outputting video content and a user interface, the video content including metadata. The display device further includes a processor in communication with the user input circuitry and the display, and a non-volatile computer-readable medium in communication with the processor and storing instruction code. The instruction code, when executed by the processor, causes the processor to receive a scene command from the user input circuitry, the scene command to search for scenes in the video content that belong to a scene type; determine one or more scenes in the video content that belong to the scene type based on the metadata; and update the user interface to show one or more scene images related to the one or more scenes that belong to the scene type.
Description
Cross Reference to Related Applications
Priority is claimed to U.S. patent application No. 15/985,206 filed on May 21, 2018, U.S. patent application No. 15/985,292 filed on May 21, 2018, U.S. patent application No. 15/985,251 filed on May 21, 2018, U.S. patent application No. 15/985,273 filed on May 21, 2018, U.S. patent application No. 15/985,303 filed on May 21, 2018, U.S. patent application No. 15/985,338 filed on May 21, 2018, and U.S. patent application No. 15/985,325 filed on May 21, 2018, the entire disclosures of which are incorporated herein by reference.
Technical Field
The present application relates to display devices and, in particular, describes display devices having intelligent user interfaces.
Background
Modern high-end televisions typically include network connectivity that facilitates communication with content servers that stream video to the television, as well as an operating system that facilitates execution of apps for other purposes.
Accessing an ever-increasing number of new features requires changes to the user interface. Unfortunately, accessing these newer features often results in user interfaces that are frustratingly complex and difficult to navigate.
Disclosure of Invention
In a first aspect, a display device includes user input circuitry for receiving user commands and a display for outputting video content and a user interface, the video content including metadata. The display device further includes a processor in communication with the user input circuitry and the display, and a non-volatile computer-readable medium in communication with the processor and storing instruction code. The instruction code, when executed by the processor, causes the processor to receive a scene command from the user input circuitry, the scene command to search for scenes in the video content that are of a scene type; determine one or more scenes in the video content that are of the scene type from the metadata; and update the user interface to show one or more scene images related to the one or more scenes that are of the scene type.
In a second aspect, a method for controlling a display device includes receiving a user command through user input circuitry and outputting video content and a user interface through a display, the video content including metadata. The method includes receiving a scene command from the user input circuitry, the scene command to search for scenes in the video content that are of a scene type; determining one or more scenes in the video content that are of the scene type from the metadata; and updating the user interface to show one or more scene images related to the one or more scenes that are of the scene type.
In a third aspect, there is provided a non-transitory computer-readable medium storing instruction code for controlling a display device, the instruction code executable by a computer for causing the computer to receive a first scene command from user input circuitry, the first scene command to search for scenes in video content that belong to a scene type; determine one or more scenes in the video content that belong to the scene type from metadata of the video content; and update a user interface to show one or more scene images related to the one or more scenes that belong to the scene type.
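The first three aspects all reduce to the same operation: filtering the video's scene metadata by a requested scene type and surfacing the matching scene images. A minimal sketch in Python (the `Scene` schema and its field names are illustrative assumptions, not part of the disclosure):

```python
from dataclasses import dataclass

# Hypothetical metadata record: each scene is tagged with a type, a
# start time, and a reference to a representative image.
@dataclass
class Scene:
    scene_type: str
    start_seconds: float
    image_ref: str

def find_scenes(metadata: list, scene_type: str) -> list:
    """Return the scenes whose type matches the requested scene command."""
    return [s for s in metadata if s.scene_type == scene_type]

metadata = [
    Scene("fight", 120.0, "img_fight_1"),
    Scene("song", 300.0, "img_song_1"),
    Scene("fight", 615.0, "img_fight_2"),
]

# A scene command such as "show me the fight scenes" reduces to:
matches = find_scenes(metadata, "fight")
scene_images = [s.image_ref for s in matches]  # shown on the user interface
```

In practice the lookup keys would come from a natural-language front end rather than a literal string, but the metadata-driven filter is the core of the aspect.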
In a fourth aspect, a display device includes user input circuitry for receiving user commands and a display for displaying video content and a user interface. The device further includes a processor in communication with the user input circuitry, the display, and a search history database, and a non-volatile computer-readable medium in communication with the processor and storing instruction code which, when executed by the processor, causes the processor to receive a first search command from the user input circuitry; determine one or more candidate search commands related to the first search command; update the user interface to show one or more of the candidate search commands; receive a second search command from the user input circuitry, the second search command corresponding to one of the one or more candidate search commands; determine video content associated with the first search command and the second search command; and update the user interface to show one or more controls, each of the controls associated with different video content of the determined video content.
Optionally, the instruction code causes the processor to update the user interface to show a unique identifier on each of the one or more controls, receive a third search command from the user input circuitry specifying one of the unique identifiers, and display video content associated with the specified unique identifier.
Optionally, the first search command and the second search command correspond to voice commands, the instruction code causing the processor to implement a natural language processor and determine the meaning of the voice commands through the natural language processor.
Optionally, the instruction code causes the processor to: updating the search history database to reflect the fact that the second search command was selected, thereby increasing the likelihood that the second search command is predicted during a subsequent search.
Optionally, the instruction code causes the processor to predict the one or more candidate search commands based at least in part on a history of search commands specified by the user stored in the search history database.
Optionally, the instruction code causes the processor to update the user interface to show phrases corresponding to the first search command and the second search command, the phrases updated in real time as the user specifies different search commands.
In a fifth aspect, a method for controlling a display device includes receiving a user command through user input circuitry and displaying video content and a user interface. The method includes receiving a first search command from the user input circuitry; determining one or more candidate search commands related to the first search command; updating the user interface to show one or more of the candidate search commands; receiving a second search command from the user input circuitry, the second search command corresponding to one of the one or more candidate search commands; determining video content associated with the first search command and the second search command; and updating the user interface to show one or more controls, each of the controls associated with different video content of the determined video content.
Optionally, the method further includes updating the user interface to show a unique identifier on each of the one or more controls, receiving a third search command from the user input circuitry specifying one of the unique identifiers, and displaying video content associated with the specified unique identifier.
Optionally, the first search command and the second search command correspond to voice commands, and the method further comprises implementing a natural language processor and determining the meaning of the voice commands through the natural language processor.
Optionally, the method further includes: updating the search history database to reflect the fact that the second search command was selected, thereby increasing the likelihood that the second search command is predicted during a subsequent search.
Optionally, the method further comprises predicting the one or more candidate search commands based at least in part on a history of search commands specified by the user stored in the search history database.
Optionally, the method further includes updating the user interface to show phrases corresponding to the first search command and the second search command, wherein the phrases are updated in real time as the user specifies different search commands.
In a sixth aspect, a non-transitory computer-readable medium is provided that stores instruction code for controlling a display device, the instruction code executable by a computer to cause the computer to receive a first search command from user input circuitry of the computer; determine one or more candidate search commands related to the first search command; update a user interface of the computer to show one or more of the candidate search commands; receive a second search command from the user input circuitry, the second search command corresponding to one of the one or more candidate search commands; determine video content associated with the first search command and the second search command; and update the user interface to show one or more controls, each of the controls associated with different video content of the determined video content.
Optionally, the instruction code causes the computer to update the user interface to show a unique identifier on each of the one or more controls, receive a third search command from the user input circuitry specifying one of the unique identifiers, and display video content associated with the specified unique identifier.
Optionally, the first search command and the second search command correspond to voice commands, and the instruction code causes the computer to implement a natural language processor and determine the meaning of the voice commands through the natural language processor.
Optionally, the instruction code causes the computer to update the search history database to reflect the fact that the second search command was selected, thereby increasing the likelihood that the second search command is predicted during a subsequent search.
Optionally, the instruction code causes the computer to predict the one or more candidate search commands based at least in part on a history of search commands specified by the user stored in the search history database.
Optionally, the instruction code causes the computer to update the user interface to show phrases corresponding to the first search command and the second search command, the phrases updated in real time as the user specifies different search commands.
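The candidate-prediction step in the fourth through sixth aspects can be sketched as a frequency lookup over the search history database. The sketch below stands in for that database with a plain list of past command sequences (the storage format and function names are assumptions for illustration):

```python
from collections import Counter

# Stand-in for the search history database: each entry is one past
# session's ordered sequence of search commands.
history = [
    ["action", "movies"],
    ["action", "movies"],
    ["action", "series"],
    ["comedy", "movies"],
]

def predict_candidates(first_command: str, history, top_n: int = 3):
    """Rank the commands that followed `first_command` in past sessions."""
    followers = Counter(
        seq[i + 1]
        for seq in history
        for i in range(len(seq) - 1)
        if seq[i] == first_command
    )
    return [cmd for cmd, _ in followers.most_common(top_n)]

# After the user says "action", the candidates shown on the interface:
candidates = predict_candidates("action", history)
```

Recording each selected second command back into `history` is what "increases the likelihood that the second search command is predicted during a subsequent search," since the counter then weights it more heavily.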
In a seventh aspect, a display device includes user input circuitry for receiving user commands and a display for outputting video content and a user interface, the video content including metadata. The device further includes a processor in communication with the user input circuitry and the display, and a non-volatile computer-readable medium in communication with the processor and storing instruction code that, when executed by the processor, causes the processor to receive a first query from the user input circuitry for an image of video content currently displayed on the display; determine one or more objects of the image associated with the first query based on the metadata; update the user interface to show one or more controls, each control associated with one of the determined one or more objects; receive a selection of one of the controls; and update the user interface to show information related to the selection.
Optionally, the instruction code causes the processor to determine one or more candidate second queries related to the first query and the determined one or more objects; update the user interface to show one or more of the one or more candidate second queries; receive a second query from the user input circuitry, the second query corresponding to one of the one or more candidate second queries; determine one or more objects of the image associated with the first query and the second query based on the metadata; update the user interface to show one or more controls, each associated with one of the determined one or more objects; receive a selection of one of the controls; and update the user interface to show information related to the selection.
Optionally, the instruction code causes the processor to update the user interface to show a unique identifier on each of the one or more controls, receive a command from the user input circuitry specifying one of the unique identifiers, and display information associated with a selection associated with the specified unique identifier.
Optionally, the first query and the selection correspond to a voice command, the instruction code causing the processor to: implement a natural language processor; and determine, by the natural language processor, a meaning of the voice command.
Optionally, the metadata defines a hierarchy of queries.
Optionally, each of the one or more controls corresponds to an image associated with an object of the determined one or more objects.
Optionally, the instruction code causes the processor to update the user interface to show phrases corresponding to the first query and the second query, wherein the phrases are updated in real time as the user specifies different queries.
Optionally, the video content continues to be streamed while the display shows the one or more controls and information related to the selection.
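The object-determination step of the seventh aspect — mapping a query onto objects in the currently displayed image via the metadata — can be sketched as a tag match. The metadata schema below (per-frame object records with labels and searchable tags) is an assumption for illustration:

```python
# Stand-in for the metadata of the currently displayed frame: each
# object carries a display label and a set of searchable tags.
frame_metadata = {
    "objects": [
        {"label": "jacket", "tags": ["clothing", "jacket", "outerwear"]},
        {"label": "watch", "tags": ["accessory", "watch"]},
        {"label": "car", "tags": ["vehicle", "car"]},
    ]
}

def objects_for_query(metadata: dict, query: str) -> list:
    """Return labels of objects whose tags intersect the query's words."""
    words = set(query.lower().split())
    return [
        obj["label"]
        for obj in metadata["objects"]
        if words & set(obj["tags"])
    ]

# A first query such as "show me the clothing" yields the controls:
controls = objects_for_query(frame_metadata, "show me the clothing")
```

A second query would then be matched against only the objects returned here, which is one simple way the metadata can "define a hierarchy of queries."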
In an eighth aspect, a method for controlling a display device includes receiving a user command through user input circuitry and displaying video content and a user interface, the video content including metadata. The method includes receiving a first query from the user input circuitry for an image of the video content currently displayed; determining one or more objects of the image associated with the first query based on the metadata; updating the user interface to show one or more controls, each control associated with one of the determined one or more objects; receiving a selection of one of the controls; and updating the user interface to show information related to the selection.
Optionally, the method further includes determining one or more candidate second queries related to the first query and the determined objects; updating the user interface to show one or more of the one or more candidate second queries; receiving a second query from the user input circuitry, the second query corresponding to one of the one or more candidate second queries; determining one or more objects of the image associated with the first query and the second query based on the metadata; updating the user interface to show one or more controls, each associated with one of the determined one or more objects; receiving a selection of one of the controls; and updating the user interface to show information related to the selection.
Optionally, the method further includes updating the user interface to show a unique identifier on each of the one or more controls, receiving a command from the user input circuitry specifying one of the unique identifiers, and displaying information associated with a selection associated with the specified unique identifier.
Optionally, the first query and the selection correspond to a voice command, wherein the method further comprises: implementing a natural language processor; and determining, by the natural language processor, a meaning of the voice command.
Optionally, the metadata defines a hierarchy of queries.
Optionally, each of the one or more controls corresponds to an image associated with an object of the determined one or more objects.
Optionally, the method further comprises showing phrases corresponding to the first query and the second query, wherein the phrases are updated in real time as the user specifies different queries.
Optionally, the video content continues to be streamed while the controls and information related to the selection are shown.
In a ninth aspect, a non-transitory computer-readable medium is provided that stores instruction code for controlling a display device, the instruction code executable by a computer to cause the computer to receive a first query from user input circuitry of the computer for an image of video content currently shown on a display of the computer; determine one or more objects of the image associated with the first query based on metadata; update a user interface of the computer to show one or more controls, each control associated with one of the determined one or more objects; receive a selection of one of the controls; and update the user interface to show information related to the selection.
Optionally, the instruction code causes the computer to determine one or more candidate second queries related to the first query and the determined one or more objects; update the user interface to show one or more of the one or more candidate second queries; receive a second query from the user input circuitry, the second query corresponding to one of the one or more candidate second queries; determine one or more objects of the image associated with the first query and the second query based on the metadata; update the user interface to show one or more controls, each associated with one of the determined one or more objects; receive a selection of one of the controls; and update the user interface to show information related to the selection.
Optionally, the instruction code causes the computer to update the user interface to show a unique identifier on each of the one or more controls, receive a command from the user input circuitry specifying one of the unique identifiers, and display information associated with a selection associated with the specified unique identifier.
Optionally, the first query and the selection correspond to a voice command, the instruction code causing the computer to: implement a natural language processor; and determine, by the natural language processor, a meaning of the voice command.
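The unique-identifier option that recurs across these aspects lets a spoken follow-up such as "open three" select a control unambiguously. A sketch of that flow (the number-word mapping and function names are assumptions):

```python
# Spoken number words mapped to 1-based identifiers (an assumption;
# a real system would use the natural language processor for this).
NUMBER_WORDS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}

def label_controls(controls: list) -> dict:
    """Assign a 1-based unique identifier to each on-screen control."""
    return {i: c for i, c in enumerate(controls, start=1)}

def select_by_voice(labeled: dict, command: str):
    """Resolve a spoken identifier (e.g. 'open three') to its control."""
    for word in command.lower().split():
        ident = NUMBER_WORDS.get(word)
        if ident in labeled:
            return labeled[ident]
    return None

labeled = label_controls(["jacket", "watch", "car"])
selection = select_by_voice(labeled, "open three")
```

Numbering the controls sidesteps speech-recognition ambiguity over object names: the user only has to say a digit, not the label itself.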
In a tenth aspect, a display device includes user input circuitry for receiving user commands and a display for outputting video content and a user interface, the video content including metadata. The device further includes a processor in communication with the user input circuitry and the display, and a non-volatile computer-readable medium in communication with the processor and storing instruction code that, when executed by the processor, causes the processor to receive a pause command from the user to pause the video content such that the display shows a still image; subsequently determine one or more objects in the still image based on the metadata; update the user interface to show one or more controls, each control associated with one of the determined one or more objects; receive a selection of one of the controls; and update the user interface to show information related to the selection.
Optionally, each of the one or more controls corresponds to an image associated with an object of the determined one or more objects.
Optionally, the controls include at least one of: a bullet-screen comment control associated with one of the objects, a share control to share the video content, a score control to score the video content, and an information control to display information associated with one of the objects.
Optionally, the information related to the selection shown includes a QR code associated with a URL linked to the information related to the selection.
In an eleventh aspect, a method for controlling a display device includes receiving a user command through user input circuitry and displaying video content and a user interface, the video content including metadata. The method further includes receiving a pause command from a user to pause the video content to show a still image; subsequently determining one or more objects in the still image based on the metadata; updating the user interface to show one or more controls, each control associated with one of the determined one or more objects; receiving a selection of one of the controls; and updating the user interface to show information related to the selection.
Optionally, each of the one or more controls corresponds to an image associated with an object of the determined one or more objects.
Optionally, the controls include at least one of: a bullet-screen comment control associated with one of the objects, a share control to share the video content, a score control to score the video content, and an information control to display information associated with one of the objects.
Optionally, the information related to the selection shown includes a QR code associated with a URL linked to the information related to the selection.
In a twelfth aspect, a non-transitory computer-readable medium is provided that stores instruction code for controlling a display device, the instruction code executable by a computer to cause the computer to receive a pause command from a user to pause video content to cause a display of the computer to show a still image; subsequently determine one or more objects in the still image based on metadata of the video content; update a user interface of the computer to show one or more controls, each of the controls associated with one of the determined one or more objects; receive a selection of one of the controls; and update the user interface to show information related to the selection.
Optionally, each of the one or more controls corresponds to an image associated with an object of the determined one or more objects.
Optionally, the controls include at least one of: a bullet-screen comment control associated with one of the objects, a share control to share the video content, a score control to score the video content, and an information control to display information associated with one of the objects.
Optionally, the information related to the selection shown includes a QR code associated with a URL linked to the information related to the selection.
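The pause-time flow of the tenth through twelfth aspects — find the objects visible in the still image, then offer object controls alongside fixed share/score controls — can be sketched against timestamped object metadata (the schema and control names are assumptions for illustration):

```python
# Stand-in metadata: each object record covers the time window during
# which the object is visible in the video.
object_metadata = [
    {"label": "sneakers", "start": 10.0, "end": 45.0},
    {"label": "laptop", "start": 40.0, "end": 90.0},
]

def controls_for_pause(position: float, object_metadata: list) -> list:
    """Controls to show when paused at `position`: one per visible
    object, plus the fixed share and score controls."""
    visible = [
        obj["label"]
        for obj in object_metadata
        if obj["start"] <= position < obj["end"]
    ]
    return visible + ["share", "score"]

# Pausing at 42 seconds, where both example objects are on screen:
controls = controls_for_pause(42.0, object_metadata)
```

The selected control's information screen could then carry the QR code mentioned above, encoding a URL so the viewer can continue on a phone.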
In a thirteenth aspect, a display device includes presence detection circuitry for detecting individuals in proximity to the display device; a display for displaying video content and a user interface; a processor in communication with the presence detection circuitry and the display; and a non-volatile computer-readable medium in communication with the processor and storing instruction code that, when executed by the processor, causes the processor to determine from the presence detection circuitry whether a user is in proximity to the display device, pause the video content when it is determined that the user is not in proximity to the display device, and resume the video content when it is subsequently determined that the user is in proximity to the display device.
Optionally, the presence detection circuit comprises an imager for capturing an image in front of the display device, the instruction code causing the processor to: periodically causing the imager to capture an image; analyzing the captured image to identify facial data; and comparing the face data to face data associated with the user to determine whether the user is in proximity to the display device.
Optionally, in an initial state, a plurality of users including a primary user are in the vicinity of the display device; when it is subsequently determined that the primary user is not in the vicinity of the display device, the processor pauses the video content and updates the user interface to indicate that the video content has been paused; and when it is subsequently determined that the primary user is in proximity to the display device, the processor resumes the video content and updates the user interface to indicate that the video content has been resumed.
Optionally, the presence detection circuitry comprises near field communication circuitry for performing near field communication with a device in the vicinity of the display device, the instruction code causing the processor to determine whether the user is in the vicinity of the display device by detecting near field communication from a portable device associated with the user.
Optionally, when the video content is paused, the processor updates the user interface on the display device to indicate that the video content has been paused; and when the video content is resumed, the processor updates the user interface on the display device to indicate that the video content has been resumed.
Optionally, when the user interface indicates that the video content is paused, the user interface is updated to show information related to the content of the video.
Optionally, the information related to the content of the video includes information related to the content.
In a fourteenth aspect, a method for controlling a display device includes displaying video content and a user interface, determining from a presence detection circuit whether a user is in proximity to the display device, the presence detection circuit configured to detect individuals present in proximity to the display device, pausing the video content when the user is determined not to be in proximity to the display device, and resuming the video content when the user is subsequently determined to be in proximity to the display device.
Optionally, the presence detection circuit comprises an imager for capturing an image in front of the display device, the method further comprising: periodically causing the imager to capture an image; analyzing the captured image to identify facial data; and comparing the face data to face data associated with the user to determine whether the user is in proximity to the display device.
Optionally, in the initial state, a plurality of users including the primary user are in the vicinity of the display device, and the method further includes: when it is subsequently determined that the primary user is not in proximity to the display device, pausing the video content and updating the user interface to indicate that the video content is paused; and when it is subsequently determined that the primary user is in proximity to the display device, resuming the video content and updating the user interface to indicate that the video content has been resumed.
Optionally, the presence detection circuitry comprises near field communication circuitry for performing near field communication with devices in the vicinity of the display device, the method further comprising determining whether the user is in the vicinity of the display device by detecting near field communication of a portable device associated with the user.
Optionally, the method further includes: updating the user interface on the display device to indicate that the video content is paused while the video content is paused; and updating the user interface on the display device to indicate that the video content is resumed when the video content is resumed.
Optionally, when the user interface indicates that the video content is paused, the method includes updating the user interface to show information related to the content of the video.
Optionally, the information related to the content of the video includes information related to the content.
In a fifteenth aspect, a non-transitory computer-readable medium is provided that stores instruction code for controlling a display device, the instruction code executable by a computer to cause the computer to perform determining, by a presence detection circuit of the computer, whether a user is in proximity to the display device, the presence detection circuit configured to detect individuals in proximity to the display device, pausing video content when the user is determined not to be in proximity to the display device, and resuming the video content when the user is subsequently determined to be in proximity to the display device.
Optionally, the presence detection circuit comprises an imager for taking an image in front of the display device, wherein the instruction code causes the computer to perform: periodically causing the imager to capture an image; analyzing the captured image to identify facial data; and comparing the face data to face data associated with the user to determine whether the user is in proximity to the display device.
Optionally, in an initial state, a plurality of users including a primary user are in proximity to the display device, the instruction code causing the computer to pause the video content and update the user interface to indicate that the video content has been paused when it is subsequently determined that the primary user is not in proximity to the display device; and when it is subsequently determined that the primary user is in proximity to the display device, the instruction code causes the computer to resume the video content and update the user interface to indicate that the video content has resumed.
Optionally, the presence detection circuitry comprises near field communication circuitry for performing near field communication with a device in the vicinity of the display device, the instruction code causing the computer to determine whether the user is in the vicinity of the display device by detecting near field communication of a portable device associated with the user.
Optionally, when the video content is paused, the instruction code causes the computer to update a user interface on the display device to indicate that the video content is paused; and when the video content is resumed, the instruction code causes the computer to update the user interface on the display device to indicate that the video content has resumed.
Optionally, when the user interface indicates that the video content is paused, the instruction code causes the computer to update the user interface to show information related to the content of the video.
In a sixteenth aspect, a display device includes presence detection circuitry for detecting individuals in proximity to the display device, a display for displaying video content and a user interface, a processor in communication with user input circuitry, the display, and a search history database, and a non-volatile computer-readable medium in communication with the processor and storing instruction code that, when executed by the processor, causes the processor to a) determine a user in proximity to the display device from the presence detection circuitry, b) determine one or more program types associated with the user, c) determine available programs that match the determined one or more program types, and d) update the user interface to show a list of one or more of the available programs that match the determined one or more program types.
Optionally, the instruction code causes the processor to: receive a power-on command from the user to cause the display device to enter a viewing state; and perform operations a)-d) of this aspect after receiving the power-on command but before receiving any subsequent command from the user.
Optionally, the instruction code causes the processor to determine a plurality of users in proximity to the display device from the presence detection circuitry, predict one or more program types associated with the plurality of users based on a history of program types previously viewed by the plurality of users stored in the search history database, determine a common program type common to each of the plurality of users from the predicted one or more program types, determine available programs that match the common program type, and update the user interface to show a list of one or more of the available programs that match the common program type.
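A minimal sketch of this multi-user recommendation step, assuming each user's predicted program types are available as sets (the data shapes below are illustrative, not the disclosed database format):

```python
def common_program_types(predicted_types_by_user):
    """Intersect each user's predicted program types to find the
    types common to every detected user."""
    type_sets = [set(t) for t in predicted_types_by_user.values()]
    if not type_sets:
        return set()
    common = type_sets[0]
    for s in type_sets[1:]:
        common &= s
    return common

def matching_programs(available_programs, predicted_types_by_user):
    """Filter (title, program_type) pairs down to those whose type is
    common to all users, for display in the user interface list."""
    common = common_program_types(predicted_types_by_user)
    return [title for title, ptype in available_programs if ptype in common]
```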
Optionally, the presence detection circuit comprises an imager for capturing an image in front of the display device, the instruction code causing the processor to: periodically causing the imager to capture an image; analyzing the captured image to identify facial data; and comparing the face data to face data associated with the user to determine whether the user is in proximity to the display device.
Optionally, the presence detection circuitry comprises near field communication circuitry for performing near field communication with a device in the vicinity of the display device, wherein the instruction code causes the processor to determine whether the user is in the vicinity of the display device by detecting near field communication of a portable device associated with the user.
Optionally, the display device further comprises the user input circuitry for receiving a user command, wherein the instruction code causes the processor to receive a command to select the available program and cause video content associated with the selected available program to be displayed on the display.
Optionally, the command corresponds to a voice command, the instruction code causing the processor to: implementing a natural language processor; and determining, by the natural language processor, a meaning of the voice command.
Optionally, the determination of one or more program types associated with the user is based on a history of program types previously viewed by the user stored in the search history database in communication with the display device.
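One simple way to derive program types from the stored viewing history is frequency ranking. This is a sketch under that assumption, not the disclosed machine learning technique:

```python
from collections import Counter

def predict_program_types(viewed_types, top_n=3):
    """Rank the program types a user has previously viewed by
    frequency and return the most common ones as predictions."""
    return [ptype for ptype, _ in Counter(viewed_types).most_common(top_n)]
```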
Optionally, the instruction code causes the processor to: receiving a power-down command from a user to cause the display device to enter a low-power state and deactivate the display; performing operations a) -d) of the above aspect after receiving the power down command but before receiving any subsequent commands by the user; and deactivating the display after a predetermined time without detecting a user indication to power on the display device.
Optionally, after deactivating the display and before the predetermined time, the instruction code causes the processor to predict one or more information types associated with the user and update the user interface to show information belonging to the predicted one or more information types.
In a seventeenth aspect, a method for controlling a display device includes a) providing presence detection circuitry for detecting individuals in proximity to the display device, b) displaying video content and a user interface, c) determining a user in proximity to the display device from the presence detection circuitry, d) determining one or more program types associated with the user, e) determining available programs that match the determined one or more program types, and f) updating the user interface to show a list of one or more of the available programs that match the determined one or more program types.
Optionally, the method further includes: receiving a power-on command from the user to cause the display device to enter a viewing state; and performing operations c) -f) after receiving the power-on command but before receiving any subsequent commands from the user.
Optionally, the method further includes determining a plurality of users in proximity to the display device from the presence detection circuitry, predicting one or more program types associated with the plurality of users based on a history of program types previously viewed by the plurality of users stored in a search history database, determining a common program type common to the respective ones of the plurality of users from the predicted one or more program types, determining available programs that match the common program type, and updating the user interface to show a list of one or more available programs that match the common program type.
Optionally, the presence detection circuit includes: an imager for capturing images in front of the display device, the method further comprising: periodically causing the imager to capture an image; analyzing the captured image to identify facial data; and comparing the face data to face data associated with the user to determine whether the user is in proximity to the display device.
Optionally, the presence detection circuitry comprises near field communication circuitry for performing near field communication with devices in the vicinity of the display device, the method further comprising determining whether the user is in the vicinity of the display device by detecting near field communication of a portable device associated with the user.
Optionally, the method further comprises receiving, via user input circuitry, a command to select one of the available programs, and causing video content associated with the selected available program to be displayed on the display device.
Optionally, the command corresponds to a voice command, and the method further includes: implementing a natural language processor; and determining, by the natural language processor, a meaning of the voice command.
Optionally, the determination of the one or more program types associated with the user is based on a history of program types previously viewed by the user stored in the search history database in communication with the display device.
Optionally, the method further includes: receiving a power-down command of the user, thereby causing the display device to enter a low-power state and deactivate a display; performing operations c) -f) after receiving the power-down command but before receiving any subsequent commands from the user; and deactivating the display after a predetermined time without detecting a user indication to power on the display device.
Optionally, after deactivating the display and before the predetermined time, the method further comprises predicting one or more information types associated with the user and updating the user interface to show information belonging to the predicted one or more information types.
In an eighteenth aspect, a display device includes a display for displaying video content and a user interface, a processor in communication with a presence detection circuit and the display, and a non-volatile computer-readable medium in communication with the processor and storing instruction code that, when executed by the processor, causes the processor to receive data associating a smart appliance state with a display device usage, determine a current display device usage, determine a suggested smart appliance state corresponding to the current display device usage based on the received data, and adjust a smart appliance to the determined state.
Optionally, the smart appliance state defines an activation state of the smart appliance, and the display device usage defines one or more of a time of use of the display device, a type of program viewed on the display device, and a particular user of the display device.
Optionally, the display device comprises the presence detection circuitry to detect a particular user in proximity to the display device, the presence detection circuitry comprising an imager to capture an image in front of the display device, the instruction code causing the processor to: periodically causing the imager to capture an image; analyzing the captured image to identify facial data; and comparing the face data to face data associated with a plurality of users to determine whether the particular user is in proximity to the display device.
Optionally, the display device includes a communication circuit for receiving new state information from each smart appliance, and a database for storing the new state information of each smart appliance and information defining new display device usage of the display device, the instruction code causing the processor to: continuously update the database with the new state information of each smart appliance and the new display device usage information of the display device; and associate the new state information of each smart appliance with the new display device usage information associated with the display device to form a relationship between smart appliance states and display device usage.
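The state-usage association could be maintained as a frequency table. In the sketch below, the usage keys (time of use, program type) and appliance-state tuples are illustrative stand-ins for the database contents, and a frequency count stands in for the learned relationship:

```python
from collections import Counter, defaultdict

class ApplianceHabits:
    """Associate observed smart-appliance states with display-device
    usage and suggest a state for the current usage."""

    def __init__(self):
        self._observations = defaultdict(Counter)

    def record(self, usage, appliance_state):
        """Store one (usage, state) observation, e.g.
        usage=('evening', 'movie'), appliance_state=('lamp', 'dim')."""
        self._observations[usage][appliance_state] += 1

    def suggest(self, usage):
        """Return the most frequently observed state for this usage,
        or None if the usage has not been seen."""
        counts = self._observations.get(usage)
        if not counts:
            return None
        return counts.most_common(1)[0][0]
```

The display device would call `record` continuously as new state information arrives and `suggest` when adjusting an appliance to match the current usage.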
In a nineteenth aspect, a method for controlling a display device includes displaying video content and a user interface, receiving data associating a smart appliance state with display device usage, determining current display device usage, determining a suggested smart appliance state corresponding to the current display device usage based on the received data, and adjusting the smart appliance to the determined state.
Optionally, the smart appliance state defines an activation state of the smart appliance, and the display device usage defines one or more of a time of use of the display device, a type of program viewed on the display device, and a particular user of the display device.
Optionally, the display device includes: presence detection circuitry for detecting a particular user in proximity to the display device, the presence detection circuitry comprising an imager for capturing images in front of the display device, the method further comprising: periodically causing the imager to capture an image; analyzing the captured image to identify facial data; and comparing the face data to face data associated with a plurality of users to determine whether the particular user is in proximity to the display device.
Optionally, the display device includes a communication circuit for receiving new state information from each smart appliance, and a database for storing the new state information of each smart appliance and information defining new display device usage of the display device, the method further comprising: continuously updating the database with the new state information of each smart appliance and the new display device usage information of the display device; and associating the new state information of each smart appliance with the new display device usage information associated with the display device to form a relationship between smart appliance states and display device usage.
In a twentieth aspect, a non-transitory computer-readable medium is provided having stored thereon instruction code for controlling a display device, the instruction code executable by a computer to cause the computer to receive data associating a smart appliance state with a display device usage, determine a current display device usage, determine a suggested smart appliance state corresponding to the current display device usage based on the received data, and adjust a smart appliance to the determined state.
Optionally, the smart appliance state defines an activation state of the smart appliance, and the display device usage defines one or more of a time of use of the display device, a type of program viewed on the display device, and a particular user of the display device.
Optionally, the display device comprises a presence detection circuit for detecting a particular user in the vicinity of the display device, the presence detection circuit comprising an imager for capturing an image in front of the display device, the instruction code causing the computer to: periodically causing the imager to capture an image; analyzing the captured image to identify facial data; and comparing the face data to face data associated with a plurality of users to determine whether the particular user is in proximity to the display device.
Optionally, the display device includes a communication circuit for receiving new state information from each smart appliance, and a database for storing the new state information of each smart appliance and information defining new display device usage of the display device, the instruction code causing the computer to: continuously update the database with the new state information of each smart appliance and the new display device usage information of the display device; and associate the new state information of each smart appliance with the new display device usage information associated with the display device to form a relationship between smart appliance states and display device usage.
Drawings
FIG. 1 illustrates an exemplary environment in which a display device operates;
FIG. 2 illustrates exemplary operations for enhancing navigation of video content;
FIGS. 3A-3C illustrate exemplary user interfaces that may be presented to a user during the operations of FIG. 2;
FIG. 4 illustrates exemplary operations that facilitate locating particular types of video content;
FIG. 5 illustrates an exemplary user interface that may be presented to a user during the operations of FIG. 4;
FIG. 6 illustrates exemplary operations for determining information related to images in video content;
FIGS. 7A and 7B illustrate exemplary user interfaces that may be presented to a user during the operations of FIG. 6;
FIG. 8 illustrates alternative exemplary operations for determining information related to images in video content;
FIGS. 9A and 9B illustrate exemplary user interfaces that may be presented to a user during the operations of FIG. 8;
FIG. 10 illustrates exemplary operations for automatically pausing video content;
FIGS. 11A and 11B illustrate exemplary user interfaces that may be presented to a user during the operations of FIG. 10;
FIG. 12 illustrates alternative exemplary operations for automatically pausing video content;
FIGS. 13A-13D illustrate exemplary user interfaces that may be presented to a user during the operations of FIG. 12;
FIG. 14 illustrates exemplary operations for adjusting various smart appliances based on detected user usage habits;
FIGS. 15A and 15B illustrate exemplary user interfaces that may be presented to a user during the operations of FIG. 14; and
FIG. 16 illustrates an exemplary computer system that may form part of or implement the systems described in the figures or paragraphs below.
Detailed Description
The embodiments described below relate to various user interface embodiments that facilitate access to television features in an intelligent, easy-to-use manner. Typically, the user interfaces rely on various machine learning techniques that facilitate access to these features and other information in a minimal number of steps. The user interfaces are configured to be intuitive and to require only minimal learning time to navigate proficiently.
FIG. 1 illustrates an exemplary environment in which a display device operates, showing a display device 100, a group of mobile devices 105, a GPS network 110, a computer network 115, a group of social media servers 120, a group of content servers 125, a support server 127, and one or more users 130 who may view the display device 100 and/or interact with the display device 100. The display device 100, social media servers 120, content servers 125, and support server 127 may communicate with one another via a network 107, such as the Internet, a cable network, a satellite network, and the like.
The social media server 120 generally corresponds to a computer system that hosts publicly available information that may be relevant to the user 130 of the display device 100. The social media server 120 may include a blog, a forum, and/or any other system or website from which information related to the user 130 may be obtained.
The GPS network 110 and the computer network 115 may transmit information to the display device 100 that, in turn, may facilitate the display device 100 in determining an approximate location (general location) of the display device 100. For example, the GPS network 110 may communicate information that facilitates determining a relatively precise location of the display device 100. The computer network 115 may assign an IP address to the display device 100, which may be associated with an approximate location such as a city or other geographic area.
The content server 125 generally corresponds to a computer system that hosts video content. For example, the content server 125 may correspond to a head-end device operated by a cable provider, a network provider, or the like. In some cases, the content server 125 may store video content, such as movies, television programs, sports programs, and the like.
For example, metadata associated with a sporting event may include timestamps, still images, and the like associated with various events of the game, such as goals.
Metadata in video content may include information that facilitates determining whether the video content is of a particular genre (e.g., comedy, drama, sports, adventure, etc.). The metadata may include information associated with different individuals shown in the video content, such as names of actors shown in the video content. The metadata may include information associated with different objects shown in the video content, such as clothing worn by the individual, personal items carried by the individual, and various objects shown in the video content.
The metadata may have been automatically generated in advance through various machine learning techniques for identifying individuals, scenes, events, etc. in the video content. Additionally or alternatively, the machine learning techniques may use some form of human assistance in making the determination.
The support server 127 generally corresponds to a computer system configured to provide advanced services to the display device 100. For example, the support server 127 may correspond to a high-end computer configured to perform various machine learning techniques to determine the meaning of voice commands, predict responses to voice commands, and so forth. Support server 127 may receive voice commands and other types of commands from display device 100 and transmit responses associated with the commands back to the display device.
The CPU 150 may correspond to a processor suitable for use within the display device. The CPU 150 may execute an operating system or other software suitable for execution within the display device. Instruction code associated with the operating system and used to control various aspects of the display device 100 may be stored in the instruction memory 170. For example, instruction code stored in the instruction memory 170 may facilitate controlling the CPU 150 to transmit information to the I/O interface 155 and to receive information from the I/O interface 155. The CPU 150 may process video content received from the I/O interface 155 and transmit the processed video content to the display 175. The CPU 150 may generate various user interfaces that facilitate controlling different aspects of the display device.
The I/O interface 155 is configured to connect with various types of hardware and to communicate information received from the hardware to the CPU 150. For example, the I/O interface 155 may be coupled to one or more antennas that facilitate receiving information from the mobile devices 105, the GPS network 110, the computer network 115, the smart appliance 117, and the like. The I/O interface may be connected to an imager 151 disposed on a face of the display device 100 to facilitate capturing images of individuals in the vicinity of the display device. The I/O interface may be connected to one or more microphones 152 disposed on the display device 100 to facilitate capturing voice instructions that may be communicated by the user 130.
Exemplary operations performed by the CPU150 and/or other modules of the display device 100 in providing the intelligent user interface are described below. In this regard, the operations may be implemented via instruction code stored in a non-volatile computer readable medium 170 located within the subsystems, the instruction code configured to cause the respective subsystems to perform the operations illustrated in the figures and discussed herein.
Fig. 2 illustrates exemplary operations for enhancing navigation of video content. The operation of fig. 2 may be better understood with reference to fig. 3A-3C.
For example, the user 130 may simply say aloud, "Show me all the goals," in which case a natural language processor implemented by the CPU 150, alone or in cooperation with the AI processor 165, may determine the meaning of the voice command.
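As a toy illustration of extracting a scene type from such a spoken command (the real system would use the natural language processor described above; the hard-coded pattern below is purely an assumption):

```python
def parse_scene_command(text):
    """Map a spoken phrase such as 'Show me all goals' to a scene
    type. A fixed pattern stands in for real natural language
    processing."""
    words = text.lower().split()
    if words[:3] == ["show", "me", "all"] and len(words) > 3:
        scene_type = " ".join(words[3:]).rstrip("s")  # crude singularizer
        return {"intent": "find_scenes", "scene_type": scene_type}
    return {"intent": "unknown"}
```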
As shown in FIG. 3A, in some embodiments, the user interface 300 may include a phrase control 310 that is updated in real time to show text associated with a command issued by the user.
At step 205, in response to the first scene command 305, the display device 100 may determine scenes in the video content that are of a scene type associated with the first scene command 305. At this point, the CPU 150, alone or in cooperation with the AI processor 165, may implement various machine learning techniques that utilize metadata associated with the video content to determine scenes of that scene type.
At step 210, the user interface 300 of the display device 100 may be updated to show a scene image 320 associated with the determined scene. For example, an image 320 from video content metadata associated with a scene may be displayed on the user interface 300. The image 320 may correspond to a still image and/or a sequence of images or video associated with a scene.
In some embodiments, the user interface 300 can be updated to display unique identifiers 325 on or near each image. In some embodiments, the identifiers are presented in a prominent fashion, superimposed on a portion of each image.
At step 215, the user 130 may specify a second scene command specifying one of the unique identifiers 325. For example, the user 130 may say "1" to select the scene associated with the first image 320, the unique identifier corresponding to the associated scene. In some embodiments, the unique identifiers employ symbols, such as the Arabic numerals shown in FIG. 3A, that are easy for the user to speak and easy for the display device itself or the server to recognize.
At step 220, video content associated with the specified unique identifier 325 (e.g., "1") may be displayed on the user interface 300, as shown in FIG. 3C.
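Steps 205-220 can be sketched over a hypothetical metadata schema (a list of dicts with a "type" field), with Arabic-numeral identifiers assigned in order:

```python
def find_scenes(scene_metadata, scene_type):
    """Filter scene metadata to the requested type and label each
    match with a spoken-friendly numeric identifier (1, 2, ...)."""
    matches = [s for s in scene_metadata if s.get("type") == scene_type]
    return {i + 1: s for i, s in enumerate(matches)}

def select_scene(labeled_scenes, spoken_id):
    """Resolve a second scene command such as '1' to its scene,
    or None if the identifier is not shown."""
    try:
        return labeled_scenes.get(int(spoken_id))
    except ValueError:
        return None
```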
Returning to step 200, in some embodiments, the user 130 may refine the scene command by specifying additional information. For example, in response to receiving the first scene command 305 at step 200, one or more candidate scene commands 315 related to the first scene command 305 may be determined at step 225. Machine learning techniques implemented by the CPU 150, AI processor 165, and/or support server 127 may be used to determine candidate scene commands related to the first scene command 305.
At step 230, the user interface 300 may be updated to show one or more candidate scene commands 315, as shown in FIG. 3A. For example, in response to the first scene command 305, "show me all the goals," candidate scene commands such as "first half," "Real Madrid," and the like may be determined and shown.
At step 235, the user 130 may issue one of the candidate scene commands 315 to instruct the display device 100 to search for scenes in the video content, as shown in FIG. 3B. For example, the user 130 may simply say "first half" aloud.
Operations may then repeat from step 205. For example, in response to the third scene command 330, the display device 100 may determine scenes in the video content that are of a scene type associated with both the first scene command 305 and the third scene command 330. Additionally or alternatively, the first scene command 305 and the third scene command 330 may be transmitted to the support server 127, and the support server 127 may transmit information defining the relevant scenes to the display device.
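Combining the first and third scene commands amounts to conjunctive filtering over scene metadata. In this sketch, the field names are assumptions:

```python
def refine_scenes(scene_metadata, constraints):
    """Return scenes whose metadata satisfies every constraint,
    e.g. {'type': 'goal', 'half': 'first'} for the combined
    'show me all the goals' + 'first half' commands."""
    return [s for s in scene_metadata
            if all(s.get(k) == v for k, v in constraints.items())]
```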
For example, after issuing the third scene command 330, another set of candidate scene commands 315 may be shown, and so on.
Fig. 4 illustrates exemplary operations that facilitate locating particular types of video content. The operation of fig. 4 may be better understood with reference to fig. 5.
At step 400, the display device 100 may be displaying video content, such as a sitcom, as shown in FIG. 5. The user 130 may issue a first search command 505 to the display device 100 to cause the display device 100 to search for a particular type of video content.
At step 405, the display device 100 may determine video content relevant to the search command 505. At this point, the CPU 150, alone or in cooperation with the AI processor 165, may implement various machine learning techniques that utilize metadata associated with the video content to determine the video content relevant to the search command. Additionally or alternatively, the search command 505 may be transmitted to the support server 127, and the support server 127 may determine the video content relevant to the search command and transmit related information back to the display device 100.
At step 410, the user interface 500 may be updated to show controls 520 that facilitate selection of video content. Each control may include a unique identifier 525 on or near the control 520 that facilitates selection of the control by voice. For example, the first control, having the unique identifier "1", may correspond to an image representing an input source of the display device 100 and facilitate selection of video content from that input source. The second control, having the unique identifier "2", may correspond to an image of an actor that, when selected, facilitates selection of video content that includes the actor. The fourth control, having the unique identifier "4", may correspond to scenes from movies frequently viewed by the user 130 or scenes belonging to program types viewed by the user 130.
The machine learning technique may determine the type of control to be displayed based at least in part on a history of search commands and selections specified by the user, which may be stored in a support database 153 of the display device 100 or maintained within the support server 127. In some embodiments, the support database 153 is dynamically updated to reflect the user's selections to improve the relevance of the controls displayed to the user for subsequent requests.
At step 415, the user 130 may specify a second search command specifying one of the unique identifiers. For example, the user 130 may say "4" to select the scene associated with the fourth control 520.
At step 420, video content associated with the specified unique identifier (e.g., "4") may be shown on the user interface 500 of the display device 100.
Returning to step 400, in some embodiments, the user 130 may refine the search command by specifying additional information. For example, in response to receiving the first search command 505 at step 400, one or more candidate third search commands 515 related to the first search command 505 may be determined at step 425. Machine learning techniques implemented by the CPU 150, AI processor 165, and/or support server 127 may be used to determine candidate commands related to the first search command 505.
At step 430, the user interface 500 may be updated to show one or more of the candidate search commands 515, as shown in FIG. 5. For example, in response to the first search command "show," candidate search commands 515 such as "games," "action movies," and the like may be determined and displayed.
As previously described, in some embodiments the user interface 500 may include a phrase control 510 that is updated in real-time to show text associated with a command issued by the user.
At step 435, the user 130 may issue a third search command corresponding to one of the candidate search commands 515 to instruct the display device 100 to search for various types of video content. For example, the user 130 may simply speak "action movies" aloud.
In response to the third search command, the display device 100 may determine video content related to both the first search command and the third search command and display the appropriate controls for selection by the user.
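One way to realize the refinement loop of steps 425-435 is to suggest the tags that co-occur most often with the first command's tag, then require matches on both tags. This is a sketch under assumed metadata, not the claimed method:

```python
from collections import Counter

def candidate_refinements(library, first_tag, limit=3):
    """Step 425: suggest follow-up commands from tags co-occurring with the first tag."""
    co_occurring = Counter()
    for item in library:
        if first_tag in item["tags"]:
            co_occurring.update(t for t in item["tags"] if t != first_tag)
    return [tag for tag, _ in co_occurring.most_common(limit)]

def refine_search(library, first_tag, third_tag):
    """Step 435: combine the first and third search commands; content must match both."""
    return [item for item in library
            if first_tag in item["tags"] and third_tag in item["tags"]]

# Hypothetical content library with metadata tags.
library = [
    {"title": "A", "tags": ["show", "game"]},
    {"title": "B", "tags": ["show", "action movie"]},
    {"title": "C", "tags": ["show", "action movie"]},
    {"title": "D", "tags": ["documentary"]},
]

print(candidate_refinements(library, "show"))  # ['action movie', 'game']
```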
FIG. 6 illustrates exemplary operations for determining information related to images in video content. The operations of FIG. 6 may be better understood with reference to FIGS. 7A and 7B.
At step 600, the display device 100 may be showing video content, such as a movie, as shown in FIG. 7A. The user 130 may issue a query 705 to the display device 100 to cause the display device 100 to provide information related to the query.
At step 605, in response to the query 705, the display device 100 may determine one or more objects in the image associated with the query 705. In this regard, the CPU 150, alone or in cooperation with the AI processor 165, may implement various machine learning techniques that utilize metadata associated with the video content to determine the different objects being shown on the user interface 700 of the display device 100. Additionally or alternatively, the query 705 may be transmitted to the support server 127, and the support server 127 may determine information related to the different objects shown on the user interface 700 and transmit the information to the display device 100.
At step 610, the user interface 700 of the display device 100 may be updated to show controls 720 that facilitate selection of the different objects. Each control may include a unique identifier 725, on or near the control 720, that facilitates selection of the control by voice. For example, a control for each actor may be shown on the user interface 700.
At step 615, the user 130 may select one of the unique identifiers 725. For example, the user 130 may specify "2" to select a particular actor.
At step 620, the user interface 700 may be updated to show information related to the selection. For example, as shown in FIG. 7B, an information control 730 may be provided having information related to the selected actor.
Returning to step 600, in some embodiments the user 130 may refine the query by specifying additional information. For example, in response to receiving the query at step 600, one or more possible second queries 715 related to the first query 705 may be determined at step 625.
At step 630, as shown in FIG. 7A, the user interface 700 may be updated to show one or more candidate queries 715. For example, in response to the first query "who is on screen," candidate queries such as "other movies with John Doe," "where was this filmed," and the like may be determined and shown.
As previously described, in some embodiments the user interface 700 may include a phrase control 710 that is updated in real-time to show text associated with a query issued by the user.
At step 635, the user 130 may issue a second query corresponding to one of the candidate queries 715 to instruct the display device 100 to show information relevant to the query. The phrase control 710 may be updated in real-time to show text associated with the first query 705 and the second query.
At step 640, objects related to the second query may be determined and either added to or substituted for the previously determined objects. The operations may then repeat from step 605.
FIG. 8 illustrates alternative exemplary operations for determining information related to images in video content. The operations of FIG. 8 may be better understood with reference to FIGS. 9A and 9B.
At step 800, the display device 100 may be showing video content, such as a situation comedy, as shown in FIG. 9A. The user 130 may issue a command to the display device 100 to pause the video content so that a still image is shown on the user interface 900.
At step 805, the display device 100 may determine one or more objects in the still image. In this regard, the CPU 150, alone or in cooperation with the AI processor 165, may implement various machine learning techniques that utilize metadata associated with the video content to determine the different objects being shown in the still image. Additionally or alternatively, the still image may be communicated to the support server 127, and the support server 127 may determine the different objects shown in the still image and communicate them to the display device 100.
At step 810, the user interface of the display device 100 can be updated to show controls 920 that facilitate selection of the different objects, as shown in FIG. 9A. For example, controls 920 can be provided for taking notes related to one of the objects in the still image, for sharing the video content, for rating the video content, and for displaying information related to one of the objects.
Each control 920 can include a unique identifier on control 920 or near control 920 that facilitates selection of the control by voice.
At step 815, the user 130 may select one of the unique identifiers. For example, the user 130 may specify the unique identifier associated with a control showing a handbag that corresponds to the handbag shown in the still image.
At step 820, the user interface 900 may be updated to show information related to the selection. For example, as shown in FIG. 9B, an information control 925 having information related to the selection may be provided. In some embodiments, the information control 925 may show a QR code associated with a URL that may be used to find more information related to the selection.
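The paused-frame flow of steps 800-820 can be sketched as building a numbered control per detected object, each carrying the note/share/rate/info actions and an optional info URL (which a real implementation might render as a QR code). Object names, URLs, and function names below are hypothetical:

```python
def controls_for_still(frame_objects):
    """Step 810: build numbered controls for the objects detected in a paused frame."""
    return {
        str(i): {
            "object": obj["name"],
            "actions": ["note", "share", "rate", "info"],
            "info_url": obj.get("url"),  # could be shown to the user as a QR code
        }
        for i, obj in enumerate(frame_objects, start=1)
    }

def select_control(controls, identifier):
    """Steps 815-820: resolve a spoken identifier to its control."""
    return controls.get(identifier)

# Hypothetical objects recognized in the still image.
frame = [
    {"name": "handbag", "url": "https://example.com/handbag"},
    {"name": "actor"},
]

controls = controls_for_still(frame)
print(select_control(controls, "1")["info_url"])  # https://example.com/handbag
```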
FIG. 10 illustrates alternative exemplary operations for automatically pausing video content. The operations of FIG. 10 may be better understood with reference to FIGS. 11A and 11B.
At step 1000, the display device 100 may determine whether the user is near the display device 100. For example, in an embodiment, the imager 151 of the display device 100 may capture an image of the area in front of the display device. The CPU 150, alone or in cooperation with the AI processor 165, may control the imager 151 to capture the image, analyze the captured image to identify facial data in the image, and compare the facial data to facial data associated with the user 130 to determine whether the user 130 is near the display device.
In another embodiment, the near field communication circuitry of the display device 100 may be used to detect the presence of a near field communication capable device carried by the user 130 in the vicinity of the display device.
At step 1005, if it is determined that the user is not near the display device 100, then at step 1010 the video content may be paused if it has not already been paused, as shown in FIG. 11A. Referring to FIG. 11A, a status control 1105 may be shown on the user interface 1100 to indicate that the video content has been paused.
In some embodiments, the user interface 1100 may show additional details related to the still image displayed on the user interface 1100, such as the information described above with respect to FIGS. 9A and 9B.
If, at step 1005, it is determined that the user 130 is near the display, then at step 1015 the video content may be resumed if it has not already been resumed, as shown in FIG. 11B. Referring to FIG. 11B, the status control 1105 may be updated to indicate that the video content is resuming.
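The pause/resume logic of steps 1000-1015 amounts to a two-state machine driven by a presence flag; how presence is sensed (facial match or near field detection) is abstracted away. A minimal sketch with invented names:

```python
class PlaybackController:
    """Pause playback when the watched-for user leaves; resume on return."""

    def __init__(self):
        self.paused = False

    def on_presence_sample(self, present):
        """One detection cycle (steps 1000-1015): return the action taken, if any."""
        if not present and not self.paused:
            self.paused = True
            return "pause"          # step 1010: user left, pause the content
        if present and self.paused:
            self.paused = False
            return "resume"         # step 1015: user returned, resume playback
        return None                 # no state change needed

controller = PlaybackController()
print(controller.on_presence_sample(False))  # pause
print(controller.on_presence_sample(True))   # resume
```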
Further, in some embodiments, the display device 100 may perform the above-described operations even when other users 130 are near the display device 100. For example, in an initial state, multiple users 130, including the primary user 130, may be near the display device.
FIG. 12 illustrates alternative exemplary operations for automatically pausing video content. The operations of FIG. 12 may be better understood with reference to FIGS. 13A-13D.
The CPU 150, alone or in cooperation with the AI processor 165, may control the imager 151 to capture an image, analyze the captured image to identify facial data in the image, and compare the facial data to facial data associated with the user to determine whether the user is near the display device 100. As described above, the display device 100 may have previously captured facial data associated with the user 130, for example, during an initial setup routine.
In another embodiment, the presence of user 130 may be determined based on near field communication circuitry of a device carried by user 130, as described above.
At step 1205, if the user is determined to be in proximity to the display device 100, one or more program types associated with the user 130 are determined.
At step 1210, the programs available for viewing at the time the user is detected or within a predetermined time later (e.g., 30 minutes) may be determined. For example, metadata associated with available video content may be analyzed to determine whether any of the video content belongs to the user-associated program genre as determined above.
At step 1215, the user interface 1300 may be updated to present information 1305 about available programs that match the user's associated program types. The user interface 1300 may include controls to facilitate viewing the available programs, recording the available programs, and the like.
In some embodiments, a group of users 130 may be detected in the vicinity of the display device 100, and the program types determined at step 1205 may be based on the intersection of the program types associated with two or more of the users 130. The user interface 1300 may be updated to show information 1305 about available programs that match this intersection of the users' associated program types.
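Steps 1205-1215 with multiple viewers reduce to a set intersection of the users' program types followed by a time-window filter over the program guide. The guide layout and names are assumptions, and this simplified sketch considers only upcoming start times:

```python
from datetime import datetime, timedelta

def shared_program_types(user_profiles):
    """Step 1205: intersect the program types of every detected user."""
    type_sets = [set(p["program_types"]) for p in user_profiles]
    return set.intersection(*type_sets) if type_sets else set()

def matching_programs(guide, program_types, now, window_minutes=30):
    """Step 1210: programs starting within the window whose genre matches."""
    horizon = now + timedelta(minutes=window_minutes)
    return [p for p in guide
            if p["genre"] in program_types and now <= p["start"] <= horizon]

# Hypothetical user profiles and guide data.
users = [{"program_types": ["news", "sports"]},
         {"program_types": ["sports", "drama"]}]
now = datetime(2019, 5, 8, 20, 0)
guide = [
    {"title": "Match of the Day", "genre": "sports",
     "start": datetime(2019, 5, 8, 20, 15)},
    {"title": "Evening Soap", "genre": "drama",
     "start": datetime(2019, 5, 8, 20, 10)},
    {"title": "Late Game", "genre": "sports",
     "start": datetime(2019, 5, 8, 21, 0)},
]

print([p["title"] for p in matching_programs(guide, shared_program_types(users), now)])
```

A fuller version would also treat programs already airing at detection time as available.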
For example, a first user 130 may be viewing video content on the display device 100 when a second user comes into proximity of the display device.
In other embodiments, the above operation may be performed immediately after the display apparatus 100 is turned on (powering on).
For example, as shown in FIG. 13B, the display device 100 may be powered on after having been turned off, or a turn-off operation may be cancelled, and the user interface 1300 may be updated to show a minimal amount of information so as not to be too distracting.
In other implementations, one or more information types associated with the user 130 may be determined, and the user interface 1300 may be updated to show information pertaining to the determined information types. For example, as shown in FIG. 13C, it may have been determined that the user 130 is interested in knowing the weather, in which case the display device 100 may be powered on in a minimum power state and may show an information control 1305 that displays information related to the weather. Alternatively, the information control 1305 may be updated to display information related to an upcoming television episode, as shown in FIG. 13D. After a predetermined time (e.g., 1 minute), the display device 100 may be powered off.
FIG. 14 illustrates exemplary operations for adjusting various smart appliances based on detected usage habits (routines) of the user 130. The operations of FIG. 14 may be better understood with reference to FIGS. 15A-15B.
At step 1400, the display device 100 may receive data related to the status of various smart appliances 117 and display device 100 usage. For example, data relating to light switches, timers, drape controls, and other intelligent appliances 117 that are previously associated with usage of the display device 100 may be received. In this regard, the communication circuitry of the display device 100 may continuously receive status information from the smart appliance 117. The support database 153 may store status information of the smart appliances 117 and usage information of the display device 100. The CPU150 may associate the state information of the smart appliance 117 with the usage information of the display device 100 to form a relationship between the smart appliance state and the display device usage. The relationship may indicate a usage habit followed by the user 130 when viewing video content on the display device 100.
The state information may define an activation state of the smart appliance 117. For example, whether the smart lamp is turned on, off or dimmed to a certain setting, e.g. 50%. Other information may include whether the smart drape is closed, partially closed, etc. The usage information may define the time of use of the display device, the type of programs viewed on the display device, a list of particular users of the display device, and particular characteristics of the display device 100, such as the volume, contrast, and brightness of the display device.
At step 1405, a usage of the display device may be determined, and at step 1410, respective states of the one or more smart appliances 117 may be determined based on the received data.
At step 1415, the states of the various smart appliances may be set according to the state determined at step 1410. For example, the CPU150 may adjust various smart appliances 117 via the communication circuit of the display device 100.
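The habit-forming association of steps 1400-1415 can be sketched as a majority vote: for each display-usage context, record the appliance states observed alongside it and later replay the most common state per appliance. Record shapes and names are illustrative assumptions:

```python
from collections import Counter, defaultdict

def learn_habits(observations):
    """Steps 1400-1410: for each usage context, pick the majority state per appliance."""
    per_context = defaultdict(lambda: defaultdict(Counter))
    for obs in observations:
        for appliance, state in obs["appliance_states"].items():
            per_context[obs["usage"]][appliance][state] += 1
    return {
        usage: {a: states.most_common(1)[0][0] for a, states in appliances.items()}
        for usage, appliances in per_context.items()
    }

# Hypothetical status/usage records from the support database 153.
observations = [
    {"usage": "movie_8pm", "appliance_states": {"lamp": "10%", "drapes": "closed"}},
    {"usage": "movie_8pm", "appliance_states": {"lamp": "10%", "drapes": "closed"}},
    {"usage": "movie_8pm", "appliance_states": {"lamp": "50%", "drapes": "closed"}},
]

habits = learn_habits(observations)
print(habits["movie_8pm"])  # {'lamp': '10%', 'drapes': 'closed'}
```

At step 1415, the learned states would then be pushed to the appliances over the display device's communication circuitry.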
As shown in FIG. 15A, the user interface 1500 may include an information control 1505 to notify the user 130 that a usage habit has been detected. For example, the user interface 1500 may indicate that the display device 100 is in "theater mode" and that the smart light bulb is controlled when the display device 100 is in this mode. As shown in FIG. 15B, the user interface 1500 may be updated to provide details related to the detected usage habit, such as the name assigned to the habit (e.g., "movie time 8 PM"), the time at which "theater mode" is entered (e.g., 8:01 PM), and the setting the smart appliance is set to (e.g., 10%).
FIG. 16 illustrates a computer system 1600 that may form part of or implement any of the above-described systems, environments, devices, etc. The computer system 1600 may include a set of instructions 1645 that the processor 1605 may execute to cause the computer system 1600 to perform any of the operations described above.
In a networked deployment, the computer system 1600 may operate in the capacity of a server, or as a client computer in a server-client network environment, or as a peer computer system in a peer-to-peer (or distributed) environment. The computer system 1600 may also be implemented as, or incorporated into, various devices, such as a personal computer or a mobile device, capable of executing the instructions 1645 (sequentially or otherwise) to cause the device to perform one or more actions.
Additionally, computer system 1600 may include an input device 1625, such as a keyboard or mouse or touch screen, configured to allow user interaction with components of system 1600.
The computer system 1600 may also include a magnetic disk or optical drive unit 1615. The drive unit 1615 may include a computer-readable medium 1640 in which instructions 1645 may be stored. The instructions 1645 may reside, completely or at least partially, within the memory 1610 and/or within the processor 1605 during execution thereof by the computer system 1600. Memory 1610 and processor 1605 may also include computer-readable media as described above.
The methods and systems described herein may be realized in a centralized fashion in at least one computer system, or in a distributed fashion where different elements are spread across several interconnected computer systems.
A computer program, as used herein, refers to an expression, in a machine-executable language, code, or notation, of a set of machine-executable instructions intended to cause a device to perform a particular function, either directly or after one or more of the following: a) conversion of a first language, code, or notation to another language, code, or notation; and b) reproduction of a first language, code, or notation.
While the method and system have been described with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the claims. Therefore, it is intended that the present methods and systems not be limited to the particular embodiments disclosed, but that the disclosed methods and systems include all embodiments falling within the scope of the appended claims.
Claims (17)
1. A display device, comprising:
user input circuitry for receiving user commands;
a display for outputting video content and a user interface, the video content including metadata;
a processor in communication with the user input circuitry and the display; and
a non-transitory computer readable medium in communication with the processor and storing instruction code that, when executed by the processor, causes the processor to:
receiving a first scene command from the user input circuitry, the first scene command to search for scenes in the video content that are of a first scene type;
determining, from the metadata, one or more scenes in the video content that belong to the first scene type; and
updating the user interface to show one or more scene images related to the one or more scenes belonging to the first scene type.
2. The display device of claim 1, wherein the instruction code causes the processor to:
determining, based on the metadata in the video content, one or more candidate second scene commands related to the first scene command;
updating the user interface to show one or more of the candidate second scene commands;
receiving the second scene command from the user input circuitry, the second scene command to show video content of a second scene type related to the first scene command and the second scene command;
determining, from the metadata, one or more scenes in the video content that belong to the second scene type; and
updating the user interface to show one or more scene images related to the one or more scenes belonging to the second scene type.
3. The display device of claim 1, wherein the instruction code causes the processor to:
updating the user interface to show a unique identifier on each of the one or more scene images;
receiving a third scene command from the user input circuitry specifying one of the unique identifiers; and
displaying video content from the scene image associated with the specified unique identifier.
4. The display device of claim 1, wherein the first scene command corresponds to a voice command, and the instruction code causes the processor to:
implementing a natural language processor; and
determining, by the natural language processor, a meaning of the voice command.
5. The display device of claim 3, wherein the unique identifier comprises an Arabic number.
6. The display device of claim 3, wherein a first of the unique identifiers is presented overlaid on a first of the scene images.
7. The display device of claim 5, wherein the third scene command comprises a voice input of a user.
8. The display device of claim 5, wherein the third scene command comprises a user input via a remote control.
9. A method for controlling a display device, comprising:
receiving a user command through a user input circuit;
outputting, via a display, video content and a user interface, the video content including metadata;
receiving a first scene command from the user input circuit, the first scene command to search for scenes in the video content that are of a first scene type;
determining, from the metadata, one or more scenes in the video content that belong to the first scene type; and
updating the user interface to show one or more scene images related to the one or more scenes belonging to the first scene type.
10. The method of claim 9, further comprising:
determining, based on the metadata in the video content, one or more candidate second scene commands related to the first scene command;
updating the user interface to show one or more of the candidate second scene commands;
receiving the second scene command from the user input circuitry, the second scene command to show video content of a second scene type related to the first scene command and the second scene command;
determining, from the metadata, one or more scenes in the video content that belong to the second scene type; and
updating the user interface to show one or more scene images related to the one or more scenes belonging to the second scene type.
11. The method of claim 9, further comprising:
updating the user interface to show a unique identifier on each of the one or more scene images;
receiving a third scene command from the user input circuitry specifying one of the unique identifiers; and
displaying video content from the scene image associated with the specified unique identifier.
12. The method of claim 9, wherein the first scene command corresponds to a voice command, the method further comprising:
implementing a natural language processor; and
determining, by the natural language processor, a meaning of the voice command.
13. The method of claim 11, wherein the unique identifier comprises an arabic numeral.
14. The method of claim 11, wherein a first of the unique identifiers is presented overlaid on a first of the scene images.
15. The method of claim 13, wherein the third scene command comprises a voice input of a user.
16. The method of claim 13, wherein the third scene command comprises a user input via a remote control.
17. A non-transitory computer readable medium having stored thereon instruction code for controlling a display device, the instruction code executable by a computer to cause the computer to perform operations comprising:
receiving a first scene command from a user input circuit of the computer, the first scene command to search for scenes in video content that are of a first scene type;
determining, from metadata of the video content, one or more scenes in the video content that belong to the first scene type; and
updating a user interface of the computer to show one or more scene images related to the one or more scenes belonging to the first scene type.
Applications Claiming Priority (15)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201815985273A | 2018-05-21 | 2018-05-21 | |
US15/985,273 | 2018-05-21 | ||
US15/985,292 | 2018-05-21 | ||
US15/985,325 | 2018-05-21 | ||
US15/985,292 US20190354603A1 (en) | 2018-05-21 | 2018-05-21 | Display apparatus with intelligent user interface |
US15/985,251 US11507619B2 (en) | 2018-05-21 | 2018-05-21 | Display apparatus with intelligent user interface |
US15/985,338 | 2018-05-21 | ||
US15/985,303 | 2018-05-21 | ||
US15/985,206 US20190354608A1 (en) | 2018-05-21 | 2018-05-21 | Display apparatus with intelligent user interface |
US15/985,251 | 2018-05-21 | ||
US15/985,338 US20190356952A1 (en) | 2018-05-21 | 2018-05-21 | Display apparatus with intelligent user interface |
US15/985,206 | 2018-05-21 | ||
US15/985,325 US10965985B2 (en) | 2018-05-21 | 2018-05-21 | Display apparatus with intelligent user interface |
US15/985,303 US20190356951A1 (en) | 2018-05-21 | 2018-05-21 | Display apparatus with intelligent user interface |
PCT/CN2019/086009 WO2019223536A1 (en) | 2018-05-21 | 2019-05-08 | Display apparatus with intelligent user interface |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110741652A true CN110741652A (en) | 2020-01-31 |
Family
ID=68615946
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201980000619.3A Pending CN110741652A (en) | 2018-05-21 | 2019-05-08 | Display device with intelligent user interface |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110741652A (en) |
WO (1) | WO2019223536A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20210266191A1 (en) * | 2020-02-24 | 2021-08-26 | Haier Us Appliance Solutions, Inc. | Consumer appliance inheritance methods and systems |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103000173A (en) * | 2012-12-11 | 2013-03-27 | 优视科技有限公司 | Voice interaction method and device |
CN103077165A (en) * | 2012-12-31 | 2013-05-01 | 威盛电子股份有限公司 | Natural language dialogue method and system thereof |
US20130174195A1 (en) * | 2012-01-04 | 2013-07-04 | Google Inc. | Systems and methods of image searching |
US20140188931A1 (en) * | 2012-12-28 | 2014-07-03 | Eric J. Smiling | Lexicon based systems and methods for intelligent media search |
US20150161239A1 (en) * | 2010-03-23 | 2015-06-11 | Google Inc. | Presenting Search Term Refinements |
CN105007531A (en) * | 2014-04-23 | 2015-10-28 | Lg电子株式会社 | Image display device and control method thereof |
US20150382079A1 (en) * | 2014-06-30 | 2015-12-31 | Apple Inc. | Real-time digital assistant knowledge updates |
CN106851407A (en) * | 2017-01-24 | 2017-06-13 | 维沃移动通信有限公司 | A kind of control method and terminal of video playback progress |
CN107003797A (en) * | 2015-09-08 | 2017-08-01 | 苹果公司 | Intelligent automation assistant in media environment |
US20180018508A1 (en) * | 2015-01-29 | 2018-01-18 | Unifai Holdings Limited | Computer vision systems |
CN107833574A (en) * | 2017-11-16 | 2018-03-23 | 百度在线网络技术(北京)有限公司 | Method and apparatus for providing voice service |
US20180089203A1 (en) * | 2016-09-23 | 2018-03-29 | Adobe Systems Incorporated | Providing relevant video scenes in response to a video search query |
CN108055589A (en) * | 2017-12-20 | 2018-05-18 | 聚好看科技股份有限公司 | Smart television |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010183301A (en) * | 2009-02-04 | 2010-08-19 | Sony Corp | Video processing device, video processing method, and program |
CN102263907B (en) * | 2011-08-04 | 2013-09-18 | 央视国际网络有限公司 | Play control method of competition video, and generation method and device for clip information of competition video |
US10129608B2 (en) * | 2015-02-24 | 2018-11-13 | Zepp Labs, Inc. | Detect sports video highlights based on voice recognition |
CN107801106B (en) * | 2017-10-24 | 2019-10-15 | 维沃移动通信有限公司 | A kind of video clip intercept method and electronic equipment |
- 2019-05-08 WO PCT/CN2019/086009 patent/WO2019223536A1/en active Application Filing
- 2019-05-08 CN CN201980000619.3A patent/CN110741652A/en active Pending
Also Published As
Publication number | Publication date |
---|---|
WO2019223536A1 (en) | 2019-11-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102211014B1 (en) | Identification and control of smart devices | |
US20230061691A1 (en) | Display Apparatus with Intelligent User Interface | |
US11509957B2 (en) | Display apparatus with intelligent user interface | |
US20190354608A1 (en) | Display apparatus with intelligent user interface | |
RU2614137C2 (en) | Method and apparatus for obtaining information | |
US20190354603A1 (en) | Display apparatus with intelligent user interface | |
US20190356952A1 (en) | Display apparatus with intelligent user interface | |
US20170171602A1 (en) | Method and electronic device for controlling three stream video play | |
US20190356951A1 (en) | Display apparatus with intelligent user interface | |
US20130305307A1 (en) | Server, electronic apparatus, server control method and computer-readable medium | |
CN112579935B (en) | Page display method, device and equipment | |
CN111770376A (en) | Information display method, device, system, electronic equipment and storage medium | |
CN111935551A (en) | Video processing method and device, electronic equipment and storage medium | |
US20230421849A1 (en) | Systems and methods to enhance viewer program experience during profile mismatch | |
US20150106531A1 (en) | Multicast of stream selection from portable device | |
CN114095793A (en) | Video playing method and device, computer equipment and storage medium | |
CN111901482B (en) | Function control method and device, electronic equipment and readable storage medium | |
CN110741652A (en) | Display device with intelligent user interface | |
US20170171269A1 (en) | Media content playback method, apparatus and system | |
CN109471683A (en) | A kind of information displaying method, electronic equipment and storage medium | |
CN111726659B (en) | Video carousel method and device, electronic equipment and storage medium | |
CN113038208A (en) | Display method, computer equipment and computer readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information | ||
Address after: 266555 Qingdao economic and Technological Development Zone, Shandong, Hong Kong Road, No. 218 Applicant after: Hisense Visual Technology Co., Ltd. Address before: 266555 Qingdao economic and Technological Development Zone, Shandong, Hong Kong Road, No. 218 Applicant before: QINGDAO HISENSE ELECTRONICS Co.,Ltd. |
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20200131 |