GB2542777A - A first apparatus for controlling a second apparatus - Google Patents

A first apparatus for controlling a second apparatus

Info

Publication number
GB2542777A
GB2542777A (application GB1517062.4A)
Authority
GB
United Kingdom
Prior art keywords
circuitry
object information
identifier
icon
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1517062.4A
Other versions
GB201517062D0 (en)
Inventor
Nigel Stuart Moore
David Richard Hill-Jowett
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Europe BV United Kingdom Branch
Sony Corp
Original Assignee
Sony Europe Ltd
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Europe Ltd, Sony Corp filed Critical Sony Europe Ltd
Priority to GB1517062.4A
Publication of GB201517062D0
Publication of GB2542777A
Legal status: Withdrawn

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/4104 Peripherals receiving signals from specially adapted client devices
    • H04N21/4126 The peripheral being portable, e.g. PDAs or mobile phones
    • H04N21/41265 The peripheral being portable, having a remote control device for bidirectional communication between the remote control device and client device
    • H04N21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42204 User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
    • H04N21/4223 Cameras
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A first apparatus, such as a smartphone or tablet 204, for controlling a second apparatus such as a television 100. The first apparatus has a camera for capturing an image of the second apparatus and circuitry to detect a predetermined object in the captured image and to request and receive object information from the second apparatus. The object information comprises an identifier of the predetermined object which relates to a user interface (UI) element. The first apparatus has display and transmission circuitry to display the UI element and transmit user input commands to the second apparatus. The first and second apparatuses may be used in a system in which a smartphone takes an image of a TV displaying an on-screen keyboard 106 and uses the image to generate an on-screen keyboard on the smartphone, allowing a user 500 to control the TV remotely from the smartphone.

Description

A First Apparatus for Controlling a Second Apparatus

BACKGROUND

Field of the Disclosure
The present disclosure relates to a first apparatus for controlling a second apparatus.
Description of the Related Art
The “background” description provided herein is for the purpose of generally presenting the context of the disclosure. Work of the presently named inventors, to the extent it is described in the background section, as well as aspects of the description which may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present disclosure.
With the recent increase in internet enabled devices such as televisions (TVs), set top boxes, video games consoles and the like, it has been possible to deliver a wide variety of content such as movies, TV shows, games and music to these devices via the internet. Such content is delivered on an on-demand basis, allowing a user to, for example, watch a certain TV show or listen to a certain music track whenever they want. In order to do this, a user must select the content they wish to view and/or listen to via a suitable user interface, and the selected content is then downloaded to the device via the internet.
Such user interfaces may use, for example, menu systems, icons or the like to display the selectable content to the user. There may also be search features, allowing a user to search for particular content by typing in the name of, for example, a TV show, movie, music artist, etc. The problem with existing user interfaces, however, is that the user will usually have to navigate through the user interface, type text, etc. using a conventional device such as a TV remote control or video game controller. Although such devices are well suited to their conventional uses (for example, a TV remote control is well suited to changing the volume of a TV or navigating through conventional broadcast TV channels, and a game controller is well suited to controlling characters in a video game), such devices are often not well suited for operating user interfaces for on-demand, internet based content. In particular, such devices are not well suited to allowing a user to enter text to search for specific content. For example, in order to enter text using a TV remote control or video game controller, it is necessary to display an on-screen keyboard and to navigate through and select individual keys of the on-screen keyboard using a small number of directional buttons on the TV remote control or video game controller. This arrangement is time consuming and inconvenient for the user.
The present disclosure aims to alleviate the above-mentioned problems.
SUMMARY
In a first aspect, the present disclosure provides a first apparatus for controlling a second apparatus, the first apparatus comprising: a camera operable to capture an image of a displayed image generated by the second apparatus; object detection circuitry operable to detect a predetermined object in the captured image; transmitter circuitry operable to transmit an object information request to the second apparatus, the object information request comprising an identifier of the detected predetermined object; receiver circuitry operable to receive object information from the second apparatus, the object information comprising an identifier of a user interface (UI) element associated with the predetermined object identified in the object information request; display output circuitry operable to output for display the UI element identified in the object information; and input circuitry operable to receive an input command associated with the displayed UI element; wherein the transmitter circuitry is operable to transmit a control instruction to control an operation of the second apparatus on the basis of the input command.
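The sequence of operations recited in this aspect can be sketched as a simple message flow. All function names and message fields below are illustrative assumptions for clarity; the disclosure does not prescribe any particular API or wire format.

```python
# Sketch of the first-aspect flow: capture -> detect -> request -> receive
# -> display -> input -> control. The dict-shaped messages are hypothetical.

def make_object_info_request(object_id):
    """Object information request carrying the detected object's identifier."""
    return {"type": "object_info_request", "object_id": object_id}

def make_control_instruction(ui_element_id, input_command):
    """Control instruction derived from an input on the displayed UI element."""
    return {"type": "control_instruction",
            "ui_element": ui_element_id,
            "command": input_command}

def first_apparatus_flow(capture, detect, send, receive, display, get_input):
    """Run one control cycle of the first apparatus, with each circuitry
    block supplied as a callable."""
    image = capture()                              # camera
    object_id = detect(image)                      # object detection circuitry
    send(make_object_info_request(object_id))      # transmitter circuitry
    info = receive()                               # receiver circuitry
    display(info["ui_element_id"])                 # display output circuitry
    command = get_input()                          # input circuitry
    send(make_control_instruction(info["ui_element_id"], command))
```

Passing the circuitry blocks as callables keeps the sketch independent of any particular camera, detector, or transport implementation.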
In an embodiment, the first apparatus comprises controller circuitry, and: the transmitter circuitry is operable to transmit the object information request to a plurality of second apparatuses; the receiver circuitry is operable to receive object information from two or more of the plurality of second apparatuses, the object information received from each of the two or more second apparatuses comprising an identifier of the second apparatus and an identifier of a UI element associated with the predetermined object identified in the object information request; the controller circuitry is operable to select one of the two or more of the plurality of second apparatuses from which object information is received; the display output circuitry is operable to output for display the UI element identified in the object information received from the selected one of the two or more of the plurality of second apparatuses; the input circuitry is operable to receive an input command associated with the displayed UI element; and the transmitter circuitry is operable to transmit a control instruction to control an operation of the selected one of the two or more second apparatuses on the basis of the input command and on the basis of the identifier of the selected one of the two or more second apparatuses.
In an embodiment, the object detection circuitry is operable to detect an identifying object in the captured image, the identifying object comprising an identifier of the second apparatus which generates the displayed image; and the transmitter circuitry is operable to transmit the object information request and the control instruction to the second apparatus on the basis of the identifier of the identifying object.
In an embodiment, the predetermined object is an on-screen keyboard; the UI element is a keyboard UI element, the keyboard UI element comprising a plurality of virtual keys each representing at least one alphanumeric character; the input command received by the input circuitry is an input command to select one of the virtual keys of the keyboard UI element; and the control instruction transmitted by the transmitter circuitry is a control instruction for controlling the second apparatus to identify the at least one alphanumeric character represented by the selected virtual key of the keyboard UI element as a text input.
In an embodiment, the predetermined object is an icon menu, the icon menu comprising one or more icons each associated with respective content; the UI element is an icon UI element, the icon UI element comprising one or more selectable icons each corresponding to a respective one of the one or more icons of the icon menu; the input command received by the input circuitry is an input command to select one of the selectable icons of the icon UI element; and the control instruction transmitted by the transmitter circuitry is a control instruction for controlling the second apparatus to open the content associated with the icon of the icon menu corresponding to the selected icon of the icon UI element.
In an embodiment, the predetermined object is a logo associated with a content provider; the UI element is a content provider UI element, the content provider UI element comprising one or more selectable icons each associated with respective content available from the content provider; the input command received by the input circuitry is an input command to select one of the selectable icons of the content provider UI element; and the control instruction transmitted by the transmitter circuitry is a control instruction for controlling the second apparatus to open the content associated with the selected icon of the content provider UI element.
In an embodiment, the controller circuitry is operable to select one of the two or more of the plurality of second apparatuses from which object information is received on the basis of a selection command received by the input circuitry.
In an embodiment, the controller circuitry is operable to select one of the two or more of the plurality of second apparatuses from which object information is received on the basis of a measurement of a radio signal transmitted from each of the two or more of the plurality of second apparatuses from which object information is received.
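Selection on the basis of a radio signal measurement can be sketched very simply: pick the responding apparatus whose signal measures strongest. The use of a dict keyed by apparatus identifier and dBm values is an illustrative assumption, not part of the disclosure.

```python
# Sketch: choose among multiple responding second apparatuses by received
# signal strength (e.g. Wi-Fi RSSI in dBm, where less negative = stronger).

def select_by_signal_strength(responses):
    """`responses` maps an apparatus identifier to a measured signal level
    in dBm; return the identifier of the strongest-signal apparatus."""
    return max(responses, key=responses.get)
```

For example, `select_by_signal_strength({"TV1": -48, "TV2": -71})` would select the nearer apparatus, "TV1".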
In an embodiment, the identifier of the UI element associated with the predetermined object identified in the object information request comprises information for allowing the download of the UI element from an internet server.
In a second aspect, the present disclosure provides a second apparatus for being controlled by a first apparatus, the second apparatus comprising: image generator circuitry for generating an image and outputting the generated image for display, the generated image comprising a predetermined object which is detectable by the first apparatus in an image of the generated image captured by a camera of the first apparatus; receiver circuitry operable to receive an object information request from the first apparatus, the object information request comprising an identifier of the detected predetermined object; and transmitter circuitry operable to transmit object information to the first apparatus, the object information comprising an identifier of a user interface (UI) element associated with the predetermined object identified in the object information request; wherein the receiver circuitry is operable to receive a control instruction from the first apparatus to control an operation of the second apparatus, the control instruction being transmitted by the first apparatus on the basis of an input command received at the first apparatus, the input command being associated with the UI element identified in the object information when the UI element is displayed.
In an embodiment, the object information comprises an identifier of the second apparatus.
In an embodiment, the second apparatus is associated with an identifying object for being captured by the camera of the first apparatus, the identifying object comprising an identifier of the second apparatus.
In an embodiment, the image generated by the image generator circuitry comprises the identifying object.
In an embodiment, the predetermined object is an on-screen keyboard; the UI element is a keyboard UI element, the keyboard UI element comprising a plurality of virtual keys each representing at least one alphanumeric character; the input command received at the first apparatus is an input command to select one of the virtual keys of the keyboard UI element; and the control instruction received by the receiver circuitry is a control instruction for controlling control circuitry of the second apparatus to identify the at least one alphanumeric character represented by the selected virtual key of the keyboard UI element as a text input.
In an embodiment, the predetermined object is an icon menu, the icon menu comprising one or more icons each associated with respective content; the UI element is an icon UI element, the icon UI element comprising one or more selectable icons each corresponding to a respective one of the one or more icons of the icon menu; the input command received at the first apparatus is an input command to select one of the selectable icons of the icon UI element; and the control instruction received by the receiver circuitry is a control instruction for controlling control circuitry of the second apparatus to open the content associated with the icon of the icon menu corresponding to the selected icon of the icon UI element.
In an embodiment, the predetermined object is a logo associated with a content provider; the UI element is a content provider UI element, the content provider UI element comprising one or more selectable icons each associated with respective content available from the content provider; the input command received at the first apparatus is an input command to select one of the selectable icons of the content provider UI element; and the control instruction received by the receiver circuitry is a control instruction for controlling control circuitry of the second apparatus to open the content associated with the selected icon of the content provider UI element.
In an embodiment, the identifier of the UI element associated with the predetermined object identified in the object information request comprises information for allowing the download of the UI element from an internet server.
In a third aspect, the present disclosure provides a system comprising a first apparatus according to the first aspect and a second apparatus according to the second aspect.
In a fourth aspect, the present disclosure provides a method of operating a first apparatus for controlling a second apparatus, the first apparatus comprising a camera, transmitter circuitry, receiver circuitry, display output circuitry and input circuitry, wherein the method comprises: controlling the camera to capture an image of a displayed image generated by the second apparatus; detecting a predetermined object in the captured image; controlling the transmitter circuitry to transmit an object information request to the second apparatus, the object information request comprising an identifier of the detected predetermined object; controlling the receiver circuitry to receive object information from the second apparatus, the object information comprising an identifier of a user interface (UI) element associated with the predetermined object identified in the object information request; controlling the display output circuitry to output for display the UI element identified in the object information; controlling the input circuitry to receive an input command associated with the displayed UI element; and controlling the transmitter circuitry to transmit a control instruction to control an operation of the second apparatus on the basis of the input command.
In a fifth aspect, the present disclosure provides a storage medium storing a computer program for controlling a computer to perform a method according to the fourth aspect.
In a sixth aspect, the present disclosure provides a method of operating a second apparatus for being controlled by a first apparatus, the second apparatus comprising image generator circuitry, receiver circuitry and transmitter circuitry, wherein the method comprises: controlling the image generator circuitry to generate an image and to output the generated image for display, the generated image comprising a predetermined object which is detectable by the first apparatus in an image of the generated image captured by a camera of the first apparatus; controlling the receiver circuitry to receive an object information request from the first apparatus, the object information request comprising an identifier of the detected predetermined object; controlling the transmitter circuitry to transmit object information to the first apparatus, the object information comprising an identifier of a user interface (UI) element associated with the predetermined object identified in the object information request; and controlling the receiver circuitry to receive a control instruction from the first apparatus to control an operation of the second apparatus, the control instruction being transmitted by the first apparatus on the basis of an input command received at the first apparatus, the input command being associated with the UI element identified in the object information when the UI element is displayed.
In a seventh aspect, the present disclosure provides a storage medium storing a computer program for controlling a computer to perform a method according to the sixth aspect.
The foregoing paragraphs have been provided by way of general introduction, and are not intended to limit the scope of the following claims. The described embodiments, together with further advantages, will be best understood by reference to the following detailed description taken in conjunction with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS A more complete appreciation of the disclosure and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings, wherein:
Figures 1A and 1B show an arrangement for interacting with an on-demand content user interface on a TV using a conventional remote control;
Figure 2 schematically illustrates a plurality of devices connected via a network, according to an embodiment of the present disclosure;
Figure 3 schematically illustrates various components of a first device, according to an embodiment of the present disclosure;
Figure 4 schematically illustrates various components of a second device, according to an embodiment of the present disclosure;
Figure 5 describes an arrangement for allowing the transmission of control instructions from a first device to a second device, according to an embodiment of the present disclosure;
Figure 6 also describes an arrangement for allowing the transmission of control instructions from a first device to a second device, according to an embodiment of the present disclosure;
Figure 7 also describes an arrangement for allowing the transmission of control instructions from a first device to a second device, according to an embodiment of the present disclosure;
Figure 8 shows a flow chart illustrating a process performed by a first device so as to allow control instructions to be transmitted to a second device, according to an embodiment of the present disclosure;
Figure 9 shows a flow chart illustrating a process performed by a second device so as to allow control instructions to be received from a first device, according to an embodiment of the present disclosure;
Figure 10 describes the control of a second device on the basis of an icon menu user interface (UI) element, according to an embodiment of the present disclosure; and
Figure 11 describes the control of a second device on the basis of a content provider user interface (UI) element, according to an embodiment of the present disclosure.
DESCRIPTION OF THE EMBODIMENTS
Referring now to the drawings, wherein like reference numerals designate identical or corresponding parts throughout the several views.
Figures 1A and 1B show an arrangement for interacting with an on-demand content user interface on a TV 100 using a conventional remote control 102. In this case, the user wishes to search for specific TV or movie content by entering text in search box 104. In order to do this, the user has previously selected the search box, and enters text in the search box 104 using on-screen keyboard 106. In order to enter text, the user must navigate through each on-screen key of the on-screen keyboard using directional buttons 108A-D on the remote control 102 and, once they have navigated to the desired on-screen key, they must select the on-screen key using selection button 108E.
In Figures 1A and 1B, the user, for example, is searching for the TV show “Breaking Bad”. In Figure 1A, the user has navigated to and selected the on-screen key “B” of the on-screen keyboard (corresponding to the first letter of the name “Breaking Bad”) using the directional buttons 108A-D and selection button 108E of the remote control. That is, the user has navigated to on-screen key “B” using directional buttons 108A-D (as indicated by on-screen navigation identifier 110) and then selected on-screen key “B” using selection button 108E (as indicated by the appearance of the letter “B” in the search box 104). Then, in Figure 1B, the user has navigated to and selected the on-screen key “R” of the on-screen keyboard (corresponding to the second letter of the name “Breaking Bad”) using the directional buttons 108A-D and selection button 108E of the remote control. That is, the user has navigated to on-screen key “R” using directional buttons 108A-D (as indicated by on-screen navigation identifier 110) and then selected on-screen key “R” using selection button 108E (as indicated by the appearance of the letter “R” in the search box 104). The user then continues in this way so as to type the rest of the letters needed to spell “Breaking Bad” in the search box 104 (or at least enough of the letters so as to allow the TV show “Breaking Bad” to be found by an appropriate searching algorithm).
The problem, however, is that entering text in this way is time-consuming and inconvenient for the user. This is because many button presses of directional buttons 108A-D and selection button 108E are required to type in a relatively small number of letters. For example, consider the scenario in Figures 1A and 1B in which the letter “B” has been typed and in which the user now wishes to type the next letter “R”. In order to do this, the user must press directional button 108A (so as to move the on-screen navigation identifier 110 to the letter “H”), press directional button 108A again (so as to move the on-screen navigation identifier 110 to the letter “Y”), press directional button 108D (so as to move the on-screen navigation identifier 110 to the letter “T”), press directional button 108D again (so as to move the on-screen navigation identifier 110 to the letter “R”) and, finally, press selection button 108E so as to select the letter “R” so that it is entered in the search box 104. Thus, once the letter “B” has been entered, the user must press buttons on the remote control 102 five times in order to select the next letter “R”. This is time-consuming and inconvenient for the user.
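The five-press tally above can be reproduced by walking the described navigation path explicitly. The key adjacency below covers only the keys on this path and is an assumption about the staggered on-screen layout of Figures 1A and 1B.

```python
# Tally of the worked example: navigating from the just-selected "B" to "R".
# Each directional move costs one press, plus one press of the selection
# button. The adjacency map is a hypothetical fragment of the layout.

NEIGHBOUR = {
    ("B", "up"): "H",
    ("H", "up"): "Y",
    ("Y", "left"): "T",
    ("T", "left"): "R",
}

def presses_to_reach(start, moves):
    """Return (final key, total presses), counting one press per directional
    move plus one final press of the selection button."""
    key = start
    for move in moves:
        key = NEIGHBOUR[(key, move)]
    return key, len(moves) + 1  # +1 for selection button 108E
```

Walking `["up", "up", "left", "left"]` from "B" lands on "R" at a cost of five presses, matching the count in the text.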
The present disclosure aims to alleviate this problem by allowing another device, such as a smartphone, tablet computer, laptop computer or the like to control the user interface of a TV. Such devices will usually have user interface features such as keyboards (on-screen or otherwise), etc. which are quicker and easier for a user to use for selecting on-demand content than the user interface of TVs, which must be operated using a conventional remote control (as explained above). According to embodiments of the present disclosure, a first device, such as a smartphone or tablet computer, is used to control a second device, such as a TV, by transmitting control instructions to the second device over a communication channel. This will now be described.
Figure 2 schematically illustrates a plurality of devices connected via a network 202. The devices include a first TV 100 (TV1), a second TV 200 (TV2), a tablet computer 204 (Tablet 1) and a smartphone 206 (Smartphone 1). The first TV 100 and second TV 200 (each of which may be, for example, a Sony BRAVIA® W80C TV) are each an example of a second device on which content may be viewed and/or listened to by a user. The tablet computer 204 (for example, a Sony Xperia Z4® tablet) and smartphone 206 (for example, a Sony Xperia Z3® smartphone) are each an example of a first device for controlling first TV 100 or second TV 200 by transmitting control instructions over the network 202. In this embodiment, the network 202, which connects each of the devices 100, 200, 204 and 206, acts as a communication channel for transmission of the control instructions. The network 202 may be, for example, a Local Area Network (LAN) or, in particular, a Wireless Local Area Network (WLAN) such as a Wi-Fi® network. The network 202 allows each device on the network to send messages to and receive messages from each other device on the network (as indicated by arrows 208).
Figure 3 schematically illustrates various components of tablet computer 204. Tablet computer 204 comprises a camera 302, an object detector 304, a display output element 306, a user input element 308, a transmitter 310 for transmitting electronic messages to other devices on the network and a receiver 312 for receiving electronic messages from other devices on the network. Each of these components is controlled by a controller 300, which processes instructions stored in storage medium 314. The same components may be found in smartphone 206, or any other first device (such as a laptop computer or the like) suitable for transmitting control instructions to a second device in the way to be described.
Figure 4 schematically illustrates various components of first TV 100. TV 100 comprises a transmitter 402 for transmitting electronic messages to other devices on the network, a receiver 408 for receiving electronic messages from other devices on the network, a display output element 406 and an image generator 404, with each of these components being controlled by a controller 400. The controller 400 processes instructions stored in storage medium 410. The same components may be found in second TV 200, or any other second device (such as a set top box, video games console or the like) suitable for receiving and responding to control instructions from the first device in the way to be described.
An arrangement for allowing the transmission of control instructions from the tablet computer 204 (as a first device) to the first TV 100 (as a second device) is described with reference to Figures 5, 6 and 7.
In Figure 5, user 500 wishes to control navigation and selection of content on TV 100 using tablet computer 204 (tablet computer 204 may also be referred to simply as a “tablet”). The user 500 therefore holds the tablet 204 so that the screen 502 of the TV 100, on which a user interface for controlling navigation and selection of content is displayed, is within the field of view of the camera 302. Images are then captured by the camera 302 at a predetermined frame rate and displayed in real time at the predetermined frame rate on a screen 504 of the tablet 204 so that the screen 504 of the tablet 204 acts as an electronic viewfinder for the camera 302.
Figure 6 shows the arrangement of Figure 5 from a different perspective in which the screen 502 of the first TV 100 and the screen 504 of the tablet 204 can be seen. Because the screen 502 of the first TV 100 is within the field of view of the camera 302, images of objects displayed on the screen 502 of the first TV 100 are captured by the camera 302 and are displayed on the screen 504 of the tablet 204. In Figure 6, it can be seen that an image of the on-screen keyboard 106 displayed on the screen 502 of TV 100 has been captured by the camera 302 and displayed on the screen 504 of tablet 204.
The object detector 304 of the tablet 204 is configured to detect one or more predetermined objects in images captured by the camera 302. In the embodiment of Figures 6, 7 and 8, the object detector 304 is configured to detect the on-screen keyboard 106 in an image captured by the camera 302. This is possible because the on-screen keyboard 106 has one or more predetermined visual features which allow it to be detected (or recognised) by the object detector 304 based on a suitable image object detection technique. Any suitable object detection technique may be used, including edge matching, divide-and-conquer search, greyscale matching, gradient matching or the like, as will be appreciated by the skilled person.
Once the on-screen keyboard 106 has been detected in the captured image, the transmitter 310 is configured to transmit an object information request to the TV 100. The object information request is an electronic message comprising an identifier of the detected predetermined object. In this case, the detected object is the on-screen keyboard 106, and therefore an identifier identifying that it is the on-screen keyboard 106 which has been detected is transmitted to the TV 100. The object information request is transmitted to TV 100 over the network 202.
Following receipt of the object information request by the TV 100, the TV 100, in response, transmits object information back to the tablet 204. The object information is an electronic message comprising an identifier which identifies the TV 100 and an identifier which identifies a user interface (UI) element associated with the detected predetermined object identified in the object information request. The object information is received by the receiver 312 of the tablet 204 over the network 202.
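The request and reply described above form a simple two-message exchange. The following Python sketch illustrates it; the JSON-style field names and message shapes are assumptions made for illustration, as the description does not specify an on-wire format.

```python
# Hypothetical message formats for the object information exchange.
# Field names are illustrative assumptions, not a defined protocol.

def make_object_information_request(object_id):
    """Built by the first device (e.g. tablet 204) after detecting a
    predetermined object in a captured image."""
    return {"type": "object_information_request", "object_id": object_id}

def make_object_information(device_id, ui_element_id):
    """Built by the second device (e.g. TV 100) in response, identifying
    itself and the UI element associated with the detected object."""
    return {
        "type": "object_information",
        "device_id": device_id,         # e.g. "TV 1"
        "ui_element_id": ui_element_id, # e.g. a URL for the UI element
    }

# Example: the tablet detects the on-screen keyboard (identifier 1) ...
request = make_object_information_request(1)
# ... and the TV replies identifying itself and the keyboard UI element.
response = make_object_information("TV 1", "http://sony.com/url1")
```

In practice the request would be broadcast over the network 202 and the reply received by the receiver 312, but the payloads above capture the information each message must carry.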
In this case, the detected predetermined object is on-screen keyboard 106, and therefore the UI element identified in the object information is a UI element associated with the on-screen keyboard. In this case, the UI element is a keyboard UI element 700 (as shown in Figure 7) which is output for display on the screen 504 of the tablet 204 by the display output element 306. The keyboard UI element 700 comprises virtual keys 702A, including a plurality of virtual keys each of which corresponds to a letter of an alphabet (in this case, the English alphabet). Furthermore, in this case, each virtual key 702A of the keyboard UI element 700 corresponds to a corresponding virtual key 702B of the on-screen keyboard 106. It will be appreciated that, in general, a virtual key of the keyboard UI element 700 may represent at least one alphanumeric character (rather than just a single letter of the English alphabet). In the case of a reduced size keyboard UI element 700 (which includes, for example, an alphanumeric keypad instead of a full size QWERTY keyboard), a virtual key of the keyboard UI element 700 may represent a plurality of alphanumeric characters.
The keyboard UI element 700 displayed on the screen 504 of the tablet 204 in Figure 7 is different to the captured image of the on-screen keyboard 106 of the TV 100 displayed on the screen 504 of the tablet 204 in Figure 6. More specifically, the captured image of the on-screen keyboard 106 of the TV 100 displayed on the screen 504 of the tablet 204 in Figure 6 is simply an image of the on-screen keyboard 106 as captured by the camera 302. This image is not a user interface, but rather is a conventional electronic image. On the other hand, the keyboard UI element 700 displayed on the screen 504 of the tablet 204 in Figure 7 forms part of an interactive user interface which, in combination with the user input element 308, allows the tablet 204 to receive input from the user.
As illustrated in Figure 7, the keyboard UI element 700 identified in the received object information is displayed on the screen 504 of the tablet. The keyboard UI element 700 forms part of an interactive user interface which, in combination with the user input element 308, allows the tablet 204 to receive input commands from the user. The tablet 204 is then configured to transmit control instructions to the TV 100 via the network 202 on the basis of the received input commands.
In the embodiment of Figure 7, the user input element 308 of the tablet 204 is a touch panel (such as a capacitive touch panel or the like) formed with the screen 504 of the tablet 204 so as to form a touch screen. Thus, when the keyboard UI element 700 is displayed, the user may touch appropriate portions of the touch screen corresponding to the location of virtual keys of the keyboard UI element 700 so as to type letters. Each touch of the touch screen to type a letter shown on the keyboard UI element 700 constitutes an input command, and a control instruction is then transmitted to the TV 100 by the transmitter 310 over the network 202 on the basis of each input command.
In the embodiment of Figure 7, an input command corresponding to the touch of a virtual key of the keyboard UI element 700 by the user is converted to a control instruction which is transmitted to the TV 100 so as to cause the letter associated with that virtual key to appear in the search box 104. Thus, for example, when the user touches the letter “B” of the keyboard UI element 700 (corresponding to the first letter of the title “Breaking Bad”), the letter “B” appears in the search box 104. Then, when the user touches the letter “R” (corresponding to the second letter in the title “Breaking Bad”), the letter “R” appears in the search box 104. The user then continues in this way, touching the appropriate letters of the keyboard UI element 700 so as to type in the rest of the letters of “Breaking Bad” into the search box 104 (or at least enough of the letters so as to allow the TV show “Breaking Bad” to be found by an appropriate searching algorithm).
Advantageously, this allows text entry to search for content to be carried out in a more convenient and less time consuming way for the user. At the same time, complex set-up processes and the like so as to enable control of the TV 100 using the tablet 204 are avoided, since all the user has to do is capture an image of the on-screen keyboard 106 using the tablet 204 so as to allow control instructions to be transmitted to the TV 100. Convenience and ease of use for the user are therefore further improved.
In another embodiment, a user is able to enter a plurality of letters prior to a control instruction being transmitted to the TV 100. So, for example, the user may type an entire letter string (for example, the entire string “Breaking Bad”) using the virtual keys of the keyboard UI element 700. Once the user has finished typing the string, the user then issues an appropriate command (for example, by pressing an “enter” virtual key of the keyboard UI element 700). The entire string of letters is then converted to a control instruction which is transmitted to the TV 100. This string is then used as a search term. This allows only a single control instruction to be transmitted in order to search for particular content, rather than multiple control instructions (as occurs when a control instruction is transmitted for every letter).
The signalling overhead related to transmitting the control instructions is therefore reduced.
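The difference in signalling overhead between the per-letter and batched embodiments can be illustrated as follows; the instruction shapes are assumptions for illustration only.

```python
def per_letter_instructions(text):
    """One control instruction per typed character (first embodiment)."""
    return [{"type": "control", "action": "append_text", "text": ch}
            for ch in text]

def batched_instruction(text):
    """A single control instruction carrying the whole string, sent
    when the user presses the "enter" virtual key (second embodiment)."""
    return {"type": "control", "action": "search", "text": text}

# Typing "Breaking Bad" character by character costs twelve messages ...
assert len(per_letter_instructions("Breaking Bad")) == 12
# ... while the batched form costs exactly one.
assert batched_instruction("Breaking Bad")["text"] == "Breaking Bad"
```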
Figure 8 shows a flow chart illustrating a process performed by the tablet 204 so as to allow control instructions to be transmitted to the TV 100, according to an embodiment.
The process starts at step 800. At step 802, an object detection application (object detection app) is opened on the tablet 204. The object detection app is a software application run by the controller 300 so as to activate the camera 302 and to activate the object detector 304 so as to allow objects displayed on the screen of the TV 100 to be captured in an image and detected, as explained above. The user may open the object detection app manually using an icon or the like on the tablet 204. Alternatively, the object detection app may open automatically when the user moves the tablet 204 in such a way as to point the camera 302 towards the TV 100. This may be detected by an accelerometer, gravimeter or the like (not shown) of the tablet 204. For example, a gravimeter may be used to detect when the tablet 204 moves from a first orientation in which it is substantially horizontal (and in which the camera 302 of the tablet 204 substantially points towards the floor) to a second orientation in which it is substantially vertical (and in which the camera 302 of the tablet 204 may therefore point towards the TV 100).
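The orientation-based trigger described above can be sketched as a threshold test on the device pitch reported by the gravimeter or accelerometer. The pitch convention (0° flat, 90° upright) and the threshold value are illustrative assumptions.

```python
def camera_points_forward(pitch_degrees, threshold=60.0):
    """Decide whether to auto-open the object detection app from the
    device's pitch angle (0 = lying flat/horizontal, 90 = held
    upright/vertical), as reported by a gravimeter or accelerometer.
    The 60-degree threshold is an illustrative assumption."""
    return pitch_degrees >= threshold

# Lying flat on a table: the camera points at the floor, app stays closed.
assert not camera_points_forward(5.0)
# Raised towards the TV: the app opens automatically.
assert camera_points_forward(80.0)
```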
At step 806, it is determined as to whether or not the object detector 304 has detected a predetermined object. As mentioned above, the camera 302 captures images at a predetermined frame rate. Each captured image is then passed to the object detector 304 and an appropriate object detection algorithm is applied to that image so as to try to detect one or more predetermined objects (such as the on-screen keyboard 106 of the TV 100, as previously discussed). The image capture and object detection may continue for a predetermined time period (such as 10 seconds, 20 seconds or 30 seconds, for example). If, within the predetermined time period, a predetermined object is detected in one of the captured image frames, then the process moves on to step 808. On the other hand, if no predetermined object is detected in any of the captured image frames within the predetermined time period, then the process ends at step 818.
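Step 806 can be sketched as a polling loop with a timeout; here `capture_frame` and `detect` are stand-ins for the camera 302 and the object detector 304, and the data they exchange is purely illustrative.

```python
import time

def detect_object(capture_frame, detect, timeout_s=10.0):
    """Poll captured frames for up to timeout_s seconds (step 806).
    capture_frame() returns an image; detect(image) returns an object
    identifier or None if no predetermined object is found."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        object_id = detect(capture_frame())
        if object_id is not None:
            return object_id   # detection succeeded: continue to step 808
    return None                # nothing detected in time: end at step 818

# Example with stand-in functions: the keyboard appears in the third
# captured frame and yields object identifier 1.
frames = iter(["blank", "blank", "keyboard"])
found = detect_object(lambda: next(frames, "keyboard"),
                      lambda img: 1 if img == "keyboard" else None,
                      timeout_s=1.0)
assert found == 1
```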
At step 808, the transmitter 310 transmits an object information request. As previously mentioned, an object information request is an electronic message comprising an identifier of the detected predetermined object. The identifier is generated by the controller 300 on the basis of, for example, a lookup table which relates predetermined, detectable objects in images displayed by the TV 100 with respective object identifiers. The lookup table is stored in the storage medium 314 of the tablet 204. An example of such a lookup table is shown as Table 1.
Table 1

Detected predetermined object    Object identifier
On-screen keyboard 106           1
Icon menu                        2
Content provider logo            3
Thus, for example, on the basis of Table 1, when the on-screen keyboard 106 of the TV 100 is detected by the object detector 304, the identifier “1” is included in the object information request. Alternatively, when an icon menu (described later on) is detected by the object detector 304, the identifier “2” is included in the object information request. Alternatively, when a content provider logo (also described later on) is detected by the object detector 304, the identifier “3” is included in the object information request.
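The Table 1 lookup performed by the controller 300 can be sketched as a simple dictionary; the object names used as keys are illustrative stand-ins for whatever labels the object detector 304 produces.

```python
# Illustrative version of the Table 1 lookup stored in the storage
# medium 314 of the tablet: detected object -> object identifier.
OBJECT_IDS = {
    "on_screen_keyboard": 1,
    "icon_menu": 2,
    "content_provider_logo": 3,
}

def object_identifier(detected_object):
    """Return the identifier to include in the object information
    request transmitted at step 808."""
    return OBJECT_IDS[detected_object]

assert object_identifier("on_screen_keyboard") == 1
assert object_identifier("content_provider_logo") == 3
```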
At step 810, the controller 300 determines whether or not object information has been received by the receiver 312. As previously mentioned, the object information is an electronic message comprising an identifier which identifies the TV 100 and an identifier which identifies a user interface (UI) element associated with the detected predetermined object identified in the object information request. In this example, the identifier of the TV 100 is “TV 1” and the identifier of the UI element is a Uniform
Resource Locator (URL) address which allows the tablet 204 to download the relevant UI element (for example, the keyboard UI element 700) from an internet server. Again, the determination as to whether or not object information has been received by the receiver 312 may occur over a predetermined time period (such as 10 seconds, 20 seconds or 30 seconds, for example). If object information is received within the predetermined time period, then the process moves on to step 812. On the other hand, if no predetermined object information is received within the predetermined time period, then the process ends at step 818.
At step 812, it is determined as to whether or not object information is received from more than one source. This may occur when there is more than one TV (or other second device) which may receive the object information request from the tablet 204. For example, in the arrangement of interconnected devices shown in Figure 2, there is a first TV 100 (“TV 1”) and a second TV 200 (“TV 2”). Both of these are connected to the network 202, along with the tablet 204 (“Tablet 1”) and smartphone 206 (“Smartphone 1”). It will be appreciated that when images of a predetermined object displayed on first TV 100 are captured by the camera 302, the controller 300 of the tablet 204 will not necessarily know whether the predetermined object is shown on the first TV 100 or the second TV 200. The object information request is therefore sent to both first TV 100 and second TV 200 (that is, all TVs on the network), and each of the first TV 100 and second TV 200 will transmit object information back to the tablet 204 over the network 202. The object information from the first TV 100 will contain an identifier of the first TV 100 (for example, identifier “TV 1”) together with an identifier of an appropriate UI element for controlling the first TV 100. Similarly, the object information of the second TV 200 will contain an identifier of the second TV (for example, identifier “TV 2”) together with an identifier of an appropriate UI element for controlling the second TV 200. At step 810, the tablet 204 will receive object information from each of the first TV 100 and the second TV 200, and this will be detected at step 812.
If, at step 812, it is detected that object information has been received from only one source, then the process moves on to step 816, in which the controller 300 of the tablet 204 allows control instructions to be transmitted to that source. For example, if only the first TV 100 were to be connected to the network 202, then object information will be received from first TV 100 only. This will be detected at step 812, and control instructions may therefore be transmitted to the first TV 100 at step 816.
On the other hand, if, at step 812, it is detected that object information has been received from more than one source, then the process moves on to step 814, in which one of the sources from which object information has been received is selected. This selection may be carried out manually on the basis of the source identifier included in the object information. For example, if object information is received from both the first TV 100 (with source identifier “TV 1”) and the second TV 200 (with source identifier “TV 2”), then (under the control of the controller 300) the user may be presented with a selection screen (not shown) in which they manually select the TV they wish to control. For example, the selection screen could be an interactive menu by which either “TV 1” or “TV 2” may be selected by the user. In this case, the interactive menu is output for display by the display output element 306 and the selection of either “TV 1” or “TV 2” by the user is input at the user input element 308. Alternatively, the controller 300 may automatically select the source depending on a radio signal strength or quality or the like from each of the first and second TV measured at the receiver 312 (so that the source which is nearer to the user and thus more likely to have been captured by the camera 302 is selected automatically). As another example, the source from which the object information is received first may be automatically selected as the nearest source (so, for example, if the object information from “TV 1” is received before the object information from “TV 2”, then “TV 1” is automatically selected as the source as it is likely to be closer than “TV 2”). Following selection of the appropriate source at step 814, the process moves onto step 816, in which control instructions may be transmitted to the selected source. 
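The automatic selection policies of steps 812 to 814 can be sketched as follows. The data shapes (a list of replies in arrival order and an optional map of measured signal strengths) are illustrative assumptions; both automatic policies appear in the description above.

```python
def select_source(responses, signal_strength=None):
    """Choose which second device to control (step 814) when object
    information arrives from one or more sources. responses is a list
    of (device_id, ui_element_id) tuples in order of arrival;
    signal_strength optionally maps device_id -> measured strength."""
    if len(responses) == 1:
        return responses[0][0]        # only one source: control it (step 816)
    if signal_strength:
        # Strongest signal is assumed nearest, and therefore most
        # likely to be the TV whose screen was captured.
        return max(responses, key=lambda r: signal_strength[r[0]])[0]
    # Otherwise the first responder is assumed to be the nearest source.
    return responses[0][0]

replies = [("TV 1", "http://sony.com/url1"),
           ("TV 2", "http://sony.com/url1")]
assert select_source(replies) == "TV 1"                       # first responder
assert select_source(replies, {"TV 1": -60, "TV 2": -40}) == "TV 2"  # stronger
```

Manual selection (presenting an interactive menu of source identifiers to the user) would replace this function's return value with the user's choice.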
For example, if the user selects “TV 1” as the source to be controlled by the tablet 204 (or if “TV 1” has been selected automatically), then control instructions may be transmitted to the first TV 100 at step 816. The process then ends at step 818.
Figure 9 shows a flow chart illustrating a process performed by the first TV 100 so as to allow control instructions to be received from the tablet 204, according to an embodiment.
The process starts at step 900. At step 902, the image generator 404 generates an image and the display output element 406 outputs the generated image for display on the screen 502 of the TV 100. The generated image includes a predetermined object (such as on-screen keyboard 106) which is detectable by the object detector 304 of the tablet 204 in captured images of the screen 502 of TV 100, as previously described. At step 904, it is determined as to whether or not an object information request has been received at the receiver 408 of the TV 100. If no object information request has been received, then the process returns to the start of step 904. On the other hand, if an object information request has been received, then the process moves on to step 906.
At step 906, the transmitter 402 transmits object information to the tablet 204. As already discussed, the object information is an electronic message comprising an identifier of TV 100 (for example, the identifier “TV 1”) and an identifier of a UI element associated with the predetermined object identified in the object information request received from the tablet 204. The UI element identifier is generated on the basis of, for example, a lookup table which relates the object identifiers receivable from the tablet 204 with respective UI element identifier URLs. The lookup table is stored in the storage medium 410 of the TV 100. An example of such a lookup table is shown as Table 2.
Table 2

Object identifier    UI element identifier
1                    http://sony.com/url1
2                    http://sony.com/url2
3                    http://sony.com/url3
Thus, for example, on the basis of Table 2, when the on-screen keyboard 106 of the TV 100 is detected by the object detector 304 and therefore the object identifier “1” is included in the received object information request (see Table 1), the UI element identifier “http://sony.com/url1” is included in the transmitted object information. This allows the tablet 204 to download the UI element related to the on-screen keyboard 106 (in particular, keyboard UI element 700) and display it to the user for use in entering text on the TV 100 (as previously described). Alternatively, when the icon menu of the TV 100 is detected by the object detector 304 and therefore the object identifier “2” is included in the received object information request (see Table 1), the UI element identifier “http://sony.com/url2” is included in the transmitted object information. This allows the tablet 204 to download the UI element related to the icon menu and display it to the user for use in selecting icons on the TV 100 (as will later be described). Alternatively, when the content provider logo of the TV 100 is detected by the object detector 304 and therefore the object identifier “3” is included in the object information request (see Table 1), the UI element identifier “http://sony.com/url3” is included in the object information. This allows the tablet 204 to download the UI element related to the content provider logo and display it to the user for use in viewing content on the TV 100 (as will later be described).
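The TV-side Table 2 lookup performed at step 906 can be sketched in the same way as the tablet-side lookup; the URLs follow the examples above and the reply's field names are illustrative assumptions.

```python
# Illustrative version of the Table 2 lookup stored in the storage
# medium 410 of the TV: object identifier -> UI element identifier URL.
UI_ELEMENT_URLS = {
    1: "http://sony.com/url1",  # keyboard UI element 700
    2: "http://sony.com/url2",  # icon UI element
    3: "http://sony.com/url3",  # content provider UI element
}

def reply_to_request(tv_id, object_id):
    """Build the TV's object information reply (step 906) from the
    object identifier received in the object information request."""
    return {"device_id": tv_id,
            "ui_element_id": UI_ELEMENT_URLS[object_id]}

info = reply_to_request("TV 1", 1)
assert info["ui_element_id"] == "http://sony.com/url1"
```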
The UI element identifier URL may direct the user to an external web server in order to download its associated UI element. Alternatively, the UI element identifier URL may direct the user to a web server hosted by the TV 100 itself (the TV itself in this case hosts a web server comprising the data defining each of the UI elements which may be provided to the tablet 204). It will be appreciated that any other suitable method of providing the UI elements to the tablet could also be used. All that is required is that, in response to receiving a predetermined object identifier from the tablet, the TV is able to provide information to the tablet (such as a suitable UI element identifier) which enables the tablet to obtain the information (for example, the software) which defines the associated UI element from a suitable location.
Once the object information has been transmitted, the process moves on to step 908, in which it is determined whether or not a control instruction has been received from the tablet 204. A predetermined time period (such as 1 or 2 minutes, for example, or however long it would typically take for the UI element identified in the object information to be downloaded by the tablet 204) may be set in which to determine whether or not a control instruction has been received. If a control instruction is not received within the predetermined time period, then the process ends at step 912. This may occur if, for example, object information is received by the tablet 204 from multiple sources (for example, both the first TV 100 and the second TV 200) and the first TV 100 is not selected for control by the tablet (that is, the first TV 100 is not selected at step 814 of the process shown in Figure 8). On the other hand, if a control instruction is received within the predetermined time period (as will occur when the first TV 100 is the only TV on the network 202 or if the user selects the first TV 100, for example), then the process moves on to step 910, in which the first TV 100 processes the received control instruction together with any further control instructions received from the tablet 204. For example, if it is the keyboard UI element 700 which is identified in the object information and which is downloaded and displayed to the user by the tablet 204, the first TV 100 will process control instructions received from the tablet 204 so as to allow text entry operations (such as text entry in the search box 104, as previously described) to be performed. The process then ends at step 912.
Although, in the above-described embodiments, the communication channel for allowing transmission of control instructions is provided by the network 202, any other suitable communication channel known to the skilled person may also be used.
For example, the communication channel may instead be a direct communication channel which allows control instructions to be transmitted directly from the first device (such as the tablet 204 or smartphone 206) to the second device (such as the first TV 100 or second TV 200). The direct communication channel may be established via a Bluetooth ® connection, Wi-Fi Direct ® connection or the like between the first and second device, for example. In this case, the object information transmitted by each of the first TV 100 and second TV 200 over the network 202 may include pairing information (such as Bluetooth ® or Wi-Fi Direct ® pairing information, for example) so as to allow the tablet 204 to establish the direct communication channel with the relevant one of the first TV 100 and second TV 200. Control instructions will then be transmitted from the tablet 204 to the selected TV over the direct communication channel instead of via the network 202. Alternatively, instead of the tablet 204 and first TV 100 and second TV 200 communicating over the network 202, separate direct communication channels (such as separate Bluetooth ® or Wi-Fi Direct ® communication channels) may have previously been established between the tablet 204 and each of the first TV 100 and second TV 200. The various electronic messages (including the object information requests to each TV, object information from each TV and control instructions to the TV to be controlled) will then be transmitted over the separate direct communication channels, each of which may act as a communication channel for the transmission of control instructions. In order to allow a direct communication channel to be implemented as described, each of the receiver 312 and transmitter 310 of the tablet 204 and each of the receiver 408 and transmitter 402 of the TVs will be configured appropriately. 
For example, each of the receiver 312 and transmitter 310 of the tablet 204 and each of the receiver 408 and transmitter 402 of the TVs may be configured to communicate via Bluetooth ® or Wi-Fi Direct ® under the control of the respective controllers 300 and 400 (it will be appreciated that, where appropriate, the receivers 312, 408 may each comprise separate sub-receivers, one for receiving, for example, WiFi ® signals and one for receiving Bluetooth ® signals, and that the transmitters 310, 402 may each comprise separate sub-transmitters, one for transmitting, for example, WiFi ® signals and one for transmitting Bluetooth ® signals).
In the case that a radio based direct communication channel such as Bluetooth ® or Wi-Fi Direct ® communication is implemented between the tablet 204, first TV 100 and second TV 200, a further advantage is realised in that the selection of the first TV 100 or second TV 200 for control by the tablet 204 (as occurs in step 814 of Figure 8) may be carried out automatically on the basis of, for example, measured signal strength and/or quality of the Bluetooth ® or Wi-Fi Direct ® signal transmitted by each of the TVs. For example, in the case that Bluetooth ® or Wi-Fi Direct ® pairing information is transmitted over the network 202 as part of the object information (as described above), this pairing information will give the controller 300 of the tablet 204 sufficient information to distinguish the Bluetooth ® or Wi-Fi Direct ® signals transmitted by each of the first TV 100 and second TV 200 and to therefore make an automatic selection on which TV is to be controlled on the basis of which TV has the highest Bluetooth ® or Wi-Fi Direct ® signal strength and/or quality (this being indicative that this particular TV is closer in proximity to the tablet 204 and is therefore most likely to be the TV whose screen has been captured). In the case that separate Bluetooth ® or Wi-Fi Direct ® connections are maintained between the tablet 204 and each of the first and second TVs, it will be appreciated that, based on known techniques, the Bluetooth ® or Wi-Fi Direct ® signal of each TV is distinguishable, thus allowing measurement of the signal strength and/or quality of each Bluetooth ® or Wi-Fi Direct ® signal and automatic selection of the TV for which the measured strength and/or quality is highest.
Thus, it will be appreciated that the control instructions may be transmitted using any suitable communication channel known to the skilled person. As long as a first device (such as the tablet 204) is able to capture an image of an image of a predetermined object generated by a second device (such as first TV 100), transmit an object information request to the second device, receive object information back from the second device and transmit control instructions to the second device so as to control the operation of the second device, then the advantages of the present disclosure may be realised.
It will be appreciated that, in the case that the second device is a device such as set top box or video games console which does not have its own screen, the display output element 406 of the second device will output generated images comprising a predetermined object to an external display (such as a conventional TV). This allows the predetermined object to be captured by the first device (such as the tablet 204). The object information request will then be transmitted to the second device in the usual way.
Although the above-described embodiments relate to capturing an image of the on-screen keyboard 106 displayed by the first TV 100 as a predetermined object and, on the basis of this, downloading and displaying keyboard UI element 700 on the tablet 204, it will be appreciated that this is only one example of the way in which the present disclosure can be applied. Figures 10 and 11 give alternative examples.
In Figure 10, an icon menu 1000 is displayed on the screen 502 of the TV 100, together with a logo 1004 indicating the provider of the icon menu 1000. The icon menu 1000 is an interactive menu which allows the user to select icons representing different content providers from which content may be downloaded.
It will also be appreciated that playable content itself could also be presented by way of an icon menu 1000 (in this case, each icon will represent playable content such as a movie, TV show or the like). In general, it will be appreciated that the icon menu comprises one or more icons each associated with respective content (which could be playable content itself or a particular provider of content).
When controlling the icon menu 1000 with a conventional remote control 102, the user will navigate through each individual icon using the directional buttons 108A-D and will select a particular icon using the selection button 108E. However, this can be time-consuming and inconvenient for the user. On the other hand, navigating through icons using, say, a touch screen of tablet 204 is quicker and more convenient for the user. For example, the user may use a scrolling or swiping action to navigate through the icons and may simply tap an icon in order to select it (as is known in the art). Thus, in order to allow quicker and more convenient navigation and selection of icons on the TV 100, an icon UI element 1002 may be downloaded and displayed on the screen of the tablet 204. The icon UI element 1002 will be downloaded following the capture of an image of the screen 502 of the TV 100 when the icon menu 1000 is displayed, using the above described method. In this case, the predetermined object may be the icon menu 1000 itself, or may be the logo 1004 of the icon menu provider, for example. In this case, the icon menu provider is Sony ® Entertainment Network.
The icon UI element 1002 displays the same icons as included in the icon menu 1000 displayed on the TV 100. The user may scroll through the icons of the UI element 1002 using the touch screen of the tablet 204 and tap the icon corresponding to the content provider they wish to select. Upon selection of the icon of a particular content provider, a control instruction is transmitted to the TV 100 instructing the TV to display content associated with that content provider. For example, if the user selects the icon corresponding to “Netflix ®” on the tablet 204, then the control instruction transmitted to the TV 100 will control the TV 100 to display content available on Netflix ®. It will be appreciated that other control commands may also be transmitted from the tablet 204 to the TV 100 in response to user interaction with the icon UI element 1002. For example, if the user scrolls through the icons of the icon UI element 1002 (thus causing the icons of icon UI element 1002 to move, with some icons being removed from view and other icons newly appearing), then control commands may be transmitted to the TV 100 to control the TV 100 to scroll through the icons of icon menu 1000 in a similar manner. The scrolling of icons by the user using the icon UI element 1002 is thus mimicked by the icon menu 1000 on the TV 100, thus providing an improved interactive experience for the user.
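The mirroring of scroll and tap gestures onto the TV's icon menu can be sketched as a translation from touch events to control commands; all event and command shapes here are illustrative assumptions rather than a defined interface.

```python
def mirror_gestures(touch_events):
    """Translate gestures made on the icon UI element 1002 into control
    commands so that the icon menu 1000 on the TV follows in step.
    Event and command dictionaries are illustrative assumptions."""
    commands = []
    for event in touch_events:
        if event["gesture"] == "scroll":
            # Scrolling the tablet's icons scrolls the TV's icon menu.
            commands.append({"action": "scroll_icons",
                             "delta": event["delta"]})
        elif event["gesture"] == "tap":
            # Tapping an icon selects the corresponding content provider.
            commands.append({"action": "select_icon",
                             "icon": event["icon"]})
    return commands

cmds = mirror_gestures([{"gesture": "scroll", "delta": 3},
                        {"gesture": "tap", "icon": "Netflix"}])
assert cmds[0]["action"] == "scroll_icons"
assert cmds[1]["icon"] == "Netflix"
```

Each resulting command would be transmitted to the TV 100 as a control instruction over the chosen communication channel.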
In Figure 11, a particular TV show is being shown on the screen 502 of TV 100. This TV show is being broadcast as a conventional TV show by content provider “Channel 1”, as shown by the fact that a logo 1100 for “Channel 1” is displayed in the top left corner of the screen 502. In this case, the channel logo 1100 is the predetermined object, and when an image of the screen 502 of the TV 100 is captured by the camera 302 of the tablet 204, the UI element which is subsequently downloaded and displayed on the screen 504 of the tablet 204 is a content provider UI element 1101. This content provider UI element 1101 provides information 1102 about the show currently being watched (including, in this example, the name of the show “Show 1” and the season (“S3”) and episode (“E5”) number of the show). This information may be retrieved as part of the content provider UI element 1101 and may be found based on TV listings for “Channel 1” available on the internet, for example. In addition to the show information 1102, the content provider UI element 1101 also provides a virtual button 1104 which may be selected by the user. The virtual button 1104 is a “Watch Previous Episodes” button, and upon selection of the virtual button 1104, the content provider UI element 1101 may show icons or the like so as to allow selection of an appropriate previous episode of “Show 1” (in this case, the content provider UI element 1101 may appear and function similarly to the icon UI element 1002 shown in Figure 10, with each icon being associated with a respective previous episode of “Show 1”). Upon selection of a particular previous episode, a control instruction is transmitted to the TV 100 to download and play back the selected episode. Advantageously, this gives the user an easy way to find and watch content on-demand based on conventionally broadcast content.
It will be appreciated that the above-described embodiments of UI elements (including the keyboard UI element 700, icon UI element 1002 and content provider UI element 1101) represent only a few examples of the types of UI element which may be downloaded and displayed by the tablet 204 following the capture of images of suitable predetermined objects displayed on the screen 502 of the TV 100. In reality, there are a large number of ways in which such a UI element might be implemented, and a large number of different types of predetermined objects which might be detectable in images captured by the tablet 204 so as to initiate download and display of UI elements. No matter the type of predetermined object and associated UI element, however, the present disclosure provides a quick and convenient way of obtaining these UI elements and of allowing a first device (such as the tablet 204 or smartphone 206) to control a second device (such as the first TV 100 or second TV 200) using these UI elements.
Although, in the above-described embodiments, the appropriate UI element is downloaded in response to receiving a URL as an identifier of the UI element, it will be appreciated that the UI elements associated with the predetermined objects detectable by the tablet 204 may already be stored in the storage medium 314 of the tablet 204. In this case, download of the UI element is not required, and the controller 300 of the tablet 204 will instead retrieve the UI element from the storage medium 314 upon receipt of a relevant UI element identifier (which may not necessarily be a URL in this case, but may instead be any number, word or alphanumeric string which allows the relevant UI element to be uniquely identified and retrieved from the storage medium 314). In general, it will be appreciated that each UI element will be implemented in software and will comprise a graphical element (to define how the UI element appears to the user on the screen of the tablet 204), an interactive element (to define how the UI element responds to input from the user received at the user input element 308 of the tablet) and a control instruction element (to define the possible control instructions which may be transmitted to the TV 100). The UI element may be retrieved from the internet (which, advantageously, allows a user to easily obtain a large range of up-to-date UI elements) or may be stored in the storage medium 314 of the tablet 204 (which, advantageously, allows a user to quickly retrieve the UI element without requiring access to the internet).
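The retrieval logic just described, preferring a locally stored UI element and falling back to download, can be sketched as below. This is a minimal illustration, not the disclosed implementation: `local_store` stands in for the storage medium 314 as a plain dict, and `download` is an injected stand-in for an HTTP fetch from an internet server.

```python
# Illustrative sketch (names are assumptions): return the UI element for
# `identifier`, using local storage when possible and downloading otherwise.

def get_ui_element(identifier, local_store, download):
    """Retrieve a UI element by its identifier (a URL or any unique string).

    `local_store` models the tablet's storage medium 314 as a dict;
    `download` models fetching the element from an internet server.
    """
    if identifier in local_store:
        return local_store[identifier]      # no internet access required
    element = download(identifier)          # fetch from the internet server
    local_store[identifier] = element       # cache for faster retrieval next time
    return element

store = {}
first = get_ui_element("https://example.com/ui/keyboard", store,
                       lambda url: "keyboard-ui")
# Second retrieval hits the local store; the download callback is not called.
second = get_ui_element("https://example.com/ui/keyboard", store,
                        lambda url: "should-not-run")
```

Caching the downloaded element combines the two advantages the passage notes: up-to-date elements from the internet on first use, and quick offline retrieval thereafter.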
Settings of the UI element may also be changeable depending on particular settings of the tablet 204. For example, if the language of the operating system of the tablet 204 is set to, say, French, and the predetermined object in the captured image of a TV screen is the on-screen keyboard 106 (which is an English language on-screen keyboard), then the keyboard UI element 700, when downloaded, may be automatically set so that it displays a French language keyboard. Alternatively, the keyboard UI element 700 may be downloaded as a French language keyboard (in this case, the object information request transmitted by the tablet 204 to the TV 100 may include, in addition to the identifier of the on-screen keyboard 106 as the predetermined object, an indicator to indicate that a French language version of the keyboard UI element 700 is desired). Advantageously, the UI element is thus more closely tailored to the individual needs (such as language, in this case) of the user.
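Carrying such a language indicator in the object information request might look like the following sketch. The field names (`object`, `language`) and the locale code are illustrative assumptions; the disclosure does not specify a request format.

```python
# Hedged sketch: build an object information request that optionally carries
# the tablet's locale, so the second apparatus can identify a localized
# version of the UI element. Field names are assumptions for illustration.

def build_object_info_request(object_id, device_locale=None):
    """Construct an object information request for `object_id`.

    If `device_locale` is given (e.g. "fr" for a French operating system),
    it is included so that, say, a French keyboard UI element is returned.
    """
    request = {"object": object_id}
    if device_locale:
        request["language"] = device_locale
    return request

req = build_object_info_request("on-screen keyboard", device_locale="fr")
default_req = build_object_info_request("icon menu")   # no locale indicator
```

The indicator is optional, matching the passage: the first alternative (localizing the element after download) needs no change to the request at all.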
In embodiments, in order to improve the messaging and selection efficiency when there is more than one TV on the network (and thus, following the detection of a predetermined object in a captured image of the screen of one of the TVs, an object information request is sent to each TV), each TV may only respond to an object information request by sending object information when the controller 400 of that TV determines that the TV is actually displaying the predetermined object identified in the object information request. For example, in the case that first TV 100 and second TV 200 are each connected to the network 202, and an object information request comprising an identifier of on-screen keyboard 106 is transmitted to each of the first and second TVs from the tablet 204, the controller of each of the first and second TVs will only allow object information to be transmitted back to the tablet 204 if the TV concerned is actually displaying the on-screen keyboard 106 at the time that the object information request is received. Thus, if the first TV 100 only is displaying the on-screen keyboard 106, then only the first TV 100 will transmit object information back to the tablet 204. Advantageously, this reduces traffic over the network (since only one TV, rather than two, is transmitting object information). It also means that the selection step 814 of the process of Figure 8 is avoided when there are multiple TVs on the network but only one TV is displaying the predetermined object identified in the object information request (thus resulting in a reduced amount of processing at the tablet 204). Of course, it will be appreciated that if two or more TVs on the network are simultaneously displaying the predetermined object identified in the object information request (for example, if both the first TV 100 and second TV 200 of network 202 were displaying the on-screen keyboard), then the selection step 814 will still need to be carried out.
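The TV-side behaviour described above, answering an object information request only when the identified object is actually on screen, can be sketched as follows. The class, field and identifier names are assumptions chosen for illustration.

```python
# Minimal sketch (names are assumptions): each TV replies to an object
# information request only if it is currently displaying the identified
# predetermined object; otherwise it stays silent, reducing network traffic.

class TV:
    def __init__(self, name, ui_elements, currently_displayed):
        self.name = name
        self.ui_elements = ui_elements                # object id -> UI element id
        self.currently_displayed = set(currently_displayed)

    def handle_object_info_request(self, object_id):
        if object_id not in self.currently_displayed:
            return None   # not displaying the object: send no object information
        return {"device": self.name,
                "ui_element": self.ui_elements[object_id]}

tv1 = TV("first TV", {"keyboard": "keyboard-ui"}, {"keyboard"})
tv2 = TV("second TV", {"keyboard": "keyboard-ui"}, set())

# Only the TV displaying the on-screen keyboard replies, so the tablet
# receives a single response and the selection step is avoided.
replies = [r for tv in (tv1, tv2)
           if (r := tv.handle_object_info_request("keyboard"))]
```

If both TVs were displaying the keyboard, `replies` would contain two entries and a selection step (manual or automatic) would still be needed, as the passage notes.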
Although the above-mentioned embodiments relate to consumer devices, the present disclosure is not limited to this. For example, instead of being a TV, set top box or video games console (as previously described), the second device may be an industrial or medical device which is connectable with the first device (such as a tablet or smartphone) so as to enable control of the second device with the first device. For example, the second device may be a cardiac monitor (not shown). In this case, the tablet 204 (which may be connected to the cardiac monitor over the network 202) captures an image of an electrocardiograph (ECG) image output by the cardiac monitor as the predetermined object. An object information request comprising an identifier of the ECG image as the predetermined object will then be transmitted to the cardiac monitor which, in response, will transmit object information including an identifier of the cardiac monitor and an identifier of a UI element associated with the ECG image back to the tablet 204. The UI element will then allow the user of the tablet 204 to perform various tasks associated with the cardiac monitor. For example, the UI element may provide the user with an interface for controlling the cardiac monitor to transmit patient information to the tablet 204 for review by the user (who may be a doctor or nurse, for example) or for controlling various functions of the cardiac monitor. This demonstrates the wide range of potential applications of the present disclosure.
In the above-mentioned embodiments, the first device (such as the tablet 204) transmits the object information request to each second device (such as the first TV 100 and second TV 200) over a suitable communication channel, and each second device then transmits object information back to the first device. The object information from each second device comprises both an identifier of that device and an identifier of the UI element identified in the object information request. In an alternative embodiment, however, there may be a further identifying object associated with each second device. This identifying object uniquely identifies the second device with which it is associated (for example, comprising network credentials of the second device if the communication channel is provided by the network 202, or Bluetooth ® or Wi-Fi Direct ® pairing information for the second device if the communication channel is a Bluetooth ® or Wi-Fi Direct ® communication channel) and is detectable by the object detector 304 of the first device in images captured by the camera 302 of the first device.
For example, this further object may be a Quick Response (QR) code which is displayed on the screen of each of the first TV 100 and second TV 200 together with the predetermined object (on-screen keyboard, icon menu, content provider logo, etc.) and which comprises information uniquely identifying each of the respective first and second TVs. That is, first TV 100 will display a first QR code which uniquely identifies the first TV 100, and the second TV 200 will display a second QR code which uniquely identifies the second TV 200. When an image of the screen 502 of the first TV 100 is captured by the camera 302, then both the predetermined object (for example, on-screen keyboard 106) and the first QR code will be captured in the image and detected by the object detector 304. The object detector 304 will perform a read process of the information comprised in the first QR code (using a conventional QR code reading technique) so as to identify that the captured image is a captured image relating to the first TV 100. In other words, because of the QR code, which acts as the identifying object, it is deduced that the detected predetermined object is associated with the first TV 100 rather than the second TV 200.
In this case, the object information request is sent only to the first TV 100 (rather than to both the first and second TVs), since this is the TV identified by the QR code in the captured image. The first TV 100 then transmits object information back to the first device. Because the first device already knows that it is the first TV 100 from which it is to receive the object information, the object information transmitted by the first TV 100 does not necessarily need to comprise the identifier of the first TV 100 (the identifier of the first TV 100 is already known to the first device from the QR code). Thus, in this alternative embodiment, the object information may comprise only the identifier of the UI element identified in the object information request.
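The addressing scheme of this alternative embodiment can be sketched as below: the identifying object (here a QR code) read from the captured image selects the one device to contact, and the reply need not repeat the device identifier. The function and field names are illustrative assumptions, and each device is modelled as a simple callable rather than a networked apparatus.

```python
# Sketch of the alternative embodiment (all names are assumptions): the QR
# code in the captured image identifies the second device, so the object
# information request is sent only to that device, and its reply omits the
# device identifier, which the first device already knows.

def route_request(detected_objects, devices):
    """Send the object information request only to the identified device.

    `detected_objects` models the object detector's output for one captured
    image; `devices` maps device ids to handlers that return object
    information containing only the UI element identifier.
    """
    device_id = detected_objects["qr_code"]            # read from the QR code
    predetermined = detected_objects["predetermined"]  # e.g. the on-screen keyboard
    reply = devices[device_id](predetermined)
    # The first device pairs the reply with the identifier it already holds.
    return device_id, reply["ui_element"]

devices = {
    "first TV": lambda obj: {"ui_element": f"{obj}-ui"},
    "second TV": lambda obj: {"ui_element": f"{obj}-ui"},
}
result = route_request({"qr_code": "first TV", "predetermined": "keyboard"},
                       devices)
```

Because only one device is contacted, no selection among multiple replies is ever needed, which is the bandwidth and processing advantage noted in the following paragraph of the description.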
Advantageously, such an arrangement reduces bandwidth wastage, since an object information request is only sent to the second device for which an associated image (comprising the predetermined object and the identifying object) has been captured, rather than to all second devices with which communication is possible. Additionally, it alleviates the need for selection (manual or automatic, as already discussed) when object information is received from a plurality of second devices, since object information will only be received from the second device to which, on the basis of the QR code (or the like), the object information request is sent. Any time delay and/or processing burden associated with the selection is therefore alleviated.
It will be appreciated that any suitable identifying object may be used (not just a QR code, which is only one example), as long as the identifying object is detectable by the object detector 304 and comprises information which can be read by the object detector 304 so as to uniquely identify the second device with which it is associated. The identifying object may be comprised in the image generated by the second device (as discussed above, when the first and second QR codes respectively appear on the screen of the first and second TVs), or may be located elsewhere, such as permanently on an outer frame of the first or second TV.
Various features of the present disclosure are defined with reference to the following numbered clauses: 1. A first apparatus for controlling a second apparatus, the first apparatus comprising: a camera operable to capture an image of a displayed image generated by the second apparatus; object detection circuitry operable to detect a predetermined object in the captured image; transmitter circuitry operable to transmit an object information request to the second apparatus, the object information request comprising an identifier of the detected predetermined object; receiver circuitry operable to receive object information from the second apparatus, the object information comprising an identifier of a user interface (UI) element associated with the predetermined object identified in the object information request; display output circuitry operable to output for display the UI element identified in the object information; and input circuitry operable to receive an input command associated with the displayed UI element; wherein the transmitter circuitry is operable to transmit a control instruction to control an operation of the second apparatus on the basis of the input command. 2. 
A first apparatus according to clause 1, wherein the first apparatus comprises controller circuitry, and wherein: the transmitter circuitry is operable to transmit the object information request to a plurality of second apparatuses; the receiver circuitry is operable to receive object information from two or more of the plurality of second apparatuses, the object information received from each of the two or more second apparatuses comprising an identifier of the second apparatus and an identifier of a UI element associated with the predetermined object identified in the object information request; the controller circuitry is operable to select one of the two or more of the plurality of second apparatuses from which object information is received; the display output circuitry is operable to output for display the UI element identified in the object information received from the selected one of the two or more of the plurality of second apparatuses; the input circuitry is operable to receive an input command associated with the displayed UI element; and the transmitter circuitry is operable to transmit a control instruction to control an operation of the selected one of the two or more second apparatuses on the basis of the input command and on the basis of the identifier of the selected one of the two or more second apparatuses. 3. A first apparatus according to clause 1, wherein: the object detection circuitry is operable to detect an identifying object in the captured image, the identifying object comprising an identifier of the second apparatus which generates the displayed image; and the transmitter circuitry is operable to transmit the object information request and the control instruction to the second apparatus on the basis of the identifier of the identifying object. 4. 
A first apparatus according to any preceding clause, wherein: the predetermined object is an on-screen keyboard; the UI element is a keyboard UI element, the keyboard UI element comprising a plurality of virtual keys each representing at least one alphanumeric character; the input command received by the input circuitry is an input command to select one of the virtual keys of the keyboard UI element; and the control instruction transmitted by the transmitter circuitry is a control instruction for controlling the second apparatus to identify the at least one alphanumeric character represented by the selected virtual key of the keyboard UI element as a text input. 5. A first apparatus according to any one of clauses 1 to 3, wherein: the predetermined object is an icon menu, the icon menu comprising one or more icons each associated with respective content; the UI element is an icon UI element, the icon UI element comprising one or more selectable icons each corresponding to a respective one of the one or more icons of the icon menu; the input command received by the input circuitry is an input command to select one of the selectable icons of the icon UI element; and the control instruction transmitted by the transmitter circuitry is a control instruction for controlling the second apparatus to open the content associated with the icon of the icon menu corresponding to the selected icon of the icon UI element. 6. 
A first apparatus according to any one of clauses 1 to 3, wherein: the predetermined object is a logo associated with a content provider; the UI element is a content provider UI element, the content provider UI element comprising one or more selectable icons each associated with respective content available from the content provider; the input command received by the input circuitry is an input command to select one of the selectable icons of the content provider UI element; and the control instruction transmitted by the transmitter circuitry is a control instruction for controlling the second apparatus to open the content associated with the selected icon of the content provider UI element. 7. A first apparatus according to clause 2, wherein the controller circuitry is operable to select one of the two or more of the plurality of second apparatuses from which object information is received on the basis of a selection command received by the input circuitry. 8. A first apparatus according to clause 2, wherein the controller circuitry is operable to select one of the two or more of the plurality of second apparatuses from which object information is received on the basis of a measurement of a radio signal transmitted from each of the two or more of the plurality of second apparatuses from which object information is received. 9. A first apparatus according to any preceding clause, wherein the identifier of the UI element associated with the predetermined object identified in the object information request comprises information for allowing the download of the UI element from an internet server. 10. 
A second apparatus for being controlled by a first apparatus, the second apparatus comprising: image generator circuitry for generating an image and outputting the generated image for display, the generated image comprising a predetermined object which is detectable by the first apparatus in an image of the generated image captured by a camera of the first apparatus; receiver circuitry operable to receive an object information request from the first apparatus, the object information request comprising an identifier of the detected predetermined object; and transmitter circuitry operable to transmit object information to the first apparatus, the object information comprising an identifier of a user interface (UI) element associated with the predetermined object identified in the object information request; wherein the receiver circuitry is operable to receive a control instruction from the first apparatus to control an operation of the second apparatus, the control instruction being transmitted by the first apparatus on the basis of an input command received at the first apparatus, the input command being associated with the UI element identified in the object information when the UI element is displayed. 11. A second apparatus according to clause 10, wherein the object information comprises an identifier of the second apparatus. 12. A second apparatus according to clause 10, wherein the second apparatus is associated with an identifying object for being captured by the camera of the first apparatus, the identifying object comprising an identifier of the second apparatus. 13. A second apparatus according to clause 12, wherein the image generated by the image generator circuitry comprises the identifying object. 14. 
A second apparatus according to any one of clauses 10 to 13, wherein: the predetermined object is an on-screen keyboard; the UI element is a keyboard UI element, the keyboard UI element comprising a plurality of virtual keys each representing at least one alphanumeric character; the input command received at the first apparatus is an input command to select one of the virtual keys of the keyboard UI element; and the control instruction received by the receiver circuitry is a control instruction for controlling control circuitry of the second apparatus to identify the at least one alphanumeric character represented by the selected virtual key of the keyboard UI element as a text input. 15. A second apparatus according to any one of clauses 10 to 13, wherein: the predetermined object is an icon menu, the icon menu comprising one or more icons each associated with respective content; the UI element is an icon UI element, the icon UI element comprising one or more selectable icons each corresponding to a respective one of the one or more icons of the icon menu; the input command received at the first apparatus is an input command to select one of the selectable icons of the icon UI element; and the control instruction received by the receiver circuitry is a control instruction for controlling control circuitry of the second apparatus to open the content associated with the icon of the icon menu corresponding to the selected icon of the icon UI element. 16. 
A second apparatus according to any one of clauses 10 to 13, wherein: the predetermined object is a logo associated with a content provider; the UI element is a content provider UI element, the content provider UI element comprising one or more selectable icons each associated with respective content available from the content provider; the input command received at the first apparatus is an input command to select one of the selectable icons of the content provider UI element; and the control instruction received by the receiver circuitry is a control instruction for controlling control circuitry of the second apparatus to open the content associated with the selected icon of the content provider UI element. 17. A second apparatus according to any one of clauses 10 to 16, wherein the identifier of the UI element associated with the predetermined object identified in the object information request comprises information for allowing the download of the UI element from an internet server. 18. A system comprising a first apparatus according to clause 1 and a second apparatus according to clause 10. 19. 
A method of operating a first apparatus for controlling a second apparatus, the first apparatus comprising a camera, transmitter circuitry, receiver circuitry, display output circuitry and input circuitry, wherein the method comprises: controlling the camera to capture an image of a displayed image generated by the second apparatus; detecting a predetermined object in the captured image; controlling the transmitter circuitry to transmit an object information request to the second apparatus, the object information request comprising an identifier of the detected predetermined object; controlling the receiver circuitry to receive object information from the second apparatus, the object information comprising an identifier of a user interface (UI) element associated with the predetermined object identified in the object information request; controlling the display output circuitry to output for display the UI element identified in the object information; controlling the input circuitry to receive an input command associated with the displayed UI element; and controlling the transmitter circuitry to transmit a control instruction to control an operation of the second apparatus on the basis of the input command. 20. A storage medium storing a computer program for controlling a computer to perform a method according to clause 19. 21. 
A method of operating a second apparatus for being controlled by a first apparatus, the second apparatus comprising image generator circuitry, receiver circuitry and transmitter circuitry, wherein the method comprises: controlling the image generator circuitry to generate an image and to output the generated image for display, the generated image comprising a predetermined object which is detectable by the first apparatus in an image of the generated image captured by a camera of the first apparatus; controlling the receiver circuitry to receive an object information request from the first apparatus, the object information request comprising an identifier of the detected predetermined object; controlling the transmitter circuitry to transmit object information to the first apparatus, the object information comprising an identifier of a user interface (UI) element associated with the predetermined object identified in the object information request; and controlling the receiver circuitry to receive a control instruction from the first apparatus to control an operation of the second apparatus, the control instruction being transmitted by the first apparatus on the basis of an input command received at the first apparatus, the input command being associated with the UI element identified in the object information when the UI element is displayed. 22. A storage medium storing a computer program for controlling a computer to perform a method according to clause 21.
Obviously, numerous modifications and variations of the present disclosure are possible in light of the above teachings. It is therefore to be understood that within the scope of the appended claims, the disclosure may be practiced otherwise than as specifically described herein.
In so far as embodiments of the disclosure have been described as being implemented, at least in part, by software-controlled data processing apparatus, it will be appreciated that a non-transitory machine-readable medium carrying such software, such as an optical disk, a magnetic disk, semiconductor memory or the like, is also considered to represent an embodiment of the present disclosure.
It will be appreciated that the above description for clarity has described embodiments with reference to different functional units, circuitry and/or processors. However, it will be apparent that any suitable distribution of functionality between different functional units, circuitry and/or processors may be used without detracting from the embodiments.
Described embodiments may be implemented in any suitable form including hardware, software, firmware or any combination of these. Described embodiments may optionally be implemented at least partly as computer software running on one or more data processors and/or digital signal processors. The elements and components of any embodiment may be physically, functionally and logically implemented in any suitable way. Indeed the functionality may be implemented in a single unit, in a plurality of units or as part of other functional units. As such, the disclosed embodiments may be implemented in a single unit or may be physically and functionally distributed between different units, circuitry and/or processors.
Although the present disclosure has been described in connection with some embodiments, it is not intended to be limited to the specific form set forth herein. Additionally, although a feature may appear to be described in connection with particular embodiments, one skilled in the art would recognize that various features of the described embodiments may be combined in any manner suitable to implement the technique.

Claims (22)

1. A first apparatus for controlling a second apparatus, the first apparatus comprising: a camera operable to capture an image of a displayed image generated by the second apparatus; object detection circuitry operable to detect a predetermined object in the captured image; transmitter circuitry operable to transmit an object information request to the second apparatus, the object information request comprising an identifier of the detected predetermined object; receiver circuitry operable to receive object information from the second apparatus, the object information comprising an identifier of a user interface (UI) element associated with the predetermined object identified in the object information request; display output circuitry operable to output for display the UI element identified in the object information; and input circuitry operable to receive an input command associated with the displayed UI element; wherein the transmitter circuitry is operable to transmit a control instruction to control an operation of the second apparatus on the basis of the input command.
2. A first apparatus according to claim 1, wherein the first apparatus comprises controller circuitry, and wherein: the transmitter circuitry is operable to transmit the object information request to a plurality of second apparatuses; the receiver circuitry is operable to receive object information from two or more of the plurality of second apparatuses, the object information received from each of the two or more second apparatuses comprising an identifier of the second apparatus and an identifier of a UI element associated with the predetermined object identified in the object information request; the controller circuitry is operable to select one of the two or more of the plurality of second apparatuses from which object information is received; the display output circuitry is operable to output for display the UI element identified in the object information received from the selected one of the two or more of the plurality of second apparatuses; the input circuitry is operable to receive an input command associated with the displayed UI element; and the transmitter circuitry is operable to transmit a control instruction to control an operation of the selected one of the two or more second apparatuses on the basis of the input command and on the basis of the identifier of the selected one of the two or more second apparatuses.
3. A first apparatus according to claim 1, wherein: the object detection circuitry is operable to detect an identifying object in the captured image, the identifying object comprising an identifier of the second apparatus which generates the displayed image; and the transmitter circuitry is operable to transmit the object information request and the control instruction to the second apparatus on the basis of the identifier of the identifying object.
4. A first apparatus according to claim 1, wherein: the predetermined object is an on-screen keyboard; the UI element is a keyboard UI element, the keyboard UI element comprising a plurality of virtual keys each representing at least one alphanumeric character; the input command received by the input circuitry is an input command to select one of the virtual keys of the keyboard UI element; and the control instruction transmitted by the transmitter circuitry is a control instruction for controlling the second apparatus to identify the at least one alphanumeric character represented by the selected virtual key of the keyboard UI element as a text input.
5. A first apparatus according to claim 1, wherein: the predetermined object is an icon menu, the icon menu comprising one or more icons each associated with respective content; the UI element is an icon UI element, the icon UI element comprising one or more selectable icons each corresponding to a respective one of the one or more icons of the icon menu; the input command received by the input circuitry is an input command to select one of the selectable icons of the icon UI element; and the control instruction transmitted by the transmitter circuitry is a control instruction for controlling the second apparatus to open the content associated with the icon of the icon menu corresponding to the selected icon of the icon UI element.
6. A first apparatus according to claim 1, wherein: the predetermined object is a logo associated with a content provider; the UI element is a content provider UI element, the content provider UI element comprising one or more selectable icons each associated with respective content available from the content provider; the input command received by the input circuitry is an input command to select one of the selectable icons of the content provider UI element; and the control instruction transmitted by the transmitter circuitry is a control instruction for controlling the second apparatus to open the content associated with the selected icon of the content provider UI element.
7. A first apparatus according to claim 2, wherein the controller circuitry is operable to select one of the two or more of the plurality of second apparatuses from which object information is received on the basis of a selection command received by the input circuitry.
8. A first apparatus according to claim 2, wherein the controller circuitry is operable to select one of the two or more of the plurality of second apparatuses from which object information is received on the basis of a measurement of a radio signal transmitted from each of the two or more of the plurality of second apparatuses from which object information is received.
9. A first apparatus according to claim 1, wherein the identifier of the UI element associated with the predetermined object identified in the object information request comprises information for allowing the download of the UI element from an internet server.
10. A second apparatus for being controlled by a first apparatus, the second apparatus comprising: image generator circuitry for generating an image and outputting the generated image for display, the generated image comprising a predetermined object which is detectable by the first apparatus in an image of the generated image captured by a camera of the first apparatus; receiver circuitry operable to receive an object information request from the first apparatus, the object information request comprising an identifier of the detected predetermined object; and transmitter circuitry operable to transmit object information to the first apparatus, the object information comprising an identifier of a user interface (UI) element associated with the predetermined object identified in the object information request; wherein the receiver circuitry is operable to receive a control instruction from the first apparatus to control an operation of the second apparatus, the control instruction being transmitted by the first apparatus on the basis of an input command received at the first apparatus, the input command being associated with the UI element identified in the object information when the UI element is displayed.
11. A second apparatus according to claim 10, wherein the object information comprises an identifier of the second apparatus.
12. A second apparatus according to claim 10, wherein the second apparatus is associated with an identifying object for being captured by the camera of the first apparatus, the identifying object comprising an identifier of the second apparatus.
13. A second apparatus according to claim 12, wherein the image generated by the image generator circuitry comprises the identifying object.
14. A second apparatus according to claim 10, wherein: the predetermined object is an on-screen keyboard; the UI element is a keyboard UI element, the keyboard UI element comprising a plurality of virtual keys each representing at least one alphanumeric character; the input command received at the first apparatus is an input command to select one of the virtual keys of the keyboard UI element; and the control instruction received by the receiver circuitry is a control instruction for controlling control circuitry of the second apparatus to identify the at least one alphanumeric character represented by the selected virtual key of the keyboard UI element as a text input.
15. A second apparatus according to claim 10, wherein: the predetermined object is an icon menu, the icon menu comprising one or more icons each associated with respective content; the UI element is an icon UI element, the icon UI element comprising one or more selectable icons each corresponding to a respective one of the one or more icons of the icon menu; the input command received at the first apparatus is an input command to select one of the selectable icons of the icon UI element; and the control instruction received by the receiver circuitry is a control instruction for controlling control circuitry of the second apparatus to open the content associated with the icon of the icon menu corresponding to the selected icon of the icon UI element.
16. A second apparatus according to claim 10, wherein: the predetermined object is a logo associated with a content provider; the UI element is a content provider UI element, the content provider UI element comprising one or more selectable icons each associated with respective content available from the content provider; the input command received at the first apparatus is an input command to select one of the selectable icons of the content provider UI element; and the control instruction received by the receiver circuitry is a control instruction for controlling control circuitry of the second apparatus to open the content associated with the selected icon of the content provider UI element.
17. A second apparatus according to claim 10, wherein the identifier of the UI element associated with the predetermined object identified in the object information request comprises information for allowing the download of the UI element from an internet server.
18. A system comprising a first apparatus according to claim 1 and a second apparatus according to claim 10.
19. A method of operating a first apparatus for controlling a second apparatus, the first apparatus comprising a camera, transmitter circuitry, receiver circuitry, display output circuitry and input circuitry, wherein the method comprises: controlling the camera to capture an image of a displayed image generated by the second apparatus; detecting a predetermined object in the captured image; controlling the transmitter circuitry to transmit an object information request to the second apparatus, the object information request comprising an identifier of the detected predetermined object; controlling the receiver circuitry to receive object information from the second apparatus, the object information comprising an identifier of a user interface (UI) element associated with the predetermined object identified in the object information request; controlling the display output circuitry to output for display the UI element identified in the object information; controlling the input circuitry to receive an input command associated with the displayed UI element; and controlling the transmitter circuitry to transmit a control instruction to control an operation of the second apparatus on the basis of the input command.
20. A storage medium storing a computer program for controlling a computer to perform a method according to claim 19.
21. A method of operating a second apparatus for being controlled by a first apparatus, the second apparatus comprising image generator circuitry, receiver circuitry and transmitter circuitry, wherein the method comprises: controlling the image generator circuitry to generate an image and to output the generated image for display, the generated image comprising a predetermined object which is detectable by the first apparatus in an image of the generated image captured by a camera of the first apparatus; controlling the receiver circuitry to receive an object information request from the first apparatus, the object information request comprising an identifier of the detected predetermined object; controlling the transmitter circuitry to transmit object information to the first apparatus, the object information comprising an identifier of a user interface (UI) element associated with the predetermined object identified in the object information request; and controlling the receiver circuitry to receive a control instruction from the first apparatus to control an operation of the second apparatus, the control instruction being transmitted by the first apparatus on the basis of an input command received at the first apparatus, the input command being associated with the UI element identified in the object information when the UI element is displayed.
22. A storage medium storing a computer program for controlling a computer to perform a method according to claim 21.
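The exchange recited in claims 1, 10, 19 and 21 (object information request, object information response naming a UI element, then a control instruction carrying the user's input) can be sketched as a toy message flow. The patent specifies no wire format or API, so every class, field and identifier below (ObjectInfoRequest, apparatus_id, ui_element_id, and so on) is an illustrative assumption, not the claimed implementation; image capture, object detection and UI display are elided.

```python
from dataclasses import dataclass

# Hypothetical message types mirroring the claimed exchange; names and
# fields are illustrative assumptions, not taken from the patent.

@dataclass
class ObjectInfoRequest:
    object_id: str        # identifier of the predetermined object detected in the captured image

@dataclass
class ObjectInfo:
    apparatus_id: str     # identifier of the second apparatus (cf. claim 11)
    ui_element_id: str    # identifier of the UI element, e.g. a download location (cf. claims 9, 17)

@dataclass
class ControlInstruction:
    apparatus_id: str
    command: str          # e.g. the alphanumeric character of a selected virtual key (cf. claim 4)

class SecondApparatus:
    """Toy model of the controlled device (e.g. a television)."""
    def __init__(self, apparatus_id, ui_map):
        self.apparatus_id = apparatus_id
        self.ui_map = ui_map      # maps predetermined-object id -> UI element id
        self.received = []        # commands accepted via control instructions

    def handle_request(self, req):
        # Respond with object information naming the UI element
        # associated with the identified predetermined object.
        return ObjectInfo(self.apparatus_id, self.ui_map[req.object_id])

    def handle_control(self, instr):
        self.received.append(instr.command)

class FirstApparatus:
    """Toy model of the controlling device (e.g. a camera-equipped phone)."""
    def control(self, second, detected_object_id, user_input):
        # 1. Send the object information request for the detected object.
        info = second.handle_request(ObjectInfoRequest(detected_object_id))
        # 2. Display of the identified UI element and capture of the input
        #    command are elided; user_input stands in for that command.
        # 3. Send the control instruction on the basis of the input command.
        second.handle_control(ControlInstruction(info.apparatus_id, user_input))
        return info.ui_element_id

tv = SecondApparatus("tv-1", {"onscreen-keyboard": "keyboard-ui"})
phone = FirstApparatus()
ui = phone.control(tv, "onscreen-keyboard", "A")
```

In the multi-device variant of claim 2, the first apparatus would collect one ObjectInfo per responding second apparatus and select among them (by user choice or radio-signal measurement, claims 7 and 8) before step 3.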
GB1517062.4A 2015-09-28 2015-09-28 A first apparatus for controlling a second apparatus Withdrawn GB2542777A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1517062.4A GB2542777A (en) 2015-09-28 2015-09-28 A first apparatus for controlling a second apparatus

Publications (2)

Publication Number Publication Date
GB201517062D0 GB201517062D0 (en) 2015-11-11
GB2542777A true GB2542777A (en) 2017-04-05

Family

ID=54544170

Country Status (1)

Country Link
GB (1) GB2542777A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090285443A1 (en) * 2008-05-15 2009-11-19 Sony Ericsson Mobile Communications Ab Remote Control Based on Image Recognition
WO2011019154A2 (en) * 2009-08-14 2011-02-17 Lg Electronics Inc. Remote control device and remote control method using the same
US20120041925A1 (en) * 2008-04-18 2012-02-16 Zilog, Inc. Using HDMI-CEC to identify a codeset
US20120068857A1 (en) * 2010-09-22 2012-03-22 Apple Inc. Configurable remote control
EP2775374A2 (en) * 2013-03-04 2014-09-10 Honeywell International Inc. User interface and method
EP2874401A1 (en) * 2013-11-19 2015-05-20 Humax Co., Ltd. Apparatus, method, and system for controlling device based on user interface that reflects user's intention

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111400729A (en) * 2020-03-10 2020-07-10 维沃移动通信有限公司 Control method and electronic device
CN111400729B (en) * 2020-03-10 2023-10-20 维沃移动通信有限公司 Control method and electronic equipment
CN113296677A (en) * 2021-05-27 2021-08-24 维沃移动通信有限公司 Input method and device

Similar Documents

Publication Publication Date Title
US9811240B2 (en) Operating method of image display apparatus
US10474322B2 (en) Image display apparatus
US11449297B2 (en) Image display apparatus
JP5566428B2 (en) Remote controller and control method for multimedia device
US20160021415A1 (en) Image display apparatus and method for operating the same
US9715287B2 (en) Image display apparatus and method for operating the same
EP3396965B1 (en) Image display device
CN113259741B (en) Demonstration method and display device for classical viewpoint of episode
US20100302059A1 (en) Graphical user interface and device for controlling it
US20120260292A1 (en) Remote control system, television, remote controller and computer-readable medium
US20140366061A1 (en) Method for operating image display apparatus
US11822776B2 (en) Methods, systems, and media for providing media guidance with contextual controls
US10219045B2 (en) Server, image providing apparatus, and image providing system comprising same
GB2542777A (en) A first apparatus for controlling a second apparatus
KR102104438B1 (en) Image display apparatus, and method for operating the same
KR101971965B1 (en) Multimedia device for communicating with at least one device and method for controlling the same
KR101990866B1 (en) Method and apparatus of providing broadcast service
KR102110532B1 (en) Image display apparatus, and method for operating the same
EP2703934A1 (en) Method and display apparatus for processing in input signal
KR101873753B1 (en) Remote controller and method for processing data in multimedia device
JP2007329650A (en) Remote-control device, display device, and information acquisition system using them
KR20160147578A (en) Video display device and operating method thereof
KR102281839B1 (en) Apparatus for providing Image
US10555028B2 (en) Image providing device
US12093309B2 (en) Display device

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)