WO2006079939A2 - Method for control of a device - Google Patents

Method for control of a device

Info

Publication number
WO2006079939A2
WO2006079939A2 (PCT/IB2006/050164)
Authority
WO
WIPO (PCT)
Prior art keywords
visual identifier
pointing device
pointing
trigger signal
target area
Prior art date
Application number
PCT/IB2006/050164
Other languages
French (fr)
Other versions
WO2006079939A3 (en)
Inventor
Jan Kneissler
Original Assignee
Philips Intellectual Property & Standards GmbH
Koninklijke Philips Electronics N.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Philips Intellectual Property & Standards GmbH and Koninklijke Philips Electronics N.V.
Priority to EP06710683A (EP1844456A2)
Priority to JP2007552764A (JP2008529147A)
Publication of WO2006079939A2
Publication of WO2006079939A3

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08C - TRANSMISSION SYSTEMS FOR MEASURED VALUES, CONTROL OR SIMILAR SIGNALS
    • G08C 17/00 - Arrangements for transmitting signals characterised by the use of a wireless electrical link

Definitions

  • the image analysis unit of the device control interface preferably compares the image of the target area to a number of pre-defined templates, by applying the usual image processing techniques or computer vision algorithms.
  • a single pre-defined template might suffice for the comparison, or it may be necessary to compare the image data to more than one template.
  • Pre-defined templates can be stored in an internal memory of the device control interface, or might equally be accessed from an external source.
  • the device control interface comprises an accessing unit with an appropriate interface for obtaining pre-defined templates for the objects from, for example, an internal or external memory, a memory stick, an intranet or the internet.
  • a manufacturer of an appliance which can be controlled by a pointing device according to the invention can make templates for these appliances available to users of the devices.
  • a template can be a graphic representation of any kind of object.
  • a template might show the positions of a number of menu options for the television, so that, by analysing image data of the target area when the user aims the pointing device at the television, the image analysis unit can determine which option is being aimed at by the user.
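As a concrete illustration of this comparison, here is a minimal sketch using OpenCV, one possible choice of the usual image processing tooling; the file names and threshold are assumptions, and perspective distortion is ignored here, since the transformation described further below deals with it.

```python
# Sketch: locating a device's menu template inside a target area image.
import cv2

target = cv2.imread("target_area.png", cv2.IMREAD_GRAYSCALE)      # image 3 (illustrative file)
template = cv2.imread("tv_menu_template.png", cv2.IMREAD_GRAYSCALE)

# Slide the template over the target area image and take the best match.
scores = cv2.matchTemplate(target, template, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_location = cv2.minMaxLoc(scores)

if best_score > 0.8:  # assumed acceptance threshold
    print(f"template found at {best_location} (score {best_score:.2f})")
```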
  • a device control interface is implemented to allow interaction with the device to be controlled.
  • a device control interface comprises at least the image analysis unit for analysing the images, the control signal generation unit for generating a control signal for the device to be controlled, a device list management unit for overseeing the relationship between devices, visual identifiers and trigger signals, and also the interface for communicating the control signals to the device to be controlled.
  • a device control interface can be incorporated in the pointing device or can be realised as an external unit, coupled with the pointing device by means of a suitable communication interface.
  • In the case where the device control interface is incorporated in the pointing device, it can obtain the images directly from the camera. The image analysis, device list management, and control signal generation can take place in the pointing device, and the control signals can be transmitted in appropriate form from the pointing device directly to the device to be controlled.
  • the capabilities of these units might be limited by the physical dimensions of the pointing device, which is preferably realised to be held comfortably in the hand. Such an image analysis unit might suffice for rudimentary image analysis only, for example for the identification of the device at which the user is aiming the pointing device.
  • the more advanced image processing required for determining the user's intentions and for generating the resulting control signals, which necessitates a larger unit, might take place in an external device control interface. Therefore, the pointing device might altogether dispense with image analysis, device list management, and control signal generation functions, allowing these tasks to be carried out by the external device control interface, thereby allowing the pointing device to be realised in a smaller, more compact form.
  • An external device control interface as described above might be a standalone device, and might be realised so that the device identification, image processing, device list management and control signal generation take place centrally, while the trigger signals might be transmitted from the device control interface, from the pointing device or from another location.
  • a receiver for receiving an end-of-feedback signal might be realised, for example, as part of the pointing device or be incorporated in the device control interface.
  • the camera of the pointing device might be realised such that the image of the target area comprises the relevant image data. For instance, when identifying a device, it may only be necessary to detect the relative brightness of an image, or the shape of the visual identifier in the image. If the visual identifier operates in the part of the electromagnetic spectrum that is invisible to humans, the camera might be equipped with a suitable filter for detecting, for example, infrared or ultraviolet radiation.
  • the image analysis unit might detect a pattern, colour or shape corresponding to a visual identifier description of the device list management unit.
  • the image analysis unit can identify this pattern and thus identify the device with which the visual identifier is associated.
  • Another possible way of identifying the device to be controlled is to directly compare the received target area image encompassing the visual identifier with a pre-defined template of the device, also encompassing the visual identifier, and to use methods such as pattern-matching or pattern recognition, based on the information from the device list management unit, to deduce the identity of the device being pointed at. To determine what the user actually intends the device to do, for example to determine an option at which he is pointing, a more detailed image generation might be necessary, as mentioned above.
  • the camera might capture any significant points of the entire image, e.g. enhanced contours, corners, edges etc., or it might capture a more detailed image of better picture quality.
  • the chosen option is preferably determined by identifying the option in the image that contains or encompasses a particular target point in the target area.
  • a fixed point in the target area image, preferably the centre of the target area image, obtained by extending an imaginary line in the direction of the longitudinal axis of the pointing device to the option, might be used as the target point.
  • a method of processing the target area images of the device using computer vision algorithms might comprise detecting distinctive points in the target image, determining corresponding points in the template of the device, and developing a transformation for mapping the points in the target image to the corresponding points in the template.
  • the distinctive points of the target area image might be distinctive points of the device or might equally be points in the area surrounding the device.
  • This transformation can then be used to determine the position and aspect of the pointing device relative to the device so that the intersection point of an axis of the pointing device with the device to be controlled can be located in the template.
  • the position of this intersection in the template corresponds to the target point on the device to be controlled and can be used to easily determine which option has been targeted by the user.
  • comparing the target area image with the pre-defined template may be restricted to identifying and comparing only salient points such as distinctive corner points.
  • the term "comparing", as applicable in this invention, is to be understood in a broad sense, i.e. by only comparing sufficient features in order to quickly identify the option at which the user is aiming.
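Assuming the target point has already been mapped into template coordinates (the transformation for doing so is described further below), the option lookup itself reduces to a point-in-region test; the option rectangles here are invented for illustration.

```python
# Sketch: determining the chosen option from the target point P_T,
# expressed in template coordinates.
OPTION_REGIONS = {
    "M1": (10, 40, 120, 70),    # (x_min, y_min, x_max, y_max) in the template
    "M2": (10, 80, 120, 110),
    "M3": (10, 120, 120, 150),
}

def option_at(x, y):
    for name, (x0, y0, x1, y1) in OPTION_REGIONS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

print(option_at(60, 95))  # -> "M2"
```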
  • the invention thus provides, in all, an easy and flexible way to interact with any type of electrically or electronically controllable device in any environment.
  • the pointing device can be in the shape of a wand or pen in an elongated form that can be grasped comfortably and easily carried around by the user. The user can thus direct the pointing device at a device to be controlled while positioned at a distance from that device. Equally, the pointing device might be shaped in the form of a pistol.
  • an additional light source might be mounted in or on the pointing device, serving to illuminate the area at which the pointing device is aimed, so that the user can easily locate a device to be controlled, even if the surroundings are in darkness.
  • the pointing device and device control interface described in the above combine to give a powerful control system, for use in practically any kind of environment.
  • the system might find use in any environment featuring devices that can be interacted with by means of a pointing device, such as an office, museum, hospital or hotel environment, to name but a few, where a user can use the pointing device to identify and control unfamiliar devices in a convenient and intuitive manner, without first having to familiarise himself with the functionality of the device.
  • the method according to the invention can be applied to any electrically or electronically controllable device.
  • the device to be controlled can comprise any number of modules, components or units, and can be distributed in any manner.
  • Fig. 1 is a schematic diagram of a system for controlling a device, showing a pointing device and a number of devices in accordance with an embodiment of the present invention.
  • Fig. 2 is a block diagram showing the steps involved in identifying one device from among a group of devices in accordance with an embodiment of the present invention.
  • Fig. 3 is a schematic diagram of a pointing device in accordance with an embodiment of the present invention.
  • Fig. 1 shows a system comprising a number of devices D1, D2, D3, and a pointing device 1 for identifying and interacting with these devices D1, D2, D3, according to an embodiment of the invention.
  • the devices D1, D2, D3 with which the pointing device 1 can interact might be difficult to tell apart visually, being of similar shape and appearance.
  • the user aims the pointing device 1 at the desired device; let us assume, for the purpose of illustration, that he is aiming at device D1.
  • the pointing device is equipped with a transmitter 6 for transmitting a trigger signal T1, T2, T to any of the devices D1, D2, D3 within range of the pointing device 1.
  • the pointing device transmits each of these trigger signals T1, T2, T in turn.
  • a device list management unit 15 of the device control interface 18 contains a list of the devices D1, D2, D3 with which the pointing device 1 can interact, and any information concerning the visual identifier V1, V2, V3 and/or trigger signal T1, T2, T associated with each device D1, D2, D3.
  • the constituent elements and the functionality of the device control interface 18 are described in more detail further below.
  • the trigger signal T1 has been assigned to the device D1, and the trigger signal T2 has been assigned to the device D2.
  • a further trigger signal T can be used to trigger one or more other devices.
  • the device D1 is equipped with a receiver 40, tuned to receive its trigger signal T1.
  • device D2 is also equipped with a receiver 41, tuned to receive the trigger signal T2.
  • the receiver of the device D3 might be configured so that it receives any trigger signal T within the range of its bandwidth.
  • an activator 30 responds to the trigger signal T1 by causing the visual identifier V1 of the device D1 to be activated.
  • the visual identifier V1 is a light source such as an LED, which is caused to blink or flash.
  • the visual identifier V2 of the device D2 is caused to be activated by an activator 31 whenever the receiver 41 of the device D2 picks up the trigger signal T2.
  • the visual identifier V2 in this example is a physical element attached in some way to the device D2, for example an infra-red identifier.
  • this device D2 is equipped with a transmitter 35 for transmitting an end-of-feedback signal S_EOF for the pointing device 1.
  • a third device D3 is configured to respond to any trigger signal T. It is equipped with a suitable receiver 42 which causes an activator 32 to activate the visual identifier V3 of this device D3.
  • the device D3 is a computer monitor for displaying elements of a user interface such as a desktop and windows.
  • the visual identifier V3 can be the usual type of window frame or menu item being caused to change its brightness, intensity, colour, or hue in response to the trigger signal T.
  • the system 5 must first identify the device D1, D2, D3 which is being pointed at.
  • the image analysis unit 12 can extract the information regarding the visual identifier V1, V2, V3 from the images, and can compare this information with that stored in the device list management unit 15. For example, if the visual identifier located in the images repeatedly increases and decreases in intensity within a certain length of time, and the appearance of the visual identifier as well as its behavioural pattern matches that stored in the device list management unit 15 for the visual identifier V1 of the device D1, the image analysis unit 12 deduces that the device pointed at must be device D1.
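As an illustration of this intensity-pattern comparison, here is a minimal sketch assuming the brightness of the identifier region is sampled once per frame; the frequency table, threshold and tolerance are invented.

```python
# Sketch: deducing a device from the blink rate of its visual identifier.
DEVICE_BY_FLASH_HZ = {10.0: "D1", 5.0: "D2", 2.0: "D3"}  # assumed device list

def flash_frequency(brightness_samples, fps, threshold=128):
    on = [b > threshold for b in brightness_samples]
    transitions = sum(a != b for a, b in zip(on, on[1:]))
    duration_s = len(brightness_samples) / fps
    return transitions / (2.0 * duration_s)  # two transitions per flash cycle

def identify(brightness_samples, fps, tolerance=0.5):
    f = flash_frequency(brightness_samples, fps)
    for freq, device in DEVICE_BY_FLASH_HZ.items():
        if abs(f - freq) <= tolerance:
            return device
    return None

# 30 fps, identifier on for 3 frames / off for 3 frames -> about 5 Hz -> "D2"
samples = ([255] * 3 + [0] * 3) * 10
print(identify(samples, fps=30))
```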
  • In step 100, a list of all devices is compiled. This list can contain an identifier for each device, such as an image template, and a description of the visual identifier and/or trigger signal associated with this device.
  • In step 102, this list is sorted in order of decreasing probability of being aimed at by the pointing device. This step ensures that no time is wasted in the process of identifying a device, by starting with the device most often aimed at by the user and ending with the least often selected device.
  • the system might avail of a suitable learning algorithm, and might update this list whenever necessary.
  • the first entry in the list is examined in step 103.
  • the pointing device transmits a suitable trigger signal in step 104, and the system waits for a response.
  • the image analysis unit of the system analyses, in step 105, the target area images to determine whether they show a visual identifier. If, in step 106, a match is detected with the visual identifier of the list entry for the first device, identification of the device has been successful, and the process concludes in step 107.
  • If no match is detected in step 106, the process checks in step 111 whether a predefined length of time, or timeout, has passed. If not, it is checked in step 110 whether an end-of-feedback signal has been detected. If not, the process resumes at step 105 as described above. If in step 110 the end-of-feedback signal has indeed been detected, or if in step 111 the timeout has been reached, the process continues to step 109, where it is checked whether the end of the list has been reached. If not, the next entry in the list is selected in step 108, and the process resumes at step 104.
  • If the end of the list has been reached, the process can resume at step 103, thus returning to the beginning of the list. It may become necessary at some point to update the list of devices and their associated visual identifiers and/or trigger signals in step 101. For example, a new device might be introduced, a device might be removed from the group, or the rate of probable usage of the individual devices might have changed in the meantime, with a different device now taking over from the device originally at the top of the list. When any such condition arises, the process routine returns from step 109 to step 101 and resumes from there. A suitable monitoring arrangement might detect such situations and cause the process routine to react accordingly.
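Pulling steps 100-111 together, a minimal sketch of the identification loop might look as follows; the four hooks at the top are placeholders for the transmitter 6, camera 2 and image analysis unit 12, and the probabilities, codes and timeout are invented.

```python
# Sketch of the identification loop of steps 100-111.
import time

def transmit_trigger(code):            # step 104: send a trigger signal
    print(f"transmitting trigger {code}")

def capture_image():                   # camera 2: one target area image
    return None

def matches(image, identifier):        # image analysis: steps 105/106
    return False

def end_of_feedback_received():        # step 110: device already flashed
    return False

def identify_device(device_list, timeout_s=1.0):
    # Step 102: sort by decreasing probability of being aimed at.
    entries = sorted(device_list, key=lambda e: e["probability"], reverse=True)
    for entry in entries:              # steps 103/108: walk through the list
        transmit_trigger(entry["trigger"])
        deadline = time.monotonic() + timeout_s   # step 111: timeout guard
        while time.monotonic() < deadline:
            if matches(capture_image(), entry["identifier"]):
                return entry["device"]            # step 107: identified
            if end_of_feedback_received():
                break                             # try the next entry
    # Step 109: end of list reached; a real system might restart at
    # step 103 or rebuild the list (step 101).
    return None

devices = [
    {"device": "D1", "trigger": "T1", "identifier": "flash-10Hz", "probability": 0.6},
    {"device": "D2", "trigger": "T2", "identifier": "ir-tag", "probability": 0.3},
]
print(identify_device(devices))
```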
  • the step of cycling through the trigger signals T1, T2, T can be managed by the device list management unit 15, based on the results of the image analysis as supplied by the image analysis unit 12.
  • a suitable signal 19 is sent to the pointing device 1, causing it to send the next trigger signal T1, T2, T, as appropriate.
  • the images 3 generated by the camera 2 of the pointing device 1 can be interpreted to carry out the user's intentions.
  • the image analysis unit 12 makes use of known image processing techniques, such as pattern recognition, to identify, from a number of templates, the template most closely matching the images 3, thus deducing the option at which the user is pointing. For example, the user might use the pointing device 1 to traverse a menu as displayed on the screen of the device D1, D2, D3, or he might intend to cause the device D1, D2, D3 to carry out a specific task, and use the pointing device 1 to initiate the task.
  • a control signal generation unit 13 uses the results of the image analysis to generate a control signal 17 for the device.
  • An application interface 14 performs any necessary conversion to the control signal 17 before sending it in appropriate form 27 to the device D 1 , D 2 , D 3 being pointed at.
  • an assigning unit 16, shown here as part of the device control interface 18, can be used to assign a particular visual identifier behaviour and/or a trigger signal description to a particular device D1, D2, D3.
  • the assigning unit 16 might specify that the visual identifier V1 for the device D1 should alternate colours between red and blue for a duration of five seconds, and should do this in response to a trigger signal T1, transmitted at a certain frequency.
  • This information can be encoded as required by the interface 14 before being sent as a signal 27 to the device in question.
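For illustration, the assignment just described might be encoded along these lines before the interface 14 converts it into signal 27; all field names and values are assumptions, not a format defined by the invention.

```python
# Sketch: an assignment message from the assigning unit 16 (red/blue example).
import json

assignment = {
    "device": "D1",
    "trigger": {"code": "T1", "carrier_hz": 40_000},  # assumed IR carrier
    "visual_identifier": {
        "type": "display_region",
        "colours": ["red", "blue"],   # alternate between these colours
        "duration_s": 5.0,            # for a duration of five seconds
    },
}
payload = json.dumps(assignment).encode()  # encoded form of signal 27
print(payload)
```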
  • the information transferred from the pointing device 1 to the device control interface 18 might be transmitted in a wireless manner, e.g. Bluetooth, 802.11b or mobile telephony standards, particularly if the user carries his pointing device 1 on his person. Alternatively, the pointing device 1 might be connected to the device control interface 18 by means of a cable.
  • the signals 27 sent from the device control interface 18 to the devices D1, D2, D3 might be sent over a cabled interface, or might also, as appropriate, be transmitted in a wireless manner.
  • the camera 2 might continually send images 3 to the device control interface 18, or might cease transmission automatically if the pointing device 1 is not moved for a certain length of time.
  • the pointing device 1 might comprise a motion sensor, not shown in the diagram. Since the pointing device 1 is most likely powered by batteries, also not shown in the diagram, it is expedient to only transmit images 3 to the device control interface when required, for example when the user actually moves the pointing device 1, in order to prolong the lifetime of the batteries. Transmission of image data 3 might be initiated as soon as the user moves the pointing device, and might automatically cease thereafter.
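A minimal sketch of such motion-gated transmission, with the motion sensor and camera as placeholder hooks and an assumed idle cutoff:

```python
# Sketch: transmit images 3 only while the pointing device is being moved.
import time

IDLE_CUTOFF_S = 2.0   # assumed: stop transmitting 2 s after the last movement

def transmit_while_moving(motion_detected, capture_image, send_image, fps=30):
    last_motion = time.monotonic()
    while True:
        if motion_detected():                      # motion sensor hook
            last_motion = time.monotonic()
        if time.monotonic() - last_motion < IDLE_CUTOFF_S:
            send_image(capture_image())            # to the interface 18
        time.sleep(1.0 / fps)                      # assumed frame rate
```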
  • FIG. 3 shows a schematic representation of a target area image 3 generated by a pointing device 1 aimed at the device D1, in this case a television screen or computer monitor, from a distance and at an oblique angle, so that the scale and perspective of the device D1 in the target area A appear distorted in the target area image 3.
  • a number of options (M1, M2, M3) can be seen on the display of the device D1.
  • the user may wish to select one of these options (M1, M2, M3) with the aid of the pointing device 1.
  • Also shown is the visual identifier V1 for this device D1, in this case a section of the display having a characteristic shape, which can increase or decrease in brightness.
  • the pointing device 1 in this diagram is equipped with a source 51 of laser light, causing a beam of laser light L to impinge on the display at a point P_L, thus assisting the user in aiming the pointing device 1.
  • the images generated by the camera are analysed to detect a visual identifier.
  • the characteristic flashing or blinking of the section of the display, giving the visual identifier V1, is detected, and the device is identified as being device D1.
  • a simple analysis of the brightness level of the images, and comparison of the ensuing pattern to the information supplied by the device list management unit, is sufficient to identify the device D1.
  • the system can continue with interpreting the user's actions to control the device D1.
  • the target area images 3 are examined in more detail. Regardless of the angle of the pointing device 1 with respect to the device D1, the target area image 3 is always centred around a target point P_T.
  • the laser light point P_L also appears in the target area image 3, and may be a distance removed from the target point P_T, or might coincide with the target point P_T.
  • the image processing unit of the device control interface compares the target area image 3 with pre-defined templates 50 to determine the option (M1, M2, M3) being pointed at by the user.
  • the point of intersection P_T of the longitudinal axis of the pointing device 1 with the device D1 is located in the target area image 3.
  • the point in the template 50 corresponding to the point of intersection P_T can then be located.
  • Computer vision algorithms using edge- and corner-detection methods are applied to locate points [(xa', ya'), (xb', yb'), (xc', yc')] in the target area image 3 which correspond to points [(xa, ya), (xb, yb), (xc, yc)] in the template 50 of the device D1.
  • Each point can be expressed as a vector, e.g. the point (xa, ya) can be expressed as v_a.
  • the parameter set λ, comprising parameters for rotation and translation of the image, which yields the most cost-effective solution to the mapping function, can be applied to determine the position and orientation of the pointing device 1 with respect to the device D1.
  • the computer vision algorithms make use of the fact that the camera 2 within the pointing device 1 is fixed and "looking" in the direction of the pointing gesture.
  • the next step is to calculate the point of intersection P_T of the longitudinal axis of the pointing device 1, in the direction of pointing P, with the device D1.
  • This point may be taken to be the centre of the target area image 3.
  • the system can determine the option (M1, M2, M3) at which the user is aiming, and can generate the appropriate control signal for the device D1.
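As a sketch of this computation: cv2.estimateAffinePartial2D is one standard least-squares way of finding a rotation/translation (plus uniform scale) parameter set of the kind described above, after which P_T, taken as the image centre, can be mapped into the template. The point coordinates here are invented.

```python
# Sketch: fit the transform from image points to template points, then
# locate P_T (the image centre) in template coordinates.
import cv2
import numpy as np

image_pts = np.array([[40.0, 30.0], [200.0, 35.0], [120.0, 160.0]])    # (xa', ya'), ...
template_pts = np.array([[10.0, 10.0], [180.0, 10.0], [95.0, 140.0]])  # (xa, ya), ...

M, _ = cv2.estimateAffinePartial2D(image_pts, template_pts)  # 2x3 matrix [sR | t]

p_t_image = np.array([160.0, 120.0, 1.0])   # P_T: centre of a 320x240 image
p_t_template = M @ p_t_image                 # P_T in template coordinates
print(p_t_template)
```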
  • the pointing device can serve as the universal user interface device in the home or any other environment with electrically or electronically controllable devices. In short, it can be beneficial wherever the user can express an intention by pointing. Its small form factor and its convenient and intuitive pointing modality can elevate such a simple pointing device to a powerful universal remote control.
  • the pointing device could for example also be a personal digital assistant (PDA) with a built-in camera, or a mobile phone with a built-in camera.
  • a “unit” may comprise a number of blocks or devices, unless explicitly described as a single entity.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Position Input By Displaying (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention describes a method for control of a device (D1, D2, D3), which method comprises the process steps of aiming a pointing device (1) comprising a camera (2) at the device (D1, D2, D3) to be controlled, generating an image (3) of a target area (A) aimed at by the pointing device (1), and interpreting the target area image (3) to determine the presence of a unique visual identifier (V1, V2, V3) associated with the device (D1, D2, D3) to be controlled. On the basis of target area image analysis, the device (D1, D2, D3) at which the pointing device (1) is aimed can be identified, and a control signal (17) can be sent to the identified device (D1, D2, D3). The invention further describes a system (5) for controlling a device (D1, D2, D3), and such a device (D1, D2, D3). Using the invention, any device (D1, D2, D3) of a number of controllable devices (D1, D2, D3) is easily identified by means of the visual identifier (V1, V2, V3). In a preferred embodiment the visual identifier (V1, V2, V3) might be triggered by a trigger signal (T1, T2, T).

Description

Method for control of a device
This invention relates to a method and system for control of a device, and to such a device.
For control of a device, such as any consumer electronics device, e.g. television, DVD player, tuner, etc., a remote control is generally used. However, in the average household, multiple remote controls can be required, often one for each consumer electronics device. Even for a person well acquainted with the consumer electronics devices he owns, it is a challenge to remember what each button on each remote control is actually for. Furthermore, the on-screen menu-driven navigation available for some consumer electronics devices is often less than intuitive, particularly for users that might not possess an in-depth knowledge of the options available for the device. The result is that the user must continually examine the menu presented on the screen to locate the option he is looking for, and then look down at the remote control to search for the appropriate button. Quite often, the buttons are given non-intuitive names or abbreviations. Additionally, a button on the remote control might also perform a further function, which is accessed by first pressing a mode button.
Sometimes, a single remote control device can be used to control a number of devices. Since the remote control is not capable of distinguishing one device from another, the remote control must be equipped with a dedicated button for each device, and the user must explicitly press the appropriate button before interacting with the desired device.
A possible alternative to the usual remote control may be a pointer, such as a laser pointer or "wand". The pointers available to date, the use of which has become widespread in recent years, are essentially passive devices, since they can only be used to point at objects, typically for pointing out items on a screen or projection to members of an audience. Nevertheless, developments in the field of camera technology may lead to a widespread use of a type of hand-held pointing device comprising a camera for remote control of electronic devices, where the camera is used to track the point at which the user is aiming the pointing device. However, even for this type of remote control, the problem remains of determining the device at which the user is aiming the pointing device, a task made even more difficult when the devices are similar in shape and appearance, for example, a bank of television or computer monitors of the same size and shape, or consumer electronics devices such as video cassette recorder, DVD player, etc., which often appear similar. Even if the devices can under normal circumstances be told apart, it may no longer be possible to do this in poor visibility conditions, for example in a darkened room.
Therefore, an object of the present invention is to provide an easy and intuitive way of automatically identifying a device to be controlled from among a number of devices.
To this end, the present invention provides a method for control of a device, which method comprises the process steps of aiming a pointing device comprising a camera at the device to be controlled, and generating an image of a target area aimed at by the pointing device. The target area image is interpreted to determine the presence of a visual identifier associated with the device to be controlled, so that the device at which the pointing device is aimed can be identified on the basis of the target area image analysis, and a control signal can be sent to the identified device.
The term 'visual identifier' is to be understood to mean an identifier that can be 'seen', whether by the human eye or by a camera. Such a visual identifier can be generated in any region of the electromagnetic spectrum, for instance an optical signal such as a flashing or other change in brightness. For example, if the pointing device is being aimed at a monitor, an area of the monitor screen might be caused to flash, or might change colour or hue. The visual identifier might equally well be invisible to the human eye but still discernible by the camera of the pointing device. Another possible type of visual identifier is a simple gesture which can be seen, for the case in which a part of the device can be made to move, so that the movement is registered by the camera and interpreted by the image analysis.
According to the invention, a system for controlling a device comprises a pointing device with a camera for generating images of a target area in the direction in which the pointing device is aimed, and an image analysis unit for analysing the images to determine the presence of a visual identifier. Furthermore, the system comprises a device list management unit for maintaining a list of possible devices with their associated visual identifiers and/or associated trigger signals for activating the corresponding visual identifier. By means of a control signal generation unit, a control signal is generated for the device to be controlled according to the results of the image analysis, and is communicated to the device to be controlled by means of a suitable interface. The units of the system such as the image analysis unit, control signal generation unit, etc., used in interpreting, analysing, and generating signals between the pointing device and the device to be controlled, are, for the sake of convenience, referred to in the following simply as a 'device control interface'.
A device for use in the method according to the present invention comprises a visual identifier, whereby the visual identifier might be activated in response to an action taken by the pointing device, and serves to identify this device from among a number of devices. Such a device can be a personal computer, a consumer electronics device, a household appliance, or any type of electrically or electronically controllable device in any environment. The visual identifier can be physically superimposed on the device, for example a type of tag, sticker, or other object. Equally, the visual identifier of the device might be realised as an integral part of the device, for example as an area of the display of a device such as a computer monitor. A device might be identified by its visual identifier, or by the activation of the visual identifier in response to a particular trigger signal.
The method according to the invention offers increased usage possibilities for a pointing device. Such a pointing device is a particularly universal control tool, since one only has to point at a device or object for a control signal to be generated on the basis of the images generated. In particular, by the method herein described, a user can, with such a pointing device, easily identify and interact with any device - even if all devices are similar or even identical in appearance, or if the surroundings are in darkness. This capability of the pointing device, together with its convenient pointing modality, as described above, combine to make the present invention a powerful and practical tool for myriad situations in everyday life. The dependent claims and the subsequent description disclose particularly advantageous embodiments and features of the invention.
Since the purpose of the pointing device according to the present invention is not only to be able to identify one device from among a number of devices, but also to allow the user to interact with the identified device, the images generated by the camera are preferably further analysed to determine any option presented on the device that might be chosen by the user. A suitable control signal is generated on the basis of this image analysis and forwarded to the device in order to cause the device to carry out the desired user option. For example, the user can point at the screen of a device in order to navigate through a menu hierarchy displayed on the screen. Once the device has been identified, the images can be analysed to deduce any option at which the user is pointing. However, the invention can also be used as a typical multi-device remote control, where the commands for each device are entered by the user by pressing one or more buttons on the remote control, but where the device at which the user is aiming the pointing device, and for which the command is intended, is automatically identified according to the invention.
In a preferred embodiment of the invention, the pointing device transmits a trigger signal, which is received by any device in the vicinity equipped with a suitable receiver. Alternatively, the trigger signal may be transmitted from a source other than the pointing device, but which is, for example, in contact with the pointing device. In response to the trigger signal, an activator of the device might cause its visual identifier to be activated. The activator might be, for example, a motor for causing a part of the device to move, should this be the device's visual identifier, or a suitable electronic circuit for causing a part of the device to emit electromagnetic radiation in the visible or invisible region, as appropriate. To this end, the visual identifier of the device might comprise a visual signal as emitted by an LED or light bulb. Equally, if the device is a monitor of a television or computer, the visual signal might be generated by causing part of the monitor to vary in brightness or colour in a predefined manner. Another possible realisation of the visual identifier might be as a physical element incorporated in or attached to the device and capable of generating the necessary electromagnetic radiation.
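On the device side, the receiver/activator logic just described might be sketched as follows, assuming a dedicated trigger code and an LED identifier flashing for a fixed time; the hardware hook and all timing values are assumptions.

```python
# Sketch: device-side receiver/activator for the visual identifier.
import time

MY_TRIGGER = "T1"        # dedicated trigger code of this device
FLASH_HZ = 10.0          # the identifier 'operates' at 10 Hz
FLASH_DURATION_S = 2.0

def set_led(on):         # placeholder for the actual activator hardware
    print("LED on" if on else "LED off")

def on_trigger(code):
    if code != MY_TRIGGER:
        return           # foreign trigger: the identifier stays inactive
    end = time.monotonic() + FLASH_DURATION_S
    state = False
    while time.monotonic() < end:
        state = not state
        set_led(state)
        time.sleep(1.0 / (2 * FLASH_HZ))   # half a flash period per step
    set_led(False)

on_trigger("T1")
```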
It may be desirable to identify one device - from among a group of devices - by a unique trigger signal or by a unique visual identifier, depending on the realisation possibilities for the system, and where the term 'unique' is understood to apply within this group of devices.
Therefore, in a further preferred embodiment of the invention, each device of a group of devices may be equipped with a unique visual identifier, so that a single trigger signal would suffice to activate any of these visual identifiers. All the devices would respond to the trigger signal if they are within range of its transmission. The device currently aimed at by the pointing device, along with its visual identifier, appear in the target area image, and the identity of that device can thus be determined on the basis of the image analysis of the unique visual identifier appearing in the target area image. The visual identifiers might differ in colour, intensity, duration of activity, frequency, etc., where the term 'frequency' here applies to the rate at which the visual identifier changes in appearance, for example, a visual identifier might flash ten times per second, so that this visual identifier 'operates' at 10Hz.
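A sketch of a device list keyed by such distinguishing attributes follows; the colours and frequencies are invented, and a companion routine for estimating the flash frequency itself is sketched earlier in this document.

```python
# Sketch: matching an observed identifier against a device list whose
# entries are described by colour and flash frequency.
from dataclasses import dataclass

@dataclass
class IdentifierEntry:
    device: str
    colour: str           # dominant colour of the identifier
    frequency_hz: float   # rate at which its appearance changes

DEVICE_LIST = [
    IdentifierEntry("D1", "red", 10.0),
    IdentifierEntry("D2", "green", 10.0),
    IdentifierEntry("D3", "red", 2.0),
]

def match(colour, frequency_hz, tol=0.5):
    for entry in DEVICE_LIST:
        if entry.colour == colour and abs(entry.frequency_hz - frequency_hz) <= tol:
            return entry.device
    return None

print(match("red", 9.8))  # -> "D1"
```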
Alternatively, a unique trigger signal is assigned to each device, so that a device only activates its visual identifier when it receives its own dedicated trigger signal. Should the device receive a trigger signal other than its dedicated trigger signal, its visual identifier remains inactive. This allows the devices to be realised such that their visual identifiers are similar or the same, but so that the devices respond differently to different trigger signals. A further possibility is given by a device whose visual identifier is automatically activated at intervals without a trigger signal. This type of realisation might be advantageous for the case in which a device cannot be modified to include a receiver and activator. The visual identifier, being continually active, can be detected by the system whenever the pointing device is being aimed at the device. In this case, the device would be assigned a unique visual identifier to distinguish it from other devices.
Since it may be irritating for the user of the pointing device to be able to see the visual identifiers for all devices responding to the trigger signal, the visual identifier is therefore, in a preferred embodiment of the invention, generated in a region of the electromagnetic spectrum invisible to the human eye but nonetheless discernible by the camera of the pointing device, for example, in the infrared or ultraviolet regions. A device list management unit of the system may store and manage descriptive information regarding each device, its associated visual identifier and/or its associated trigger signal. The device list management unit can also be regarded as part of the device control interface. A device or group of devices might be given visual identifiers at time of manufacture, or it might be possible to update or replace such visual identifiers. Equally, it might be possible to assign a visual identifier to a device which has heretofore not had one. Therefore, in a preferred embodiment of the invention, the system avails of an assigning unit for assigning a visual identifier and/or a trigger signal to a particular device, for example by sending an appropriate signal to the device, causing the device in future to activate its visual identifier in the specified manner, and/or to respond to a particular trigger signal.
The system might support a type of training mode, in which the device list management unit learns which devices are present in the group, which visual identifier is associated with each device, and to which type of trigger signal each device responds. The training process might also include a step of collecting or learning the set of control signals or command protocol which can be understood by each device.
In some cases, it may be desirable for a device to inform the pointing device that the device has already activated its visual identifier, for example, if the pointing device is still aimed at a certain device, but has failed to detect its visual identifier in the target area images. Therefore, in a further preferred embodiment of the invention, a device might be equipped with a means for supplying the pointing device with an "end-of-feedback" signal, indicating that the device has already supplied its visual feedback. To this end, the device might comprise a suitable transmitter for transmitting the end-of-feedback signal. On receiving this signal, the device control interface can decide to have the pointing device transmit the appropriate trigger signal once more, or to try a different trigger signal.
For the case in which a device is only intended to be operated by certain authorized persons, the pointing device may be equipped with an authorization code, for example if a user enables the pointing device with an appropriate access code or personal identification number. The pointing device might in turn transmit this authorization code along with its trigger signal to the device at which the user is aiming the pointing device. The device might only activate its visual identifier if the authorization code is valid. This type of authorization might be useful in a situation where only certain persons are permitted to operate or interact with a device. One example might be a television, "out of bounds" for children after a certain time; or a security system in a research laboratory, only accessible to certain persons. The authorization information might be hardwired in the pointing device, or might be entered by the user in some way, for example by means of a suitable input modality such as a keypad. Another way of specifying authorization information for the pointing device might be by programming it with the aid of a suitable interface, similar to known methods of programming a remote control. The camera for generating images of the object is preferably incorporated in the pointing device but might equally be mounted on the pointing device, and is preferably oriented in such a way that it generates images of the area in front of the pointing device targeted by the user. The camera might be constructed in a basic manner, or it might feature powerful functions such as zoom capability or certain types of filter.
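As regards the authorization variant just described, a minimal device-side check might look as follows; the codes and function names are invented for illustration only:

```python
# Hedged sketch: the visual identifier is only activated when a valid
# authorization code accompanies the dedicated trigger signal.

AUTHORIZED_CODES = {"4711", "0042"}  # e.g. configured per device


def should_activate(trigger_signal, dedicated_trigger, auth_code=None):
    """Return True when the device may activate its visual identifier."""
    if trigger_signal != dedicated_trigger:
        return False  # trigger signal addressed to another device
    if auth_code not in AUTHORIZED_CODES:
        return False  # user not authorized for this device
    return True
```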
The 'target area' is thus the area in front of the pointing device which can be captured as an image by the camera. The image of the target area - or target area image - might cover only a small subset of the device aimed at, or it might encompass the entire device, or it might also include an area surrounding the device. The size of the target area image in relation to the entire device might depend on the size of the device, the distance between the pointing device and the device to be controlled, and on the capabilities of the camera itself. The user might be positioned so that the pointing device is at some distance from the device to be controlled, for example when the user is standing at the other end of the room. Equally, the user might hold the pointing device quite close to the device in order to obtain a more detailed image.
The pointing device according to the present invention might be used to interact with a device in order to cause the device to perform a function or functions desired by the user. For example, the user might wish to select one of a number of options presented on a display of the device, or he might want to navigate through a menu hierarchy shown on the device's display. In the case of a device which does not feature a display, the user might wish to activate a function represented by, for example, a button on the device casing, such as a button on the front of a DVD player. To this end, the target area images are analysed in an image analysis unit to determine the point at which the user is aiming the pointing device. This image analysis unit might be the same as that used for detecting the presence of a visual identifier in the target area image, or it might be a dedicated image analysis unit.
The pointing device might feature a control input to allow the user to specify a certain action or actions. Such a control input might be a button that the user can press to indicate that an action is to be carried out. A manipulation of the control input might be encoded into an appropriate signal and transferred, along with the images from the camera, to the device control interface, where the control input signal is interpreted with the images when generating the control signal for the device. For example, the user might aim the pointing device at a particular area of the screen, such as an icon, or an item in a list of menu items, and simultaneously press the control input to indicate that this icon or item is the chosen one. To assist the user in accurately aiming the pointing device, a source of a concentrated beam of light might be mounted in or on the pointing device and directed so that the ensuing point of light appears more or less in the centre of the target area that can be captured by the camera. The source of a concentrated beam of light might be a laser light source, such as those used in many types of laser pointers currently available. In the following, it is therefore assumed that the source of a concentrated beam of light is a laser light source, without limiting the scope of the invention in any way.
To easily determine the object at which the user is aiming the pointing device, the image analysis unit of the device control interface preferably compares the image of the target area to a number of pre-defined templates, by applying the usual image processing techniques or computer vision algorithms. A single pre-defined template might suffice for the comparison, or it may be necessary to compare the image data to more than one template.
Pre-defined templates can be stored in an internal memory of the device control interface, or might equally be accessed from an external source. Preferably, the device control interface comprises an accessing unit with an appropriate interface for obtaining pre-defined templates for the objects from, for example, an internal or external memory, a memory stick, an intranet or the internet. In this way, a manufacturer of an appliance, which can be controlled by a pointing device according to the invention, can make templates for these appliances available to users of the devices. A template can be a graphic representation of any kind of object. If the objects are options of a menu displayed, for example on a television screen, a template might show the positions of a number of menu options for the television, so that, by analysing image data of the target area when the user aims the pointing device at the television, the image analysis unit can determine which option is being aimed at by the user.
As mentioned above, a device control interface is implemented to allow interaction with the device to be controlled. Such a device control interface comprises at least the image analysis unit for analysing the images, the control signal generation unit for generating a control signal for the device to be controlled, a device list management unit for overseeing the relationship between devices, visual identifiers and trigger signals, and also the interface for communicating the control signals to the device to be controlled. Such a device control interface can be incorporated in the pointing device or can be realised as an external unit, coupled with the pointing device by means of a suitable communication interface.
In the case where the device control interface is incorporated in the pointing device, it can obtain the images directly from the camera. The image analysis, device list management, and control signal generation can take place in the pointing device, and the control signals can be transmitted in appropriate form from the pointing device directly to the device to be controlled.
On the other hand, since the capabilities of these units might be limited by the physical dimensions of the pointing device, which is preferably realised to be held comfortably in the hand, such an image analysis unit might suffice for rudimentary image analysis only, for example for the identification of the device at which the user is aiming the pointing device. The more advanced image processing required for determining the user's intentions and for generating the resulting control signals, which necessitates a larger unit, might take place in an external device control interface. Therefore, the pointing device might dispense altogether with image analysis, device list management, and control signal generation functions, allowing these tasks to be carried out by the external device control interface, thereby allowing the pointing device to be realised in a smaller, more compact form.
An external device control interface as described above might be a standalone device, and might be realised so that the device identification, image processing, device list management and control signal generation take place centrally, while the trigger signals might be transmitted from the device control interface, from the pointing device, or from another location. A receiver for receiving an end-of-feedback signal might be realised, for example, as part of the pointing device, or be incorporated in the device control interface.
The camera of the pointing device might be realised such that the image of the target area comprises the relevant image data. For instance, when identifying a device, it may only be necessary to detect the relative brightness of an image, or the shape of the visual identifier in the image. If the visual identifier operates in the part of the electromagnetic spectrum invisible to humans, the camera might be equipped with a suitable filter for detecting, for example, infrared or ultraviolet radiation.

The image analysis unit might detect a pattern, colour or shape corresponding to a visual identifier description held by the device list management unit. If the visual identifier identifies its associated device by a pattern, for example a Morse code, amplitude modulation, frequency coding etc., the image analysis unit can identify this pattern and thus identify the device with which the visual identifier is associated. Another possible way of identifying the device to be controlled is to directly compare the received target area image encompassing the visual identifier with a pre-defined template of the device, also encompassing the visual identifier, and to use methods such as pattern-matching or pattern recognition, based on the information from the device list management unit, to deduce the identity of the device being pointed at.

To determine what the user actually intends the device to do, for example to determine an option at which he is pointing, a more detailed image generation might be necessary, as mentioned above. Therefore, the camera might capture only significant points of the entire image, e.g. enhanced contours, corners, edges etc., or it might capture a more detailed image of higher picture quality. For processing the image data in order to determine the user's intentions, it is expedient to apply computer vision techniques to find the point on the device at which the user has aimed, i.e. the target point. Since the image of the target area might contain other options besides the actual option at which the user is aiming the pointing device, the chosen option is preferably determined by identifying the option in the image that contains or encompasses a particular target point in the target area. In one embodiment of the invention, a fixed point in the target area image, preferably the centre of the target area image, obtained by extending an imaginary line in the direction of the longitudinal axis of the pointing device to the option, might be used as the target point.
A method of processing the target area images of the device using computer vision algorithms might comprise detecting distinctive points in the target image, determining corresponding points in the template of the device, and developing a transformation for mapping the points in the target image to the corresponding points in the template. The distinctive points of the target area image might be distinctive points of the device or might equally be points in the area surrounding the device. This transformation can then be used to determine the position and aspect of the pointing device relative to the device so that the intersection point of an axis of the pointing device with the device to be controlled can be located in the template. The position of this intersection in the template corresponds to the target point on the device to be controlled and can be used to easily determine which option has been targeted by the user. In this way, comparing the target area image with the pre-defined template may be restricted to identifying and comparing only salient points such as distinctive corner points. The term "comparing", as applicable in this invention, is to be understood in a broad sense, i.e. by only comparing sufficient features in order to quickly identify the option at which the user is aiming.
The invention thus provides, in all, an easy and flexible way to interact with any type of electrically or electronically controllable device in any environment. For ease of use, the pointing device can be in the shape of a wand or pen in an elongated form that can be grasped comfortably and easily carried around by the user. The user can thus direct the pointing device at a device to be controlled while positioned at a distance from that device. Equally, the pointing device might be shaped in the form of a pistol. Furthermore, an additional light source might be mounted in or on the pointing device, serving to illuminate the area at which the pointing device is aimed, so that the user can easily locate a device to be controlled, even if the surroundings are in darkness.
The pointing device and device control interface described in the above combine to give a powerful control system, for use in practically any kind of environment. For instance, it is conceivable that the system might find use in any environment featuring devices that can be interacted with by means of a pointing device, such as an office, museum, hospital or hotel environment, to name but a few, where a user can use the pointing device to identify and control unfamiliar devices in a convenient and intuitive manner, without first having to familiarise himself with the functionality of the device. The method according to the invention can be applied to any electrically or electronically controllable device. Furthermore, the device to be controlled can comprise any number of modules, components or units, and can be distributed in any manner.
Other objects and features of the present invention will become apparent from the following detailed description considered in conjunction with the accompanying drawings. It is to be understood, however, that the drawings are designed solely for the purposes of illustration and not as a definition of the limits of the invention.
Fig. 1 is a schematic diagram of a system for controlling a device, showing a pointing device and a number of devices in accordance with an embodiment of the present invention;
Fig. 2 is a block diagram showing the steps involved in identifying one device from among a group of devices in accordance with an embodiment of the present invention;
Fig. 3 is a schematic diagram of a pointing device in accordance with an embodiment of the present invention.
In the drawings, like numbers refer to like objects throughout. The pointing device described is held and operated by a user, not shown in the drawings.
Fig. 1 shows a system comprising a number of devices D1, D2, D3, and a pointing device 1 for identifying and interacting with these devices D1, D2, D3, according to an embodiment of the invention.
In the example shown, the devices D1, D2, D3, with which the pointing device 1 can interact, might be difficult to tell apart visually, being of similar shape and appearance. In his interaction with one of these devices D1, D2, D3, the user aims the pointing device 1 at the desired device; let us assume, for the purpose of illustration, that he is aiming at device D1. The pointing device is equipped with a transmitter 6 for transmitting a trigger signal T1, T2, T to any of the devices D1, D2, D3 within range of the pointing device 1. The pointing device transmits each of these trigger signals T1, T2, T in turn.
A control signal 19, specifying which trigger signal to transmit, originates from a device control interface 18, shown in the figure as a separate block, but which might equally well be integrated entirely or in part in the pointing device 1 itself. A device list management unit 15 of the device control interface 18 contains a list of the devices D1, D2, D3 with which the pointing device 1 can interact, and any information concerning the visual identifier V1, V2, V3 and/or trigger signal T1, T2, T associated with each device D1, D2, D3. The constituent elements and the functionality of the device control interface 18 are described in more detail further below. In this example, the trigger signal T1 has been assigned to the device D1, and the trigger signal T2 has been assigned to the device D2. A further trigger signal T can be used to trigger one or more other devices.
The device D1 is equipped with a receiver 40, tuned to receive its trigger signal T1. Similarly, device D2 is also equipped with a receiver 41, tuned to receive the trigger signal T2. The receiver of the device D3 might be configured so that it receives any trigger signal T within the range of its bandwidth.
If the device D1 picks up the trigger signal T1, an activator 30 responds to the trigger signal T1 by causing the visual identifier V1 of the device D1 to be activated. In this example, the visual identifier V1 is a light source such as an LED, which is caused to blink or flash.
Similarly, the visual identifier V2 of the device D2 is caused to be activated by an activator 31 whenever the receiver 41 of the device D2 picks up the trigger signal T2. This visual identifier V2 in this example is a physical element attached in some way to the device D2, for example an infra-red identifier. Furthermore, this device D2 is equipped with a transmitter 35 for transmitting an end-of-feedback signal SEOF for the pointing device 1. A third device D3 is configured to respond to any trigger signal T. It is equipped with a suitable receiver 42 which causes an activator 32 to activate the visual identifier V3 of this device D3. In this example, the device D3 is a computer monitor for displaying elements of a user interface such as a desktop and windows, and the visual identifier V3 can be the usual type of window frame or menu item being caused to change its brightness, intensity, colour, or hue in response to the trigger signal T.
While the pointing device 1 is being aimed at a device D1, D2, D3, images of the area in front of the pointing device 1 are continually being generated and transmitted by a sending unit 4 to the device control interface 18, where they are received by a receiver 10. To identify the device D1, D2, D3 being pointed at, and to determine the user's intention in pointing at a particular part of the device D1, D2, D3, the images generated by the camera 2 of the pointing device 1 are analysed in an image analysis unit 12.
Before any control signals 17, 27 can be generated and sent to the device D1, D2, D3, the system 5 must first identify the device D1, D2, D3 which is being pointed at. To this end, the image analysis unit 12 can extract the information regarding the visual identifier V1, V2, V3 from the images, and can compare this information with that stored in a device list management unit 15. For example, if the visual identifier located in the images repeatedly increases and decreases in intensity within a certain length of time, and the appearance of the visual identifier as well as its behavioural pattern matches that stored in the device list management unit 15 for the visual identifier V1 of the device D1, the image analysis unit 12 deduces that the device pointed at must be device D1.

The steps of the decision-making process required to identify a device from among a number of devices are shown in Fig. 2. Here, the process commences at step 100, which might occur, for instance, after switching on the pointing device or reconfiguring the system. In step 101, a list of all devices is compiled. This list can contain an identifier for each device, such as an image template, and a description of the visual identifier and/or trigger signal associated with this device. In a following step 102, this list is sorted in order of decreasing probability of being aimed at by the pointing device. This step 102 ensures that no time is wasted in identifying a device, by starting with the device most often aimed at by the user and ending with the least often selected device. The system might avail of a suitable learning algorithm, and might update this list whenever necessary.
When the user aims the pointing device at a device to be controlled, the first entry in the list is examined in step 103. The pointing device transmits a suitable trigger signal in step 104, and the system waits for a response. The image analysis unit of the system analyses, in step 105, the target area images to determine whether they show a visual identifier. If, in step 106, a match is detected with the visual identifier of the list entry for the first device, identification of the device has been successful, and the process concludes in step 107.
However, if in step 106, no matching visual identifier has been detected, the process checks in step 111 to see if a predefined length of time, or timeout, has passed. If not, it is checked in step 110 to see if an end-of-feedback signal has been detected. If not, the process resumes at step 105 as described above. If in step 110 the end-of-feedback signal has indeed been detected, or if in step 111 the timeout has been reached, the process continues to step 109, where it is checked to see if the end of the list has been reached. If not, the next entry in the list is selected in step 108, and the process resumes at step 104. If the end of the list has indeed been reached, the process can resume at step 103, thus returning to the beginning of the list. It may become necessary at some point to update the list of devices and their associated visual identifiers and/or trigger signals in step 101. For example, a new device might be introduced, or a device might be removed from the group, or the rate of probable usage of the individual devices might have changed in the meantime, with a different device now taking over from the device originally at the top of the list. When any such condition arises, the process routine returns from step 109 to step 101 and resumes from there. A suitable monitor arrangement might detect such situations and cause the process routine to react accordingly.
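The loop of Fig. 2 lends itself to a compact summary in code. The following Python sketch mirrors steps 101 to 111; all of the callbacks (compile_list, transmit_trigger, detect_identifier and so on) are hypothetical stand-ins for the transmitter, the image analysis unit and the monitor arrangement, and are not part of the disclosure:

```python
import time

TIMEOUT_S = 1.0  # illustrative per-entry timeout (step 111)


def identify_device(compile_list, transmit_trigger, detect_identifier,
                    end_of_feedback_received, list_needs_update):
    """Cycle through candidate devices, most probable first (Fig. 2)."""
    while True:
        # Steps 101/102: compile the list and sort by usage probability.
        entries = sorted(compile_list(), key=lambda e: -e["probability"])
        for entry in entries:                              # steps 103/108
            transmit_trigger(entry["trigger_signal"])      # step 104
            deadline = time.monotonic() + TIMEOUT_S
            while time.monotonic() < deadline:             # step 111
                if detect_identifier(entry["visual_identifier"]):
                    return entry["device_id"]              # steps 106/107
                if end_of_feedback_received():             # step 110
                    break     # device already answered; try next entry
            if list_needs_update():
                break         # step 109 -> 101: recompile and restart
        # End of list reached (step 109): resume at the top of the list.
```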
Returning to Fig. 1, the step of cycling through the trigger signals T1, T2, T can be managed by the device list management unit 15, on the basis of the results of the image analysis as supplied by the image analysis unit 12. A suitable signal 19 is sent to the pointing device 1, causing it to send the next trigger signal T1, T2, T, as appropriate. At the same time as the identification process described above is in progress, or following this identification process, the images 3 generated by the camera 2 of the pointing device 1 can be interpreted to carry out the user's intentions. To this end, the image analysis unit 12 makes use of known image processing techniques such as pattern recognition to identify, from a number of templates, the template most closely matching the images 3, thus deducing the option at which the user is pointing. For example, the user might use the pointing device 1 to traverse a menu as displayed on the screen of the device D1, D2, D3, or he might intend to cause the device D1, D2, D3 to carry out a specific task, and use the pointing device 1 to initiate the task.
To this end, a control signal generation unit 13 uses the results of the image analysis to generate a control signal 17 for the device. An application interface 14 performs any necessary conversion to the control signal 17 before sending it in appropriate form 27 to the device D1, D2, D3 being pointed at.
Furthermore, an assigning unit 16, shown here as part of the device control interface 18, can be used to assign a particular visual identifier behaviour and/or a trigger signal description to a particular device D1, D2, D3. For example, the assigning unit 16 might specify that the visual identifier V1 for the device D1 should alternate colours between red and blue for a duration of five seconds, and should do this in response to a trigger signal T1, transmitted at a certain frequency. This information can be encoded as required by the interface 14 before being sent as a signal 27 to the device in question.
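Such an assignment might, for instance, be serialised as a small configuration message before being encoded by the interface 14; the structure below is purely a hypothetical illustration of the example just given (the field names and the 38 kHz carrier are assumptions of this sketch):

```python
# Hypothetical assignment message for device D1.
assignment = {
    "device": "D1",
    "visual_identifier": {
        "behaviour": "alternate",
        "colours": ["red", "blue"],
        "duration_s": 5.0,
    },
    "trigger_signal": {"id": "T1", "carrier_hz": 38_000},
}
```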
If the device control interface 18 is not incorporated in the pointing device, as shown in Fig. 1, the information transferred from the pointing device 1 to the device control interface 18 might be transmitted in a wireless manner, e.g. Bluetooth, 802.11b or mobile telephony standards. If the user carries his pointing device 1 on his person, the pointing device 1 might be connected to the device control interface 18 by means of a cable. The signals 27 sent from the device control interface 18 to the devices D1, D2, D3 might be sent over a cabled interface, or might also, as appropriate, be transmitted in a wireless manner.
The camera 2 might continually send images 3 to the device control interface 18, or might cease transmission automatically if the pointing device 1 is not moved for a certain length of time. To this end, the pointing device 1 might comprise a motion sensor, not shown in the diagram. Since the pointing device 1 is most likely powered by batteries, also not shown in the diagram, it is expedient to only transmit images 3 to the device control interface when required, for example when the user actually moves the pointing device 1, in order to prolong the lifetime of the batteries. Transmission of image data 3 might be initiated as soon as the user moves the pointing device, and might automatically cease thereafter.
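In outline, such a motion-gated transmission policy might be sketched as follows, where capture_frame, motion_detected and send are hypothetical stand-ins for the camera 2, the motion sensor and the sending unit 4:

```python
import time

IDLE_CUTOFF_S = 5.0    # stop transmitting after this much stillness
FRAME_PERIOD_S = 0.04  # ~25 frames per second, illustrative


def camera_loop(capture_frame, motion_detected, send):
    """Transmit images 3 only while the pointing device 1 is in
    motion, to spare the batteries."""
    last_motion = None
    while True:
        if motion_detected():
            last_motion = time.monotonic()
        if (last_motion is not None
                and time.monotonic() - last_motion < IDLE_CUTOFF_S):
            send(capture_frame())  # to the device control interface 18
        time.sleep(FRAME_PERIOD_S)
```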
The user will not always aim the pointing device at a device to be controlled from directly in front - it is more likely that the pointing device 1 will be aimed at a more or less oblique angle to the device, since it is often more convenient to aim the pointing device 1 than it is to change one's own position. This is illustrated in Fig. 3, which shows a schematic representation of a target area image 3 generated by a pointing device 1 aimed at the device D1, in this case a television screen or computer monitor, from a distance and at an oblique angle, so that the scale and perspective of the device D1 in the target area A appear distorted in the target area image 3. A number of options (M1, M2, M3) can be seen on the display of the device D1. The user, not shown in the diagram, may wish to select one of these options (M1, M2, M3) with the aid of the pointing device 1. Also visible in the display of the device D1 is the visual identifier V1 for this device D1, in this case a section of the display having a characteristic shape, which can increase or decrease in brightness. The pointing device 1 in this diagram is equipped with a source 51 of laser light, causing a beam of laser light L to impinge on the display at a point PL, thus assisting the user in aiming the pointing device 1.
Once the pointing device 1 has transmitted its trigger signal, the images generated by the camera are analysed to detect a visual identifier. In this example, the characteristic flashing or blinking of the section of display, giving the visual identifier V1, is detected, and the device is identified as being device D1. A simple analysis of the brightness level of the images, and comparison of the ensuing pattern with the information supplied by the device list management unit, is sufficient to identify the device D1.
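By way of illustration, such a brightness-based identification might be sketched as follows, assuming greyscale frames normalised to the range [0, 1]; the threshold and the bit-string pattern representation are arbitrary choices of this sketch:

```python
import numpy as np

BRIGHTNESS_THRESHOLD = 0.6  # identifier counts as "on" above this


def blink_pattern(frames):
    """Reduce successive greyscale frames to an on/off bit string."""
    return "".join(
        "1" if frame.mean() > BRIGHTNESS_THRESHOLD else "0"
        for frame in frames
    )


def match_device(frames, device_list):
    """Compare the observed blink pattern with the descriptions held
    by the device list management unit."""
    observed = blink_pattern(frames)
    for device_id, record in device_list.items():
        if record["visual_identifier"]["pattern"] in observed:
            return device_id
    return None
```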
Once the device D1 has been identified, the system can continue with interpreting the user's actions to control the device D1. To this end, the target area images 3 are examined in more detail. Regardless of the angle of the pointing device 1 with respect to the device D1, the target area image 3 is always centred around a target point PT. The laser light point PL also appears in the target area image 3, and may be a distance removed from the target point PT, or might coincide with the target point PT. The image processing unit of the device control interface compares the target area image 3 with pre-defined templates 50 to determine the option (M1, M2, M3) being pointed at by the user.
To this end, the point of intersection PT of the longitudinal axis of the pointing device 1 with the device D1 is located in the target area image 3. The point in the template 50 corresponding to the point of intersection PT can then be located.
Computer vision algorithms using edge- and corner-detection methods are applied to locate points [(xa', ya'), (xb', yb'), (xc', yc')] in the target area image 3 which correspond to points [(xa, ya), (xb, yb), (xc, yc)] in the template 50 of the device D1. Each point can be expressed as a vector, e.g. the point (xa, ya) can be expressed as the vector va. As a next step, a transformation function Tλ is developed to map the target area image 3 to the template 50, by minimising the cost function

$$f(\lambda) = \sum_i \left| T_\lambda(\vec{v}_i) - \vec{v}_i{}' \right|$$

where the vector $\vec{v}_i$ represents the coordinate pair (xi, yi) in the template 50, and the vector $\vec{v}_i{}'$ represents the corresponding coordinate pair (xi', yi') in the target area image 3. The parameter set λ, comprising parameters for rotation and translation of the image, yielding the most cost-effective solution to the function, can be applied to determine the position and orientation of the pointing device 1 with respect to the device D1. The computer vision algorithms make use of the fact that the camera 2 within the pointing device 1 is fixed and "looking" in the direction of the pointing gesture. The next step is to calculate the point of intersection PT of the longitudinal axis of the pointing device 1, in the direction of pointing P, with the device D1. This point may be taken to be the centre of the target area image 3. Once the coordinates of the point of intersection have been calculated, it is a simple matter to locate this point in the template 50. In this way, the system can determine the option (M1, M2, M3) at which the user is aiming, and can generate the appropriate control signal for the device D1.
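To make the minimisation concrete: restricting λ, for brevity, to a rotation angle and a translation (the disclosure leaves the exact parametrisation open), the cost function can be minimised with a standard least-squares routine. The point coordinates below are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Matched point pairs; coordinates invented for illustration.
template_pts = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 3.0]])  # v_i
observed_pts = np.array([[1.2, 0.9], [4.9, 2.1], [3.7, 5.6]])  # v_i'


def cost(lam):
    """f(lambda): summed distance between mapped template points and
    the points observed in the target area image."""
    theta, tx, ty = lam
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    mapped = template_pts @ rot.T + np.array([tx, ty])
    return np.linalg.norm(mapped - observed_pts, axis=1).sum()


result = minimize(cost, x0=np.zeros(3))  # lambda = (theta, tx, ty)
# With lambda known, the inverse of T_lambda maps the centre of the
# target area image (the target point PT) back into the template 50,
# where the chosen option (M1, M2, M3) can be looked up.
```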
Although the present invention has been disclosed in the form of preferred embodiments and variations thereon, it will be understood that numerous additional modifications and variations could be made thereto without departing from the scope of the invention. The pointing device can serve as the universal user interface device in the home or any other environment with electrically or electronically controllable devices. In short, it can be beneficial wherever the user can express an intention by pointing. Its small form factor and its convenient and intuitive pointing modality can elevate such a simple pointing device to a powerful universal remote control. As an alternative to the pen shape, the pointing device could for example also be a personal digital assistant (PDA) with a built-in camera, or a mobile phone with a built-in camera.
For the sake of clarity, it is also to be understood that the use of "a" or "an" throughout this application does not exclude a plurality, and "comprising" does not exclude other steps or elements. A "unit" may comprise a number of blocks or devices, unless explicitly described as a single entity.

Claims

1. A method for control of a device (D1, D2, D3), which method comprises the following process steps:
- aiming a pointing device (1) comprising a camera (2) at the device (D1, D2, D3) to be controlled;
- generating an image (3) of a target area (A) aimed at by the pointing device (1);
- interpreting the target area image (3) to determine the presence of a unique visual identifier (V1, V2, V3) associated with the device (D1, D2, D3) to be controlled;
- identifying the device (D1, D2, D3) at which the pointing device (1) is aimed on the basis of the target area image analysis; and
- sending a control signal (17) to the identified device (D1, D2, D3).
2. A method according to claim 1, wherein the target area image (3) is further interpreted to determine a chosen option (M1, M2, M3), and the control signal (17) is generated according to the results of the image interpretation.
3. A method according to claim 1 or claim 2, wherein a trigger signal (T1, T2, T) is transmitted to a device (D1, D2, D3) to be controlled, and a visual identifier (V1, V2, V3) is activated in response by the device (D1, D2, D3) that has received the trigger signal (T1, T2, T).
4. A method according to any of the preceding claims, wherein a unique trigger signal (T1, T2) is generated for each particular device (D1, D2).
5. A method according to any of the preceding claims, wherein the visual identifier (V1, V2, V3) is in the invisible region of the electromagnetic spectrum.
6. A method according to any of the preceding claims, wherein the device (D2) to be controlled emits an end-of-feedback signal (SEOF) in response to the trigger signal (T2).
7. A system (5) for controlling a device (D1, D2, D3), comprising:
- a pointing device (1) with a camera (2) for generating images (3) of a target area (A) in the direction (P) in which the pointing device (1) is aimed;
- an image analysis unit (12) for analysing the images (3) to determine the presence of a visual identifier (V1, V2, V3);
- a device list management unit (15) for maintaining a list of possible devices (D1, D2, D3) with their associated visual identifiers (V1, V2, V3) and/or associated trigger signals (T1, T2, T) for activating a visual identifier (V1, V2, V3);
- a control signal generation unit (13) for generating a control signal (17) for the device (D1, D2, D3) to be controlled according to the results of the image analysis; and
- an interface (14) for communicating the control signal (17) to the device (D1, D2, D3) to be controlled.
8. A system according to claim 7, comprising an assigning unit (16) for assigning a visual identifier (V1, V2) and/or a trigger signal (T1, T2) to a particular device (D1, D2).
9. A device (D1, D2, D3) for use with the method according to claims 1 to 6, comprising a unique visual identifier (V1, V2, V3) for uniquely identifying that device (D1, D2, D3) from among a number of devices (D1, D2, D3).
10. A controllable device (D1, D2, D3) comprising a receiver (40, 41, 42) for receiving a trigger signal (T1, T2, T) and an activator (30, 31, 32) for activating a visual identifier (V1, V2, V3) in response to the trigger signal (T1, T2, T).
11. A device (D2) according to claim 10, comprising a transmitter (35) for transmitting an end-of-feedback signal (SEOF) in response to the trigger signal (T2).
12. A device according to any of claims 9 to 11, wherein the visual identifier (V1, V2, V3) comprises an optical signal and/or a physical identifier.
PCT/IB2006/050164 2005-01-28 2006-01-17 Method for control of a device WO2006079939A2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP06710683A EP1844456A2 (en) 2005-01-28 2006-01-17 Method for control of a device
JP2007552764A JP2008529147A (en) 2005-01-28 2006-01-17 Device control method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP05100585 2005-01-28
EP05100585.8 2005-01-28

Publications (2)

Publication Number Publication Date
WO2006079939A2 true WO2006079939A2 (en) 2006-08-03
WO2006079939A3 WO2006079939A3 (en) 2006-11-16

Family

ID=36740876

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2006/050164 WO2006079939A2 (en) 2005-01-28 2006-01-17 Method for control of a device

Country Status (4)

Country Link
EP (1) EP1844456A2 (en)
JP (1) JP2008529147A (en)
CN (1) CN101111874A (en)
WO (1) WO2006079939A2 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102402842A (en) * 2010-09-15 2012-04-04 宏碁股份有限公司 Augmented reality remote control method and device thereof
US8446364B2 (en) * 2011-03-04 2013-05-21 Interphase Corporation Visual pairing in an interactive display system
CN104541580B (en) * 2012-08-16 2018-03-02 飞利浦灯具控股公司 The system that control includes one or more controllable devices
CN104582167B (en) * 2014-12-30 2017-08-25 生迪光电科技股份有限公司 Lighting apparatus, intelligent terminal, lighting apparatus group network system and method
CN106023578A (en) * 2016-07-14 2016-10-12 广州视源电子科技股份有限公司 Wearable equipment and control method of home equipment
US11210932B2 (en) * 2019-05-21 2021-12-28 Apple Inc. Discovery of and connection to remote devices

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000060534A1 (en) * 1999-03-31 2000-10-12 Koninklijke Philips Electronics N.V. Remote control for display apparatus
DE10110979A1 (en) * 2001-03-07 2002-09-26 Siemens Ag Optical pattern and information association device for universal remote-control device for audio-visual apparatus
WO2004047011A2 (en) * 2002-11-20 2004-06-03 Koninklijke Philips Electronics N.V. User interface system based on pointing device


Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2074867A2 (en) * 2006-10-12 2009-07-01 Koninklijke Philips Electronics N.V. System and method for light control
JP2015038750A (en) * 2007-09-07 2015-02-26 アップル インコーポレイテッド Gui applications for 3d remote controller
US9335912B2 (en) 2007-09-07 2016-05-10 Apple Inc. GUI applications for use with 3D remote controller
US10146298B2 (en) 2009-04-08 2018-12-04 Qualcomm Incorporated Enhanced handheld screen-sensing pointer
US9189082B2 (en) 2009-04-08 2015-11-17 Qualcomm Incorporated Enhanced handheld screen-sensing pointer
US11759827B2 (en) 2013-03-15 2023-09-19 United States Postal Service Systems, methods and devices for item processing
US10549319B2 (en) 2013-03-15 2020-02-04 United States Postal Service Systems, methods and devices for item processing
WO2016050708A1 (en) * 2014-09-29 2016-04-07 Koninklijke Philips N.V. Remote control device, user device and system thereof, and method, computer program product and identification signal
US10289213B2 (en) 2014-09-29 2019-05-14 Koninklijke Philips N.V. Remote control device, user device and system thereof, and method , computer program product and identification signal
US20180217677A1 (en) * 2014-09-29 2018-08-02 Koninklijke Philips N.V. Remote control device, user device and system thereof, and method , computer program product and identification signal
US9891879B2 (en) 2015-09-29 2018-02-13 International Business Machines Corporation Enabling proximity-aware visual identification
WO2021024238A1 (en) * 2019-08-08 2021-02-11 7hugs Labs SAS Supervised setup for control device with imager
US11445107B2 (en) 2019-08-08 2022-09-13 Qorvo Us, Inc. Supervised setup for control device with imager

Also Published As

Publication number Publication date
EP1844456A2 (en) 2007-10-17
JP2008529147A (en) 2008-07-31
WO2006079939A3 (en) 2006-11-16
CN101111874A (en) 2008-01-23

Similar Documents

Publication Publication Date Title
EP1844456A2 (en) Method for control of a device
US11561519B2 (en) Systems and methods of gestural interaction in a pervasive computing environment
US8190278B2 (en) Method for control of a device
US8994656B2 (en) Method of controlling a control point position on a command area and method for control of a device
EP2093650B1 (en) User interface system based on pointing device
KR101224351B1 (en) Method for locating an object associated with a device to be controlled and a method for controlling the device
US20090295595A1 (en) Method for control of a device
KR20170001435A (en) Mobile terminal capable of remotely controlling a plurality of device
US11184473B2 (en) Electronic apparatus and method of selectively applying security mode in mobile device
US7952063B2 (en) Method and system for operating a pointing device to control one or more properties of a plurality of other devices
CN108604404A (en) infrared remote control method, terminal and device
KR20170001434A (en) Mobile terminal capable of remotely controlling a plurality of device, and case for moblie terminal
KR20160054799A (en) Remote controll device and operating method thereof
US20170269697A1 (en) Under-wrist mounted gesturing
KR20160067706A (en) Object recognizing remote controller and remote control method using the same

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 2006710683

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 200680003345.6

Country of ref document: CN

Ref document number: 2007552764

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 3346/CHENP/2007

Country of ref document: IN

WWP Wipo information: published in national office

Ref document number: 2006710683

Country of ref document: EP