WO2007063447A2 - Method of driving an interactive system, and user interface system - Google Patents


Info

Publication number
WO2007063447A2
Authority
WO
WIPO (PCT)
Prior art keywords
user interface
stationary base
unit
input
portable user
Prior art date
Application number
PCT/IB2006/054356
Other languages
English (en)
Other versions
WO2007063447A3 (fr)
Inventor
Vasanth Philomin
Original Assignee
Philips Intellectual Property & Standards Gmbh
Koninklijke Philips Electronics N. V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Philips Intellectual Property & Standards Gmbh, Koninklijke Philips Electronics N. V. filed Critical Philips Intellectual Property & Standards Gmbh
Publication of WO2007063447A2
Publication of WO2007063447A3

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00 - Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16 - Constructional details or arrangements
    • G06F1/1613 - Constructional details or arrangements for portable computers
    • G06F1/1626 - Constructional details or arrangements for portable computers with a single-body enclosure integrating a flat display, e.g. Personal Digital Assistants [PDAs]
    • G06F1/1632 - External expansion units, e.g. docking stations
    • G06F1/1633 - Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1684 - Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
    • G06F1/1686 - Constructional details or arrangements related to integrated I/O peripherals, the I/O peripheral being an integrated camera
    • G06F1/1688 - Constructional details or arrangements related to integrated I/O peripherals, the I/O peripheral being integrated loudspeakers
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F3/02 - Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/0227 - Cooperation and interconnection of the input arrangement with other functional units of a computer
    • G06F3/023 - Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes

Definitions

  • the invention relates to a method of driving a dialog system comprising a number of stationary base units and a number of portable user interface units. Moreover, the invention relates to an appropriate user interface system and to a dialog system comprising such a user interface system.
  • Advanced user interface systems no longer rely on display, keyboard, mouse, or remote control, but implement more intuitive input/output modalities like speech, gestural input, etc.
  • With such interactive systems it is possible to use hands-free and even eyes-free interaction, for example in the home, in the car, or in a work environment.
  • Such a dialog system can offer a user an intuitive means of interacting with a variety of different applications, such as mailbox applications, home entertainment applications, etc. Due to technological trends towards networked environments and miniaturisation, dialog systems with various input/output modalities are set to become a part of everyday life.
  • An interactive device for use in such a dialog system can be made to resemble, for example, a human, animal, robot, or abstract figure. A typical example of such a dialog system is shown in DE 102 49 060 A1.
  • An input/output modality can be any hardware and, if required, software which allows a user to communicate with a dialog system by means of the input sensors featured by that system, such as microphone or camera, and/or output devices such as loudspeaker or display for command and information input and output.
  • An input/output modality can be, for example, speech recognition, face recognition, gesture recognition, etc.
  • an input/output modality uses an algorithm, usually comprising one or more computer software modules, to interpret input information and apply this to an algorithmic model in order to identify, for example, words that have been detected by a microphone or a face in an image generated by a camera.
  • Such an algorithmic model can be adaptable, for example in a training procedure, to suit the environment in which it is being used, or to suit the characteristics of the input sensors which deliver input information to the adaptable algorithmic model.
  • the algorithmic model thus adapted can be stored in a memory and retrieved whenever required.
  • Such a portable user interface unit can be, for example, a detachable "head" of an interactive device for attaching to a stationary base unit.
  • the portable user interface unit comprises at least one input sensor such as a microphone or camera, and at least one output element such as a loudspeaker or a display.
  • the user can communicate with the dialog system by means of the input sensors of the portable user interface unit and can receive feedback from the dialog system by means of the output elements of the portable user interface unit.
  • the stationary base unit is a unit which might generally be restricted to use within a particular environment, by being, for example, fixedly located in that environment. Because of the hardware required for performing operations such as speech recognition, face recognition, etc., these functions might often be implemented in the stationary base unit, or in a central unit connected to the different stationary base units of the dialog system, allowing the portable head unit to be simply detached and easily moved from one environment to the next.
  • the input/output modalities of a dialog system might be, for example, speech recognition, speech synthesis, face recognition, gesture recognition etc., with which the dialog system can interpret the user's input.
  • the quality of operation of such an input/output modality can be considerably influenced by, for example, microphone quality and room reverberation in the case of speech recognition, and camera properties and lighting conditions in the case of face and gesture recognition. Therefore, the quality of operation depends on the environment in which a stationary base unit is located, as well as on the characteristics of the input sensors of the portable head unit mounted on that stationary base unit.
  • The present invention provides a method of driving a dialog system comprising a number of stationary base units and a number of portable user interface units, any of which can be connected in a wired or wireless manner to any of the stationary base units. A user interacts with the dialog system through a portable user interface unit connected to a stationary base unit using an input/output modality, which input/output modality utilises an algorithmic model. A number of adaptable algorithmic models utilised by the input/output modality are assigned to the stationary base units and/or to the portable user interface units.
  • The stationary base unit and/or the portable user interface unit used by the user to interact with the dialog system is detected, and an adapted algorithmic model for the interaction is subsequently allocated to the input/output modality according to the stationary base unit and/or the portable user interface unit used by the user for interaction with the dialog system.
  • An obvious advantage of the method according to the invention is that the adaptable algorithmic model required by the input/output modality used by the portable user interface unit or stationary base unit is automatically implemented. Therefore, regardless of which portable user interface unit is used in connection with a stationary base unit in any environment, the method according to the invention ensures that the input/output modalities utilised in the interaction can avail of the corresponding algorithmic models, optimally adapted to the current constellation, thereby allowing the dialog system to operate with a high degree of robustness in the various environments. It is not necessary for a model to re-adapt to a new environment or input sensor each time the interactive system is used in a new constellation, instead, an adaptive model already adapted to a current constellation can be continually refined to suit this constellation.
  • a corresponding user interface system for a dialog system with a number of stationary base units and portable user interface units comprises an input/output modality using an adaptable model enabling communication between a user and the dialog system by a portable user interface unit connected to a stationary base unit.
  • a detection unit detects which stationary base unit and/or portable user interface unit is currently being utilised by the user to interact with the dialog system, and a memory means stores a number of adapted algorithmic models for the input/output modality, which adapted algorithmic models are each assigned to the stationary base units and/or to the portable user interface units.
  • An allocation unit allocates an adapted algorithmic model to the input/output modality according to the stationary base unit and/or the portable user interface unit utilised by the user in an interaction with the dialog system.
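The detection and allocation scheme described in the items above can be sketched in a few lines of Python. This is an illustrative sketch only: the class name `ModelAllocator`, its methods, and the "H2"/"B1" labels are assumptions chosen for the example, not terms taken from the patent.

```python
class ModelAllocator:
    """Stores adapted algorithmic models per stationary base unit and
    portable user interface unit, and hands out the matching model when
    the detection unit reports which units are in use."""

    def __init__(self):
        # key: (interface_unit_id, base_unit_id) -> {modality: model}
        self._adapted = {}

    def store(self, interface_id, base_id, modality, model):
        """Remember a model adapted to this interface/base combination."""
        self._adapted.setdefault((interface_id, base_id), {})[modality] = model

    def allocate(self, interface_id, base_id, modality):
        """Return the adapted model for the detected combination,
        or None if no adapted model is available yet."""
        return self._adapted.get((interface_id, base_id), {}).get(modality)


allocator = ModelAllocator()
# A speech model adapted to head unit H2 mounted on base unit B1:
allocator.store("H2", "B1", "speech_recognition", {"adapted_for": ("H2", "B1")})

# The detection unit reports the current combination; the allocation
# unit hands the matching model to the input/output modality:
model = allocator.allocate("H2", "B1", "speech_recognition")
print(model)
```

Keying the store on the (interface, base) pair mirrors the claim that models may be assigned to the base units, the interface units, or specific combinations of the two.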
  • the adaptation of the model is carried out to suit the environment.
  • adaptation is carried out to suit characteristics of the input sensors. Should all the stationary base units be used in the same environment, the models only depend on the input sensors of the various portable user interface units. On the other hand, if only one type of portable user interface unit is being used in the dialog system, each one being equipped with the same input sensors, then each portable user interface unit can simply avail of the same models which then only depend on the different stationary base units.
  • the adapted algorithmic models are assigned to different interface/base-combinations, where an interface/base combination comprises a specific portable user interface unit attached to a specific stationary base unit, and an adapted algorithmic model is allocated to the input/output modality according to the specific interface/base- combination used by the user to interact with the dialog system.
  • Which portable user interface unit is connected to which stationary base unit is determined by the detection unit of the user interface system. This can be determined automatically on connection, or some time later.
  • the corresponding models will be assigned to the input/output modalities available for that portable user interface unit.
  • an adapted algorithmic model which takes into consideration the microphone of a certain portable user interface unit and for the environment of the stationary base unit to which that portable user interface unit is attached can be allocated to that interface/base combination.
  • the adaptable algorithmic models might be stored locally, i.e. in a stationary base unit or in a portable user interface unit. This might be advantageous when a model is only dependent on a stationary base unit or a portable head unit.
  • the adapted algorithmic models are stored in a central database and retrieved and allocated to the input/output modality by a central model managing unit, or model manager, which can be realised, for example, as part of a dialog manager.
  • retrieval of the models can be effected by data transfer from the central database to the stationary base unit, for example by a cable connection such as a USB (universal serial bus) connection, or a WLAN (wireless local area network) connection.
  • a stationary base unit can also be realised in a very basic way, for example, with only a requisite connection to a power supply such as the mains power supply or a battery power source, if the input/output modalities are located either in a portable user interface unit or in a central unit.
  • an adapted algorithmic model assigned to a specific portable user interface unit or to an interface/base- combination comprising this portable user interface unit should preferably also be stored in a memory of the portable user interface unit or in the central unit as mentioned above.
  • It may be that a portable user interface unit or stationary base unit is used for the first time in the dialog system, that conditions in the environment of a stationary base unit have changed, or that a portable user interface unit has been equipped with new input sensor hardware.
  • an adapted algorithmic model for the relevant portable user interface unit or stationary base unit may be outdated or unavailable. Therefore, in a preferred embodiment of the invention, if there is no adapted algorithmic model available for a certain input/output modality of a portable user interface unit, a default algorithmic model for that input/output modality is assigned to the stationary base unit and/or to the portable user interface unit. This default model is then adapted to the particular environment, for example in a training process, and stored for further interactive sessions.
  • the model training can occur in the background, without actively involving the user, or the user might be required to perform specific actions, such as saying certain words or phrases, or standing at certain positions in the room, in order for the training to result in a robust adapted model for that environment.
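The default-model fallback and subsequent training described in the items above could be outlined as follows; the function `obtain_model` and its parameters are hypothetical names chosen for illustration, not the patent's terminology.

```python
def obtain_model(store, key, modality, default_model, train):
    """Return the adapted model for this unit or interface/base
    combination. If none exists yet, start from the default model,
    adapt it to the current environment (training may run in the
    background or involve the user), and store the result for
    further interactive sessions."""
    adapted = store.setdefault(key, {})
    if modality not in adapted:
        adapted[modality] = train(default_model)
    return adapted[modality]


store = {}
default = {"state": "default"}
# A hypothetical training step that adapts the default model:
adapt = lambda m: {**m, "state": "adapted"}

m = obtain_model(store, ("H3", "B2"), "speech_recognition", default, adapt)
print(m["state"])  # prints: adapted
```

On the next interactive session with the same combination, the stored adapted model is returned directly and no retraining takes place.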
  • Each portable user interface unit, stationary base unit, or interface/base combination can be associated with a number of input/output modalities, and therefore also a number of adapted algorithmic models.
  • a portable user interface unit, a stationary base unit, or a single interface/base combination might have an adapted algorithmic model for face recognition, another for speech recognition, etc.
  • these adapted algorithmic models are, in a further preferred embodiment of the invention, preferably grouped together in a suitable profile associated with the corresponding portable user interface unit, stationary base unit or interface/base-combination.
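The grouping of several per-modality models into one profile per interface/base combination might be represented like this; the dictionary layout and the placeholder model descriptions are assumptions made for the example.

```python
# One profile per interface/base combination, grouping all adapted
# models belonging to that combination's input/output modalities.
profiles = {
    ("H2", "B1"): {
        "speech_recognition": "speech model for H2's microphones in B1's room",
        "face_recognition": "face model for H2's camera in B1's lighting",
    },
    ("H1", "B3"): {
        "speech_recognition": "speech model for H1's microphone in B3's room",
    },
}

def models_for(interface_id, base_id):
    """Fetch the whole profile for the detected combination at once."""
    return profiles.get((interface_id, base_id), {})

print(sorted(models_for("H2", "B1")))
```

Retrieving the profile as a unit means a single detection event can equip every input/output modality of the combination in one step.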
  • a dialog system can comprise any number of stationary base units, any number of portable user interface units, and a user interface system as described above.
  • the stationary base units can be distributed in various different kinds of environments, and any of the portable user interface units can be attached to any of the stationary base units, as desired.
  • the elements of the user interface system can be divided among the stationary base units, portable user interface units, and, if required, an external model manager, as appropriate.
  • Fig. 1 is a schematic representation of a dialog system comprising a number of portable user interface units and stationary base units according to an embodiment of the invention
  • Fig. 2 shows a block diagram of a user interface system pursuant to a first embodiment of the invention
  • Fig. 3 shows a block diagram of a user interface system pursuant to a second embodiment of the invention
  • Fig. 4 shows a block diagram of a user interface system pursuant to a third embodiment of the invention.
  • A number of portable user interface units H1, H2, H3 of a dialog system are shown connected to a number of stationary base units B1, B2, B3, where each stationary base unit B1, B2, B3 is located in a separate environment, as indicated by the dashed lines.
  • only three combinations of interface/base units are shown, although any number of such combinations is possible.
  • A first interface/base combination, consisting of the portable user interface unit H2 and the stationary base unit B1 and therefore called "H2B1" in the following, is shown on the left.
  • The stationary base unit B1 is assigned to a first environment, and is installed, perhaps permanently, in that environment.
  • A user of the dialog system has placed the portable user interface unit H2 on that stationary base unit B1, at least for the time being.
  • Two other interface/base combinations "H1B3" and "H3B2" are shown, where the combination "H1B3" consists of the portable user interface unit H1 attached to the stationary base unit B3, and the combination "H3B2" consists of the portable user interface unit H3 attached to the stationary base unit B2.
  • Each interface/base combination of portable user interface units H1, H2, H3 and stationary base units B1, B2, B3 can avail of different input/output modalities and different hardware elements.
  • The portable user interface unit H2 features a camera 10, a display 11, a pair of microphones 12, and a loudspeaker 13.
  • The stationary base unit B1 to which the portable user interface unit is attached features a number of communication interfaces, here a USB interface 14 and a WLAN interface 15. Using these interfaces, the stationary base unit can communicate with a remote server, not shown in the diagram.
  • The other portable user interface units H1, H3 and stationary base units B2, B3 can avail of the same or similar input/output modalities and communication interfaces for communication with a remote server, as indicated in the diagram.
  • Any of the portable user interface units H1, H2, H3 can be used in conjunction with any of the stationary base units B1, B2, B3.
  • The portable user interface unit H2 might be removed from the stationary base unit B1 to which it is connected, and mounted instead onto either of the stationary base units B2, B3, or onto any other stationary base unit not shown in the diagram.
  • the information exchange between a portable user interface unit and a stationary base unit will be explained in detail with the aid of Fig. 2.
  • A user interface system 3 for a dialog system 1 is shown in relation to a user 2 and a number of applications A1, A2, ..., An, such as a mailbox application, home entertainment application, intelligent home management system, etc.
  • The portable user interface unit H2 and stationary base unit B1 are shown in an abstract representation by means of the dashed lines.
  • The user interface system 3 comprises a number of input/output modalities, which are incorporated in the portable user interface unit H2.
  • A speech-based input/output modality 200, in the form of a speech recognition arrangement 200, uses a microphone 20 on the input side for detecting speech input of the user 2.
  • The speech recognition arrangement 200 can comprise the usual speech recognition module and a following language understanding module, so that speech utterances of the user 2 can be converted into digital form.
  • On the output side, the speech-based input/output modality features a speech synthesis arrangement 210, which can comprise, for example, a language generation unit and a speech synthesis unit. The synthesised speech is then output to the user 2 by means of a loudspeaker 21.
  • a visual input/output modality 230 uses a camera 23 on the input side, and comprises an image analysis unit, here a face recognition unit 230, for processing the images generated by the camera 23.
  • an input/output modality comprises a display driver 220 for rendering visual output signals into a form suitable for displaying on a screen or display 22.
  • the operation of the input/output modalities 200, 210, 220, 230, as described above, depends on the models used, and therefore on the interface/base combination, particularly in the case of the speech recognition arrangement 200 and the face recognition unit 230.
  • a detection unit 4 determines which stationary base unit the portable user interface unit has been connected to.
  • the detection unit 4 informs an allocation unit 6, which can then retrieve the necessary adapted algorithmic models M 1 , M 2 from a memory 5 and allocate them to the appropriate input/output modalities 200, 210, 220, 230.
  • The adaptable algorithmic model M1 is a model for the user's speech, adapted to the environment in which the stationary base unit B1 is located and to the microphone of the portable user interface unit H2, so that the speech-based input/output modality 200 can successfully interpret utterances spoken by the user 2 in that environment.
  • The adaptable algorithmic model M2 is a model for the user's appearance and the properties of the visual sensor 23 of the portable user interface unit H2, so that the user 2 can be successfully recognised by the visual input/output modality 230 in the conditions prevalent in that environment.
  • The models M1, M2 are stored in the stationary base unit, and differ only in the characteristics of the various input sensors of the portable user interface unit currently mounted onto the stationary base unit.
  • A dialog manager 7 manages the interaction between the user 2 and the applications A1, A2, ..., An with which the user 2 can communicate in the dialog system 1. Such a dialog manager 7 analyses user input and issues appropriate instructions to the corresponding application, and deals with feedback or requests from the applications A1, A2, ..., An. All of the components of the input/output modalities mentioned here, such as speech recognition 200, speech synthesis 210, face recognition 230 and visual output 220, and the components of the dialog manager 7 and the required interfaces (not shown in the diagram) between the dialog manager 7 and the individual applications A1, A2, ..., An, are known to a person skilled in the art and will not therefore be described in more detail.
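The routing role of the dialog manager described above can be sketched as a plain dispatcher; the intent names and handler signatures are illustrative assumptions, not interfaces taken from the patent.

```python
class DialogManager:
    """Analyses interpreted user input, issues instructions to the
    corresponding application, and passes application feedback back
    towards the output modalities."""

    def __init__(self):
        self._applications = {}

    def register(self, intent, application):
        """Associate an application handler with a recognised intent."""
        self._applications[intent] = application

    def handle(self, intent, utterance):
        """Dispatch an interpreted utterance; return the application's
        feedback, or a fallback reply if no application matches."""
        app = self._applications.get(intent)
        if app is None:
            return "no application registered for this request"
        return app(utterance)


dm = DialogManager()
# A hypothetical mailbox application standing in for A1:
dm.register("check_mail", lambda text: "you have 2 new messages")

print(dm.handle("check_mail", "do I have mail?"))
```

The input/output modalities stay decoupled from the applications: recognition produces an intent, and the dialog manager alone decides which application receives it.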
  • the detection module 4, allocation module 6 and dialog manager 7 could either be part of the portable user interface unit or of the stationary base unit.
  • Fig. 3 shows a different realisation of the user interface system 3.
  • The user 2 has mounted the portable user interface unit H1 on a stationary base unit B3 which does not avail of any storage capacity.
  • The detection unit 4 determines the stationary base unit to which the portable user interface unit H1 is attached, and notes that this stationary base unit does not store any adaptable algorithmic models.
  • The portable user interface unit H1 in this case is equipped with a memory 5' from which the allocation unit 6 can retrieve the adaptable algorithmic models M1, M2 required for the input/output modalities, which are shown to be the same as those for the portable user interface unit H2 described above, but which need not necessarily be so.
  • The models M1, M2 are stored in the portable user interface unit H1, and differ only in the characteristics of the environment of the stationary base unit currently connected to the portable user interface unit.
  • The detection module 4, allocation module 6 and dialog manager 7 could preferably also be part of the portable user interface unit H1, so that the stationary base unit B3 need only be a sort of base with a power supply and a connector for receiving the portable user interface unit H1 and connecting to an external central unit.
  • A further realisation of the user interface system 3 is shown in Fig. 4.
  • The adaptable algorithmic models M, M1, M2, ..., Mn for the different environments of the stationary base units of the user interface system 3 are gathered in a profile manager 8 of a central unit 9, which also comprises the detection module 4, the allocation module 6 and the dialog manager 7, as well as the various input/output modalities 200, 210, 220, 230.
  • The stationary base units and the portable user interface units of the user interface system 3 do not necessarily need to be equipped with storage capabilities for storing adaptable algorithmic models, or it may be that some stationary base units and/or portable user interface units have such storage capabilities while others do not, so that the profile manager 8 manages the adaptable algorithmic models for those units not availing of storage capabilities.
  • The detection unit 4 determines which interface/base combination is being used, and informs the allocation unit 6.
  • The allocation unit 6 issues appropriate commands to the profile manager 8 in order to retrieve the required models and allocate them to the corresponding input/output modalities 200, 210, 220, 230.
  • It may be that the stationary base unit B2 to which the portable user interface unit H3 is attached does not yet avail of an adaptable algorithmic model for one of the input/output modalities of the portable user interface unit H3.
  • For example, the stationary base unit B2 is new, or has been relocated to a new environment, or conditions in its environment have changed, so that a new adaptable algorithmic model is required for speech recognition and/or face recognition.
  • In this case, a default algorithmic model M is retrieved from the profile manager 8 and allocated to the appropriate input/output modality. Thereafter, this default algorithmic model M can be trained in this environment for this stationary base unit B2, and then stored again in the profile manager 8, so that the next time this portable user interface unit H3 is attached to this particular stationary base unit B2, the adapted algorithmic model for the input/output modality is available.
  • A “unit” or “module” can comprise a number of units or modules, unless otherwise stated.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • User Interface Of Digital Computer (AREA)
  • Stored Programmes (AREA)

Abstract

The invention relates to a method of driving a dialog system (1) comprising a number of stationary base units (B1, B2, B3) and a number of portable user interface units (H1, H2, H3), any of which can be connected to any of the stationary base units (B1, B2, B3). According to the method of the invention, a user (2) interacts with the dialog system (1) through a portable user interface unit (H1, H2, H3) connected to a stationary base unit (B1, B2, B3) by means of an input/output modality (200, 210, 220, 230). The input/output modality (200, 210, 220, 230) utilises an algorithmic model, with a number of adaptable algorithmic models (M, M1, M2, ..., Mn) being assigned to the stationary base units (B1, B2, B3) and/or to the portable user interface units. The stationary base unit (B1, B2, B3) and/or the portable user interface unit (H1, H2, H3) used by the user (2) to interact with the dialog system (1) is detected, and an adaptable algorithmic model (M, M1, M2, ..., Mn) for the interaction is allocated to the input/output modality (200, 210, 220, 230) according to the stationary base unit (B1, B2, B3) and/or the portable user interface unit (H1, H2, H3) used by the user (2) in the interaction with the dialog system (1). The invention further relates to a corresponding user interface system (3) and to a dialog system (1) comprising such a user interface system (3).
PCT/IB2006/054356 2005-11-30 2006-11-21 Procede de commande de systeme interactif et systeme d'interface utilisateur WO2007063447A2 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP05111480.9 2005-11-30
EP05111480 2005-11-30

Publications (2)

Publication Number Publication Date
WO2007063447A2 true WO2007063447A2 (fr) 2007-06-07
WO2007063447A3 WO2007063447A3 (fr) 2008-02-14

Family

ID=38092644

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2006/054356 WO2007063447A2 (fr) 2005-11-30 2006-11-21 Procede de commande de systeme interactif et systeme d'interface utilisateur

Country Status (2)

Country Link
TW (1) TW200802035A (fr)
WO (1) WO2007063447A2 (fr)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003096171A1 (fr) * 2002-05-14 2003-11-20 Philips Intellectual Property & Standards Gmbh Commande a dialogue pour un appareil electrique
US20040235463A1 (en) * 2003-05-19 2004-11-25 France Telecom Wireless system having a dynamically configured multimodal user interface based on user preferences


Also Published As

Publication number Publication date
WO2007063447A3 (fr) 2008-02-14
TW200802035A (en) 2008-01-01

Similar Documents

Publication Publication Date Title
EP2778865B1 (fr) Procédé de commande d'entrée et dispositif électronique le supportant
CN104049732B (zh) 多输入控制方法和系统和支持该方法和系统的电子装置
US6988070B2 (en) Voice control system for operating home electrical appliances
US11615792B2 (en) Artificial intelligence-based appliance control apparatus and appliance controlling system including the same
US6052666A (en) Vocal identification of devices in a home environment
US10595380B2 (en) Lighting wall control with virtual assistant
CN106023995A (zh) 一种语音识别方法及运用该方法的穿戴式语音控制设备
CN109062468B (zh) 分屏显示方法、装置、存储介质和电子设备
CN108231077A (zh) 油烟机的语音控制方法及系统、服务器、智能终端、计算机可读存储介质
JP2022500682A (ja) スマートデバイスの、効率的で低レイテンシである自動アシスタント制御
CN105609122A (zh) 终端设备的控制方法及装置
CN109754795A (zh) 接近感知语音代理
CN106708265A (zh) 带有语音及手势识别的空气管理系统
CN110737335A (zh) 机器人的交互方法、装置、电子设备及存储介质
CN109756825A (zh) 智能个人助理的位置分类
CN104423538A (zh) 一种信息处理方法及装置
US20100223548A1 (en) Method for introducing interaction pattern and application functionalities
CN111417924A (zh) 电子设备及其控制方法
CN102033578A (zh) 一体机系统
CN114613362A (zh) 设备控制方法和装置、电子设备和介质
WO2007063447A2 (fr) Procede de commande de systeme interactif et systeme d'interface utilisateur
CN107391015A (zh) 一种智能平板的控制方法、装置、设备及存储介质
KR20170133989A (ko) 음성 인식 기능을 구비한 전자칠판 및 전자칠판시스템, 이를 이용한 전자칠판의 모드 변환 방법
CN101546474B (zh) 遥控器以及其系统
CN114005431A (zh) 语音系统的配置方法、装置、设备以及可读存储介质

Legal Events

Date Code Title Description
NENP Non-entry into the national phase in:

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 06821515

Country of ref document: EP

Kind code of ref document: A2

122 Ep: pct application non-entry in european phase

Ref document number: 06821515

Country of ref document: EP

Kind code of ref document: A2