WO2015011217A1 - User-interface using rfid-tags or voice as input - Google Patents


Info

Publication number
WO2015011217A1
WO2015011217A1 (PCT/EP2014/065876)
Authority
WO
WIPO (PCT)
Prior art keywords
user
unit
communications device
message
module
Application number
PCT/EP2014/065876
Other languages
French (fr)
Inventor
Raquel NAVARRO PRIETO
Anna Karolina HILTUNEN
Original Assignee
Telefonica Digital España, S.L.U.
Application filed by Telefonica Digital España, S.L.U. filed Critical Telefonica Digital España, S.L.U.
Publication of WO2015011217A1 publication Critical patent/WO2015011217A1/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02 Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/0202 Constructional details or processes of manufacture of the input device
    • G06F3/0219 Special purpose keyboards
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/02 Constructional features of telephone sets
    • H04M1/23 Construction or mounting of dials or of equivalent devices; Means for facilitating the use thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/26 Devices for calling a subscriber
    • H04M1/27 Devices whereby a plurality of signals may be stored simultaneously
    • H04M1/271 Devices whereby a plurality of signals may be stored simultaneously controlled by voice recognition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/72409 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories
    • H04M1/72412 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories using two-way short-range wireless interfaces
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M2250/00 Details of telephonic subscriber devices
    • H04M2250/04 Details of telephonic subscriber devices including near field communication means, e.g. RFID
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M2250/00 Details of telephonic subscriber devices
    • H04M2250/74 Details of telephonic subscriber devices with voice recognition means

Definitions

  • the present invention refers to a communications device, to a method for providing adaptive communications and to a computer program storage medium thereof.
  • the communication device comprises a tangible (or adaptive) interface and a system that optimizes communication channels via translation to the user.
  • the system integrates three key elements in order to provide the adaptation to the communication channels, the content translation and the physical to digital interface.
  • the adaptation is done with a unit that connects the device with third parties communication providers and provides an internal unique interface to the system.
  • the content translation is done by an algorithm that adjusts the translation from text to voice/ voice to text and selects the best channel for the output/ input of text based communications and services.
  • the physical to digital interface consists of buttons, a gesture interaction interface, RFID based input and output units (content cards) updatable with contents that function as a command between audio and direct manipulation interactions and their related content. However, more implementations could be also possible.
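As an illustration of the physical-to-digital interface just described, the sketch below shows how a decoded RFID/NFC content card could be mapped to a system command. The tag IDs, command names, and contact/URL values are illustrative assumptions, not taken from the patent.

```python
# Hypothetical mapping from RFID/NFC content cards to system commands.
# All tag IDs and command names here are made up for illustration.
CARD_COMMANDS = {
    "tag:0001": ("listen_messages", None),
    "tag:0002": ("send_message", "contact:maria"),
    "tag:0003": ("read_web_page", "https://example.org/weather"),
}

def decode_card(tag_id: str):
    """Translate a scanned tag ID into a (command, argument) pair."""
    try:
        return CARD_COMMANDS[tag_id]
    except KeyError:
        # Unknown cards yield a no-op command rather than an error.
        return ("unknown", None)

print(decode_card("tag:0001"))  # ('listen_messages', None)
print(decode_card("tag:9999"))  # ('unknown', None)
```

The point of the table-driven design is that cards remain "updatable with contents": re-associating a tag ID with a new command only changes data, not code.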
  • User Context includes, for example, demographic variables, social influence and personal factors such as age and functional ability.
  • Social influence is the prevalent external variable and therefore depicted as a module in the user context.
  • Perceived usefulness is defined as 'the extent to which a person believes that using the system will enhance his or her job performance'.
  • Confirmed usefulness is the usefulness of the person's phone to him or her - composed of the features he or she is able to learn to use.
  • Text-based communications are surpassing traditional phone calls or meeting face to face as the most frequent ways of keeping in touch for UK adults (http://stakeholders.ofcom.org.uk/market-data-research/market-data/communications-market-reports/cmr12/).
  • the mobile handset accessibility features for the visual impaired are shown in the following table:
  • Nuance TALKS&ZOOMS for Symbian OS provides Text-to-speech and Large-print for blind/low-vision users (www.nuance.com/for-individuals/by-solution/talks-zooms/index.htm)
  • Voice commands for interaction technology refer to advanced speech recognition software that will help to undertake basic communication using a mobile phone. These voice commands are used for working on computers and cell phones for placing calls, writing text messages, composing documents, opening and closing applications, making calendar entries and setting reminders, playing music and videos, and surfing the web.
  • Captioned speech relay services (captioned telephony or CapTel service) (Figure 3) translate real-time conversation into captions and are useful for people who can communicate orally but have difficulty hearing.
  • Real-time captioning provides both voice and text forms of conversations. Users of these services need a CapTel telephone as well as a captioning service.
  • Easy Chirp is an example of technologies that make internet-based communication accessible. It is a web-accessible Twitter application first built in early 2009. It is designed to be easy to use and is optimized for users with disabilities. Easy Chirp works well with assistive technology such as screen readers. It also works for keyboard-only users, older mobile devices, older desktop browsers such as IE7, low-bandwidth internet connections, and even without JavaScript. Further, TweetSpeak33 is a speaking Twitter tool for Android that allows blind users to do searches on Twitter.
  • Nostalgia is a prototype that users can use for listening to old news and music from the twentieth century.
  • the design of Nostalgia is an attempt to design an artefact that in a seamless and simple way can trigger the memory of past events both individually and in the company of others.
  • a device for addressing the challenges elderly users encounter when using computers is also disclosed.
  • said device with few control components and a simple, comfortable, and intuitive hardware interface provides current information, music, and entertainment for the elderly.
  • the invention provides a communications device that has an adaptive user interaction interface and multiple application channels.
  • the communications device, such as a radio-like user interface preferably embedded in a radio set, includes: - a user interaction module having a user command identification unit, said user command identification unit having at least a physical object interaction unit and/or a voice command unit;
  • sending communications' module configured for selecting said application channels to send messages to third parties of a user based on user profiles and/or preferences
  • a receiving communications' module configured to process information received from said third parties through several input application channels and to output a processed output to said user through one or more selected output application channels based on said user profiles and/or preferences.
  • the information received from several input application channels is selected among messages from third parties and web pages content.
  • the application channels preferably comprise messaging channels selected from the group of email, Short Message Service and web applications capable of receiving messages or voice channels.
  • the physical object interaction unit includes a physical slot configured for receiving a physical card or key, for instance an RFID or a NFC card.
  • the physical object interaction unit may also include at least one button, switch or wheel.
  • the physical object interaction unit can be configured for triggering a direct communications event when inserted, contacted or in close range with said communications device and for sending a message to a predetermined user contact and/or for receiving all messages from the user's contacts.
  • the physical object interaction unit can be associated with predetermined web page location identification, preferably its URL.
  • the communications device also comprises a unit for selecting at least a time to output a processed output to said user.
  • the receiving communications' module has a radio broadcast interrupting unit that is configured to defer the output of information to the user to a later, optimal time if said radio set is receiving a radio broadcast.
  • the broadcast interrupting unit may be configured to be overridden by user commands, and the output information may be immediately notified to the user if said radio broadcast is off.
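The interrupting unit's decision rule described above can be sketched as follows; the boolean radio-status and override flags are simplified assumptions about how the unit receives its inputs:

```python
# Minimal sketch of the broadcast interrupting unit's decision:
# defer notifications while the radio is broadcasting, unless the
# user has overridden the deferral; notify immediately otherwise.

def notify_or_defer(radio_on: bool, user_override: bool) -> str:
    """Return 'notify_now' or 'defer' per the rules described above."""
    if radio_on and not user_override:
        return "defer"
    return "notify_now"

print(notify_or_defer(radio_on=True, user_override=False))   # defer
print(notify_or_defer(radio_on=False, user_override=False))  # notify_now
```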
  • the communications device which is connected to a mobile telephone network, including at least WiFi, 2G, 3G or LTE networks, also comprises a user identification and preferences module having a user identification unit.
  • the user identification unit has a dynamic user profile configured for controlling parameters of said processed output to the user, where said parameters can include speed mode, language and volume.
  • the dynamic user profile is configured for receiving inputs from a user database, where said user database contains information relating to the user's age, history of use, cognitive, visual or hearing user conditions.
  • the receiving communications' module further comprises a message memory unit configured for receiving messages, preferably from the user's contacts.
  • a second object of the invention is a physical object comprising a circuitry configured for interacting with the physical object interaction unit, said physical object preferably comprising a key or a card.
  • a third object of the invention is a method for providing adaptive communications. Said method comprising the steps of: a user giving a command by manipulating a physical object or giving a voice command, said physical object being associated with a communications device, upon said command being given, a sending communications' module selects application channels for sending messages to third parties based on user profiles and/or preferences, and receiving third party information through several input application channels and output a processed output to the user through one or more selected output application channels based on said user profiles and/or preferences.
  • the third party information received from several input application channels is selected among messages from third parties and web pages content.
  • step a) is carried out by the user associating a listen-to-messages card or key with said communications device. Furthermore, step a) can be carried out by the user doing a physical action that is associated with accessing a predetermined web page or App.
  • step a) is carried out by associating a send- message-to-a-predetermined-contact card key with said communications device.
  • step c) includes transforming text information into voice and output said voice to said user.
  • a last object of the invention is a computer storage medium coupled to a communications device according to the first object of the invention and including instructions that when executed by said communications device causes the device to perform operations comprising the steps of any of the steps of the method of the third object of the invention.
  • Figure 1 illustrates a senior technology acceptance and adoption model.
  • Figure 2 is a phone with big letters.
  • Figure 3 is an example of a captioned speech relay service.
  • Figure 4 illustrates the general architecture of the communications device, according to several embodiments.
  • Figure 4 illustrates the receive message flowchart.
  • Figure 5 is a flowchart illustrating proposed steps when a user sends a message.
  • Figure 6 is a flowchart illustrating a request for information from the internet, in this case for a weather application.
  • Figure 8 illustrates the identification and preferences module.
  • Figure 9 is a more detailed illustration of the architecture of the proposed communications device (or Radio.me).
  • Figures 10, 11 and 12 are other flowcharts illustrating in more detail the proposed steps for the receive message process, the send message process, and the web content access process, respectively.
  • the architecture is composed of the following modules:
  • Identification and preferences module 400 which is responsible for all the information about the end user and the personalization of the user preferences based on the knowledge from the user.
  • User Interaction module 100 which is responsible for understanding the user's commands and orders about what s/he wants to do.
  • Receiving module 300 which is responsible for processing of information and messages that the system receives from outside the system.
  • Communications from the user module 600 which is responsible for recognizing the user's voice messages, recording them and translating them to text.
  • Communications sent by end user module or sending communications' module 200 is responsible for selecting, preferably based on the user preferences, the channel through which the receiver of the intended communication will receive the message; namely, email, SMS, Facebook®, WhatsApp®, or any other application that receives messages.
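A hedged sketch of this per-contact channel selection, assuming preferences are held as a plain mapping; the contact names, channel labels, and the fallback default are illustrative assumptions:

```python
# Illustrative sketch of the sending communications' module 200:
# pick the output channel for a contact from stored preferences,
# falling back to an assumed default channel.

DEFAULT_CHANNEL = "sms"  # assumption: SMS as the fallback channel

def select_channel(contact: str, preferences: dict) -> str:
    """Select the channel used to reach a given contact."""
    return preferences.get(contact, DEFAULT_CHANNEL)

prefs = {"maria": "whatsapp", "john": "email"}
print(select_channel("maria", prefs))  # whatsapp
print(select_channel("ana", prefs))    # sms
```

In the system described, these preferences would be fed by the Dynamic user profile rather than hard-coded.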
  • Each module is composed of several units. The core unit of each module, without which the module will not work, is highlighted in bold. Inside each module all the units are connected among them. In Figure 4, only the most relevant relationships among units and modules have been portrayed. The following is the description of each module's units and their interrelation with other units.
  • the Identification and preferences module 400 includes the User Identification module 401, which identifies which user is using the system at any point in time. The identification could happen using different methods, primarily by voice recognition technology, but other variations include fingerprint identification and an NFC card with a password. This module also contains a module that is able to detect whether the radio is on or off, the Radio status identification 407, which sends this information to the Unit selecting when to interrupt the broadcast in the Receiving module 300, so that this module can take decisions only if the radio is on. In the case that the radio broadcast is off, the Unit selecting when to interrupt broadcast 301 (or radio broadcast interrupting unit) is not called and the Confirmation and notification unit 104 immediately notifies the user that a message has been received.
  • a User database 500, although it stores the data in the cloud, is also part of this module. It stores the information about the user's password and the characteristics entered the first time the system is used, such as the age and the phone numbers of the contacts.
  • the user database 500 also receives information from the User interaction module 100 about the user's usage of the system, including the number of communications, the chosen way of sending a message to a particular contact, the selected volume of the radio broadcast, etc. All this information, and in particular all the updates from the database 500, is transmitted to the Dynamic user profile 402, which processes all the information from the database 500 and is in charge of the customization and adaptation that is at the core of the proposed system.
  • the key inputs for the adaptation are the age of the user, the previous selections of controls and the previous selections of how to send a message to a particular contact. The higher the age, the higher the volume of the speaker and the lower the speed of the speech when transmitting a message; the volume and speed of the confirmation messages follow the same pattern.
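The age-driven adaptation rule can be sketched as below; the exact thresholds and scaling factors are assumptions chosen for illustration, not values from the patent:

```python
# Hedged sketch of the dynamic user profile rule described above:
# the higher the user's age, the higher the speaker volume and the
# lower the speech rate. Threshold (age 50) and units are assumed.

def adapt_output(age: int) -> dict:
    """Derive volume (0-100) and speech rate (words/min) from age."""
    volume = min(100, 50 + max(0, age - 50))       # louder past age 50
    speech_rate = max(90, 160 - max(0, age - 50))  # slower past age 50
    return {"volume": volume, "speech_rate": speech_rate}

print(adapt_output(80))  # {'volume': 80, 'speech_rate': 130}
print(adapt_output(40))  # {'volume': 50, 'speech_rate': 160}
```

In the described system, these parameters would be recomputed whenever the Dynamic user profile 402 receives updates from the user database 500.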
  • the language can either be configured in the initial set-up or be picked up by the Communication from end user module 600. Special needs inputs are not required all the time but refer to the cases when the user has particular needs, such as being blind, so that all interactions need to happen by voice command.
  • the User Interaction module 100's main unit is the Command identification unit 101, which identifies all the actions from the users, both physical and voice, and understands what sequence of actions needs to be performed.
  • the basic categories of actions are either input actions (like a message to be sent by the user) or output actions (like reading the messages the end user has received from his or her contacts).
  • the Command identification unit 101 has the capability to differentiate between these two categories in order to unfold different types of actions in the system, depending on the interaction that it receives from the end user: either physical, from the Physical Card (or object) interaction unit 102, or voice, from the Voice command unit 103.
  • the Physical Card interaction unit 102 is the unit that decodes the input from the physical cards via its NFC receptor. Other alternatives to the NFC interactions are Smart card technology or QR codes.
  • the Voice command unit 103 comes into action when the Command Identification Unit 101 interprets that an input/output action has been initiated in the system by the user.
  • this unit communicates with the Communication from end user module 600 to transmit the order to activate that module, so that it starts to record and translate the voice of the user.
  • this recording and translation action ends when the user gives the system the voice command "stop message".
  • once this command is recorded by the Communication from end user module 600, this module sends the message to the Voice command unit 103, which communicates this action to the Command Identification Unit 101.
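The record-until-"stop message" behaviour can be sketched as a small loop; the word-by-word input stream and function name are illustrative stand-ins for the real audio pipeline:

```python
# Sketch of the record-and-translate flow described above: accumulate
# recognized speech until the "stop message" voice command arrives.
# Treating input as a word stream is a simplifying assumption.

def record_message(words) -> str:
    """Accumulate spoken words until the 'stop message' command."""
    recorded = []
    for word in words:
        if word == "stop message":
            break  # the stop command itself is not part of the message
        recorded.append(word)
    return " ".join(recorded)

print(record_message(["hello", "maria", "stop message", "ignored"]))
# hello maria
```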
  • the Confirmation and notification unit 104 is responsible for providing feedback after two types of actions occur. For the first type, when the user performs a command (e.g. a command to send a message), this unit reads to the user a message indicating that the system has sent the message. The second type of action is the messages incoming from the user's contacts.
  • these messages could be signalled by a light, a specific sound or even an eInk display.
  • the receiving module 300 will send a command to this unit to switch on a light indicating that a message has been received.
  • Alternative methods to indicate to the user that a message has been received could be a specific sound or an eInk display.
  • the Communication from end user module 600 is primarily responsible for translating speech to text, in case this is the preferred output option for the message from the user. Alternatively, it could record a voice message and send it as such to the Communication sent by end user module 200.
  • the Communication from end user module 600 is activated by the Voice command unit 103 and also tells this unit when the message has finished.
  • the Communication sent by the end user unit 200 is a critical element of the system because it is responsible for deciding the channel through which the system will send the message to the user's contacts.
  • this unit takes two types of inputs: a) the input from the Command Identification Unit 101, triggered when the user wants to send a message (and in some alternative instantiations it may also select the channel by direct interaction); b) the input from the Dynamic user profiles (DUP) 402, which defines the preferred output channel (namely, SMS, email, Facebook, etc.) for a particular contact.
  • DUP Dynamic user profiles
  • the Receiving module 300 is responsible for receiving the messages from the user's contacts.
  • the key piece of this module is the Message memory unit 302.
  • the technology for receiving the messages would be for instance 2G and 3G connectivity provided through a 2G/3G receptor.
  • Another alternative for the reception of messages is WiFi connectivity, among others.
  • This module 300 is directly connected with the personalized content unit 303, which identifies whether the message sender is a particular contact, a newsgroup to which the user has subscribed, a Twitter account or any other social media network.
  • Other alternative senders may be advertising companies that may send messages to the user based on the information in the user profile.
  • the Unit selecting when to interrupt the broadcast 301 gathers the information about whether the radio is on or not from the Radio status identification 407, as described above.
  • the last unit of this module is the text to voice unit 304, which translates the text form in which the message is received into a voice form using text-to-voice algorithms. Alternatively, the text is not translated into voice but displayed on a LED display or sent to a connected printer.
  • the invention encompasses four distinct core functionalities/features:
  • a sender wants to send a message to the proposed communication device (or Radio.me, as it will be called from now on), and does it via the tools and channels they usually use.
  • the system receives the message after which the system selects the channel and device to contact the user to receive the message via the dynamic user profile 402 and the sending communications' module 200.
  • the system releases a "received message" notification command, and the command identification unit 101 tells the Radio.me to pulsate a light notification of the received message. So, a light pulsates on the Radio.me device to indicate a new received message and the total number of messages saved in the system.
  • the user does a physical/direct interaction and inserts the "listen messages" card into the Radio.me RFID card reader slot.
  • the command identification unit 101 recognizes the command action and releases audio feedback of the inserted card and an automatic sequence of events for user interaction with the radio, enabling the user to listen and reply to message(s).
  • the system starts reading the messages to the users via the text to voice unit 304, starting from the most recent one.
  • the memory unit 302 releases messages from the memory, whereas the speed mode unit 406 controls the volume, the time between messages, and the speed and repetition of the messages.
  • the system releases an audio notification of end of message.
  • the voice command unit 103 releases the next audio command prompt: "Reply to the message?" and the system waits for user audio interaction: "Yes" or "No". If the user replies "Yes", the voice interaction sequence follows the same flow as B. USER SENDS A MESSAGE. If the user replies "No", the system moves to the next event and asks the user: "Save the message?". If the user replies "Yes", the message is saved to the memory unit 302.
  • the system keeps reading the messages in chronological order from the memory unit until the user takes out the RFID "listen messages” card from the slot.
  • the user does a physical/ direct interaction and takes out the card from the slot.
  • the system releases an audio feedback of a removed card.
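The listen-messages flow above (read newest first, optionally save each message, stop when the card is removed) can be sketched as follows; the two callbacks are stand-ins for the card-slot sensor and the voice dialogue, and the message structure is assumed:

```python
# Illustrative walk-through of the "listen messages" card flow:
# messages are read newest-first, each can be saved, and reading
# stops as soon as the card leaves the slot.

def listen_messages(messages, card_inserted, answer_save):
    """Return the messages the user chose to save while the card was in."""
    saved = []
    # Most recent message first, as described in the flow above.
    for msg in sorted(messages, key=lambda m: m["time"], reverse=True):
        if not card_inserted():
            break  # card removed: stop reading
        if answer_save(msg):  # stand-in for the "Save the message?" prompt
            saved.append(msg)
    return saved

msgs = [{"time": 1, "text": "hi"}, {"time": 2, "text": "news"}]
kept = listen_messages(msgs, card_inserted=lambda: True,
                       answer_save=lambda m: m["text"] == "news")
print([m["text"] for m in kept])  # ['news']
```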
  • the user wants to send a message to one of their contacts, so the user does a physical/direct interaction and inserts the "send message to X" card into the Radio.me RFID card reader slot.
  • the command identification unit 101 recognizes the command action and releases audio feedback of the inserted card and an automatic sequence of events for the user to interact with the radio, enabling the user to listen and reply to message(s).
  • the user may use other means to signal that he wants to send a message like voice command or a push button.
  • the Voice command unit 103 starts the voice interaction with the audio message "starting the message" to the user.
  • Other alternative ways to signal it would be by lights or sounds.
  • ASR Automatic Speech Recognition
  • the user's speech is passed to the speech-to-text unit 600, which translates the user message into a text format.
  • the message may not be sent to the ASR unit 600 but may be kept for sending in its audio format.
  • the recording of the user messages stops when the user says the sounds corresponding to the auditory command of "end of message".
  • Other user actions to finish the message may include direct action like using a card or pressing a button.
  • When the ASR unit 600 detects the "end of message" command, it sends a command to the voice command unit 103 to be ready for the next action of the user regarding the command to send the message.
  • this direct action triggers the user database 500 to send a command to the sending communications' module 200, which selects the best channel of communication for that contact (namely, SMS, WhatsApp®, email or any other textual or verbal communication channel).
  • the selection of the channel is made taking into account the variables described in the system diagram.
  • the message gets sent through a 2G/3G connection, or any other.
  • the user and receiver get feedback notification of the message sent and received respectively.
  • This message is sent by the Confirmation and notification unit 104 and could be an auditory message, using lights or sounds.
  • the example described in the diagram refers to the case when the card is associated with a weather page address; the system therefore connects to the web page and reads the content information in that web page, selecting the text information in the page using the algorithms already existing for blind people.
  • the Text to voice unit 304 converts the text to voice and reads the text in an auditory form following the same procedure as when the system reads an incoming message from a contact.
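A toy sketch of this web-content-reading path: a card associated with a URL triggers fetching the page, extracting its readable text, and handing it to text-to-speech. The tag-stripping extractor below is a crude stand-in for the screen-reader-style text-selection algorithms the text refers to:

```python
import re

# Assumed stand-in for the text-selection step: strip markup and
# collapse whitespace so the Text to voice unit can read the result.
def extract_readable_text(html: str) -> str:
    """Strip tags and collapse whitespace for TTS delivery."""
    text = re.sub(r"<[^>]+>", " ", html)   # replace every tag with a space
    return re.sub(r"\s+", " ", text).strip()

page = "<html><body><h1>Weather</h1><p>Sunny, 21 C.</p></body></html>"
print(extract_readable_text(page))  # Weather Sunny, 21 C.
```

A real implementation would need a proper HTML parser and content-extraction heuristics; this only illustrates where such a step sits in the flow.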
  • This functionality refers to the capability of the Dynamic user profile 402 to personalize different variables of how the audio content, both from the Confirmation and notification unit 104 and from the Text to voice unit 304, is presented in an audio form to the user.
  • the core variables that can be personalized are: the volume of the audio, the speed of the audio and the language of the messages. However, other variables may be added.
  • - User Interaction Unit (or module) 100 is responsible for allowing user physical interaction with Radio.me.
  • Command Identification Unit 101 is responsible for adapting actions in User Interaction Unit 100 to the logic of the system and vice versa.
  • Command Identification Unit 101 is also responsible for delivering the messages in the format chosen by the user and for guiding the user through the communication process.
  • - Core unit 802 is responsible for orchestrating the interaction among all Radio.me units and triggering the changes of status required in each step of the process.
  • - Admin Unit 803 is responsible for user's identification and contains user's phone address book and other settings.
  • Dynamic User Profile Unit 402 is responsible for personalizing the Radio.me experience to the user's characteristics based on her/his usage pattern.
  • - Memory Unit 906, which is a storage space for the communications interchanged between the user and her/his contacts.
  • - Communication Integration Unit (or sending communications' module) 200 is responsible for adapting the Core Unit 802 communication interface to all the third party communication providers supported by Radio.me and vice versa (namely, email, SMS, Facebook®, WhatsApp®, or any other application that receives messages).
  • - Internet Content Adapter Unit 908 is responsible for adapting the content of web pages to the Core Unit 802 content interface.
  • - Broadcast radio 910 is a module with an AM/FM radio connected to the User Interaction Unit 100.
  • User Interaction Unit 100 identifies user interactions (physical, gesture, press buttons, voice, etc.) and prompts the user to undertake actions.
  • the basic categories of actions are either input actions (like a message to be sent from the user) or output actions (like reading a message received or reading a web page).
  • the User Interaction Unit 100 has the capability to differentiate between these two categories in order to unfold different types of actions in the system, depending on the interaction that it receives from the user.
  • in the case of physical actions, the user inserts a physical card into the slot of the User Interaction Unit 100 to start a communication process or a web page content reading.
  • the User Interaction Unit 100 decodes the input from the physical card with its NFC receptor and sends a notification to the Command Identification Unit 101.
  • Other alternatives to the NFC interactions are Smart card technology, QR codes or by clicking buttons.
  • User Interaction Unit 100 is connected to the broadcast radio 910 and can determine volume and tuning adjustments.
  • Command Identification Unit 101 receives/sends notifications from/to User Interaction Unit 100 and Core Unit 802 and releases commands/requests to guide the user in Radio.me usage. It has interfaces to the ASR 600 for interpreting the user's voice commands and to the Text to voice (or TTS) unit 304, audio player 902 or other module for delivering the message or web page content to the user. Command Identification Unit 101 handles the corpus of the message captured in the sending process and the delivery of received messages and web page content.
  • Core Unit 802 is the brain of Radio.me. It controls and triggers the state changes in all the processes and has interfaces to the rest of the units.
  • Core Unit 802 starts the reception process, notifies the event to the Command Identification Unit 101, saves the message in the Memory Unit 906, queries the Admin Unit 803 about the user's delivery preferences and requests the Command Identification Unit 101 to handle the message accordingly.
  • Core Unit 802 also notifies the Memory Unit 906 whether the message, once read, has to be kept or deleted. Once the message has been read, Core Unit 802 goes back to the idle state.
  • When Core Unit 802 receives a notification from Command Identification Unit 101 about actions undertaken by the user to send a message, Core Unit 802 starts up the sending process. Once the user has chosen the recipient and the message is ready (audio note, text as a result of the ASR, or other format), Core Unit 802 queries the Admin Unit 803 in order to get information about the default third party communication provider linked to the recipient. After that, Core Unit 802 moves the message to the Communication Integration Unit 200, which adapts it to the proper communication channel and sends it via the radio modem module.
  • Core Unit 802 starts up the web page reading process.
  • Core Unit 802 queries admin unit 803 in order to get information about the web page the user wants to read. After that Core Unit 802 notifies the request to the Internet Content Adapter Unit 908 that retrieves the information required by the user. Once available, Core Unit 802 notifies the event to Command Identification Unit 101 for the delivery to the user.
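The behaviour of the Core Unit 802 described above (controlling processes and returning to idle once each one finishes) could be sketched as a small state machine. The state and event names below are assumptions inferred from the description, not taken from the original design:

```python
# Illustrative state machine for the Core Unit 802. States and events
# are hypothetical labels for the processes named in the text.
TRANSITIONS = {
    ("idle", "message_received"): "receiving",
    ("receiving", "message_delivered"): "idle",
    ("idle", "user_sends"): "sending",
    ("sending", "message_sent"): "idle",
    ("idle", "web_request"): "reading_web",
    ("reading_web", "content_delivered"): "idle",
}

class CoreUnit:
    """Tracks the current process; returns to idle when it ends."""
    def __init__(self):
        self.state = "idle"

    def on(self, event):
        # Unknown events in the current state leave the state unchanged.
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state
```

A reception, for instance, would move the unit from idle to receiving and back to idle once the message has been delivered to the user.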
  • Admin Unit 803 includes features like user's identification module as well as storage functionality (in local or in the cloud) for user's personal information and for her/his phone address book.
  • Admin Unit 803 enables the user's identification using different methods, primarily voice recognition technology, but other variations could include fingerprint identification and an NFC card with a password.
  • Admin Unit 803 also stores the user's password and other information requested from the user at first use (age, volume of the broadcast radio, language, speed of audio messages, type of interaction - voice/text, etc.). This set-up configuration could be made by an operator or a family member of the user.
  • Part of the data stored in Admin Unit 803 can be updated with the recommendations made by the Dynamic User Profile 402.
  • Admin Unit 803 contains the user's phone address book and the preferences about which third party communication provider to select when sending a message to a particular contact, and which delivery format to use for a message received from a particular contact. Admin Unit 803 also contains information about the web pages the user is able to read. All this information, when required, is retrieved by Core Unit 802 and shared with Command Identification Unit 101, Communication Integration Unit 200 and Internet Content Adapter Unit 908.
  • Admin Unit 803 could be implemented locally in the Radio.me device or in the cloud.
  • Dynamic User Profile 402 creates recommendations to better adapt Radio.me features to the user. It collects information about the usage, including number of communications, web pages accessed, age of the user, format and third party communication provider chosen to send a message to a particular contact, selected volume of the radio broadcast, etc. Dynamic User Profile 402 also receives all the manual updates made in the Admin Unit 803.
  • Dynamic User Profile 402 analyses and processes these data to create recommendations about the usage of Radio.me. These recommendations update the user's data in the Admin Unit 803.
  • the key input for the adaptation is the age of the user: the higher the age, the higher the volume of the speaker and the lower the speed of the speech. Volume and speed of auditory commands follow the same pattern.
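The age-driven adaptation described above could be sketched as a simple mapping; the age thresholds and the step sizes below are hypothetical, chosen only to illustrate the rule "the higher the age, the higher the volume and the lower the speech speed":

```python
def adapt_output(age, base_volume=5, base_rate=1.0):
    """Illustrative adaptation rule: older users get a louder speaker
    and slower speech. Thresholds and steps are assumptions, not
    values from the original system."""
    if age >= 80:
        return base_volume + 3, base_rate * 0.6
    if age >= 65:
        return base_volume + 2, base_rate * 0.75
    if age >= 50:
        return base_volume + 1, base_rate * 0.9
    return base_volume, base_rate
```

The same pair of values would also be applied to the auditory commands, which the text says follow the same pattern.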
  • the language can either be configured in the initial set up or could be picked up once messages are sent or received.
  • Memory Unit 906 is the local Radio.me storage space dedicated to the messages received and sent.
  • Memory Unit 906 or part of it could be implemented in the cloud maintaining the same features.
  • Memory Unit 906 is addressed by the Core Unit 802 each time a message is created or received and the user wishes to save it.
  • Communication Integration Unit 200 is responsible for adapting messages sent to/received from the different third party communication providers supported by Radio.me.
  • Communication Integration Unit 200 provides a single interface to the Core Unit 802 and interfaces toward the third party communication providers supported by Radio.me.
  • the interface offered by the Communication Integration Unit 200 to the Core Unit 802 could have the following methods (assuming a REST implementation as an example):
  • Parameters between brackets are variables to be fulfilled on each API method execution.
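The method table itself is not reproduced in this text, so the following is only a hypothetical sketch of what such a REST interface could look like; the route names and paths are assumptions, while the bracketed variables follow the convention stated above:

```python
# Hypothetical REST routes for the Communication Integration Unit 200.
# Bracketed parameters are variables to be filled on each execution.
ROUTES = {
    "send_message": "POST /message/{provider}/{recipient}",
    "fetch_inbox": "GET /messages/{provider}",
    "list_providers": "GET /providers",
}

def bind(route, **params):
    """Fill the bracketed variables of a route template."""
    return route.format(**params)
```

For example, sending a message via SMS to a contact would bind the `send_message` template with `provider="sms"` and the recipient's identifier.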
  • Core Unit 802 delivers it to the Communication Integration Unit 200 specifying which third party communication provider to be used and the Communication Integration Unit 200 adapts and sends the message.
  • Internet Content Adapter Unit 908 is responsible for adapting the content of web pages to the Core Unit 802 content interface.
  • Fetch content GET: /content{contentProvider}
  • Core Unit 802 notifies Internet Content Adapter Unit 908 that the user wants to read the information of a specific web page.
  • Internet Content Adapter Unit 908 connects to the web page and retrieves the content required by the user. This content is adapted to be handled by the Core Unit 802, which afterwards makes it available to the Command Identification Unit 101 for delivery to the user.
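The adaptation step could be sketched as follows; this is a minimal illustration using the Python standard library (`html.parser`), reducing a page to plain text so it can be passed to the TTS. A real adapter would also fetch the page over the network and handle audio or video content; only the text adaptation is shown:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Minimal sketch of the content adaptation step: strip a web
    page down to plain text for delivery through the TTS."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        # Keep only non-empty text nodes.
        if data.strip():
            self.chunks.append(data.strip())

def adapt_page(raw_html):
    parser = TextExtractor()
    parser.feed(raw_html)
    return " ".join(parser.chunks)
```
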
  • Radio.me has several interfaces toward third party communication providers and web content providers, which makes it a device fully integrated in the internet scenario. To overcome the barrier of introducing a new device, Radio.me is provided with a "traditional" broadcast AM/FM radio whose functionality is fully integrated with the rest of the features offered to the user.
  • A sender, block 1001, wants to send a message to a Radio.me user and does it via the tools and channels s/he usually uses.
  • Radio.me, block 1002, receives the message via one of the third party communication providers integrated in Radio.me.
  • This message gets into the Communication Integration Unit 200 through the corresponding communication channel and is adapted to the Core Unit 802 interface.
  • Core Unit 802 queries the Admin Unit 803, block 1003, in order to retrieve user preference for message delivery and stores the message in the Memory Unit 906.
  • Core Unit 802 also triggers, block 1004, the Command Unit 101 to activate a notification of the received message in the User Interaction Unit 100.
  • A notification could be a light on Radio.me or an auditory message, among others. In the example of a notification by light, it blinks on the device to indicate a new available message to read and, in some implementations, the total number of messages saved in the system. The event of a new incoming message also lowers the volume of the broadcast radio 910 (if it is on) as an additional notification to the user.
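The notification behaviour just described (blink a light, duck the broadcast volume, restore it afterwards) could be sketched as follows; the class and the volume step are illustrative assumptions:

```python
class RadioNotifier:
    """Sketch of the incoming-message notification: blink a light and
    lower the broadcast volume, restoring it after delivery.
    The step of 4 volume units is an arbitrary illustrative choice."""
    def __init__(self, volume=7):
        self.volume = volume
        self._saved = None
        self.light_on = False

    def notify_incoming(self):
        self.light_on = True
        if self._saved is None:
            self._saved = self.volume
            self.volume = max(1, self.volume - 4)

    def delivery_done(self):
        self.light_on = False
        if self._saved is not None:
            self.volume = self._saved
            self._saved = None
```
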
  • When the user decides to retrieve the message, s/he interacts with the User Interaction Unit 100, block 1005. This interaction could be implemented, according to an embodiment, for example by inserting a "listen messages" card in the Radio.me RFID card reader slot, by pressing buttons on Radio.me, or by voice, gestures, etc.
  • The Command Unit 101 recognizes the user action and releases a feedback, block 1006.
  • the Core Unit 802 queries the Admin Unit 803 to select the format to present the message to the user. If the message received is a text and the delivery format is voice the system starts reading the messages to the user via the Text to Voice engine (TTS) 304 starting from the most recent one. If the message received is an audio note the system plays it. Other formats could be possible.
  • The Memory Unit 906 releases the messages, whereas the Admin Unit 803 controls the volume, time between messages, speed and repetition of the messages. Once the message ends, the Command Unit 101 releases a notification and asks the user whether s/he wants to reply to the message read, block 1009.
  • One implementation of this part of the process could be made by an audio notification of end of message and afterwards an audio command prompt: "Reply to the message?" waiting for user audio interaction: "Yes” or "No".
  • the system keeps delivering the messages in chronological order from the Memory Unit 906 until the user undertakes an action to interrupt it or until messages are all delivered.
  • A possible implementation of the delivery process interruption could be taking out the RFID "listen messages" card from the slot in the User Interaction Unit 100, making gestures, or giving voice commands. In any case the Command Unit releases a feedback of delivery interrupted or delivery finished and, if the broadcast radio is on, the volume returns to the same level as before message reception.
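The chronological delivery loop described in the preceding points could be sketched as follows; the message record format and the interruption callback are assumptions made for illustration:

```python
def deliver_messages(messages, interrupted=lambda: False):
    """Deliver saved messages in chronological order until the user
    interrupts (e.g. removes the 'listen messages' card) or the queue
    is exhausted. Returns the texts actually delivered; appending to
    the list stands in for TTS playback."""
    delivered = []
    for msg in sorted(messages, key=lambda m: m["timestamp"]):
        if interrupted():
            break
        delivered.append(msg["text"])
    return delivered
```
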
  • Figure 11 illustrates a more detailed flowchart of the send message process.
  • the user wants to send a message to one of her/his contacts.
  • The user interacts with the User Interaction Unit 100, block 1020. This interaction could be implemented, according to an embodiment, for example by inserting a "send message" card in the Radio.me RFID card reader slot or, alternatively, by pressing buttons on Radio.me, or by voice, gestures, etc.
  • the Command Unit 101 recognizes the user action and releases a feedback.
  • the Core Unit 802 starts up the sending process.
  • Other implementations are also possible.
  • Command Unit 101 asks Core Unit 802 to query Admin Unit 803 in order to retrieve information about the chosen contact, such as the preferred third party communication provider through which to send the message and the format in which to send it (e.g. text, audio), block 1022. Once this information is available, Command Unit 101 releases a notification inviting the user to start recording the message, block 1023. If the delivery format is text, the ASR 600 translates the user message into text. If the delivery format is audio, no further action is required from the ASR 600. Other delivery formats could be included. In any case the message is saved in the Memory Unit 906.
  • The recording of the user message ends when the user undertakes the corresponding action, block 1024.
  • This action could be an auditory command of "end of message", sending card removal, gestures, pressing a button, etc.
  • The Core Unit 802 moves the message to the Communication Integration Unit 200, which adapts it, block 1026, to the proper third party communication provider for that specific contact (e.g. SMS, WhatsApp®, email or any other textual or verbal communication channel) and sends it, block 1027.
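This per-provider adaptation could be sketched as a small adapter registry; the registry name, the adapter functions and the output record format are hypothetical, illustrating only the pattern of selecting a provider-specific formatter:

```python
# Hypothetical adapter registry: each third party provider gets a
# small formatting function. The 160-character cut for SMS mirrors
# the classic single-SMS limit and is an illustrative assumption.
ADAPTERS = {
    "sms": lambda to, body: {"channel": "sms", "to": to, "text": body[:160]},
    "email": lambda to, body: {"channel": "email", "to": to, "body": body},
}

def adapt_and_send(provider, recipient, body, outbox):
    """Adapt the message to the contact's preferred provider and queue
    it for sending; `outbox` stands in for the radio modem module."""
    outbox.append(ADAPTERS[provider](recipient, body))
```
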
  • Figure 12 illustrates a more detailed flowchart of the web content access process.
  • The functionality is triggered by the user undertaking a proper action, block 1030, for example inserting a "web content" card in the RFID card reader slot of the User Interaction Unit 100, or by voice, buttons, gestures, etc.
  • The Command Unit 101, block 1031, interprets this action as a command to access a particular web page and releases a feedback. If the interaction is made by voice, the Command Unit 101 invokes the ASR 600 for interpreting the vocal command given by the user. Command Unit 101 notifies it to Core Unit 802, which starts up the web page connection process. Then Core Unit 802 queries the Admin Unit 803, block 1032, in order to retrieve information about the web page the user wants to access. Once this information is available, the Core Unit 802 asks the Internet Content Adapter Unit 908, block 1033, to connect with the selected web page.
  • Internet Content Adapter Unit 908 adapts it, block 1034, to the Core Unit 802 content interface and Core Unit 802 gets the information from the web page.
  • This information could be text (using the algorithms already existing for blind people), audio or videos. Further formats could be included.
  • Core Unit 802 also triggers the Command Unit 101 to activate a notification of the available web page content in the User Interaction Unit 100, block 1035.
  • This notification could be a light on Radio.me or an auditory message, etc. In the example of a notification by light, it blinks on the device to indicate new available web content to read.
  • This interaction could be implemented by pressing buttons on Radio.me or by voice, gestures, etc.
  • The Command Unit 101 enables the TTS 304 to convert the text to voice and to allow the user to listen to the web page content. If the content is audio or video, it is played.
  • the system keeps delivering the web content, block 1037, until the user undertakes an action to interrupt it or until it is finished.
  • A possible implementation of the delivery process interruption could be taking out the RFID "web content" card from the slot in the User Interaction Unit 100, making gestures, giving voice commands, or other means. In any case the Command Unit 101 releases a feedback of delivery interrupted or delivery finished and, if the broadcast radio is on, the volume returns to the same level as before web content delivery.

Abstract

A communications device has an adaptive user interaction interface and multiple application channels, including: a user interaction module (100) having a user command identification unit (101) that has at least a physical object interaction unit (102) and/or a voice command unit (103); a sending communications' module (200) configured for selecting said application channels to send messages to third parties of a user based on user profiles and/or preferences; a receiving communications' module (300) configured to process information received from said third parties through several input application channels and to output a processed output to said user through one or more selected output application channels based on said user profiles and/or preferences, wherein said information received from several input application channels is selected among messages from third parties and web pages content.

Description

USER-INTERFACE USING RFID-TAGS OR VOICE AS INPUT
Field of the art
The present invention refers to a communications device, to a method for providing adaptive communications and to a computer program storage medium thereof. The communication device comprises a tangible (or adaptive) interface and a system that optimizes communication channels via translation to the user.
The system integrates three key elements in order to provide the adaptation to the communication channels, the content translation and the physical-to-digital interface. The adaptation is done with a unit that connects the device with third party communication providers and provides a single internal interface to the system. The content translation is done by an algorithm that adjusts the translation from text to voice/voice to text and selects the best channel for the output/input of text-based communications and services. The physical-to-digital interface consists of buttons, a gesture interaction interface, and RFID-based input and output units (content cards) updatable with contents, which function as a command between audio and direct manipulation interactions and their related content. However, other implementations could also be possible.
Prior State of the Art
People with special needs or with problems understanding digital communications beyond voice calls (seniors, the illiterate) cannot benefit from the advantages of text-based communications and services that provide information in written form (e.g. Twitter, newsletters).
Seniors' problems with current communication technologies:
Previous work on the factors that determine the usage and acceptance of technology by seniors has identified the variables that explain whether or not a technology, and in particular communication technologies (i.e. the mobile phone), would be used by them. These factors include not only the psychophysiological and cognitive impairments (also included in User Context in Figure 1) [1]:
User Context comprises demographic variables, social influence and personal factors such as age and functional ability, for example. Social influence is the prevalent external variable and is therefore depicted as a module in the user context. Perceived usefulness is defined as 'the extent to which a person believes that using the system will enhance his or her job performance'.
Intention to Use is influenced by perceived usefulness and also by user context. Experimentation and Exploration, which is the module where the user first starts using the technology and forms first impressions of the ease of use.
Confirmed usefulness is the usefulness of the person's phone to him or her - composed of the features he or she is able to learn to use.
Actual use is indirectly predicted by the outcome of the experimentation, which leads to ease of learning & use.
- Facilitating conditions and the consequent ease of learning & use predict actual use.
Finally, acceptance or rejection is predicted by ease of learning & use and actual use, with the former more strongly influencing acceptance.
- Problems due to Illiteracy:
According to the EU (http://ec.europa.eu/education/literacy/about/adults/index_en.htm), tackling poor adult literacy effectively has the potential to transform lives. Some 75 million adults across Europe suffer from low qualification levels, which makes it very hard for these people to use text-based communication. Text-based communications are surpassing traditional phone calls or meeting face to face as the most frequent ways of keeping in touch for UK adults (http://stakeholders.ofcom.org.uk/market-data-research/market-data/communications-market-reports/cmr12/).
Youth literacy (http://huebler.blogspot.com.es/2007/07/disparity-between-adult- and-youth.html) is generally higher than adult literacy, due to increasing levels of school attendance over the past decades.
In order to try and solve the above existing problems (seniors, illiterate) the applicant knows about the following types of existing technology:
1. Phones with big letters but limited functionality.
2. Text-to-speech technologies not adapted to the user's needs or segmenting them in any way.
3. Voice commands for interaction.
4. Services doing speech to text (such as the Siri application on the iPhone) but not segmenting or adapting it to users.
5. Technologies to make internet-based communication accessible, but which only deal with one type of communication channel.
Regarding the above technologies, the phones with big letters (Figure 2) but low functionality solve the problem of visual impairment through a screen reader. This is software that translates and converts information displayed on the screen into speech, non-speech sounds and Braille for a Braille display. The newest generations of touchscreen phones come with gesture-based screen readers.
The mobile handset accessibility features for the visual impaired are shown in the following table:
(Table reproduced as an image in the original publication.)
Text-to-speech technologies not adapted to the user's needs or segmenting users in any way refer to the ability to convert displayed electronic text into speech, which removes the anxiety associated with reading contact names, caller ID, messages, emails, instructions/directions, textbooks and much more. Examples are:
Mobile Speak for Symbian and Windows Mobile provides Text-to-speech as well as Braille device plugin support (www.codefactory.es/en/products.aspid=318)
Nuance TALKS&ZOOMS for Symbian OS provides Text-to-speech and Large-print for blind/low-vision users (www.nuance.com/for-individuals/by-solution/talks-zooms/index.htm)
Voice commands for interaction technology refer to advanced speech recognition software that will help to undertake basic communication using a mobile phone. These voice commands are used for working on computers and cell phones for placing calls, writing text messages, composing documents, opening and closing applications, making calendar entries and setting reminders, playing music and videos, and surfing the web.
Examples of devices with voice commands have been included in the following table:
(Table reproduced as an image in the original publication.)
The services doing speech to text but not segmenting or adapting it to users refer mainly to captioned speech relay services (captioned telephony or CapTel service) (Figure 3), which translate real-time conversation into captions and are useful for people who can communicate orally but have difficulty hearing. Real-time captioning provides both voice and text forms of conversations. Users of these services need a CapTel telephone as well as a captioning service.
Easy Chirp is an example of technologies to make internet-based communication accessible. It is a web-accessible Twitter application first built in early 2009. It is designed to be easy to use and is optimized for users with disabilities. Easy Chirp works well with assistive technology such as screen readers. It also works for keyboard-only users, older mobile devices, older desktop browsers such as IE7, low-bandwidth internet connections, and even without JavaScript. Further, TweetSpeak33 is a speaking Twitter tool for Android that allows blind users to do searches on Twitter.
Apart from that, another related technology providing communication tools for elderly users is Nostalgia [2], a prototype that users can use for listening to old news and music from the twentieth century. The design of Nostalgia is an attempt to design an artefact that in a seamless and simple way can trigger the memory of past events, both individually and in the company of others.
Moreover, in [3] a device for addressing the challenges elderly users encounter when using computers is also disclosed. In this case, said device, with few control components and a simple, comfortable, and intuitive hardware interface provides current information, music, and entertainment for the elderly.
References:
[1] Renaud, K., van Biljon, J. Predicting Technology Acceptance and Adoption by the Elderly: A Qualitative Study. SAICSIT 2008, 6-8 October 2008.
[2] Magnus Nilsson et al. Nostalgia: An Evocative Tangible Interface for Elderly Users. HCI/Interaction Design, IT University of Göteborg.
[3] Bruce R. Lerario. Simple Web Interface for the Elderly.
Summary of the Invention
According to a first object, the invention provides a communications device that has an adaptive user interaction interface and multiple application channels. In a characteristic manner, the communications device, such as a radio-like user interface, preferably embedded in a radio set, includes:
- a user interaction module having a user command identification unit, said user command identification unit having at least a physical object interaction unit and/or a voice command unit;
- a sending communications' module configured for selecting said application channels to send messages to third parties of a user based on user profiles and/or preferences; and
- a receiving communications' module configured to process information received from said third parties through several input application channels and to output a processed output to said user through one or more selected output application channels based on said user profiles and/or preferences.
In the communication device, the information received from several input application channels is selected among messages from third parties and web pages content.
The application channels preferably comprise messaging channels selected from the group of email, Short Message Service and web applications capable of receiving messages or voice channels.
According to an embodiment, the physical object interaction unit includes a physical slot configured for receiving a physical card or key, for instance an RFID or an NFC card. In addition, the physical object interaction unit may also include at least one button, switch or wheel. The physical object interaction unit can be configured for triggering a direct communications event when inserted, contacted or in close range with said communications device, and for sending a message to a predetermined user contact and/or for receiving all messages from the user's contacts. In addition, the physical object interaction unit can be associated with a predetermined web page location identification, preferably its URL.
According to an embodiment, the communications device also comprises a unit for selecting at least a time to output a processed output to said user.
The receiving communications' module has a radio broadcast interrupting unit that is configured to defer the output of information to the user to a later time if said radio set is receiving a radio broadcast. The broadcast interrupting unit may be configured to be overridden by user commands, and the output information may be immediately notified to the user if said radio broadcast is off.
According to another embodiment, the communications device, which is connected to a mobile telephone network, including at least WiFi, 2G, 3G or LTE networks, also comprises a user identification and preferences module having a user identification unit. The user identification unit has a dynamic user profile configured for controlling parameters of said processed output to the user, where said parameters can include speed mode, language and volume. Moreover, the dynamic user profile is configured for receiving inputs from a user database, where said user database contains information relating to the user's age, history of use, cognitive, visual or hearing user conditions.
According to another embodiment, the receiving communications' module further comprises a message memory unit configured for receiving messages, preferably from the user's contacts.
A second object of the invention is a physical object comprising a circuitry configured for interacting with the physical object interaction unit, said physical object preferably comprising a key or a card.
A third object of the invention is a method for providing adaptive communications. Said method comprises the steps of: a) a user giving a command by manipulating a physical object or giving a voice command, said physical object being associated with a communications device; b) upon said command being given, a sending communications' module selecting application channels for sending messages to third parties based on user profiles and/or preferences; and c) receiving third party information through several input application channels and outputting a processed output to the user through one or more selected output application channels based on said user profiles and/or preferences. The third party information received from several input application channels is selected among messages from third parties and web pages content.
According to an embodiment, step a) is carried out by the user associating a listen-to-messages card or key with said communications device. Furthermore, step a) can be carried out by the user doing a physical action that is associated with accessing a predetermined web page or App.
According to another embodiment, step a) is carried out by associating a send-message-to-a-predetermined-contact card or key with said communications device.
According to an embodiment, step c) includes transforming text information into voice and output said voice to said user.
In the method, said association is carried out by inserting said cards or keys into said communications device, contacting it, or bringing them into close range with it, and the communications device comprises a radio-like user interface, preferably embedded in a radio set. A last object of the invention is a computer storage medium coupled to a communications device according to the first object of the invention and including instructions that, when executed by said communications device, cause the device to perform operations comprising any of the steps of the method of the third object of the invention.
The communications device, thanks to its tangible interface and its system that optimizes communication channels via translation to the user, solves the problems of the prior art technologies, providing the following advantages:
It will be adapted to the user's need and preference taking into account a lot of variables.
It will adapt both the content and the interaction method.
It is expandable to other text-based applications or technologies beyond the existing ones.
Easy to use and learn because the users don't need to learn any new interaction.
Brief Description of the Drawings
The previous and other advantages and features will be more fully understood from the following detailed description of embodiments, with reference to the attached drawings, which must be considered in an illustrative and non-limiting manner, in which:
Figure 1 illustrates a senior technology acceptance and adoption model.
Figure 2 is a phone with big letters.
Figure 3 is an example of a captioned speech relay service.
Figure 4 illustrates the general architecture of the communications device, according to several embodiments.
Figure 5 illustrates the receive message flowchart.
Figure 6 is a flowchart illustrating the proposed steps when a user sends a message.
Figure 7 is a flowchart illustrating a request for information from the internet, in this case for a weather application.
Figure 8 illustrates the identification and preferences module.
Figure 9 is a more detailed illustration of the architecture of the proposed communications device (or Radio.me). Figures 10, 11 and 12 are other flowcharts illustrating in more detail the proposed steps for the receive message process, the send message process, and the web content access process, respectively.
Detailed Description of Several Embodiments
Figure 4 illustrates the general architecture of the communications device, according to several embodiments. As can be seen in Figure 4, the architecture is composed of five modules:
a) Identification and preferences module 400, which is responsible for all the information about the end user and the personalization of the user preferences based on the knowledge from the user.
b) User Interaction module 100, which is responsible for understanding user's command and orders about what s/he wants to do.
c) Receiving module 300 which is responsible for processing of information and messages that the system receives from outside the system.
d) Communications from the user module 600, which is responsible for recognizing the user's voice messages, recording them and translating them to text.
e) Communications sent by end user module (or sending communications' module) 200, which is responsible for selecting, preferably based on the user preferences, the channel through which the receiver of the intended communication will receive the message; namely, email, SMS, Facebook®, WhatsApp®, or any other application that receives messages.
In the following description it has to be noted that the terms 'module' and 'unit' are used interchangeably.
Each module is composed of several units. The core unit of each module, without which the module will not work, has been highlighted in bold letters. Inside each module all the units are connected among them. In Figure 4, only the most relevant relationships among units and modules have been portrayed. The following is a description of the units of each module and their interrelation with other units.
The Identification and preferences module 400 includes the User Identification module 401, which knows which user is using the system at a given point in time. The identification can happen using different methods, primarily by voice recognition technology, but other variations include fingerprint identification and an NFC card with a password. This module also contains a module able to detect whether the radio is on or off, the Radio status identification 407, which sends this information to the unit selecting when to interrupt the broadcast in the Receiving module 300, so that the latter can take decisions only if the radio is on. If the radio broadcast is off, the Unit selecting when to interrupt broadcast 301 (or radio broadcast interrupting unit) does not get called, and the Confirmation and notification unit 104 immediately communicates to the user that a message has been received.
A User database 500, although it stores the data in the cloud, is also part of this module. It stores the information about the user's password and characteristics that have been entered the first time the system is used, such as the age and the phone numbers of the contacts. The user database 500 also receives information from the User interaction module 100 about the usage the user has made of the system, including number of communications, type of chosen way to send a message to a particular contact, selected volume of the radio broadcast, etc. All the information, and in particular all the updates from the database 500, is transmitted to the Dynamic user profile 402, which processes all the information from the database 500 and is in charge of the customization and adaptation that is at the core of the proposed system. The key inputs for the adaptation are the age of the user, the previous selections of controls and the previous selections of how to send a message to a particular contact. The higher the age, the higher the volume of the speaker and the lower the speed of the speech when transmitting a message; the volume and speed of the confirmation messages follow the same pattern. On the other hand, the language can either be configured in the initial set-up or picked up by the Communication from end user module 600. Special needs inputs are not required all the time but refer to cases when the user has particular needs, such as being blind, in which case all interactions need to happen by voice command.
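The age-based adaptation described above can be sketched, purely for illustration, as a simple rule. The numeric thresholds and default values below are assumptions, not values given in the description:

```python
def adapt_audio_settings(age, base_volume=5, base_speed=1.0):
    """Illustrative age-based adaptation: the higher the age, the higher
    the speaker volume and the lower the speech speed.  Thresholds and
    step sizes are assumptions for the sake of the example."""
    if age >= 75:
        return {"volume": base_volume + 3, "speech_speed": base_speed * 0.7}
    if age >= 60:
        return {"volume": base_volume + 1, "speech_speed": base_speed * 0.85}
    return {"volume": base_volume, "speech_speed": base_speed}
```

The same rule could drive the volume and speed of the confirmation messages, following the pattern described above.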
The main unit of the User Interaction module 100 is the Command identification unit 101, which identifies all user actions, both physical and voice, and understands what sequence of actions needs to be performed. The basic categories of actions are either input actions (such as a message to be sent by the user) or output actions (such as reading the messages the end user has received from his or her contacts). The Command identification unit 101 has the capability to differentiate between these two categories in order to trigger different types of actions in the system depending on the interaction that it receives from the end user, either physical, from the Physical Card (or object) interaction unit 102, or by voice, from the Voice command unit 103. The Physical Card interaction unit 102 is the unit that decodes the input from the physical cards with its NFC receptor. Other alternatives to the NFC interactions are Smart card technology or QR codes.
The Voice command unit 103 comes into action when the Command Identification Unit 101 interprets that an input/output action has been entered into the system by the user. When the user provides a voice command with the words "record a message", this unit communicates with the Communication from end user module 600 to transmit the order to activate it, so that it starts to record and translate the voice of the user. This recording and translation action ends when the user gives the system the voice command "stop message". When this command is recorded by the Communication from end user module 600, this module sends the message to the Voice command unit 103, which communicates this action to the Command Identification Unit 101.
The Confirmation and notification unit 104 is responsible for providing feedback after two types of actions occur. In the first type, when the user performs a command (e.g. a command to send a message), this unit reads to the user a message indicating that the system has sent the message. The second type of actions concerns the messages incoming from the user's contacts.
Alternatively, instead of providing the information by voice, these messages could be provided by a light, a specific sound or even an eInk display.
In the case of a message being received by the user from his or her contacts, the receiving module 300 will send a command to this unit to switch on a light indicating that a message has been received. Alternative methods to indicate to the user that a message has been received could be a specific sound or an eInk display.
As described above, the Communication from end user module 600 is primarily responsible for translating from speech to text in case this is the preferred output option for the message from the user. Alternatively, it could record a voice message and send it as a voice message to the Communication sent by end user module 200. The Communication from end user module 600 is activated by the Voice command unit 103 and also tells this unit when the message has finished.
The Communication sent by the end user unit 200 is a critical element of the system because it is responsible for deciding the channel through which the system will send the message to the user's contacts. To take this decision, this unit takes two types of inputs: a) the input from the Command Identification Unit 101 that triggers when the user wants to send a message (and in some alternative instantiations it may also select the channel by direct interaction); b) the input from the Dynamic user profile (DUP) 402 that defines the preferred output channel (namely, SMS, email, Facebook, etc.) for a particular contact.
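The two inputs described above, the explicit per-contact preference from the DUP 402 and, failing that, the history of channels previously used for the contact, suggest a selection rule along these lines. The function and variable names, and the default channel, are illustrative assumptions:

```python
def select_channel(contact, dup_preferences, usage_history, default="SMS"):
    """Pick the outgoing channel (e.g. SMS, email, WhatsApp) for a contact.
    An illustrative sketch: an explicit preference wins, otherwise the most
    frequently used channel for that contact, otherwise a default."""
    # Explicit preference recorded in the Dynamic user profile wins
    if contact in dup_preferences:
        return dup_preferences[contact]
    # Otherwise pick the channel most often used for this contact
    counts = usage_history.get(contact, {})
    if counts:
        return max(counts, key=counts.get)
    return default
```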
The Receiving module 300 is responsible for receiving the messages from the user's contacts. The key piece of this module is the Message memory unit 302. The technology for receiving the messages would be, for instance, 2G and 3G connectivity provided through a 2G/3G receptor. Another alternative for the reception of messages is WiFi connectivity, among others. This module 300 is directly connected with the personalized content unit 303, which identifies whether the message sender is a particular contact, a newsgroup to which the user has subscribed, a Twitter account or any other social media network. Other alternative senders may be advertising companies that send messages to the user based on the information in the user profile.
In order to know whether the message will be read at that moment or not, the Unit selecting when to interrupt the broadcast 301 gathers the information about whether the radio is on or not from the Radio status identification 407, as described above.
The last unit of this module is the text to voice unit 304, which translates the text form in which the message is received into a voice form using text-to-voice algorithms. Alternatively, the text is not translated into voice but displayed on a LED display or sent to a connected printer.
The invention encompasses four distinct core functionalities/features:
A. RECEIVE MESSAGE
B. USER SENDS A MESSAGE
C. REQUEST INFORMATION FROM INTERNET: WEATHER APP
D. PERSONALIZATION OF AUDIO CONTENT
The elements of each core functionality are described below with reference to the flowchart figures.
A. RECEIVE MESSAGE (Figure 5):
A sender wants to send a message to the proposed communications device (or Radio.me, as it will be called from now on), and does it via the tools and channels they usually use. The system receives the message, after which it selects, via the dynamic user profile 402 and the sending communications' module 200, the channel and device through which to contact the user so that the user receives the message.
The system releases a "received message" notification command, and the command identification unit 101 tells Radio.me to pulsate a light notification of the received message. Thus, a light pulsates on the Radio.me device to indicate a new received message and the total number of messages saved in the system.
The user does a physical/direct interaction and inserts the "listen messages" card into the Radio.me RFID card reader slot. The command identification unit 101 recognizes the command action and releases audio feedback for the inserted card and an automatic sequence of events for the user to interact with the radio, in order to enable the user to listen and reply to the message(s).
Following that, the system starts reading the messages to the user via the text to voice unit 304, starting from the most recent one. The memory unit 302 releases the messages in the memory, whereas the speed mode unit 406 controls the volume, the time between messages, and the speed and repetition of the messages.
After a message finishes, the system releases an audio notification of the end of the message. The voice command unit 103 releases the next audio command prompt: "Reply to the message?", and the system waits for the user's audio interaction: "Yes" or "No". If the user replies "Yes", the voice interaction sequence follows the same flow as B. USER SENDS A MESSAGE. If the user replies "No", the system moves to the next event and asks the user: "Save the message?". If the user replies "Yes", the message is saved to the memory unit 302.
The system keeps reading the messages in chronological order from the memory unit until the user takes the RFID "listen messages" card out of the slot.
The user does a physical/direct interaction and takes the card out of the slot. The system releases audio feedback for the removed card.
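The delivery loop of the receive flow above (read saved messages, most recent first, until the "listen messages" card is removed) can be sketched as follows. The message representation and the card-state callable are assumptions made for illustration:

```python
def deliver_messages(messages, card_inserted):
    """Read saved messages most-recent-first while the 'listen messages'
    card stays in the slot.  `card_inserted` is a callable returning the
    current slot state; a simplified sketch of the loop described above."""
    delivered = []
    for msg in sorted(messages, key=lambda m: m["timestamp"], reverse=True):
        if not card_inserted():
            break  # card removed: stop the delivery sequence
        delivered.append(msg["text"])  # in the device this goes to the TTS unit
    return delivered
```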
B. USER SENDS A MESSAGE (Figure 6)
The user wants to send a message to one of their contacts, so the user does a physical/direct interaction and inserts the "send message to X" card into the Radio.me RFID card reader slot. The command identification unit 101 recognizes the command action and releases audio feedback for the inserted card and an automatic sequence of events for the user to interact with the radio, in order to enable the user to listen and reply to message(s). As an alternative to the usage of a card, the user may use other means to signal that he wants to send a message, such as a voice command or a push button.
Among the sequence of actions that get triggered is the voice interaction command with the audio message to the user of "starting the message" by the Voice command unit 103. Other alternative ways to signal it would be lights or sounds. After the audio message, all the audio sounds get recorded and sent to the Automatic Speech Recognition (ASR) voice to text unit 600, which translates the user message into a text format. Alternatively, depending on the user preferences, the message may not be sent to the ASR unit 600 but kept for sending in its audio format.
The recording of the user message stops when the user says the sounds corresponding to the auditory command "end of message". Other user actions to finish the message may include direct actions like using a card or pressing a button.
When the ASR unit 600 detects the "end of message" command, it sends a command to the voice command unit 103 to be ready for the next user action regarding the command to send the message.
When the user retrieves the card from the RFID reader slot, this direct action triggers the user database 500 to send a command to the sending communications' module 200, which selects the best channel of communication for that contact (namely, SMS, WhatsApp®, email or any other textual or verbal communication channel). The channel selection decision is made taking into account the variables described in the system diagram.
Finally, after the system selects the best channel to send the message, the message gets sent through a 2G/3G connection, or any other. When the message has been sent, the user and the receiver get feedback notifications of the message sent and received, respectively. This message is sent by the Confirmation and notification unit 104 and could be an auditory message, or use lights or sounds.
C. REQUEST INFORMATION FROM INTERNET: WEATHER APP (Figure 7)
The functionality is triggered by the user inserting a card into the RFID card reader slot, which the Command identification unit 101 and the Physical Card interaction unit 102 interpret as a command to access a particular web page or an App. Other alternatives to trigger this functionality could be a voice command or physical buttons.
The example described in the diagram refers to the case when the card is associated with a weather page address, and therefore the system connects to the web page and reads the content information in that web page, selecting the text information in the page using the algorithms already existing for blind people. After the text is available to the system, the Text to voice unit 304 converts the text to voice and reads the text in an auditory form, following the same procedure as when the system reads an incoming message from a contact.
D. PERSONALIZATION OF AUDIO CONTENT
This functionality refers to the capability of the Dynamic user profile 402 to personalize different variables of the way the audio content, both from the Confirmation and notification unit 104 and from the Text to voice unit 304, is presented in audio form to the user. The core variables that can be personalized are: the volume of the audio, the speed of the audio and the language of the messages. However, other variables may be added.
The processing of the input information taken by the Dynamic user profile 402 has been described before and is shown in Figure 8.
A more detailed representation of the architecture of the proposed Radio.me device is illustrated in Figure 9, wherein:
- User Interaction Unit (or module) 100, is responsible for allowing user physical interaction with Radio.me.
- Command Unit (or Command Identification Unit) 101, is responsible for adapting actions in the User Interaction Unit 100 to the logic of the system and vice versa. The Command Identification Unit 101 is also responsible for delivering the messages in the format chosen by the user and for guiding the user through the communication process.
- Core unit 802, is responsible for orchestrating the interaction among all Radio.me units and triggering the changes of status required in each step of the process.
- Admin Unit 803, is responsible for user's identification and contains user's phone address book and other settings.
- Dynamic User Profile Unit 402, is responsible for personalizing the Radio.me experience to the user characteristics based on her/his usage pattern.
- Memory Unit 906, which is a storage space for the communication interchanged between user and her/his contacts.
- Communication Integration Unit (or sending communications' module) 200, is responsible for adapting the Core Unit 802 communication interface to all the third party communication providers supported by Radio.me and vice versa (namely, email, SMS, Facebook®, WhatsApp®, or any other application that receives messages).
- Internet Content Adapter Unit 908, is responsible for adapting the content of web pages to the Core Unit 802 content interface.
- Broadcast radio 910, which is an AM/FM radio module connected to the User Interaction Unit 100.
The User Interaction Unit 100 identifies user interactions (physical, gestures, button presses, voice, etc.) and prompts the user to undertake actions. The basic categories of actions are either input actions (like a message to be sent from the user) or output actions (like reading a message received or reading a web page). The User Interaction Unit 100 has the capability to differentiate between these two categories in order to trigger different types of actions in the system depending on the interaction that it receives from the user.
In the case of physical actions, the user inserts a physical card into the slot of the User Interaction Unit 100 to start a communication process or a web page content reading. The User Interaction Unit 100 decodes the input from the physical card with its NFC receptor and sends a notification to the Command Identification Unit 101. Other alternatives to the NFC interactions are Smart card technology, QR codes or clicking buttons.
In the case of voice, the user's order goes to the ASR module 600 for its interpretation. Other means of interaction could be possible. Finally, the User Interaction Unit 100 is connected to the broadcast radio 910 and can determine volume and tuning adjustments.
The Command Identification Unit 101 receives/sends notifications from/to the User Interaction Unit 100 and the Core Unit 802 and releases commands/requests to guide the user in the usage of Radio.me. It has interfaces to the ASR 600 for interpreting the user's voice commands, and to the Text to voice (or TTS) unit 304, the audio player 902 or other modules for delivering the message or web page content to the user. The Command Identification Unit 101 handles the capture of the body of the message in the sending process and the delivery of received messages and web page content.
The Core Unit 802 is the brain of Radio.me. It controls and triggers the state changes in all the processes and has interfaces to the rest of the units. When a communication channel of the Communication Integration Unit 200 gets a new message, the Core Unit 802 starts the reception process, notifies the event to the Command Identification Unit 101, saves the message in the memory unit 906, queries the admin unit 803 about the user's delivery preferences and requests the Command Identification Unit 101 to handle the message according to them.
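The reception sequence orchestrated by the Core Unit 802 (save the message, query the delivery preferences, hand the message to the Command Unit) can be sketched with plain containers standing in for the units. All names and interfaces here are assumptions made for illustration:

```python
def handle_incoming_message(message, admin_unit, memory_unit, command_unit):
    """Illustrative sketch of the Core Unit reception sequence: the units
    are modeled as a dict (admin) and lists (memory, command queue)."""
    memory_unit.append(message)                       # save in the memory unit
    prefs = admin_unit.get(message["sender"],
                           {"format": "voice"})       # default delivery format
    command_unit.append((message["text"], prefs["format"]))  # hand over
    return prefs["format"]
```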
The Core Unit 802 also sends notifications to the memory unit 906 indicating whether the message, after being read, has to be kept or deleted. Once the message has been read, the Core Unit 802 goes back to the idle state.
When the Core Unit 802 receives a notification from the Command Identification Unit 101 about actions undertaken by the user to send a message, the Core Unit 802 starts up the sending process. Once the user has chosen the recipient and the message is ready (an audio note, text as a result of the ASR, or another format), the Core Unit 802 queries the admin unit 803 in order to get information about the default third party communication provider linked to the recipient. After that, the Core Unit 802 moves the message to the Communication Integration Unit 200, which adapts it to the proper communication channel and sends it through the radio modem module.
When the Core Unit 802 receives a notification from the Command Identification Unit 101 about actions undertaken by the user to read the content of a web page, the Core Unit 802 starts up the web page reading process. The Core Unit 802 queries the admin unit 803 in order to get information about the web page the user wants to read. After that, the Core Unit 802 notifies the request to the Internet Content Adapter Unit 908, which retrieves the information required by the user. Once it is available, the Core Unit 802 notifies the event to the Command Identification Unit 101 for delivery to the user.
The Admin Unit 803 includes features like the user identification module as well as storage functionality (locally or in the cloud) for the user's personal information and for her/his phone address book.
The Admin Unit 803 enables the user's identification using different methods, primarily voice recognition technology, but other variations could include fingerprint identification and an NFC card with a password.
The Admin Unit 803 also stores the user's password and other information requested from the user at the first time of use (age, volume of the broadcast radio, language, speed of audio messages, type of interaction - voice/text, etc.). This set-up configuration could be made by an operator or a family member of the user.
Part of the data stored in Admin Unit 803 can be updated with the recommendations made by the Dynamic User Profile 402.
Furthermore, the Admin Unit 803 contains the user's phone address book and preferences about which third party communication provider to select when sending a message to a particular contact, and which delivery format to use for a message received from a particular contact. The Admin Unit 803 also contains information about the web pages the user is able to read. All this information, when required, is retrieved by the Core Unit 802 and shared with the Command Identification Unit 101, the Communication Integration Unit 200 and the Internet Content Adapter Unit 908.
Admin Unit 803 could be implemented locally in the Radio.me device or in the cloud.
The Dynamic User Profile 402 creates recommendations to better adapt the Radio.me features to the user. It collects information about the usage, including number of communications, web pages accessed, age of the user, format and third party communication provider chosen to send a message to a particular contact, selected volume of the radio broadcast, etc. The Dynamic User Profile 402 also receives all the manual updates made in the Admin Unit 803.
The Dynamic User Profile 402 analyses and processes these data to create recommendations about the usage of Radio.me. These recommendations update the user's data in the Admin Unit 803.
For instance, the key input for the adaptation is the age of the user: the higher the age, the higher the volume of the speaker and the lower the speed of the speech. The volume and speed of auditory commands follow the same pattern. On the other hand, the language can either be configured in the initial set-up or picked up once messages are sent or received.
Memory Unit 906 is the local Radio.me storage space dedicated to the messages received and sent.
Memory Unit 906 or part of it could be implemented in the cloud maintaining the same features.
Memory Unit 906 is addressed by the Core Unit 802 each time a message is created or received and the user wishes to save it.
The Communication Integration Unit 200 is responsible for adapting sent/received messages to/from the different third party communication providers supported by Radio.me.
It also allows small configurations, such as defining the way the Communication Integration Unit 200 interacts with each provider; for example, whether the Core Unit 802 will pull the received messages from the Communication Integration Unit 200 or the Communication Integration Unit 200 will push the received messages to the Core Unit 802. The Communication Integration Unit 200 provides a single interface to the Core Unit 802 and interfaces toward the third party communication providers supported by Radio.me.
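The push/pull configuration described above can be sketched as follows. The class and method names are assumptions and do not correspond to a disclosed API:

```python
class CommunicationIntegrationUnit:
    """Illustrative sketch of per-provider push/pull delivery modes."""

    def __init__(self):
        self.modes = {}       # provider -> "push" or "pull"
        self.callbacks = {}   # provider -> callable notified on push
        self.queue = []       # messages awaiting a pull by the Core Unit

    def configure(self, provider, mode, push_callback=None):
        self.modes[provider] = mode
        if push_callback is not None:
            self.callbacks[provider] = push_callback

    def on_incoming(self, provider, message):
        # Push mode: the Core Unit is notified immediately
        if self.modes.get(provider) == "push" and provider in self.callbacks:
            self.callbacks[provider](message)
        else:
            # Pull mode: keep the message until the Core Unit asks for it
            self.queue.append(message)

    def pull(self):
        msgs, self.queue = self.queue, []
        return msgs
```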
The interface offered by the Communication Integration Unit 200 to the Core Unit 802 could have the following methods (assuming a REST implementation as an example):
Parameters between brackets are variables to be fulfilled on each API method execution.
Send a message
POST: /user/{user}/message/send
Parameters in payload:
messageContent -> e.g. URL of an audio file, an audio file or text content to be sent
provider -> through which provider the message is to be sent
user -> user to whom the message is to be sent
type -> message type (audio, text, ...)
Receive a message
GET: /user/{user}/message/receive -> to be used in pull mode
Configure provider
POST: /provider/{provider}/configure
Parameters:
mode -> push or pull
When there is a new incoming message, the Communication Integration Unit 200 receives it, adapts it to the unique interface toward the Core Unit 802 and notifies it or makes it available to the Core Unit 802, which starts the reception process.
When a new message is ready to be sent, the Core Unit 802 delivers it to the Communication Integration Unit 200, specifying which third party communication provider is to be used, and the Communication Integration Unit 200 adapts and sends the message.
Internet Content Adapter Unit 908 is responsible for adapting the content of web pages to the Core Unit 802 content interface.
Content interface description:
Fetch content
GET: /content/{contentProvider}
Configure content provider's content formatting
POST: /content/{contentProvider}/configure
type -> content type (text, audio, etc.)
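The REST methods listed above can be exercised, for illustration, by helpers that merely compose the method, path and payload. No HTTP client or server is assumed; the helper names are illustrative:

```python
def build_send_request(user, message_content, provider, msg_type="text"):
    """Compose the 'send a message' call of the Communication Integration
    Unit interface shown above (method, path and payload only)."""
    return {
        "method": "POST",
        "path": f"/user/{user}/message/send",
        "payload": {
            "messageContent": message_content,
            "provider": provider,
            "type": msg_type,
        },
    }

def build_fetch_content_request(content_provider):
    """Compose the 'fetch content' call of the Internet Content Adapter
    Unit interface shown above."""
    return {"method": "GET", "path": f"/content/{content_provider}"}
```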
The Core Unit 802 notifies the Internet Content Adapter Unit 908 that the user wants to read the information of a specific web page. The Internet Content Adapter Unit 908 connects to the web page and retrieves the content required by the user. This content is adapted so as to be handled by the Core Unit 802, and after that the Core Unit 802 makes it available to the Command Identification Unit 101 for delivery to the user.
Therefore, Radio.me simplifies and makes easier the communication via messages and the access to web page contents.
It allows adaptation to the user's preferences in message sending and receiving and in web page access, since it counts on tools like the Text to voice unit 304, the Automatic Speech Recognition unit 600 and the audio player 902 that allow the user to interact with electronic devices in a more natural way.
Radio.me has several interfaces toward third party communication providers and web page content providers, which make it a device totally integrated in the internet scenario. To overcome the barrier of the introduction of a new device, Radio.me is provided with a "traditional" broadcast AM/FM radio whose functionality is totally integrated with the rest of the features offered to the user.
With reference to Figure 10, a more detailed flowchart of the receive message process is illustrated, in this case particularly referring to the interaction with the proposed Radio.me device (Fig. 9). A sender, block 1001, wants to send a message to a Radio.me user and does it via the tools and channels s/he usually uses. Radio.me, block 1002, receives the message via one of the third party communication providers integrated in Radio.me. This message gets into the Communication Integration Unit 200 through the corresponding communication channel and is adapted to the Core Unit 802 interface. The Core Unit 802 queries the Admin Unit 803, block 1003, in order to retrieve the user preference for message delivery and stores the message in the Memory Unit 906. The Core Unit 802 also triggers, block 1004, the Command Unit 101 to activate a notification of the received message in the User Interaction Unit 100. One implementation of this notification could be a light on Radio.me or an auditory message, among others. In the example of a notification by light, it blinks on the device to indicate a new message available to read and, in some implementations, the total number of messages saved in the system. The event of a new incoming message also lowers the volume of the broadcast radio 910 (if it is on) as an additional notification to the user.
When the user decides to retrieve the message, s/he does an interaction with the User Interaction Unit 100, block 1005. This interaction could be implemented, according to an embodiment, for example by inserting the "listen messages" card into the Radio.me RFID card reader slot, by pressing buttons on Radio.me, or by voice, gestures, etc. The Command Unit 101 recognizes the user action and releases feedback, block 1006. The Core Unit 802 queries the Admin Unit 803 to select the format in which to present the message to the user. If the message received is a text and the delivery format is voice, the system starts reading the messages to the user via the Text to Voice engine (TTS) 304, starting from the most recent one. If the message received is an audio note, the system plays it. Other formats could be possible.
The Memory Unit 906 releases the messages, whereas the Admin Unit 803 controls the volume, the time between messages, and the speed and repetition of the messages. Once the message ends, the Command Unit 101 releases a notification and asks the user whether s/he wants to reply to the message read, block 1009. One implementation of this part of the process could be made by an audio notification of the end of the message, followed by an audio command prompt: "Reply to the message?", waiting for the user's audio interaction: "Yes" or "No".
If the user wishes to reply, the process follows the same flow as the one described in Figure 11 referring to the send message process. If the user does not want to reply, the Core Unit 802 moves to the next event and the user is asked by the Command Unit 101 whether or not to save the message, block 1007. One implementation of this part of the process could be made by voice, asking the user: "Save the message?". If the user replies "Yes", the message is kept in the Memory Unit 906; if "No", the message is deleted.
The system keeps delivering the messages in chronological order from the Memory Unit 906 until the user undertakes an action to interrupt it or until all messages are delivered. A possible implementation of the delivery process interruption could be made by taking the RFID "listen messages" card out of the slot in the User Interaction Unit 100, by gestures or by voice commands. In any case, the Command Unit releases feedback of delivery interrupted or delivery finished and, if the broadcast radio is on, the volume returns to the same level as before message reception.
Figure 11 illustrates a more detailed flowchart of the send message process. The user wants to send a message to one of her/his contacts. The user does an interaction with the User Interaction Unit 100, block 1020. This interaction could be implemented, according to an embodiment, for example by inserting the "send message" card into the Radio.me RFID card reader slot or, alternatively, by pressing buttons on Radio.me, or by voice, gestures, etc. The Command Unit 101 recognizes the user action and releases feedback. The Core Unit 802 starts up the sending process. Other implementations are also possible.
The user is asked who the recipient of the message is, block 1021. One possible implementation of this part of the process is by voice; in this case, the user answers with the name of the recipient and, once the Automatic Speech Recognition engine (ASR) 600 has recognized it, the Command Unit 101 asks the Core Unit 802 to query the Admin Unit 803 in order to retrieve information about the chosen contact, such as the preferred third party communication provider through which to send the message and the format in which to send it (e.g. text, audio), block 1022. Once this information is available, the Command Unit 101 releases a notification inviting the user to start recording the message, block 1023. If the delivery format is text, the ASR 600 translates the user message into a text format. If the delivery format is audio, no further actions are required from the ASR 600. Other delivery formats could be included. In any case, the message is saved in the Memory Unit 906.
The recording of the user message ends when the user undertakes the corresponding action, block 1024. This action could be an auditory command of "end of message", removal of the sending card, gestures, pressing a button, etc.
After that, the Core Unit 802 moves the message to the Communication Integration Unit 200, which adapts it, block 1026, to the proper third party communication provider for that specific contact (namely, SMS, WhatsApp®, email or any other textual or verbal communication channel) and sends it, block 1027. Once the message has been sent, the user and the receiver get a notification, block 1028.
In Figure 12, a more detailed flowchart of the web content access process is illustrated. The functionality is triggered by the user undertaking a proper action, block 1030, like for example inserting the "web content" card into the RFID card reader slot of the User Interaction Unit 100, or by voice, buttons, gestures, etc. The Command Unit 101, block 1031, interprets this action as a command to access a particular web page and releases feedback. If the interaction is made by voice, the Command Unit 101 invokes the ASR 600 for interpreting the vocal command given by the user. The Command Unit 101 notifies it to the Core Unit 802, which starts up the web page connection process. Then the Core Unit 802 queries the Admin Unit 803, block 1032, in order to retrieve information about the web page the user wants to access. Once this information is available, the Core Unit 802 asks the Internet Content Adapter Unit 908, block 1033, to connect with the selected web page.
Once this information is obtained, the Internet Content Adapter Unit 908 adapts it, block 1034, to the Core Unit 802 content interface, and the Core Unit 802 gets the information from the web page. This information could be text (using algorithms already existing for blind people), audio or video. Further formats could be included. The Core Unit 802 also triggers the Command Unit 101 to activate a notification of the available web page content in the User Interaction Unit 100, block 1035. One implementation of this notification could be a light in Radio.me or an auditory message, etc. In the example of a notification by light, it blinks on the device to indicate that new web content is available to read. When the user decides to retrieve the web content, s/he interacts with the User Interaction Unit 100. This interaction could be implemented by pressing buttons on Radio.me, or by voice, gestures, etc.
Once the user decides to access the web page information, block 1036, in the case of text, the Command Unit 101 enables the TTS 304 to convert the text to voice and allow the user to listen to the web page content. If it is audio or video, it is played.
The system keeps delivering the web content, block 1037, until the user undertakes an action to interrupt it or until it is finished. Possible implementations of interrupting the delivery process include taking the RFID "web content" card out of the slot in the User Interaction Unit 100, gestures, voice commands, or other ways. In any case, the Command Unit 101 releases feedback indicating that the delivery was interrupted or finished, and if broadcast radio is on, the volume returns to the same level as before the web content delivery.
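The delivery loop of blocks 1034 to 1037 can be sketched as below. This is a hedged illustrative sketch under assumed names (`deliver_web_content` and the injected callables are not from the source): text is spoken chunk by chunk through a TTS stand-in so the interrupt action can break in between chunks, audio/video is handed to a player, and the radio volume is restored afterwards.

```python
def deliver_web_content(content, speak, play, interrupted, restore_volume):
    """Illustrative sketch of blocks 1034-1037 (all names are hypothetical).

    content        -- {"kind": "text" | "audio" | "video", "data": ...} as
                      adapted by the Internet Content Adapter Unit 908
    speak          -- TTS 304 stand-in for reading text aloud
    play           -- player for audio or video content
    interrupted    -- polled between chunks; True on card removal, voice
                      command, gesture, etc.
    restore_volume -- restores the broadcast radio volume, if radio was on
    """
    if content["kind"] == "text":
        for chunk in content["data"]:
            if interrupted():      # user action stops the delivery early
                break
            speak(chunk)
    else:
        play(content["data"])
    restore_volume()               # back to the pre-delivery volume level
```

Polling `interrupted` once per chunk keeps the loop responsive without threads; a real implementation would wire that predicate to the card slot, the ASR or the buttons of the User Interaction Unit 100.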
A person skilled in the art could introduce changes and modifications in the embodiments described without departing from the scope of the invention as it is defined in the attached claims.

Claims

1. A communication device, having an adaptive user interaction interface and multiple application channels, characterized in that it comprises:
- a user interaction module (100) having a user command identification unit (101), said user command identification unit (101) having at least a physical object interaction unit (102) and/or a voice command unit (103);
- a sending communications' module (200) configured for selecting said application channels to send messages to third parties of a user based on user profiles and/or preferences;
- a receiving communications' module (300) configured to process information received from said third parties through several input application channels that is selected among messages from third parties and web pages content and to output a processed output to said user through one or more selected output application channels based on said user profiles and/or preferences.
2. A communications device according to any preceding claim, wherein said application channels comprise:
- messaging channels selected from the group of email, Short Message Service and web applications capable of receiving messages; or
- voice channels.
3. A communications device according to any preceding claim, wherein said physical object interaction unit (102) comprises a physical slot configured for receiving a physical card or key, the physical object interaction unit (102) including at least an RFID or an NFC card.
4. A communications device according to any preceding claim, wherein said physical object interaction unit (102) comprises at least one button, switch or wheel.
5. A communications device according to any preceding claim, wherein said communications device comprises a radio-like user interface, preferably embedded in a radio set.
6. A communications device according to claim 1, comprising a unit for selecting at least a time to output a processed output to said user.
7. A communications device according to any preceding claim, wherein said receiving communications' module (300) has a radio broadcast interrupting unit (301) configured to defer the output of information to the user to a later, optimal time if said radio set is receiving a radio broadcast.
8. A communications device according to any preceding claim, wherein said broadcast interrupting unit (301) is configured to be overridden by user commands.
9. A communications device according to any preceding claim, wherein said output information is immediately notified to the user if said radio broadcast is off.
10. A communications device according to any preceding claim, further comprising a user identification and preferences module (400) having a user identification unit (401), said user identification unit (401) having a dynamic user profile (402) configured for controlling parameters of said processed output to the user, said parameters comprising speed mode, language and volume.
11. A communications device according to any preceding claim, wherein said dynamic user profile (402) is configured for receiving inputs from a user database (500), where said user database (500) contains information relating to the user's age, history of use, and cognitive, visual or hearing user conditions.
12. A communications device according to any preceding claim, wherein the receiving communications' module (300) further comprises a message memory unit (302) configured for receiving messages, preferably from the user's contacts.
13. A communications device according to any preceding claim, wherein the receiving communications' module (300) is connected to a mobile telephone network, including at least WiFi, 2G, 3G or LTE networks.
14. A communications device according to any preceding claim, wherein the receiving communications' module (300) is configured to output said processed output to the user by voice, via a display, or through a printer.
15. A communications device according to any preceding claim, wherein said physical object interaction unit (102) is configured for triggering a direct communications event when inserted, contacted or in close range with said communications device.
16. A communications device according to any preceding claim, wherein said physical object interaction unit (102) is associated with a predetermined web page location identification, preferably its URL.
17. A communications device according to any preceding claim, wherein said physical object interaction unit (102) is configured for sending a message to a predetermined user contact and/or for receiving all messages from the user's contacts.
18. A physical object comprising a circuitry configured for interacting with the physical object interaction unit (102) according to any preceding claim, said physical object comprising a key or a card.
19. A method for providing adaptive communications, said method being characterized in that it comprises following steps:
a) a user giving a command by manipulating a physical object or giving a voice command, said physical object being associated with a communications device,
b) upon said command being given, a sending communications' module selects application channels for sending messages to third parties based on user profiles and/or preferences, and
c) receiving third-party information, selected among messages from third parties and web page content, through several input application channels, and outputting a processed output to the user through one or more selected output application channels based on said user profiles and/or preferences.
20. A method for providing adaptive communication according to claim 19, wherein said step a) is carried out by the user associating a listen-to-messages card or key with said communications device.
21. A method for providing adaptive communication according to any preceding claim 19 or 20, wherein said step a) is carried out by the user doing a physical action that is associated with accessing a predetermined web page or App.
22. A method for providing adaptive communication according to any preceding claim 19 to 21, wherein said step c) includes transforming text information into voice and outputting said voice to said user.
23. A method for providing adaptive communication according to any preceding claim 19 to 22, wherein said step a) is carried out by associating a send-message-to-a-predetermined-contact card or key with said communications device.
24. A method for providing adaptive communication according to any preceding claim 19 to 23, wherein said association is carried out by inserting said cards or keys into said communications device, or by contacting or placing them in close range with said communications device.
25. A method for providing adaptive communication according to any preceding claim 19 to 24, wherein said communications device comprises a radio-like user interface, preferably embedded in a radio set.
26. A method for providing adaptive communication according to any preceding claims 19 to 24, comprising a unit for selecting at least a time to output a processed output to said user.
27. A computer program storage medium coupled to a communications device according to any preceding claim and including instructions that, when executed by said communications device, cause the device to perform operations comprising the steps of any of preceding claims 19 to 26.
PCT/EP2014/065876 2013-07-24 2014-07-24 User-interface using rfid-tags or voice as input WO2015011217A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP13382299.9 2013-07-24
EP13382299 2013-07-24

Publications (1)

Publication Number Publication Date
WO2015011217A1 2015-01-29

Family

ID=48918342

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2014/065876 WO2015011217A1 (en) 2013-07-24 2014-07-24 User-interface using rfid-tags or voice as input

Country Status (1)

Country Link
WO (1) WO2015011217A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020248788A1 (en) * 2019-06-10 2020-12-17 海信视像科技股份有限公司 Voice control method and display device
CN112968990A (en) * 2021-02-25 2021-06-15 立芯科技股份有限公司 Children communication auxiliary assembly

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6085111A (en) * 1996-07-31 2000-07-04 U.S. Philips Corporation Telephone and device intended to be adapted to said telephone when operating in restricted mode
WO2005018258A1 (en) * 2003-08-13 2005-02-24 Modelabs Limited Configuration of a portable keyless telephone by sms
EP1868085A1 (en) * 2006-06-13 2007-12-19 Nederlandse Organisatie voor toegepast-natuurwetenschappelijk onderzoek TNO Computer system with RFID interface
US20110234379A1 (en) * 2009-08-18 2011-09-29 Aq Co., Ltd. Automatic transmission apparatus and method of automatic-transmitting signal between efid tag and mobile terminal in the same



Similar Documents

Publication Publication Date Title
KR101542136B1 (en) Method for inputting character message and mobile terminal using the same
EP2747389B1 (en) Mobile terminal having auto answering function and auto answering method for use in the mobile terminal
KR102108500B1 (en) Supporting Method And System For communication Service, and Electronic Device supporting the same
US9363379B2 (en) Communication system with voice mail access and call by spelling functionality
EP2008193B1 (en) Hosted voice recognition system for wireless devices
CN102117614B (en) Personalized text-to-speech synthesis and personalized speech feature extraction
JP5148083B2 (en) Mobile communication terminal providing memo function and method thereof
EP1732295B1 (en) Device and method for sending and receiving voice call contents via text messages
KR100689396B1 (en) Apparatus and method of managing call history using speech recognition
EP2992666B1 (en) An apparatus for answering a phone call when a recipient of the phone call decides that it is inappropriate to talk, and related method
KR20130084856A (en) Apparatus and method for processing a call service of mobile terminal
US20100223055A1 (en) Mobile wireless communications device with speech to text conversion and related methods
KR100365860B1 (en) Method for transmitting message in mobile terminal
JP5486062B2 (en) Service server device, service providing method, service providing program
EP2590393A1 (en) Service server device, service provision method, and service provision program
JP2010026686A (en) Interactive communication terminal with integrative interface, and communication system using the same
WO2015011217A1 (en) User-interface using rfid-tags or voice as input
KR100843325B1 (en) Method for displaying text of portable terminal
JP2013026779A (en) Communication terminal and communication method
KR101581778B1 (en) Method for inputting character message and mobile terminal using the same
GB2427500A (en) Mobile telephone text entry employing remote speech to text conversion
KR100722881B1 (en) Mobile terminal and method for saving message contents thereof
KR102496398B1 (en) A voice-to-text conversion device paired with a user device and method therefor
KR20100121072A (en) Management method for recording communication history of portable device and supporting portable device using the same
KR100774481B1 (en) The apparatus and method for text transformation of mobile telecommunication terminal

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 14742223
Country of ref document: EP
Kind code of ref document: A1
DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
NENP Non-entry into the national phase
Ref country code: DE
122 Ep: pct application non-entry in european phase
Ref document number: 14742223
Country of ref document: EP
Kind code of ref document: A1