WO2011047218A1 - Electronic device with alternative and augmentative communication (AAC) functionality and corresponding user interface - Google Patents

Electronic device with alternative and augmentative communication (AAC) functionality and corresponding user interface

Info

Publication number
WO2011047218A1
Authority
WO
WIPO (PCT)
Prior art keywords
user interface
language
user
navigation bar
button
Application number
PCT/US2010/052769
Other languages
English (en)
Inventor
Bob Cunningham
Bryan Moulton
Richard Ellenson
Original Assignee
DynaVox Systems, LLC
Application filed by DynaVox Systems, LLC
Publication of WO2011047218A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M 1/7243 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M 1/72433 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for voice messaging, e.g. dictaphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72469 User interfaces specially adapted for cordless or mobile telephones for operating the device by selecting functions from two or more displayed items, e.g. menus or icons
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72475 User interfaces specially adapted for cordless or mobile telephones specially adapted for disabled users
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output
    • G06F 3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/60 Substation equipment, e.g. for use by subscribers including speech amplifiers
    • H04M 1/6033 Substation equipment, e.g. for use by subscribers including speech amplifiers for providing handsfree use or a loudspeaker mode in telephone sets
    • H04M 1/6041 Portable telephones adapted for handsfree use
    • H04M 1/6058 Portable telephones adapted for handsfree use involving the use of a headset accessory device connected to the portable telephone
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M 1/72409 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories
    • H04M 1/72412 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories using two-way short-range wireless interfaces
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M 1/7243 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M 1/72436 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for text messaging, e.g. short messaging services [SMS] or e-mails
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72466 User interfaces specially adapted for cordless or mobile telephones with selection means, e.g. keys, having functions defined by the mode or the status of the device
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 2250/00 Details of telephonic subscriber devices
    • H04M 2250/22 Details of telephonic subscriber devices including a touch pad, a touch sensor or a touch detector

Definitions

  • the present invention generally pertains to electronic devices, and more particularly to electronic devices configured to operate with Alternative and Augmentative Communication (AAC) functionality.
  • Speech generation devices can include a variety of features to assist with a user's communication.
  • a speech generation device may include an electronic interface with specialized software configured to permit the creation and manipulation of digital messages that can be translated into audio speech output. Additional communication-related features also may be provided depending on user preferences and abilities. Users may provide input to a speech generation device by physical selection using a touch screen, mouse, joystick or the like or by other means such as eye tracking or audio control.
  • a conventional speech generation device may include an onboard computer with substantial computer processing functionality as well as a plethora of peripheral devices such as a display, touch screen, microphone, speakers and other options.
  • Such an SGD also may include a substantial portion of on-board memory in the form of a hard drive or other dedicated media source to store speech generation programs, user interfaces, and/or communications databases of text, symbols, and the like.
  • Suitable internal and/or external power sources also may be required to operate such components. These integrated units are highly functional but can be bulky and expensive.
  • the present subject matter is directed to various exemplary speech generation devices (SGDs) having improved configurations for providing selected AAC features and functions to a user.
  • a network-based system and method is provided by which a terminal device is in communication with a server device over a network.
  • the terminal device can correspond to an AAC device with some or all functional features of a conventional AAC device or to a mobile device such as a cell phone, smartphone, PDA, personal media player or the like.
  • a terminal device includes at least a processing module, a communications module and one or more speakers and is configured to access selected AAC functionality from a remote location, such as a server computer on a network.
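  • As a non-authoritative sketch of this terminal/server split, the following Python snippet shows a thin client that retrieves language data from, and submits composed messages to, a remote server; the server address, endpoint paths and payload shapes are illustrative assumptions, not taken from the disclosure.

```python
# Hypothetical sketch of a thin AAC terminal talking to a remote server.
# The URL, endpoints and payload shapes are assumptions for illustration only.
import json
import urllib.request

SERVER_URL = "https://aac-server.example.com"  # hypothetical server address


def fetch_language_data(category: str) -> dict:
    """Download one branch of language data (e.g. 'phrases') from the server."""
    with urllib.request.urlopen(f"{SERVER_URL}/language/{category}") as resp:
        return json.load(resp)


def speak_remotely(message: str) -> bytes:
    """Send a composed message for synthesis; returns audio for local speakers."""
    req = urllib.request.Request(
        f"{SERVER_URL}/speak",
        data=json.dumps({"text": message}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```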
  • a system and method for providing a unique set of communication interfaces to a user of a device facilitates speech generation and other communication-related needs for a user.
  • user interfaces may be provided that allow a user to operate a mobile device with AAC or other speech generation and communication related functionality.
  • One exemplary user interface contains a plurality of interface areas, including a message window, a contextual navigation bar, a button field, a position indicator and/or a language navigation bar.
  • the message window may be provided for transforming user input into a visual depiction of selected words, phrases, symbols, alphanumeric text and the like for assembly and ultimate conversion into electronically generated audio speech output of a composed message or other selected items.
  • the contextual navigation bar may be provided for displaying current or available selection options or different interface elements or buttons as a user navigates through different options of user interface screens.
  • the button field may show a plurality of specific symbols or text corresponding to selectable items which a user can edit and/or select for inclusion in a message.
  • the position indicator can show a user which of a plurality of interface pages the user is currently viewing or accessing.
  • the language navigation bar can provide a plurality of fixed user interface elements for selection by a user, such as but not limited to language buttons for phrases, core words, quickwords and/or a keyboard as well as an optional utilities button.
  • the SGD may correspond to a particular special-purpose electronic device that permits a user to communicate with others by producing digitized or synthesized speech based on configured messages.
  • Such messages may be preconfigured and/or selected and/or composed by a user within a message window viewable on the display device associated with the speech generation device.
  • a variety of other physical input devices and software interface features may be provided to facilitate the capture of user input to define what information should be displayed in a message window and ultimately communicated to others as spoken output, text message, phone call, e-mail or other outgoing communication.
  • FIG. 1 provides a schematic view of exemplary devices and related networked communication arrangements for using such devices to provide AAC functionality to device users in accordance with an exemplary aspect of some embodiments of the present invention
  • FIG. 2A provides a schematic view of exemplary hardware components for use in a first exemplary speech generation terminal device in accordance with an aspect of some embodiments of the present invention
  • Fig. 2B provides a schematic view of exemplary hardware components for use in a second exemplary speech generation terminal device in accordance with an aspect of some embodiments of the present invention
  • FIG. 3 illustrates exemplary interface elements for a device user interface in accordance with an aspect of some embodiments of the present invention
  • FIG. 4 illustrates a schematic relationship between different interface elements for a device user interface in accordance with an aspect of some embodiments of the present invention
  • Fig. 5 illustrates a blank message window within an exemplary device user interface in accordance with an aspect of some embodiments of the present invention
  • Fig. 6 illustrates an exemplary message window with text and symbols during the construction of a message via a device user interface in accordance with aspects of some embodiments of the present invention
  • Fig. 7 illustrates an example of a contextual navigation bar with one exemplary active context button for use with a device user interface in accordance with an aspect of some embodiments of the present invention
  • FIG. 8 illustrates an example of a contextual navigation bar with three exemplary active context buttons for use with a device user interface in accordance with an aspect of some embodiments of the present invention
  • Fig. 9 illustrates an exemplary button field with left and right active areas for optional inclusion in a device user interface in accordance with an aspect of some embodiments of the present invention
  • Fig. 10 illustrates an exemplary language navigation bar for use with a device user interface in accordance with an aspect of some embodiments of the present invention
  • Fig. 11 illustrates an exemplary symbol browser interface for performing searching and/or editing in accordance with an aspect of some embodiments of the present invention
  • Fig. 12 illustrates a schematic relationship among a first exemplary hierarchical category of user interface elements, namely language interface elements related to phrases, in accordance with an aspect of some embodiments of the present invention
  • Fig. 13 illustrates a schematic relationship among a second exemplary hierarchical category of user interface elements, namely language interface elements related to core words, in accordance with an aspect of some embodiments of the present invention
  • Fig. 14 illustrates a schematic relationship among a third exemplary hierarchical category of user interface elements, namely language interface elements related to quickwords, in accordance with an aspect of some embodiments of the present invention
  • Fig. 15 illustrates a schematic relationship among a fourth exemplary hierarchical category of user interface elements, namely language interface elements related to a keyboard, in accordance with an aspect of some embodiments of the present invention
  • Fig. 16 illustrates a schematic relationship among a fifth exemplary hierarchical category of user interface elements, namely utility interface elements, in accordance with an aspect of some embodiments of the present invention.
  • Fig. 17 provides a flow chart of exemplary method steps in accordance with an embodiment of the present invention.
  • the actual data may travel between the systems directly or indirectly. For example, if a first computer accesses a file or data from a second computer, the access may involve one or more intermediary computers, proxies, or the like. The actual file or data may move between the computers, or one computer may provide a pointer or metafile that the second computer uses to access the actual data from a computer other than the first computer.
  • Embodiments of the methods and systems set forth herein may be implemented by one or more general- purpose or customized computing devices adapted in any suitable manner to provide desired functionality.
  • the device(s) may be adapted to provide additional functionality, either complementary or unrelated to the present subject matter.
  • one or more computing devices may be adapted to provide desired functionality by accessing software instructions rendered in a computer-readable form.
  • any suitable programming, scripting, or other type of language or combinations of languages may be used to implement the teachings contained herein.
  • software need not be used exclusively, or at all.
  • embodiments of the methods disclosed herein may be executed by one or more suitable computing devices that render the device(s) operative to implement such methods.
  • Such devices may access one or more computer- readable media that embody computer-readable instructions which, when executed by at least one computer, cause the at least one computer to implement one or more embodiments of the methods of the present subject matter.
  • Any suitable computer-readable medium or media may be used to implement or practice the presently-disclosed subject matter, including, but not limited to, diskettes, drives, and other magnetic-based storage media, optical storage media, including disks (including CD-ROMS, DVD-ROMS, and variants thereof), flash, RAM, ROM, and other solid-state memory devices, and the like.
  • Fig. 1 provides a schematic overview of an exemplary device communication network for use in accordance with aspects of the present technology
  • selected aspects of the disclosed technology involve operation of an electronic device as a speech generation device or alternative and augmentative communication (AAC) device.
  • Any number of devices may be configured to perform the speech generation and related functions disclosed herein by adapting such devices with the specific features and functionality as presently disclosed.
  • a device should include at least some sort of processing element for implementing computer- readable instructions and one or more communications interfaces for connecting to a network.
  • a mobile device 102, a conventional AAC device 104, and/or a personal computer 106 may be configured to operate as a speech generation device.
  • Mobile device 102 may correspond to a variety of devices, such as but not limited to a mobile computing device, a handheld computer, a mobile phone, a cellular phone, a VoIP phone, a smart phone, a personal digital assistant (PDA), a Blackberry device, a Treo, an iPhone, an iPod Touch, an iPad, a media player, a navigation device, an e-mail device, a game console or other portable electronic device capable of some form of network access, or a combination of any two or more of such data processing devices or other data processing devices.
  • AAC device 104 may correspond to a variety of devices such as but not limited to a device such as offered for sale by DynaVox Mayer-Johnson of Pittsburgh, Pennsylvania including but not limited to the V, Vmax, Xpress, Tango, 3 and/or DynaWrite products or any other suitable component adapted with the features and functionality disclosed herein.
  • PC 106 may correspond, for example, to a stand-alone computer terminal, such as a desktop computer, laptop computer, netbook computer, palmtop computer, or personal digital assistant (PDA).
  • Exemplary stand-alone computers may include, but are not limited to, Apple®, Sun Microsystems®, IBM®, or IBM®- compatible personal computers outfitted with an operating system such as based on Microsoft Windows®, Linux, Mac OS X or other suitable interface.
  • a device configured with speech generation functionality in accordance with the present technology may be communicatively coupled with a network 108 such that the device can ultimately be connected to a server computer 110.
  • Network 108 may include but is not limited to a dial-in network, a local area network (LAN), wide area network (WAN), wireless local area network (WLAN), personal area network (PAN), campus area network (CAN), metropolitan area network (MAN), wireless wide area network (WWAN), Bluetooth, Wi-Fi, messaging protocols such as TCP/IP, SMS, MMS, extensible messaging and presence protocol (XMPP), real time messaging protocol (RTMP), instant messaging and presence protocol (IMPP), instant messaging, USSD, IRC, or any other wireless data networks or messaging protocols, public switched telephone network (PSTN), integrated services digital network (ISDN), asymmetric digital subscriber line (ADSL), digital subscriber line (DSL) and/or some other type of telephone network, the Internet, intranet or Ethernet type networks and others implemented over any combination of open and/or private, hard-wired and/or wireless communication links.
  • When network 108 is configured as a wireless network (e.g., mobile wireless network), the network 108 can be any network able to establish connections with mobile devices, such as a Global System for Mobile Communications (GSM) network, Code Division Multiple Access (CDMA) network, Evolution-Data Optimized (EV-DO) network, Enhanced Data Rates for GSM Evolution (EDGE) network, 3GSM network, Fixed Wireless Data, 2G, 2.5G or 3G networks, General Packet Radio Service (GPRS) network, enhanced GPRS network, Digital Enhanced Cordless Telecommunications (DECT), Digital AMPS (IS-136/TDMA) network, and Integrated Digital Enhanced Network (iDEN).
  • One or more gateways may be provided to interface the devices 102-106 to the network and ultimately to a server computer 110.
  • Gateways can include a number of components such as, but not limited to, protocol translators, impedance matching devices, rate converters, fault isolators, and/or signal translators, etc., to interface to one or more networks with different protocols than the protocols under which an original signal was sent. Such gateways can further facilitate the establishment of a set of mutually agreeable rules and protocols between the interfaced networks.
  • protocol converters such as gateways can operate at any network layer (e.g., the application layer, the presentation layer, the session layer, the transport layer, the network layer, the data link layer, and/or the physical layer) of the Open Systems Interconnection (OSI) model and convert one protocol stack into another.
  • a gateway can connect a LAN to the Internet.
  • gateways can also connect two IP-based networks.
  • network 108 can be used for one or more of devices 102, 104 or 106 to access AAC functionality.
  • devices 102, 104 and 106 do not necessarily need to be outfitted locally with the entire functionality required for providing the disclosed features herein. Instead, devices 102, 104 and/or 106 can operate as a terminal device or client device that accesses data stored remotely at the server computer 110 or that employs remotely accessed data or other computer-readable instructions in combination with locally stored data and instructions. Examples of remotely accessed data may include the symbol sets described herein and instructions for implementing various user interfaces or other graphical features set forth herein and others as may be utilized for providing AAC functionality.
  • devices 102, 104 and 106 are not directly coupled through the network to server computer 110 but instead are coupled to one of the other devices which then may be connected to network 108.
  • mobile device 102 may be configured with a communications link to AAC device 104 and/or PC 106 which is then connected to network 108.
  • one of the devices 102, 104 and 106 may serve effectively as a gateway for network access to another one of the devices 102, 104 and 106.
  • Various security protocols may be utilized for establishing network connections to server computer 110. For example, standard signal encryption techniques may be employed, alone or in combination with other secure connections such as via a secure shell (SSH) or virtual private network (VPN) connection, which would add an extra security layer with stronger encryption.
  • Fig. 2A provides a schematic illustration of exemplary components of a device 200, which could correspond to any one of the devices 102, 104 or 106 shown in Fig. 1.
  • One or more memory devices or databases within a speech generation device may correspond to computer-readable medium that may include computer-executable instructions for performing various steps/tasks associated with a device, and particularly in the context of speech generation and other AAC-related functions.
  • the electronic components of an SGD 200 enable the device to transmit and receive messages to assist a user in communicating with others.
  • the SGD may correspond to a particular special-purpose electronic device that permits a user to communicate with others by producing digitized or synthesized speech based on configured messages.
  • Such messages may be preconfigured and/or selected and/or composed by a user within a message window provided as part of the speech generation device user interface.
  • a variety of physical input devices and software interface features may be provided to facilitate the capture of user input to define what information should be displayed in a message window and ultimately communicated to others as spoken output, text message, phone call, e-mail or other outgoing communication.
  • central computing device 201 may include a variety of internal and/or peripheral components. Power to such devices may be provided from a battery 203, such as but not limited to a lithium polymer battery or other rechargeable energy source. A power switch or button 205 may be provided as an interface to toggle the power connection between the battery 203 and the other hardware components.
  • any peripheral hardware device 207 may be provided and interfaced to the speech generation device via a USB port 209 or other communicative coupling.
  • the components shown in Fig. 2A may be provided in different configurations and may be provided with different arrangements of direct and/or indirect physical and communicative links to perform the desired functionality of such components.
  • a central computing device 201 is provided to function as the central controller within a SGD and may generally include such components as at least one memory/media element or database for storing data and software
  • processor(s) 202 and associated memory/media devices 204a and 204b are configured to perform a variety of computer-implemented functions (i.e., software-based data services).
  • processors 202 within computing device 201 may be configured for operation with any predetermined operating systems, such as but not limited to Windows XP, and thus the device is an open system that is capable of running any application that can be run on Windows XP.
  • Other possible operating systems include BSD UNIX, Darwin (Mac OS X), Linux, SunOS (Solaris/OpenSolaris), and Windows NT (XP/Vista/7).
  • At least one memory/media device (e.g., device 204a in Fig. 2A) is dedicated to storing software and/or firmware in the form of computer-readable and executable instructions that will be implemented by the one or more processor(s) 202.
  • Other memory/media devices (e.g., memory/media device 204b) may be provided to store data that will be accessed by the processor(s) 202.
  • The various memory/media devices of Fig. 2A may be provided as a single portion or multiple portions of one or more varieties of computer-readable media, such as but not limited to any combination of volatile memory (e.g., random access memory (RAM) such as DRAM, SRAM, etc.) and nonvolatile memory (e.g., ROM, flash, hard drives, magnetic tapes, CD-ROM, DVD-ROM, etc.) or any other memory devices including diskettes, drives, other magnetic-based storage media, optical storage media and others.
  • at least one memory device corresponds to an electromechanical hard drive and/or a solid state drive (e.g., a flash drive) that easily withstands shocks, for example shocks that may occur if the SGD 200 is dropped.
  • Although Fig. 2A shows two separate memory/media devices 204a and 204b, the content dedicated to such devices may actually be stored in one memory/media device or in multiple devices. Any such possible variations and other variations of data storage will be appreciated by one of ordinary skill in the art.
  • a first portion of memory/media device 204b is configured to store input data received from a user for performing the desired functional steps associated with a speech generation device.
  • data in memory 204b may include inputs received from one or more peripheral devices, including but not limited to touch screen 206, microphone 208, camera 219, and other peripheral devices 210, which indicate a user's selections of text to be spoken by the SGD or other related actions.
  • Memory device 204a includes computer-executable software instructions that can be read and executed by processor(s) 202 to act on the data stored in memory/media device 204b to create new output data (e.g., audio signals, display signals, RF communication signals and the like) for temporary or permanent storage in one of the memory/media devices.
  • output data may be communicated to a peripheral output device, such as display device 212, speakers 214, cellular phone or RF device 216, wireless network adapter 218, or as control signals to still further components.
  • Computing/processing device(s) 202 may be adapted to operate as a special-purpose machine by executing the software instructions rendered in a computer-readable form stored in memory/media element 204a.
  • any suitable programming, scripting, or other type of language or combinations of languages may be used to implement the teachings contained herein.
  • the methods disclosed herein may alternatively be implemented by hard-wired logic or other circuitry, including, but not limited to application-specific integrated circuits.
  • various input devices may be part of an SGD 200 and thus coupled to the computing device 201.
  • a touch screen 206 may be provided to capture user inputs directed to a display location by a user hand or stylus.
  • a microphone 208, for example a surface-mount CMOS/MEMS silicon-based microphone or other device, may be provided to capture user audio inputs.
  • Other exemplary input devices (e.g., peripheral device 210) may include but are not limited to a peripheral keyboard, peripheral touch-screen monitor, peripheral microphone, mouse and the like.
  • a camera 219, such as but not limited to an optical sensor, e.g., a charge-coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS) optical sensor, or other device can be utilized to facilitate camera functions, such as recording photographs and video clips.
  • Hardware components of SGD 200 may also include one or more integrated output devices, such as but not limited to display 212 and/or speakers 214.
  • Display device 212 may correspond to one or more substrates outfitted for providing images to a user.
  • Display device 212 may employ one or more of liquid crystal display (LCD) technology, light emitting polymer display (LPD) technology, light emitting diode (LED), organic light emitting diode (OLED) and/or transparent organic light emitting diode (TOLED) or some other display technology.
  • a display device 212 and touch screen 206 are integrated together as a touch-sensitive display that implements one or more of the above-referenced display technologies (e.g., LCD, LPD, LED, OLED, TOLED, etc.) or others.
  • the touch-sensitive display can be sensitive to haptic and/or tactile contact with a user.
  • a touch sensitive display that is a capacitive touch screen may provide such advantages as overall thinness and light weight.
  • a capacitive touch panel requires no activation force but only a slight contact, which is an advantage for a user who may have motor control limitations.
  • Capacitive touch screens also accommodate multi-touch applications (i.e., a set of interaction techniques which allow a user to control graphical applications with several fingers) as well as scrolling
  • a touch-sensitive display can comprise a multi-touch-sensitive display.
  • a multi-touch-sensitive display can, for example, process multiple simultaneous touch points, including processing data related to the pressure, degree, and/or position of each touch point. Such processing facilitates gestures and interactions with multiple fingers, chording, and other interactions.
  • Other touch-sensitive display technologies also can be used, e.g., a display in which contact is made using a stylus or other pointing device.
  • Speakers 214 may generally correspond to any compact high power audio output device. Speakers 214 may function as an audible interface for the speech generation device when computer processor(s) 202 utilize text-to-speech functionality. Speakers can be used to speak the messages composed in a message window as described herein as well as to provide audio output for telephone calls, speaking e-mails, reading e-books, and other functions.
  • a volume control module 222 may be controlled by one or more scrolling switches or touch-screen buttons.
  • SGD hardware components may also include various communications devices and/or modules, such as but not limited to an antenna 215, cellular phone or RF device 216 and wireless network adapter 218.
  • Antenna 215 can support one or more of a variety of RF communications protocols.
  • a cellular phone or other RF device 216 may be provided to enable the user to make phone calls directly and speak during the phone conversation using the SGD, thereby eliminating the need for a separate telephone device.
  • a wireless network adapter 218 may be provided to enable access to a network, such as but not limited to a dial-in network, a local area network (LAN), wide area network (WAN), public switched telephone network (PSTN), the Internet, intranet or Ethernet type networks or others.
  • Additional communications modules such as but not limited to an infrared (IR) transceiver may be provided to function as a universal remote control for the SGD that can operate devices in the user's environment, for example including TV, DVD player, and CD player.
  • a dedicated communications interface module 220 may be provided within central computing device 201 to provide a software interface from the processing components of computer 201 to the communication device(s).
  • communications interface module 220 includes computer-executable instructions in support of such communication device(s).
  • additional executable instructions stored in memory associated with central computing device 201 provide a web browser to serve as a graphical user interface for interacting with the Internet or other network.
  • software instructions may be provided to call preconfigured web browsers such as Microsoft Internet Explorer or Firefox® internet browser available from Mozilla software.
  • Antenna 215 may be provided to facilitate wireless communications with other devices in accordance with one or more wireless communications protocols, including but not limited to BLUETOOTH, WI-FI (802.11b/g) and ZIGBEE wireless communication protocols.
  • the wireless interface afforded by antenna 215 may couple the SGD 200 to any output device to communicate audio signals, text signals (e.g., as may be part of a text, e-mail, SMS or other text-based communication message) or other electronic signals.
  • the antenna 215 enables a user to use the SGD 200 with a Bluetooth headset for making phone calls or otherwise providing audio input to the SGD.
  • antenna 215 may provide an interface between SGD 200 and a powered speaker or other peripheral device that is physically separated from SGD 200.
  • the SGD also can generate Bluetooth radio signals that can be used to control a desktop computer, with the SGD appearing to the desktop computer as a mouse and keyboard.
  • Another option afforded by Bluetooth communications features involves the benefits of a Bluetooth audio pathway. Many users utilize an option of auditory scanning to operate their SGD. A user can choose to use a Bluetooth-enabled headphone to listen to the scanning, thus affording a more private listening environment that eliminates or reduces potential disturbance in a classroom environment without public broadcasting of a user's communications.
  • a Bluetooth (or other wirelessly configured) headset can provide advantages over traditional wired headsets, again by overcoming the cumbersome nature of the traditional headsets and their associated wires.
  • the cell phone component 216 shown in Fig. 2A may include additional sub-components, such as but not limited to an RF transceiver module, coder/decoder (CODEC) module, digital signal processor (DSP) module, communications interfaces, microcontroller(s) and/or subscriber identity module (SIM) cards.
  • An access port for a subscriber identity module (SIM) card enables a user to provide requisite information for identifying user information and cellular service provider, contact numbers, and other data for cellular phone use.
  • associated data storage within the SGD itself can maintain a list of frequently-contacted phone numbers and individuals as well as a phone history of phone calls and text messages.
  • One or more memory devices or databases within a speech generation device may correspond to computer-readable medium that may include computer-executable instructions for performing various steps/tasks associated with a cellular phone and for providing related graphical user interface menus to a user for initiating the execution of such tasks.
  • the input data received from a user via such graphical user interfaces can then be transformed into a visual display or audio output that depicts various information to a user regarding the phone call, such as the contact information, call status and/or other identifying information.
  • General icons available on the SGD or displays provided by the SGD can offer access points for quick access to the cell phone menus and functionality, as well as information about the integrated cell phone such as the cellular phone signal strength, battery life and the like.
  • Another exemplary embodiment of a device used in accordance with aspects of the disclosed technology corresponds to device 230 as schematically depicted in Fig. 2B.
  • Device 230 includes a subset of the features represented in Fig. 2A, with similar or different functionality.
  • the device 230 need not contain the entirety of the functionality offered by a device 200 such as represented by Fig. 2A.
  • device 230 can include a communication module 224 capable of linking device 230 to a network such as network 108 as shown in Fig. 1. Additional functionality including data and computer instructions then can be accessible from a remote location instead of or in addition to being stored locally on the device 230.
  • a single memory element 204c may be provided in device 230 that contains a subset of information that may have been stored in memory elements 204a and 204b in device 200.
  • the battery 203, processor 202, speakers 214 and/or other elements of device 230 in Fig. 2B may not be as expensive or top-of-the-line in functionality as those provided in device 200 of Fig. 2A.
  • elements of a graphical user interface, including text, symbols, icons, menus, templates, so-called "buttons" or other features, may be displayed on an output device associated with an electronic device such as an AAC device or mobile device.
  • Buttons provide a user interface element by which a user can select additional interface options or language elements. Buttons can generally be characterized as symbols, text and/or additional graphical identifiers that provide selectable access to different functional elements (e.g., different language elements). It should be appreciated that the particular descriptive identifiers used for the buttons referenced herein are only examples. Different identifiers can be used to represent the various functional elements accessible by one or more of such buttons.
  • Specific exemplary language elements disclosed herein are accessible by one or more button interfaces of the subject technology including words, phrases, quickwords, keyboard symbols and the like. Such language elements enable a user to have a wide variety of language elements immediately available. By listing words and phrases by topic or by common usage, language elements enable a user to quickly search through a wide range of text and/or symbol options when composing a message. User customization through My Lists or My Phrases listings can further facilitate user communication.
  • Features also may be provided to trigger actions performed by the SGD upon selection of one or more items from available lists, for example, to automatically "speak” or provide as audio output the words/phrases/symbols immediately as they are selected by a user, or to send the selected words/phrases/symbols to the Message Window for composing more complex communications before conversion to audio output.
  • Because user interface elements are used to help a user communicate, some of such features are referred to herein as "language data." Such user interface features then may be selectable by a user (e.g., via an input device, such as a mouse, keyboard, touchscreen, virtual keypad or the like). When selected, the user input features can trigger control signals that can be relayed to the central computing device within an SGD to perform an action in accordance with the selection of the user buttons. Such additional actions may result in execution of additional instructions, display of new or different user interface elements, or other actions as desired. As such, user interface elements also may be viewed as display objects, which are graphical representations of system objects that are selectable by a user. Some examples of system objects include device functions, applications, windows, files, alerts, events or other identifiable system objects. Additional features of such user interfaces and related systems and methods of providing AAC functionality in accordance with such user interfaces are presented in Figs. 3-16.
  • User interfaces may provide users with access to a hierarchical system of on screen buttons that contain language elements and can be activated by user selection to "speak" words or phrases.
  • Speaking consists of playing a recorded message or sound or speaking text using a voice synthesizer. In accordance with such functionality, some user interfaces are provided with a "Message Window" in which a user provides text, symbols corresponding to text, and/or related or additional information which then may be interpreted by a text-to-speech engine and provided as audio output via device speakers.
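  • As an illustrative reading of this mixed text-and-sound speaking model, the following minimal Python sketch speaks a composed message in order; the entry types and the speak_text/play_clip helpers are hypothetical stand-ins for a real voice synthesizer and audio player, not part of the disclosure.

```python
# Minimal sketch: a message composed of text (sent to TTS) and recorded
# sounds (played back directly). Helper functions are hypothetical stand-ins.
from dataclasses import dataclass


@dataclass
class TextEntry:
    text: str


@dataclass
class SoundEntry:
    clip_path: str


def speak_text(text: str) -> None:
    print(f"[TTS] {text}")            # stand-in for a voice synthesizer call


def play_clip(path: str) -> None:
    print(f"[audio] playing {path}")  # stand-in for recorded-sound playback


def speak_message(entries) -> None:
    """Speak a composed message in order, mixing synthesized and recorded audio."""
    for entry in entries:
        if isinstance(entry, TextEntry):
            speak_text(entry.text)
        else:
            play_clip(entry.clip_path)


speak_message([TextEntry("I want"), SoundEntry("laugh.wav"), TextEntry("to go home")])
```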
  • Speech output may be generated in accordance with one or more preconfigured text-to-speech generation tools in male or female and adult or child voices, such as but not limited to such products as offered for sale by Cepstral, HQ Voices offered by Acapela, Flexvoice offered by Mindmaker, DECtalk offered by Fonix, Loquendo products, VoiceText offered by NeoSpeech, products by AT&T's Natural Voices offered by Wizzard, Microsoft Voices, digitized voice (digitally recorded voice clips) or others.
  • the button hierarchies and other aspects of the disclosed user interfaces are designed to provide access to specific types of language elements and may be organized into a language system, sometimes referred to herein as the "language data.”
  • a device may be preprogrammed with a set of language data, or may download or otherwise access such language data from another device via a network as previously described.
  • Buttons that are part of this language data can contain one or more text labels and/or images and/or language data.
  • User selection of a button may trigger the execution of instructions that configure a speech generation device to make a click noise and/or other graphical indication that the button was selected.
  • the language data can be a message which is spoken immediately (e.g., via a speaking button provided on a device user interface) or a word which can be used to build a phrase that can be spoken upon completion (e.g., via a message button).
  • a first exemplary mode corresponds to a "Use Mode” in which a user interface uses buttons and toolbars to navigate through language data, moving through one or more levels of the language hierarchy before ultimately providing access to preprogrammed words or phrases.
  • a second exemplary mode corresponds to an "Edit Mode” in which language data can be edited and expanded by a user, thus providing access to buttons for editing and the necessary dialogs and utilities that allow for the modification of both visual and functional elements of a button.
  • a third exemplary mode corresponds to a "Select Mode” in which the pages of language data can be navigated as with other modes such as "Use Mode” and "Edit Mode.”
  • buttons may be copied into an Export folder for sharing over a network.
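  • The three operating modes lend themselves to a simple dispatch on the current mode, as in the following illustrative Python sketch; the names and the per-mode actions are assumptions made for the example.

```python
# Illustrative dispatch on the three operating modes described above.
from enum import Enum, auto


class Mode(Enum):
    USE = auto()     # navigate language data and speak
    EDIT = auto()    # modify visual/functional elements of buttons
    SELECT = auto()  # mark buttons, e.g. for copying into an Export folder


def handle_button_tap(mode: Mode, button) -> None:
    if mode is Mode.USE:
        button.activate()                      # speak or navigate
    elif mode is Mode.EDIT:
        button.open_edit_dialog()              # hypothetical edit dialog
    elif mode is Mode.SELECT:
        button.selected = not button.selected  # toggle export selection
```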
  • Exemplary user interfaces may include different types of buttons to generate words and phrases that can be spoken.
  • One exemplary type of button is a Hierarchical Navigation Button, which generally provides access to different levels or categories of buttons (e.g., child buttons).
  • Another exemplary type of button is a Speaking Button which will speak a word or phrase directly after it is activated.
  • Yet another type of button is a Message Button which can be utilized to build phrases that can be spoken once completed. The phrases can be built by entering text into a message window. Once completed, the contents of the message window then can be spoken. Additional message buttons may be provided as keys on a virtual keypad.
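  • A minimal sketch of these three button types might look as follows; the class names mirror the terms above, while the ui object (with show_page(), speak() and a message_window list) is an assumption made for illustration.

```python
# Sketch of the three button types described above; `ui` is a hypothetical
# interface exposing show_page(), speak() and a message_window list.
class Button:
    def __init__(self, label: str):
        self.label = label

    def activate(self, ui) -> None:
        raise NotImplementedError


class HierarchicalNavigationButton(Button):
    """Provides access to a lower level or category of child buttons."""
    def __init__(self, label, child_page):
        super().__init__(label)
        self.child_page = child_page

    def activate(self, ui) -> None:
        ui.show_page(self.child_page)


class SpeakingButton(Button):
    """Speaks a word or phrase directly when activated."""
    def __init__(self, label, phrase):
        super().__init__(label)
        self.phrase = phrase

    def activate(self, ui) -> None:
        ui.speak(self.phrase)


class MessageButton(Button):
    """Appends a word to the message window for later speaking."""
    def __init__(self, label, word):
        super().__init__(label)
        self.word = word

    def activate(self, ui) -> None:
        ui.message_window.append(self.word)
```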
  • Software instructions may be provided for a keyboard in addition to the other preprogrammed language data so that a user can use the keyboard when building a word or multi-word phrase prior to speaking.
  • the language system may be further supplemented by a set of Utilities which will provide access to non-language functions of a program, such as Settings, data management functions, an optional still and/or video camera, a storytelling utility, some language elements which are accessed at a lower frequency than the main language branches, etc.
  • Such functions may be accessed through one or more buttons having a predetermined identifier, such as "Utility", "More” or other designation, which can be displayed for example in a corresponding "Utilities” or other branch of the system.
  • an exemplary user interface 300 includes a plurality of areas, each of which may include one or more arrangements of interface elements.
  • a first user interface area 302 generally corresponds to a Message Window, in which text, symbols, or other items chosen by a user may be displayed.
  • a second user interface area 304 generally corresponds to a contextual navigation bar configured to display different current or available selection options or different interface elements or buttons as a user navigates through different options of user interface screens.
  • a third user interface area 306 corresponds to a button field which may show a plurality of specific symbols or text corresponding to selectable items which a user can edit and/or select for inclusion in a message.
  • a fourth exemplary user interface area 308 corresponds to a position indicator that can show a user where in a plurality of interface pages the user is currently viewing or accessing.
  • a fifth exemplary user interface area 310 corresponds to a language navigation bar that can provide a plurality of fixed user interface elements for selection by a user, such as but not limited to language buttons for phrases, core words, quickwords and/or a keyboard as well as an optional utilities button. Examples of buttons illustrated in such exemplary user interface areas in Fig. 3 may be considered placeholders, with the actual buttons containing appropriate icons and possibly text to represent their functionality. Interface elements can change to reflect the current state of the system.
  • Fig. 4 provides a schematic overview of the relations among exemplary parent, child and cousin interface elements.
  • User interface elements 400 and 410 are referred to as "Parent 1" and "Parent 2" and are siblings of one another.
  • Parent 1 can have a plurality of children, namely Child 1.1, Child 1.2, ..., Child 1.N, shown as user interface elements 402, 404 and 406.
  • Parent 2 can have a plurality of children, namely Child 2.1, Child 2.2, ..., Child 2.N, shown as user interface elements 412, 414 and 416.
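  • These relations can be captured with an ordinary tree in which cousins are the children of a parent's siblings, as in this illustrative sketch.

```python
# Illustrative tree capturing the parent/child/cousin relations of Fig. 4.
class Node:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.children = name, parent, []
        if parent is not None:
            parent.children.append(self)

    def siblings(self):
        if self.parent is None:
            return []
        return [c for c in self.parent.children if c is not self]

    def cousins(self):
        # Cousins are the children of the parent's siblings.
        if self.parent is None:
            return []
        return [c for s in self.parent.siblings() for c in s.children]


root = Node("root")
p1, p2 = Node("Parent 1", root), Node("Parent 2", root)
c11, c12 = Node("Child 1.1", p1), Node("Child 1.2", p1)
c21 = Node("Child 2.1", p2)
print([n.name for n in c11.cousins()])  # ['Child 2.1']
```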
  • the message window 302 serves to display the constructed message and 'speak' the message - which could consist of text (which would be spoken by the speech synthesizer) and recorded sounds. Both text and recorded sounds may be intermixed in the message window.
  • the message window 302 may be configured to display both text and images as well as provide for some basic editing functions such as Delete Word/Clear and Undo Clear.
  • the message window also may be able to repeat the last message spoken, regardless of whether it was constructed in the message window or spoken directly upon button activation. In one embodiment, the message window always displays the text associated with a Message Button.
  • Message Window functionality may be provided through one or more different regions such as shown in Figs. 5 and 6.
  • a Delete/Repeat Button and an Active Touch Space may be provided, each of which may be a function of the context with which each is used.
  • pressing the Active Touch Space will speak the contents of the message window and then clear the message window. It also can put the Delete/Repeat Button into a Repeat mode. Pressing the Delete/Repeat button while it is in Repeat mode can repeat the last message spoken.
  • the user interface can be configured such that a user taps, double taps, presses and holds, or otherwise selects the Delete/Repeat Button, as defined by a selection option setting, to fill the Message Window with the last message (essentially undoing the last clear).
  • when the Message Window contains content, the Delete/Repeat Button can be in Delete mode. Selection of the Delete/Repeat Button while a device is in Delete mode can remove the last word or symbol-word pair in the message window, and repeated selection while it remains in Delete mode can clear any remaining contents of the Message Window.
  • touching the Active Touch Space can pause the speech, with touching again to continue. Double tapping or pressing and holding can cancel the speech.
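  • Read as a small state machine, the Active Touch Space and Delete/Repeat Button behavior described above might be sketched as follows; the names and the exact clear/undo policy are assumptions for the sketch.

```python
# Illustrative state machine for the Active Touch Space and Delete/Repeat
# Button. The undo/clear policy shown here is an assumption for the sketch.
class MessageWindowControls:
    def __init__(self, speak):
        self.items = []          # words / symbol-word pairs being composed
        self.last_message = []   # most recently spoken (and cleared) message
        self.speak = speak

    def touch_active_space(self) -> None:
        """Speak the contents, then clear; the button switches to Repeat mode."""
        if self.items:
            self.speak(self.items)
            self.last_message, self.items = self.items, []

    def delete_or_repeat(self) -> None:
        if self.items:                           # Delete mode
            self.items.pop()                     # remove last word/symbol pair
        elif self.last_message:                  # Repeat mode (undo last clear)
            self.items = list(self.last_message)


ctrl = MessageWindowControls(speak=lambda msg: print("speaking:", " ".join(msg)))
ctrl.items = ["I", "want", "juice"]
ctrl.touch_active_space()   # speaking: I want juice
ctrl.delete_or_repeat()     # Repeat mode: restores the last message
print(ctrl.items)           # ['I', 'want', 'juice']
```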
  • the Message Window may further show, when visible, a banner that indicates the mode in which the device is currently operating (e.g., Use Mode, Edit Mode, Select Mode, etc.)
  • an exemplary Contextual Navigation Bar may be broken up into multiple areas, such as the Dynamic Function Space and the Mode Indicator as illustrated.
  • the Dynamic Function Space can show Context Buttons which navigate between sets of button pages, while the Mode Indicator on the right side of the interface area may be configured to display a status icon representative of the current operating mode (e.g., Use Mode, Edit Mode, Select Mode). Additional indicators may be available to visually indicate to a user the type of mode in which the user is operating. For example, color cueing and labels may be placed on the Contextual Navigation bar to signify one or more of the different modes, particularly Edit Mode so that a user is aware that he/she may be modifying certain device functionality.
  • the color of the contextual navigation bar is changed in Edit Mode relative to the color in different operating modes, and the button field is outlined with a colored border to indicate the available area for editing.
  • the language navigation bar may be disabled during Edit Mode as well as the message window so as to limit the user input to those items that are currently editable.
  • a mode tab 702 is also present relative to the Contextual Navigation Bar 304 by which a user may toggle between the various operating modes (e.g., Use Mode, Edit Mode, Select Mode).
  • the mode tab 702 may correspond to a particular area of the interface or to the entire active area to the far right side of the Contextual Navigation Bar 304. In one embodiment, a user can actuate the mode tab 702 by touch, left swipe across the page, or other predetermined and programmed actuation technique.
  • actuation of the mode tab 702 may be configured to bring up a menu by which a user selects among multiple mode options.
  • actuation of the mode tab 702 results in automatic change from one mode to another.
  • the Dynamic Function space within a Contextual Navigation Bar can be used to present contextually appropriate function buttons depending on which one of a plurality of different potential modes a device is operating in (e.g., Use Mode, Edit Mode, Select Mode, etc.)
  • This display area may present both contextual information (icons and text) and any necessary Contextual Navigation buttons that will provide navigation to any cousin pages that may be present for a given language context.
  • the Contextual Navigation Bar may display contextual information and dynamic links to other cousin pages as well as the Mode Indicator Icon which will be shown to represent its current state as being in Use Mode.
  • the Contextual Navigation Bar may display contextual information and dynamic links to other cousin pages as well as the Mode Indicator which will be shown to represent its current state as being in Edit Mode.
  • an additional button such as a "+" or “Add” button may be present in the Contextual Navigation Bar so that user selection of such button will allow a user to add a new element (e.g., button or other graphical element) to the current display context.
  • a "Done” button may also be present so that a user can actuate the button to exit the Edit Mode.
  • the Contextual Navigation Bar can display contextual information and dynamic links to other cousin pages as well as a Mode Indicator.
  • buttons 901 may be selected by a user to navigate to related button pages (e.g., sibling pages that contain more buttons in a current language context).
  • Optional grid sizes for the button field may correspond to an array of m x n buttons, e.g., 2x2, 2x3, 3x4, 3x3, 4x2, 4x3, 4x4, 5x2, 5x3, 5x4, 5x5, or others.
  • the button field may display parent buttons (i.e., buttons that are used to navigate to pages of Speaking or Message Buttons), the Speaking or Message buttons of the current language context, or Utilities Buttons.
  • the buttons 901 in the button field 306 may be configured for actuation by a user selection method such as touching via a touch screen input, at which point one or more specific action(s) may be executed as configured for such buttons.
  • User selection of the one or more active areas (e.g., left active area 902 and right active area 903) can occur upon touch, left or right swipes across the touch screen surface, or other actuation technique.
  • buttons 901 in a button field 306 may be touched to set the currently active item and launch the edit dialog for that particular Button Type.
  • certain buttons (e.g., Utility Buttons)
  • buttons can be dynamically rearranged by a user, such as by swapping places with another button. In one example, when a button selected by a user's finger or stylus is dragged to a new location (i.e., the drop target), the button is inserted at the drop target location while moving other buttons over one space or into another predetermined configuration.
  • when a button is dragged to a drop target corresponding to another button, the currently dragged button and the drop target button are swapped. If a button is dragged to the right or left active area, it will swap with the first button on the sibling page. Hovering over the left or right active area with a dragged button will flip the page to the sibling page, where the dragged button can be swapped with a button on that page.
  • the active areas 902 and 903 may move the displayed page to the left or right sibling page. If the right active area is activated and there is no next page, a new page can be created provided the current page is not completely blank. The number of new pages is limited only by the capacity of the Position Indicator, thus providing a substantial number of pages for use by an end user. Swiping the Button Field 306 left or right will also change to the respective sibling page. (A sketch of the insert/swap behavior follows.)
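The insert-at-target and swap behaviors described above can be modeled as simple list operations. The following Python sketch is illustrative only; the function names and the page representation are assumptions, not the disclosed implementation.

```python
def insert_at(page, src, dst):
    """Drop a dragged button at the target slot, shifting the others one space."""
    button = page.pop(src)
    page.insert(dst, button)
    return page

def swap(page, src, dst):
    """Swap the dragged button with the button occupying the drop target."""
    page[src], page[dst] = page[dst], page[src]
    return page

def swap_with_sibling(pages, page_idx, src, direction):
    """Dragging onto the left (-1) or right (+1) active area swaps the dragged
    button with the first button on the adjacent sibling page, if one exists."""
    nbr = page_idx + direction
    if 0 <= nbr < len(pages):
        pages[page_idx][src], pages[nbr][0] = pages[nbr][0], pages[page_idx][src]
    return pages

page = ["eat", "drink", "more", "help"]
print(insert_at(page[:], 3, 0))  # ['help', 'eat', 'drink', 'more']
print(swap(page[:], 0, 2))       # ['more', 'drink', 'eat', 'help']
```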
  • buttons in the Button Field may be touched to toggle their selection state.
  • Selection State can be indicated graphically, for example by disabling (graying out) the button and superimposing a colorful 'selected' icon on the button (e.g., a green circle with a check in it).
  • the Utilities category departs from the standard Button Field function in that, with the exception of the Story Utility Button, all buttons are disabled and cannot be selected.
  • the Story Utility button operates as it normally would, presenting a page or pages of Story Buttons that can be selected.
  • the Stories Utility utilizes a basic hierarchical structure to define 'stories' - lists of sequentially presented 'full screen' Phrase Buttons.
  • the position indicator may include a plurality of display elements arranged in an array or other preconfigured alignment.
  • the display element corresponding to the active display may be highlighted or otherwise selected such that the position of the active display can be determined from its relationship to the other display elements. For example, if the button field has enough buttons to correspond to four pages of buttons and the second page is currently displayed as the active page, then the second of the four display elements may appear highlighted. As such, the position indicator 308 can show the number of sibling pages and the current location within the current sibling page list.
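For illustration, the number of sibling pages follows directly from the button count and the grid size, as in this minimal Python sketch (the rendering of filled/empty dots is an assumption):

```python
import math

def render_position_indicator(num_buttons, grid_rows, grid_cols, current_page):
    """One display element per sibling page; the active page's element is
    drawn highlighted (here, a filled dot)."""
    per_page = grid_rows * grid_cols
    num_pages = max(1, math.ceil(num_buttons / per_page))
    return " ".join("●" if i == current_page else "○" for i in range(num_pages))

# 30 buttons on a 3x3 grid -> 4 sibling pages; the second page is active.
print(render_position_indicator(30, 3, 3, current_page=1))  # ○ ● ○ ○
```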
  • an exemplary language navigation bar 310 includes a plurality of interface elements (i.e., buttons) that may generally correspond to different language categories, available utilities or other designated items.
  • interface elements 1000, 1002, 1004 and 1006 correspond to different language categories and interface element 1008 corresponds to a Utilities branch.
  • the language navigation bar remains filled with the same set of user interface elements regardless of the current language context of the button field. There may be exceptions for some of the functions in the Utilities section. For all language categories, the buttons will be present, and selecting those buttons takes a user to the root parent for that language category.
  • buttons within language navigation bar 310 may change to indicate which language branch describes the current language context, with the current branch being highlighted or otherwise indicated with predetermined graphics.
  • the interface may return to the last context used in those categories. In some cases, e.g., during some options under the Utilities branch, the language navigation bar 310 may not be present in a user interface display.
  • The language on the subject devices generally may be defined using two classes of buttons - bar buttons and field buttons.
  • the bar buttons may correspond to those that reside on the context navigation bar and the language navigation bar.
  • such bar buttons may be of a constant, known size and have specific, fixed icons (and optional text) associated with them.
  • the field buttons may vary in size depending upon the grid setting for the button field array size, and may be modifiable by a user.
  • buttons for use in embodiments of the present user interfaces include but are not limited to parent buttons, child buttons, a message button and a speaking button.
  • the primary function of a parent button is to hold information about the child buttons that will be used to fill the Button Field when this button is selected.
  • Such parent buttons are used in the Context Navigation Bar and as the Button Field Parent Buttons in the two level hierarchy that exists in some part of the language system. When these buttons are activated, the Button field is filled with the child buttons.
  • if the list of child buttons is greater than the current grid can display, sibling pages of child buttons are created.
  • the Message Button actually 'speaks' a word or phrase, and then puts its text and symbol (depending upon the Message Window setting and if a symbol is present) into the Message window.
  • by 'speaks' it is meant that if there is a Recorded Sound, then that sound is played; if not, the text is spoken using the voice synthesizer as previously described.
  • the Speaking Button just speaks a word or phrase; nothing is put into the Message Window.
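The division of labor among parent, message and speaking buttons described above can be sketched as a small class hierarchy. The Python below is a minimal illustration; the collaborator objects (button_field, audio, tts, message_window) and all names are hypothetical stand-ins, not the disclosed implementation.

```python
class Button:
    def __init__(self, label, symbol=None):
        self.label, self.symbol = label, symbol

class ParentButton(Button):
    """Holds the child buttons used to fill the Button Field when selected."""
    def __init__(self, label, children, symbol=None):
        super().__init__(label, symbol)
        self.children = children

    def activate(self, button_field):
        button_field.fill(self.children)  # sibling pages created if too many

class SpeakingButton(Button):
    """Speaks a word or phrase; nothing is placed in the Message Window."""
    def __init__(self, label, text, recorded_sound=None, symbol=None):
        super().__init__(label, symbol)
        self.text, self.recorded_sound = text, recorded_sound

    def speak(self, audio, tts):
        # A recorded sound takes precedence; otherwise the text is synthesized.
        if self.recorded_sound:
            audio.play(self.recorded_sound)
        else:
            tts.say(self.text)

class MessageButton(SpeakingButton):
    """Speaks, then also places its text and symbol into the Message Window."""
    def activate(self, audio, tts, message_window):
        self.speak(audio, tts)
        message_window.append(self.text, self.symbol)
```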
  • the language navigation bar defines the four main categories (or branches) of the language system.
  • the categories are Phrases, Quickwords, Core Words and Keyboard, together with the Utilities branch.
  • the core of the language system can be described in the three categories of Phrases, Core Words, and Quickwords.
  • the other language category branches, Keyboard and Utilities, are more functionally oriented rather than hierarchical in nature (with the exception of the Story Utility).
  • the diagrams set forth in Figs. 12-16 help to show the overall system, one category at a time. In such diagrams, the top-level Language Categories corresponding to elements 1000-1006 on the language navigation bar 310 shown in Fig. 10 are shown on the left of the hierarchy. Shaded boxes represent the buttons that are displayed in the Contextual Navigation Bar, and non-shaded boxes are buttons that would appear in the button field for a given context.
  • a language data file may be stored in memory associated with an electronic device.
  • Such language data file may be used to define the language content of the hierarchical data system depicted in Figs. 12-15, including the specification for all buttons in the system and their relation to one another. Additional data may be included to define the symbols that are included for display on the buttons or other elements for composing messages.
  • symbol data may include separate files for factory pre-programmed data (the Symbol Library) and later user-defined data (the My Symbols Library). Such separation of symbol data will typically make it more efficient to perform backups and factory data upgrades.
  • symbols can be referenced via an identifier (ID), rather than directly including pertinent symbol data in a button.
  • symbols can be reused, and the system can avoid having to back up the memory-intensive, factory-provided symbol library.
  • the Symbol Library and the My Symbols Library may be somewhat unique in that they include data that is referenced by the other objects. They don't directly describe the language presented by the system, but serve as a resource for the objects that do. They are defined and available for use by any button that is edited to reference them. If no button references a particular symbol, it still may be in the library, ready for use in the future. All other language related data may be contained in the button and only exists if the button exists.
  • the Symbol Library may be preconfigured as part of the subject instructions and interfaces.
  • the My Symbols Library, consisting of user-generated symbol data, may be added to the program by the user, who can capture an image with a camera available on the electronic device to use as a symbol, or cut and paste an image from another location for use as a symbol.
  • Each symbol may be defined in terms of a plurality of data elements, for example, a symbol identifier (symbol ID), image data, name, tags and folder path(s) in the symbol hierarchy.
  • multiple names, identifiers or other elements may be associated with one particular image such that a symbol can have multiple names.
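As an illustration of the ID-based symbol referencing and the data elements listed above, consider the following Python sketch. The structure and field names are assumptions made for clarity, not the disclosed file format.

```python
class SymbolLibrary:
    """Symbols are stored once and referenced by ID, so buttons stay small and
    the memory-intensive factory library need not be included in user backups."""
    def __init__(self):
        self._by_id = {}

    def add(self, symbol_id, image, names, tags=(), folders=()):
        # A symbol may carry multiple names (see above) plus tags and folder paths.
        self._by_id[symbol_id] = {
            "image": image, "names": list(names),
            "tags": list(tags), "folders": list(folders),
        }

    def get(self, symbol_id):
        return self._by_id.get(symbol_id)

factory = SymbolLibrary()   # the Symbol Library (factory data)
mine = SymbolLibrary()      # the My Symbols Library (user data)
factory.add("sym-001", "img/dog.png", names=["dog", "puppy"],
            tags=["animals", "pets"], folders=["Animals/Pets"])

button = {"label": "dog", "symbol_id": "sym-001"}     # reference, not pixel data
print(factory.get(button["symbol_id"])["names"][0])   # dog
```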
  • Additional data may be stored for defining settings associated with the subject interfaces. For example, a list of settings and their values may be provided, with or without a complete listing of factory default settings that can be used by the program when the software is initially used, or if a user setting is undefined (as might be the case after a version upgrade).
  • Settings may be used to define such exemplary parameters as symbol appearance, symbol grid size, symbol text, programming/editing button, speech, internet settings, related help and others.
  • Settings for symbol appearance may be provided that allow a user to select message window font color, background and whether to show symbols.
  • Settings for grid size may be provided that allow a user to select from a set of predefined symbol grid sizes, where such predefined symbol grid sizes can be defined in terms of relative size descriptor such as large, medium, small, etc. or in terms of specific array dimensions such as 2x2, 3x4, 4x5 or others.
  • Settings for symbol text may be provided that allow a user to select whether to place text above or below the symbols in the message window and/or in the field buttons and bar buttons, as well as the hold time required to launch a Grammar Popup Window.
  • Settings for symbol programming time may be provided to establish whether to use single tap, double tap, press and hold (with optional hold time settings) or other selection method for actuating symbol selection.
  • Settings for speech may be provided that allow a user to select the name of the voice, volume and/or speech rate for the spoken symbols. Settings may be provided defining how a user can access help features such as through a link to on-line help or link to a help document or index of help topics.
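A minimal Python sketch of the defaults-with-overrides behavior described above follows; the setting keys and values here are purely hypothetical.

```python
FACTORY_DEFAULTS = {
    # Hypothetical keys and values for the exemplary settings described above.
    "message_window.font_color": "black",
    "message_window.show_symbols": True,
    "button_field.grid_size": (4, 3),
    "symbol_text.position": "below",
    "speech.voice": "default",
    "speech.volume": 0.8,
    "speech.rate": 1.0,
}

class Settings:
    """User values overlay factory defaults; an undefined user setting (e.g.
    after a version upgrade) silently falls back to its factory value."""
    def __init__(self, user_values=None):
        self.user_values = dict(user_values or {})

    def get(self, key):
        return self.user_values.get(key, FACTORY_DEFAULTS.get(key))

    def set(self, key, value):  # takes effect immediately
        self.user_values[key] = value

s = Settings({"speech.rate": 1.2})
print(s.get("speech.rate"))    # 1.2 (user value)
print(s.get("speech.voice"))   # 'default' (factory fallback)
```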
  • Fig. 12 provides a schematic relationship of an exemplary hierarchical category of user interface elements accessed upon user selection of the "phrases" user interface element (i.e., button) 1000 as shown in the language navigation bar 310 of Figs. 3 and 10.
  • the phrases interface provides user access to pre-programmed phrases.
  • the language navigation bar 310 may show the phrase category button 1000 as active or currently selected, while leaving the option open for a user to select one of the other user interface elements in the language navigation bar.
  • a phrase topics context icon may be presented in the contextual navigation bar 304, and the button field 306 may be filled with the first n (n being determined by the button grid size) Phrase Topic Buttons as defined by the language data file.
  • the position indicator 308 shows the number of pages of phrase topic buttons 1201 (determined by the grid size and number of phrase topic buttons defined in the language file) and initially indicates a position at the far left page in the plurality of pages containing possible phrase topic buttons.
  • the contextual navigation bar 304 may show the associated phrase context buttons 1202 for the chosen phrase topic in the dynamic function space.
  • a predetermined set of phrase context buttons 1202 (e.g., four context buttons) will be shown for all phrase topics.
  • the button field 306 may then show the appropriate phrase buttons 1204 for this particular phrase topic 1201 as defined in the language data file. Again, the position indicator shows where a user is in the plurality of pages containing possible phrases. Once a phrase 1204 is selected, a recorded sound associated with such phrase may be played or the selected phrase may be spoken via synthetic speech, recorded sound, voice or other message.
  • the selected phrase may be added by placement of text and/or symbol to the message window 302 for later use as part of a larger or different message to be spoken or relayed as output text to a communications channel.
  • Fig. 13 provides a schematic relationship of an exemplary hierarchical category of user interface elements accessed upon user selection of the "core words" user interface element (i.e., button) 1002 as shown in the language navigation bar 310 of Figs. 3 and 10.
  • the core words interface provides user access to the most commonly used single words.
  • the language navigation bar 310 may show the core words category button 1002 as active or currently selected, while leaving the option open for a user to select one of the other user interface elements in the language navigation bar 310.
  • the contextual navigation bar 304 may show three core word context buttons in the dynamic function space - namely, a "Core Words” button 1300, a “Lists” button 1302 and a “My Lists” button 1305.
  • the button field 306 may be filled with the first n (n being determined by the button grid size) Core Word Buttons 1301 as defined by the language data file.
  • the button field 306 may be filled with the first n (n being determined by the button grid size) Word List Buttons 1303 as are preprogrammed in a device.
  • words 1304 within the selected word list 1303 then may be displayed as buttons.
  • the button field 306 may be filled with the first n (n being determined by the button grid size) Word List Buttons 1306 as are defined by a user in a device.
  • words 1307 within a selected word list 1306 then may be displayed as buttons.
  • User selection of any one of the three word context buttons 1300, 1302 and 1305 may result in highlighting of that button in the contextual navigation bar 304.
  • a user interface element associated with the "Core Words" user interface may be selected to provide a popup interface with variant options for a user's consideration.
  • such interface may be provided automatically upon user selection of a particular word or manually upon user selection of a popup button.
  • Such interface may be configured in an overlay window or popup window smaller than the entire screen size.
  • the popup is a modal window that may be smaller than the button field 306 and can be centered within the button field upon user activation.
  • the popup may show buttons that are the same as those in the button field 306 but in a smaller grid and/or zoom view.
  • the popup interface corresponds to a "Grammar Popup" interface, which displays word buttons having grammatical variations of some or all of the words provided in the regular button field or selected by a user.
  • the grammar popup interface may show plural and possessive versions of the selected noun.
  • the grammar popup interface may show conjugate forms of the verb in question.
  • the popup interface corresponds to an interface that provides categorical variations for a selected word.
  • categorical variations for a word may be determined based on parts of speech or relations for a given word. For instance, a core word "person" may result in a popup interface showing word choices for boy, girl, man, woman, etc. In a still further example, alternate word forms such as synonyms, antonyms, descriptive identifiers, etc. may be provided. For instance, a core word "Hello" may result in a popup interface showing word choices for "Hi", "Hey There!", "What's up?" and others.
  • popup windows may be configured to provide alternatives for one or many of the words in the starter phrase/sentence. So, for example, consider an interface displaying a starter phrase such as "I'd like to eat." Variants could be stored in a lookup table that accommodates presentation of alternatives to a user along any dimension (see the sketch below).
  • the popup interface can provide pronoun alternatives such as “She'd like to eat” or “They'd like to eat.”
  • degree alternatives could be presented such as “I'd LOVE to eat” or “I'd rather not eat” or any other type of alternative that would be suitably configured given a particular language context.
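One minimal way to realize such a lookup table, assuming a slot-per-word representation of the starter phrase (an assumption made purely for illustration), is:

```python
# Hypothetical variant table: each slot of the starter phrase maps to alternatives.
STARTER = ("I'd", "like to eat.")
VARIANTS = {
    0: ["I'd", "She'd", "They'd"],                           # pronoun alternatives
    1: ["like to eat.", "LOVE to eat.", "rather not eat."],  # degree alternatives
}

def phrase_variants(slot):
    """Build the candidate phrases shown in the popup for one slot."""
    candidates = []
    for alt in VARIANTS.get(slot, [STARTER[slot]]):
        words = list(STARTER)
        words[slot] = alt
        candidates.append(" ".join(words))
    return candidates

print(phrase_variants(0))  # ["I'd like to eat.", "She'd like to eat.", "They'd like to eat."]
print(phrase_variants(1))  # ["I'd like to eat.", "I'd LOVE to eat.", "I'd rather not eat."]
```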
  • FIG. 14 provides a schematic relationship of an exemplary hierarchical category of user interface elements accessed upon user selection of the "Quickwords" user interface element (i.e., button) 1004 as shown in the language navigation bar 310 of Figs. 3 and 10.
  • the quickwords interface provides user access to commonly used conversational phrases such as those used to initiate a conversation, keep a conversation moving or to provide immediate feedback.
  • Quickwords may be single words and some phrases that allow quick interactions in any conversation any time, any place for a variety of communicative functions (i.e., gain attention, comment, question, directions, closure).
  • Quickwords are fillers, interjections or generic comments such as “yeah,” “uh-huh,” “really” or “wait.” All quickwords can be used in a variety of ways, either alone or in combination with other quickwords.
  • some exemplary quickwords may correspond to language such as “Hi”, “Okay”, “Bye”, “Yes”, “Maybe”, “No”, “Right”, “Yeah”, “Don't”, “Good”, “Oh”, “That's Awful”, “Please”, “Thanks”, “Fine”, “Excuse Me”, “Hang On”, “Sorry”, “Uh- oh”, “Really?”, “Damn”, “What", “But”, “Anyway”.
  • the quickwords can correspond to different customized sets of single words and phrases that are specifically appropriate for a particular topic of conversation. Such particular sets can be chosen by manual user selection or can be automatically determined by the context of a user's conversation.
  • the language navigation bar 310 may show the quickwords button 1004 as active or currently selected, while leaving the option open for a user to select one of the other user interface elements in the language navigation bar 310.
  • a user may be provided with buttons in the contextual navigation bar 304 by which a user can select either a Core Quickwords button 1400 or a Quickwords Topics button 1402.
  • User selection of button 1400 can cause the button field 306 to be filled with the first n (n being determined by the button grid size) quickwords words or phrases 1401 as defined by the language data file.
  • User selection of button 1402 can cause the button field 306 to be filled with quickword topic buttons 1403, which may be further selected for user display of words/phrases 1404 that are categorized in accordance with the topics 1403. Once a word/phrase 1401 or 1404 is selected, a recorded sound associated with such word may be played, or the selected word may be spoken immediately or added to the message window 302 for being spoken at a later time.
  • FIG. 15 provides a schematic relationship of an exemplary hierarchical category of user interface elements accessed upon user selection of the "keyboard" user interface element (i.e., button) 1006 as shown in the language navigation bar 310 of Figs. 3 and 10.
  • the keyboard interface provides user access to a keypad or keyboard feature 1500 such as a "QWERTY" keypad or other arrangement of one or more letters and/or numbers and symbols on a display.
  • the keyboard assists a user in constructing words and phrases that will be spoken upon completion.
  • the diagram of Fig. 15 captures the simplicity of the keyboard.
  • keyboard icon 1500 can be shown in the dynamic function space of the contextual navigation bar 304, while the button field 306 can be populated with a standard keyboard 1501 such as may be preprogrammed in or otherwise made available to the speech generation device.
  • the position indicator need not be available during use of the keyboard. Spell check functionality, word completion, word prediction, phrase completion and/or abbreviation expansion may be available to a user when typing using the keyboard interface (see the sketch below).
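By way of a toy example only, word completion over a small hypothetical lexicon might look like this in Python:

```python
VOCABULARY = ["hello", "help", "helmet", "home", "house"]  # hypothetical lexicon

def complete(prefix, vocabulary=VOCABULARY, limit=3):
    """Offer word-completion candidates for the letters typed so far."""
    p = prefix.lower()
    return [w for w in vocabulary if w.startswith(p)][:limit]

print(complete("he"))  # ['hello', 'help', 'helmet']
print(complete("ho"))  # ['home', 'house']
```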
  • FIG. 16 provides a schematic relationship of an exemplary hierarchical category of user interface elements accessed upon user selection of the "utilities" user interface element (i.e., button) 1008 as shown in the language navigation bar 310 of Figs. 3 and 10.
  • the various functions can utilize an interface that is specific to their role.
  • the contextual navigation bar 304 may be hidden such that the button field 306 can be expanded to utilize more screen real estate.
  • a plurality of utility buttons 1600 may be displayed to a user.
  • a camera utility button may be provided to allow a user to take a picture with a camera built into the device, crop a picture, name a picture, and/or tag people, places or items in a picture.
  • a stories utility button may be provided that opens a page or pages much like the standard phrase topic page(s) described with reference to phrases button 1000 and presents a population of Story buttons 1602.
  • each Story Button 1602 presents a sequence of single Story Page Buttons 1604, each filling the button field space 306.
  • the Story Page Buttons behave similarly to regular phrase buttons (containing a symbol, label, and a recorded or spoken phrase).
  • stories can be presented using the components of the standard interface, with dynamic function space on the context navigation bar 304 showing a "Story" icon and text that is the title of the story, and the utilities category button in the language bar being identified as active if such navigation bars are shown on the user display at that time.
  • the message window 302 is disabled during use of the story utility and the contextual navigation bar 304 is moved to the top of the screen.
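A story's sequential, one-page-at-a-time structure can be illustrated with a small Python sketch; the class and field names are assumptions for clarity.

```python
class Story:
    """An ordered list of full-screen Story Page Buttons; each page carries a
    symbol, a label, and a recorded or synthesized phrase."""
    def __init__(self, title, pages):
        self.title, self.pages, self.index = title, pages, 0

    def current_page(self):
        return self.pages[self.index]

    def advance(self):
        """Move to the next page; returns False at the end of the story."""
        if self.index + 1 < len(self.pages):
            self.index += 1
            return True
        return False

story = Story("My Day", [
    {"label": "Morning", "phrase": "I got up early today."},
    {"label": "School", "phrase": "We had a science fair."},
])
print(story.current_page()["phrase"])  # I got up early today.
story.advance()
print(story.current_page()["phrase"])  # We had a science fair.
```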
  • a "Recent Phrases" utility button may be provided to access the last N phrases spoken (not words typed, but only the entire phrases that were spoken). Such recent phrases can be presented as phrase buttons on pages of buttons that are the current grid size. When selected, such buttons may operate as the previously described phrase buttons.
  • a "My Symbols" or "Symbol Browser” utility button may be provided to initiate a symbol finder interface as described below in order to search for and display symbols. Options also may be available for adding symbols to the device's library. For example, symbols may be added by accessing a standard photo library associated with the device, from a camera integrated in the device, or from clipboard cut/copy and paste if an image is on the device clipboard.
  • Features may also be provided for editing new or pre-existing symbols, such as editing the names, tags or other metadata associated with a symbol and/or deleting a symbol from the My Symbols Library and/or factory symbol libraries.
  • a share space utilities button may be provided that allows a user to share information from his device to an online portal to which the user has an accessible account. For example, a user may select the share space button to share existing buttons, create a button, review an export folder or export buttons to the online portal. To share existing buttons, a user can select a button or buttons from any of the user's pages and put it/them into an Export folder. A user can also select entire groupings of buttons for sharing, such as but not limited to selected branches in a hierarchical tree structure of buttons. A specialized Select Mode may be configured for a device that allows a user to navigate the language of the system for this particular reason to select various buttons to put in the Export folder.
  • Such a Select Mode may afford user navigation using an interface that displays language elements similar to their configuration during Use Mode.
  • a Select Mode may be configured such that a user can select buttons, symbols and the like for export by navigating a different interface than the language interface available in traditional Use Mode.
  • One example of a different type of interface that may be available for such purpose is a tree structure that illustrates language elements in hierarchical, parallel or other relational grouping.
  • a tree-based interface allows a user to more easily navigate among language elements and also to select specific elements or entire branches of elements within the language system for sharing or exporting. Provision of a different type of interface for Select Mode or other modes may help a user distinguish between the times when he is selecting language elements for sharing and when he is navigating within the language to communicate.
  • a user can also create and edit a new button from scratch for sharing by putting it into the Export folder.
  • the same dialog that is used for editing an existing Story Page Button may be presented - allowing the construction of a button and the ability to view a full-screen preview.
  • the user can accept (and the button will be added to the export folder) or cancel (discarding any button data that was defined).
  • a user can review the export folder, which can be presented to a user in a list similar to that used by the Symbol Browser, in which the buttons selected for the export folder can be viewed and removed from the export folder before an export occurs.
  • buttons can be selected for removal by toggling the selection with a touch and actuating a 'remove' button, or by dragging the selected items to a trash can. Buttons may be exported to an online portal by connecting to the account and submitting a request to transfer the button data (with the data size), and receiving a response containing any error or denial notifications or permission to proceed.
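The request/response exchange just described might be sketched as follows; the portal client object and its methods are hypothetical, as no specific portal API is disclosed.

```python
import json

def export_buttons(portal, account_token, export_folder):
    """Ask the portal for permission to transfer the button data (including its
    size), then upload on success or report the error/denial notification."""
    payload = json.dumps(export_folder).encode("utf-8")
    response = portal.request_transfer(token=account_token, size=len(payload))
    if response.get("error"):
        return f"Export denied: {response['error']}"
    portal.upload(token=account_token, data=payload)
    return f"Exported {len(export_folder)} button(s), {len(payload)} bytes"
```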
  • a settings button may be provided that brings up a settings interface that lists the settings available for modification by the user. Exemplary user settings are described above. In this interface the user can adjust settings (which take effect immediately) and return to the Utilities Category Page.
  • a still further utility button 1600 corresponds to a backup/restore data utilities button that can be available for a user with access to an online portal that allows the user to back up his data to that portal. Also, if a backup is already stored on such portal, the user can restore the backup from the portal to an electronic device (overwriting all existing user data). A user can additionally or alternatively append new data to pre-existing data. Both such options essentially correspond to conducting a file transfer of user data files to a location specified by the portal. It should be appreciated that different instructions may accompany different types of data stored in the online portal so that the data is imported or consumed by an electronic device in different predetermined manners.
  • backup data stored in an online portal may be configured with software instructions that instruct an electronic device to replace some or all of the existing data on the device with the backup data.
  • shared data stored in an online portal may be configured with software instructions that instruct an electronic device to append the shared data to existing data already stored on the electronic device.
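For illustration, the replace-versus-append dispatch on the accompanying instruction could be as simple as the following (the instruction names are hypothetical):

```python
def import_from_portal(device_data, incoming, instruction):
    """Backup data replaces the existing user data; shared data is appended."""
    if instruction == "replace":    # restoring a backup
        return list(incoming)
    if instruction == "append":     # consuming shared data
        return list(device_data) + list(incoming)
    raise ValueError(f"unknown import instruction: {instruction}")

existing = ["btn-eat", "btn-drink"]
backup = ["btn-eat", "btn-drink", "btn-help"]
shared = ["btn-joke"]
print(import_from_portal(existing, backup, "replace"))  # backup overwrites
print(import_from_portal(existing, shared, "append"))   # shared data appended
```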
  • Edit Mode may be accessed within the context of the system as it is in Use Mode.
  • Edit Mode may be accessed by activating a Mode tab 702 relative to the Context Navigation Bar 304 and then the user can select the button on the screen that he wants to modify. Once entered, the interface can indicate that it is in Edit Mode with a clear change in the interface of the device. Edit Mode can be identified by a unique banner that may appear in the Message Window 302. When in Edit Mode, the buttons available in the button field 306 may be available for modification.
  • buttons can be rearranged within their pages by dragging and dropping, thus inserting at or swapping with their drop target.
  • Edit Mode can be used to modify the phrases symbol browser, edit a button text label, edit button phrase text or recorded message associated with a button or delete phrase buttons or clear phrase topic buttons.
  • Edit Mode can be used to modify the word symbol browser, edit a symbol text label, edit button word text or recorded message associated with a button or delete word buttons or clear word list buttons.
  • Edit Mode can be used to modify the symbol browser, edit a symbol text label, edit quickwords word text or recorded messages associated with a button or delete quickword buttons or clear quickword topic buttons. Similar editing functionality also may be provided while a user is in the stories interface. When the contextual navigation bar 304 is unavailable, a user may need to access the Edit Mode from the utilities category level or at the story utility level instead of directly from a language interface.
  • Editing controls may be available for a user, which may be particularly advantageous when a user is operating in Edit Mode.
  • exemplary editing controls may include a symbol browser and/or text box for editing text and/or a sound recorder for recording and previewing sounds.
  • a symbol browser may provide a folder-browseable interface including the comprehensive contents of all symbol libraries, an example of which is shown in Fig. 11.
  • a search box 1100 may be available that will allow for searching of the symbol libraries based on name, tag and/or category.
  • the initial screen of the symbol browser shown in Fig. 11 has search box 1100 on the top portion of the interface, followed by a "breadcrumbs" interface area 1102 to show the current context of the symbols, which are shown in the large symbol field section 1104.
  • a position indicator 1106 similar to position indicator 308 may be provided, as well as a function button bar 1108 to include user interface elements for selecting such functions as select, cancel, take picture, paste, etc.
  • the exemplary interface 1110 shown in the left side of Fig. 11 shows the symbol finder when first opened, and the interface 1120 shown in the right side of Fig. 11 shows the symbol finder after browsing in an exemplary category such as "Animals/Pets."
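Searching by name, tag or category, as the search box 1100 is described to do, might be sketched as follows (the record layout is an assumption):

```python
def search_symbols(library, query):
    """Match a query against each symbol's names, tags, or folder/category path."""
    q = query.lower()
    return [
        s for s in library
        if any(q in n.lower() for n in s["names"])
        or any(q in t.lower() for t in s["tags"])
        or q in s["folder"].lower()
    ]

library = [
    {"names": ["dog"], "tags": ["pets"], "folder": "Animals/Pets"},
    {"names": ["cat"], "tags": ["pets"], "folder": "Animals/Pets"},
    {"names": ["bus"], "tags": ["travel"], "folder": "Transport"},
]
print([s["names"][0] for s in search_symbols(library, "pets")])  # ['dog', 'cat']
```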
  • a first exemplary step 1700 involves presenting a user interface to a user in a touch-sensitive display of an electronic device.
  • Such user interface may optionally include a plurality of user interface areas (e.g., a message window, a contextual navigation bar, a button field, a position indicator and/or a language navigation bar).
  • Exemplary step 1702 then involves receiving an input from user selection of user interface elements (i.e., buttons) in one or more of the user interface areas (e.g., the contextual navigation bar, button field and/or language navigation bar). Then, a response to the received user input is electronically generated in step 1704. For example, a plurality of buttons may be visually presented in the button field or one or more symbols, words, phrases or icons corresponding to a selected button may be visually presented in the Message Window portion of the user interface.
  • Step 1706 then involves electronically transforming the contents of the Message Window or other selected interface elements into an audio speech output of words or sounds.
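Read together, steps 1700-1706 amount to a present/receive/respond/speak loop. The following Python sketch is a highly schematic rendering under that reading; every collaborator object (display, tts, language_data) is a hypothetical stand-in, not the disclosed implementation.

```python
def run_interface(display, tts, language_data):
    """Schematic loop over steps 1700-1706."""
    message_window = []
    display.present(language_data.root_page())        # step 1700: present UI
    while True:
        selection = display.receive_selection()       # step 1702: receive input
        if selection.kind == "navigation":            # step 1704: generate response
            display.present(selection.target_page)
        elif selection.kind == "message":
            message_window.append(selection.text)
            display.update_message_window(message_window)
        elif selection.kind == "speak":               # step 1706: audio output
            tts.say(" ".join(message_window))
            message_window.clear()
```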

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Telephone Function (AREA)

Abstract

An electronic device, method and graphical user interface are disclosed that provide a plurality of user interface areas, including at least a navigation bar, a button field and a message window. The navigation bar(s) contain a plurality of fixed user interface elements corresponding to different language categories for selection by a user. The button field contains a plurality of language elements populated according to the fixed navigation-bar user interface element selected by the user. The message window allows a user to compose a message from the language elements selected in the button field, some or all of which may be downloaded from a server computer over a network to which the electronic device is coupled. The language elements contained in the message window are then sent to a text-to-speech engine in order to generate an audio output.
PCT/US2010/052769 2009-10-16 2010-10-15 Dispositif électronique avec fonctionnalité de commande automatique audio (aac) et interface utilisateur correspondante WO2011047218A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US25227309P 2009-10-16 2009-10-16
US61/252,273 2009-10-16
US30087710P 2010-02-03 2010-02-03
US61/300,877 2010-02-03

Publications (1)

Publication Number Publication Date
WO2011047218A1 true WO2011047218A1 (fr) 2011-04-21

Family

ID=43876560

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2010/052769 WO2011047218A1 (fr) 2009-10-16 2010-10-15 Dispositif électronique avec fonctionnalité de commande automatique audio (aac) et interface utilisateur correspondante

Country Status (1)

Country Link
WO (1) WO2011047218A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013140011A2 (fr) * 2012-03-21 2013-09-26 Universitat Politècnica De Catalunya Appareil de communication sans fil pour personnes éprouvant des difficultés à parler

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030090474A1 (en) * 2001-10-27 2003-05-15 Philip Schaefer Computer interface for navigating graphical user interface by touch
US20060136221A1 (en) * 2004-12-22 2006-06-22 Frances James Controlling user interfaces with contextual voice commands
US20080154604A1 (en) * 2006-12-22 2008-06-26 Nokia Corporation System and method for providing context-based dynamic speech grammar generation for use in search applications

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030090474A1 (en) * 2001-10-27 2003-05-15 Philip Schaefer Computer interface for navigating graphical user interface by touch
US20060136221A1 (en) * 2004-12-22 2006-06-22 Frances James Controlling user interfaces with contextual voice commands
US20080154604A1 (en) * 2006-12-22 2008-06-26 Nokia Corporation System and method for providing context-based dynamic speech grammar generation for use in search applications

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013140011A2 (fr) * 2012-03-21 2013-09-26 Universitat Politècnica De Catalunya Appareil de communication sans fil pour personnes éprouvant des difficultés à parler
WO2013140011A3 (fr) * 2012-03-21 2013-11-07 Universitat Politècnica De Catalunya Appareil de communication sans fil pour personnes éprouvant des difficultés à parler

Similar Documents

Publication Publication Date Title
Murad et al. Design guidelines for hands-free speech interaction
Corbett et al. What can I say? addressing user experience challenges of a mobile voice user interface for accessibility
NL2017003B1 (en) Canned answers in messages
JP7513684B2 (ja) ユーザと、自動化されたアシスタントと、他のコンピューティングサービスとの間のマルチモーダル対話
CN106462340B (zh) 尺寸减小的用户界面
US11347801B2 (en) Multi-modal interaction between users, automated assistants, and other computing services
US20090313582A1 (en) System, Method and Computer Program for User-Friendly Social Interaction
CN110364148A (zh) 自然助理交互
US20110202842A1 (en) System and method of creating custom media player interface for speech generation device
CN110019752A (zh) 多方向对话
US20110197156A1 (en) System and method of providing an interactive zoom frame interface
CN108701013A (zh) 多任务环境中的智能数字助理
US9251717B2 (en) Augmentative and alternative communication language system
US11200893B2 (en) Multi-modal interaction between users, automated assistants, and other computing services
CN106471570A (zh) 多命令单一话语输入方法
CN108352006A (zh) 即时消息环境中的智能自动化助理
CN108292203A (zh) 基于设备间对话通信的主动协助
Friscira et al. Getting in touch with text: Designing a mobile phone application for illiterate users to harness SMS
CN107615378A (zh) 设备语音控制
AU2013295634A1 (en) Apparatus, method and computer readable medium for a multifunctional interactive dictionary database for referencing polysemous symbol sequences
US20080195375A1 (en) Echo translator
WO2006124620A2 (fr) Procede et appareil d'individualisation de contenu dans un dispositif de communication augmentative et alternative
Kuber et al. Determining the accessibility of mobile screen readers for blind users
WO2011082053A1 (fr) Système et procédé d'utilisation d'un modèle de sens pour l'attribution de symbole
Reitmaier et al. Situating Automatic Speech Recognition Development within Communities of Under-heard Language Speakers

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10824136

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10824136

Country of ref document: EP

Kind code of ref document: A1