US7246151B2 - System, method and apparatus for communicating via sound messages and personal sound identifiers - Google Patents


Info

Publication number
US7246151B2
US7246151B2 (application US10/851,815)
Authority
US
United States
Prior art keywords
user
sound
network
message
users
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime, expires
Application number
US10/851,815
Other versions
US20040215728A1 (en)
Inventor
Ellen Isaacs
Alan Walendowski
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
AT&T Corp
Original Assignee
AT&T Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by AT&T Corp filed Critical AT&T Corp
Priority to US10/851,815 priority Critical patent/US7246151B2/en
Publication of US20040215728A1 publication Critical patent/US20040215728A1/en
Priority to US11/810,831 priority patent/US7653697B2/en
Application granted granted Critical
Publication of US7246151B2 publication Critical patent/US7246151B2/en
Adjusted expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/04 Real-time or near real-time messaging, e.g. instant messaging [IM]
    • H04L51/58 Message adaptation for wireless communication
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/10 Office automation; Time management
    • G06Q10/107 Computer-aided management of electronic mailing [e-mailing]

Definitions

  • This invention relates to interactive communications, and more particularly, to a system, method and apparatus for communicating in a distributed network via sound instant messages and personal sound identifiers.
  • Text based instant messaging and electronic mail are still both somewhat impersonal, especially compared with something like conventional telephone conversations where vocal intonation, tone and feedback provide a much needed flavor of humanity and personality to the communications.
  • Text based instant messaging and electronic mail also typically require the users to have access to input devices such as keyboards to facilitate the creation and transmission of messages to one user from another. The quality of such communications thus depends heavily on each user's typing speed, accuracy and network connection quality of service.
  • users without access to input devices such as keyboards may find it very difficult to conduct meaningful conversations without having to endure tedious keystroke input procedures.
  • the present invention is a system, method and apparatus for facilitating communications among a number of distributed users who can send and receive short sound earcons or sound messages which are associated with specific conversational messages.
  • the earcons are typically melodies made up of short strings of notes. Users conversing with one another via the earcons are responsible for learning the meaning of each earcon in order to effectively communicate via the earcons. Visual aids may be provided to aid users in learning the meaning of the earcons.
  • the earcons are represented via visual icons on their respective communicative devices, such as their personal digital assistant devices, personal computers and/or wireless telephones.
  • One embodiment of the present invention is a system for facilitating communication among a plurality of distributed users.
  • the system includes a plurality of distributed communicative devices, a plurality of sound instant messages for playing on each of the distributed communicative devices and a central server which receives a request from one or more of the plurality of distributed communicative devices, transmits the request to one or more of the plurality of distributed communicative devices identified in the request wherein the one or more of the plurality of distributed communicative devices identified in the request will play the one or more of the plurality of sound instant messages also identified in the request.
  • the present invention is also an apparatus for facilitating distributed communications between a plurality of remote users which includes a display screen, at least one icon displayed on the display screen, the at least one visual icon associated with an earcon made up of a series of notes associated with a communicative message, and a transmitter for transmitting the earcon from the first user to at least one other user.
  • the present invention also is a method for communicating via sound instant messages which includes receiving one or more sound instant messages, caching the plurality of sound instant messages, receiving a request to play at least one of the cached sound instant messages and playing the at least one of the received sound instant messages from the plurality of cached sound instant messages.
  • the present invention further includes a method of establishing sound based communications among a plurality of distributed users in a communicative network which includes determining which of the plurality of distributed users are currently on the network, receiving a request from at least one user on the network, wherein the request identifies one or more users in the network and at least one sound instant message designated for the one or more identified users and transmitting the one or more sound instant messages to the one or more identified users in the network.
  • personal sound identifiers may accompany a sound message or earcon such that the receiving user will be alerted to the identity of the user who sent them the sound message or earcon.
  • the personal sound identifiers are typically short snippets of song riffs or some otherwise random selection of notes or sounds which are used to uniquely identify each user to one another.
  • FIG. 1 is a diagram of an exemplary system in accordance with the teachings of the present invention.
  • FIG. 2 is a diagram of an illustrative communicative device in accordance with the teachings of the present invention.
  • FIG. 3 is an exemplary method in accordance with the teachings of the present invention.
  • FIG. 4 is another diagram of an illustrative communicative device in accordance with the teachings of the present invention.
  • FIG. 5 is another exemplary method in accordance with the teachings of the present invention.
  • an exemplary communications system 10 is shown in accordance with the present invention wherein users in the system may communicate with one another using sound messages or “earcons” and/or personal sound identifiers.
  • earcons mean a short series of notes and/or sounds which are associated with or representative of any number of short communicative phrases.
  • These short communicative phrases may be any conversational message such as “Hi”, “Hello”, “Are you ready to go?”, “Meet you in five minutes”, “I'm heading home” and a virtually infinite variety of these and other phrases.
  • a short string of six notes could be constructed to mean “Are you ready to go?” while another unique short string of four notes could be constructed to mean “Hello.”
  • each user will be provided with a basic “set” of conventional or standardized earcons which have predefined meanings such that users may readily communicate with one another using these standardized earcons without having to decipher or learn the meaning of the earcons.
  • new earcons may be created by each user such that when using these user-created earcons, each user is responsible for the task of interpreting and learning each other user's respective earcons in order to effectively communicate via the earcons or sound messages.
  • the term “personal sound identifier” refers to one or more short or abbreviated sound snippets which a user may use to identify themselves to another user. These sound snippets will typically be short melodies made up of short strings of notes which a user will use to identify themselves to other users in the system.
  • the personal sound identifiers may also be snippets or riffs of popular songs, themes or melodies. Both the earcons and personal sound identifiers may be selected by a user from a predetermined selection, or the sound messages and personal sound identifiers may be created by each user individually, as discussed in more detail later herein.
  • the earcons and personal sound identifiers are used on a selective basis, whereby a user may or may not provide their personal sound identifier with each earcon sent by that user to other user(s).
  • every earcon is accompanied by the user's personal sound identifier. For example, if a user's personal sound identifier is a three note melody and that user wishes to send another user an earcon which means “Are you ready to go?”, the other user will hear the three note melody followed by the earcon which means “Are you ready to go?” In this manner, users can readily identify the source of the earcon, which is especially valuable when multiple users are sending each other earcons during a single communicative session.
  • Certain system rules may also be implemented regarding the playing of the personal sound identifiers. For example, if a user has received a series of earcons from a single other user, the sending user's personal sound identifier will not be played every time since it can be assumed that the receiving user is already aware of the sending user's identity. Other rules may be implemented; for example, if a user has not received any earcons for a specified period of time, such as 15 minutes, any earcons received will automatically be preceded by the sending user's personal sound identifier.
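The time-based rule above amounts to tracking, per sender, when an earcon was last received. A minimal Python sketch follows; the class name, method names and the per-sender timestamp table are illustrative assumptions, not taken from the patent, with the 15 minute threshold drawn from the example above:

```python
import time

# Suppress a sender's personal sound identifier while a conversation is
# active; replay it on first contact or after a long quiet period.
REPLAY_AFTER_SECS = 15 * 60  # the 15 minute example threshold from the text

class IdentifierPolicy:
    def __init__(self):
        self.last_heard = {}  # sender -> timestamp of their last earcon

    def should_play_identifier(self, sender, now=None):
        now = time.time() if now is None else now
        last = self.last_heard.get(sender)
        self.last_heard[sender] = now
        # First earcon from this sender, or a long gap: play the identifier.
        return last is None or (now - last) >= REPLAY_AFTER_SECS
```

A receiving device would consult `should_play_identifier` on each incoming earcon and prepend the sender's personal sound identifier only when it returns true.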
  • the system 10 includes one or more communicative devices, such as personal digital assistant (PDA) devices 20 , 30 , wireless telephone 40 and personal computer 50 .
  • the devices such as personal digital assistant (PDA) devices 20 , 30 , wireless telephone 40 and personal computer 50 are in communication with one another and with a central server 60 via a plurality of communication transmissions 70 .
  • each device is associated with an individual user or client but in other embodiments, a single user or client may be associated with two or more devices in the system.
  • Each device may be in communication with one another and central server 60 through a wireless and/or a wired connection such as via dedicated data lines, optical fiber, coaxial lines, a wireless network such as cellular, microwave, satellite networks and/or a public switched phone network, such as those provided by a local or regional telephone operating company.
  • the devices may communicate using a variety of protocols including Transmission Control Protocol/Internet Protocol (TCP/IP) and User Datagram Protocol/Internet Protocol (UDP/IP). Both the TCP/IP and/or the UDP/IP may use a protocol such as Cellular Digital Packet Data (CDPD) or other similar protocol as an underlying data transport mechanism in such a configuration.
  • the devices preferably include some type of central processor (CPU) which is coupled to one or more of the following including some Random Access Memory (RAM), Read Only Memory (ROM), an operating system (OS), a network interface, a sound playback facility and a data storage facility.
  • a conventional personal computer or computer workstation with sufficient memory and processing capability may be used as central server 60 .
  • central server 60 operates as a communication gateway, both receiving and transmitting sound communications sent to and from users in the system.
  • central controller 70 is configured in a distributed architecture, with two or more servers in communication with one another over the network.
  • PDA 100 includes a low profile box shaped case or housing 110 having a front face 114 extending from a top end 118 to a bottom end 122 . Mounted or disposed within front face 114 is a display screen 126 . Positioned proximate bottom end 122 are control buttons 132 .
  • Display screen 126 may be activated and responsive to a stylus, control pen, a finger, or other similar facility, not shown.
  • Disposed within housing 110 is a processor coupled with memory such as RAM, a storage facility and a power source, such as rechargeable batteries for powering the system.
  • the microprocessor interacts with an operating system that runs selective software depending on the intended use of PDA 12 .
  • memory is loaded with software code for selecting/generating, storing and communicating via sound messages and/or personal sound identifiers with one or more other users in the system.
  • the display screen 126 includes a screen portion 130 which displays the name, screen identification or other identifying indicia of one or more other users on the network.
  • a user may be able to maintain a list of users on their device and when such a user becomes active on the network, the display will provide some indication to the user, such as by highlighting the name in some manner, to indicate that the user is available on the system. For example, an icon may appear proximate to the name of a user who is available or present on the system.
  • the term “available” may include both when a user is currently “active”, such as when they are presently using their communicative device or the term “available” may include when a user is “idle”, such as when the user is logged on but is not currently using their respective communicative device. In certain embodiments, a different icon may be used to distinguish between when a user is in an “active” or in an “idle” state.
  • clients or users via their respective communicative devices such as PDAs, laptops, PCs, etc. may update a centralized server with their presence information via a lightweight UDP-based protocol. Typically, the server will fan a client's presence information out to other users or clients that have indicated an interest and have permission to see it.
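The presence fan-out described above can be sketched without the wire protocol. In this illustrative Python sketch (class and method names are assumptions; the described system would carry these updates over a lightweight UDP-based protocol), the server records each client's state and returns the notifications it would fan out to interested, permitted watchers:

```python
class PresenceServer:
    def __init__(self):
        self.state = {}     # user -> "active" or "idle"
        self.watchers = {}  # user -> set of users permitted and interested

    def subscribe(self, watcher, target):
        """Register `watcher` as interested in (and permitted to see) `target`."""
        self.watchers.setdefault(target, set()).add(watcher)

    def update(self, user, state):
        """Record a client's presence and fan it out to its watchers."""
        self.state[user] = state
        # Sorted only to make the notification order deterministic here.
        return [(w, user, state) for w in sorted(self.watchers.get(user, ()))]
```

Each returned tuple stands in for one datagram the server would send; a real implementation would also honor the permission checks the text mentions before adding a watcher.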
  • the sound message request will be transmitted to the user on the device which is deemed to be currently in an “active” state.
  • users may be alerted as to the state change of other users in the system, such as when a certain user becomes “active” or changes from “active” to “idle.” Such alerts may be provided via sound-based alerts which will indicate the state changes to the users. Such alerts may be followed, for example, by the user's personal sound identifier which identifies the user who has changed their respective “state.”
  • the display screen 126 includes one or more visual indicia or icons 134 which are associated with one or more sound messages, sound instant messages or earcons. For example, five different representative sound icons 134 are shown, each icon associated with a distinct sound message or earcon such as “Hi”, “Bye”, “Eat”, “Yep” and “No”. To facilitate communication via the earcons, each icon may include a textual or visual label to assist the user in remembering which icon is associated with which earcon.
  • the “Eat” icon may include a picture which hints as to the meaning of the earcon, such as a fork and spoon as illustrated, and may also include a textual label such as “Eat?”
  • each sound message may be user created, such as the user employing a sound creation/editing utility which the user may use to compose the earcon or the user may select from system provided earcons from which a user may make selections.
  • icons 134 which are associated with the earcons may be user created such as via specialized software for designing and editing bitmaps of icons and/or the icons may be provided by the system from which a user may select.
  • the display screen 126 may further include a visual log for recording and displaying the different sound messages or earcons which a user may have received. Such a visual log may aid a user in learning the meaning of earcons with which the user is unfamiliar.
  • Referring to FIGS. 3 and 4, an exemplary method and device are shown for creating and transmitting sound messages and/or personal sound identifiers between users in the system.
  • the user creates a sound message, step 136 .
  • a sound message may be created by simply selecting a sound message from a selection of pre-recorded sound messages, or a sound message may be newly created by a user, such as by employing a sound editor utility to construct a sound message.
  • once a sound message is created, the sound message is saved, step 140 . Saving may be done locally on a user's personal communicative device by simply saving the sound message with, for example, a sound editor utility as a sound file on the device's storage facility.
  • the user may then select or create an icon to be associated with the sound message, step 144 .
  • the icon may be selected from a selection of already existing icons or may be specially created by the user via a graphics utility or facility. In other embodiments, an icon may be assigned to the sound message automatically.
  • the user may send the sound message to any number of users in the system. To accomplish this, the user may select one or more users to send the sound message to, step 148 . This may be accomplished, as discussed in more detail later herein, such as by selecting one or more user names from a directory of users.
  • the user may then transmit the sound message to the selected users by selecting or activating the icon associated with the desired sound message, step 152 .
  • the file in which the sound message or earcon is stored is not itself transmitted to users directly.
  • each user already has a “copy” of the sound message stored or cached locally such that only a request or command to play the sound message is transmitted by the user.
  • the sound message would first need to be distributed to the other users in the system. Preferably this is accomplished on an “as-needed” basis whereby the new sound message is transferred “on-the-fly” to users who do not yet have a stored or cached version of the new sound message.
  • the user who has created the new sound message will simply send the sound message like any other sound message at which point the receiving user who does not yet have the sound message will request transfer of the new sound message.
  • the proliferation and distribution of sound messages or earcons may be accomplished by having specialized software automatically distribute a new sound message to the other users when the software detects that a new message has been created.
  • a central repository of sound messages or earcons may be administered via a central server, such as illustrated in FIG. 1 .
  • the central server would maintain a central repository of all sound messages or earcons in the system and would periodically update users' devices with the earcons as new ones were created. Similar methods may be used to delete sound messages or earcons which are obsolete or unwanted.
  • each sound message is assigned a unique identifier, which can be a numerical identification (ID), alphabetical ID, a combination thereof or other unique identifier which is unique to that particular sound message.
  • the files containing the sound messages or earcons are stored locally on each user's local device such as their PDA.
  • Sound messages may be stored as sound files in any one of a number of file formats, such as the MIDI, .MP3, .WAV, .RAM, .AAC and .AU file formats.
  • a user may send one or more other users a sound message or earcon as follows.
  • the user employing the device 160 makes a selection from a screen portion 164 which lists some identifying indicia, such as the names of system users, shown herein “Elena, Alan, Dipti, Bonnie and Maya.”
  • one user, say for example “Elena”, selects “Bonnie” via a stylus, not shown; the name “Bonnie” is then subsequently highlighted.
  • a sound playback facility which may include a sound processor and a speaker component.
  • the “BYE” earcon is played on “Bonnie's” device and in other embodiments, the “BYE” earcon is accompanied by “Elena's” personal sound identifier.
  • if “Bonnie” did not already know that the earcon originated from “Elena”, “Elena's” personal sound identifier should provide “Bonnie” with this information.
  • the personal sound identifier will be played before playing the earcon but the personal sound identifier may also be played after the earcon is played.
  • a user may send another user a series of sound messages by selecting two or more earcons to send to the user. In this manner, a user may actually construct phrases or sentences with a variety of independent earcons strung together. A user may also send the same earcon to multiple users simultaneously.
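A request carrying a multi-earcon phrase for several recipients can be modeled as a small structure. This is a hedged sketch; the field names and the list-of-IDs layout are assumptions for illustration, not a wire format from the patent:

```python
def build_request(sender, recipients, sound_ids):
    """Compose a request asking each recipient's device to play the given
    earcons in order, e.g. stringing "Hi" and "Eat" into a two-part phrase.
    """
    return {"from": sender, "to": list(recipients), "play": list(sound_ids)}
```

Sending the same earcon to multiple users is then just a single-element `play` list with several names in `to`.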
  • a command or request is received from a user to send one or more users a sound message(s) or earcon(s), step 200 .
  • a user request identifies the user or users for which the sound message is intended, and a unique identifier or ID of the sound message to be played.
  • the request may be simply the user selecting one or more names on the user's display screen and activating the icon associated with the sound messages the user wishes to send.
  • the request may also include the requesting user's personal sound identifier as discussed earlier herein.
  • the request will be transmitted to the receiving user's device, step 210 . Once the request is received, it is determined if the sound message exists on the receiving user's device, step 220 .
  • each user's device in the system will preferably have a locally cached or stored selection of sound messages or earcons created by other users in the system such that when one user sends another user a sound message, the sound will simply be played from the selection of locally resident sound messages.
  • a determination if a sound message exists on the receiving user's device may be accomplished by comparing the unique identifier of the sound message contained in the request with the unique identifiers of the sound messages already existing on the receiving user's device. If a sound message does not exist on a user's device, a request for the missing sound message is made, step 240 .
  • specialized software on the receiving user's device will automatically administer the request for a missing sound message.
  • the missing sound message may either be requested directly from the requesting user or from a central server which may maintain a current selection of sound messages.
  • the missing sound message is then provided to the receiving user, step 250 .
  • the message can then be played on the receiving user's device, step 230 .
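Steps 200 through 250 above reduce to a cache lookup with an on-demand fetch. A minimal sketch follows; the function names, dict-based cache and request layout are illustrative assumptions, and `fetch_missing` stands in for requesting the sound from the sending user or a central server:

```python
def handle_play_request(request, cache, fetch_missing, play):
    """request: dict with the sound's unique 'sound_id' and an optional
    'sender_id' naming the sender's cached personal sound identifier."""
    sound_id = request["sound_id"]
    if sound_id not in cache:                      # step 220: cached locally?
        cache[sound_id] = fetch_missing(sound_id)  # steps 240-250: fetch it
    sender = request.get("sender_id")
    if sender is not None:
        play(cache[sender])                        # identifier plays first
    play(cache[sound_id])                          # step 230: play the earcon
```

Because only the unique ID travels in the request, the sound data itself crosses the network once per device, after which playback is purely local.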
  • the sound message request includes the requesting user's personal sound identifier or at least some indication as to the identity of the user sending the request
  • the receiving user's device will play the personal sound identifier along with playing the sound message.
  • each user's personal sound identifier may be distributed to other users in the system similar to the manner in which sound message sound files are distributed to users in the system and stored on their local devices.
  • the actual personal sound identifier may also be simply transmitted along with the request as discussed above.
  • a receiving user would receive the personal sound identifier along with the request to play a certain sound message.
  • the personal sound identifier would be played along with the stored sound message.
  • the playing of a user's personal sound identifier may be performed automatically by each user's device.
  • the user's device would play a user's personal sound identifier whenever a sound message is received from that specific user.
  • specialized software provided on the device will determine which user has sent a sound message and then play that user's respective personal sound identifier.
  • PDA clients will communicate with one another and the server via a Cellular Digital Packet Data (CDPD) service such as AT&T's Pocketnet CDPD service using a Palm Vx, a Palm V, a Palm III, a Palm IIIx or other similar variations, updates or descendants thereof.
  • These PDA devices may be equipped with a Novatel Wireless Minstrel V modem or other similar component.
  • Client software development will be in C, via the freely available GNU/Palm SDK environment.
  • a Win32 client implementation for desktop clients may be used that will also send and receive presence information and have the required sound support, etc.
  • an HDML-based version of the client may be used, through a separate set of server functionality.
  • the sound message communications will support message authentication, and optional message encryption.
  • authentication will likely be accomplished by including an MD5(message+recipient-assigned-token) MAC with the message.
  • the Tiny Encryption Algorithm (TEA) may also be used for the encryption layer in one exemplary embodiment. Of course other authentication and encryption algorithms may be used.
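The MD5(message + recipient-assigned-token) MAC mentioned above can be sketched directly with Python's `hashlib`. The byte concatenation layout is an assumption; MD5 appears here only because the text names it and is not considered collision-resistant by modern standards:

```python
import hashlib

def make_mac(message: bytes, recipient_token: bytes) -> str:
    """MAC over the message using the token the recipient assigned the sender."""
    return hashlib.md5(message + recipient_token).hexdigest()

def verify_mac(message: bytes, recipient_token: bytes, mac: str) -> bool:
    """Recompute the MAC on receipt and compare against the transmitted one."""
    return make_mac(message, recipient_token) == mac
```

A forged or tampered request fails verification because the forger does not hold the recipient-assigned token that feeds the hash.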
  • each unique device such as a PDA, wireless telephone or personal computer is associated with a single user.
  • a single user may be active on two or more devices, such that the user may communicate via the sound messages on any of the two or more devices.
  • a single user may be in communication via their PDA as well as their wireless telephone at the same time.
  • a display screen such as the one shown in FIGS. 1 , 2 and 4 may provide some indication that the user is on multiple devices at the same time.
  • some type of visual indicator such as a representative icon may be displayed next to the user's name to show that the user is on both their PDA and wireless telephone device simultaneously.
  • a request or command to play a sound message will be sent to the user's device on which the user is currently active.
  • Personal sound identifiers or sound identification may also be used herein to identify users to one another on the system. As discussed earlier herein, personal sound identifiers are unique abbreviated sounds which are associated with specific users.
  • user “Ann” may have a personal sound identifier which resembles a portion of the “Hawaii-Five-O” theme song
  • user “Bonnie” may have a random three note melody as a personal sound identifier
  • user “Dipti” may have a personal sound identifier which resembles a portion of the famous song “Smoke on the Water”.
  • users may selectively accept and reject earcons from certain users or all users as desired.
  • user “Ann” may configure her device to accept earcons from all users, specific users such as “Bonnie” and “Dipti” or alternatively, not accept any earcons from any user.
  • Such a configuration may be provided via specialized software on the user's respective device which allows the setting of these possible configurations.
  • exemplary USER X, USER Y and USER Z would allow each other's sound messages to be propagated to one another such that USER X, USER Y and USER Z each would have a complete set of locally stored sound messages selected/created by the other users.
  • USER X would have locally saved versions of all the sound messages selected/created by USER Y and USER Z and so on.

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Strategic Management (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Marketing (AREA)
  • Physics & Mathematics (AREA)
  • Economics (AREA)
  • Computer Hardware Design (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • Data Mining & Analysis (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Telephonic Communication Services (AREA)
  • Information Transfer Between Computers (AREA)
  • Reverberation, Karaoke And Other Acoustics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A system, method and apparatus for facilitating communication among a number of distributed clients in a distributed network is disclosed. A user, such as through a personal digital assistant device, may select one or more sound messages for transmission to one or more other users in the network. Each sound message may be preceded by a sound identifier which identifies the sending user. Users may select or create their sound messages and/or personal sound identifiers. The sound messages will typically be abbreviated melodies or note strings which are associated with certain conversational messages.

Description

This application is a continuation of prior application Ser. No. 09/609,893 filed Jul. 5, 2000, now U.S. Pat. No. 6,760,754, which is incorporated herein by reference, and said prior application Ser. No. 09/609,893 claims the benefit of U.S. provisional application No. 60/184,180 filed Feb. 22, 2000.
BACKGROUND OF THE INVENTION
This invention relates to interactive communications, and more particularly, to a system, method and apparatus for communicating in a distributed network via sound instant messages and personal sound identifiers.
One of the more beneficial aspects of the Internet, aside from the vast array of information and content sources it provides, is the varied and newfound ways people can now communicate and stay in touch with one another. Users all around the world, or even just around the corner, may now communicate in a relatively low cost and efficient manner via a myriad of Internet facilities including electronic mail, chat rooms, message boards, text based instant messaging and video teleconferencing.
These methods of communication offer distinct advantages over standard communicative methods such as paper based mail and conventional telephone calls. For example, facilities like electronic mail are typically considerably faster and cheaper than these conventional methods of communication. Rapidly escalating in popularity is text based instant messaging, which offers more instantaneous gratification with respect to interactive communications between two or more users.
However, one main problem with presently available forms of text based instant messaging and electronic mail is that both remain somewhat impersonal, especially compared with conventional telephone conversations, where vocal intonation, tone and feedback provide a much needed flavor of humanity and personality to the communications. Text based instant messaging and electronic mail also typically require users to have access to input devices such as keyboards to facilitate the creation and transmission of messages from one user to another. The quality of such communications thus depends heavily on each user's typing speed, accuracy and network connection quality of service. Furthermore, users without access to input devices such as keyboards may find it very difficult to conduct meaningful conversations without having to endure tedious keystroke input procedures.
Accordingly, it would be desirable to have a way to communicate with other users in an efficient and quick manner, but with a more personal touch than is provided by other modes of electronic communication.
SUMMARY OF THE INVENTION
The present invention is a system, method and apparatus for facilitating communications among a number of distributed users who can send and receive short sound messages, or "earcons," which are associated with specific conversational messages. The earcons are typically melodies made up of short strings of notes. Users conversing with one another via the earcons are responsible for learning the meaning of each earcon in order to communicate effectively. Visual aids may be provided to aid users in learning the meanings of the earcons.
In one embodiment of the present invention, the earcons are represented via visual icons on their respective communicative devices, such as their personal digital assistant devices, personal computers and/or wireless telephones. One embodiment of the present invention is a system for facilitating communication among a plurality of distributed users. The system includes a plurality of distributed communicative devices, a plurality of sound instant messages for playing on each of the distributed communicative devices and a central server which receives a request from one or more of the plurality of distributed communicative devices, transmits the request to one or more of the plurality of distributed communicative devices identified in the request wherein the one or more of the plurality of distributed communicative devices identified in the request will play the one or more of the plurality of sound instant messages also identified in the request.
The present invention is also an apparatus for facilitating distributed communications between a plurality of remote users which includes a display screen, at least one icon displayed on the display screen, the at least one visual icon associated with an earcon made up of a series of notes associated with a communicative message, and a transmitter for transmitting the earcon from the first user to at least one other user.
The present invention also is a method for communicating via sound instant messages which includes receiving one or more sound instant messages, caching the plurality of sound instant messages, receiving a request to play at least one of the cached sound instant messages and playing the at least one of the received sound instant messages from the plurality of cached sound instant messages.
The present invention further includes a method of establishing sound based communications among a plurality of distributed users in a communicative network which includes determining which of the plurality of distributed users are currently on the network, receiving a request from at least one user on the network, wherein the request identifies one or more users in the network and at least one sound instant message designated for the one or more identified users and transmitting the one or more sound instant messages to the one or more identified users in the network.
In the present invention, personal sound identifiers may accompany a sound message or earcon such that the receiving user will be alerted to the identity of the user who sent the sound message or earcon. The personal sound identifiers are typically short snippets of song riffs or otherwise random selections of notes or sounds which are used to uniquely identify each user to one another.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a diagram of an exemplary system in accordance with the teachings of the present invention.
FIG. 2 is a diagram of an illustrative communicative device in accordance with the teachings of the present invention.
FIG. 3 is an exemplary method in accordance with the teachings of the present invention.
FIG. 4 is another diagram of an illustrative communicative device in accordance with the teachings of the present invention.
FIG. 5 is another exemplary method in accordance with the teachings of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
U.S. provisional application No. 60/184,180 filed Feb. 22, 2000 is hereby incorporated by reference herein in its entirety.
Referring to FIG. 1, an exemplary communications system 10 is shown in accordance with the present invention wherein users in the system may communicate with one another using sound messages or "earcons" and/or personal sound identifiers. As used herein and described in more detail later herein, the terms "sound messages", "sound instant messages" and "earcons", which are used interchangeably herein, mean a short series of notes and/or sounds which are associated with or representative of any number of short communicative phrases. These short communicative phrases may be any conversational message such as "Hi", "Hello", "Are you ready to go?", "Meet you in five minutes", "I'm heading home" and a virtually infinite variety of these and other phrases. For example, a short string of six notes could be constructed to mean "Are you ready to go?" while another unique short string of four notes could be constructed to mean "Hello." Typically, each user will be provided with a basic "set" of conventional or standardized earcons which have predefined meanings such that users may readily communicate with one another using these standardized earcons without having to decipher or learn the meaning of the earcons. Additionally, new earcons may be created by each user such that when using these user-created earcons, each user is responsible for the task of interpreting and learning each other user's respective earcons in order to effectively communicate via the earcons or sound messages.
As used herein and described in more detail later herein, the term "personal sound identifier" refers to one or more short or abbreviated sound snippets which a user may use to identify themselves to another user. These sound snippets will typically be short melodies made up of short strings of notes which a user will use to identify themselves to other users in the system. The personal sound identifiers may also be snippets or riffs of popular songs, themes or melodies. Both the earcons and personal sound identifiers may be selected by a user from a predetermined selection, or the sound messages and personal sound identifiers may be created by each user individually, as discussed in more detail later herein.
In one embodiment, the earcons and personal sound identifiers are used on a selective basis, whereby a user may or may not provide their personal sound identifier with each earcon sent by that user to other user(s). In another embodiment, every earcon is accompanied by the user's personal sound identifier. For example, if a user's personal sound identifier is a three note melody and that user wishes to send another user an earcon which means "Are you ready to go?", the other user will hear the three note melody followed by the earcon which means "Are you ready to go?" In this manner, users can readily identify the source of the earcon, which is especially valuable when multiple users are sending each other earcons during a single communicative session. Certain system rules may also be implemented regarding the playing of the personal sound identifiers. For example, if a user has received a series of earcons from a single other user, the sending user's personal sound identifier will not be played every time, since it can be assumed that the receiving user is already aware of the sending user's identity. Other rules may be implemented; for example, if a user has not received any earcons for a specified period of time, such as 15 minutes, any earcons received will automatically be preceded by the sending user's personal sound identifier.
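Playback rules of this kind can be sketched in a few lines of code. The following Python fragment is an illustrative sketch only; the 15-minute threshold comes from the example above, while the function names, the "id:" prefix and the in-memory table are assumptions made for the illustration:

```python
import time

# Illustrative sketch of the identifier playback rules; the threshold
# value follows the 15-minute example above, everything else (names,
# the "id:" prefix, the in-memory table) is an assumption.
IDENTIFIER_TIMEOUT = 15 * 60  # seconds of silence before re-identifying

last_heard = {}  # sender -> timestamp of that sender's last earcon

def sounds_to_play(sender, earcon, now=None):
    """Return the ordered list of sounds to play for an incoming earcon.

    The sender's personal sound identifier is prepended only when the
    receiver has not heard from that sender recently, since a receiver
    in an ongoing exchange already knows who is sending.
    """
    now = time.time() if now is None else now
    previous = last_heard.get(sender)
    last_heard[sender] = now
    if previous is None or now - previous > IDENTIFIER_TIMEOUT:
        return [f"id:{sender}", earcon]
    return [earcon]
```

Under this sketch, the first earcon from a sender is preceded by that sender's personal sound identifier, a quick follow-up earcon is played alone, and the identifier is played again once the exchange has gone quiet for longer than the threshold.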
As shown in FIG. 1, the system 10 includes one or more communicative devices, such as personal digital assistant (PDA) devices 20, 30, wireless telephone 40 and personal computer 50. In the present invention, the devices, such as personal digital assistant (PDA) devices 20, 30, wireless telephone 40 and personal computer 50 are in communication with one another and with a central server 60 via a plurality of communication transmissions 70. In one embodiment, each device is associated with an individual user or client but in other embodiments, a single user or client may be associated with two or more devices in the system.
Each device may be in communication with one another and central server 60 through a wireless and/or a wired connection such as via dedicated data lines, optical fiber, coaxial lines, a wireless network such as cellular, microwave, satellite networks and/or a public switched phone network, such as those provided by a local or regional telephone operating company. In a wireless configuration, the devices may communicate using a variety of protocols including Transmission Control Protocol/Internet Protocol (TCP/IP) and User Datagram Protocol/Internet Protocol (UDP/IP). Both TCP/IP and/or UDP/IP may use a protocol such as Cellular Digital Packet Data (CDPD) or another similar protocol as an underlying data transport mechanism in such a configuration. In the present invention, one-to-one messaging as well as multicast messaging from one user to a group of two or more users may be implemented easily via a UDP-based protocol.
In an exemplary embodiment, the devices preferably include some type of central processor (CPU) which is coupled to one or more of the following: Random Access Memory (RAM), Read Only Memory (ROM), an operating system (OS), a network interface, a sound playback facility and a data storage facility. In one embodiment of the present invention, a conventional personal computer or computer workstation with sufficient memory and processing capability may be used as central server 60. In one embodiment, central server 60 operates as a communication gateway, both receiving and transmitting sound communications sent to and from users in the system.
While the above embodiment describes a single computer acting as a central server, those skilled in the art will realize that the functionality can be distributed over a plurality of computers. In one embodiment, central server 60 is configured in a distributed architecture, with two or more servers in communication with one another over the network.
Referring to FIG. 2, an exemplary device for creating, storing, transmitting and receiving sound messages and/or personal sound identifiers is shown. As shown in FIG. 2, the device is a type of Personal Digital Assistant (PDA) 100. It is known that PDAs come in a variety of makes, styles and configurations, and only one of the many makes, styles and configurations is shown. In one embodiment of the present invention, PDA 100 includes a low profile box shaped case or housing 110 having a front face 114 extending from a top end 118 to a bottom end 122. Mounted or disposed within front face 114 is a display screen 126. Positioned proximate bottom end 122 are control buttons 132. Display screen 126 may be activated and responsive to a stylus, control pen, a finger, or other similar facility, not shown. Disposed within housing 110 is a processor coupled with memory such as RAM, a storage facility and a power source, such as rechargeable batteries for powering the system. The microprocessor interacts with an operating system that runs selective software depending on the intended use of PDA 100. As used in accordance with the teachings herein, memory is loaded with software code for selecting/generating, storing and communicating via sound messages and/or personal sound identifiers with one or more other users in the system.
Referring again to FIG. 2, in one embodiment, the display screen 126 includes a screen portion 130 which displays the name, screen identification or other identifying indicia of one or more other users on the network. In one embodiment, a user may be able to maintain a list of users on their device, and when such a user becomes active on the network, the display will provide some indication to the user, such as by highlighting the name in some manner, to indicate that the user is available on the system. For example, an icon may appear proximate to the name of a user who is available or present on the system.
As used herein, the term “available” may include both when a user is currently “active”, such as when they are presently using their communicative device or the term “available” may include when a user is “idle”, such as when the user is logged on but is not currently using their respective communicative device. In certain embodiments, a different icon may be used to distinguish between when a user is in an “active” or in an “idle” state. In the present invention, clients or users via their respective communicative devices such as PDAs, laptops, PCs, etc. may update a centralized server with their presence information via a lightweight UDP-based protocol. Typically, the server will fan a client's presence information out to other users or clients that have indicated an interest and have permission to see it. Thus in a case where one user may be “logged on” on two or more devices, the sound message request will be transmitted to the user on the device which is deemed to be currently in an “active” state. In the present system, users may be alerted as to the state change of other users in the system, such as when a certain user becomes “active” or changes from “active” to “idle.” Such alerts may be provided via sound-based alerts which will indicate the state changes to the users. Such alerts may be followed, for example, by the user's personal sound identifier which identifies the user who has changed their respective “state.”
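A client-to-server presence update of the kind described above might be sketched as follows. This is a hypothetical illustration of "a lightweight UDP-based protocol"; the JSON wire format, field names and server address are assumptions, as the text does not specify the message layout:

```python
import json
import socket

# Hypothetical presence protocol: JSON over UDP. The wire format and
# the server address below are assumptions for illustration.
PRESENCE_SERVER = ("presence.example.net", 9999)

def presence_datagram(user, state):
    """Encode a presence update; state is "active" or "idle" per the text."""
    return json.dumps({"user": user, "state": state}).encode("utf-8")

def send_presence(sock, user, state):
    """Fire-and-forget UDP announcement of this client's current state."""
    sock.sendto(presence_datagram(user, state), PRESENCE_SERVER)

# A client would update the server on each state change, e.g.:
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# send_presence(sock, "elena", "active")
```

On receipt, the server would fan the update out to the other clients that have indicated an interest and have permission to see it.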
As shown in FIG. 2, the display screen 126 includes one or more visual indicia or icons 134 which are associated with one or more sound messages, sound instant messages or earcons. For example, five different representative sound icons 134 are shown, each icon associated with a distinct sound message or earcon such as "Hi", "Bye", "Eat", "Yep" and "No". To facilitate communication via the earcons, each icon may include a textual or visual label to assist the user in remembering which icon is associated with which earcon. For example, referring to the icons 134, the "Eat" icon may include a picture which hints at the meaning of the earcon, such as a fork and spoon as illustrated, and may also include a textual label such as "Eat?" As discussed in more detail later herein, each sound message may be user created, such as by the user employing a sound creation/editing utility to compose the earcon, or the user may select from system provided earcons. Similarly, icons 134 which are associated with the earcons may be user created, such as via specialized software for designing and editing bitmaps of icons, and/or the icons may be provided by the system from which a user may select.
Referring again to FIG. 2, the display screen 126 may further include a visual log for recording and displaying the different sound messages or earcons which a user may have received. Such a visual log may aid a user in learning the meaning of earcons with which the user is unfamiliar.
Referring now to FIGS. 3 and 4, an exemplary method and device is shown for creating and transmitting sound messages and/or personal sound identifiers between users in the system. As shown in FIG. 3, the user creates a sound message, step 136. A sound message may be created by simply selecting a sound message from a selection of pre-recorded sound messages, or a sound message may be newly created by a user, such as by employing a sound editor utility to construct a sound message. Once a sound message is created, the sound message is saved, step 140. Saving may be done locally on a user's personal communicative device by simply saving the sound message with, for example, a sound editor utility as a sound file on the device's storage facility. The user may then select or create an icon to be associated with the sound message, step 144. The icon may be selected from a selection of already existing icons or may be specially created by the user via a graphics utility or facility. In other embodiments, an icon may be assigned to the sound message automatically. Once an icon is selected/created and is associated with a specific sound message, the user may send the sound message to any number of users in the system. To accomplish this, the user may select one or more users to send the sound message to, step 148. This may be accomplished, as discussed in more detail later herein, by selecting one or more user names from a directory of users. The user may then transmit the sound message to the selected users by selecting or activating the icon associated with the desired sound message, step 152.
As discussed in more detail later herein, typically the file in which the sound message or earcon is stored is not itself transmitted to users directly. Preferably, each user already has a "copy" of the sound message stored or cached locally such that only a request or command to play the sound message is transmitted by the user. However, in cases where a user has just created a new sound message, the sound message would first need to be distributed to the other users in the system. Preferably this is accomplished on an "as-needed" basis whereby the new sound message is transferred "on-the-fly" to users who do not yet have a stored or cached version of the new sound message. For example, the user who has created the new sound message will simply send the sound message like any other sound message, at which point a receiving user who does not yet have the sound message will request transfer of the new sound message.
In other embodiments, the proliferation and distribution of sound messages or earcons may be accomplished by having specialized software automatically distribute a new sound message to the other users when the software detects that a new message has been created. In another embodiment, a central repository of sound messages or earcons may be administered via a central server, such as illustrated in FIG. 1. In this embodiment, the central server would maintain a central repository of all sound messages or earcons in the system and would periodically update users' devices with the earcons as new ones are created. Similar methods may be used to delete sound messages or earcons which are obsolete or unwanted.
In the present invention, as new sound messages or earcons are created, each sound message is assigned a unique identifier, which can be a numerical identification (ID), alphabetical ID, a combination thereof or other unique identifier which is unique to that particular sound message. In this manner, sound messages or earcons are identified within the system between users via these unique identifiers.
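Such identifier assignment can be illustrated with a brief sketch. Using a UUID is one possible choice consistent with the "numerical ID, alphabetical ID, a combination thereof" language above; the registry structure and function name are assumptions for illustration:

```python
import uuid

# Sketch of assigning a system-wide unique identifier to a new earcon.
# uuid4 is one possible identifier choice; the registry structure and
# the names here are assumptions for illustration.
def register_earcon(registry, meaning, sound_file):
    """Record a newly created earcon under a fresh unique identifier."""
    earcon_id = uuid.uuid4().hex
    registry[earcon_id] = {"meaning": meaning, "file": sound_file}
    return earcon_id

registry = {}
eid = register_earcon(registry, "Are you ready to go?", "ready.mid")
```

Devices then refer to a sound message by this identifier alone, which is what allows a play request to stay small while the sound files themselves remain cached locally.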
In one embodiment of the present invention, the files containing the sound messages or earcons are stored locally on each user's local device such as their PDA. Sound messages may be stored as sound files in any one of a number of file formats, such as a MIDI file format, a .MP3 file format, a .WAV file format, a .RAM file format, a .AAC file format or a .AU file format.
Referring now to FIG. 4, an exemplary device 160 for implementing the steps as discussed above and shown in FIG. 3 is shown. In this embodiment, a user may send one or more other users a sound message or earcon as follows. The user employing the device 160 makes a selection from a screen portion 164 which lists some identifying indicia, such as the names of system users, shown herein as "Elena, Alan, Dipti, Bonnie and Maya." In an exemplary embodiment, one user, say for example "Elena", selects "Bonnie" by selecting via a stylus, not shown, the name "Bonnie", which is then subsequently highlighted. The user then taps or selects the appropriate icon from the selection of icons 168 which is associated with the sound message or earcon the user wishes to send to "Bonnie." For example, if the user wishes to send the sound message "BYE" to "Bonnie", the user will simply select the icon "BYE" 172, which will transmit the associated earcon to "Bonnie", or more specifically, a command or request will be transmitted to "Bonnie" to play the earcon associated with icon 172. "Bonnie's" respective device will then undertake playing the sound message, such as via a sound playback facility which may include a sound processor and a speaker component. In one embodiment, only the "BYE" earcon is played on "Bonnie's" device, and in other embodiments, the "BYE" earcon is accompanied by "Elena's" personal sound identifier. Thus, if "Bonnie" did not already know that the earcon originated from "Elena", "Elena's" personal sound identifier should provide "Bonnie" with this information. Typically, the personal sound identifier will be played before playing the earcon, but the personal sound identifier may also be played after the earcon is played. In the present invention, it is contemplated that a user may send another user a series of sound messages by selecting two or more earcons to send to the user.
In this manner, a user may actually construct phrases or sentences with a variety of independent earcons strung together. A user may also send the same earcon to multiple users simultaneously.
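A play request of the kind just described, carrying the recipients and the unique identifiers of the earcons rather than the sound files themselves, might look like the following sketch. The dictionary layout and the sample identifiers are illustrative assumptions:

```python
# Sketch of a play request: it names the recipients and the unique
# identifiers of the earcons to be played, in order, rather than
# carrying the sound files themselves. The dictionary layout and the
# sample identifiers are assumptions for illustration.
def build_play_request(sender, recipients, earcon_ids):
    """Ask each recipient's device to play the listed earcons in sequence."""
    return {
        "from": sender,
        "to": list(recipients),
        "play": list(earcon_ids),  # several ids string a phrase together
    }

# "Elena" sends "BYE" to "Bonnie", then a two-earcon phrase to a group:
req = build_play_request("Elena", ["Bonnie"], ["bye-001"])
multi = build_play_request("Elena", ["Bonnie", "Dipti"], ["lunch-004", "in5-007"])
```

Listing several identifiers in one request corresponds to stringing earcons into a phrase, and listing several recipients corresponds to sending the same earcon to multiple users simultaneously.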
Referring to FIG. 5, an exemplary method for facilitating communications in accordance with the present invention is shown. In this embodiment, a command or request is received from a user to send one or more users a sound message(s) or earcon(s), step 200. In its most basic form, a user request identifies the user or users for whom the sound message is intended, and a unique identifier or ID of the sound message to be played. As discussed above, the request may be simply the user selecting one or more names on the user's display screen and activating the icon associated with the sound messages the user wishes to send. Alternatively, the request may also include the requesting user's personal sound identifier as discussed earlier herein. The request will be transmitted to the receiving user's device, step 210. Once the request is received, it is determined if the sound message exists on the receiving user's device, step 220.
As discussed earlier herein, each user's device in the system will preferably have a locally cached or stored selection of sound messages or earcons created by other users in the system such that when one user sends another user a sound message, the sound will simply be played from the selection of locally resident sound messages. Thus, a determination if a sound message exists on the receiving user's device may be accomplished by comparing the unique identifier of the sound message contained in the request with the unique identifiers of the sound messages already existing on the receiving user's device. If a sound message does not exist on a user's device, a request for the missing sound message is made, step 240. Ideally, specialized software on the receiving user's device will automatically administer the request for a missing sound message. The missing sound message may either be requested directly from the requesting user or from a central server which may maintain a current selection of sound messages. The missing sound message is then provided to the receiving user, step 250. The message can then be played on the receiving user's device, step 230.
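The receive-side steps of FIG. 5 can be sketched as a simple cache lookup with fetch-on-miss. In this hypothetical Python illustration, `fetch_fn` stands in for whichever transfer mechanism is used to obtain a missing sound message (from the sending user or from a central server), and `play_fn` stands in for the device's sound playback facility:

```python
# Sketch of steps 220-250 of FIG. 5: play from the local cache,
# fetching a missing sound message only on a cache miss. fetch_fn and
# play_fn are stand-ins for the transfer mechanism and the device's
# sound playback facility, and are assumptions for illustration.
def play_sound_message(cache, earcon_id, fetch_fn, play_fn):
    if earcon_id not in cache:                  # step 220: stored locally?
        cache[earcon_id] = fetch_fn(earcon_id)  # steps 240/250: obtain it
    play_fn(cache[earcon_id])                   # step 230: play local copy

# A cached earcon plays immediately; an unknown one is fetched once:
fetched, played = [], []
cache = {"hi-001": b"notes-for-hi"}
fetch = lambda eid: (fetched.append(eid), b"new-notes")[1]
play_sound_message(cache, "hi-001", fetch, played.append)
play_sound_message(cache, "bye-002", fetch, played.append)
```

Because the fetched message is stored back into the cache, subsequent requests for the same earcon play without any further transfer.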
In one embodiment of the present invention, the sound message request includes the requesting user's personal sound identifier, or at least some indication as to the identity of the user sending the request. Thus, the receiving user's device will play the personal sound identifier along with playing the sound message. In one embodiment, each user's personal sound identifier may be distributed to other users in the system similar to the manner in which sound message sound files are distributed to users in the system and stored on their local devices. The actual personal sound identifier may also simply be transmitted along with the request as discussed above. In this embodiment, a receiving user would receive the personal sound identifier along with the request to play a certain sound message. The personal sound identifier would be played along with the stored sound message.
In another embodiment of the present invention, the playing of a user's personal sound identifier may be performed automatically by each user's device. The user's device would play a user's personal sound identifier whenever a sound message is received from that specific user. In this manner, specialized software provided on the device will determine which user has sent a sound message and then play that user's respective personal sound identifier.
In one exemplary implementation of the present invention, PDA clients will communicate with one another and the server via a Cellular Digital Packet Data (CDPD) service such as AT&T's Pocketnet CDPD service using a Palm Vx, a Palm V, a Palm III, a Palm IIIx or other similar variations, updates or descendants thereof. These PDA devices may be equipped with a Novatel Wireless Minstrel V modem or other similar component. Client software development will be in C, via the freely available GNU/Palm SDK environment. A Win32 client implementation for desktop clients may be used that will also send and receive presence information and have the required sound support, etc. In a wireless telephone implementation, an HDML-based version of the client may be used, through a separate set of server functionality.
In one embodiment of the present invention, the sound message communications will support message authentication, and optional message encryption. In one embodiment, authentication will likely be accomplished by including an MD5(message+recipient-assigned-token) MAC with the message. A Tiny Encryption Algorithm (TEA) for the encryption layer may also be used in one exemplary embodiment. Of course other authentication and encryption algorithms may be used.
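The MD5(message + recipient-assigned-token) MAC named above can be sketched as follows. This reproduces the construction as described for illustration; a modern design would typically prefer HMAC with a stronger hash function, and the function names here are assumptions:

```python
import hashlib

# Sketch of the MD5(message + recipient-assigned-token) MAC described
# above, reproduced as stated; a modern design would prefer HMAC with
# a stronger hash. Function names are assumptions for illustration.
def compute_mac(message: bytes, recipient_token: bytes) -> str:
    return hashlib.md5(message + recipient_token).hexdigest()

def verify(message: bytes, recipient_token: bytes, mac: str) -> bool:
    """Accept a message only if its MAC matches the recipient's token."""
    return compute_mac(message, recipient_token) == mac
```

A sender who knows the recipient-assigned token attaches the digest to the message; the recipient recomputes it locally, so a message altered in transit or sent without knowledge of the token fails verification.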
In the present invention, each unique device such as a PDA, wireless telephone or personal computer is associated with a single user. However, at times a single user may be active on two or more devices, such that a user may communicate via the sound messages with users via the two or more devices. For example, a single user may be in communication via their PDA as well as their wireless telephone at the same time. In this manner, a display screen such as the one shown in FIGS. 1, 2 and 4 may provide some indication that the user is on multiple devices at the same time. For example, some type of visual indicator such as a representative icon may be displayed next to the user's name to show that the user is on both their PDA and wireless telephone device simultaneously. In such an embodiment, a request or command to play a sound message will be sent to the user's device on which the user is currently active.
In the present invention, a potentially unlimited variety of communication scenarios are possible using the sound messages of the present invention, such as the exemplary ritualized conversation displayed below, in which a number of exemplary users exchange a series of communicative earcons with one another:
Ann: <Earcon for “Hi!”>
Nancy: <Earcon for “Hi!”>
Bonnie: <Earcon for “Lunch?”>
Dipti: <Earcon for “Sure!”>
George: <Earcon for “Ready?”>
Maya: <Earcon for “In 5”>.
In this manner, users can quickly contact each other and make arrangements, or just let each other know they're thinking about each other, without requiring undue amounts of keystrokes, actions or input on the part of the users. Personal sound identifiers or sound identification may also be used herein to identify users to one another on the system. As discussed earlier herein, personal sound identifiers are unique abbreviated sounds which are associated with specific users. For example, in the above illustrative communication, user "Ann" may have a personal sound identifier which resembles a portion of the "Hawaii Five-O" theme song, user "Bonnie" may have a random three note melody as a personal sound identifier and user "Dipti" may have a personal sound identifier which resembles a portion of the famous song "Smoke on the Water". Thus, if user "Ann" were to send user "Bonnie" an earcon, the earcon would be preceded by the short snippet from the "Hawaii Five-O" theme song to signal user "Bonnie" that the earcon was from "Ann." In conversing via the earcons, users may selectively accept and reject earcons from certain users or all users as desired. For example, user "Ann" may configure her device to accept earcons from all users, from specific users such as "Bonnie" and "Dipti", or alternatively, not accept any earcons from any user. Such a configuration may be provided via specialized software on the user's respective device which allows the setting of these possible configurations.
In the present invention, only those users who have indicated a willingness or provided the necessary permission to receive such sound messages will receive them. In one further exemplary scenario, exemplary USER X, USER Y and USER Z would allow each other's sound messages to be propagated to one another such that USER X, USER Y and USER Z each would have a complete set of locally stored sound messages selected/created by the other users. For example, USER X would have locally saved versions of all the sound messages selected/created by USER Y and USER Z, and so on.
It will be apparent to those skilled in the art that many changes and substitutions can be made to the system and method described herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (11)

1. A method for providing network communications, comprising:
receiving a communication from at least one originating user in a network destined for at least one other user in the network, wherein at least a portion of the communication includes a sound based identifier; and
providing the communication to the at least one other user in the network, wherein the at least one other user in the network recognizes the at least one originating user by the sound based identifier in the communication, the sound based identifier comprising a series of notes.
2. The method of claim 1 wherein the communication includes a sound based conversational message comprising a series of notes.
3. A method for providing communications via sound messages, comprising:
providing a plurality of sound based communications each having a different string of notes;
receiving a first selection of one of the plurality of sound based communications from at least one originating user in a network; and
providing the first selection as part of a message to at least one other user in the network, wherein the first selection has a pre-established conversational meaning between the users.
4. The method of claim 3 wherein some of the plurality of sound based communications are cached in a local storage facility accessible to said at least one originating user in the network.
5. The method of claim 3 wherein some of the plurality of sound based communications are cached in a local storage facility accessible to said at least one other user in the network.
6. The method of claim 3 which further comprises receiving a second selection of one of the plurality of sound based communications from the at least one originating user in the network; and
providing the second selection as part of the message to the at least one other user in the network, wherein the second selection identifies the at least one originating user.
7. A method for providing communications via sound messages, comprising:
receiving a communication from at least one originating user in a network destined for at least one other user in the network, wherein at least a portion of the communication includes a sound based identifier and a sound based message; and
providing the communication to the at least one other user in the network, wherein the at least one other user in the network recognizes the at least one originating user by the sound based identifier, the sound based message having a pre-established conversational meaning between the users, and the sound based identifier comprising notes identifying the at least one originating user.
8. The method of claim 7 further comprising displaying one or more visual indicia or icons on a screen of an originating user, said one or more visual indicia or icons representing the sound based message or portions thereof.
9. The method of claim 8 further comprising displaying textual or visual labels in the visual indicia or icons.
10. The method of claim 8 further comprising selecting or activating the visual indicia or icons in order to send the sound based message represented by such indicia or icons.
11. The method of claim 7 further comprising displaying an online status of a plurality of network users on a screen of an originating user as a basis for selecting recipients of the communication.
US10/851,815 2000-02-22 2004-05-21 System, method and apparatus for communicating via sound messages and personal sound identifiers Expired - Lifetime US7246151B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US10/851,815 US7246151B2 (en) 2000-02-22 2004-05-21 System, method and apparatus for communicating via sound messages and personal sound identifiers
US11/810,831 US7653697B2 (en) 2000-02-22 2007-06-06 System, method and apparatus for communicating via sound messages and personal sound identifiers

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US18418000P 2000-02-22 2000-02-22
US09/609,893 US6760754B1 (en) 2000-02-22 2000-07-05 System, method and apparatus for communicating via sound messages and personal sound identifiers
US10/851,815 US7246151B2 (en) 2000-02-22 2004-05-21 System, method and apparatus for communicating via sound messages and personal sound identifiers

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/609,893 Continuation US6760754B1 (en) 2000-02-22 2000-07-05 System, method and apparatus for communicating via sound messages and personal sound identifiers

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US11/810,831 Continuation US7653697B2 (en) 2000-02-22 2007-06-06 System, method and apparatus for communicating via sound messages and personal sound identifiers

Publications (2)

Publication Number Publication Date
US20040215728A1 US20040215728A1 (en) 2004-10-28
US7246151B2 true US7246151B2 (en) 2007-07-17

Family

ID=26879877

Family Applications (3)

Application Number Title Priority Date Filing Date
US09/609,893 Expired - Lifetime US6760754B1 (en) 2000-02-22 2000-07-05 System, method and apparatus for communicating via sound messages and personal sound identifiers
US10/851,815 Expired - Lifetime US7246151B2 (en) 2000-02-22 2004-05-21 System, method and apparatus for communicating via sound messages and personal sound identifiers
US11/810,831 Expired - Fee Related US7653697B2 (en) 2000-02-22 2007-06-06 System, method and apparatus for communicating via sound messages and personal sound identifiers

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/609,893 Expired - Lifetime US6760754B1 (en) 2000-02-22 2000-07-05 System, method and apparatus for communicating via sound messages and personal sound identifiers

Family Applications After (1)

Application Number Title Priority Date Filing Date
US11/810,831 Expired - Fee Related US7653697B2 (en) 2000-02-22 2007-06-06 System, method and apparatus for communicating via sound messages and personal sound identifiers

Country Status (4)

Country Link
US (3) US6760754B1 (en)
JP (1) JP2001296899A (en)
KR (1) KR100773706B1 (en)
TW (1) TWI235583B (en)

Cited By (123)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060095848A1 (en) * 2004-11-04 2006-05-04 Apple Computer, Inc. Audio user interface for computing devices
US20070198650A1 (en) * 2003-11-11 2007-08-23 Heino Hameleers Method For Providing Multimedia Information To A Calling Party At Call Set Up
US20070244979A1 (en) * 2000-02-22 2007-10-18 Ellen Isaacs System, method and apparatus for communicating via sound messages and personal sound identifiers
US7805487B1 (en) * 2000-02-22 2010-09-28 At&T Intellectual Property Ii, L.P. System, method and apparatus for communicating via instant messaging
US8892446B2 (en) 2010-01-18 2014-11-18 Apple Inc. Service orchestration for intelligent automated assistant
US8898568B2 (en) 2008-09-09 2014-11-25 Apple Inc. Audio user interface
US9190062B2 (en) 2010-02-25 2015-11-17 Apple Inc. User profiling for voice input processing
US9262612B2 (en) 2011-03-21 2016-02-16 Apple Inc. Device access using voice authentication
US9300784B2 (en) 2013-06-13 2016-03-29 Apple Inc. System and method for emergency calls initiated by voice command
US9330720B2 (en) 2008-01-03 2016-05-03 Apple Inc. Methods and apparatus for altering audio output signals
US9338493B2 (en) 2014-06-30 2016-05-10 Apple Inc. Intelligent automated assistant for TV user interactions
US9368114B2 (en) 2013-03-14 2016-06-14 Apple Inc. Context-sensitive handling of interruptions
US9430463B2 (en) 2014-05-30 2016-08-30 Apple Inc. Exemplar-based natural language processing
US9483461B2 (en) 2012-03-06 2016-11-01 Apple Inc. Handling speech synthesis of content for multiple languages
US9495129B2 (en) 2012-06-29 2016-11-15 Apple Inc. Device, method, and user interface for voice-activated navigation and browsing of a document
US9502031B2 (en) 2014-05-27 2016-11-22 Apple Inc. Method for supporting dynamic grammars in WFST-based ASR
US9535906B2 (en) 2008-07-31 2017-01-03 Apple Inc. Mobile device having human language translation capability with positional feedback
US9576574B2 (en) 2012-09-10 2017-02-21 Apple Inc. Context-sensitive handling of interruptions by intelligent digital assistant
US9582608B2 (en) 2013-06-07 2017-02-28 Apple Inc. Unified ranking with entropy-weighted information for phrase-based semantic auto-completion
US9620104B2 (en) 2013-06-07 2017-04-11 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
US9620105B2 (en) 2014-05-15 2017-04-11 Apple Inc. Analyzing audio input for efficient speech and music recognition
US9626955B2 (en) 2008-04-05 2017-04-18 Apple Inc. Intelligent text-to-speech conversion
US9633004B2 (en) 2014-05-30 2017-04-25 Apple Inc. Better resolution when referencing to concepts
US9633674B2 (en) 2013-06-07 2017-04-25 Apple Inc. System and method for detecting errors in interactions with a voice-based digital assistant
US9646614B2 (en) 2000-03-16 2017-05-09 Apple Inc. Fast, language-independent method for user authentication by voice
US9646609B2 (en) 2014-09-30 2017-05-09 Apple Inc. Caching apparatus for serving phonetic pronunciations
US9668121B2 (en) 2014-09-30 2017-05-30 Apple Inc. Social reminders
US9697822B1 (en) 2013-03-15 2017-07-04 Apple Inc. System and method for updating an adaptive speech recognition model
US9697820B2 (en) 2015-09-24 2017-07-04 Apple Inc. Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks
US9711141B2 (en) 2014-12-09 2017-07-18 Apple Inc. Disambiguating heteronyms in speech synthesis
US9715875B2 (en) 2014-05-30 2017-07-25 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US9721566B2 (en) 2015-03-08 2017-08-01 Apple Inc. Competing devices responding to voice triggers
US9734193B2 (en) 2014-05-30 2017-08-15 Apple Inc. Determining domain salience ranking from ambiguous words in natural speech
US9760559B2 (en) 2014-05-30 2017-09-12 Apple Inc. Predictive text input
US9785630B2 (en) 2014-05-30 2017-10-10 Apple Inc. Text prediction using combined word N-gram and unigram language models
US9798393B2 (en) 2011-08-29 2017-10-24 Apple Inc. Text correction processing
US9818400B2 (en) 2014-09-11 2017-11-14 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US9842101B2 (en) 2014-05-30 2017-12-12 Apple Inc. Predictive conversion of language input
US9842105B2 (en) 2015-04-16 2017-12-12 Apple Inc. Parsimonious continuous-space phrase representations for natural language processing
US9858925B2 (en) 2009-06-05 2018-01-02 Apple Inc. Using context information to facilitate processing of commands in a virtual assistant
US9865280B2 (en) 2015-03-06 2018-01-09 Apple Inc. Structured dictation using intelligent automated assistants
US9886432B2 (en) 2014-09-30 2018-02-06 Apple Inc. Parsimonious handling of word inflection via categorical stem + suffix N-gram language models
US9886953B2 (en) 2015-03-08 2018-02-06 Apple Inc. Virtual assistant activation
US9899019B2 (en) 2015-03-18 2018-02-20 Apple Inc. Systems and methods for structured stem and suffix language models
US9922642B2 (en) 2013-03-15 2018-03-20 Apple Inc. Training an at least partial voice command system
US9934775B2 (en) 2016-05-26 2018-04-03 Apple Inc. Unit-selection text-to-speech synthesis based on predicted concatenation parameters
US9953088B2 (en) 2012-05-14 2018-04-24 Apple Inc. Crowd sourcing information to fulfill user requests
US9959870B2 (en) 2008-12-11 2018-05-01 Apple Inc. Speech recognition involving a mobile device
US9966068B2 (en) 2013-06-08 2018-05-08 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
US9966065B2 (en) 2014-05-30 2018-05-08 Apple Inc. Multi-command single utterance input method
US9971774B2 (en) 2012-09-19 2018-05-15 Apple Inc. Voice-based media searching
US9972304B2 (en) 2016-06-03 2018-05-15 Apple Inc. Privacy preserving distributed evaluation framework for embedded personalized systems
US10043516B2 (en) 2016-09-23 2018-08-07 Apple Inc. Intelligent automated assistant
US10049668B2 (en) 2015-12-02 2018-08-14 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US10049663B2 (en) 2016-06-08 2018-08-14 Apple, Inc. Intelligent automated assistant for media exploration
US10057736B2 (en) 2011-06-03 2018-08-21 Apple Inc. Active transport based notifications
US10067938B2 (en) 2016-06-10 2018-09-04 Apple Inc. Multilingual word prediction
US10074360B2 (en) 2014-09-30 2018-09-11 Apple Inc. Providing an indication of the suitability of speech recognition
US10079014B2 (en) 2012-06-08 2018-09-18 Apple Inc. Name recognition system
US10078631B2 (en) 2014-05-30 2018-09-18 Apple Inc. Entropy-guided text prediction using combined word and character n-gram language models
US10083688B2 (en) 2015-05-27 2018-09-25 Apple Inc. Device voice control for selecting a displayed affordance
US10089072B2 (en) 2016-06-11 2018-10-02 Apple Inc. Intelligent device arbitration and control
US10101822B2 (en) 2015-06-05 2018-10-16 Apple Inc. Language input correction
US10127220B2 (en) 2015-06-04 2018-11-13 Apple Inc. Language identification from short strings
US10127911B2 (en) 2014-09-30 2018-11-13 Apple Inc. Speaker identification and unsupervised speaker adaptation techniques
US10134385B2 (en) 2012-03-02 2018-11-20 Apple Inc. Systems and methods for name pronunciation
US10170123B2 (en) 2014-05-30 2019-01-01 Apple Inc. Intelligent assistant for home automation
US10176167B2 (en) 2013-06-09 2019-01-08 Apple Inc. System and method for inferring user intent from speech inputs
US10186254B2 (en) 2015-06-07 2019-01-22 Apple Inc. Context-based endpoint detection
US10185542B2 (en) 2013-06-09 2019-01-22 Apple Inc. Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant
US10192552B2 (en) 2016-06-10 2019-01-29 Apple Inc. Digital assistant providing whispered speech
US10199051B2 (en) 2013-02-07 2019-02-05 Apple Inc. Voice trigger for a digital assistant
US10223066B2 (en) 2015-12-23 2019-03-05 Apple Inc. Proactive assistance based on dialog communication between devices
US10241644B2 (en) 2011-06-03 2019-03-26 Apple Inc. Actionable reminder entries
US10241752B2 (en) 2011-09-30 2019-03-26 Apple Inc. Interface for a virtual digital assistant
US10249300B2 (en) 2016-06-06 2019-04-02 Apple Inc. Intelligent list reading
US10255907B2 (en) 2015-06-07 2019-04-09 Apple Inc. Automatic accent detection using acoustic models
US10269345B2 (en) 2016-06-11 2019-04-23 Apple Inc. Intelligent task discovery
US10276170B2 (en) 2010-01-18 2019-04-30 Apple Inc. Intelligent automated assistant
US10283110B2 (en) 2009-07-02 2019-05-07 Apple Inc. Methods and apparatuses for automatic speech recognition
US10289433B2 (en) 2014-05-30 2019-05-14 Apple Inc. Domain specific language for encoding assistant dialog
US10297253B2 (en) 2016-06-11 2019-05-21 Apple Inc. Application integration with a digital assistant
US10318871B2 (en) 2005-09-08 2019-06-11 Apple Inc. Method and apparatus for building an intelligent automated assistant
US10332518B2 (en) 2017-05-09 2019-06-25 Apple Inc. User interface for correcting recognition errors
US10354011B2 (en) 2016-06-09 2019-07-16 Apple Inc. Intelligent automated assistant in a home environment
US10356243B2 (en) 2015-06-05 2019-07-16 Apple Inc. Virtual assistant aided communication with 3rd party service in a communication session
US10366158B2 (en) 2015-09-29 2019-07-30 Apple Inc. Efficient word encoding for recurrent neural network language models
US10410637B2 (en) 2017-05-12 2019-09-10 Apple Inc. User-specific acoustic models
US10446141B2 (en) 2014-08-28 2019-10-15 Apple Inc. Automatic speech recognition based on user feedback
US10446143B2 (en) 2016-03-14 2019-10-15 Apple Inc. Identification of voice inputs providing credentials
US10482874B2 (en) 2017-05-15 2019-11-19 Apple Inc. Hierarchical belief states for digital assistants
US10490187B2 (en) 2016-06-10 2019-11-26 Apple Inc. Digital assistant providing automated status report
US10496753B2 (en) 2010-01-18 2019-12-03 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US10509862B2 (en) 2016-06-10 2019-12-17 Apple Inc. Dynamic phrase expansion of language input
US10521466B2 (en) 2016-06-11 2019-12-31 Apple Inc. Data driven natural language event detection and classification
US10552013B2 (en) 2014-12-02 2020-02-04 Apple Inc. Data detection
US10553209B2 (en) 2010-01-18 2020-02-04 Apple Inc. Systems and methods for hands-free notification summaries
US10567477B2 (en) 2015-03-08 2020-02-18 Apple Inc. Virtual assistant continuity
US10568032B2 (en) 2007-04-03 2020-02-18 Apple Inc. Method and system for operating a multi-function portable electronic device using voice-activation
US10592095B2 (en) 2014-05-23 2020-03-17 Apple Inc. Instantaneous speaking of content on touch devices
US10593346B2 (en) 2016-12-22 2020-03-17 Apple Inc. Rank-reduced token representation for automatic speech recognition
US10607140B2 (en) 2010-01-25 2020-03-31 Newvaluexchange Ltd. Apparatuses, methods and systems for a digital conversation management platform
US10652394B2 (en) 2013-03-14 2020-05-12 Apple Inc. System and method for processing voicemail
US10659851B2 (en) 2014-06-30 2020-05-19 Apple Inc. Real-time digital assistant knowledge updates
US10671428B2 (en) 2015-09-08 2020-06-02 Apple Inc. Distributed personal assistant
US10679605B2 (en) 2010-01-18 2020-06-09 Apple Inc. Hands-free list-reading by intelligent automated assistant
US10691473B2 (en) 2015-11-06 2020-06-23 Apple Inc. Intelligent automated assistant in a messaging environment
US10706373B2 (en) 2011-06-03 2020-07-07 Apple Inc. Performing actions associated with task items that represent tasks to perform
US10705794B2 (en) 2010-01-18 2020-07-07 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US10733993B2 (en) 2016-06-10 2020-08-04 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US10747498B2 (en) 2015-09-08 2020-08-18 Apple Inc. Zero latency digital assistant
US10755703B2 (en) 2017-05-11 2020-08-25 Apple Inc. Offline personal assistant
US10762293B2 (en) 2010-12-22 2020-09-01 Apple Inc. Using parts-of-speech tagging and named entity recognition for spelling correction
US10789041B2 (en) 2014-09-12 2020-09-29 Apple Inc. Dynamic thresholds for always listening speech trigger
US10791216B2 (en) 2013-08-06 2020-09-29 Apple Inc. Auto-activating smart responses based on activities from remote devices
US10791176B2 (en) 2017-05-12 2020-09-29 Apple Inc. Synchronization and task delegation of a digital assistant
US10789945B2 (en) 2017-05-12 2020-09-29 Apple Inc. Low-latency intelligent automated assistant
US10810274B2 (en) 2017-05-15 2020-10-20 Apple Inc. Optimizing dialogue policy decisions for digital assistants using implicit feedback
US11010550B2 (en) 2015-09-29 2021-05-18 Apple Inc. Unified language modeling framework for word prediction, auto-completion and auto-correction
US11025565B2 (en) 2015-06-07 2021-06-01 Apple Inc. Personalized prediction of responses for instant messaging
US11217255B2 (en) 2017-05-16 2022-01-04 Apple Inc. Far-field extension for digital assistant services
US11281993B2 (en) 2016-12-05 2022-03-22 Apple Inc. Model and ensemble compression for metric learning
US11587559B2 (en) 2015-09-30 2023-02-21 Apple Inc. Intelligent device identification

Families Citing this family (55)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6978293B1 (en) * 2000-02-29 2005-12-20 Microsoft Corporation Methods and systems for selecting criteria for a successful acknowledgement message in instant messaging
US7958212B1 (en) * 2000-02-29 2011-06-07 Microsoft Corporation Updating presence information
US6760580B2 (en) * 2000-03-06 2004-07-06 America Online, Incorporated Facilitating instant messaging outside of user-defined buddy group in a wireless and non-wireless environment
US6714793B1 (en) 2000-03-06 2004-03-30 America Online, Inc. Method and system for instant messaging across cellular networks and a public data network
US7624172B1 (en) 2000-03-17 2009-11-24 Aol Llc State change alerts mechanism
US9246975B2 (en) 2000-03-17 2016-01-26 Facebook, Inc. State change alerts mechanism
US6563913B1 (en) * 2000-08-21 2003-05-13 Koninklijke Philips Electronics N.V. Selective sending of portions of electronic content
US7606864B2 (en) * 2000-11-10 2009-10-20 At&T Intellectual Property I, L.P. Setting and display of communication receipt preferences by users of multiple communication devices
US7957514B2 (en) * 2000-12-18 2011-06-07 Paltalk Holdings, Inc. System, method and computer program product for conveying presence information via voice mail
US20030097406A1 (en) * 2001-11-16 2003-05-22 Ben Stafford Method of exchanging messages
US6996777B2 (en) * 2001-11-29 2006-02-07 Nokia Corporation Method and apparatus for presenting auditory icons in a mobile terminal
US7216143B2 (en) * 2002-01-03 2007-05-08 International Business Machines Corporation Instant messaging with voice conference feature
US20030130014A1 (en) * 2002-01-07 2003-07-10 Rucinski David B Reduced complexity user interface
WO2003058372A2 (en) * 2002-01-07 2003-07-17 Flash Networks Ltd. A system and a method for accelerating communication between client and an email server
US20030134260A1 (en) * 2002-01-11 2003-07-17 Hartman Richard M. Multi-client type learning system
US7353455B2 (en) * 2002-05-21 2008-04-01 At&T Delaware Intellectual Property, Inc. Caller initiated distinctive presence alerting and auto-response messaging
US20030234812A1 (en) * 2002-06-24 2003-12-25 Drucker Steven M. Visual decision maker
US20040024822A1 (en) * 2002-08-01 2004-02-05 Werndorfer Scott M. Apparatus and method for generating audio and graphical animations in an instant messaging environment
US7275215B2 (en) * 2002-07-29 2007-09-25 Cerulean Studios, Llc System and method for managing contacts in an instant messaging environment
US7428580B2 (en) 2003-11-26 2008-09-23 Aol Llc Electronic message forwarding
US7899862B2 (en) 2002-11-18 2011-03-01 Aol Inc. Dynamic identification of other users to an online user
US8122137B2 (en) 2002-11-18 2012-02-21 Aol Inc. Dynamic location of a subordinate user
US8701014B1 (en) 2002-11-18 2014-04-15 Facebook, Inc. Account linking
CA2506585A1 (en) 2002-11-18 2004-06-03 Valerie Kucharewski People lists
US8005919B2 (en) 2002-11-18 2011-08-23 Aol Inc. Host-based intelligent results related to a character stream
US8965964B1 (en) 2002-11-18 2015-02-24 Facebook, Inc. Managing forwarded electronic messages
US7590696B1 (en) 2002-11-18 2009-09-15 Aol Llc Enhanced buddy list using mobile device identifiers
US7640306B2 (en) 2002-11-18 2009-12-29 Aol Llc Reconfiguring an electronic message to effect an enhanced notification
AU2003293788A1 (en) * 2002-12-18 2004-07-09 Orange S.A. Mobile graphics device and server
US7672439B2 (en) * 2003-04-02 2010-03-02 Aol Inc. Concatenated audio messages
US7644166B2 (en) * 2003-03-03 2010-01-05 Aol Llc Source audio identifiers for digital communications
US7613776B1 (en) 2003-03-26 2009-11-03 Aol Llc Identifying and using identities deemed to be known to a user
US7653693B2 (en) 2003-09-05 2010-01-26 Aol Llc Method and system for capturing instant messages
US7752270B2 (en) * 2004-01-21 2010-07-06 At&T Mobility Ii Llc Linking sounds and emoticons
TWI255116B (en) * 2004-07-09 2006-05-11 Xcome Technology Co Ltd Integrated real-time message system with gateway function, and its implementation method
US8521828B2 (en) * 2004-07-30 2013-08-27 The Invention Science Fund I, Llc Themes indicative of participants in persistent communication
US20060085515A1 (en) * 2004-10-14 2006-04-20 Kevin Kurtz Advanced text analysis and supplemental content processing in an instant messaging environment
TW200614010A (en) * 2004-10-28 2006-05-01 Xcome Technology Co Ltd Instant messenger system with transformation model and implementation method
US20060126599A1 (en) * 2004-11-22 2006-06-15 Tarn Liang C Integrated message system with gateway functions and method for implementing the same
US7627124B2 (en) * 2005-09-22 2009-12-01 Konica Minolta Technology U.S.A., Inc. Wireless communication authentication process and system
US20070129090A1 (en) * 2005-12-01 2007-06-07 Liang-Chern Tarn Methods of implementing an operation interface for instant messages on a portable communication device
US20070129112A1 (en) * 2005-12-01 2007-06-07 Liang-Chern Tarn Methods of Implementing an Operation Interface for Instant Messages on a Portable Communication Device
US20090013254A1 (en) * 2007-06-14 2009-01-08 Georgia Tech Research Corporation Methods and Systems for Auditory Display of Menu Items
MY168177A (en) * 2007-06-27 2018-10-11 Karen Knowles Entpr Pty Lty Communication method, system and products
AU2012216544B2 (en) * 2007-06-29 2014-08-28 Microsoft Technology Licensing, Llc Providing sender-selected sound items to conversation participants
US8762458B2 (en) 2007-06-29 2014-06-24 Microsoft Corporation Providing sender-selected sound items to conversation participants
US20090112996A1 (en) * 2007-10-25 2009-04-30 Cisco Technology, Inc. Determining Presence Status of End User Associated with Multiple Access Terminals
US20090110169A1 (en) * 2007-10-25 2009-04-30 Cisco Technology, Inc. Initiating a Conference Session Based on Availability of End Users
US20090125594A1 (en) * 2007-11-13 2009-05-14 Avaya Technology Llc Instant Messaging Intercom System
US8713440B2 (en) * 2008-02-13 2014-04-29 Microsoft Corporation Techniques to manage communications resources for a multimedia conference event
JP2011198109A (en) * 2010-03-19 2011-10-06 Hitachi Ltd Id management method, id management system, and id management program
US20140025233A1 (en) 2012-07-17 2014-01-23 Elwha Llc Unmanned device utilization methods and systems
US9254363B2 (en) 2012-07-17 2016-02-09 Elwha Llc Unmanned device interaction methods and systems
JP5807094B1 (en) 2014-07-01 2015-11-10 株式会社 ディー・エヌ・エー System, method and program enabling voice chat
JP6531196B2 (en) * 2018-03-20 2019-06-12 株式会社 ディー・エヌ・エー System, method and program for enabling voice chat

Citations (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6026156A (en) 1994-03-18 2000-02-15 Aspect Telecommunications Corporation Enhanced call waiting
US6131121A (en) 1995-09-25 2000-10-10 Netspeak Corporation Point-to-point computer network communication utility utilizing dynamically assigned network protocol addresses
US6141341A (en) 1998-09-09 2000-10-31 Motorola, Inc. Voice over internet protocol telephone system and method
US6192395B1 (en) 1998-12-23 2001-02-20 Multitude, Inc. System and method for visually identifying speaking participants in a multi-participant networked event
US6198738B1 (en) 1997-04-16 2001-03-06 Lucent Technologies Inc. Communications between the public switched telephone network and packetized data networks
US6229880B1 (en) 1998-05-21 2001-05-08 Bell Atlantic Network Services, Inc. Methods and apparatus for efficiently providing a communication system with speech recognition capabilities
US6256663B1 (en) 1999-01-22 2001-07-03 Greenfield Online, Inc. System and method for conducting focus groups using remotely loaded participants over a computer network
US20010011293A1 (en) 1996-09-30 2001-08-02 Masahiko Murakami Chat system terminal device therefor display method of chat system and recording medium
US6304648B1 (en) 1998-12-21 2001-10-16 Lucent Technologies Inc. Multimedia conference call participant identification system and method
US20010033298A1 (en) 2000-03-01 2001-10-25 Benjamin Slotznick Adjunct use of instant messenger software to enable communications to or between chatterbots or other software agents
US20020023131A1 (en) 2000-03-17 2002-02-21 Shuwu Wu Voice Instant Messaging
US6385303B1 (en) 1997-11-13 2002-05-07 Legerity, Inc. System and method for identifying and announcing a caller and a callee of an incoming telephone call
US20020059144A1 (en) 2000-04-28 2002-05-16 Meffert Gregory J. Secured content delivery system and method
US6397184B1 (en) 1996-08-29 2002-05-28 Eastman Kodak Company System and method for associating pre-recorded audio snippets with still photographic images
US6424647B1 (en) 1997-08-13 2002-07-23 Mediaring.Com Ltd. Method and apparatus for making a phone call connection over an internet connection
US6434604B1 (en) 1998-01-19 2002-08-13 Network Community Creation, Inc. Chat system allows user to select balloon form and background color for displaying chat statement data
US20020110121A1 (en) 2001-02-15 2002-08-15 Om Mishra Web-enabled call management method and apparatus
US6484196B1 (en) 1998-03-20 2002-11-19 Advanced Web Solutions Internet messaging system and method for use in computer networks
US6519326B1 (en) 1998-05-06 2003-02-11 At&T Corp. Telephone voice-ringing using a transmitted voice announcement
US6532477B1 (en) 2000-02-23 2003-03-11 Sun Microsystems, Inc. Method and apparatus for generating an audio signature for a data item
US6636602B1 (en) 1999-08-25 2003-10-21 Giovanni Vlacancich Method for communicating
US6654790B2 (en) 1999-08-03 2003-11-25 International Business Machines Corporation Technique for enabling wireless messaging systems to use alternative message delivery mechanisms
US6671370B1 (en) 1999-12-21 2003-12-30 Nokia Corporation Method and apparatus enabling a calling telephone handset to choose a ringing indication(s) to be played and/or shown at a receiving telephone handset
US6691162B1 (en) 1999-09-21 2004-02-10 America Online, Inc. Monitoring users of a computer network
US6699125B2 (en) 2000-07-03 2004-03-02 Yahoo! Inc. Game server for use in connection with a messenger server
US6714965B2 (en) 1998-07-03 2004-03-30 Fujitsu Limited Group contacting system, and recording medium for storing computer instructions for executing operations of the contact system
US6714793B1 (en) 2000-03-06 2004-03-30 America Online, Inc. Method and system for instant messaging across cellular networks and a public data network
US6732148B1 (en) 1999-12-28 2004-05-04 International Business Machines Corporation System and method for interconnecting secure rooms
US6760749B1 (en) 2000-05-10 2004-07-06 Polycom, Inc. Interactive conference content distribution device and methods of use thereof
US6760754B1 (en) 2000-02-22 2004-07-06 At&T Corp. System, method and apparatus for communicating via sound messages and personal sound identifiers
US6784901B1 (en) 2000-05-09 2004-08-31 There Method, system and computer program product for the delivery of a chat message in a 3D multi-user environment
US6807562B1 (en) 2000-02-29 2004-10-19 Microsoft Corporation Automatic and selective assignment of channels to recipients of voice chat data
US6907447B1 (en) 2001-04-30 2005-06-14 Microsoft Corporation Method and apparatus for providing an instant message notification

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6427064B1 (en) * 1994-01-05 2002-07-30 Daniel A. Henderson Method and apparatus for maintaining a database in a portable communication device
US7937312B1 (en) * 1995-04-26 2011-05-03 Ebay Inc. Facilitating electronic commerce transactions through binding offers
US5960173A (en) * 1995-12-22 1999-09-28 Sun Microsystems, Inc. System and method enabling awareness of others working on similar tasks in a computer work environment
US6574604B1 (en) * 1996-05-13 2003-06-03 Van Rijn Percy Internet message system
US5826064A (en) * 1996-07-29 1998-10-20 International Business Machines Corp. User-configurable earcon event engine
US5892813A (en) * 1996-09-30 1999-04-06 Matsushita Electric Industrial Co., Ltd. Multimodal voice dialing digital key telephone with dialog manager
US6240405B1 (en) * 1997-04-17 2001-05-29 Casio Computer Co., Ltd. Information processors having an agent function and storage mediums which contain processing programs for use in the information processor
US6012030A (en) * 1998-04-21 2000-01-04 Nortel Networks Corporation Management of speech and audio prompts in multimodal interfaces
US6252588B1 (en) * 1998-06-16 2001-06-26 Zentek Technology, Inc. Method and apparatus for providing an audio visual e-mail system
US6510452B1 (en) * 1998-08-21 2003-01-21 Nortel Networks Limited System and method for communications management with a network presence icon
US6324507B1 (en) * 1999-02-10 2001-11-27 International Business Machines Corp. Speech recognition enrollment for non-readers and displayless devices
US6519771B1 (en) * 1999-12-14 2003-02-11 Steven Ericsson Zenith System for interactive chat without a keyboard
BR0107993A (en) 2000-01-31 2004-01-06 Infonxx Inc Communication aid system and method
US7043530B2 (en) * 2000-02-22 2006-05-09 At&T Corp. System, method and apparatus for communicating via instant messaging
US20020034281A1 (en) * 2000-02-22 2002-03-21 Ellen Isaacs System and method for communicating via instant messaging

Patent Citations (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6026156A (en) 1994-03-18 2000-02-15 Aspect Telecommunications Corporation Enhanced call waiting
US6131121A (en) 1995-09-25 2000-10-10 Netspeak Corporation Point-to-point computer network communication utility utilizing dynamically assigned network protocol addresses
US6397184B1 (en) 1996-08-29 2002-05-28 Eastman Kodak Company System and method for associating pre-recorded audio snippets with still photographic images
US20010011293A1 (en) 1996-09-30 2001-08-02 Masahiko Murakami Chat system terminal device therefor display method of chat system and recording medium
US6198738B1 (en) 1997-04-16 2001-03-06 Lucent Technologies Inc. Communications between the public switched telephone network and packetized data networks
US6424647B1 (en) 1997-08-13 2002-07-23 Mediaring.Com Ltd. Method and apparatus for making a phone call connection over an internet connection
US6385303B1 (en) 1997-11-13 2002-05-07 Legerity, Inc. System and method for identifying and announcing a caller and a callee of an incoming telephone call
US6434604B1 (en) 1998-01-19 2002-08-13 Network Community Creation, Inc. Chat system allows user to select balloon form and background color for displaying chat statement data
US6484196B1 (en) 1998-03-20 2002-11-19 Advanced Web Solutions Internet messaging system and method for use in computer networks
US6519326B1 (en) 1998-05-06 2003-02-11 At&T Corp. Telephone voice-ringing using a transmitted voice announcement
US6229880B1 (en) 1998-05-21 2001-05-08 Bell Atlantic Network Services, Inc. Methods and apparatus for efficiently providing a communication system with speech recognition capabilities
US6714965B2 (en) 1998-07-03 2004-03-30 Fujitsu Limited Group contacting system, and recording medium for storing computer instructions for executing operations of the contact system
US6141341A (en) 1998-09-09 2000-10-31 Motorola, Inc. Voice over internet protocol telephone system and method
US6304648B1 (en) 1998-12-21 2001-10-16 Lucent Technologies Inc. Multimedia conference call participant identification system and method
US6192395B1 (en) 1998-12-23 2001-02-20 Multitude, Inc. System and method for visually identifying speaking participants in a multi-participant networked event
US6256663B1 (en) 1999-01-22 2001-07-03 Greenfield Online, Inc. System and method for conducting focus groups using remotely loaded participants over a computer network
US6654790B2 (en) 1999-08-03 2003-11-25 International Business Machines Corporation Technique for enabling wireless messaging systems to use alternative message delivery mechanisms
US6636602B1 (en) 1999-08-25 2003-10-21 Giovanni Vlacancich Method for communicating
US6691162B1 (en) 1999-09-21 2004-02-10 America Online, Inc. Monitoring users of a computer network
US6671370B1 (en) 1999-12-21 2003-12-30 Nokia Corporation Method and apparatus enabling a calling telephone handset to choose a ringing indication(s) to be played and/or shown at a receiving telephone handset
US6732148B1 (en) 1999-12-28 2004-05-04 International Business Machines Corporation System and method for interconnecting secure rooms
US6760754B1 (en) 2000-02-22 2004-07-06 At&T Corp. System, method and apparatus for communicating via sound messages and personal sound identifiers
US6532477B1 (en) 2000-02-23 2003-03-11 Sun Microsystems, Inc. Method and apparatus for generating an audio signature for a data item
US6807562B1 (en) 2000-02-29 2004-10-19 Microsoft Corporation Automatic and selective assignment of channels to recipients of voice chat data
US20010033298A1 (en) 2000-03-01 2001-10-25 Benjamin Slotznick Adjunct use of instant messenger software to enable communications to or between chatterbots or other software agents
US6714793B1 (en) 2000-03-06 2004-03-30 America Online, Inc. Method and system for instant messaging across cellular networks and a public data network
US20020023131A1 (en) 2000-03-17 2002-02-21 Shuwu Wu Voice Instant Messaging
US20020059144A1 (en) 2000-04-28 2002-05-16 Meffert Gregory J. Secured content delivery system and method
US6784901B1 (en) 2000-05-09 2004-08-31 There Method, system and computer program product for the delivery of a chat message in a 3D multi-user environment
US6760749B1 (en) 2000-05-10 2004-07-06 Polycom, Inc. Interactive conference content distribution device and methods of use thereof
US6699125B2 (en) 2000-07-03 2004-03-02 Yahoo! Inc. Game server for use in connection with a messenger server
US20020110121A1 (en) 2001-02-15 2002-08-15 Om Mishra Web-enabled call management method and apparatus
US6907447B1 (en) 2001-04-30 2005-06-14 Microsoft Corporation Method and apparatus for providing an instant message notification

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Taiwan Patent Publication No. 307970, Paging system with voice recognition, issued on Jun. 11, 1997.

Cited By (173)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070244979A1 (en) * 2000-02-22 2007-10-18 Ellen Isaacs System, method and apparatus for communicating via sound messages and personal sound identifiers
US7653697B2 (en) * 2000-02-22 2010-01-26 At&T Intellectual Property Ii, L.P. System, method and apparatus for communicating via sound messages and personal sound identifiers
US7805487B1 (en) * 2000-02-22 2010-09-28 At&T Intellectual Property Ii, L.P. System, method and apparatus for communicating via instant messaging
US9646614B2 (en) 2000-03-16 2017-05-09 Apple Inc. Fast, language-independent method for user authentication by voice
US8990304B2 (en) * 2003-11-11 2015-03-24 Telefonaktiebolaget L M Ericsson (Publ) Method for providing multimedia information to a calling party at call set up
US20070198650A1 (en) * 2003-11-11 2007-08-23 Heino Hameleers Method For Providing Multimedia Information To A Calling Party At Call Set Up
US20070180383A1 (en) * 2004-11-04 2007-08-02 Apple Inc. Audio user interface for computing devices
US7735012B2 (en) * 2004-11-04 2010-06-08 Apple Inc. Audio user interface for computing devices
US7779357B2 (en) * 2004-11-04 2010-08-17 Apple Inc. Audio user interface for computing devices
US20060095848A1 (en) * 2004-11-04 2006-05-04 Apple Computer, Inc. Audio user interface for computing devices
US10318871B2 (en) 2005-09-08 2019-06-11 Apple Inc. Method and apparatus for building an intelligent automated assistant
US9117447B2 (en) 2006-09-08 2015-08-25 Apple Inc. Using event alert text as input to an automated assistant
US8930191B2 (en) 2006-09-08 2015-01-06 Apple Inc. Paraphrasing of user requests and results by automated digital assistant
US8942986B2 (en) 2006-09-08 2015-01-27 Apple Inc. Determining user intent based on ontologies of domains
US10568032B2 (en) 2007-04-03 2020-02-18 Apple Inc. Method and system for operating a multi-function portable electronic device using voice-activation
US10381016B2 (en) 2008-01-03 2019-08-13 Apple Inc. Methods and apparatus for altering audio output signals
US9330720B2 (en) 2008-01-03 2016-05-03 Apple Inc. Methods and apparatus for altering audio output signals
US9626955B2 (en) 2008-04-05 2017-04-18 Apple Inc. Intelligent text-to-speech conversion
US9865248B2 (en) 2008-04-05 2018-01-09 Apple Inc. Intelligent text-to-speech conversion
US10108612B2 (en) 2008-07-31 2018-10-23 Apple Inc. Mobile device having human language translation capability with positional feedback
US9535906B2 (en) 2008-07-31 2017-01-03 Apple Inc. Mobile device having human language translation capability with positional feedback
US8898568B2 (en) 2008-09-09 2014-11-25 Apple Inc. Audio user interface
US9959870B2 (en) 2008-12-11 2018-05-01 Apple Inc. Speech recognition involving a mobile device
US10475446B2 (en) 2009-06-05 2019-11-12 Apple Inc. Using context information to facilitate processing of commands in a virtual assistant
US10795541B2 (en) 2009-06-05 2020-10-06 Apple Inc. Intelligent organization of tasks items
US9858925B2 (en) 2009-06-05 2018-01-02 Apple Inc. Using context information to facilitate processing of commands in a virtual assistant
US11080012B2 (en) 2009-06-05 2021-08-03 Apple Inc. Interface for a virtual digital assistant
US10283110B2 (en) 2009-07-02 2019-05-07 Apple Inc. Methods and apparatuses for automatic speech recognition
US10496753B2 (en) 2010-01-18 2019-12-03 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US10706841B2 (en) 2010-01-18 2020-07-07 Apple Inc. Task flow identification based on user intent
US8892446B2 (en) 2010-01-18 2014-11-18 Apple Inc. Service orchestration for intelligent automated assistant
US9318108B2 (en) 2010-01-18 2016-04-19 Apple Inc. Intelligent automated assistant
US11423886B2 (en) 2010-01-18 2022-08-23 Apple Inc. Task flow identification based on user intent
US9548050B2 (en) 2010-01-18 2017-01-17 Apple Inc. Intelligent automated assistant
US10553209B2 (en) 2010-01-18 2020-02-04 Apple Inc. Systems and methods for hands-free notification summaries
US12087308B2 (en) 2010-01-18 2024-09-10 Apple Inc. Intelligent automated assistant
US8903716B2 (en) 2010-01-18 2014-12-02 Apple Inc. Personalized vocabulary for digital assistant
US10276170B2 (en) 2010-01-18 2019-04-30 Apple Inc. Intelligent automated assistant
US10679605B2 (en) 2010-01-18 2020-06-09 Apple Inc. Hands-free list-reading by intelligent automated assistant
US10705794B2 (en) 2010-01-18 2020-07-07 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US11410053B2 (en) 2010-01-25 2022-08-09 Newvaluexchange Ltd. Apparatuses, methods and systems for a digital conversation management platform
US10607140B2 (en) 2010-01-25 2020-03-31 Newvaluexchange Ltd. Apparatuses, methods and systems for a digital conversation management platform
US10607141B2 (en) 2010-01-25 2020-03-31 Newvaluexchange Ltd. Apparatuses, methods and systems for a digital conversation management platform
US10984326B2 (en) 2010-01-25 2021-04-20 Newvaluexchange Ltd. Apparatuses, methods and systems for a digital conversation management platform
US10984327B2 (en) 2010-01-25 2021-04-20 New Valuexchange Ltd. Apparatuses, methods and systems for a digital conversation management platform
US10049675B2 (en) 2010-02-25 2018-08-14 Apple Inc. User profiling for voice input processing
US9190062B2 (en) 2010-02-25 2015-11-17 Apple Inc. User profiling for voice input processing
US9633660B2 (en) 2010-02-25 2017-04-25 Apple Inc. User profiling for voice input processing
US10762293B2 (en) 2010-12-22 2020-09-01 Apple Inc. Using parts-of-speech tagging and named entity recognition for spelling correction
US9262612B2 (en) 2011-03-21 2016-02-16 Apple Inc. Device access using voice authentication
US10102359B2 (en) 2011-03-21 2018-10-16 Apple Inc. Device access using voice authentication
US10241644B2 (en) 2011-06-03 2019-03-26 Apple Inc. Actionable reminder entries
US10706373B2 (en) 2011-06-03 2020-07-07 Apple Inc. Performing actions associated with task items that represent tasks to perform
US10057736B2 (en) 2011-06-03 2018-08-21 Apple Inc. Active transport based notifications
US11120372B2 (en) 2011-06-03 2021-09-14 Apple Inc. Performing actions associated with task items that represent tasks to perform
US9798393B2 (en) 2011-08-29 2017-10-24 Apple Inc. Text correction processing
US10241752B2 (en) 2011-09-30 2019-03-26 Apple Inc. Interface for a virtual digital assistant
US10134385B2 (en) 2012-03-02 2018-11-20 Apple Inc. Systems and methods for name pronunciation
US9483461B2 (en) 2012-03-06 2016-11-01 Apple Inc. Handling speech synthesis of content for multiple languages
US9953088B2 (en) 2012-05-14 2018-04-24 Apple Inc. Crowd sourcing information to fulfill user requests
US10079014B2 (en) 2012-06-08 2018-09-18 Apple Inc. Name recognition system
US9495129B2 (en) 2012-06-29 2016-11-15 Apple Inc. Device, method, and user interface for voice-activated navigation and browsing of a document
US9576574B2 (en) 2012-09-10 2017-02-21 Apple Inc. Context-sensitive handling of interruptions by intelligent digital assistant
US9971774B2 (en) 2012-09-19 2018-05-15 Apple Inc. Voice-based media searching
US10199051B2 (en) 2013-02-07 2019-02-05 Apple Inc. Voice trigger for a digital assistant
US10978090B2 (en) 2013-02-07 2021-04-13 Apple Inc. Voice trigger for a digital assistant
US9368114B2 (en) 2013-03-14 2016-06-14 Apple Inc. Context-sensitive handling of interruptions
US10652394B2 (en) 2013-03-14 2020-05-12 Apple Inc. System and method for processing voicemail
US11388291B2 (en) 2013-03-14 2022-07-12 Apple Inc. System and method for processing voicemail
US9697822B1 (en) 2013-03-15 2017-07-04 Apple Inc. System and method for updating an adaptive speech recognition model
US9922642B2 (en) 2013-03-15 2018-03-20 Apple Inc. Training an at least partial voice command system
US9966060B2 (en) 2013-06-07 2018-05-08 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
US9582608B2 (en) 2013-06-07 2017-02-28 Apple Inc. Unified ranking with entropy-weighted information for phrase-based semantic auto-completion
US9633674B2 (en) 2013-06-07 2017-04-25 Apple Inc. System and method for detecting errors in interactions with a voice-based digital assistant
US9620104B2 (en) 2013-06-07 2017-04-11 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
US10657961B2 (en) 2013-06-08 2020-05-19 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
US9966068B2 (en) 2013-06-08 2018-05-08 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
US10176167B2 (en) 2013-06-09 2019-01-08 Apple Inc. System and method for inferring user intent from speech inputs
US10185542B2 (en) 2013-06-09 2019-01-22 Apple Inc. Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant
US9300784B2 (en) 2013-06-13 2016-03-29 Apple Inc. System and method for emergency calls initiated by voice command
US10791216B2 (en) 2013-08-06 2020-09-29 Apple Inc. Auto-activating smart responses based on activities from remote devices
US9620105B2 (en) 2014-05-15 2017-04-11 Apple Inc. Analyzing audio input for efficient speech and music recognition
US10592095B2 (en) 2014-05-23 2020-03-17 Apple Inc. Instantaneous speaking of content on touch devices
US9502031B2 (en) 2014-05-27 2016-11-22 Apple Inc. Method for supporting dynamic grammars in WFST-based ASR
US9734193B2 (en) 2014-05-30 2017-08-15 Apple Inc. Determining domain salience ranking from ambiguous words in natural speech
US11133008B2 (en) 2014-05-30 2021-09-28 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US10170123B2 (en) 2014-05-30 2019-01-01 Apple Inc. Intelligent assistant for home automation
US9430463B2 (en) 2014-05-30 2016-08-30 Apple Inc. Exemplar-based natural language processing
US9842101B2 (en) 2014-05-30 2017-12-12 Apple Inc. Predictive conversion of language input
US9715875B2 (en) 2014-05-30 2017-07-25 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US11257504B2 (en) 2014-05-30 2022-02-22 Apple Inc. Intelligent assistant for home automation
US10083690B2 (en) 2014-05-30 2018-09-25 Apple Inc. Better resolution when referencing to concepts
US10078631B2 (en) 2014-05-30 2018-09-18 Apple Inc. Entropy-guided text prediction using combined word and character n-gram language models
US9633004B2 (en) 2014-05-30 2017-04-25 Apple Inc. Better resolution when referencing to concepts
US9966065B2 (en) 2014-05-30 2018-05-08 Apple Inc. Multi-command single utterance input method
US9785630B2 (en) 2014-05-30 2017-10-10 Apple Inc. Text prediction using combined word N-gram and unigram language models
US10497365B2 (en) 2014-05-30 2019-12-03 Apple Inc. Multi-command single utterance input method
US9760559B2 (en) 2014-05-30 2017-09-12 Apple Inc. Predictive text input
US10289433B2 (en) 2014-05-30 2019-05-14 Apple Inc. Domain specific language for encoding assistant dialog
US9338493B2 (en) 2014-06-30 2016-05-10 Apple Inc. Intelligent automated assistant for TV user interactions
US10659851B2 (en) 2014-06-30 2020-05-19 Apple Inc. Real-time digital assistant knowledge updates
US10904611B2 (en) 2014-06-30 2021-01-26 Apple Inc. Intelligent automated assistant for TV user interactions
US9668024B2 (en) 2014-06-30 2017-05-30 Apple Inc. Intelligent automated assistant for TV user interactions
US10446141B2 (en) 2014-08-28 2019-10-15 Apple Inc. Automatic speech recognition based on user feedback
US9818400B2 (en) 2014-09-11 2017-11-14 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US10431204B2 (en) 2014-09-11 2019-10-01 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US10789041B2 (en) 2014-09-12 2020-09-29 Apple Inc. Dynamic thresholds for always listening speech trigger
US9986419B2 (en) 2014-09-30 2018-05-29 Apple Inc. Social reminders
US9668121B2 (en) 2014-09-30 2017-05-30 Apple Inc. Social reminders
US10074360B2 (en) 2014-09-30 2018-09-11 Apple Inc. Providing an indication of the suitability of speech recognition
US9646609B2 (en) 2014-09-30 2017-05-09 Apple Inc. Caching apparatus for serving phonetic pronunciations
US10127911B2 (en) 2014-09-30 2018-11-13 Apple Inc. Speaker identification and unsupervised speaker adaptation techniques
US9886432B2 (en) 2014-09-30 2018-02-06 Apple Inc. Parsimonious handling of word inflection via categorical stem + suffix N-gram language models
US11556230B2 (en) 2014-12-02 2023-01-17 Apple Inc. Data detection
US10552013B2 (en) 2014-12-02 2020-02-04 Apple Inc. Data detection
US9711141B2 (en) 2014-12-09 2017-07-18 Apple Inc. Disambiguating heteronyms in speech synthesis
US9865280B2 (en) 2015-03-06 2018-01-09 Apple Inc. Structured dictation using intelligent automated assistants
US10311871B2 (en) 2015-03-08 2019-06-04 Apple Inc. Competing devices responding to voice triggers
US9721566B2 (en) 2015-03-08 2017-08-01 Apple Inc. Competing devices responding to voice triggers
US9886953B2 (en) 2015-03-08 2018-02-06 Apple Inc. Virtual assistant activation
US11087759B2 (en) 2015-03-08 2021-08-10 Apple Inc. Virtual assistant activation
US10567477B2 (en) 2015-03-08 2020-02-18 Apple Inc. Virtual assistant continuity
US9899019B2 (en) 2015-03-18 2018-02-20 Apple Inc. Systems and methods for structured stem and suffix language models
US9842105B2 (en) 2015-04-16 2017-12-12 Apple Inc. Parsimonious continuous-space phrase representations for natural language processing
US10083688B2 (en) 2015-05-27 2018-09-25 Apple Inc. Device voice control for selecting a displayed affordance
US10127220B2 (en) 2015-06-04 2018-11-13 Apple Inc. Language identification from short strings
US10356243B2 (en) 2015-06-05 2019-07-16 Apple Inc. Virtual assistant aided communication with 3rd party service in a communication session
US10101822B2 (en) 2015-06-05 2018-10-16 Apple Inc. Language input correction
US11025565B2 (en) 2015-06-07 2021-06-01 Apple Inc. Personalized prediction of responses for instant messaging
US10186254B2 (en) 2015-06-07 2019-01-22 Apple Inc. Context-based endpoint detection
US10255907B2 (en) 2015-06-07 2019-04-09 Apple Inc. Automatic accent detection using acoustic models
US11500672B2 (en) 2015-09-08 2022-11-15 Apple Inc. Distributed personal assistant
US10671428B2 (en) 2015-09-08 2020-06-02 Apple Inc. Distributed personal assistant
US10747498B2 (en) 2015-09-08 2020-08-18 Apple Inc. Zero latency digital assistant
US9697820B2 (en) 2015-09-24 2017-07-04 Apple Inc. Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks
US10366158B2 (en) 2015-09-29 2019-07-30 Apple Inc. Efficient word encoding for recurrent neural network language models
US11010550B2 (en) 2015-09-29 2021-05-18 Apple Inc. Unified language modeling framework for word prediction, auto-completion and auto-correction
US11587559B2 (en) 2015-09-30 2023-02-21 Apple Inc. Intelligent device identification
US11526368B2 (en) 2015-11-06 2022-12-13 Apple Inc. Intelligent automated assistant in a messaging environment
US10691473B2 (en) 2015-11-06 2020-06-23 Apple Inc. Intelligent automated assistant in a messaging environment
US10049668B2 (en) 2015-12-02 2018-08-14 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US10223066B2 (en) 2015-12-23 2019-03-05 Apple Inc. Proactive assistance based on dialog communication between devices
US10446143B2 (en) 2016-03-14 2019-10-15 Apple Inc. Identification of voice inputs providing credentials
US9934775B2 (en) 2016-05-26 2018-04-03 Apple Inc. Unit-selection text-to-speech synthesis based on predicted concatenation parameters
US9972304B2 (en) 2016-06-03 2018-05-15 Apple Inc. Privacy preserving distributed evaluation framework for embedded personalized systems
US10249300B2 (en) 2016-06-06 2019-04-02 Apple Inc. Intelligent list reading
US10049663B2 (en) 2016-06-08 2018-08-14 Apple, Inc. Intelligent automated assistant for media exploration
US11069347B2 (en) 2016-06-08 2021-07-20 Apple Inc. Intelligent automated assistant for media exploration
US10354011B2 (en) 2016-06-09 2019-07-16 Apple Inc. Intelligent automated assistant in a home environment
US10067938B2 (en) 2016-06-10 2018-09-04 Apple Inc. Multilingual word prediction
US10192552B2 (en) 2016-06-10 2019-01-29 Apple Inc. Digital assistant providing whispered speech
US10733993B2 (en) 2016-06-10 2020-08-04 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US10490187B2 (en) 2016-06-10 2019-11-26 Apple Inc. Digital assistant providing automated status report
US10509862B2 (en) 2016-06-10 2019-12-17 Apple Inc. Dynamic phrase expansion of language input
US11037565B2 (en) 2016-06-10 2021-06-15 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US10269345B2 (en) 2016-06-11 2019-04-23 Apple Inc. Intelligent task discovery
US10521466B2 (en) 2016-06-11 2019-12-31 Apple Inc. Data driven natural language event detection and classification
US10089072B2 (en) 2016-06-11 2018-10-02 Apple Inc. Intelligent device arbitration and control
US11152002B2 (en) 2016-06-11 2021-10-19 Apple Inc. Application integration with a digital assistant
US10297253B2 (en) 2016-06-11 2019-05-21 Apple Inc. Application integration with a digital assistant
US10553215B2 (en) 2016-09-23 2020-02-04 Apple Inc. Intelligent automated assistant
US10043516B2 (en) 2016-09-23 2018-08-07 Apple Inc. Intelligent automated assistant
US11281993B2 (en) 2016-12-05 2022-03-22 Apple Inc. Model and ensemble compression for metric learning
US10593346B2 (en) 2016-12-22 2020-03-17 Apple Inc. Rank-reduced token representation for automatic speech recognition
US10332518B2 (en) 2017-05-09 2019-06-25 Apple Inc. User interface for correcting recognition errors
US10755703B2 (en) 2017-05-11 2020-08-25 Apple Inc. Offline personal assistant
US10410637B2 (en) 2017-05-12 2019-09-10 Apple Inc. User-specific acoustic models
US11405466B2 (en) 2017-05-12 2022-08-02 Apple Inc. Synchronization and task delegation of a digital assistant
US10789945B2 (en) 2017-05-12 2020-09-29 Apple Inc. Low-latency intelligent automated assistant
US10791176B2 (en) 2017-05-12 2020-09-29 Apple Inc. Synchronization and task delegation of a digital assistant
US10482874B2 (en) 2017-05-15 2019-11-19 Apple Inc. Hierarchical belief states for digital assistants
US10810274B2 (en) 2017-05-15 2020-10-20 Apple Inc. Optimizing dialogue policy decisions for digital assistants using implicit feedback
US11217255B2 (en) 2017-05-16 2022-01-04 Apple Inc. Far-field extension for digital assistant services

Also Published As

Publication number Publication date
KR20010083177A (en) 2001-08-31
US20070244979A1 (en) 2007-10-18
US20040215728A1 (en) 2004-10-28
KR100773706B1 (en) 2007-11-09
TWI235583B (en) 2005-07-01
US6760754B1 (en) 2004-07-06
US7653697B2 (en) 2010-01-26
JP2001296899A (en) 2001-10-26

Similar Documents

Publication Publication Date Title
US7246151B2 (en) System, method and apparatus for communicating via sound messages and personal sound identifiers
US7043530B2 (en) System, method and apparatus for communicating via instant messaging
US20020034281A1 (en) System and method for communicating via instant messaging
AU2009251161B2 (en) Instant messaging terminal adapted for Wi-Fi access
AU2005200442B2 (en) Command based group SMS with mobile message receiver and server
US6539421B1 (en) Messaging application user interface
JP5033756B2 (en) Method and apparatus for creating and distributing real-time interactive content on wireless communication networks and the Internet
US8224901B2 (en) Method and apparatus for enhancing compound documents with questions and answers
CN101341482A (en) Voice initiated network operations
JP2009112000A6 (en) Method and apparatus for creating and distributing real-time interactive content on wireless communication networks and the Internet
US20030110211A1 (en) Method and system for communicating, creating and interacting with content between and among computing devices
US20110076995A1 (en) System and method for providing multimedia object linked to mobile communication network
JP4911076B2 (en) Karaoke equipment
JP2004288111A (en) E-mail device, mail server, and mailer program
Niklfeld et al. Device independent mobile multimodal user interfaces with the MONA Multimodal Presentation Server
JP2004318529A (en) Presence information management device and user&#39;s terminal
KR100789223B1 (en) Message string correspondence sound generation system
JP2002132277A (en) On-line karaoke sing-along machine system having function to deliver e-mail to karaoke sing-along machine terminal and to display the same and karaoke sing-along machine terminal
JP3942980B2 (en) Karaoke performance terminal that outputs a message via the user's mobile phone
KR20070078210A (en) Method and device for creating and transmitting message using common message
JP2003263173A (en) Method for musical piece distribution service, program for executing the method, information recording medium with the program recorded thereon, information terminal, management device, and musical piece distribution system
JP2004030319A (en) Method for distributing message
JP2004032431A (en) Distribution method for e-mail
KR100595880B1 (en) Melody Letter Message Service System Of Mobile Communication Terminal
KR20020017864A (en) Method for transmitting memo note by using messenger

Legal Events

Date Code Title Description
STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 12