US20080031475A1 - Personal audio assistant device and method

Personal audio assistant device and method

Info

Publication number
US20080031475A1
US20080031475A1
Authority
US
United States
Prior art keywords
user
system
audio
audio content
personal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/774,965
Inventor
Steven Goldstein
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dm Staton Family LP
Staton Techiya LLC
Original Assignee
Personics Holdings Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US80676906P (provisional)
Priority to US11/774,965
Application filed by Personics Holdings Inc
Assigned to PERSONICS HOLDINGS INC. Assignment of assignors interest (see document for details). Assignors: GOLDSTEIN, STEVEN WAYNE
Publication of US20080031475A1
Assigned to PERSONICS HOLDINGS INC. Assignment of assignors interest (see document for details). Assignors: GOLDSTEIN, STEVEN
Assigned to STATON FAMILY INVESTMENTS, LTD. Security agreement. Assignors: PERSONICS HOLDINGS, INC.
Priority claimed from US14/109,954 (US10009677B2)
Assigned to PERSONICS HOLDINGS, LLC. Assignment of assignors interest (see document for details). Assignors: PERSONICS HOLDINGS, INC.
Assigned to DM STATON FAMILY LIMITED PARTNERSHIP (as assignee of Maria B. Staton). Security interest (see document for details). Assignors: PERSONICS HOLDINGS, LLC
Assigned to DM STATON FAMILY LIMITED PARTNERSHIP. Assignment of assignors interest (see document for details). Assignors: PERSONICS HOLDINGS, INC.; PERSONICS HOLDINGS, LLC
Assigned to STATON TECHIYA, LLC. Assignment of assignors interest (see document for details). Assignors: DM STATON FAMILY LIMITED PARTNERSHIP
Application status: Abandoned


Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00: Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60: Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16: Sound input; Sound output
    • G06F3/165: Management of the audio stream, e.g. setting of volume, audio stream path
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16: Sound input; Sound output
    • G06F3/167: Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00: Speech synthesis; Text to speech systems
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/26: Speech to text systems
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00: Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/06: Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00
    • G10L25/48: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M1/00: Substation equipment, e.g. for use by subscribers; Analogous equipment at exchanges
    • H04M1/02: Constructional features of telephone sets
    • H04M1/04: Supports for telephone transmitters or receivers
    • H04M1/05: Supports for telephone transmitters or receivers adapted for use on head, throat, or breast
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M1/00: Substation equipment, e.g. for use by subscribers; Analogous equipment at exchanges
    • H04M1/247: Telephone sets including user guidance or features selection means facilitating their use; Fixed telephone terminals for accessing a variety of communication services via the PSTN network
    • H04M1/2471: Configurable and interactive telephone terminals with subscriber controlled features modifications, e.g. with ADSI capability [Analog Display Services Interface]
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00: Details of transducers, loudspeakers or microphones
    • H04R1/10: Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • H04R1/1016: Earpieces of the intra-aural type
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00: Circuits for transducers, loudspeakers or microphones
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W4/00: Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/80: Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M1/00: Substation equipment, e.g. for use by subscribers; Analogous equipment at exchanges
    • H04M1/60: Substation equipment, e.g. for use by subscribers; Analogous equipment at exchanges including speech amplifiers
    • H04M1/6033: Substation equipment, e.g. for use by subscribers; Analogous equipment at exchanges including speech amplifiers for providing handsfree use or a loudspeaker mode in telephone sets
    • H04M1/6041: Portable telephones adapted for handsfree use
    • H04M1/6058: Portable telephones adapted for handsfree use involving the use of a headset accessory device connected to the portable telephone
    • H04M1/6066: Portable telephones adapted for handsfree use involving the use of a headset accessory device connected to the portable telephone including a wireless connection
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M1/00: Substation equipment, e.g. for use by subscribers; Analogous equipment at exchanges
    • H04M1/72: Substation extension arrangements; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selecting
    • H04M1/725: Cordless telephones
    • H04M1/72519: Portable communication terminals with improved user interface to control a main telephone operation mode or to indicate the communication status
    • H04M1/72522: With means for supporting locally a plurality of applications to increase the functionality
    • H04M1/72558: With means for supporting locally a plurality of applications to increase the functionality for playing back music files

Abstract

At least one exemplary embodiment is directed to an earpiece comprising: an ambient microphone; an ear canal microphone; an ear canal receiver; a sealing section; a logic circuit; a communication module; a memory storage unit; and a user interaction element, where the user interaction element is configured to send a play command to the logic circuit when activated by a user, and where the logic circuit reads registration parameters stored on the memory storage unit and sends audio content to the ear canal receiver according to the registration parameters.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application claims the priority benefit of U.S. Provisional Application No. 60/806,769, filed 8 Jul. 2006, under 35 U.S.C. §119(e), which is incorporated herein by reference in its entirety.
  • FIELD OF THE INVENTION
  • The invention relates in general to methods and devices for the storage and recall of audio content via an earpiece, and in particular, though not exclusively, for the storage and playing of music or verbal content on a system that is built into a headphone.
  • BACKGROUND OF THE INVENTION
  • Present audio content playing devices are separate from the headphone system that normally contains the speakers (also referred to as receivers). The reason for this has typically been that audio content was stored on disks that required a separate playing system. However, even with the advent of storing audio content on non-disk RAM (Random Access Memory) storage systems, the audio content player has remained separate from the earpiece system (e.g., plug-in headphones or earbuds). Combining the capacity for audio download and playback in an earpiece system is not obvious over the related art, since the user interaction system (e.g., play button, keyboard) does not readily appear compatible with the size of an earpiece device and the difficulty of user interaction.
  • Additionally, no system currently exists for registration and download of audio content into an earpiece.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Embodiments of the present invention will become apparent from the following detailed description, taken in conjunction with the drawings in which:
  • FIG. 1 illustrates the connection between an earpiece device (103 and 104) and a communication network;
  • FIG. 2 illustrates at least one exemplary embodiment where earpiece devices share information with other earpiece devices within range (e.g., GPS location and identity);
  • FIG. 3 illustrates an example of various elements that can be part of an earpiece device in accordance with at least one exemplary embodiment;
  • FIG. 4 illustrates an example of a communication system in accordance with at least one exemplary embodiment that a user can use to register via his/her computer;
  • FIG. 5A illustrates an earpiece that can store and download audio content in accordance with at least one exemplary embodiment;
  • FIG. 5B illustrates a block diagram of the earpiece of FIG. 5A; and
  • FIG. 6 illustrates a user interface for setting the parameters of the Personal Audio Assistant.
  • DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS OF THE PRESENT INVENTION
  • The following description of exemplary embodiment(s) is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.
  • Processes, methods, materials, and devices known by one of ordinary skill in the relevant arts may not be discussed in detail but are intended to be part of the enabling discussion where appropriate, for example, the generation and use of transfer functions.
  • Notice that similar reference numerals and letters refer to similar items in the following figures; thus, once an item is defined in one figure, it may not be discussed again for following figures.
  • Note that herein when referring to correcting or corrections of an error (e.g., noise), a reduction of the error and/or a correction of the error is intended.
  • SUMMARY OF EXEMPLARY EMBODIMENTS
  • At least one exemplary embodiment is directed to a system for Personalized Services delivered to a Personal Audio Assistant incorporated within an earpiece (e.g., earbuds, headphones). Personalized Services include content such as music files (for preview or purchase) related to a user's preferences, reminders from personal scheduling software, delivery and text-to-speech or speech-to-text processing of email, marketing messages, delivery and text-to-speech of stock market information, medication reminders, foreign language instruction, academic instruction, time and date information, speech-to-speech delivery, instructions from a GPS system, and others. A Personal Audio Assistant can be an audio playback platform for providing the user with Personalized Services.
  • At least one exemplary embodiment is directed to a Personal Audio Assistant system that is included as part of an earpiece (e.g., Headphone system). The Personal Audio Assistant is capable of digital audio playback, mitigating the need to carry a personal music player. Furthermore, a subscription-based service provides audio content to the user through the Personal Audio Assistant. The type of audio content, which is automatically provided to the user, is based on the user's preferences, which are obtained through a registration process.
  • The audio content, which is seamlessly downloaded to the Personal Audio Assistant in the background, is managed from a Server system and is only available on the Personal Audio Assistant for a predetermined period of time or for a fixed number of playback counts. However, the user can purchase any music file or electronic book directly from the Personal Audio Assistant with a simple one-click control interface, storing the purchased audio content on the Personal Audio Assistant as well as storing the content permanently in a user storage lock-box location on the Server system.
  • The system provides for audio content to be new and “fresh” each time the user auditions the content. As such, the content is typically auditioned in a first-in:first-out scenario. In one such example, the user has turned on the Personal Audio Assistant at 8:00 am and by 10:00 am has auditioned two hours of content that was created for the user as a manifestation of the user's preferences of genre, artist, demographics, day of the week, time of day, and purchase history. The system also provides for the elimination of a particular song or playlist in situ.
  • As the user's Listening History Envelope is updated based on experience, subsequent downloads will only contain content incorporating these revised preferences. The Personal Audio Assistant provides ample memory, permitting hours of uninterrupted playback without the need to download additional content from the server. When in need, the Personal Audio Assistant automatically interrogates various communication platforms as it searches for connections. Once a connection is made, the Listening History Envelope file is uploaded to the server, and a new set of personalized playlist content is downloaded to the Personal Audio Assistant. Accordingly, as the Personal Audio Assistant content is auditioned and thus depleted, the communications system provides for constant replenishment.
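  • The first-in:first-out audition and replenishment cycle described above can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation: the class name `ContentQueue`, its methods, and the low-water-mark threshold that triggers replenishment are all assumptions introduced for clarity.

```python
from collections import deque

# Hypothetical sketch of the audition-and-replenish cycle: content is
# auditioned first-in:first-out, auditioned items feed the Listening
# History Envelope, and the queue asks for more content when it runs low.

class ContentQueue:
    def __init__(self, low_water_mark=2):
        self.queue = deque()          # downloaded, not yet auditioned
        self.history = []             # envelope entries, uploaded on next connect
        self.low_water_mark = low_water_mark

    def download(self, tracks):
        """Replenish the queue with server-selected personalized content."""
        self.queue.extend(tracks)

    def audition_next(self):
        """Play the oldest un-auditioned item (first in, first out)."""
        track = self.queue.popleft()
        self.history.append(track)
        return track

    def needs_replenishment(self):
        """True when depleted content should be replaced on the next connection."""
        return len(self.queue) <= self.low_water_mark

q = ContentQueue()
q.download(["song A", "song B", "song C"])
first = q.audition_next()   # the oldest content plays first
```

In this sketch, depletion and replenishment are decoupled: the device keeps auditioning from the front of the queue while a background connection, whenever one is found, uploads `history` and refills `queue`.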
  • In another embodiment, the Personal Audio Assistant also provides for a new set of business solutions to be offered to the music industry. As the personalized audio content is only available for audition for a limited period of time, and may not be sent to the user again for weeks to months, the user's purchasing behavior can be expected to be spontaneous. The basic model of “try before you buy” is the expected outcome. In another iteration, the distributor of the music can choose to offer discounts, which can be time-sensitive or quantity-sensitive in nature, in effect promoting greater purchase activity from the user.
  • In another iteration, while in audition, a user can wish to place desired content in a hold status. The hold status forms the basis of a “wish list,” allowing the user to hold audio content for future consideration while it is being auditioned. This content resides in the memory of the Personal Audio Assistant for a defined period, after which it is automatically erased, or the user can erase it manually. The selected content will also appear on the user's computer via a URL address; there it resides on the server, ready for audition or purchase and download.
  • The system is designed to operate as simply as possible. Using a single button with multiple contacts, the interface allows the user to purchase, delete, skip to the next item, add to the wish list, and even control listening level.
  • In another iteration, the user can download their own music to the Personal Audio Assistant for audition. The Personal Audio Assistant system is capable of text-to-speech processing and can interface with personal scheduling software to provide auditory schedule reminders for the user. Auditory reminders relating to the user's medication schedule are also generated by the system.
  • At least one exemplary embodiment includes input Acoustic Transducers (microphones) for capturing the user's speech as well as Environmental Audio. In further embodiments, stereo input Acoustic Transducers capture Environmental Audio and mix it into the audio signal path, presenting the ambient sound field to the user and mitigating the need to remove the Headphone apparatus for normal conversation.
  • Additional exemplary embodiments are directed to various scenarios for the delivery and consumption of audio content. The Personal Audio Assistant can store and play back audio content in compressed digital audio formats. In one embodiment, the storage memory of the Personal Audio Assistant is completely closed to the end-user and controlled from the Server. This allows for audio content to be distributed on a temporary basis, as part of a subscription service. In another iteration of the present invention, the storage memory of the Personal Audio Assistant is not completely closed to the end-user, allowing the user to transfer audio content to the Personal Audio Assistant from any capable device such as a Personal Computer or a Personal Music Player.
  • In at least one exemplary embodiment the Personal Audio Assistant automatically scans for other Bluetooth-enabled audio playback systems and notifies the user that additional devices are available. These additional devices can include a Bluetooth video system, television system, personal video player, video camera, cell phone, another Personal Audio Assistant and others.
  • In another iteration, the Personal Audio Assistant can be directly connected to a Terrestrial Radio receiver, or have such a receiver built into the system.
  • In another exemplary embodiment, a technique known as Sonification can be used to convey statistical or other numerical information to a headphone. For example, the user would be able to receive information about the growth or decline of a particular stock, groups of stocks, or even sectors of the markets through the Personal Audio Assistant. Many different components can be altered to change the user's perception of the sound, and in turn, their perception of the underlying information being portrayed. An increase or decrease in some level of share price or trading volume can be presented to the user. A stock market price can be portrayed by an increase in the frequency of a sine tone as the stock price rises, and a decline in frequency as it falls. To allow the user to determine that more than one stock is being portrayed, different timbres and spatial locations might be used for the different stocks, or they can be played to the user from different points in space, for example, through different sides of their headphones. The user can act upon this auditory information and use the controls built into the headphone to either purchase or sell a particular stock position.
  • Furthermore, specific sonification techniques and preferences can be presented to the user as “themes” from which the user can select. For example, one theme might auralize the current trading price of one stock with an ambient sine tone in the left ear, the price of another stock in the right ear, their respective trade volumes as perceived elevation using personalized head-related transfer function binauralization, and the current global index or other market indicator as the combined perceptual loudness of both tones. Such a scheme affords ambient auditory display, in this example, of five dimensions of financial data without compromising the user's ability to converse or work on other tasks. In another embodiment, the system affords users the ability to customize themes to their liking and to rapidly switch among them using simple speech commands. Additionally, the user can search the web from voice commands and receive results via a text-to-speech synthesizer.
  • In yet another exemplary embodiment, the PAA functions as a dictation device for medical professionals, for dictating clinical information to a patient's medical record or writing prescriptions for medication or devices. Conversely, the PAA can perform text-to-speech, allowing the clinician to audition information from a medical record rather than reading it, which can save considerable time preparing for clinician interaction with the patient.
  • In another iteration, the Personal Audio Assistant can function as a tool to locate other users of Personal Audio Assistants who share common interests, or who are searching for particular attributes of other users. A first user stores specific personal information in the Public Data memory of the Personal Audio Assistant, for example, schools attended, marital status, or profession, or the first user can be in search of another user with these attributes. When a second user of a Personal Audio Assistant comes within communication range of the first user, the individual Personal Audio Assistants communicate with each other and access the personal information stored in each of their respective Public Data memories to ascertain whether these users have common interests. If a match occurs, each unit can activate audible and visual indicators announcing that a match has been made, and each user can start a dialog either physically or electronically via the environmental microphones.
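  • The Public Data matching described above amounts to intersecting one user's search criteria with another user's published attributes. A minimal sketch, with illustrative attribute names that are assumptions rather than fields defined by the patent:

```python
# Hypothetical sketch of Public Data matching between two units in range:
# each user publishes a chosen subset of their registration data and may
# also carry a set of attributes they are searching for; a match occurs
# when the search criteria intersect the other unit's published data.

def find_matches(searched_for, other_public_data):
    """Return the attributes the other user publishes that this
    user is searching for (empty set means no match)."""
    return set(searched_for) & set(other_public_data)

# First user searches for these attributes in nearby users:
user1_search = {("school", "State University"), ("profession", "engineer")}
# Second user has chosen to publish these attributes:
user2_public = {("school", "State University"), ("marital_status", "single")}

common = find_matches(user1_search, user2_public)
match_made = bool(common)   # when True, trigger the audible/visual indicators
```

In a real exchange the comparison would presumably run in both directions over the short-range link, with each unit evaluating the other's published subset against its own criteria.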
  • Examples of Terminology
  • Note that the following non-limiting examples of terminology are solely intended to aid in understanding various exemplary embodiments and are not intended to be restrictive of the meaning of terms nor all-inclusive.
  • Acoustic Isolation Cushion: An “Acoustic Isolation Cushion” shall be defined as a circum-aural or intra-aural device that provides acoustic isolation from Environmental Noise. Acoustic Isolation Cushions can be included as part of a Headphones system, allowing the output of the acoustical transducers to reach the ear unimpeded, but still providing acoustic isolation from Environmental Noise.
  • Acoustic Transducer: An “Acoustic Transducer” shall be defined as a device that converts sound pressure level variations into electronic voltages or vice versa. Acoustic Transducers include microphones, loudspeakers, Headphones, and other devices.
  • Audio Playback: “Audio Playback” shall be defined as the auditory stimuli generated when Playback Hardware reproduces audio content (music, speech, etc) for a listener or a group of listeners listening to Headphones.
  • Audition: “Audition” shall be defined as the process of detecting sound stimulus using the human auditory system. This includes the physical, psychophysical, psychoacoustic, and cognitive processes associated with the perception of acoustic stimuli.
  • Client: A “Client” shall be defined as a system that communicates with a Server, usually over a communications network, and directly interfaces with a user. Examples of Client systems include personal computers and mobile phones.
  • Communications Port: A “Communications Port” shall be defined as an interface port supporting bidirectional transmission protocols (TCP/IP, USB, IEEE 1394, IEEE 802.11, Bluetooth, A2DP, GSM, CDMA, or others) via a communications network (e.g., the Internet, cellular networks).
  • Control Data: “Control Data” shall be defined as information that dictates the operating parameters for a system or a set of systems.
  • Earcon: An “Earcon” shall be defined as a personalized audio signal that informs the user of a pending event, typically inserted in advance of the upcoming audio content.
  • Ear Mold Style: “Ear Mold Style” shall be defined as a description of the form factor for an intra-aural device (e.g., hearing aids). Ear Mold Styles include completely in the canal (CIC), in the canal (ITC), in the ear (ITE), and behind the ear (BTE).
  • Environmental Audio: “Environmental Audio” shall be defined as auditory stimuli of interest to the user in the environment where the user is present. Environmental Audio includes speech and music in the environment.
  • Environmental Noise: “Environmental Noise” shall be defined as the auditory stimuli inherent to a particular environment where the user is present and which the user does not wish to audition. The drone of highway traffic is a common example of Environmental Noise. Note that Environmental Noise and Audio Playback are two distinct types of auditory stimuli. Environmental Noise does not typically include Music or other audio content.
  • E-Tailing System: An “E-tailing System” shall be defined as a web-based solution through which a user can search, preview and acquire some available product or service. Short for “electronic retailing,” E-tailing is the offering of retail goods or services on the Internet. Used in Internet discussions as early as 1995, the term E-tailing seems an almost inevitable addition to e-mail, e-business, and e-commerce. E-tailing is synonymous with business-to-consumer (B2C) transactions. Accordingly, the user can be required to register by submitting personal information, and the user can be required to provide payment in the form of Currency or other consideration in exchange for the product or service. Optionally, a sponsor can bear the cost of compensating the E-tailer, while the user would receive the product or service.
  • Generic HRTF: A “Generic HRTF” shall be defined as a set of HRTF data that is intended for use by any Member. A Generic HRTF can provide a generalized model of the parts of the human anatomy relevant to audition and localization, or simply a model of the anatomy of an individual other than the Member. The application of Generic HRTF data to Audio Content provides the least convincing Spatial Image for the Member, relative to Semi-Personalized and Personalized HRTF data. Generic HRTF data is generally retrieved from publicly available databases such as the CIPIC HRTF database.
  • Headphones: “Headphones” (also known as earphones, earbuds, stereophones, headsets, canalphones, or by the slang term “cans”) are a pair of transducers that receive an electrical signal from a media player, communication receiver, or transceiver, and use speakers placed in close proximity to the ears (hence the name earphone) to convert the signal into audible sound waves. Headphones are intended as personal listening devices that are placed either circum-aurally or intra-aurally according to one of the Ear Mold Styles, as well as other devices that meet the above definition, such as advanced eyewear that includes Acoustic Transducers (e.g., Dataview). Headphones can also include stereo input Acoustic Transducers (microphones) included as part of the Ear Mold Style form factor.
  • HRTF: “HRTF” is an acronym for head-related transfer function—a set of data that describes the acoustical reflection characteristics of an individual's anatomy relevant to audition. Although in practice they are distinct (but directly related), this definition of HRTF encompasses the head-related impulse response (HRIR) or any other set of data that describes some aspects of an individual's anatomy relevant to audition.
  • Informed Consent: “Informed Consent” shall be defined as a legal condition whereby a person can be said to have given formal consent based upon an appreciation and understanding of the facts and implications associated with a specific action. For minors or individuals without complete possession of their faculties, Informed Consent includes the formal consent of a parent or guardian.
  • Listening History Envelope: “Listening History Envelope” shall be defined as a record of a user's listening habits over time. The envelope includes system data such as the time the system was turned on and off, the time the system presents content, time stamps of the content being auditioned, content that is skipped, deleted, played multiple times, or saved in the Wish List, and the time between listening sessions.
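  • One way the Listening History Envelope fields listed above could be structured as a record is sketched below; all class and field names are illustrative assumptions, not a format defined by the patent.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative record layout for the Listening History Envelope:
# power-on/off times, plus one timestamped entry per content event
# (played, skipped, deleted, replayed, or saved to the Wish List).

@dataclass
class EnvelopeEntry:
    content_id: str
    timestamp: float   # time stamp of the content being auditioned
    action: str        # "played", "skipped", "deleted", "replayed", "wish_list"

@dataclass
class ListeningHistoryEnvelope:
    power_on: float
    power_off: float = 0.0
    entries: List[EnvelopeEntry] = field(default_factory=list)

    def record(self, content_id, timestamp, action):
        self.entries.append(EnvelopeEntry(content_id, timestamp, action))

# A short session: the system is turned on at 8:00 and two events occur.
env = ListeningHistoryEnvelope(power_on=8.0)
env.record("track-1", 8.05, "played")
env.record("track-2", 8.10, "skipped")
```

On the next connection, a file like this would be uploaded to the server so the next batch of downloads reflects the revised preferences.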
  • Music: “Music” shall be defined as a form of expression in the medium of time using the structures of tones and silence to create complex forms in time through construction of patterns and combinations of natural stimuli, principally sound. Music can also be referred to as audio media or audio content.
  • Playback Hardware: Any device used to play previously recorded or live streaming audio. Playback Hardware includes Headphones, loudspeakers, personal music players, mobile phones, and other devices.
  • Personal Audio Assistant: A “Personal Audio Assistant” shall be defined as a portable system capable of interfacing with a communications network, directly or through an intermediate, to transmit and receive audio signals and other data.
  • Personal Computer: “Personal Computer” shall be defined as any piece of hardware that is an open system capable of compiling, linking, and executing a programming language (such as C/C++, Java, etc.).
  • Personal Music Player: “Personal Music Player” shall be defined as any portable device that implements perceptual audio decoder technology but is a closed system in that users are not generally allowed or able to write software for the device.
  • Personalized HRTF: A “Personalized HRTF” shall be defined as a set of HRTF data that is measured for a specific Member and unique to that Member. The application of Personalized HRTF data to Audio Content creates, by far, the most convincing Spatial Image for the Member (Begault et al. 2001; D. Zotkin, R. Duraiswami, and L. Davis 2002).
  • Personalized Services: “Personalized Services” shall be defined as services customized to better meet the needs of an individual. Personalized Services include media content (for preview or purchase) related to a user's preferences, reminders from personal scheduling software, delivery and text-to-speech processing of email, marketing messages, delivery and text-to-speech processing of stock market information, medication reminders, foreign language instruction, real-time foreign language translation, academic instruction, time and date information, and others.
  • Public Data: “Public Data” shall be defined as data which contains specific and personal information about the registered user of the Personal Audio Assistant. The registered user chooses which portions of their complete Registration Process data they wish to include in this subset. This data is distributed to other users who have compliant devices, thus allowing those users to know specific details about the registered user.
  • Registration Process: “Registration Process” includes the acquisition of the user's preferences via a web page. Typically, the process would capture the following items: age, demographics, email, gender, Relative Audiogram, Personal Preferences, banking information, credit card information, wake-up and sleep times, music preferences by genre and artist, preferences for writers and authors, desire to receive advertising, turn-on listening level, equalization, email preferences, and parental control setup, as well as other user-controlled settings.
  • Relative Audiogram: A “Relative Audiogram” shall be defined as a measured set of data describing a specific individual's hearing threshold level as a function of frequency. A Relative Audiogram is only an approximate Audiogram, leaving more complete Audiogram analysis to qualified audiologists.
  • Semi-Personalized HRTF: A “Semi-Personalized HRTF” shall be defined as a set of HRTF data that is selected from a database of known HRTF data as the “best-fit” for a specific user. Semi-Personalized HRTF data is not necessarily unique to one user; however, interpolation and matching algorithms can be employed to modify HRTF data from the database to improve the accuracy of a Semi-Personalized HRTF. The application of Semi-Personalized HRTF data to Audio Content provides a Spatial Image that is improved compared to that of Generic HRTF data, but less effective than that of Personalized HRTF data. The embodiments within speak to a variety of methods for determining the best-fit HRTF data for a particular Member including anthropometrical measurements extracted from photographs and deduction.
  • Server: A “Server” shall be defined as a system that controls centrally held data and communicates with Clients.
  • Sonification: “Sonification” shall be defined as the use of non-speech audio to convey information or to aurally perceptualize non-acoustic data (auralize). Due to a variety of phenomena involving human cognition, certain types of information can be better or more efficiently conveyed using auditory means than, for example, visual means.
  • Exemplary Embodiments
  • FIG. 1 illustrates the connection between an earpiece device (103 and 104) and a communication network (101), which can be operatively connected (via wired or wireless link) to a server system (100) and/or an e-mail server (105). Additionally, a radio signal (e.g., satellite radio) can be input into the earpiece 500 via a communication module (e.g., Bluetooth wireless module 515).
  • FIG. 2 illustrates at least one exemplary embodiment where earpiece devices share information with other earpiece devices within range (e.g., GPS location and identity). For example, multiple users (e.g., 202, 203, 204, and 206) can send signals to each individual earpiece (e.g., 500) when in range (e.g., via a wireless connection 205) or to a mobile audio communications device 200 via a wireless connection (201) with each earpiece (500). Additionally, information (e.g., audio content, a software download) can be sent via a client's computer 207 to each earpiece, either directly (e.g., 205) or via 200. For example, audio content can be retrieved on a user's computer and sent to the earpieces that have authorization to use it.
  • FIG. 3 illustrates an example of various elements that can be part of an earpiece device in accordance with at least one exemplary embodiment. The earpiece can include all or some of the elements illustrated in FIG. 3. For example the logic circuit 570 or the operatively connected memory storage device 585, can include spatial enhancement software 329, a DSP code 330, a speech synthesis and recognition system 311, and a digital timer 312. Additional elements can be connected to the logic circuit 570 as needed, for example a software communication interface 307 (e.g., wireless module 515), data port interface 306, audio input buffers 300 connected to digital audio input 302 and/or analog audio input converted to digital via an ADC 301, environmental audio input acoustic transducer(s) 321 converted to digital via an ADC 316, user control 324, digital audio output 328, output acoustic transducers 319, display systems 318, communication buffers 325 as well as other electronic devices as known by one of ordinary skill in the relevant arts.
  • FIG. 4 illustrates an example of a communication system in accordance with at least one exemplary embodiment that a user can use to register via his/her computer 419, via a communication network 400 (e.g., an internet connection) connected to the various database and registration systems illustrated and labeled in FIG. 4.
  • FIG. 5A illustrates an earpiece that can store and download audio content in accordance with at least one exemplary embodiment. The earpiece 500 can include a first user interaction element 530 (e.g., a button) that can be used to turn the earpiece 500 on or, if it is already on, to activate an audio play command to start playing saved audio content. The earpiece 500 can also include a second user interaction element 550 (e.g., a slide control) that can be used, for example, to control the volume. The earpiece can also include recharge ports 570, which can accept two wires of differing voltage to recharge any batteries in the earpiece 500. The earpiece can include an ambient microphone 520 and an optional communication antenna 510 that, if needed, can aid communication between the earpiece 500 and a communication network.
  • FIG. 5B illustrates a block diagram of the earpiece of FIG. 5A, illustrating the first user interaction element 530, the ambient microphone (AM) 520, that can be used to pick up ambient audio content, an ear canal microphone (ECM) 570 that can pick up audio in the ear canal region, an ear canal receiver (ECR) 580 that can direct audio content to the ear drum, all of which can be connected operatively to a logic circuit 570. A memory storage device can be operatively connected to the logic circuit (LC) 570, and can store data such as registration, preference, and audio content data. The optional communication antenna 510 can be connected to a communication module (e.g., wireless module 515), and can receive or transmit information 560 to a communication network.
  • FIG. 6 illustrates a user interface for setting the parameters stored in the memory storage device 585. For example a user can use his/her computer 419 to communicate with a server 401 (e.g., via a communication network 400) to start the user's registration (e.g., with an audio content provider). The registration information can then be transmitted 600 to set the stored parameters in the memory storage device 585 of the earpiece 500. Additionally a requested (e.g., bought) audio content can be downloaded 610 into the memory storage device 585 of the earpiece 500.
  • At least one exemplary embodiment is directed to an earpiece comprising: an ambient microphone; an ear canal microphone; an ear canal receiver; a sealing section; a logic circuit; a communication module; a memory storage unit; and a user interaction element, where the user interaction element is configured to send a play command to the logic circuit when activated by a user, whereupon the logic circuit reads registration parameters stored on the memory storage unit and sends audio content to the ear canal receiver according to the registration parameters.
  • In at least one exemplary embodiment the audio content is stored in the memory storage unit, and the communications module can be a wireless communications module. Additionally, the earpiece can include a second user interaction element configured to alter the volume of the audio content emitted from the ear canal receiver.
  • Upon a play command being received by the logic circuit, the logic circuit can check registration parameters stored in the memory storage device. For example, the registration parameters can include whether the audio content is a sample or a fully purchased copy, the allowed number of times the audio content can be played, and a counter value that keeps track of the number of times the audio content has been played.
  • The earpiece can send an auditory warning to be emitted by the ear canal receiver when the counter value is greater than or equal to the allowed number of plays, in which case the logic circuit does not send the audio content to the ear canal receiver.
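The play-command check described above can be sketched as follows. This is a minimal illustration only; the names `RegistrationParameters` and `handle_play_command` are hypothetical and not part of the disclosed system.

```python
from dataclasses import dataclass

@dataclass
class RegistrationParameters:
    is_sample: bool      # sample content vs. fully purchased content
    allowed_plays: int   # allowed number of plays for sample content
    play_count: int = 0  # counter tracking plays so far

def handle_play_command(params: RegistrationParameters) -> str:
    """Return the action the logic circuit would take on a play command."""
    if not params.is_sample:
        return "play"                 # purchased content: no play limit
    if params.play_count >= params.allowed_plays:
        return "auditory_warning"     # limit reached: warn, do not play
    params.play_count += 1
    return "play"
```

For example, sample content registered with `allowed_plays=2` would play twice and then produce only the auditory warning on the third attempt.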
  • Further Exemplary Embodiments
  • At least one exemplary embodiment is directed to a system for the delivery of Personalized Services to Personal Audio Assistants, the system comprising: a Personal Audio Assistant system for presenting Personalized Services to the user as Audio Playback; a Server system for user registration, Personalized Service management, and communication; a Registration Process for collecting detailed registration information from users, including the information necessary for creating Personalized Services; a communications protocol (TCP/IP, USB, IEEE 1394, IEEE 802.11, Bluetooth, A2DP, GSM, CDMA, or other); and a communications network (e.g., the Internet, cellular networks) connecting the Personal Audio Assistant to the Server or connecting the Personal Audio Assistant to other Personal Audio Assistants (peer-to-peer behavior).
  • In at least one exemplary embodiment a Personal Computer acts as an intermediary, connecting to the Server system over a communications network and connecting to the Personal Audio Assistant over a local connection. At least one exemplary embodiment includes a Personal Hearing Damage Intervention System (e.g., USPTO—60/805985—Goldstein).
  • In at least one exemplary embodiment a Personal Audio Assistant system is included as part of a Headphone system, the system comprising: a Communications Port supporting a communications protocol enabling communication with the Server system, peer devices, and other capable devices; a non-volatile program memory storage system for storing Control Data dictating system behavior; a data memory storage system for storing data and audio content; an analog audio input/output and corresponding ADC/DAC; a digital audio input/output and a digital audio signal path; a user control system allowing the user to adjust the level of the audio output and control the behavior of the system; a user control system allowing the user to purchase the content being auditioned in real time; a user control system allowing the user to delete, fast-forward, scan, advance, and control the output level of the data stored in memory as well as of new streaming data, emails, and reminders; a display system for presenting information to the user(s) visually using any method familiar to those skilled in the art (LED, LCD, or other); a display system for presenting information to the user(s) aurally (e.g., using Earcons and other sound files); a speech synthesis system for converting text to speech and generating speech signals; a speech recognition system for converting speech to text, allowing the user to respond to and send emails and to interface with the control language so as to provide navigational commands; a digital timer system; a power supply system in the form of a battery; a unique identification number for each Personal Audio Assistant; Input Acoustic Transducers; Output Acoustic Transducers; an audio amplification system; Acoustic Isolation Cushions conforming to one of the Ear Mold Styles (CIC, ITC, ITE, or BTE; see definitions) and other elements common to Headphone systems; a digital signal processor (DSP) system; and a CODEC processor capable of improving the perceptual sound quality of the content to be auditioned while governed by delivering the correct SPL dose.
  • In at least one exemplary embodiment the system is independent of a Headphone array or can be included and embedded as part of a Personal Computer system, a Personal Music Player system, a personal monitoring system, an automotive audio system, a home audio system, an avionics audio system, a personal video system, a mobile cell phone system, a personal digital assistant system, a standalone accessory, or an advanced eye-wear system with acoustical transducers.
  • In at least one exemplary embodiment the various processing needed to derive the intended functions are distributed among any combination of a Server system, a Personal Computer system, a Personal Music Player system, a personal monitoring system, an automotive audio system, a home audio system, an avionics audio system, a personal video system, a mobile cell phone system, a personal digital assistant system, a standalone accessory, or an advanced eye-wear system with acoustical transducers.
  • In at least one exemplary embodiment the Personal Audio Assistant system can exchange audio signals with a mobile phone via the Communications Port, allowing the Personal Audio Assistant to function as a mobile phone accessory.
  • In at least one exemplary embodiment a communications buffer is included. The communications buffer uploads stored content (e.g., the Listening History Envelope) and stores incoming transmissions (e.g., music, electronic books, and updates to the firmware or operating system) from the Communications Port; the contents of the communications buffer are transmitted whenever a network connection becomes available. At least one exemplary embodiment includes perceptual audio codec decoding technology in the DSP, enabling the storage and playback of compressed digital audio formats (e.g., MP3, AAC, FLAC, etc.). At least one exemplary embodiment is compliant and compatible with DRM, FairPlay, and other forms of digital content governance.
  • At least one exemplary embodiment includes a user control system for selecting and playing back audio content stored in memory that operates using any combination of the following methods: a button or tactile interface which, upon auditioning a song, can be pressed to order the content; a button, tactile, and/or voice-controlled interface which, when pressed or commanded once, activates playback of short audio clips or audio thumbnails of the audio content stored in memory; when the button is pressed again during audio thumbnail playback, the current audio content selection is played in its entirety (the behavior of this interface is similar to the “scan” button interface common in FM/AM radio devices); a button, tactile, and/or voice-controlled interface that, when pressed or commanded, skips to the next piece of audio content, which is selected randomly from all available audio content that has a play count equal to or less than the play count of the piece of audio content currently playing (the behavior of this interface is similar to the “shuffle” behavior found in some personal music players); an interface for browsing audio content storage devices familiar to those skilled in the art; and a process to allow for increased data memory storage capacity for storing audio content.
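The modified “shuffle” selection above (random choice restricted to tracks played no more often than the current one) can be sketched as follows; the track representation is an illustrative assumption.

```python
import random

def next_shuffle_track(library, current_play_count):
    """Pick a random track whose play count is <= the current track's
    play count, mirroring the modified 'shuffle' behavior described above."""
    candidates = [t for t in library if t["plays"] <= current_play_count]
    return random.choice(candidates) if candidates else None
```

This biases playback toward less-heard content, since frequently played tracks are never candidates when a rarely played track is current.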
  • In at least one exemplary embodiment the contents of the data memory are encrypted and controlled by the Server system only, prohibiting the end-user from loading unauthorized audio content into the data memory. In another exemplary embodiment the contents of the data memory can be manipulated by the end-user, allowing the user to transfer audio content to the Personal Audio Assistant system from any device capable of interfacing with the communications port; for example, audio content can be transferred to the system from a Personal Music Player or a Personal Computer. In at least one exemplary embodiment audio content (or other media content) updates are retrieved from the Server system any time a connection is detected by the communications port. The system can further include an acoustical and/or visual indicator informing the user when a transfer of data is activated.
  • In at least one exemplary embodiment radio wave transmissions are used to implement the communications protocol and the communications port acts as a radio receiver. Additionally, the Personal Audio Assistant can include: an interface with personal scheduling software through the communications port; and a speech synthesis system which generates speech-signal reminders corresponding to information from the scheduling software, where the digital timer system triggers the presentation of the speech-signal reminders at the appropriate time.
  • Additionally, the Personal Audio Assistant can interface with an email platform through the communications port; the speech synthesis system converts the email text to speech and provides the email to the user in an aural presentation format. The system further comprises: a process in the Registration engine allowing the user to personalize the handling of incoming emails by associating a specific Earcon with the importance of the incoming email, such that a normal-priority email contains an introduction sound announcing to the user the level of importance the sender associated with the email; and a speech recognition system for converting speech to text which interfaces with the control language so as to provide navigational commands, allowing the user to respond to and send emails.
  • In at least one exemplary embodiment the communications port system makes use of a wireless communications protocol (802.11, Bluetooth, A2DP, or other) to transmit and receive digital audio data for playback, the system further comprising: an audio codec to encode and decode digital audio transmissions; a wireless communications system (802.11, Bluetooth, A2DP, etc.) for transmitting and receiving data (digital audio transmissions, Control Data, etc.); a method for pairing two or more Personal Audio Assistants through a wireless communications protocol to provide a secure exchange of audio content and data such as the user's Public Data; an audio warning signal or visual display system output that notifies the user anytime a compatible transmission becomes available; and a user control system enabling the user to switch between available compatible transmissions.
  • In at least one exemplary embodiment the system enables listeners to share digital audio transmissions, the system further comprising: a method for scanning for available digital audio transmissions within range; a user control interface for specifying digital audio transmission behavior; and a method for employing the system as a relay to other compliant devices, re-broadcasting digital audio transmissions to increase wireless range. In at least one exemplary embodiment multiple systems are capable of sharing the contents of their program and data memory using the wireless communications protocol.
  • In at least one exemplary embodiment the input Acoustic Transducer is used to record audio content to the data memory storage system, the system further comprising: an implementation of perceptual audio codec technology in the DSP, enabling the storage of compressed audio formats (e.g., MP3, AAC, FLAC, etc.); and an increased data memory storage capacity for storing recorded audio content.
  • In at least one exemplary embodiment the stereo input Acoustic Transducers are ultimately connected to the audio signal path at the DSP, allowing the user to audition Environmental Audio (e.g., speech or music) and mitigating the need for the user to remove the Headphone apparatus to audition Environmental Audio, the system further comprising: a stereo pair of input Acoustic Transducers placed close to the user's ear canal input, conforming to one of the Ear Mold Styles (CIC, ITC, ITE, or BTE; see definitions), where by mounting the input Acoustic Transducers in a CIC or ITC configuration, spatial-acoustic cues are preserved, creating a spatially-accurate Environmental Audio input signal—essentially a personal binaural recording; and a method for acoustically compensating for the non-linear frequency response characteristics of the Acoustical Isolation Cushions of a given Headphone system by applying corresponding inverse filters to the Environmental Audio input signal at the DSP. With this method, the system acts as a linear-frequency-response hearing protection apparatus (e.g., USPTO—60/805985—Goldstein).
  • At least one exemplary embodiment includes a system for first attenuating Audio Playback and then mixing the Environmental Audio input signals, at a louder listening level, with the audio signal path using the DSP, where the system is activated by any combination of the following methods: a manual switch to activate/deactivate the system; a speech-detection apparatus to activate the system when speech is detected as the principal component of the Environmental Audio input; and a music-detection apparatus to activate the system when music is detected as the principal component of the Environmental Audio input.
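The attenuate-then-mix behavior above can be sketched per sample as follows. The gain values are illustrative assumptions (the embodiment specifies only that playback is attenuated and Environmental Audio is presented at a louder level), and samples are assumed normalized to [-1, 1].

```python
def mix_environmental(playback, environment, duck_gain=0.3, env_gain=1.5):
    """Attenuate Audio Playback and mix in the Environmental Audio input
    at a louder level, sample by sample."""
    mixed = [duck_gain * p + env_gain * e for p, e in zip(playback, environment)]
    # Clip the result to the valid sample range.
    return [max(-1.0, min(1.0, s)) for s in mixed]
```

In a real DSP implementation the gains would ramp smoothly when the speech- or music-detection apparatus toggles the system, to avoid audible clicks.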
  • At least one exemplary embodiment can include active noise reduction, echo cancellation, and signal conditioning that can be environmentally customized through the registration process to better meet the user's specific needs (e.g., occupation-related noise cancellation); a typical application would be a special set of noise cancellation parameters tuned to the drilling equipment used by a dentist.
  • In at least one exemplary embodiment the input Acoustic Transducers are instead mounted within circum-aural, intra-aural BTE, or intra-aural ITE molds (see Ear Mold Style), the system further comprising: a spatial audio enhancement system for supplementing the spatial-acoustic cues captured by the stereo pair of input Acoustical Transducers to provide improved spatial perception of Environmental Audio using any combination of the following methods: the application of Generic, Semi-Personalized, or Personalized HRTF data to the Environmental Audio input signal; the application of binaural enhancement algorithms, familiar to those skilled in the art, to the Environmental Audio input signals; the application of a pinna simulation algorithm to the Environmental Audio input signal; and a synthetic pinna apparatus placed just before the stereo input Acoustic Transducers.
  • At least one exemplary embodiment includes a Server system for the creation, Registration, management, and delivery of Personalized Services, the system comprising: a communications system for interfacing with public communication networks to exchange data with Personal Audio Assistants, client computers, mobile phones, PDAs, or other capable devices; a database and database management system for storing and retrieving information relating to user Registration, Personalized Services, audio content, Control Data, and other data; a Registration interface system for collecting, storing, and applying information provided by users; a method for creating Personalized Services based on user Registration information; an end-user audio content Lock-Box storage system, providing every registered user access to their purchased media content; a business-to-business interface system for acquiring audio content from record labels, copyright holders, and other businesses; an E-tailing system including an electronic transactions system enabling users to purchase content or items offered for sale, or to pay subscription fees, electronically; an E-Payment system compensating the various copyright holders upon purchase of content by the user; a Playlist engine, which acquires the user's Registration information and Listening History Envelope and then creates audio playlists optimized for the user's preferences and further refinements; and an Email server, which distributes communications to the user and others regarding marketing data, the status of the user's weekly SPL dose, and other information.
  • In at least one exemplary embodiment, machine-learning techniques are employed to better model users' preferences relating to audio content and other media content, the system further comprising: a method for tracking the purchase history of each user, relating the purchase history to media content preferences, and using the purchase history to make media content recommendations; a method for examining a user's digital media library, stored on a Personal Computer, Personal Music Player, or Personal Audio Assistant, from the Server system, and relating media content preferences and media content recommendations to the user's digital media library; and a method for examining a user's Listening History Envelope.
  • At least one exemplary embodiment includes a Registration system for collecting a wide variety of information from users, including information necessary for creating Personalized Services, the system comprising: a Server system; an interface system for querying the user to collect registration information including demographics (age, gender), Playback Hardware information, Headphone information, occupational information, home and work locations, medication information, music-related preferences, video-related preferences, and other information; a method for customizing Control Data based on registration information; and a method for creating Personalized Services based on registration information.
  • In at least one exemplary embodiment a fast HRTF acquisition process is included as part of the Registration process, the system further comprising a method for the fast acquisition of Semi-Personalized HRTF data via a deduction process, the method comprising: a database system containing indexed, clustered HRTF data sets; an auditory test signal with distinctive spatial characteristics, where two or more distinct sound source locations exist; a system for the application of potential HRTF matches to the auditory test signal; and a feedback system, allowing the user to select the best listening experience from a number of candidate listening experiences, based on the spatial quality perceived in the HRTF-processed auditory test signal.
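One way the “best-fit” deduction over a database of indexed, clustered HRTF sets could proceed is a nearest-neighbor search over anthropometric features; the feature representation and Euclidean metric here are illustrative assumptions, not the patent's specified matching algorithm.

```python
def best_fit_hrtf(user_features, hrtf_database):
    """Select the Semi-Personalized HRTF whose anthropometric feature
    vector is closest (Euclidean distance) to the user's measurements."""
    def dist(features):
        return sum((a - b) ** 2 for a, b in zip(user_features, features)) ** 0.5
    return min(hrtf_database, key=lambda entry: dist(entry["features"]))
```

In the embodiment described, the candidates returned by such a search would then be auditioned against the spatial test signal, with the user's feedback selecting the final match.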
  • In at least one exemplary embodiment Personalized HRTF data is measured and used instead of Semi-Personalized HRTF data, by any method familiar to those skilled in the art.
  • In at least one exemplary embodiment the user is provided a Personal Audio Assistant free of charge or at a discount, provided the user agrees to a subscription service commitment to receive Personalized Services for a certain amount of time.
  • In at least one exemplary embodiment, as part of the Personalized Services, the user is provided with temporary audio content corresponding to the preferences indicated during the registration process; Further, the user is given the option to purchase the audio content permanently; Otherwise, the audio content is replaced with new audio content from the Server, after a predetermined amount of time or a predetermined number of playback counts, the system comprising: a Personal Audio Assistant with an enhanced user control system, enabling a registered user to purchase media content directly from the Personal Audio Assistant with a button; and a Personal Audio Assistant with an enhanced user control system, enabling a registered user to store a reference to media content that can be purchased by the user at a later time.
  • In at least one exemplary embodiment, video or gaming content is included as well as audio content, the system further comprising: a Personal Audio Assistant with an enhanced visual display system, capable of playing video and/or gaming content.
  • In at least one exemplary embodiment, as part of the Personalized Services, the user receives medication reminders in the form of speech signals, audio signals, text, or graphics on the user's Personal Audio Assistant; Medication reminders are generated by the Server system based on the user's registration information.
  • In at least one exemplary embodiment, as part of the Personalized Services, the user receives stock market information in the form of speech signals, audio signals, text, or graphics on the user's Personal Audio Assistant; the stock market information is selected by the Server system based on the user's registration information, the system further comprising: the user, having successfully registered their Personal Audio Assistant with a brokerage firm or other stock trading engine, can then purchase or sell a stock by use of a user button or speech command.
  • Further, in at least one exemplary embodiment, the user is able to request specific media content to be transferred temporarily or permanently to the user's Personal Audio Assistant, the system further comprising: an interface system operating on the Server allowing users to request specific media content by artist, title, genre, format, keyword search, or other methods familiar to those skilled in the art; and a media content search engine system.
  • In at least one exemplary embodiment a Relative Audiogram compensation filter is applied to the audio signal path by the digital signal processor, where the system either (e.g., USPTO—60/805985—Goldstein): (a) retrieves Relative Audiogram compensation information from a remote Server after a registration process (during transmission, the information can include HIPAA-compliant encoding); or (b) calculates a compensation filter from Relative Audiogram information obtained by the system locally. For example, U.S. Pat. No. 6,840,908—Edwards and U.S. Pat. No. 6,379,314—Horn discuss methods for the acquisition of an individual's Relative Audiogram.
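A minimal sketch of case (b), computing per-band compensation gains locally from a Relative Audiogram: the half-gain rule and the 30 dB cap used here are common hearing-aid fitting heuristics adopted purely for illustration, not the patent's prescribed filter design.

```python
def compensation_gains_db(audiogram, max_gain_db=30.0):
    """Derive per-band compensation gains (dB) from a Relative Audiogram,
    given as {frequency_hz: hearing_threshold_level_db}. A simple
    half-gain rule, capped at max_gain_db, is used as the fitting rule."""
    return {f: min(hl * 0.5, max_gain_db) for f, hl in audiogram.items()}
```

The resulting per-frequency gains would then parameterize the DSP's compensation filter applied to the audio signal path.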
  • In at least one exemplary embodiment a Satellite Radio transmitter/receiver (transceiver) is incorporated within the Headphone proper, allowing the user to at least: receive XM, Sirius, and other broadcasts for playback over the system; select radio stations for playback over the system via the control system, the control system comprising either a single-click tactile interface or speech-controlled circuitry; store selected portions of such broadcasts in memory for later recall and playback via the control systems; engage a novel commercial-skip feature for attenuating the playback level of suspected sales-commercial broadcasts; and engage a speech-skip feature for attenuating the playback of speech (e.g., news, announcements, etc.).
  • At least one exemplary embodiment includes a Walkie-Talkie mode, which broadcasts input to the system's built-in microphone, whereby the user's speech can be detected by the input acoustic transducer and remotely broadcast, where at least one of the following occurs: the Walkie-Talkie mode receives input via AM/FM broadcasts (as well as digital communications protocols) from a nearby user; the Walkie-Talkie mode allows nearby users to engage in conversation with increased perceptual clarity in noisy environments (e.g., aircraft cockpits), using, for example, a noise-cancellation system; the user selectively engages and disengages the Walkie-Talkie mode using the control system; the system detects other users of the system within a given range; and the system alerts the user when other detected systems contain certain Public Data and a predefined Public Message Key (e.g., “If the detected system belongs to a single male between the ages of 25 and 30 whose favorite sport is tennis, then broadcast the message, ‘I like tennis also; would you like to have coffee?’” or “If the detected system belongs to a user who attended Princeton University, then broadcast the message, ‘Go Tigers!’”).
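The Public Message Key examples in this embodiment amount to predicate rules matched against a detected device's Public Data. A minimal sketch of such rule matching follows; the field names (`gender`, `age`, `favorite_sport`, `university`) are illustrative assumptions, not fields defined by the patent.

```python
# Hypothetical sketch of Public Message Key matching: each rule pairs a
# predicate over a detected system's Public Data with a message to broadcast.

def match_messages(public_data, rules):
    """Return the messages of every rule whose predicate matches."""
    return [msg for predicate, msg in rules if predicate(public_data)]

rules = [
    (lambda d: d.get("gender") == "male"
               and 25 <= d.get("age", 0) <= 30
               and d.get("favorite_sport") == "tennis",
     "I like tennis also; would you like to have coffee?"),
    (lambda d: d.get("university") == "Princeton University",
     "Go Tigers!"),
]

detected = {"gender": "male", "age": 27, "favorite_sport": "tennis",
            "university": "Princeton University"}
messages = match_messages(detected, rules)  # both rules fire for this profile
```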
  • At least one exemplary embodiment can use communications other than AM/FM to accomplish this service; as such, the system can incorporate communications protocols (TCP/IP, USB, IEEE 1394, IEEE 802.11, Bluetooth, A2DP, GSM, CDMA, or others) and a communications network (e.g., the Internet, cellular networks) connecting the Personal Audio Assistant to other Personal Audio Assistants. At least one exemplary embodiment can selectively control the broadcast of Public Data and Public Message Keys via the control system.
  • At least one exemplary embodiment includes a Sonification algorithm within the Headphone, which enables auditory display of digitally received data, including, for example, financial data, news, and GPS data, the system further containing a variety of sonification “themes,” selected during the registration process, that map requested data (e.g., the current average trading price of AAPL stock, the Dow Jones Industrial Average, and the NASDAQ Composite) to corresponding audio content (e.g., the frequency of a sine tone presented in the left ear, the frequency of a sine tone presented in the right ear, and the global amplitude of both sine tones, respectively).
  • At least one exemplary embodiment includes an auditory display, which is synthesized by the onboard Digital Signal Processor. In at least one exemplary embodiment, the auditory display is created through digital audio signal processing effects applied to any other acoustic data the system is capable of reproducing (e.g., terrestrial radio, prepurchased audio content in the user's digital library, electronic books, etc.). For example, a sudden increase in the playback level of a song the user is listening to can be triggered by a predefined alert condition (e.g., the NASDAQ Composite has exceeded 2,200 points).
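The alert-triggered level change in this example can be sketched as a simple gain rule: when the predefined condition holds, the current playback gain is boosted so the song itself signals the alert. The 6 dB boost amount is an illustrative assumption; only the 2,200-point threshold comes from the example above.

```python
# Hypothetical sketch: boost playback gain when a predefined alert fires.

NASDAQ_ALERT_THRESHOLD = 2200  # points, per the example alert condition
ALERT_BOOST_DB = 6.0           # assumed boost applied while the alert holds

def playback_gain_db(base_gain_db, nasdaq_composite):
    """Return the playback gain, boosted when the alert condition holds."""
    if nasdaq_composite > NASDAQ_ALERT_THRESHOLD:
        return base_gain_db + ALERT_BOOST_DB
    return base_gain_db

boosted = playback_gain_db(-12.0, 2250)  # alert fired: -6.0 dB
normal = playback_gain_db(-12.0, 2100)   # no alert: -12.0 dB
```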
  • At least one exemplary embodiment includes the ability to create themes using a computer program and to upload the resulting theme file to the Headphone system.
  • At least one exemplary embodiment includes a speech recognition system for converting speech to HTML (a Voice Browser), whereby the user can access the Internet, provide navigational commands, perform searches, and receive results via the Headphones through a text (HTML)-to-speech synthesizer.
  • Additionally, the Personal Audio Assistant can be fully incorporated into a mobile cell phone, or any portable technology that incorporates any of the following protocols (TCP/IP, USB, IEEE 1394, IEEE 802.11, Bluetooth, A2DP, GSM, CDMA, or others known to those of ordinary skill in the art) via a communications network (e.g., the Internet, cellular networks), the system further comprising: an Acoustic Transducer, or a series of Acoustic Transducers, constructed as part of the mobile cell phone; a communications path incorporated into the mobile cell phone providing for bidirectional communication with a Headphone array; the incorporation of the mobile cell phone's microphone(s) to act as the Environmental Audio Acoustical Transducer(s); and the incorporation of the mobile cell phone's keyboard or touch-sensitive screen to function as a manual input, to complement speech commands, and to respond to Personalized Services offered to the user.
  • While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.

Claims (7)

1. An earpiece comprising:
an ambient microphone;
an ear canal microphone;
an ear canal receiver;
a sealing section;
a logic circuit;
a communication module;
a memory storage unit; and
a user interaction element, where the user interaction element is configured to send a play command to the logic circuit when activated by a user, whereupon the logic circuit reads registration parameters stored on the memory storage unit and sends audio content to the ear canal receiver according to the registration parameters.
2. The earpiece according to claim 1, where audio content is stored in the memory storage unit.
3. The earpiece according to claim 2, where the communications module is a wireless communications module.
4. The earpiece according to claim 1, further comprising:
a second user interaction element configured to alter the volume of the audio content that is emitted from the ear canal receiver.
5. The earpiece according to claim 1, where one of the registration parameters is whether the audio content is a sample audio content or a fully purchased audio content.
6. The earpiece according to claim 1, where one of the registration parameters is the allowed number of times an audio content can be played, and a counter value that keeps track of the number of times the audio content has been played.
7. The earpiece according to claim 6, where an auditory warning is sounded by the ear canal receiver when the counter value is greater than or equal to the allowed number of times the audio content can be played, and where the logic circuit does not send the audio content to the ear canal receiver.
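The play-count behavior recited in claims 6 and 7 can be sketched as a small state check: a counter tracks how many times the content has been played, and once it reaches the allowed number, the logic circuit withholds the audio and signals a warning instead. The dictionary layout and return values below are illustrative assumptions, not the claimed structure.

```python
# Hypothetical sketch of the claims 6-7 play-count logic.

def handle_play(params):
    """Play the content and increment the counter, or warn when the
    allowed play count has been reached (no audio is sent in that case)."""
    if params["counter"] >= params["allowed_plays"]:
        return ("warn", "play limit reached")  # auditory warning instead of audio
    params["counter"] += 1
    return ("play", params["content_id"])

reg = {"content_id": "song-001", "allowed_plays": 2, "counter": 0}
first = handle_play(reg)   # ('play', 'song-001'), counter -> 1
second = handle_play(reg)  # ('play', 'song-001'), counter -> 2
third = handle_play(reg)   # ('warn', 'play limit reached')
```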
US11/774,965 2006-07-08 2007-07-09 Personal audio assistant device and method Abandoned US20080031475A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US80676906P true 2006-07-08 2006-07-08
US11/774,965 US20080031475A1 (en) 2006-07-08 2007-07-09 Personal audio assistant device and method

Applications Claiming Priority (13)

Application Number Priority Date Filing Date Title
US11/774,965 US20080031475A1 (en) 2006-07-08 2007-07-09 Personal audio assistant device and method
US14/109,954 US10009677B2 (en) 2007-07-09 2013-12-17 Methods and mechanisms for inflation
US14/148,749 US20140119558A1 (en) 2006-07-08 2014-01-07 Personal audio assistant device and method
US14/148,747 US10236011B2 (en) 2006-07-08 2014-01-07 Personal audio assistant device and method
US14/148,751 US10236013B2 (en) 2006-07-08 2014-01-07 Personal audio assistant device and method
US14/148,748 US10236012B2 (en) 2006-07-08 2014-01-07 Personal audio assistant device and method
US14/148,752 US8805692B2 (en) 2006-07-08 2014-01-07 Personal audio assistant device and method
US14/148,745 US20140122073A1 (en) 2006-07-08 2014-01-07 Personal audio assistant device and method
US14/148,746 US20140119557A1 (en) 2006-07-08 2014-01-07 Personal audio assistant device and method
US14/148,744 US20140123010A1 (en) 2006-07-08 2014-01-07 Personal audio assistant device and method
US14/148,743 US20140123009A1 (en) 2006-07-08 2014-01-07 Personal audio assistant device and method
US14/148,742 US20140123008A1 (en) 2006-07-08 2014-01-07 Personal audio assistant device and method
US14/456,112 US20140350943A1 (en) 2006-07-08 2014-08-11 Personal audio assistant device and method

Related Child Applications (11)

Application Number Title Priority Date Filing Date
US14/109,954 Continuation US10009677B2 (en) 2006-07-08 2013-12-17 Methods and mechanisms for inflation
US14/148,749 Continuation US20140119558A1 (en) 2006-07-08 2014-01-07 Personal audio assistant device and method
US14/148,745 Continuation US20140122073A1 (en) 2006-07-08 2014-01-07 Personal audio assistant device and method
US14/148,751 Continuation US10236013B2 (en) 2006-07-08 2014-01-07 Personal audio assistant device and method
US14/148,748 Continuation US10236012B2 (en) 2006-07-08 2014-01-07 Personal audio assistant device and method
US14/148,752 Continuation US8805692B2 (en) 2006-07-08 2014-01-07 Personal audio assistant device and method
US14/148,746 Continuation US20140119557A1 (en) 2006-07-08 2014-01-07 Personal audio assistant device and method
US14/148,744 Continuation US20140123010A1 (en) 2006-07-08 2014-01-07 Personal audio assistant device and method
US14/148,743 Continuation US20140123009A1 (en) 2006-07-08 2014-01-07 Personal audio assistant device and method
US14/148,747 Continuation US10236011B2 (en) 2006-07-08 2014-01-07 Personal audio assistant device and method
US14/148,742 Continuation US20140123008A1 (en) 2006-07-08 2014-01-07 Personal audio assistant device and method

Publications (1)

Publication Number Publication Date
US20080031475A1 true US20080031475A1 (en) 2008-02-07

Family

ID=38924067

Family Applications (12)

Application Number Title Priority Date Filing Date
US11/774,965 Abandoned US20080031475A1 (en) 2006-07-08 2007-07-09 Personal audio assistant device and method
US14/148,751 Active US10236013B2 (en) 2006-07-08 2014-01-07 Personal audio assistant device and method
US14/148,742 Pending US20140123008A1 (en) 2006-07-08 2014-01-07 Personal audio assistant device and method
US14/148,747 Active 2029-04-03 US10236011B2 (en) 2006-07-08 2014-01-07 Personal audio assistant device and method
US14/148,744 Pending US20140123010A1 (en) 2006-07-08 2014-01-07 Personal audio assistant device and method
US14/148,745 Pending US20140122073A1 (en) 2006-07-08 2014-01-07 Personal audio assistant device and method
US14/148,748 Active 2028-08-11 US10236012B2 (en) 2006-07-08 2014-01-07 Personal audio assistant device and method
US14/148,752 Active US8805692B2 (en) 2006-07-08 2014-01-07 Personal audio assistant device and method
US14/148,749 Pending US20140119558A1 (en) 2006-07-08 2014-01-07 Personal audio assistant device and method
US14/148,743 Pending US20140123009A1 (en) 2006-07-08 2014-01-07 Personal audio assistant device and method
US14/148,746 Pending US20140119557A1 (en) 2006-07-08 2014-01-07 Personal audio assistant device and method
US14/456,112 Pending US20140350943A1 (en) 2006-07-08 2014-08-11 Personal audio assistant device and method


Country Status (3)

Country Link
US (12) US20080031475A1 (en)
EP (1) EP2044804A4 (en)
WO (1) WO2008008730A2 (en)


Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6721705B2 (en) 2000-02-04 2004-04-13 Webley Systems, Inc. Robust voice browser system and voice activated device controller
US7516190B2 (en) 2000-02-04 2009-04-07 Parus Holdings, Inc. Personal voice-based information retrieval system
US7986914B1 (en) * 2007-06-01 2011-07-26 At&T Mobility Ii Llc Vehicle-based message control using cellular IP
US20100250253A1 (en) * 2009-03-27 2010-09-30 Yangmin Shen Context aware, speech-controlled interface and system
US20130018495A1 (en) * 2011-07-13 2013-01-17 Nokia Corporation Method and apparatus for providing content to an earpiece in accordance with a privacy filter and content selection rule
US9530409B2 (en) * 2013-01-23 2016-12-27 Blackberry Limited Event-triggered hands-free multitasking for media playback
JP6098216B2 (en) * 2013-02-20 2017-03-22 株式会社デンソー Timer reminder apparatus
WO2015017914A1 (en) * 2013-08-05 2015-02-12 Audilent Technologies Inc. Media production and distribution system for custom spatialized audio
US9609436B2 (en) * 2015-05-22 2017-03-28 Microsoft Technology Licensing, Llc Systems and methods for audio creation and delivery
KR101568314B1 (en) * 2015-05-26 2015-11-12 주식회사 단솔플러스 Apparatus and method for sound wave communication
US10099636B2 (en) 2015-11-27 2018-10-16 Bragi GmbH System and method for determining a user role and user settings associated with a vehicle
US9944295B2 (en) 2015-11-27 2018-04-17 Bragi GmbH Vehicle with wearable for identifying role of one or more users and adjustment of user settings
US10104460B2 (en) * 2015-11-27 2018-10-16 Bragi GmbH Vehicle with interaction between entertainment systems and wearable devices
US20170206899A1 (en) * 2016-01-20 2017-07-20 Fitbit, Inc. Better communication channel for requests and responses having an intelligent agent
US10235989B2 (en) * 2016-03-24 2019-03-19 Oracle International Corporation Sonification of words and phrases by text mining based on frequency of occurrence
WO2017176259A1 (en) * 2016-04-05 2017-10-12 Hewlett-Packard Development Company, L.P. Audio interface for multiple microphones and speaker systems to interface with a host
CN105930480B (en) * 2016-04-29 2019-03-15 苏州桑德欧声听觉技术有限公司 The generation method and managing irritating auditory phenomena system of managing irritating auditory phenomena music
US10033474B1 (en) * 2017-06-19 2018-07-24 Spotify Ab Methods and systems for personalizing user experience based on nostalgia metrics

Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3865998A (en) * 1970-12-02 1975-02-11 Beltone Electronics Corp Ear seal
US4539439A (en) * 1983-04-18 1985-09-03 Unitron Industries Ltd. Plugs, receptacles and hearing aids
US5694475A (en) * 1995-09-19 1997-12-02 Interval Research Corporation Acoustically transparent earphones
US5751820A (en) * 1997-04-02 1998-05-12 Resound Corporation Integrated circuit design for a personal use wireless communication system utilizing reflection
US20020007315A1 (en) * 2000-04-14 2002-01-17 Eric Rose Methods and apparatus for voice activated audible order system
US6379314B1 (en) * 2000-06-19 2002-04-30 Health Performance, Inc. Internet system for testing hearing
US20030069854A1 (en) * 2001-10-09 2003-04-10 Hsu Michael M. Expiring content on playback devices
US6587822B2 (en) * 1998-10-06 2003-07-01 Lucent Technologies Inc. Web-based platform for interactive voice response (IVR)
US6636953B2 (en) * 2000-05-31 2003-10-21 Matsushita Electric Co., Ltd. Receiving apparatus that receives and accumulates broadcast contents and makes contents available according to user requests
US6661901B1 (en) * 2000-09-01 2003-12-09 Nacre As Ear terminal with microphone for natural voice rendition
US20040224723A1 (en) * 2003-05-09 2004-11-11 Jp Mobile Operating, L.P. Multimedia control with one-click device selection
US6840908B2 (en) * 2001-10-12 2005-01-11 Sound Id System and method for remotely administered, interactive hearing tests
US20050078843A1 (en) * 2003-02-05 2005-04-14 Natan Bauman Hearing aid system
US6965770B2 (en) * 2001-09-13 2005-11-15 Nokia Corporation Dynamic content delivery responsive to user requests
US20060193450A1 (en) * 2005-02-25 2006-08-31 Microsoft Corporation Communication conversion between text and audio
US7133924B1 (en) * 2000-03-08 2006-11-07 Music Choice Personalized audio system and method
US20070025194A1 (en) * 2005-07-26 2007-02-01 Creative Technology Ltd System and method for modifying media content playback based on an intelligent random selection
US20070079692A1 (en) * 2005-10-12 2007-04-12 Phonak Ag MIDI-compatible hearing device
US7206429B1 (en) * 2001-05-21 2007-04-17 Gateway Inc. Audio earpiece and peripheral devices
US7546144B2 (en) * 2006-05-16 2009-06-09 Sony Ericsson Mobile Communications Ab Mobile wireless communication terminals, systems, methods, and computer program products for managing playback of song files
US7558529B2 (en) * 2005-01-24 2009-07-07 Broadcom Corporation Earpiece/microphone (headset) servicing multiple incoming audio streams

Family Cites Families (122)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5220595A (en) 1989-05-17 1993-06-15 Kabushiki Kaisha Toshiba Voice-controlled apparatus using telephone and voice-control method
US5714997A (en) 1995-01-06 1998-02-03 Anderson; David P. Virtual reality television system
US5742689A (en) * 1996-01-04 1998-04-21 Virtual Listening Systems, Inc. Method and device for processing a multichannel signal for use with a headphone
US5757933A (en) 1996-12-11 1998-05-26 Micro Ear Technology, Inc. In-the-ear hearing aid with directional microphone system
US5978689A (en) 1997-07-09 1999-11-02 Tuoriniemi; Veijo M. Personal portable communication and audio system
US6157705A (en) 1997-12-05 2000-12-05 E*Trade Group, Inc. Voice control of a server
US7003463B1 (en) 1998-10-02 2006-02-21 International Business Machines Corporation System and method for providing network coordinated conversational services
US6836651B2 (en) * 1999-06-21 2004-12-28 Telespree Communications Portable cellular phone system having remote voice recognition
US6167251A (en) 1998-10-02 2000-12-26 Telespree Communications Keyless portable cellular phone system having remote voice recognition
US7233321B1 (en) * 1998-12-15 2007-06-19 Intel Corporation Pointing device with integrated audio input
US6937984B1 (en) * 1998-12-17 2005-08-30 International Business Machines Corporation Speech command input recognition system for interactive computer display with speech controlled display of recognized commands
US6480961B2 (en) 1999-03-02 2002-11-12 Audible, Inc. Secure streaming of digital audio/visual content
US20020012432A1 (en) 1999-03-27 2002-01-31 Microsoft Corporation Secure video card in computing device having digital rights management (DRM) system
US7181297B1 (en) 1999-09-28 2007-02-20 Sound Id System and method for delivering customized audio data
WO2001026272A2 (en) * 1999-09-28 2001-04-12 Sound Id Internet based hearing assessment methods
AU7863600A (en) 1999-10-05 2001-05-10 Zapmedia, Inc. System and method for distributing media assets to user devices and managing user rights of the media assets
US9076448B2 (en) 1999-11-12 2015-07-07 Nuance Communications, Inc. Distributed real time speech recognition system
US6633846B1 (en) 1999-11-12 2003-10-14 Phoenix Solutions, Inc. Distributed realtime speech recognition system
US6532446B1 (en) 1999-11-24 2003-03-11 Openwave Systems Inc. Server based speech recognition user interface for wireless devices
US20020068986A1 (en) 1999-12-01 2002-06-06 Ali Mouline Adaptation of audio data files based on personal hearing profiles
DE10002321C2 (en) * 2000-01-20 2002-11-14 Micronas Munich Gmbh A voice controlled device and system with such a voice-controlled device
US6721705B2 (en) 2000-02-04 2004-04-13 Webley Systems, Inc. Robust voice browser system and voice activated device controller
GB2360588B (en) 2000-03-23 2004-04-07 Yeoman Group Plc Navigation system
US20010054087A1 (en) 2000-04-26 2001-12-20 Michael Flom Portable internet services
US6421725B1 (en) 2000-05-04 2002-07-16 Worldcom, Inc. Method and apparatus for providing automatic notification
FI110296B (en) 2000-05-26 2002-12-31 Nokia Corp Handsfree
DE10030926A1 (en) 2000-06-24 2002-01-03 Alcatel Sa Noise-based adaptive echo cancellation
US7319992B2 (en) * 2000-09-25 2008-01-15 The Mission Corporation Method and apparatus for delivering a virtual reality environment
AU9498901A (en) 2000-10-04 2002-04-15 Clarity L L C Speech detection
US7451085B2 (en) 2000-10-13 2008-11-11 At&T Intellectual Property Ii, L.P. System and method for providing a compensated speech recognition model for speech recognition
US6590303B1 (en) * 2000-10-26 2003-07-08 Motorola, Inc. Single button MP3 player
US7031437B1 (en) 2000-10-30 2006-04-18 Nortel Networks Limited Method and system for providing remote access to previously transmitted enterprise messages
AU1986002A (en) 2000-11-10 2002-06-11 Full Audio Corp Digital content distribution and subscription system
US20020077988A1 (en) 2000-12-19 2002-06-20 Sasaki Gary D. Distributing digital content
US6832242B2 (en) * 2000-12-28 2004-12-14 Intel Corporation System and method for automatically sharing information between handheld devices
US6795688B1 (en) * 2001-01-19 2004-09-21 3Com Corporation Method and system for personal area network (PAN) degrees of mobility-based configuration
US7149319B2 (en) 2001-01-23 2006-12-12 Phonak Ag Telecommunication system, speech recognizer, and terminal, and method for adjusting capacity for vocal commanding
US20030014407A1 (en) * 2001-04-11 2003-01-16 Green Arrow Media, Inc. System and method for making media recommendations
US20030001978A1 (en) 2001-06-12 2003-01-02 Xsides Corporation Method and system for enhancing display functionality in a set-top box environment
US6996528B2 (en) 2001-08-03 2006-02-07 Matsushita Electric Industrial Co., Ltd. Method for efficient, safe and reliable data entry by voice under adverse conditions
US8210927B2 (en) 2001-08-03 2012-07-03 Igt Player tracking communication mechanisms in a gaming machine
US20030044002A1 (en) 2001-08-28 2003-03-06 Yeager David M. Three dimensional audio telephony
US6944474B2 (en) * 2001-09-20 2005-09-13 Sound Id Sound enhancement for mobile phones and other products producing personalized audio for users
US20030084188A1 (en) 2001-10-30 2003-05-01 Dreyer Hans Daniel Multiple mode input and output
US20030092468A1 (en) 2001-11-15 2003-05-15 North Vaughn W. Combination thinline phone and PDA
US7493259B2 (en) 2002-01-04 2009-02-17 Siebel Systems, Inc. Method for accessing data via voice
JP2003202888A (en) 2002-01-07 2003-07-18 Toshiba Corp Headset with radio communication function and voice processing system using the same
US20030139933A1 (en) 2002-01-22 2003-07-24 Zebadiah Kimmel Use of local voice input and remote voice processing to control a local visual display
US20030144846A1 (en) 2002-01-31 2003-07-31 Denenberg Lawrence A. Method and system for modifying the behavior of an application based upon the application's grammar
US7738434B1 (en) 2002-03-04 2010-06-15 Plantronics, Inc. Control and management of a wired or wireless headset
US20030202666A1 (en) * 2002-04-24 2003-10-30 Ching Bing Ren Hand-held acoustic assistant
US20040203611A1 (en) 2002-05-31 2004-10-14 Laporta Thomas F. Architecture and services for wireless data
US7494216B2 (en) * 2002-07-26 2009-02-24 Oakley, Inc. Electronic eyewear with hands-free operation
US7138575B2 (en) 2002-07-29 2006-11-21 Accentus Llc System and method for musical sonification of data
US7072686B1 (en) 2002-08-09 2006-07-04 Avon Associates, Inc. Voice controlled multimedia and communications device
US20040066924A1 (en) 2002-08-14 2004-04-08 Shalom Wertsberger Automated reminder system
US7421390B2 (en) 2002-09-13 2008-09-02 Sun Microsystems, Inc. Method and system for voice control of software applications
WO2004030390A2 (en) 2002-09-25 2004-04-08 Bright Star Technologies, Inc. Apparatus and method for monitoring the time usage of a wireless communication device
US20040064704A1 (en) 2002-09-27 2004-04-01 Monis Rahman Secure information display and access rights control
US7720229B2 (en) 2002-11-08 2010-05-18 University Of Maryland Method for measurement of head related transfer functions
US20060235938A1 (en) 2002-11-12 2006-10-19 Pennell Mark E System and method for delivery of information based on web page content
US7142814B2 (en) 2002-12-11 2006-11-28 Shary Nassimi Automatic Bluetooth inquiry mode headset
AU2003285644A1 (en) 2002-12-19 2004-07-14 Koninklijke Philips Electronics N.V. Method and system for network downloading of music files
US7500747B2 (en) * 2003-10-09 2009-03-10 Ipventure, Inc. Eyeglasses with electrical components
JP2004326278A (en) 2003-04-23 2004-11-18 Renesas Technology Corp Nonvolatile storage device and data processor
US20050045373A1 (en) 2003-05-27 2005-03-03 Joseph Born Portable media device with audio prompt menu
US20050136958A1 (en) 2003-05-28 2005-06-23 Nambirajan Seshadri Universal wireless multimedia device
US20050058313A1 (en) 2003-09-11 2005-03-17 Victorian Thomas A. External ear canal voice detection
US7496387B2 (en) 2003-09-25 2009-02-24 Vocollect, Inc. Wireless headset for use in speech recognition environment
US8190130B2 (en) * 2003-10-01 2012-05-29 General Motors Llc Method and system for notifying a subscriber of events
US20130097302A9 (en) 2003-10-01 2013-04-18 Robert Khedouri Audio visual player apparatus and system and method of content distribution using the same
WO2005043341A2 (en) 2003-10-31 2005-05-12 Miva, Inc. System and method for distributing content using advertising sponsorship
US7882034B2 (en) 2003-11-21 2011-02-01 Realnetworks, Inc. Digital rights management for content rendering on playback devices
US7342895B2 (en) * 2004-01-30 2008-03-11 Mark Serpa Method and system for peer-to-peer wireless communication over unlicensed communication spectrum
TWI241828B (en) * 2004-02-18 2005-10-11 Partner Tech Corp Handheld personal data assistant (PDA) for communicating with a mobile in music-playing operation
US8140684B2 (en) 2004-02-19 2012-03-20 Siemens Medical Solutions Usa, Inc. Voice activated system for dynamically re-connecting user computer operation sessions
US20060075429A1 (en) * 2004-04-30 2006-04-06 Vulcan Inc. Voice control of television-related information
US7412288B2 (en) 2004-05-10 2008-08-12 Phonak Ag Text to speech conversion in hearing systems
US7532877B2 (en) * 2004-05-21 2009-05-12 Cisco Technology, Inc. System and method for voice scheduling and multimedia alerting
DE102004035046A1 (en) 2004-07-20 2005-07-21 Siemens Audiologische Technik Gmbh Hearing aid or communication system with virtual signal sources providing the user with signals from the space around him
US7574415B2 (en) 2004-08-03 2009-08-11 Nokia, Inc. Personal support infrastructure for development of user applications and interfaces
JP4499735B2 (en) 2004-08-30 2010-07-07 パイオニア株式会社 The image display control device and an image display method
JP2006093792A (en) 2004-09-21 2006-04-06 Yamaha Corp Particular sound reproducing apparatus and headphone
WO2006033104A1 (en) * 2004-09-22 2006-03-30 Shalon Ventures Research, Llc Systems and methods for monitoring and modifying behavior
US7647022B2 (en) * 2004-09-29 2010-01-12 Alcatel-Lucent Usa Inc. Methods and systems for proximity communication
US7283850B2 (en) 2004-10-12 2007-10-16 Microsoft Corporation Method and apparatus for multi-sensory speech enhancement on a mobile device
US20060086236A1 (en) 2004-10-25 2006-04-27 Ruby Michael L Music selection device and method therefor
US20090150935A1 (en) 2004-11-15 2009-06-11 Koninklijke Philips Electronics, N.V. Method and Network Device for Assisting a User in Selecting Content
US20060165040A1 (en) 2004-11-30 2006-07-27 Rathod Yogesh C System, method, computer program products, standards, SOA infrastructure, search algorithm and a business method thereof for AI enabled information communication and computation (ICC) framework (NetAlter) operated by NetAlter Operating System (NOS) in terms of NetAlter Service Browser (NSB) to device alternative to internet and enterprise & social communication framework engrossing universally distributed grid supercomputing and peer to peer framework
US8482488B2 (en) * 2004-12-22 2013-07-09 Oakley, Inc. Data input management system for wearable electronically enabled interface
US20060143455A1 (en) 2004-12-28 2006-06-29 Gitzinger Thomas E Method and apparatus for secure pairing
US7529677B1 (en) 2005-01-21 2009-05-05 Itt Manufacturing Enterprises, Inc. Methods and apparatus for remotely processing locally generated commands to control a local device
US7542816B2 (en) 2005-01-27 2009-06-02 Outland Research, Llc System, method and computer program product for automatically selecting, suggesting and playing music media files
US20060168259A1 (en) * 2005-01-27 2006-07-27 Iknowware, Lp System and method for accessing data via Internet, wireless PDA, smartphone, text to voice and voice to text
US7343177B2 (en) 2005-05-03 2008-03-11 Broadcom Corporation Modular ear-piece/microphone (headset) operable to service voice activated commands
US8126159B2 (en) * 2005-05-17 2012-02-28 Continental Automotive Gmbh System and method for creating personalized sound zones
US7643458B1 (en) * 2005-05-25 2010-01-05 Hewlett-Packard Development Company, L.P. Communicating between wireless communities
US7394405B2 (en) 2005-06-01 2008-07-01 Gm Global Technology Operations, Inc. Location-based notifications
KR100703703B1 (en) 2005-08-12 2007-04-06 삼성전자주식회사 Method and apparatus for extending sound input and output
US20090076821A1 (en) 2005-08-19 2009-03-19 Gracenote, Inc. Method and apparatus to control operation of a playback device
US20070056042A1 (en) 2005-09-08 2007-03-08 Bahman Qawami Mobile memory system for secure storage and delivery of media content
US7995756B1 (en) 2005-10-12 2011-08-09 Sprint Communications Company L.P. Mobile device playback and control of media content from a personal media host device
US9665629B2 (en) 2005-10-14 2017-05-30 Yahoo! Inc. Media device and user interface for selecting media
US20070124142A1 (en) 2005-11-25 2007-05-31 Mukherjee Santosh K Voice enabled knowledge system
US20070165875A1 (en) * 2005-12-01 2007-07-19 Behrooz Rezvani High fidelity multimedia wireless headset
US20070136140A1 (en) * 2005-12-13 2007-06-14 Microsoft Corporation Provision of shopping information to mobile devices
US20070135096A1 (en) * 2005-12-14 2007-06-14 Symbol Technologies, Inc. Interactive voice browsing server for mobile devices on wireless networks
US8417185B2 (en) 2005-12-16 2013-04-09 Vocollect, Inc. Wireless headset and method for robust voice data communication
US7702279B2 (en) 2005-12-20 2010-04-20 Apple Inc. Portable media player as a low power remote control and method thereof
US7996228B2 (en) 2005-12-22 2011-08-09 Microsoft Corporation Voice initiated network operations
US7533061B1 (en) 2006-01-18 2009-05-12 Loudeye Corp. Delivering media files to consumer devices
US20070206247A1 (en) 2006-03-01 2007-09-06 Intouch Group, Inc. System, apparatus, and method for managing preloaded digital files for preview on a digital media playback apparatus
US20100311390A9 (en) * 2006-03-20 2010-12-09 Black Gerald R Mobile communication device
US8285595B2 (en) 2006-03-29 2012-10-09 Napo Enterprises, Llc System and method for refining media recommendations
US20100299590A1 (en) 2006-03-31 2010-11-25 Interact Incorporated Software Systems Method and system for processing xml-type telecommunications documents
US20070283033A1 (en) 2006-05-31 2007-12-06 Bloebaum L Scott System and method for mobile telephone as audio gateway
US20070294122A1 (en) 2006-06-14 2007-12-20 At&T Corp. System and method for interacting in a multimodal environment
US7903793B2 (en) 2006-06-16 2011-03-08 Applied Voice & Speech Technologies, Inc. Template-based electronic message generation using sound input
US8903843B2 (en) 2006-06-21 2014-12-02 Napo Enterprises, Llc Historical media recommendation service
US8260618B2 (en) 2006-12-21 2012-09-04 Nuance Communications, Inc. Method and apparatus for remote control of devices through a wireless headset using voice activation
US8498425B2 (en) 2008-08-13 2013-07-30 Onvocal Inc Wearable headset with self-contained vocal feedback and vocal command
US9787273B2 (en) 2013-06-13 2017-10-10 Google Technology Holdings LLC Smart volume control of device audio output based on received audio input

Patent Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3865998A (en) * 1970-12-02 1975-02-11 Beltone Electronics Corp Ear seal
US4539439A (en) * 1983-04-18 1985-09-03 Unitron Industries Ltd. Plugs, receptacles and hearing aids
US5694475A (en) * 1995-09-19 1997-12-02 Interval Research Corporation Acoustically transparent earphones
US5751820A (en) * 1997-04-02 1998-05-12 Resound Corporation Integrated circuit design for a personal use wireless communication system utilizing reflection
US6587822B2 (en) * 1998-10-06 2003-07-01 Lucent Technologies Inc. Web-based platform for interactive voice response (IVR)
US7133924B1 (en) * 2000-03-08 2006-11-07 Music Choice Personalized audio system and method
US20020007315A1 (en) * 2000-04-14 2002-01-17 Eric Rose Methods and apparatus for voice activated audible order system
US6636953B2 (en) * 2000-05-31 2003-10-21 Matsushita Electric Co., Ltd. Receiving apparatus that receives and accumulates broadcast contents and makes contents available according to user requests
US6379314B1 (en) * 2000-06-19 2002-04-30 Health Performance, Inc. Internet system for testing hearing
US6661901B1 (en) * 2000-09-01 2003-12-09 Nacre AS Ear terminal with microphone for natural voice rendition
US7206429B1 (en) * 2001-05-21 2007-04-17 Gateway Inc. Audio earpiece and peripheral devices
US6965770B2 (en) * 2001-09-13 2005-11-15 Nokia Corporation Dynamic content delivery responsive to user requests
US20030069854A1 (en) * 2001-10-09 2003-04-10 Hsu Michael M. Expiring content on playback devices
US6840908B2 (en) * 2001-10-12 2005-01-11 Sound Id System and method for remotely administered, interactive hearing tests
US20050078843A1 (en) * 2003-02-05 2005-04-14 Natan Bauman Hearing aid system
US20040224723A1 (en) * 2003-05-09 2004-11-11 Jp Mobile Operating, L.P. Multimedia control with one-click device selection
US7558529B2 (en) * 2005-01-24 2009-07-07 Broadcom Corporation Earpiece/microphone (headset) servicing multiple incoming audio streams
US20060193450A1 (en) * 2005-02-25 2006-08-31 Microsoft Corporation Communication conversion between text and audio
US20070025194A1 (en) * 2005-07-26 2007-02-01 Creative Technology Ltd System and method for modifying media content playback based on an intelligent random selection
US20070079692A1 (en) * 2005-10-12 2007-04-12 Phonak Ag MIDI-compatible hearing device
US7546144B2 (en) * 2006-05-16 2009-06-09 Sony Ericsson Mobile Communications Ab Mobile wireless communication terminals, systems, methods, and computer program products for managing playback of song files

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Elias Pampalk, Tim Pohle, and Gerhard Widmer, "Dynamic Playlist Generation Based On Skipping Behavior", Published in: Proc. of the 6th International Society for Music Information Retrieval (ISMIR) Conference, 2005 *

Cited By (162)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9646614B2 (en) 2000-03-16 2017-05-09 Apple Inc. Fast, language-independent method for user authentication by voice
US20070078544A1 (en) * 2005-09-05 2007-04-05 Hon Hai Precision Industry Co., Ltd. Sound output system and method
US20070078547A1 (en) * 2005-09-09 2007-04-05 Hon Hai Precision Industry Co., Ltd. Sound output system and method
US20080205647A1 (en) * 2005-09-22 2008-08-28 Shanghai Yee Networks Co., Ltd Information Subscribing System for Portable Terminal Device Having Autonomous Network Access
US20070078546A1 (en) * 2005-09-23 2007-04-05 Hon Hai Precision Industry Co., Ltd. Sound output system and method
US20070078545A1 (en) * 2005-09-23 2007-04-05 Hon Hai Precision Industry Co., Ltd. Sound output system and method
US20080025536A1 (en) * 2006-07-28 2008-01-31 Josef Chalupper Hearing device for musicians
US8213640B2 (en) * 2006-07-28 2012-07-03 Siemens Audiologische Technik Gmbh Hearing device for musicians
US9117447B2 (en) 2006-09-08 2015-08-25 Apple Inc. Using event alert text as input to an automated assistant
US8942986B2 (en) 2006-09-08 2015-01-27 Apple Inc. Determining user intent based on ontologies of domains
US8930191B2 (en) 2006-09-08 2015-01-06 Apple Inc. Paraphrasing of user requests and results by automated digital assistant
US9256602B2 (en) * 2006-09-13 2016-02-09 Cellco Partnership System and method for distributing and providing recommendations related to playable content to a user based on information extracted from one or more playback devices of the user
US20080065741A1 (en) * 2006-09-13 2008-03-13 Stratton John G System and method for distributing and providing recommendations related to playable content
US8340310B2 (en) 2007-07-23 2012-12-25 Asius Technologies, Llc Diaphonic acoustic transduction coupler and ear bud
US20090080680A1 (en) * 2007-09-24 2009-03-26 Siemens Medical Instruments Pte. Ltd. Hearing apparatus with variably mounted control element
US8165331B2 (en) * 2007-09-24 2012-04-24 Siemens Medical Instruments Pte. Ltd. Hearing apparatus with variably mounted control element
US9330720B2 (en) 2008-01-03 2016-05-03 Apple Inc. Methods and apparatus for altering audio output signals
US20100329485A1 (en) * 2008-03-17 2010-12-30 Temco Japan Co., Ltd. Bone conduction speaker and hearing device using the same
US9626955B2 (en) 2008-04-05 2017-04-18 Apple Inc. Intelligent text-to-speech conversion
US9865248B2 (en) 2008-04-05 2018-01-09 Apple Inc. Intelligent text-to-speech conversion
US8391534B2 (en) 2008-07-23 2013-03-05 Asius Technologies, Llc Inflatable ear device
US20110228964A1 (en) * 2008-07-23 2011-09-22 Asius Technologies, Llc Inflatable Bubble
US8774435B2 (en) 2008-07-23 2014-07-08 Asius Technologies, Llc Audio device, system and method
US8526652B2 (en) 2008-07-23 2013-09-03 Sonion Nederland Bv Receiver assembly for an inflatable ear device
US20100322454A1 (en) * 2008-07-23 2010-12-23 Asius Technologies, Llc Inflatable Ear Device
US9535906B2 (en) 2008-07-31 2017-01-03 Apple Inc. Mobile device having human language translation capability with positional feedback
US10108612B2 (en) 2008-07-31 2018-10-23 Apple Inc. Mobile device having human language translation capability with positional feedback
US20100064218A1 (en) * 2008-09-09 2010-03-11 Apple Inc. Audio user interface
US8898568B2 (en) * 2008-09-09 2014-11-25 Apple Inc. Audio user interface
US8825552B2 (en) 2008-09-29 2014-09-02 Lemi Technology, Llc Providing a radio station at a user device using previously obtained DRM locked content
US20100082488A1 (en) * 2008-09-29 2010-04-01 Concert Technology Corporation Providing a radio station at a user device using previously obtained drm locked content
US20100119100A1 (en) * 2008-11-13 2010-05-13 Devine Jeffery Shane Electronic voice pad and utility ear device
US20100124947A1 (en) * 2008-11-14 2010-05-20 Sony Ericsson Mobile Communications Japan, Inc. Portable terminal, audio output control method, and audio output control program
US8594743B2 (en) * 2008-11-14 2013-11-26 Sony Corporation Portable terminal, audio output control method, and audio output control program
US9959870B2 (en) 2008-12-11 2018-05-01 Apple Inc. Speech recognition involving a mobile device
US20100162117A1 (en) * 2008-12-23 2010-06-24 At&T Intellectual Property I, L.P. System and method for playing media
US9826329B2 (en) 2008-12-23 2017-11-21 At&T Intellectual Property I, L.P. System and method for playing media
US8819554B2 (en) * 2008-12-23 2014-08-26 At&T Intellectual Property I, L.P. System and method for playing media
US8886683B2 (en) * 2009-03-04 2014-11-11 Panasonic Intellectual Property Corporation Of America Metadata generation management device, metadata generation system, integrated circuit for managing generation of metadata, metadata generation management method, and program
US20110040800A1 (en) * 2009-03-04 2011-02-17 Tomoyuki Karibe Metadata generation management device, metadata generation system, integrated circuit for managing generation of metadata, metadata generation management method, and program
US20100304783A1 (en) * 2009-05-29 2010-12-02 Logan James R Speech-driven system with headset
US9858925B2 (en) 2009-06-05 2018-01-02 Apple Inc. Using context information to facilitate processing of commands in a virtual assistant
US20110188668A1 (en) * 2009-09-23 2011-08-04 Mark Donaldson Media delivery system
US8219146B2 (en) * 2009-11-06 2012-07-10 Sony Corporation Audio-only user interface mobile phone pairing
US20110111741A1 (en) * 2009-11-06 2011-05-12 Kirstin Connors Audio-Only User Interface Mobile Phone Pairing
US8892988B1 (en) * 2009-12-16 2014-11-18 Google Inc. Integrated user interface
US9548050B2 (en) 2010-01-18 2017-01-17 Apple Inc. Intelligent automated assistant
US8903716B2 (en) 2010-01-18 2014-12-02 Apple Inc. Personalized vocabulary for digital assistant
US9318108B2 (en) 2010-01-18 2016-04-19 Apple Inc. Intelligent automated assistant
US8892446B2 (en) 2010-01-18 2014-11-18 Apple Inc. Service orchestration for intelligent automated assistant
US20130191122A1 (en) * 2010-01-25 2013-07-25 Justin Mason Voice Electronic Listening Assistant
WO2011091402A1 (en) * 2010-01-25 2011-07-28 Justin Mason Voice electronic listening assistant
US20110182453A1 (en) * 2010-01-25 2011-07-28 Sonion Nederland Bv Receiver module for inflating a membrane in an ear device
US8526651B2 (en) 2010-01-25 2013-09-03 Sonion Nederland Bv Receiver module for inflating a membrane in an ear device
US10049675B2 (en) 2010-02-25 2018-08-14 Apple Inc. User profiling for voice input processing
US9633660B2 (en) 2010-02-25 2017-04-25 Apple Inc. User profiling for voice input processing
US8666103B2 (en) * 2010-05-14 2014-03-04 Dartpoint Tech Co., Ltd. Waterproof pillow with audio unit
US20110280429A1 (en) * 2010-05-14 2011-11-17 Dartpoint Technology Co., LTD. Waterproof pillow with audio unit
US8309833B2 (en) * 2010-06-17 2012-11-13 Ludwig Lester F Multi-channel data sonification in spatial sound fields with partitioned timbre spaces using modulation of timbre and rendered spatial location as sonification information carriers
US20120215532A1 (en) * 2011-02-22 2012-08-23 Apple Inc. Hearing assistance system for providing consistent human speech
US8781836B2 (en) * 2011-02-22 2014-07-15 Apple Inc. Hearing assistance system for providing consistent human speech
US9262612B2 (en) 2011-03-21 2016-02-16 Apple Inc. Device access using voice authentication
US10102359B2 (en) 2011-03-21 2018-10-16 Apple Inc. Device access using voice authentication
US8550206B2 (en) 2011-05-31 2013-10-08 Virginia Tech Intellectual Properties, Inc. Method and structure for achieving spectrum-tunable and uniform attenuation
US20130123919A1 (en) * 2011-06-01 2013-05-16 Personics Holdings Inc. Methods and devices for radio frequency (rf) mitigation proximate the ear
US10241644B2 (en) 2011-06-03 2019-03-26 Apple Inc. Actionable reminder entries
US10057736B2 (en) 2011-06-03 2018-08-21 Apple Inc. Active transport based notifications
US9798393B2 (en) 2011-08-29 2017-10-24 Apple Inc. Text correction processing
US10241752B2 (en) 2011-09-30 2019-03-26 Apple Inc. Interface for a virtual digital assistant
US10134385B2 (en) 2012-03-02 2018-11-20 Apple Inc. Systems and methods for name pronunciation
US9483461B2 (en) 2012-03-06 2016-11-01 Apple Inc. Handling speech synthesis of content for multiple languages
US20130329610A1 (en) * 2012-04-13 2013-12-12 Dominant Technologies, LLC Combined In-Ear Speaker and Microphone for Radio Communication
US9854414B2 (en) 2012-04-13 2017-12-26 Dominant Technologies, LLC Hopping master in wireless conference
US9548854B2 (en) * 2012-04-13 2017-01-17 Dominant Technologies, LLC Combined in-ear speaker and microphone for radio communication
US9953088B2 (en) 2012-05-14 2018-04-24 Apple Inc. Crowd sourcing information to fulfill user requests
US10079014B2 (en) 2012-06-08 2018-09-18 Apple Inc. Name recognition system
US8725498B1 (en) * 2012-06-20 2014-05-13 Google Inc. Mobile speech recognition with explicit tone features
US9495129B2 (en) 2012-06-29 2016-11-15 Apple Inc. Device, method, and user interface for voice-activated navigation and browsing of a document
US9576574B2 (en) 2012-09-10 2017-02-21 Apple Inc. Context-sensitive handling of interruptions by intelligent digital assistant
US9971774B2 (en) 2012-09-19 2018-05-15 Apple Inc. Voice-based media searching
US10043535B2 (en) 2013-01-15 2018-08-07 Staton Techiya, Llc Method and device for spectral expansion for an audio signal
US10199051B2 (en) 2013-02-07 2019-02-05 Apple Inc. Voice trigger for a digital assistant
US20140278400A1 (en) * 2013-03-12 2014-09-18 Microsoft Corporation Search Results Using Intonation Nuances
US9378741B2 (en) * 2013-03-12 2016-06-28 Microsoft Technology Licensing, Llc Search results using intonation nuances
US9510078B2 (en) 2013-03-14 2016-11-29 Cirrus Logic, Inc. Wireless earpiece with local audio cache
US20170078783A1 (en) * 2013-03-14 2017-03-16 Cirrus Logic, Inc. Wireless earpiece with local audio cache
US9368114B2 (en) 2013-03-14 2016-06-14 Apple Inc. Context-sensitive handling of interruptions
US20140270227A1 (en) * 2013-03-14 2014-09-18 Cirrus Logic, Inc. Wireless earpiece with local audio cache
US9210493B2 (en) * 2013-03-14 2015-12-08 Cirrus Logic, Inc. Wireless earpiece with local audio cache
US9788094B2 (en) * 2013-03-14 2017-10-10 Cirrus Logic, Inc. Wireless earpiece with local audio cache
US9333116B2 (en) 2013-03-15 2016-05-10 Natan Bauman Variable sound attenuator
US10045133B2 (en) 2013-03-15 2018-08-07 Natan Bauman Variable sound attenuator with hearing aid
US9697822B1 (en) 2013-03-15 2017-07-04 Apple Inc. System and method for updating an adaptive speech recognition model
US9922642B2 (en) 2013-03-15 2018-03-20 Apple Inc. Training an at least partial voice command system
US20140314238A1 (en) * 2013-04-23 2014-10-23 Personics Holdings, LLC. Multiplexing audio system and method
US9326067B2 (en) * 2013-04-23 2016-04-26 Personics Holdings, Llc Multiplexing audio system and method
US9582608B2 (en) 2013-06-07 2017-02-28 Apple Inc. Unified ranking with entropy-weighted information for phrase-based semantic auto-completion
US9633674B2 (en) 2013-06-07 2017-04-25 Apple Inc. System and method for detecting errors in interactions with a voice-based digital assistant
US9966060B2 (en) 2013-06-07 2018-05-08 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
US9620104B2 (en) 2013-06-07 2017-04-11 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
US9966068B2 (en) 2013-06-08 2018-05-08 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
US10176167B2 (en) 2013-06-09 2019-01-08 Apple Inc. System and method for inferring user intent from speech inputs
US10185542B2 (en) 2013-06-09 2019-01-22 Apple Inc. Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant
US9300784B2 (en) 2013-06-13 2016-03-29 Apple Inc. System and method for emergency calls initiated by voice command
US9521480B2 (en) 2013-07-31 2016-12-13 Natan Bauman Variable noise attenuator with adjustable attenuation
US20160277575A1 (en) * 2013-10-18 2016-09-22 Amos Joseph Alexander Call center system for personalized services
US10045135B2 (en) 2013-10-24 2018-08-07 Staton Techiya, Llc Method and device for recognition and arbitration of an input connection
US10043534B2 (en) 2013-12-23 2018-08-07 Staton Techiya, Llc Method and device for spectral expansion for an audio signal
US20170345273A1 (en) * 2014-02-23 2017-11-30 Hush Technology Inc. Intelligent Earplug System
US9620105B2 (en) 2014-05-15 2017-04-11 Apple Inc. Analyzing audio input for efficient speech and music recognition
US9502031B2 (en) 2014-05-27 2016-11-22 Apple Inc. Method for supporting dynamic grammars in WFST-based ASR
US9734193B2 (en) 2014-05-30 2017-08-15 Apple Inc. Determining domain salience ranking from ambiguous words in natural speech
US10169329B2 (en) 2014-05-30 2019-01-01 Apple Inc. Exemplar-based natural language processing
US9633004B2 (en) 2014-05-30 2017-04-25 Apple Inc. Better resolution when referencing to concepts
US10078631B2 (en) 2014-05-30 2018-09-18 Apple Inc. Entropy-guided text prediction using combined word and character n-gram language models
US9430463B2 (en) 2014-05-30 2016-08-30 Apple Inc. Exemplar-based natural language processing
US9785630B2 (en) 2014-05-30 2017-10-10 Apple Inc. Text prediction using combined word N-gram and unigram language models
US9842101B2 (en) 2014-05-30 2017-12-12 Apple Inc. Predictive conversion of language input
US9760559B2 (en) 2014-05-30 2017-09-12 Apple Inc. Predictive text input
US9715875B2 (en) 2014-05-30 2017-07-25 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US9966065B2 (en) 2014-05-30 2018-05-08 Apple Inc. Multi-command single utterance input method
US10170123B2 (en) 2014-05-30 2019-01-01 Apple Inc. Intelligent assistant for home automation
US10083690B2 (en) 2014-05-30 2018-09-25 Apple Inc. Better resolution when referencing to concepts
US9338493B2 (en) 2014-06-30 2016-05-10 Apple Inc. Intelligent automated assistant for TV user interactions
US9668024B2 (en) 2014-06-30 2017-05-30 Apple Inc. Intelligent automated assistant for TV user interactions
US20160066106A1 (en) * 2014-08-27 2016-03-03 Auditory Labs, Llc Mobile audio receiver
US9818400B2 (en) 2014-09-11 2017-11-14 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US9606986B2 (en) 2014-09-29 2017-03-28 Apple Inc. Integrated word N-gram and class M-gram language models
US10074360B2 (en) 2014-09-30 2018-09-11 Apple Inc. Providing an indication of the suitability of speech recognition
US9986419B2 (en) 2014-09-30 2018-05-29 Apple Inc. Social reminders
US9668121B2 (en) 2014-09-30 2017-05-30 Apple Inc. Social reminders
US9886432B2 (en) 2014-09-30 2018-02-06 Apple Inc. Parsimonious handling of word inflection via categorical stem + suffix N-gram language models
US9646609B2 (en) 2014-09-30 2017-05-09 Apple Inc. Caching apparatus for serving phonetic pronunciations
US10127911B2 (en) 2014-09-30 2018-11-13 Apple Inc. Speaker identification and unsupervised speaker adaptation techniques
US9955516B2 (en) 2014-12-05 2018-04-24 Dominant Technologies, LLC Duplex radio with auto-dial cell connect
US9711141B2 (en) 2014-12-09 2017-07-18 Apple Inc. Disambiguating heteronyms in speech synthesis
US9865280B2 (en) 2015-03-06 2018-01-09 Apple Inc. Structured dictation using intelligent automated assistants
US9886953B2 (en) 2015-03-08 2018-02-06 Apple Inc. Virtual assistant activation
US9721566B2 (en) 2015-03-08 2017-08-01 Apple Inc. Competing devices responding to voice triggers
US9899019B2 (en) 2015-03-18 2018-02-20 Apple Inc. Systems and methods for structured stem and suffix language models
US9842105B2 (en) 2015-04-16 2017-12-12 Apple Inc. Parsimonious continuous-space phrase representations for natural language processing
US10083688B2 (en) 2015-05-27 2018-09-25 Apple Inc. Device voice control for selecting a displayed affordance
US10127220B2 (en) 2015-06-04 2018-11-13 Apple Inc. Language identification from short strings
US10101822B2 (en) 2015-06-05 2018-10-16 Apple Inc. Language input correction
US10186254B2 (en) 2015-06-07 2019-01-22 Apple Inc. Context-based endpoint detection
US10255907B2 (en) 2015-09-04 2019-04-09 Apple Inc. Automatic accent detection using acoustic models
WO2017043688A1 (en) * 2015-09-09 2017-03-16 Soundbridge Co., Ltd. Bluetooth earset having embedded ear canal microphone and method for controlling same
US9697820B2 (en) 2015-09-24 2017-07-04 Apple Inc. Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks
US20170139668A1 (en) * 2015-11-13 2017-05-18 Bragi GmbH Method and apparatus for interfacing with wireless earpieces
US10049668B2 (en) 2015-12-02 2018-08-14 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US20170170796A1 (en) * 2015-12-11 2017-06-15 Unlimiter Mfa Co., Ltd. Electronic device for adjusting an equalizer setting according to a user age, sound playback device, and equalizer adjustment method
US10223066B2 (en) 2015-12-23 2019-03-05 Apple Inc. Proactive assistance based on dialog communication between devices
US9934775B2 (en) 2016-05-26 2018-04-03 Apple Inc. Unit-selection text-to-speech synthesis based on predicted concatenation parameters
US9972304B2 (en) 2016-06-03 2018-05-15 Apple Inc. Privacy preserving distributed evaluation framework for embedded personalized systems
US10249300B2 (en) 2016-06-06 2019-04-02 Apple Inc. Intelligent list reading
US10049663B2 (en) 2016-06-08 2018-08-14 Apple, Inc. Intelligent automated assistant for media exploration
US10067938B2 (en) 2016-06-10 2018-09-04 Apple Inc. Multilingual word prediction
US10192552B2 (en) 2016-06-10 2019-01-29 Apple Inc. Digital assistant providing whispered speech
US10089072B2 (en) 2016-06-11 2018-10-02 Apple Inc. Intelligent device arbitration and control
US10043516B2 (en) 2016-09-23 2018-08-07 Apple Inc. Intelligent automated assistant
WO2018183067A1 (en) * 2017-03-31 2018-10-04 Ecolink Intelligent Technology, Inc. Method and apparatus for interaction with an intelligent personal assistant
US10249323B2 (en) 2017-05-31 2019-04-02 Bose Corporation Voice activity detection for communication headset

Also Published As

Publication number Publication date
US8805692B2 (en) 2014-08-12
EP2044804A4 (en) 2013-12-18
US20140123008A1 (en) 2014-05-01
US20140129229A1 (en) 2014-05-08
US20140122073A1 (en) 2014-05-01
US10236012B2 (en) 2019-03-19
US20140122092A1 (en) 2014-05-01
WO2008008730A3 (en) 2008-04-03
US10236013B2 (en) 2019-03-19
US20140123009A1 (en) 2014-05-01
US10236011B2 (en) 2019-03-19
US20140119557A1 (en) 2014-05-01
US20140119558A1 (en) 2014-05-01
EP2044804A2 (en) 2009-04-08
US20140350943A1 (en) 2014-11-27
US20140123010A1 (en) 2014-05-01
US20140119559A1 (en) 2014-05-01
WO2008008730A2 (en) 2008-01-17
US20140119574A1 (en) 2014-05-01

Similar Documents

Publication Publication Date Title
US8831761B2 (en) Method for determining a processed audio signal and a handheld device
CN101422055B (en) Media delivery system with improved interaction
EP1949579B1 (en) Personal People Meter PPM in the headset of a MP3 portable media player.
US7853664B1 (en) Method and system for purchasing pre-recorded music
US9959783B2 (en) Converting audio to haptic feedback in an electronic device
KR101435531B1 (en) Methods and systems for conducting research operations
US20090069911A1 (en) Digital media player and method for facilitating social music discovery and commerce
KR100597670B1 (en) mobile communication terminal capable of reproducing and updating multimedia content, and method for reproducing the same
EP3094106A1 (en) Method and device for reproducing audio signal with haptic device of acoustic headphones
US7551916B2 (en) Method and device for automatically changing a digital content on a mobile device according to sensor data
US8046689B2 (en) Media presentation with supplementary media
US8819554B2 (en) System and method for playing media
EP2109934B1 (en) Personalized sound system hearing profile selection
US20070270988A1 (en) Method of Modifying Audio Content
US7817803B2 (en) Methods and devices for hearing damage notification and intervention
KR100385925B1 (en) Digital mobile telehone for processing multi-media data and methods for executing and providing multi-media data contents
US9865240B2 (en) Command interface for generating personalized audio content
US20090099836A1 (en) Mobile wireless display providing speech to speech translation and avatar simulating human attributes
US9613028B2 (en) Remotely updating a hearing aid profile
EP1168297B1 (en) Speech synthesis
US8948895B2 (en) System and method for engaging a person in the presence of ambient audio
US8428758B2 (en) Dynamic audio ducking
KR100841026B1 (en) Dynamic content delivery responsive to user requests
EP2025130B1 (en) Mobile wireless communication terminals, systems, methods, and computer program products for publishing, sharing and accessing media files
US20120183164A1 (en) Social network for sharing a hearing aid setting

Legal Events

Date Code Title Description
AS Assignment

Owner name: PERSONICS HOLDINGS INC., FLORIDA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:GOLDSTEIN, STEVEN WAYNE;REEL/FRAME:019916/0814

Effective date: 20070917

AS Assignment

Owner name: PERSONICS HOLDINGS INC., FLORIDA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:GOLDSTEIN, STEVEN WAYNE;REEL/FRAME:020033/0719

Effective date: 20070917

AS Assignment

Owner name: PERSONICS HOLDINGS INC., FLORIDA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:GOLDSTEIN, STEVEN;REEL/FRAME:025713/0409

Effective date: 20070917

AS Assignment

Owner name: STATON FAMILY INVESTMENTS, LTD., FLORIDA

Free format text: SECURITY AGREEMENT;ASSIGNOR:PERSONICS HOLDINGS, INC.;REEL/FRAME:030249/0078

Effective date: 20130418

AS Assignment

Owner name: PERSONICS HOLDINGS, LLC, FLORIDA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:PERSONICS HOLDINGS, INC.;REEL/FRAME:032189/0304

Effective date: 20131231

AS Assignment

Owner name: DM STATON FAMILY LIMITED PARTNERSHIP (AS ASSIGNEE

Free format text: SECURITY INTEREST;ASSIGNOR:PERSONICS HOLDINGS, LLC;REEL/FRAME:034170/0771

Effective date: 20131231

Owner name: DM STATON FAMILY LIMITED PARTNERSHIP (AS ASSIGNEE

Free format text: SECURITY INTEREST;ASSIGNOR:PERSONICS HOLDINGS, LLC;REEL/FRAME:034170/0933

Effective date: 20141017

AS Assignment

Owner name: STATON TECHIYA, LLC, FLORIDA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:DM STATON FAMILY LIMITED PARTNERSHIP;REEL/FRAME:047213/0128

Effective date: 20181008

Owner name: DM STATON FAMILY LIMITED PARTNERSHIP, FLORIDA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PERSONICS HOLDINGS, INC.;PERSONICS HOLDINGS, LLC;REEL/FRAME:047785/0150

Effective date: 20181008

AS Assignment

Owner name: DM STATON FAMILY LIMITED PARTNERSHIP, FLORIDA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PERSONICS HOLDINGS, INC.;PERSONICS HOLDINGS, LLC;REEL/FRAME:047509/0264

Effective date: 20181008