US20090253457A1 - Audio signal processing for certification enhancement in a handheld wireless communications device
Status: Abandoned
Classifications
- H—ELECTRICITY › H03—ELECTRONIC CIRCUITRY › H03G—CONTROL OF AMPLIFICATION
- H03G9/00—Combinations of two or more types of control, e.g. gain control and tone control
- H03G9/02—Combinations of two or more types of control in untuned amplifiers
- H03G9/025—Combinations of two or more types of control in untuned amplifiers with frequency-dependent volume compression or expansion, e.g. multiple-band systems
- H03G9/005—Combinations of two or more types of control of digital or coded signals
Definitions
- This invention relates to handheld wireless communications devices that have a built-in processor for enhancing an audio signal.
- Handheld electronic devices and other portable electronic devices are becoming increasingly popular. Examples of handheld devices include handheld computers, cellular telephones, media players, and hybrid devices that include the functionality of multiple devices of this type. Popular portable electronic devices that are somewhat larger than traditional handheld electronic devices include laptop computers and tablet computers.
- Handheld wireless communications devices often have several functions that involve digital audio signal processing. For example, consider their use as a mobile telephony device (e.g., a cellular telephone handset). Following a call set up or connection phase, a simultaneous two-way voice conversation between a local user of the device and another (remote) user in a telephone call may be enabled as follows.
- a so-called uplink chain in the device is responsible for digitizing the local user's speech that has been detected by a built-in microphone. This may result in a raw digital audio signal or bit stream, e.g. a pulse code modulated, PCM, audio signal or bitstream.
- the uplink chain then digitally codes the raw signal, to remove its redundant content. For instance, a 64 kbits/sec raw speech bitstream may be encoded as a 14 kbits/sec bitstream, without causing a noticeable drop in sound quality.
- the uplink chain modulates a RF carrier signal with the coded signal (and other information regarding the call). An antenna of the device is then driven with the modulated RF carrier. The local user's speech is thus transmitted to the cellular telephone network.
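The bitrate figures in the uplink steps above follow from simple arithmetic. The following is a hedged sketch, assuming 8 kHz narrowband sampling at 8 bits per sample (parameters the text does not state), showing how the raw 64 kbits/sec rate and the compression ratio relative to the 14 kbits/sec coded stream work out:

```python
# Assumed narrowband telephony parameters (not stated in the text):
sample_rate_hz = 8_000       # 8 kHz speech sampling rate
bits_per_sample = 8          # 8-bit companded PCM (e.g., A-law / mu-law)

# Raw PCM rate: samples per second times bits per sample.
raw_bitrate = sample_rate_hz * bits_per_sample   # 64,000 bits/sec

# Coded rate given in the text.
coded_bitrate = 14_000                           # 14 kbits/sec

# How much the speech coder shrinks the bit stream.
compression_ratio = raw_bitrate / coded_bitrate

print(raw_bitrate)                      # 64000
print(round(compression_ratio, 2))      # 4.57
```

Under these assumptions the coder removes redundancy by a factor of roughly 4.6 without a noticeable drop in sound quality, consistent with the figures quoted above.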
- a downlink chain is provided in the device, so that the local user can hear the remote user's speech.
- the downlink chain operates in parallel with, or simultaneously with, the uplink chain, to enable the real-time two-way conversation.
- the downlink chain may essentially perform the reverse of the uplink chain's operations.
- an antenna of the device outputs a downlink RF signal sent by the cellular telephone network.
- the downlink chain then demodulates the downlink RF signal to yield a so-called baseband signal.
- the latter contains a coded audio signal, which includes an encoded version of the captured speech of the remote user.
- the coded audio signal is decoded (e.g., into a PCM bitstream), converted to analog format and then played to the local user, through a receiver or speaker of the device.
- various signal processing operations may be performed on the digital audio signal in both the downlink and uplink chains. These may include noise filtering or noise suppression (sometimes referred to as noise cancellation), gain control, and echo cancellation.
- Handheld wireless communications devices are typically certified for use with a given cellular communications network. This may be in accordance with a specification that is governed by an approved authority such as the PCS Type Certification Review Board (PTCRB).
- the certification process entails the laboratory testing of a manufactured specimen of the device, to determine its compliance with the specification.
- the audio portion of the specification for Global System for Mobile communications, GSM, devices requires that an artificial speech signal (or “cert signal”) be sent over the air during a wireless call with the device.
- the cert signal is received over the air by the device.
- the sound of this cert signal as output by the device's receiver (earpiece speaker) is then measured, at a given volume or loudness setting of the device.
- the cert signal is transmitted by the device over the air to a receiving test station where it is converted into sound.
- the measured sound output (which is a function of frequency) needs to fall within a certain range or mask that is defined in the specification, for the duration of the signal (e.g., about twenty seconds).
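The mask requirement above amounts to a per-frequency bounds check on the measured response. The following is a minimal sketch; the frequency points, limits, and measured levels are invented for illustration and are not taken from any certification specification:

```python
def meets_mask(measured_db, mask_lo_db, mask_hi_db):
    """Return True if every measured level lies within its [lo, hi] mask band."""
    return all(lo <= m <= hi
               for m, lo, hi in zip(measured_db, mask_lo_db, mask_hi_db))

# Hypothetical measured receiver output at a few frequency points (dB):
measured = [-4.0, -1.5, 0.5, -2.0]
lo_mask  = [-6.0, -3.0, -1.0, -4.0]   # lower mask limit at each point
hi_mask  = [ 0.0,  2.0,  3.0,  1.0]   # upper mask limit at each point

print(meets_mask(measured, lo_mask, hi_mask))   # True for this example
```

In the actual test, such a check would be applied over the entire required frequency range of the mask and for the full duration of the cert signal.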
- An embodiment of the invention is a handheld wireless communications device having an adjustable volume setting, an uplink audio processor, and a downlink audio processor.
- a noise suppressor e.g., as part of the uplink audio processor and/or as part of the downlink audio processor attenuates its output signal in accordance with a delay parameter.
- the delay parameter controls how much the onset of said attenuation is delayed.
- the delay parameter is automatically set to indicate a “short” delay when the communications device is at a “higher” volume setting, and a “long” delay when the device is at a “lower” volume setting.
- the long delay is about the same amount of time as an artificial voice signal defined by a communications device certification standard, e.g. about the same period of time as an ITU-T P.50 certification signal, which may be on the order of twenty seconds.
- the short delay is a substantially shorter interval of time, e.g. no more than a few seconds.
- the longer delay is used at the lower volume setting, which may be a nominal setting defined by the certification standard as one that results in a given, received loudness rating, RLR, at the output of the receiver (earpiece speaker) of the device.
- the nominal setting is expected to be substantially lower in loudness than the normal setting needed to allow most end-users of the device to comfortably hear the far side of a telephone call that is being performed by the device.
- the noise suppressor may pass through the cert signal un-attenuated, thereby promoting compliance with the certification process.
- the noise suppressor is automatically configured to react more quickly to the noise that is typical in actual two-way conversations.
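The volume-to-delay behavior summarized above can be sketched as a simple threshold decoder. The threshold, the normalized volume scale, and the concrete delay values below are assumptions chosen to match the rough magnitudes in the text, not values from the patent:

```python
def delay_for_volume(volume, nominal_threshold=0.5,
                     long_delay_s=20.0, short_delay_s=2.0):
    """
    Return a long onset delay at or below the nominal (lower) volume
    setting, and a short delay above it, per the scheme in the text.
    `volume` is assumed normalized to [0.0, 1.0].
    """
    return long_delay_s if volume <= nominal_threshold else short_delay_s

print(delay_for_volume(0.5))   # nominal volume -> long delay (20.0 s)
print(delay_for_volume(0.9))   # higher volume  -> short delay (2.0 s)
```

At the nominal (certification) volume the suppressor thus waits out the entire cert signal, while at the higher volumes typical of real calls it reacts to noise within seconds.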
- FIG. 1 is a perspective view of an example handheld wireless communications device in which the embodiments of the invention can be implemented.
- FIG. 2 is a block diagram showing internal circuit components of the example wireless handheld communications device of FIG. 1 .
- FIG. 3 depicts example uplink and downlink audio processors integrated in a wireless handheld communications device and that can be used to implement the certification enhancement features described here.
- FIG. 4 explains how certification enhancement may be achieved by configuring a noise suppressor with variable delay, in accordance with an embodiment of the invention.
- FIG. 5 shows example sample sequences at the output of the noise suppressor configured with a long delay and with a short delay.
- FIG. 1 is a perspective view of an example handheld wireless communications device 200 in which the embodiments of the invention can be implemented. Note that the particular device 200 shown and described here is just an example—the concept of certification enhancement described further below may be implemented in other types of handheld wireless communications devices, e.g. ones that do not use a touch screen display, or ones that do not have a chocolate bar type housing.
- the device 200 shown and described here has similarities to the iPhone™ device by Apple Inc. of Cupertino, Calif. Alternatively, it could be another portable or mobile, handheld multi-function electronic device or smart phone that has some or all of the certification enhancement functionality described below.
- the device 200 in this case has a fixed, single piece housing, sometimes described as a candy bar or chocolate bar type, in which the primary mechanism for visual and tactile interaction with the user is a touch sensitive display screen 252 .
- An alternative to this type of mobile device is one that has a moveable, multi-piece housing such as a clam shell design or one with a sliding, physical key pad as used by other smart phone manufacturers.
- the touch screen 252 will display typical smart phone features, such as visual voicemail, web browsing, email functions, digital camera pictures, as well as others.
- FIG. 1 shows the touch screen 252 displaying the home or main menu of a graphical user interface that allows a user of the device 200 to interact with various application programs that can run in the device 200 .
- the home menu displays icons or graphical images that represent application programs, files, and their associated commands as shown. These may include windows, fields, dialog boxes, menus, virtual buttons, cursors, scrollbars, etc.
- the user can select from these graphical images or objects by touching the surface of the screen 252 with her finger, in response to which the associated application program will be launched.
- the device 200 has a wireless telephony function that enables its user to receive and place audio and/or video calls.
- an opening 210 is formed through which downlink audio during a call is emitted from an earpiece speaker 220 .
- a microphone 216 is located to pick up the near end user's speech, which is then transmitted in an uplink signal to the far end user, during the call.
- the device 200 also has a speakerphone speaker 218 built into the device housing, which allows the user to conduct a call without having to hold the device 200 against her ear.
- a proximity sensor 254 may be integrated in the mobile device 200 , so as to detect proximity of the touch screen 252 to the user's face or head, and thereby automatically disable input through the touch screen 252 during a handset mode call.
- FIG. 2 is a block diagram of several internal circuit components of the example wireless handheld communications device 200 , presented as an overview of the device 200 .
- the device 200 has several built in electro-acoustic transducers including for example, a microphone 216 , a receiver (ear speaker or earpiece) 220 , and a speaker (speakerphone) 218 .
- the microphone 216 provides an output analog audio signal, whereas the receiver and speaker receive input analog audio signals. Collectively, these are referred to here as the analog acoustic transducer signals.
- An audio coder-decoder (codec) 214 acts as an interface to the analog input of the microphone and the analog outputs of the receiver and speaker, by providing any and all analog amplifiers and other analog signal conditioning circuitry that is needed for conditioning the analog acoustic transducer signals.
- the codec 214 may be a separate integrated circuit (IC) package.
- the codec 214 operates in two modes. It can be configured into either mode, by control signals or programming supplied by an applications processor 150 over an I2C bus or other component bus.
- In a media player mode, the device 200 is operating as a digital media player (e.g., an MP3 player that is playing back a music file stored in the device 200 ).
- the codec 214 applies analog to digital and digital to analog conversion to the analog acoustic transducer signals to generate corresponding digital signals.
- the codec 214 supplies the digitized microphone signal to an applications processor 150 , and converts a digital audio signal from the applications processor 150 into analog form and then applies it to the receiver and/or speaker for play back.
- In a telephony mode, the device 200 is operating as a mobile telephony device (e.g., allowing its user to be in a real time audio conversation with another remote user during a cellular telephone call).
- the codec 214 acts as an analog pass through with no digital conversion, so that the analog acoustic transducer signals are passed through, with perhaps some analog amplification or buffering, between the baseband processor 52 and the acoustic transducers (signal line 152 outputs the microphone signal, while signal line 154 inputs the receiver or speaker signal).
- the baseband processor 52 includes an interface to receive signals from, and transmit signals to, a cellular network.
- the baseband processor which may be a separate integrated circuit (IC) package, has an input port to receive a downlink signal, and an output port to transmit an uplink signal. These may be in a band around 26 MHz, for example, but alternatively they may be at other frequency bands that are considered intermediate (between baseband and RF at the antenna input).
- the uplink signal at the baseband processor's output port may be ready to be upconverted into a cellular network RF signal, such as a long range wireless communications signal that is directed to a cellular telephone network's base station, for example in a 3G or Universal Mobile Telecommunications System, UMTS, band, e.g. the 850 MHz, 900 MHz, 1800 MHz, and 1900 MHz bands.
- the downlink signal that is input to the baseband processor has been downconverted from such an RF band, down to intermediate frequencies, e.g. a 26 MHz band.
- the uplink/downlink signal at the output/input of the baseband processor may be upconverted into/downconverted from the antenna's radiation band, by a frequency upconverter/downconverter that is external to the baseband processor IC package (e.g., as part of the RF transceiver IC package 54 ).
- the signal at the input/output port of the baseband processor may be an intermediate frequency (IF) signal that is above the baseband frequencies but below the cellular network band frequencies (so called RF frequencies here).
- IF intermediate frequency
- the RF up conversion and down conversion may be direct, from and to baseband, rather than going through an intermediate frequency.
- the baseband processor may perform known cellular baseband processing tasks including cellular protocol signaling, coding and decoding, and signaling with the external RF transceiver. These together with the RF processing in the external RF transceiver may be referred to as the radio section of the device 200 .
- the base band processor 52 may be programmable, in accordance with software that has been encoded and stored in its associated non-volatile memory 154 . Permission to access the cellular network may be granted to the near end user in accordance with a subscriber identity module, SIM, card that is installed in the mobile device 200 to connect with the SIM connector 258 .
- the device 200 and the cellular network may be in agreement with respect to a particular voice coding (vocoding) scheme that is to be applied to the raw digital audio signal from the microphone (uplink signal) which is transmitted by the device 200 . Similarly, an agreement is needed for the particular voice decode scheme which the device 200 should be applying to a downlink signal. Any known voice coding and decoding scheme that is suitable for the particular wireless communications protocol used may be adopted.
- the voice coding and decoding sections of the baseband processor may also be considered to be part of the radio section of the device 200 .
- the device 200 may also have further wireless communications capability, to provide a global positioning system, GPS, service, a Bluetooth link, and a TCP/IP link to a wireless local area network.
- a Bluetooth transceiver 160 is provided together with a wireless local area network, WLAN, transceiver 164 , which provide additional wireless communication channels for the device 200 . These two channels may share an antenna 63 for short range wireless communications (e.g., in accordance with a Bluetooth protocol and/or a wireless local area network protocol).
- An RF diplexer 188 has a pair of RF ports that are coupled to the antenna 63 .
- One of the RF ports is used for GPS services, which a GPS receiver integrated circuit 156 uses to obtain GPS data that allows the mobile device 200 to determine its location and report it to its user.
- the other RF port of the diplexer 188 is coupled to an RF front end 172 that combines Bluetooth and WLAN RF signals.
- the cellular network, GPS, Bluetooth, and WLAN services may be managed by programming the applications processor 150 to communicate with the base band processor 52 , Bluetooth transceiver 160 , and wireless transceiver 164 through separate, component buses. Although not shown, there may also be separate component buses connecting the base band processor 52 to the Bluetooth transceiver 160 and WLAN transceiver 164 , to enable the latter transceivers to take advantage of the audio processing engine available in the base band processor 52 , to, for example, conduct a wireless voice over IP call (using the WLAN transceiver 164 ) and to allow the near end user to conduct the call through a wireless headset (using Bluetooth transceiver 160 ).
- the so-called power hungry components of the mobile device 200 may include the base band processor 52 , the applications processor 150 , the touch screen 252 , and the transmit RF power amplifiers that are part of the RF circuitry 54 . These are coupled to be monitored by a power management unit 248 .
- the power management unit 248 may monitor power consumption by individual components of the device 200 and may signal power management commands to one or more of the components as needed so as to conserve battery energy and control battery temperature.
- the mobile device 200 may also have a dock connector 230 that communicates with a USB port of the processor 150 , allowing the device 200 to, for example, synchronize certain files of the user with corresponding files that are stored in a desktop or notebook personal computer of the same user.
- the dock connector 230 may also be used to connect with a power adapter or other electricity source for charging the battery (via the battery connector 108 ).
- the mobile device 200 may have digital camera circuitry and optics 264 that are coupled to the processor 150 , enabling the mobile device to be used as a digital still or video camera.
- the device 200 may be essentially considered to be a computer whose processor 150 executes boot code and an operating system (OS) stored in the memory 262 within the device.
- Additional applications or widgets may be executed by the processor 150 , such as those depicted in FIG. 1 , including a clock function, SMS or text messaging service application, a weather widget, a calendar application, a street map navigation application, and a music download service application (the iTunes™ service).
- the device 200 has a digital audio signal processing structure between its radio section 302 (responsible for interfacing with the cellular phone network for example or a wireless local area network), and its baseband analog front end (BB AFE) 304 , as depicted in FIG. 3 .
- the AFE 304 interfaces with the acoustic transducers of the device 200 , namely the microphone 216 and earpiece speaker 220 , by performing the needed analog to digital, A/D, and digital to analog, D/A, conversion.
- a block of audio amplifiers and other analog audio signal conditioning circuitry 306 is part of the analog interface with the microphone 216 and speakers 218 , 220 ; it generates a speaker driver signal whose strength is based on an adjustable volume setting received as input. The latter may be a digital signal set by the user's manual actuation of a volume up/down button of the device 200 (see FIG. 2 , block 272 ).
- the digital signal processing structure includes a downlink signal processing chain or downlink audio processor 308 that has an input coupled to the radio section 302 and an output feeding the AFE 304 .
- the downlink audio processor 308 may perform several digital signal processing operations upon the decoded, digital or sampled voice signal or bit stream (also referred to as the downlink baseband or audio signal), in order to enhance the latter. These may include a combination of one or more of the following operations performed on the downlink audio signal: adjust its gain using a downlink programmable gain amplifier (PGA) 310 ; apply general filtering to it using a downlink programmable digital filter 312 ; perform multi-band audio compression upon it using a downlink multi-band compressor 314 ; and reduce noise using a noise suppressor 316 . Not all of these operations are needed in the downlink chain—for example, the noise suppressor 316 may be omitted.
- the downlink audio signal received from the radio section 302 may be in the form of a pulse code modulated bit stream.
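The downlink chain described above can be pictured as a pipeline of per-block transforms applied in order. The following is a schematic sketch only; the gain model and the 2-tap averaging filter are stand-ins for the patent's programmable blocks, and the multi-band compressor and noise suppressor stages are omitted:

```python
def pga(samples, gain):
    """Programmable gain amplifier: scale every sample by the gain setting."""
    return [s * gain for s in samples]

def digital_filter(samples):
    """Placeholder for the programmable filter: a 2-tap moving average,
    pairing each sample with its predecessor (first sample with itself)."""
    return [(cur + prev) / 2
            for cur, prev in zip(samples, [samples[0]] + samples[:-1])]

def downlink_chain(samples, gain=1.0):
    """Apply PGA, then filter (compressor and noise suppressor omitted)."""
    return digital_filter(pga(samples, gain))

out = downlink_chain([1.0, 1.0, 1.0], gain=2.0)
print(out)   # [2.0, 2.0, 2.0]
```

The real blocks would of course be more elaborate; the point is only that each stage consumes and produces the same sampled bit stream, so stages can be inserted or omitted per chain, as the text notes for the noise suppressor 316.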
- the device 200 may also perform enhancement operations on an uplink audio signal. This is done using an uplink audio processor 318 that acts upon an uplink digital voice signal or bit stream received from the microphone 216 via the AFE 304 (prior to the voice coding of the signal for transmission).
- the uplink audio processor 318 includes the following signal processing chain: an energy limiter 320 , an uplink PGA 322 ; an uplink programmable digital filter 324 ; an echo and noise canceller 326 (including a noise suppressor block); and an uplink multi-band compressor 328 .
- Not every one of these blocks, e.g. the uplink multi-band compressor 328 , may be needed in every instance.
- the downlink audio processor 308 and the uplink audio processor 318 may each be implemented as a separate programmed processor, or a separate combination of a programmed processor and dedicated hardwired logic.
- the functions of the downlink and uplink chains may be performed by the combination of a single, programmable processor, e.g. such as one that is available in the baseband processor 52 (see FIG. 2 ).
- a processor that executes stored or encoded program instructions (e.g., that are stored or encoded in external memory 154 of the baseband processor, see FIG. 2 ) to enhance a voice signal that is being passed through it for purposes of both certification testing and normal end-user scenarios, as follows.
- the uplink voice signal, the downlink voice signal, or both can be enhanced for purposes of passing a certification test.
- some background on a typical certification test is provided.
- a multi-sine certification signal, which is an artificial speech signal that lasts about twenty seconds, is sent over the air to the device 200 , becoming in effect the downlink audio signal.
- the characteristics of this cert signal, including its duration and spectral content, may be in accordance with an industry standard, e.g. the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) Recommendation P.50.
- a receiver (earpiece speaker) volume setting of the device is placed in a “nominal” setting, which is one that results in a required, received loudness rating, RLR, (measured typically in dB) from the device's receiver or earpiece speaker, while the received cert signal is being emitted from it.
- the nominal RLR may be 2 dB.
- the nominal volume setting may be about the midpoint of the full volume range (between minimum and maximum) available from the device 200 .
- the measured sound output of the receiver, for the cert signal, should fall within or meet a defined mask or envelope, over the entire required frequency range of the mask and for a given duration of the signal.
- a certification process may be one that is governed by the PCS Type Certification Review Board, PTCRB.
- the artificial speech signal is played to the microphone of the device 200 .
- the signal is processed by the uplink audio processor 318 and then transmitted over the air, e.g. during a wireless call, to a testing destination. There, the signal is converted to sound, picked up by a test microphone, and then evaluated for compliance with a specified, Send Loudness Rating, SLR (by further test equipment).
- the noise suppressor generally serves to reduce the level of noise in the voice signal being passed through it, to improve the quality of the sound eventually heard by the user through a receiver or speaker. It operates by digitally attenuating an input voice signal in accordance with a known noise detection algorithm. In other words, when the algorithm “sees” what is likely to be a time interval of just noise in the input bit stream, the samples of the bit stream in that time interval are attenuated. In some cases, however, the noise suppressor may become “confused” by the certification signal that is applied to the device 200 during, for example, GSM certification testing. In that case, the typical noise suppressor would attenuate essentially the entirety of the cert signal, thinking it is noise, thereby causing the certification test to fail when the sound output drops.
- the multi-sine cert signal has essentially no pauses in it, unlike an actual speech signal in which there are naturally occurring pauses between spoken words.
- the noise suppressor may be designed to respond to such pauses, and, based on the detected pauses, appropriately attenuates those pause intervals. A large part of the cert signal might in that case be mistakenly taken to be a pause interval.
- a variable delay noise suppressor 416 can be inserted into the uplink audio processor 318 , the downlink audio processor 308 , or both.
- This delay is an interval of time (or number of samples in the input or output bit stream) that the noise suppressor 416 will essentially wait or skip, before starting attenuation (in accordance with its noise suppression algorithm).
- the delay is variable in that it will depend on the current volume setting for the receiver of the device 200 .
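The delayed-onset behavior of the variable delay noise suppressor 416 can be sketched as follows. This is a schematic model only: it assumes a fixed attenuation factor and takes a precomputed per-sample noise flag in place of a real noise-detection algorithm, and the delay is expressed in samples rather than seconds:

```python
def suppress(samples, is_noise, delay_samples, atten=0.1):
    """
    Attenuate samples flagged as noise, but only after `delay_samples`
    samples have elapsed since the first detected noise sample.
    `is_noise` stands in for the suppressor's noise-detection algorithm.
    """
    out = []
    noise_start = None
    for i, (s, noisy) in enumerate(zip(samples, is_noise)):
        if noisy and noise_start is None:
            noise_start = i                      # onset of detected "noise"
        # Pass samples through until the configured delay has elapsed.
        in_delay = noise_start is None or (i - noise_start) < delay_samples
        out.append(s if not noisy or in_delay else s * atten)
    return out

sig   = [1.0] * 6
noise = [True] * 6   # e.g., a cert signal the detector mistakes for noise

# Long delay: the whole signal passes through unattenuated.
print(suppress(sig, noise, delay_samples=10))
# Short delay: attenuation begins after 2 samples.
print(suppress(sig, noise, delay_samples=2))
```

With a long delay the mistaken "noise" (the cert signal) survives intact; with a short delay the suppressor reacts within a couple of samples, as desired for real conversations.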
- FIG. 4 depicts an instance where the noise suppressor 416 is in the uplink chain.
- the uplink portion of a certification test scenario is depicted, where an artificial speech signal is played to the microphone of the device 200 .
- the signal is processed by the uplink audio processor 318 and then transmitted over the air, e.g. during a wireless call, to a testing destination. There, the signal is converted to sound and then picked up by a test microphone and then evaluated for compliance with a specified SLR (by further test equipment).
- the noise suppressor 416 could be in the downlink chain, in which case the relevant portion of the certification test scenario would be a downlink portion that evaluates sound output from the receiver of the device 200 (for compliance with a specified RLR).
- a “nominal volume setting” is one that is used in a certification test scenario. In many instances, it may be about half way in the full range of volume, but in other instances may be a little higher or lower depending on the characteristics of the receiver and the RLR required by the certification test.
- the noise suppressor's delay parameter is automatically set, by a decoder 404 , to indicate a “long” delay, i.e. about the same as or longer than the duration of the expected cert signal.
- the effect of this adjustable delay is depicted in the sample sequence shown, as a number of samples (interval of time) beginning at the start of the detected noise interval (when attenuation would normally begin). So configured, the noise suppressor 416 waits this delay time before it reacts by attenuating its output signal. This allows the cert test signal to pass through the noise suppressor block unattenuated, thereby not causing the device 200 to fail the certification test.
- the variable delay noise suppressor 416 may pass its input samples through unattenuated, until it detects noise in the input samples.
- the noise suppressor 416 will then attenuate its output signal to suppress the detected noise. However, it delays attenuating the output signal, from some reference point in time or in the output sequence, i.e. continues to pass its input samples unattenuated, until after the set delay has elapsed. At a nominal volume setting, this delay is set based on the duration of the expected multi-sine cert signal to which the device 200 will be subjected for certification testing.
- at a higher volume setting, the delay parameter is set to indicate a substantially shorter delay (e.g., 3 seconds or less).
- the device 200 responds automatically to a lower volume setting, e.g. set manually by its user, by configuring its noise suppressor's delay (of the onset of attenuation) to be long, and to a higher volume setting by configuring the delay to be short.
- FIG. 5 illustrates how this capability solves the certification test problem introduced above, using an example 15 second long cert signal, a short delay of 2 seconds, and a long delay of 18 seconds. If the noise suppressor delay is too short, then much of the cert signal is attenuated during the certification test (thereby causing the device 200 to fail the test). This problem is avoided by making the noise suppressor delay long enough so that a sufficient portion of the cert signal passes through unattenuated. The delay is adjusted based on the current volume setting of the device 200 .
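Using FIG. 5's example numbers (a 15-second cert signal, a 2-second short delay, and an 18-second long delay), a rough back-of-the-envelope sketch shows how much of the cert signal survives each delay setting. This simplifies the actual algorithm by treating the suppressor as attenuating everything after the delay elapses:

```python
def unattenuated_fraction(cert_duration_s, delay_s):
    """Fraction of the cert signal that passes before attenuation onset."""
    return min(delay_s, cert_duration_s) / cert_duration_s

# Example values from FIG. 5:
CERT_S, SHORT_S, LONG_S = 15.0, 2.0, 18.0

# Short delay: only ~13% of the cert signal passes -> test would fail.
print(unattenuated_fraction(CERT_S, SHORT_S))
# Long delay: the entire cert signal passes -> test can pass.
print(unattenuated_fraction(CERT_S, LONG_S))   # 1.0
```

Since the long delay exceeds the cert signal's duration, attenuation never begins during the test, which is exactly the behavior selected at the nominal (lower) volume setting.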
- an embodiment of the invention may be a machine-readable medium having stored or encoded thereon instructions which program a processor to perform some of the operations described above. In other embodiments, some of these operations might be performed by specific hardware components that contain hardwired logic. Those operations might alternatively be performed by any combination of programmed data processing components and fixed hardware circuit components.
- a machine-readable medium may include any mechanism for storing or transferring information in a form readable by a machine (e.g., a computer), such as Compact Disc Read-Only Memory (CD-ROMs), Read-Only Memory (ROMs), Random Access Memory (RAM), and Erasable Programmable Read-Only Memory (EPROM).
- the invention is not limited to the specific embodiments described above.
- the numerical values given in FIG. 5 for the duration of the cert signal and the adjustable delays may be different, depending on the type of cert signal and the type of noise suppressor used. Accordingly, other embodiments are within the scope of the claims.
Abstract
A handheld wireless communications device has an adjustable volume setting. The communications device also has an uplink audio processor and a downlink audio processor. One (or both) of the audio processors contains a noise suppressor. The noise suppressor attenuates its output signal in accordance with a delay parameter. The delay parameter controls how much the onset of the attenuation is delayed. The delay parameter is automatically set to indicate a short delay when the communications device is at a high volume setting, and a long delay when the device is at a low volume setting. Other embodiments are also described and claimed.
Description
- This application claims the benefit of the earlier filing date of U.S. provisional application Ser. No. 61/042,622, filed Apr. 4, 2008, entitled “Audio Signal Processing in a Handheld Wireless Communications Device”.
- This invention relates to handheld wireless communications devices that have a built-in processor for enhancing an audio signal.
- Handheld electronic devices and other portable electronic devices are becoming increasingly popular. Examples of handheld devices include handheld computers, cellular telephones, media players, and hybrid devices that include the functionality of multiple devices of this type. Popular portable electronic devices that are somewhat larger than traditional handheld electronic devices include laptop computers and tablet computers.
- Handheld wireless communications devices often have several functions that involve digital audio signal processing. For example, consider their use as a mobile telephony device (e.g., a cellular telephone handset). Following a call set up or connection phase, a simultaneous two-way voice conversation between a local user of the device and another (remote) user in a telephone call may be enabled as follows.
- A so-called uplink chain in the device is responsible for digitizing the local user's speech that has been detected by a built-in microphone. This may result in a raw digital audio signal or bit stream, e.g. a pulse code modulated, PCM, audio signal or bitstream. The uplink chain then digitally codes the raw signal, to remove its redundant content. For instance, a 64 kbits/sec raw speech bitstream may be encoded as a 14 kbits/sec bitstream, without causing a noticeable drop in sound quality. Next, the uplink chain modulates an RF carrier signal with the coded signal (and other information regarding the call). An antenna of the device is then driven with the modulated RF carrier. The local user's speech is thus transmitted to the cellular telephone network.
- To enable the above-mentioned two-way conversation, a downlink chain is provided in the device, so that the local user can hear the remote user's speech. The downlink chain operates simultaneously with the uplink chain, to enable the real-time two-way conversation. The downlink chain may essentially perform the reverse of the uplink chain's operations. Thus, an antenna of the device outputs a downlink RF signal sent by the cellular telephone network. The downlink chain then demodulates the downlink RF signal to yield a so-called baseband signal. The latter contains a coded audio signal, which includes an encoded version of the captured speech of the remote user. The coded audio signal is decoded (e.g., into a PCM bitstream), converted to analog format and then played to the local user, through a receiver or speaker of the device. To render higher quality or better sound when an audio signal is played back, various signal processing operations may be performed on the digital audio signal in both the downlink and uplink chains. These may include noise filtering or noise suppression (sometimes referred to as noise cancellation), gain control, and echo cancellation.
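For illustration only (this sketch is not part of the patent disclosure; the function name and numeric values are invented), one of the enhancement operations named above, digital gain control, can be applied to a block of 16-bit PCM samples as follows:

```python
# Illustrative sketch of digital gain control on 16-bit PCM samples.
# The dB-to-linear conversion and saturation behavior are generic; nothing
# here is taken from the patent's own implementation.

def apply_gain(pcm_samples, gain_db):
    """Scale PCM samples by a gain given in dB, clamping to the 16-bit range."""
    scale = 10 ** (gain_db / 20.0)  # convert dB to a linear amplitude factor
    out = []
    for s in pcm_samples:
        v = int(round(s * scale))
        out.append(max(-32768, min(32767, v)))  # saturate rather than wrap
    return out
```

Saturating (clamping) rather than letting the integer value wrap is the conventional choice in audio paths, since wrap-around produces severe audible distortion.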
- Most handheld wireless communications devices are typically certified for use with a given cellular communications network. This may be in accordance with a specification that is governed by an approved authority such as the PCS Type Certification Review Board (PTCRB). The certification process entails the laboratory testing of a manufactured specimen of the device, to determine its compliance with the specification. For example, the audio portion of the specification for Global System for Mobile communications, GSM, devices requires that an artificial speech signal (or “cert signal”) be sent over the air during a wireless call with the device. For the downlink portion of the test, the cert signal is received over the air by the device. The sound of this cert signal as output by the device's receiver (earpiece speaker) is then measured, at a given volume or loudness setting of the device. For the uplink portion of the test, the cert signal is transmitted by the device over the air to a receiving test station where it is converted into sound. In order for the device to pass the certification, the measured sound output (which is a function of frequency) needs to fall within a certain range or mask that is defined in the specification, for the duration of the signal (e.g., about twenty seconds).
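The mask comparison described above can be sketched as a simple check that the measured response stays inside a lower/upper envelope at every test frequency. The function and the mask values below are invented for illustration and are not taken from any certification specification:

```python
# Hypothetical sketch of a frequency-mask compliance check: the measured
# sound output (as a function of frequency) must fall within a defined
# lower/upper envelope at every required frequency.

def meets_mask(response_db, mask):
    """response_db: {freq_hz: measured level in dB};
    mask: {freq_hz: (lower_db, upper_db)} envelope from the specification."""
    for freq, (lo, hi) in mask.items():
        level = response_db.get(freq)
        if level is None or not (lo <= level <= hi):
            return False  # missing or out-of-envelope point fails the test
    return True
```

A real test station evaluates this over the duration of the cert signal; the sketch only captures the per-frequency envelope comparison.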
- An embodiment of the invention is a handheld wireless communications device having an adjustable volume setting, an uplink audio processor, and a downlink audio processor. A noise suppressor (e.g., as part of the uplink audio processor and/or as part of the downlink audio processor) attenuates its output signal in accordance with a delay parameter. The delay parameter controls how much the onset of said attenuation is delayed. The delay parameter is automatically set to indicate a "short" delay when the communications device is at a "higher" volume setting, and a "long" delay when the device is at a "lower" volume setting.
- In one embodiment, the long delay is about the same amount of time as an artificial voice signal defined by a communications device certification standard, e.g. about the same period of time as an ITU-T P.50 certification signal which may be on the order of twenty seconds. In contrast, the short delay is a substantially shorter interval of time, e.g. no more than a few seconds. The longer delay is used at the lower volume setting, which may be a nominal setting defined by the certification standard as one that results in a given, received loudness rating, RLR, at the output of the receiver (earpiece speaker) of the device. The nominal setting is expected to be substantially lower in loudness than the normal setting needed to allow most end-users of the device to comfortably hear the far side of a telephone call that is being performed by the device. By automatically configuring the noise suppressor with a delay that is at least as long as the cert signal (when the volume is at the much lower nominal setting during certification testing), the noise suppressor may pass through the cert signal un-attenuated, thereby promoting compliance with the certification process. When the volume level has been raised to the normal level (or higher), which is typical of an end-user configuration, the noise suppressor is automatically configured to react more quickly to the noise that is typical in actual two-way conversations.
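The volume-to-delay mapping just described can be expressed compactly. This is an illustrative sketch only; the names and the 2-second "short" value are invented, while the roughly 20-second "long" value follows the cert signal duration given above:

```python
# Sketch of the delay selection described in the text: at or below the
# nominal certification volume setting, the delay is at least as long as
# the expected cert signal; at normal (higher) end-user volumes, the
# suppressor reacts quickly.

CERT_SIGNAL_S = 20.0  # approx. duration of the artificial voice signal
SHORT_DELAY_S = 2.0   # example "short" delay, no more than a few seconds

def attenuation_delay(volume, nominal_volume):
    """Return the noise suppressor's onset-of-attenuation delay in seconds."""
    if volume <= nominal_volume:
        return CERT_SIGNAL_S  # "long" delay: cert signal passes unattenuated
    return SHORT_DELAY_S      # "short" delay: normal noise suppression
```

Because the nominal setting is substantially lower than a typical end-user volume, this single comparison distinguishes the certification scenario from everyday use without any explicit "test mode" flag.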
- The above summary does not include an exhaustive list of all aspects of the present invention. It is contemplated that the invention includes all systems and methods that can be practiced from all suitable combinations of the various aspects summarized above, as well as those disclosed in the Detailed Description below and particularly pointed out in the claims filed with the application. Such combinations have particular advantages not specifically recited in the above summary.
- The embodiments of the invention are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings in which like references indicate similar elements. It should be noted that references to “an” or “one” embodiment of the invention in this disclosure are not necessarily to the same embodiment, and they mean at least one.
-
FIG. 1 is a perspective view of an example handheld wireless communications device in which the embodiments of the invention can be implemented. -
FIG. 2 is a block diagram showing internal circuit components of the example wireless handheld communications device of FIG. 1. -
FIG. 3 depicts example uplink and downlink audio processors, integrated in a wireless handheld communications device, that can be used to implement the certification enhancement features described here. -
FIG. 4 explains how certification enhancement may be achieved by configuring a noise suppressor with variable delay, in accordance with an embodiment of the invention. -
FIG. 5 shows example sample sequences at the output of the noise suppressor configured with a long delay and with a short delay. - Various embodiments of the invention, namely methods and circuitry for audio signal processing used in a handheld wireless communications device to meet certification requirements, are now described in some detail, beginning with an overview of the electronic hardware and software components that make up an example wireless handheld communications device.
-
FIG. 1 is a perspective view of an example handheld wireless communications device 200 in which the embodiments of the invention can be implemented. Note that the particular device 200 shown and described here is just an example; the concept of certification enhancement described further below may be implemented in other types of handheld wireless communications devices, e.g. ones that do not use a touch screen display, or ones that do not have a chocolate bar type housing. - The
device 200 shown and described here has similarities to the iPhone™ device by Apple Inc. of Cupertino, Calif. Alternatively, it could be another portable or mobile, handheld multi-function electronic device or smart phone that has some or all of the certification enhancement functionality described below. The device 200 in this case has a fixed, single piece housing, sometimes described as a candy bar or chocolate bar type, in which the primary mechanism for visual and tactile interaction with the user is a touch sensitive display screen 252. An alternative to this type of mobile device is one that has a moveable, multi-piece housing such as a clam shell design or one with a sliding, physical key pad as used by other smart phone manufacturers. The touch screen 252, or in other cases a simple display screen, will display typical smart phone features, such as visual voicemail, web browsing, email functions, digital camera pictures, as well as others. The example in FIG. 1 shows the touch screen 252 displaying the home or main menu of a graphical user interface that allows a user of the device 200 to interact with various application programs that can run in the device 200. The home menu displays icons or graphical images that represent application programs, files, and their associated commands as shown. These may include windows, fields, dialog boxes, menus, virtual buttons, cursors, scrollbars, etc. The user can select from these graphical images or objects by touching the surface of the screen 252 with her finger, in response to which the associated application program will be launched. - The
device 200 has a wireless telephony function that enables its user to receive and place audio and/or video calls. At the upper end of the housing, an opening 210 is formed through which downlink audio during a call is emitted from an earpiece speaker 220. At a bottom end portion of the device 200, a microphone 216 is located to pick up the near end user's speech, which is then transmitted in an uplink signal to the far end user, during the call. In some cases, the device 200 also has a speakerphone speaker 218 built into the device housing, which allows the user to conduct a call without having to hold the device 200 against her ear. A proximity sensor 254 (see also FIG. 2) may be integrated in the mobile device 200, so as to detect proximity of the touch screen 252 to the user's face or head, and thereby automatically disable input through the touch screen 252 during a handset mode call. -
FIG. 2 is a block diagram of several internal circuit components of the example wireless handheld communications device 200, presented as an overview of the device 200. The device 200 has several built in electro-acoustic transducers including for example, a microphone 216, a receiver (ear speaker or earpiece) 220, and a speaker (speakerphone) 218. The microphone 216 provides an output analog audio signal, whereas the receiver and speaker receive input analog audio signals. Collectively, these are referred to here as the analog acoustic transducer signals. An audio coder-decoder (codec) 214 acts as an interface to the analog input of the microphone and the analog outputs of the receiver and speaker, by providing any and all analog amplifiers and other analog signal conditioning circuitry that is needed for conditioning the analog acoustic transducer signals. The codec 214 may be a separate integrated circuit (IC) package. - In one example, the
codec 214 operates in two modes. It can be configured into either mode, by control signals or programming supplied by an applications processor 150 over an I2C bus or other component bus. In one mode, referred to as media player mode, the device 200 is operating as a digital media player (e.g., an MP3 player that is playing back a music file stored in the device 200). In that mode, the codec 214 applies analog to digital and digital to analog conversion to the analog acoustic transducer signals to generate corresponding digital signals. In this mode, the codec 214 supplies the digitized microphone signal to the applications processor 150, and converts a digital audio signal from the applications processor 150 into analog form and then applies it to the receiver and/or speaker for play back. - In another mode, referred to as call mode, the
device 200 is operating as a mobile telephony device (e.g., allowing its user to be in a real time audio conversation with another remote user during a cellular telephone call). In that mode the codec 214 acts as an analog pass through with no digital conversion, so that the analog acoustic transducer signals are passed through, with perhaps some analog amplification or buffering, between the baseband processor 52 and the acoustic transducers (signal line 152 outputs the microphone signal, while signal line 154 inputs the receiver or speaker signal). - The
baseband processor 52 includes an interface to receive signals from, and transmit signals to, a cellular network. The baseband processor, which may be a separate integrated circuit (IC) package, has an input port to receive a downlink signal, and an output port to transmit an uplink signal. These may be in a band around 26 MHz, for example, but alternatively they may be at other frequency bands that are considered intermediate (between baseband and RF at the antenna input). The uplink signal may be ready to be upconverted into a cellular network RF signal, such as a long range wireless communications signal that is directed to a cellular telephone network's base station, for example in a 3G or Universal Mobile Telecommunications System, UMTS, band, e.g. 850 MHz, 900 MHz, 1800 MHz, and 1900 MHz bands. Similarly, the downlink signal that is input to the baseband processor has been downconverted from such an RF band, down to intermediate frequencies, e.g. the 26 MHz band. - The downlink/uplink RF signal that is input/output from the baseband processor may be downconverted/upconverted into the antenna's radiation band, by a frequency downconverter/upconverter that is external to the baseband processor IC package (e.g., as part of the RF transceiver IC package 54). Thus, the signal at the input/output port of the baseband processor may be an intermediate frequency (IF) signal that is above the baseband frequencies but below the cellular network band frequencies (so called RF frequencies here). As an alternative, the RF up conversion and down conversion may be direct, from and to baseband, rather than going through an intermediate frequency.
- The baseband processor may perform known cellular baseband processing tasks including cellular protocol signaling, coding and decoding, and signaling with the external RF transceiver. These together with the RF processing in the external RF transceiver may be referred to as the radio section of the
device 200. The baseband processor 52 may be programmable, in accordance with software that has been encoded and stored in its associated non-volatile memory 154. Permission to access the cellular network may be granted to the near end user in accordance with a subscriber identity module, SIM, card that is installed in the mobile device 200 to connect with the SIM connector 258. - The
device 200 and the cellular network may be in agreement with respect to a particular voice coding (vocoding) scheme that is to be applied to the raw digital audio signal from the microphone (uplink signal) which is transmitted by the device 200. Similarly, an agreement is needed for the particular voice decode scheme which the device 200 should be applying to a downlink signal. Any known voice coding and decoding scheme that is suitable for the particular wireless communications protocol used may be adopted. The voice coding and decoding sections of the baseband processor may also be considered to be part of the radio section of the device 200. - The
device 200 may also have further wireless communications capability, to provide a global positioning system, GPS, service, a Bluetooth link, and a TCP/IP link to a wireless local area network. To this end, a Bluetooth transceiver 160 is provided together with a wireless local area network, WLAN, transceiver 164, which provide additional wireless communication channels for the device 200. These two channels may share an antenna 63 for short range wireless communications (e.g., in accordance with a Bluetooth protocol and/or a wireless local area network protocol). An RF diplexer 188 has a pair of RF ports that are coupled to the antenna 63. One of the RF ports is used for GPS services, which a GPS receiver integrated circuit 156 uses to obtain GPS data that allows the mobile device 200 to locate itself for its user. The other RF port of the diplexer 188 is coupled to an RF front end 172 that combines Bluetooth and WLAN RF signals. - The cellular network, GPS, Bluetooth, and WLAN services may be managed by programming the
applications processor 150 to communicate with the baseband processor 52, Bluetooth transceiver 160, and wireless transceiver 164 through separate, component buses. Although not shown, there may also be separate component buses connecting the baseband processor 52 to the Bluetooth transceiver 160 and WLAN transceiver 164, to enable the latter transceivers to take advantage of the audio processing engine available in the baseband processor 52, to, for example, conduct a wireless voice over IP call (using the WLAN transceiver 164) and to allow the near end user to conduct the call through a wireless headset (using the Bluetooth transceiver 160). - The so-called power hungry components of the
mobile device 200 may include the baseband processor 52, the applications processor 150, the touch screen 252, and the transmit RF power amplifiers that are part of the RF circuitry 54. These are coupled to be monitored by a power management unit 248. The power management unit 248 may monitor power consumption by individual components of the device 200 and may signal power management commands to one or more of the components as needed so as to conserve battery energy and control battery temperature. - Other lower level hardware and functionality of the
mobile device 200 include an on/off or reset button 250, a vibrator 274 used to indicate the ringing signal of an incoming call, an audio ringer, a physical menu button, and a volume up/down button (collectively referred to as circuit elements 272 which may be coupled to output pins of the processor 150 as shown). The mobile device 200 may also have a dock connector 230 that communicates with a USB port of the processor 150, allowing the device 200 to, for example, synchronize certain files of the user with corresponding files that are stored in a desktop or notebook personal computer of the same user. The dock connector 230 may also be used to connect with a power adapter or other electricity source for charging the battery (via the battery connector 108). - In a further embodiment, the
mobile device 200 may have digital camera circuitry and optics 264 that are coupled to the processor 150, enabling the mobile device to be used as a digital still or video camera. - Having described the lower level components of the
mobile device 200, a brief discussion of the higher level software functionality of the device is in order. As suggested above, the device 200 may be essentially considered to be a computer whose processor 150 executes boot code and an operating system (OS) stored in the memory 262 within the device. Running on top of the operating system are several application programs or modules that, when executed by the processor 150, manage at a high level the following example functions: placing or receiving a call (phone module); retrieving and displaying email messages (mail module); browsing the web (browser module); and digital media playback (iPod™ player module). Additional applications or widgets may be executed by the processor 150, such as those depicted in FIG. 1, including a clock function, SMS or text messaging service application, a weather widget, a calendar application, a street map navigation application, and a music download service application (the iTunes™ service). - The
device 200 has a digital audio signal processing structure between its radio section 302 (responsible for interfacing with the cellular phone network for example or a wireless local area network), and its baseband analog front end (BB AFE) 304, as depicted in FIG. 3. The AFE 304 interfaces with the acoustic transducers of the device 200, namely the microphone 216 and earpiece speaker 220, by performing the needed analog to digital, A/D, and digital to analog, D/A, conversion. A block of audio amplifiers and other analog audio signal conditioning circuitry 306 are part of the analog interface with the microphone 216 and speakers (see FIG. 2, block 272). - The digital signal processing structure includes a downlink signal processing chain or
downlink audio processor 308 that has an input coupled to the radio section 302 and an output feeding the AFE 304. The downlink audio processor 308 may perform several digital signal processing operations upon the decoded, digital or sampled voice signal or bit stream (also referred to as the downlink baseband or audio signal), in order to enhance the latter. These may include a combination of one or more of the following operations performed on the downlink audio signal: adjust its gain using a downlink programmable gain amplifier (PGA) 310; apply general filtering to it using a downlink programmable digital filter 312; perform multi-band audio compression upon it using a downlink multi-band compressor 314; and reduce noise using a noise suppressor 316. Not all of these operations are needed in the downlink chain; for example, the noise suppressor 316 may be omitted. The downlink audio signal received from the radio section 302 may be in the form of a pulse code modulated bit stream. - The
device 200 may also perform enhancement operations on an uplink audio signal. This is done using an uplink audio processor 318 that acts upon an uplink digital voice signal or bit stream received from the microphone 216 via the AFE 304 (prior to the voice coding of the signal for transmission). In the example of FIG. 3, the uplink audio processor 318 includes the following signal processing chain: an energy limiter 320; an uplink PGA 322; an uplink programmable digital filter 324; an echo and noise canceller 326 (including a noise suppressor block); and an uplink multi-band compressor 328. However, not all of these signal processing blocks may be needed in every instance. - The
downlink audio processor 308 and the uplink audio processor 318 may each be implemented as a separate programmed processor, or a separate combination of a programmed processor and dedicated hardwired logic. Alternatively, the functions of the downlink and uplink chains may be performed by the combination of a single, programmable processor, e.g. such as one that is available in the baseband processor 52 (see FIG. 2). In either case, there is said to be a processor that executes stored or encoded program instructions (e.g., that are stored or encoded in external memory 154 of the baseband processor, see FIG. 2) to enhance a voice signal that is being passed through it for purposes of both certification testing and normal end-user scenarios, as follows.
- In the downlink portion of GSM certification testing of cellular phone handsets, a multi-sine certification signal, which is an artificial speech signal that lasts about twenty seconds, is sent over the air to the
device 200, becoming in effect the downlink audio signal. The characteristics of this cert signal including its duration and spectral content may be in accordance with an industry standard, e.g. the International Telecommunication Union Telecommunication Standardization Sector's ITU-T Recommendation P.50. To test the device 200, a receiver (earpiece speaker) volume setting of the device is placed in a "nominal" setting, which is one that results in a required, received loudness rating, RLR (measured typically in dB), from the device's receiver or earpiece speaker, while the received cert signal is being emitted from it. As an example, the nominal RLR may be 2 dB. The nominal volume setting may be about the midpoint of the full volume range (between minimum and maximum) available from the device 200. Once the volume has been set to this nominal setting, the cert signal is sent to the device 200 and is immediately emitted by the device as sound from its receiver (earpiece speaker). The measured sound output of the receiver, for the cert signal, should fall within or meet a defined mask or envelope, over the entire required frequency range of the mask and for a given duration of the signal. Such a certification process may be one that is governed by the PCS Type Certification Review Board, PTCRB. - For the uplink portion of the certification test, the artificial speech signal is played to the microphone of the
device 200. The signal is processed by the uplink audio processor 318 and then transmitted over the air, e.g. during a wireless call, to a testing destination. There, the signal is converted to sound, picked up by a test microphone, and then evaluated for compliance with a specified Send Loudness Rating, SLR (by further test equipment). - The following problem might occur during certification testing of devices that have a certain type of noise suppressor in either the downlink or uplink chain of the digital audio DSP structure. The noise suppressor generally serves to reduce the level of noise in the voice signal being passed through it, to improve the quality of the sound eventually heard by the user through a receiver or speaker. It operates by digitally attenuating an input voice signal in accordance with a known noise detection algorithm. In other words, when the algorithm "sees" what is likely to be a time interval of just noise, in the input bit stream, the samples of the bit stream in that time interval are attenuated. In some cases, however, the noise suppressor may become "confused" by the certification signal that is applied to the
device 200 during, for example, GSM certification testing. In that case, the typical noise suppressor would attenuate essentially the entirety of the cert signal, thinking it is noise, thereby causing the certification test to fail when the sound output drops.
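The failure mode just described can be sketched as follows. This is an invented illustration (the attenuation factor, frame layout, and detector are placeholders, not the patent's algorithm): a suppressor that attenuates every frame its detector flags as noise will pass essentially none of a cert signal that the detector misclassifies.

```python
# Sketch of the certification failure mode: a frame-based noise suppressor
# whose detector mistakes the steady multi-sine cert signal for noise ends
# up attenuating essentially the whole signal.

ATTEN = 0.1  # example attenuation factor applied to frames flagged as noise

def suppress(frames, is_noise):
    """Attenuate each frame for which the detector flags noise."""
    return [[s * ATTEN for s in f] if is_noise(f) else f for f in frames]

# A cert-like signal: every frame identical, with no speech pauses. A detector
# keyed to speech-like variation may then flag every frame as noise.
cert_frames = [[0.5, -0.5]] * 10
attenuated = suppress(cert_frames, lambda f: True)  # "confused" detector
```

With every frame attenuated by the example factor, the emitted sound level drops far below the certification mask, which is exactly the failure the variable delay described below avoids.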
- In accordance with an embodiment of the invention, referring now to
FIG. 4, a variable delay noise suppressor 416 can be inserted into the uplink audio processor 318, the downlink audio processor 308, or both. This delay is an interval of time (or number of samples in the input or output bit stream) that the noise suppressor 416 will essentially wait or skip, before starting attenuation (in accordance with its noise suppression algorithm). The delay is variable in that it will depend on the current volume setting for the receiver of the device 200. -
FIG. 4 depicts an instance where the noise suppressor 416 is in the uplink chain. The uplink portion of a certification test scenario is depicted, where an artificial speech signal is played to the microphone of the device 200. The signal is processed by the uplink audio processor 318 and then transmitted over the air, e.g. during a wireless call, to a testing destination. There, the signal is converted to sound, picked up by a test microphone, and then evaluated for compliance with a specified SLR (by further test equipment). As an alternative, the noise suppressor 416 could be in the downlink chain, in which case the relevant portion of the certification test scenario would be a downlink portion that evaluates sound output from the receiver of the device 200 (for compliance with a specified RLR). - As seen in
FIG. 4, a "nominal volume setting" is one that is used in a certification test scenario. In many instances, it may be about half way in the full range of volume, but in other instances may be a little higher or lower depending on the characteristics of the receiver and the RLR required by the certification test. At this setting, the noise suppressor's delay parameter is automatically set, by a decoder 404, to indicate a "long" delay, i.e. about the same as or longer than the duration of the expected cert signal. The effect of this adjustable delay is depicted in the sample sequence shown, as a number of samples (interval of time) beginning at the start of the detected noise interval (when attenuation would normally begin). So configured, the noise suppressor 416 waits this delay time before it reacts by attenuating its output signal. This allows the cert test signal to pass through the noise suppressor block unattenuated, thereby not causing the device 200 to fail the certification test. - In other words, the variable
delay noise suppressor 416, which may be viewed as a type of digital filter, may pass its input samples through unattenuated, until it detects noise in the input samples. The noise suppressor 416 will then attenuate its output signal to suppress the detected noise. However, it delays attenuating the output signal, from some reference point in time or in the output sequence, i.e. continues to pass its input samples unattenuated, until after the set delay has elapsed. At a nominal volume setting, this delay is set based on the duration of the expected multi-sine cert signal to which the device 200 will be subjected for certification testing. - However, at a substantially higher volume setting, which is normally used by customers or end users of the
device 200 in a typical mobile phone usage environment, the delay parameter is set to indicate a substantially shorter delay (e.g., 3 seconds or less). In other words, the onset of attenuation in the output samples of the noise suppressor 416 is much sooner. Thus, during normal end-user use of the device 200, where the volume setting is set much higher than the "nominal" setting, the noise suppressor 416 works in accordance with a much shorter delay than when the volume is at the nominal setting. - Viewed another way, the
device 200 responds automatically to a lower volume setting, e.g. set manually by its user, by configuring its noise suppressor's delay (of the onset of attenuation) to be long, and to a higher volume setting by configuring the delay to be short.FIG. 5 illustrates how this capability solves the certification test problem introduced above, using an example 15 second long cert signal, a short delay of 2 seconds, and a long delay of 18 seconds. If the noise suppressor delay is too short, then much of the cert signal is attenuated during the certification test (thereby causing thedevice 200 to fail the test). This problem is avoided by making the noise suppressor delay long enough so that a sufficient portion of the cert signal passes through unattenuated. The delay is adjusted based on the current volume setting of thedevice 200. - To conclude, various aspects of a technique for giving a user of a communications device more convenient control of sound quality have been described. As explained above, an embodiment of the invention may be a machine-readable medium having stored or encoded thereon instructions which program a processor to perform some of the operations described above. In other embodiments, some of these operations might be performed by specific hardware components that contain hardwired logic. Those operations might alternatively be performed by any combination of programmed data processing components and fixed hardware circuit components.
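The volume-to-delay mapping and the delayed onset of attenuation described above can be sketched in a few lines. This is an illustrative reconstruction, not code from the patent: the names (`decode_delay`, `NoiseSuppressor`), the mid-range volume threshold, and the 10x attenuation factor are assumptions, while the 2-second and 18-second delays are taken from the FIG. 5 example.

```python
# Hypothetical sketch of the variable-delay noise suppressor (416) and
# decoder (404) behavior described in the text. Thresholds and the
# attenuation factor are illustrative assumptions.

NOMINAL_VOLUME = 0.5   # roughly mid-range, per the description
LONG_DELAY_S = 18.0    # longer than the ~15 s cert signal (FIG. 5 example)
SHORT_DELAY_S = 2.0    # used at the higher, end-user volume settings

def decode_delay(volume_setting: float) -> float:
    """Map the current volume setting to a delay (in seconds) before
    attenuation onset, as the decoder is described to do."""
    return LONG_DELAY_S if volume_setting <= NOMINAL_VOLUME else SHORT_DELAY_S

class NoiseSuppressor:
    def __init__(self, sample_rate: int, volume_setting: float):
        self.delay_samples = int(decode_delay(volume_setting) * sample_rate)
        self.noise_count = 0  # samples elapsed since noise was detected

    def process(self, sample: float, is_noise: bool) -> float:
        if not is_noise:
            self.noise_count = 0
            return sample                  # pass through unattenuated
        self.noise_count += 1
        if self.noise_count <= self.delay_samples:
            return sample                  # still within the delay window
        return sample * 0.1                # attenuate after the delay elapses
```

With these numbers, a 15 second cert signal that the detector misclassifies as noise passes entirely unattenuated at the nominal volume setting (the 18 second delay never elapses), while at a higher volume setting attenuation begins after only 2 seconds.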
- A machine-readable medium may include any mechanism for storing or transferring information in a form readable by a machine (e.g., a computer), such as Compact Disc Read-Only Memory (CD-ROM), Read-Only Memory (ROM), Random Access Memory (RAM), and Erasable Programmable Read-Only Memory (EPROM).
- The invention is not limited to the specific embodiments described above. For example, the numerical values given in
FIG. 5 for the duration of the cert signal and the adjustable delays may be different, depending on the type of cert signal and the type of noise suppressor used. Accordingly, other embodiments are within the scope of the claims.
Claims (17)
1. A handheld wireless communications device having an adjustable volume setting, comprising:
an uplink audio processor; and
a downlink audio processor, at least one of the uplink and downlink audio processors has a noise suppressor, the noise suppressor to attenuate its output signal in accordance with a delay parameter, the delay parameter controls how much the onset of said attenuation is delayed,
wherein the delay parameter is automatically set to indicate a short delay when the communications device is at a higher volume setting, and a long delay when the device is at a lower volume setting.
2. The handheld wireless communications device of claim 1 wherein the long delay is at least as long as an artificial voice signal defined by a handheld wireless communications device certification process.
3. The handheld wireless communications device of claim 2 wherein the long delay is at least as long as an ITU-T Recommendation P.50 certification signal.
4. The handheld wireless communications device of claim 2 wherein the short delay is no more than three seconds.
5. The handheld wireless communications device of claim 2 wherein the lower volume setting is one that results in a receive loudness rating, RLR, defined by the certification process for the artificial voice signal.
6. A method for operating a communications device having an adjustable volume setting, comprising:
decoding the adjustable volume setting into a delay parameter;
detecting noise in an audio signal of the device; and
attenuating the audio signal based on the detected noise to suppress said noise, wherein the onset of said attenuation is delayed in accordance with the delay parameter.
7. The method of claim 6 wherein the delay parameter indicates a short delay when the adjustable volume setting is high, and a long delay when the adjustable volume setting is low.
8. The method of claim 7 wherein the audio signal is an artificial voice signal defined by a communications device certification process, and wherein the long delay is at least as long as said artificial voice signal.
9. The method of claim 8 wherein the long delay is at least as long as an ITU-T P.50 certification signal.
10. The method of claim 7 wherein the short delay is no more than three seconds.
11. The method of claim 7 wherein when the adjustable volume setting is low, the device outputs a receive loudness rating, RLR, defined by a communications device certification process.
12. A communications device comprising:
an audio processor having a noise suppressor to provide an output data stream that is attenuated relative to an input data stream, wherein the onset of said attenuation is to be delayed in accordance with a delay parameter; and
a decoder to provide the delay parameter, based on having decoded an adjustable volume setting for the device.
13. The communications device of claim 12 wherein the decoder is to set the delay parameter to indicate a short delay when the adjustable volume setting is high, and a long delay when the adjustable volume setting is low.
14. The communications device of claim 13 wherein the long delay is at least as long as an artificial voice signal defined by a communications device certification process.
15. The communications device of claim 13 wherein the long delay is at least as long as an ITU-T Recommendation P.50 certification signal.
16. The communications device of claim 13 wherein the short delay is no more than three seconds.
17. The communications device of claim 13 further comprising a speaker coupled to an output of the audio processor, and wherein when the adjustable volume setting is high, the speaker outputs a receive loudness rating, RLR, defined by a communications device certification process.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/330,339 US20090253457A1 (en) | 2008-04-04 | 2008-12-08 | Audio signal processing for certification enhancement in a handheld wireless communications device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US4262208P | 2008-04-04 | 2008-04-04 | |
US12/330,339 US20090253457A1 (en) | 2008-04-04 | 2008-12-08 | Audio signal processing for certification enhancement in a handheld wireless communications device |
Publications (1)
Publication Number | Publication Date |
---|---|
US20090253457A1 true US20090253457A1 (en) | 2009-10-08 |
Family
ID=41133312
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/330,339 Abandoned US20090253457A1 (en) | 2008-04-04 | 2008-12-08 | Audio signal processing for certification enhancement in a handheld wireless communications device |
US12/357,119 Expired - Fee Related US8111842B2 (en) | 2008-04-04 | 2009-01-21 | Filter adaptation based on volume setting for certification enhancement in a handheld wireless communications device |
US12/357,312 Active 2031-05-01 US8781820B2 (en) | 2008-04-04 | 2009-01-21 | Multi band audio compressor dynamic level adjust in a communications device |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/357,119 Expired - Fee Related US8111842B2 (en) | 2008-04-04 | 2009-01-21 | Filter adaptation based on volume setting for certification enhancement in a handheld wireless communications device |
US12/357,312 Active 2031-05-01 US8781820B2 (en) | 2008-04-04 | 2009-01-21 | Multi band audio compressor dynamic level adjust in a communications device |
Country Status (1)
Country | Link |
---|---|
US (3) | US20090253457A1 (en) |
Cited By (178)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080025276A1 (en) * | 2006-07-27 | 2008-01-31 | Samsung Electronics Co., Ltd. | Wireless communication device for receiving mobile broadcasting signal and transmitting/receiving bluetooth signal with single antenna |
US20120152990A1 (en) * | 2010-12-15 | 2012-06-21 | Kulas Charles J | Thigh-mounted device holder |
US20130201275A1 (en) * | 2010-10-22 | 2013-08-08 | Huizhou Tcl Mobile Communication Co., Ltd. | Method for implementing video call with bluetooth-based headset and video communication terminal for the same |
US8613674B2 (en) | 2010-10-16 | 2013-12-24 | James Charles Vago | Methods, devices, and systems for video gaming |
US8639516B2 (en) | 2010-06-04 | 2014-01-28 | Apple Inc. | User-specific noise suppression for voice quality improvements |
US8892446B2 (en) | 2010-01-18 | 2014-11-18 | Apple Inc. | Service orchestration for intelligent automated assistant |
US20150156329A1 (en) * | 2013-11-30 | 2015-06-04 | Fu Tai Hua Industry (Shenzhen) Co., Ltd. | Communications device, volume adjusting system and method |
US9190062B2 (en) | 2010-02-25 | 2015-11-17 | Apple Inc. | User profiling for voice input processing |
US9262612B2 (en) | 2011-03-21 | 2016-02-16 | Apple Inc. | Device access using voice authentication |
US9300784B2 (en) | 2013-06-13 | 2016-03-29 | Apple Inc. | System and method for emergency calls initiated by voice command |
US9330720B2 (en) | 2008-01-03 | 2016-05-03 | Apple Inc. | Methods and apparatus for altering audio output signals |
US9338493B2 (en) | 2014-06-30 | 2016-05-10 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US9368114B2 (en) | 2013-03-14 | 2016-06-14 | Apple Inc. | Context-sensitive handling of interruptions |
US9430463B2 (en) | 2014-05-30 | 2016-08-30 | Apple Inc. | Exemplar-based natural language processing |
US9483461B2 (en) | 2012-03-06 | 2016-11-01 | Apple Inc. | Handling speech synthesis of content for multiple languages |
US9495129B2 (en) | 2012-06-29 | 2016-11-15 | Apple Inc. | Device, method, and user interface for voice-activated navigation and browsing of a document |
US9502031B2 (en) | 2014-05-27 | 2016-11-22 | Apple Inc. | Method for supporting dynamic grammars in WFST-based ASR |
US9535906B2 (en) | 2008-07-31 | 2017-01-03 | Apple Inc. | Mobile device having human language translation capability with positional feedback |
US9576574B2 (en) | 2012-09-10 | 2017-02-21 | Apple Inc. | Context-sensitive handling of interruptions by intelligent digital assistant |
US9582608B2 (en) | 2013-06-07 | 2017-02-28 | Apple Inc. | Unified ranking with entropy-weighted information for phrase-based semantic auto-completion |
US9620104B2 (en) | 2013-06-07 | 2017-04-11 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US9620105B2 (en) | 2014-05-15 | 2017-04-11 | Apple Inc. | Analyzing audio input for efficient speech and music recognition |
US9626955B2 (en) | 2008-04-05 | 2017-04-18 | Apple Inc. | Intelligent text-to-speech conversion |
US9633004B2 (en) | 2014-05-30 | 2017-04-25 | Apple Inc. | Better resolution when referencing to concepts |
US9633674B2 (en) | 2013-06-07 | 2017-04-25 | Apple Inc. | System and method for detecting errors in interactions with a voice-based digital assistant |
US9646609B2 (en) | 2014-09-30 | 2017-05-09 | Apple Inc. | Caching apparatus for serving phonetic pronunciations |
US9646614B2 (en) | 2000-03-16 | 2017-05-09 | Apple Inc. | Fast, language-independent method for user authentication by voice |
US9668121B2 (en) | 2014-09-30 | 2017-05-30 | Apple Inc. | Social reminders |
US9697820B2 (en) | 2015-09-24 | 2017-07-04 | Apple Inc. | Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks |
US9697822B1 (en) | 2013-03-15 | 2017-07-04 | Apple Inc. | System and method for updating an adaptive speech recognition model |
US9711141B2 (en) | 2014-12-09 | 2017-07-18 | Apple Inc. | Disambiguating heteronyms in speech synthesis |
US9715875B2 (en) | 2014-05-30 | 2017-07-25 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US9721566B2 (en) | 2015-03-08 | 2017-08-01 | Apple Inc. | Competing devices responding to voice triggers |
US9734193B2 (en) | 2014-05-30 | 2017-08-15 | Apple Inc. | Determining domain salience ranking from ambiguous words in natural speech |
US9760559B2 (en) | 2014-05-30 | 2017-09-12 | Apple Inc. | Predictive text input |
US9785630B2 (en) | 2014-05-30 | 2017-10-10 | Apple Inc. | Text prediction using combined word N-gram and unigram language models |
US9798393B2 (en) | 2011-08-29 | 2017-10-24 | Apple Inc. | Text correction processing |
US9818400B2 (en) | 2014-09-11 | 2017-11-14 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US9842105B2 (en) | 2015-04-16 | 2017-12-12 | Apple Inc. | Parsimonious continuous-space phrase representations for natural language processing |
US9842101B2 (en) | 2014-05-30 | 2017-12-12 | Apple Inc. | Predictive conversion of language input |
US9858925B2 (en) | 2009-06-05 | 2018-01-02 | Apple Inc. | Using context information to facilitate processing of commands in a virtual assistant |
US9865280B2 (en) | 2015-03-06 | 2018-01-09 | Apple Inc. | Structured dictation using intelligent automated assistants |
US9886953B2 (en) | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation |
US9886432B2 (en) | 2014-09-30 | 2018-02-06 | Apple Inc. | Parsimonious handling of word inflection via categorical stem + suffix N-gram language models |
US9899019B2 (en) | 2015-03-18 | 2018-02-20 | Apple Inc. | Systems and methods for structured stem and suffix language models |
US9922642B2 (en) | 2013-03-15 | 2018-03-20 | Apple Inc. | Training an at least partial voice command system |
US9934775B2 (en) | 2016-05-26 | 2018-04-03 | Apple Inc. | Unit-selection text-to-speech synthesis based on predicted concatenation parameters |
US9953088B2 (en) | 2012-05-14 | 2018-04-24 | Apple Inc. | Crowd sourcing information to fulfill user requests |
US9959870B2 (en) | 2008-12-11 | 2018-05-01 | Apple Inc. | Speech recognition involving a mobile device |
US9966065B2 (en) | 2014-05-30 | 2018-05-08 | Apple Inc. | Multi-command single utterance input method |
US9966068B2 (en) | 2013-06-08 | 2018-05-08 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US9972304B2 (en) | 2016-06-03 | 2018-05-15 | Apple Inc. | Privacy preserving distributed evaluation framework for embedded personalized systems |
US9971774B2 (en) | 2012-09-19 | 2018-05-15 | Apple Inc. | Voice-based media searching |
US10008212B2 (en) * | 2009-04-17 | 2018-06-26 | The Nielsen Company (Us), Llc | System and method for utilizing audio encoding for measuring media exposure with environmental masking |
US10043516B2 (en) | 2016-09-23 | 2018-08-07 | Apple Inc. | Intelligent automated assistant |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple, Inc. | Intelligent automated assistant for media exploration |
US10049668B2 (en) | 2015-12-02 | 2018-08-14 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10057736B2 (en) | 2011-06-03 | 2018-08-21 | Apple Inc. | Active transport based notifications |
US10067938B2 (en) | 2016-06-10 | 2018-09-04 | Apple Inc. | Multilingual word prediction |
US10074360B2 (en) | 2014-09-30 | 2018-09-11 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US10078631B2 (en) | 2014-05-30 | 2018-09-18 | Apple Inc. | Entropy-guided text prediction using combined word and character n-gram language models |
US10079014B2 (en) | 2012-06-08 | 2018-09-18 | Apple Inc. | Name recognition system |
US10083688B2 (en) | 2015-05-27 | 2018-09-25 | Apple Inc. | Device voice control for selecting a displayed affordance |
US10089072B2 (en) | 2016-06-11 | 2018-10-02 | Apple Inc. | Intelligent device arbitration and control |
US10101822B2 (en) | 2015-06-05 | 2018-10-16 | Apple Inc. | Language input correction |
US10127911B2 (en) | 2014-09-30 | 2018-11-13 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US10127220B2 (en) | 2015-06-04 | 2018-11-13 | Apple Inc. | Language identification from short strings |
US10134385B2 (en) | 2012-03-02 | 2018-11-20 | Apple Inc. | Systems and methods for name pronunciation |
US10170123B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Intelligent assistant for home automation |
US10176167B2 (en) | 2013-06-09 | 2019-01-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
US10185542B2 (en) | 2013-06-09 | 2019-01-22 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
US10186254B2 (en) | 2015-06-07 | 2019-01-22 | Apple Inc. | Context-based endpoint detection |
US10192552B2 (en) | 2016-06-10 | 2019-01-29 | Apple Inc. | Digital assistant providing whispered speech |
US10199051B2 (en) | 2013-02-07 | 2019-02-05 | Apple Inc. | Voice trigger for a digital assistant |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US10241644B2 (en) | 2011-06-03 | 2019-03-26 | Apple Inc. | Actionable reminder entries |
US10241752B2 (en) | 2011-09-30 | 2019-03-26 | Apple Inc. | Interface for a virtual digital assistant |
US10249300B2 (en) | 2016-06-06 | 2019-04-02 | Apple Inc. | Intelligent list reading |
US10255907B2 (en) | 2015-06-07 | 2019-04-09 | Apple Inc. | Automatic accent detection using acoustic models |
US10269345B2 (en) | 2016-06-11 | 2019-04-23 | Apple Inc. | Intelligent task discovery |
US10276170B2 (en) | 2010-01-18 | 2019-04-30 | Apple Inc. | Intelligent automated assistant |
US10283110B2 (en) | 2009-07-02 | 2019-05-07 | Apple Inc. | Methods and apparatuses for automatic speech recognition |
US10289433B2 (en) | 2014-05-30 | 2019-05-14 | Apple Inc. | Domain specific language for encoding assistant dialog |
US10297253B2 (en) | 2016-06-11 | 2019-05-21 | Apple Inc. | Application integration with a digital assistant |
US10303715B2 (en) | 2017-05-16 | 2019-05-28 | Apple Inc. | Intelligent automated assistant for media exploration |
US10311144B2 (en) | 2017-05-16 | 2019-06-04 | Apple Inc. | Emoji word sense disambiguation |
US10318871B2 (en) | 2005-09-08 | 2019-06-11 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US10332518B2 (en) | 2017-05-09 | 2019-06-25 | Apple Inc. | User interface for correcting recognition errors |
US10354011B2 (en) | 2016-06-09 | 2019-07-16 | Apple Inc. | Intelligent automated assistant in a home environment |
US10356243B2 (en) | 2015-06-05 | 2019-07-16 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
US10366158B2 (en) | 2015-09-29 | 2019-07-30 | Apple Inc. | Efficient word encoding for recurrent neural network language models |
US10395654B2 (en) | 2017-05-11 | 2019-08-27 | Apple Inc. | Text normalization based on a data-driven learning network |
US10403283B1 (en) | 2018-06-01 | 2019-09-03 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US10403278B2 (en) | 2017-05-16 | 2019-09-03 | Apple Inc. | Methods and systems for phonetic matching in digital assistant services |
US10410637B2 (en) | 2017-05-12 | 2019-09-10 | Apple Inc. | User-specific acoustic models |
US10417266B2 (en) | 2017-05-09 | 2019-09-17 | Apple Inc. | Context-aware ranking of intelligent response suggestions |
US10446143B2 (en) | 2016-03-14 | 2019-10-15 | Apple Inc. | Identification of voice inputs providing credentials |
US10446141B2 (en) | 2014-08-28 | 2019-10-15 | Apple Inc. | Automatic speech recognition based on user feedback |
US10445429B2 (en) | 2017-09-21 | 2019-10-15 | Apple Inc. | Natural language understanding using vocabularies with compressed serialized tries |
US10474753B2 (en) | 2016-09-07 | 2019-11-12 | Apple Inc. | Language identification using recurrent neural networks |
US10482874B2 (en) | 2017-05-15 | 2019-11-19 | Apple Inc. | Hierarchical belief states for digital assistants |
US10490187B2 (en) | 2016-06-10 | 2019-11-26 | Apple Inc. | Digital assistant providing automated status report |
US10496705B1 (en) | 2018-06-03 | 2019-12-03 | Apple Inc. | Accelerated task performance |
US10496753B2 (en) | 2010-01-18 | 2019-12-03 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US10509862B2 (en) | 2016-06-10 | 2019-12-17 | Apple Inc. | Dynamic phrase expansion of language input |
US10521466B2 (en) | 2016-06-11 | 2019-12-31 | Apple Inc. | Data driven natural language event detection and classification |
US10553209B2 (en) | 2010-01-18 | 2020-02-04 | Apple Inc. | Systems and methods for hands-free notification summaries |
US10552013B2 (en) | 2014-12-02 | 2020-02-04 | Apple Inc. | Data detection |
US10568032B2 (en) | 2007-04-03 | 2020-02-18 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
US10567477B2 (en) | 2015-03-08 | 2020-02-18 | Apple Inc. | Virtual assistant continuity |
US10593346B2 (en) | 2016-12-22 | 2020-03-17 | Apple Inc. | Rank-reduced token representation for automatic speech recognition |
US10592604B2 (en) | 2018-03-12 | 2020-03-17 | Apple Inc. | Inverse text normalization for automatic speech recognition |
US10592095B2 (en) | 2014-05-23 | 2020-03-17 | Apple Inc. | Instantaneous speaking of content on touch devices |
US10636424B2 (en) | 2017-11-30 | 2020-04-28 | Apple Inc. | Multi-turn canned dialog |
US10643611B2 (en) | 2008-10-02 | 2020-05-05 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US10652394B2 (en) | 2013-03-14 | 2020-05-12 | Apple Inc. | System and method for processing voicemail |
US10657328B2 (en) | 2017-06-02 | 2020-05-19 | Apple Inc. | Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling |
US10659851B2 (en) | 2014-06-30 | 2020-05-19 | Apple Inc. | Real-time digital assistant knowledge updates |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US10679605B2 (en) | 2010-01-18 | 2020-06-09 | Apple Inc. | Hands-free list-reading by intelligent automated assistant |
US10684703B2 (en) | 2018-06-01 | 2020-06-16 | Apple Inc. | Attention aware virtual assistant dismissal |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10706373B2 (en) | 2011-06-03 | 2020-07-07 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US10705794B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US10726832B2 (en) | 2017-05-11 | 2020-07-28 | Apple Inc. | Maintaining privacy of personal information |
US10733982B2 (en) | 2018-01-08 | 2020-08-04 | Apple Inc. | Multi-directional dialog |
US10733375B2 (en) | 2018-01-31 | 2020-08-04 | Apple Inc. | Knowledge-based framework for improving natural language understanding |
US10733993B2 (en) | 2016-06-10 | 2020-08-04 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10748546B2 (en) | 2017-05-16 | 2020-08-18 | Apple Inc. | Digital assistant services based on device capabilities |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US10755051B2 (en) | 2017-09-29 | 2020-08-25 | Apple Inc. | Rule-based natural language processing |
US10755703B2 (en) | 2017-05-11 | 2020-08-25 | Apple Inc. | Offline personal assistant |
US10762293B2 (en) | 2010-12-22 | 2020-09-01 | Apple Inc. | Using parts-of-speech tagging and named entity recognition for spelling correction |
US10789041B2 (en) | 2014-09-12 | 2020-09-29 | Apple Inc. | Dynamic thresholds for always listening speech trigger |
US10791176B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US10789959B2 (en) | 2018-03-02 | 2020-09-29 | Apple Inc. | Training speaker recognition models for digital assistants |
US10789945B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Low-latency intelligent automated assistant |
US10791216B2 (en) | 2013-08-06 | 2020-09-29 | Apple Inc. | Auto-activating smart responses based on activities from remote devices |
US10810274B2 (en) | 2017-05-15 | 2020-10-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
US10818288B2 (en) | 2018-03-26 | 2020-10-27 | Apple Inc. | Natural assistant interaction |
US10839159B2 (en) | 2018-09-28 | 2020-11-17 | Apple Inc. | Named entity normalization in a spoken dialog system |
US10892996B2 (en) | 2018-06-01 | 2021-01-12 | Apple Inc. | Variable latency device coordination |
US10909331B2 (en) | 2018-03-30 | 2021-02-02 | Apple Inc. | Implicit identification of translation payload with neural machine translation |
US10928918B2 (en) | 2018-05-07 | 2021-02-23 | Apple Inc. | Raise to speak |
US10984780B2 (en) | 2018-05-21 | 2021-04-20 | Apple Inc. | Global semantic word embeddings using bi-directional recurrent neural networks |
US11010550B2 (en) | 2015-09-29 | 2021-05-18 | Apple Inc. | Unified language modeling framework for word prediction, auto-completion and auto-correction |
US11010127B2 (en) | 2015-06-29 | 2021-05-18 | Apple Inc. | Virtual assistant for media playback |
US11010561B2 (en) | 2018-09-27 | 2021-05-18 | Apple Inc. | Sentiment prediction from textual data |
US11023513B2 (en) | 2007-12-20 | 2021-06-01 | Apple Inc. | Method and apparatus for searching using an active ontology |
US11025565B2 (en) | 2015-06-07 | 2021-06-01 | Apple Inc. | Personalized prediction of responses for instant messaging |
US11140099B2 (en) | 2019-05-21 | 2021-10-05 | Apple Inc. | Providing message response suggestions |
US11145294B2 (en) | 2018-05-07 | 2021-10-12 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
US11170166B2 (en) | 2018-09-28 | 2021-11-09 | Apple Inc. | Neural typographical error modeling via generative adversarial networks |
US11204787B2 (en) | 2017-01-09 | 2021-12-21 | Apple Inc. | Application integration with a digital assistant |
US11217251B2 (en) | 2019-05-06 | 2022-01-04 | Apple Inc. | Spoken notifications |
US11227589B2 (en) | 2016-06-06 | 2022-01-18 | Apple Inc. | Intelligent list reading |
US11231904B2 (en) | 2015-03-06 | 2022-01-25 | Apple Inc. | Reducing response latency of intelligent automated assistants |
US11237797B2 (en) | 2019-05-31 | 2022-02-01 | Apple Inc. | User activity shortcut suggestions |
US11269678B2 (en) | 2012-05-15 | 2022-03-08 | Apple Inc. | Systems and methods for integrating third party services with a digital assistant |
US11281993B2 (en) | 2016-12-05 | 2022-03-22 | Apple Inc. | Model and ensemble compression for metric learning |
US11289073B2 (en) | 2019-05-31 | 2022-03-29 | Apple Inc. | Device text to speech |
US11301477B2 (en) | 2017-05-12 | 2022-04-12 | Apple Inc. | Feedback analysis of a digital assistant |
US11307752B2 (en) | 2019-05-06 | 2022-04-19 | Apple Inc. | User configurable task triggers |
US11314370B2 (en) | 2013-12-06 | 2022-04-26 | Apple Inc. | Method for extracting salient dialog usage from live data |
US11348573B2 (en) | 2019-03-18 | 2022-05-31 | Apple Inc. | Multimodality in digital assistant systems |
US11360641B2 (en) | 2019-06-01 | 2022-06-14 | Apple Inc. | Increasing the relevance of new available information |
US11386266B2 (en) | 2018-06-01 | 2022-07-12 | Apple Inc. | Text correction |
US11423908B2 (en) | 2019-05-06 | 2022-08-23 | Apple Inc. | Interpreting spoken requests |
US11462215B2 (en) | 2018-09-28 | 2022-10-04 | Apple Inc. | Multi-modal inputs for voice commands |
US11468282B2 (en) | 2015-05-15 | 2022-10-11 | Apple Inc. | Virtual assistant in a communication session |
US11475884B2 (en) | 2019-05-06 | 2022-10-18 | Apple Inc. | Reducing digital assistant latency when a language is incorrectly determined |
US11475898B2 (en) | 2018-10-26 | 2022-10-18 | Apple Inc. | Low-latency multi-speaker speech recognition |
US11488406B2 (en) | 2019-09-25 | 2022-11-01 | Apple Inc. | Text detection using global geometry estimators |
US11496600B2 (en) | 2019-05-31 | 2022-11-08 | Apple Inc. | Remote execution of machine-learned models |
US11495218B2 (en) | 2018-06-01 | 2022-11-08 | Apple Inc. | Virtual assistant operation in multi-device environments |
US11587559B2 (en) | 2015-09-30 | 2023-02-21 | Apple Inc. | Intelligent device identification |
US11638059B2 (en) | 2019-01-04 | 2023-04-25 | Apple Inc. | Content playback on multiple devices |
US11928604B2 (en) | 2019-04-09 | 2024-03-12 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
Families Citing this family (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7574010B2 (en) * | 2004-05-28 | 2009-08-11 | Research In Motion Limited | System and method for adjusting an audio signal |
CN101048935B (en) | 2004-10-26 | 2011-03-23 | 杜比实验室特许公司 | Method and device for controlling the perceived loudness and/or the perceived spectral balance of an audio signal |
US8189429B2 (en) * | 2008-09-30 | 2012-05-29 | Apple Inc. | Microphone proximity detection |
JP2010244602A (en) * | 2009-04-03 | 2010-10-28 | Sony Corp | Signal processing device, method, and program |
US20100318353A1 (en) * | 2009-06-16 | 2010-12-16 | Bizjak Karl M | Compressor augmented array processing |
US9154596B2 (en) * | 2009-07-24 | 2015-10-06 | Broadcom Corporation | Method and system for audio system volume control |
TWI447709B (en) | 2010-02-11 | 2014-08-01 | Dolby Lab Licensing Corp | System and method for non-destructively normalizing loudness of audio signals within portable devices |
GB201005454D0 (en) | 2010-03-31 | 2010-05-19 | Skype Ltd | Television apparatus |
GB201005386D0 (en) * | 2010-03-31 | 2010-05-12 | Skype Ltd | Communication using a user terminal |
US8963982B2 (en) | 2010-12-31 | 2015-02-24 | Skype | Communication system and method |
KR101247652B1 (en) * | 2011-08-30 | 2013-04-01 | 광주과학기술원 | Apparatus and method for eliminating noise |
US8509858B2 (en) | 2011-10-12 | 2013-08-13 | Bose Corporation | Source dependent wireless earpiece equalizing |
KR101156667B1 (en) | 2011-12-06 | 2012-06-14 | 주식회사 에이디알에프코리아 | Method for setting filter coefficient in communication system |
US9019336B2 (en) | 2011-12-30 | 2015-04-28 | Skype | Making calls using an additional terminal |
CN103325380B (en) | 2012-03-23 | 2017-09-12 | 杜比实验室特许公司 | Gain for signal enhancing is post-processed |
US10844689B1 (en) | 2019-12-19 | 2020-11-24 | Saudi Arabian Oil Company | Downhole ultrasonic actuator system for mitigating lost circulation |
CN112185397A (en) | 2012-05-18 | 2021-01-05 | 杜比实验室特许公司 | System for maintaining reversible dynamic range control information associated with a parametric audio encoder |
US9173086B2 (en) * | 2012-07-17 | 2015-10-27 | Samsung Electronics Co., Ltd. | Method and apparatus for preventing screen off during automatic response system service in electronic device |
KR101910509B1 (en) | 2012-07-17 | 2018-10-22 | 삼성전자주식회사 | Method and apparatus for preventing screen off during automatic response system service in electronic device |
WO2014037052A1 (en) * | 2012-09-07 | 2014-03-13 | Richard Witte | Method and devices for generating an audio signal |
HUE036119T2 (en) | 2013-01-21 | 2018-06-28 | Dolby Laboratories Licensing Corp | Audio encoder and decoder with program loudness and boundary metadata |
IN2015MN01766A (en) | 2013-01-21 | 2015-08-28 | Dolby Lab Licensing Corp | |
EP3582218A1 (en) | 2013-02-21 | 2019-12-18 | Dolby International AB | Methods for parametric multi-channel encoding |
CN104080024B (en) | 2013-03-26 | 2019-02-19 | 杜比实验室特许公司 | Volume leveller controller and control method and audio classifiers |
CN110083714B (en) | 2013-04-05 | 2024-02-13 | 杜比实验室特许公司 | Acquisition, recovery, and matching of unique information from file-based media for automatic file detection |
WO2014179021A1 (en) * | 2013-04-29 | 2014-11-06 | Dolby Laboratories Licensing Corporation | Frequency band compression with dynamic thresholds |
TWM487509U (en) | 2013-06-19 | 2014-10-01 | 杜比實驗室特許公司 | Audio processing apparatus and electrical device |
CN110675883B (en) | 2013-09-12 | 2023-08-18 | 杜比实验室特许公司 | Loudness adjustment for downmixed audio content |
CN109785851B (en) | 2013-09-12 | 2023-12-01 | 杜比实验室特许公司 | Dynamic range control for various playback environments |
US9118293B1 (en) * | 2013-09-18 | 2015-08-25 | Parallels IP Holdings GmbH | Method for processing on mobile device audio signals of remotely executed applications |
US9331835B1 (en) | 2014-03-19 | 2016-05-03 | Amazon Technologies, Inc. | Radio frequency (RF) front-end circuitry for wireless local area network (WLAN), wide area network (WAN) and global positioning system (GPS) communications |
CN110808723A (en) | 2014-05-26 | 2020-02-18 | 杜比实验室特许公司 | Audio signal loudness control |
EP3204943B1 (en) | 2014-10-10 | 2018-12-05 | Dolby Laboratories Licensing Corp. | Transmission-agnostic presentation-based program loudness |
GB2533579A (en) * | 2014-12-22 | 2016-06-29 | Nokia Technologies Oy | An intelligent volume control interface |
EP3512185B1 (en) * | 2016-09-27 | 2020-11-11 | Huawei Technologies Co., Ltd. | Volume adjustment method and terminal |
CN108363557B (en) * | 2018-02-02 | 2020-06-12 | 刘国华 | Human-computer interaction method and device, computer equipment and storage medium |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5615256A (en) * | 1994-05-13 | 1997-03-25 | Nec Corporation | Device and method for automatically controlling sound volume in a communication apparatus |
US20030174847A1 (en) * | 1998-07-31 | 2003-09-18 | Circuit Research Labs, Inc. | Multi-state echo suppressor |
US20040228472A1 (en) * | 2003-05-14 | 2004-11-18 | Mohamed El-Hennawey | Method and apparatus for controlling the transmit level of telephone terminal equipment |
US20050004796A1 (en) * | 2003-02-27 | 2005-01-06 | Telefonaktiebolaget Lm Ericsson (Publ), | Audibility enhancement |
US7280958B2 (en) * | 2005-09-30 | 2007-10-09 | Motorola, Inc. | Method and system for suppressing receiver audio regeneration |
US7337026B2 (en) * | 2004-03-19 | 2008-02-26 | Via Technologies Inc. | Digital audio volume control |
US20080130906A1 (en) * | 2006-11-20 | 2008-06-05 | Personics Holdings Inc. | Methods and Devices for Hearing Damage Notification and Intervention II |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4249042A (en) * | 1979-08-06 | 1981-02-03 | Orban Associates, Inc. | Multiband cross-coupled compressor with overshoot protection circuit |
US4965825A (en) * | 1981-11-03 | 1990-10-23 | The Personalized Mass Media Corporation | Signal processing apparatus and methods |
DE69533259T2 (en) * | 1995-05-03 | 2005-08-18 | Sony Corp. | NONLINEAR QUANTIZATION OF AN INFORMATION SIGNAL |
US5790671A (en) | 1996-04-04 | 1998-08-04 | Ericsson Inc. | Method for automatically adjusting audio response for improved intelligibility |
TW376611B (en) * | 1998-05-26 | 1999-12-11 | Koninkl Philips Electronics Nv | Transmission system with improved speech encoder |
US20040141572A1 (en) * | 2003-01-21 | 2004-07-22 | Johnson Phillip Marc | Multi-pass inband bit and channel decoding for a multi-rate receiver |
US20040247993A1 (en) * | 2003-05-21 | 2004-12-09 | Sony Ericsson Mobile Communications Ab | System and Method of Improving Talk-Time at the End of Battery Life |
US8280730B2 (en) * | 2005-05-25 | 2012-10-02 | Motorola Mobility Llc | Method and apparatus of increasing speech intelligibility in noisy environments |
US20070253578A1 (en) * | 2006-04-19 | 2007-11-01 | Verdecanna Michael T | System and method for adjusting microphone gain based on volume setting of a mobile device |
US20100080379A1 (en) * | 2008-09-30 | 2010-04-01 | Shaohai Chen | Intelligibility boost |
- 2008
  - 2008-12-08: US application 12/330,339 filed; published as US20090253457A1 (en); status: Abandoned
- 2009
  - 2009-01-21: US application 12/357,119 filed; granted as US8111842B2 (en); status: Expired - Fee Related
  - 2009-01-21: US application 12/357,312 filed; granted as US8781820B2 (en); status: Active
Cited By (259)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9646614B2 (en) | 2000-03-16 | 2017-05-09 | Apple Inc. | Fast, language-independent method for user authentication by voice |
US10318871B2 (en) | 2005-09-08 | 2019-06-11 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US7937046B2 (en) * | 2006-07-27 | 2011-05-03 | Samsung Electronics Co., Ltd | Wireless communication device for receiving mobile broadcasting signal and transmitting/receiving bluetooth signal with single antenna |
US20080025276A1 (en) * | 2006-07-27 | 2008-01-31 | Samsung Electronics Co., Ltd. | Wireless communication device for receiving mobile broadcasting signal and transmitting/receiving bluetooth signal with single antenna |
US9117447B2 (en) | 2006-09-08 | 2015-08-25 | Apple Inc. | Using event alert text as input to an automated assistant |
US8930191B2 (en) | 2006-09-08 | 2015-01-06 | Apple Inc. | Paraphrasing of user requests and results by automated digital assistant |
US8942986B2 (en) | 2006-09-08 | 2015-01-27 | Apple Inc. | Determining user intent based on ontologies of domains |
US10568032B2 (en) | 2007-04-03 | 2020-02-18 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
US11023513B2 (en) | 2007-12-20 | 2021-06-01 | Apple Inc. | Method and apparatus for searching using an active ontology |
US10381016B2 (en) | 2008-01-03 | 2019-08-13 | Apple Inc. | Methods and apparatus for altering audio output signals |
US9330720B2 (en) | 2008-01-03 | 2016-05-03 | Apple Inc. | Methods and apparatus for altering audio output signals |
US9865248B2 (en) | 2008-04-05 | 2018-01-09 | Apple Inc. | Intelligent text-to-speech conversion |
US9626955B2 (en) | 2008-04-05 | 2017-04-18 | Apple Inc. | Intelligent text-to-speech conversion |
US10108612B2 (en) | 2008-07-31 | 2018-10-23 | Apple Inc. | Mobile device having human language translation capability with positional feedback |
US9535906B2 (en) | 2008-07-31 | 2017-01-03 | Apple Inc. | Mobile device having human language translation capability with positional feedback |
US10643611B2 (en) | 2008-10-02 | 2020-05-05 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US11348582B2 (en) | 2008-10-02 | 2022-05-31 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US9959870B2 (en) | 2008-12-11 | 2018-05-01 | Apple Inc. | Speech recognition involving a mobile device |
US10008212B2 (en) * | 2009-04-17 | 2018-06-26 | The Nielsen Company (Us), Llc | System and method for utilizing audio encoding for measuring media exposure with environmental masking |
US20190019521A1 (en) * | 2009-04-17 | 2019-01-17 | The Nielsen Company (Us), Llc | System and method for utilizing audio encoding for measuring media exposure with environmental masking |
US10475446B2 (en) | 2009-06-05 | 2019-11-12 | Apple Inc. | Using context information to facilitate processing of commands in a virtual assistant |
US10795541B2 (en) | 2009-06-05 | 2020-10-06 | Apple Inc. | Intelligent organization of tasks items |
US9858925B2 (en) | 2009-06-05 | 2018-01-02 | Apple Inc. | Using context information to facilitate processing of commands in a virtual assistant |
US11080012B2 (en) | 2009-06-05 | 2021-08-03 | Apple Inc. | Interface for a virtual digital assistant |
US10283110B2 (en) | 2009-07-02 | 2019-05-07 | Apple Inc. | Methods and apparatuses for automatic speech recognition |
US10679605B2 (en) | 2010-01-18 | 2020-06-09 | Apple Inc. | Hands-free list-reading by intelligent automated assistant |
US9318108B2 (en) | 2010-01-18 | 2016-04-19 | Apple Inc. | Intelligent automated assistant |
US10553209B2 (en) | 2010-01-18 | 2020-02-04 | Apple Inc. | Systems and methods for hands-free notification summaries |
US9548050B2 (en) | 2010-01-18 | 2017-01-17 | Apple Inc. | Intelligent automated assistant |
US10706841B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Task flow identification based on user intent |
US10741185B2 (en) | 2010-01-18 | 2020-08-11 | Apple Inc. | Intelligent automated assistant |
US10705794B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US10496753B2 (en) | 2010-01-18 | 2019-12-03 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US10276170B2 (en) | 2010-01-18 | 2019-04-30 | Apple Inc. | Intelligent automated assistant |
US11423886B2 (en) | 2010-01-18 | 2022-08-23 | Apple Inc. | Task flow identification based on user intent |
US8892446B2 (en) | 2010-01-18 | 2014-11-18 | Apple Inc. | Service orchestration for intelligent automated assistant |
US8903716B2 (en) | 2010-01-18 | 2014-12-02 | Apple Inc. | Personalized vocabulary for digital assistant |
US10049675B2 (en) | 2010-02-25 | 2018-08-14 | Apple Inc. | User profiling for voice input processing |
US9633660B2 (en) | 2010-02-25 | 2017-04-25 | Apple Inc. | User profiling for voice input processing |
US10692504B2 (en) | 2010-02-25 | 2020-06-23 | Apple Inc. | User profiling for voice input processing |
US9190062B2 (en) | 2010-02-25 | 2015-11-17 | Apple Inc. | User profiling for voice input processing |
US10446167B2 (en) | 2010-06-04 | 2019-10-15 | Apple Inc. | User-specific noise suppression for voice quality improvements |
US8639516B2 (en) | 2010-06-04 | 2014-01-28 | Apple Inc. | User-specific noise suppression for voice quality improvements |
US8613674B2 (en) | 2010-10-16 | 2013-12-24 | James Charles Vago | Methods, devices, and systems for video gaming |
US8885011B2 (en) * | 2010-10-22 | 2014-11-11 | Huizhou Tcl Mobile Communication Co., Ltd. | Method for implementing video call with bluetooth-based headset and video communication terminal for the same |
US20130201275A1 (en) * | 2010-10-22 | 2013-08-08 | Huizhou Tcl Mobile Communication Co., Ltd. | Method for implementing video call with bluetooth-based headset and video communication terminal for the same |
US20120152990A1 (en) * | 2010-12-15 | 2012-06-21 | Kulas Charles J | Thigh-mounted device holder |
US10762293B2 (en) | 2010-12-22 | 2020-09-01 | Apple Inc. | Using parts-of-speech tagging and named entity recognition for spelling correction |
US10102359B2 (en) | 2011-03-21 | 2018-10-16 | Apple Inc. | Device access using voice authentication |
US10417405B2 (en) | 2011-03-21 | 2019-09-17 | Apple Inc. | Device access using voice authentication |
US9262612B2 (en) | 2011-03-21 | 2016-02-16 | Apple Inc. | Device access using voice authentication |
US10706373B2 (en) | 2011-06-03 | 2020-07-07 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US10057736B2 (en) | 2011-06-03 | 2018-08-21 | Apple Inc. | Active transport based notifications |
US10241644B2 (en) | 2011-06-03 | 2019-03-26 | Apple Inc. | Actionable reminder entries |
US11120372B2 (en) | 2011-06-03 | 2021-09-14 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US11350253B2 (en) | 2011-06-03 | 2022-05-31 | Apple Inc. | Active transport based notifications |
US9798393B2 (en) | 2011-08-29 | 2017-10-24 | Apple Inc. | Text correction processing |
US10241752B2 (en) | 2011-09-30 | 2019-03-26 | Apple Inc. | Interface for a virtual digital assistant |
US11069336B2 (en) | 2012-03-02 | 2021-07-20 | Apple Inc. | Systems and methods for name pronunciation |
US10134385B2 (en) | 2012-03-02 | 2018-11-20 | Apple Inc. | Systems and methods for name pronunciation |
US9483461B2 (en) | 2012-03-06 | 2016-11-01 | Apple Inc. | Handling speech synthesis of content for multiple languages |
US9953088B2 (en) | 2012-05-14 | 2018-04-24 | Apple Inc. | Crowd sourcing information to fulfill user requests |
US11269678B2 (en) | 2012-05-15 | 2022-03-08 | Apple Inc. | Systems and methods for integrating third party services with a digital assistant |
US10079014B2 (en) | 2012-06-08 | 2018-09-18 | Apple Inc. | Name recognition system |
US9495129B2 (en) | 2012-06-29 | 2016-11-15 | Apple Inc. | Device, method, and user interface for voice-activated navigation and browsing of a document |
US9576574B2 (en) | 2012-09-10 | 2017-02-21 | Apple Inc. | Context-sensitive handling of interruptions by intelligent digital assistant |
US9971774B2 (en) | 2012-09-19 | 2018-05-15 | Apple Inc. | Voice-based media searching |
US10714117B2 (en) | 2013-02-07 | 2020-07-14 | Apple Inc. | Voice trigger for a digital assistant |
US10978090B2 (en) | 2013-02-07 | 2021-04-13 | Apple Inc. | Voice trigger for a digital assistant |
US10199051B2 (en) | 2013-02-07 | 2019-02-05 | Apple Inc. | Voice trigger for a digital assistant |
US9368114B2 (en) | 2013-03-14 | 2016-06-14 | Apple Inc. | Context-sensitive handling of interruptions |
US11388291B2 (en) | 2013-03-14 | 2022-07-12 | Apple Inc. | System and method for processing voicemail |
US10652394B2 (en) | 2013-03-14 | 2020-05-12 | Apple Inc. | System and method for processing voicemail |
US9697822B1 (en) | 2013-03-15 | 2017-07-04 | Apple Inc. | System and method for updating an adaptive speech recognition model |
US9922642B2 (en) | 2013-03-15 | 2018-03-20 | Apple Inc. | Training an at least partial voice command system |
US9633674B2 (en) | 2013-06-07 | 2017-04-25 | Apple Inc. | System and method for detecting errors in interactions with a voice-based digital assistant |
US9582608B2 (en) | 2013-06-07 | 2017-02-28 | Apple Inc. | Unified ranking with entropy-weighted information for phrase-based semantic auto-completion |
US9620104B2 (en) | 2013-06-07 | 2017-04-11 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US9966060B2 (en) | 2013-06-07 | 2018-05-08 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US10657961B2 (en) | 2013-06-08 | 2020-05-19 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US9966068B2 (en) | 2013-06-08 | 2018-05-08 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US10185542B2 (en) | 2013-06-09 | 2019-01-22 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
US10769385B2 (en) | 2013-06-09 | 2020-09-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
US11048473B2 (en) | 2013-06-09 | 2021-06-29 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
US10176167B2 (en) | 2013-06-09 | 2019-01-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
US9300784B2 (en) | 2013-06-13 | 2016-03-29 | Apple Inc. | System and method for emergency calls initiated by voice command |
US10791216B2 (en) | 2013-08-06 | 2020-09-29 | Apple Inc. | Auto-activating smart responses based on activities from remote devices |
US20150156329A1 (en) * | 2013-11-30 | 2015-06-04 | Fu Tai Hua Industry (Shenzhen) Co., Ltd. | Communications device, volume adjusting system and method |
US11314370B2 (en) | 2013-12-06 | 2022-04-26 | Apple Inc. | Method for extracting salient dialog usage from live data |
US9620105B2 (en) | 2014-05-15 | 2017-04-11 | Apple Inc. | Analyzing audio input for efficient speech and music recognition |
US10592095B2 (en) | 2014-05-23 | 2020-03-17 | Apple Inc. | Instantaneous speaking of content on touch devices |
US9502031B2 (en) | 2014-05-27 | 2016-11-22 | Apple Inc. | Method for supporting dynamic grammars in WFST-based ASR |
US11133008B2 (en) | 2014-05-30 | 2021-09-28 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US9430463B2 (en) | 2014-05-30 | 2016-08-30 | Apple Inc. | Exemplar-based natural language processing |
US9734193B2 (en) | 2014-05-30 | 2017-08-15 | Apple Inc. | Determining domain salience ranking from ambiguous words in natural speech |
US9785630B2 (en) | 2014-05-30 | 2017-10-10 | Apple Inc. | Text prediction using combined word N-gram and unigram language models |
US9715875B2 (en) | 2014-05-30 | 2017-07-25 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US10699717B2 (en) | 2014-05-30 | 2020-06-30 | Apple Inc. | Intelligent assistant for home automation |
US10878809B2 (en) | 2014-05-30 | 2020-12-29 | Apple Inc. | Multi-command single utterance input method |
US10417344B2 (en) | 2014-05-30 | 2019-09-17 | Apple Inc. | Exemplar-based natural language processing |
US10078631B2 (en) | 2014-05-30 | 2018-09-18 | Apple Inc. | Entropy-guided text prediction using combined word and character n-gram language models |
US10289433B2 (en) | 2014-05-30 | 2019-05-14 | Apple Inc. | Domain specific language for encoding assistant dialog |
US10169329B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Exemplar-based natural language processing |
US10170123B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Intelligent assistant for home automation |
US10083690B2 (en) | 2014-05-30 | 2018-09-25 | Apple Inc. | Better resolution when referencing to concepts |
US10657966B2 (en) | 2014-05-30 | 2020-05-19 | Apple Inc. | Better resolution when referencing to concepts |
US9633004B2 (en) | 2014-05-30 | 2017-04-25 | Apple Inc. | Better resolution when referencing to concepts |
US9842101B2 (en) | 2014-05-30 | 2017-12-12 | Apple Inc. | Predictive conversion of language input |
US10714095B2 (en) | 2014-05-30 | 2020-07-14 | Apple Inc. | Intelligent assistant for home automation |
US11257504B2 (en) | 2014-05-30 | 2022-02-22 | Apple Inc. | Intelligent assistant for home automation |
US9760559B2 (en) | 2014-05-30 | 2017-09-12 | Apple Inc. | Predictive text input |
US9966065B2 (en) | 2014-05-30 | 2018-05-08 | Apple Inc. | Multi-command single utterance input method |
US10497365B2 (en) | 2014-05-30 | 2019-12-03 | Apple Inc. | Multi-command single utterance input method |
US10904611B2 (en) | 2014-06-30 | 2021-01-26 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US10659851B2 (en) | 2014-06-30 | 2020-05-19 | Apple Inc. | Real-time digital assistant knowledge updates |
US9338493B2 (en) | 2014-06-30 | 2016-05-10 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US9668024B2 (en) | 2014-06-30 | 2017-05-30 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US10446141B2 (en) | 2014-08-28 | 2019-10-15 | Apple Inc. | Automatic speech recognition based on user feedback |
US9818400B2 (en) | 2014-09-11 | 2017-11-14 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US10431204B2 (en) | 2014-09-11 | 2019-10-01 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US10789041B2 (en) | 2014-09-12 | 2020-09-29 | Apple Inc. | Dynamic thresholds for always listening speech trigger |
US10453443B2 (en) | 2014-09-30 | 2019-10-22 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US10127911B2 (en) | 2014-09-30 | 2018-11-13 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US9886432B2 (en) | 2014-09-30 | 2018-02-06 | Apple Inc. | Parsimonious handling of word inflection via categorical stem + suffix N-gram language models |
US9668121B2 (en) | 2014-09-30 | 2017-05-30 | Apple Inc. | Social reminders |
US9986419B2 (en) | 2014-09-30 | 2018-05-29 | Apple Inc. | Social reminders |
US10390213B2 (en) | 2014-09-30 | 2019-08-20 | Apple Inc. | Social reminders |
US9646609B2 (en) | 2014-09-30 | 2017-05-09 | Apple Inc. | Caching apparatus for serving phonetic pronunciations |
US10438595B2 (en) | 2014-09-30 | 2019-10-08 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US10074360B2 (en) | 2014-09-30 | 2018-09-11 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US10552013B2 (en) | 2014-12-02 | 2020-02-04 | Apple Inc. | Data detection |
US11556230B2 (en) | 2014-12-02 | 2023-01-17 | Apple Inc. | Data detection |
US9711141B2 (en) | 2014-12-09 | 2017-07-18 | Apple Inc. | Disambiguating heteronyms in speech synthesis |
US11231904B2 (en) | 2015-03-06 | 2022-01-25 | Apple Inc. | Reducing response latency of intelligent automated assistants |
US9865280B2 (en) | 2015-03-06 | 2018-01-09 | Apple Inc. | Structured dictation using intelligent automated assistants |
US11087759B2 (en) | 2015-03-08 | 2021-08-10 | Apple Inc. | Virtual assistant activation |
US9886953B2 (en) | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation |
US9721566B2 (en) | 2015-03-08 | 2017-08-01 | Apple Inc. | Competing devices responding to voice triggers |
US10529332B2 (en) | 2015-03-08 | 2020-01-07 | Apple Inc. | Virtual assistant activation |
US10311871B2 (en) | 2015-03-08 | 2019-06-04 | Apple Inc. | Competing devices responding to voice triggers |
US10930282B2 (en) | 2015-03-08 | 2021-02-23 | Apple Inc. | Competing devices responding to voice triggers |
US10567477B2 (en) | 2015-03-08 | 2020-02-18 | Apple Inc. | Virtual assistant continuity |
US9899019B2 (en) | 2015-03-18 | 2018-02-20 | Apple Inc. | Systems and methods for structured stem and suffix language models |
US9842105B2 (en) | 2015-04-16 | 2017-12-12 | Apple Inc. | Parsimonious continuous-space phrase representations for natural language processing |
US11468282B2 (en) | 2015-05-15 | 2022-10-11 | Apple Inc. | Virtual assistant in a communication session |
US10083688B2 (en) | 2015-05-27 | 2018-09-25 | Apple Inc. | Device voice control for selecting a displayed affordance |
US11127397B2 (en) | 2015-05-27 | 2021-09-21 | Apple Inc. | Device voice control |
US10127220B2 (en) | 2015-06-04 | 2018-11-13 | Apple Inc. | Language identification from short strings |
US10356243B2 (en) | 2015-06-05 | 2019-07-16 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
US10101822B2 (en) | 2015-06-05 | 2018-10-16 | Apple Inc. | Language input correction |
US10681212B2 (en) | 2015-06-05 | 2020-06-09 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
US10186254B2 (en) | 2015-06-07 | 2019-01-22 | Apple Inc. | Context-based endpoint detection |
US11025565B2 (en) | 2015-06-07 | 2021-06-01 | Apple Inc. | Personalized prediction of responses for instant messaging |
US10255907B2 (en) | 2015-06-07 | 2019-04-09 | Apple Inc. | Automatic accent detection using acoustic models |
US11010127B2 (en) | 2015-06-29 | 2021-05-18 | Apple Inc. | Virtual assistant for media playback |
US11500672B2 (en) | 2015-09-08 | 2022-11-15 | Apple Inc. | Distributed personal assistant |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US9697820B2 (en) | 2015-09-24 | 2017-07-04 | Apple Inc. | Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks |
US10366158B2 (en) | 2015-09-29 | 2019-07-30 | Apple Inc. | Efficient word encoding for recurrent neural network language models |
US11010550B2 (en) | 2015-09-29 | 2021-05-18 | Apple Inc. | Unified language modeling framework for word prediction, auto-completion and auto-correction |
US11587559B2 (en) | 2015-09-30 | 2023-02-21 | Apple Inc. | Intelligent device identification |
US11526368B2 (en) | 2015-11-06 | 2022-12-13 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10354652B2 (en) | 2015-12-02 | 2019-07-16 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10049668B2 (en) | 2015-12-02 | 2018-08-14 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US10942703B2 (en) | 2015-12-23 | 2021-03-09 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US10446143B2 (en) | 2016-03-14 | 2019-10-15 | Apple Inc. | Identification of voice inputs providing credentials |
US9934775B2 (en) | 2016-05-26 | 2018-04-03 | Apple Inc. | Unit-selection text-to-speech synthesis based on predicted concatenation parameters |
US9972304B2 (en) | 2016-06-03 | 2018-05-15 | Apple Inc. | Privacy preserving distributed evaluation framework for embedded personalized systems |
US11227589B2 (en) | 2016-06-06 | 2022-01-18 | Apple Inc. | Intelligent list reading |
US10249300B2 (en) | 2016-06-06 | 2019-04-02 | Apple Inc. | Intelligent list reading |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple, Inc. | Intelligent automated assistant for media exploration |
US11069347B2 (en) | 2016-06-08 | 2021-07-20 | Apple Inc. | Intelligent automated assistant for media exploration |
US10354011B2 (en) | 2016-06-09 | 2019-07-16 | Apple Inc. | Intelligent automated assistant in a home environment |
US10067938B2 (en) | 2016-06-10 | 2018-09-04 | Apple Inc. | Multilingual word prediction |
US10490187B2 (en) | 2016-06-10 | 2019-11-26 | Apple Inc. | Digital assistant providing automated status report |
US11037565B2 (en) | 2016-06-10 | 2021-06-15 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10733993B2 (en) | 2016-06-10 | 2020-08-04 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10192552B2 (en) | 2016-06-10 | 2019-01-29 | Apple Inc. | Digital assistant providing whispered speech |
US10509862B2 (en) | 2016-06-10 | 2019-12-17 | Apple Inc. | Dynamic phrase expansion of language input |
US10521466B2 (en) | 2016-06-11 | 2019-12-31 | Apple Inc. | Data driven natural language event detection and classification |
US10297253B2 (en) | 2016-06-11 | 2019-05-21 | Apple Inc. | Application integration with a digital assistant |
US10580409B2 (en) | 2016-06-11 | 2020-03-03 | Apple Inc. | Application integration with a digital assistant |
US11152002B2 (en) | 2016-06-11 | 2021-10-19 | Apple Inc. | Application integration with a digital assistant |
US10942702B2 (en) | 2016-06-11 | 2021-03-09 | Apple Inc. | Intelligent device arbitration and control |
US10089072B2 (en) | 2016-06-11 | 2018-10-02 | Apple Inc. | Intelligent device arbitration and control |
US10269345B2 (en) | 2016-06-11 | 2019-04-23 | Apple Inc. | Intelligent task discovery |
US10474753B2 (en) | 2016-09-07 | 2019-11-12 | Apple Inc. | Language identification using recurrent neural networks |
US10553215B2 (en) | 2016-09-23 | 2020-02-04 | Apple Inc. | Intelligent automated assistant |
US10043516B2 (en) | 2016-09-23 | 2018-08-07 | Apple Inc. | Intelligent automated assistant |
US11281993B2 (en) | 2016-12-05 | 2022-03-22 | Apple Inc. | Model and ensemble compression for metric learning |
US10593346B2 (en) | 2016-12-22 | 2020-03-17 | Apple Inc. | Rank-reduced token representation for automatic speech recognition |
US11656884B2 (en) | 2017-01-09 | 2023-05-23 | Apple Inc. | Application integration with a digital assistant |
US11204787B2 (en) | 2017-01-09 | 2021-12-21 | Apple Inc. | Application integration with a digital assistant |
US10741181B2 (en) | 2017-05-09 | 2020-08-11 | Apple Inc. | User interface for correcting recognition errors |
US10417266B2 (en) | 2017-05-09 | 2019-09-17 | Apple Inc. | Context-aware ranking of intelligent response suggestions |
US10332518B2 (en) | 2017-05-09 | 2019-06-25 | Apple Inc. | User interface for correcting recognition errors |
US10847142B2 (en) | 2017-05-11 | 2020-11-24 | Apple Inc. | Maintaining privacy of personal information |
US10755703B2 (en) | 2017-05-11 | 2020-08-25 | Apple Inc. | Offline personal assistant |
US10395654B2 (en) | 2017-05-11 | 2019-08-27 | Apple Inc. | Text normalization based on a data-driven learning network |
US10726832B2 (en) | 2017-05-11 | 2020-07-28 | Apple Inc. | Maintaining privacy of personal information |
US11405466B2 (en) | 2017-05-12 | 2022-08-02 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US10789945B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Low-latency intelligent automated assistant |
US11301477B2 (en) | 2017-05-12 | 2022-04-12 | Apple Inc. | Feedback analysis of a digital assistant |
US10791176B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US10410637B2 (en) | 2017-05-12 | 2019-09-10 | Apple Inc. | User-specific acoustic models |
US10482874B2 (en) | 2017-05-15 | 2019-11-19 | Apple Inc. | Hierarchical belief states for digital assistants |
US10810274B2 (en) | 2017-05-15 | 2020-10-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
US10303715B2 (en) | 2017-05-16 | 2019-05-28 | Apple Inc. | Intelligent automated assistant for media exploration |
US10311144B2 (en) | 2017-05-16 | 2019-06-04 | Apple Inc. | Emoji word sense disambiguation |
US11217255B2 (en) | 2017-05-16 | 2022-01-04 | Apple Inc. | Far-field extension for digital assistant services |
US10909171B2 (en) | 2017-05-16 | 2021-02-02 | Apple Inc. | Intelligent automated assistant for media exploration |
US10748546B2 (en) | 2017-05-16 | 2020-08-18 | Apple Inc. | Digital assistant services based on device capabilities |
US10403278B2 (en) | 2017-05-16 | 2019-09-03 | Apple Inc. | Methods and systems for phonetic matching in digital assistant services |
US10657328B2 (en) | 2017-06-02 | 2020-05-19 | Apple Inc. | Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling |
US10445429B2 (en) | 2017-09-21 | 2019-10-15 | Apple Inc. | Natural language understanding using vocabularies with compressed serialized tries |
US10755051B2 (en) | 2017-09-29 | 2020-08-25 | Apple Inc. | Rule-based natural language processing |
US10636424B2 (en) | 2017-11-30 | 2020-04-28 | Apple Inc. | Multi-turn canned dialog |
US10733982B2 (en) | 2018-01-08 | 2020-08-04 | Apple Inc. | Multi-directional dialog |
US10733375B2 (en) | 2018-01-31 | 2020-08-04 | Apple Inc. | Knowledge-based framework for improving natural language understanding |
US10789959B2 (en) | 2018-03-02 | 2020-09-29 | Apple Inc. | Training speaker recognition models for digital assistants |
US10592604B2 (en) | 2018-03-12 | 2020-03-17 | Apple Inc. | Inverse text normalization for automatic speech recognition |
US10818288B2 (en) | 2018-03-26 | 2020-10-27 | Apple Inc. | Natural assistant interaction |
US10909331B2 (en) | 2018-03-30 | 2021-02-02 | Apple Inc. | Implicit identification of translation payload with neural machine translation |
US10928918B2 (en) | 2018-05-07 | 2021-02-23 | Apple Inc. | Raise to speak |
US11145294B2 (en) | 2018-05-07 | 2021-10-12 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
US10984780B2 (en) | 2018-05-21 | 2021-04-20 | Apple Inc. | Global semantic word embeddings using bi-directional recurrent neural networks |
US11386266B2 (en) | 2018-06-01 | 2022-07-12 | Apple Inc. | Text correction |
US10892996B2 (en) | 2018-06-01 | 2021-01-12 | Apple Inc. | Variable latency device coordination |
US11495218B2 (en) | 2018-06-01 | 2022-11-08 | Apple Inc. | Virtual assistant operation in multi-device environments |
US10684703B2 (en) | 2018-06-01 | 2020-06-16 | Apple Inc. | Attention aware virtual assistant dismissal |
US10984798B2 (en) | 2018-06-01 | 2021-04-20 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US11009970B2 (en) | 2018-06-01 | 2021-05-18 | Apple Inc. | Attention aware virtual assistant dismissal |
US10720160B2 (en) | 2018-06-01 | 2020-07-21 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US10403283B1 (en) | 2018-06-01 | 2019-09-03 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US10944859B2 (en) | 2018-06-03 | 2021-03-09 | Apple Inc. | Accelerated task performance |
US10504518B1 (en) | 2018-06-03 | 2019-12-10 | Apple Inc. | Accelerated task performance |
US10496705B1 (en) | 2018-06-03 | 2019-12-03 | Apple Inc. | Accelerated task performance |
US11010561B2 (en) | 2018-09-27 | 2021-05-18 | Apple Inc. | Sentiment prediction from textual data |
US10839159B2 (en) | 2018-09-28 | 2020-11-17 | Apple Inc. | Named entity normalization in a spoken dialog system |
US11170166B2 (en) | 2018-09-28 | 2021-11-09 | Apple Inc. | Neural typographical error modeling via generative adversarial networks |
US11462215B2 (en) | 2018-09-28 | 2022-10-04 | Apple Inc. | Multi-modal inputs for voice commands |
US11475898B2 (en) | 2018-10-26 | 2022-10-18 | Apple Inc. | Low-latency multi-speaker speech recognition |
US11638059B2 (en) | 2019-01-04 | 2023-04-25 | Apple Inc. | Content playback on multiple devices |
US11348573B2 (en) | 2019-03-18 | 2022-05-31 | Apple Inc. | Multimodality in digital assistant systems |
US11928604B2 (en) | 2019-04-09 | 2024-03-12 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US11423908B2 (en) | 2019-05-06 | 2022-08-23 | Apple Inc. | Interpreting spoken requests |
US11475884B2 (en) | 2019-05-06 | 2022-10-18 | Apple Inc. | Reducing digital assistant latency when a language is incorrectly determined |
US11217251B2 (en) | 2019-05-06 | 2022-01-04 | Apple Inc. | Spoken notifications |
US11307752B2 (en) | 2019-05-06 | 2022-04-19 | Apple Inc. | User configurable task triggers |
US11140099B2 (en) | 2019-05-21 | 2021-10-05 | Apple Inc. | Providing message response suggestions |
US11496600B2 (en) | 2019-05-31 | 2022-11-08 | Apple Inc. | Remote execution of machine-learned models |
US11360739B2 (en) | 2019-05-31 | 2022-06-14 | Apple Inc. | User activity shortcut suggestions |
US11289073B2 (en) | 2019-05-31 | 2022-03-29 | Apple Inc. | Device text to speech |
US11237797B2 (en) | 2019-05-31 | 2022-02-01 | Apple Inc. | User activity shortcut suggestions |
US11360641B2 (en) | 2019-06-01 | 2022-06-14 | Apple Inc. | Increasing the relevance of new available information |
US11488406B2 (en) | 2019-09-25 | 2022-11-01 | Apple Inc. | Text detection using global geometry estimators |
Also Published As
Publication number | Publication date |
---|---|
US20090254339A1 (en) | 2009-10-08 |
US20090252350A1 (en) | 2009-10-08 |
US8111842B2 (en) | 2012-02-07 |
US8781820B2 (en) | 2014-07-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20090253457A1 (en) | | Audio signal processing for certification enhancement in a handheld wireless communications device |
US8775172B2 (en) | | Machine for enabling and disabling noise reduction (MEDNR) based on a threshold |
US8600454B2 (en) | | Decisions on ambient noise suppression in a mobile communications handset device |
US8447595B2 (en) | | Echo-related decisions on automatic gain control of uplink speech signal in a communications device |
US8744524B2 (en) | | User interface tone echo cancellation |
US8924205B2 (en) | | Methods and systems for automatic enablement or disablement of noise reduction within a communication device |
CN107277208B (en) | | Communication method, first communication device and terminal |
US7720455B2 (en) | | Sidetone generation for a wireless system that uses time domain isolation |
US8463256B2 (en) | | System including a communication apparatus having a digital audio interface for audio testing with radio isolation |
TW201442484A (en) | | Communication device with self-on-demand module and the method of the same |
US20120172094A1 (en) | | Mobile Communication Apparatus |
US20070201431A1 (en) | | Wireless communication device and method for processing voice over internet protocol signals thereof |
KR20090027817A (en) | | Method for output background sound and mobile communication terminal using the same |
KR100726479B1 (en) | | Method for controlling sound volume of communication terminal by using noise measuring sensor and communication terminal of enabling the method |
US20200210140A1 (en) | | Radio gateway audio port configuration |
CN101815137A (en) | | Double-mode conference telephone |
KR100605894B1 (en) | | Apparatus and method for automatic controlling audio and radio signal in mobile communication terminal |
US9031619B2 (en) | | Visual indication of active speech reception |
KR100247192B1 (en) | | Method for suppressing side audio signal in cordless telephone |
US20150350771A1 (en) | | Machine and a System for Automatically Controlling Noise Reduction Feature of a Communication Device |
KR20050030980A (en) | | Method for playing music file using ear-microphone in mobile communication terminal |
KR20050100829A (en) | | Apparatus for outputting stereo beep sound in mobile communication terminal |
KR20070042026A (en) | | Method for controlling sound volume of communication terminal and communication terminal of enabling the method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| | AS | Assignment | Owner name: APPLE INC., CALIFORNIA; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: SEGUIN, CHAD G.; REEL/FRAME: 021948/0979; Effective date: 20081206 |
| | STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |