US20120029672A1 - Continuous audio interaction with interruptive audio - Google Patents
- Publication number
- US20120029672A1 (U.S. application Ser. No. 13/253,583)
- Authority
- US
- United States
- Prior art keywords
- audio
- application
- playing
- generated
- resuming
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
- G11B27/034—Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/19—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
- G11B27/28—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
- G11B27/30—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording on the same track as the main recording
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B20/00—Signal processing not specific to the method of recording or reproducing; Circuits therefor
- G11B20/10—Digital recording or reproducing
- G11B2020/10935—Digital recording or reproducing wherein a time constraint must be met
- G11B2020/10972—Management of interruptions, e.g. due to editing
Definitions
- This description relates to audio output devices.
- Devices may output or play continuous audio, such as podcasts, audio books, movies, or music.
- This continuous audio may be interrupted by time-sensitive or interruptive audio applications of the devices, such as incoming telephone calls. Transferring from the continuous audio to the interruptive audio and back to the continuous audio may lose the context or continuity of the continuous audio.
- A method may include playing, by a computing device, continuous or first audio generated by a first application; determining that the first audio generated by the first application should be interrupted based on an interrupt or interruptive notification associated with a second application; pausing the continuous or first audio generated by the first application; playing interruptive audio generated by the second application during the pausing of the continuous or first audio; identifying a portion of the continuous or first audio that was previously played before the first audio was paused; and resuming the playing of the continuous or first audio so that the portion of the first audio that was previously played is replayed.
- An apparatus may include at least one processor and at least one memory.
- The at least one memory may include computer-executable code that, when executed by the at least one processor, is configured to cause the apparatus to play continuous or first audio generated by a first application; determine that the continuous or first audio generated by the first application should be interrupted based on an interrupt or interruptive notification associated with a second application; pause the playing of the continuous or first audio generated by the first application; play interruptive audio generated by the second application during the pausing of the continuous or first audio; identify a portion of the continuous or first audio that was previously played before the continuous or first audio was paused; and resume the playing of the continuous or first audio so that the portion of the continuous or first audio that was previously played is replayed.
- A non-transitory computer-readable storage medium may include computer-executable code stored thereon that, when executed by a processor, is configured to cause an application executed by the processor to play continuous or first audio; receive a temporary interrupt or interruption message from an operating system executed by the processor; pause the playing of the continuous or first audio based on receiving the temporary interrupt or interruption message; receive a resume message from the operating system; identify a portion of the continuous or first audio that was previously played before the continuous or first audio was paused; and resume playing the continuous or first audio based on receiving the resume message, the resuming including replaying the portion of the continuous or first audio that was previously played before the pause.
- A method may include determining, by a first computing device, to interrupt playing of continuous or first audio by a second computing device based on receiving a prompt; sending a pause message to the second computing device based on the determining, the pause message instructing the second computing device to pause playing the continuous or first audio; playing interruptive audio after sending the pause message; and sending a resume message to the second computing device after playing the interruptive audio, the resume message instructing the second computing device to resume playing the continuous or first audio.
- An apparatus may include at least one processor and at least one memory.
- The at least one memory may include computer-executable code that, when executed by the at least one processor, is configured to cause the apparatus to determine to interrupt playing of continuous or first audio by a computing device based on receiving a prompt; send a pause message to the computing device based on the determining, the pause message instructing the computing device to pause playing the continuous or first audio; play interruptive audio after sending the pause message; and send a resume message to the computing device after playing the interruptive audio, the resume message instructing the computing device to resume playing the continuous or first audio.
- A non-transitory computer-readable storage medium may include computer-executable code stored thereon that, when executed by a processor, is configured to cause an application executed by the processor to determine to interrupt playing of continuous or first audio by a computing device based on receiving a prompt; send a pause message to the computing device based on the determining, the pause message instructing the computing device to pause playing the continuous or first audio; play interruptive audio after sending the pause message; and send a resume message to the computing device after playing the interruptive audio, the resume message instructing the computing device to resume playing the continuous or first audio.
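The pause/identify/resume sequence recited above can be sketched in a few lines. This is an illustrative model only, not the claimed implementation; the class name, `REPLAY_SECONDS`, and the five-second replay window are all assumptions for the sketch.

```python
class ContinuousPlayer:
    """Hypothetical model of the first application's pause/replay behavior."""

    REPLAY_SECONDS = 5.0  # assumed length of the previously played portion to replay

    def __init__(self, track_length):
        self.position = 0.0              # seconds of audio already played
        self.track_length = track_length
        self.paused = False

    def play(self, seconds):
        """Advance playback; does nothing while paused (interruptive audio owns the output)."""
        if not self.paused:
            self.position = min(self.position + seconds, self.track_length)

    def pause(self):
        """Pause on a temporary interruption; the current position is retained."""
        self.paused = True

    def resume(self):
        """Resume so that a previously played portion is replayed, restoring context."""
        self.position = max(0.0, self.position - self.REPLAY_SECONDS)
        self.paused = False
        return self.position
```

For example, a player paused 30 seconds into a podcast would resume from the 25-second mark, replaying the last five seconds for context.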
- FIG. 1 is an exemplary block diagram of a computing device according to an example implementation.
- FIG. 2 is a vertical-time sequence diagram showing actions performed by, and messages exchanged between, a first application, an operating system, and a second application of the computing device shown in FIG. 1 .
- FIG. 3 is a vertical-time sequence diagram showing messages sent between, and outputs provided by, a first device and a second device.
- FIG. 4 is a flowchart of a method according to an example implementation.
- FIG. 5 is a flowchart of a method according to another example implementation.
- FIG. 6 is a block diagram showing example or representative computing devices and associated elements that may be used to implement the systems and methods of FIGS. 1-5 .
- FIG. 1 is an exemplary block diagram of a computing device 100 according to an example implementation.
- The computing device 100 may include, for example, a smart phone, a personal digital assistant (PDA), a cellular phone with computing features and/or multiple application features, an iPhone®, or a Droid® smartphone, according to example implementations.
- The computing device 100 may be capable of running or performing multiple software applications at the same time.
- The computing device 100 may perform more than one application at a given time; however, the computing device 100 may have to allocate shared resources between the multiple applications.
- The computing device 100 may include an audio output 102.
- The audio output 102 may include, for example, a speaker, and may play audio and/or provide different kinds of audio output based on applications executed by the computing device 100.
- The audio output 102 may, for example, play or provide continuous or first audio output, which is not time-sensitive but is listened to by a user of the computing device 100 over an extended period of time.
- The continuous or first audio played or output by the audio output 102 may include, for example, podcasts, the sound portions of movies, audio books, or music.
- The continuous or first audio output may be based on an application executed by the computing device 100.
- The audio output 102 also may play and/or output interruptive audio.
- Interruptive audio may include time-sensitive audio output which is listened to for a shorter period of time.
- The interruptive audio may include, for example, a notification of emails, text messages, or calendar alerts, the speech output of a turn-by-turn navigation application, the notification of a phone call, or the voice output of a phone call itself.
- The interruptive audio may be based on another application executed by the computing device 100.
- The computing device 100 may allocate the resources of the audio output 102 so that two applications are not playing and/or providing output out of the audio output 102 at the same time.
- The computing device 100 may, for example, allow interruptive audio to take priority over continuous or first audio.
- The computing device 100 may include multiple applications.
- The multiple applications of the computing device 100 may provide continuous or first audio output and/or interruptive audio output.
- The computing device 100, as shown in FIG. 1, includes two applications, a first application 104 and a second application 106; however, the computing device 100 may include any number of applications.
- The first application 104 may play and/or output continuous or first audio 108 via the audio output 102.
- The first application 104 may include, for example, an audio player such as a podcast player, an audio book player, or a music player, or may include a movie player which has both a video and an audio output component.
- The first application 104 may play and/or output the continuous or first audio 108 via the audio output 102.
- The second application 106 may include an application which plays and/or outputs interruptive audio 110.
- The second application 106 may include, for example, an email program, a text message program, a calendar program which provides audio alerts, a turn-by-turn navigation application which provides audio output of turns (such as, "turn left in one hundred feet"), or a telephone application which provides a notification of incoming phone calls and also plays and/or outputs the speech received by the computing device 100.
- The second application 106 may play and/or output the interruptive audio 110 via the audio output 102.
- A problem may arise when the first application 104 is playing and/or outputting the continuous or first audio 108 via the audio output 102 and the second application 106 needs to play and/or output the interruptive audio 110 via the audio output 102. Allowing both the first application 104 and the second application 106 to play and/or output their respective audio at the same time may result in neither of the outputs 108, 110 being intelligible and/or one or both of the outputs 108, 110 being drowned out.
- The computing device 100 may therefore need to prioritize when allocating the shared resource of the audio output 102 between the audio outputs 108, 110 of the first application 104 and the second application 106.
- The computing device 100 may include an operating system 112.
- The operating system 112 may not necessarily be seen by, or visible to, a user of the computing device 100.
- The operating system 112 may allocate the resources of the computing device 100 between the different applications in the computing device 100, such as the first application 104 and the second application 106.
- The resources of the computing device 100 may include, for example, the audio output 102, a processor 114, such as a microprocessor, a memory 116, and input and output components of the computing device 100.
- The processor 114 may execute instructions, such as the instructions stored in the memory 116, may run applications, such as the applications 104, 106, and/or may receive input and provide output from and to the input and output devices described herein.
- The memory 116, which may include any type of memory, such as read-only memory (ROM), random access memory (RAM), and/or flash memory, may store program instructions and/or data.
- The operating system 112 may, for example, allocate processor 114 resources, memory 116 resources, and/or the audio output 102 resource between the first application 104 and the second application 106.
- The operating system 112 also may receive and send messages from and to the first application 104 and the second application 106, as discussed further below.
- The operating system 112 may act as an interface between the applications, such as the first application 104 and the second application 106, and the resources of the computing device 100.
- Input and output resources of the computing device 100 may include, for example, a visual output and/or tactile input 118.
- The visual output and/or tactile input 118 may include, for example, a display which may display graphical icons, notifications, video, and other graphical outputs.
- The visual output/tactile input 118 also may receive input from a user, such as by providing a touch screen in the computing device 100.
- The input and output resources of the computing device 100 also may include an antenna 120.
- The antenna 120 may extend out of the computing device 100, or may be internal to the computing device 100.
- The antenna 120 may serve to receive and transmit wireless signals between the computing device 100 and other computing devices, such as base stations, cellular towers, access points, Node Bs, or other devices which serve mobile computing devices, or even other mobile computing devices.
- The input and output resources of the computing device 100 also may include a keyboard input 122.
- The keyboard input 122 may receive text input from a user. While the computing device 100 shown in FIG. 1 includes a keyboard input 122, the computing device 100 need not necessarily include the keyboard input 122. The computing device 100 may instead (or also) receive text input via a touch screen, such as the visual output/tactile input 118.
- The input and output resources of the computing device 100 also may include an audio input 124.
- The audio input 124 may include, for example, a microphone which receives audio input such as speech from a user of the computing device.
- The operating system 112 may interface between the applications 104, 106 and any or all of the input and output resources of the computing device 100, such as, but not limited to, the audio output 102, visual output/tactile input 118, antenna 120, keyboard input 122, and audio input 124.
- The operating system 112 may, for example, determine to temporarily interrupt the continuous or first audio 108 generated by the first application 104 based on an interrupt or interruptive notification associated with the second application 106. Based on the interruption, the first application 104 may pause the continuous or first audio 108. The second application 106 may generate the interruptive audio 110 during the pausing of the continuous or first audio 108. After the interruptive audio 110 has completed, the first application 104 may resume playing and/or outputting the continuous or first audio 108. However, simply resuming at the point where the first application 104 paused playing and/or outputting the continuous or first audio 108 may result in a loss of context or continuity of the continuous or first audio 108.
- The first application 104 may identify a portion of the continuous or first audio 108 that was previously played or outputted before the continuous or first audio 108 was paused.
- The first application 104 may identify the portion by determining a time, such as five or ten seconds, which should be replayed, or by identifying a complete sentence or a scene which was interrupted and which should be replayed, according to example embodiments.
- The first application 104 may replay and/or re-output or regenerate a portion of the continuous or first audio 108 which was output or generated before the pausing or interruption, and/or resume playing of the continuous or first audio 108 so that the portion of the continuous or first audio 108 that was previously played is replayed, regaining the context of the continuous or first audio 108.
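Identifying the replay portion by "a complete sentence which was interrupted" reduces, given sentence-start tags, to finding the latest tag at or before the pause point. A minimal sketch, assuming the tags are available as a sorted list of timestamps (the function name and representation are illustrative, not from the source):

```python
import bisect

def replay_point(sentence_tags, pause_position):
    """Return the start time of the sentence that was interrupted.

    sentence_tags: sorted timestamps (seconds) where sentences begin,
    assumed to have been tagged in or alongside the audio file.
    pause_position: seconds into the audio at which playback was paused.
    """
    # Index of the last tag that is <= pause_position.
    i = bisect.bisect_right(sentence_tags, pause_position) - 1
    # If paused before the first tag, replay from the very beginning.
    return sentence_tags[max(i, 0)]
```

For example, with sentences tagged at 0, 4.2, 9.5, and 15.0 seconds, a pause at 11.3 seconds resumes from 9.5, replaying the interrupted sentence from its start.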
- FIG. 2 is a vertical-time sequence diagram showing actions performed by, and messages exchanged between, the first application 104 , the operating system 112 , and the second application 106 .
- The first application 104 may be playing and/or outputting continuous or first audio 108A.
- The continuous or first audio 108A may include, for example, a podcast, an audio book, music, or the sound portion of a movie.
- The second application 106 may receive input 202 or a prompt.
- The input 202 or prompt may include, for example, a notification of a telephone call received via the antenna 120, an input from a user of the computing device 100 (which may have been received via the visual output/tactile input 118, the keyboard input 122, or the audio input 124), a notification that the time of a calendar alert has come, or a determination by a turn-by-turn navigation program that a turn instruction should be provided to the user (which may be based on information included in the turn-by-turn navigation program combined with global positioning system (GPS) information).
- The second application 106 may determine that the second application 106 should provide, generate, play, or output interruptive audio 110. Based on this determination, the second application 106 may send an interrupt or interruptive notification message 204 to the operating system 112.
- The interrupt or interruptive notification message 204 may be a notification understood by the operating system 112 to indicate that the second application 106 needs to provide interruptive audio output 110.
- The operating system 112 may receive the interrupt or interruptive notification message 204 from the second application 106.
- The operating system 112 may understand that the interrupt or interruptive notification message 204 indicates that the second application 106 needs to provide, generate, play, or output interruptive audio 110.
- The operating system 112 may have stored or determined that the first application 104 is providing the continuous or first audio output 108A.
- The operating system 112 may, for example, have previously pushed the first application 104 onto the top of a "stack," giving the first application 104 priority to the audio output 102.
- The operating system 112 may push the second application 106 onto the top of the stack, and/or send a temporary interruption message 206 to the first application 104.
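The "stack" bookkeeping described above can be modeled as follows. This is a toy sketch: the class name and message tuples are assumptions, and a real operating system's audio-focus handling would be considerably richer.

```python
class AudioFocusStack:
    """Toy model of the operating system's audio-output 'stack'."""

    def __init__(self):
        self._stack = []
        self.messages = []  # messages "sent" to applications, in order

    def push(self, app):
        """Give `app` priority to the audio output, interrupting the current holder."""
        if self._stack:
            # Ask the current top-of-stack application to pause its audio.
            self.messages.append(("temporary_interruption", self._stack[-1]))
        self._stack.append(app)

    def pop(self):
        """Remove the top application and tell the one underneath it may resume."""
        done = self._stack.pop()
        if self._stack:
            self.messages.append(("resume", self._stack[-1]))
        return done

    def top(self):
        return self._stack[-1] if self._stack else None
```

Pushing the second application sends the temporary interruption to the first; popping it after the interruptive audio completes sends the resume message back.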
- The first application 104 may receive the temporary interruption message 206 from the operating system 112.
- The first application 104 may understand that the temporary interruption message 206 indicates that another application, such as the second application 106, needs to play and/or output interruptive audio 110.
- The first application 104 may pause the continuous or first audio (208).
- The first application 104 may identify a portion of the continuous or first audio 108A that was previously played before the pausing and which should be replayed.
- The identifying of the portion to replay by the first application 104 may include, for example, pausing the continuous or first audio and/or setting a pointer to the point at which the continuous or first audio 108A was stopped (such as when the first application 104 received the temporary interruption message 206) and/or to a tag preceding the point at which the continuous or first audio 108A was paused.
- Examples of storing the point at which the continuous or first audio 108A was paused, or the location of the tag preceding the point at which the continuous or first audio 108A was paused, include storing a bookmark, time stamp, frame count, position, or sample number of the audio file corresponding to the continuous or first audio output 108A.
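The bookmark forms listed above (time stamp, frame count, sample number, tag location) are interconvertible given the audio format's sample rate and frame size. A small sketch of such a record; the class, field names, and helper are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class PauseBookmark:
    """Hypothetical record of where continuous audio was paused."""
    time_stamp: float    # seconds into the audio file
    frame_count: int     # whole frames rendered before the pause
    sample_number: int   # samples rendered before the pause
    tag_position: float  # most recent tag (e.g., sentence start) preceding the pause

def bookmark_at(time_stamp, sample_rate, frames_per_buffer, tag_position):
    """Derive the equivalent frame and sample bookmarks from a time stamp."""
    samples = int(time_stamp * sample_rate)
    return PauseBookmark(
        time_stamp=time_stamp,
        frame_count=samples // frames_per_buffer,
        sample_number=samples,
        tag_position=tag_position,
    )
```

Any one of the stored forms suffices to relocate the pause point; which one an application keeps would depend on how it indexes its audio data.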
- The computing device 100 and/or first application 104 may store and/or buffer the streamed data during the pausing for later output, or may send a message to a sending device, such as a server, to pause and/or back up or rewind the streamed content, according to example implementations.
- The first application 104 may continue generating the continuous or first audio output 108A for some period of time to enable buffering and subsequent continuation of the streaming, or may buffer during the entire time during which the continuous or first audio output 108A is paused, according to example implementations.
- The continuous or first audio 108A may, for example, have been tagged with sentence beginnings, and/or the first application 104 may determine a tag of the continuous or first audio 108A.
- The tag may have been included in the file from which the first application 104 generated the continuous or first audio output 108A, or the first application 104 may find a boundary and/or endpoint of the continuous or first audio 108A, such as based on pauses or audio energy, sentence boundary detection, end points, speech recognition, a text transcript (such as alignment or tagging of the continuous or first audio output 108A with the transcript), content indexing, or tokenization (such as by detecting word boundaries).
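The simplest of the boundary cues listed above is audio energy: a run of low-energy samples suggests a pause between sentences. A deliberately crude sketch of that idea (function name, framing, and threshold are assumptions; real sentence-boundary detection would combine this with speech recognition or transcript alignment):

```python
def find_last_silence(samples, frame_size, threshold):
    """Return the sample index where the last low-energy frame begins.

    samples: a sequence of audio sample values (e.g., floats in [-1, 1]).
    frame_size: number of samples per analysis frame.
    threshold: mean-square energy below which a frame counts as silence.
    """
    last = 0
    for start in range(0, len(samples) - frame_size + 1, frame_size):
        frame = samples[start:start + frame_size]
        energy = sum(s * s for s in frame) / frame_size
        if energy < threshold:
            last = start  # remember the most recent silent frame
    return last
```

The returned index could then serve as the tag to which playback rewinds on resume.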
- The first application 104 may tag the output (which may include both the video output and the continuous or first audio output 108A) based on a beginning of a scene, which may, for example, be based on a change in the video output.
- The first application 104 may point to and/or store that tag point.
- If the first application 104 and/or the computing device 100 has stored the entire contents of the continuous or first audio output 108 which will be played and/or outputted, the first application 104 may store the point at which the continuous or first audio output will resume based on the tag.
- The data which the first application 104 will read to generate the continuous or first audio output 108A may, for example, be maintained in a circular buffer (which may be included in the memory 116), allowing the first application 104 to return to the tag or other point at which the first application 104 will resume the continuous or first audio output 108A.
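A circular buffer keeps only the most recent audio data, which is enough to rewind to a nearby tag after an interruption. A minimal sketch (class name and interface are assumptions):

```python
class CircularAudioBuffer:
    """Minimal circular buffer over the most recently written audio samples."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._data = [0] * capacity
        self._written = 0  # total samples ever written (absolute position)

    def write(self, samples):
        """Append samples, overwriting the oldest data once the buffer is full."""
        for s in samples:
            self._data[self._written % self.capacity] = s
            self._written += 1

    def read_from(self, position):
        """Read from an absolute sample position up to the write head."""
        if position < self._written - self.capacity:
            raise ValueError("position has already been overwritten")
        return [self._data[i % self.capacity]
                for i in range(position, self._written)]
```

On resume, the application would call `read_from` with the tag's sample position; a rewind is only possible while the tag still lies within the buffer's capacity.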
- The first application 104 and/or the computing device 100 may store the streamed data for later output.
- The first application 104 may pause the continuous or first audio 108A by immediately ceasing play and/or output of the continuous or first audio 108A, or may gradually reduce the volume of the continuous or first audio output, such as by fading the continuous or first audio output 108A by gradually lowering its volume. This may provide a more gradual and/or seamless transition from the continuous or first audio 108A to the interruptive audio 110.
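The fade described above amounts to stepping the volume down from full to zero over some number of buffers. A sketch of the volume schedule only (the step count is an assumption; a real implementation would multiply each audio buffer's samples by the current level):

```python
def fade_out(volume_steps, start_volume=1.0):
    """Yield gradually decreasing volume levels for a fade-out.

    volume_steps: how many steps to take from start_volume down to 0.0.
    """
    for i in range(volume_steps, -1, -1):
        yield start_volume * i / volume_steps
```

A fade-in on resume is the same schedule reversed, stepping from zero back up to full volume.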
- The second application 106 may play and/or output the interruptive audio output 110.
- The second application 106 may play and/or output the interruptive audio 110, such as by providing the audio notification of the email, text message, or calendar alert, by providing the speech output or turn direction of the navigation program, and/or by providing the ring and the speech output of a telephone call.
- The second application 106 may complete the interruptive audio 110, such as the email notification and/or text-to-speech output of the email notification, text message, or calendar output, by providing the full turn direction of the turn-by-turn navigation program, and/or by completing the telephone call.
- The second application 106 may send an interruption complete message 210 to the operating system 112.
- The interruption complete message 210 may inform the operating system 112 that the second application 106 has completed playing and/or outputting the interruptive audio output 110, and the operating system 112 may pop the second application 106 off the top of the stack and/or reallocate the audio output resources to the first application 104.
- The first application 104 may now be back on top of the stack.
- The operating system 112 may have stored and/or determined which application, such as the first application 104, had its continuous or first audio output 108 paused during the interruptive audio output 110, such as by checking the top of the stack.
- The operating system 112 may, in response to receiving the interruption complete message 210 and determining that the first application 104 was the application which had its continuous or first audio output 108 paused, send a resume message 212 to the first application 104.
- The first application 104 may receive the resume message 212 from the operating system 112.
- The first application 104 may determine and/or know that the resume message 212 provides permission and/or authorization for the first application 104 to resume playing and/or outputting continuous or first audio 108B.
- The first application 104 may respond to receiving the resume message 212 by resuming playing and/or outputting continuous or first audio 108B.
- The first application 104 may resume playing and/or outputting the continuous or first audio 108B by fading the continuous or first audio 108B in, gradually increasing the volume until returning to full volume, or may immediately play and/or output or generate the continuous or first audio 108B at full volume.
- The resuming may include replaying and/or re-outputting a portion of the continuous or first audio 108A, 108B which was played and/or outputted by the first application 104 before the pausing of the continuous or first audio (208).
- The first application 104 may, for example, replay and/or re-output the portion of the continuous or first audio by playing and/or outputting a complete sentence or a complete portion beginning with the most recent tag of a file read by the first application 104.
- Replaying and/or re-outputting the portion of the continuous or first audio 108 which was generated, played, or output by the first application 104 may provide context and continuity to a user of the computing device 100.
- The first application 104 may replay and/or re-output the same entire sentence and/or some preceding sentences, giving the user or listener of the computing device 100 the context of the continuous or first audio 108B.
- The first application 104 may play and/or output continuous or first audio 108B which was previously received via the stream (which was stored in the buffer and/or memory 116 and subsequently read out of the buffer and/or memory 116), and the continuous or first audio 108B may be delayed from the received stream, according to an example implementation.
- The process described with reference to FIG. 2 may be performed automatically, and/or without user interaction or instruction.
- The first application 104 may be playing and/or outputting the continuous or first audio 108A, such as a podcast, when the input 202, such as a phone call, prompts the second application 106 to interrupt the continuous or first audio 108A of the first application 104 (such as by the interrupt or interruptive notification message 204 and temporary interruption message 206).
- The first application 104 may pause the continuous or first audio (208), such as by pausing the podcast, while the second application 106 plays and/or outputs the interruptive audio 110, such as the ring or ringtone associated with the telephone call.
- If the user does not answer the call, the interruptive audio 110 may be complete when the second application 106 has finished ringing, and the first application 104 may resume the continuous or first audio 108B (such as based on the interruption complete message 210 and the resume message 212). If the user does answer the call, such as by providing an input into the computing device 100 instructing the computing device 100 to take the call, then the interruptive audio 110 may continue until the user instructs the computing device 100 to finish the call or the computing device 100 receives input from another source (such as a wireless signal) that the call is complete. In other examples of the second application 106, such as calendar alerts or turn-by-turn navigation programs, the second application 106 may generate and finish the interruptive audio 110 without user input.
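The FIG. 2 exchange (messages 204, 206, 210, 212 and the pause at 208) can be traced with two toy objects. The class names, method names, and state strings are illustrative labels keyed to the reference numerals in the text, not an actual implementation:

```python
class FirstApp:
    """Toy stand-in for the first application 104."""

    def __init__(self):
        self.state = "playing 108A"

    def temporary_interruption_206(self):
        self.state = "paused (208)"

    def resume_212(self):
        self.state = "replaying portion, then 108B"

class OperatingSystem:
    """Toy stand-in for the operating system 112 and its stack."""

    def __init__(self, first_app):
        self.first_app = first_app
        self.stack = [first_app]

    def interruptive_notification_204(self, second_app):
        # Pause the current holder, then push the interrupting application.
        self.first_app.temporary_interruption_206()
        self.stack.append(second_app)

    def interruption_complete_210(self):
        # Pop the interrupting application and let the first one resume.
        self.stack.pop()
        self.first_app.resume_212()
```

Walking a phone-call interruption through these objects reproduces the pause-then-replay-then-resume ordering of the sequence diagram.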
- FIG. 3 is a vertical-time sequence diagram showing messages sent between, and outputs provided by, a first device 302 and a second device 304 .
- the first device 302 may be a device which outputs interruptive audio and may include, for example, a cellular phone, a smartphone, a PDA, an iPhone, a Droid smartphone, or a turn-by-turn navigation system.
- the second device 304 may include a device which plays and/or outputs continuous or first audio, such as, for example, a music player or MP3 player.
- the first device 302 and second device 304 may, for example, both be in an automobile.
- the second device 304 may include a car stereo system which plays music or outputs podcasts or audio books.
- the first device 302 may be in wireless communication with the second device 304 such as via, for example, an IEEE 802.15 Bluetooth connection, an 802.11 IEEE wireless local area network (WLAN) connection, or other wireless connection.
- the first device 302 may be capable of sending instructions to the second device 304 via the wireless connection.
- the first device 302 may also be coupled to the second device 304 via a wired or guided connection, and may be capable of sending instructions to the second device 304 via the wired or guided connection.
- the second device 304 may play and/or output continuous or first audio 306 A.
- the second device 304 may play and/or output the continuous or first audio 306 A which may include, for example, a podcast or an audio book.
- the first device 302 may receive a prompt 308 .
- the prompt 308 may include, for example, a notification of an email, text message, calendar alert, or phone call, or a determination that a turn direction should be provided by a turn-by-turn navigation system.
- the prompt 308 may be based on a wireless signal received by the first device 302 , based on a user input, or based on timing determinations and/or location determinations by the first device 302 .
- the first device 302 may send a pause message 310 to the second device 304 .
- the pause message 310 may instruct the second device to pause playing and/or outputting the continuous or first audio 306 A.
- the pause message 310 may, for example, include a pause message by a remote control interface for the second device 304 .
- the second device 304 may pause playing and/or outputting the continuous or first audio 306 A.
- the first device 302 may play and/or output interruptive audio 312 .
- the interruptive audio 312 may include, for example, a notification of an email, text message, calendar alert, or phone call, as well as the audio output of a phone call and/or a turn notification by a turn-by-turn navigation system.
- the first device 302 may send a resume message 314 to the second device 304 .
- the resume message 314 may instruct the second device 304 to resume playing and/or outputting the continuous or first audio 306 B.
- the resume message 314 may, for example, include a play or un-pause instruction by a remote control interface for the second device 304 .
- the second device 304 may receive the resume message 314 and respond to receiving the resume message 314 by resuming playing and/or outputting of the continuous or first audio 306 B. In resuming playing and/or outputting the continuous or first audio 306 B, the second device 304 may replay and/or re-output a portion of the continuous or first audio 306 A which was played and/or output before receiving the pause message 310 . Replaying and/or re-outputting the portion of the continuous or first audio 306 A may provide continuity and/or context in the outputting of the continuous or first audio 306 A, 306 B.
- the pause message 310 and/or the resume message 314 may have included an instruction for the second device 304 to rewind the continuous or first audio 306 A and/or to move to a previous portion where the continuous or first audio 306 A will resume being played and/or outputted.
- the rewind may include rewinding a predetermined period of time such as one second, five seconds, or ten seconds, and/or may be based on a tag.
- the first device 302 may have listened to the continuous or first audio output 306 A and found a tag, such as a beginning of a sentence or a reduced volume, at which the second device 304 should resume playing and/or outputting the continuous or first audio 306 B.
- the first device 302 may determine a time period before the pause message 310 at which time the tag began, and may instruct the second device 304 to rewind that period of time which will cause the second device 304 to resume playing and/or outputting the continuous or first audio 306 B at the beginning or at the determined tag point, according to an example implementation.
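The tag-based rewind determination described above might be sketched as follows; the function name, the tag list, and the default rewind value are assumptions for illustration, not details of the method.

```python
# Sketch of choosing a rewind offset: rewind to the most recent tag (e.g.,
# a sentence beginning or reduced-volume point) found before the pause, or
# fall back to a predetermined period if no tag precedes the pause.

def rewind_offset(tag_times, pause_time, default=5.0):
    """Seconds to rewind so playback resumes at the most recent tag."""
    earlier = [t for t in tag_times if t <= pause_time]
    if earlier:
        return pause_time - max(earlier)
    return default

# Tags found at 10.0 s, 31.5 s, and 44.8 s; playback paused at 47.2 s.
offset = rewind_offset([10.0, 31.5, 44.8], 47.2)
# offset is about 2.4 s, so playback resumes at the tag point at 44.8 s.
```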
- the first device 302 and/or second device 304 may include any or all of the components as the computing device 100 shown in FIG. 1 , according to example implementations.
- FIG. 4 is a flowchart of a method 400 according to an example implementation.
- the method 400 may include playing and/or outputting, by a computing device 100 , continuous or first audio 108 generated by a first application 104 ( 402 ), determining that the continuous or first audio 108 generated by the first application 104 should be interrupted based on an interrupt or interruptive notification 204 associated with a second application ( 404 ), pausing the continuous or first audio 108 generated by the first application 104 based on the determining ( 406 ), playing and/or outputting interruptive audio 110 generated by the second application 106 during the pausing of the continuous or first audio 108 ( 408 ), identifying a portion of the continuous or first audio 108 that was previously played before the first audio was paused ( 410 ), and resuming the playing and/or outputting continuous or first audio 108 generated by the first application 104 , the resuming including replaying and/or re-outputting a portion of the continuous or first audio 108 which was generated by the first application 104 before the pausing ( 412 ).
- the playing and/or outputting continuous or first audio 108 may include playing and/or outputting continuous or first audio 108 and continuous or first video generated by the first application 104 .
- the resuming may include resuming the playing and/or outputting continuous or first audio 108 and continuous or first video generated by the first application 104 , the resuming including replaying and/or re-outputting the portion of the continuous or first audio 108 and a portion of the continuous or first video which were generated by the first application 104 before the pausing.
- the interrupt or interruptive notification 204 associated with the second application 106 may be based on a user input received by the computing device 100 .
- the interrupt or interruptive notification 204 associated with the second application 106 may be based on the computing device receiving a wireless signal.
- the second application 106 may include a phone or telephone application.
- the second application 106 may include a navigation application.
- the pausing ( 406 ) may include gradually reducing a volume of the playing and/or outputting the continuous or first audio 108 generated by the first application 104 .
- the resuming may include gradually increasing the volume of the playing and/or outputting the continuous or first audio 108 generated by the first application 104 .
- the pausing ( 406 ) may include storing streamed data read by the first application 104 while pausing the playing and/or outputting continuous or first audio 108 generated by the first application 104 , and the resuming may include playing and/or outputting continuous or first audio 108 based on the stored streamed data.
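The gradual volume reduction and restoration might be realized as a linear gain ramp; the step count and the linear shape used here are illustrative assumptions, not requirements of the description.

```python
# Sketch of the fade-out applied while pausing and the fade-in applied
# while resuming: per-step gain values for a linear volume ramp.

def fade_levels(start, end, steps):
    """Linear gain ramp from `start` to `end` over `steps` intervals."""
    return [start + (end - start) * i / steps for i in range(steps + 1)]

fade_out = fade_levels(1.0, 0.0, 4)  # [1.0, 0.75, 0.5, 0.25, 0.0]
fade_in = fade_levels(0.0, 1.0, 4)   # [0.0, 0.25, 0.5, 0.75, 1.0]
```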
- the resuming ( 412 ) may include resuming the playing and/or outputting continuous or first audio 108 generated by the first application 104 , the resuming including replaying and/or re-outputting a complete sentence which was interrupted by the pausing ( 406 ).
- the resuming ( 412 ) may include resuming the playing and/or outputting continuous or first audio 108 generated by the first application 104 , the resuming including replaying and/or re-outputting a complete sentence which was interrupted by the pausing ( 406 ), the complete sentence being determined by content indexing.
- the resuming ( 412 ) may include resuming the playing and/or outputting continuous or first audio 108 generated by the first application 104 , the resuming beginning at a most recent tag in a file read by the first application 104 .
- the playing and/or outputting ( 402 ) may include playing and/or outputting continuous or first audio and continuous or first video generated by the first application 104 .
- the resuming ( 412 ) may include resuming the playing and/or outputting continuous or first audio 108 and continuous or first video generated by the first application 104 , the resuming including replaying and/or re-outputting the portion of the continuous or first audio and a portion of the continuous or first video at a beginning of a most recent scene which was generated by the first application 104 before the pausing ( 406 ).
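The sentence-level replay determined by content indexing might work as sketched below; the index structure mapping sentence start times to text is an assumption for illustration.

```python
# Sketch of content indexing for sentence replay: find the start time of
# the sentence that was in progress when playback was paused, so the
# complete interrupted sentence can be replayed on resuming.

def sentence_start(index, pause_time):
    """Return the start time of the sentence playing at `pause_time`."""
    starts = sorted(index)
    chosen = starts[0]
    for t in starts:
        if t <= pause_time:
            chosen = t
        else:
            break
    return chosen

index = {0.0: "Welcome to the program.",
         3.5: "Today's topic is audio interruptions.",
         9.0: "First, consider an incoming call."}
start = sentence_start(index, 11.2)  # paused midway through the third sentence
# start is 9.0, so resuming replays that sentence from its beginning.
```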
- FIG. 5 is a flowchart of a method 500 according to another example implementation.
- the method may include determining, by a first computing device 302 , to temporarily interrupt playing and/or outputting of continuous or first audio 306 A by a second computing device 304 based on receiving a prompt ( 502 ), sending a pause message 310 to the second computing device 304 based on the determining, the pause message 310 instructing the second computing device 304 to pause playing and/or outputting the continuous or first audio ( 504 ), playing and/or outputting interruptive audio 312 after sending the pause message ( 506 ), and sending a resume message 314 to the second computing device 304 after playing and/or outputting the interruptive audio 312 , the resume message 314 instructing the second computing device 304 to resume playing and/or outputting the continuous or first audio 306 B ( 508 ).
- the pause message 310 may instruct the second computing device 304 to pause the playing and/or outputting and rewind the continuous or first audio 306 A.
- the resume message 314 may instruct the second computing device 304 to rewind and resume the playing and/or outputting the continuous or first audio 306 B.
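The two-device exchange of FIG. 5 can be sketched as follows; the message strings, class names, and rewind value are illustrative assumptions rather than details of the method.

```python
# Sketch of method 500: a first device (e.g., a phone or navigation unit)
# sends a pause message, plays its interruptive audio, then sends a resume
# message (optionally with a rewind) to a second device (e.g., a car stereo).

class SecondDevice:
    def __init__(self):
        self.playing = True
        self.received = []

    def receive(self, message):
        self.received.append(message)
        if message == "pause":
            self.playing = False
        elif message.startswith("resume"):
            self.playing = True

class FirstDevice:
    def __init__(self, peer):
        self.peer = peer

    def handle_prompt(self, interruptive_audio):
        self.peer.receive("pause")            # pause message 310
        self.play(interruptive_audio)         # interruptive audio 312
        self.peer.receive("resume rewind=5")  # resume message 314

    def play(self, audio):
        pass  # placeholder for actually outputting the interruptive audio

stereo = SecondDevice()
phone = FirstDevice(stereo)
phone.handle_prompt("Turn left in one hundred feet")
```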
- FIG. 6 is a block diagram showing example or representative computing devices and associated elements that may be used to implement the systems and methods of FIGS. 1-5 .
- FIG. 6 shows an example of a generic computer device 600 and a generic mobile computer device 650 , which may be used with the techniques described here.
- Computing device 600 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers.
- Computing device 650 is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, and other similar computing devices.
- the components shown here, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed in this document.
- Computing device 600 includes a processor 602 , memory 604 , a storage device 606 , a high-speed interface 608 connecting to memory 604 and high-speed expansion ports 610 , and a low speed interface 612 connecting to low speed bus 614 and storage device 606 .
- Each of the components 602 , 604 , 606 , 608 , 610 , and 612 is interconnected using various busses, and may be mounted on a common motherboard or in other manners as appropriate.
- the processor 602 can process instructions for execution within the computing device 600 , including instructions stored in the memory 604 or on the storage device 606 to display graphical information for a GUI on an external input/output device, such as display 616 coupled to high speed interface 608 .
- multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory.
- multiple computing devices 600 may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system).
- the memory 604 stores information within the computing device 600 .
- the memory 604 is a volatile memory unit or units.
- the memory 604 is a non-volatile memory unit or units.
- the memory 604 may also be another form of computer-readable medium, such as a magnetic or optical disk.
- the storage device 606 is capable of providing mass storage for the computing device 600 .
- the storage device 606 may be or contain a computer-readable medium, such as a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations.
- a computer program product can be tangibly embodied in an information carrier.
- the computer program product may also contain instructions that, when executed, perform one or more methods, such as those described above.
- the information carrier is a computer- or machine-readable medium, such as the memory 604 , the storage device 606 , or memory on processor 602 .
- the high speed controller 608 manages bandwidth-intensive operations for the computing device 600 , while the low speed controller 612 manages less bandwidth-intensive operations. Such allocation of functions is exemplary only.
- the high-speed controller 608 is coupled to memory 604 , display 616 (e.g., through a graphics processor or accelerator), and to high-speed expansion ports 610 , which may accept various expansion cards (not shown).
- low-speed controller 612 is coupled to storage device 606 and low-speed expansion port 614 .
- the low-speed expansion port, which may include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet), may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter.
- the computing device 600 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server 620 , or multiple times in a group of such servers. It may also be implemented as part of a rack server system 624 . In addition, it may be implemented in a personal computer such as a laptop computer 622 . Alternatively, components from computing device 600 may be combined with other components in a mobile device (not shown), such as device 650 . Each of such devices may contain one or more of computing device 600 , 650 , and an entire system may be made up of multiple computing devices 600 , 650 communicating with each other.
- Computing device 650 includes a processor 652 , memory 664 , an input/output device such as a display 654 , a communication interface 666 , and a transceiver 668 , among other components.
- the device 650 may also be provided with a storage device, such as a microdrive or other device, to provide additional storage.
- Each of the components 650 , 652 , 664 , 654 , 666 , and 668 is interconnected using various buses, and several of the components may be mounted on a common motherboard or in other manners as appropriate.
- the processor 652 can execute instructions within the computing device 650 , including instructions stored in the memory 664 .
- the processor may be implemented as a chipset of chips that include separate and multiple analog and digital processors.
- the processor may provide, for example, for coordination of the other components of the device 650 , such as control of user interfaces, applications run by device 650 , and wireless communication by device 650 .
- Processor 652 may communicate with a user through control interface 658 and display interface 656 coupled to a display 654 .
- the display 654 may be, for example, a TFT LCD (Thin-Film-Transistor Liquid Crystal Display) or an OLED (Organic Light Emitting Diode) display, or other appropriate display technology.
- the display interface 656 may comprise appropriate circuitry for driving the display 654 to present graphical and other information to a user.
- the control interface 658 may receive commands from a user and convert them for submission to the processor 652 .
- an external interface 662 may be provided in communication with processor 652 , so as to enable near area communication of device 650 with other devices. External interface 662 may provide, for example, for wired communication in some implementations, or for wireless communication in other implementations, and multiple interfaces may also be used.
- the memory 664 stores information within the computing device 650 .
- the memory 664 can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units.
- Expansion memory 674 may also be provided and connected to device 650 through expansion interface 672 , which may include, for example, a SIMM (Single In Line Memory Module) card interface.
- expansion memory 674 may provide extra storage space for device 650 , or may also store applications or other information for device 650 .
- expansion memory 674 may include instructions to carry out or supplement the processes described above, and may include secure information also.
- expansion memory 674 may be provided as a security module for device 650 , and may be programmed with instructions that permit secure use of device 650 .
- secure applications may be provided via the SIMM cards, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner.
- the memory may include, for example, flash memory and/or NVRAM memory, as discussed below.
- a computer program product is tangibly embodied in an information carrier.
- the computer program product contains instructions that, when executed, perform one or more methods, such as those described above.
- the information carrier is a computer- or machine-readable medium, such as the memory 664 , expansion memory 674 , or memory on processor 652 , that may be received, for example, over transceiver 668 or external interface 662 .
- Device 650 may communicate wirelessly through communication interface 666 , which may include digital signal processing circuitry where necessary. Communication interface 666 may provide for communications under various modes or protocols, such as GSM voice calls, SMS, EMS, or MMS messaging, CDMA, TDMA, PDC, WCDMA, CDMA2000, or GPRS, among others. Such communication may occur, for example, through radio-frequency transceiver 668 . In addition, short-range communication may occur, such as using a Bluetooth, WiFi, or other such transceiver (not shown). In addition, GPS (Global Positioning system) receiver module 670 may provide additional navigation- and location-related wireless data to device 650 , which may be used as appropriate by applications running on device 650 .
- Device 650 may also communicate audibly using audio codec 660 , which may receive spoken information from a user and convert it to usable digital information. Audio codec 660 may likewise generate audible sound for a user, such as through a speaker, e.g., in a handset of device 650 . Such sound may include sound from voice telephone calls, may include recorded sound (e.g., voice messages, music files, etc.) and may also include sound generated by applications operating on device 650 .
- the computing device 650 may be implemented in a number of different forms, as shown in FIG. 6 .
- it may be implemented as a cellular telephone 680 . It may also be implemented as part of a smart phone 682 , personal digital assistant, or other similar mobile device.
- Implementations of the various techniques described herein may be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. Implementations may be implemented as a computer program product, i.e., a computer program tangibly embodied in an information carrier, e.g., in a machine-readable storage device, for execution by, or to control the operation of, data processing apparatus, e.g., a programmable processor, a computer, or multiple computers.
- a computer program, such as the computer program(s) described above, can be written in any form of programming language, including compiled or interpreted languages, and can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
- a computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network.
- Method steps may be performed by one or more programmable processors executing a computer program to perform functions by operating on input data and generating output. Method steps also may be performed by, and an apparatus may be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
- processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
- a processor will receive instructions and data from a read-only memory or a random access memory or both.
- Elements of a computer may include at least one processor for executing instructions and one or more memory devices for storing instructions and data.
- a computer also may include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks.
- Information carriers suitable for embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
- the processor and the memory may be supplemented by, or incorporated in special purpose logic circuitry.
- implementations may be implemented on a computer having a display device, e.g., a cathode ray tube (CRT) or liquid crystal display (LCD) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer.
- Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
- Implementations may be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation, or any combination of such back-end, middleware, or front-end components.
- Components may be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (LAN) and a wide area network (WAN), e.g., the Internet.
Abstract
According to an example implementation, a method may include playing, by a computing device, first audio generated by a first application, determining that the first audio generated by the first application should be interrupted based on a notification associated with a second application, pausing the first audio generated by the first application, playing interruptive audio generated by the second application during the pausing of the first audio, identifying a portion of the first audio that was previously played before the first audio was paused, and resuming the playing of the first audio so that the portion of the first audio that was previously played is replayed.
Description
- This application is a continuation of U.S. patent application Ser. No. 12/793,065, filed Jun. 3, 2010, entitled "CONTINUOUS AUDIO INTERACTION WITH INTERRUPTIVE AUDIO," which is incorporated by reference herein in its entirety.
- This description relates to audio output devices.
- Devices may output or play continuous audio, such as podcasts, audio books, movies, or music. This continuous audio may be interrupted by time-sensitive or interruptive audio applications of the devices, such as incoming telephone calls. Transferring from the continuous audio to the interruptive audio and back to the continuous audio may lose the context or continuity of the continuous audio.
- According to one general aspect, a method may include playing, by a computing device, continuous or first audio generated by a first application; determining that the first audio generated by the first application should be interrupted based on an interrupt or interruptive notification associated with a second application; pausing the continuous or first audio generated by the first application; playing interruptive audio generated by the second application during the pausing of the continuous or first audio; identifying a portion of the continuous or first audio that was previously played before the first audio was paused; and resuming the playing continuous or first audio so that the portion of the first audio that was previously played is replayed.
- According to another general aspect, an apparatus may include at least one processor and at least one memory. The at least one memory may include computer executable code that, when executed by the at least one processor, is configured to cause the apparatus to play continuous or first audio generated by a first application; determine that the continuous or first audio generated by the first application should be interrupted based on an interrupt or interruptive notification associated with a second application; pause the playing continuous or first audio generated by the first application; play interruptive audio generated by the second application during the pausing of the continuous or first audio; identify a portion of the continuous or first audio that was previously played before the continuous or first audio was paused; and resume the playing of the continuous or first audio so that the portion of the continuous or first audio that was previously played is replayed.
- According to another general aspect, a non-transitory computer-readable storage medium may include computer-executable code stored thereon that, when executed by a processor, is configured to cause an application executed by the processor to play continuous or first audio; receive a temporary interrupt or interruption message from an operating system executed by the processor; pause the playing of the continuous or first audio based on receiving the temporary interrupt or interruption message; receive a resume message from the operating system; identify a portion of the continuous or first audio that was previously played before the continuous or first audio was paused; and resume playing the continuous or first audio based on receiving the resume message, the resuming including replaying the portion of the continuous or first audio which was played before the pause so that the portion of the first audio that was previously played is replayed.
- According to another general aspect, a method may include determining, by a first computing device, to interrupt playing of continuous or first audio by a second computing device based on receiving a prompt; sending a pause message to the second computing device based on the determining, the pause message instructing the second computing device to pause playing the continuous or first audio; playing interruptive audio after sending the pause message; and sending a resume message to the second computing device after playing the interruptive audio, the resume message instructing the second computing device to resume playing the continuous or first audio.
- According to another general aspect, an apparatus may include at least one processor and at least one memory. The at least one memory may include computer executable code that, when executed by the at least one processor, is configured to cause the apparatus to determine to interrupt playing of continuous or first audio by a computing device based on receiving a prompt; send a pause message to the computing device based on the determining, the pause message instructing the computing device to pause playing the continuous or first audio; play interruptive audio after sending the pause message; and send a resume message to the computing device after playing the interruptive audio, the resume message instructing the computing device to resume playing the continuous or first audio.
- According to another general aspect, a non-transitory computer-readable storage medium may include computer-executable code stored thereon that, when executed by a processor, is configured to cause an application executed by the processor to determine to interrupt playing of continuous or first audio by a computing device based on receiving a prompt; send a pause message to the computing device based on the determining, the pause message instructing the computing device to pause playing the continuous or first audio; play interruptive audio after sending the pause message; and send a resume message to the computing device after playing the interruptive audio, the resume message instructing the computing device to resume playing the continuous or first audio.
- The details of one or more implementations are set forth in the accompanying drawings and the description below. Other features will be apparent from the description and drawings, and from the claims.
FIG. 1 is an exemplary block diagram of a computing device according to an example implementation. -
FIG. 2 is a vertical-time sequence diagram showing actions performed by, and messages exchanged between, a first application, an operating system, and a second application of the computing device shown in FIG. 1 . -
FIG. 3 is a vertical-time sequence diagram showing messages sent between, and outputs provided by, a first device and a second device. -
FIG. 4 is a flowchart of a method according to an example implementation. -
FIG. 5 is a flowchart of a method according to another example implementation. -
FIG. 6 is a block diagram showing example or representative computing devices and associated elements that may be used to implement the systems and methods of FIGS. 1-5 . -
FIG. 1 is an exemplary block diagram of a computing device 100 according to an example implementation. The computing device 100 may include, for example, a smart phone, a personal digital assistant (PDA), a cellular phone with computing features and/or multiple application features, an iPhone®, or a Droid® smartphone, according to example implementations. The computing device 100 may be capable of running or performing multiple software applications at the same time. The computing device 100 may perform more than one application at a given time; however, the computing device 100 may have to allocate shared resources between the multiple applications. - The
computing device 100 may include an audio output 102 . The audio output 102 may include, for example, a speaker, and may play audio and/or provide different kinds of audio output based on applications executed by the computing device 100 . The audio output 102 may, for example, play or provide continuous or first audio output, which is non-time sensitive, but which provides audio output which is listened to by a user of the computing device 100 over an extended period of time. The continuous or first audio played or output by the audio output 102 may include, for example, playing podcasts, the sound portions of movies, audio books, or music. The continuous or first audio output may be based on an application executed by the computing device 100 . - The
audio output 102 also may play and/or output interruptive audio. Interruptive audio may include time-sensitive audio output which is listened to for a shorter period of time. The interruptive audio may include, for example, a notification of emails, text messages, or calendar alerts, the speech output of a turn-by-turn navigation application, the notification of a phone call, or the voice output of a phone call itself. The interruptive audio may be based on another application executed by the computing device 100. - The
computing device 100 may allocate the resources of the audio output 102 so that two applications are not playing and/or providing output out of the audio output 102 at the same time. The computing device 100 may, for example, allow interruptive audio to take priority over continuous or first audio. - The
computing device 100 may include multiple applications. The multiple applications of the computing device 100 may provide continuous or first audio output and/or interruptive audio output. The computing device 100, as shown in FIG. 1, includes two applications, a first application 104 and a second application 106; however, the computing device 100 may include any number of applications. - In the example shown in
FIG. 1, the first application 104 may play and/or output continuous or first audio 108 via the audio output 102. The first application 104 may include, for example, an audio player such as a podcast player, an audio book player, or a music player, or may include a movie player which has both a video and an audio output component. - The
second application 106 may include an application which plays and/or outputs interruptive audio 110. The second application 106 may include, for example, an email program, a text message program, a calendar program which provides audio alerts, a turn-by-turn navigation application which provides audio output of turns (such as, “turn left in one hundred feet”), or a telephone application which provides a notification of incoming phone calls and also plays and/or outputs the speech received by the computing device 100. The second application 106 may play and/or output the interruptive audio 110 via the audio output 102. - A problem may arise when the
first application 104 is playing and/or outputting the continuous or first audio 108 via the audio output 102 and the second application 106 needs to play and/or output the interruptive audio 110 via the audio output 102; allowing both the first application 104 and the second application 106 to play and/or output their respective audio outputs at the same time may result in neither of the outputs being intelligible to the user. The computing device 100 may need to prioritize, allocating the shared resource of the audio output 102 between the audio outputs of the first application 104 and the second application 106. - The
computing device 100 may include an operating system 112. The operating system 112 may not necessarily be seen by, or visible to, a user of the computing device 100. The operating system 112 may allocate the resources of the computing device 100 between the different applications in the computing device 100, such as the first application 104 and the second application 106. The resources of the computing device 100 may include, for example, the audio output 102, a processor 114, such as a microprocessor, a memory 116, and input and output components of the computing device 100. The processor 114 may execute instructions, such as the instructions stored in the memory 116, and may run applications, such as the applications 104, 106. The memory 116, which may include any type of memory, such as read only memory (ROM), random access memory (RAM), and/or flash memory, may store program instructions and/or data. The operating system 112 may, for example, allocate processor 114 resources, memory 116 resources, and/or the audio output 102 resource between the first application 104 and the second application 106. The operating system 112 also may receive and send messages from and to the first application 104 and the second application 106, discussed further below. - The
operating system 112 may act as an interface between the applications, such as the first application 104 and the second application 106, and the resources of the computing device 100. Input and output resources of the computing device 100 may include, for example, a visual output and/or tactile input 118. The visual output and/or tactile input 118 may include, for example, a display which may display graphical icons, notifications, video, and other graphical outputs. The visual output/tactile input 118 also may receive input from a user, such as by providing a touch screen in the computing device 100. - The input and output resources of the
computing device 100 also may include an antenna 120. The antenna 120 may extend out of the computing device 100, or may be internal to the computing device 100. The antenna 120 may serve to receive and transmit wireless signals between the computing device 100 and other computing devices, such as base stations, cellular towers, access points, Node Bs, or other devices which serve mobile computing devices, or even other mobile computing devices. - The input and output resources of the
computing device 100 also may include a keyboard input 122. The keyboard input 122 may receive text input from a user. While the computing device 100 shown in FIG. 1 includes a keyboard input 122, the computing device 100 need not necessarily include the keyboard input 122. The computing device 100 may instead (or also) receive text input via a touch screen, such as the visual output/tactile input 118. - The input and output resources of the
computing device 100 also may include an audio input 124. The audio input 124 may include, for example, a microphone which receives audio input such as speech from a user of the computing device. The operating system 112 may interface between the applications 104, 106 and the input and output resources of the computing device 100, such as, but not limited to, the audio output 102, visual output/tactile input 118, antenna 120, keyboard input 122, and audio input 124. - The
operating system 112 may, for example, determine to temporarily interrupt the continuous or first audio 108 generated by the first application 104 based on an interrupt or interruptive notification associated with the second application 106. Based on the interruption, the first application 104 may pause the continuous or first audio 108. The second application 106 may generate the interruptive audio 110 during the pausing of the continuous or first audio 108. After the interruptive audio 110 has completed, the first application 104 may resume playing and/or outputting the continuous or first audio 108. However, simply resuming at the point where the first application 104 paused playing and/or outputting the continuous or first audio 108 may result in a loss of context or continuity of the continuous or first audio 108. Therefore, the first application 104 may identify a portion of the continuous or first audio 108 that was previously played or outputted before the continuous or first audio 108 was paused. The first application 104 may identify the portion by determining a time, such as five seconds or ten seconds, which should be replayed, or by identifying a complete sentence or a scene which was interrupted and which should be replayed, according to example embodiments. The first application 104 may replay and/or re-output or regenerate a portion of the continuous or first audio 108 which was output or generated before the pausing or interruption, and/or resume playing of the continuous or first audio 108 so that the portion of the continuous or first audio 108 that was previously played is replayed, regaining the context of the continuous or first audio 108. -
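As a minimal illustration of the time-based variant described above, the following sketch records where playback paused and backs up by a fixed replay window before resuming. The function names and the five-second window are assumptions for illustration, not taken from the claims:

```python
# Hedged sketch of the pause/replay bookkeeping described above.
# Names and the replay window are illustrative assumptions.

REPLAY_WINDOW_S = 5.0   # the text suggests e.g. five or ten seconds


def pause(position_s):
    """Record where playback stopped; return the stored bookmark."""
    return position_s


def resume_position(bookmark_s, window_s=REPLAY_WINDOW_S):
    """Back up by the replay window, clamped at the start of the audio."""
    return max(0.0, bookmark_s - window_s)


bookmark = pause(12.0)            # interruption arrives 12 s into the audio
print(resume_position(bookmark))  # prints 7.0: replay 5 s of context
```

Clamping at zero handles an interruption that arrives near the very beginning of the audio, where a full replay window does not exist.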
FIG. 2 is a vertical-time sequence diagram showing actions performed by, and messages exchanged between, the first application 104, the operating system 112, and the second application 106. In the example shown in FIG. 2, the first application 104 may be playing and/or outputting continuous or first audio 108A. The continuous or first audio 108A may include, for example, a podcast, an audio book, music, or the sound portion of a movie. - While the
first application 104 is playing and/or outputting the continuous or first audio 108A, the second application 106 may receive input 202 or a prompt. The input 202 or prompt may include, for example, a notification of a telephone call received via the antenna 120, an input from a user of the computing device 100 (which may have been received via the visual output/tactile input 118, the keyboard input 122, or the audio input 124), a notification that a time of a calendar alert has come, or a determination by a turn-by-turn navigation program that a turn instruction should be provided to the user (which may be based on information included in the turn-by-turn navigation program combined with global positioning system (GPS) information). - Upon receiving the
input 202, the second application 106 may determine that the second application 106 should provide, generate, play, or output interruptive audio 110. Based on this determination, the second application 106 may send an interrupt or interruptive notification message 204 to the operating system 112. The interrupt or interruptive notification message 204 may be a notification understood by the operating system 112 to indicate that the second application 106 needs to provide interruptive audio output 110. - The
operating system 112 may receive the interrupt or interruptive notification message 204 from the second application 106. The operating system 112 may understand that the interrupt or interruptive notification message 204 indicates that the second application 106 needs to provide, generate, play, or output interruptive audio 110. The operating system 112 may have stored or determined that the first application 104 is providing the continuous or first audio output 108A. The operating system 112 may, for example, have previously pushed the first application 104 onto the top of a “stack,” giving the first application 104 priority to the audio output 102. Based on receiving the interrupt or interruptive notification message 204, and determining and/or having stored the fact that the first application 104 is providing the continuous or first audio 108, the operating system 112 may push the second application 106 onto the top of the stack, and/or send a temporary interruption message 206 to the first application 104. - The
first application 104 may receive the temporary interruption message 206 from the operating system 112. The first application 104 may understand that the temporary interruption message 206 indicates that another application, such as the second application 106, needs to play and/or output interruptive audio 110. Based on receiving the temporary interruption message 206, the first application 104 may pause the continuous or first audio (208). The first application 104 may identify a portion of the continuous or first audio 108A that was previously played before the pausing which should be replayed. Identifying the portion to replay may include, for example, pausing the continuous or first audio at, and/or setting a pointer to, the point at which the continuous or first audio 108A was stopped (such as when the first application 104 received the temporary interruption message 206) and/or at a tag preceding the point at which the continuous or first audio 108A was paused. Examples of storing the point at which the continuous or first audio 108A was paused, or the location of the tag preceding that point, include storing a bookmark, time stamp, frame count, position, or sample number of the audio file corresponding to the continuous or first audio output 108A. - In an example in which the continuous or first
audio output 108A is streamed to the computing device 100, such as via the antenna 120, the computing device 100 and/or first application 104 may store and/or buffer the streamed data during the pausing for later output, or may send a message to a sending device, such as a server, to pause and/or back up or rewind the streamed content, according to example implementations. The first application 104 may continue generating the continuous or first audio output 108A for some period of time to enable buffering and subsequent continuation of the streaming, or may buffer during the entire time during which the continuous or first audio output 108A is paused, according to example implementations. - The continuous or
first audio 108A may, for example, have been tagged with sentence beginnings, and/or the first application 104 may determine a tag of the continuous or first audio 108A. The tag may have been included in the file from which the first application 104 generated the continuous or first audio output 108A, or the first application 104 may find a boundary and/or endpoint of the continuous or first audio 108A, such as based on pauses or audio energy, sentence boundary detection, end points, speech recognition, a transcript (such as alignment or tagging of the continuous or first audio output 108A with the text transcript), content indexing, or tokenization (such as by detecting word boundaries). In the example in which the first application 104 is providing video along with the audio output, the first application 104 may tag the output (which may include both the video output and the continuous or first audio output 108A) based on a beginning of a scene, which may, for example, be based on a change in the video output. The first application 104 may point to and/or store that tag point. In the example in which the first application 104 and/or the computing device 100 has stored the entire contents of the continuous or first audio output 108 which will be played and/or outputted, the first application 104 may store the point at which the continuous or first audio output will resume based on the tag. The data which the first application 104 will read to generate the continuous or first audio output 108A may, for example, be maintained in a circular buffer (which may be included in the memory 116), allowing the first application 104 to return to the tag or other point at which the first application 104 will resume the continuous or first audio output 108A.
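Locating the most recent tag before the pause point can be sketched as follows, assuming the tags (for example, sentence beginnings) are available as a sorted list of timestamps; the names and values here are illustrative:

```python
import bisect

# Tags mark sentence or scene beginnings, in seconds from the start of
# the audio. The tag list and timestamps are illustrative assumptions.


def resume_point(tags, paused_at):
    """Return the most recent tag at or before paused_at, else 0.0."""
    i = bisect.bisect_right(tags, paused_at)
    return tags[i - 1] if i > 0 else 0.0


sentence_tags = [0.0, 4.0, 9.5, 15.0]
print(resume_point(sentence_tags, 12.0))  # prints 9.5: replay from 9.5 s
```

Because `bisect_right` performs a binary search, the lookup stays cheap even for a long recording with many tagged boundaries.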
In an example in which the computing device 100 is receiving the data in a stream which will be played and/or outputted as continuous or first audio output 108A, the first application 104 and/or the computing device 100 may store the streamed data for later output. - The
first application 104 may pause the continuous or first audio 108A by immediately ceasing play and/or output of the continuous or first audio 108A, or may gradually reduce the volume of the continuous or first audio output, such as by fading the continuous or first audio output 108A by gradually lowering its volume. This may provide a more gradual and/or seamless transition from the continuous or first audio 108A to the interruptive audio 110. - At about the same time that the
first application 104 pauses the continuous or first audio output (208), the second application 106 may play and/or output the interruptive audio 110, such as by providing the audio notification of the email, text message, or calendar alert, by providing the speech output or turn direction of the navigation program, and/or by providing the ring and the speech output of a telephone call. The second application 106 may complete the interruptive audio 110, such as the email notification and/or text-to-speech output of the email notification, text message, or calendar output, by providing the full turn direction of the turn-by-turn navigation program, and/or by completing the telephone call. - Upon completing the
interruptive audio 110, the second application 106 may send an interruption complete message 210 to the operating system 112. The interruption complete message 210 may inform the operating system 112 that the second application 106 has completed playing and/or outputting the interruptive audio output 110, and the operating system 112 may pop the second application 106 off the top of the stack, and/or reallocate the audio output resources to the first application 104. The first application 104 may now be back on top of the stack. - The
operating system 112 may have stored and/or determined which application, such as the first application 104, had its continuous or first audio output 108 paused during the interruptive audio output 110, such as by checking the top of the stack. The operating system 112 may, in response to receiving the interruption complete message 210 and determining that the first application 104 was the application which had its continuous or first audio output 108 paused, send a resume message 212 to the first application 104. - The
first application 104 may receive the resume message 212 from the operating system 112. The first application 104 may determine and/or know that the resume message 212 provides permission and/or authorization for the first application 104 to resume playing and/or outputting continuous or first audio 108B. - The
first application 104 may respond to receiving the resume message 212 by resuming playing and/or outputting continuous or first audio 108B. The first application 104 may resume playing and/or outputting the continuous or first audio 108B by fading the continuous or first audio 108B in, gradually increasing the volume until returning to full volume, or may immediately play and/or output or generate the continuous or first audio 108B at full volume. - The resuming may include replaying and/or re-outputting a portion of the continuous or
first audio 108A, 108B which was played and/or outputted by the first application 104 before the pausing of the continuous or first audio (208). The first application 104 may, for example, replay and/or re-output the portion of the continuous or first audio by playing and/or outputting a complete sentence, or a complete portion beginning with the most recent tag of a file read by the first application 104. Replaying and/or re-outputting the portion of the continuous or first audio 108 which was generated, played, or output by the first application 104 may provide context and continuity to a user of the computing device 100. Thus, instead of the first application 104 pausing the continuous or first audio 108A in mid-sentence and continuing the continuous or first audio 108B in the middle of the same sentence, the first application 104 may replay and/or re-output the same entire sentence and/or some preceding sentences, giving the user or listener of the computing device 100 the context of the continuous or first audio 108B. If the first application 104 was playing and/or outputting the continuous or first audio 108A by receiving a live stream via the antenna 120, the first application 104 may play and/or output continuous or first audio 108B which was previously received via the stream (which was stored in the buffer and/or memory 116 and subsequently read out of the buffer and/or memory 116), and the continuous or first audio 108B may be delayed from the received stream, according to an example implementation. - The process described with reference to
FIG. 2 may be performed automatically, and/or without user interaction or instruction. For example, the first application 104 may be playing and/or outputting the continuous or first audio 108A, such as a podcast, when the input 202, such as a phone call, prompts the second application 106 to interrupt the continuous or first audio 108A of the first application 104 (such as by the interrupt or interruptive notification message 204 and temporary interruption message 206). The first application 104 may pause the continuous or first audio (208), such as by pausing the podcast, while the second application 106 plays and/or outputs the interruptive audio 110, such as the ring or ringtone associated with the telephone call. If a user of the computing device 100 does not answer the call (such as by not providing input to the computing device 100 in response to the ring or ringtone), then the interruptive audio 110 may be complete when the second application 106 has finished ringing, and the first application 104 may resume the continuous or first audio 108B (such as based on the interruption complete message 210 and the resume message 212). If the user does answer the call, such as by providing an input into the computing device 100 instructing the computing device 100 to take the call, then the interruptive audio 110 may continue until the user instructs the computing device 100 to finish the call or the computing device 100 receives input from another source (such as a wireless signal) that the call is complete. In other examples of the second application 106, such as calendar alerts or turn-by-turn navigation programs, the second application 106 may generate and finish the interruptive audio 110 without user input. -
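The push/pop behavior attributed to the operating system 112 in FIG. 2 can be sketched as a small priority stack. The class and method names here are illustrative assumptions rather than an actual operating system interface:

```python
# Minimal sketch of the audio-focus "stack" described above: the
# application on top of the stack holds the shared audio output.
# All names here are illustrative, not taken from the patent itself.

class AudioFocusStack:
    def __init__(self):
        self._stack = []

    def request_focus(self, app):
        """Push an app on top; it now owns the audio output.

        Returns the previously focused app (if any), which would be
        sent a temporary interruption message so it can pause.
        """
        previous = self._stack[-1] if self._stack else None
        self._stack.append(app)
        return previous

    def release_focus(self, app):
        """Pop the interrupting app; return the app to resume, if any."""
        if self._stack and self._stack[-1] is app:
            self._stack.pop()
        return self._stack[-1] if self._stack else None


stack = AudioFocusStack()
stack.request_focus("podcast_player")       # first application plays
interrupted = stack.request_focus("phone")  # second application interrupts
resumed = stack.release_focus("phone")      # interruption complete
print(interrupted, resumed)                 # prints: podcast_player podcast_player
```

The stack naturally handles nested interruptions: a second interrupting application pauses the first one, and each pop resumes whichever application is next on the stack.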
FIG. 3 is a vertical-time sequence diagram showing messages sent between, and outputs provided by, a first device 302 and a second device 304. The first device 302 may be a device which outputs interruptive audio and may include, for example, a cellular phone, a smartphone, a PDA, an iPhone, a Droid smartphone, or a turn-by-turn navigation system. The second device 304 may include a device which plays and/or outputs continuous or first audio, such as, for example, a music player or MP3 player. The first device 302 and second device 304 may, for example, both be in an automobile. - In an example implementation, the
second device 304 may include a car stereo system which plays music or outputs podcasts or audio books. In an example implementation, the first device 302 may be in wireless communication with the second device 304 via, for example, an IEEE 802.15 Bluetooth connection, an IEEE 802.11 wireless local area network (WLAN) connection, or another wireless connection. The first device 302 may be capable of sending instructions to the second device 304 via the wireless connection. The first device 302 may also be coupled to the second device 304 via a wired or guided connection, and may be capable of sending instructions to the second device 304 via the wired or guided connection. - In an example implementation, the
second device 304 may play and/or output continuous or first audio 306A, which may include, for example, a podcast or an audio book. - While the
second device 304 is playing and/or outputting the continuous or first audio 306A, the first device 302 may receive a prompt 308. The prompt 308 may include, for example, a notification of an email, text message, calendar alert, or phone call, or a determination that a turn direction should be provided by a turn-by-turn navigation system. The prompt 308 may be based on a wireless signal received by the first device 302, based on a user input, or based on timing determinations and/or location determinations by the first device 302. - Based on the prompt 308, the
first device 302 may send a pause message 310 to the second device 304. The pause message 310 may instruct the second device 304 to pause playing and/or outputting the continuous or first audio 306A. The pause message 310 may, for example, include a pause instruction sent via a remote control interface for the second device 304. - In response to receiving the
pause message 310, the second device 304 may pause playing and/or outputting the continuous or first audio 306A. After sending the pause message 310 to the second device 304, the first device 302 may play and/or output interruptive audio 312. The interruptive audio 312 may include, for example, a notification of an email, text message, calendar alert, or phone call, as well as the audio output of a phone call and/or a turn notification by a turn-by-turn navigation system. - After the
interruptive audio output 312 is complete, such as after an email message, text message, or calendar alert output has been provided, and/or after a phone call has been completed or a turn direction has been played and/or output, the first device 302 may send a resume message 314 to the second device 304. The resume message 314 may instruct the second device 304 to resume playing and/or outputting the continuous or first audio 306B. The resume message 314 may, for example, include a play or un-pause instruction sent via a remote control interface for the second device 304. - The
second device 304 may receive the resume message 314 and respond by resuming playing and/or outputting the continuous or first audio 306B. In resuming playing and/or outputting the continuous or first audio 306B, the second device 304 may replay and/or re-output a portion of the continuous or first audio 306A which was played and/or output before receiving the pause message 310. Replaying and/or re-outputting the portion of the continuous or first audio 306A may provide continuity and/or context in the outputting of the continuous or first audio 306A, 306B. - To facilitate the replaying and/or re-outputting of the portion of the continuous or first audio 306A by the
second device 304, the pause message 310 and/or the resume message 314 may have included an instruction for the second device 304 to rewind the continuous or first audio 306A and/or to move to a previous portion where the continuous or first audio 306A will resume being played and/or outputted. The rewind may include rewinding a predetermined period of time such as one second, five seconds, or ten seconds, and/or may be based on a tag. For example, the first device 302 may have listened to the continuous or first audio output 306A and found a tag, such as a beginning of a sentence or a reduced volume, at which the second device 304 should resume playing and/or outputting the continuous or first audio 306B. The first device 302 may determine a time period before the pause message 310 at which the tag began, and may instruct the second device 304 to rewind that period of time, which will cause the second device 304 to resume playing and/or outputting the continuous or first audio 306B at the beginning or at the determined tag point, according to an example implementation. The first device 302 and/or second device 304 may include any or all of the same components as the computing device 100 shown in FIG. 1, according to example implementations. -
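The FIG. 3 exchange, including the tag-based rewind computation just described, can be sketched as follows. The message names, the transport (a direct method call standing in for the Bluetooth or wired remote-control interface), and the tag bookkeeping are all assumptions for illustration:

```python
# Hedged sketch of the two-device exchange of FIG. 3.

class SecondDevice:
    """Plays continuous audio; honors pause/rewind/resume instructions."""
    def __init__(self):
        self.position_s = 0.0
        self.state = "playing"

    def handle(self, message, rewind_s=0.0):
        if message == "PAUSE":
            self.state = "paused"
        elif message == "RESUME":
            # Rewind before resuming, clamped at the start of the audio.
            self.position_s = max(0.0, self.position_s - rewind_s)
            self.state = "playing"


class FirstDevice:
    """Interrupts the second device, then tells it where to resume."""
    def __init__(self, peer):
        self.peer = peer
        self.last_tag_s = 0.0        # when the most recent tag was heard

    def play_interruptive_audio(self, now_s):
        self.peer.handle("PAUSE")
        # ... interruptive audio (notification, call, turn direction) ...
        self.peer.handle("RESUME", rewind_s=now_s - self.last_tag_s)


stereo = SecondDevice()
stereo.position_s = 47.5
phone = FirstDevice(stereo)
phone.last_tag_s = 41.5              # a sentence began 6 s before the pause
phone.play_interruptive_audio(now_s=47.5)
print(stereo.position_s)             # prints 41.5: resumes at the tag
```

Keeping the rewind computation on the first device matches the passage: the second device only needs to understand generic pause, rewind, and resume instructions.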
FIG. 4 is a flowchart of a method 400 according to an example implementation. In this example, the method 400 may include playing and/or outputting, by a computing device 100, continuous or first audio 108 generated by a first application 104 (402), determining that the continuous or first audio 108 generated by the first application 104 should be interrupted based on an interrupt or interruptive notification 204 associated with a second application (404), pausing the continuous or first audio 108 generated by the first application 104 based on the determining (406), playing and/or outputting interruptive audio 110 generated by the second application 106 during the pausing of the continuous or first audio 108 (408), identifying a portion of the continuous or first audio 108 that was previously played before the first audio was paused (410), and resuming the playing and/or outputting of continuous or first audio 108 generated by the first application 104, the resuming including replaying and/or re-outputting a portion of the continuous or first audio 108 which was generated by the first application 104 before the pausing, and/or resuming playing of the continuous or first audio 108 so that the portion of the continuous or first audio 108 that was previously played is replayed (412). - In an example implementation, the playing and/or outputting continuous or first audio 108 (402) may include playing and/or outputting continuous or
first audio 108 and continuous or first video generated by the first application 104, and the resuming may include resuming the playing and/or outputting of continuous or first audio 108 and continuous or first video generated by the first application 104, the resuming including replaying and/or re-outputting the portion of the continuous or first audio 108 and a portion of the continuous or first video which were generated by the first application 104 before the pausing. - In an example implementation, the interrupt or
interruptive notification 204 associated with the second application 106 may be based on a user input received by the computing device 100. - In an example implementation, the interrupt or
interruptive notification 204 associated with the second application 106 may be based on the computing device 100 receiving a wireless signal. - In an example implementation, the
second application 106 may include a phone or telephone application. - In an example implementation, the
second application 106 may include a navigation application. - In an example implementation, the pausing (406) may include gradually reducing a volume of the playing and/or outputting the continuous or
first audio 108 generated by the first application 104, and the resuming may include gradually increasing the volume of the playing and/or outputting the continuous or first audio 108 generated by the first application 104. - In an example implementation, the pausing (406) may include storing streamed data read by the
first application 104 while pausing the playing and/or outputting of continuous or first audio 108 generated by the first application 104, and the resuming may include playing and/or outputting continuous or first audio 108 based on the stored streamed data. - In an example implementation, the resuming (412) may include resuming the playing and/or outputting continuous or
first audio 108 generated by the first application 104, the resuming including replaying and/or re-outputting a complete sentence which was interrupted by the pausing (406). - In an example implementation, the resuming (412) may include resuming the playing and/or outputting continuous or
first audio 108 generated by the first application 104, the resuming including replaying and/or re-outputting a complete sentence which was interrupted by the pausing (406), the complete sentence being determined by content indexing. - In an example implementation, the resuming (412) may include resuming the playing and/or outputting continuous or
first audio 108 generated by the first application 104, the resuming beginning at a most recent tag in a file read by the first application 104. - In an example implementation, the playing and/or outputting (402) may include playing and/or outputting continuous or first audio and continuous or first video generated by the
first application 104, and the resuming (412) may include resuming the playing and/or outputting of continuous or first audio 108 and continuous or first video generated by the first application 104, the resuming including replaying and/or re-outputting the portion of the continuous or first audio and a portion of the continuous or first video at a beginning of a most recent scene which was generated by the first application 104 before the pausing (406). -
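The implementation above in which the pausing stores streamed data can be sketched as follows; the chunk granularity, the buffer bound, and the `played` list (standing in for the audio output 102) are illustrative assumptions:

```python
from collections import deque

class StreamBuffer:
    """Holds incoming stream chunks while playback is paused."""
    def __init__(self, max_chunks=1024):
        self._held = deque(maxlen=max_chunks)
        self.paused = False
        self.played = []              # stands in for the audio output 102

    def on_chunk(self, chunk):
        if self.paused:
            self._held.append(chunk)  # store streamed data during the pause
        else:
            self.played.append(chunk)

    def resume(self):
        self.paused = False
        while self._held:             # drain held chunks first, so playback
            self.played.append(self._held.popleft())  # lags the live stream


buf = StreamBuffer()
buf.on_chunk("chunk-1")               # playing normally
buf.paused = True                     # interruptive audio begins
buf.on_chunk("chunk-2")
buf.on_chunk("chunk-3")
buf.resume()                          # interruption complete
print(buf.played)                     # prints ['chunk-1', 'chunk-2', 'chunk-3']
```

Note that after resuming, playback remains delayed from the live stream by the length of the interruption, which matches the delayed-output behavior described for the live-stream example.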
FIG. 5 is a flowchart of a method 500 according to another example embodiment. In this example, the method may include determining, by a first computing device 302, to temporarily interrupt playing and/or outputting of continuous or first audio 306A by a second computing device 304 based on receiving a prompt (502), sending a pause message 310 to the second computing device 304 based on the determining, the pause message 310 instructing the second computing device 304 to pause playing and/or outputting the continuous or first audio (504), playing and/or outputting interruptive audio 312 after sending the pause message (506), and sending a resume message 314 to the second computing device 304 after playing and/or outputting the interruptive audio 312, the resume message 314 instructing the second computing device 304 to resume playing and/or outputting the continuous or first audio 306B (508). - In an example implementation, the
pause message 310 may instruct thesecond computing device 304 to pause the playing and/or outputting and rewind the continuous or first audio 306A. - In an example implementation, the
resume message 314 may instruct thesecond computing device 304 to rewind and resume the playing and/or outputting the continuous or first audio 306B. -
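The two-device flow of FIG. 5 can be sketched as a simple message exchange: the first device sends a pause message, plays its interruptive audio, then sends a resume message that also rewinds the continuous audio slightly before playback restarts. This is a sketch under stated assumptions; the class and function names, the message strings, and the fixed rewind interval are all illustrative, not details from the patent.

```python
from dataclasses import dataclass

@dataclass
class ContinuousPlayer:
    """Stand-in for the second computing device playing continuous audio."""
    position: float = 30.0    # current playback position, seconds
    playing: bool = True
    rewind_secs: float = 5.0  # rewind applied on resume, per the resume message

    def handle(self, message: str) -> None:
        if message == "PAUSE":
            self.playing = False
        elif message == "RESUME":
            # Rewind so the listener re-hears the audio played just
            # before the interruption, then resume playback.
            self.position = max(0.0, self.position - self.rewind_secs)
            self.playing = True

def interrupt_with(player: ContinuousPlayer, interruptive_audio: str) -> None:
    """Stand-in for the first computing device: pause the remote player,
    play the interruptive audio locally, then instruct it to resume."""
    player.handle("PAUSE")                    # pause message (504)
    print(f"playing: {interruptive_audio}")   # interruptive audio (506)
    player.handle("RESUME")                   # resume message (508)

player = ContinuousPlayer()
interrupt_with(player, "Turn left in 500 feet")
```

In this sketch the rewind is applied by the playing device on receipt of the resume message; the patent also contemplates rewinding on the pause message instead, which would move the position adjustment into the "PAUSE" branch.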
FIG. 6 is a block diagram showing example or representative computing devices and associated elements that may be used to implement the systems and methods of FIGS. 1-5. FIG. 6 shows an example of a generic computer device 600 and a generic mobile computer device 650, which may be used with the techniques described here. Computing device 600 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Computing device 650 is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, and other similar computing devices. The components shown here, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed in this document. -
Computing device 600 includes a processor 602, memory 604, a storage device 606, a high-speed interface 608 connecting to memory 604 and high-speed expansion ports 610, and a low speed interface 612 connecting to low speed bus 614 and storage device 606. Each of the components 602, 604, 606, 608, 610, and 612 are interconnected using various buses, and may be mounted on a common motherboard or in other manners as appropriate. The processor 602 can process instructions for execution within the computing device 600, including instructions stored in the memory 604 or on the storage device 606 to display graphical information for a GUI on an external input/output device, such as display 616 coupled to high speed interface 608. In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices 600 may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system). - The
memory 604 stores information within the computing device 600. In one implementation, the memory 604 is a volatile memory unit or units. In another implementation, the memory 604 is a non-volatile memory unit or units. The memory 604 may also be another form of computer-readable medium, such as a magnetic or optical disk. - The storage device 606 is capable of providing mass storage for the
computing device 600. In one implementation, the storage device 606 may be or contain a computer-readable medium, such as a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. A computer program product can be tangibly embodied in an information carrier. The computer program product may also contain instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 604, the storage device 606, or memory on processor 602. - The
high speed controller 608 manages bandwidth-intensive operations for the computing device 600, while the low speed controller 612 manages lower bandwidth-intensive operations. Such allocation of functions is exemplary only. In one implementation, the high-speed controller 608 is coupled to memory 604, display 616 (e.g., through a graphics processor or accelerator), and to high-speed expansion ports 610, which may accept various expansion cards (not shown). In the implementation, low-speed controller 612 is coupled to storage device 606 and low-speed expansion port 614. The low-speed expansion port, which may include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet), may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter. - The
computing device 600 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server 620, or multiple times in a group of such servers. It may also be implemented as part of a rack server system 624. In addition, it may be implemented in a personal computer such as a laptop computer 622. Alternatively, components from computing device 600 may be combined with other components in a mobile device (not shown), such as device 650. Each of such devices may contain one or more of computing device 600, 650, and an entire system may be made up of multiple computing devices 600, 650 communicating with each other. - Computing device 650 includes a
processor 652, memory 664, an input/output device such as a display 654, a communication interface 666, and a transceiver 668, among other components. The device 650 may also be provided with a storage device, such as a microdrive or other device, to provide additional storage. Each of the components 650, 652, 664, 654, 666, and 668 are interconnected using various buses, and several of the components may be mounted on a common motherboard or in other manners as appropriate. - The
processor 652 can execute instructions within the computing device 650, including instructions stored in the memory 664. The processor may be implemented as a chipset of chips that include separate and multiple analog and digital processors. The processor may provide, for example, for coordination of the other components of the device 650, such as control of user interfaces, applications run by device 650, and wireless communication by device 650. -
Processor 652 may communicate with a user through control interface 658 and display interface 656 coupled to a display 654. The display 654 may be, for example, a TFT LCD (Thin-Film-Transistor Liquid Crystal Display) or an OLED (Organic Light Emitting Diode) display, or other appropriate display technology. The display interface 656 may comprise appropriate circuitry for driving the display 654 to present graphical and other information to a user. The control interface 658 may receive commands from a user and convert them for submission to the processor 652. In addition, an external interface 662 may be provided in communication with processor 652, so as to enable near area communication of device 650 with other devices. External interface 662 may provide, for example, for wired communication in some implementations, or for wireless communication in other implementations, and multiple interfaces may also be used. - The
memory 664 stores information within the computing device 650. The memory 664 can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units. Expansion memory 674 may also be provided and connected to device 650 through expansion interface 672, which may include, for example, a SIMM (Single In Line Memory Module) card interface. Such expansion memory 674 may provide extra storage space for device 650, or may also store applications or other information for device 650. Specifically, expansion memory 674 may include instructions to carry out or supplement the processes described above, and may include secure information also. Thus, for example, expansion memory 674 may be provided as a security module for device 650, and may be programmed with instructions that permit secure use of device 650. In addition, secure applications may be provided via the SIMM cards, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner. - The memory may include, for example, flash memory and/or NVRAM memory, as discussed below. In one implementation, a computer program product is tangibly embodied in an information carrier. The computer program product contains instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the
memory 664, expansion memory 674, or memory on processor 652, that may be received, for example, over transceiver 668 or external interface 662. - Device 650 may communicate wirelessly through
communication interface 666, which may include digital signal processing circuitry where necessary. Communication interface 666 may provide for communications under various modes or protocols, such as GSM voice calls, SMS, EMS, or MMS messaging, CDMA, TDMA, PDC, WCDMA, CDMA2000, or GPRS, among others. Such communication may occur, for example, through radio-frequency transceiver 668. In addition, short-range communication may occur, such as using a Bluetooth, WiFi, or other such transceiver (not shown). In addition, a GPS (Global Positioning System) receiver module 670 may provide additional navigation- and location-related wireless data to device 650, which may be used as appropriate by applications running on device 650. - Device 650 may also communicate audibly using
audio codec 660, which may receive spoken information from a user and convert it to usable digital information. Audio codec 660 may likewise generate audible sound for a user, such as through a speaker, e.g., in a handset of device 650. Such sound may include sound from voice telephone calls, may include recorded sound (e.g., voice messages, music files, etc.), and may also include sound generated by applications operating on device 650. - The computing device 650 may be implemented in a number of different forms, as shown in
FIG. 6. For example, it may be implemented as a cellular telephone 680. It may also be implemented as part of a smart phone 682, personal digital assistant, or other similar mobile device. - Implementations of the various techniques described herein may be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. Implementations may be implemented as a computer program product, i.e., a computer program tangibly embodied in an information carrier, e.g., in a machine-readable storage device, for execution by, or to control the operation of, data processing apparatus, e.g., a programmable processor, a computer, or multiple computers. A computer program, such as the computer program(s) described above, can be written in any form of programming language, including compiled or interpreted languages, and can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network.
- Method steps may be performed by one or more programmable processors executing a computer program to perform functions by operating on input data and generating output. Method steps also may be performed by, and an apparatus may be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
- Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. Elements of a computer may include at least one processor for executing instructions and one or more memory devices for storing instructions and data. Generally, a computer also may include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. Information carriers suitable for embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory may be supplemented by, or incorporated in, special purpose logic circuitry.
- To provide for interaction with a user, implementations may be implemented on a computer having a display device, e.g., a cathode ray tube (CRT) or liquid crystal display (LCD) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
- Implementations may be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation, or any combination of such back-end, middleware, or front-end components. Components may be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (LAN) and a wide area network (WAN), e.g., the Internet.
- While certain features of the described implementations have been illustrated as described herein, many modifications, substitutions, changes and equivalents will now occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the embodiments of the invention.
Claims (29)
1. A method comprising:
playing, by a computing device, first audio generated by a first application;
determining that the first audio generated by the first application should be interrupted based on a notification associated with a second application;
pausing the first audio generated by the first application;
playing interruptive audio generated by the second application during the pausing of the first audio;
identifying a portion of the first audio that was previously played before the first audio was paused; and
resuming the playing of the first audio so that the portion of the first audio that was previously played is replayed.
2. The method of claim 1 , wherein:
the playing first audio includes playing the first audio and video generated by the first application; and
the resuming includes playing the first audio and video generated by the first application, the resuming including replaying the portion of the first audio and a portion of the video which were previously played.
3. The method of claim 1 , wherein the notification associated with the second application is based on a user input received by the computing device.
4. The method of claim 1 , wherein the notification associated with the second application is based on the computing device receiving a wireless signal.
5. The method of claim 1 , wherein the second application includes a phone application.
6. The method of claim 1 , wherein the second application includes a navigation application.
7. The method of claim 1 , wherein:
the pausing includes gradually reducing a volume of the playing the first audio generated by the first application; and
the resuming includes gradually increasing the volume of the playing the first audio generated by the first application.
8. The method of claim 1 , wherein:
the pausing includes storing streamed data read by the first application while pausing the playing the first audio generated by the first application; and
the resuming includes playing the first audio based on the stored streamed data.
9. The method of claim 1 , wherein the resuming includes resuming the playing the first audio generated by the first application, the resuming beginning at a most recent tag in a file read by the first application.
10. The method of claim 1 , wherein:
the playing includes playing the first audio and video generated by the first application;
the identifying includes identifying a most recent scene which began playing before the pausing; and
the resuming includes resuming the playing of the first audio and video generated by the first application, the resuming including replaying the portion of the first audio and a portion of the video at a beginning of a most recent scene which was generated by the first application before the pausing.
11. An apparatus comprising:
at least one processor; and
at least one memory comprising computer executable code that, when executed by the at least one processor, is configured to cause the apparatus to:
play first audio generated by a first application;
determine that the first audio generated by the first application should be interrupted based on a notification associated with a second application;
pause the first audio generated by the first application;
play interruptive audio generated by the second application during the pausing of the first audio;
identify a portion of the first audio that was previously played before the first audio was paused; and
resume the playing of the first audio so that the portion of the first audio that was previously played is replayed.
12. The apparatus of claim 11 , wherein:
the playing the first audio includes playing the first audio and video generated by the first application; and
the resuming includes playing of the first audio and video generated by the first application so that the portion of the first audio and a portion of the video that were previously played are replayed.
13. The apparatus of claim 11 , wherein the notification associated with the second application is based on a user input received by the apparatus.
14. The apparatus of claim 11 , wherein the notification associated with the second application is based on the apparatus receiving a wireless signal.
15. The apparatus of claim 11 , wherein the second application includes a phone application.
16. The apparatus of claim 11 , wherein the second application includes a navigation application.
17. The apparatus of claim 11 , wherein:
the pausing includes gradually reducing a volume of the playing the first audio generated by the first application; and
the resuming includes gradually increasing the volume of the playing the first audio generated by the first application.
18. The apparatus of claim 11 , wherein:
the pausing includes storing streamed data read by the first application while pausing the playing the first audio generated by the first application; and
the resuming includes playing the first audio based on the stored streamed data.
19. The apparatus of claim 11 , wherein:
the identifying includes identifying the portion of the first audio that was previously played before the first audio was paused, based on a most recent tag in a file read by the first application; and
the resuming includes the playing the first audio so that the portion of the first audio that was previously played is replayed beginning at the most recent tag in the file read by the first application.
20. The apparatus of claim 11 , wherein:
the playing includes playing the first audio and video generated by the first application;
the identifying includes identifying a most recent scene that was previously played before the first audio and video were paused; and
the resuming includes playing of the first audio and video generated by the first application so that the most recent scene that was previously played is replayed.
21. A non-transitory computer-readable storage medium comprising computer-executable code stored thereon that, when executed by a processor, is configured to cause an application executed by the processor to:
play first audio;
receive a temporary interruption message from an operating system executed by the processor;
pause the playing the first audio based on receiving the temporary interruption message;
receive a resume message from the operating system;
identify a portion of the first audio that was previously played before the first audio was paused; and
resume playing the first audio based on receiving the resume message so that the portion of the first audio that was previously played is replayed.
22. The storage medium of claim 21 , wherein:
the playing the first audio includes playing the first audio and video;
the identifying includes identifying the portion of the first audio and the video that were previously played; and
the resuming includes playing the first audio and video so that the portion of the first audio and video that were previously played are replayed.
23. The storage medium of claim 21 , wherein:
the pausing includes gradually reducing a volume of the playing the first audio; and
the resuming includes gradually increasing the volume of the playing the first audio.
24. The storage medium of claim 21 , wherein:
the pausing includes storing streamed data while pausing the playing the first audio; and
the resuming includes playing the first audio based on the stored streamed data.
25. The storage medium of claim 21 , wherein the identifying includes identifying a portion of the first audio that was previously played before the first audio was paused, based on a most recent file tag in a file read by the first application.
26. The storage medium of claim 21 , wherein:
the playing includes playing the first audio and video generated by the first application; and
the resuming includes playing of the first audio and video so that the portion of the first audio that was previously played and a portion of the video that was previously played is replayed.
27. A non-transitory computer-readable storage medium comprising computer-executable code stored thereon that, when executed by a processor, is configured to cause an application executed by the processor to:
determine to interrupt playing of first audio by a computing device based on receiving a prompt;
send a pause message to the computing device based on the determining, the pause message instructing the computing device to pause playing the first audio;
play interruptive audio after sending the pause message; and
send a resume message to the computing device after playing the interruptive audio, the resume message instructing the computing device to resume playing the first audio.
28. The storage medium of claim 27 , wherein the pause message instructs the computing device to pause the playing and rewind the first audio.
29. The storage medium of claim 27 , wherein the resume message instructs the computing device to rewind and resume the playing the first audio.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/253,583 US20120029672A1 (en) | 2010-06-03 | 2011-10-05 | Continuous audio interaction with interruptive audio |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/793,065 US8755921B2 (en) | 2010-06-03 | 2010-06-03 | Continuous audio interaction with interruptive audio |
US13/253,583 US20120029672A1 (en) | 2010-06-03 | 2011-10-05 | Continuous audio interaction with interruptive audio |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/793,065 Continuation US8755921B2 (en) | 2010-06-03 | 2010-06-03 | Continuous audio interaction with interruptive audio |
Publications (1)
Publication Number | Publication Date |
---|---|
US20120029672A1 true US20120029672A1 (en) | 2012-02-02 |
Family
ID=44357952
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/793,065 Expired - Fee Related US8755921B2 (en) | 2010-06-03 | 2010-06-03 | Continuous audio interaction with interruptive audio |
US13/253,583 Abandoned US20120029672A1 (en) | 2010-06-03 | 2011-10-05 | Continuous audio interaction with interruptive audio |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/793,065 Expired - Fee Related US8755921B2 (en) | 2010-06-03 | 2010-06-03 | Continuous audio interaction with interruptive audio |
Country Status (2)
Country | Link |
---|---|
US (2) | US8755921B2 (en) |
WO (1) | WO2011153025A1 (en) |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013177296A1 (en) * | 2012-05-23 | 2013-11-28 | Sonos, Inc. | Audio content auditioning |
WO2014172417A1 (en) * | 2013-04-16 | 2014-10-23 | Sonos, Inc. | Playback queue transfer in a media playback system |
US20150356537A1 (en) * | 2014-06-10 | 2015-12-10 | Toshiba Tec Kabushiki Kaisha | Electronic receipt management system |
US9361371B2 (en) | 2013-04-16 | 2016-06-07 | Sonos, Inc. | Playlist update in a media playback system |
US9495076B2 (en) | 2013-05-29 | 2016-11-15 | Sonos, Inc. | Playlist modification |
US9501533B2 (en) | 2013-04-16 | 2016-11-22 | Sonos, Inc. | Private queue for a media playback system |
US9654821B2 (en) | 2011-12-30 | 2017-05-16 | Sonos, Inc. | Systems and methods for networked music playback |
US9684484B2 (en) | 2013-05-29 | 2017-06-20 | Sonos, Inc. | Playback zone silent connect |
US9703521B2 (en) | 2013-05-29 | 2017-07-11 | Sonos, Inc. | Moving a playback queue to a new zone |
US9735978B2 (en) | 2013-05-29 | 2017-08-15 | Sonos, Inc. | Playback queue control via a playlist on a mobile device |
US9798510B2 (en) | 2013-05-29 | 2017-10-24 | Sonos, Inc. | Connected state indicator |
US9953179B2 (en) | 2013-05-29 | 2018-04-24 | Sonos, Inc. | Private queue indicator |
US10296884B2 (en) * | 2013-09-30 | 2019-05-21 | Sonos, Inc. | Personalized media playback at a discovered point-of-sale display |
US10440075B2 (en) * | 2012-06-29 | 2019-10-08 | Spotify Ab | Systems and methods for multi-context media control and playback |
US20190369949A1 (en) * | 2018-05-31 | 2019-12-05 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method, apparatus and device for playing audio and storage medium |
US10620797B2 (en) | 2012-06-29 | 2020-04-14 | Spotify Ab | Systems and methods for multi-context media control and playback |
US10715973B2 (en) | 2013-05-29 | 2020-07-14 | Sonos, Inc. | Playback queue control transition |
US11140116B2 (en) | 2017-01-31 | 2021-10-05 | Samsung Electronics Co., Ltd | Method for providing notification to uncover execution screen and electronic apparatus for performing same |
US11175883B2 (en) * | 2020-01-17 | 2021-11-16 | Sonos, Inc. | Playback session transitions across different platforms |
US11825174B2 (en) | 2012-06-26 | 2023-11-21 | Sonos, Inc. | Remote playback queue |
Families Citing this family (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040235520A1 (en) | 2003-05-20 | 2004-11-25 | Cadiz Jonathan Jay | Enhanced telephony computer user interface allowing user interaction and control of a telephone using a personal computer |
US7216221B2 (en) | 2003-09-30 | 2007-05-08 | Microsoft Corporation | Method and system for unified audio control on a personal computer |
US8838179B2 (en) * | 2009-09-25 | 2014-09-16 | Blackberry Limited | Method and apparatus for managing multimedia communication recordings |
JP2011210051A (en) * | 2010-03-30 | 2011-10-20 | Sharp Corp | Network system, communication method, and communication terminal |
KR101715381B1 (en) * | 2010-11-25 | 2017-03-10 | 삼성전자 주식회사 | Electronic device and control method thereof |
US8938312B2 (en) | 2011-04-18 | 2015-01-20 | Sonos, Inc. | Smart line-in processing |
US9042556B2 (en) | 2011-07-19 | 2015-05-26 | Sonos, Inc | Shaping sound responsive to speaker orientation |
US9482296B2 (en) | 2012-06-05 | 2016-11-01 | Apple Inc. | Rendering road signs during navigation |
US9886794B2 (en) | 2012-06-05 | 2018-02-06 | Apple Inc. | Problem reporting in maps |
US9418672B2 (en) | 2012-06-05 | 2016-08-16 | Apple Inc. | Navigation application with adaptive instruction text |
US9230556B2 (en) * | 2012-06-05 | 2016-01-05 | Apple Inc. | Voice instructions during navigation |
US8965696B2 (en) | 2012-06-05 | 2015-02-24 | Apple Inc. | Providing navigation instructions while operating navigation application in background |
US9996148B1 (en) * | 2013-03-05 | 2018-06-12 | Amazon Technologies, Inc. | Rule-based presentation of media items |
US9523585B2 (en) | 2013-03-15 | 2016-12-20 | Google Inc. | Systems and methods for handling application notifications |
US9858052B2 (en) * | 2013-03-21 | 2018-01-02 | Razer (Asia-Pacific) Pte. Ltd. | Decentralized operating system |
CN104038827B (en) | 2014-06-06 | 2018-02-02 | 小米科技有限责任公司 | Multi-medium play method and device |
US11599328B2 (en) * | 2015-05-26 | 2023-03-07 | Disney Enterprises, Inc. | Methods and systems for playing an audio corresponding to a text medium |
US9924010B2 (en) * | 2015-06-05 | 2018-03-20 | Apple Inc. | Audio data routing between multiple wirelessly connected devices |
US10121471B2 (en) * | 2015-06-29 | 2018-11-06 | Amazon Technologies, Inc. | Language model speech endpointing |
US10031719B2 (en) | 2015-09-02 | 2018-07-24 | Harman International Industries, Incorporated | Audio system with multi-screen application |
US9830126B2 (en) * | 2015-10-21 | 2017-11-28 | Bose Corporation | Controlling audio playback based on activity |
US10235124B2 (en) * | 2016-06-08 | 2019-03-19 | Google Llc | Audio announcement prioritization system |
US10824386B2 (en) * | 2016-07-11 | 2020-11-03 | Telenav, Inc. | Navigation system with message interchange mechanism and method of operation thereof |
WO2018080447A1 (en) | 2016-10-25 | 2018-05-03 | Rovi Guides, Inc. | Systems and methods for resuming a media asset |
WO2018080445A1 (en) * | 2016-10-25 | 2018-05-03 | Rovi Guides, Inc. | Systems and methods for resuming a media asset |
US10255031B2 (en) * | 2017-02-03 | 2019-04-09 | Facebook, Inc. | Music playback for affiliated services |
JP6844401B2 (en) * | 2017-04-26 | 2021-03-17 | ティアック株式会社 | Information processing equipment, audio equipment and programs |
WO2018211748A1 (en) * | 2017-05-16 | 2018-11-22 | ソニー株式会社 | Information processing device and information processing method |
US10530318B2 (en) * | 2017-11-30 | 2020-01-07 | Apple Inc. | Audio system having variable reset volume |
WO2019138651A1 (en) * | 2018-01-10 | 2019-07-18 | ソニー株式会社 | Information processing device, information processing system, information processing method and program |
US20210132896A1 (en) * | 2019-11-04 | 2021-05-06 | International Business Machines Corporation | Learned silencing of headphones for improved awareness |
US11080007B1 (en) * | 2020-02-05 | 2021-08-03 | Sap Se | Intelligent audio playback resumption |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020111812A1 (en) * | 2001-02-09 | 2002-08-15 | Buchholz Dale R. | Method and apparatus for encoding and decoding pause informantion |
US6876845B1 (en) * | 1999-09-06 | 2005-04-05 | Honda Giken Kogyo Kabushiki Kaisha | Radio communication system for vehicle |
US7466334B1 (en) * | 2002-09-17 | 2008-12-16 | Commfore Corporation | Method and system for recording and indexing audio and video conference calls allowing topic-based notification and navigation of recordings |
US20090006695A1 (en) * | 2007-06-28 | 2009-01-01 | Apple Inc. | Method and apparatus for mediating among media applications |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5953485A (en) | 1992-02-07 | 1999-09-14 | Abecassis; Max | Method and system for maintaining audio during video control |
US6360053B1 (en) * | 1998-08-07 | 2002-03-19 | Replaytv, Inc. | Method and apparatus for fast forwarding and rewinding in a video recording device |
KR20040041082A (en) | 2000-07-24 | 2004-05-13 | 비브콤 인코포레이티드 | System and method for indexing, searching, identifying, and editing portions of electronic multimedia files |
US6947728B2 (en) * | 2000-10-13 | 2005-09-20 | Matsushita Electric Industrial Co., Ltd. | Mobile phone with music reproduction function, music data reproduction method by mobile phone with music reproduction function, and the program thereof |
CA2327632C (en) | 2000-12-05 | 2008-10-07 | Mitchell J. Shnier | Methods for creating and playing a customized program of a variety of sources |
US8086287B2 (en) | 2001-01-24 | 2011-12-27 | Alcatel Lucent | System and method for switching between audio sources |
US7178047B2 (en) | 2003-01-31 | 2007-02-13 | Microsoft Corporation | Method to reduce or eliminate audio interference from computer components |
EP1509041A1 (en) * | 2003-08-19 | 2005-02-23 | Medion AG | Multifunctional device for processing audio/video signals |
TW200537941A (en) * | 2004-01-26 | 2005-11-16 | Koninkl Philips Electronics Nv | Replay of media stream from a prior change location |
US20050245240A1 (en) | 2004-04-30 | 2005-11-03 | Senaka Balasuriya | Apparatus and method for storing media during interruption of a media session |
US8594341B2 (en) | 2004-10-18 | 2013-11-26 | Leigh M. Rothschild | System and method for selectively switching between a plurality of audio channels |
WO2007053687A2 (en) | 2005-11-01 | 2007-05-10 | Vesco Oil Corporation | Audio-visual point-of-sale presentation system and method directed toward vehicle occupant |
US7984440B2 (en) | 2006-11-17 | 2011-07-19 | Sap Ag | Interactive audio task system with interrupt recovery and confirmations |
EP2088751B1 (en) | 2008-02-08 | 2013-07-03 | Accenture Global Services Limited | Streaming media interruption and resumption system |
US7610202B1 (en) | 2008-04-21 | 2009-10-27 | Nuance Communications, Inc. | Integrated system and method for mobile audio playback and dictation |
US8543230B2 (en) | 2008-05-30 | 2013-09-24 | Nokia Corporation | Optimizing seek functionality in media content |
- 2010-06-03: US application US12/793,065, granted as US8755921B2 (not active, Expired - Fee Related)
- 2011-05-24: PCT application PCT/US2011/037728, published as WO2011153025A1 (active, Application Filing)
- 2011-10-05: US application US13/253,583, published as US20120029672A1 (not active, Abandoned)
Cited By (57)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9654821B2 (en) | 2011-12-30 | 2017-05-16 | Sonos, Inc. | Systems and methods for networked music playback |
US10757471B2 (en) | 2011-12-30 | 2020-08-25 | Sonos, Inc. | Systems and methods for networked music playback |
US10567831B2 (en) | 2011-12-30 | 2020-02-18 | Sonos, Inc. | Systems and methods for networked music playback |
US10945027B2 (en) | 2011-12-30 | 2021-03-09 | Sonos, Inc. | Systems and methods for networked music playback |
US10779033B2 (en) | 2011-12-30 | 2020-09-15 | Sonos, Inc. | Systems and methods for networked music playback |
US20220224971A1 (en) * | 2011-12-30 | 2022-07-14 | Sonos, Inc. | Systems and Methods for Networked Music Playback |
US9967615B2 (en) | 2011-12-30 | 2018-05-08 | Sonos, Inc. | Networked music playback |
US9883234B2 (en) | 2011-12-30 | 2018-01-30 | Sonos, Inc. | Systems and methods for networked music playback |
US9860589B2 (en) | 2011-12-30 | 2018-01-02 | Sonos, Inc. | Systems and methods for networked music playback |
US11743534B2 (en) | 2011-12-30 | 2023-08-29 | Sonos, Inc. | Systems and methods for networked music playback |
US10338881B2 (en) | 2012-05-23 | 2019-07-02 | Sonos, Inc. | Audio content auditioning by playback device |
US9977647B2 (en) | 2012-05-23 | 2018-05-22 | Sonos, Inc. | Audio content auditioning by playback device |
US8908879B2 (en) | 2012-05-23 | 2014-12-09 | Sonos, Inc. | Audio content auditioning |
WO2013177296A1 (en) * | 2012-05-23 | 2013-11-28 | Sonos, Inc. | Audio content auditioning |
US11893306B2 (en) | 2012-05-23 | 2024-02-06 | Sonos, Inc. | Audio content auditioning by playback device |
US9395950B2 (en) | 2012-05-23 | 2016-07-19 | Sonos, Inc. | Audio content auditioning |
US9395951B2 (en) | 2012-05-23 | 2016-07-19 | Sonos, Inc. | Audio content auditioning |
US10956118B2 (en) | 2012-05-23 | 2021-03-23 | Sonos, Inc. | Audio content auditioning by playback device |
US9304735B2 (en) | 2012-05-23 | 2016-04-05 | Sonos, Inc. | Audio content auditioning by playback device |
US11825174B2 (en) | 2012-06-26 | 2023-11-21 | Sonos, Inc. | Remote playback queue |
US11294544B2 (en) | 2012-06-29 | 2022-04-05 | Spotify Ab | Systems and methods for multi-context media control and playback |
US10440075B2 (en) * | 2012-06-29 | 2019-10-08 | Spotify Ab | Systems and methods for multi-context media control and playback |
US10884588B2 (en) | 2012-06-29 | 2021-01-05 | Spotify Ab | Systems and methods for multi-context media control and playback |
US10620797B2 (en) | 2012-06-29 | 2020-04-14 | Spotify Ab | Systems and methods for multi-context media control and playback |
US10380179B2 (en) | 2013-04-16 | 2019-08-13 | Sonos, Inc. | Playlist update corresponding to playback queue modification |
US11188666B2 (en) | 2013-04-16 | 2021-11-30 | Sonos, Inc. | Playback device queue access levels |
US11727134B2 (en) | 2013-04-16 | 2023-08-15 | Sonos, Inc. | Playback device queue access levels |
US9247363B2 (en) | 2013-04-16 | 2016-01-26 | Sonos, Inc. | Playback queue transfer in a media playback system |
US10339331B2 (en) | 2013-04-16 | 2019-07-02 | Sonos, Inc. | Playback device queue access levels |
US9361371B2 (en) | 2013-04-16 | 2016-06-07 | Sonos, Inc. | Playlist update in a media playback system |
US11188590B2 (en) | 2013-04-16 | 2021-11-30 | Sonos, Inc. | Playlist update corresponding to playback queue modification |
US10466956B2 (en) | 2013-04-16 | 2019-11-05 | Sonos, Inc. | Playback queue transfer in a media playback system |
US11899712B2 (en) | 2013-04-16 | 2024-02-13 | Sonos, Inc. | Playback queue collaboration and notification |
US11775251B2 (en) | 2013-04-16 | 2023-10-03 | Sonos, Inc. | Playback transfer in a media playback system |
US9501533B2 (en) | 2013-04-16 | 2016-11-22 | Sonos, Inc. | Private queue for a media playback system |
US11321046B2 (en) | 2013-04-16 | 2022-05-03 | Sonos, Inc. | Playback transfer in a media playback system |
WO2014172417A1 (en) * | 2013-04-16 | 2014-10-23 | Sonos, Inc. | Playback queue transfer in a media playback system |
US9735978B2 (en) | 2013-05-29 | 2017-08-15 | Sonos, Inc. | Playback queue control via a playlist on a mobile device |
US9953179B2 (en) | 2013-05-29 | 2018-04-24 | Sonos, Inc. | Private queue indicator |
US9495076B2 (en) | 2013-05-29 | 2016-11-15 | Sonos, Inc. | Playlist modification |
US9684484B2 (en) | 2013-05-29 | 2017-06-20 | Sonos, Inc. | Playback zone silent connect |
US9703521B2 (en) | 2013-05-29 | 2017-07-11 | Sonos, Inc. | Moving a playback queue to a new zone |
US9798510B2 (en) | 2013-05-29 | 2017-10-24 | Sonos, Inc. | Connected state indicator |
US10248724B2 (en) | 2013-05-29 | 2019-04-02 | Sonos, Inc. | Playback queue control connection |
US10191981B2 (en) | 2013-05-29 | 2019-01-29 | Sonos, Inc. | Playback queue control indicator |
US10191980B2 (en) | 2013-05-29 | 2019-01-29 | Sonos, Inc. | Playback queue control via a playlist on a computing device |
US10152537B1 (en) | 2013-05-29 | 2018-12-11 | Sonos, Inc. | Playback queue control by a mobile device |
US10013233B2 (en) | 2013-05-29 | 2018-07-03 | Sonos, Inc. | Playlist modification |
US11514105B2 (en) | 2013-05-29 | 2022-11-29 | Sonos, Inc. | Transferring playback from a mobile device to a playback device |
US11687586B2 (en) | 2013-05-29 | 2023-06-27 | Sonos, Inc. | Transferring playback from a mobile device to a playback device |
US10715973B2 (en) | 2013-05-29 | 2020-07-14 | Sonos, Inc. | Playback queue control transition |
US10296884B2 (en) * | 2013-09-30 | 2019-05-21 | Sonos, Inc. | Personalized media playback at a discovered point-of-sale display |
US20150356537A1 (en) * | 2014-06-10 | 2015-12-10 | Toshiba Tec Kabushiki Kaisha | Electronic receipt management system |
US11140116B2 (en) | 2017-01-31 | 2021-10-05 | Samsung Electronics Co., Ltd | Method for providing notification to uncover execution screen and electronic apparatus for performing same |
US20190369949A1 (en) * | 2018-05-31 | 2019-12-05 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method, apparatus and device for playing audio and storage medium |
US11175883B2 (en) * | 2020-01-17 | 2021-11-16 | Sonos, Inc. | Playback session transitions across different platforms |
US11740857B2 (en) | 2020-01-17 | 2023-08-29 | Sonos, Inc. | Playback session transitions across different platforms |
Also Published As
Publication number | Publication date |
---|---|
US8755921B2 (en) | 2014-06-17 |
WO2011153025A1 (en) | 2011-12-08 |
US20110301728A1 (en) | 2011-12-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8755921B2 (en) | Continuous audio interaction with interruptive audio | |
CN110692055B (en) | Keyword group detection using audio watermarking | |
CA2837291C (en) | Event-triggered hands-free multitasking for media playback | |
US11330335B1 (en) | Presentation and management of audio and visual content across devices | |
CN105027194B (en) | Recognition of speech topics | |
EP2760015A1 (en) | Event-triggered hands-free multitasking for media playback | |
US20170215051A1 (en) | Remote access to a mobile communication device over a wireless local area network (wlan) | |
US20160055847A1 (en) | System and method for speech validation | |
KR20140074549A (en) | Method and apparatus for providing context aware service using speech recognition | |
KR20130135567A (en) | Method and apparatus for providing message service using voice of user | |
US9996148B1 (en) | Rule-based presentation of media items | |
KR100726464B1 (en) | Method of transacting multimedia data between communication terminals and interoperating between applications installed in the terminals, and communication terminal employing the same | |
WO2022267682A1 (en) | Navigation switching method and apparatus, electronic device, and storage medium | |
US20190356534A1 (en) | Notification timing for electronic devices | |
KR101275084B1 (en) | A method of reproducing streaming data and a terminal thereof | |
CN115130478A (en) | Intention decision method and device, and computer readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |