BACKGROUND
Acoustic echo is a common phenomenon occurring in two-way voice communication when open speakers are used. For example, FIG. 1 illustrates one end 100 of a typical two-way communication system. The other end is exactly the same. In such a system, the far-end voice is played through a loudspeaker 160, captured by the microphone 110 in the system, and sent back to the far end. The far-end user then hears his or her own voice with a certain delay.
There are a number of known approaches to reducing acoustic echo in two-way communication systems. However, these known approaches face particular problems when applied to voice communication systems using personal computers, such as internet telephony and voice chat applications on personal computers.
1. Acoustic Echo Cancellation
Acoustic Echo Cancellation (AEC) is a digital signal processing technology which is used to remove the acoustic echo from a speaker phone in two-way (full duplex) or multi-way communication systems, such as traditional telephone or modern internet audio conversation applications.
With reference again to the example near end 100 of a typical two-way communication system illustrated in FIG. 1, acoustic echo cancellation is used to remove the echo of the far end user's voice. The example near end 100 includes a capture stream path and a render stream path for the audio data in the two directions. The far end of the two-way communication system is exactly the same. In the capture stream path in the figure, an analog to digital (A/D) converter 120 converts the analog sound mic(t) captured by microphone 110 to digital audio samples continuously at a sampling rate (fsmic). The digital audio samples are saved in the capture buffer 130 sample by sample. The samples are retrieved from the capture buffer in frame increments (herein denoted as “mic[n]”). Frame here means a number (N) of digital audio samples. The index ‘n’ is used to indicate relative sampling instants for the frames. Finally, the samples in mic[n] are processed, including encoding via a voice encoder 170, and sent to the other end.
In the render stream path, the system receives the encoded voice signal from the other end, decodes audio samples via voice decoder 180 and places the audio samples into a render buffer 140 in periodic frame increments (labeled “spk[n]” in the figure). Then the digital to analog (D/A) converter 150 reads audio samples from the render buffer sample by sample and converts them to an analog signal continuously at a sampling rate, fsspk. Finally, the analog signal is played by speaker 160.
In systems such as that depicted by FIG. 1, the near end user's voice is captured by the microphone 110 and sent to the other end. At the same time, the far end user's voice is transmitted through the network to the near end, and played through the speaker 160 or a headphone. In this way, both users can hear each other and two-way communication is established. But a problem occurs if a speaker is used instead of a headphone to play the other end's voice. For example, if the near end user uses a speaker as shown in FIG. 1, his microphone captures not only his voice but also an echo of the sound played from the speaker (labeled as “echo(t)”). In this case, the mic[n] signal that is sent to the far end user includes an echo of the far end user's voice. As a result, the far end user would hear a delayed echo of his or her own voice, which is likely to cause annoyance and provide a poor user experience to that user.
Practically, the echo echo(t) can be represented by the speaker signal spk(t) convolved with a linear response g(t) (assuming the room can be approximately modeled as a finite duration linear plant) as per the following equation:

echo(t) = g(t) * spk(t) = ∫₀^Te g(τ) spk(t − τ) dτ

where * means convolution and Te is the echo length or filter length of the room response.
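As a purely illustrative sketch of this echo model (not part of the described system), the following Python fragment convolves a synthetic speaker signal with an assumed room impulse response; the sampling rate, test signal, and impulse response are invented for illustration only:

    import numpy as np

    fs = 16000                                  # assumed sampling rate in Hz
    t = np.arange(0, 1.0, 1.0 / fs)
    spk = np.sin(2 * np.pi * 440 * t)           # stand-in for the rendered speaker signal spk(t)

    # Assumed room response g(t): a decaying random tail of length Te = 64 ms
    Te = int(0.064 * fs)
    g = 0.1 * np.exp(-np.arange(Te) / (0.2 * Te)) * np.random.randn(Te)

    # echo(t) = (g * spk)(t), truncated to the length of spk
    echo = np.convolve(spk, g)[:len(spk)]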
In order to remove the echo for the remote user, an AEC 210 is added in the system as shown in FIG. 2. When a frame of samples of the mic[n] signal is retrieved from the capture buffer 130, it is sent to the AEC 210. At the same time, when a frame of samples of the spk[n] signal is sent to the render buffer 140, it is also sent to the AEC 210. The AEC 210 uses the spk[n] signal from the far end to predict the echo in the captured mic[n] signal. Then, the AEC 210 subtracts the predicted echo from the mic[n] signal. This difference or residual is the clear voice signal (voice[n]), which is theoretically echo free and very close to the near end user's voice (voice(t)).
FIG. 3 depicts an implementation of the AEC 210 based on an adaptive filter 310. The AEC 210 takes two inputs, the mic[n] and spk[n] signals. It uses the spk[n] signal to predict the echo in the mic[n] signal. The prediction residual (difference of the mic[n] signal from the prediction based on spk[n]) is the voice[n] signal, which will be output as echo free voice and sent to the far end.
The actual room response (represented as g(t) in the above convolution equation) usually varies with time, due, for example, to changes in the position of the microphone 110 or speaker 160, body movement of the near end user, and even room temperature. The room response therefore cannot be pre-determined, and must be calculated adaptively at running time. The AEC 210 commonly is based on adaptive filters, such as the Least Mean Square (LMS) adaptive filter 310, which can adaptively model the varying room response.
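As a minimal sketch of how such an adaptive filter can operate, the following is a textbook normalized-LMS loop, not the particular implementation of AEC 210; the function name and parameter values are assumptions chosen for illustration:

    import numpy as np

    def nlms_aec(mic, spk, filt_len=1024, mu=0.5, eps=1e-6):
        # g_hat is the adaptive estimate of the room response g(t)
        g_hat = np.zeros(filt_len)
        voice = np.zeros(len(mic))
        for n in range(filt_len, len(mic)):
            x = spk[n - filt_len:n][::-1]            # most recent speaker samples, newest first
            echo_hat = g_hat @ x                     # predicted echo at sample n
            e = mic[n] - echo_hat                    # residual = near end voice estimate voice[n]
            g_hat += (mu / (x @ x + eps)) * e * x    # NLMS coefficient update
            voice[n] = e
        return voice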
The nature of adaptive filtering requires that the microphone signal and the reference or speaker signal be accurately aligned. In basic terms, the AEC has to determine which samples in the speaker signal (spk[n]) are needed to predict the echo at a given sample in the microphone signal (mic[n]). In practical terms, the AEC operates on two streams (the microphone and speaker samples), which generally are sampled by two different sampling clocks and may each be subject to delays. Accordingly, the same indices in the two streams are not necessarily aligned in physical time. On personal computers, timestamps are typically used to align the microphone and speaker signals, since the timestamp represents the physical time at which a sample is rendered (in the speaker stream) or captured (in the microphone stream). Frames of the speaker spk[n] and microphone mic[n] signals are stored in separate data queues, and the timestamps are used to make adjustments to the speaker (or microphone) data queue in order to align the speaker and microphone signals. A difference in render and capture sampling (clock) rates is called drift, and to compensate for this, periodic single sample adjustments commensurate with the drift rate are made to the speaker data queue. Also, when a glitch occurs (i.e., data loss of one or multiple samples in the speaker or microphone streams), an adjustment of many samples of data may be made at once in the speaker data queue.
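A small hypothetical sketch of this timestamp-based alignment follows; the helper name, its sign convention, and its inputs are assumptions made for illustration only:

    def speaker_queue_adjustment(mic_ts, spk_ts, fs_spk):
        # mic_ts: capture timestamp (seconds) of the current microphone sample
        # spk_ts: render timestamp (seconds) of the speaker sample currently
        #         paired with it in the speaker data queue
        # Returns the number of speaker samples to drop (positive) or
        # insert (negative) to bring the two streams back into alignment.
        misalignment_sec = mic_ts - spk_ts
        return int(round(misalignment_sec * fs_spk))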
However, in practice, these timestamps are noisy and sometimes can be very wrong. One reason for this is that major operating systems, such as the Microsoft Windows XP operating system, support numerous different audio devices. It is quite common that some audio devices and their drivers cannot provide accurate timestamps. In such cases, the signals are often out of alignment, and the AEC fails to properly cancel echoes.
2. Voice Switching
Voice switching is a method used for half-duplex two-way communication. A typical example of such a communication system has two signal channels: an incoming channel that receives the voice signal coming from the far end, and an outgoing channel that sends the near end voice signal to the far end. In a person-to-person scenario, the far end may be another end user device. Alternatively, in a conference or multi-user scenario, the far end may be a server that hosts the multiple user conference. Based on voice activity being present at the two ends, the channels are selectively turned on or off. In other words, whenever there is voice activity in one channel, the other channel is turned off. By selectively switching off either the incoming or outgoing channel based on voice activity in this way, the echo path is broken, which effectively removes acoustic echoes. The drawback of voice switching, however, is that it provides only a half-duplex mode of communication, resulting in loss of easy interruptibility in conversations.
Voice switching is commonly used on low-end desktop phones in speaker phone mode. A basic voice switching algorithm simply compares the strength of the near-end and far-end voices and turns on the communication channel for the end with the stronger voice. It is relatively simple to compare voice activity on a standalone or dedicated phone device, because the microphone and speaker gains are known. During double talk scenarios (i.e., in which both ends are talking simultaneously), it is easy to estimate the echo strength and thus easy to compare which voice is stronger. However, for voice communication applications on personal computers, any microphone or speaker may be connected to the computer, and the gains can be adjusted by the users at any time. This complicates the ability to estimate the echo strength, and therefore to compare the voice strength on the channels to accurately determine which channel should be switched on.
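A minimal sketch of such a basic strength-comparison switch appears below (assuming known, fixed gains as on a dedicated phone; the function name, frame-energy comparison, and bias factor are illustrative assumptions):

    def voice_switch(near_frame, far_frame, bias=1.0):
        # Compare short-term frame energies and open only the stronger channel.
        # bias > 1.0 can favor the currently active end to reduce rapid toggling.
        near_energy = sum(s * s for s in near_frame)
        far_energy = sum(s * s for s in far_frame)
        if near_energy * bias > far_energy:
            return "outgoing"   # near end talks; the incoming channel is muted
        return "incoming"       # far end talks; the outgoing channel is muted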
SUMMARY
The following Detailed Description concerns techniques (implemented via methods, devices and systems) to reduce acoustic echo in a two-way voice communication system. According to the described techniques, the system utilizes acoustic echo cancellation for full duplex voice communications between two communication end devices under normal operating conditions. Additionally, the system includes a voice switching mode as a fall back for situations where acoustic echo cancellation fails, or would likely fail, to function properly. The technique concerns ways to appropriately decide between use of the normal operation mode utilizing acoustic echo cancellation and use of the voice switching mode.
One way of making the decision between these modes of operation is to enable the voice switching mode based independently on each of the timestamp-based factors that can lead to speaker (or microphone) data queue adjustments during acoustic echo cancellation operation.
According to the technique described herein, the decision between modes relies on an overall measurement of timestamp quality. More particularly, the combined effect of all the timestamp parameter variations on the acoustic echo cancellation process occurs through adjustments to the input data queue (e.g., adjusting the relative offset between the speaker and microphone queues or buffers). Hence, the overall impact of timestamp quality on the acoustic echo cancellation can be investigated by examining the rate at which adjustments are being made to the queue. In general summary, the technique therefore chooses (or remains in) the acoustic echo cancellation mode (i.e., does not enable voice switching) if adjustments to the queue are being made in a consistent, periodic manner and the frequency of adjustments is within the tolerance for drift of the acoustic echo cancellation process.
In one example implementation, the median and median absolute deviation of the rate at which adjustments are made to the queue are used as a measure of queue update consistency. This implementation of the mode decision technique then decides to enable voice switching if the median drift rate, the median absolute deviation of the estimated drift rate (both based on queue adjustments), or the rate at which glitches occur exceeds a pre-determined threshold. These checks are applied only after a minimum number of adjustments have been made, so as to ensure that the derived statistics are reliable.
This Summary is provided to introduce a selection of concepts in a simplified form that is further described below in the Detailed Description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter. Additional features and advantages of the invention will be made apparent from the following detailed description of embodiments that proceeds with reference to the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram illustrating one end of a typical two-way communication system in the prior art.
FIG. 2 is a block diagram of the two-way communication system of FIG. 1 with audio echo cancellation.
FIG. 3 is a block diagram of an implementation of audio echo cancellation based on an adaptive filter.
FIG. 4 is a flow diagram illustrating an end device of a two-way voice communication system including selection of acoustic echo cancellation or voice switching mode operation based on timestamp quality.
FIG. 5 is a flow diagram illustrating a decision to operate in acoustic echo cancellation mode or to enable voice switching mode based on timestamp quality in the two-way voice communication end device of FIG. 4.
FIG. 6 is a flow diagram illustrating a process to evaluate timestamp quality for the acoustic echo cancellation or voice switching mode decision of FIG. 5 using measurements of consistency of the rate at which queue adjustments are made.
FIG. 7 is a block diagram of a generalized operating environment in conjunction with which various described embodiments may be implemented.
DETAILED DESCRIPTION
The following detailed description concerns various techniques and systems for providing acoustic echo cancellation with voice switching as a fall back mode in two-way communication systems. The described techniques provide a mode decision that reliably and accurately assesses whether acoustic echo cancellation is feasible based on timestamp quality, by measuring the consistency of the frequency at which adjustments are made to the input data queues due to timestamp drift. The mode decision techniques are described with particular application to personal computer based telephony and voice chat applications, where the voice switching technique may be employed as a fall back measure in case acoustic echo cancellation fails to work properly (such as due to inaccurate or “noisy” timestamps preventing alignment of the microphone and speaker signals). However, the techniques to decide whether to fall back to voice switching can be applied more broadly to other two-way voice communication systems and scenarios.
The various techniques and tools described herein may be used independently. Some of the techniques and tools may be used in combination. Various techniques are described below with reference to flowcharts of processing acts. The various processing acts shown in the flowcharts may be consolidated into fewer acts or separated into more acts. For the sake of simplicity, the relation of acts shown in a particular flowchart to acts described elsewhere is often not shown. In many cases, the acts in a flowchart can be reordered.
I. Overview of Two-Way Communication System with Improved AEC/Voice Switch Mode Selection
FIG. 4 illustrates one end of a two-way communication system that includes the improved AEC or voice switching mode selection, as described more fully below. The other end is typically, but not necessarily, identical. Each end may be a communication device, such as a phone device, a personal computer with a telephony or voice chat application, or a game console, among other examples. In some implementations, the far end can be a communication server, such as a voice conferencing host server.
The illustrated near end 100 includes a capture stream path and a render stream path for the audio data in the two directions. In the capture stream path in the figure, an analog to digital (A/D) converter 120 converts the analog sound captured by microphone 110 to digital audio samples continuously at a sampling rate (fsmic). The digital audio samples are saved in capture buffer 130 sample by sample. The samples are retrieved from the capture buffer in frame increments (herein denoted as “mic[n]”). Frame here means a number (N) of digital audio samples. Finally, samples in mic[n] are processed, including encoding via a voice encoder 170 and sent to the other end.
In the render stream path, the system receives the encoded voice signal from the other end, decodes audio samples via voice decoder 180 and places the audio samples into a render buffer 140 in periodic frame increments (labeled “spk[n]” in the figure). Then the digital to analog (D/A) converter 150 reads audio samples from the render buffer sample by sample and converts them to an analog signal continuously at a sampling rate, fsspk. Finally, the analog signal is played by speaker 160.
The capture and render buffers (with associated histories) 130, 140 are also referred to herein as microphone and speaker data queues, respectively.
The illustrated communication system end device includes selective AEC or voice switching mode operation 410 to avoid or reduce acoustic echo (echo(t)) of the far end speaker's voice picked up by the microphone 110 to be sent back to the far end.
II. AEC or Voice Switch Mode Decision
In general use, the selective AEC/voice switching mode operation 410 of the two-way communication system provides full duplex two-way communication using acoustic echo cancellation 210, which is implemented as illustrated in FIG. 2 and discussed above. The selective mode operation instead enables the voice switching mode as a fall back mechanism under operating conditions in which the acoustic echo cancellation 210 would fail to work properly to cancel echo or would introduce unacceptable noise or distortion effects. Accordingly, the two-way communication end device 400 analyzes the AEC mode operation via one or more quality checks to determine whether the communication quality is sufficient for acoustic echo cancellation to work properly, so as to determine which operation mode to use.
To maintain the speaker and microphone streams in synchrony for proper AEC behavior, information from the timestamps is used to make adjustments to the speaker data queue (or, in alternative implementations, the adjustments can be made to the microphone queue). Depending on the physical conditions, adjustments to the speaker data queue may be required due to:
(1) Glitches: A glitch occurs due to data loss of one or multiple samples in the speaker or microphone streams. When a glitch occurs, an adjustment of many samples of data may be made at once in the speaker queue.
(2) Drift: A difference in render and capture sampling (clock) rates is called drift, and results in periodic single sample adjustments commensurate with the drift rate.
(3) Timestamp Noise: Additionally, the timestamp data is not always reliable or noise free. This can lead to spurious queue adjustments that can severely impact AEC performance.
In more detail, a timestamp marks the time when the first sample of a data frame is captured or rendered, such as at the A/D converter 120 (FIG. 4) and D/A converter 150, respectively. Ideally, the timestamp should match the device's stated sampling rate perfectly. For example, assuming the two-way communication end device 400 has a sampling rate of 16000 Hz and a 10 millisecond data frame is used by the device for capturing and rendering audio signals, then an audio data frame has 160 samples. This means that for each captured or rendered frame containing 160 samples, the timestamp of the first sample of consecutive frames should increase by exactly 10 milliseconds. In other words, the frame length calculated from the timestamps of consecutive frames should be exactly 10 milliseconds.
In practice, there may be errors in the timestamps, as discussed more fully in the background. This can result in the frame length calculated from timestamps being more or less than the expected length (e.g., 10 milliseconds in this example implementation). If the long term average of the calculated frame length varies from the expected frame length, then the difference is called the timestamp drift. The drift divided by the nominal frame length is called the timestamp drift rate. A non-zero timestamp drift rate signifies that the communication end device's actual sampling rate deviates from its claimed or nominal rate. Finally, for each audio data frame, the difference of the respective frame's calculated length from the long term average frame length is termed the “timestamp noise.”
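The following short sketch restates these definitions in code (the function name and seconds-based units are assumptions; timestamps[i] is taken to be the timestamp of the first sample of frame i):

    def timestamp_statistics(timestamps, nominal_frame_sec=0.010):
        # Frame lengths as calculated from consecutive timestamps
        calc_lens = [t1 - t0 for t0, t1 in zip(timestamps, timestamps[1:])]
        avg_len = sum(calc_lens) / len(calc_lens)   # long term average frame length
        drift = avg_len - nominal_frame_sec         # timestamp drift
        drift_rate = drift / nominal_frame_sec      # timestamp drift rate
        noise = [l - avg_len for l in calc_lens]    # per-frame timestamp noise
        return drift, drift_rate, noise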
Further, as already remarked, queue adjustments also may result from audio glitches. Audio glitches are discontinuities in the audio stream. For audio data capture (e.g., the microphone 110 and A/D converter 120), audio glitches may occur when the application is not able to retrieve samples from the capture buffer in time, so that the capture buffer becomes overfull, which can result in lost audio capture samples. For audio data rendering (e.g., via the D/A converter 150 and loudspeaker 160), audio glitches can occur when the application does not fill the render buffer quickly enough, so the audio rendering device has no data to play.
Even when the timestamp data matches the physical situation accurately and adjustments to the queue are necessary to maintain correct AEC operation, the discontinuity caused by an adjustment has a negative impact on AEC performance. Beyond a point at which the periodic or transient adjustments are made at too high a rate, the adaptive filters used for AEC stop working entirely. In such situations, the two-way communication system should instead enable the voice switching mode, which is far less sensitive to timing mismatches.
FIG. 5 illustrates a top level decision 500 made by the two-way communication system 400 to select between AEC or voice switching mode operation 410 so as to provide an echo free experience. For this decision 540, the two-way communication system evaluates 510 the quality of the microphone and speaker stream timestamps 520, 530. If the timestamp quality is found to be poor, the two-way communication system falls back to operate in the voice-switching (half duplex) mode 550. Else, the two-way communication system continues to operate in full duplex with AEC mode 560.
III. Timestamp Quality Evaluation
With reference now to FIG. 6, the evaluation 530 of the timestamp quality for the AEC or voice switching mode decision 500 (FIG. 5) is based on the rate of adjustments made to the queue during AEC operation. The evaluation considers the overall impact of timestamp quality on the acoustic echo cancellation by examining the rate at which adjustments are being made to the queue. In summary, the evaluation assesses whether adjustments to the queue are being made in a consistent/periodic manner, and the frequency of adjustments is within tolerance for drift of the acoustic echo cancellation process.
For the evaluation 530, the two-way communication end device 400 calculates an estimate of the drift rate based on queue adjustments (action 620). As discussed above, the timestamp drift generally results in periodic single sample adjustments of the queue. The frequency at which these queue adjustments are made therefore relates to the timestamp drift rate, and can be used as an estimate of the drift rate.
The timestamp quality evaluation then calculates consistency statistics of the periodic queue adjustments made by the AEC process to compensate for timestamp drift. In one example timestamp quality evaluation implementation, the statistical median and median absolute deviation of the estimated drift rate (action 630) are used as measures of how periodically and consistently the adjustments are made by the acoustic echo cancellation process to the queue. Because, under normal operating conditions, the number of adjustments made in a second is quite low, the median and median absolute deviation of the queue adjustment rate provide a robust estimate of consistency. However, alternative timestamp quality evaluation implementations can use other statistical calculations of the consistency of drift rate queue adjustments.
In order to ensure that the derived statistics are reliable, the evaluation 530 first requires that a minimum number of adjustments to the queue occur (action 610) before any timestamp quality checks are performed. In one implementation of the evaluation 530, a minimum number of queue adjustments must be observed over a 10 second window of time before checks on the calculated statistics are performed.
After at least this minimum time window (action 610) has passed, the two-way communication end device compares the median and median absolute deviation statistics to threshold values (action 640), which reflect the acoustic echo cancellation's tolerance for timestamp drift. For one example AEC implementation, thresholds of 0.1% for the median drift rate and 0.05 per 1000 for the median absolute deviation are used. However, other AEC implementations may have a lower or higher tolerance for timestamp drift. Accordingly, alternative implementations of the timestamp quality evaluation 530 may apply other threshold values for the median estimated drift rate and the median absolute deviation of the drift rate. If the calculated median and/or median absolute deviation of the estimated drift rate exceed the threshold values, then the queue adjustments are considered too frequent and/or too inconsistent, exceeding the AEC's tolerance for queue adjustments.
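A brief sketch of this consistency check with the example thresholds follows (the function and variable names are illustrative; drift_rate_estimates is assumed to hold the drift rates estimated from the queue adjustment frequency over successive measurement windows):

    import statistics

    MEDIAN_DRIFT_THRESHOLD = 0.001    # example threshold: 0.1% median drift rate
    MAD_THRESHOLD = 0.05 / 1000       # example threshold: 0.05 per 1000

    def drift_consistency_ok(drift_rate_estimates):
        med = statistics.median(drift_rate_estimates)
        mad = statistics.median([abs(r - med) for r in drift_rate_estimates])
        return abs(med) <= MEDIAN_DRIFT_THRESHOLD and mad <= MAD_THRESHOLD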
In addition to the periodic consistency check (action 640), the two-way communication end device also applies a glitch frequency check (action 650). As discussed previously, audio glitches occur when there is a loss of multiple samples of the microphone and/or speaker queues. This requires a large (multiple samples) adjustment of the queue. For the glitch check (action 650), the two-way communication end device checks whether a glitch of greater than 4 milliseconds has occurred more frequently than once per second. If this glitch frequency is exceeded, then the glitch frequency check is failed.
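In code, the glitch frequency check might look like the following sketch (names and units are assumptions; glitches is taken to be a list of (time_sec, size_sec) pairs observed over the monitoring window):

    def glitch_check_ok(glitches, window_sec):
        # Fail if a glitch longer than 4 ms occurred more often than once per second.
        big_glitches = [t for (t, size_sec) in glitches if size_sec > 0.004]
        return len(big_glitches) / window_sec <= 1.0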
If the queue adjustments made by the AEC process either exceed the consistency statistics thresholds or fail the glitch frequency check, then the timestamp quality evaluation determines that the timestamp quality is poor (action 660). If the queue adjustments pass both the drift rate consistency check (640) and the glitch frequency check (650), then the timestamp quality is considered adequate.
The two-way communication end device can perform the timestamp quality check at periodic intervals, or simply one or more times at the start of the communication session. In one example implementation, each end of the two-way communication system performs the quality check at preset intervals after the communication session (e.g., a voice call or conference) is initiated. The initial quality check is done about 4 seconds after the communication session starts, and is then repeated at 10 second intervals. If all quality checks produce the result that sufficiently high quality for acoustic echo cancellation exists, then the two-way communication system end device may stop quality checks after 100 seconds. Initially, the two-way communication system end device operates in full duplex mode using acoustic echo cancellation, and continues with that operation so long as the quality checks continue to produce the sufficiently high quality result. However, if a quality check fails, then the two-way communication system end switches over to the voice switching mode of operation. In alternative implementations, the two-way communication system may begin in half-duplex mode, continue quality checks throughout the communication session, switch to full duplex communication with acoustic echo cancellation when sufficiently high quality is detected, and otherwise remain in the voice switching mode. The quality checking is performed independently for each end device, which may result in one end device having sufficiently high quality to operate in full duplex with acoustic echo cancellation while the other device has insufficient quality and falls back to the voice switching mode.
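The example check schedule can be summarized in the following sketch (the session object and its sleep_until method are hypothetical, and the times are the example values from the text):

    def run_quality_schedule(session, timestamp_quality_ok):
        # Start in full duplex with AEC; fall back permanently on a failed check.
        mode = "AEC"
        next_check = 4.0                     # first check about 4 s into the session
        while next_check <= 100.0 and mode == "AEC":
            session.sleep_until(next_check)
            if not timestamp_quality_ok():
                mode = "VOICE_SWITCHING"     # fall back for the rest of the session
            next_check += 10.0               # repeat at 10 s intervals
        return mode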
IV. Computing Environment
The two-way communication end device 400 shown in FIG. 4 can be implemented as a dedicated or special purpose communication device (e.g., a desktop phone), in which the selective AEC/voice switching mode operation 410 is implemented using a digital signal processor programmed by firmware or software to operate as illustrated in FIGS. 5 and 6.
Alternatively, the two-way communication system can be implemented using a general purpose computer with suitable programming to perform the selective AEC/voice switching mode operation using a digital signal processor on a sound card, or even the central processing unit of the computer to perform the digital audio signal processing. For example, the two-way communication system can be a laptop or desktop computer with voice communication software (e.g., a telephony, voice conferencing or voice chat application software). Alternatively, the two-way communication system can be a mobile computing device that provides voice communication. FIG. 7 illustrates a generalized example of a suitable computing environment 700 in which the two-way communication system 400 with selective AEC/voice switching mode operation 410 may be implemented on such general purpose computers. The computing environment 700 is not intended to suggest any limitation as to scope of use or functionality, as described embodiments may be implemented in diverse general-purpose or special-purpose computing environments, as well as dedicated audio processing equipment.
With reference to FIG. 7, the computing environment 700 includes at least one processing unit 710 and memory 720. In FIG. 7, this most basic configuration 730 is included within a dashed line. The processing unit 710 executes computer-executable instructions and may be a real or a virtual processor. In a multi-processing system, multiple processing units execute computer-executable instructions to increase processing power. The processing unit also can comprise a central processing unit and co-processors, and/or dedicated or special purpose processing units (e.g., an audio processor or digital signal processor, such as on a sound card). The memory 720 may be volatile memory (e.g., registers, cache, RAM), non-volatile memory (e.g., ROM, EEPROM, flash memory), or some combination of the two. The memory 720 stores software 780 implementing one or more audio processing techniques and/or systems according to one or more of the described embodiments.
A computing environment may have additional features. For example, the computing environment 700 includes storage 740, one or more input devices 750, one or more output devices 760, and one or more communication connections 770. An interconnection mechanism (not shown) such as a bus, controller, or network interconnects the components of the computing environment 700. Typically, operating system software (not shown) provides an operating environment for software executing in the computing environment 700 and coordinates activities of the components of the computing environment 700.
The storage 740 may be removable or non-removable, and includes magnetic disks, magnetic tapes or cassettes, CDs, DVDs, or any other medium which can be used to store information and which can be accessed within the computing environment 700. The storage 740 stores instructions for the software 780.
The input device(s) 750 may be a touch input device such as a keyboard, mouse, pen, touchscreen or trackball, a voice input device, a scanning device, or another device that provides input to the computing environment 700. For audio or video, the input device(s) 750 may be a microphone, sound card, video card, TV tuner card, or similar device that accepts audio or video input in analog or digital form, or a CD or DVD that reads audio or video samples into the computing environment. The output device(s) 760 may be a display, printer, speaker, CD/DVD-writer, network adapter, or another device that provides output from the computing environment 700.
The communication connection(s) 770 enable communication over a communication medium to one or more other computing entities. The communication medium conveys information such as computer-executable instructions, audio or video information, or other data in a data signal. A modulated data signal is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media include wired or wireless techniques implemented with an electrical, optical, RF, infrared, acoustic, or other carrier.
Embodiments can be described in the general context of computer-readable media. Computer-readable media are any available media that can be accessed within a computing environment. By way of example, and not limitation, with the computing environment 700, computer-readable media include memory 720, storage 740, and combinations of any of the above.
Embodiments can be described in the general context of computer-executable instructions, such as those included in program modules, being executed in a computing environment on a target real or virtual processor. Generally, program modules include routines, programs, libraries, objects, classes, components, data structures, etc. that perform particular tasks or implement particular data types. The functionality of the program modules may be combined or split between program modules as desired in various embodiments. Computer-executable instructions for program modules may be executed within a local or distributed computing environment.
For the sake of presentation, the detailed description uses terms like “determine,” “receive,” and “perform” to describe computer operations in a computing environment. These terms are high-level abstractions for operations performed by a computer, and should not be confused with acts performed by a human being. The actual computer operations corresponding to these terms vary depending on implementation.
In view of the many possible embodiments to which the principles of our invention may be applied, we claim as our invention all such embodiments as may come within the scope and spirit of the following claims and equivalents thereto.