WO2012056198A1 - Audio routing - Google Patents
- Publication number
- WO2012056198A1 (PCT/GB2011/001533)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- audio input
- display device
- audio
- determining
- display
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/165—Management of the audio stream, e.g. setting of volume, audio stream path
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/14—Digital output to display device ; Cooperation and interconnection of the display device with other functional units
- G06F3/1423—Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
Definitions
- the processor 10 is connected to both of the microphones 32a and 32b and can measure the amplitude of the audio signal received by each individual microphone 32. This amplitude can be calculated on a ten-second running average, for example. From the different measured amplitudes, the processor 10 can detect which of the microphones 32a and 32b is receiving a louder input from the user 30. From this information, it can be inferred that the user 30 is directing their audio at the display device 12 that is associated with the audio input device 32 with the greater measured amplitude. The audio from the microphone 32 is routed accordingly.
- the user 30 can be seen to be closer to the right-hand display device 12b, having shifted their position from half-way in-between the two display devices 12 (in Figure 3).
- the user can be assumed to be directing their audio output to the application represented by the window 18b.
- microphones 32 will both pick up the user's audio output, but the amplitude of the received signal will be different.
- the amplitude of the signal received by the microphone 32b will be greater than the amplitude of the signal received by the microphone 32a and the audio from the user 30 will be routed accordingly to the correct application.
- a pair of audio input devices 32 can be used to determine which display device 12 is the target of the audio. This is illustrated in Figure 5, where the user 30 has connected their laptop computer to the data processing system, and is using the display 12c of the laptop computer to further extend the usable display area.
- the three display devices 12 are arranged in a horizontal line, and the two display devices 12a and 12b are provided with respective audio input devices 32a and 32b.
- the position of the user 30 changes the amplitude of the audio signals received by the audio input devices 32, both in relative terms and in absolute terms.
- when audio input device 32a has the greater received amplitude, the leftmost display device 12a is the target of the user's audio.
- when the audio input device 32b has the greater received amplitude, then the absolute levels of received audio need to be considered, in order to determine whether the user is targeting display device 12b or 12c. Whichever display device 12 is the target of the user's audio, the routing is arranged accordingly.
- the method of operating the data processing system is summarised in Figure 6.
- the method comprises, firstly step S1 of running a plurality of applications, secondly step S2 of displaying a window for each application, thirdly step S3 of receiving an audio input for an application, fourthly step S4 of determining the display device towards which the audio input is directed, and finally step S5 of routing the audio input to the application with a window displayed on the determined display device.
- in step S4, the determination is made of the display device towards which the user is directing their audio.
- the user does not identify the desired display device directly, but the target display device is inferred, either from the user's position or from calculations made from the user's audio input, or a combination of these two factors.
- the user's eyes and/or face can be tracked with a camera, to determine the display device towards which the user is directing the audio.
- different amplitudes calculated at different microphones can be used to determine the direction of the user's audio input.
- in step S5, the audio input from the user is then routed to an application that has a window on the display device that was determined in step S4. If there is more than one such application, then a priority decision has to be taken as to which application should receive the inputted audio. This could be based on a priority table, for example, or may be based upon the detection of a specific prior action. In the latter case, for example, a video conferencing application may have made an output to the user, and the user's audio input is routed to that specific application as a response.
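The priority decision described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the application names, the priority-table layout, and the "last application to make an output" rule's precedence over the table are all assumptions.

```python
# Hypothetical sketch of the step-S5 priority decision: several applications
# have windows on the determined display device, and either a recent output
# (e.g. a video-conference prompt) or a static priority table picks the
# recipient of the audio. All names and values here are illustrative.

def pick_application(candidates, priority, last_output=None):
    """Prefer the application that most recently made an output to the user;
    otherwise fall back to a static priority table (lower rank wins)."""
    if last_output in candidates:
        return last_output
    return min(candidates, key=lambda app: priority.get(app, float("inf")))

apps = ["email", "video-conference"]
priority = {"video-conference": 0, "email": 1}
print(pick_application(apps, priority))                       # -> video-conference
print(pick_application(apps, priority, last_output="email"))  # -> email
```

Whether a recent prior action should always override the static table is a design choice; the sketch gives it precedence because the text frames the routed audio as a response to that action.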
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- User Interface Of Digital Computer (AREA)
- Digital Computer Display Output (AREA)
- Controls And Circuits For Display Device (AREA)
Abstract
A data processing system comprises a processing device, a plurality of display devices connected to the processing device and an audio input device connected to the processing device. A method of operating the system comprises the steps of running a plurality of applications, displaying a window for each application, receiving an audio input for an application, determining the display device towards which the audio input is directed, and routing the audio input to the application with a window displayed on the determined display device.
Description
AUDIO ROUTING

This invention relates to a data processing system and to a method of operating the data processing system. The system is operated to route audio received from an audio input device.
In desktop computing, it is now common to use more than one display device. Traditionally, a user would have a computer with a single display device attached, but now it is possible to have more than one display device attached to the computer, which increases the usable desktop area for the worker. For example, International Patent Application Publication WO 2007/020408 discloses a display system which comprises a plurality of display devices, each displaying respectively an image, a data processing device connected to each display device and controlling the image displayed by each display device, and a user interface device connected to the data processing device. Connecting multiple display devices to a computer is a proven method of improving productivity.
Many applications exist that can be run by a computer that require or use an audio input. For some such applications, a microphone, acting as an audio input device, is connected to the computer and this can be used to provide the audio for the application. For example, if a user is communicating with one or more colleagues who are located remotely from the user, then the user may have an Internet-based telephone conversation that will be run by an application that is running on their computer. Other utility applications may require audio inputs to control components within the application or may use voice recognition technologies to produce a text output, for example.
Given the use of multiple display devices, described above, it is common for users who operate such desktop set-ups to have a large number of different applications running at the same time. Each application that is currently running will have a respective window, which will be displayed by a
respective display device. It is quite possible that a user will have running multiple applications that each utilise an audio input. In order to use these applications effectively, the user will have to ensure that the audio input being generated via the audio input device is directed to the correct application, for example by the continual changing of device settings or the continual opening and closing of applications. These existing techniques are not efficient enough in many situations.
It is therefore an object of the invention to improve upon the known art. According to a first aspect of the present invention, there is provided a method of operating a data processing system, the system comprising a processing device, a plurality of display devices connected to the processing device and an audio input device connected to the processing device, the method comprising the steps of running a plurality of applications, displaying a window for each application, receiving an audio input for an application, determining the display device towards which the audio input is directed, and routing the audio input to the application with a window displayed on the determined display device.
According to a second aspect of the present invention, there is provided a data processing system, the system comprising a processing device, a plurality of display devices and an audio input device connected to the processing device, wherein the processing device is arranged to run a plurality of applications, the display devices are arranged to display a window for each application, the audio input device is arranged to receive an audio input for an application, and the processing device is arranged to determine the display device towards which the audio input is directed and to route the audio input to the application with a window displayed on the determined display device.
According to a third aspect of the present invention, there is provided a computer program product on a computer readable medium for operating a data processing system, the system comprising a processing device, a plurality of display devices connected to the processing device and an audio input device connected to the processing device, the product comprising
instructions for running a plurality of applications, displaying a window for each application, receiving an audio input for an application, determining the display device towards which the audio input is directed, and routing the audio input to the application with a window displayed on the determined display device.
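The running, displaying, receiving, determining and routing steps recited in these aspects can be sketched as a minimal routing function. This is an illustrative sketch only: the function names, the window-to-display mapping, and the stub determiner are assumptions, not details from the claims.

```python
# Minimal sketch of the claimed method: given a mapping from running
# applications to the display device showing each application's window,
# route a received audio input to the application whose window is on the
# determined display device. All names here are illustrative assumptions.

def route_audio(windows, audio, determine_display):
    """Route an audio input to the application whose window is displayed
    on the display device towards which the audio is directed."""
    target = determine_display(audio)        # determining step
    for app, display in windows.items():     # routing step
        if display == target:
            return app
    return None                              # no window on the target display

# Two running applications, each with a window on its own display device.
windows = {"voice-app-1": "display-a", "voice-app-2": "display-b"}
# Stub determiner standing in for face tracking or amplitude comparison.
picked = route_audio(windows, b"\x00\x01", lambda audio: "display-b")
print(picked)  # -> voice-app-2
```

In a real system `determine_display` would be backed by one of the two mechanisms the description develops: video-based face or eye tracking, or amplitude differentials between microphones.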
Owing to these aspects of the invention, it is possible to direct received audio to the correct application by determining towards which display device a user will be talking. For example, if the user has two different voice messaging applications open, each on a different microphone-equipped monitor, the data processing system can determine to which monitor the user is speaking and accordingly route the microphone audio to the appropriate application. Thus, the user can switch conversations simply by talking towards a different display device. Much research exists on microphone arrays, but most of this research focuses on determining the position of a person talking, not on taking actions based on the direction the person is talking.
Preferably, the system further comprises a video input device connected to the processing device and the step of determining the display device towards which the audio input is directed comprises determining the direction of a user from the input of the video input device. An alternative method by which the data processing system can determine the display device to which the user is directing audio input is to use image processing techniques on a video input of the user. Only a single audio input device is required, and the user will provide their audio input through this device. The user's direction is determined from face-tracking or eye-tracking and will determine the display device towards which the user is currently facing. The audio input that the user makes is routed accordingly.
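As a concrete illustration of this video-based determination, the sketch below assumes a face- or eye-tracker that reports the user's horizontal head-yaw angle, and picks the display device whose angular position is nearest to that angle. The tracker output format and the display angles are assumptions for illustration, not details from the text.

```python
# Hypothetical sketch: map an estimated head-yaw angle (in degrees, as
# reported by an assumed face- or eye-tracking stage) to the display device
# the user is currently facing. Display angular positions are illustrative.

def select_display(yaw_degrees, display_angles):
    """Return the index of the display whose angular position is closest
    to the user's current head yaw."""
    return min(range(len(display_angles)),
               key=lambda i: abs(display_angles[i] - yaw_degrees))

# Two displays: left-hand at -20 degrees, right-hand at +20 degrees
# relative to the camera's optical axis.
displays = [-20.0, 20.0]
print(select_display(-15.0, displays))  # facing the left display  -> 0
print(select_display(12.0, displays))   # facing the right display -> 1
```

The single microphone's audio is then routed to the application with a window on the selected display, and the selection is re-evaluated whenever the tracker reports a new orientation.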
Advantageously, the system comprises a plurality of audio input devices and the step of determining the display device towards which the audio input is directed comprises determining amplitude differentials between the audio inputs and selecting an associated display device accordingly. The data processing system can be configured so that multiple microphones are being used. The processor can detect the amplitude of the received audio signal at each microphone and can use this information to determine the direction of the
user's audio. For example, there may be two display devices in the system, each with a respective audio input device. The audio input device with the greatest amplitude of the received audio signal will indicate that the associated display device is the one to which the user is directing their audio.
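A sketch of this amplitude comparison, assuming one block of audio samples per microphone, might look as follows. The sample values and the microphone-to-display association are illustrative assumptions.

```python
import math

# Hypothetical sketch: compare the received signal amplitude at each
# microphone and pick the loudest one; its associated display device is
# taken to be the target of the user's audio. Data here is illustrative.

def rms(samples):
    """Root-mean-square amplitude of a block of audio samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def loudest_mic(blocks):
    """Return the index of the microphone whose block has the greatest RMS."""
    return max(range(len(blocks)), key=lambda i: rms(blocks[i]))

# Microphone 0 (left display) quiet, microphone 1 (right display) louder.
mic_blocks = [[0.1, -0.1, 0.05, -0.05], [0.4, -0.5, 0.45, -0.4]]
print(loudest_mic(mic_blocks))  # -> 1
```

In practice the per-microphone amplitude would be smoothed before comparison, for example with a running average over several seconds, so that brief noises do not flip the routing.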
In one embodiment, the number of display devices is greater than the number of audio input devices and the step of determining the display device towards which the audio input is directed further comprises mapping the determined amplitude differentials to an arrangement of the display devices. If more than two display devices are present in the system, then it is still possible to determine the direction of the user's audio output with just two audio input devices, using a process of comparing the measured amplitudes at each audio input device to a known arrangement of the display devices.
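One way such a mapping might look, assuming an arrangement like that of Figure 5 (three displays in a horizontal line, with microphones under the left and middle displays only), is sketched below. The decision thresholds are illustrative assumptions: when the left microphone is louder the leftmost display is chosen, and when the middle microphone is louder the absolute level decides between the middle display and the rightmost, microphone-less one.

```python
# Hypothetical sketch: map amplitudes from two microphones (a: under the
# left display, b: under the middle display) onto three displays "A", "B"
# and "C" arranged left to right. The close_threshold value is an
# illustrative assumption, not a figure from the text.

def map_to_display(amp_a, amp_b, close_threshold=0.5):
    if amp_a >= amp_b:
        return "A"   # left microphone louder: leftmost display is the target
    # Middle microphone louder: a high absolute level suggests the user is
    # facing the middle display; a low level suggests the farther right one.
    return "B" if amp_b >= close_threshold else "C"

print(map_to_display(0.8, 0.3))  # -> A
print(map_to_display(0.2, 0.7))  # -> B
print(map_to_display(0.1, 0.3))  # -> C
```

A deployed system would calibrate such thresholds to the actual room geometry and microphone sensitivities rather than hard-coding them.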
Embodiments of the present invention will now be described, by way of example only, with reference to the accompanying drawings, in which:- Figure 1 is a schematic diagram of a data processing system,
Figure 2 is a schematic diagram of a processing device of the data processing system,
Figures 3 to 5 are further schematic diagrams of embodiments of the data processing system, and
Figure 6 is a flowchart of a method of operating the data processing system.
A data processing system is shown in Figure 1. The system comprises a processing device 10, two identical display devices 12 and two user interface devices 14. The user interface devices 14 are a keyboard 14a and a mouse 14b. The system shown in Figure 1 is a standard desktop computer with an additional display device 12b, composed of discrete components that are locally located, but it could equally be a device such as a laptop computer, or a suitably enabled handheld device such as a mobile phone or PDA (personal digital assistant), using an additional display 12b. Similarly, the system may comprise part of a networked or mainframe computing system, in which case
the processing device 10 may be located remotely from the display devices 12 and the user input devices 14, or indeed may have its function distributed amongst separate devices.
The display devices 12a and 12b show respective images 16a and 16b, and the display of the images 16 is controlled by the processing device 10. One or more applications are running on the processing device 10 and these are represented to the user by corresponding windows 18a and 18b, with which the user can interact in a conventional manner. A cursor 20 is shown, and the user can control the movement of the cursor 20 about the images 16 shown on the display devices 12 using the computer mouse 14b, again in a totally conventional manner. The user can perform actions with respect to the running applications via the user interface devices 14 and these actions result in corresponding changes in the images 16, displayed by the display devices 12.
The operating system run by the processing device 10 uses virtual desktops to manage the multiple display devices 12. Each physical display device 12 is represented by a frame buffer that contains everything currently shown on that display device 12. The operating system arranges these frame buffers into a single virtual desktop. When these frame buffers are arranged in the virtual desktop 22 in the same relative positions in which the physical display devices 12a and 12b are placed, the operating system can draw objects on all the display devices 12 in a natural way. The virtual desktop is a combination of the respective images 16a and 16b being shown by the display devices 12a and 12b. If the user moves the mouse 14b such that the cursor 20 moves right off the edge of the image 16a being shown by the first display device 12a, then the cursor 20 appears on the left of the image 16b being shown by the second display device 12b. Similarly, a window 18 spread across several display devices 12 appears properly lined up between the display devices 12.
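The side-by-side frame-buffer arrangement described above can be illustrated with a small coordinate-mapping sketch. This is an assumption-laden illustration, not an operating-system API: display resolutions and the left-to-right layout are invented for the example.

```python
# Hypothetical sketch of a virtual desktop built from per-display frame
# buffers placed side by side: a virtual-desktop x coordinate maps to a
# (display index, local x) pair, so a cursor leaving one image's right
# edge reappears at the left of the next. Widths are illustrative.

def locate(virtual_x, widths):
    """Map a virtual-desktop x coordinate to (display_index, local_x)."""
    for i, w in enumerate(widths):
        if virtual_x < w:
            return i, virtual_x
        virtual_x -= w
    raise ValueError("coordinate outside the virtual desktop")

widths = [1920, 1920]       # two displays arranged left to right
print(locate(100, widths))  # -> (0, 100)
print(locate(2000, widths)) # -> (1, 80): just past the first display's edge
```

The same mapping run in reverse is what lets a window spanning the boundary be drawn in two frame buffers yet appear correctly lined up.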
Figure 2 illustrates a virtual desktop 22 within the processing device 10.
The images 16a and 16b being shown by the respective display devices 12 make up the virtual desktop 22. Within the processing device 10 is a processing area 24, which is running two applications 26. A window 18 is displayed for each application 26 that is running. These windows 18 form part of the virtual desktop 22, with a suitable background making up the remainder. The virtual desktop 22 may be an addressable location within a central processing unit, or may be part of a dedicated graphics component which is managing the overall image output. The processing area 24 will form part of a central processing unit that is controlling the overall operation of the processing device 10.
An operating system within the processing device 10 manages the interaction between the user inputs from the user interface devices 14 and the applications 26, and adjusts the virtual desktop 22 accordingly. For example, the user may drag windows 18 around the virtual desktop 22. The images 16 are updated to reflect the change in location of the windows 18. The user can drag a window 18 such that it will move from one image 16 on a display device 12 to another image 16 on a different display device 12. For example, the window 18a may be moved by the user from the primary display device 12a to the secondary (additional) display device 12b. The application 26 represented by the window 18a may be an email client that the user wishes to keep open and running while they start a different task. The virtual desktop 22 will be updated to reflect the movement of the window 18a, as will the images 16, all in real time.
The system is configured to determine which display device 12 a user 30 is currently operating, as shown in Figure 3. In this Figure, a user 30 has two different voice messaging applications open, represented by the windows 18a and 18b, each on a different display device 12a and 12b. A microphone (an audio input device) 32 is provided for the user 30 to speak into, or one of the display devices 12 could be microphone-equipped. Software in the processing device 10 can determine towards which display device 12 the user 30 is speaking and route the microphone audio to the appropriate application 26. Thus, the user 30 can switch conversations simply by talking towards a different display device 12.
The system is arranged to run a plurality of applications 26 on the processing device 10, to display a window 18 on a display device 12 for each application 26, to receive an audio input for an application 26, to determine the display device 12 towards which the audio input is directed, and to route the audio input to the application 26 with a window 18 displayed on the determined display device 12. In this way, the system routes the audio input from the user 30 to the correct application 26. The user 30 does not need to make any changes to the configuration of the running applications; the audio input software will detect the display device 12 towards which the user is directing the audio, and route that audio accordingly.
If there is a single audio input device 32, as in Figure 3, then the routing can be made based upon detection of a physiological parameter of the user 30, such as face or eye direction. In this case, a camera 34 (a video input device), which is connected to the processing device 10, can determine the direction 36 in which the user 30 is facing, through face or eye tracking. This allows a determination of the display device 12 towards which the audio input from the user 30 is directed. If the user 30 changes their orientation, then this is detected by the camera 34 and the routing of the audio received from the microphone 32 is changed by the processor 10.
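The gaze-based routing described above can be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes some face-tracking layer (driving the camera 34) already reports a horizontal gaze angle, and that the angular sector occupied by each display device relative to the camera is known. The function and sector layout names are hypothetical.

```python
# Sketch: pick the display device the user is facing from a gaze angle.
# Assumption: a face/eye-tracking library supplies gaze_angle_deg, where
# 0 degrees is straight ahead at the camera, negative is to the user's left.

def select_display(gaze_angle_deg, display_sectors):
    """Return the id of the display whose angular sector contains the gaze.

    display_sectors: list of (display_id, low_deg, high_deg) tuples
    describing where each display sits relative to the camera.
    """
    for display_id, low, high in display_sectors:
        if low <= gaze_angle_deg < high:
            return display_id
    return None  # user is not facing any known display

# Example layout: display "a" to the left of the camera, "b" to the right.
sectors = [("a", -90, 0), ("b", 0, 90)]
```

Once a display id is selected, the single microphone's stream is simply handed to whichever application has its window on that display; a change in gaze flips the routing without any reconfiguration by the user.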
There are different methods for determining the directional element of the audio. If there are multiple audio input devices 32, for example with each display device 12 having its own microphone 32, then the direction in which the user is speaking can be determined using amplitude differentials of the inputs to the different microphones 32. The user 30 can be assumed to be closest to the microphone 32 for the desired display device 12, and therefore this microphone 32 will have a louder input than other microphones 32. This is illustrated in Figure 4. Each display device 12 in the system is provided with a respective audio input device 32.
The processor 10 is connected to both of the microphones 32a and 32b and can measure the amplitude of the audio signal received by each individual microphone 32. This amplitude can be calculated on a ten-second running average, for example. From the different measured amplitudes, the processor 10 can detect which of the microphones 32a and 32b is receiving a louder input from the user 30. From this information, it can be inferred that the user 30 is directing their audio at the display device 12 that is associated with the audio input device 32 with the greater measured amplitude. The audio from the microphone 32 is routed accordingly.
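The running-average amplitude comparison can be sketched as below. This is an assumed minimal form: per-block RMS levels are averaged over a bounded window (e.g. ten one-second blocks for the ten-second running average mentioned above), and the microphone with the greater average wins. Class and function names are illustrative only.

```python
from collections import deque

class RunningAmplitude:
    """Running average of per-block RMS amplitude over a fixed window.

    Window length is in blocks; e.g. ten 1-second blocks approximate the
    ten-second running average described in the text.
    """
    def __init__(self, window_blocks=10):
        self.blocks = deque(maxlen=window_blocks)  # old blocks fall off

    def add_block(self, samples):
        # RMS amplitude of one block of audio samples
        rms = (sum(s * s for s in samples) / len(samples)) ** 0.5
        self.blocks.append(rms)

    def average(self):
        return sum(self.blocks) / len(self.blocks) if self.blocks else 0.0

def louder_microphone(levels):
    """Return the id of the microphone with the greatest averaged amplitude."""
    return max(levels, key=lambda mic_id: levels[mic_id].average())
```

The display device associated with the returned microphone id is then taken as the target of the user's audio, and the stream is routed to the application windowed there.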
In Figure 4, the user 30 can be seen to be closer to the right-hand display device 12b, having shifted their position from half-way between the two display devices 12 (in Figure 3). The user can be assumed to be directing their audio output to the application represented by the window 18b. The two microphones 32 will both pick up the user's audio output, but the amplitude of the received signal will be different. The amplitude of the signal received by the microphone 32b will be greater than the amplitude of the signal received by the microphone 32a, and the audio from the user 30 will be routed accordingly to the correct application.
If more than two display devices 12 are present, then each can have its own dedicated audio input device 32, and the comparison described above of the amplitudes of the received audio signals can be extended to determine to which of three display devices, for example, the user is directing their audio. However, even with more than two display devices 12, a pair of audio input devices 32 can be used to determine which display device 12 is the target of the audio. This is illustrated in Figure 5, where the user 30 has connected their laptop computer to the data processing system, and is using the display 12c of the laptop computer to further extend the usable display area.
The three display devices 12 are arranged in a horizontal line, and the two display devices 12a and 12b are provided with respective audio input devices 32a and 32b. The position of the user 30 changes the amplitude of the audio signals received by the audio input devices 32, both in relative terms and in absolute terms. When the audio input device 32a has the greater received amplitude, the leftmost display device 12a is the target of the user's audio. When the audio input device 32b has the greater received amplitude, then the absolute levels of received audio need to be considered, in order to determine whether the user is targeting display device 12b or 12c. Whichever display device 12 is the target of the user's audio, the routing is arranged accordingly.
Effectively, when the number of display devices 12 is greater than the number of audio input devices 32, the determining of the display device 12 towards which the user's audio input is directed involves mapping the determined amplitude differentials to a stored arrangement of the display devices 12 maintained by the processor 10. In this way the audio is routed to the correct application. An arrangement such as that shown in Figure 5 may require the user 30 first to perform an initialisation procedure, whereby test audio is provided by the user 30 and the processor 10 configures the display device routing by requesting the user to make a selection between the offered display devices 12.
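The decision logic for the Figure 5 arrangement (three displays, two microphones) can be sketched as follows. This is an assumed minimal mapping, not the patented implementation: display "a" carries microphone A, display "b" carries microphone B, and display "c" (the laptop) has no microphone, so a calibrated absolute threshold, obtained during the initialisation procedure described above, distinguishes "b" from "c" when microphone B is the louder. All names and the single-threshold scheme are illustrative.

```python
def route_three_displays(level_a, level_b, near_threshold):
    """Choose among displays 'a', 'b', 'c' (left to right) from two
    microphone amplitude levels.

    near_threshold is assumed to come from an initialisation step in
    which the user provides test audio and confirms the target display:
    speech aimed at the farther display 'c' arrives at microphone B
    below this level.
    """
    if level_a > level_b:
        return "a"              # relative comparison alone suffices
    if level_b >= near_threshold:
        return "b"              # loud at mic B: user is close to 'b'
    return "c"                  # quieter at both mics: farther display
```

The returned display id is then looked up in the processor's stored arrangement of display devices to select the destination application.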
The method of operating the data processing system is summarised in Figure 6. The method comprises, firstly step S1 of running a plurality of applications, secondly step S2 of displaying a window for each application, thirdly step S3 of receiving an audio input for an application, fourthly step S4 of determining the display device towards which the audio input is directed, and finally step S5 of routing the audio input to the application with a window displayed on the determined display device. In this way a user who has several applications running at the same time, which can use an audio input, will have their audio routed to the correct application.
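Steps S3 to S5 of the method summarised above can be sketched as a single routing function. This is an illustrative skeleton only: the applications and their window-to-display assignments (steps S1 and S2) are assumed to exist already, and step S4 is passed in as a callable so that either determination method (gaze tracking or amplitude comparison) can be plugged in. All names are hypothetical.

```python
def route_audio(applications, windows, audio_block, determine_display):
    """Steps S3-S5 of Figure 6: receive an audio block, determine the
    target display, and route the audio to the application whose window
    is displayed there.

    applications: mapping of app name -> list acting as that app's
    audio input queue. windows: mapping of app name -> display id
    (assumed populated by steps S1/S2). determine_display: callable
    implementing step S4.
    """
    target = determine_display(audio_block)      # step S4
    for app, display in windows.items():         # step S5
        if display == target:
            applications[app].append(audio_block)
            return app
    return None  # no application window on the determined display
```

Under this sketch, switching which display the user addresses changes only the value returned by the step-S4 callable; no application reconfiguration is needed.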
The principal step in the method is step S4 where the determination is made of the display device towards which the user is directing their audio. The user does not identify the desired display device directly, but the target display device is inferred, either from the user's position or from calculations made from the user's audio input, or a combination of these two factors. As discussed above, the user's eyes and/or face can be tracked with a camera, to determine the display device towards which the user is directing the audio. Alternatively, or additionally, different amplitudes calculated at different microphones can be used to determine the direction of the user's audio input.
In step S5, the audio input from the user is then routed to an application that has a window on the display device that was determined in step S4. If there is more than one such application, then a priority decision has to be taken as to which application should receive the inputted audio. This could be based on a priority table, for example, or may be based upon the detection of a specific prior action. In the latter case, for example, a video conferencing application may have made an output to a user, and the user's audio input is routed to that specific application as a response.
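The priority decision between multiple applications on the same display can be sketched as below. This is one assumed scheme combining the two options mentioned above: a recent prior output (e.g. a video-conference prompt) takes precedence, with a static priority table as the fallback. The data shapes and tie-breaking rule are illustrative.

```python
def pick_application(candidates, priority_table, last_output):
    """Choose which of several candidate applications on the determined
    display receives the audio input.

    last_output: mapping of app name -> timestamp of its most recent
    output to the user; an app that has recently prompted the user wins.
    priority_table: mapping of app name -> rank (lower = higher priority),
    used when no candidate has a recorded prior output.
    """
    recent = [app for app in candidates if app in last_output]
    if recent:
        # Prior-action rule: reply goes to the most recently active app.
        return max(recent, key=lambda app: last_output[app])
    # Fallback: static priority table; unknown apps get lowest priority.
    return min(candidates, key=lambda app: priority_table.get(app, 99))
```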
Claims
1. A method of operating a data processing system, the system comprising a processing device, a plurality of display devices connected to the processing device and an audio input device connected to the processing device, the method comprising the steps of:
o running a plurality of applications,
o displaying a window for each application,
o receiving an audio input for an application,
o determining the display device towards which the audio input is directed, and
o routing the audio input to the application with a window displayed on the determined display device.
2. A method according to claim 1, wherein the system further comprises a video input device connected to the processing device and the step of determining the display device towards which the audio input is directed comprises determining the direction of a user from the input of the video input device.
3. A method according to claim 1 or 2, wherein the system comprises a plurality of audio input devices and the step of determining the display device towards which the audio input is directed comprises determining amplitude differentials between the audio inputs and selecting an associated display device accordingly.
4. A method according to claim 2, wherein the number of display devices is greater than the number of audio input devices and the step of determining the display device towards which the audio input is directed further comprises mapping the determined amplitude differentials to an arrangement of the display devices.
5. A data processing system, the system comprising a processing device, a plurality of display devices and an audio input device connected to the processing device, wherein:
o the processing device is arranged to run a plurality of applications,
o the display devices are arranged to display a window for each application,
o the audio input device is arranged to receive an audio input for an application, and
o the processing device is arranged to determine the display device towards which the audio input is directed and to route the audio input to the application with a window displayed on the determined display device.
6. A system according to claim 5, and further comprising a video input device connected to the processing device, wherein the processing device is arranged, when determining the display device towards which the audio input is directed, to determine the direction of a user from the input of the video input device.
7. A system according to claim 5 or 6, and comprising a plurality of audio input devices, and wherein the processing device is arranged, when determining the display device towards which the audio input is directed, to determine amplitude differentials between the audio inputs and select an associated display device accordingly.
8. A system according to claim 7, wherein the number of display devices is greater than the number of audio input devices and wherein the processing device is further arranged, when determining the display device towards which the audio input is directed, to map the determined amplitude differentials to an arrangement of the display devices.
9. A computer program product on a computer readable medium for operating a data processing system, the system comprising a processing device, a plurality of display devices connected to the processing device and an audio input device connected to the processing device, the product comprising instructions for:
o running a plurality of applications,
o displaying a window for each application,
o receiving an audio input for an application,
o determining the display device towards which the audio input is directed, and
o routing the audio input to the application with a window displayed on the determined display device.
10. A computer program product according to claim 9, wherein the system further comprises a video input device connected to the processing device and the instructions for determining the display device towards which the audio input is directed comprise instructions for determining the direction of a user from the input of the video input device.
11. A computer program product according to claim 9 or 10, wherein the system comprises a plurality of audio input devices and the instructions for determining the display device towards which the audio input is directed comprise instructions for determining amplitude differentials between the audio inputs and selecting an associated display device accordingly.
12. A computer program product according to claim 11, wherein the number of display devices is greater than the number of audio input devices and the instructions for determining the display device towards which the audio input is directed further comprise instructions for mapping the determined amplitude differentials to an arrangement of the display devices.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1018268.1 | 2010-10-29 | ||
GB201018268A GB2485145A (en) | 2010-10-29 | 2010-10-29 | Audio command routing method for voice-controlled applications in multi-display systems |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2012056198A1 true WO2012056198A1 (en) | 2012-05-03 |
Family
ID=43401488
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/GB2011/001533 WO2012056198A1 (en) | 2010-10-29 | 2011-10-26 | Audio routing |
Country Status (2)
Country | Link |
---|---|
GB (1) | GB2485145A (en) |
WO (1) | WO2012056198A1 (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0702355A2 (en) * | 1994-09-14 | 1996-03-20 | Canon Kabushiki Kaisha | Speech recognition method and apparatus |
US6219645B1 (en) * | 1999-12-02 | 2001-04-17 | Lucent Technologies, Inc. | Enhanced automatic speech recognition using multiple directional microphones |
WO2007020408A1 (en) | 2005-08-13 | 2007-02-22 | Displaylink (Uk) Limited | A display system and method of operating a display system |
US20080120141A1 (en) * | 2006-11-22 | 2008-05-22 | General Electric Company | Methods and systems for creation of hanging protocols using eye tracking and voice command and control |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000278626A (en) * | 1999-03-29 | 2000-10-06 | Sanyo Electric Co Ltd | Multiple screens sound output controller |
JP2002091491A (en) * | 2000-09-20 | 2002-03-27 | Sanyo Electric Co Ltd | Voice control system for plural pieces of equipment |
JP2009182769A (en) * | 2008-01-31 | 2009-08-13 | Kenwood Corp | On-vehicle device, program and display control method |
2010
- 2010-10-29 GB GB201018268A patent/GB2485145A/en not_active Withdrawn
2011
- 2011-10-26 WO PCT/GB2011/001533 patent/WO2012056198A1/en active Application Filing
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11151749B2 (en) | 2016-06-17 | 2021-10-19 | Immersive Robotics Pty Ltd. | Image compression method and apparatus |
CN106201419A (en) * | 2016-06-28 | 2016-12-07 | 乐视控股(北京)有限公司 | Audio frequency input control method, device and terminal |
US10310800B2 (en) | 2017-02-08 | 2019-06-04 | Microsoft Technology Licensing, Llc | Selective routing of audio between applications |
US10467088B2 (en) | 2017-02-08 | 2019-11-05 | Microsoft Technology Licensing, Llc | Audio system maintenance using system call monitoring |
US11150857B2 (en) | 2017-02-08 | 2021-10-19 | Immersive Robotics Pty Ltd | Antenna control for mobile device communication |
US11429337B2 (en) | 2017-02-08 | 2022-08-30 | Immersive Robotics Pty Ltd | Displaying content to users in a multiplayer venue |
US11153604B2 (en) | 2017-11-21 | 2021-10-19 | Immersive Robotics Pty Ltd | Image compression for digital reality |
US11553187B2 (en) | 2017-11-21 | 2023-01-10 | Immersive Robotics Pty Ltd | Frequency component selection for image compression |
EP3719631A4 (en) * | 2018-01-04 | 2021-01-06 | Samsung Electronics Co., Ltd. | Display device and method for controlling same |
US11488598B2 (en) | 2018-01-04 | 2022-11-01 | Samsung Electronics Co., Ltd. | Display device and method for controlling same |
Also Published As
Publication number | Publication date |
---|---|
GB2485145A (en) | 2012-05-09 |
GB201018268D0 (en) | 2010-12-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2012056198A1 (en) | Audio routing | |
US20230052490A1 (en) | Remote user interface | |
US11212449B1 (en) | User interfaces for media capture and management | |
KR102604685B1 (en) | Device control using gaze information | |
US11671697B2 (en) | User interfaces for wide angle video conference | |
US20240036703A1 (en) | Electronic message user interface | |
US10152967B2 (en) | Determination of an operational directive based at least in part on a spatial audio property | |
US10649622B2 (en) | Electronic message user interface | |
DK179635B1 (en) | USER INTERFACE FOR CAMERA EFFECTS | |
US9632618B2 (en) | Expanding touch zones of graphical user interface widgets displayed on a screen of a device without programming changes | |
US20170357324A1 (en) | Digital touch on live video | |
US20160062590A1 (en) | User interface for limiting notifications and alerts | |
CN108885485A (en) | Digital assistants experience based on Detection of Existence | |
US9753579B2 (en) | Predictive input system for touch and touchless displays | |
US10999088B2 (en) | Proximity and context-based telepresence in collaborative environments | |
JP7027826B2 (en) | Information processing equipment and programs | |
WO2015183756A1 (en) | Message user interfaces for capture and transmittal of media and location content | |
KR20220097547A (en) | Spatial management of audio | |
JP7440625B2 (en) | Methods and computer programs for controlling the display of content | |
US10848895B2 (en) | Contextual center-of-gravity for audio output in collaborative environments | |
US20240163358A1 (en) | User interfaces for presenting indications of incoming calls | |
KR102448223B1 (en) | Media capture lock affordance for graphical user interfaces | |
US20180089131A1 (en) | Physical configuration of a device for interaction mode selection | |
KR102446243B1 (en) | Methods and user interfaces for sharing audio | |
KR20190070730A (en) | Apparatus, method and computer program for processing multi input |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11787740 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 11787740 Country of ref document: EP Kind code of ref document: A1 |