GB2607331A - Virtual interaction system - Google Patents

Virtual interaction system

Info

Publication number
GB2607331A
Authority
GB
United Kingdom
Prior art keywords
user
virtual
audio
users
window
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
GB2107960.3A
Other versions
GB202107960D0 (en)
Inventor
Korala Aravinda
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kal Atm Software GmbH
Original Assignee
Kal Atm Software GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kal Atm Software GmbH filed Critical Kal Atm Software GmbH
Priority to GB2107960.3A
Publication of GB202107960D0
Priority to PCT/EP2022/064810 (published as WO2022253856A2)
Publication of GB2607331A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/14 Systems for two-way working
    • H04N 7/15 Conference systems
    • H04N 7/157 Conference systems defining a virtual conference space and using avatars or agents
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 3/00 Automatic or semi-automatic exchanges
    • H04M 3/42 Systems providing special services or facilities to subscribers
    • H04M 3/56 Arrangements for connecting several subscribers to a common circuit, i.e. affording conference facilities
    • H04M 3/568 Arrangements for connecting several subscribers to a common circuit, i.e. affording conference facilities, audio processing specific to telephonic conferencing, e.g. spatial distribution, mixing of participants
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 2203/00 Aspects of automatic or semi-automatic exchanges
    • H04M 2203/50 Aspects of automatic or semi-automatic exchanges related to audio conference
    • H04M 2203/5081 Inform conference party of participants, e.g. of change of participants

Abstract

A virtual conference system configured to: display a plurality of windows on a display device, wherein a first window provides audio and video content to the user and a second window includes a plurality of user identifiers representing a plurality of users; and provide a plurality of audio streams simultaneously to an output device, wherein the first stream is associated with the first window, and further streams are from further users. Identifiers may be positioned in the window to represent the position of the corresponding user in a virtual space such as a conference hall 41, lobby 42 or break-out room, or at virtual objects such as a table 60 or chair 62. The second stream may be from a group of users selected based on proximity to the user or to a virtual object within the virtual space. The plurality of windows may comprise a presentation window 52, a chat window 50 or a live video window 46.

Description

VIRTUAL INTERACTION SYSTEM
Field
The present application relates to virtual interaction systems and methods, for example virtual conference systems and methods.
Background
It is known to provide virtual meeting and video-conferencing systems in which multiple users can each connect via their own computing devices. A user interface is provided on each computing device which is used to provide to a user video and audio streams from the computing devices of each other user, thereby enabling all users to see and hear all other users.
More complex virtual meeting and video-conferencing systems are also known in which larger numbers of users are involved and in which each can selectively interact with sub-groups of users. As the number of users grows, so do the variation in interaction options and the complexity of providing communication between multiple users in an efficient and manageable fashion without degrading user experience; this can present significant challenges in such systems. This is particularly the case as network quality, and the performance of each user's audio and/or video devices and terminals, can vary significantly between users and over time.
Summary
In a first aspect there is provided a virtual conference or other interaction system to enable a user to interact with a plurality of further users in a virtual conference or other interaction environment, the system comprising a processing resource configured to: display a plurality of windows on at least one display device associated with the user, wherein a first window of the plurality of windows is configured to provide audio content and/or video content and/or to display images or other data to the user; at least one second window of the plurality of windows includes a plurality of user identifiers, each user identifier representing either the user or a respective one of the plurality of further users, and optionally each identifier is positioned in the window to represent a virtual position of its associated user in the virtual conference or other interaction environment; the system is configured to provide a plurality of audio streams simultaneously to at least one audio output device associated with the user; a first of the audio streams is associated with the first window, at least one second audio stream of the plurality of audio streams is from at least one of the further users represented by the further user identifiers.
The first audio stream may be provided to a first audio output and the at least one second audio stream is provided to at least one second audio output. The first audio stream may be provided to the first audio output and not to at least one second audio output. The second audio stream may be provided to at least one second audio output and not to the first audio output.
Alternatively, each of the first and second audio streams may be provided to both of the first and second audio outputs, with the relative volumes or other audio parameters of the first audio stream and second audio stream being different for the first audio output and for the at least one second audio output. The first audio stream may be more audible than the second audio stream via the first audio output. The first audio stream may be less audible than the second audio stream via the at least one second audio output.
The first audio output may comprise one of a left or right channel, speaker or headphone, and the at least one second audio output comprises the other of the left or right channel, speaker or headphone.
The at least one second audio stream may comprise a merged or otherwise combined audio stream that combines a plurality of audio streams.
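By way of illustration only (this sketch is not taken from the patent; the function names, sample format and gain values are assumptions), the combining and relative-volume behaviour described above might be realised by averaging several PCM streams into a single second stream, then mixing the first and second streams into left and right outputs with complementary gains, so that each stream is more audible via one output than the other:

```python
def merge_streams(streams):
    """Combine several equal-length PCM streams into one by averaging samples."""
    if not streams:
        return []
    n = len(streams[0])
    return [sum(s[i] for s in streams) / len(streams) for i in range(n)]

def mix_outputs(first, second, first_gain_left=0.8, second_gain_left=0.2):
    """Mix the first (e.g. presentation) stream and the merged second stream
    into (left, right) channels; per-channel gains are complementary, so the
    first stream dominates the left output and the second the right output."""
    left = [first_gain_left * f + second_gain_left * s
            for f, s in zip(first, second)]
    right = [(1.0 - first_gain_left) * f + (1.0 - second_gain_left) * s
             for f, s in zip(first, second)]
    return left, right
```

A real terminal would of course operate on buffered audio frames from a conferencing stack rather than Python lists; the sketch only shows the gain arithmetic.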
The at least one second audio stream may be from a selected group of the further users.
The further users of the selected group may be selected based on proximity or association with the user or with at least one virtual object or virtual region in the virtual conference or other interaction environment.
The virtual object may comprise a virtual table, and the group of further users comprises further users whose user identifiers are positioned at the same virtual table as the user identifier of the user.
The virtual object may comprise a virtual table of a plurality of virtual tables, or a virtual chair, included in the second window.
The virtual region may comprise a region of a virtual lobby or virtual break-out room.
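As a minimal sketch of the table-based grouping described above (illustrative only; the data layout is an assumption, not something specified by the patent), group membership can be derived directly from which virtual table a user identifier is positioned at:

```python
def table_group(user_id, tables):
    """Return the further users seated at the same virtual table as user_id.

    `tables` maps a table identifier (e.g. "60a") to the list of user
    identifiers currently positioned at that table. Returns an empty list
    if the user is not seated at any table.
    """
    for table_id, seated in tables.items():
        if user_id in seated:
            return [u for u in seated if u != user_id]
    return []
```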
A further user may be selected for inclusion in the group based on input from the user and/or further user.
The processing resource may be configured to receive audio and/or video control input from the user and to perform at least one audio control action in response to the input.
The audio and/or video control action may comprise at least one of muting, unmuting, forwarding or rewinding, pausing, increasing or decreasing volume, encrypting or applying a language translation algorithm.
The audio and/or video control action may be applied to one or more of the first audio stream, the second audio stream(s), the first audio output, the second audio output, or audio and/or video stream(s) from a selected one or more of the further users.
The audio and/or video control action may comprise selection of at least one stream from a plurality of streams, optionally selection of a translated stream.
The processing resource may be configured to provide and/or control at least one user interface element configured to control and/or select between a plurality of audio and/or video streams.
The at least one user interface element may comprise at least one two-way, three-way, or more-than-three-way slider or other selection element.
The processing resource may be configured to apply, either automatically or in response to user input, a filter or other audio process to different ones of the audio streams, optionally to a plurality of the second audio streams, to ensure that at least one property, for example signal-to-noise ratio, distortion or volume, is substantially the same and/or within a selected or predetermined range for audio coming from each of the users in the group.
The user identifier may be moveable in the virtual environment in response to input by the user.
The further user identifiers may be moveable in the virtual environment in response to input by the further users.
The user identifier and further user identifiers may be moveable between the virtual tables or other virtual objects based on user input.
The first window may comprise a presentation window configured to provide a presentation or other content to the user and the further users.
The virtual environment may include a virtual conference hall and the presentation window is configured to provide a presentation to all users whose user identifiers are present in the virtual conference hall.
The virtual environment may host a virtual event, for example a virtual party or virtual film screening, and the first window may be configured to show a film or other selected content, for example content selected by at least one of the users from a streaming site.
The plurality of windows may further comprise at least one chat window, configured to provide text chat between the user and at least one further user.
The at least one chat window may be configured to provide text chat between the user and at least one of or all of the selected group of further users, for example further users at the or a virtual table or further users invited to a virtual party or other event.
The at least one chat window may be configured to provide text chat between the user and further users present in the virtual conference hall and/or a presenter of the presentation and/or users at the virtual party, virtual film screening or other event.
The plurality of windows may further comprise at least one video window configured to display live video of the user and/or at least some of the further users.
The at least one video window may be configured to provide live video of the user and at least one of or all of the selected group of further users, for example further users at the or a virtual table or further users invited to a virtual party or other event.
The virtual environment may comprise a plurality of rooms and/or windows and the user identifier may be moveable between rooms and/or windows in response to user input.
The plurality of rooms may comprise a virtual conference room and at least one virtual breakout room or virtual lobby.
In a further aspect, which may be provided independently, there is provided a virtual conference or other interaction system to enable a user to interact with a plurality of further users in a virtual conference or other interaction environment, the system comprising a processing resource configured to: display at least one window on at least one display device associated with the user, wherein the at least one window includes a plurality of user identifiers, each user identifier representing either the user or a respective one of the plurality of further users, and each identifier is positioned in the window to represent a virtual position of its associated user in the virtual conference or other interaction environment; the virtual conference or other interaction environment comprises a plurality of virtual tables and/or virtual chairs; wherein the user identifier and further user identifiers are moveable between the virtual tables based on input from the user and further users; the system is configured to provide at least one communication channel between the user and a selected group of the further users, wherein the selected group of the further users are users whose user identifiers are at the same virtual table and/or associated with the same group of virtual chairs as the user identifier of the user.
The at least one communication channel comprises at least one audio stream, at least one chat box, and/or at least one video stream.
In a further aspect, which may be provided independently, there is provided a method of enabling a user to interact with a plurality of further users in a virtual conference or other interaction environment, the method comprising: displaying a plurality of windows to the user; using a first window of the plurality of windows to provide audio content and/or video content and/or to display images or other data to the user; using at least one second window of the plurality of windows to include a plurality of user identifiers, each user identifier representing either the user or a respective one of the plurality of further users, and optionally each identifier is positioned in the window to represent a virtual position of its associated user in the virtual conference or other interaction environment; providing a plurality of audio streams simultaneously to at least one audio output device associated with the user, wherein a first of the audio streams is associated with the first window, and at least one second audio stream of the plurality of audio streams is from at least one of the further users represented by the further user identifiers.
In a further aspect, which may be provided independently, there is provided a server configured to maintain a virtual environment for a plurality of users and to provide virtual environment data to a plurality of user terminals, wherein the server is configured to, for each user, selectively provide a plurality of audio and/or video streams to the user terminal associated with the user, wherein at least one of the audio and/or video streams comprises audio and/or video stream(s) from a selected group of further users and/or wherein the virtual environment comprises at least one virtual table and/or at least one virtual chair.
In a further aspect, which may be provided independently, there is provided a user terminal comprising a processing resource, at least one display device, and at least one audio output device, wherein the processing resource is configured to: display a plurality of windows on the at least one display device, wherein a first window of the plurality of windows is configured to provide audio content and/or video content and/or to display images or other data to the user, and at least one second window of the plurality of windows includes a plurality of user identifiers, each user identifier representing either the user or a respective one of a plurality of further users, and optionally each identifier is positioned in the window to represent a virtual position of its associated user in a virtual conference or other interaction environment; and the processing resource is further configured to provide a plurality of audio streams simultaneously to at least one audio output device associated with the user, wherein a first of the audio streams is associated with the first window, and at least one second audio stream of the plurality of audio streams is from at least one of the further users represented by the further user identifiers.
Features in one aspect may be applied as features in any other aspect, in any appropriate combination. For example, any one of system, apparatus, computer program product or method features may be provided as any one other of system, apparatus, computer program product or method features.
Brief Description of the Drawings
Various embodiments of the invention will now be described by way of example only, and with reference to the accompanying drawings, of which: Figure 1 is a schematic illustration of a virtual interaction system according to an embodiment; Figure 2 is a schematic illustration of a user terminal according to an embodiment; Figure 3 is an illustration of a representation of a virtual environment according to an embodiment; Figure 4A shows selected parts of a user terminal in accordance with an embodiment; Figure 4B is a schematic illustration of a slider arrangement according to an embodiment; and Figure 5 is a schematic diagram of a user interface in accordance with an alternative embodiment.
Detailed description
Figure 1 shows a virtual conference system according to an embodiment. The system includes a server 2 which is connected via a network to a plurality of user terminals 4a, 4b, 4n.
The server 2 comprises a processing resource in the form of a processor 6 or set of processors and a memory 8. The processor 6 is configured to provide, in operation, a set of modules to provide desired functionality. In this embodiment, the set of modules comprises a server communication module 10 for managing communication with the user terminals, a virtual environment module 12 for managing a virtual environment and interactions with and/or between users, and a content streaming module 14 that controls, for example, the streaming of video and/or audio between users and the providing of a presentation or other content via the virtual environment. In other embodiments any other suitable combination of modules may be provided and/or dedicated circuitry may be provided to provide desired functionality. Functionality may be provided by any suitable number and combinations of modules or circuitries in alternative embodiments.
The server 2 in the embodiment of Figure 1 can be used to provide content or other data, for example audio and/or video streams and/or interaction data, to the user terminals, and to receive data, for example interaction data, audio and/or video streams and/or user input from the user terminals. In various embodiments the server can be used for any desired processing, management and/or monitoring operations. The server 2 generates and manages the virtual environment, and distributes in real time updates to the virtual environment to the user terminals 4a, 4b, 4n to take into account user actions and interactions, thereby to provide a common virtual environment at each of the user terminals 4a, 4b, 4n.
The server can also, for example, be used to distribute software to the user terminals for execution and/or installation, the software being for generation of a user interface including an interactive representation of the virtual environment, and for obtaining user input and/or other data.
As illustrated schematically in Figure 2, each terminal 4a, 4b, 4n comprises a processing resource in the form of a processor 26a, 26b, 26n or set of processors, a memory 28a, 28b, 28n, and an input device to enable a user to input data, for example a keyboard and/or mouse. Each user terminal 4a, 4b, 4n also includes at least one display device 24a, 24b, 24n, for example a display screen, and at least one audio device 25a, 25b, 25n, for example a speaker or other audio output. For at least some of the user terminals and/or in at least some alternative embodiments the display device and/or the audio device may be provided as separate components that are connected to the respective user terminal and/or may be provided as a combined audio/visual device.
For each user terminal 4a, 4b, 4n, the respective processor 26a, 26b, 26n is configured to provide, in operation, a set of modules to provide desired functionality. In this embodiment, the set of modules comprises a terminal communication module 30 for managing communication with the server 2, a user interaction module 32 for obtaining and/or responding to user input, and a user interface module 34 that is operable to provide via the display device and/or audio device a user interface that includes the virtual environment. In other embodiments any other suitable combination of modules may be provided and/or dedicated circuitry may be provided to provide desired functionality. Functionality may be provided by any suitable number and/or combination of modules or circuitries in alternative embodiments.
In the embodiment of Figure 1, each of the server and the user terminals is implemented as a suitably programmed PC, MAC, iOS device, Android device or other computing device, and each includes a hard drive and other components of a PC, MAC, iOS device, Android device or other suitable computing device, including RAM, ROM, a data bus, an operating system, for example Windows 10 (RTM) or other Windows operating system, iOS, Android operating system or any other suitable operating system, including various device drivers, and hardware devices including, optionally, a dedicated graphics card. In other embodiments, any other suitable computing devices or processing circuitries may be provided, for example one or more ASICs (application specific integrated circuits) or FPGAs (field programmable gate arrays), as well as or instead of the PCs, Android devices or other programmable computing devices. The user terminals may comprise fixed devices, for example desktop computers, or mobile devices, for example laptops, tablet devices and/or mobile telephones.
Communication and transfer of data between the terminals and/or server can be provided using any suitable networking protocol or other communication protocol, for example UDP, TCP/IP and/or any other suitable packet-based protocol. Any suitable file-based or block-based access protocols may be used in some embodiments. The various memories may be implemented in any suitable fashion, for example using any suitable addressable memory structures and protocols. The memories may include databases under any suitable database system, for example but not limited to SQL, to store data. In the embodiment of Figure 1, the communication module 10 at the server and the terminal communication modules 30 at the user terminals 4a, 4b, 4n establish and maintain communication between the server and user terminals 4a, 4b, ... 4n, provide streaming of audio/video and transfer of other data including virtual environment data, in accordance with known communication and networking techniques.
In operation, each terminal 4a, 4b, 4n is configured to display to the respective user a representation of the virtual environment via the user interface on the display device 24a, 24b, 24n. For example, in some embodiments each terminal 4a, 4b, 4n includes a program stored in memory 28a, 28b, 28n that upon execution by the processor 26a, 26b, 26n causes the display of the user interface and enables interaction with a user via the user interface. The program may be downloaded from the server and installed permanently or temporarily in the user terminal memory or, in alternative embodiments, the program may be run remotely at the server and/or in a cloud environment.
A representation of a virtual environment 40 represented on a user interface displayed on display device 24a at one of the user terminals 4a according to an embodiment is shown in Figure 3. In Figure 3 the virtual environment 40 is a virtual conference environment but any other suitable type of virtual environment may be provided in alternative embodiments.
The user interface comprises a plurality of windows 41, 42, 44, 46, 48, 50, 52, 54 that together represent or are associated with the virtual environment 40.
A first of the windows 41 represents a virtual conference hall that includes a plurality of virtual tables 60a, 60b, 60c, 60d.
Another of the windows 42 represents a virtual lobby or virtual break-out room.
A further one of the windows 44 is a presentation window used to provide a presentation to the users via the user terminals.
It is a feature of embodiments that at least some of the windows include user identifiers, each user identifier representing a respective one of the users, and each identifier being positioned in the window to represent a virtual position of its associated user in the virtual conference or other interaction environment.
In the embodiment of Figure 3, the user identifiers are represented by filled circles and can be moved within, and between, the virtual conference hall window 41 and the virtual lobby window 42.
The user identifier for the user who is operating the user terminal on which the virtual environment of Figure 3 is displayed is represented as a white circle. It will be understood that the same virtual environment is also represented on the user terminals of the other users. For each of those other user terminals the user identifier that is shown as a white circle in Figure 3 will be represented as a filled circle, and the respective user identifier for the user of that particular user terminal will be represented as a white circle. Any other suitable user identifiers can be used in other embodiments, as can other ways of modifying the appearance of identifiers to distinguish the user of the terminal from the other users. For example, images of the users, and/or avatars, and/or particular shapes or symbols, and/or text (e.g. names) can be used as the identifiers. Users can select or customize their identifier in some embodiments.
The user of a particular user terminal can move the user identifier within the virtual environment by providing input, for example via the user interface. User input can be provided using any suitable device, for example a mouse, touchscreen, pointer or any other user input device.
The virtual conference hall window includes a plurality of virtual tables 60a, 60b, 60c, 60d. The virtual tables 60a, 60b, 60c, 60d include, or have associated with them, virtual chairs 62. Only one virtual chair is labelled for each table in Figure 3 for clarity, but it can be seen that each table has 10 virtual chairs in this embodiment.
Users can move their user identifiers so that they are positioned at a particular table with other users. Users can move their user identifiers between virtual tables so that they are associated with different virtual tables, and interact with different users, at different times during the virtual conference or other event. Virtual chairs are also positioned at the side of the hall in Figure 3, and users can use those virtual chairs, for example, to form a group whilst waiting for chairs at a virtual table to become available, or otherwise to interact or wait.
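The seating behaviour above can be sketched as a small data structure (illustrative only; class and method names are assumptions): each virtual table has a fixed number of virtual chairs, and a user identifier can only move to a table with a free chair.

```python
class VirtualTable:
    """A virtual table with a fixed number of virtual chairs (10 in Figure 3)."""

    def __init__(self, table_id, chairs=10):
        self.table_id = table_id
        self.chairs = chairs
        self.seated = set()

    def sit(self, user_id):
        """Seat a user if a virtual chair is free; return success."""
        if len(self.seated) >= self.chairs:
            return False
        self.seated.add(user_id)
        return True

    def leave(self, user_id):
        self.seated.discard(user_id)

def move_user(user_id, from_table, to_table):
    """Move a user identifier between virtual tables, only if the
    destination has a free chair; return whether the move happened."""
    if to_table.sit(user_id):
        from_table.leave(user_id)
        return True
    return False
```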
The virtual conference hall and the virtual lobby or virtual breakout room, and other virtual rooms or windows in other embodiments, can be referred to as virtual regions of the virtual environment. Each user can move their user identifier within those virtual regions. For example, in the virtual conference hall a user can move the identifier to virtual chairs positioned at the side of the hall, or generally within the hall. Also, as shown, users can move their user identifiers within the virtual lobby, for example into proximity with other users. For example, in Figure 3 a group of three user identifiers, a group of two user identifiers, and single user identifiers alone are shown.
The user interface includes further windows including window 46 which is a video and/or audio window that includes a separate video and/or audio streaming window 48 for each user at the virtual table at which the user identifier of the user of this particular user interface is positioned. Each video streaming window is used to stream video and/or audio of a respective user for example using a camera associated with the user's laptop, tablet, mobile telephone or other computing device, using any suitable known video streaming or conferencing techniques.
The user interface also includes a text chat window 50 configured so that the users at the virtual table can exchange text chat by entering text using their laptop, tablet, mobile telephone or other computing device. All of the users grouped at the virtual table can see text entered by all of the other users at the virtual table.
The video window and the text chat window enable users in a selected group to interact via text and via video and/or audio. In the situation shown in Figure 3 the users are selected to be in the group based on their presence at the same virtual table. The system also enables users to be grouped in other ways, for example based on proximity or association between their user identifier or proximity or association with at least one virtual object or virtual region in the virtual conference or other interaction environment.
For instance, users can be grouped by proximity of their user identifiers in the virtual environment. For example, if the user were to move their user identifier from the virtual table to the lobby 42 and position it in proximity to (for example within a threshold distance from) user identifiers... and... then the users associated with those user identifiers would form a group, and the video and/or audio window 46 and chat window 50 would be used to stream video and/or audio, and exchange chat, between those three users.
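The threshold-distance grouping mentioned above might be sketched as follows (an illustration, not the patent's implementation; coordinates and the threshold value are assumptions), treating each user identifier as a 2D point in the lobby window:

```python
import math

def proximity_group(user_id, positions, threshold=40.0):
    """Return the further users whose identifiers lie within `threshold`
    units of the given user's identifier.

    `positions` maps a user identifier to its (x, y) position in the
    virtual region; the result is sorted for deterministic output.
    """
    ux, uy = positions[user_id]
    return sorted(u for u, (x, y) in positions.items()
                  if u != user_id and math.hypot(x - ux, y - uy) <= threshold)
```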
In other embodiments or modes of operation, users can be selected to be in a group based on user input. For example a user can invite one or more of the users to be in a group with them, for example by clicking on or otherwise selecting their user identifier(s). In some embodiments or modes of operation a combination of proximity, user input and/or user identifier location can be used to select users to be in a particular interaction group and/or to be associated with a virtual object. For example in some such embodiments a user may select another user to be in the same group, but only if they are within a threshold distance and/or their user identifiers are within the same virtual room or window in the virtual environment and/or they are in proximity to or associated with or have selected the same virtual object. In some embodiments, users can put themselves into or out of a private or restricted mode in which they are not available to be in interaction groups, or in which they are available for interaction only with certain users and/or not with certain other users.
In addition to the user interaction windows, including the video and/or audio window and the text chat window in this embodiment, there is also provided at least one further window that can be used to provide audio content and/or video content and/or to display images or other data to all users or to a set of the users. In the embodiment of Figure 3 this further window is a presentation window 44 that is used to provide a PowerPoint or other presentation, either live or pre-recorded, to the users. An optional presenter video window 52 is also provided, which provides a live or pre-recorded video stream of the presenter as they give the presentation. A further chat window 54 or other interaction window is also provided to enable text chat or other interaction between the presenter and users, for example to enable users to put questions and to enable the presenter to respond.
In some embodiments, a particular presentation, either scheduled or on-demand, is provided to all the users whose user identifiers are in a particular window, virtual room or other virtual region. For example the same presentation may be provided simultaneously via the presentation window 44 to all users whose user identifiers are present in the virtual conference hall. In some embodiments there is more than one conference hall or room in the virtual environment and the presentation that is provided via the presentation window to a particular user is dependent on the hall or room in which the user identifier of the user is located. If it is desired for the virtual environment to simulate a real-world conference or other event then there may be a timetable of presentations, for example for each virtual conference hall or other virtual region, and the presentations are given simultaneously to all users in that virtual conference hall or other virtual region at their scheduled times.
The layout of the user interface may change depending on the location of the user's identifier in the virtual environment. For example if a user moves from virtual conference hall 41 to the lobby 42, the presentation window 44 may be removed and/or replaced, and/or one of the other windows may be resized or reconfigured. For example, in one mode the presentation window may be removed or reduced, e.g. to thumbnail size, and the video and/or audio window 46 may be increased in size. Alternatively or additionally other content or functionality may be provided via the presentation window depending on the location of the user identifier. For example, in some embodiments, if a user moves from the conference hall to the lobby or other location then the presentation window may replace the presentation with other content, e.g. content concerning the conference or other event, or may be replaced with an internet browser, email application or other application. In some embodiments, the user may be able to change the functionality or content of the presentation or other window on command, without moving their user identifier. For example, a user may be attending a presentation in the virtual conference hall but may temporarily switch the presentation window to be a browser window or email window or other window.
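The region-dependent layout described above can be illustrated with a simple lookup from the region holding the user identifier to a window configuration. The region names, window names and layout values here are illustrative assumptions, not terms defined by the specification.

```python
# Hypothetical mapping from virtual region to window layout: in the
# conference hall the presentation window is prominent; in the lobby
# it is reduced to a thumbnail and the video/audio window is enlarged.
LAYOUTS = {
    "conference_hall": {"presentation": "full", "video_audio": "normal"},
    "lobby": {"presentation": "thumbnail", "video_audio": "enlarged"},
}

def layout_for(region):
    """Return the window configuration for the user's current region,
    falling back to the lobby layout for regions not listed."""
    return LAYOUTS.get(region, LAYOUTS["lobby"])
```

Moving the user identifier between regions would then simply re-run this lookup and reconfigure the displayed windows accordingly.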
The functionality described above in relation to Figure 3 is provided in the embodiment of Figure 1 by the communication module 10, the virtual environment module 12 and the presentation module 14 at the server 2, and the terminal communication module 30, the user interaction module 32, and the user interface module 34 at the user terminals 4a, 4b, 4n.
As mentioned above, the communication module 10 at the server and the terminal communication modules 30 at the user terminals 4a, 4b, 4n establish and maintain communication between the server and user terminals 4a, 4b, 4n in accordance with known communication and networking techniques.
In operation, the virtual environment module 12 generates and maintains the virtual environment including, for example, the number and position of the virtual tables and chairs and any other virtual objects, the size and configuration of virtual rooms or other virtual regions, and the identity, position, participation and other properties or actions of the various users, based for example on user input. The virtual environment module 12 maintains the virtual environment in real time based, for example, on user input and/or other data and distributes updated virtual environment data representing the virtual environment to the user terminals 4a, 4b, 4n in real time such that the representations of the virtual environment at the user terminals remain up-to-date, enabling real-time interaction between users.
The content streaming module 14 at the server 2 controls, for example, the streaming of video and/or audio between users and the provision of a presentation or other content via the virtual environment. The content streaming module 14 can for example receive video and/or audio streams from a user terminal 4a and selectively distribute them to user terminals of other users in the same group to enable real-time interaction. The content streaming module 14 also streams or otherwise distributes the presentation(s) that are presented to all users or a selected set of users, as well as maintaining and distributing to the user terminals 4a, 4b, 4n other data concerning the virtual event.
At each user terminal the user interface module 34 receives the virtual environment data from the server 2 and generates the user interface which includes a representation of the virtual environment, for example as shown in Figure 3. The user interface modules 34 receive regular, for example real-time, updates of the virtual environment data from the server 2 and thus ensure that the representations of the virtual environment, including, for example, the positions and statuses of the various users, remain up-to-date at each of the user terminals 4a, 4b, 4n.
In embodiments, although the virtual environment is common for all users, the representation of the virtual environment as shown on the user interface may be different at different user terminals, for example based on user preference or behaviour. For example, at each user terminal the user identifier for the user of that terminal may be different than that for other users, so the user can easily distinguish themselves from the other users. Also, for example, a user may choose to display windows, virtual rooms or other virtual regions in different arrangements based on personal preference, for example they may resize, minimise or maximise, reorder or zoom in or out on particular windows, virtual rooms or other virtual regions. Furthermore, in some embodiments, the user interface may show only a particular part of the virtual environment centred or otherwise based on the position of the particular user.
At each user terminal, the user interaction module 32 receives user input and video and/or audio streams from the user and sends that input and/or streams to the server, where the virtual environment module 12 updates the virtual environment based on the input (e.g. updates user position or status) and distributes the video and/or audio streams to the users with whom the user is interacting, e.g. other users in the same group as the user. The user interaction module 32 also receives video and/or audio streams from other users, for example via the server 2, and outputs those streams to the user via, for example, the display device and/or audio device, for instance in the appropriate window of the user interface.
The embodiments of Figures 1 to 3, or variants of such embodiments, can provide for potentially complex virtual environments with large numbers of users and selective interaction of a user with groups of users that can change in real time. In such complex multi-user environments in particular, audio and other signal quality and user perception can have significant effects on performance. It is a feature of the embodiment of Figure 1 that control of audio streams to individual users is provided such that for each user a plurality of audio streams can be provided to the user in an effective and/or desired manner.
As shown schematically in Figure 4A, the user interaction module 32 at a terminal 4a includes an audio and/or video controller 80 that is configured to control audio and/or video streams provided to the audio device 25a and/or display device 24a, and to process user input concerning such audio streams. Corresponding audio and/or video controllers are also provided in the user interaction modules 32 of the user terminals 4b, 4c... 4n.
The audio device 25a in Figure 4A comprises two audio outputs in the form of first and second (for example, left and right) headphones 125, 127 of a set of headphones, and it is a feature of the embodiment of Figure 1 that different audio streams can be provided to the different audio outputs, for example under control of the audio and/or video controller 80.
In the embodiment of Figure 4A, a first audio stream provided to one of the headphones 125 is associated with one of the windows, in this case the presentation window 44, and a second audio stream provided to the other of the headphones 127 is associated with at least one other of the windows and/or with at least one other of the users.
In the present example, the first audio stream represents the audio output from the presentation window 44, e.g. the presenter speaking and any other audio content of the presentation, and the second audio stream represents a combination of the audio outputs from the group of users represented in the separate video and/or audio streaming windows 48 for each user at the virtual table at which the user identifier of the user of this particular user interface is positioned.
Thus, in this example the user receives different audio inputs to their left and right ears, with a single audio stream to one ear and a combined audio stream to the other ear.
This means that the user can simultaneously listen to the presentation and listen to, and participate in conversation with, users at the virtual table.
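The per-ear routing described above, with the presentation on one channel and the combined table conversation on the other, can be sketched as follows. The frame representation (lists of float samples in [-1.0, 1.0], one mono frame per source) and the simple averaging used to combine the group streams are illustrative assumptions, not the disclosed audio pipeline.

```python
def route_streams(presentation_frame, group_frames):
    """Build one stereo frame: presentation audio on the left channel,
    the combined group audio on the right. Each input frame is a list
    of float samples; group frames are combined by averaging."""
    n = len(group_frames)
    if n:
        # Average the group members' samples position-by-position.
        combined = [sum(samples) / n for samples in zip(*group_frames)]
    else:
        combined = [0.0] * len(presentation_frame)  # silence if no group
    return list(zip(presentation_frame, combined))
```

Each resulting (left, right) pair would then be written to the stereo audio output, so the presenter is heard in one ear and the virtual table in the other.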
In the embodiment of Figure 4A, the combined audio stream from the group of users may be generated in any suitable way. For example, a merged or otherwise combined audio stream based on the audio streams received at the server from the individual users of the group of users may be generated by the server by processing those audio streams, and the combined audio stream may be provided by the server to the user terminal 4a. Alternatively, the server may select the individual audio streams from the users of the group of users and forward each of those audio streams to the user terminal 4a, where the audio and/or video controller module 80 may combine the streams to generate the combined audio stream.
The same audio stream, or the same set of audio streams, may be provided to each of the user terminals of the users included in the group, for example the group of users at the particular virtual table. The audio stream may be a merged audio stream comprising audio obtained by the microphones at the user terminals of the group of users. The merged audio stream can be obtained using any suitable known audio processing techniques.
Although the audio outputs in the embodiment of Figure 4A are headphones, in other embodiments or at other terminals, the audio outputs may comprise any other suitable outputs, for example a left or right channel or other channel(s), a speaker or speakers or headphones. So, although in the example described in relation to Figure 4A the two audio streams are from the presentation window and from the group of users at the virtual table, the audio stream(s) may represent or be associated with any suitable user, group of users or window.
The provision of multiple audio streams to the user can assist the user when attending a multi-participant event, enabling interaction with particular groups selected from a potentially large number of users whilst simultaneously attending a presentation or receiving content. In the embodiment of Figure 4A, additional control features are provided to enable the user to control the audio streams, which can provide for improved and more efficient participation of the user.
In the embodiment of Figure 4A, the user interface includes a scale 82 and associated slider 84 displayed to the user. The user can move the slider 84 along the scale 82 in order to alter the relative volume or other properties of the two different audio channels, in this example the volume of the audio output by the headphone 125 relative to the volume of the audio output by the headphone 127. For example, if the slider is at the extreme left of the scale then the volume at headphone 125 will be at a maximum (potentially subject to a total volume controlled by another volume control in some embodiments) and the volume at headphone 127 will be zero. Similarly, if the slider is at the extreme right of the scale then the volume at headphone 127 will be at a maximum and the volume at headphone 125 will be zero.
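The slider behaviour described above amounts to a crossfade between the two headphone channels. The sketch below assumes a linear crossfade law and a slider position normalised to [0.0, 1.0]; neither assumption comes from the specification, which leaves the exact mapping open.

```python
def crossfade_gains(slider, total=1.0):
    """Map a slider position in [0.0, 1.0] (0 = extreme left,
    1 = extreme right) to (left, right) headphone volumes. At 0 the
    left headphone is at maximum (subject to the total volume) and the
    right is silent, and vice versa at 1."""
    if not 0.0 <= slider <= 1.0:
        raise ValueError("slider position must lie in [0, 1]")
    return (1.0 - slider) * total, slider * total
```

A separate overall volume control would simply scale the `total` parameter, matching the parenthetical note above about a total volume controlled elsewhere.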
The audio control described in the preceding paragraph is just one example of audio control that can be provided according to embodiments. In variants, or alternative modes of operation, of the embodiment of Figure 4A, or in alternative embodiments, any suitable or desired audio control input can be obtained and corresponding audio control actions performed in response to the input. The audio control actions can be performed, for example, on a selected one or more of the audio streams or on all of the streams, as desired. It will be understood that any suitable input arrangement, instead of or in addition to the slider, can be used to obtain the audio input from a user, for example one or more of a button, switch, toggle, text entry box, slider or drop-down menu.
In some embodiments, a user can select between alternative content, e.g. alternative audio streams, or alternative versions of that content for one or both of the audio outputs. For example, in some embodiments three or more audio streams may be available and the user can select one audio stream for each audio output, or can select a combination of audio streams to be provided via a single one of the outputs.
For example, in some embodiments translation from one language to another of one or more of the audio streams may be available and the user may be able to select an appropriate translation. For example the presentation may be available in the language used by the presenter or a translated version of the presentation may be available, and the user can select which should be provided to the audio output that they are using, for example to headphone 125. The user may also for example switch between the translation and the original language, or combine both in a single audio stream and control the relative volume of each within that single audio stream. In some embodiments, a three-way or other multi-way slider or other multi-option controller may be provided, for example to select audio or other properties of three or more audio streams that may be provided to the user via the audio outputs.
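Selecting between the original and translated presentation streams could be as simple as a keyed lookup with a fallback to the presenter's language. The stream catalogue, language tags and identifiers here are entirely hypothetical.

```python
# Hypothetical catalogue of audio streams available for the
# presentation window, keyed by language tag.
STREAMS = {"en": "presentation-audio-en", "fr": "presentation-audio-fr"}

def select_stream(preferred, original="en"):
    """Return the stream the user asked for, falling back to the
    presenter's original language when no translation exists."""
    return STREAMS.get(preferred, STREAMS[original])
```

A multi-way slider as described above would then control the relative volumes of whichever streams the user has selected.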
Any other suitable audio control action may be performed, either on command of the user or automatically in response to predetermined or selected rules, for example at least one of muting, unmuting, forwarding or rewinding, pausing, increasing or decreasing volume, encrypting or applying a language translation algorithm. The audio control actions can be applied to one or more of the first audio stream, the second audio stream(s), the first audio output, the second audio output, or audio stream(s) from a selected one or more of the further users as desired.
In some embodiments the audio control action can be used to alter the quality or other property of one or more of the audio streams or the audio represented by the audio stream(s), for example one or more of tone, pitch, timbre, volume, level of compression, signal-to-noise ratio, level of smoothing applied, level or nature of filtering applied.
In some embodiments, each of the first and second audio streams may be provided to both of the first and second audio outputs, with the relative volumes or other audio parameters of the first audio stream and second audio stream being different for the first audio output and for the at least one second audio output. In some embodiments the first audio stream may be more audible than the second audio stream via the first audio output, and the first audio stream may be less audible than the second audio stream via the at least one second audio output.
In practice, quality or other properties of audio streams or other audio data received, for example at the server, from different users can vary over time or from user-to-user, for example due to differences in audio or other devices at the user terminal, processing speed, or network connection speed or quality.
In some embodiments, the audio and/or video controller 80 can be used to apply audio control action(s) to provide desired audio or other properties of audio streams. For example, for the group of users at the virtual table there may be differences in audio quality between audio streams from different ones of the users of the group at the virtual table. In one embodiment, the audio and/or video controller 80 may be configured to apply filters and/or other processes selectively to different ones of the audio streams to ensure that at least one property (e.g. signal-to-noise ratio or distortion) is substantially the same and/or within a selected or predetermined range for audio coming from each of the users in the group. Thus, distraction or communication difficulties may be reduced or avoided when participating in conversation with the group of users. The relative volumes of the audio from the different users of the group may also be controlled by the audio and/or video controller 80, for example so that audio from each user at the virtual table is represented with substantially the same volume in the combined audio stream at the headphone 125 or 127. Any other suitable audio effects or controls may be provided. For example, a user may turn up or down the relative volume of the audio from one or more of the users at the virtual table, or otherwise in a selected group, to make it higher or lower than the volume for others of the users, for example thereby to concentrate more easily on conversation by or with that user or users.
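One simple way to make every table participant equally audible, in the spirit of the per-stream processing described above, is to normalise each stream to a common RMS level before combining. The target level and the choice of RMS as the matched property are illustrative assumptions; the specification leaves the property (e.g. signal-to-noise ratio, distortion, volume) and the processing open.

```python
def normalise_levels(frames, target_rms=0.1):
    """Scale each user's audio frame so its RMS level matches a target,
    so every participant is heard at substantially the same volume.
    Each frame is a non-empty list of float samples; silent frames are
    left untouched."""
    out = []
    for samples in frames:
        rms = (sum(s * s for s in samples) / len(samples)) ** 0.5
        gain = target_rms / rms if rms > 0 else 1.0
        out.append([s * gain for s in samples])
    return out
```

A per-user volume preference, as in the last example above, could then be applied as an extra gain factor on top of the normalised streams.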
Although audio and/or video controller 80 is shown at the user terminal 4a in Figure 4A, in alternative embodiments the audio and/or video controller 80 may be provided at the server 2, or functionality of the audio and/or video controller 80 may be split between the server 2 and the user terminal 4a and/or may be provided in the cloud.
The embodiment of Figure 1 is described in relation to the provision of a virtual environment comprising a virtual conference. In alternative embodiments, or modes of operation, any other suitable virtual environment may be provided and/or the virtual environment may be used to represent any desired event.
Figure 5 is a schematic representation of a user interface configured to enable multiple users to watch a film or other event simultaneously whilst also communicating with each other. The user interface of Figure 5 may be provided by operation of the embodiment of Figure 1 or by a system according to any other embodiment.
In the user interface of Figure 5, video window 144 is provided and has the same or similar properties to the presentation window 44 of Figure 3. The video window 144 is used to play a film or other content to a group of users. Each user of the group has a personal video and/or audio streaming window 148a, 148b, 148c or 148d, which has the same or similar properties as the video and/or audio streaming windows 48 of Figure 3. Each personal video and/or audio streaming window 148a, 148b, 148c, 148d is used to stream video and/or audio of a respective user for example using a camera associated with the user's laptop, tablet, mobile telephone or other computing device, using any suitable known video streaming or conferencing techniques.
The film displayed on the video window may be, for example, a film obtained from a movie or TV streaming service, for example Netflix (RTM) or Amazon Prime (RTM), and/or from an internet content site, for example YouTube (RTM). The personal video and/or audio streaming windows 148a, 148b, 148c, 148d can allow a group of friends or other users to converse with each other, for example, from their respective homes or other locations, whilst watching a movie or other content together, or have a virtual party or other event. Other windows can also be included in the user interface, for example text chat windows such as text chat windows 50 of the embodiment of Figure 3.
An audio and/or video controller, for example such as audio and/or video controller 80, can also be provided in the embodiment of Figure 5 and can be used, either by a selected user or by any or each of the users, to control the film or other content that is displayed using window 144, for example to rewind, fast forward, pause or play the film or other content simultaneously for all users. The audio and/or video controllers 80 can also be used by each user to control audio, for example as described in relation to Figure 4A.
A skilled person will appreciate that variations of the described embodiments are possible without departing from the invention. Accordingly, the above description of the specific embodiments is made by way of example only and not for the purposes of limitation. It will be clear to the skilled person that minor modifications may be made without significant changes to the operation of particular embodiments described.

Claims (36)

  1. A virtual conference or other interaction system to enable a user to interact with a plurality of further users in a virtual conference or other interaction environment, the system comprising a processing resource configured to: display a plurality of windows on at least one display device associated with the user, wherein a first window of the plurality of windows is configured to provide audio content and/or video content and/or to display images or other data to the user; at least one second window of the plurality of windows includes a plurality of user identifiers, each user identifier representing either the user or a respective one of the plurality of further users, and optionally each identifier is positioned in the window to represent a virtual position of its associated user in the virtual conference or other interaction environment; the system is configured to provide a plurality of audio streams simultaneously to at least one audio output device associated with the user; a first of the audio streams is associated with the first window, at least one second audio stream of the plurality of audio streams is from at least one of the further users represented by the further user identifiers.
  2. A system according to claim 1, wherein the first audio stream is provided to a first audio output and the at least one second audio stream is provided to at least one second audio output.
  3. A system according to claim 2, wherein the first audio output comprises one of a left or right channel, speaker or headphone, and the at least one second audio output comprises the other of the left or right channel, speaker or headphone.
  4. A system according to any preceding claim, wherein the at least one second audio stream comprises a merged or otherwise combined audio stream that combines a plurality of audio streams.
  5. A system according to any preceding claim, wherein the at least one second audio stream is from a selected group of the further users.
  6. A system according to any preceding claim, wherein the further users of the selected group are selected based on proximity or association with the user or with at least one virtual object or virtual region in the virtual conference or other interaction environment.
  7. A system according to claim 6, wherein the virtual object comprises a virtual table, and the group of further users comprises further users whose user identifiers are positioned at the same virtual table as the user identifier of the user.
  8. A system according to claim 6 or 7, wherein the virtual object comprises a virtual table of a plurality of virtual tables, or a virtual chair, included in the second window.
  9. A system according to any of claims 6 to 8, wherein the virtual region comprises a region of a virtual lobby or virtual break-out room.
  10. A system according to any of claims 5 to 9, wherein a further user is selected for inclusion in the group based on input from the user and/or further user.
  11. A system according to any preceding claim, wherein the processing resource is configured to receive audio and/or video control input from the user and to perform at least one audio control action in response to the input.
  12. A system according to claim 11, wherein the audio and/or video control action comprises at least one of muting, unmuting, forwarding or rewinding, pausing, increasing or decreasing volume, encrypting or applying a language translation algorithm.
  13. A system according to claim 11 or 12, wherein the audio and/or video control action is applied to one or more of the first audio stream, the second audio stream(s), the first audio output, the second audio output, or audio and/or video stream(s) from a selected one or more of the further users.
  14. A system according to any of claims 11 to 13, wherein the audio and/or video control action comprises selection of at least one stream from a plurality of streams, optionally selection of a translated stream.
  15. A system according to any of claims 11 to 14, wherein the processing resource is configured to provide and/or control at least one user interface element configured to control and/or select between a plurality of audio and/or video streams.
  16. A system according to claim 15, wherein the at least one user interface element comprises at least one two-way, three-way, or more-than-three-way slider or other selection element.
  17. A system according to any preceding claim, wherein the processing resource is configured to apply a filter or other audio process to different ones of the audio streams, optionally to a plurality of the second audio streams, to ensure that at least one property, for example signal-to-noise ratio or distortion or volume, is substantially the same and/or within a selected or predetermined range for audio coming from each of the users in the group.
  18. A system according to any preceding claim, wherein the user identifier is moveable in the virtual environment in response to input by the user.
  19. A system according to any preceding claim, wherein the further user identifiers are moveable in the virtual environment in response to input by the further users.
  20. A system according to claim 19 as dependent on any of claims 6 to 10, wherein the user identifier and further user identifiers are moveable between the virtual tables or other virtual objects based on user input.
  21. A system according to any preceding claim, wherein the first window comprises a presentation window configured to provide a presentation or other content to the user and the further users.
  22. A system according to claim 21, wherein the virtual environment includes a virtual conference hall and the presentation window is configured to provide a presentation to all users whose user identifiers are present in the virtual conference hall.
  23. A system according to any of claims 1 to 21, wherein the virtual event comprises a virtual party or virtual film screening, and the first window is configured to show a film or other selected content, for example content selected by at least one of the users from a streaming site.
  24. A system according to any preceding claim, wherein the plurality of windows further comprises at least one chat window, configured to provide text chat between the user and at least one further user.
  25. A system according to claim 24 as dependent on any of claims 6 to 10, wherein the at least one chat window is configured to provide text chat between the user and at least one of or all of the selected group of further users, for example further users at the or a virtual table or further users invited to a virtual party or other event.
  26. 26. A system according to claim 24 or 25 as dependent on claim 22 or 23, wherein the at least one chat window is configured to provide text chat between the user and further users present in the virtual conference hall and/or a presenter of the presentation and/or users at the virtual party, virtual film screening or other event.
  27. 27. A system according to any preceding claim wherein the plurality of windows further comprises at least one video window configured to display live video of the user and/or at least some of the further users.
  28. 28. A system according to claim 27 as dependent on any of claims 5 to 10, wherein the at least one video window is configured to provide live video of the user and at least one of or all of the selected group of further users, for example further users at the or a virtual table or further users invited to a virtual party or other event.
  29. 29. A system according to any preceding claim, wherein the virtual environment comprises a plurality of rooms and/or windows and the user identifier is moveable between rooms and/or windows in response to user input.
  30. 30. A system according to claim 29, wherein the plurality of rooms comprise a virtual conference room and at least one virtual breakout room or virtual lobby.
  31. 31. A virtual conference or other interaction system to enable a user to interact with a plurality of further users in a virtual conference or other interaction environment, the system comprising a processing resource configured to: display at least one window on at least one display device associated with the user, wherein the at least one window includes a plurality of user identifiers, each user identifier representing either the user or a respective one of the plurality of further users, and each identifier is positioned in the window to represent a virtual position of its associated user in the virtual conference or other interaction environment; the virtual conference or other interaction environment comprises a plurality of virtual tables and/or virtual chairs; wherein the user identifier and further user identifiers are moveable between the virtual tables based on input from the user and further users; the system is configured to provide at least one communication channel between the user and a selected group of the further users, wherein the selected group of the further users are users whose user identifiers are at the same virtual table and/or associated with the same group of virtual chairs as the user identifier of the user.
  32. 32. A system according to claim 31, wherein the at least one communication channel comprises at least one audio stream, at least one chat box, and/or at least one video stream.
  33. 33. A method of enabling a user to interact with a plurality of further users in a virtual conference or other interaction environment, the method comprising: displaying a plurality of windows to the user; using a first window of the plurality of windows to provide audio content and/or video content and/or to display images or other data to the user; using at least one second window of the plurality of windows to include a plurality of user identifiers, each user identifier representing either the user or a respective one of the plurality of further users, and optionally each identifier is positioned in the window to represent a virtual position of its associated user in the virtual conference or other interaction environment; providing a plurality of audio streams simultaneously to at least one audio output device associated with the user, wherein a first of the audio streams is associated with the first window, and at least one second audio stream of the plurality of audio streams is from at least one of the further users represented by the further user identifiers.
  34. 34. A server configured to maintain a virtual environment for a plurality of users and to provide virtual environment data to a plurality of user terminals, wherein the server is configured to, for each user, selectively provide a plurality of audio and/or video streams to the user terminal associated with the user, wherein at least one of the audio and/or video streams comprises audio and/or video stream(s) from a selected group of further users and/or wherein the virtual environment comprises at least one virtual table and/or at least one virtual chair.
  35. A user terminal comprising a processing resource, at least one display device, and at least one audio output device, wherein the processing resource is configured to: display a plurality of windows on the at least one display device, wherein a first window of the plurality of windows is configured to provide audio content and/or video content and/or to display images or other data to the user, and at least one second window of the plurality of windows includes a plurality of user identifiers, each user identifier representing either the user or a respective one of a plurality of further users, and optionally each identifier is positioned in the window to represent a virtual position of its associated user in a virtual conference or other interaction environment; and the processing resource is further configured to provide a plurality of audio streams simultaneously to at least one audio output device associated with the user, wherein a first of the audio streams is associated with the first window, and at least one second audio stream of the plurality of audio streams is from at least one of the further users represented by the further user identifiers.
  36. A computer program product comprising computer readable instructions that are executable to perform a method according to claim 33.
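Claim 31 groups users for communication by the virtual table their identifier occupies. A minimal sketch of that grouping logic follows; all names here (`VirtualRoom`, `seat_user`, `group_for`) are illustrative assumptions and not part of the claimed system.

```python
# Hypothetical sketch of the table-based grouping in claim 31: users whose
# identifiers sit at the same virtual table form one communication group.
class VirtualRoom:
    def __init__(self):
        # Maps each user id to the virtual table their identifier occupies.
        self.table_of = {}

    def seat_user(self, user_id, table_id):
        # Moving an identifier between virtual tables (per the claim) is
        # just reassigning this mapping in response to user input.
        self.table_of[user_id] = table_id

    def group_for(self, user_id):
        # The "selected group": every other user at the same virtual table.
        table = self.table_of.get(user_id)
        return {u for u, t in self.table_of.items()
                if t == table and u != user_id}

room = VirtualRoom()
room.seat_user("alice", "table-1")
room.seat_user("bob", "table-1")
room.seat_user("carol", "table-2")
print(room.group_for("alice"))  # → {'bob'}
```

A communication channel (audio, chat box, or video per claim 32) would then be opened only between `user_id` and the members of `group_for(user_id)`.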
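Claims 33 and 35 both require a plurality of audio streams to be delivered simultaneously to one audio output device, which implies mixing the first window's stream with streams from further users. The pure-Python mixer below is a hedged sketch of that step; a real terminal would use an audio API, and `mix_streams` is an assumed name.

```python
# Illustrative mixing of a plurality of audio streams (claims 33 and 35):
# per-sample summation of equal-length PCM buffers, clamped to 16-bit range.
def mix_streams(streams):
    """Sum equal-length sample buffers, clamping to signed 16-bit values."""
    mixed = []
    for samples in zip(*streams):
        total = sum(samples)
        mixed.append(max(-32768, min(32767, total)))
    return mixed

window_audio = [1000, -2000, 3000]   # first audio stream (first window)
user_audio = [500, 500, 40000]       # second stream, from a further user
print(mix_streams([window_audio, user_audio]))  # → [1500, -1500, 32767]
```

The third output sample is clamped, showing why a practical mixer would also apply gain control when many further users speak at once.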
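Claim 34 has the server selectively provide streams to each terminal. One plausible routing rule, consistent with claim 31, is to forward only streams from users at the same virtual table; the sketch below assumes that rule, and the function and variable names are illustrative.

```python
# Hedged sketch of claim 34's selective routing: for each user, the server
# forwards only the live streams of further users sharing their virtual table.
def streams_for(user_id, table_of, live_streams):
    """Return the subset of live streams the server forwards to user_id."""
    table = table_of.get(user_id)
    return {u: s for u, s in live_streams.items()
            if u != user_id and table_of.get(u) == table}

table_of = {"alice": 1, "bob": 1, "carol": 2}
live_streams = {"alice": "a-stream", "bob": "b-stream", "carol": "c-stream"}
print(streams_for("alice", table_of, live_streams))  # → {'bob': 'b-stream'}
```

Recomputing this subset whenever an identifier moves between tables keeps each terminal's incoming streams in step with the virtual environment.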
GB2107960.3A 2021-06-03 2021-06-03 Virtual interaction system Pending GB2607331A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB2107960.3A GB2607331A (en) 2021-06-03 2021-06-03 Virtual interaction system
PCT/EP2022/064810 WO2022253856A2 (en) 2021-06-03 2022-05-31 Virtual interaction system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB2107960.3A GB2607331A (en) 2021-06-03 2021-06-03 Virtual interaction system

Publications (2)

Publication Number Publication Date
GB202107960D0 GB202107960D0 (en) 2021-07-21
GB2607331A true GB2607331A (en) 2022-12-07

Family

ID=76838715

Family Applications (1)

Application Number Title Priority Date Filing Date
GB2107960.3A Pending GB2607331A (en) 2021-06-03 2021-06-03 Virtual interaction system

Country Status (2)

Country Link
GB (1) GB2607331A (en)
WO (1) WO2022253856A2 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114268803B (en) * 2021-12-21 2023-10-17 北京字跳网络技术有限公司 Live video display method, device, equipment and storage medium

Citations (7)

Publication number Priority date Publication date Assignee Title
EP0942396A2 (en) * 1994-08-03 1999-09-15 Nippon Telegraph and Telephone Corporation Shared virtual space display method and apparatus using said method
US6330022B1 (en) * 1998-11-05 2001-12-11 Lucent Technologies Inc. Digital processing apparatus and method to support video conferencing in variable contexts
US20120079046A1 (en) * 2006-11-22 2012-03-29 Aol Inc. Controlling communications with proximate avatars in virtual world environment
US20140085406A1 (en) * 2012-09-27 2014-03-27 Avaya Inc. Integrated conference floor control
US20150256501A1 (en) * 2012-08-28 2015-09-10 Glowbl Graphical User Interface, Method, Computer Program and Corresponding Storage Medium
WO2019008320A1 (en) * 2017-07-05 2019-01-10 Maria Francisca Jones Virtual meeting participant response indication method and system
EP2774341B1 (en) * 2011-11-03 2020-08-19 Glowbl A communications interface and a communications method, a corresponding computer program, and a corresponding registration medium

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
EP1515570B1 (en) * 2003-09-11 2007-01-10 Sony Ericsson Mobile Communications AB Multiparty call of portable devices with party positioning identification
US8571192B2 (en) * 2009-06-30 2013-10-29 Alcatel Lucent Method and apparatus for improved matching of auditory space to visual space in video teleconferencing applications using window-based displays
US8848028B2 (en) * 2010-10-25 2014-09-30 Dell Products L.P. Audio cues for multi-party videoconferencing on an information handling system
WO2013181026A1 (en) * 2012-06-02 2013-12-05 Social Communications Company Interfacing with a spatial virtual communications environment
US9924252B2 (en) * 2013-03-13 2018-03-20 Polycom, Inc. Loudspeaker arrangement with on-screen voice positioning for telepresence system
US9445050B2 (en) * 2014-11-17 2016-09-13 Freescale Semiconductor, Inc. Teleconferencing environment having auditory and visual cues
US10250848B2 (en) * 2016-06-03 2019-04-02 Avaya Inc. Positional controlled muting


Also Published As

Publication number Publication date
WO2022253856A3 (en) 2023-03-23
GB202107960D0 (en) 2021-07-21
WO2022253856A2 (en) 2022-12-08

Similar Documents

Publication Publication Date Title
US10567448B2 (en) Participation queue system and method for online video conferencing
US11256467B2 (en) Connected classroom
US11343293B1 (en) System and method of enabling a non-host, participant-initiated breakout session in a videoconferencing system, and simultaneously displaying a session view of a videoconferencing session and the participant-initiated breakout session
US11003335B2 (en) Systems and methods for forming group communications within an online event
CN106789914A (en) Multimedia conference control method and system
US11330230B2 (en) Internet communication system that modifies users' perceptions based on their proximity within a virtual space
US20160344780A1 (en) Method and system for controlling communications for video/audio-conferencing
EP4248645A2 (en) Spatial audio in video conference calls based on content type or participant role
US11405587B1 (en) System and method for interactive video conferencing
WO2022253856A2 (en) Virtual interaction system
Kachach et al. The owl: Immersive telepresence communication for hybrid conferences
US9741257B1 (en) System and method for coordinated learning and teaching using a videoconference system
Wong et al. Shared-space: Spatial audio and video layouts for videoconferencing in a virtual room
JP2005055846A (en) Remote educational communication system
JP2009253625A (en) Apparatus, method and program for information collection-video conference implementation control and video conference system
US11825026B1 (en) Spatial audio virtualization for conference call applications
US11647064B1 (en) Computer systems for managing interactive enhanced communications
Rusňák et al. CoUnSiL: A video conferencing environment for interpretation of sign language in higher education
US11902040B1 (en) Private communication in a teleconference
US11659138B1 (en) System and method for interactive video conferencing
KR20240040040A (en) Digital automation of virtual events
MOKHTAR Dissertation submitted in partial fulfillment of the requirements for the Bachelor of Technology (Hons)(Information System)
Bolei et al. Evolving Patterns of Human Interactions