US20240012550A1 - Providing bot participants within a virtual conferencing system


Info

Publication number
US20240012550A1
Authority
US
United States
Prior art keywords
participant
virtual space
bot
virtual
interface
Prior art date
Legal status
Granted
Application number
US18/191,729
Other versions
US11880560B1 (en)
Inventor
Andrew Cheng-min Lin
Walton Lin
Current Assignee
Snap Inc
Original Assignee
Snap Inc
Priority date
Filing date
Publication date
Application filed by Snap Inc
Priority to US 18/191,729 (granted as US11880560B1)
Assigned to SNAP INC. Assignors: LIN, ANDREW CHENG-MIN; LIN, WALTON (assignment of assignors interest; see document for details)
Priority to US 18/534,341 (published as US20240103708A1)
Publication of US20240012550A1
Application granted
Publication of US11880560B1
Legal status: Active

Classifications

    • G06F3/04847 Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G06F3/04815 Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G06F3/04817 Interaction techniques based on graphical user interfaces [GUI] using icons
    • G06F3/0482 Interaction with lists of selectable items, e.g. menus
    • G06F3/04842 Selection of displayed objects or displayed text elements
    • G06F3/165 Management of the audio stream, e.g. setting of volume, audio stream path
    • G06T5/20 Image enhancement or restoration using local operators
    • G06T5/70 Denoising; Smoothing
    • H04L12/1813 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast, for computer conferences, e.g. chat rooms
    • H04L12/1827 Network arrangements for conference optimisation or adaptation
    • H04L51/02 User-to-user messaging in packet-switching networks using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
    • H04L65/403 Arrangements for multi-party communication, e.g. for conferences
    • G06T2200/24 Indexing scheme for image data processing or generation involving graphical user interfaces [GUIs]
    • G06T2207/10016 Image acquisition modality: video; image sequence

Definitions

  • the present disclosure relates generally to virtual conferencing systems, including providing bot participants within a virtual conferencing system.
  • a virtual conferencing system provides for the reception and transmission of audio and video data between devices, for communication between device users in real-time.
  • FIG. 1 is a diagrammatic representation of a networked environment in which the present disclosure may be deployed, in accordance with some examples.
  • FIG. 2 is a diagrammatic representation of a virtual conferencing system, in accordance with some examples, that has both client-side and server-side functionality.
  • FIG. 3 is a diagrammatic representation of a data structure as maintained in a database, in accordance with some examples.
  • FIG. 4 illustrates a virtual space design interface with interface elements for designing a virtual space, in accordance with some example embodiments.
  • FIG. 5 illustrates a virtual space navigation interface with interface elements to navigate between the rooms of a virtual space and to participate in virtual conferencing with respect to the rooms, in accordance with some example embodiments.
  • FIG. 6 is an interaction diagram illustrating a process for providing bot participants within a virtual conferencing system, in accordance with some example embodiments.
  • FIG. 7 illustrates design of a virtual space with bot participants, in accordance with some example embodiments.
  • FIG. 8 is a flowchart illustrating a process for providing bot participants within a virtual conferencing system, in accordance with some example embodiments.
  • FIG. 9 is a diagrammatic representation of a machine in the form of a computer system within which a set of instructions may be executed for causing the machine to perform any one or more of the methodologies discussed herein, in accordance with some examples.
  • FIG. 10 is a block diagram showing a software architecture within which examples may be implemented.
  • a virtual conferencing system provides for the reception and transmission of audio and video data between devices, for communication between device users in real-time.
  • a virtual conferencing system allows a user to design or select a virtual space with multiple rooms for real-time communication. Participants of a virtual conference may be associated with a respective video feed, and each video feed may be assignable to a participant video element within a room of the virtual space. Participants may switch between the different rooms of the virtual space, for example, to engage in different conversations, events, seminars, and the like. In some cases, a user designing the virtual space may wish to fine-tune the placement, audio levels, and video settings (e.g., blur) for participant video elements.
  • the disclosed embodiments provide for configuring bot participants, each of which is assignable to a respective participant video element.
  • the bot participants may be presented during design, before virtual conferencing with actual participants occurs.
  • for example, the user (e.g., designer of the virtual space) can specify an audio level, audio track, blur level (e.g., filtering) and/or participant role for a given participant bot.
  • the user can also position participant bots at different participant video elements and/or different rooms. In this manner, it is possible to observe the properties (e.g., positions, audio levels, video filters) of participant video elements and fine-tune those properties, prior to hosting actual participants within the virtual space for virtual conferencing.
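As a rough illustration of this design-phase workflow, the following TypeScript sketch models a bot participant being assigned to a participant video element so its properties can be observed and fine-tuned. All names (BotParticipant, assignBot, and so on) are hypothetical; the patent does not prescribe any implementation.

```typescript
// Hypothetical model of a bot participant used to preview a room design.
interface BotParticipant {
  name: string;
  audioLevel: number;       // 0..1, simulated output volume
  audioTrack?: string;      // e.g., a prerecorded counting voice
  blurLevel: number;        // 0..1, video filter strength
  role: "presenter" | "viewer";
}

interface ParticipantVideoElement {
  id: string;
  roomId: string;
  assignedTo?: BotParticipant; // a bot stands in for a real feed during design
}

// Assign a bot to an unoccupied participant video element so the designer
// can observe position, audio level and blur before any real participant joins.
function assignBot(element: ParticipantVideoElement, bot: BotParticipant): void {
  if (element.assignedTo) throw new Error(`element ${element.id} is occupied`);
  element.assignedTo = bot;
}

const stage: ParticipantVideoElement = { id: "stage-1", roomId: "main" };
assignBot(stage, { name: "Test Bot 1", audioLevel: 0.8, blurLevel: 0.2, role: "presenter" });
console.log(stage);
```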
  • FIG. 1 is a block diagram showing an example virtual conferencing system 100 for exchanging data over a network.
  • the virtual conferencing system 100 includes multiple instances of a client device 102 , each of which hosts a number of applications, including a virtual conference client 104 and other application(s) 106 .
  • Each virtual conference client 104 is communicatively coupled to other instances of the virtual conference client 104 (e.g., hosted on respective other client devices 102 ), a virtual conference server system 108 and third-party servers 110 via a network 112 (e.g., the Internet).
  • a virtual conference client 104 can also communicate with locally-hosted applications 106 using Application Program Interfaces (APIs).
  • the virtual conferencing system 100 provides for the reception and transmission of audio, video, image, text and/or other signals by user devices (e.g., at different locations), for communication between users in real-time.
  • two users may utilize virtual conferencing to communicate with each other in one-to-one communication at their respective devices.
  • multiway virtual conferencing may be utilized by more than two users to participate in a real-time, group conversation.
  • multiple client devices 102 may participate in virtual conferencing, for example, with the client devices 102 participating in a group conversation in which audio-video content streams and/or message content (e.g., text, images) are transmitted between the participant devices.
  • a virtual conference client 104 is able to communicate and exchange data with other virtual conference clients 104 and with the virtual conference server system 108 via the network 112 .
  • the data exchanged between virtual conference clients 104 , and between a virtual conference client 104 and the virtual conference server system 108 includes functions (e.g., commands to invoke functions) as well as payload data (e.g., video, audio, other multimedia data, text).
  • the virtual conference server system 108 provides server-side functionality via the network 112 to a particular virtual conference client 104 .
  • for example, the virtual conference server system 108 receives and processes streaming content from the virtual conference client 104 (e.g., installed on a first client device 102).
  • the streaming content can correspond to audio and/or video content captured by sensors (e.g., microphones, video cameras) on the client devices 102 , for example, corresponding to real-time video and/or audio capture of the users (e.g., faces) and/or other sights and sounds captured by the respective device.
  • the streaming content may be supplemented with other audio/visual data (e.g., animations, overlays, emoticons and the like) and/or message content (e.g., text, stickers, emojis, other image/video data), for example, in conjunction with extension applications and/or widgets associated with the virtual conference client 104 .
  • While certain functions of the virtual conferencing system 100 are described herein as being performed by either a virtual conference client 104 or by the virtual conference server system 108 , the location of certain functionality either within the virtual conference client 104 or the virtual conference server system 108 may be a design choice. For example, it may be technically preferable to initially deploy certain technology and functionality within the virtual conference server system 108 but to later migrate this technology and functionality to the virtual conference client 104 where a client device 102 has sufficient processing capacity.
  • the virtual conference server system 108 supports various services and operations that are provided to the virtual conference client 104 . Such operations include transmitting data to, receiving data from, and processing data generated by the virtual conference client 104 . This data may include the above-mentioned streaming content and/or message content, client device information, and social network information, as examples. Data exchanges within the virtual conferencing system 100 are invoked and controlled through functions available via user interfaces (UIs) of the virtual conference client 104 .
  • an Application Program Interface (API) server 114 is coupled to, and provides a programmatic interface to, application servers 118 .
  • the application servers 118 are communicatively coupled to a database server 124 , which facilitates access to a database 126 that stores data associated with virtual conference content processed by the application servers 118 .
  • a web server 116 is coupled to the application servers 118 , and provides web-based interfaces to the application servers 118 . To this end, the web server 116 processes incoming network requests over the Hypertext Transfer Protocol (HTTP) and several other related protocols.
  • the Application Program Interface (API) server 114 receives and transmits virtual conference data (e.g., commands, audio/video payloads) between the client device 102 and the application servers 118 .
  • the Application Program Interface (API) server 114 provides a set of interfaces (e.g., routines and protocols) that can be called or queried by the virtual conference client 104 in order to invoke functionality of the application servers 118 .
  • the Application Program Interface (API) server 114 exposes various functions supported by the application servers 118 , including account registration, login functionality, the streaming of audio and/or video content, and/or the sending and retrieval of message content, via the application servers 118 , from a particular virtual conference client 104 to another virtual conference client 104 , the retrieval of a list of contacts of a user of a client device 102 , the addition and deletion of users (e.g., contacts) to a user graph (e.g., a social graph), and opening an application event (e.g., relating to the virtual conference client 104 ).
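For context only, a hypothetical TypeScript interface approximating the functions the API server 114 is described as exposing (account registration, login, streaming, messaging, contact management). Method names and signatures are assumptions, not part of the disclosure.

```typescript
// Illustrative shape of the programmatic interface the API server might
// expose to the virtual conference client; all names are assumptions.
interface VirtualConferenceApi {
  registerAccount(email: string, password: string): Promise<string>;      // returns user id
  login(email: string, password: string): Promise<string>;                // returns session token
  startStream(roomId: string, kind: "audio" | "video"): Promise<string>;  // returns stream id
  sendMessage(toUserId: string, body: string): Promise<void>;
  retrieveMessages(userId: string): Promise<string[]>;
  listContacts(userId: string): Promise<string[]>;                        // user graph lookups
  addContact(userId: string, contactId: string): Promise<void>;
  removeContact(userId: string, contactId: string): Promise<void>;
}
```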
  • the application servers 118 host a number of server applications and subsystems, including for example a virtual conference server 120 and a social network server 122 .
  • the virtual conference server 120 implements a number of virtual conference processing technologies and functions, particularly related to the aggregation and other processing of content (e.g., streaming content) included in audio-video feeds received from multiple instances of the virtual conference client 104 .
  • Other processor and memory intensive processing of data may also be performed server-side by the virtual conference server 120 , in view of the hardware requirements for such processing.
  • the third-party server 110 provides for initiating communication between a user at the virtual conference client 104 of the client device 102 and a user external to the virtual conference server system 108 .
  • the third-party server 110 may correspond to a cloud-based service which allows for programmatically making phone calls, receiving phone calls, sending text messages, receiving text messages and/or performing other communication functions using web service APIs.
  • the social network server 122 supports various social networking functions and services and makes these functions and services available to the virtual conference server 120 . To this end, the social network server 122 maintains and accesses a user graph 304 (as shown in FIG. 3 ) within the database 126 . Examples of functions and services supported by the social network server 122 include the identification of other users of the virtual conferencing system 100 with which a particular user has relationships (e.g., contacts such as friends, colleagues, teachers, students, and the like).
  • a user interacting via the virtual conference client 104 running on a first client device 102 may select and invite participant(s) to a virtual conference.
  • the participants may be selected from contacts maintained by the social network server 122 .
  • the participants may be selected from contacts included within a contact address book stored in association with the first client device 102 (e.g., in local memory or in a cloud-based user account).
  • the participants may be selected by the user manually entering email addresses and/or phone numbers of the participants.
  • the user at the first client device 102 may initiate the virtual conference by selecting an appropriate user interface element provided by the virtual conference client 104 , thereby prompting the invited participants, at their respective devices (e.g., one or more second client devices 102 ), to accept or decline participation in the virtual conference.
  • the virtual conference server system 108 may perform an initialization procedure in which session information is published between the participant client devices 102 , including the user who provided the invite.
  • Each of the participant client devices 102 may provide respective session information to the virtual conference server system 108 , which in turn publishes the session information to the other participant client devices 102 .
  • the session information for each client device 102 may include content stream(s) and/or message content that is made available by the client device 102 , together with respective identifiers for the content stream(s) and/or message content.
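A minimal sketch of that initialization flow, assuming a simple in-memory registry: each device publishes its session information (content stream identifiers), and the server republishes the collected information to the other participant devices. All names are illustrative.

```typescript
// Sketch of the initialization step: each device reports the content
// streams it makes available, and the server republishes the collected
// session information to every other participant device.
interface SessionInfo {
  deviceId: string;
  streamIds: string[];   // identifiers for audio/video content streams
  messageChannel?: string;
}

class SessionRegistry {
  private sessions = new Map<string, SessionInfo>();

  publish(info: SessionInfo): SessionInfo[] {
    this.sessions.set(info.deviceId, info);
    // Return everyone else's session info, as the server would push it out.
    return [...this.sessions.values()].filter(s => s.deviceId !== info.deviceId);
  }
}

const registry = new SessionRegistry();
registry.publish({ deviceId: "device-a", streamIds: ["a-video", "a-audio"] });
const visibleToB = registry.publish({ deviceId: "device-b", streamIds: ["b-video"] });
console.log(visibleToB); // device-b now sees device-a's streams
```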
  • the virtual conference may correspond to a virtual space which includes one or more rooms (e.g., virtual rooms).
  • the virtual space and its corresponding rooms may have been created at least in part by the inviting user and/or by other users.
  • an end user may act as an administrator, who creates their own virtual spaces with rooms, and/or designs a virtual space based on preset available rooms.
  • FIG. 2 is a block diagram illustrating further details regarding the virtual conferencing system 100 , according to some examples.
  • the virtual conferencing system 100 is shown to comprise the virtual conference client 104 and the application servers 118 .
  • the virtual conferencing system 100 embodies a number of subsystems, which are supported on the client-side by the virtual conference client 104 and on the server-side by the application servers 118 .
  • These subsystems include, for example, a virtual space creation system 202 which implements a virtual space design interface 204 , and a virtual space participation system 206 which implements a virtual space navigation interface 208 .
  • the virtual space creation system 202 provides for a user to design one or more virtual space(s) in which participants may engage in virtual conferencing.
  • a virtual space corresponds to an environment with one or more rooms configured to accommodate virtual conferencing.
  • the virtual space may be created and/or selected (e.g., from among a set of predefined virtual spaces with rooms) by an end user who wishes to invite other users for virtual conferencing.
  • the individual rooms of a virtual space may be newly-created and/or selected (e.g., from among a set of predefined rooms) by the end user.
  • the virtual space creation system 202 includes a virtual space design interface 204 , which is usable by the end user to design a virtual space, including creating and/or selecting rooms for including in the virtual space.
  • the virtual space design interface 204 enables an end user (e.g., an administrator) to select and/or position multiple elements for including in a room.
  • elements include, but are not limited to, participant video elements (e.g., for displaying the respective video feeds of participants), chat interfaces (e.g., for participants to provide text-based messages, stickers and/or reactions within a room), breakout buttons (e.g., for shuffling from a first room to one or more second rooms), and/or other user-definable elements for performing certain actions (e.g., speaking into a virtual microphone, querying an administrator via a button, and the like).
  • the virtual space participation system 206 is configured to perform virtual conferencing among participants within a virtual space.
  • the participants may include the end user (e.g., administrator) who created the virtual space, as well as those users who were invited to participate in virtual conferencing with respect to the virtual space created/selected by the end user.
  • the virtual space participation system 206 includes a virtual space navigation interface 208 (e.g., discussed below with respect to FIG. 5 ) that allows participants to navigate between the rooms of a virtual space, and to participate in virtual conferencing with respect to the rooms.
  • the virtual space creation system 202 and the virtual space participation system 206 provide for an end user (e.g., an administrator) to create different types of environments (e.g., virtual spaces with rooms) for virtual conferencing, and for participants to engage in virtual conferencing within such environments.
  • Examples of such virtual conferencing include, but are not limited to: business meetings, seminars, presentations, classroom lectures, teacher office hours, concerts, reunions, virtual dinners, escape rooms, and the like.
  • the database 126 includes profile data 302 , a user graph 304 and a user table 306 relating to the users (participants) of the virtual conferencing system 100 .
  • the user table 306 stores user data, and is linked (e.g., referentially) to the user graph 304 and the profile data 302 .
  • Each user of the virtual conferencing system 100 is associated with a unique identifier (email address, telephone number, social network identifier, etc.).
  • the user graph 304 stores (e.g., in conjunction with the social network server 122) information regarding relationships and associations between users. Such relationships may be social, professional (e.g., work at a common corporation or organization), interest-based or activity-based, merely for example. As noted above, the user graph 304 may be maintained and accessed at least in part by the social network server 122.
  • the profile data 302 stores multiple types of profile data about a particular user.
  • the profile data 302 may be selectively used and presented to other users of the virtual conferencing system 100 , based on privacy settings specified by a particular user.
  • the profile data 302 includes, for example, a user name, telephone number, email address, and/or settings (e.g., notification and privacy settings), as well as a user-selected avatar representation.
  • the database 126 further includes a virtual spaces table 308 .
  • a virtual space corresponds to an environment with one or more rooms configured to accommodate virtual conferencing.
  • a virtual space may be newly-created by a user, or may be included within one or more sets of public virtual spaces made available (e.g., by other users, system administrators, and the like) for virtual conferencing.
  • the virtual spaces table 308 stores information representing the one or more sets of public virtual spaces, as well as any private virtual space(s) created by a user (e.g., in a case where the particular user did not make such virtual space(s) public).
  • the virtual spaces table 308 stores associations between its virtual spaces and users (e.g., within the user table 306 ) who selected those virtual spaces. In this manner, it is possible for a particular user to have one or more virtual spaces associated therewith.
  • the database 126 includes a rooms table 310 which may be associated with the virtual spaces within the virtual spaces table 308 . As noted above, a room may be newly-created by a user, or may be included within one or more sets (e.g., galleries) of public rooms made available for user selection.
  • the rooms table 310 stores information representing the one or more sets of rooms, as well as any private room(s) created by the user (e.g., in a case where the particular user did not make such room(s) public).
  • the stored information is usable by the virtual conferencing system 100 to create the corresponding rooms for use in a virtual space.
  • the stored information may further include recordings (e.g., audio and/or video recordings) of a particular virtual conference, for subsequent playback by corresponding participants.
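To make the relationships between these tables concrete, here is a hypothetical TypeScript mirror of the schema (user table, user graph, profile data, virtual spaces table, rooms table). Field names are assumptions; the patent names only the tables and their links.

```typescript
// Assumed field names; the patent describes only the tables and relationships.
interface ProfileData {
  userName: string;
  telephone?: string;
  email?: string;
  avatarUrl?: string;                       // user-selected avatar representation
  notificationSettings?: Record<string, boolean>;
}

interface UserRecord {
  id: string;                               // unique identifier (email, phone, etc.)
  profile: ProfileData;
}

interface UserGraphEdge {
  fromUserId: string;
  toUserId: string;
  relationship: "friend" | "colleague" | "teacher" | "student";
}

interface VirtualSpaceRecord {
  id: string;
  ownerId: string;                          // user who created/selected the space
  isPublic: boolean;
  roomIds: string[];                        // rooms belonging to this space
}

interface RoomRecord {
  id: string;
  isPublic: boolean;
  recordingUrls?: string[];                 // optional recordings for later playback
}

const alice: UserRecord = { id: "alice@example.test", profile: { userName: "Alice" } };
console.log(alice.id);
```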
  • FIG. 4 illustrates a virtual space design interface 204 with interface elements for designing a virtual space, in accordance with some example embodiments. Designing the virtual space may include creation and/or selection of rooms for including in the virtual space.
  • the virtual space design interface 204 includes a menu interface 402 , a room elements interface 404 , an element properties interface 406 , a controls interface 408 , a room list interface 410 , a room canvas interface 412 , and an administrator name 414 .
  • elements 402-414 correspond to an example of interface elements for the virtual space design interface 204; additional, fewer and/or different interface elements may be used.
  • the menu interface 402 includes user-selectable categories (e.g., menu headings) relating to a virtual space (e.g., “workspace”), rooms within the virtual space, and/or elements within a room.
  • the workspace category is user-selectable for presenting options (e.g., via a drop-down list) to manage settings for the virtual space, manage invites for the virtual space, manage versions of a virtual space, publish the virtual space (e.g., for future use by users), manage virtual space publications, and/or to start/manage recordings (e.g., audio and/or video recordings) with respect to the virtual space.
  • the room category of the menu interface 402 is user-selectable for presenting options (e.g., via a drop-down list) to manage settings for a room within the virtual space, set a room background, set an order for the rooms listed in the room list interface 410 , create a new room, import a room from a set of available rooms, remove a room, publish a room, manage room publications, and/or to start/manage recordings with respect to a room.
  • the element category is user-selectable for presenting options (e.g., via a drop-down list) to insert elements into a room, insert shapes into a room, foreground/background elements, arrange/position elements, and/or group elements.
  • Examples of elements include, but are not limited to: an action button, analog clock, audience question board, backpack item, breakout button, chat, closed caption display, closed caption input, countdown, clock, digital clock, external communication element (e.g., a doorbell), double-sided image, feedback, image, multiuser video chat, music, participant audio mixer, participant count, participant video element (e.g., single or multiple), picture strip, poll, random source, room preview, scheduled time, sound effect, stopwatch, take picture, text, timer, user search, video, waiting list, web media, website.
  • Examples of shapes include, but are not limited to, a circle, rectangle and triangle.
  • the users category of the menu interface 402 is user-selectable for presenting options (e.g., via a drop-down list) to manage users/participants of the virtual space (e.g., adding tags for participants, so as to distinguish between roles such as an administrator or an attendee/participant).
  • the edit category is user-selectable for performing edit operations (e.g., undo, redo, cut, copy, paste).
  • the help category is user-selectable for performing help operations (e.g., getting started, discord, live help, submitting feedback).
  • the room elements interface 404 includes user-selectable icons for inserting elements (e.g., corresponding to a subset of those available via the above-mentioned element category) into a current room.
  • the elements may be added and/or positioned within the current room by selecting the element and dragging the selected element onto the room canvas interface 412 , which represents the layout of the current room.
  • the room elements interface 404 includes icons including but not limited to: a text icon for adding text to a room; a participant video icon for adding a single participant video element (e.g., an interface element which is selectable by a single participant for displaying that participant's video feed) to a room; a multiuser video icon for adding a multiple participant video element (e.g., an interface element which is selectable by one or more participants for displaying the video feeds for those participants) to a room; a chat icon for adding a chat interface (e.g., for messaging using text, stickers, emojis, etc.) to a room; a video playback icon for adding a video playback element (e.g., screen) to a room for playback of a selected video; a background icon for selecting a background color/gradient, image or video for a room; and/or an action icon for adding an action element (e.g., button) to a room for performing a user-defined action (e.g., speaking into a virtual microphone, querying an administrator via a button, and the like).
  • the single participant video element is configured to be assignable (e.g., during virtual conferencing) to a single participant video feed (e.g., including video and/or audio).
  • the multiple participant video element is configured to be assignable to multiple participant video feeds (e.g., including video and/or audio).
  • the multi-participant video element is configurable (e.g., via its respective element properties interface 406 ) to accommodate a user-specified number of the participant video feeds.
  • the element properties interface 406 includes various fields for setting configuration properties for the above-described room elements.
  • the element properties interface 406 includes fields for setting the element title/name, opacity, gradient, style, layout, borders/corners, shadows, interaction (e.g., to what extent participant(s) may delete, modify, resize the element), filtering, full screen status, conditions, accessibility and actions for the element.
  • the element properties interface 406 includes further fields for setting the manner in which users are placed into the participant video element during virtual conferencing (e.g., automatically, manually by the participant and/or the administrator end user).
  • the element properties interface 406 includes further properties for setting who (e.g., administrator and/or participants) can provide chat input, and/or which types of input (e.g., text, stickers, emojis, etc.) are available.
  • the element properties interface 406 includes further properties for setting what type of action is to be performed in response to user selection of the action element (e.g., button).
  • the element properties interface 406 includes further properties for selecting participants and/or breakout rooms.
  • the element properties interface 406 further includes fields for setting configuration properties for the room canvas interface 412 .
  • the element properties interface 406 includes fields for selecting a number of fake participants (e.g., simulated video feeds) in order to visualize multiple users, selecting music (e.g., background music), and/or selecting reaction buttons for participants to indicate real-time reactions with respect to virtual conferencing within a room.
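As an informal summary of the fields just described, a hypothetical TypeScript shape for participant video element properties and room canvas properties follows. Field names, types and option values are assumptions.

```typescript
// Rough sketch of the configuration fields the element properties interface
// exposes; names and types are assumptions, not the patent's terms.
type PlacementMode = "automatic" | "manual-participant" | "manual-admin";

interface ParticipantVideoElementProperties {
  title: string;
  opacity: number;                 // 0..1
  borders?: { radius: number; width: number };
  shadow?: boolean;
  filtering?: { blur: number };    // e.g., blur strength for the video feed
  fullScreenAllowed: boolean;
  placement: PlacementMode;        // how feeds are assigned during conferencing
}

interface RoomCanvasProperties {
  fakeParticipantCount: number;    // simulated feeds to visualize occupancy
  backgroundMusicUrl?: string;
  reactionButtons: string[];       // e.g., ["👍", "🎉"]
}

const defaults: ParticipantVideoElementProperties = {
  title: "Speaker 1", opacity: 1, fullScreenAllowed: false, placement: "automatic",
};
console.log(defaults.placement);
```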
  • the virtual space design interface 204 provides for a designer of a room to create a room background.
  • the room background may depict a number of objects including seating objects. Examples of seating objects include chairs, sofas, swings, mats, beanbags, grass, the ground and the like.
  • the virtual space design interface 204 is not limited to using seating objects for the positioning of participant video elements. For example, an administrator may select to position participant video elements on or near other objects in a room, such as windows, clocks, walls, etc.
  • the virtual space design interface 204 provides for a user to position, shape and/or size room elements (e.g., participant video elements) anywhere within a room, such as relative to background objects.
  • the element properties interface 406 includes fields via which the user can input values to specify position, size or shape of the room elements.
  • the room canvas interface 412 of the virtual space design interface 204 provides for a user to reposition, reshape and/or resize the room elements (e.g., participant video elements) via predefined user gestures (e.g., selecting and dragging elements, edges of elements, corners of elements, and the like).
  • the virtual space design interface 204 may cause user changes made to room elements via the room canvas interface 412 to automatically update values within corresponding fields of the element properties interface 406 .
  • the room canvas interface 412 of the virtual space design interface 204 provides for a user to segment objects (e.g., background objects) depicted in the room, so as to define boundaries of the object.
  • the user may define line segments around an object (e.g., a sofa) using a predefined gesture in order to define the boundary of an object (e.g., clicking points corresponding to corners of the object in combination with pressing a predefined key).
  • the virtual space design interface 204 provides for displaying the segmented objects relative to room elements (e.g., participant video elements) of the room.
  • a segmented sofa may be displayed in front of, or behind (based on user-selected properties) elements, to create the perception of depth to the end user.
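A small sketch of how such segmentation and depth ordering might be represented: the designer's clicked corner points define an object boundary, and a user-selected z-order decides whether the object is drawn in front of or behind the participant video elements. Names are illustrative.

```typescript
// Segmenting a background object (e.g., a sofa) by its clicked corners,
// then drawing it in front of or behind participant video elements to
// create a sense of depth. All names are illustrative.
interface Point { x: number; y: number }

interface SegmentedObject {
  label: string;
  boundary: Point[];               // corner points clicked by the designer
  zOrder: "in-front" | "behind";   // user-selected property
}

function renderOrder(objects: SegmentedObject[], elementIds: string[]): string[] {
  const behind = objects.filter(o => o.zOrder === "behind").map(o => o.label);
  const inFront = objects.filter(o => o.zOrder === "in-front").map(o => o.label);
  // Draw back-to-front: background objects, then video elements, then foreground.
  return [...behind, ...elementIds, ...inFront];
}

const sofa: SegmentedObject = {
  label: "sofa",
  boundary: [{ x: 10, y: 80 }, { x: 120, y: 80 }, { x: 120, y: 140 }, { x: 10, y: 140 }],
  zOrder: "in-front",
};
console.log(renderOrder([sofa], ["participant-1", "participant-2"]));
// -> ["participant-1", "participant-2", "sofa"]
```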
  • the controls interface 408 includes user-selectable icons corresponding to controls (e.g., administrative controls) for the virtual space.
  • the controls interface 408 includes icons including but not limited to: a director mode icon for toggling between a director mode for designing a room and a user mode for viewing the room within the virtual space design interface 204 (e.g., with the director mode including the room elements interface 404 and the element properties interface 406 while the user mode does not); a view icon for viewing the room within the virtual space navigation interface 208; a share screen icon (e.g., for collaborative design with other user(s) such as co-administrators); a microphone icon for enabling or disabling the microphone; a help icon (e.g., getting started, discord, live help, submitting feedback); an invite icon (e.g., for displaying an invite link for sending to participants to visit the virtual space); a settings icon (e.g., for selecting the end user's video and audio devices for the virtual conferencing, and for selecting a user avatar); and/or an exit icon for exiting the virtual space design interface 204.
  • the room list interface 410 displays the list of rooms for the virtual space.
  • Each listed room is user selectable to switch to edit (e.g., in director mode) and/or view (e.g., in user mode) the selected room.
  • the list of rooms may be modified (e.g., by adding, importing and/or removing rooms) via the options within the room category of the menu interface 402 .
  • FIG. 5 illustrates a virtual space navigation interface 208 with interface elements to navigate between the rooms of a virtual space and to participate in virtual conferencing with respect to the rooms, in accordance with some example embodiments.
  • the virtual space navigation interface 208 includes a controls interface 502, a room list interface 504, a current room interface 506, a participant video element 508, a participant video element 510, and participant button(s) 512.
  • elements 502-512 correspond to an example of interface elements for the virtual space navigation interface 208; additional, fewer and/or different interface elements may be used.
  • the controls interface 502 includes user-selectable icons corresponding to controls (e.g., administrative controls) for the virtual space.
  • the controls interface 502 includes icons including but not limited to: an edit icon for redirecting to the virtual space design interface 204 to edit the current room; a volume icon for adjusting a volume level for the current room; a share screen icon (e.g., for allowing others to view the room without necessarily joining the room); a microphone icon for muting and unmuting the microphone; a help icon (e.g., getting started, discord, live help, submitting feedback); an invite icon (e.g., for displaying an invite link for participants to visit the virtual space); a settings icon (e.g., for selecting the end user's video and audio devices for the virtual conferencing, and for selecting a user avatar); and/or an exit icon for exiting the virtual space navigation interface 208.
  • the room list interface 504 displays the list of rooms for the virtual space. Each listed room is user selectable to switch to the selected room (e.g., for virtual conferencing). The selected room is presented as a current room within the current room interface 506 . In this manner, a participant may navigate among the multiple rooms available within the virtual space. Alternatively or in addition, navigation between rooms is possible via a virtual space map interface (not shown) which depicts a map view of the virtual space (e.g., a floor plan) and its corresponding rooms, with each room being user selectable to navigate thereto.
  • navigation between rooms is further possible by positioning a navigation button (not shown) within a room, where user selection of the button results in navigating to another room (e.g., a predefined room).
  • the virtual space design interface 204 allows for the design of a virtual space and its corresponding rooms.
  • navigation between rooms is based at least in part on the design of the virtual space (e.g., a virtual space may include one or more of the above-mentioned room list interface 504 , the virtual space map/floor plan interface and/or the navigation button).
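The three navigation paths just described (room list, map/floor-plan view, navigation button) could be modeled along these lines; this is a sketch under assumed names, not the patent's implementation.

```typescript
// Room list / map view / navigation button, modeled with illustrative names.
interface Room { id: string; name: string }

class VirtualSpaceNavigator {
  constructor(private rooms: Room[], private currentRoomId: string) {}

  // Room list or map view: any listed room is selectable directly.
  goTo(roomId: string): void {
    if (!this.rooms.some(r => r.id === roomId)) throw new Error(`no room ${roomId}`);
    this.currentRoomId = roomId;
  }

  // Navigation button: jumps to a room predefined at design time.
  pressNavigationButton(targetRoomId: string): void {
    this.goTo(targetRoomId);
  }

  get current(): string { return this.currentRoomId; }
}

const nav = new VirtualSpaceNavigator(
  [{ id: "lobby", name: "Lobby" }, { id: "seminar", name: "Seminar" }],
  "lobby",
);
nav.pressNavigationButton("seminar");
console.log(nav.current); // "seminar"
```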
  • each participant is represented as a respective participant video element.
  • a participant video element corresponds to an interface element (e.g., a box) which is selectable by a single participant for displaying that participant's video feed.
  • the example of FIG. 5 includes a first participant associated with the participant video element 508 and a second participant associated with the participant video element 510 .
  • the participant video element 510 showing the feed of the second participant may include participant button(s) 512 .
  • the participant button(s) 512 are selectable by the first participant so as to perform a predefined action (e.g., initiate a side conversation, designate the second participant to follow the first participant when the first participant moves rooms) with respect to the second participant.
  • the virtual space navigation interface 208 may vary based on whether a given participant is an administrator or another participant (e.g., an attendee). For example, some participant video elements may be designated (e.g., via the virtual space design interface 204 ) for administrators, while other participant video elements are designated for other participants.
  • the virtual conference server system 108 is configured to distinguish between these administrator or other participant roles, for example, based on the above-described tags assigned to participants via the users category of the menu interface 402 provided by the virtual space design interface 204 .
  • the current room interface 506 may accommodate additional participants for virtual conferencing.
  • the additional participants may be positioned (e.g., automatically and/or manually by dragging) based on the positioning of participant video elements (e.g., boxes) as designed by the virtual space design interface 204 .
  • the element properties interface 406 includes fields for setting the manner in which participant video feeds are assigned and placed into the participant video element during virtual conferencing (e.g., automatically, manually by the participant and/or the administrator end user).
  • the virtual conference server system 108 may follow a predefined order (e.g., as specified during room design) for placing participant video feeds with respective participant video elements.
  • the virtual conference server system 108 provides for the participants to manually select an available (e.g., unoccupied) participant video element at which to position their respective participant video feed. For example, the manual selection is performed using a predefined gesture (e.g., a drag and drop operation of the participant video feed to a selected participant video element). Based on room design, the virtual conference server system 108 provides for one or more of the administrator (e.g., presenter) or participants (e.g., attendees) to manually assign a participant video feed to a participant video element.
  • assignments between participant video feeds and participant video elements may be a combination of automatic and manual selection. For example, assignments may initially default to automatic selection. At a later time, with proper settings per the design of the room, participant(s) may manually select a different available participant video element to be assigned to.
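A sketch of that combined behavior, assuming hypothetical names: feeds are first auto-assigned following the design-time element order, and a participant may later drag their feed to another unoccupied element when the room design permits.

```typescript
// Combined automatic/manual assignment of video feeds to elements.
interface VideoElementSlot { id: string; order: number; feedId?: string }

function autoAssign(slots: VideoElementSlot[], feedId: string): VideoElementSlot {
  const open = slots
    .filter(s => s.feedId === undefined)
    .sort((a, b) => a.order - b.order)[0]; // follow the design-time order
  if (!open) throw new Error("room is full");
  open.feedId = feedId;
  return open;
}

function manualReassign(slots: VideoElementSlot[], feedId: string, targetId: string): void {
  const target = slots.find(s => s.id === targetId);
  if (!target || target.feedId !== undefined) throw new Error("element unavailable");
  const current = slots.find(s => s.feedId === feedId);
  if (current) current.feedId = undefined;   // vacate the old element
  target.feedId = feedId;                    // e.g., after a drag-and-drop gesture
}

const slots: VideoElementSlot[] = [
  { id: "e1", order: 1 }, { id: "e2", order: 2 }, { id: "e3", order: 3 },
];
autoAssign(slots, "alice-feed");             // lands in e1 per the design order
manualReassign(slots, "alice-feed", "e3");   // alice drags her feed to e3
console.log(slots);
```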
  • the participant video feeds of the first participant and the second participant(s) are assigned to the participant video elements 508 - 510 within the current room interface 506 based on automatic and/or manual selection.
  • the first and second participants can participate in virtual conferencing, and observe each other's participant video feeds which may appear to be positioned relative to objects (e.g., seating objects) in the room.
  • the virtual conferencing system 100 allows a user, in designing a room for virtual conferencing, to position and/or size participant video elements (e.g., based on objects depicted in the room). Each of the participant video elements corresponds to a potential placeholder for a participant.
  • the video feeds of participants are assigned to the participant video elements 508 - 510 .
  • FIG. 6 is an interaction diagram illustrating a process 600 for providing bot participants within a virtual conferencing system, in accordance with some example embodiments.
  • the process 600 is described herein with reference to a first client device 602 , a second client device(s) 604 , and the virtual conference server system 108 .
  • Each of the first client device 602 and the second client device(s) 604 may correspond to a respective client device 102 .
  • the process 600 is not limited to the first client device 602 , the second client device(s) 604 , and the virtual conference server system 108 .
  • one or more blocks (or operations) of the process 600 may be performed by one or more other components of the first client device 602 , the second client device(s) 604 , or the virtual conference server system 108 , and/or by other suitable devices. Further for explanatory purposes, the blocks (or operations) of the process 600 are described herein as occurring in serial, or linearly. However, multiple blocks (or operations) of the process 600 may occur in parallel or concurrently. In addition, the blocks (or operations) of the process 600 need not be performed in the order shown and/or one or more blocks (or operations) of the process 600 need not be performed and/or can be replaced by other operations. The process 600 may be terminated when its operations are completed. In addition, the process 600 may correspond to a method, a procedure, an algorithm, etc.
  • Each of the first client device 602 and the second client device(s) 604 have instances of the virtual conference client 104 installed thereon.
  • the first client device 602 and the second client device(s) 604 are associated with a respective first participant and second participant(s) of the virtual conference server system 108 .
  • the first participant may be associated with a first user account of the virtual conference server system 108
  • the second participant(s) may be associated with second user account(s) of the virtual conference server system 108 .
  • the first participant and second participant(s) are identifiable by the virtual conference server system 108 based on unique identifiers (e.g., email addresses, telephone numbers) associated with respective user accounts for the first participant and second participant(s).
  • the virtual conference server system 108 implements and/or works in conjunction with a social network server 122 which is configured to identify contacts with which a particular user has relationships.
  • the first participant and second participant(s) may be contacts with respect to the virtual conference server system 108 .
  • the virtual conferencing system 100 provides the first participant (e.g., a designer of the virtual space) with interfaces to configure participant video elements in a virtual space, where the virtual space includes one or more rooms. Each participant video element is assignable to a respective participant video feed of a participant. In some cases, participants are permitted to move among rooms of the virtual space during virtual conferencing.
  • the virtual conferencing system 100 provides interfaces to configure bot participants, each of which is assignable to a respective participant video element.
  • the bot participants may be presented during design, before virtual conferencing with actual participants occurs.
  • the first participant can specify an audio level, audio track, blur level (e.g., filtering) and/or participant role for a given participant bot.
  • the first participant can also position participant bots at different participant video elements and/or different rooms. In this manner, it is possible to observe the properties (e.g., positions, audio levels, video filters) of participant video elements and fine-tune those properties, prior to hosting actual participants within the virtual space for virtual conferencing.
  • operations 606 - 620 may correspond to a first phase (e.g., a “design phase”) and operations 622 - 624 may correspond to a second phase (e.g., a “virtual conferencing phase”).
  • during the first phase, a user (e.g., administrator) designs the virtual space, including its participant video element(s) and bot participant(s), for observation and testing.
  • the virtual conference server system 108 presents the virtual space based on the bot participant(s) during the design phase, for user observation.
  • the participant video elements and other room elements are displayed based on their respective properties.
  • the participant video feeds are assigned to respective participant video elements (e.g., as opposed to participant bot(s) that were used for observation and testing during design).
  • the second phase may occur shortly after the first phase, or after an extended period of time after the first phase.
  • FIG. 6 includes a dashed line separating the first phase and the second phase for illustrative purposes.
  • FIG. 6 illustrates an example in which the user (e.g., administrator) designs the room with multiple participant video elements, and one or more bot participant(s). It is understood that the virtual conferencing system 100 provides for alternate arrangements, for example, with different numbers of participant video elements (e.g., none, one, multiple) and/or different numbers of bot participant(s) (e.g., none, one, multiple).
  • the virtual conference server system 108 provides, to the first client device 602 , interfaces for configuring participant video elements and bot participant(s) in the virtual space.
  • the first client device 602 may correspond to an administrator who designs the room, and who acts as a presenter during virtual conferencing.
  • the interface for configuring the participant video elements corresponds to the virtual space design interface 204 , which includes the element properties interface 406 .
  • the element properties interface 406 includes various fields for setting configuration properties for room elements, including the participant video elements.
  • Each participant video element is configured to be assignable to a respective participant video feed (e.g., corresponding to a respective participant during virtual conferencing).
  • the room elements interface 404 of the virtual space design interface 204 includes respective elements for adding single participant video elements and/or multiple participant video elements to a room.
  • the element properties interface 406 in conjunction with the virtual space design interface 204 provides for configuring the number of participant video elements and other elements in a room, as well as the positions, shapes and/or sizes for each of the elements.
  • the element properties interface 406 includes fields for setting the title/name, opacity, gradient, style, layout, borders/corners, shadows, interaction (e.g., to what extent participant(s) may delete, modify, resize the element), filtering, full screen status, conditions, accessibility and actions for the participant video elements.
  • the element properties interface 406 includes fields for setting the manner in which users are placed into the participant video elements during virtual conferencing (e.g., automatically, manually by the participant and/or the administrator end user).
  • the virtual space design interface 204 further provides for a designer of a room to create a room background (e.g., with seating objects), to position/place room elements anywhere within a room (e.g., relative to background objects), and/or to segment objects (e.g., background objects) depicted in the room for displaying room elements relative to one another.
  • the element properties interface 406 in conjunction with the virtual space design interface 204 also provides for configuring audio and video properties for the participant video elements.
  • the interface for configuring the bot participant(s) corresponds to the virtual space design interface 204 , but is separate from the element properties interface 406 .
  • the virtual space design interface 204 includes a menu interface 402 with a workspace category.
  • the workspace category includes a user-selectable button (e.g., via a drop-down list) to manage workspace settings.
  • the workspace settings include the option to create bot participant(s) via selection of a bot participant button (e.g., labeled “create test bot”). For example, each time the user selects the “create test bot” button, a bot participant is added to the virtual space (e.g., to test and fine-tune the audio and video properties of participant video elements).
  • creation of a participant bot causes a corresponding participant thumbnail to appear within the room list interface (e.g., the room list interface 410 , the room list interface 712 ).
  • each newly-created participant bot appears as a respective icon (e.g., thumbnail) within the room list interface.
  • a participant properties interface (e.g., discussed below with respect to FIG. 7 ) is provided for configuring each bot participant.
  • the participant properties interface in conjunction with the virtual space design interface 204 provides for the user (e.g., administrator) to specify actions by the bot participant(s) during virtual space design.
  • the participant properties interface provides for the user/administrator to specify one or more of: the name of the bot participant; the email address of the bot participant (e.g., a fake email address for testing purposes); a participant tag for the bot participant (e.g., to define a role for the bot, such as a presenter, viewer, etc.); whether to mute audio; an audio track (e.g., predefined tracks for counting in different selectable voices, user input text-to-speech, or repeating the name assigned to the bot participant); whether to remove the bot participant; whether to mute video; and/or a video track (e.g., pre-generated videos corresponding to participants, custom videos uploaded by the user).
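  • The bot participant fields listed above might be modeled as follows; this TypeScript sketch, including the audio-track union and the createTestBot helper, is an illustrative assumption rather than the disclosed implementation.

```typescript
// Hypothetical bot participant record mirroring the fields listed above.
interface BotParticipant {
  name: string;
  email: string;                                // e.g., a fake address for testing
  tag?: string;                                 // role such as "presenter" or "viewer"
  audio: { muted: boolean; track?: "counting" | "text-to-speech" | "repeat-name" };
  video: { muted: boolean; track?: string };    // pre-generated or uploaded video id
}

// Each press of a "create test bot" button might append one bot to the space.
function createTestBot(bots: BotParticipant[]): BotParticipant {
  const index = bots.length + 1;
  const bot: BotParticipant = {
    name: `Bot_${index}`,
    email: `bot${index}@test.invalid`,          // fake address for testing purposes
    audio: { muted: false, track: "counting" },
    video: { muted: false },
  };
  bots.push(bot);
  return bot;
}
```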
  • the user can specify the audio output (e.g., muted, an audio track) for a particular bot participant, and the physical appearance of the particular bot participant.
  • the participant properties interface allows the user to simulate assigning participants to participant video elements, without the use of actual participants.
  • the room list interface allows the user (e.g., administrator) to position/assign bot participants to different participant video elements within the virtual space.
  • a virtual space may have multiple rooms, each room having respective participant video elements.
  • the rooms list interface generally provides for displaying the list of rooms in the virtual space, and for navigating between the rooms within the virtual space.
  • the room list interface provides for the user to move the bot participant(s) between rooms.
  • the room list interface includes respective thumbnails of the participants assigned to (e.g., present within) each room.
  • the user may select the icon for the bot participant assigned to its current room, and drag the icon to a new room listed in the room list interface.
  • An available participant video element in the new room may be assigned (e.g., automatically or manually) to the bot participant, and the room canvas interface 412 of the virtual space design interface 204 may be updated to present the bot participant as assigned to the participant video element.
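  • The drag-to-reassign behavior described above might be sketched as follows, assuming a first-free-slot automatic assignment policy; the data shapes are illustrative.

```typescript
// Illustrative sketch: moving a bot between rooms and auto-assigning it to the
// first available participant video element in the target room.
interface VideoElement { id: string; assignedTo?: string } // assignedTo = participant id
interface Room { id: string; elements: VideoElement[] }

function moveBotToRoom(botId: string, from: Room, to: Room): boolean {
  const current = from.elements.find((el) => el.assignedTo === botId);
  const free = to.elements.find((el) => el.assignedTo === undefined);
  if (!free) return false;                     // no open slot in the target room
  if (current) current.assignedTo = undefined; // vacate the old element
  free.assignedTo = botId;                     // automatic assignment; manual would prompt
  return true;
}
```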
  • the virtual conference client 104 running on the first client device 602 provides display of the virtual space design interface 204 , for setting properties for the participant video elements (e.g., via the element properties interface 406 ) and the bot participant(s) (e.g., via the participant properties interface).
  • the first client device 602 receives user input setting such properties.
  • the first client device 602 sends an indication of the set properties to the virtual conference server system 108 (operation 610 ). For example, values input by the user at the first client device 602 , for configuring the participant video elements and the bot participant(s), are sent from the first client device 602 to the virtual conference server system 108 .
  • the virtual conference server system 108 stores the properties associated with the participant video elements and the bot participant(s) in association with the virtual space (block 612 ).
  • the virtual conference server system 108 provides for storing the properties (e.g., user-selected values for the various fields) within the virtual spaces table 308 of the database 126 , in association with the virtual space.
  • the virtual conference server system 108 presents, to the first client device 602 and not the second client device(s) 604 , the virtual space based on the properties set for the participant video elements and the bot participant(s) (operation 614 ). While remaining in the design phase, the virtual conference server system 108 provides for presenting the virtual space based on the stored properties for the participant video elements and the bot participant(s). This includes presenting each of the participant video elements and other room elements based on each of their positions (e.g., including relative positions based on any segmented objects in the room), shapes, sizes and/or effects as configured by the user during design of the room, and presenting the participant bot(s) based on their user-specified values.
  • the virtual conference server system 108 assigns participant bots (e.g., with their respective audio tracks, video tracks, and the like) to respective participant video elements.
  • the element properties interface 406 includes fields for setting the manner in which participant video feeds (including those associated with a bot participant) are assigned and placed into the participant video element (e.g., automatically, manually by the participant and/or the administrator end user).
  • the audio mute/track and video mute/track for the bot participant(s) as specified by the user are assigned to participant video elements within the virtual space based on automatic and/or manual selection as specified by the designer. The user may observe each of the participant video elements assigned to bot participant(s).
  • the user may continue to modify properties of the participant video elements and/or the bot participant(s). For example, the user may select to reposition/reassign the bot participant to a different participant video element within the room (e.g., via a drag gesture). Alternatively or in addition, the user may select to modify properties of the bot participant via the participant properties interface. Moreover, the user may select to modify properties (e.g., audio mute/track, video mute/track) of the assigned participant video element, for example, by selecting the participant video element and changing properties (e.g., audio properties such as muting/volume, and/or video filter values such as blur, brightness, color, contrast, hue as noted above) via the element properties interface 406 .
  • the first client device 602 may provide continuous indications of the updated properties to the virtual conference server system 108 (e.g., per operation 610 ), which in turn updates the virtual spaces table 308 and presents an updated virtual space (e.g., per block 612 and operation 614 ).
  • operations 608 - 614 may loop as shown by the dotted line following operation 614 and returning to block 608 .
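  • The design-phase loop might be sketched as below; the interface and method names are assumptions, standing in for whatever store-and-present mechanism the server system uses.

```typescript
// Illustrative design-phase loop: the client reports a property change, the
// server persists it and re-presents the space to the designer only.
type PropertyUpdate = { target: "element" | "bot"; id: string; values: Record<string, unknown> };

interface DesignServer {
  store(spaceId: string, update: PropertyUpdate): void;  // persist to the spaces table
  presentToDesigner(spaceId: string): void;              // refresh the design preview
}

function onDesignerEdit(server: DesignServer, spaceId: string, update: PropertyUpdate): void {
  server.store(spaceId, update);       // corresponds to storing the set properties
  server.presentToDesigner(spaceId);   // preview reflects elements and bots
  // Repeats for each edit until the designer deems the configuration complete.
}
```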
  • the user may decide that configuration of the participant video elements and bot participant(s) in the virtual space is complete (e.g., the virtual space is satisfactory for virtual conferencing based on observation of the bot participant(s)).
  • the user may elect to remove the bot participant(s) from the virtual space (e.g., so as not to appear within room(s) during virtual conferencing with the actual participants).
  • the virtual space design interface 204 provides various interface elements for removing bot participant(s).
  • an individual bot participant is removable via the participant properties interface corresponding to that bot participant.
  • the virtual space design interface 204 provides an interface element (e.g., the element properties interface for the room) to remove all bots within a particular room.
  • the virtual space design interface 204 provides an interface element (e.g., via the workspace settings) to remove all bots from the entire virtual space (e.g., across all rooms of the virtual space).
  • the first client device 602 receives user input to remove bot participant(s).
  • the first client device 602 sends, to the virtual conference server system 108 , an indication to remove the bot participant(s).
  • the virtual conference server system 108 removes the bot participant(s) from the virtual space (e.g., via updates to the virtual spaces tables 308 ).
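  • The three removal scopes described above (individual bot, per-room, whole space) might be expressed as a tagged union; names in this sketch are assumptions.

```typescript
// Illustrative removal scopes for bot participants.
type RemovalScope =
  | { kind: "one"; botId: string }    // remove an individual bot
  | { kind: "room"; roomId: string }  // remove all bots within one room
  | { kind: "all" };                  // remove all bots across the virtual space

interface SpaceState { rooms: { id: string; botIds: string[] }[] }

function removeBots(space: SpaceState, scope: RemovalScope): void {
  for (const room of space.rooms) {
    switch (scope.kind) {
      case "one":
        room.botIds = room.botIds.filter((id) => id !== scope.botId);
        break;
      case "room":
        if (room.id === scope.roomId) room.botIds = [];
        break;
      case "all":
        room.botIds = [];
        break;
    }
  }
}
```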
  • while the user may typically decide to remove all bot participant(s) from the virtual space so that no bot participant(s) are present during virtual conferencing with actual participants other than the first participant, the virtual conferencing system 100 is not limited in this manner.
  • the first participant may elect to include no bot participants, a subset of the bot participant(s), or all bot participant(s) used during design of the virtual space.
  • operations 622 - 624 relate to a virtual conferencing phase, in which actual participants (e.g., the first participant and the second participant(s)) engage in virtual conferencing within the room.
  • the virtual conference server system 108 provides for presenting the room based on the stored properties for all room elements. This includes presenting each of the participant video elements and other room elements based on each of their positions (e.g., including relative positions based on any segmented objects in the room), shapes, sizes and/or effects as configured by the user during design of the room.
  • the virtual conference server system 108 assigns the video feeds of the first participant and second participant(s) to respective participant video elements.
  • the element properties interface 406 includes fields for setting the manner in which participant video feeds are assigned and placed into the participant video element during virtual conferencing (e.g., automatically, manually by the participant and/or the administrator end user).
  • the participant video feeds of the first participant and the second participant(s) are assigned to participant video elements within the room based on automatic and/or manual selection as specified by the designer.
  • the first and second participants can participate in virtual conferencing, and observe each other's participant video feeds.
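  • Feed assignment at the start of the conferencing phase might look like the following sketch, assuming an “automatic” mode that fills free elements in order and a “manual” mode that defers to participant/administrator selection.

```typescript
// Illustrative assignment of live participant feeds to video element slots.
interface Slot { id: string; feedId?: string }

function assignFeeds(slots: Slot[], feedIds: string[], mode: "automatic" | "manual"): string[] {
  const unplaced: string[] = [];
  for (const feedId of feedIds) {
    const free = slots.find((s) => s.feedId === undefined);
    if (mode === "automatic" && free) {
      free.feedId = feedId;    // fill open slots in order
    } else {
      unplaced.push(feedId);   // manual mode: await an explicit placement choice
    }
  }
  return unplaced;             // feeds still needing manual placement
}
```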
  • bot participant(s) as described herein allow for the user (e.g., administrator) to observe and test virtual spaces using bot participant(s) that perform predefined actions, in order to simulate actual participants.
  • Example use cases for configuring bot participant(s) include, but are not limited to, breakout rooms, audio mixing between rooms of the virtual space, previewing into rooms of the virtual space, and side conversations.
  • the virtual space design interface 204 provides an interface to configure breakout rooms, for shuffling of selected participants between a current room and one or more other rooms.
  • a breakout room is separate from a current room and allows a small group of participants to discuss a particular issue before returning to the main meeting (e.g., room).
  • the selected participants may enter the breakout room in response to selecting a button (e.g., a prompt to break out from the main meeting to the breakout room selected for that participant).
  • the breakout room configurations may be specified during room design via the virtual space design interface 204 .
  • the user may add bot participant(s) to specific rooms of the virtual space, in order to test and confirm that participants are being directed to the appropriate breakout rooms during a breakout session.
  • the element properties interface may include user-selectable options to assign room participants to breakout rooms as follows: all participants in the current room (to move all of the participants in the room that the breakout button is currently in); all participants in the virtual space except pinned (to move all of the participants in the virtual space except those that are pinned to a room, for example, to move everyone back from breakout sessions into the main room); participants by tag/role (to specify which participants to move based on their specific tag or lack of a tag, for example, to move all participants tagged with “staff” into a green room or to move all supervisors into one room and all line workers to another); participants by email (to send a specific participant, via their email associated with the virtual conferencing system 100 , to a specific participant video element or room, for example, to send speakers into specific slots in an auditorium); and participants by participant video element (mov…
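  • These options might be modeled as a selection rule evaluated over the participant list, as in this sketch; the participant fields (email, tag, roomId, pinned) are assumed for illustration.

```typescript
// Illustrative breakout-assignment rules over a participant list.
interface Participant { email: string; tag?: string; roomId: string; pinned: boolean }

type BreakoutRule =
  | { kind: "current-room"; roomId: string }   // everyone in the button's room
  | { kind: "all-except-pinned" }              // e.g., move everyone back to the main room
  | { kind: "by-tag"; tag: string }            // e.g., all participants tagged "staff"
  | { kind: "by-email"; email: string };       // a specific participant

function selectForBreakout(all: Participant[], rule: BreakoutRule): Participant[] {
  switch (rule.kind) {
    case "current-room":      return all.filter((p) => p.roomId === rule.roomId);
    case "all-except-pinned": return all.filter((p) => !p.pinned);
    case "by-tag":            return all.filter((p) => p.tag === rule.tag);
    case "by-email":          return all.filter((p) => p.email === rule.email);
  }
}
```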
  • the virtual conference server system 108 provides for the user to save and/or load sets of test bots.
  • for example, the sets would save information on multiple bots with assigned roles placed into specific rooms.
  • the virtual conference server system 108 provides an interface to configure audio mixing, for allowing a first participant to mix participant audio from one or more second room(s) into audio presented to a first room.
  • the interface may include various user-selectable fields for specifying the manner in which the participant audio is mixed (e.g., which room(s) to sample, a number of participants to sample, an audio level for the mix, and the like).
  • the participant audio may correspond to the live audio feed (e.g., from microphones) of the participants within the other room(s).
  • a user may design the virtual space as a restaurant with the rooms corresponding to restaurant tables.
  • the user (e.g., administrator) may add bot participant(s) to the rooms of the virtual space, in order to observe and fine-tune the audio mixing across rooms.
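  • The mixing configuration described above might be sketched as follows; the configuration shape and the flat per-feed gain are simplifying assumptions.

```typescript
// Illustrative audio-mix plan: sample up to N participants from each selected
// room and mix them at a fraction of the listener's own room level.
interface MixConfig {
  sampledRoomIds: string[];    // which other rooms to sample
  participantsPerRoom: number; // how many participants to sample per room
  mixLevel: number;            // 0..1, relative to the listener's own room
}

function mixPlan(cfg: MixConfig, feedsByRoom: Map<string, string[]>): { feedId: string; gain: number }[] {
  const plan: { feedId: string; gain: number }[] = [];
  for (const roomId of cfg.sampledRoomIds) {
    const feeds = feedsByRoom.get(roomId) ?? [];
    for (const feedId of feeds.slice(0, cfg.participantsPerRoom)) {
      plan.push({ feedId, gain: cfg.mixLevel }); // attenuated relative to own room
    }
  }
  return plan;
}
```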
  • the virtual conference server system 108 provides an interface to configure previewing into other rooms during virtual conferencing.
  • a room preview element can be included within a first room, to display a live preview of a second room as a window or frame within the first room.
  • the virtual conferencing system provides audio output associated with the second room, at a reduced audio level relative to audio output associated with the first room.
  • the live video-audio feed of the room preview element allows participants within the first room to preview or otherwise observe the second room, without requiring participants in the first room to navigate to the second room.
  • the user may add bot participant(s) to specific rooms of the virtual space, in order to observe and fine-tune video filtering across rooms (e.g., blurring participant videos in adjacent rooms).
  • the bot participant(s) may be used to observe and fine-tune audio levels of participants across rooms.
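  • A hypothetical preview-element configuration, with an assumed attenuation factor for the previewed room's audio:

```typescript
// Illustrative room preview: a second room rendered as a frame in the first
// room, with its audio attenuated relative to the first room's audio.
interface RoomPreview {
  sourceRoomId: string;
  frame: { x: number; y: number; width: number; height: number };
  audioAttenuation: number;   // 0..1; e.g., 0.25 plays at a quarter volume
  blur?: number;              // optional video filter for the previewed feeds
}

function previewAudioLevel(firstRoomLevel: number, preview: RoomPreview): number {
  return firstRoomLevel * preview.audioAttenuation;
}
```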
  • the virtual conference server system 108 provides an interface to configure side conversations, for allowing a first participant (e.g., administrator) to invite a second participant to a side conversation, while both maintaining presence in a room.
  • an indication of the side conversation is sent to the devices of other room participants.
  • Those other participants may join the side conversation, by either accepting an invitation received from the first or second participants, or by sending a request to join the side conversation to the devices of the first or second participants.
  • the user may configure audio levels for the side conversation as observed by those within the side conversation and those outside of the side conversation.
  • the user may add bot participant(s) within a room of the virtual space, in order to observe and fine-tune audio levels with respect to side conversations.
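  • One plausible level model for side conversations, with all names and values being assumptions to be tuned during design:

```typescript
// Illustrative side-conversation audio levels: members hear each other fully,
// hear the room at a reduced level, and may leak faintly to non-members.
interface SideConversation {
  memberIds: Set<string>;
  roomLevelForMembers: number;  // 0..1, how loudly members still hear the room
  leakLevelToRoom: number;      // 0..1, how loudly non-members hear the side talk
}

function audioLevelFor(listenerId: string, speakerId: string, sc: SideConversation): number {
  const listenerIn = sc.memberIds.has(listenerId);
  const speakerIn = sc.memberIds.has(speakerId);
  if (listenerIn && speakerIn) return 1.0;                     // inside the side conversation
  if (listenerIn && !speakerIn) return sc.roomLevelForMembers; // room heard quietly
  if (!listenerIn && speakerIn) return sc.leakLevelToRoom;     // leak to the rest of the room
  return 1.0;                                                  // normal room audio
}
```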
  • the virtual conferencing system 100 allows for configuring bot participants, each of which is assignable to a respective participant video element.
  • the bot participants may be presented during design, before virtual conferencing with actual participants.
  • the user (e.g., designer of the virtual space) can specify an audio level, audio track, blur level and/or participant role for a given bot participant.
  • the user can also position participant bots at different participant video elements and/or different rooms. In this manner, it is possible to observe the properties (e.g., positions, audio levels, video filters) of participant video elements and fine-tune those properties, prior to hosting actual participants within the virtual space for virtual conferencing.
  • the video and audio levels for participants among the rooms of a virtual space design may be set in a manner that is more engaging for participants. Without allowing the user (e.g., administrator) to configure a virtual space in this manner, the user may have to further configure participant video elements during virtual conferencing with actual participants, which may be distracting and cumbersome for the user (e.g., who may also be presenting) and/or other participants.
  • the virtual conferencing system 100 facilitates creation and participation with respect to virtual conferencing environments, thereby saving time for the user, and reducing computational resources/processing power for the virtual conferencing system 100 .
  • FIG. 7 illustrates a virtual space design interface 700 with bot participants, in accordance with some example embodiments.
  • the virtual space design interface 700 is similar to the virtual space design interface 204 described above with respect to FIG. 4 .
  • the virtual space design interface 700 includes participant thumbnails 702 - 706 , participant video elements 708 - 710 , a room list interface 712 , a room canvas interface 714 and a participant properties interface 716 .
  • the room list interface 712 lists the rooms within the virtual space.
  • the room list interface further depicts respective participant thumbnails 702 - 706 for participants within the virtual space.
  • the participant thumbnail 702 corresponds to a first bot participant (“Bot_1”)
  • participant thumbnail 706 corresponds to a second bot participant (“Bot_2”)
  • participant thumbnail 704 corresponds to the user (e.g., designer) of the virtual space.
  • the participant video elements 708 - 710 are positioned within a room entitled “Nobu Malibu” within the room list interface 712 , where the room design is depicted within the room canvas interface 714 .
  • the participant thumbnail 702 (e.g., bot) is assigned to the participant video element 708 , and the participant thumbnail 706 (e.g., bot) is assigned to the participant video element 710 .
  • the participant thumbnail 704 , which corresponds to an actual video feed, is assigned to a participant video element within the room entitled “Dance Floor,” which does not correspond to the current room canvas interface 714 of FIG. 7 .
  • participant properties interface 716 includes user-selectable fields for configuring the bot participants, such as the bot participant's name, email address, participant tag, whether to mute audio, audio track, removal, whether to mute video, video track and the like.
  • user selection of any of the participant video elements 708 - 710 within the room canvas interface 714 surfaces the element properties interface (e.g., similar to the element properties interface 406 , and positioned to replace the participant properties interface 716 ).
  • the element properties interface includes user-selectable fields for configuring the selected participant video element, such as the title/name, opacity, gradient, style, layout, borders/corners, shadows, interaction (e.g., to what extent participant(s) may delete, modify, resize the element), filtering, full screen status, conditions, accessibility and actions for the participant video elements, the manner in which users are placed into the participant video elements during virtual conferencing (e.g., automatically, manually by the participant and/or the administrator end user), audio muting, audio volume, and/or video filter values for blur, brightness, color, contrast, hue and the like.
  • FIG. 8 is a flowchart illustrating a process 800 for providing bot participants within a virtual conferencing system, in accordance with some example embodiments.
  • the process 800 is primarily described herein with reference to the virtual conference server system 108 of FIG. 1 .
  • one or more blocks (or operations) of the process 800 may be performed by one or more other components, and/or by other suitable devices.
  • the blocks (or operations) of the process 800 are described herein as occurring in serial, or linearly. However, multiple blocks (or operations) of the process 800 may occur in parallel or concurrently.
  • the blocks (or operations) of the process 800 need not be performed in the order shown and/or one or more blocks (or operations) of the process 800 need not be performed and/or can be replaced by other operations.
  • the process 800 may be terminated when its operations are completed.
  • the process 800 may correspond to a method, a procedure, an algorithm, etc.
  • the virtual conference server system 108 provides, in association with designing a virtual space for virtual conferencing, a first interface for configuring plural participant video elements within the virtual space, each of the plural participant video elements being assignable to a respective participant (block 802 ). Each respective participant may be associated with a video feed for presenting within a respective one of the plural participant video elements.
  • the virtual conference server system 108 receives, via the first interface, an indication of user input for setting first properties for the plural participant video elements (block 804 ).
  • the virtual conference server system 108 provides, in association with designing the virtual space, a second interface for configuring a bot participant in the virtual space, the bot participant for simulating an actual participant in association with a participant video element of the plural participant video elements (block 806 ).
  • the virtual conference server system 108 receives, via the second interface, an indication of second user input for setting second properties for the bot participant (block 808 ).
  • the second properties may specify one or more of: an audio level for the bot participant, for user observation of audio levels across rooms during design of the virtual space; an audio track for the bot participant, for user observation of audio output across rooms during design of the virtual space; a blur level for the bot participant, for user observation of video filtering across rooms during design of the virtual space; and a participant role for the bot participant, for user observation of role-based breakouts during design of the virtual space.
  • the virtual conference server system 108 provides, in association with designing the virtual space, display of the virtual space based on the first properties and the second properties (block 810 ).
  • the bot participant is assigned to the participant video element (e.g., where the participant video element is selected from the plural participant video elements based on manual and/or automatic selection).
  • the virtual conference server system 108 may receive, in association with designing the virtual space, an indication of third user input to remove the bot participant from the virtual space. In response to receiving the indication of third user input, the virtual conference server system 108 may remove the bot participant from the virtual space. The virtual conference server system 108 may provide, in association with virtual conferencing, display of the virtual space based on removing the bot participant from the virtual space.
  • FIG. 9 is a diagrammatic representation of the machine 900 within which instructions 910 (e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine 900 to perform any one or more of the methodologies discussed herein may be executed.
  • the instructions 910 may cause the machine 900 to execute any one or more of the methods described herein.
  • the instructions 910 transform the general, non-programmed machine 900 into a particular machine 900 programmed to carry out the described and illustrated functions in the manner described.
  • the machine 900 may operate as a standalone device or may be coupled (e.g., networked) to other machines.
  • the machine 900 may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment.
  • the machine 900 may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a personal digital assistant (PDA), an entertainment media system, a cellular telephone, a smartphone, a mobile device, a wearable device (e.g., a smartwatch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions 910 , sequentially or otherwise, that specify actions to be taken by the machine 900 .
  • machine 900 may comprise the client device 102 or any one of a number of server devices forming part of the virtual conference server system 108 .
  • the machine 900 may also comprise both client and server systems, with certain operations of a particular method or algorithm being performed on the server-side and with certain operations of the particular method or algorithm being performed on the client-side.
  • the machine 900 may include processors 904 , memory 906 , and input/output (I/O) components 902 , which may be configured to communicate with each other via a bus 940 .
  • the processors 904 (e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) Processor, a Complex Instruction Set Computing (CISC) Processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Radio-Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor 908 and a processor 912 that execute the instructions 910 .
  • the term “processor” is intended to include multi-core processors that may comprise two or more independent processors (sometimes referred to as “cores”) that may execute instructions contemporaneously.
  • although FIG. 9 shows multiple processors 904 , the machine 900 may include a single processor with a single core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof.
  • the memory 906 includes a main memory 914 , a static memory 916 , and a storage unit 918 , each accessible to the processors 904 via the bus 940 .
  • the main memory 914 , the static memory 916 , and the storage unit 918 store the instructions 910 embodying any one or more of the methodologies or functions described herein.
  • the instructions 910 may also reside, completely or partially, within the main memory 914 , within the static memory 916 , within machine-readable medium 920 within the storage unit 918 , within at least one of the processors 904 (e.g., within the Processor's cache memory), or any suitable combination thereof, during execution thereof by the machine 900 .
  • the I/O components 902 may include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on.
  • the specific I/O components 902 that are included in a particular machine will depend on the type of machine. For example, portable machines such as mobile phones may include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components 902 may include many other components that are not shown in FIG. 9 .
  • the I/O components 902 may include user output components 926 and user input components 928 .
  • the user output components 926 may include visual components (e.g., a display such as a plasma display panel (PDP), a light-emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor, resistance mechanisms), other signal generators, and so forth.
  • the user input components 928 may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or another pointing instrument), tactile input components (e.g., a physical button, a touch screen that provides location and force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like.
  • the I/O components 902 may include biometric components 930 , motion components 932 , environmental components 934 , or position components 936 , among a wide array of other components.
  • the biometric components 930 include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye-tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram-based identification), and the like.
  • the motion components 932 include acceleration sensor components (e.g., accelerometer), gravitation sensor components, and rotation sensor components (e.g., gyroscope).
  • the environmental components 934 include, for example, one or more cameras (with still image/photograph and video capabilities), illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment.
  • the client device 102 may have a camera system comprising, for example, front cameras on a front surface of the client device 102 and rear cameras on a rear surface of the client device 102 .
  • the front cameras may, for example, be used to capture still images and video of a user of the client device 102 (e.g., “selfies”), which may then be augmented with augmentation data (e.g., filters) described above.
  • the rear cameras may, for example, be used to capture still images and videos in a more traditional camera mode, with these images similarly being augmented with augmentation data.
  • the client device 102 may also include a 360° camera for capturing 360° photographs and videos.
  • the camera system of a client device 102 may include dual rear cameras (e.g., a primary camera as well as a depth-sensing camera), or even triple, quad or penta camera configurations on the front and rear sides of the client device 102 .
  • These multiple camera systems may include a wide camera, an ultra-wide camera, a telephoto camera, a macro camera and a depth sensor, for example.
  • the position components 936 include location sensor components (e.g., a GPS receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like.
  • the I/O components 902 further include communication components 938 operable to couple the machine 900 to a network 922 or devices 924 via respective coupling or connections.
  • the communication components 938 may include a network interface component or another suitable device to interface with the network 922 .
  • the communication components 938 may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities.
  • the devices 924 may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a USB).
  • the communication components 938 may detect identifiers or include components operable to detect identifiers.
  • the communication components 938 may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar code, multi-dimensional bar codes such as Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D bar code, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals).
  • a variety of information may be derived via the communication components 938 , such as location via Internet Protocol (IP) geolocation, location via Wi-Fi® signal triangulation, location via detecting an NFC beacon signal that may indicate a particular location, and so forth.
  • the various memories may store one or more sets of instructions and data structures (e.g., software) embodying or used by any one or more of the methodologies or functions described herein. These instructions (e.g., the instructions 910 ), when executed by processors 904 , cause various operations to implement the disclosed examples.
  • the instructions 910 may be transmitted or received over the network 922 , using a transmission medium, via a network interface device (e.g., a network interface component included in the communication components 938 ) and using any one of several well-known transfer protocols (e.g., hypertext transfer protocol (HTTP)). Similarly, the instructions 910 may be transmitted or received using a transmission medium via a coupling (e.g., a peer-to-peer coupling) to the devices 924 .
  • FIG. 10 is a block diagram 1000 illustrating a software architecture 1004 , which can be installed on any one or more of the devices described herein.
  • the software architecture 1004 is supported by hardware such as a machine 1002 that includes processors 1020 , memory 1026 , and I/O components 1038 .
  • the software architecture 1004 can be conceptualized as a stack of layers, where each layer provides a particular functionality.
  • the software architecture 1004 includes layers such as an operating system 1012 , libraries 1010 , frameworks 1008 , and applications 1006 .
  • the applications 1006 invoke API calls 1050 through the software stack and receive messages 1052 in response to the API calls 1050 .
  • the operating system 1012 manages hardware resources and provides common services.
  • the operating system 1012 includes, for example, a kernel 1014 , services 1016 , and drivers 1022 .
  • the kernel 1014 acts as an abstraction layer between the hardware and the other software layers.
  • the kernel 1014 provides memory management, processor management (e.g., scheduling), component management, networking, and security settings, among other functionality.
  • the services 1016 can provide other common services for the other software layers.
  • the drivers 1022 are responsible for controlling or interfacing with the underlying hardware.
  • the drivers 1022 can include display drivers, camera drivers, BLUETOOTH® or BLUETOOTH® Low Energy drivers, flash memory drivers, serial communication drivers (e.g., USB drivers), WI-FI® drivers, audio drivers, power management drivers, and so forth.
  • the libraries 1010 provide a common low-level infrastructure used by the applications 1006 .
  • the libraries 1010 can include system libraries 1018 (e.g., C standard library) that provide functions such as memory allocation functions, string manipulation functions, mathematic functions, and the like.
  • the libraries 1010 can include API libraries 1024 such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as Moving Picture Experts Group-4 (MPEG4), Advanced Video Coding (H.264 or AVC), Moving Picture Experts Group Layer-3 (MP3), Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR) audio codec, Joint Photographic Experts Group (JPEG or JPG), or Portable Network Graphics (PNG)), graphics libraries (e.g., an OpenGL framework used to render in two dimensions (2D) and three dimensions (3D) in graphic content on a display), database libraries (e.g., SQLite to provide various relational database functions), web libraries (e.g., WebKit to provide web browsing functionality), and the like.
  • the frameworks 1008 provide a common high-level infrastructure that is used by the applications 1006 .
  • the frameworks 1008 provide various graphical user interface (GUI) functions, high-level resource management, and high-level location services.
  • the frameworks 1008 can provide a broad spectrum of other APIs that can be used by the applications 1006 , some of which may be specific to a particular operating system or platform.
  • the applications 1006 may include a home application 1036 , a contacts application 1030 , a browser application 1032 , a book reader application 1034 , a location application 1042 , a media application 1044 , a messaging application 1046 , a game application 1048 , and a broad assortment of other applications such as a third-party application 1040 .
  • the applications 1006 are programs that execute functions defined in the programs.
  • Various programming languages can be employed to create one or more of the applications 1006 , structured in a variety of manners, such as object-oriented programming languages (e.g., Objective-C, Java, or C++) or procedural programming languages (e.g., C or assembly language).
  • the third-party application 1040 may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or another mobile operating system.
  • the third-party application 1040 can invoke the API calls 1050 provided by the operating system 1012 to facilitate functionality described herein.
  • Carrier signal refers to any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible media to facilitate communication of such instructions. Instructions may be transmitted or received over a network using a transmission medium via a network interface device.
  • Client device refers to any machine that interfaces to a communications network to obtain resources from one or more server systems or other client devices.
  • a client device may be, but is not limited to, a mobile phone, desktop computer, laptop, portable digital assistant (PDA), smartphone, tablet, ultrabook, netbook, multi-processor system, microprocessor-based or programmable consumer electronics, game console, set-top box, or any other communication device that a user may use to access a network.
  • Communication network refers to one or more portions of a network that may be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), the Internet, a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks.
  • a network or a portion of a network may include a wireless or cellular network and the coupling may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or other types of cellular or wireless coupling.
  • the coupling may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1xRTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, Third Generation Partnership Project (3GPP) including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE) standard, others defined by various standard-setting organizations, other long-range protocols, or other data transfer technology.
  • Component refers to a device, physical entity, or logic having boundaries defined by function or subroutine calls, branch points, APIs, or other technologies that provide for the partitioning or modularization of particular processing or control functions. Components may be combined via their interfaces with other components to carry out a machine process.
  • a component may be a packaged functional hardware unit designed for use with other components and a part of a program that usually performs a particular function of related functions. Components may constitute either software components (e.g., code embodied on a machine-readable medium) or hardware components.
  • a “hardware component” is a tangible unit capable of performing certain operations and may be configured or arranged in a certain physical manner.
  • one or more computer systems may be configured by software (e.g., an application or application portion) as a hardware component that operates to perform certain operations as described herein.
  • a hardware component may also be implemented mechanically, electronically, or any suitable combination thereof.
  • a hardware component may include dedicated circuitry or logic that is permanently configured to perform certain operations.
  • a hardware component may be a special-purpose processor, such as a field-programmable gate array (FPGA) or an application specific integrated circuit (ASIC).
  • a hardware component may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations.
  • a hardware component may include software executed by a general-purpose processor or other programmable processor. Once configured by such software, hardware components become specific machines (or specific components of a machine) uniquely tailored to perform the configured functions and are no longer general-purpose processors. It will be appreciated that the decision to implement a hardware component mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software), may be driven by cost and time considerations.
  • the phrase “hardware component” should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein.
  • in examples in which hardware components are temporarily configured (e.g., programmed), each of the hardware components need not be configured or instantiated at any one instance in time.
  • where a hardware component comprises a general-purpose processor configured by software to become a special-purpose processor, the general-purpose processor may be configured as respectively different special-purpose processors (e.g., comprising different hardware components) at different times.
  • Hardware components can provide information to, and receive information from, other hardware components. Accordingly, the described hardware components may be regarded as being communicatively coupled. Where multiple hardware components exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) between or among two or more of the hardware components. In examples in which multiple hardware components are configured or instantiated at different times, communications between such hardware components may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware components have access.
  • one hardware component may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware component may then, at a later time, access the memory device to retrieve and process the stored output. Hardware components may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information).
  • the various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented components that operate to perform one or more operations or functions described herein.
  • Processor-implemented component refers to a hardware component implemented using one or more processors.
  • the methods described herein may be at least partially processor-implemented, with a particular processor or processors being an example of hardware. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented components.
  • the one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS).
  • the operations may be performed by a group of computers (as examples of machines including processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an API).
  • the performance of certain of the operations may be distributed among the processors, not only residing within a single machine, but deployed across a number of machines.
  • the processors or processor-implemented components may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other examples, the processors or processor-implemented components may be distributed across a number of geographic locations.
  • Computer-readable storage medium refers to both machine-storage media and transmission media. Thus, the terms include both storage devices/media and carrier waves/modulated data signals.
  • The terms “machine-readable medium,” “computer-readable medium,” and “device-readable medium” mean the same thing and may be used interchangeably in this disclosure.
  • Machine storage medium refers to a single or multiple storage devices and media (e.g., a centralized or distributed database, and associated caches and servers) that store executable instructions, routines and data.
  • the term shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media, including memory internal or external to processors.
  • Examples of machine-storage media include non-volatile memory, including by way of example semiconductor memory devices (e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), FPGA, and flash memory devices), magnetic disks such as internal hard disks and removable disks, magneto-optical disks, and CD-ROM and DVD-ROM disks.
  • The terms “machine-storage medium,” “device-storage medium,” and “computer-storage medium” mean the same thing and may be used interchangeably in this disclosure.
  • The terms “machine-storage media,” “computer-storage media,” and “device-storage media” specifically exclude carrier waves, modulated data signals, and other such media, at least some of which are covered under the term “signal medium.”
  • Non-transitory computer-readable storage medium refers to a tangible medium that is capable of storing, encoding, or carrying the instructions for execution by a machine.
  • Signal medium refers to any intangible medium that is capable of storing, encoding, or carrying the instructions for execution by a machine and includes digital or analog communications signals or other intangible media to facilitate communication of software or data.
  • The term “signal medium” shall be taken to include any form of a modulated data signal, carrier wave, and so forth.
  • The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
  • The terms “transmission medium” and “signal medium” mean the same thing and may be used interchangeably in this disclosure.

Abstract

Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for providing bot participants for virtual conferencing. The program and method provide, in association with designing a virtual space, a first interface for configuring plural participant video elements, each being assignable to a respective participant; receive, via the first interface, an indication of user input for setting first properties for the plural participant video elements; provide a second interface for configuring a bot participant for simulating an actual participant in association with a participant video element of the plural participant video elements; receive, via the second interface, an indication of second user input for setting second properties for the bot participant; and provide, in association with designing the virtual space, display of the virtual space based on the first and second properties, the bot participant being assigned to the participant video element.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • This patent application claims the benefit of U.S. Provisional Patent Application No. 63/368,047, filed Jul. 9, 2022, entitled “PROVIDING BOT PARTICIPANTS WITHIN A VIRTUAL CONFERENCING SYSTEM”, which is incorporated by reference herein in its entirety.
  • TECHNICAL FIELD
  • The present disclosure relates generally to virtual conferencing systems, including providing bot participants within a virtual conferencing system.
  • BACKGROUND
  • A virtual conferencing system provides for the reception and transmission of audio and video data between devices, for communication between device users in real-time.
  • BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
  • In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. To easily identify the discussion of any particular element or act, the most significant digit or digits in a reference number refer to the figure number in which that element is first introduced. Some nonlimiting examples are illustrated in the figures of the accompanying drawings in which:
  • FIG. 1 is a diagrammatic representation of a networked environment in which the present disclosure may be deployed, in accordance with some examples.
  • FIG. 2 is a diagrammatic representation of a virtual conferencing system, in accordance with some examples, that has both client-side and server-side functionality.
  • FIG. 3 is a diagrammatic representation of a data structure as maintained in a database, in accordance with some examples.
  • FIG. 4 illustrates a virtual space design interface with interface elements for designing a virtual space, in accordance with some example embodiments.
  • FIG. 5 illustrates a virtual space navigation interface with interface elements to navigate between the rooms of a virtual space and to participate in virtual conferencing with respect to the rooms, in accordance with some example embodiments.
  • FIG. 6 is an interaction diagram illustrating a process for providing bot participants within a virtual conferencing system, in accordance with some example embodiments.
  • FIG. 7 illustrates design of a virtual space with bot participants, in accordance with some example embodiments.
  • FIG. 8 is a flowchart illustrating a process for providing bot participants within a virtual conferencing system, in accordance with some example embodiments.
  • FIG. 9 is a diagrammatic representation of a machine in the form of a computer system within which a set of instructions may be executed for causing the machine to perform any one or more of the methodologies discussed herein, in accordance with some examples.
  • FIG. 10 is a block diagram showing a software architecture within which examples may be implemented.
  • DETAILED DESCRIPTION
  • A virtual conferencing system provides for the reception and transmission of audio and video data between devices, for communication between device users in real-time. A virtual conferencing system allows a user to design or select a virtual space with multiple rooms for real-time communication. Participants of a virtual conference may be associated with a respective video feed, and each video feed may be assignable to a participant video element within a room of the virtual space. Participants may switch between the different rooms of the virtual space, for example, to engage in different conversations, events, seminars, and the like. In some cases, a user designing the virtual space may wish to fine-tune the placement, audio levels, and video settings (e.g., blur) for participant video elements.
  • The disclosed embodiments provide for configuring bot participants, each of which is assignable to a respective participant video element. The bot participants may be presented during design, before virtual conferencing with actual participants occurs. For example, the user (e.g., designer of the virtual space) can specify an audio level, audio track, blur level (e.g., filtering) and/or participant role for a given participant bot. The user can also position participant bots at different participant video elements and/or different rooms. In this manner, it is possible to observe the properties (e.g., positions, audio levels, video filters) of participant video elements and fine-tune those properties, prior to hosting actual participants within the virtual space for virtual conferencing.
  • By virtue of allowing a user to configure and use bot participants during design, it is possible to fine-tune the configuration of participant video elements within a virtual space. Otherwise, the user may have to further configure participant video elements during virtual conferencing with actual participants, which may be distracting and cumbersome for the user (e.g., who may also be presenting) and/or other participants.
  • FIG. 1 is a block diagram showing an example virtual conferencing system 100 for exchanging data over a network. The virtual conferencing system 100 includes multiple instances of a client device 102, each of which hosts a number of applications, including a virtual conference client 104 and other application(s) 106. Each virtual conference client 104 is communicatively coupled to other instances of the virtual conference client 104 (e.g., hosted on respective other client devices 102), a virtual conference server system 108 and third-party servers 110 via a network 112 (e.g., the Internet). A virtual conference client 104 can also communicate with locally-hosted applications 106 using Application Program Interfaces (APIs).
  • The virtual conferencing system 100 provides for the reception and transmission of audio, video, image, text and/or other signals by user devices (e.g., at different locations), for communication between users in real-time. In some cases, two users may utilize virtual conferencing to communicate with each other in one-to-one communication at their respective devices. In other cases, multiway virtual conferencing may be utilized by more than two users to participate in a real-time, group conversation. Thus, multiple client devices 102 may participate in virtual conferencing, for example, with the client devices 102 participating in a group conversation in which audio-video content streams and/or message content (e.g., text, images) are transmitted between the participant devices.
  • A virtual conference client 104 is able to communicate and exchange data with other virtual conference clients 104 and with the virtual conference server system 108 via the network 112. The data exchanged between virtual conference clients 104, and between a virtual conference client 104 and the virtual conference server system 108, includes functions (e.g., commands to invoke functions) as well as payload data (e.g., video, audio, other multimedia data, text).
  • The virtual conference server system 108 provides server-side functionality via the network 112 to a particular virtual conference client 104. For example, with respect to transmitting audio and/or video streams, the virtual conference client 104 (e.g., installed on a first client device 102) may facilitate transmitting streaming content to the virtual conference server system 108 for subsequent receipt by other participant devices (e.g., one or more second client devices 102) running respective instances of the virtual conference client 104.
  • The streaming content can correspond to audio and/or video content captured by sensors (e.g., microphones, video cameras) on the client devices 102, for example, corresponding to real-time video and/or audio capture of the users (e.g., faces) and/or other sights and sounds captured by the respective device. The streaming content may be supplemented with other audio/visual data (e.g., animations, overlays, emoticons and the like) and/or message content (e.g., text, stickers, emojis, other image/video data), for example, in conjunction with extension applications and/or widgets associated with the virtual conference client 104.
  • While certain functions of the virtual conferencing system 100 are described herein as being performed by either a virtual conference client 104 or by the virtual conference server system 108, the location of certain functionality either within the virtual conference client 104 or the virtual conference server system 108 may be a design choice. For example, it may be technically preferable to initially deploy certain technology and functionality within the virtual conference server system 108 but to later migrate this technology and functionality to the virtual conference client 104 where a client device 102 has sufficient processing capacity.
  • The virtual conference server system 108 supports various services and operations that are provided to the virtual conference client 104. Such operations include transmitting data to, receiving data from, and processing data generated by the virtual conference client 104. This data may include the above-mentioned streaming content and/or message content, client device information, and social network information, as examples. Data exchanges within the virtual conferencing system 100 are invoked and controlled through functions available via user interfaces (UIs) of the virtual conference client 104.
  • Turning now specifically to the virtual conference server system 108, an Application Program Interface (API) server 114 is coupled to, and provides a programmatic interface to, application servers 118. The application servers 118 are communicatively coupled to a database server 124, which facilitates access to a database 126 that stores data associated with virtual conference content processed by the application servers 118. Similarly, a web server 116 is coupled to the application servers 118, and provides web-based interfaces to the application servers 118. To this end, the web server 116 processes incoming network requests over the Hypertext Transfer Protocol (HTTP) and several other related protocols.
  • The Application Program Interface (API) server 114 receives and transmits virtual conference data (e.g., commands, audio/video payloads) between the client device 102 and the application servers 118. Specifically, the Application Program Interface (API) server 114 provides a set of interfaces (e.g., routines and protocols) that can be called or queried by the virtual conference client 104 in order to invoke functionality of the application servers 118. The Application Program Interface (API) server 114 exposes various functions supported by the application servers 118, including: account registration; login functionality; the streaming of audio and/or video content; the sending and retrieval of message content, via the application servers 118, from a particular virtual conference client 104 to another virtual conference client 104; the retrieval of a list of contacts of a user of a client device 102; the addition and deletion of users (e.g., contacts) to and from a user graph (e.g., a social graph); and the opening of an application event (e.g., relating to the virtual conference client 104).
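  • By way of illustration only, the kind of programmatic surface described above might be modeled as in the following TypeScript sketch; every name in it (VirtualConferenceApi, streamContent, and so on) is hypothetical and not taken from the disclosure:

      // Hypothetical sketch of functions exposed by the API server 114.
      interface VirtualConferenceApi {
        registerAccount(email: string): Promise<string>;             // returns a user identifier
        login(userId: string, credential: string): Promise<string>;  // returns a session token
        streamContent(roomId: string, payload: ArrayBuffer): Promise<void>;
        sendMessage(fromClientId: string, toClientId: string, text: string): Promise<void>;
        listContacts(userId: string): Promise<string[]>;
        addContactToUserGraph(userId: string, contactId: string): Promise<void>;
        removeContactFromUserGraph(userId: string, contactId: string): Promise<void>;
        openApplicationEvent(eventName: string): Promise<void>;
      }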
  • The application servers 118 host a number of server applications and subsystems, including for example a virtual conference server 120 and a social network server 122. The virtual conference server 120 implements a number of virtual conference processing technologies and functions, particularly related to the aggregation and other processing of content (e.g., streaming content) included in audio-video feeds received from multiple instances of the virtual conference client 104. Other processor and memory intensive processing of data may also be performed server-side by the virtual conference server 120, in view of the hardware requirements for such processing.
  • In one or more embodiments, the third-party server 110 provides for initiating communication between a user at the virtual conference client 104 of the client device 102 and a user external to the virtual conference server system 108. For example, the third-party server 110 may correspond to a cloud-based service which allows for programmatically making phone calls, receiving phone calls, sending text messages, receiving text messages and/or performing other communication functions using web service APIs.
  • The social network server 122 supports various social networking functions and services and makes these functions and services available to the virtual conference server 120. To this end, the social network server 122 maintains and accesses a user graph 304 (as shown in FIG. 3 ) within the database 126. Examples of functions and services supported by the social network server 122 include the identification of other users of the virtual conferencing system 100 with which a particular user has relationships (e.g., contacts such as friends, colleagues, teachers, students, and the like).
  • In one or more embodiments, a user interacting via the virtual conference client 104 running on a first client device 102 may select and invite participant(s) to a virtual conference. For example, the participants may be selected from contacts maintained by the social network server 122. In another example, the participants may be selected from contacts included within a contact address book stored in association with the first client device 102 (e.g., in local memory or in a cloud-based user account). In another example, the participants may be selected by the user manually entering email addresses and/or phone numbers of the participants.
  • The user at the first client device 102 may initiate the virtual conference by selecting an appropriate user interface element provided by the virtual conference client 104, thereby prompting the invited participants, at their respective devices (e.g., one or more second client devices 102), to accept or decline participation in the virtual conference. When the participant(s) have accepted the invitation (e.g., via the prompt), the virtual conference server system 108 may perform an initialization procedure in which session information is published between the participant client devices 102, including the user who provided the invite. Each of the participant client devices 102 may provide respective session information to the virtual conference server system 108, which in turn publishes the session information to the other participant client devices 102. The session information for each client device 102 may include content stream(s) and/or message content that is made available by the client device 102, together with respective identifiers for the content stream(s) and/or message content.
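  • For illustration only, the session-publication step described above might be modeled as in the following TypeScript sketch; the type and function names (SessionInfo, publishSessions, and so on) are hypothetical and not taken from the disclosure:

      // Hypothetical shape of the session information exchanged during initialization.
      interface ContentStream {
        id: string;                           // identifier for the content stream
        kind: "audio" | "video" | "message";  // type of content made available
      }

      interface SessionInfo {
        deviceId: string;          // participant client device
        streams: ContentStream[];  // streams/content the device makes available
      }

      // The server collects each participant device's session information and
      // republishes it to every other participant device.
      function publishSessions(sessions: SessionInfo[]): Map<string, SessionInfo[]> {
        const published = new Map<string, SessionInfo[]>();
        for (const s of sessions) {
          // each device receives the session info of all other devices
          published.set(
            s.deviceId,
            sessions.filter((other) => other.deviceId !== s.deviceId)
          );
        }
        return published;
      }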
  • As described below with respect to FIG. 2 , the virtual conference may correspond to a virtual space which includes one or more rooms (e.g., virtual rooms). The virtual space and its corresponding rooms may have been created at least in part by the inviting user and/or by other users. In this manner, an end user may act as an administrator, who creates their own virtual spaces with rooms, and/or designs a virtual space based on preset available rooms.
  • FIG. 2 is a block diagram illustrating further details regarding the virtual conferencing system 100, according to some examples. Specifically, the virtual conferencing system 100 is shown to comprise the virtual conference client 104 and the application servers 118. The virtual conferencing system 100 embodies a number of subsystems, which are supported on the client-side by the virtual conference client 104 and on the server-side by the application servers 118. These subsystems include, for example, a virtual space creation system 202 which implements a virtual space design interface 204, and a virtual space participation system 206 which implements a virtual space navigation interface 208.
  • The virtual space creation system 202 provides for a user to design one or more virtual space(s) in which participants may engage in virtual conferencing. In one or more embodiments, a virtual space corresponds to an environment with one or more rooms configured to accommodate virtual conferencing.
  • The virtual space may be created and/or selected (e.g., from among a set of predefined virtual spaces with rooms) by an end user who wishes to invite other users for virtual conferencing. In addition, the individual rooms of a virtual space may be newly-created and/or selected (e.g., from among a set of predefined rooms) by the end user. In one or more embodiments, the virtual space creation system 202 includes a virtual space design interface 204, which is usable by the end user to design a virtual space, including creating and/or selecting rooms for including in the virtual space.
  • As discussed below with respect to FIG. 4 , the virtual space design interface 204 enables an end user (e.g., an administrator) to select and/or position multiple elements for including in a room. Examples of elements include, but are not limited to, participant video elements (e.g., for displaying the respective video feeds of participants), chat interfaces (e.g., for participants to provide text-based messages, stickers and/or reactions within a room), breakout buttons (e.g., for shuffling from a first room to one or more second rooms), and/or other user-definable elements for performing certain actions (e.g., speaking into a virtual microphone, querying an administrator via a button, and the like).
  • The virtual space participation system 206 is configured to perform virtual conferencing among participants within a virtual space. The participants may include the end user (e.g., administrator) who created the virtual space, as well as those users who were invited to participate in virtual conferencing with respect to the virtual space created/selected by the end user. The virtual space participation system 206 includes a virtual space navigation interface 208 (e.g., discussed below with respect to FIG. 5 ) that allows participants to navigate between the rooms of a virtual space, and to participate in virtual conferencing with respect to the rooms.
  • In one or more embodiments, the virtual space creation system 202 and the virtual space participation system 206 provide for an end user (e.g., an administrator) to create different types of environments (e.g., virtual spaces with rooms) for virtual conferencing, and for participants to engage in virtual conferencing within such environments. Examples of such virtual conferencing include, but are not limited to: business meetings, seminars, presentations, classroom lectures, teacher office hours, concerts, reunions, virtual dinners, escape rooms, and the like.
  • FIG. 3 is a schematic diagram illustrating data structures 300, which may be stored in the database 126 of the virtual conference server system 108, according to certain examples. While the content of the database 126 is shown to comprise a number of tables, it will be appreciated that the data could be stored in other types of data structures (e.g., as an object-oriented database).
  • The database 126 includes profile data 302, a user graph 304 and a user table 306 relating to the users (participants) of the virtual conferencing system 100. The user table 306 stores user data, and is linked (e.g., referentially) to the user graph 304 and the profile data 302. Each user of the virtual conferencing system 100 is associated with a unique identifier (email address, telephone number, social network identifier, etc.).
  • The user graph 304 stores (e.g., in conjunction with the social network server 122) information regarding relationships and associations between users. Such relationships may be social, professional (e.g., working at a common corporation or organization), interest-based, or activity-based, merely for example. As noted above, the user graph 304 may be maintained and accessed at least in part by the social network server 122.
  • The profile data 302 stores multiple types of profile data about a particular user. The profile data 302 may be selectively used and presented to other users of the virtual conferencing system 100, based on privacy settings specified by a particular user. The profile data 302 includes, for example, a user name, telephone number, email address, and/or settings (e.g., notification and privacy settings), as well as a user-selected avatar representation.
  • The database 126 further includes a virtual spaces table 308. As noted above, a virtual space corresponds to an environment with one or more rooms configured to accommodate virtual conferencing. A virtual space may be newly-created by a user, or may be included within one or more sets of public virtual spaces made available (e.g., by other users, system administrators, and the like) for virtual conferencing. The virtual spaces table 308 stores information representing the one or more sets of public virtual spaces, as well as any private virtual space(s) created by a user (e.g., in a case where the particular user did not make such virtual space(s) public).
  • In one or more embodiments, the virtual spaces table 308 stores associations between its virtual spaces and users (e.g., within the user table 306) who selected those virtual spaces. In this manner, it is possible for a particular user to have one or more virtual spaces associated therewith. Moreover, the database 126 includes a rooms table 310 which may be associated with the virtual spaces within the virtual spaces table 308. As noted above, a room may be newly-created by a user, or may be included within one or more sets (e.g., galleries) of public rooms made available for user selection. The rooms table 310 stores information representing the one or more sets of rooms, as well as any private room(s) created by the user (e.g., in a case where the particular user did not make such room(s) public). The stored information is usable by the virtual conferencing system 100 to create the corresponding rooms for use in a virtual space. In one or more embodiments, the stored information may further include recordings (e.g., audio and/or video recordings) of a particular virtual conference, for subsequent playback by corresponding participants.
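  • As a rough illustration of the tables described above, the following TypeScript sketch models simplified record shapes; the field names are hypothetical, since the actual schema is not disclosed at this level of detail:

      // Simplified, hypothetical record shapes for the database 126.
      interface UserRecord {                   // user table 306 / profile data 302
        id: string;                            // unique identifier (email, phone, etc.)
        userName: string;
        avatarUrl?: string;                    // user-selected avatar representation
        privacySettings?: Record<string, boolean>;
      }

      interface RoomRecord {                   // rooms table 310
        id: string;
        isPublic: boolean;                     // part of a public set (gallery) or private
        recordingUrls?: string[];              // optional audio/video recordings
      }

      interface VirtualSpaceRecord {           // virtual spaces table 308
        id: string;
        ownerId: string;                       // user associated with the space
        isPublic: boolean;
        roomIds: string[];                     // rooms included in the space
      }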
  • FIG. 4 illustrates a virtual space design interface 204 with interface elements for designing a virtual space, in accordance with some example embodiments. Designing the virtual space may include creation and/or selection of rooms for including in the virtual space. The virtual space design interface 204 includes a menu interface 402, a room elements interface 404, an element properties interface 406, a controls interface 408, a room list interface 410, a room canvas interface 412, and an administrator name 414. It is noted that elements 402-414 correspond to an example of interface elements for the virtual space design interface 204, and that additional, fewer and/or different interface elements may be used.
  • An administrator (e.g., corresponding to administrator name 414) may use the various interface elements to design a virtual space. In one or more embodiments, the menu interface 402 includes user-selectable categories (e.g., menu headings) relating to a virtual space (e.g., “workspace”), rooms within the virtual space, and/or elements within a room. For example, the workspace category is user-selectable for presenting options (e.g., via a drop-down list) to manage settings for the virtual space, manage invites for the virtual space, manage versions of a virtual space, publish the virtual space (e.g., for future use by users), manage virtual space publications, and/or to start/manage recordings (e.g., audio and/or video recordings) with respect to the virtual space.
  • The room category of the menu interface 402 is user-selectable for presenting options (e.g., via a drop-down list) to manage settings for a room within the virtual space, set a room background, set an order for the rooms listed in the room list interface 410, create a new room, import a room from a set of available rooms, remove a room, publish a room, manage room publications, and/or to start/manage recordings with respect to a room.
  • In addition, the element category is user-selectable for presenting options (e.g., via a drop-down list) to insert elements into a room, insert shapes into a room, foreground/background elements, arrange/position elements, and/or group elements. Examples of elements include, but are not limited to: an action button, analog clock, audience question board, backpack item, breakout button, chat, closed caption display, closed caption input, countdown, clock, digital clock, external communication element (e.g., a doorbell), double-sided image, feedback, image, multiuser video chat, music, participant audio mixer, participant count, participant video element (e.g., single or multiple), picture strip, poll, random source, room preview, scheduled time, sound effect, stopwatch, take picture, text, timer, user search, video, waiting list, web media, website. Examples of shapes include, but are not limited to, a circle, rectangle and triangle.
  • The users category of the menu interface 402 is user-selectable for presenting options (e.g., via a drop-down list) to manage users/participants of the virtual space (e.g., adding tags for participants, so as to distinguish between roles such as an administrator or an attendee/participant). In addition, the edit category is user-selectable for performing edit operations (e.g., undo, redo, cut, copy, paste), and the help category is user-selectable for performing help operations (e.g., getting started, discord, live help, submitting feedback).
  • In one or more embodiments, the room elements interface 404 includes user-selectable icons for inserting elements (e.g., corresponding to a subset of those available via the above-mentioned element category) into a current room. For example, the elements may be added and/or positioned within the current room by selecting the element and dragging the selected element onto the room canvas interface 412, which represents the layout of the current room.
  • In one or more embodiments, the room elements interface 404 includes icons including but not limited to: a text icon for adding text to a room; a participant video icon for adding a single participant video element (e.g., an interface element which is selectable by a single participant for displaying that participant's video feed) to a room; a multiuser video icon for adding a multiple participant video element (e.g., an interface element which is selectable by one or more participants for displaying the video feeds for those participants) to a room; a chat icon for adding a chat interface (e.g., for messaging using text, stickers, emojis, etc.) to a room; a video playback icon for adding a video playback element (e.g., screen) to a room for playback of a selected video; a background icon for adding a background color/gradient, image or video to a room; an action icon for adding an action element (e.g., button) to a room for performing a user-defined action (e.g., speaking into a virtual microphone, querying an administrator via a button, and the like); and/or a breakout button for adding a breakout element (e.g., button) for shuffling selected participants between the current room and one or more other rooms.
  • The single participant video element is configured to be assignable (e.g., during virtual conferencing) to a single participant video feed (e.g., including video and/or audio). On the other hand, the multiple participant video element is configured to be assignable to multiple participant video feeds (e.g., including video and/or audio). For example, the multi-participant video element is configurable (e.g., via its respective element properties interface 406) to accommodate a user-specified number of the participant video feeds.
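  • The distinction between these two element kinds can be captured in a small TypeScript sketch such as the following; the names are illustrative only:

      // Hypothetical model of single vs. multiple participant video elements.
      interface SingleParticipantVideoElement {
        kind: "single";
        assignedFeedId?: string;    // at most one participant video feed
      }

      interface MultiParticipantVideoElement {
        kind: "multi";
        capacity: number;           // user-specified number of accommodated feeds
        assignedFeedIds: string[];  // up to `capacity` participant video feeds
      }

      type ParticipantVideoElement =
        | SingleParticipantVideoElement
        | MultiParticipantVideoElement;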
  • In one or more embodiments, the element properties interface 406 includes various fields for setting configuration properties for the above-described room elements. For example, with respect to elements in general (e.g., text, single participant video element, multiple participant video element, chat interface, video element, background image, action element, breakout button), the element properties interface 406 includes fields for setting the element title/name, opacity, gradient, style, layout, borders/corners, shadows, interaction (e.g., to what extent participant(s) may delete, modify, resize the element), filtering, full screen status, conditions, accessibility and actions for the element.
  • For a participant video element, the element properties interface 406 includes further fields for setting the manner in which users are placed into the participant video element during virtual conferencing (e.g., automatically, manually by the participant and/or the administrator end user). In addition, for the chat interface, the element properties interface 406 includes further properties for setting who (e.g., administrator and/or participants) can provide chat input, and/or which types of input (e.g., text, stickers, emojis, etc.) are available. For the action element, the element properties interface 406 includes further properties for setting what type of action is to be performed in response to user selection of the action element (e.g., button). Moreover, for the breakout element, the element properties interface 406 includes further properties for selecting participants and/or breakout rooms.
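  • For illustration, the configuration fields described in the two preceding paragraphs might be grouped as in the following hypothetical TypeScript sketch (the field names are illustrative, not from the disclosure):

      // Hypothetical sketch of fields settable via the element properties interface 406.
      interface ElementProperties {
        title: string;
        opacity: number;                     // e.g., 0..1
        style?: string;
        layout?: string;
        borders?: { cornerRadius: number; width: number; color: string };
        shadow?: boolean;
        interaction: {                       // what participants may do to the element
          deletable: boolean;
          modifiable: boolean;
          resizable: boolean;
        };
        fullScreen: boolean;
        // example per-element extension for a participant video element:
        placementMode?: "automatic" | "manual-participant" | "manual-administrator";
      }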
  • In one or more embodiments, the element properties interface 406 further includes fields for setting configuration properties for the room canvas interface 412. For example, the element properties interface 406 includes fields for selecting a number of fake participants (e.g., simulated video feeds) in order to visualize multiple users, selecting music (e.g., background music), and/or selecting reaction buttons for participants to indicate real-time reactions with respect to virtual conferencing within a room.
  • In one or more embodiments, the virtual space design interface 204 provides for a designer of a room to create a room background. The room background may depict a number of objects including seating objects. Examples of seating objects include chairs, sofas, swings, mats, beanbags, grass, the ground and the like. However, the virtual space design interface 204 is not limited to using seating objects for the positioning of participant video elements. For example, an administrator may select to position participant video elements on or near other objects in a room, such as windows, clocks, walls, etc.
  • Thus, the virtual space design interface 204 provides for a user to position, shape, and/or size room elements (e.g., participant video elements) anywhere within a room, such as relative to background objects. For example, the element properties interface 406 includes fields via which the user can input values to specify position, size or shape of the room elements. Alternatively or in addition, the room canvas interface 412 of the virtual space design interface 204 provides for a user to reposition, reshape and/or resize the room elements (e.g., participant video elements) via predefined user gestures (e.g., selecting and dragging elements, edges of elements, corners of elements, and the like). The virtual space design interface 204 may cause user changes made to room elements via the room canvas interface 412 to automatically update values within corresponding fields of the element properties interface 406.
  • In one or more embodiments, the room canvas interface 412 of the virtual space design interface 204 provides for a user to segment objects (e.g., background objects) depicted in the room, so as to define boundaries of the object. For example, the user may define line segments around an object (e.g., a sofa) using a predefined gesture in order to define the boundary of an object (e.g., clicking points corresponding to corners of the object in combination with pressing a predefined key). In addition, the virtual space design interface 204 provides for displaying the segmented objects relative to room elements (e.g., participant video elements) of the room. For example, a segmented sofa may be displayed in front of, or behind (based on user-selected properties) elements, to create the perception of depth to the end user.
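  • One plausible way to represent a segmented object is as a polygon of user-clicked corner points, together with a flag for whether it renders in front of or behind room elements. The following TypeScript sketch (hypothetical names, using a standard ray-casting point-in-polygon test) illustrates the idea:

      interface Point { x: number; y: number; }

      interface SegmentedObject {
        boundary: Point[];        // corner points clicked by the user
        rendersInFront: boolean;  // user-selected: in front of or behind elements
      }

      // Ray-casting test: is a point inside the segmented object's boundary?
      // Useful when compositing room elements against the segmented object.
      function containsPoint(obj: SegmentedObject, p: Point): boolean {
        let inside = false;
        const b = obj.boundary;
        for (let i = 0, j = b.length - 1; i < b.length; j = i++) {
          const crosses =
            (b[i].y > p.y) !== (b[j].y > p.y) &&
            p.x < ((b[j].x - b[i].x) * (p.y - b[i].y)) / (b[j].y - b[i].y) + b[i].x;
          if (crosses) inside = !inside;
        }
        return inside;
      }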
  • In one or more embodiments, the controls interface 408 includes user-selectable icons corresponding to controls (e.g., administrative controls) for the virtual space. For example, the controls interface 408 includes icons including but not limited to: a director mode icon for toggling between a director mode for designing a room and a user mode for viewing the room within the virtual space design interface 204 (e.g., with the director mode including the room elements interface 404 and the element properties interface 406 while the user mode does not); a view icon for viewing the room within the virtual space navigation interface 208; a share screen icon (e.g., for collaborative design with other user(s) such as co-administrators); a microphone icon for enabling or disabling the microphone; a help icon (e.g., getting started, discord, live help, submitting feedback); an invite icon (e.g., for displaying an invite link for sending to participants to visit the virtual space); a settings icon (e.g., for selecting the end user's video and audio devices for the virtual conferencing, and for selecting a user avatar); and/or an exit icon for exiting the virtual space design interface 204.
  • In one or more embodiments, the room list interface 410 displays the list of rooms for the virtual space. Each listed room is user selectable to switch to edit (e.g., in director mode) and/or view (e.g., in user mode) the selected room. As noted above, the list of rooms may be modified (e.g., by adding, importing and/or removing rooms) via the options within the room category of the menu interface 402.
  • FIG. 5 illustrates a virtual space navigation interface 208 with interface elements to navigate between the rooms of a virtual space and to participate in virtual conferencing with respect to the rooms, in accordance with some example embodiments. The virtual space navigation interface 208 includes a controls interface 502, a room list interface 504, a current room interface 506, a participant video element 508, a participant video element 510, and participant button(s) 512. It is noted that elements 502-512 correspond to an example of interface elements for the virtual space navigation interface 208, and that additional, fewer and/or different interface elements may be used.
  • In one or more embodiments, the controls interface 502 includes user-selectable icons corresponding to controls (e.g., administrative controls) for the virtual space. For example, the controls interface 502 includes icons including but not limited to: an edit icon for redirecting to the virtual space design interface 204 to edit the current room; a volume icon for adjusting a volume level for the current room; a share screen icon (e.g., for allowing others to view the room without necessarily joining the room); a microphone icon for muting and unmuting the microphone; a help icon (e.g., getting started, discord, live help, submitting feedback); an invite icon (e.g., for displaying an invite link for participants to visit the virtual space); a settings icon (e.g., for selecting the end user's video and audio devices for the virtual conferencing, and for selecting a user avatar); and/or an exit icon for exiting the virtual space navigation interface 208.
  • In one or more embodiments, the room list interface 504 displays the list of rooms for the virtual space. Each listed room is user selectable to switch to the selected room (e.g., for virtual conferencing). The selected room is presented as a current room within the current room interface 506. In this manner, a participant may navigate among the multiple rooms available within the virtual space. Alternatively or in addition, navigation between rooms is possible via a virtual space map interface (not shown) which depicts a map view of the virtual space (e.g., a floor plan) and its corresponding rooms, with each room being user selectable to navigate thereto. Alternatively or in addition, navigation between rooms is further possible by positioning a navigation button (not shown) within a room, where user selection of the button results in navigating to another room (e.g., a predefined room). As noted above, the virtual space design interface 204 allows for the design of a virtual space and its corresponding rooms. As such, navigation between rooms is based at least in part on the design of the virtual space (e.g., a virtual space may include one or more of the above-mentioned room list interface 504, the virtual space map/floor plan interface and/or the navigation button).
  • With respect to the current room interface 506, each participant is represented as a respective participant video element. As noted above, a participant video element corresponds to an interface element (e.g., a box) which is selectable by a single participant for displaying that participant's video feed. The example of FIG. 5 includes a first participant associated with the participant video element 508 and a second participant associated with the participant video element 510. In one or more embodiments, with respect to the perspective of the first participant, the participant video element 510 showing the feed of the second participant may include participant button(s) 512. For example, the participant button(s) 512 are selectable by the first participant so as to perform a predefined action (e.g., initiate a side conversation, designate the second participant to follow the first participant when the first participant moves rooms) with respect to the second participant.
  • In one or more embodiments, the virtual space navigation interface 208 may vary based on whether a given participant is an administrator or another participant (e.g., an attendee). For example, some participant video elements may be designated (e.g., via the virtual space design interface 204) for administrators, while other participant video elements are designated for other participants. The virtual conference server system 108 is configured to distinguish between these administrator or other participant roles, for example, based on the above-described tags assigned to participants via the users category of the menu interface 402 provided by the virtual space design interface 204.
  • While the example of FIG. 5 illustrates two participants, it is possible for the current room interface 506 to accommodate additional participants for virtual conferencing. The additional participants may be positioned (e.g., automatically and/or manually by dragging) based on the positioning of participant video elements (e.g., boxes) as designed by the virtual space design interface 204. As noted above, the element properties interface 406 includes fields for setting the manner in which participant video feeds are assigned and placed into the participant video element during virtual conferencing (e.g., automatically, manually by the participant and/or the administrator end user). With respect to automatic placement, the virtual conference server system 108 may follow a predefined order (e.g., as specified during room design) for placing participant video feeds into respective participant video elements.
  • Regarding manual placement, the virtual conference server system 108 provides for the participants to manually select an available (e.g., unoccupied) participant video element in which to position their respective participant video feed. For example, the manual selection is performed using a predefined gesture (e.g., a drag and drop operation of the participant video feed to a selected participant video element). Based on room design, the virtual conference server system 108 provides for one or more of the administrator (e.g., presenter) or participants (e.g., attendees) to manually assign a participant video feed to a participant video element.
  • In one or more embodiments, assignments between participant video feeds and participant video elements may be a combination of automatic and manual selection. For example, assignments may initially default to automatic selection. At a later time, with proper settings per the design of the room, participant(s) may manually select a different available participant video element to be assigned to, as sketched below.
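  • A combined automatic-then-manual assignment policy of this kind might look like the following TypeScript sketch; the slot model and function names are hypothetical:

      interface VideoElementSlot {
        elementId: string;
        order: number;      // predefined placement order from room design
        feedId?: string;    // set when a participant video feed occupies the slot
      }

      // Automatic placement: fill unoccupied elements in the designer-specified order.
      function autoAssign(slots: VideoElementSlot[], feedIds: string[]): void {
        const open = slots
          .filter((s) => s.feedId === undefined)
          .sort((a, b) => a.order - b.order);
        feedIds.forEach((feedId, i) => {
          if (i < open.length) open[i].feedId = feedId;
        });
      }

      // Manual placement: a participant drags their feed to an unoccupied element.
      function manualAssign(
        slots: VideoElementSlot[],
        feedId: string,
        elementId: string
      ): boolean {
        const target = slots.find((s) => s.elementId === elementId);
        if (!target || target.feedId !== undefined) return false;  // occupied or missing
        for (const s of slots) {
          if (s.feedId === feedId) s.feedId = undefined;  // release the previous slot
        }
        target.feedId = feedId;
        return true;
      }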
  • In the example of FIG. 5 , the participant video feeds of the first participant and the second participant(s) are assigned to the participant video elements 508-510 within the current room interface 506 based on automatic and/or manual selection. The first and second participants can participate in virtual conferencing, and observe each other's participant video feeds which may appear to be positioned relative to objects (e.g., seating objects) in the room.
  • Thus, the virtual conferencing system 100 as described herein allows a user, in designing a room for virtual conferencing, to position and/or size participant video elements (e.g., based on objects depicted in the room). Each of the participant video elements corresponds to a potential placeholder for a participant. During virtual conferencing within the room (e.g., the current room interface 506), the video feeds of participants are assigned to the participant video elements 508-510.
  • FIG. 6 is an interaction diagram illustrating a process 600 for providing bot participants within a virtual conferencing system, in accordance with some example embodiments. For explanatory purposes, the process 600 is described herein with reference to a first client device 602, a second client device(s) 604, and the virtual conference server system 108. Each of the first client device 602 and the second client device(s) 604 may correspond to a respective client device 102. The process 600 is not limited to the first client device 602, the second client device(s) 604, and the virtual conference server system 108. Moreover, one or more blocks (or operations) of the process 600 may be performed by one or more other components of the first client device 602, the second client device(s) 604, or the virtual conference server system 108, and/or by other suitable devices. Further for explanatory purposes, the blocks (or operations) of the process 600 are described herein as occurring in serial, or linearly. However, multiple blocks (or operations) of the process 600 may occur in parallel or concurrently. In addition, the blocks (or operations) of the process 600 need not be performed in the order shown and/or one or more blocks (or operations) of the process 600 need not be performed and/or can be replaced by other operations. The process 600 may be terminated when its operations are completed. In addition, the process 600 may correspond to a method, a procedure, an algorithm, etc.
  • Each of the first client device 602 and the second client device(s) 604 have instances of the virtual conference client 104 installed thereon. In the example of FIG. 6 , the first client device 602 and the second client device(s) 604 are associated with a respective first participant and second participant(s) of the virtual conference server system 108. For example, the first participant may be associated with a first user account of the virtual conference server system 108, and the second participant(s) may be associated with second user account(s) of the virtual conference server system 108.
  • As noted above, the first participant and second participant(s) are identifiable by the virtual conference server system 108 based on unique identifiers (e.g., email addresses, telephone numbers) associated with respective user accounts for the first participant and second participant(s). In one or more embodiments, the virtual conference server system 108 implements and/or works in conjunction with a social network server 122 which is configured to identify contacts with which a particular user has relationships. For example, the first participant and second participant(s) may be contacts with respect to the virtual conference server system 108.
  • As described herein, the virtual conferencing system 100 provides the first participant (e.g., a designer of the virtual space) with interfaces to configure participant video elements in a virtual space, where the virtual space includes one or more rooms. Each participant video element is assignable to a respective participant video feed of a participant. In some cases, participants are permitted to move among rooms of the virtual space during virtual conferencing. To test the virtual space before virtual conferencing, the virtual conferencing system 100 provides interfaces to configure bot participants, each of which is assignable to a respective participant video element. The bot participants may be presented during design, before virtual conferencing with actual participants occurs. For example, the first participant can specify an audio level, audio track, blur level (e.g., filtering) and/or participant role for a given participant bot. The first participant can also position participant bots at different participant video elements and/or different rooms. In this manner, it is possible to observe the properties (e.g., positions, audio levels, video filters) of participant video elements and fine-tune those properties, prior to hosting actual participants within the virtual space for virtual conferencing.
  • In the example of FIG. 6 , operations 606-620 may correspond to a first phase (e.g., a “design phase”) and operations 622-624 may correspond to a second phase (e.g., a “virtual conferencing phase”). During the design phase, a user (e.g., administrator) provides input for setting properties (e.g., sizes, shapes, positions, effects) for the participant video elements and other room elements in the virtual space, and provides additional input for configuring bot participant(s) that are assignable to the participant video elements. In addition, the virtual conference server system 108 presents the virtual space based on the bot participant(s) during the design phase, for user observation.
  • During the virtual conferencing phase, the participant video elements and other room elements are displayed based on their respective properties. In addition, the participant video feeds are assigned to respective participant video elements (e.g., as opposed to participant bot(s) that were used for observation and testing during design). It may be understood that the second phase may occur shortly after the first phase, or after an extended period of time. As such, FIG. 6 includes a dashed line separating the first phase and the second phase for illustrative purposes.
  • FIG. 6 illustrates an example in which the user (e.g., administrator) designs the room with multiple participant video elements, and one or more bot participant(s). It is understood that the virtual conferencing system 100 provides for alternate arrangements, for example, with different numbers of participant video elements (e.g., none, one, multiple) and/or different numbers of bot participant(s) (e.g., none, one, multiple).
  • At operation 606, the virtual conference server system 108 provides, to the first client device 602, interfaces for configuring participant video elements and bot participant(s) in the virtual space. The first client device 602 may correspond to an administrator who designs the room, and who acts as a presenter during virtual conferencing.
  • In one or more embodiments, the interface for configuring the participant video elements corresponds to the virtual space design interface 204, which includes the element properties interface 406. As noted above, the element properties interface 406 includes various fields for setting configuration properties for room elements, including the participant video elements.
  • Each participant video element is configured to be assignable to a respective participant video feed (e.g., corresponding to a respective participant during virtual conferencing). As noted above, the room elements interface 404 of the virtual space design interface 204 includes respective elements for adding single participant video elements and/or multiple participant video elements to a room.
  • The element properties interface 406 in conjunction with the virtual space design interface 204 provides for configuring the number of participant video elements and other elements in a room, as well as the positions, shapes and/or sizes for each of the elements. As noted above, the element properties interface 406 includes fields for setting the title/name, opacity, gradient, style, layout, borders/corners, shadows, interaction (e.g., to what extent participant(s) may delete, modify, resize the element), filtering, full screen status, conditions, accessibility and actions for the participant video elements. In addition, the element properties interface 406 includes fields for setting the manner in which users are placed into the participant video elements during virtual conferencing (e.g., automatically, manually by the participant and/or the administrator end user). The virtual space design interface 204 further provides for a designer of a room to create a room background (e.g., with seating objects), to position/place room elements anywhere within a room (e.g., relative to background objects), and/or to segment objects (e.g., background objects) depicted in the room for displaying room elements relative to one another.
  • The element properties interface 406 in conjunction with the virtual space design interface 204 also provides for configuring audio and video properties for the participant video elements. For example, the user (e.g., administrator) may specify values for audio muting and/or audio volume, as well as video filter values for blur, brightness, color, contrast, hue and the like.
  • In one or more embodiments, the interface for configuring the bot participant(s) corresponds to the virtual space design interface 204, but is separate from the element properties interface 406. As noted above with respect to FIG. 4 , the virtual space design interface 204 includes a menu interface 402 with a workspace category. The workspace category includes a user-selectable button (e.g., via a drop-down list) to manage workspace settings. In one or more embodiments, the workspace settings include the option to create bot participant(s) via selection of a bot participant button (e.g., labeled “create test bot”). For example, each time the user selects the “create test bot” button, a bot participant is added to the virtual space (e.g., to test and fine-tune the audio and video properties of participant video elements).
  • As discussed further below with respect to FIG. 7 , creation of a participant bot causes a corresponding participant thumbnail to appear within the room list interface (e.g., the room list interface 410, the room list interface 712). Thus, each newly-created participant bot appears as a respective icon (e.g., thumbnail) within the room list interface.
  • User selection of a particular thumbnail (e.g., bot participant) surfaces a participant properties interface (e.g., discussed below with respect to FIG. 7 ) within the virtual space design interface. The participant properties interface, in conjunction with the virtual space design interface 204, provides for the user to specify properties and actions for the bot participant(s) during virtual space design. For example, the participant properties interface provides for the user/administrator to specify one or more of: the name of the bot participant; the email address of the bot participant (e.g., a fake email address for testing purposes); a participant tag for the bot participant (e.g., to define a role for the bot, such as a presenter, viewer, etc.); whether to mute audio; an audio track (e.g., predefined tracks for counting in different selectable voices, user input text-to-speech, or repeating the name assigned to the bot participant); whether to remove the bot participant; whether to mute video; and/or a video track (e.g., pre-generated videos corresponding to participants, custom videos uploaded by the user).
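  • For illustration, the bot participant properties enumerated above might be grouped as in the following hypothetical TypeScript sketch (all field names are illustrative, not from the disclosure):

      // Hypothetical sketch of properties settable via the participant properties interface.
      interface BotParticipantProperties {
        name: string;
        email: string;           // fake address for testing purposes
        tag?: string;            // role-defining participant tag (e.g., "presenter")
        audioMuted: boolean;
        audioTrack?: "counting" | "text-to-speech" | "repeat-name";
        ttsText?: string;        // used when audioTrack is "text-to-speech"
        videoMuted: boolean;
        videoTrackId?: string;   // pre-generated or user-uploaded video
      }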
  • In this manner, the user can specify the audio output (e.g., muted, an audio track) for a particular bot participant, and the physical appearance of the particular bot participant. Thus, the participant properties interface allows the user to simulate assigning participants to participant video elements, without the use of actual participants.
  • Moreover, the room list interface allows the user (e.g., administrator) to position/assign bot participants to different participant video elements within the virtual space. As noted above, a virtual space may have multiple rooms, each room having respective participant video elements. The room list interface generally provides for displaying the list of rooms in the virtual space, and for navigating between the rooms within the virtual space. Moreover, the room list interface provides for the user to move the bot participant(s) between rooms. For example, in association with listing each room, the room list interface includes respective thumbnails of the participants assigned to (e.g., present within) each room. To move a bot participant, the user may select the icon for the bot participant assigned to its current room, and drag the icon to a new room listed in the room list interface. An available participant video element in the new room may be assigned (e.g., automatically or manually) to the bot participant, and the room canvas interface 412 of the virtual space design interface 204 may be updated to present the bot participant as assigned to the participant video element.
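  • Reassigning a bot participant when its thumbnail is dragged to a different room might be sketched as follows in TypeScript; the room/slot model and names are hypothetical:

      interface Slot {
        elementId: string;
        feedId?: string;  // occupied when a (bot) participant feed is assigned
      }

      interface Room {
        id: string;
        slots: Slot[];    // participant video elements in the room
      }

      function moveBotToRoom(rooms: Room[], botFeedId: string, targetRoomId: string): boolean {
        // release the slot the bot occupies in its current room, if any
        for (const room of rooms) {
          for (const slot of room.slots) {
            if (slot.feedId === botFeedId) slot.feedId = undefined;
          }
        }
        // assign the bot to an available participant video element in the target room
        const target = rooms.find((r) => r.id === targetRoomId);
        const open = target?.slots.find((s) => s.feedId === undefined);
        if (!open) return false;  // no unoccupied participant video element
        open.feedId = botFeedId;
        return true;
      }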
  • Thus, with respect to operation 606, the virtual conference client 104 running on the first client device 602 provides display of the virtual space design interface 204, for setting properties for the participant video elements (e.g., via the element properties interface 406) and the bot participant(s) (e.g., via the participant properties interface). The first client device 602 receives user input setting such properties (block 608). In response, the first client device 602 sends an indication of the set properties to the virtual conference server system 108 (operation 610). For example, values input by the user at the first client device 602, for configuring the participant video elements and the bot participant(s), are sent from the first client device 602 to the virtual conference server system 108.
  • The virtual conference server system 108 stores the properties associated with the participant video elements and the bot participant(s) in association with the virtual space (block 612). For example, the virtual conference server system 108 provides for storing the properties (e.g., user-selected values for the various fields) within the virtual spaces table 308 of the database 126, in association with the virtual space.
  • Moreover, the virtual conference server system 108 presents, to the first client device 602 and not the second client device(s) 604, the virtual space based on the properties set for the participant video elements and the bot participant(s) (operation 614). While remaining in the design phase, the virtual conference server system 108 provides for presenting the virtual space based on the stored properties for the participant video elements and the bot participant(s). This includes presenting each of the participant video elements and other room elements based on each of their positions (e.g., including relative positions based on any segmented objects in the room), shapes, sizes and/or effects as configured by the user during design of the room, and presenting the participant bot(s) based on their user-specified values.
  • For example, the virtual conference server system 108 assigns participant bots (e.g., with their respective audio tracks, video tracks, and the like) to respective participant video elements. As noted above, for a participant video element, the element properties interface 406 includes fields for setting the manner in which participant video feeds (including those associated with a bot participant) are assigned and placed into the participant video element (e.g., automatically, manually by the participant and/or the administrator end user). Thus, the audio mute/track and video mute/track for the bot participant(s) as specified by the user are assigned to participant video elements within the virtual space based on automatic and/or manual selection as specified by the designer. The user may observe each of the participant video elements assigned to bot participant(s).
  • In one or more embodiments, the user may continue to modify properties of the participant video elements and/or the bot participant(s). For example, the user may select to reposition/reassign the bot participant to a different participant video element within the room (e.g., via a drag gesture). Alternatively or in addition, the user may select to modify properties of the bot participant via the participant properties interface. Moreover, the user may select to modify properties (e.g., audio mute/track, video mute/track) of the assigned participant video element, for example, by selecting the participant video element and changing properties (e.g., audio properties such as muting/volume, and/or video filter values such as blur, brightness, color, contrast, hue as noted above) via the element properties interface 406.
  • As the user makes these changes (e.g., per block 608), the first client device 602 may provide continuous indications of the updated properties to the virtual conference server system 108 (e.g., per operation 610), which in turn updates the virtual spaces table 308 and presents an updated virtual space (e.g., per block 612 and operation 614). As such, operations 608-614 may loop as shown by the dotted line following operation 614 and returning to block 608.
  • In one or more embodiments, the user may decide that configuration of the participant video elements and bot participant(s) in the virtual space is complete (e.g., the virtual space is satisfactory for virtual conferencing based on observation of the bot participant(s)). At this stage, the user may elect to remove the bot participant(s) from the virtual space (e.g., so as not to appear within room(s) during virtual conferencing with the actual participants).
  • In this regard, the virtual space design interface 204 provides various interface elements for removing bot participant(s). In a first example, an individual bot participant is removable via the participant properties interface corresponding to that bot participant. In another example, the virtual space design interface 204 provides an interface element (e.g., the element properties interface for the room) to remove all bots within a particular room. In yet another example, the virtual space design interface 204 provides an interface element (e.g., via the workspace settings) to remove all bots from the entire virtual space (e.g., across all rooms of the virtual space).
  • Thus, at block 616, the first client device 602 receives user input to remove bot participant(s). The first client device 602 sends, to the virtual conference server system 108, a request to remove the bot participant(s). In response to receiving the request, the virtual conference server system 108 removes the bot participant(s) from the virtual space (e.g., via updates to the virtual spaces table 308). While the user may typically decide to remove all bot participant(s) from the virtual space so that no bot participant(s) are present during virtual conferencing with actual participants other than the first participant, the virtual conferencing system 100 is not limited in this manner. Thus, the first participant may elect to include no bot participants, a subset of the bot participant(s), or all bot participant(s) used during design of the virtual space.
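  • A minimal sketch of the three removal scopes described above (single bot, per-room, entire virtual space), assuming illustrative Bot and VirtualSpace types rather than the system's actual schema:

```typescript
// Hypothetical sketch of the three bot-removal scopes. Bot and VirtualSpace
// are illustrative types, not the actual data model.

interface Bot {
  id: string;
  roomId: string;
}

interface VirtualSpace {
  bots: Bot[];
}

// Remove one bot (e.g., via its participant properties interface).
function removeBot(space: VirtualSpace, botId: string): void {
  space.bots = space.bots.filter((b) => b.id !== botId);
}

// Remove all bots within a particular room (e.g., via the room's element
// properties interface).
function removeBotsInRoom(space: VirtualSpace, roomId: string): void {
  space.bots = space.bots.filter((b) => b.roomId !== roomId);
}

// Remove all bots from the entire virtual space (e.g., via workspace settings).
function removeAllBots(space: VirtualSpace): void {
  space.bots = [];
}
```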
  • As noted above, operations 622-624 relate to a virtual conferencing phase, in which actual participants (e.g., the first participant and the second participant(s)) engage in virtual conferencing within the room. At operations 622-624, the virtual conference server system 108 provides for presenting the room based on the stored properties for all room elements. This includes presenting each of the participant video elements and other room elements based on each of their positions (e.g., including relative positions based on any segmented objects in the room), shapes, sizes and/or effects as configured by the user during design of the room.
  • For example, the virtual conference server system 108 assigns the video feeds of the first participant and second participant(s) to respective participant video elements. As noted above, for a participant video element, the element properties interface 406 includes fields for setting the manner in which participant video feeds are assigned and placed into the participant video element during virtual conferencing (e.g., automatically, manually by the participant and/or the administrator end user). Thus, the participant video feeds of the first participant and the second participant(s) are assigned to participant video elements within the room based on automatic and/or manual selection as specified by the designer. The first and second participants can participate in virtual conferencing, and observe each other's participant video feeds.
  • Thus, the bot participant(s) as described herein allow the user (e.g., administrator) to observe and test virtual spaces using bot participant(s) that perform predefined actions, in order to simulate actual participants. Example use cases for configuring bot participant(s) include, but are not limited to, breakout rooms, audio mixing between rooms of the virtual space, previewing into rooms of the virtual space, and side conversations.
  • Regarding breakout rooms, the virtual space design interface 204 provides an interface to configure breakout rooms, for shuffling of selected participants between a current room and one or more other rooms. For example, a breakout room is separate from a current room and allows a small group of participants to discuss a particular issue before returning to the main meeting (e.g., room). The selected participants may enter the breakout room in response to selecting a button (e.g., a prompt to break out from the main meeting to the breakout room selected for that participant). The breakout room configurations may be specified during room design via the virtual space design interface 204.
  • During design, the user (e.g., administrator) may add bot participant(s) to specific rooms of the virtual space, in order to test and confirm that participants are being directed to the appropriate breakout rooms during a breakout session. For example, the element properties interface may include user-selectable options to assign room participants to breakout rooms as follows: all participants in the current room (to move all of the participants in the room that the breakout button is currently in); all participants in the virtual space except pinned (to move all of the participants in the virtual space except those that are pinned to a room, for example, to move everyone back from breakout sessions into the main room); participants by tag/role (to specify which participants to move based on their specific tag or lack of a tag, for example, to move all participants tagged with “staff” into a green room, or to move all supervisors into one room and all line workers into another); participants by email (to send a specific participant, via their email address associated with the virtual conferencing system 100, to a specific participant video element or room, for example, to send speakers into specific slots in an auditorium); participants by participant video element (to move the current participant(s) in a specific slot/participant video element out of that slot, for example, to move a participant in a speaker slot to an audience slot without associating the movement with a specific email); participants selected from the administrator's rooms list (e.g., sending participants selected from the rooms list for the user into a specific room); and role-based breakouts (e.g., supervisors to one room, line workers to another, based on user tags). These options are sketched below.
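  • In effect, each option above is a selection predicate over the participant list (bot participants included, since they simulate actual participants during testing). A minimal TypeScript sketch of these selection strategies, with all names assumed for illustration:

```typescript
// Hypothetical sketch of the breakout assignment options listed above.
// Participant and BreakoutRule are illustrative; the disclosure does not
// define an implementation. Role-based breakouts reuse the tag-based rule.

interface Participant {
  email: string;
  tags: string[];
  roomId: string;
  pinned: boolean;
  slotId?: string; // assigned participant video element, if any
}

type BreakoutRule =
  | { kind: "allInCurrentRoom"; currentRoomId: string }
  | { kind: "allExceptPinned" }
  | { kind: "byTag"; tag: string } // also covers role-based breakouts
  | { kind: "byEmail"; email: string }
  | { kind: "bySlot"; slotId: string };

function selectForBreakout(
  rule: BreakoutRule,
  all: Participant[],
): Participant[] {
  switch (rule.kind) {
    case "allInCurrentRoom":
      return all.filter((p) => p.roomId === rule.currentRoomId);
    case "allExceptPinned":
      return all.filter((p) => !p.pinned);
    case "byTag":
      return all.filter((p) => p.tags.includes(rule.tag));
    case "byEmail":
      return all.filter((p) => p.email === rule.email);
    case "bySlot":
      return all.filter((p) => p.slotId === rule.slotId);
  }
}
```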
  • Moreover, the virtual conference server system 108 provides for the user to save and/or load sets of test bots. For example, a saved set would include information on multiple bots with assigned roles placed into specific rooms.
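  • For example, a saved bot set might serialize each bot's name, role, and room placement; a minimal sketch under those assumptions (field names are not from the disclosure):

```typescript
// Hypothetical sketch of saving/loading a set of test bots.

interface SavedBot {
  name: string;
  role: string;
  roomId: string;
}

interface BotSet {
  label: string;
  bots: SavedBot[];
}

function saveBotSet(label: string, bots: SavedBot[]): string {
  const set: BotSet = { label, bots };
  return JSON.stringify(set); // persisted, e.g., alongside the virtual space
}

function loadBotSet(serialized: string): BotSet {
  return JSON.parse(serialized) as BotSet;
}
```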
  • Regarding audio mixing between rooms, the virtual conference server system 108 provides an interface to configure audio mixing, for allowing a first participant to mix participant audio from one or more second room(s) into audio presented to a first room. The interface may include various user-selectable fields for specifying the manner in which the participant audio is mixed (e.g., which room(s) to sample, a number of participants to sample, an audio level for the mix, and the like). The participant audio may correspond to the live audio feed (e.g., from microphones) of the participants within the other room(s). For example, a user may design the virtual space as a restaurant with the rooms corresponding to restaurant tables. During design, the user (e.g., administrator) may add bot participant(s) to specific rooms of the virtual space, in order to observe and fine-tune audio levels as observed by a participant within a room.
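  • One way to picture the mixing configuration is as a gain applied to audio sampled from the second room(s); a sketch only, in which frame mixing is simplified to scalar addition and all field names are assumptions:

```typescript
// Hypothetical sketch of mixing participant audio from sampled rooms into a
// first room, per the user-selectable fields above (which rooms to sample,
// number of participants, audio level for the mix).

interface MixConfig {
  sampledRoomIds: string[]; // which room(s) to sample
  participantsPerRoom: number; // how many participants to sample per room
  mixLevel: number; // 0..1 gain applied to the sampled audio
}

// samples: per-participant audio frames, keyed by room.
function mixIntoRoom(
  roomFrame: Float32Array,
  samples: Map<string, Float32Array[]>,
  config: MixConfig,
): Float32Array {
  const out = Float32Array.from(roomFrame);
  for (const roomId of config.sampledRoomIds) {
    const feeds = (samples.get(roomId) ?? []).slice(
      0,
      config.participantsPerRoom,
    );
    for (const feed of feeds) {
      for (let i = 0; i < out.length && i < feed.length; i++) {
        out[i] += config.mixLevel * feed[i]; // attenuated background audio
      }
    }
  }
  return out;
}
```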
  • In one or more embodiments, the virtual conference server system 108 provides an interface to configure previewing into other rooms during virtual conferencing. For example, a room preview element can be included within a first room, to display a live preview of a second room as a window or frame within the first room. In addition to displaying the live preview, the virtual conferencing system provides audio output associated with the second room, at a reduced audio level relative to audio output associated with the first room. The live video-audio feed of the room preview element allows participants within the first room to preview or otherwise observe the second room, without requiring participants in the first room to navigate to the second room. During design, the user (e.g., administrator) may add bot participant(s) to specific rooms of the virtual space, in order to observe and fine-tune video filtering across rooms (e.g., blurring participant videos in adjacent rooms). In addition, the bot participant(s) may be used to observe and fine-tune audio levels of participants across rooms.
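  • As a minimal sketch of the reduced-level preview audio, assuming a hypothetical previewGain field (the disclosure does not name one):

```typescript
// Hypothetical sketch: a room preview plays the second room's audio at a
// reduced level relative to the first room's own audio.

interface RoomPreview {
  previewedRoomId: string;
  previewGain: number; // e.g., 0.25 keeps the previewed room in the background
}

function applyPreviewGain(
  frame: Float32Array,
  preview: RoomPreview,
): Float32Array {
  // Attenuate every sample of the previewed room's audio frame.
  return frame.map((s) => s * preview.previewGain);
}
```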
  • With respect to side conversations, the virtual conference server system 108 provides an interface to configure side conversations, for allowing a first participant (e.g., administrator) to invite a second participant to a side conversation, while both maintaining presence in a room. When the second participant accepts and the side conversation commences, an indication of the side conversation is sent to the devices of other room participants. Those other participants may join the side conversation, by either accepting an invitation received from the first or second participants, or by sending a request to join the side conversation to the devices of the first or second participants. The user may configure audio levels for the side conversation as observed by those within the side conversation and those outside of the side conversation. During design, the user may add bot participant(s) within a room of the virtual space, in order to observe and fine-tune audio levels with respect to side conversations.
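  • The differing audio levels inside and outside a side conversation can be sketched as a per-listener gain rule (illustrative names and gain values; not the system's actual audio pipeline):

```typescript
// Hypothetical sketch of side-conversation audio levels: members hear each
// other at the configured inside level, while non-members in the room hear
// the side conversation attenuated.

interface SideConversation {
  memberIds: Set<string>;
  insideGain: number; // e.g., 1.0 for members of the side conversation
  outsideGain: number; // e.g., 0.1 for other room participants
}

function gainFor(
  listenerId: string,
  speakerId: string,
  sc: SideConversation,
): number {
  const speakerInside = sc.memberIds.has(speakerId);
  if (!speakerInside) return 1.0; // normal room audio is unaffected
  return sc.memberIds.has(listenerId) ? sc.insideGain : sc.outsideGain;
}
```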
  • Thus, the virtual conferencing system 100 as described herein allows for configuring bot participants, each of which is assignable to a respective participant video element. The bot participants may be presented during design, before virtual conferencing with actual participants. For example, the user (e.g., designer of the virtual space) can specify an audio level, audio track, blur level (e.g., filtering) and/or participant role for a given participant bot. The user can also position participant bots at different participant video elements and/or different rooms. In this manner, it is possible to observe the properties (e.g., positions, audio levels, video filters) of participant video elements and fine-tune those properties, prior to hosting actual participants within the virtual space for virtual conferencing.
  • By virtue of allowing a user to configure and use bot participants during design, it is possible to fine-tune the configuration of participant video elements within a virtual space. The video and audio levels for participants among the rooms of a virtual space design may be set in a manner that is more engaging for participants. Without allowing the user (e.g., administrator) to configure a virtual space in this manner, the user may have to further configure participant video elements during virtual conferencing with actual participants, which may be distracting and cumbersome for the user (e.g., who may also be presenting) and/or other participants. The virtual conferencing system 100 facilitates creation of, and participation in, virtual conferencing environments, thereby saving time for the user, and reducing computational resources/processing power for the virtual conferencing system 100.
  • FIG. 7 illustrates a virtual space design interface 700 with bot participants, in accordance with some example embodiments. The virtual space design interface 700 is similar to the virtual space design interface 204 described above with respect to FIG. 4. In the example of FIG. 7, the virtual space design interface 700 includes participant thumbnails 702-706, participant video elements 708-710, a room list interface 712, a room canvas interface 714 and a participant properties interface 716.
  • The room list interface 712 lists the rooms within the virtual space. The room list interface 712 further depicts respective participant thumbnails 702-706 for participants within the virtual space. In the example of FIG. 7, the participant thumbnail 702 corresponds to a first bot participant (“Bot_1”), the participant thumbnail 706 corresponds to a second bot participant (“Bot_2”), and the participant thumbnail 704 corresponds to the user (e.g., designer) of the virtual space.
  • The participant video elements 708-710 are positioned within a room entitled “Nobu Malibu,” which is selected within the room list interface 712 and whose design is depicted within the room canvas interface 714. In this room, the participant thumbnail 702 (e.g., the first bot) is assigned to the participant video element 708 and corresponds to a simulated video feed. On the other hand, the participant thumbnail 704 (e.g., the user/designer) is assigned to the participant video element 710 and corresponds to an actual video feed. Moreover, the participant thumbnail 706 (e.g., the second bot) is assigned to a participant video element within the room entitled “Dance Floor,” which does not correspond to the current room canvas interface 714 of FIG. 7.
  • In one or more embodiments, user selection of any of the participant thumbnails 702-706 within the room list interface 712 surfaces the participant properties interface 716. As noted above with respect to FIG. 6, the participant properties interface 716 includes user-selectable fields for configuring the bot participants, such as the bot participant's name, email address, participant tag, whether to mute audio, audio track, removal, whether to mute video, video track and the like.
  • In one or more embodiments, user selection of any of the participant video elements 708-710 within the room canvas interface 714 surfaces the element properties interface (e.g., similar to element 406, and positioned to replace the participant properties interface 716). The element properties interface includes user-selectable fields for configuring the participant video elements, such as the participant's title/name, opacity, gradient, style, layout, borders/corners, shadows, interaction (e.g., to what extent participant(s) may delete, modify, resize the element), filtering, full screen status, conditions, accessibility and actions for the participant video elements, the manner in which users are placed into the participant video elements during virtual conferencing (e.g., automatically, manually by the participant and/or the administrator end user), audio muting, audio volume, and/or video filter values for blur, brightness, color, contrast, hue and the like.
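  • The two property sets surfaced by these interfaces might be modeled as follows; the field names mirror the prose above, while the types themselves are assumptions for illustration, not the actual schema:

```typescript
// Hypothetical property models for the participant properties interface 716
// and the element properties interface (similar to element 406).

interface BotParticipantProperties {
  name: string;
  email: string;
  participantTag: string;
  muteAudio: boolean;
  audioTrack: string;
  muteVideo: boolean;
  videoTrack: string;
}

interface ParticipantVideoElementProperties {
  title: string;
  opacity: number;
  placement: "automatic" | "participant" | "administrator";
  muteAudio: boolean;
  audioVolume: number; // 0..1
  videoFilters: {
    blur: number;
    brightness: number;
    color: string;
    contrast: number;
    hue: number;
  };
}

// Example: a test bot muted on video, with a prerecorded audio track.
const exampleBot: BotParticipantProperties = {
  name: "Bot_1",
  email: "bot1@example.com",
  participantTag: "staff",
  muteAudio: false,
  audioTrack: "sample-speech-track",
  muteVideo: true,
  videoTrack: "sample-video-track",
};
```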
  • FIG. 8 is a flowchart illustrating a process 800 for providing bot participants within a virtual conferencing system, in accordance with some example embodiments. For explanatory purposes, the process 800 is primarily described herein with reference to the virtual conference server system 108 of FIG. 1 . However, one or more blocks (or operations) of the process 800 may be performed by one or more other components, and/or by other suitable devices. Further for explanatory purposes, the blocks (or operations) of the process 800 are described herein as occurring in serial, or linearly. However, multiple blocks (or operations) of the process 800 may occur in parallel or concurrently. In addition, the blocks (or operations) of the process 800 need not be performed in the order shown and/or one or more blocks (or operations) of the process 800 need not be performed and/or can be replaced by other operations. The process 800 may be terminated when its operations are completed. In addition, the process 800 may correspond to a method, a procedure, an algorithm, etc.
  • The virtual conference server system 108 provides, in association with designing a virtual space for virtual conferencing, a first interface for configuring plural participant video elements within the virtual space, each of the plural participant video elements being assignable to a respective participant (block 802). Each respective participant may be associated with a video feed for presenting within a respective one of the plural participant video elements.
  • The virtual conference server system 108 receives, via the first interface, an indication of user input for setting first properties for the plural participant video elements (block 804). The virtual conference server system 108 provides, in association with designing the virtual space, a second interface for configuring a bot participant in the virtual space, the bot participant for simulating an actual participant in association with a participant video element of the plural participant video elements (block 806).
  • The virtual conference server system 108 receives, via the second interface, an indication of second user input for setting second properties for the bot participant (block 808). The second properties may specify one or more of: an audio level for the bot participant, for user observation of audio levels across rooms during design of the virtual space; an audio track for the bot participant, for user observation of audio output across rooms during design of the virtual space; a blur level for the bot participant, for user observation of video filtering across rooms during design of the virtual space; and a participant role for the bot participant, for user observation of role-based breakouts during design of the virtual space.
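  • A minimal sketch of these second properties (block 808), and of how a bot might be simulated from them during design, with all names assumed for illustration:

```typescript
// Hypothetical model of the second properties set via the second interface.

interface SecondProperties {
  audioLevel?: number; // observe audio levels across rooms
  audioTrack?: string; // observe audio output across rooms
  blurLevel?: number; // observe video filtering across rooms
  participantRole?: string; // observe role-based breakouts
}

// Derive a simulated bot from the configured properties; defaults are
// illustrative assumptions, not specified by the disclosure.
function simulateBot(props: SecondProperties) {
  return {
    audioGain: props.audioLevel ?? 1.0,
    audioSource: props.audioTrack ?? "silence",
    videoFilter: props.blurLevel ? `blur(${props.blurLevel}px)` : "none",
    role: props.participantRole ?? "participant",
  };
}
```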
  • The virtual conference server system 108 provides, in association with designing the virtual space, display of the virtual space based on the first properties and the second properties (block 810). The bot participant is assigned to the participant video element (e.g., where the participant video element is selected from the plural participant video elements based on manual and/or automatic selection).
  • The virtual conference server system 108 may receive, in association with designing the virtual space, an indication of third user input to remove the bot participant from the virtual space. In response to receiving the indication of third user input, the virtual conference server system 108 may remove the bot participant from the virtual space. The virtual conference server system 108 may provide, in association with virtual conferencing, display of the virtual space based on removing the bot participant from the virtual space.
  • FIG. 9 is a diagrammatic representation of the machine 900 within which instructions 910 (e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine 900 to perform any one or more of the methodologies discussed herein may be executed. For example, the instructions 910 may cause the machine 900 to execute any one or more of the methods described herein. The instructions 910 transform the general, non-programmed machine 900 into a particular machine 900 programmed to carry out the described and illustrated functions in the manner described. The machine 900 may operate as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine 900 may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine 900 may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a personal digital assistant (PDA), an entertainment media system, a cellular telephone, a smartphone, a mobile device, a wearable device (e.g., a smartwatch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions 910, sequentially or otherwise, that specify actions to be taken by the machine 900. Further, while only a single machine 900 is illustrated, the term “machine” shall also be taken to include a collection of machines that individually or jointly execute the instructions 910 to perform any one or more of the methodologies discussed herein. The machine 900, for example, may comprise the client device 102 or any one of a number of server devices forming part of the virtual conference server system 108. In some examples, the machine 900 may also comprise both client and server systems, with certain operations of a particular method or algorithm being performed on the server-side and with certain operations of the particular method or algorithm being performed on the client-side.
  • The machine 900 may include processors 904, memory 906, and input/output (I/O) components 902, which may be configured to communicate with each other via a bus 940. In an example, the processors 904 (e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) Processor, a Complex Instruction Set Computing (CISC) Processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Radio-Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor 908 and a processor 912 that execute the instructions 910. The term “processor” is intended to include multi-core processors that may comprise two or more independent processors (sometimes referred to as “cores”) that may execute instructions contemporaneously. Although FIG. 9 shows multiple processors 904, the machine 900 may include a single processor with a single core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof.
  • The memory 906 includes a main memory 914, a static memory 916, and a storage unit 918, each accessible to the processors 904 via the bus 940. The main memory 914, the static memory 916, and the storage unit 918 store the instructions 910 embodying any one or more of the methodologies or functions described herein. The instructions 910 may also reside, completely or partially, within the main memory 914, within the static memory 916, within machine-readable medium 920 within the storage unit 918, within at least one of the processors 904 (e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine 900.
  • The I/O components 902 may include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components 902 that are included in a particular machine will depend on the type of machine. For example, portable machines such as mobile phones may include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components 902 may include many other components that are not shown in FIG. 9 . In various examples, the I/O components 902 may include user output components 926 and user input components 928. The user output components 926 may include visual components (e.g., a display such as a plasma display panel (PDP), a light-emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor, resistance mechanisms), other signal generators, and so forth. The user input components 928 may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or another pointing instrument), tactile input components (e.g., a physical button, a touch screen that provides location and force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like.
  • In further examples, the I/O components 902 may include biometric components 930, motion components 932, environmental components 934, or position components 936, among a wide array of other components. For example, the biometric components 930 include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye-tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram-based identification), and the like. The motion components 932 include acceleration sensor components (e.g., accelerometer), gravitation sensor components, and rotation sensor components (e.g., gyroscope).
  • The environmental components 934 include, for example, one or more cameras (with still image/photograph and video capabilities), illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment.
  • With respect to cameras, the client device 102 may have a camera system comprising, for example, front cameras on a front surface of the client device 102 and rear cameras on a rear surface of the client device 102. The front cameras may, for example, be used to capture still images and video of a user of the client device 102 (e.g., “selfies”), which may then be augmented with augmentation data (e.g., filters) described above. The rear cameras may, for example, be used to capture still images and videos in a more traditional camera mode, with these images similarly being augmented with augmentation data. In addition to front and rear cameras, the client device 102 may also include a 360° camera for capturing 360° photographs and videos.
  • Further, the camera system of a client device 102 may include dual rear cameras (e.g., a primary camera as well as a depth-sensing camera), or even triple, quad or penta rear camera configurations on the front and rear sides of the client device 102. These multiple camera systems may include a wide camera, an ultra-wide camera, a telephoto camera, a macro camera and a depth sensor, for example.
  • The position components 936 include location sensor components (e.g., a GPS receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like.
  • Communication may be implemented using a wide variety of technologies. The I/O components 902 further include communication components 938 operable to couple the machine 900 to a network 922 or devices 924 via respective couplings or connections. For example, the communication components 938 may include a network interface component or another suitable device to interface with the network 922. In further examples, the communication components 938 may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities. The devices 924 may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a USB).
  • Moreover, the communication components 938 may detect identifiers or include components operable to detect identifiers. For example, the communication components 938 may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar code, multi-dimensional bar codes such as Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D bar code, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals). In addition, a variety of information may be derived via the communication components 938, such as location via Internet Protocol (IP) geolocation, location via Wi-Fi® signal triangulation, location via detecting an NFC beacon signal that may indicate a particular location, and so forth.
  • The various memories (e.g., main memory 914, static memory 916, and memory of the processors 904) and storage unit 918 may store one or more sets of instructions and data structures (e.g., software) embodying or used by any one or more of the methodologies or functions described herein. These instructions (e.g., the instructions 910), when executed by processors 904, cause various operations to implement the disclosed examples.
  • The instructions 910 may be transmitted or received over the network 922, using a transmission medium, via a network interface device (e.g., a network interface component included in the communication components 938) and using any one of several well-known transfer protocols (e.g., hypertext transfer protocol (HTTP)). Similarly, the instructions 910 may be transmitted or received using a transmission medium via a coupling (e.g., a peer-to-peer coupling) to the devices 924.
  • FIG. 10 is a block diagram 1000 illustrating a software architecture 1004, which can be installed on any one or more of the devices described herein. The software architecture 1004 is supported by hardware such as a machine 1002 that includes processors 1020, memory 1026, and I/O components 1038. In this example, the software architecture 1004 can be conceptualized as a stack of layers, where each layer provides a particular functionality. The software architecture 1004 includes layers such as an operating system 1012, libraries 1010, frameworks 1008, and applications 1006. Operationally, the applications 1006 invoke API calls 1050 through the software stack and receive messages 1052 in response to the API calls 1050.
  • The operating system 1012 manages hardware resources and provides common services. The operating system 1012 includes, for example, a kernel 1014, services 1016, and drivers 1022. The kernel 1014 acts as an abstraction layer between the hardware and the other software layers. For example, the kernel 1014 provides memory management, processor management (e.g., scheduling), component management, networking, and security settings, among other functionality. The services 1016 can provide other common services for the other software layers. The drivers 1022 are responsible for controlling or interfacing with the underlying hardware. For instance, the drivers 1022 can include display drivers, camera drivers, BLUETOOTH® or BLUETOOTH® Low Energy drivers, flash memory drivers, serial communication drivers (e.g., USB drivers), WI-FI® drivers, audio drivers, power management drivers, and so forth.
  • The libraries 1010 provide a common low-level infrastructure used by the applications 1006. The libraries 1010 can include system libraries 1018 (e.g., C standard library) that provide functions such as memory allocation functions, string manipulation functions, mathematic functions, and the like. In addition, the libraries 1010 can include API libraries 1024 such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as Moving Picture Experts Group-4 (MPEG4), Advanced Video Coding (H.264 or AVC), Moving Picture Experts Group Layer-3 (MP3), Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR) audio codec, Joint Photographic Experts Group (JPEG or JPG), or Portable Network Graphics (PNG)), graphics libraries (e.g., an OpenGL framework used to render in two dimensions (2D) and three dimensions (3D) in a graphic content on a display), database libraries (e.g., SQLite to provide various relational database functions), web libraries (e.g., WebKit to provide web browsing functionality), and the like. The libraries 1010 can also include a wide variety of other libraries 1028 to provide many other APIs to the applications 1006.
  • The frameworks 1008 provide a common high-level infrastructure that is used by the applications 1006. For example, the frameworks 1008 provide various graphical user interface (GUI) functions, high-level resource management, and high-level location services. The frameworks 1008 can provide a broad spectrum of other APIs that can be used by the applications 1006, some of which may be specific to a particular operating system or platform.
  • In an example, the applications 1006 may include a home application 1036, a contacts application 1030, a browser application 1032, a book reader application 1034, a location application 1042, a media application 1044, a messaging application 1046, a game application 1048, and a broad assortment of other applications such as a third-party application 1040. The applications 1006 are programs that execute functions defined in the programs. Various programming languages can be employed to create one or more of the applications 1006, structured in a variety of manners, such as object-oriented programming languages (e.g., Objective-C, Java, or C++) or procedural programming languages (e.g., C or assembly language). In a specific example, the third-party application 1040 (e.g., an application developed using the ANDROID™ or IOS™ software development kit (SDK) by an entity other than the vendor of the particular platform) may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or another mobile operating system. In this example, the third-party application 1040 can invoke the API calls 1050 provided by the operating system 1012 to facilitate functionality described herein.
  • Glossary
  • “Carrier signal” refers to any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible media to facilitate communication of such instructions. Instructions may be transmitted or received over a network using a transmission medium via a network interface device.
  • “Client device” refers to any machine that interfaces to a communications network to obtain resources from one or more server systems or other client devices. A client device may be, but is not limited to, a mobile phone, desktop computer, laptop, portable digital assistant (PDA), smartphone, tablet, ultrabook, netbook, multi-processor system, microprocessor-based or programmable consumer electronics device, game console, set-top box, or any other communication device that a user may use to access a network.
  • “Communication network” refers to one or more portions of a network that may be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), the Internet, a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. For example, a network or a portion of a network may include a wireless or cellular network and the coupling may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or other types of cellular or wireless coupling. In this example, the coupling may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE) standard, others defined by various standard-setting organizations, other long-range protocols, or other data transfer technology.
  • “Component” refers to a device, physical entity, or logic having boundaries defined by function or subroutine calls, branch points, APIs, or other technologies that provide for the partitioning or modularization of particular processing or control functions. Components may be combined via their interfaces with other components to carry out a machine process. A component may be a packaged functional hardware unit designed for use with other components and a part of a program that usually performs a particular function of related functions. Components may constitute either software components (e.g., code embodied on a machine-readable medium) or hardware components. A “hardware component” is a tangible unit capable of performing certain operations and may be configured or arranged in a certain physical manner. In various examples, one or more computer systems (e.g., a standalone computer system, a client computer system, or a server computer system) or one or more hardware components of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware component that operates to perform certain operations as described herein. A hardware component may also be implemented mechanically, electronically, or any suitable combination thereof. For example, a hardware component may include dedicated circuitry or logic that is permanently configured to perform certain operations. A hardware component may be a special-purpose processor, such as a field-programmable gate array (FPGA) or an application specific integrated circuit (ASIC). A hardware component may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations. For example, a hardware component may include software executed by a general-purpose processor or other programmable processor. Once configured by such software, hardware components become specific machines (or specific components of a machine) uniquely tailored to perform the configured functions and are no longer general-purpose processors. It will be appreciated that the decision to implement a hardware component mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software), may be driven by cost and time considerations. Accordingly, the phrase “hardware component” (or “hardware-implemented component”) should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Considering examples in which hardware components are temporarily configured (e.g., programmed), each of the hardware components need not be configured or instantiated at any one instance in time. For example, where a hardware component comprises a general-purpose processor configured by software to become a special-purpose processor, the general-purpose processor may be configured as respectively different special-purpose processors (e.g., comprising different hardware components) at different times. Software accordingly configures a particular processor or processors, for example, to constitute a particular hardware component at one instance of time and to constitute a different hardware component at a different instance of time. 
Hardware components can provide information to, and receive information from, other hardware components. Accordingly, the described hardware components may be regarded as being communicatively coupled. Where multiple hardware components exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) between or among two or more of the hardware components. In examples in which multiple hardware components are configured or instantiated at different times, communications between such hardware components may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware components have access. For example, one hardware component may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware component may then, at a later time, access the memory device to retrieve and process the stored output. Hardware components may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information). The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented components that operate to perform one or more operations or functions described herein. As used herein, “processor-implemented component” refers to a hardware component implemented using one or more processors. Similarly, the methods described herein may be at least partially processor-implemented, with a particular processor or processors being an example of hardware. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented components. Moreover, the one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an API). The performance of certain of the operations may be distributed among the processors, not only residing within a single machine, but deployed across a number of machines. In some examples, the processors or processor-implemented components may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other examples, the processors or processor-implemented components may be distributed across a number of geographic locations.
  • “Computer-readable storage medium” refers to both machine-storage media and transmission media. Thus, the terms include both storage devices/media and carrier waves/modulated data signals. The terms “machine-readable medium,” “computer-readable medium” and “device-readable medium” mean the same thing and may be used interchangeably in this disclosure.
  • “Machine storage medium” refers to a single or multiple storage devices and media (e.g., a centralized or distributed database, and associated caches and servers) that store executable instructions, routines and data. The term shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media, including memory internal or external to processors. Specific examples of machine-storage media, computer-storage media and device-storage media include non-volatile memory, including by way of example semiconductor memory devices, e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), FPGA, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The terms “machine-storage medium,” “device-storage medium,” and “computer-storage medium” mean the same thing and may be used interchangeably in this disclosure. The terms “machine-storage media,” “computer-storage media,” and “device-storage media” specifically exclude carrier waves, modulated data signals, and other such media, at least some of which are covered under the term “signal medium.”
  • “Non-transitory computer-readable storage medium” refers to a tangible medium that is capable of storing, encoding, or carrying the instructions for execution by a machine.
  • “Signal medium” refers to any intangible medium that is capable of storing, encoding, or carrying the instructions for execution by a machine and includes digital or analog communications signals or other intangible media to facilitate communication of software or data. The term “signal medium” shall be taken to include any form of a modulated data signal, carrier wave, and so forth. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The terms “transmission medium” and “signal medium” mean the same thing and may be used interchangeably in this disclosure.

Claims (20)

What is claimed is:
1. A method, comprising:
providing, in association with designing a virtual space for virtual conferencing, a first interface for configuring plural participant video elements within the virtual space, each of the plural participant video elements being assignable to a respective participant;
receiving, via the first interface, an indication of user input for setting first properties for the plural participant video elements;
providing, in association with designing the virtual space, a second interface for configuring a bot participant in the virtual space, the bot participant for simulating an actual participant in association with a participant video element of the plural participant video elements;
receiving, via the second interface, an indication of second user input for setting second properties for the bot participant;
providing, in association with designing the virtual space, display of the virtual space based on the first properties and the second properties, the bot participant being assigned to the participant video element;
receiving, in association with designing the virtual space, an indication of third user input to remove the bot participant from the virtual space; and
removing, in response to receiving the indication of third user input, the bot participant from the virtual space.
2. The method of claim 1, further comprising:
providing, in association with virtual conferencing, display of the virtual space based on removing the bot participant from the virtual space.
3. The method of claim 1, wherein the second properties specify an audio level for the bot participant, for user observation of audio levels across rooms during design of the virtual space.
4. The method of claim 1, wherein the second properties specify an audio track for the bot participant, for user observation of audio output across rooms during design of the virtual space.
5. The method of claim 1, wherein the second properties specify a blur level for the bot participant, for user observation of video filtering across rooms during design of the virtual space.
6. The method of claim 1, wherein the second properties specify a participant role for the bot participant, for user observation of role-based breakouts during design of the virtual space.
7. The method of claim 1, wherein each respective participant is associated with a video feed for presenting within a respective one of the plural participant video elements.
8. A system comprising:
a processor; and
a memory storing instructions that, when executed by the processor, configure the processor to perform operations comprising:
providing, in association with designing a virtual space for virtual conferencing, a first interface for configuring plural participant video elements within the virtual space, each of the plural participant video elements being assignable to a respective participant;
receiving, via the first interface, an indication of user input for setting first properties for the plural participant video elements;
providing, in association with designing the virtual space, a second interface for configuring a bot participant in the virtual space, the bot participant for simulating an actual participant in association with a participant video element of the plural participant video elements;
receiving, via the second interface, an indication of second user input for setting second properties for the bot participant;
providing, in association with designing the virtual space, display of the virtual space based on the first properties and the second properties, the bot participant being assigned to the participant video element;
receiving, in association with designing the virtual space, an indication of third user input to remove the bot participant from the virtual space; and
removing, in response to receiving the indication of third user input, the bot participant from the virtual space.
9. The system of claim 8, the operations further comprising:
providing, in association with virtual conferencing, display of the virtual space based on removing the bot participant from the virtual space.
10. The system of claim 8, wherein the second properties specify an audio level for the bot participant, for user observation of audio levels across rooms during design of the virtual space.
11. The system of claim 8, wherein the second properties specify an audio track for the bot participant, for user observation of audio output across rooms during design of the virtual space.
12. The system of claim 8, wherein the second properties specify a blur level for the bot participant, for user observation of video filtering across rooms during design of the virtual space.
13. The system of claim 8, wherein the second properties specify a participant role for the bot participant, for user observation of role-based breakouts during design of the virtual space.
14. The system of claim 8, wherein each respective participant is associated with a video feed for presenting within a respective one of the plural participant video elements.
15. A non-transitory computer-readable storage medium, the computer-readable storage medium including instructions that when executed by a computer, cause the computer to perform operations comprising:
providing, in association with designing a virtual space for virtual conferencing, a first interface for configuring plural participant video elements within the virtual space, each of the plural participant video elements being assignable to a respective participant;
receiving, via the first interface, an indication of user input for setting first properties for the plural participant video elements;
providing, in association with designing the virtual space, a second interface for configuring a bot participant in the virtual space, the bot participant for simulating an actual participant in association with a participant video element of the plural participant video elements;
receiving, via the second interface, an indication of second user input for setting second properties for the bot participant;
providing, in association with designing the virtual space, display of the virtual space based on the first properties and the second properties, the bot participant being assigned to the participant video element;
receiving, in association with designing the virtual space, an indication of third user input to remove the bot participant from the virtual space; and
removing, in response to receiving the indication of third user input, the bot participant from the virtual space.
16. The computer-readable medium of claim 15, the operations further comprising:
providing, in association with virtual conferencing, display of the virtual space based on removing the bot participant from the virtual space.
17. The computer-readable medium of claim 15, wherein the second properties specify an audio level for the bot participant, for user observation of audio levels across rooms during design of the virtual space.
18. The computer-readable medium of claim 15, wherein the second properties specify an audio track for the bot participant, for user observation of audio output across rooms during design of the virtual space.
19. The computer-readable medium of claim 15, wherein the second properties specify a blur level for the bot participant, for user observation of video filtering across rooms during design of the virtual space.
20. The computer-readable medium of claim 15, wherein the second properties specify a participant role for the bot participant, for user observation of role-based breakouts during design of the virtual space.
US18/191,729 2022-07-09 2023-03-28 Providing bot participants within a virtual conferencing system Active US11880560B1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US18/191,729 US11880560B1 (en) 2022-07-09 2023-03-28 Providing bot participants within a virtual conferencing system
US18/534,341 US20240103708A1 (en) 2022-07-09 2023-12-08 Providing bot participants within a virtual conferencing system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202263368047P 2022-07-09 2022-07-09
US18/191,729 US11880560B1 (en) 2022-07-09 2023-03-28 Providing bot participants within a virtual conferencing system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/534,341 Continuation US20240103708A1 (en) 2022-07-09 2023-12-08 Providing bot participants within a virtual conferencing system

Publications (2)

Publication Number Publication Date
US20240012550A1 true US20240012550A1 (en) 2024-01-11
US11880560B1 US11880560B1 (en) 2024-01-23

Family

ID=89431390

Family Applications (2)

Application Number Title Priority Date Filing Date
US18/191,729 Active US11880560B1 (en) 2022-07-09 2023-03-28 Providing bot participants within a virtual conferencing system
US18/534,341 Pending US20240103708A1 (en) 2022-07-09 2023-12-08 Providing bot participants within a virtual conferencing system

Family Applications After (1)

Application Number Title Priority Date Filing Date
US18/534,341 Pending US20240103708A1 (en) 2022-07-09 2023-12-08 Providing bot participants within a virtual conferencing system

Country Status (1)

Country Link
US (2) US11880560B1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200252442A1 (en) * 2018-08-01 2020-08-06 Salesloft, Inc. Systems and methods for electronic notetaking
US20210250548A1 (en) * 2020-02-12 2021-08-12 LINE Plus Corporation Method, system, and non-transitory computer readable record medium for providing communication using video call bot

Family Cites Families (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8243119B2 (en) * 2007-09-30 2012-08-14 Optical Fusion Inc. Recording and videomail for video conferencing call systems
US20160170970A1 (en) * 2014-12-12 2016-06-16 Microsoft Technology Licensing, Llc Translation Control
US20180232705A1 (en) * 2017-02-15 2018-08-16 Microsoft Technology Licensing, Llc Meeting timeline management tool
US11722535B2 (en) 2021-03-30 2023-08-08 Snap Inc. Communicating with a user external to a virtual conference
US11855796B2 (en) 2021-03-30 2023-12-26 Snap Inc. Presenting overview of participant reactions within a virtual conferencing system
WO2022212391A1 (en) 2021-03-30 2022-10-06 Snap Inc. Presenting participant conversations within virtual conferencing system
US20220321617A1 (en) 2021-03-30 2022-10-06 Snap Inc. Automatically navigating between rooms within a virtual conferencing system
US11683447B2 (en) 2021-03-30 2023-06-20 Snap Inc. Providing side conversations within a virtual conferencing system
US11973613B2 (en) 2021-03-30 2024-04-30 Snap Inc. Presenting overview of participant conversations within a virtual conferencing system
US11362848B1 (en) 2021-03-30 2022-06-14 Snap Inc. Administrator-based navigating of participants between rooms within a virtual conferencing system
US12107698B2 (en) 2021-03-30 2024-10-01 Snap Inc. Breakout sessions based on tagging users within a virtual conferencing system
US11489684B2 (en) 2021-03-30 2022-11-01 Snap Inc. Assigning participants to rooms within a virtual conferencing system
US11689696B2 (en) 2021-03-30 2023-06-27 Snap Inc. Configuring participant video feeds within a virtual conferencing system
EP4315840A1 (en) 2021-03-30 2024-02-07 Snap Inc. Presenting participant reactions within virtual conferencing system
US11381411B1 (en) 2021-03-30 2022-07-05 Snap Inc. Presenting participant reactions within a virtual conferencing system
US11943072B2 (en) 2021-03-30 2024-03-26 Snap Inc. Providing a room preview within a virtual conferencing system
US11683192B2 (en) 2021-03-30 2023-06-20 Snap Inc. Updating element properties based on distance between elements in virtual conference
US11792031B2 (en) 2021-03-31 2023-10-17 Snap Inc. Mixing participant audio from multiple rooms within a virtual conferencing system
US11909784B2 (en) * 2021-07-29 2024-02-20 Vmware, Inc. Automated actions in a conferencing service
US20230094963A1 (en) 2021-09-30 2023-03-30 Snap Inc. Providing template rooms within a virtual conferencing system
US12120460B2 (en) 2021-09-30 2024-10-15 Snap Inc. Updating a room element within a virtual conferencing system
US20230101377A1 (en) 2021-09-30 2023-03-30 Snap Inc. Providing contact information within a virtual conferencing system
US11979244B2 (en) 2021-09-30 2024-05-07 Snap Inc. Configuring 360-degree video within a virtual conferencing system
US20230101879A1 (en) 2021-09-30 2023-03-30 Snap Inc. Providing a door for a room within a virtual conferencing system
US11894940B2 (en) * 2022-05-10 2024-02-06 Google Llc Automated testing system for a video conferencing system

Also Published As

Publication number Publication date
US11880560B1 (en) 2024-01-23
US20240103708A1 (en) 2024-03-28

Similar Documents

Publication Title
US12088962B2 (en) Configuring participant video feeds within a virtual conferencing system
US11979244B2 (en) Configuring 360-degree video within a virtual conferencing system
US11973613B2 (en) Presenting overview of participant conversations within a virtual conferencing system
US11362848B1 (en) Administrator-based navigating of participants between rooms within a virtual conferencing system
US11792031B2 (en) Mixing participant audio from multiple rooms within a virtual conferencing system
US12107698B2 (en) Breakout sessions based on tagging users within a virtual conferencing system
US11683447B2 (en) Providing side conversations within a virtual conferencing system
US11943072B2 (en) Providing a room preview within a virtual conferencing system
US12120460B2 (en) Updating a room element within a virtual conferencing system
US20220321617A1 (en) Automatically navigating between rooms within a virtual conferencing system
US20230101377A1 (en) Providing contact information within a virtual conferencing system
US20230101879A1 (en) Providing a door for a room within a virtual conferencing system
WO2022212391A1 (en) Presenting participant conversations within virtual conferencing system
US20240340192A1 (en) Coordinating side conversations within virtual conferencing system
US12050758B2 (en) Presenting participant reactions within a virtual working environment
US20240073371A1 (en) Virtual participant interaction for hybrid event
US11880560B1 (en) Providing bot participants within a virtual conferencing system
US11979442B2 (en) Dynamically assigning participant video feeds within virtual conferencing system
US20230113024A1 (en) Configuring broadcast media quality within a virtual conferencing system
US20240069708A1 (en) Collaborative interface element within a virtual conferencing system
US12069409B2 (en) In-person participant interaction for hybrid event
US20240073050A1 (en) Presenting captured screen content within a virtual conferencing system
US12132769B2 (en) Communicating with a user external to a virtual conference
US20230344881A1 (en) Communicating with a user external to a virtual conference
US20240353969A1 (en) Presenting participant reactions within a virtual working environment

Legal Events

AS Assignment
Owner name: SNAP INC., CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LIN, ANDREW CHENG-MIN;LIN, WALTON;REEL/FRAME:063136/0840
Effective date: 20220714

FEPP Fee payment procedure
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant
Free format text: PATENTED CASE