CN117099365A - Presenting participant reactions within a virtual conference system - Google Patents


Info

Publication number
CN117099365A
Authority
CN
China
Prior art keywords
virtual
participants
participant
reaction
virtual meeting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202280025397.2A
Other languages
Chinese (zh)
Inventor
Andrew Cheng-Min Lin
Walton Lin
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Snap Inc
Original Assignee
Snap Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US17/390,630 external-priority patent/US11855796B2/en
Application filed by Snap Inc filed Critical Snap Inc
Priority claimed from PCT/US2022/022360 external-priority patent/WO2022212386A1/en
Publication of CN117099365A publication Critical patent/CN117099365A/en
Pending legal-status Critical Current


Abstract

Aspects of the present disclosure relate to systems, computer-readable storage media storing a program, and methods for presenting an overview of participant reactions to a virtual conference. The program and method provide for: conducting a virtual conference between a plurality of participants; displaying, for each of the plurality of participants, reaction buttons that are selectable by the participants to indicate different reactions to the virtual conference; receiving an indication of selection of a reaction button by one or more of the plurality of participants; storing the indications of selection over time in association with a recording of the virtual conference; generating, based on the stored indications of selection, a graphical overview of the reactions to the virtual conference; and displaying the graphical overview for a first participant of the plurality of participants.

Description

Presenting participant reactions within a virtual conference system
Cross Reference to Related Applications
This patent application is a continuation of U.S. patent application Ser. No. 17/390,630, filed on July 30, 2021, which claims the benefit of U.S. provisional patent application No. 63/168,063, entitled "PRESENTING OVERVIEW OF PARTICIPANT REACTIONS WITHIN A VIRTUAL CONFERENCING SYSTEM," filed in March 2021, which is incorporated herein by reference in its entirety.
Technical Field
The present disclosure relates generally to virtual conference systems, including presenting an overview of participant reactions within the virtual conference system.
Background
Virtual conference systems provide for the reception and transmission of audio and video data between devices, for real-time communication between device users.
Drawings
In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. To facilitate identification of a discussion of any particular element or act, one or more of the most significant digits in a reference numeral refer to the figure number in which that element was first introduced. Some non-limiting examples are shown in the figures of the accompanying drawings, in which:
FIG. 1 is a diagrammatic representation of a networking environment in which the present disclosure may be deployed, according to some examples.
Fig. 2 is a diagrammatic representation of a virtual conference system having both client-side and server-side functionality in accordance with some examples.
FIG. 3 is a diagrammatic representation of a data structure as maintained in a database in accordance with some examples.
FIG. 4 illustrates a virtual space design interface with interface elements for designing a virtual space, according to some example embodiments.
Fig. 5 illustrates a virtual space navigation interface having interface elements to navigate between rooms in a virtual space and to participate in a virtual meeting with respect to the rooms, according to some example embodiments.
Fig. 6 is an interaction diagram illustrating a process for presenting participant responses and a graphical overview of those participant responses within a virtual conference system according to some example embodiments.
Fig. 7 illustrates a virtual space navigation interface with a reaction icon indicating a reaction of a participant of a virtual meeting, according to some example embodiments.
Fig. 8 is a flow chart illustrating a process for presenting participant responses within a virtual conference system according to some example embodiments.
Fig. 9 is a flow diagram illustrating a process for presenting a graphical overview of participant reactions within a virtual conference system according to some example embodiments.
FIG. 10 is a diagrammatic representation of a machine in the form of a computer system within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed according to some examples.
Fig. 11 is a block diagram illustrating a software architecture within which an example may be implemented.
Detailed Description
Virtual conference systems provide for the reception and transmission of audio data and video data between devices for real-time communication between users of the devices. The virtual conference system may provide a reaction interface with different reaction buttons. Each reaction button is selectable by the participant to indicate a corresponding reaction to the virtual meeting.
Selection of a particular reaction button (e.g., a clap button depicted as clapping hands) causes the virtual conference system to display a corresponding reaction icon (e.g., clapping hands) on the screen. For example, the clapping-hands icon is presented at the bottom of the screen and animated to move up the screen, vanishing after traveling a predefined distance.
The clapping icon may be displayed each time the clap button is pressed. In addition, the virtual conference system may play an audio file (e.g., a single clap) each time the clap button is pressed by any of the participants. In this way, a greater number of presses of the clap button indicates greater applause from the participants.
The disclosed embodiments provide for modifying the audio output and/or the display of a reaction icon (e.g., the clap icon) when the rate at which a reaction button (e.g., the clap button) is pressed meets a threshold rate. For example, the modified audio is based on playing an audio file corresponding to louder, more intense applause. In another example, the modified reaction icon display is based on a supplemental reaction image suggesting increased applause and/or cheering.
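The rate-based modification described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; all names (ReactionTracker, threshold_rate, window_seconds) are assumptions:

```python
import time
from collections import deque

class ReactionTracker:
    """Track button presses and flag when the press rate meets a threshold."""

    def __init__(self, threshold_rate=3.0, window_seconds=2.0):
        self.threshold_rate = threshold_rate    # presses per second
        self.window_seconds = window_seconds
        self.presses = deque()                  # timestamps of recent presses

    def register_press(self, now=None):
        """Record a press and report whether output should be intensified."""
        now = time.monotonic() if now is None else now
        self.presses.append(now)
        # Drop presses that have fallen out of the sliding window.
        while self.presses and now - self.presses[0] > self.window_seconds:
            self.presses.popleft()
        rate = len(self.presses) / self.window_seconds
        return "intense" if rate >= self.threshold_rate else "normal"
```

A client could call `register_press` on every button press and switch between the normal and intense audio files and icons based on the returned label.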
Further, the disclosed embodiments provide for generating a graphical overview of the participants' reactions relative to the recording of the virtual conference. For example, the graphical overview is presented as a timeline indicating when different reactions were submitted (e.g., by a participant pressing a corresponding reaction button). The virtual conference system can display the timeline to a moderator or other administrative user upon request.
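The timeline generation can be sketched as bucketing the stored selections by when they occurred in the recording; the function and parameter names here are illustrative, not from the patent:

```python
from collections import Counter, defaultdict

def build_reaction_timeline(selections, bucket_seconds=60):
    """Group stored (seconds_into_recording, reaction_name) selections into
    fixed-width buckets, yielding per-interval reaction counts that a client
    could render as a timeline overview for a moderator."""
    timeline = defaultdict(Counter)
    for t, reaction in selections:
        bucket = int(t // bucket_seconds)
        timeline[bucket][reaction] += 1
    return dict(timeline)
```

For example, selections at 5 s, 30 s, and 70 s into a recording fall into the first and second one-minute buckets of the timeline.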
By virtue of the foregoing, the virtual conference system increases user engagement with the virtual conference, whether a user participates in the role of presenter or in the role of attendee.
Fig. 1 is a block diagram illustrating an example virtual conference system 100 for exchanging data over a network. Virtual conference system 100 includes multiple instances of client device 102, each of which hosts multiple applications, including virtual conference client 104 and other applications 106. Each virtual conference client 104 is communicatively coupled to other instances of virtual conference clients 104 (e.g., hosted on respective other client devices 102), virtual conference server system 108, and third party server 110 via network 112 (e.g., the internet). Virtual conference client 104 may also communicate with locally hosted applications 106 using Application Program Interfaces (APIs).
Virtual conference system 100 provides for the reception and transmission of audio, video, images, text, and/or other signals through user devices (e.g., at different locations) for real-time communication between users. In some cases, two users may communicate with each other in a one-to-one communication using a virtual conference on their respective devices. In other cases, more than two users may utilize a multi-way virtual conference to participate in a real-time group conversation. Thus, multiple client devices 102 may participate in a virtual conference, e.g., the client devices 102 participate in a group session that transmits audio-video content streams and/or message content (e.g., text, images) between participant devices.
Virtual conference client 104 is capable of communicating and exchanging data with other virtual conference clients 104 and with virtual conference server system 108 via network 112. The data exchanged between virtual conference clients 104 and virtual conference server system 108 includes functions (e.g., commands to activate functions) as well as payload data (e.g., video, audio, other multimedia data, text).
Virtual conference server system 108 provides server-side functionality to particular virtual conference clients 104 via network 112. For example, with respect to transmitting audio and/or video streams, virtual conference client 104 (e.g., installed on first client device 102) may facilitate transmission of streaming content to virtual conference server system 108 for subsequent receipt by other participant devices (e.g., one or more second client devices 102) running respective instances of virtual conference client 104.
The streaming content may correspond to audio and/or video content captured by sensors (e.g., microphones, cameras) on the client device 102, e.g., corresponding to real-time video and/or audio capture of a user (e.g., face) and/or other scenes and sounds captured by the respective device. The streaming content may be supplemented with other audio/visual data (e.g., animations, overlays, emoticons, etc.) and/or message content (e.g., text, stickers, emoticons, other image/video data), for example in conjunction with extended applications and/or widgets associated with virtual conference client 104.
Although certain functions of virtual conference system 100 are described herein as being performed by virtual conference client 104 or by virtual conference server system 108, the location of certain functions within virtual conference client 104 or within virtual conference server system 108 may be a design choice. For example, it may be technically preferable that: certain techniques and functionality are initially deployed within virtual conference server system 108, but later migrated to virtual conference client 104 if client device 102 has sufficient processing power.
Virtual conference server system 108 supports various services and operations provided to virtual conference clients 104. Such operations include sending data to virtual conference client 104, receiving data from virtual conference client 104, and processing data generated by virtual conference client 104. For example, the data may include streaming content and/or message content, client device information, and social network information as mentioned above. The data exchange within virtual conference system 100 is activated and controlled by functionality available via a User Interface (UI) of virtual conference client 104.
Turning now specifically to virtual conference server system 108, an Application Program Interface (API) server 114 is coupled to application server 118 and provides a programming interface to application server 118. Application server 118 is communicatively coupled to database server 124, which facilitates access to database 126; database 126 stores data associated with virtual conference content processed by application server 118. Similarly, web server 116 is coupled to application server 118 and provides a web-based interface to application server 118. To this end, web server 116 processes incoming network requests through the hypertext transfer protocol (HTTP) and several other related protocols.
An Application Program Interface (API) server 114 receives and transmits virtual meeting data (e.g., commands, audio/video payloads) between the client device 102 and the application server 118. In particular, an Application Program Interface (API) server 114 provides a set of interfaces (e.g., routines and protocols) that may be invoked or queried by the virtual conference client 104 to activate the functions of the application server 118. Application Program Interface (API) server 114 exposes various functions supported by application server 118, including account registration, login functionality, streaming of audio and/or video content, and/or sending and retrieving message content from a particular virtual conference client 104 to another virtual conference client 104 via application server 118, retrieving a contact list of a user's client device 102, adding and deleting users (e.g., contacts) to a user graph (e.g., social graph), and opening (e.g., relating to virtual conference client 104) application events.
Application server 118 hosts a plurality of server applications and subsystems, including, for example, virtual conference server 120 and social network server 122. Virtual conference server 120 implements a number of virtual conference processing techniques and functions, particularly those related to the aggregation and other processing of content (e.g., streaming content) included in audio-video feeds received from multiple instances of virtual conference client 104. In view of the hardware requirements for such processing, other processor- and memory-intensive processing of data may also be performed server-side by virtual conference server 120.
Social networking server 122 supports various social networking functions and services, and makes these functions and services available to virtual conference server 120. To this end, social network server 122 maintains and accesses user graph 304 (shown in FIG. 3) within database 126. Examples of functions and services supported by social networking server 122 include identifying other users (e.g., contacts such as friends, colleagues, teachers, students, etc.) in virtual conference system 100 that have a relationship with a particular user.
In one or more implementations, a user interacting via a virtual conference client 104 running on the first client device 102 can select and invite participants to participate in a virtual conference. For example, the participant may be selected from contacts maintained by social network server 122. In another example, the participant may be selected from contacts included within a contact address book stored in association with the first client device 102 (e.g., in local memory or in a cloud-based user account). In another example, the participant may be selected by a user manually entering an email address and/or phone number of the participant.
A user at a first client device 102 may initiate a virtual meeting by selecting appropriate user interface elements provided by virtual meeting client 104, prompting invited participants to accept or decline to participate in the virtual meeting at their respective devices (e.g., one or more second client devices 102). When the participant has accepted the invitation (e.g., via a prompt), the virtual conference server system 108 may perform an initialization process in which session information is published between the participant client devices 102 including the user providing the invitation. Each of the participant client devices 102 may provide corresponding session information to the virtual conference server system 108, which in turn, the virtual conference server system 108 publishes the session information to the other participant client devices 102. The session information about each client device 102 may include content streams and/or message content available to the client device 102, as well as respective identifiers for the content streams and/or message content.
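The session-information exchange described above can be sketched as a broker that stores each device's session info and republishes it to the other participants. The class and field names below are hypothetical, not the patent's:

```python
class SessionBroker:
    """Collect per-device session info and republish it to the peers."""

    def __init__(self):
        self.sessions = {}   # device_id -> session info (e.g., stream ids)

    def publish(self, device_id, session_info):
        """Store one device's session info and return, per participant,
        the session info of every other participant."""
        self.sessions[device_id] = session_info
        return {
            peer: {d: s for d, s in self.sessions.items() if d != peer}
            for peer in self.sessions
        }
```

Each call mirrors the flow in the text: a client device publishes its content-stream identifiers, and the server republishes the accumulated session information to the other participant devices.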
As described below with respect to fig. 2, a virtual conference may correspond to a virtual space that includes one or more rooms (e.g., virtual rooms). The virtual space, together with its corresponding rooms, may be created at least in part by the inviting user and/or by other users. In this way, an end user may act as an administrator who creates their own virtual space with rooms and/or designs a virtual space based on preset available rooms.
Fig. 2 is a block diagram illustrating additional details regarding virtual conference system 100 according to some examples. In particular, virtual conference system 100 is shown to include virtual conference client 104 and application server 118. Virtual conference system 100 includes a plurality of subsystems supported on the client side by virtual conference client 104 and on the server side by application server 118. These subsystems include, for example, a virtual space creation system 202 implementing a virtual space design interface 204 and a virtual space participation system 206 implementing a virtual space navigation interface 208.
Virtual space creation system 202 provides for a user to design one or more virtual spaces in which to participate in a virtual conference. In one or more embodiments, a virtual space corresponds to an environment having one or more rooms configured to house a virtual conference.
The virtual space may be created and/or selected by an end user who wishes to invite other users to the virtual conference (e.g., from a predefined set of virtual spaces with rooms). Additionally, the various rooms of the virtual space may be newly created and/or selected (e.g., from a predefined set of rooms) by the end user. In one or more embodiments, the virtual space creation system 202 includes a virtual space design interface 204, which virtual space design interface 204 can be used by an end user to design a virtual space, including creating and/or selecting rooms for inclusion in the virtual space.
As discussed below with respect to fig. 4, the virtual space design interface 204 enables an end user (e.g., acting as an administrator) to select and/or position multiple elements within a room. Examples of elements include, but are not limited to: participant video elements (e.g., for displaying a participant's respective video feed), chat interfaces (e.g., for participants to provide text-based messages, stickers, and/or reactions within the room), breakout buttons (e.g., for switching from a first room to one or more second rooms), and/or other user-definable elements for performing certain actions (e.g., speaking into a virtual microphone, querying an administrator via a button, etc.).
Virtual space participation system 206 is configured to conduct virtual conferences among participants within a virtual space. The participants may include the end user (e.g., administrator) who created the virtual space, as well as those users invited to participate in a virtual conference with respect to the virtual space created/selected by that end user. Virtual space participation system 206 includes a virtual space navigation interface 208 (e.g., discussed below with respect to fig. 5), which allows participants to navigate between rooms in the virtual space and to participate in virtual conferences with respect to the rooms.
In one or more embodiments, the virtual space creation system 202 provides for end users (e.g., administrators) to create different types of environments (e.g., virtual spaces with rooms) for conducting virtual conferences, and the virtual space participation system 206 provides for participants to participate in virtual conferences within such environments. Examples of such virtual conferences include, but are not limited to: business meetings, seminars, demonstrations, classroom lectures, teacher office hours, concerts, meetings, virtual dinner, escape rooms, and the like.
Fig. 3 is a schematic diagram illustrating a data structure 300 that may be stored in database 126 of virtual conference server system 108 according to some examples. While the contents of database 126 are shown as including a plurality of tables, it will be appreciated that data may be stored in other types of data structures (e.g., as an object-oriented database).
Database 126 includes profile data 302, user graph 304, and user table 306 associated with users (participants) of virtual conference system 100. User table 306 stores user data and links (e.g., by reference) to user graph 304 and profile data 302. Each user in virtual conference system 100 is associated with a unique identifier (e.g., an email address, telephone number, social network identifier, etc.).
User graph 304 stores (e.g., in conjunction with social network server 122) information about relationships and associations between users. By way of example only, such relationships may be social, professional (e.g., working at a common company or organization), interest-based, or activity-based. As described above, user graph 304 may be maintained and accessed, at least in part, by social network server 122.
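One minimal way to model the relationship storage and contact lookups that user graph 304 supports is an undirected adjacency map with typed edges; the names and structure below are assumptions, not the patent's schema:

```python
from collections import defaultdict

class UserGraph:
    """Store typed relationships between users and answer contact queries."""

    def __init__(self):
        self.edges = defaultdict(dict)   # user -> {peer: relationship kind}

    def add_relationship(self, a, b, kind="social"):
        # Relationships are symmetric, so record the edge in both directions.
        self.edges[a][b] = kind
        self.edges[b][a] = kind

    def contacts(self, user, kind=None):
        """Users related to `user`, optionally filtered by relationship type
        (e.g., social, professional, interest-based)."""
        return sorted(p for p, k in self.edges[user].items()
                      if kind is None or k == kind)
```

A lookup such as `contacts("alice", kind="professional")` corresponds to the service of identifying other users who have a particular kind of relationship with a given user.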
The profile data 302 stores a plurality of types of profile data regarding a particular user. The profile data 302 may be selectively used and presented to other users of the virtual meeting system 100 based on privacy settings specified by the particular user. The profile data 302 includes, for example, a user name, a telephone number, an email address, and/or settings (e.g., notification and privacy settings), and a user-selected avatar representation.
Database 126 also includes virtual space table 308. As described above, the virtual space corresponds to an environment having one or more rooms configured to house virtual meetings. The virtual space may be newly created by the user or may be included (e.g., by other users, system administrators, etc.) within one or more common virtual space sets available for virtual meetings. Virtual space table 308 stores information representing one or more public virtual space sets and any private virtual spaces created by users (e.g., where a particular user does not disclose such virtual spaces).
In one or more embodiments, virtual space table 308 stores associations between its virtual spaces and users (e.g., within user table 306) that select these virtual spaces. In this way, a particular user may have one or more virtual spaces associated with it. In addition, database 126 includes a room table 310 that may be associated with virtual space within virtual space table 308. As described above, rooms may be newly created by a user, or may be included in a set of one or more public rooms (e.g., galleries) available for user selection. The room table 310 stores information representing one or more rooms as well as any private rooms created by a user (e.g., in the event that a particular user does not disclose such rooms). The stored information may be used by virtual conference system 100 to create a corresponding room for use in the virtual space. In one or more embodiments, the stored information may also include a recording (e.g., audio and/or video recording) of a particular virtual meeting for subsequent play by the corresponding participant.
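A simplified sketch of the associations described above between users, virtual spaces, and rooms follows; the dataclasses and field names are illustrative, not the stored schema:

```python
from dataclasses import dataclass, field

@dataclass
class Room:
    room_id: str
    is_public: bool = True   # private rooms are hidden from other users

@dataclass
class VirtualSpace:
    space_id: str
    owner_id: str                        # user who created/selected the space
    rooms: list = field(default_factory=list)

def spaces_for_user(spaces, user_id):
    """Look up all virtual spaces associated with a given user, mirroring the
    association between virtual space table 308 and user table 306."""
    return [s for s in spaces if s.owner_id == user_id]
```

In this model a user may own several virtual spaces, and each space carries its own list of rooms, matching the tables' described relationships.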
FIG. 4 illustrates a virtual space design interface 204 having interface elements for designing a virtual space, according to some example embodiments. Designing the virtual space may include creating and/or selecting rooms for inclusion in the virtual space. Virtual space design interface 204 includes menu interface 402, room element interface 404, element properties interface 406, control interface 408, room list interface 410, room canvas interface 412, and administrator name 414. Note that elements 402 through 414 correspond to examples of interface elements for virtual space design interface 204, and additional, fewer, and/or different interface elements may be used.
An administrator (e.g., corresponding to administrator name 414) may design the virtual space using the various interface elements. In one or more implementations, menu interface 402 includes user-selectable categories (e.g., menu titles) relating to a virtual space (e.g., a "workspace"), rooms within the virtual space, and/or elements within a room. For example, the workspace category is user-selectable for presenting options (e.g., via a drop-down list) to: manage settings for the virtual space, manage invitations to the virtual space, manage versions of the virtual space, publish the virtual space (e.g., for future use by users), manage virtual space publications, and/or start/manage recordings (e.g., audio and/or video recordings) with respect to the virtual space.
The room category of menu interface 402 is user-selectable for presenting options (e.g., via a drop-down list) to: manage settings for a room within the virtual space, set a room background, set an order for the rooms listed in room list interface 410, create a new room, import a room from a set of available rooms, remove a room, publish a room, manage room publications, and/or start/manage recordings with respect to a room.
Additionally, the element category is user-selectable for presenting options (e.g., via a drop-down list) to: insert elements into a room, insert shapes into a room, set elements as foreground/background, arrange/position elements, and/or group elements. Examples of elements include, but are not limited to: action buttons, analog clocks, audience question boards, backpack items, breakout buttons, chat, closed-caption displays, closed-caption inputs, countdowns, clocks, digital clocks, doorbells, double-sided images, feedback, images, multi-user video chat, music, participant audio mixers, participant counts, participant video, photo strips, voting, random sources, room previews, scheduled times, sound effects, stopwatches, take pictures, text, timers, user search, video, waiting lists, web media, and websites. Examples of shapes include, but are not limited to, circles, rectangles, and triangles.
The user category of menu interface 402 is user-selectable for presenting options (e.g., via a drop-down list) to manage users/participants of the virtual space (e.g., tagging participants in order to distinguish roles such as administrator or attendee/participant). In addition, the edit category is user-selectable for performing edit operations (e.g., undo, redo, cut, copy, paste), and the help category is user-selectable for performing help operations (e.g., getting started, Discord, live help, submit feedback).
In one or more implementations, the room element interface 404 includes a user-selectable icon for inserting elements (e.g., corresponding to a subset of those elements available via the element categories mentioned above) into the current room. For example, elements may be added and/or located within a current room by selecting an element and dragging the selected element onto the room canvas interface 412 representing the current room layout.
In one or more implementations, the room element interface 404 includes icons including, but not limited to: a text icon for adding text to a room; a participant video icon for adding a single participant video element (e.g., an interface element selectable by a single participant for displaying that participant's video feed) to the room; a multi-user video icon for adding multiple participant video elements (e.g., interface elements selectable by one or more participants for displaying the participants' video feeds) to the room; a chat icon for adding a chat interface (e.g., for sending messages using text, stickers, emoticons, etc.) to the room; a video play icon for adding a video playback element (e.g., a screen) to the room for playing a selected video; a background icon for selecting a background color/gradient, image, or video for the room; an action icon for adding an action element (e.g., button) to the room for performing a user-defined action (e.g., speaking into a virtual microphone, querying an administrator via a button, etc.); and/or a breakout button for adding a breakout element (e.g., button) for switching selected participants between the current room and one or more other rooms.
In one or more embodiments, the element properties interface 406 includes various fields for setting configuration properties for the room elements described above. For example, with respect to general elements (e.g., text, single participant video element, multiple participant video elements, chat interface, video element, background image, action element, breakout button), element properties interface 406 includes fields for setting: the element title/name, opacity, gradient, style, layout, borders/corners, shadows, interaction (e.g., the extent to which a participant can delete, modify, or resize the element), filtering, full-screen status, conditions, accessibility, and actions for the element. For a single participant video element, element properties interface 406 includes additional fields for setting the manner in which a user is placed into the single participant video element during the virtual conference (e.g., automatically, or manually by the participant and/or an administrative end user). In addition, for the chat interface, element properties interface 406 includes additional properties for setting who (e.g., administrators and/or participants) may provide chat input and/or what types of input (e.g., text, stickers, emoticons, etc.) are available. For an action element, element properties interface 406 includes additional properties for setting what type of action is performed in response to user selection of the action element (e.g., button). Further, for the breakout element, element properties interface 406 includes additional properties for selecting participants and/or breakout rooms.
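The per-element configuration fields described above can be pictured as a defaults-plus-overrides mapping; the keys below are illustrative assumptions, not the actual property set:

```python
# Shared defaults for the general element properties; keys are hypothetical.
DEFAULT_ELEMENT_PROPERTIES = {
    "title": "",
    "opacity": 1.0,
    "border": {"radius": 4, "width": 1},
    "interaction": {"deletable": False, "resizable": True},
    "full_screen": False,
}

def configure_element(kind, **overrides):
    """Merge element-specific overrides (e.g., the chat interface's input
    restrictions) onto the shared defaults for a given element kind."""
    props = {**DEFAULT_ELEMENT_PROPERTIES, "kind": kind}
    props.update(overrides)
    return props
```

For example, a chat element might override only its interaction settings while inheriting the shared opacity and border defaults.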
In one or more implementations, the element properties interface 406 also includes fields for setting configuration properties for the room canvas interface 412. For example, element properties interface 406 includes fields for: selecting a number of pseudo-participants (e.g., simulated video feeds) to visualize a multi-user layout, selecting music (e.g., background music), and/or selecting reaction buttons that enable participants to indicate real-time reactions with respect to a virtual conference conducted in the room.
In one or more implementations, the control interface 408 includes user-selectable icons corresponding to controls (e.g., administrative controls) for the virtual space. For example, control interface 408 includes icons including, but not limited to: a director mode icon for switching between a director mode for designing a room and a user mode for viewing the room within the virtual space design interface 204 (e.g., where the director mode includes room element interface 404 and element properties interface 406, and the user mode omits them); a view icon for viewing the room within the virtual space navigation interface 208; a share screen icon (e.g., for collaborative design with other users, such as co-administrators); a microphone icon for enabling or disabling the microphone; a help icon (e.g., getting started, Discord, live help, submit feedback); an invite icon (e.g., for displaying an invite link to send to participants to access the virtual space); a settings icon (e.g., for selecting the end user's video and audio devices for the virtual conference, and for selecting a user avatar); and/or an exit icon for exiting the virtual space design interface 204.
In one or more implementations, the room list interface 410 displays a list of rooms for the virtual space. Each listed room is user selectable to switch to editing (e.g., in director mode) and/or viewing (e.g., in user mode) the selected room. As described above, the list of rooms may be modified (e.g., by adding, importing, and/or removing rooms) via options within the room category of menu interface 402.
Fig. 5 illustrates a virtual space navigation interface 208 according to some example embodiments, the virtual space navigation interface 208 having interface elements that navigate between rooms in a virtual space and participate in a virtual meeting with respect to the rooms. Virtual space navigation interface 208 includes a control interface 502, a room list interface 504, a current room interface 506, a participant video element 508, and a participant video element 510. Note that elements 502-514 correspond to examples of interface elements for virtual space navigation interface 208, and additional, fewer, and/or different interface elements may be used.
In one or more implementations, the control interface 502 includes user-selectable icons corresponding to controls (e.g., management controls) for the virtual space. For example, control interface 502 includes icons including, but not limited to: an edit icon for redirecting to the virtual space design interface 204 to edit the current room; a volume icon for adjusting the volume level of the current room; a share screen icon (e.g., for allowing others to view the room without joining it); a microphone icon for muting or unmuting the microphone; a help icon (e.g., getting started, live help, submitting feedback); an invite icon (e.g., for displaying an invitation link for participants to access the virtual space); a settings icon (e.g., for selecting the video and audio devices used by end users of the virtual meeting, and for selecting a user avatar); and/or an exit icon for exiting the virtual space navigation interface 208.
In one or more implementations, the room list interface 504 displays a list of rooms for the virtual space. Each listed room is user selectable to switch to the selected room (e.g., for a virtual meeting). The selected room is presented as the current room within the current room interface 506. In this way, a participant can navigate among the multiple rooms available within the virtual space. Alternatively or additionally, it is possible to navigate between rooms via a virtual space map interface (not shown) that depicts a map view (e.g., a floor plan) of the virtual space and its corresponding rooms, where each room is user selectable for navigation. Alternatively or additionally, navigation between rooms may also be performed via navigation buttons (not shown) positioned within the rooms, where user selection of a button causes navigation to another room (e.g., a predefined room). As described above, the virtual space design interface 204 allows for the design of a virtual space and its corresponding rooms. Thus, navigation between rooms is based at least in part on the design of the virtual space (e.g., the virtual space may include one or more of the room list interface 504, the virtual space map/floor plan interface, and/or the navigation buttons mentioned above).
With respect to the current room interface 506, each participant is represented by a respective participant video element. As described above, a participant video element corresponds to an interface element (e.g., a box) that is selectable by a single participant and displays that participant's video feed. The example of fig. 5 includes a first participant associated with participant video element 508 and a second participant associated with participant video element 510. In one or more implementations, with respect to the perspective of the first participant, the participant video element 510 showing the second participant's feed may include a participant button 512. For example, the participant button 512 may be selected by the first participant to perform a predefined action with respect to the second participant (e.g., initiating a private audio conversation, or specifying that the second participant follow the first participant as the first participant moves between rooms).
As described above, the element properties interface 406 of fig. 4 may include a field for enabling reaction buttons that participants can select to indicate real-time reactions with respect to a virtual meeting in a room. The example of fig. 5 includes a reaction interface 514, which has user-selectable elements for inputting text, as well as icons (e.g., reaction buttons) for indicating different types of reactions/emotions. Examples of such reactions include, but are not limited to: love/happiness (e.g., a heart button), laughter (e.g., a smiley-face button), boredom (e.g., a sleeping-face button), disagreement (e.g., a thumbs-down button, not shown), agreement (e.g., a thumbs-up or clapping icon), applause (e.g., an applause button depicted as clapping hands), and prayer/gratitude (e.g., a praying-hands icon).
Although the example of fig. 5 shows two participants, the current room interface 506 may accommodate additional participants for the virtual meeting. Additional participants may be located (e.g., automatically and/or manually by dragging) based on the location of participant video elements (e.g., boxes) designed through virtual space design interface 204.
In one or more implementations, the virtual space navigation interface 208 may vary based on whether a given participant is an administrator or another participant (e.g., attendee). For example, some participant video elements may be specified for an administrator (e.g., via virtual space design interface 204), while other participant video elements are specified for other participants. Virtual meeting server system 108 is configured to distinguish between these administrator or other participant roles based on the above-described labels assigned to the participants, e.g., via the user categories of menu interface 402 provided by virtual space design interface 204.
Fig. 6 is an interaction diagram illustrating a process 600 for presenting participant responses and a graphical overview of those participant responses within a virtual conference system according to some example embodiments. For purposes of illustration, process 600 is described herein with reference to a first client device 602, one or more second client devices 604, and virtual conference server system 108. Each of the first client device 602 and the second client device 604 may correspond to a respective client device 102. The process 600 is not limited to the first client device 602, the second client device 604, and the virtual conference server system 108. Further, one or more blocks (or operations) of process 600 may be performed by first client device 602, second client device 604, or one or more other components in virtual conference server system 108, and/or by other suitable devices. Further for purposes of illustration, blocks (or operations) of process 600 are described herein as occurring serially or linearly. However, multiple blocks (or operations) of process 600 may occur in parallel or concurrently. Additionally, the blocks (or operations) of process 600 need not be performed in the order shown, and/or one or more blocks (or operations) of process 600 need not be performed and/or may be replaced by other operations. Process 600 may terminate when its operations are completed. Further, process 600 may correspond to a method, an application, an algorithm, etc.
Each of the first client device 602 and the second client device 604 has an instance of the virtual conference client 104 installed thereon. In the example of fig. 6, a first client device 602 is associated with a respective first participant of the virtual conference server system 108 and one or more second client devices 604 are associated with a respective one or more second participants of the virtual conference server system 108. For example, a first participant may be associated with a first user account of virtual conference server system 108 and a second participant may be associated with a second user account of virtual conference server system 108. In one or more embodiments, the first participant corresponds to an administrator (e.g., a moderator, teacher, etc.) of the virtual conference, and the second participant corresponds to an attendee (e.g., a guest, student, or other type of attendee) of the virtual conference.
As described above, the first participant and the second participant are identified by the virtual conference server system 108 based on unique identifiers (e.g., email addresses, phone numbers) associated with respective user accounts for the first participant and the second participant. In one or more implementations, the virtual meeting server system 108 implements a social networking server 122 and/or works in conjunction with the social networking server 122, which social networking server 122 is configured to identify contacts with which a particular user has a relationship. For example, the first participant and the second participant may be contacts with respect to the virtual conference server system 108.
As described herein with respect to operations 606-622, virtual conference system 100 provides for modifying audio output associated with participant-based reactions during a virtual conference based on a rate at which the reactions are received from the participants. With respect to operations 624-628, virtual meeting system 100 also provides for generating and displaying a graphical overview (e.g., a timeline interface) of participant-based reactions to the virtual meeting for viewing by an administrator of the virtual meeting.
At operations 606 through 608, virtual conference server system 108 provides real-time communications between participant devices including a first client device 602 (e.g., corresponding to an administrator and/or moderator) and one or more second client devices 604 (e.g., corresponding to remaining participants such as attendees). As described above, virtual conference server system 108 provides for the reception and transmission of data between participating devices, including one or more of audio, video, images, text, and/or other signals. Real-time communication may occur in one of a plurality of rooms included in a virtual space.
In addition, virtual space navigation interface 208 provides (e.g., via reaction interface 514) one or more user-selectable reaction buttons for indicating different reactions to the virtual meeting. Further, each participant may be represented by a respective participant video element (e.g., elements 508-510 in fig. 5 corresponding to a respective video feed).
The second client device 604 receives a selection of a reaction button (e.g., a button press) (block 610). In a first example, the selection corresponds to input received from multiple participants. In another example, the selection corresponds to input received from a single participant, where that participant selects the reaction button multiple times. In yet another example, the selection corresponds to a combination of input received from multiple participants and multiple selections received from a single participant. In one or more embodiments, participants may have selected different reaction buttons (e.g., the heart button, laughing button, sleeping button, thumbs-up button, clapping button, applause button, or prayer button) to indicate corresponding reactions to the virtual meeting (e.g., love/happiness, laughter, boredom, agreement, clapping, applause, and prayer).
The second client device 604 sends an indication of the selection to the virtual conference server system 108 (operation 612). In response, virtual conference server system 108 provides for display and audio output of the reaction icon to each of first client device 602 (operation 614) and second client device 604 (operation 616) based on the selection.
In one or more embodiments, the displayed reaction icons use the same images as those of the reaction buttons. For example, a single selection of the heart button (e.g., via the reaction interface 514 on one of the second client devices 604) causes the virtual conference server system 108 to display a single instance of a heart icon (e.g., within the current room interface 506) on all participant devices (e.g., the first client device 602 and the second client devices 604). In another example, two selections of the thumbs-up button (e.g., via the reaction interface 514 from one of the second client devices 604) cause the virtual conference server system 108 to display two instances of a thumbs-up icon on all participant devices. In yet another example, five selections of the clapping button (e.g., from a combination of the multiple second client devices 604) cause the virtual conference server system 108 to display five instances of a clapping-hands icon on all participant devices.
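The disclosure does not prescribe a particular server implementation for this behavior; as a minimal sketch (all function and field names are hypothetical), each button press can be mapped to one icon-display event per participant device:

```python
def fan_out_reactions(selections, participant_devices):
    """Emit one icon-display event per (button press, participant device) pair.

    `selections` holds one button id per press received by the server, so a
    press of the same button twice yields two icon instances on every device.
    """
    events = []
    for device in participant_devices:
        for button in selections:
            events.append({"device": device, "icon": button})
    return events

# Three presses (one heart, two thumbs-up) across two devices:
events = fan_out_reactions(["heart", "thumbs_up", "thumbs_up"],
                           ["device_a", "device_b"])
```

This mirrors the examples above: each press produces a distinct icon instance on all participant devices, so three presses reaching two devices yield six display events.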
As discussed further below with respect to fig. 7, each reaction icon may be displayed as an animation. For example, each reaction icon is presented starting from the bottom edge of the screen (e.g., where different icons start at different random locations along the bottom edge), moving/floating a predefined distance upward from the bottom edge, and disappearing after moving the predefined distance.
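The animation described above (random start along the bottom edge, upward float over a predefined distance) can be sketched as follows; the identifiers are hypothetical and the actual client rendering is not specified in the source:

```python
import random

def spawn_reaction_icon(screen_width, float_distance, rng=random):
    """Pick a random start position along the bottom edge of the screen.

    The icon floats straight upward by `float_distance` and is then removed,
    matching the animation described for the reaction icons.
    """
    x = rng.uniform(0, screen_width)           # random location on bottom edge
    return {"start": (x, 0.0), "end": (x, float_distance)}

rng = random.Random(42)                         # seeded for reproducibility
icon = spawn_reaction_icon(800, 200, rng)
```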
With respect to the audio output provided at operations 614 through 616, one or more of the reaction buttons may be associated with a respective audio file. In other words, some reaction buttons are predefined to be associated with an audio output (e.g., associated with an audio file), while the remaining reaction buttons are not. In response to each selection of a predefined button (e.g., each button press), virtual conference server system 108 causes the corresponding audio file to be played. For example, the applause button may be associated with an audio file corresponding to a single clapping sound. Each selection (e.g., press) of the applause button at a second client device 604 then causes the virtual conference server system 108 to play that audio file on each of the participant devices.
In one or more implementations, the virtual conference server system 108 is configured to determine, for a button (e.g., the applause button) predefined to be associated with an audio output, the rate at which the button is selected/pressed. For example, for a given time period (e.g., 5 seconds from a given press of the predefined button), virtual conference server system 108 calculates the rate based on the number of times the predefined button was pressed during that period.
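One way to realize this rate calculation is a sliding time window over press timestamps; the sketch below is illustrative only (the class and method names are not from the source):

```python
from collections import deque

class ReactionRateTracker:
    """Counts presses of one reaction button within a sliding time window."""

    def __init__(self, window_seconds=5.0):
        self.window = window_seconds
        self.presses = deque()  # timestamps of recent presses, oldest first

    def record_press(self, timestamp):
        """Record a press and return the press count within the window."""
        self.presses.append(timestamp)
        # Evict presses that fall outside the window ending at `timestamp`.
        while self.presses and timestamp - self.presses[0] > self.window:
            self.presses.popleft()
        return len(self.presses)

tracker = ReactionRateTracker(window_seconds=5.0)
counts = [tracker.record_press(t) for t in [0.0, 1.0, 2.0, 6.5]]
# The presses at t=0 and t=1 fall outside the 5-second window ending at t=6.5.
```

The returned count per window is the "rate" the server compares against its threshold rates.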
Further, virtual conference server system 108 may store (e.g., in database 126) different thresholds representing different levels of modified audio output. For example, each threshold may correspond to a different intensity level for modifying the audio output associated with the predefined button. In addition, each threshold may be associated with an audio file representing a respective level. Using the example of applause, the applause button may be associated with a first threshold rate for playing the first modified audio file (e.g., pressed 10 times in total via one or more of the second client devices 604 within 5 seconds) and with a second threshold rate for playing the second modified audio file (e.g., pressed 25 times in total via the second client devices 604 within 10 seconds). The first modified audio file may correspond to a weak applause sound and the second modified audio file may correspond to a stronger applause sound.
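Continuing the applause example, the stored thresholds can be modeled as a tier table checked from highest to lowest; the file names and counts below are hypothetical placeholders for the database entries described above:

```python
# Hypothetical tiers: (press count required within the window, audio file).
# Checked highest first so the strongest qualifying tier wins.
APPLAUSE_TIERS = [
    (25, "applause_strong.ogg"),  # second threshold: stronger applause
    (10, "applause_weak.ogg"),    # first threshold: weak applause
]
SINGLE_CLAP = "clap_single.ogg"   # default: one clap per press

def select_audio(press_count):
    """Return the audio file matching the current press rate."""
    for threshold, audio_file in APPLAUSE_TIERS:
        if press_count >= threshold:
            return audio_file
    return SINGLE_CLAP
```

With this table, a handful of presses plays individual claps, crossing the first threshold switches to the weak-applause file, and crossing the second switches to the stronger one.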
In the example of fig. 6, virtual conference server system 108 determines that the rate of selection for the predefined button (e.g., the applause button) satisfies at least the first threshold rate (block 618). In response, virtual conference server system 108 provides for modifying the audio output and/or the reaction icons based on the threshold rate being met (operations 620-622). For example, in the event that the calculated rate meets the first threshold rate but not the second, the virtual conference server system 108 provides for playing the first modified audio file corresponding to weak applause. In another example, where the calculated rate meets the second threshold rate, virtual conference server system 108 may first provide for playing the first modified audio file when the first threshold rate is met, and then provide for playing the second modified audio file when the second threshold rate is met. In this way, each of the participant devices (e.g., the first client device 602 and the second client devices 604) may present audio output in the following order: multiple instances of the single-clap audio file, a single play of the first modified audio file (e.g., weak applause), and a single play of the second modified audio file (e.g., stronger applause). Such playback may produce the sensation of applause increasing in intensity as the number of presses of the applause button grows to satisfy the first and second threshold rates.
With respect to modifying the reaction icons, virtual meeting server system 108 can also associate supplemental reaction images (e.g., a first supplemental image and a second supplemental image) with the respective intensity levels. Thus, in the event that the calculated rate meets the first threshold rate but not the second, the virtual meeting server system 108 provides for display of the first supplemental image (e.g., an animation such as subtle confetti, fireworks, etc.). In another example, where the calculated rate meets the second threshold rate, virtual meeting server system 108 first provides for display of the first supplemental image when the first threshold rate is met, and then provides for display of the second supplemental image (e.g., more intense confetti, more intense fireworks) when the second threshold rate is met. The supplemental images may be synchronized with the increasing audio intensity described above.
As described above, operations 624-628 involve generating and displaying a graphical overview (e.g., a timeline interface) of the participant-based reactions to the virtual meeting. Virtual conference server system 108 stores an indication of the selections over time (e.g., the button presses from block 610) in association with a recording of the virtual conference (block 624).
As described above, rooms may be designed via virtual space design interface 204 for recording virtual meetings. An administrator (e.g., a moderator, teacher, etc.) may initiate/manage recordings for rooms (e.g., classroom sessions) via virtual space design interface 204. The recordings may include video and/or audio and may be stored in association with the room table 310. Virtual space navigation interface 208 may provide interface elements for participants including administrators (e.g., teachers, moderators) and other participants (e.g., students, attendees) to access records.
With respect to block 624, virtual conference server system 108 is configured to store an indication of each reaction button press along with a timestamp. For example, upon sending the indication of the selection at operation 612, the second client device 604 may include an identifier for the reaction button and a timestamp (e.g., relative to the time at which the recording began) when the reaction button was pressed. The virtual conference server system 108 stores these reaction button-timestamp pairs in association with the record for the room within the room table 310. Thus, upon completion/termination of the virtual meeting, the stored reaction button-timestamp pairs may correspond to a history of all reactions during the virtual meeting (e.g., via the reaction interface 514 of the participant device).
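The stored reaction button–timestamp pairs can be represented with a structure like the following; this is a sketch only (the room table schema in the source is not specified at this level of detail, and all field names are hypothetical):

```python
def record_reaction(room_record, button_id, seconds_since_recording_start):
    """Append one reaction button-timestamp pair to a room's recording metadata.

    Timestamps are relative to the start of the recording, as described for
    the indications sent at operation 612.
    """
    room_record.setdefault("reactions", []).append(
        {"button": button_id, "t": seconds_since_recording_start}
    )

room = {"room_id": "classroom-1"}
record_reaction(room, "applause", 12.4)
record_reaction(room, "laugh", 30.0)
# room["reactions"] now holds the reaction history for this recording.
```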
Based on the stored reaction button-timestamp pairs, virtual meeting server system 108 generates a graphical overview of the reactions to the virtual meeting (block 626). In one or more implementations, the graphical overview is generated as a timeline indicating the different reactions to the virtual meeting over time. For example, each type of reaction (e.g., applause, laughter, etc.) is plotted as a respective line based on the number of corresponding reactions over the duration of the virtual meeting. Each line may be a different color representing its corresponding reaction. In this way, the timeline depicts the different reactions over time, enabling a viewing user (e.g., an administrator) to visualize when reactions occurred during the virtual meeting and what those reactions were. The virtual space navigation interface 208 for the first participant may include a selectable interface element for viewing the timeline, while no such element is included for the second participants.
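The per-reaction lines of such a timeline can be derived by bucketing the stored pairs into fixed time bins, one series per reaction type; the sketch below assumes the storage format shown earlier and uses hypothetical names:

```python
from collections import defaultdict

def build_timeline(reactions, duration, bin_seconds=60):
    """Bucket stored (button, timestamp) pairs into per-reaction counts per bin.

    Returns {button_id: [count_in_bin_0, count_in_bin_1, ...]}, i.e. one
    series (one plotted line) per reaction type over the meeting duration.
    """
    n_bins = max(1, -(-int(duration) // bin_seconds))  # ceiling division
    series = defaultdict(lambda: [0] * n_bins)
    for r in reactions:
        b = min(int(r["t"] // bin_seconds), n_bins - 1)
        series[r["button"]][b] += 1
    return dict(series)

timeline = build_timeline(
    [{"button": "applause", "t": 5},
     {"button": "applause", "t": 70},
     {"button": "laugh", "t": 65}],
    duration=120, bin_seconds=60)
```

Each list in the result maps directly onto one colored line of the timeline, with the bin index serving as the x-axis (time) position.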
In response to a user selection by a first participant (e.g., an administrator) to view a graphical overview (e.g., a timeline), virtual conference server system 108 provides a display of the timeline on first client device 602 (operation 628). In one or more embodiments, the virtual meeting server system 108 provides that the time presented within the timeline (e.g., x-axis corresponds to time) is user selectable to initiate playback of a record of the virtual meeting at the selected point in time. In this way, the first participant may choose to play the corresponding video/audio at the time of the reaction during the viewing of the virtual meeting.
Accordingly, the virtual conference system 100 described herein provides for increased user engagement in a virtual conference, whether in the role of moderator or of attendee. Further, virtual meeting system 100 facilitates participant feedback by including a reaction interface (e.g., reaction buttons/icons) as described herein. Without such a reaction interface, participants (e.g., attendees) might need to provide comments to the moderator manually in other ways, and such comments might not be viewable in real time. Furthermore, the graphical overview (e.g., the timeline of reactions over time) provides the moderator with a form of feedback made possible by such a reaction interface.
Fig. 7 illustrates the virtual space navigation interface 208 of fig. 5, the virtual space navigation interface 208 having a reaction icon indicating a reaction of a participant of the virtual conference, according to some example embodiments. As described above, virtual space navigation interface 208 provides (e.g., via reaction interface 514) one or more user-selectable reaction buttons for indicating different reactions to the virtual meeting. One example reaction button is a applause button included in reaction interface 514.
The virtual space navigation interface 208 also displays reaction icons 702, each of which uses the same image as that of a reaction button within the reaction interface 514, or an image otherwise similar to it. In the example of fig. 7, the reaction icons are depicted as clapping hands, using the same image (e.g., but larger in size) as the clapping button included in the reaction interface 514. As discussed above with respect to fig. 6, each of the reaction icons 702 may correspond to a single press of the clapping button in the reaction interface 514 (e.g., from one or more participants in the virtual conference). For example, each of the reaction icons 702 is presented starting from the bottom edge of the screen (e.g., where different icons start at different locations along the bottom edge), moving/floating a predefined distance upward from the bottom edge, and disappearing after moving that distance.
As described above, virtual meeting system 100 is configured to modify the display of the reaction icons 702 with supplemental reaction images (e.g., confetti or fireworks animations) if the rate at which the reactions are received from participants meets a threshold rate. In addition, virtual conference system 100 is configured to modify the audio output associated with the display of the reaction icons 702 when the threshold rate is met.
Fig. 8 is a flow diagram illustrating a process 800 for presenting participant responses within a virtual conference system according to some example embodiments. For purposes of illustration, process 800 is described herein primarily with reference to first client device 602, second client device 604, and virtual conference server system 108 of fig. 1 and 2. However, one or more blocks (or operations) of process 800 may be performed by one or more other components and/or other suitable devices. Further for purposes of illustration, blocks (or operations) of process 800 are described herein as occurring serially or linearly. However, multiple blocks (or operations) of process 800 may occur in parallel or concurrently. Additionally, the blocks (or operations) of process 800 need not be performed in the order shown, and/or one or more blocks (or operations) of process 800 need not be performed and/or may be replaced by other operations. Process 800 may terminate when its operations are completed. Otherwise, process 800 may correspond to a method, an application, an algorithm, etc.
Virtual conference server system 108 provides a virtual conference among a plurality of participants (block 802). The virtual meeting may be provided in a room of a plurality of rooms included in a virtual space for conducting the virtual meeting. Virtual conference server system 108 may provide a display of a participant video element for each of a plurality of participants, the participant video element corresponding to the participant and including a video feed for the participant.
Virtual meeting server system 108 provides a display of a reaction button for each of the plurality of participants, the reaction button being selectable by the participant to indicate a reaction to the virtual meeting (block 804). The reaction button may correspond to a applause button for indicating applause relative to the virtual meeting. The virtual conference server system 108 receives an indication of a selection of a reaction button by one or more of the plurality of participants (block 806).
The virtual conference server system 108 provides, for each of the plurality of participants, display and audio output of the reaction icon based on the selection (block 808). Providing the audio output may include causing a first audio file to be played multiple times based on the multiple selections. Virtual conference server system 108 then determines that the rate at which the selections are received meets a threshold rate (block 810).
In response to the determination, the virtual conference server system 108 provides a modified audio output associated with the selection (block 812). Providing the modified audio output may include causing the second audio file to be played once based on determining that the rate satisfies the threshold rate. The playing of the second audio file may correspond to an increased intensity of reaction relative to the playing of the first audio file.
Virtual conference server system 108 may determine that the rate at which the selection is received meets a second threshold rate and provide a second modified audio output associated with the selection in response to determining that the rate meets the second threshold rate.
Fig. 9 is a flow diagram illustrating a process 900 for presenting a graphical overview of participant reactions within a virtual conference system according to some example embodiments. For purposes of illustration, process 900 is described herein primarily with reference to first client device 602, second client device 604, and virtual conference server system 108 of fig. 1 and 2. However, one or more blocks (or operations) of process 900 may be performed by one or more other components and/or by other suitable devices. Further for purposes of illustration, blocks (or operations) of process 900 are described herein as occurring serially or linearly. However, multiple blocks (or operations) of process 900 may occur in parallel or concurrently. Additionally, the blocks (or operations) of process 900 need not be performed in the order shown, and/or one or more blocks (or operations) of process 900 need not be performed and/or may be replaced by other operations. Process 900 may terminate when its operations are completed. Otherwise, process 900 may correspond to a method, an application, an algorithm, etc.
Virtual conference server system 108 provides a virtual conference among a plurality of participants (block 902). The first participant may correspond to a moderator of the virtual conference and the remaining participants correspond to attendees of the virtual conference.
The virtual conference may be provided in one of a plurality of rooms included in a virtual space for conducting the virtual conference. The virtual conference server system 108 may provide a display of a participant video element for each of a plurality of participants, the participant video element corresponding to the participant and including a video feed for the participant.
Virtual conference server system 108 provides for each of the plurality of participants a display of reaction buttons that are selectable by the participant to indicate different reactions to the virtual conference (block 904). The different reactions may include two or more of applause, laugh, consent, objection, and pleasure, each of which is indicated by one of the reaction buttons.
The virtual conference server system 108 receives an indication of a selection of a reaction button by one or more of the plurality of participants (block 906). Virtual conference server system 108 stores an indication of the selection over time in association with recording the virtual conference (block 908).
Virtual meeting server system 108 generates a graphical overview of the reactions to the virtual meeting based on the stored indication of the selection (block 910). The graphical overview may be a timeline indicating different reactions to the virtual meeting over time. The timeline may be selectable by a user to initiate playback of a recording of the virtual meeting at a point in time selected by the user.
The virtual conference server system 108 provides a display of a graphical overview for a first participant of the plurality of participants (block 912). Virtual conference server system 108 may avoid displaying a graphical overview to the remaining participants of the plurality of participants.
Fig. 10 is a diagrammatic representation of a machine 1000 within which instructions 1010 (e.g., software, programs, applications, applets, apps, or other executable code) for causing the machine 1000 to perform any one or more of the methods discussed herein may be executed. For example, the instructions 1010 may cause the machine 1000 to perform any one or more of the methods described herein. The instructions 1010 transform a generic, un-programmed machine 1000 into a specific machine 1000 that is programmed to perform the functions described and illustrated in the manner described. The machine 1000 may operate as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine 1000 may operate in the capacity of a server machine or a client machine in server-client network environments, or as a peer machine in peer-to-peer (or distributed) network environments. Machine 1000 may include, but is not limited to: a server computer, a client computer, a Personal Computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a Personal Digital Assistant (PDA), an entertainment media system, a cellular telephone, a smart phone, a mobile device, a wearable device (e.g., a smart watch), a smart home device (e.g., a smart appliance), other smart device, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing instructions 1010 that specify actions to be taken by machine 1000, sequentially or otherwise. Furthermore, while only a single machine 1000 is illustrated, the term "machine" shall also be taken to include a collection of machines that individually or jointly execute instructions 1010 to perform any one or more of the methodologies discussed herein. For example, machine 1000 may include client device 102 or any one of several server devices that form part of virtual meeting server system 108. 
In some examples, machine 1000 may also include both a client system and a server system, where certain operations of a particular method or algorithm are performed on the server side, and where certain operations of the particular method or algorithm are performed on the client side.
The machine 1000 may include processors 1004, memory 1006, and input/output (I/O) components 1002, which may be configured to communicate with each other via a bus 1040. In an example, the processors 1004 (e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Radio Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor 1008 and a processor 1012 to execute the instructions 1010. The term "processor" is intended to include multi-core processors, which may include two or more independent processors (sometimes referred to as "cores") that may execute instructions simultaneously. Although fig. 10 shows multiple processors 1004, the machine 1000 may include a single processor with a single core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof.
The memory 1006 includes a main memory 1014, a static memory 1016, and a storage unit 1018, all of which are accessible to the processors 1004 via the bus 1040. The main memory 1014, the static memory 1016, and the storage unit 1018 store the instructions 1010 embodying any one or more of the methods or functions described herein. The instructions 1010 may also reside, completely or partially, within the main memory 1014, within the static memory 1016, within a machine-readable medium 1020 within the storage unit 1018, within at least one of the processors 1004 (e.g., within a processor's cache memory), or any suitable combination thereof, during execution thereof by the machine 1000.
The I/O component 1002 can include various components for receiving input, providing output, generating output, sending information, exchanging information, capturing measurement results, and the like. The particular I/O components 1002 included in a particular machine will depend on the type of machine. For example, a portable machine such as a mobile phone may include a touch input device or other such input mechanism, while a headless server machine would be unlikely to include such a touch input device. It will be appreciated that the I/O component 1002 may include many other components not shown in fig. 10. In various examples, I/O components 1002 can include a user output component 1026 and a user input component 1028. The user output component 1026 may include visual components (e.g., a display such as a Plasma Display Panel (PDP), a Light Emitting Diode (LED) display, a Liquid Crystal Display (LCD), a projector, or a Cathode Ray Tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., vibration motors, resistance mechanisms), other signal generators, and so forth. The user input component 1028 may include an alphanumeric input component (e.g., a keyboard, a touch screen configured to receive alphanumeric input, an electro-optical keyboard, or other alphanumeric input component), a point-based input component (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or another pointing instrument), a tactile input component (e.g., a physical button, a touch screen that provides the location and force of a touch or touch gesture, or other tactile input component), an audio input component (e.g., a microphone), and the like.
In further examples, I/O component 1002 can include a biometric component 1030, a motion component 1032, an environmental component 1034, or a positioning component 1036, among various other components. For example, the biometric component 1030 includes components for detecting expressions (e.g., hand expressions, facial expressions, voice expressions, body gestures, or eye tracking), measuring biological signals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identifying a person (e.g., voice recognition, retinal recognition, facial recognition, fingerprint recognition, or electroencephalogram-based recognition), and the like. The motion component 1032 includes an acceleration sensor component (e.g., accelerometer), a gravity sensor component, and a rotation sensor component (e.g., gyroscope), and so forth.
The environmental components 1034 include, for example: one or more camera devices (with still image/photo and video capabilities), an illumination sensor component (e.g., a photometer), a temperature sensor component (e.g., one or more thermometers that detect ambient temperature), a humidity sensor component, a pressure sensor component (e.g., a barometer), an acoustic sensor component (e.g., one or more microphones that detect background noise), a proximity sensor component (e.g., an infrared sensor that detects nearby objects), a gas sensor (e.g., a gas detection sensor that detects the concentration of hazardous gases or measures contaminants in the atmosphere for safety), or other components that may provide an indication, measurement, or signal corresponding to the surrounding physical environment.
Regarding the cameras, the client device 102 may have a camera system including, for example, a front camera on the front surface of the client device 102 and a rear camera on the rear surface of the client device 102. The front camera may, for example, be used to capture still images and video (e.g., "selfies") of the user of the client device 102, which may then be enhanced with the enhancement data (e.g., filters) described above. The rear camera may be used, for example, to capture still images and video in a more traditional camera mode, with these images similarly being enhanced with enhancement data. In addition to the front camera and the rear camera, the client device 102 may also include a 360° camera for capturing 360° photos and videos.
Further, the camera system of the client device 102 may include dual rear cameras (e.g., a primary camera and a depth-sensing camera), or even triple, quad, or penta rear camera configurations on the front and rear sides of the client device 102. These multiple camera systems may include, for example, a wide-angle camera, an ultra-wide-angle camera, a telephoto camera, a macro camera, and a depth sensor.
The positioning component 1036 includes a position sensor component (e.g., a GPS receiver component), an altitude sensor component (e.g., an altimeter or barometer that detects barometric pressure from which altitude may be derived), an orientation sensor component (e.g., a magnetometer), and so forth.
Communication may be implemented using a variety of technologies. The I/O component 1002 also includes a communication component 1038, the communication component 1038 being operable to couple the machine 1000 to the network 1022 or the device 1024 via a corresponding coupling or connection. For example, the communication component 1038 may include a network interface component or another suitable device to interface with the network 1022. In further examples, the communication component 1038 may include a wired communication component, a wireless communication component, a cellular communication component, a Near Field Communication (NFC) component, a Bluetooth® component (e.g., Bluetooth® Low Energy), a Wi-Fi® component, and other communication components that provide communication via other modalities. The device 1024 may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via USB).
Further, the communication component 1038 may detect identifiers or include components operable to detect identifiers. For example, the communication component 1038 may include a Radio Frequency Identification (RFID) tag reader component, an NFC smart tag detection component, an optical reader component (e.g., an optical sensor for detecting one-dimensional barcodes such as Universal Product Code (UPC) barcodes, and multi-dimensional barcodes such as Quick Response (QR) codes, Aztec codes, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D barcodes, and other optical codes), or an acoustic detection component (e.g., a microphone for identifying tagged audio signals). In addition, various information may be derived via the communication component 1038, such as location via Internet Protocol (IP) geolocation, location via Wi-Fi® signal triangulation, location via detection of an NFC beacon signal that may indicate a particular location, and so forth.
The various memories (e.g., the main memory 1014, the static memory 1016, and the memory of the processors 1004) and the storage unit 1018 may store one or more sets of instructions and data structures (e.g., software) embodying or used by any one or more of the methods or functions described herein. These instructions (e.g., the instructions 1010), when executed by the processors 1004, cause various operations to implement the disclosed examples.
The instructions 1010 may be transmitted or received over the network 1022 via a network interface device (e.g., a network interface component included in the communication component 1038) using a transmission medium and using any one of a number of well-known transmission protocols (e.g., hypertext transfer protocol (HTTP)). Similarly, the instructions 1010 may be transmitted or received using a transmission medium via a coupling (e.g., peer-to-peer coupling) with a device 1024.
Fig. 11 is a block diagram 1100 illustrating a software architecture 1104 that may be installed on any one or more of the devices described herein. The software architecture 1104 is supported by hardware, such as a machine 1102 that includes processors 1120, memory 1126, and I/O components 1138. In this example, the software architecture 1104 may be conceptualized as a stack of layers in which each layer provides a particular functionality. The software architecture 1104 includes layers such as an operating system 1112, libraries 1110, frameworks 1108, and applications 1106. In operation, the applications 1106 invoke API calls 1150 through the software stack and receive messages 1152 in response to the API calls 1150.
Operating system 1112 manages hardware resources and provides common services. Operating system 1112 includes, for example, a kernel 1114, services 1116, and drivers 1122. The kernel 1114 acts as an abstraction layer between the hardware and the other software layers. For example, the kernel 1114 provides memory management, processor management (e.g., scheduling), component management, networking, and security settings, among other functionality. The services 1116 may provide other common services for the other software layers. The drivers 1122 are responsible for controlling or interfacing with the underlying hardware. For example, the drivers 1122 may include a display driver, an imaging device driver, a Bluetooth® or Bluetooth® Low Energy driver, a flash memory driver, a serial communication driver (e.g., a USB driver), a Wi-Fi® driver, an audio driver, a power management driver, and so forth.
The libraries 1110 provide a common low-level infrastructure used by the applications 1106. The libraries 1110 may include system libraries 1118 (e.g., a C standard library) that provide functions such as memory allocation functions, string manipulation functions, mathematical functions, and the like. In addition, the libraries 1110 may include API libraries 1124, such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as Moving Picture Experts Group-4 (MPEG4), Advanced Video Coding (H.264 or AVC), Moving Picture Experts Group Layer-3 (MP3), Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR) audio codec, Joint Photographic Experts Group (JPEG or JPG), or Portable Network Graphics (PNG)), graphics libraries (e.g., an OpenGL framework used to render two-dimensional (2D) and three-dimensional (3D) graphic content on a display), database libraries (e.g., SQLite to provide various relational database functions), web libraries (e.g., WebKit to provide web browsing functionality), and the like. The libraries 1110 may also include a wide variety of other libraries 1128 to provide many other APIs to the applications 1106.
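To tie this back to the disclosure: a database library such as the SQLite mentioned above is one plausible way to store reaction indications over time in association with a recording of the virtual meeting. The schema, table, and column names below are hypothetical, chosen only for illustration.

```python
import sqlite3

# Hypothetical schema (illustration only): each row associates one
# reaction selection with a meeting recording via a time offset.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE reaction (
           recording_id   TEXT NOT NULL,
           participant_id TEXT NOT NULL,
           reaction       TEXT NOT NULL,  -- e.g. 'applause', 'laugh'
           offset_s       REAL NOT NULL   -- seconds from recording start
       )"""
)
conn.executemany(
    "INSERT INTO reaction VALUES (?, ?, ?, ?)",
    [
        ("rec-1", "p1", "applause", 12.0),
        ("rec-1", "p2", "applause", 14.5),
        ("rec-1", "p1", "laugh", 61.0),
    ],
)
# Per-minute counts of each reaction: raw material for a timeline
# overview of reactions to the recorded meeting.
rows = conn.execute(
    "SELECT CAST(offset_s / 60 AS INT) AS minute, reaction, COUNT(*) "
    "FROM reaction WHERE recording_id = ? "
    "GROUP BY minute, reaction ORDER BY minute, reaction",
    ("rec-1",),
).fetchall()
# rows -> [(0, 'applause', 2), (1, 'laugh', 1)]
```

Grouping on an integer minute derived from the stored offset is one simple way to produce the per-interval counts a timeline overview would plot.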
Framework 1108 provides a common high-level infrastructure used by applications 1106. For example, framework 1108 provides various Graphical User Interface (GUI) functions, advanced resource management, and advanced location services. Framework 1108 can provide a wide variety of other APIs that can be used by applications 1106, some of which can be specific to a particular operating system or platform.
In an example, the applications 1106 may include a home application 1136, a contacts application 1130, a browser application 1132, a book reader application 1134, a location application 1142, a media application 1144, a messaging application 1146, a gaming application 1148, and a broad assortment of other applications such as a third-party application 1140. The applications 1106 are programs that execute functions defined in the programs. Various programming languages may be employed to create one or more of the applications 1106, structured in a variety of manners, such as object-oriented programming languages (e.g., Objective-C, Java, or C++) or procedural programming languages (e.g., C or assembly language). In a specific example, the third-party application 1140 (e.g., an application developed using the ANDROID™ or IOS™ Software Development Kit (SDK) by an entity other than the vendor of the particular platform) may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or another mobile operating system. In this example, the third-party application 1140 may invoke the API calls 1150 provided by the operating system 1112 to facilitate the functionality described herein.
Glossary of terms
"carrier signal" refers to any intangible medium capable of storing, encoding or carrying instructions for execution by a machine, and includes digital or analog communications signals or other intangible medium to facilitate communication of such instructions. The instructions may be transmitted or received over a network via a network interface device using a transmission medium.
"client device" refers to any machine that interfaces with a communication network to obtain resources from one or more server systems or other client devices. The client device may be, but is not limited to, a mobile phone, desktop computer, laptop computer, portable Digital Assistant (PDA), smart phone, tablet computer, super book, netbook, laptop computer, multiprocessor system, microprocessor-based or programmable consumer electronics, game console, set top box, or any other communication device that a user can use to access a network.
"communication network" refers to one or more portions of a network, which may be an ad hoc network, an intranet, an extranet, a Virtual Private Network (VPN), a Local Area Network (LAN), a Wireless LAN (WLAN), a Wide Area Network (WAN), a Wireless WAN (WWAN), a Metropolitan Area Network (MAN), the Internet, a portion of the Public Switched Telephone Network (PSTN), a Plain Old Telephone Service (POTS) network, a cellular telephone network, a wireless network, wi-Powerty A network, other type of network, or a combination of two or more such networks. For example, the network or portion of the network may comprise a wireless network or cellular network, and the coupling may be a Code Division Multiple Access (CDMA) connection, a global system for mobile communications (GSM) connection, or other type of cellular or wireless coupling. In this example, the coupling may implement any of various types of data transmission technologies, such as single carrier radio transmission technology (1 xRTT), evolution data optimized (EVDO) technology, general Packet Radio Service (GPRS) technology, enhanced data rates for GSM evolution (EDGE) technology, third generation partnership project (3 GPP) including 3G, fourth generation wireless (4G) networks, universal Mobile Telecommunications System (UMTS), high Speed Packet Access (HSPA), worldwide Interoperability for Microwave Access (WiMAX), long Term Evolution (LTE) standards, other data transmission technologies defined by various standards setting organizations, other long distance protocols, or other data transmission technologies.
"component" refers to a device, physical entity, or logic having boundaries defined by function or subroutine calls, branch points, APIs, or other techniques that provide partitioning or modularization of particular processing or control functions. Components may be combined with other components via their interfaces to perform machine processes. A component may be a part of a packaged-function hardware unit designed for use with other components, as well as a program that typically performs the specific functions of the relevant function. The components may constitute software components (e.g., code implemented on a machine-readable medium) or hardware components. A "hardware component" is a tangible unit capable of performing certain operations and may be configured or arranged in some physical manner. In various examples, one or more computer systems (e.g., stand-alone computer systems, client computer systems, or server computer systems) or one or more hardware components of a computer system (e.g., processors or groups of processors) may be configured by software (e.g., an application or application part) as hardware components that operate to perform certain operations as described herein. The hardware components may also be implemented mechanically, electronically, or in any suitable combination thereof. For example, a hardware component may include specialized circuitry or logic permanently configured to perform certain operations. The hardware component may be a special purpose processor such as a Field Programmable Gate Array (FPGA) or an Application Specific Integrated Circuit (ASIC). The hardware components may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations. For example, the hardware components may include software that is executed by a general purpose processor or other programmable processor. 
Once configured by such software, the hardware components become the specific machine (or specific component of the machine) uniquely tailored to perform the configured functions and are no longer general purpose processors. It will be appreciated that it may be decided, for cost and time considerations, to implement a hardware component mechanically in dedicated and permanently configured circuitry or in temporarily configured (e.g., by software configuration) circuitry. Thus, the phrase "hardware component" (or "hardware-implemented component") should be understood to include a tangible entity, i.e., an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in some manner or perform certain operations described herein. Considering the example where hardware components are temporarily configured (e.g., programmed), it is not necessary to configure or instantiate each of the hardware components at any one time. For example, where the hardware components include a general-purpose processor that is configured by software to be a special-purpose processor, the general-purpose processor may be configured at different times as respective different special-purpose processors (e.g., including different hardware components). The software configures the particular processor or processors accordingly, for example, to constitute a particular hardware component at one time and to constitute a different hardware component at a different time. A hardware component may provide information to and receive information from other hardware components. Thus, the described hardware components may be considered to be communicatively coupled. Where multiple hardware components are present at the same time, communication may be achieved by signal transmission between or among two or more hardware components (e.g., via appropriate circuitry and buses). 
In examples where multiple hardware components are configured or instantiated at different times, communication between such hardware components may be achieved, for example, by storing information in a memory structure accessible to the multiple hardware components and retrieving the information in the memory structure. For example, one hardware component may perform an operation and store an output of the operation in a memory device communicatively coupled thereto. Additional hardware components may then access the memory device at a later time to retrieve and process the stored output. The hardware component may also initiate communication with an input device or an output device, and may operate on a resource (e.g., a collection of information). Various operations of the example methods described herein may be performed, at least in part, by one or more processors that are temporarily configured (e.g., via software) or permanently configured to perform the relevant operations. Whether temporarily configured or permanently configured, such a processor may constitute a processor-implemented component that operates to perform one or more operations or functions described herein. As used herein, "processor-implemented components" refers to hardware components implemented using one or more processors. Similarly, the methods described herein may be implemented, at least in part, by processors, where a particular processor or processors are examples of hardware. For example, at least some of the operations of the method may be performed by one or more processors or processor-implemented components. In addition, one or more processors may also operate to support execution of related operations in a "cloud computing" environment or as "software as a service" (SaaS) operations. 
For example, at least some of the operations may be performed by a set of computers (as examples of machines including processors), where the operations may be accessed via a network (e.g., the internet) and via one or more suitable interfaces (e.g., APIs). The performance of certain operations may be distributed among processors, not only residing within a single machine, but also deployed across multiple machines. In some examples, the processor or processor-implemented components may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other examples, the processor or processor-implemented components may be distributed across multiple geographic locations.
"computer-readable storage medium" refers to both machine storage media and transmission media. Thus, these terms include both storage devices/media and carrier wave/modulated data signals. The terms "machine-readable medium," "computer-readable medium," and "device-readable medium" mean the same thing and may be used interchangeably in this disclosure.
"machine storage media" refers to single or multiple storage devices and media (e.g., centralized or distributed databases, and associated caches and servers) that store the executable instructions, routines, and data. Accordingly, the term should be taken to include, but is not limited to, solid-state memory, as well as optical and magnetic media, including memory internal or external to the processor. Specific examples of machine storage media, computer storage media, and device storage media include: nonvolatile memory including, for example, semiconductor memory devices such as erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), FPGA, and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disk; CD-ROM and DVD-ROM discs. The terms "machine storage medium," "device storage medium," "computer storage medium" mean the same and may be used interchangeably in this disclosure. The terms "machine storage medium," computer storage medium, "and" device storage medium "expressly exclude carrier waves, modulated data signals, and other such medium, at least some of which are contained within the term" signal medium.
"non-transitory computer-readable storage medium" refers to a tangible medium capable of storing, encoding or carrying instructions for execution by a machine.
"signal medium" refers to any intangible medium capable of storing, encoding, or carrying instructions for execution by a machine, and includes digital or analog communication signals or other intangible medium to facilitate communication of software or data. The term "signal medium" shall be taken to include any form of modulated data signal, carrier wave, and the like. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The terms "transmission medium" and "signal medium" mean the same thing and may be used interchangeably in this disclosure.

Claims (20)

1. A method, comprising:
providing a virtual conference between a plurality of participants;
providing a display of reaction buttons for each of the plurality of participants, the reaction buttons being selectable by the participant to indicate different reactions to the virtual meeting;
receiving an indication of a selection of the reaction button by one or more of the plurality of participants;
storing an indication of the selection over time in association with recording the virtual meeting;
generating a graphical overview of the reactions to the virtual meeting based on the stored indication of the selection; and
providing display of the graphical overview for a first participant of the plurality of participants.
2. The method of claim 1, further comprising:
refraining from displaying the graphical overview to remaining participants of the plurality of participants.
3. The method of claim 2, wherein the first participant corresponds to a moderator of the virtual conference and the remaining participants correspond to attendees of the virtual conference.
4. The method of claim 1, wherein the graphical overview is a timeline indicating the different reactions to the virtual meeting over time.
5. The method of claim 4, wherein the timeline is selectable by a user to initiate playback of the recording of the virtual meeting at a user-selected point in time.
6. The method of claim 1, wherein the different reactions include two or more of applause, laughing, agreeing, objecting, and feeling happy, each of the reactions being indicated by one of the reaction buttons.
7. The method of claim 1, wherein the virtual meeting is provided within a room of a plurality of rooms included within a virtual space for the virtual meeting.
8. The method of claim 1, further comprising:
providing display of a participant video element for each of the plurality of participants, the participant video element corresponding to the participant and including a video feed for the participant.
9. A system, comprising:
a processor, and
a memory storing instructions that, when executed by the processor, configure the processor to perform operations comprising:
providing a virtual conference between a plurality of participants;
providing a display of reaction buttons for each of the plurality of participants, the reaction buttons being selectable by the participant to indicate different reactions to the virtual meeting;
receiving an indication of a selection of the reaction button by one or more of the plurality of participants;
storing an indication of the selection over time in association with recording the virtual meeting;
generating a graphical overview of the reactions to the virtual meeting based on the stored indication of the selection; and
providing display of the graphical overview for a first participant of the plurality of participants.
10. The system of claim 9, the operations further comprising:
the graphical overview is prevented from being displayed to remaining participants of the plurality of participants.
11. The system of claim 10, wherein the first participant corresponds to a moderator of the virtual conference and the remaining participants correspond to attendees of the virtual conference.
12. The system of claim 9, wherein the graphical overview is a timeline indicating the different reactions to the virtual meeting over time.
13. The system of claim 12, wherein the timeline is selectable by a user to initiate playback of the recording of the virtual meeting at a user-selected point in time.
14. The system of claim 9, wherein the different reactions include two or more of applause, laughing, agreeing, objecting, and feeling happy, each of the reactions being indicated by one of the reaction buttons.
15. The system of claim 9, wherein the virtual meeting is provided within a room of a plurality of rooms included within a virtual space for the virtual meeting.
16. The system of claim 9, the operations further comprising:
providing display of a participant video element for each of the plurality of participants, the participant video element corresponding to the participant and including a video feed for the participant.
17. A non-transitory computer-readable storage medium comprising instructions that, when executed by a computer, cause the computer to perform operations comprising:
providing a virtual conference between a plurality of participants;
providing a display of reaction buttons for each of the plurality of participants, the reaction buttons being selectable by the participant to indicate different reactions to the virtual meeting;
receiving an indication of a selection of the reaction button by one or more of the plurality of participants;
storing an indication of the selection over time in association with recording the virtual meeting;
generating a graphical overview of the reactions to the virtual meeting based on the stored indication of the selection; and
providing display of the graphical overview for a first participant of the plurality of participants.
18. The computer-readable medium of claim 17, the operations further comprising:
refraining from displaying the graphical overview to remaining participants of the plurality of participants.
19. The computer-readable medium of claim 18, wherein the first participant corresponds to a moderator of the virtual conference and the remaining participants correspond to attendees of the virtual conference.
20. The computer-readable medium of claim 17, wherein the graphical summary is a timeline indicating the different reactions to the virtual meeting over time.
CN202280025397.2A 2021-03-30 2022-03-29 Presenting participant reactions within a virtual conference system Pending CN117099365A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US63/168,063 2021-03-30
US17/390,630 2021-07-30
US17/390,630 US11855796B2 (en) 2021-03-30 2021-07-30 Presenting overview of participant reactions within a virtual conferencing system
PCT/US2022/022360 WO2022212386A1 (en) 2021-03-30 2022-03-29 Presenting participant reactions within virtual conferencing system

Publications (1)

Publication Number Publication Date
CN117099365A true CN117099365A (en) 2023-11-21

Family

ID=88772095

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202280025397.2A Pending CN117099365A (en) 2021-03-30 2022-03-29 Presenting participant reactions within a virtual conference system

Country Status (1)

Country Link
CN (1) CN117099365A (en)

Similar Documents

Publication Publication Date Title
US11784841B2 (en) Presenting participant reactions within a virtual conferencing system
US11855796B2 (en) Presenting overview of participant reactions within a virtual conferencing system
US11968055B2 (en) Assigning participants to rooms within a virtual conferencing system
US20240080215A1 (en) Presenting overview of participant reactions within a virtual conferencing system
US11689696B2 (en) Configuring participant video feeds within a virtual conferencing system
US11973613B2 (en) Presenting overview of participant conversations within a virtual conferencing system
US20220321373A1 (en) Breakout sessions based on tagging users within a virtual conferencing system
US11722535B2 (en) Communicating with a user external to a virtual conference
US20220321617A1 (en) Automatically navigating between rooms within a virtual conferencing system
US11792031B2 (en) Mixing participant audio from multiple rooms within a virtual conferencing system
US20230096597A1 (en) Updating a room element within a virtual conferencing system
US11979244B2 (en) Configuring 360-degree video within a virtual conferencing system
US20230094963A1 (en) Providing template rooms within a virtual conferencing system
US20230101377A1 (en) Providing contact information within a virtual conferencing system
WO2022212391A1 (en) Presenting participant conversations within virtual conferencing system
US20240069687A1 (en) Presenting participant reactions within a virtual working environment
US11972173B2 (en) Providing change in presence sounds within virtual working environment
CN117099365A (en) Presenting participant reactions within a virtual conference system
US20240073370A1 (en) Presenting time-limited video feed within virtual working environment
US20240073364A1 (en) Recreating keyboard and mouse sounds within virtual working environment
US11880560B1 (en) Providing bot participants within a virtual conferencing system
US11979442B2 (en) Dynamically assigning participant video feeds within virtual conferencing system
US20240073050A1 (en) Presenting captured screen content within a virtual conferencing system
US20240069708A1 (en) Collaborative interface element within a virtual conferencing system
US20240073371A1 (en) Virtual participant interaction for hybrid event

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination