EP3189622B1 - System and method for tracking events and providing feedback in a virtual conference - Google Patents
- Publication number
- EP3189622B1 (application EP15837998.2A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- participants
- virtual
- participant
- video
- event
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements or protocols for real-time communications
- H04L65/40—Services or applications
- H04L65/403—Arrangements for multiparty communication, e.g. conference
- H04L65/4038—Arrangements for multiparty communication, e.g. conference with central floor control
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04817—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/0482—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance interaction with lists of selectable items, e.g. menus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object or an image, setting a parameter value or selecting a range
- G06F3/04842—Selection of a displayed object
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06Q—DATA PROCESSING SYSTEMS OR METHODS, SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/10—Office automation, e.g. computer aided management of electronic mail or groupware; Time management, e.g. calendars, reminders, meetings or time accounting
- G06Q10/101—Collaborative creation of products or services
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements or protocols for real-time communications
- H04L65/10—Signalling, control or architecture
- H04L65/1066—Session control
- H04L65/1083—In-session procedures
- H04L65/1086—In-session procedures session scope modification
- H04L65/1089—In-session procedures session scope modification by adding or removing media
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements or protocols for real-time communications
- H04L65/10—Signalling, control or architecture
- H04L65/1066—Session control
- H04L65/1083—In-session procedures
- H04L65/1086—In-session procedures session scope modification
- H04L65/1093—In-session procedures session scope modification by adding or removing participants
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements or protocols for real-time communications
- H04L65/40—Services or applications
- H04L65/4007—Services involving a main real-time session and one or more additional parallel sessions
- H04L65/4015—Services involving a main real-time session and one or more additional parallel sessions where at least one of the additional parallel sessions is real time or time sensitive, e.g. white board sharing, collaboration or spawning of a subconference
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements or protocols for real-time communications
- H04L65/40—Services or applications
- H04L65/403—Arrangements for multiparty communication, e.g. conference
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/141—Systems for two-way working between two video terminals, e.g. videophone
- H04N7/148—Interfacing a video terminal to a particular transmission medium, e.g. ISDN
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/15—Conference systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/15—Conference systems
- H04N7/152—Multipoint control units therefor
Description
- This application claims the benefit of copending nonprovisional U.S. Patent Application No. 14/840,471, filed August 31, 2015, which claims the benefit of U.S. Provisional Patent Application No. 62/046,880, filed September 5, 2014.
- This application claims the benefit of copending nonprovisional U.S. Patent Application No. 14/840,438, filed August 31, 2015, which claims the benefit of U.S. Provisional Patent Application No. 62/046,859, filed September 5, 2014.
- This application claims the benefit of copending nonprovisional U.S. Patent Application No. 14/840,513, filed August 31, 2015, which claims the benefit of U.S. Provisional Patent Application No. 62/046,879, filed September 5, 2014.
- This invention relates generally to the field of computer systems. More particularly, the invention relates to a system and method for tracking events and providing feedback in a virtual conference.
- "Web conferencing" or "virtual conferencing" refers to various forms of online collaborative services including web seminars, webcasts, and peer-level web meetings. Web conferencing systems today support real-time audio and video streaming between participants, typically under the coordination of a central web conferencing server. Applications for web conferencing include online classrooms, meetings, training sessions, lectures, and seminars, to name a few.
- Participants in a web conference, such as students in a virtual classroom, can benefit from formative feedback on their contributions during classroom discussions. Such feedback requires identifying, classifying, and/or assessing each contribution (e.g., spoken contribution, written contribution) by each individual participant, which may be time-consuming and impractical. Notwithstanding the pedagogical value of formative feedback and assessment, the time and resources required to provide such feedback and assessment can prevent or diminish this learning opportunity for participants.
- Prior art document
US 2013/169742 A1 shows a method which includes selecting, from a plurality of participants in a real-time visual communication session, one or more active participants each being associated with an active state based at least in part on one or more participation properties related to the session. - A better understanding of the present invention can be obtained from the following detailed description in conjunction with the following drawings, in which:
-
FIG. 1 illustrates an exemplary system architecture on which embodiments of the invention may be implemented; -
FIG. 2 illustrates one embodiment of a graphical user interface (GUI) which includes a center or primary speaker region and a speaker queue; -
FIG. 3 illustrates one embodiment of an architecture for selecting breakout groups; -
FIG. 4 illustrates a class initialization graphic for providing a participant entry into a virtual classroom; -
FIG. 5 illustrates an exemplary pre-class discussion region including video images of participants; -
FIG. 6 illustrates a group of students connecting to the virtual classroom using a shared projector; -
FIGS. 7A-B illustrate one embodiment of the graphical user interface as a professor or other moderator joins the virtual classroom; -
FIG. 8 illustrates a visual display of materials used by the speaker and a smaller video region for the speaker; -
FIG. 9 illustrates a professor or teacher meeting with a student during office hours; -
FIG. 10 illustrates visual results of a question on which participants have voted; -
FIG. 11 illustrates one embodiment in which content is displayed on top of the primary speaker region; -
FIG. 12 illustrates one embodiment of a graphical user interface comprising two current speaker regions; -
FIG. 13 illustrates one embodiment of a graphical user interface for a touch screen device; -
FIG. 14 illustrates one embodiment of a graphical user interface for initializing a breakout group; -
FIGS. 15A-B illustrate exemplary breakout group materials, annotations made to the materials, and video images of participants; -
FIGS. 16A-C illustrate an exemplary graphical user interface comprising multiple speaker regions, and a region for evaluating student performance data when visiting a professor or teacher during office hours; -
FIG. 17 illustrates an architecture for synchronizing state between clients in accordance with one embodiment of the invention; -
FIG. 18 illustrates additional details of the exemplary architecture for synchronizing state between clients; -
FIG. 19 illustrates one embodiment of an architecture for distributing audio and video among participants; -
FIG. 20 illustrates one embodiment of an architecture for storing and replaying video and audio captured during a class or other virtual conference type; -
FIGS. 21A-B illustrate embodiments of a graphical interactive timeline; -
FIGS. 22A-B illustrate different logic and processes for generating an interactive timeline in accordance with one embodiment of the invention; -
FIG. 23 illustrates an exemplary lesson plan in a human-readable format; -
FIG. 24 illustrates an exemplary interactive timeline and a mapping between a YAML representation and the interactive timeline; -
FIGS. 25A-C illustrate embodiments of a graphical design interface for designing interactive timelines; -
FIGS. 26A-B illustrate one embodiment of an architecture for generating a graphical interactive timeline; -
FIG. 27 illustrates how functions associated with a segment of a timeline may be synchronized with clients participating in a virtual conference; -
FIGS. 28A-C illustrate an interactive timeline displayed on a secondary display in accordance with one embodiment; -
FIG. 29 illustrates a computer system in accordance with one embodiment of the invention. -
FIG. 30 illustrates an exemplary decision support module employed in one embodiment of the invention; -
FIG. 31 illustrates an exemplary graphical user interface for identifying students recommended for participation; -
FIG. 32 illustrates a method in accordance with one embodiment of the invention; -
FIG. 33 illustrates an example feedback provision module configured to provide assessment of participants based on defined learning outcomes; -
FIG. 34 illustrates an example event timeline chart; -
FIG. 35A illustrates an example event list; -
FIG. 35B illustrates an example event list; -
FIG. 36 illustrates an example event filter control; -
FIG. 37A illustrates an example user interface for selecting a learning outcome; -
FIG. 37B illustrates an example user interface for selecting a learning outcome; -
FIG. 37C illustrates an example user interface for selecting a learning outcome; -
FIG. 38 illustrates an example evaluation rubric; -
FIG. 39 illustrates an example user interface for evaluating a participant's contribution; -
FIG. 40 illustrates an example user interface for evaluating a participant's contribution; -
FIG. 41 illustrates an example user interface for evaluating a participant's contribution; -
FIG. 42 illustrates an example user interface for a participant to review evaluations; -
FIG. 43 illustrates an example user interface for a participant to review evaluations; -
FIG. 44 illustrates a flow diagram of an example process of creating and maintaining a debate in web conferences; -
FIG. 45 illustrates an example discussion support module configured to support discussion in web conferences; -
FIG. 46 illustrates an example user interface for creating a discussion; -
FIG. 47 illustrates an example user interface for creating a discussion; -
FIG. 48 illustrates an example user interface for creating a discussion; -
FIG. 49 illustrates an example user interface for creating a discussion; -
FIG. 50 illustrates an example user interface for creating a discussion; -
FIG. 51 illustrates an example user interface for evaluating a discussion participant; -
FIG. 52 illustrates an example user interface for evaluating a discussion participant; -
FIG. 53 illustrates an example user interface for enabling or disabling evaluation of discussion participants; and -
FIG. 54 illustrates an example user interface for terminating a discussion. -
FIG. 55 illustrates an example user interface with a termination button for terminating a discussion. - In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention described below. It will be apparent, however, to one skilled in the art that the embodiments of the invention may be practiced without some of these specific details. In other instances, well-known structures and devices are shown in block diagram form to avoid obscuring the underlying principles of the embodiments of the invention.
- The present invention is defined by the appended independent claims.
- Dependent claims constitute embodiments of the invention.
- Any other subject-matter outside the scope of protection of the claims is to be regarded as examples not in accordance with the invention.
-
Figure 1 illustrates a high-level system architecture employed in one embodiment of the invention. In the illustrated embodiment, a plurality of clients 130, 140, 150, and 160 connect to a virtual conferencing service 100 over the Internet 180. The clients may comprise any form of end user devices including desktop/laptop computers (e.g., PCs or Macs), smartphones (e.g., iPhones, Android phones, etc.), tablets (e.g., iPads, Galaxy Tablets, etc.), and/or wearable devices (e.g., smartwatches such as the iWatch or Samsung Gear watch). Of course, the underlying principles of the invention are not limited to any particular form of user device. - In one embodiment, each of the client devices connects to the virtual conferencing service 100 through a browser or conferencing app/application 131, 141, 151, 161 which includes a graphical user interface 132, 142, 152, 162 to allow the end user to interact with the virtual conferencing service and participate in a virtual conference using the techniques described herein. In addition, each browser/app 131, 141, 151, 161 operates in accordance with a current state 135, 145, 155, 165 of the virtual conference which is synchronized between the clients 130, 140, 150, 160 using the synchronization techniques described below. By way of example, and not limitation, the current state 135 for client 130 may indicate positioning of the various graphical elements within the GUI 132, including the central position of the current speaker, a visual indication of the speaker queue, and a graphical representation and/or video images of participants in each breakout group.
- In the illustrated embodiment, the virtual conferencing service 100 includes a persistent state manager 110 for persistently storing updates to the state of each virtual conference within a state database 115. As described in detail below, the state may be continually updated in response to user input provided via the browsers/apps 131, 141, 151, 161 running on the various clients 130, 140, 150, 160. In one embodiment, when a new participant joins the conference, the persistent state manager 110 provides the client with stored virtual conference state data required to synchronize the new client state with the state of the other clients participating in the conference. The persistent state manager 110 may be implemented with a Web server. However, the underlying principles of the invention are not limited to this implementation.
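The persistent state manager's role — storing the latest state of each conference so a newly joining client can be initialized to match the other participants — can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation; the class and method names are hypothetical, and a real service would back this with the state database 115 rather than an in-memory dictionary:

```python
class PersistentStateManager:
    """Keeps the latest versioned snapshot of each conference's state
    so that a newly joining client can be brought in sync."""

    def __init__(self):
        # conference_id -> (version, state dict); stands in for database 115
        self._state_db = {}

    def save_update(self, conference_id, state, version):
        # Keep only the newest snapshot; ignore stale (out-of-order) writes.
        current = self._state_db.get(conference_id)
        if current is None or version > current[0]:
            self._state_db[conference_id] = (version, dict(state))

    def initialize_client(self, conference_id):
        # A new participant receives the stored state so its local copy
        # matches the other clients before live updates resume.
        version, state = self._state_db.get(conference_id, (0, {}))
        return version, dict(state)
```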
- In one embodiment, after the client's state has been initialized from the virtual conference state database 115, a dynamic state synchronization service 120 provides dynamic state updates to the client in accordance with user input from all of the clients 130, 140, 150, 160 during the virtual conference. For example, one embodiment of the dynamic state synchronization service 120 implements a publish/subscribe mechanism in which each client publishes its own state updates to the dynamic state synchronization service 120. A client participating in the virtual conference subscribes to the dynamic state synchronization service 120 to receive the published state updates from all other clients (including itself). Thus, for a virtual conference in which Clients A-D are participants, if Client A publishes a state update (e.g., adding its user to the speaker queue), the dynamic state synchronization service 120 will forward the update to all subscribing clients (i.e., Clients A-D). This publish/subscribe mechanism is described in greater detail below. In addition, as discussed below, ordering techniques are employed in one embodiment to ensure that the state updates are applied to each client in the correct order (i.e., to ensure that the clients all remain in the same state).
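The publish/subscribe mechanism with global ordering might be sketched as follows. All names are illustrative assumptions, and the sketch runs in-process; the actual service would deliver updates over the network to remote clients:

```python
class DynamicStateSync:
    """Each client publishes its state updates; the service assigns a
    global sequence number and forwards the update to every subscriber
    (including the publisher), so all clients apply updates in the
    same order and remain in the same state."""

    def __init__(self):
        self._subscribers = []   # callables invoked with each update
        self._sequence = 0       # global ordering of updates

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, update):
        self._sequence += 1
        ordered = {"seq": self._sequence, **update}
        for deliver in self._subscribers:
            deliver(ordered)
        return self._sequence


# Example: Client A adds itself to the speaker queue; every subscriber
# (A included) receives the identical ordered update.
sync = DynamicStateSync()
received = {"A": [], "B": []}
sync.subscribe(lambda u: received["A"].append(u))
sync.subscribe(lambda u: received["B"].append(u))
sync.publish({"action": "enqueue_speaker", "client": "A"})
```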
- In one embodiment, a multimedia stream distribution service 125 manages the receipt and distribution of audio and video streams for each of the clients 130, 140, 150, 160. In particular, in one embodiment, each client 130, 140, 150, 160 captures audio and/or video of its participant and streams the audio/video to the multimedia stream distribution service 125, which forwards the audio/video streams to each of the clients 130, 140, 150, 160. The audio is then decoded and output from speakers (not shown) and the video is decoded and rendered within each of the conferencing GUIs 132, 142, 152, 162 (examples provided below).
- One embodiment of the multimedia stream distribution service 125 also implements a publish/subscribe mechanism in which each client subscribes to the audio/video streams from every other client. Thus, in the example shown in
Figure 1, Client 130 may subscribe to the audio/video streams of clients 140, 150, and 160. The particular resolution and/or frame rate of each video stream captured on each client may be dependent on the current state 135, 145, 155, 165 of the video conference. For example, when a participant is designated as the current speaker and is provided with the central speaking position within the GUI, that participant's client may capture video having a relatively higher resolution and/or frame rate than when the participant is not the speaker (i.e., when the video of the user is rendered within a small thumbnail region of the GUI). Choosing higher quality video for only the current speaker (or current set of speakers if multiple speakers are permitted) significantly reduces the bandwidth requirements of the system. - In one embodiment, a multimedia storage service 190 records audio/video content from the virtual conference and other related data to allow the moderator and/or participants to play back and review the virtual conference at a later time. For example, in a classroom environment, a professor or teacher may play back portions of the conference to review discussions among participants or questions which were posed during the conference. The professor or teacher may then provide feedback to the participants (e.g., clarifying an issue which was discussed, answering additional questions, providing positive reinforcement, etc.).
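The state-dependent choice of capture quality can be illustrated with a small sketch. The specific resolutions and frame rates below are hypothetical examples, not values from the patent:

```python
def capture_settings(client_id, state):
    """Pick capture resolution and frame rate from the synchronized
    conference state: the client of the current speaker streams at
    higher quality than participants shown only as thumbnails,
    reducing overall bandwidth. (Values are illustrative.)"""
    if state.get("current_speaker") == client_id:
        return {"resolution": (1280, 720), "fps": 30}
    return {"resolution": (320, 180), "fps": 15}
```

Because the current-speaker designation lives in the synchronized state, every client can make this decision locally and consistently.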
- The video and audio content stored on the multimedia storage service 190 may be of higher quality than the audio/video used during the live virtual conference. For example, each individual client may capture higher quality video and audio than may be possible to stream through the multimedia stream distribution service 125. The higher quality audio/video may be stored locally on each individual client 130, 140, 150, 160 during the virtual conference and may be uploaded to the multimedia storage service 190 following the conference. For example, each time a participant speaks, a local audio clip of the user's voice (e.g., an MP3 or AAC clip) may be recorded and subsequently uploaded to the multimedia storage service 190. Additionally, state data 135, 145, 155, 165 and/or other data required to reconstruct the virtual conference for playback may be stored on the multimedia storage service 190 (as described in greater detail below).
- The multimedia storage service 190 may be an external service from which the virtual conferencing service purchases storage resources. In another embodiment, the multimedia storage service 190 is implemented as a separate module within the virtual conferencing service 100.
- Additional details will now be provided for exemplary speaker queue and breakout group implementations, followed by a description of additional architectural details of the virtual conferencing service 100.
- In order to direct the visual attention of conference participants towards the focus of discussion in a multi-party conversation in a virtual conference, signals sent by participants themselves may be relied on to indicate an intention to speak. In contrast to systems which rely solely on speaker volume, this embodiment eliminates possible errors due to poor audio conditions, poor network connections, and ambiguity in speech patterns. For example, the signals sent by the participants can be used instead of or along with speech detection algorithms (e.g., manual or automatic).
- During a web video conference or virtual conference, meeting participants are provided with the ability to request and gain access to the "center of attention." For example, as illustrated in
Figure 2 , a participant has the center of attention if the participant is displayed in the largest video element or in a center element in the virtual conference, referred to as the "current speaker position" 203. In one embodiment, this is done by a "push-to-talk" or "trigger-to-talk" mechanism where the participant holds down a particular key on the keyboard, presses a graphical indication in the virtual conference environment, or performs any other suitable trigger action that would indicate that the participant would like to talk, herein referred to as the "queue key." The queue key may also toggle the microphone mute status if the microphone was previously muted. - By pressing the queue key, the participant places him or herself into a speaker queue which may be synchronized across all of the clients 130, 140, 150, 160 using the dynamic state synchronization service 120 as described herein. As illustrated in
Figure 2 , a visual indication 201 of the participants in the speaker queue may be rendered within the GUI of each client 130, 140, 150, 160. In one embodiment, each client 130, 140, 150, 160 maintains its own synchronized copy of the speaker queue. When a particular participant is added to the speaker queue (e.g., by holding the queue key), that participant is automatically added to the local speaker queue on the participant's client, thereby altering the local state. The local state change is then synchronized to the other clients through the publish/subscribe mechanism implemented by the dynamic state synchronization service. If another participant requested entry into the speaker queue at approximately the same time, the dynamic state synchronization service 120 resolves the potential conflict using an ordering mechanism (described in greater detail below) and propagates correct state updates to all of the clients 130, 140, 150, 160. - Thus, by holding the queue key, the participant ensures a place in the speaker queue and the speaker queue is made visible to all participants in the virtual conference. In
Figure 2, the visual representation of the speaker queue 201 displays each participant in the queue through screenshots of the video stream of the participant in the virtual conference or any other suitable digital representation of the participant (e.g., a picture, avatar, etc.). Video of the speaker at the front of the queue is displayed within the current speaker position 203 of the GUI. In addition, in Figure 2, thumbnails of all other participants 202 (or a subset thereof) in the virtual conference are displayed within the GUI. - One embodiment of the system tracks how long each participant is in the speaker queue, how long each participant is given the center of attention, and how much each participant has talked (e.g., based on signal processing of the participant's visual cue while the participant was given the center of attention). In one embodiment, this is accomplished by setting/resetting programmable timers on each of the clients 130, 140, 150, 160 and/or on the virtual conferencing service 100. In one embodiment, the time allocated to speak may be controlled by the professor or teacher (or other moderator).
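The way each client applies ordered speaker-queue updates to its local copy — so that near-simultaneous requests from different participants resolve identically everywhere — might look like the following sketch. The update field names are assumptions for illustration:

```python
def apply_update(queue, update):
    """Apply one ordered speaker-queue update to a client's local copy.
    Because every client applies updates in the service's sequence
    order, all local queues converge to the same state."""
    if update["action"] == "enqueue" and update["client"] not in queue:
        queue.append(update["client"])
    elif update["action"] == "dequeue" and update["client"] in queue:
        queue.remove(update["client"])
    return queue


# Two participants request entry at nearly the same time; the service's
# sequence numbers fix the order, so both clients' queues converge.
updates = [
    {"seq": 2, "action": "enqueue", "client": "B"},
    {"seq": 1, "action": "enqueue", "client": "A"},
]
queue_on_client_a, queue_on_client_b = [], []
for u in sorted(updates, key=lambda u: u["seq"]):
    apply_update(queue_on_client_a, u)
    apply_update(queue_on_client_b, u)
```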
- The same queue key can also be used to control the mute status of the microphone. If the microphone was previously muted, entering into the speaker queue by holding the queue key will also un-mute the microphone, allowing the audio of that participant to be heard by all participants in the virtual conference. In another embodiment, the previously muted microphone may not be un-muted automatically and, instead, the microphone's status is presented to the participant or all participants. For example, if the microphone was muted prior to the key press (or any of the other queue-entry actions), then pressing the queue key presents an indication that the microphone is currently muted.
- The action of the participant joining the speaker queue is communicated to all other participants via a message or indication such as a speaker queue visualization or a display of the speaker queue 201. In one embodiment, this is delivered to clients through the publish/subscribe mechanism employed by the dynamic state synchronization service 120.
- In one embodiment, one of the participants or a moderator/instructor is set as a "default" speaker (e.g., the professor, leader, or designated participant or student of the participants in the virtual conference) who may be configured as being "behind" the last participant in the speaker queue. Thus, when the speaker queue is empty, the default speaker is placed in the center and may indicate which participant should be given the center of attention. The default speaker can be designated, for example, by a professor to a student allowing the student to field or answer questions after a presentation is given (e.g., by the student).
- The speaker queue 201 may be implemented as a First In, First Out (FIFO) queue and may have a default speaker associated with the queue. For example, the default speaker would be placed in the last or trailer position of the speaker queue. In one embodiment, a participant is added to the speaker queue (e.g., at the end of the speaker queue visually) by selecting a queue key and the participant is kept in the speaker queue by holding the queue key. The queue key can be a control key or any other suitable key on their keyboard and/or may be implemented as a graphical icon in the GUI (which the user selects via a mouse or a touch-pad). In one embodiment, a participant is removed from the speaker queue when he or she releases the designated queue key or deselects the graphical icon.
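The hold-to-queue behavior described above can be sketched as a simple FIFO structure that features a default speaker whenever the queue is empty. This is an illustrative Python sketch; the class and method names are assumptions, not taken from the described implementation:

```python
class SpeakerQueue:
    """Illustrative FIFO speaker queue. Holding the queue key keeps a
    participant in the queue; releasing it removes the participant. The
    default speaker is featured whenever the queue is empty."""

    def __init__(self, default_speaker=None):
        self.queue = []                    # participants currently holding the queue key
        self.default_speaker = default_speaker

    def hold_queue_key(self, participant):
        # Selecting/holding the queue key adds the participant once.
        if participant not in self.queue:
            self.queue.append(participant)

    def release_queue_key(self, participant):
        # Releasing the key (or deselecting the icon) removes the participant.
        if participant in self.queue:
            self.queue.remove(participant)

    def current_speaker(self):
        # The head of the queue gets the center of attention; when the
        # queue is empty, the default speaker is featured instead.
        return self.queue[0] if self.queue else self.default_speaker
```

Here the default speaker is modeled as a fallback rather than a literal trailer element of the queue; either representation yields the same featured-speaker behavior.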
- In one embodiment, the participant at the head of the speaker queue is given the center of attention by being visually featured in the conference. For example, the participant's visual cue is placed in a center element of the virtual conference or placed in the largest element in the virtual conference (e.g., center speaker position 203 in
Figure 2 ). Once the participant has been given the center of attention, the participant may be excluded/removed from the displayed speaker queue 201. - In one embodiment, the speaker queue is made visible to every participant in the virtual conference in a displayed speaker queue or queue visualization. For example, the displayed speaker queue 201 may be an array (e.g., horizontal, vertical, curved, etc.) of small photographs or visual cues of participants in the speaker queue. The displayed speaker queue can be in the bottom left-hand corner of the user interface of the virtual conferencing environment and positioned from left-to-right based on index or position of the participant in the speaker queue. Of course, the underlying principles of the invention are not limited to any particular visual representation of the speaker queue.
- When the speaker queue is empty, the default speaker (e.g., in the trailer position of the speaker queue) is featured in the conference, for example, by being given the center of attention. The leader, web conference initiator, or professor can initially be the default speaker and/or can designate a default speaker. For example, the professor can designate the default speaker by selecting the designated participant's thumbnail video feed 202 or other visual cue in the list or group of visual cues (e.g., at top, bottom, or side of the virtual conference). In one embodiment, each participant's audio broadcasting is muted by default and may be unmuted in response to input from the participant (e.g., by the participant holding the queue key).
- In one embodiment, when a participant presses and holds down the queue key, his or her microphone is un-muted. When the participant releases the queue key, the participant's microphone is muted again. In one embodiment, each speaker queue modification is synchronized to the clients of all participants via the publish/subscribe techniques implemented by the dynamic state synchronization service 120. In addition, data related to participation in the speaker queue may be stored by the virtual conferencing service 100 (and/or the external multimedia storage service 190) and later used to analyze participation activity (e.g., a length of time each participant was speaking).
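The participation tracking mentioned above (setting and resetting timers to record how long each participant spoke) might be sketched as follows; the class name and the injected clock are hypothetical conveniences, not part of the described system:

```python
import time
from collections import defaultdict

class ParticipationTracker:
    """Illustrative sketch of per-participant speaking-time accumulation.
    A clock function is injected so the tracker can be tested deterministically."""

    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.totals = defaultdict(float)   # participant -> total seconds spoken
        self._started = {}                 # participant -> timestamp when featured

    def speaker_featured(self, participant):
        # Called when the participant is given the center of attention.
        self._started[participant] = self.clock()

    def speaker_released(self, participant):
        # Called when the participant leaves the featured position; the
        # elapsed interval is added to the participant's running total.
        start = self._started.pop(participant, None)
        if start is not None:
            self.totals[participant] += self.clock() - start
```

The accumulated totals correspond to the participation data that could later be stored by the virtual conferencing service for analysis.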
- While the embodiment in
Figure 2 illustrates a single speaker in a current speaker position 203, other embodiments may include multiple current speaker positions. For example, one embodiment of the invention includes multi-region layouts in which the center of attention is sub-divided into multiple "attention regions," allowing for the simultaneous display of more than one participant in the virtual conference. For example, Figure 12 (discussed below) illustrates an embodiment with two attention regions for two different speakers. Another embodiment includes four attention regions arranged in a square or rectangle with two regions towards the top of the display and two regions towards the bottom of the display. Any number of attention regions may be generated while still complying with the underlying principles of the invention. - In these embodiments, a single speaker queue may be maintained for all attention regions. When a region becomes available (using the same criteria as with the single-region center of attention embodiments described herein), the first participant in the speaker queue is removed and the participant video is displayed in that attention region. In an alternate embodiment, each attention region may be assigned its own dedicated speaker queue (e.g., N speaker queues for N attention regions). This embodiment may be used, for example, to provide a dedicated attention region for each breakout group, to allow different members of the breakout group to take turns speaking within each dedicated attention region. In either of these embodiments, a "default speaker" may also be specified for each attention region.
- In addition, in one embodiment, when a speaker occupies an attention region in the center of attention, the professor, leader, or designated participant can "pin" the speaker to that region (e.g., by selecting a key or graphical element within the GUI). Pinning a speaker has the same effect as if the speaker actively maintained the position by holding the push-to-talk activation key or alternative mechanism to maintain the featured position. In one embodiment, no other speaker will be moved from the speaker queue into the speaker position until the featured speaker is "unpinned" by the professor, leader, designated participant, or the featured speaker themselves.
- In a traditional classroom environment or meeting, an instructor or meeting organizer determines how to subdivide a group (e.g., by having participants count off, dividing into pre-arranged groups or using some other heuristic). Once the groups are organized, the groups typically shuffle around the room to a designated spot to work together. The organizer may walk around to interact with each group. Once re-assembled, the groups may take turns presenting.
- One embodiment of the invention provides support for the same functionality within a virtualized conferencing environment. Breakout groups can be formed by the virtual conferencing environment based on user profile information associated with each participant, previous interaction history associated with each participant or any other suitable historical data associated with each participant in the virtual conferencing environment. For example, this information includes past participation statistics associated with the participant, grades, performance in assignments, etc.
- In another embodiment, the participant leading the virtual conference can also affect how the breakout groups are formed. For example, the participant can select to move participants between the formed breakout groups (e.g., using a graphical click-and-drag mechanism or other suitable actions), or indicate which participants should be in the same breakout group when the breakout groups are formed.
- The participant leading the virtual conference can also determine a start and/or an end time associated with the session of formed breakout groups, for example, indicating when the breakout groups are formed and when the breakout groups are dissolved into additional breakout groups or one big group.
- In one embodiment, each breakout group is provided with a copy of all associated materials and/or resources from the main group (e.g., a class) and can include any additional materials and/or resources needed to perform an assigned task or other suitable action in the virtual conference. Any participant may be provided with the ability to upload any type of material, as appropriate. Furthermore when the breakout groups are re-assembled into one big group or one or more additional breakout groups, the participant leading the virtual conference can access and feature the participants and their work (e.g., through the materials and/or additional materials).
- One embodiment of a logical architecture and flow for forming breakout groups is illustrated in
Figure 3 . This architecture may be implemented in software modules executed within the virtual conferencing service 100, on the client machines 130, 140, 150, 160, or any combination thereof (e.g., with some operations performed on the virtual conferencing service and some on the clients). - In one embodiment, an active conference 310 is formed as participants log in and authenticate with the virtual conferencing service 100 (e.g., as participants arrive for class). A user database 305 containing user IDs and other pertinent information may be queried during the login process to uniquely identify each user. In one embodiment, a breakout group selection module 320 selects participants to be subdivided into breakout groups in accordance with input from the moderator 325 (e.g., a professor or instructor), the identity of active participants in the conference 341, and other user data 306 which may be retrieved from the user database 305 (or a different database).
- By way of example, and not limitation, the moderator input 325 may indicate that the moderator wishes for there to be four breakout groups, with randomly selected participants. In response, the breakout group selection module 320 will subdivide the active participants 341 into four groups, as close in size as possible. For example, if there are 28 students, then four groups of 7 participants will be created. If there are 26 students, then two groups of 7 and two groups of 6 will be formed. Rather than randomly selecting the participants, the breakout group selection module 320 may run through the list of active participants alphabetically (e.g., using the first or last names of the participants).
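The even subdivision described in this example (28 students into four groups of 7; 26 students into two groups of 7 and two groups of 6) can be sketched as a simple round-robin deal; the function name is illustrative:

```python
def form_breakout_groups(participants, num_groups):
    """Illustrative sketch: split participants into num_groups groups
    as close in size as possible by dealing them out round-robin."""
    groups = [[] for _ in range(num_groups)]
    # Sorting approximates the alphabetical traversal mentioned above;
    # for random selection the list could be shuffled instead.
    for i, participant in enumerate(sorted(participants)):
        groups[i % num_groups].append(participant)
    return groups
```

With 28 participants each group receives 7 members; with 26, the first two groups receive 7 and the last two receive 6.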
- Alternatively, the participants in each breakout group may be pre-assigned by the moderator ahead of the class or other meeting. In this embodiment, all that is required by the breakout group selection module 320 is the list of active participants 341.
- In one embodiment, the breakout group selection module 320 may select an initial set of breakout groups which the moderator may then review and modify. For example, the initial set may be selected based on user profile data or other pertinent data 306 stored in the user database 305 such as the performance of each user in the class (e.g., ensuring that each group includes at least some higher performing participants). Performance may be based on the current grade of each participant in the class, the cumulative time that each participant has talked, the grade on a recent exam, and/or additional information provided by the moderator.
- The breakout group selection module 320 may consider other pertinent data to generate the initial set of breakout groups such as participant relationships (e.g., frequency of interactions among participants); specific exercise outcomes; results from a poll (e.g., automatically grouping together participants who had similar or different responses); differing responses (e.g., automatically grouping together participants who had differing responses, in order to maximize likelihood of a productive learning exchange among participants); pre-class work; and order of arrival time to virtual conference or presence in virtual conference, to name a few. In one embodiment, the moderator may also specify a maximum size for breakout groups. The breakout group selection module 320 will then form the breakout groups in accordance with this maximum size.
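As one sketch of the poll-based criterion above, participants can be bucketed by their poll answer and then dealt across groups so that each group mixes differing responses. The function name and input shape (a mapping of participant to answer) are assumptions for illustration:

```python
from collections import defaultdict

def mixed_groups_from_poll(responses, num_groups):
    """Illustrative sketch: form breakout groups whose members gave
    differing poll answers, by bucketing participants per answer and
    dealing each bucket across the groups with a running counter."""
    buckets = defaultdict(list)
    for participant, answer in responses.items():
        buckets[answer].append(participant)
    groups = [[] for _ in range(num_groups)]
    g = 0  # continuous counter so consecutive same-answer participants land in different groups
    for bucket in buckets.values():
        for participant in bucket:
            groups[g % num_groups].append(participant)
            g += 1
    return groups
```

Grouping participants with similar responses is the simpler case: the answer buckets themselves can serve directly as the groups.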
- In one embodiment, breakout groups may be formed by an indication or a trigger from a participant or moderator (e.g., selection of a button, voice activated). The indication or trigger may be implemented within the virtual conference GUI or may be specified on a second screen or mobile device connected to the virtual conference.
- In one embodiment, once a breakout group is formed, the members of the breakout group will only receive and render video and/or audio of other members of the breakout group. The video/audio of the moderator may also be shared with the members of a breakout group when visiting the breakout group. This may be accomplished, for example, by muting the audio and disabling video rendering of streams for participants in all other groups. In another embodiment, the publish/subscribe mechanism in the multimedia stream distribution service 125 is updated so that a client subscribes only to the audio/video streams of the other participants in its group. Various other mechanisms may be employed to ensure that audio is contained within each breakout group.
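The subscription-update rule can be sketched as computing, for each participant, the set of peer streams to which that client should remain subscribed. This is an illustrative sketch; the function name is hypothetical and the moderator parameter models the case where the moderator is visiting the group:

```python
def breakout_subscriptions(participant, breakout_groups, moderator=None):
    """Illustrative sketch: return the set of participants whose
    audio/video streams this client should subscribe to, i.e. the
    other members of its own breakout group (plus a visiting moderator)."""
    for group in breakout_groups:
        if participant in group:
            peers = {p for p in group if p != participant}
            if moderator is not None:
                peers.add(moderator)
            return peers
    return set()  # not in any breakout group: no group-restricted streams
```

Recomputing this set when groups form or dissolve, and updating the publish/subscribe mechanism accordingly, keeps audio contained within each breakout group.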
- In one embodiment, End-of-Breakout indications are generated, warning when breakout groups are about to end and/or that the breakout groups will be formed into additional breakout groups or a larger group (e.g., the original group). The indications may be visual (e.g., via a pop-up window), audible (e.g., via an alarm or ringtone), or any combination thereof.
- In addition to having the ability to "visit" breakout groups, the professor or teacher may broadcast audio/video or messages to all of the breakout groups, and may also receive messages from one or more of the breakout groups (e.g., questions posed by participants).
- Returning to
Figure 3 , once the breakout groups 328 have been selected (i.e., the users in each breakout group identified using the techniques described above), breakout group generation logic 330 generates an instance of each breakout group, which may include (among other things) copies of materials from a materials database 334 to be used during the breakout sessions. In Figure 3 , for example, group materials 351 and 352 are provided to breakout groups 341 and 342, respectively. In one embodiment, the group materials 351 and 352 are copies of materials from the materials database 334 which may be independently edited by each respective group, 341 and 342. In addition, the same set of shared materials 360 may also be made available and shared by each breakout group. - In one embodiment, the materials and/or resources that may be distributed to all breakout groups include (but are not limited to) YouTube videos; PDF files; PowerPoint files; URLs; document notes; picture files in different forms; sound files (e.g., MP3); links to online sites; and any other visible or audible material capable of being reviewed and/or edited during the breakout session.
- In one embodiment, the participants in each breakout group are provided with a shared text editor and whiteboard function via a note element in the virtual conference. The shared text editor may be implemented by program code executed on each client and/or the virtual conferencing service 100. Each participant in a breakout group can also add material or documents not visible to other breakout groups. These additional external materials may be kept private to the participants of the specific breakout group (i.e., stored as group materials 351-352 in
Figure 3 ). - In one embodiment, each breakout group is provided with a tool to draw and annotate on top of shared or private materials or resources. The annotation tool may be provided as program code executed on each client 130, 140, 150, 160 or on the virtual conferencing service 100 (or both).
- One embodiment of the invention provides for group-specific dispersion of material. For example, the professor, teacher or other form of moderator may send particular documents and other resources (e.g., homework) to specific breakout groups (e.g., based on participants in the breakout group).
- As mentioned, in one embodiment, the moderator (e.g., professor or teacher) may send a written-text or spoken-audio message to all breakout groups and may join a breakout group and leave the breakout group to return to a bird's-eye overview of all breakout groups. In addition, the moderator may audibly listen to all/each breakout group individually without joining each breakout group and may oversee work happening within all breakout groups. The moderator may also view the materials being edited by each of the breakout groups (e.g., shared notes as they are being typed; whiteboards as they are being drawn; annotations as they are being added). The moderator may further respond to individual questions raised by specific breakout groups; move/drag a participant from one breakout group into another breakout group or out of the formed breakout groups completely; and cancel breakout group formation and return to additional breakout groups or one big group.
- In one embodiment, a breakout group can be featured (to other participants not in the breakout group). For example, the moderator may select the breakout group (e.g., click, voice activate) resulting in the presentation of the breakout group (and all the participants in the selected breakout group) in the center of the virtual conference. In one embodiment, when a breakout group is presenting, the dynamic state synchronization service 120 will ensure that the state updates on each client cause the members of the breakout group to have the center of attention. The moderator may also minimize the presentation of other participants not in the selected breakout group. Materials associated with the selected or featured breakout group may be presented in a similar manner.
- Additional graphical user interface (GUI) features are illustrated in
Figures 4-18. Figure 4 illustrates an embodiment which may be displayed once a participant has logged in to the virtual conferencing service. A class initialization graphic 401 provides an indication of the amount of time before class begins (5 minutes in the example). The user may select the graphic to enter into the virtual classroom. - Once the participant has selected the class initialization graphic 401, the participant is taken to a pre-class user interface such as shown in
Figure 5 . In this embodiment, video thumbnails of other participants who have logged in to the classroom are displayed within a pre-class discussion region 501. A set of tools 502 are also provided to allow users to text one another, open personal video chat sessions, etc. -
Figure 6 illustrates multiple students sharing a single screen on a projector for the class and using separate devices (e.g., computers, tablets, etc) to interact with the virtual classroom (e.g., to speak in the center position, provide text comments and/or questions, etc). -
Figure 7A illustrates the graphical user interface when the professor initially joins the video conference. As illustrated, the participant thumbnails 701 are arranged randomly around the main display. In contrast, in Figure 7B , the participant thumbnails 701 have been moved in an organized manner to the top of the display (no longer obscuring the primary speaker region). The order of the participant thumbnails 701 may be alphabetical or based on other variables such as grades or class performance. - As mentioned, the current speaker may rely on various visual materials during the class such as a PowerPoint presentation or other graphical materials.
Figure 8 illustrates one embodiment in which the speaker is relying on materials 802 which are displayed in the center region of the display. In one embodiment, this is enabled by providing the speaker with a control button (physical or graphical) which, when selected, allows the speaker to identify materials to be displayed during the class. The video image of the speaker is offset to a thumbnail image 801 towards the bottom of the display which is differentiated from the participant thumbnails 701 based on location and size (i.e., the speaker thumbnail 801 is larger than the participant thumbnails 701). - In one embodiment, the professor uses gesture controls to manipulate the content in the speaker materials. For example, in
Figure 8 , the professor is rotating his hand to cause the image of a human brain to rotate within the primary display region. Gesture controls may be implemented by capturing sensor data from a motion controller attached to the professor's computer, and using it to modify or reposition (e.g. 3-D rotate) the content. Through the publish-subscribe mechanism, the stream of sensor data that triggered these modifications can be replicated in the view of all other clients in the class/conference. - In one embodiment, students/participants are provided with a graphic to "raise a hand" during the class/conference. The professor or other moderator will be provided with a visual indication of a student raising a hand (e.g., via the student's thumbnail being highlighted with a hand icon or other form of highlight graphic) and may acknowledge the student by selecting the student's thumbnail.
Figure 9 illustrates a video region 901 for displaying the video feed of the student who has been acknowledged by the professor. She is brought into the main element or center element along with the professor in response to the professor's acknowledgement. -
Figure 10 illustrates a poll which is conducted for the purpose of forming breakout groups. That is, the breakout groups are initially determined by participants' answers to the poll. The breakout groups can include the participants who voted similarly or can be a mixed group including participants who all voted differently. In the illustrated embodiment, the participants' answers are shown but in another embodiment the answers can be anonymous. -
Figure 11 illustrates a GUI feature in which the professor has selected an option to overlay material over the primary speaker region in the center of the display (in contrast to the embodiment shown in Figure 8 in which the material is displayed in the center and the video image of the professor is offset to the side). The professor may specify this option using a different control key or graphic. The overlaid material in this embodiment may also be a real-time simulation. -
Figure 12 illustrates an embodiment which includes two primary speaker regions 1201-1202 within the GUI. This embodiment may be used, for example, to enable debates between two or more participants or to allow two representatives of a breakout group to present results. Additional users may be added within additional speaker regions. For example, N adjacent regions may be used for N different users during a debate or during a breakout group presentation. In one embodiment, the thumbnails of the users may be removed from the participant thumbnail region 701 when the participants are shown in the current speaker regions 1201-1202. - As mentioned, in one embodiment, users are provided with the ability to view and annotate material via a touch-screen device such as a tablet device.
Figure 13 illustrates one embodiment of a tablet on which material is presented and annotations of material are made by a participant of the conference. A participant (e.g., as shown in the slightly enlarged visual cue on the top of the GUI) presents material and can annotate the material in front of the class or conference. Each participant may or may not have the ability to annotate the material. In one embodiment, the professor is provided with the ability to annotate the material and may grant access to other participants. -
Figure 14 illustrates an exemplary message and GUI displayed to participants who are about to be placed in a breakout group. In the illustrated embodiment, the GUI includes a set of breakout group thumbnails comprising still pictures or video of the participants in the breakout group. -
Figures 15A-B illustrate an exemplary breakout group GUI including a vertically arranged set of breakout group thumbnails 1501, breakout group materials 1502, and notes 1503 recorded by the breakout group. In addition, Figure 15B shows how the breakout group materials 1502 may be edited with annotations 1504 (e.g., performed via a touchscreen, mouse, or other user input device). - In one embodiment of the invention, the professor or teacher may be available to meet with students during office hours.
Figure 16A illustrates an exemplary embodiment in which video of a participant is displayed in current speaker region 1601 as the participant is meeting with a professor during office hours, with video of the professor displayed in current speaker region 1602. Figure 16B illustrates an exemplary embodiment in which the student and professor review the student's performance in the class, as indicated by student performance data 1605. In this embodiment, video of the student and professor is displayed within thumbnail images 1605. As illustrated in Figure 16C , the student and professor may review the student's participation during the class, which is replayed in region 1610. As previously discussed, the audio and/or video from the class may be stored and replayed from the external multimedia storage service 190. - As mentioned above, in one embodiment of the invention, the dynamic state synchronization service 120 interacts with the various clients 130, 140, 150, 160 to ensure that the state of each client is consistent (e.g., the current state of the speaker queue, the identity of the participant currently in the center speaker position, the identity of participants in each breakout group, etc). As illustrated in
Figure 17 , one embodiment of the dynamic state synchronization service 120 includes publish-subscribe logic 1721 which allows each client to subscribe to receive state updates for every other client. In one embodiment, the publish-subscribe logic 1721 maintains a publication queue for each client and every client subscribes to the publication queues of every other client (i.e., to ensure that all state updates are received by every client). Thus, when client 130 transmits a state update to its publication queue, all of the clients 130, 140, 150 which subscribe to client 130's publication queue receive the state update. - In addition, in one embodiment, sequence numbering logic 1722 ensures that state updates are applied to each client in the correct order. For example, the sequence numbering logic 1722 may increment a counter in response to the receipt of each new state update received from each client. The current counter value may then be attached to each state update to ensure that the state updates are applied in the order in which they are received by the dynamic state synchronization service 120. For example, the publish-subscribe logic 1721 may construct a packet for each state update and may embed the counter value within a field in each packet prior to transmission to each client 130, 140, 150.
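The counter-stamping behavior of the sequence numbering logic might be sketched as follows. Names are illustrative, and delivery is modeled as local callbacks where the real service would transmit packets over the network:

```python
import itertools

class PublishSubscribeBroker:
    """Illustrative sketch: the service stamps each incoming state
    update with a monotonically increasing counter value so that every
    subscriber can apply updates in one global order."""

    def __init__(self):
        self._counter = itertools.count(1)
        self.subscribers = []   # callables receiving (sequence_number, update)

    def publish(self, update):
        # Attach the current counter value to the update before fan-out,
        # mirroring the embedded sequence-number field in each packet.
        packet = (next(self._counter), update)
        for deliver in self.subscribers:
            deliver(packet)
        return packet
```

Because every client subscribes to every publication queue, each client observes the same globally ordered stream of stamped updates.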
- In one embodiment, each client 130, 140, 150 includes state management logic 1701, 1702, 1703, respectively, which processes the state updates to maintain a consistent local state 135, 145, 155, respectively. The state management logic 1701, 1702, 1703 maintains a global reorder buffer 1711, 1721, 1731 into which all of the state updates are initially stored. Because packets may sometimes be received over the Internet out of order, the global reorder buffer is used to reorder the state updates when necessary to ensure that the state updates are applied in the same order as the counter values associated with each state update.
- In addition, in one embodiment, the state management logic 1701, 1702, 1703 assigns a publisher sequence number to indicate the order of state updates generated locally on its client 130, 140, 150, respectively. For example, if a participant on client 130 generates a request to be the current speaker, then sends a request to ask a question, and then removes the request to be the current speaker, the state management logic 1701 may assign a sequence number to each of these state updates to indicate the order in which they were submitted. The publisher sequence numbers are transmitted along with each state update to the publish-subscribe logic 1721 and are received by each individual client. To ensure that the state updates are applied in the same order as they were generated, the state management logic 1701, 1702, 1703, maintains a set of publisher reorder buffers 1712-1714, 1722-1724, 1732-1734, respectively, which may be chained to the global reorder buffers 1711, 1721, 1731, respectively. The state management logic 1701-1703 reorders the state updates within the publisher reorder buffers 1712-1714, 1722-1724, 1732-1734 in accordance with the publisher sequence numbers to ensure that the state updates are applied in the same order in which they were generated on each client.
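A minimal sketch of such a reorder buffer, holding early packets until the gap in sequence numbers is filled, could look like this (illustrative names; the same structure serves for both the global buffer and the per-publisher buffers chained to it):

```python
import heapq

class ReorderBuffer:
    """Illustrative sketch: packets stamped with sequence numbers may
    arrive out of order over the Internet; updates are released to the
    application only in consecutive sequence order."""

    def __init__(self):
        self._heap = []      # min-heap of (sequence_number, update)
        self._next_seq = 1   # next sequence number expected for delivery

    def receive(self, seq, update):
        # Buffer the packet, then release every update that is now
        # contiguous with the delivery point. Returns the deliverable
        # updates (possibly none, if a gap remains).
        heapq.heappush(self._heap, (seq, update))
        ready = []
        while self._heap and self._heap[0][0] == self._next_seq:
            ready.append(heapq.heappop(self._heap)[1])
            self._next_seq += 1
        return ready
```

An update that arrives ahead of a missing predecessor simply waits in the buffer until the predecessor arrives, after which both are delivered in order.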
- The end result is that the global order of state updates is maintained, based on the order in which state updates are received by the publish-subscribe logic 1721 and program order is maintained based on the sequence of operations performed on each individual client.
- Because participants may arrive to the virtual classroom (or other type of virtual conference) at different times, one embodiment of the invention includes techniques for initializing each newly-arrived client with the correct state. As illustrated in
Figure 18 , this is accomplished in one embodiment with the persistent state manager 110 (briefly mentioned above) which maintains the current state of each client within a state database 115. Each time a state update is generated at a client, that client initially transmits an indication of the state update to the persistent state manager 110, which stores the state update within the state database 115. The client then connects with the publish-subscribe logic 1721 to publish the state update to the other clients. Thus, the state database 115 contains a persistent representation of the current state of all of the clients. - In one embodiment, when a new client 1810 comes online (e.g., in response to the participant joining an ongoing class), its state management logic 1820 performs the following operations to initialize its local state 1815. In one embodiment, the state management logic 1820 first establishes a connection with the publish-subscribe logic 1721, subscribing to all state updates published by all other clients and to its own state updates (as previously described). It then begins buffering all state updates received from the publish-subscribe logic 1721. In one embodiment, the state management logic 1820 then connects with the persistent state manager 110 to receive a copy of the current persistent state stored in the state database 115. Given transactional delays over the Internet, during the period of time when the initial connection is made to the persistent state manager 110 and the time when the state is downloaded from the state database 115, there may be changes made to the persistent state within the state database 115. Moreover, some state updates which the state management logic 1820 receives from the publish-subscribe logic 1721 may already be reflected in the state database 115 (i.e., because the state management logic 1820 connects first to the publish-subscribe logic 1721). 
Consequently, following the retrieval of the state from the state database 115, the state management logic 1820 may have a superset of all of the state data needed to initialize its local state 1815. It may include redundant state updates - some of which are reflected in the persistent state from the state database and some of which were received from the publish-subscribe logic.
- To ensure that these redundancies are resolved consistently, one embodiment of the invention ensures that all state updates are idempotent. As understood by those of skill in the art, idempotence is a property of operations in computer science that can be applied multiple times without changing the result beyond the initial application. Thus, for example, if the participant on client 130 requests to be added to the speaker queue, this state update may be applied multiple times on the new client 1810 (e.g., once from the state database 115 and once from the publish-subscribe logic 1721) to achieve the same local state 1815 (i.e., the second application of the state update will not alter the final local state 1815). Thus, by ensuring that all state updates are idempotent, redundant state updates may simply be applied multiple times without affecting the underlying local state of each client.
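The idempotence property described above can be illustrated with a short sketch. All identifiers here (SpeakerQueueState, apply_update, the update dictionary fields) are illustrative assumptions, not part of the claimed system:

```python
# Hypothetical sketch of an idempotent "join speaker queue" state update.
# Applying the same update twice leaves the local state unchanged after
# the first application.

class SpeakerQueueState:
    """Local state holding the ordered speaker queue."""
    def __init__(self):
        self.queue = []  # ordered participant identifiers

    def apply_update(self, update):
        """Apply a state update; re-applying the same update is a no-op."""
        if update["op"] == "enqueue":
            if update["participant"] not in self.queue:  # idempotence guard
                self.queue.append(update["participant"])
        elif update["op"] == "dequeue":
            if update["participant"] in self.queue:
                self.queue.remove(update["participant"])

state = SpeakerQueueState()
update = {"op": "enqueue", "participant": "client-130"}
state.apply_update(update)   # applied once (e.g., from the state database)
state.apply_update(update)   # applied again (e.g., from publish-subscribe)
assert state.queue == ["client-130"]  # second application changed nothing
```

Because the guard makes each operation a no-op after its first application, the order in which redundant copies arrive does not matter.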
- In summary, once the state management logic 1820 has received and applied the copy of the persistent state from the state database 115 and applied all of the state updates received from the publish-subscribe logic (some of which may be redundant), the local state 1815 on the new client 1810 will be consistent with the local states 135, 145 of the other clients 130, 140.
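The initialization sequence summarized above (subscribe first, buffer updates, fetch the persistent snapshot, then apply both) can be sketched as follows. The function and field names are illustrative assumptions; a set is used because set insertion is naturally idempotent:

```python
# Illustrative sketch of new-client state initialization: apply the
# persistent snapshot, then the updates buffered from publish-subscribe.
# Any update present in both lists is simply applied twice, which is safe
# because every update is idempotent.

def initialize_local_state(snapshot_updates, buffered_updates, apply_fn):
    """Build the local state from the snapshot plus buffered updates."""
    local_state = set()
    for u in snapshot_updates:   # state copied from the state database
        apply_fn(local_state, u)
    for u in buffered_updates:   # updates buffered during the download
        apply_fn(local_state, u)
    return local_state

def apply_fn(state, update):
    state.add(update)  # set.add is idempotent by construction

# One update ("hand-raised:client-140") arrives via both paths:
snapshot = ["hand-raised:client-130", "hand-raised:client-140"]
buffered = ["hand-raised:client-140", "hand-raised:client-150"]
final = initialize_local_state(snapshot, buffered, apply_fn)
assert final == {"hand-raised:client-130", "hand-raised:client-140",
                 "hand-raised:client-150"}
```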
- In order to ensure a responsive user interface, one embodiment of the state management logic 1820 applies speculative state updates locally, in response to input from the local participant, and then resolves the state updates to ensure consistency upon receipt of state update responses from the publish-subscribe logic 1721. For example, in response to the participant on client 1810 selecting and holding the queue key, the state management logic 1820 may instantly place the participant in the speaker queue and/or place the participant in the center speaker region (if the participant is first in the queue). Thus, the state update will be instantly reflected in the graphical user interface of the participant, resulting in a positive user experience.
- The state management logic 1820 then transmits the state update to the publish-subscribe logic 1721 where it is assigned a sequence number as described above. Because the client 1810 subscribes to its own publication queue as well as those of all other clients, the client 1810 will receive the state update from the publish-subscribe logic 1721. Upon receiving its own state update, the client 1810 applies both the global and publisher reorder buffers to ensure proper ordering, and then re-applies the update. The second application of the state update ensures state consistency since the proper ordering is maintained. Re-applying an update is safe to do because of the idempotent property of state updates, as mentioned above.
- There is the possibility of flicker in the user interface if there was an intervening, conflicting update to client 1810 between the first application of the state update and the second. That flicker will not affect state consistency, but it can cause a visual effect that is undesirable to the user. In one embodiment, some instances of flicker are eliminated by explicitly detecting conflicting state updates. To detect conflicting state updates, each incoming state update to client 1810 is checked against a queue of speculatively applied state changes to see if it will affect state that was speculatively applied. If a conflicting incoming state update is detected, client 1810 will not apply that update in one important case, specifically when client 1810 has already applied the state update as a speculative update (i.e., client 1810 published the state update) and no other conflicting state updates have been detected. This optimization eliminates flicker when, for instance, a user requests entry into the speaker queue and then quickly (in less than the round trip time to the publish-subscribe server) requests to be removed from the speaker queue.
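A simplified sketch of the flicker-avoidance check follows. It tracks a queue of speculatively applied updates and skips re-application when the client's own update echoes back; for brevity it omits the intervening-conflict check described above, and all identifiers are illustrative assumptions:

```python
# Hedged sketch: skip re-applying a client's own echoed update to avoid
# visible flicker. Real conflict detection against intervening updates is
# omitted for brevity.

class SpeculativeStateManager:
    def __init__(self, client_id):
        self.client_id = client_id
        self.speculative = []  # updates applied locally, awaiting echo
        self.applied = []      # updates actually applied to local state

    def apply_local(self, update):
        """Apply immediately for a responsive UI; remember it as speculative."""
        self.speculative.append(update)
        self.applied.append(update)

    def on_incoming(self, update):
        """Handle an update echoed back from the publish-subscribe service."""
        if update in self.speculative:
            # Our own speculative update with no detected conflict: skip the
            # redundant re-application so the UI does not flicker.
            self.speculative.remove(update)
            return "skipped"
        self.applied.append(update)  # update from another client
        return "applied"

mgr = SpeculativeStateManager("client-1810")
u = ("client-1810", "enqueue")
mgr.apply_local(u)
assert mgr.on_incoming(u) == "skipped"  # own echo: not re-applied
assert mgr.applied.count(u) == 1        # no duplicate application
```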
- As illustrated in
Figure 19 , in one embodiment, the multimedia stream distribution service 125 includes stream forwarding logic 1920 for managing the receipt and distribution of audio and video streams for each of the clients 130, 140, 150. In particular, in one embodiment, each client 130, 140, 150 captures audio and/or video of its participant and streams the audio/video to the stream forwarding logic 1920, which forwards the audio/video streams to each of the clients 130, 140, 150. A video camera 1910 and microphone are illustrated for capturing video and audio of the participant, respectively. Each client 130 also includes a display on which the GUI 132 is rendered and speakers 1912 for generating the audio of other participants. In one embodiment, audio/video (A/V) compression and decompression logic 1902 compresses audio and video of the participant and the compressed audio/video is then streamed to the stream forwarding logic 1920 by the virtual conferencing app or browser 1901. While the A/V compression/decompression logic is shown integrated within the app/browser in Figure 19, this may be a logically separate module which is accessed by the app/browser. - In one embodiment, the app/browser 1901 of each client 130, 140, 150 establishes a web socket connection with the stream forwarding logic 1920 to receive streams from each of the other clients. The stream forwarding logic 1920 may distribute audio/video using a publish/subscribe mechanism where each client subscribes to the audio and video feeds of all other clients. The stream forwarding logic then forwards the incoming audio/video feeds to all subscribing clients.
- Upon receiving the audio and video from other clients, the A/V decompression logic 1902 decompresses/decodes the audio and video streams, renders the video within the GUI (e.g., within the thumbnail images or within the center speaker region as described above) and outputs the decoded audio through the speakers 1912.
- In one embodiment, the A/V compression/decompression logic 1902 adjusts the compression on the video of the participant depending on the size of the video image of the participant shown within the GUI. For example, if the participant is the current speaker (i.e., at the top of the speaker queue), the A/V compression/decompression logic 1902 may encode the video at a relatively higher resolution and/or frame rate, because a higher resolution is needed to provide an acceptable level of video quality for the relatively larger speaker region. In contrast, if the participant is not the current speaker, then the compression/decompression logic 1902 may encode the video at a relatively lower resolution and/or frame rate to provide an acceptable quality for displaying video within a thumbnail region. The app or browser 1901 may determine the required size of the video image (e.g., whether the user is the current speaker) by reading the local state data 135 stored on the client. In one embodiment, the app/browser 1901 may specify a desired bitrate to the A/V compression/decompression logic 1902 which will then adjust the resolution and/or frame rate accordingly. These techniques will help to keep the bitrate at a reasonable level because if there is only one speaker, for example, then only one high quality stream will be transmitted to all clients. In one embodiment, when a new participant becomes the current speaker, this will be reflected in the state data of each client and the app or browser will control the A/V compression/decompression logic accordingly (i.e., to increase the resolution and frame rate of the video stream showing the new speaker).
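The role-dependent encoding decision described above might look like the following sketch. The specific resolutions, frame rates, and bitrate fractions are assumptions chosen for illustration, not values from the specification:

```python
# Illustrative sketch: pick encoding parameters for the outgoing stream
# based on whether the participant is the current speaker (read from the
# local state data). All concrete values are assumptions.

def encoding_params(is_current_speaker, available_bitrate_bps):
    """Return resolution/frame-rate/bitrate for the outgoing video stream."""
    if is_current_speaker:
        # The larger center speaker region needs a higher-quality stream.
        return {"width": 1280, "height": 720, "fps": 30,
                "bitrate": int(available_bitrate_bps * 0.5)}
    # Thumbnail-sized video tolerates much more aggressive compression.
    return {"width": 320, "height": 180, "fps": 15,
            "bitrate": int(available_bitrate_bps * 0.1)}

speaker = encoding_params(True, 2_000_000)
thumb = encoding_params(False, 2_000_000)
assert speaker["bitrate"] == 1_000_000  # half the budget for the featured stream
assert thumb["bitrate"] == 200_000
```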
- In one embodiment of the invention, each app or browser 1901 performs dynamic bitrate adaptation based on the bitrate available to each client and the requirements of the various video regions within the GUI. For example, if 2 Mbps is available to a particular client 130, then (using
Figure 12 as an example GUI) the app/browser 1901 may instruct the A/V compression/decompression logic 1902 to allocate 1 Mbps to encode both of the current speaker regions 1201-1202 and may allocate the remaining 1 Mbps to encode all of the participant thumbnails 701. The A/V compression/decompression logic 1902 will then compress/decompress video in accordance with the allocated bitrates for each region of the GUI. In addition, in one embodiment, each participant may be provided the ability to select different quality levels to be used when encoding the participant's outgoing video stream. By way of example, these selectable levels may include high quality, low quality, and audio only (i.e., no video feed). - As mentioned, the multimedia storage service 190 may capture and store audio and video of a class (or other virtual conference) for subsequent playback. As illustrated in
Figure 20 , in one embodiment, the multimedia storage service 190 may be treated like any other client and may be configured to receive and record all audio/video streams for all participants on a storage device 2000. The data format used may comprise a plurality of audio and video clips of each of the participants. In addition, a timestamp may be associated with each audio and video clip which may be used to reconstruct the playback of the virtual class (i.e., to ensure that each audio and video clip is played back at the appropriate time). - As mentioned above, the video and audio content stored on the multimedia storage service 190 may be a higher quality than the audio/video used during the live virtual conference. For example, as illustrated in
Figure 20 , local audio and/or video capture logic 2005 on each individual client may capture higher quality video and audio than may be possible to stream through the multimedia stream distribution service 125. The higher quality audio/video may be stored locally, as a set of audio and/or video clips on a storage device 2001 of each client 130 during the virtual conference. When the conference has ended, these clips may be uploaded to the storage device 2000 on the multimedia storage service 190. For example, each time a participant speaks, a local audio clip of the user's voice (e.g., an MP3 or AAC clip) may be recorded and subsequently uploaded to the multimedia storage service 190. Additionally, state data 135, 145, 155, 165, timestamp data, and/or any other data usable to reconstruct the virtual conference for playback may be collected and stored on the multimedia storage service 190 (as described in greater detail below). - In one embodiment, the recorded audio/video from the virtual conference 2000 may include audio/video and other content generated by each of the breakout groups. In this embodiment, each of the audio/video clips may be associated with an identifier identifying the breakout group from which they were collected. In this manner, the professor or teacher may individually play back the audio/video and other content to reconstruct and review the discussion and content generated by each breakout group.
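Playback reconstruction from timestamped clips, optionally filtered by breakout-group identifier, can be sketched as follows. The clip field names ("file", "timestamp", "breakout") are illustrative assumptions about the stored data format:

```python
# Hedged sketch: order timestamped clips for playback, optionally
# restricting the result to a single breakout group.

def playback_schedule(clips, breakout_id=None):
    """Return clips in playback order; optionally restrict to one group."""
    selected = [c for c in clips
                if breakout_id is None or c["breakout"] == breakout_id]
    return sorted(selected, key=lambda c: c["timestamp"])

clips = [
    {"file": "prof_intro.aac",  "timestamp": 0.0,  "breakout": None},
    {"file": "groupA_disc.mp4", "timestamp": 65.2, "breakout": "A"},
    {"file": "groupB_disc.mp4", "timestamp": 64.8, "breakout": "B"},
]
ordered = playback_schedule(clips)
assert [c["file"] for c in ordered] == [
    "prof_intro.aac", "groupB_disc.mp4", "groupA_disc.mp4"]
assert playback_schedule(clips, "A")[0]["file"] == "groupA_disc.mp4"
```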
- In one embodiment, playback of audio, video, and other content is performed using a virtual conference playback tool. The playback tool may be implemented as a separate app or application or as a browser plug-in.
- While the embodiment described above relies on a central virtual conferencing service 100 to establish connections between clients and to stream video and audio between the clients, the underlying principles of the invention are not limited to this particular implementation. For example, in one embodiment, the clients are configured to establish peer-to-peer connections with one another, either without a central server (e.g., using a peer-to-peer networking protocol), or using a central server solely as a directory server, to lookup the network addresses of the other clients participating in the virtual conference. Once connected via peer-to-peer connections, the clients may implement the same state synchronization techniques described above, including management of the speaker queue and breakout groups. In addition, in this embodiment, the clients establish direct connections with one another to exchange video and audio of the participants.
- Alternatively, rather than merely forwarding video and audio streams between participants, the central virtual conferencing service 100 may compress/recompress the video and/or audio based on the capabilities of each individual client (e.g., reducing the resolution and/or frame rate for clients with smaller displays or lower-bitrate network connections). In addition, in one embodiment, the virtual conferencing service 100 may combine the video streams from each of the clients into a single video stream that is then streamed to all of the clients (e.g., compositing all of the video streams into a single video frame, which is then compressed and streamed to the clients).
- In addition, various different forms of video and audio compression may be used by the clients and/or the virtual conferencing service 100 while still complying with the underlying principles of the invention. This includes, but is not limited to, H.264, VP8, and VP9 for video coding and Opus and iSAC for audio coding.
- As mentioned above, in some virtual conferencing systems, the meeting organizer or moderator is provided with control over the state of the virtual conferencing system via a graphical control panel. For example, when it is time to set up a debate between two or more students, the professor uses the control panel to manually rearrange the graphical user interface to include two or more speaker positions and identifies the students to participate in the debate. Similarly, to subdivide the class into breakout groups, the professor uses the control panel to manually specify the size of the breakout groups, identify the students in each group, provide the necessary materials for each group to use during the breakout session, and specify the duration of the breakout period. When the breakout period is over, the professor again uses the control panel to rearrange the graphical user interface to review the results of each breakout group. As another example, when a poll is to be conducted, the professor uses the control panel to initiate the poll, which may involve additional modifications to the graphical user interface.
- Requiring the instructor (or other moderator) to manually perform all of the above operations during the course of a class (or other type of virtual conference) can be distracting and time consuming. To address this problem, one embodiment of the invention comprises an interactive video conferencing timeline which includes a graphical representation of each of the ordered events scheduled to occur during the course of a virtual conference. To perform the sequence of operations required to implement an event, the professor (or other moderator) simply selects the graphical representation corresponding to the event. In an alternate implementation, the graphical representations may be selected automatically by the system, in accordance with timing information associated with each event.
- While the remainder of the discussion below will focus on an online classroom implementation, the underlying principles of the invention may be implemented in any virtual conferencing environment in which different events require changes to the virtual conferencing system configuration.
-
Figure 21A illustrates an exemplary graphical interactive timeline 2150 for use in an online classroom in which the lesson plan for the class is subdivided into a plurality of "sections" 2110-2111 and each section is subdivided into a plurality of "segments" 2120-2123, 2130 corresponding to scheduled events during the class. Selection of a segment from the timeline causes the client on which the timeline is displayed (typically the instructor's client) to transmit one or more commands to cause the video conferencing system to implement the operations associated with the segment 2120. In the illustrated example, segment 2120 is highlighted to indicate that this segment is currently being implemented by the online virtual conferencing system. Because this particular segment 2120 is associated with conducting a poll with video of a single participant being displayed in the central speaker region (as indicated by the "1-up" indication), the selection of this segment (either manually by the instructor or automatically) causes the client device on which the segment is selected to transmit one or more commands to the online video conferencing system to implement the poll using the "1-up" user interface arrangement. This may include, for example, generating the necessary data structures to collect the poll data and generating a graphical user interface which includes video of a single speaker in the speaker region (e.g., the professor) and a region which includes one or more poll questions to be answered by each participant. - In one embodiment, the dynamic state synchronization service 120 described above in detail synchronizes the state of each client in response to receipt of the commands.
For example, the dynamic state synchronization service 120 may open records in the state database 115 required to implement the online poll and may transmit synchronization signals to each of the clients participating in the online conference to ensure that the virtual conferencing graphical user interface is consistent across all of the clients.
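The translation of a selected segment into commands for the conferencing service might be sketched as follows. The command names and segment fields are illustrative assumptions, not the claimed protocol:

```python
# Hypothetical sketch: map a timeline segment's stored configuration to the
# commands the instructor's client would transmit when the segment is
# selected.

def commands_for_segment(segment):
    """Return the ordered commands implied by a segment's configuration."""
    cmds = [{"cmd": "set_layout", "layout": segment["layout"]}]
    if segment.get("poll"):
        cmds.append({"cmd": "start_poll", "question": segment["poll"]})
    for resource in segment.get("resources", []):
        cmds.append({"cmd": "show_resource", "resource": resource})
    return cmds

segment = {"title": "Conduct poll", "layout": "1-up",
           "poll": "Which argument was most persuasive?"}
cmds = commands_for_segment(segment)
assert cmds[0] == {"cmd": "set_layout", "layout": "1-up"}
assert cmds[1]["cmd"] == "start_poll"
```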
- In one embodiment, timing data may be associated with each of the sections and/or segments. For example, in
Figure 21A , the time allocated to each section is displayed within the graphical elements, 2110 and 2111, associated with each section (e.g., 10 minutes for each of sections 2110 and 2111). In addition, in one embodiment, an elapsed time indicator 2160 may be displayed showing the total amount of time which has elapsed during the class. The color of the elapsed time may be updated to provide an indication as to whether the class is proceeding in a timely manner. For example, green may indicate that the class is proceeding on schedule or ahead of schedule, yellow may indicate that the class is proceeding slightly behind schedule (e.g., < 5 minutes behind), and red may indicate that the class is proceeding significantly behind schedule. The system may determine how far the class has progressed based on the current segment highlighted within the timeline. - A notes section 2140 provides instructions to the professor related to each segment. For example, the notes 2140 may provide general instructions related to the purpose and/or goals of the segment. The notes 2140 may also provide instructor notes to which the instructor may refer. A first graphical element at the bottom of the timeline may be selected to display the notes and a second graphical button (e.g., located at the top of the notes) may be selected to hide the notes.
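The color logic for the elapsed-time indicator can be sketched directly from the description above; the 5-minute threshold comes from the text, while the function name and the exact boundary handling are illustrative assumptions:

```python
# Sketch of the elapsed-time indicator: green when on or ahead of schedule,
# yellow when less than 5 minutes behind, red otherwise.

def elapsed_time_color(scheduled_seconds, elapsed_seconds):
    """Return the indicator color for the class's progress."""
    behind = elapsed_seconds - scheduled_seconds
    if behind <= 0:
        return "green"   # on schedule or ahead of schedule
    if behind < 5 * 60:
        return "yellow"  # slightly behind (< 5 minutes)
    return "red"         # significantly behind

assert elapsed_time_color(600, 590) == "green"   # slightly ahead
assert elapsed_time_color(600, 780) == "yellow"  # 3 minutes behind
assert elapsed_time_color(600, 960) == "red"     # 6 minutes behind
```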
-
Figure 21B illustrates the exemplary graphical interactive timeline 2150 integrated within the video conferencing GUI 2320, which may include the various virtual conferencing features described above such as a central speaker region, and video thumbnails of the various conference participants. In one embodiment, the instructor and/or an assistant to the instructor are the only participants who are provided with access to the interactive timeline 2150. The timeline may be hidden and brought into the display via a graphical user interaction by the instructor. For example, in one embodiment, the instructor may select a graphical button or other element to the right of the GUI or may select a designated control key on the keyboard to bring the interactive timeline 2150 into view. - In one embodiment, when timeline events are active (e.g., in response to the instructor selecting one of the segments 2120-2123), the event that is highlighted automatically becomes disabled to prevent accidental re-triggering of the event. When breakouts or polls are active in the timeline, their main button will become disabled, but different selectable actions may be provided. For example, for breakouts, a "message all" option may appear which opens the breakout sidebar with keyboard focus on the message field (e.g., so the instructor may message all students). In addition, an "end breakout" option may appear allowing the instructor to end the breakout session. For polls, a "restart poll" option may appear to re-take a poll and an "end poll" option may appear to end the polling process. For a segment which provides an N-up (e.g., placing N students in the speaker regions), N random students may be selected and/or N students may be selected based on specified selection criteria (e.g., picking the least talkative N students to feature). In one embodiment, as soon as the event is no longer active the actions described above will disappear and the entire action will again become clickable.
-
Figures 22A and 22B illustrate how a lesson plan may be constructed and used to generate the graphical interactive timeline. In the embodiment shown in Figure 22A, a lesson plan 2201 may be constructed in a human-readable format prior to the online class (e.g., by a team of academic advisors working for the online university, by the instructor, etc.). Figure 23 illustrates one such implementation in which a lesson plan 2201 has been constructed using an online word processor (e.g., Google™ Docs). One section 2301 is illustrated which indicates, in a human-readable format, a title, a time limit, and a start and stop time associated with the section. The first segment 2302 includes a specification of the operations which need to be performed for the segment including setting up a "2-up" view which includes video of the instructor and a set of slides to be used for the segment. A script or set of instructions is also provided within the segment 2302. The top portion of a second segment 2303 is illustrated which indicates that six breakout groups are to be formed using 2-3 students in each group based on a particular attribute (e.g., frequency of participation in class, grade, polling results, or any of the other variables discussed above). - The lesson plan 2201 may be used to generate a machine-readable representation of the lesson plan 2203. For example, in
Figure 22A , machine-readable lesson plan generation logic 2202 uses the lesson plan 2201 to generate a machine-readable representation of the lesson plan 2203. For example, the machine-readable lesson plan generation logic 2202 may scan the lesson plan 2201 for certain keywords or fields and embed the data contained therein into the machine-readable representation 2203. In an alternate embodiment, the machine-readable representation of the lesson plan 2203 may be generated manually by a user (e.g., an academic team member) using the data contained in the lesson plan 2201. - Regardless of how the machine-readable representation of the lesson plan 2203 is generated, in one embodiment, it is generated in a YAML format, a well-known human-readable and machine-readable data serialization format (sometimes referred to as "Yet Another Markup Language" and sometimes using the recursive acronym "YAML Ain't Markup Language").
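A YAML lesson plan of the kind described above might parse into a nested structure like the following, shown here as the equivalent Python data. The field names ("sections", "segments", "layout", "notes") are assumptions based on the description, not the actual file format:

```python
# Illustrative machine-readable lesson plan: the structure a YAML file
# might deserialize into, with sections containing segments.

lesson_plan = {
    "sections": [
        {
            "title": "Section 1",
            "duration_in_seconds": 600,
            "segments": [
                {"title": "Conduct poll", "layout": "1-up",
                 "notes": "Remind students the poll is anonymous."},
                {"title": "Breakout", "layout": "breakout",
                 "notes": "Groups of 2-3 grouped by poll answer."},
            ],
        },
    ],
}

def timeline_labels(plan):
    """Flatten the plan into the ordered segment titles shown on the timeline."""
    return [seg["title"]
            for sec in plan["sections"]
            for seg in sec["segments"]]

assert timeline_labels(lesson_plan) == ["Conduct poll", "Breakout"]
```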
Figure 24 illustrates an exemplary portion of a YAML representation 2401 with arrows mapping section data and segment data to a graphical interactive timeline 2402. For example, "section 1" of the YAML representation 2401 includes a field for the section title and field for the duration of the segment. Each segment includes data indicating a title (e.g., "conduct poll"), an operation to be performed on the user interface (e.g., "1-up") and other pertinent information for implementing the section on the video conferencing system (e.g., specific panes and GUI features to be displayed). In addition, notes are provided which may be used by the instructor during the class. As mentioned, in one embodiment, the notes may be displayed beneath the interactive timeline 2402. - Returning to
Figure 22A , timeline generation logic 2204 interprets the machine-readable representation of the lesson plan 2203 to generate the timeline GUI and implement the underlying operations associated therewith. In one embodiment, the timeline generation logic 2204 is implemented as program code executed on one or more virtual conferencing servers within the virtual conferencing service 100. The various timeline GUI features and associated functions are then streamed to the instructor's client which may implement the timeline and other GUI features within the context of a Web browser or conferencing application installed on the client. Alternatively, the timeline generation logic 2204 may be executed directly on the instructor's client to generate the timeline GUI and associated functions. In this embodiment, the machine-readable representation 2203 may be sent directly to the instructor's client to be interpreted locally by the timeline generation logic 2204. Of course, the underlying principles of the invention are not limited to the particular location at which the program code for implementing the timeline is executed. - In another embodiment, illustrated in
Figure 22B , a graphical design application 2211 is used to construct the timeline for each class and responsively generate program code and/or a machine-readable representation of the lesson plan 2212. One example of such a graphical design application is illustrated in Figures 25A-C which includes a class timeline region 2501 comprising a series of entries into which different graphical objects may be moved by the lesson designer to construct each section and segment. In the example shown in Figure 25A, objects have been moved into a first set of entries 2510 (numbered 1-6), and a second set of entries 2511 (numbered 7-15) is available to receive new objects. In one embodiment, the lesson designer may create a new section and/or segment by clicking and dragging a new object into one of the open entries 2511. Different objects may be provided which represent different resources, tasks, and GUI configurations for each segment. In the example shown in Figure 25A, the objects include new documents 2502, saved resources 2503 such as PDFs, notes, videos, breakout groups 2504, polls 2505, and screen sharing 2506. A virtually unlimited number of such objects may be created and made available for the lesson designer to design each section and/or segment. In Figure 25B, the lesson designer has selected a second breakout object 2520 and is dragging the second breakout object towards the next open entry within the open entry region 2511 (entry #7). Figure 25C illustrates the second breakout object 2520 positioned within the set 2510 of selected objects. - In one embodiment, each object provided within the graphical design application may have a set of parameters from which the lesson designer may select.
For example, when selecting a new breakout group, a drop-down menu or other graphical selection structure may be provided to allow the lesson designer to select the parameters for the breakout group (e.g., the number of participants per group, the resources to be used during the session, etc). Similarly, when conducting a poll, the lesson designer may be provided with a design widget to enter a question and a set of possible responses. Various additional object-specific design features may be provided to allow the lesson designer to design each section and/or segment.
- In one embodiment, once the lesson designer has selected and configured a set of objects within the graphical design application 2211, the graphical design application 2211 will generate program code and/or a machine readable representation of the lesson plan 2212 which may then be interpreted by timeline generation logic 2213 to generate the timeline GUI and associated functions 2214 described herein. As mentioned, the generation of the timeline GUI and associated functions may be performed on the virtual conferencing service or locally on the instructor's client.
-
Figure 26A illustrates one particular implementation where a machine-readable representation of the lesson plan 2103 is interpreted by timeline generation logic 2104 on a virtual conferencing service 100 and the resulting timeline GUI and associated functions 2105 are transmitted to the instructor's client 160. In this particular example, the timeline GUI and associated functions are implemented within the context of the conferencing GUI 162 executed within a browser or application 161 on the instructor's client 160. In one embodiment, the timeline generation logic 2104 also establishes a database schema required to implement the functions of the timeline. The database schema may be established, for example, to set up the resources and other state required to implement each section and segment within the timeline. In one embodiment, the database schema is set up in accordance with the various operations and objects specified within the machine-readable representation of the lesson plan 2103. - In addition, in the illustrated embodiment, validation logic 2601 is employed which validates the machine-readable lesson plan 2103 prior to generating the timeline 2105 and database schema 2600. For example, the validation logic 2601 may parse and analyze the machine-readable representation of the lesson plan 2103 to ensure that no errors are present in the machine-readable representation. If the machine-readable representation is in a YAML format, for example, the validation logic 2601 may check to determine that the syntax used within the YAML file is valid and may also check to determine that various resources such as files referenced within the YAML file exist in the locations specified for the classroom.
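The validation step described above (checking required fields and confirming that referenced resources exist) can be sketched as follows. The field names and error-message format are illustrative assumptions:

```python
# Hedged sketch of lesson-plan validation: collect error strings for any
# missing required fields or unresolvable resource references. Validation
# passes when the returned list is empty.

def validate_lesson_plan(plan, available_resources):
    """Return a list of error strings; an empty list means validation passed."""
    errors = []
    for s, section in enumerate(plan.get("sections", [])):
        if "title" not in section:
            errors.append(f"section {s}: missing title")
        if "duration_in_seconds" not in section:
            errors.append(f"section {s}: missing duration")
        for g, seg in enumerate(section.get("segments", [])):
            for resource in seg.get("resources", []):
                if resource not in available_resources:
                    errors.append(
                        f"section {s} segment {g}: missing resource {resource!r}")
    return errors

plan = {"sections": [{"title": "Intro", "duration_in_seconds": 600,
                      "segments": [{"title": "Slides",
                                    "resources": ["intro.pdf"]}]}]}
assert validate_lesson_plan(plan, {"intro.pdf"}) == []       # passes
assert validate_lesson_plan(plan, set()) == [
    "section 0 segment 0: missing resource 'intro.pdf'"]     # reported back
```

As in the text, a failed validation would be reported to the user and the database left unchanged.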
- As mentioned, once the timeline has been generated on the instructor's computer, the instructor may readily implement all of the operations associated with a segment by selecting that segment. A variety of different operations may be included within a segment including, by way of example and not limitation, featuring a random or particular participant, or selecting a participant based on specified criteria (e.g., featuring the participant who has spoken the least in class thus far, the participant who scored the highest on a recent quiz, the participant who answered a particular poll question in a particular manner, etc). In addition, the segment may be configured to implement a variety of different graphical user interface elements such as featuring different numbers of participants within the speaker region(s) (e.g., 1-up, 2-up, 3-up, 4-up, 5-up, 6-up, or 8-up for 1-8 featured speakers, respectively) or displaying one or more resources such as PDFs, Youtube® videos, links to web sites, word processing or presentation documents, spreadsheets, photos, to name a few. Other operations included in a segment may include, but are not limited to, conducting a poll, displaying poll results to the participants, comparing poll results of multiple polls or the same poll conducted more than once (e.g., one conducted at the beginning of class and the other at the end of class), conducting a quiz, conducting a breakout session (e.g., selecting participants based on results of a poll), featuring breakout groups and their work-product (e.g., notes), pitting two students against one another in a debate, sharing the instructor's or a participant's screen, initiating an open discussion (with notes for the professor related to how to direct it), and allocating a time period for independent work.
- An exemplary database schema for the timeline 2600 is illustrated in
Figure 26B, including a timeline component 2610, a TimelineSection component 2611 and a TimelineSegment component 2612, which define the data structures used for the timeline, sections within the timeline, and segments within each section, respectively. - After a user enters the timeline specification (e.g., as a YAML file or via the user interface as described herein), it is sent to the server 100, which interprets it. Specifically, as mentioned above, validation logic 2601 may validate the YAML file or other machine-readable representation 2103, thereby ensuring that all references to resources, breakouts, polls, etc., exist and are accessible. In the case of the YAML file format, there are additional checks that the format is properly adhered to. If a problem is found, it is reported back to the user and the database is not updated.
- In one embodiment, if validation passes, the timeline specification goes through a "normalization" step (e.g., implemented by timeline generation logic 2104), whereby the human-interpretable YAML file is converted into a form that is more uniform and thus simpler and more efficient for the computer to interpret. In one embodiment, the normalized form is written into the database 2600 using the schema shown in
Figure 26B. - In the illustrated schema, a "Timeline" 2610 consists of zero or more "TimelineSection"s (connected through a foreign key). A "TimelineSection" consists of zero or more "TimelineSegment"s. Each TimelineSection has a "title" as well as a "duration_in_seconds" for keeping track of time during class and helping the instructor stay on track.
- Each TimelineSegment has a generic "details" text field, which contains all of the information needed to display the user interface components of the timeline and perform the operations that are part of that timeline segment. The "details" field is left generic because there is a great deal of variety in the types of operations a TimelineSegment might perform and, thus, a greater need for flexibility in what data is stored. Each TimelineSegment also has a "status" field, which indicates whether this TimelineSegment (1) has not yet been used by the professor; (2) is currently in-progress in class; or (3) has already been used. This state is used to maintain the Timeline graphical user interface.
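One way to realize the schema of Figure 26B is sketched below with an in-memory SQLite database. The "title", "duration_in_seconds", "details" and "status" columns come from the description above; the remaining table and column names, the text encoding of the three status values, and the use of SQLite itself are assumptions for illustration:

```python
import sqlite3

# In-memory sketch of the Timeline / TimelineSection / TimelineSegment
# schema; one-to-many links are expressed through foreign keys.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Timeline (
    id INTEGER PRIMARY KEY
);
CREATE TABLE TimelineSection (
    id INTEGER PRIMARY KEY,
    timeline_id INTEGER NOT NULL REFERENCES Timeline(id),
    title TEXT NOT NULL,
    duration_in_seconds INTEGER
);
CREATE TABLE TimelineSegment (
    id INTEGER PRIMARY KEY,
    section_id INTEGER NOT NULL REFERENCES TimelineSection(id),
    -- generic blob: its shape varies with the segment's operation
    details TEXT,
    -- (1) not yet used, (2) currently in progress, (3) already used
    status TEXT NOT NULL DEFAULT 'unused'
        CHECK (status IN ('unused', 'in-progress', 'used'))
);
""")
```

Keeping "details" as a free-form text column matches the flexibility requirement above: each operation type can serialize whatever structure it needs without schema changes.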
- As mentioned above, different TimelineSegment operations have different "details" field formats (which, in one embodiment, are stored as JSON). An example "details" field for displaying on stage one student with email address student@minerva.kgi.edu and the last poll result for the poll named "Reducing homelessness: Best practices" is shown below.
{
  title: "Discuss poll result",
  op: layout,
  number-of-panes: 2,
  duration: 120,
  panes: [
    { type: "person", email: "student@minerva.kgi.edu" },
    { type: "poll-result",
      name: "Reducing homelessness: Best practices",
      instance-back: 0 }
  ]
}
Title: Discuss poll result
Duration: 2m
Op: 2-up
Panes:
  - type: person
    email: student@minerva.kgi.edu
  - type: poll-result
    title: "Reducing homelessness: Best practices"
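The normalization step that maps the human-authored shorthand above onto the uniform machine form can be sketched as follows. The two rules shown (durations such as "2m" becoming integer seconds, and a "2-up" op becoming the generic "layout" op with an explicit pane count) are inferred from the pair of examples; any further rules are assumptions:

```python
import re

def normalize_segment(segment):
    """Normalize a human-authored timeline segment into the uniform
    machine form stored in the database.

    Handles two shorthands from the examples above: durations such as
    "2m"/"45s" become integer seconds, and layout ops such as "2-up"
    become the generic "layout" op with an explicit pane count.
    """
    out = dict(segment)
    dur = str(out.get("duration", "0"))
    if (m := re.fullmatch(r"(\d+)m", dur)):
        out["duration"] = int(m.group(1)) * 60   # minutes -> seconds
    elif (m := re.fullmatch(r"(\d+)s", dur)):
        out["duration"] = int(m.group(1))
    else:
        out["duration"] = int(dur)               # already numeric
    if (m := re.fullmatch(r"(\d+)-up", str(out.get("op", "")))):
        out["op"] = "layout"
        out["number-of-panes"] = int(m.group(1))
    return out
```

Applied to the YAML example, "Duration: 2m" and "Op: 2-up" yield the duration of 120 seconds and the two-pane layout op seen in the JSON form.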
Claims (8)
- A virtual conferencing system comprising:
  a plurality of clients (130, 140, 150, 160) operated by participants and at least one moderator of a virtual conference, each of the clients comprising state management logic (135, 145, 155, 165) configured to maintain a current state of the virtual conference;
  a virtual conferencing service (100) to establish audio and/or video connections between the plurality of clients during the virtual conference, the virtual conferencing service further including:
  a state synchronization service (120) communicatively coupled to the state management logic (110) on each client to ensure that the current state of the virtual conference is consistent on each client;
  a virtual conferencing graphical user interface, GUI (132, 142, 152, 162), to be rendered on the plurality of clients, the virtual conferencing GUI configured, via signals sent from the state synchronization service, to display a video stream of one or more current speakers during the virtual conference utilizing the established video connections; and
  a decision support module (3000) to evaluate the participants according to one or more criteria in the virtual conference and to select a subset of the participants as candidates to actively participate in the virtual conference based on the evaluation;
  a contribution identification module (3002) to identify events related to actions of participants during the virtual conference;
  an event log to store the events identified by the contribution identification module; and
  an event filter to provide options for searching for specific types of events within the event log based on input from the moderator and/or participants when reviewing a recording of the virtual conference, the event filter to generate a filtered set of events based on the input from the moderator and/or participants.
- The virtual conferencing system as in claim 1, wherein the operation of evaluating the participants comprises determining a score for each participant based on the one or more criteria.
- The virtual conferencing system as in any one of claims 1-2, wherein the operation of selecting a subset of participants comprises ordering/prioritizing the participants based on the score.
- The virtual conferencing system as in any one of claims 1-3, wherein the criterion comprises a set of sub-criterions, each of the set of sub-criterions associated with a corresponding weight.
- The virtual conferencing system as in claim 4, wherein a sum of the weights is unity.
- The virtual conferencing system as in any one of claims 1-5 further comprising:
  an event list to visually display the filtered set of events within a graphical user interface; and
  a media player to play back recorded portions of video and/or audio from the virtual conference corresponding to an event selected from within the event list.
- The virtual conferencing system as in claim 6 wherein the media player includes a graphical element to allow the user to jump ahead or jump back to different portions of the video and/or audio, wherein upon jumping ahead or back, an event is highlighted from within the event list corresponding to the point to which the jump was performed within the video and/or audio.
- The virtual conferencing system as in claim 6 wherein the event filter comprises options for filtering events by participant and by different event types and, optionally,
wherein the different types of events include speaking events comprising periods during which one or more of the participants speak during the virtual conference, typed comments submitted by participants during the virtual conference, and bookmark events indicating points in time at which the moderator entered bookmarks during the virtual conference.
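The event log and event filter recited in claims 1 and 8 can be sketched as follows; the `Event` structure, its field names, and the filter signature are illustrative assumptions rather than the patented implementation:

```python
from dataclasses import dataclass
from typing import Iterable, List, Optional, Set

@dataclass
class Event:
    kind: str          # e.g. "speaking", "comment", "bookmark"
    participant: str   # who generated the event
    timestamp: float   # seconds into the recorded conference

def filter_events(log: Iterable[Event],
                  kinds: Optional[Set[str]] = None,
                  participants: Optional[Set[str]] = None) -> List[Event]:
    """Return events matching the selected event types and participants.

    A None filter means "no restriction", mirroring a review UI in which
    the moderator may filter by participant, by event type, or by both.
    """
    return [e for e in log
            if (kinds is None or e.kind in kinds)
            and (participants is None or e.participant in participants)]
```

The filtered list would then back the event-list GUI, with each entry linked to the corresponding playback position in the media player.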
Priority Applications (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201462046879P true | 2014-09-05 | 2014-09-05 | |
US201462046880P true | 2014-09-05 | 2014-09-05 | |
US201462046859P true | 2014-09-05 | 2014-09-05 | |
US14/840,513 US9674244B2 (en) | 2014-09-05 | 2015-08-31 | System and method for discussion initiation and management in a virtual conference |
US14/840,438 US9578073B2 (en) | 2014-09-05 | 2015-08-31 | System and method for decision support in a virtual conference |
US14/840,471 US9674243B2 (en) | 2014-09-05 | 2015-08-31 | System and method for tracking events and providing feedback in a virtual conference |
PCT/US2015/048593 WO2016037084A1 (en) | 2014-09-05 | 2015-09-04 | System and method for tracking events and providing feedback in a virtual conference |
Publications (3)
Publication Number | Publication Date |
---|---|
EP3189622A1 EP3189622A1 (en) | 2017-07-12 |
EP3189622A4 EP3189622A4 (en) | 2018-06-13 |
EP3189622B1 true EP3189622B1 (en) | 2019-11-06 |
Family
ID=55438624
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP15837998.2A Active EP3189622B1 (en) | 2014-09-05 | 2015-09-04 | System and method for tracking events and providing feedback in a virtual conference |
Country Status (6)
Country | Link |
---|---|
US (6) | US9674243B2 (en) |
EP (1) | EP3189622B1 (en) |
JP (2) | JP6734852B2 (en) |
KR (1) | KR20170060023A (en) |
BR (1) | BR112017004387A2 (en) |
WO (1) | WO2016037084A1 (en) |
Families Citing this family (72)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8675847B2 (en) | 2007-01-03 | 2014-03-18 | Cisco Technology, Inc. | Scalable conference bridge |
US10126927B1 (en) | 2013-03-15 | 2018-11-13 | Study Social, Inc. | Collaborative, social online education and whiteboard techniques |
US10291597B2 (en) | 2014-08-14 | 2019-05-14 | Cisco Technology, Inc. | Sharing resources across multiple devices in online meetings |
US10691398B2 (en) * | 2014-09-30 | 2020-06-23 | Accenture Global Services Limited | Connected classroom |
US10542126B2 (en) | 2014-12-22 | 2020-01-21 | Cisco Technology, Inc. | Offline virtual participation in an online conference meeting |
CN104794173A (en) * | 2015-04-01 | 2015-07-22 | 惠州Tcl移动通信有限公司 | Photographing processing method and system based on mobile terminal |
US9948786B2 (en) | 2015-04-17 | 2018-04-17 | Cisco Technology, Inc. | Handling conferences using highly-distributed agents |
US10057308B2 (en) * | 2015-04-30 | 2018-08-21 | Adobe Systems Incorporated | Customizable reproduction of electronic meetings |
US9781174B2 (en) * | 2015-09-21 | 2017-10-03 | Fuji Xerox Co., Ltd. | Methods and systems for electronic communications feedback |
KR101685847B1 (en) * | 2015-10-05 | 2016-12-13 | 한국과학기술원 | Method and System for Mediating Proximal Group Users Smart Device Usage via Location-based Virtual Usage Limiting Spaces |
USD805529S1 (en) * | 2015-10-08 | 2017-12-19 | Smule, Inc. | Display screen or portion thereof with animated graphical user interface |
USD813266S1 (en) * | 2015-10-08 | 2018-03-20 | Smule, Inc. | Display screen or portion thereof with graphical user interface |
USD807381S1 (en) * | 2015-10-08 | 2018-01-09 | Smule, Inc. | Display screen or portion thereof with animated graphical user interface |
US10291762B2 (en) | 2015-12-04 | 2019-05-14 | Cisco Technology, Inc. | Docking station for mobile computing devices |
US10614418B2 (en) * | 2016-02-02 | 2020-04-07 | Ricoh Company, Ltd. | Conference support system, conference support method, and recording medium |
JP6429819B2 (en) * | 2016-03-18 | 2018-11-28 | ヤフー株式会社 | Information providing apparatus and information providing method |
US10476989B2 (en) * | 2016-04-10 | 2019-11-12 | Dolby Laboratories Licensing Corporation | Remote management system for cinema exhibition devices |
US10142380B2 (en) * | 2016-04-15 | 2018-11-27 | Microsoft Technology Licensing, Llc | Joining executable component to online conference |
US9813667B1 (en) * | 2016-04-20 | 2017-11-07 | Disney Enterprises, Inc. | System and method for providing co-delivery of content |
US10230774B2 (en) * | 2016-05-19 | 2019-03-12 | Microsoft Technology Licensing, Llc | Virtual meeting attendee |
US9841968B1 (en) | 2016-06-03 | 2017-12-12 | Afero, Inc. | Integrated development tool with preview functionality for an internet of things (IoT) system |
US9846577B1 (en) | 2016-06-03 | 2017-12-19 | Afero, Inc. | Integrated development tool with preview functionality for an internet of things (IoT) system |
AU2017203723A1 (en) * | 2016-06-07 | 2017-12-21 | David Nixon | Meeting management system and process |
US10574609B2 (en) | 2016-06-29 | 2020-02-25 | Cisco Technology, Inc. | Chat room access control |
US10553129B2 (en) * | 2016-07-27 | 2020-02-04 | David Nelson | System and method for recording, documenting and visualizing group conversations |
CN106331883A (en) * | 2016-08-23 | 2017-01-11 | 北京汉博信息技术有限公司 | Remote visualization data interaction method and system |
USD817991S1 (en) * | 2016-10-26 | 2018-05-15 | Apple Inc. | Display screen or portion thereof with graphical user interface |
USD820304S1 (en) * | 2016-10-27 | 2018-06-12 | Apple Inc. | Display screen or portion thereof with graphical user interface |
USD815140S1 (en) * | 2016-10-27 | 2018-04-10 | Apple Inc. | Display screen or portion thereof with graphical user interface |
USD815141S1 (en) * | 2016-10-27 | 2018-04-10 | Apple Inc. | Display screen or portion thereof with graphical user interface |
USD817992S1 (en) * | 2016-10-27 | 2018-05-15 | Apple Inc. | Display screen or portion thereof with graphical user interface |
CA174365S (en) * | 2016-10-27 | 2017-11-28 | Apple Inc | Display screen with graphical user interface |
USD820303S1 (en) * | 2016-10-27 | 2018-06-12 | Apple Inc. | Display screen or portion thereof with graphical user interface |
USD815137S1 (en) * | 2016-10-27 | 2018-04-10 | Apple Inc. | Display screen or portion thereof with graphical user interface |
USD818484S1 (en) * | 2016-10-27 | 2018-05-22 | Apple Inc. | Display screen or portion thereof with graphical user interface |
US20180123986A1 (en) | 2016-11-01 | 2018-05-03 | Microsoft Technology Licensing, Llc | Notification of a Communication Session in a Different User Experience |
US10592867B2 (en) | 2016-11-11 | 2020-03-17 | Cisco Technology, Inc. | In-meeting graphical user interface display using calendar information and system |
USD838774S1 (en) * | 2016-11-18 | 2019-01-22 | International Business Machines Corporation | Training card |
JP6798288B2 (en) * | 2016-12-02 | 2020-12-09 | 株式会社リコー | Communication terminals, communication systems, video output methods, and programs |
US10516707B2 (en) | 2016-12-15 | 2019-12-24 | Cisco Technology, Inc. | Initiating a conferencing meeting using a conference room device |
US9819877B1 (en) | 2016-12-30 | 2017-11-14 | Microsoft Technology Licensing, Llc | Graphical transitions of displayed content based on a change of state in a teleconference session |
US10367858B2 (en) * | 2017-02-06 | 2019-07-30 | International Business Machines Corporation | Contemporaneous feedback during web-conferences |
US10193940B2 (en) | 2017-02-07 | 2019-01-29 | Microsoft Technology Licensing, Llc | Adding recorded content to an interactive timeline of a teleconference session |
US10171256B2 (en) | 2017-02-07 | 2019-01-01 | Microsoft Technology Licensing, Llc | Interactive timeline for a teleconference session |
US10515117B2 (en) | 2017-02-14 | 2019-12-24 | Cisco Technology, Inc. | Generating and reviewing motion metadata |
US9942519B1 (en) | 2017-02-21 | 2018-04-10 | Cisco Technology, Inc. | Technologies for following participants in a video conference |
US10070093B1 (en) * | 2017-02-24 | 2018-09-04 | Microsoft Technology Licensing, Llc | Concurrent viewing of live content and recorded content |
US10642478B2 (en) | 2017-04-10 | 2020-05-05 | Microsoft Technology Licensing Llc | Editable whiteboard timeline |
US10440073B2 (en) | 2017-04-11 | 2019-10-08 | Cisco Technology, Inc. | User interface for proximity based teleconference transfer |
US10375125B2 (en) | 2017-04-27 | 2019-08-06 | Cisco Technology, Inc. | Automatically joining devices to a video conference |
US10404481B2 (en) | 2017-06-06 | 2019-09-03 | Cisco Technology, Inc. | Unauthorized participant detection in multiparty conferencing by comparing a reference hash value received from a key management server with a generated roster hash value |
US10375474B2 (en) | 2017-06-12 | 2019-08-06 | Cisco Technology, Inc. | Hybrid horn microphone |
US10810273B2 (en) | 2017-06-13 | 2020-10-20 | Bank Of America Corporation | Auto identification and mapping of functional attributes from visual representation |
US20180366017A1 (en) * | 2017-06-14 | 2018-12-20 | Shorelight Education | International Student Delivery and Engagement Platform |
US10541824B2 (en) | 2017-06-21 | 2020-01-21 | Minerva Project, Inc. | System and method for scalable, interactive virtual conferencing |
US10477148B2 (en) | 2017-06-23 | 2019-11-12 | Cisco Technology, Inc. | Speaker anticipation |
US10516709B2 (en) | 2017-06-29 | 2019-12-24 | Cisco Technology, Inc. | Files automatically shared at conference initiation |
US10021190B1 (en) * | 2017-06-30 | 2018-07-10 | Ringcentral, Inc. | Communication management method and system for inserting a bookmark in a chat session |
US10706391B2 (en) | 2017-07-13 | 2020-07-07 | Cisco Technology, Inc. | Protecting scheduled meeting in physical room |
US10091348B1 (en) | 2017-07-25 | 2018-10-02 | Cisco Technology, Inc. | Predictive model for voice/video over IP calls |
US10771621B2 (en) | 2017-10-31 | 2020-09-08 | Cisco Technology, Inc. | Acoustic echo cancellation based sub band domain active speaker detection for audio and video conferencing applications |
US10404943B1 (en) * | 2017-11-21 | 2019-09-03 | Study Social, Inc. | Bandwidth reduction in video conference group sessions |
US10832009B2 (en) | 2018-01-02 | 2020-11-10 | International Business Machines Corporation | Extraction and summarization of decision elements from communications |
US10341609B1 (en) * | 2018-01-17 | 2019-07-02 | Motorola Solutions, Inc. | Group video synchronization |
CN110324564B (en) * | 2018-03-30 | 2020-11-27 | 视联动力信息技术股份有限公司 | Video conference data synchronization method and device |
KR20190121016A (en) | 2018-04-17 | 2019-10-25 | 삼성전자주식회사 | Electronic apparatus and method for controlling thereof |
KR102095323B1 (en) * | 2018-08-13 | 2020-03-31 | 신한대학교 산학협력단 | Apparatus for Inducing Learning |
US10275331B1 (en) | 2018-11-27 | 2019-04-30 | Capital One Services, Llc | Techniques and system for optimization driven by dynamic resilience |
US20200287947A1 (en) * | 2019-03-04 | 2020-09-10 | Metatellus Oü | System and method for selective communication |
CN110708493A (en) * | 2019-09-30 | 2020-01-17 | 视联动力信息技术股份有限公司 | Method and device for acquiring permission of participating in video networking conference |
US10686645B1 (en) * | 2019-10-09 | 2020-06-16 | Capital One Services, Llc | Scalable subscriptions for virtual collaborative workspaces |
US10866872B1 (en) | 2019-11-18 | 2020-12-15 | Capital One Services, Llc | Auto-recovery for software systems |
Family Cites Families (91)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100220042B1 (en) * | 1990-06-07 | 1999-09-01 | 가부시키가이샤 히타치 세이사쿠쇼 | Presentation supporting method and apparatus therefor |
JPH06266632A (en) | 1993-03-12 | 1994-09-22 | Toshiba Corp | Method and device for processing information of electronic conference system |
US5385475A (en) * | 1993-04-01 | 1995-01-31 | Rauland-Borg | Apparatus and method for generating and presenting an audio visual lesson plan |
US5767897A (en) * | 1994-10-31 | 1998-06-16 | Picturetel Corporation | Video conferencing system |
US5559875A (en) | 1995-07-31 | 1996-09-24 | Latitude Communications | Method and apparatus for recording and retrieval of audio conferences |
US6286034B1 (en) | 1995-08-25 | 2001-09-04 | Canon Kabushiki Kaisha | Communication apparatus, a communication system and a communication method |
US6343313B1 (en) | 1996-03-26 | 2002-01-29 | Pixion, Inc. | Computer conferencing system with real-time multipoint, multi-speed, multi-stream scalability |
US5909874A (en) | 1996-08-14 | 1999-06-08 | Daniel; Maurice | Icosahedron decimal dice |
WO1998044733A1 (en) * | 1997-03-31 | 1998-10-08 | Broadband Associates | Method and system for providing a presentation on a network |
US7490169B1 (en) * | 1997-03-31 | 2009-02-10 | West Corporation | Providing a presentation on a network having a plurality of synchronized media types |
US6091408A (en) * | 1997-08-13 | 2000-07-18 | Z-Axis Corporation | Method for presenting information units on multiple presentation units |
US6850609B1 (en) | 1997-10-28 | 2005-02-01 | Verizon Services Corp. | Methods and apparatus for providing speech recording and speech transcription services |
JPH11344920A (en) | 1998-06-02 | 1999-12-14 | Kyasuto:Kk | Teaching system, its program medium and apparatus therefor |
US6397036B1 (en) * | 1999-08-23 | 2002-05-28 | Mindblazer, Inc. | Systems, methods and computer program products for collaborative learning |
US6668273B1 (en) * | 1999-11-18 | 2003-12-23 | Raindance Communications, Inc. | System and method for application viewing through collaborative web browsing session |
US6587438B1 (en) | 1999-12-22 | 2003-07-01 | Resonate Inc. | World-wide-web server that finds optimal path by sending multiple syn+ack packets to a single client |
US6909874B2 (en) * | 2000-04-12 | 2005-06-21 | Thomson Licensing Sa. | Interactive tutorial method, system, and computer program product for real time media production |
TW527833B (en) * | 2000-05-19 | 2003-04-11 | Sony Corp | Network conferencing system, participation authorization method and presenting method |
US20130203485A1 (en) * | 2000-05-31 | 2013-08-08 | Igt | Method and apparatus for conducting focus groups using networked gaming devices |
US7240287B2 (en) * | 2001-02-24 | 2007-07-03 | Microsoft Corp. | System and method for viewing and controlling a presentation |
US7058891B2 (en) * | 2001-05-25 | 2006-06-06 | Learning Tree International, Inc. | Interface for a system of method of electronic presentations having multiple display screens with remote input |
US7454708B2 (en) * | 2001-05-25 | 2008-11-18 | Learning Tree International | System and method for electronic presentations with annotation of preview material |
US7447608B1 (en) * | 2001-09-28 | 2008-11-04 | Infocus Corporation | Method and apparatus for a collaborative meeting room system |
EP1298524A1 (en) * | 2001-09-28 | 2003-04-02 | Ricoh Company, Ltd. | Conference support apparatus, information processor, teleconference system and computer product |
US20030122863A1 (en) * | 2001-12-28 | 2003-07-03 | International Business Machines Corporation | Navigation tool for slide presentations |
US20030191805A1 (en) * | 2002-02-11 | 2003-10-09 | Seymour William Brian | Methods, apparatus, and systems for on-line seminars |
US20040008249A1 (en) * | 2002-07-10 | 2004-01-15 | Steve Nelson | Method and apparatus for controllable conference content via back-channel video interface |
US6839417B2 (en) * | 2002-09-10 | 2005-01-04 | Myriad Entertainment, Inc. | Method and apparatus for improved conference call management |
US7912199B2 (en) | 2002-11-25 | 2011-03-22 | Telesector Resources Group, Inc. | Methods and systems for remote cell establishment |
US8095409B2 (en) * | 2002-12-06 | 2012-01-10 | Insors Integrated Communications | Methods and program products for organizing virtual meetings |
US7434165B2 (en) * | 2002-12-12 | 2008-10-07 | Lawrence Charles Kleinman | Programmed apparatus and system of dynamic display of presentation files |
US7454460B2 (en) | 2003-05-16 | 2008-11-18 | Seiko Epson Corporation | Method and system for delivering produced content to passive participants of a videoconference |
US7330541B1 (en) * | 2003-05-22 | 2008-02-12 | Cisco Technology, Inc. | Automated conference moderation |
US20050099492A1 (en) | 2003-10-30 | 2005-05-12 | Ati Technologies Inc. | Activity controlled multimedia conferencing |
US7672997B2 (en) * | 2003-11-12 | 2010-03-02 | International Business Machines Corporation | Speaker annotation objects in a presentation graphics application |
US20050131714A1 (en) * | 2003-12-10 | 2005-06-16 | Braunstein Anne R. | Method, system and program product for hierarchically managing a meeting |
JP3903992B2 (en) | 2004-01-27 | 2007-04-11 | 日本電気株式会社 | Distance education system and method, server, and program |
US20050198139A1 (en) * | 2004-02-25 | 2005-09-08 | International Business Machines Corporation | Multispeaker presentation system and method |
US7133513B1 (en) | 2004-07-21 | 2006-11-07 | Sprint Spectrum L.P. | Method and system for transcribing voice content of an on-going teleconference into human-readable notation |
EP1782168A4 (en) * | 2004-07-23 | 2009-01-07 | Learning Tree Int Inc | System and method for electronic presentations |
US7640502B2 (en) * | 2004-10-01 | 2009-12-29 | Microsoft Corporation | Presentation facilitation |
US10200468B2 (en) * | 2004-11-18 | 2019-02-05 | Microsoft Technology Licensing, Llc | Active agenda |
US20060224430A1 (en) * | 2005-04-05 | 2006-10-05 | Cisco Technology, Inc. | Agenda based meeting management system, interface and method |
US20060248210A1 (en) * | 2005-05-02 | 2006-11-02 | Lifesize Communications, Inc. | Controlling video display mode in a video conferencing system |
US7822185B2 (en) | 2005-05-10 | 2010-10-26 | Samsung Electronics Co., Ltd. | Instant conference method and apparatus |
JP2007043493A (en) * | 2005-08-03 | 2007-02-15 | Pioneer Electronic Corp | Conference supporting system for managing progress of proceeding, conference supporting method, and conference supporting program |
US20070206759A1 (en) | 2006-03-01 | 2007-09-06 | Boyanovsky Robert M | Systems, methods, and apparatus to record conference call activity |
US20070299710A1 (en) * | 2006-06-26 | 2007-12-27 | Microsoft Corporation | Full collaboration breakout rooms for conferencing |
US20080022209A1 (en) * | 2006-07-19 | 2008-01-24 | Lyle Ruthie D | Dynamically controlling content and flow of an electronic meeting |
JP4755043B2 (en) | 2006-07-31 | 2011-08-24 | 株式会社富士通エフサス | Education support system |
US20080254434A1 (en) | 2007-04-13 | 2008-10-16 | Nathan Calvert | Learning management system |
US20090263777A1 (en) * | 2007-11-19 | 2009-10-22 | Kohn Arthur J | Immersive interactive environment for asynchronous learning and entertainment |
US8701009B2 (en) * | 2007-12-28 | 2014-04-15 | Alcatel Lucent | System and method for analyzing time for a slide presentation |
US8295462B2 (en) | 2008-03-08 | 2012-10-23 | International Business Machines Corporation | Alerting a participant when a topic of interest is being discussed and/or a speaker of interest is speaking during a conference call |
US20100151431A1 (en) | 2008-03-27 | 2010-06-17 | Knowledge Athletes, Inc. | Virtual learning |
US20100037151A1 (en) * | 2008-08-08 | 2010-02-11 | Ginger Ackerman | Multi-media conferencing system |
US8351581B2 (en) | 2008-12-19 | 2013-01-08 | At&T Mobility Ii Llc | Systems and methods for intelligent call transcription |
US8330794B2 (en) * | 2009-06-10 | 2012-12-11 | Microsoft Corporation | Implementing multiple dominant speaker video streams with manual override |
US9111263B2 (en) * | 2009-06-15 | 2015-08-18 | Microsoft Technology Licensing, Llc | Adaptive meeting management |
US8370142B2 (en) | 2009-10-30 | 2013-02-05 | Zipdx, Llc | Real-time transcription of conference calls |
WO2011099873A1 (en) * | 2010-02-12 | 2011-08-18 | Future Technologies International Limited | Public collaboration system |
US20110244953A1 (en) | 2010-03-30 | 2011-10-06 | Smart Technologies Ulc | Participant response system for the team selection and method therefor |
US20110264705A1 (en) * | 2010-04-22 | 2011-10-27 | Brandon Diamond | Method and system for interactive generation of presentations |
US9003303B2 (en) * | 2010-04-30 | 2015-04-07 | American Teleconferencing Services, Ltd. | Production scripting in an online event |
US8670018B2 (en) | 2010-05-27 | 2014-03-11 | Microsoft Corporation | Detecting reactions and providing feedback to an interaction |
CA2802706C (en) * | 2010-06-15 | 2020-08-18 | Scholarbox, Inc. | Method, system and user interface for creating and displaying of presentations |
US8379077B2 (en) | 2010-11-24 | 2013-02-19 | Cisco Technology, Inc. | Automatic layout and speaker selection in a continuous presence video conference |
US8581958B2 (en) | 2011-04-18 | 2013-11-12 | Hewlett-Packard Development Company, L.P. | Methods and systems for establishing video conferences using portable electronic devices |
US8589205B2 (en) | 2011-05-18 | 2013-11-19 | Infosys Technologies Ltd. | Methods for selecting one of a plurality of competing IT-led innovation projects and devices thereof |
US9053750B2 (en) | 2011-06-17 | 2015-06-09 | At&T Intellectual Property I, L.P. | Speaker association with a visual representation of spoken content |
US20130007635A1 (en) * | 2011-06-30 | 2013-01-03 | Avaya Inc. | Teleconferencing adjunct and user interface to support temporary topic-based exclusions of specific participants |
US20130024789A1 (en) | 2011-07-19 | 2013-01-24 | Abilene Christian University | Mobile Application For Organizing and Conducting Group Discussions and Activities |
US9014358B2 (en) | 2011-09-01 | 2015-04-21 | Blackberry Limited | Conferenced voice to text transcription |
US9354763B2 (en) * | 2011-09-26 | 2016-05-31 | The University Of North Carolina At Charlotte | Multi-modal collaborative web-based video annotation system |
US8682973B2 (en) * | 2011-10-05 | 2014-03-25 | Microsoft Corporation | Multi-user and multi-device collaboration |
US20140200944A1 (en) * | 2011-11-08 | 2014-07-17 | Matchware A/S | Automation of meeting scheduling and task list access permissions within a meeting series |
US20130169742A1 (en) | 2011-12-28 | 2013-07-04 | Google Inc. | Video conferencing with unlimited dynamic active participants |
US9292814B2 (en) * | 2012-03-22 | 2016-03-22 | Avaya Inc. | System and method for concurrent electronic conferences |
US20130305147A1 (en) * | 2012-04-13 | 2013-11-14 | Pixel Perfect Llc | Data processing system for event production management |
US20140099624A1 (en) * | 2012-05-16 | 2014-04-10 | Age Of Learning, Inc. | Mentor-tuned guided learning in online educational systems |
US20140282109A1 (en) * | 2013-03-15 | 2014-09-18 | GroupSystems Corporation d/b/a ThinkTank by GroupS | Context frame for sharing context information |
US9477380B2 (en) * | 2013-03-15 | 2016-10-25 | Afzal Amijee | Systems and methods for creating and sharing nonlinear slide-based mutlimedia presentations and visual discussions comprising complex story paths and dynamic slide objects |
US20140297350A1 (en) * | 2013-03-27 | 2014-10-02 | Hewlett-Packard Development Company, L.P. | Associating event templates with event objects |
US9344291B2 (en) | 2013-04-24 | 2016-05-17 | Mitel Networks Corporation | Conferencing system with catch-up features and method of using same |
US9154531B2 (en) * | 2013-06-18 | 2015-10-06 | Avaya Inc. | Systems and methods for enhanced conference session interaction |
CN104469256B (en) * | 2013-09-22 | 2019-04-23 | 思科技术公司 | Immersion and interactive video conference room environment |
US9398059B2 (en) * | 2013-11-22 | 2016-07-19 | Dell Products, L.P. | Managing information and content sharing in a virtual collaboration session |
US9329833B2 (en) * | 2013-12-20 | 2016-05-03 | Dell Products, L.P. | Visual audio quality cues and context awareness in a virtual collaboration session |
US9792026B2 (en) * | 2014-04-10 | 2017-10-17 | JBF Interlude 2009 LTD | Dynamic timeline for branched video |
US9712569B2 (en) | 2014-06-23 | 2017-07-18 | Adobe Systems Incorporated | Method and apparatus for timeline-synchronized note taking during a web conference |
US9734485B2 (en) * | 2014-07-31 | 2017-08-15 | Adobe Systems Incorporated | Method and apparatus for providing a contextual timeline of an online interaction for use in assessing effectiveness |
- 2015
- 2015-08-31 US US14/840,471 patent/US9674243B2/en active Active
- 2015-08-31 US US14/840,546 patent/US10666696B2/en active Active
- 2015-08-31 US US14/840,438 patent/US9578073B2/en active Active
- 2015-08-31 US US14/840,513 patent/US9674244B2/en active Active
- 2015-09-04 KR KR1020177008079A patent/KR20170060023A/en active IP Right Grant
- 2015-09-04 JP JP2017531986A patent/JP6734852B2/en active Active
- 2015-09-04 BR BR112017004387A patent/BR112017004387A2/en unknown
- 2015-09-04 EP EP15837998.2A patent/EP3189622B1/en active Active
- 2015-09-04 WO PCT/US2015/048593 patent/WO2016037084A1/en active Application Filing
- 2017
- 2017-06-05 US US15/613,894 patent/US10110645B2/en active Active
- 2018
- 2018-10-22 US US16/167,130 patent/US10805365B2/en active Active
- 2020
- 2020-07-10 JP JP2020119302A patent/JP2020173853A/en active Pending
Non-Patent Citations (1)
Title |
---|
None * |
Also Published As
Publication number | Publication date |
---|---|
US20160073058A1 (en) | 2016-03-10 |
US10666696B2 (en) | 2020-05-26 |
US10110645B2 (en) | 2018-10-23 |
US20170279862A1 (en) | 2017-09-28 |
WO2016037084A1 (en) | 2016-03-10 |
EP3189622A1 (en) | 2017-07-12 |
US9674244B2 (en) | 2017-06-06 |
KR20170060023A (en) | 2017-05-31 |
US9674243B2 (en) | 2017-06-06 |
JP6734852B2 (en) | 2020-08-05 |
BR112017004387A2 (en) | 2017-12-05 |
US20160073059A1 (en) | 2016-03-10 |
EP3189622A4 (en) | 2018-06-13 |
US20160073056A1 (en) | 2016-03-10 |
JP2017537412A (en) | 2017-12-14 |
US10805365B2 (en) | 2020-10-13 |
US20160072862A1 (en) | 2016-03-10 |
US20190124128A1 (en) | 2019-04-25 |
JP2020173853A (en) | 2020-10-22 |
US9578073B2 (en) | 2017-02-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10546235B2 (en) | | Relativistic sentiment analyzer |
Wall | | Citizen journalism: A retrospective on what we know, an agenda for what we don’t |
US20180367759A1 (en) | | Asynchronous Online Viewing Party |
US9729823B2 (en) | | Public collaboration system |
Bower et al. | | Collaborative learning across physical and virtual worlds: Factors supporting and constraining learners in a blended reality environment |
US9747925B2 (en) | | Speaker association with a visual representation of spoken content |
US9525711B2 (en) | | Multi-media conferencing system |
US9329833B2 (en) | | Visual audio quality cues and context awareness in a virtual collaboration session |
US9925466B2 (en) | | Large group interactions |
Toohey et al. | | “That sounds so cooool”: Entanglements of children, digital tools, and literacy practices |
US20160103572A1 (en) | | Collaborative media sharing |
Lewis | | Bringing technology into the classroom-Into the Classroom |
Haythornthwaite et al. | | E-learning theory and practice |
US10515561B1 (en) | | Video presentation, digital compositing, and streaming techniques implemented via a computer network |
US10459985B2 (en) | | Managing behavior in a virtual collaboration session |
King et al. | | Experiencing the digital world: The cultural value of digital engagement with heritage |
Smith | | Dangerous news: Media decision making about climate change risk |
Minneman et al. | | A confederation of tools for capturing and accessing collaborative activity |
US20160088259A1 (en) | | System and method for interactive internet video conferencing |
Kral | | Youth media as cultural practice: Remote Indigenous youth speaking out loud |
US20150127340A1 (en) | | Capture |
US8984405B1 (en) | | Categorized and tagged video annotation |
JP2015507416A (en) | | Video conferencing with unlimited dynamic active participants |
US20190228029A1 (en) | | Methods, systems, and media for generating sentimental information associated with media content |
US20150169069A1 (en) | | Presentation Interface in a Virtual Collaboration Session |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
17P | Request for examination filed |
Effective date: 20170331 |
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
AK | Designated contracting states |
Kind code of ref document: A1 |
Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
DAX | Request for extension of the european patent (deleted) | ||
DAV | Request for validation of the european patent (deleted) | ||
RIC1 | Information provided on ipc code assigned before grant |
Ipc: H04L 12/18 20060101AFI20180206BHEP |
Ipc: H04L 29/06 20060101ALI20180206BHEP |
Ipc: H04L 29/10 20060101ALI20180206BHEP |
Ipc: H04N 7/15 20060101ALI20180206BHEP |
|
REG | Reference to a national code |
Ref country code: HK Ref legal event code: DE Ref document number: 1241166 Country of ref document: HK |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: H04N 7/15 20060101ALI20180508BHEP |
Ipc: H04L 29/06 20060101ALI20180508BHEP |
Ipc: H04L 29/10 20060101ALI20180508BHEP |
Ipc: H04L 12/18 20060101AFI20180508BHEP |
|
A4 | Supplementary search report drawn up and despatched |
Effective date: 20180515 |
|
GRAP |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
||
GRAJ |
Free format text: ORIGINAL CODE: EPIDOSDIGR1 |
||
GRAP |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
||
INTG | Intention to grant announced |
Effective date: 20190429 |
|
INTG | Intention to grant announced |
Effective date: 20190508 |
|
GRAS |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
||
GRAA |
Free format text: ORIGINAL CODE: 0009210 |
||
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D |
|
AK | Designated contracting states |
Kind code of ref document: B1 |
Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP |
Ref country code: AT Ref legal event code: REF Ref document number: 1200295 Country of ref document: AT Kind code of ref document: T Effective date: 20191115 |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 602015041332 Country of ref document: DE |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: MP Effective date: 20191106 |
|
REG | Reference to a national code |
Ref country code: LT Ref legal event code: MG4D |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200306 |
Ref country code: BG Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200206 |
Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20191106 |
Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20191106 |
Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20191106 |
Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20191106 |
Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20191106 |
Ref country code: NO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200206 |
Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200207 |
Ref country code: LT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20191106 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200306 |
Ref country code: HR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20191106 |
Ref country code: RS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20191106 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: AL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20191106 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20191106 |
Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20191106 |
Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20191106 |
Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20191106 |
Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20191106 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 602015041332 Country of ref document: DE |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: MK05 Ref document number: 1200295 Country of ref document: AT Kind code of ref document: T Effective date: 20191106 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SM Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20191106 |
Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20191106 |
|
26N | No opposition filed |
Effective date: 20200807 |
|
PGFP | Annual fee paid to national office [announced from national office to epo] |
Ref country code: GB Payment date: 20200928 Year of fee payment: 6 |
Ref country code: DE Payment date: 20200929 Year of fee payment: 6 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20191106 |
Ref country code: SI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20191106 |