US20200186373A1 - Method and system for sharing and discovery


Info

Publication number
US20200186373A1
Authority
US
United States
Prior art keywords
user, chat, audio, chat room, stream
Legal status
Pending
Application number
US16/788,199
Inventor
Philippe Clavel
Timophey Zaitsev
Stefan Birrer
Alexandre Francois
Current Assignee
Rabbit Asset Purchase Corp
Original Assignee
Rabbit Asset Purchase Corp
Priority to US201261739544P (Critical)
Priority to US201361777275P
Priority to US14/134,240, now US9755847B2
Priority to US15/694,038, now US10560276B2
Application filed by Rabbit Asset Purchase Corp
Priority to US16/788,199, published as US20200186373A1
Assigned to RABBIT, INC. Assignors: CLAVEL, PHILIPPE; FRANCOIS, ALEXANDRE; BIRRER, STEFAN; ZAITSEV, TIMOPHEY
Assigned to RABBIT (ASSIGNMENT FOR THE BENEFIT OF CREDITORS), LLC. Assignor: RABBIT, INC.
Assigned to RABBIT ASSET PURCHASE CORP. Assignor: RABBIT (ASSIGNMENT FOR THE BENEFIT OF CREDITORS), LLC
Publication of US20200186373A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 - Data switching networks
    • H04L 12/02 - Details
    • H04L 12/16 - Arrangements for providing special services to substations
    • H04L 12/18 - Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H04L 12/1813 - Arrangements for providing special services to substations for broadcast or conference, e.g. multicast for computer conferences, e.g. chat rooms
    • H04L 12/1822 - Conducting the conference, e.g. admission, detection, selection or grouping of participants, correlating users to one or more conference sessions, prioritising transmission
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 - Sound input; Sound output
    • G06F 3/165 - Management of the audio stream, e.g. setting of volume, audio stream path

Abstract

A method for sharing media within a chat room system, including: providing a virtual chat room that is accessible by a first user device and a second user device, receiving, from the first user device, a first media item generated by an application executed on the first user device to share within the chat room, sending the first media item for distribution to the second user device via the virtual chat room, receiving a second media item from the second user device for provision in the virtual chat room, and sending the second media item for distribution to the first user device via the virtual chat room.

Description

    PRIORITY
  • This application is a continuation under 35 U.S.C. § 120 of U.S. patent application Ser. No. 15/694,038, filed 1 Sep. 2017, which is a continuation of U.S. patent application Ser. No. 14/134,240, filed 19 Dec. 2013, which claims the benefit under 35 U.S.C. § 119(e) of U.S. Provisional Patent Application No. 61/739,544, filed 19 Dec. 2012, and U.S. Provisional Patent Application No. 61/777,275, filed 12 Mar. 2013, each of which is incorporated herein by reference in its entirety.
  • TECHNICAL FIELD
  • This invention relates generally to the social media field, and more specifically to a new and useful chat room system in the social media field.
  • BACKGROUND
  • Conventional chat rooms prohibit users from discovering new users and content that is being shared within simultaneously occurring conversations. This is due to the insular nature of conventional chat rooms—each conventional chat room only supports a single conversation, and users within the chat room are unable to simultaneously participate in multiple conversations without actively entering separate, insular chat rooms. Without being notified of the existence of these other conversations, users are not even aware that other conversations can be joined. Past attempts at resolving this discovery issue have been made, but these solutions are inadequate because users cannot control which conversations they are joining, and therefore do not have control over which new users or what new content they will be consuming.
  • Thus, there is a need in the social media field to create a new and useful chat room system and method.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a schematic representation of a method for facilitating user and content discovery.
  • FIG. 2 is a specific example of audio parameter setting determination based on the chat group priority.
  • FIG. 3 is a schematic representation of an example of virtual chat group positions within a virtual chat room.
  • FIG. 4 is a schematic representation of an example of the chat group video stream displayed corresponding to the virtual chat group positions of FIG. 3.
  • FIG. 5 is a schematic representation of a variation of simultaneously presenting the audio streams of a plurality of chat groups.
  • FIG. 6 is a schematic representation of a variation of simultaneously presenting the video streams of a plurality of chat groups.
  • FIG. 7 is a schematic representation of a variation of processing the audio streams.
  • FIG. 8 is a schematic representation of a variation of chat group presentation setting adjustment in response to a transient user selection of a chat group.
  • FIG. 9 is a schematic representation of a variation of chat group presentation setting adjustment in response to an explicit user selection of a chat group.
  • FIG. 10 is a schematic representation of the method of audio sharing.
  • FIG. 11 is a schematic representation of a variation of the method of audio sharing including capturing audio input.
  • FIG. 12 is a schematic representation of a variation of the method of audio sharing including substantially immediate playback of the system audio stream at the originating device.
  • FIG. 13 is a schematic representation of a variation of the method of audio sharing including delayed playback of the system audio stream at the originating device.
  • DESCRIPTION OF EXAMPLE EMBODIMENTS
  • The following description of the preferred embodiments of the invention is not intended to limit the invention to these preferred embodiments, but rather to enable any person skilled in the art to make and use this invention.
  • As shown in FIG. 1, the method for facilitating user and content discovery within a chat room includes determining a presentation setting for each of a set of chat groups S100 and simultaneously presenting the audio and video streams for each of the set of chat groups based on the presentation settings S200. The method can additionally include adjusting a chat group priority in response to a user action S300. The method functions to facilitate discovery of new users and new content within a chat room environment by enabling a user to simultaneously participate in multiple conversations. User participation in a conversation preferably includes listening to and/or watching a conversation, and can additionally include contributing audio and/or video to the conversation.
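The three-step flow described above (S100, S200, S300) might be sketched as follows. This is purely an illustrative reading of FIG. 1; none of the function names below appear in the patent, and `present` is a stand-in for the actual simultaneous audio/video presentation.

```python
# Hypothetical sketch of the S100/S200/S300 flow of FIG. 1.
# All names are illustrative; the patent does not specify an implementation.

def determine_presentation_setting(group):
    # S100: derive a presentation setting for one chat group
    return {"priority": group.get("priority", "low")}

def present(group, setting):
    # S200: stand-in for simultaneously presenting the group's
    # audio and video streams according to its setting
    return (group["id"], setting["priority"])

def facilitate_discovery(chat_groups, user_action=None):
    settings = {g["id"]: determine_presentation_setting(g) for g in chat_groups}
    # S300 (optional): adjust a chat group's priority in response to a user action
    if user_action is not None:
        settings[user_action]["priority"] = "high"
    return [present(g, settings[g["id"]]) for g in chat_groups]
```

The key structural point is that S200 runs over every chat group at once, so a user is always "participating" in all groups to some degree.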
  • The method is preferably performed by one or more user devices 10 connected to one or more servers 20. The user device 10 is preferably a portable device, such as a smartphone, laptop, tablet, or any other suitable portable user device. Alternatively, the user device 10 can be a desktop or any other suitable computing device. The device 10 preferably includes an audio output 12, more preferably an audio device such as a speaker, but can alternatively include audio out jacks or any other suitable audio output. The device 10 preferably includes a processor, wherein the primary and routing modules are preferably located on the processor, such that the processor performs the actions of the primary and/or routing modules. Alternatively, the device 10 can include multiple processors or multiple threads, wherein the functions of the primary module and routing module can be performed by separate processors or separate threads. The device 10 can additionally include an audio input, such as a microphone, audio in jack, or any other suitable audio input. The device 10 can additionally include a display 14. The device 10 can additionally include a communication connection, such as a wireless receiver and transmitter, that functions to transfer and receive audio and/or video data to a receiver and transmitter pair linked to a server. However, the device 10 can alternatively include any other suitable component. The originating device and receiving devices preferably include substantially the same components described above, but can alternatively include different combinations of components.
  • The server 20 preferably includes a CPU and storage. The server is preferably external and separate from the user device, but can alternatively be a second user device (e.g., in a distributed network). The server 20 preferably stores and tracks the properties of each of a plurality of chat rooms within a chat system, and can additionally store and track the properties of each of a plurality of chat groups within each chat room. The server 20 can additionally receive the audio and video of each participating user from the respective user device (originating devices), and send the audio and video to the devices of the other users participating in the chat group (receiving devices). The server 20 can additionally synchronize the audio and video. Alternatively, the user device 10 can synchronize the audio and video. Alternatively, the receiving device can directly receive the audio and video from the originating devices. The user device 10 is preferably connected to the server 20 wirelessly (e.g., through WiFi, a cellular network, etc.), but can alternatively be connected to the server 20 through a wired connection.
  • The method is preferably facilitated by a primary native application that is executed by the user device, gathers the requisite data from the user device, and facilitates interaction with the server. The primary native application is preferably an executable that runs on the user device without external support, but can alternatively be a browser application, a browser plugin, a web-based system, a mobile application, or any other suitable software or data format that is supported by the user device system with minimal computational overhead or additional components. The primary native application is preferably capable of accessing the data of other native applications through the respective application programming interfaces, and is preferably capable of accessing lower-level computational data, such as processor-level data. However, the method can be facilitated by a non-native application, such as a browser plug-in, a browser displaying the mixed audio and video received from an external server, or by any other suitable means. The user profile, user history, social network connections, and any other suitable user information is preferably stored by the user device (e.g., native application), but can alternatively be stored by the server in association with a user account identifier.
  • In use, the chat system preferably presents one or more chat rooms 30 to a user on the user device 10. In response to receipt of a chat room selection, a set of chat groups 40 of the chat room are preferably displayed, wherein the audio and video streams of the chat groups are preferably presented to varying degrees. The displayed set of chat groups 40 are preferably a subset of the plurality of chat room chat groups (e.g., wherein the subset is smaller than the plurality), but can alternatively be all chat groups or any other suitable portion of the plurality. In response to receipt of a chat group selection, the audio streams 52 and video streams 54 of the users participating in the chat group are preferably presented at the user device, while the audio and video streams of secondary chat groups (e.g., unselected chat groups) are presented to varying degrees on the user device. More preferably, the user device concurrently displays the participant video streams 54 of the users participating in the selected chat group 50, the chat group video stream 44 of the selected chat group, and the chat group video streams 44 of secondary chat groups, and concurrently plays the audio streams 52 of the users participating in the selected chat group and the dampened audio streams of all the chat room chat groups. However, the audio and video of the chat room chat groups can be otherwise concurrently presented.
  • As shown in FIG. 3, the chat room of the method is preferably a virtual chat room (e.g., digital chat room), wherein each chat room includes one or more chat groups (e.g., virtual or digital chat groups). The chat room is preferably one of many chat rooms supported by a chat room system, wherein the system is preferably capable of storing and managing an unlimited number of chat rooms, but can alternatively store and manage a limited number of chat rooms. The chat rooms are preferably substantially insular, such that users inside a chat room are not simultaneously participating in other chat rooms. For example, a user in a first chat group within a first chat room can hear and/or see audio and video streams of other users within other chat groups in the first chat room, but cannot hear or see the audio and video streams of users in other chat rooms. However, users outside of a chat room are preferably capable of participating in the chat room, wherein the outside users are preferably capable of listening to an associated audio stream and an associated video stream. Each chat room is preferably capable of supporting any suitable number of users, but can alternatively support a limited number of users. The chat rooms are preferably substantially persistent within the system, but can alternatively be transient and be deleted in response to a termination event (e.g., no users are in the chat room, the chat room creator has left the chat room, etc.).
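The containment and insularity described above (rooms contain groups; streams cross group boundaries within a room but never cross room boundaries) could be modeled with a minimal data-model sketch. The class and field names are hypothetical, and the "outside users can still listen" variation mentioned above is not modeled here.

```python
from dataclasses import dataclass, field

@dataclass
class ChatGroup:
    group_id: str
    participants: list = field(default_factory=list)  # user ids with streams

@dataclass
class ChatRoom:
    room_id: str
    chat_groups: list = field(default_factory=list)
    persistent: bool = True  # rooms are "preferably substantially persistent"

    def visible_groups(self, user_id):
        """A user in any group of this room can see/hear every group's
        streams; the room is insular with respect to other rooms."""
        if any(user_id in g.participants for g in self.chat_groups):
            return [g.group_id for g in self.chat_groups]
        return []  # this room's streams are not mixed into other rooms
```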
  • Each chat room preferably includes associated permissions settings and presentation settings. The permissions settings can include access settings (e.g., which users can access the chat room, such as public or private chat rooms, etc.), content sharing settings (e.g., which users can share content in the chat room), or any other suitable permissions settings. The permissions settings are preferably set by the creator of the chat room, but can alternatively be set by default when the chat room is created. The permissions settings are preferably managed or changed by the creator or moderator of the chat room, but can alternatively be managed or changed automatically by the server or by another user. The permissions settings are preferably stored by the server, but can alternatively be stored on the device by the primary native application.
  • The presentation settings for the chat rooms are preferably determined in a similar manner to presentation setting determination for the chat group, as described below. However, the presentation settings can be determined based on the priorities of the chat groups within the chat room or determined in any other suitable manner. The presentation settings are preferably used to determine the aural and visual properties of chat room presentation to the user, prior to chat room selection. Chat room presentation parameters can include a representative audio stream selection, a representative video stream selection, the size of the displayed video stream relative to other chat rooms, the volume of the audio stream, the discernibility of the audio stream, the virtual position of the chat room on the user display, or any other suitable presentation settings. The representative video stream is preferably the video stream of the chat group having the highest priority to the user within the respective chat room, but can alternatively be a randomly selected video stream, include multiple concurrent video streams, or be any other suitable video stream. The representative video stream can be from a singular chat group or can be a compilation from multiple chat groups. The representative audio stream is preferably the audio stream corresponding to the selected video stream, but can alternatively be a compilation of a subset or all of the audio streams from the chat room. The chat room presentation settings can additionally influence how the audio and video of the chat groups are displayed once the user is within the chat room.
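The preferred representative-stream selection above (take the streams of the highest-priority chat group in the room) reduces to a small sketch. The dictionary keys are hypothetical; the patent also allows random or composite selection, which is not shown.

```python
def representative_streams(chat_groups):
    """Pick a chat room's representative video/audio pair: the streams of
    the chat group with the highest priority for the user (hypothetical
    field names; random or composite selection is also permitted)."""
    top = max(chat_groups, key=lambda g: g["priority"])
    return top["video_stream"], top["audio_stream"]
```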
  • A chat group of the chat room preferably includes one or more users having a conversation. Each user is represented within the chat group by a video stream and audio stream received from a user device of the user (participant video stream and participant audio stream, respectively), but can alternatively be represented by a user account or any other suitable representation. The video stream preferably includes a substantially continuous stream of video frames, and the audio stream preferably includes a substantially continuous stream of audio frames. Each audio and video frame is preferably associated with (e.g., encoded with) a timestamp (e.g., the time at which the video frame was generated at the user device or the timestamp of the video frame within a piece of media). Each audio and video frame or stream can additionally or alternatively be associated with a geotag, the originating user identifier, a sentiment tag, a content or keyword tag, or any other suitable metadata. The audio and/or video metadata can be used to determine a chat group relevance to a user, searched to select for a stream of a specific user, or utilized in any other suitable manner. The audio and/or video streams can be tagged by the originating device, by the server, by the receiving device, or by any other suitable component of the system.
  • A chat group is preferably represented within a chat room by a chat group audio stream and a chat group video stream. A conversation preferably includes the sharing of audio streams, and can additionally include the sharing of video streams or any other suitable content between the users within the conversation. The chat groups of the chat rooms are preferably substantially inclusive, wherein all users within a chat room are preferably capable of simultaneously participating in all chat groups of the chat room. More preferably, all users within a chat room are simultaneously participating in all chat groups to varying degrees. The degree of participation of a user in a chat group is preferably determined based on the priority of the chat group for the user, but can alternatively be automatically determined or based on a user selection or on any other suitable parameter. In one variation of the method, the degree of participation preferably varies proportionally with the chat group priority, wherein the degree of participation is high for a chat group with high priority, and low for a chat group with low priority. A high degree of participation for a user preferably includes the user device playing a discernable audio stream of each participant within the chat group (e.g., the user audio streams have been processed to increase discernibility) and displaying the video stream of each user within the chat group, and wherein the audio stream and video stream associated with the user is continuously displayed to users of the chat group. 
A medium degree of participation for a user preferably includes the user device playing an audio stream representative of the chat group (e.g., the audio stream of the primary speaker or the audio stream of shared content) and displaying a video stream representative of the chat group (e.g., the video stream of the primary speaker or the video stream of the shared content), wherein the video stream of the user preferably is not continuously displayed to the users within the chat group. A medium degree of participation can alternatively and/or additionally include muffling, decreasing the volume, or otherwise processing the representative audio stream of the chat group to decrease discernibility. A low degree of participation for a user preferably includes the user device playing a substantially muffled or processed audio stream of the chat group, and does not include the user device displaying video associated with the chat group or consistently displaying the user's video to users of the chat group.
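The high/medium/low degrees of participation described above could be tabulated as follows. The returned labels and the `user_stream_shared` flag are illustrative stand-ins for the playback and stream-sharing behavior, not terms from the patent.

```python
def presentation_for_degree(degree):
    """Map a degree of participation ('high', 'medium', anything else =>
    'low') to hypothetical playback behavior per the description above."""
    if degree == "high":
        return {"audio": "each participant, discernible",
                "video": "each participant's stream",
                "user_stream_shared": True}   # user's own streams shown to the group
    if degree == "medium":
        return {"audio": "representative stream, possibly dampened",
                "video": "representative stream",
                "user_stream_shared": False}
    return {"audio": "muffled chat group stream",
            "video": None,                    # no video displayed for the group
            "user_stream_shared": False}
```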
  • In another variation of the method, the user is capable of participating in a chat group in at least an active participation mode and a passive participation mode. When the user participates in the chat group in the active participation mode (e.g., a high degree of participation), the audio and/or video stream from the associated user device is preferably sent to the devices of the other chat group participants. When the user participates in the passive participation mode (e.g., a low or medium degree of participation), the audio and/or video stream from the associated user device is preferably not sent to the chat group participant devices. However, the user participant can alternatively be operable in any other suitable mode.
  • Determining the presentation settings for each chat group S100 functions to determine the contribution of each chat group to the audio output at the user device and to determine the display properties of the video stream(s) associated with each chat group. Determining the presentation settings can additionally function to determine a virtual chat group position within a virtual chat room space. The presentation settings for each chat group are preferably specific to each user, but can alternatively be the same for all users or for a group of users. Determining the presentation settings preferably includes determining the presentation settings for every chat group within the chat room for the user, but can alternatively include determining the presentation settings for a subset of chat groups within the chat room for the user. The presentation settings of each chat group are preferably determined for each new user and stored at the server or at the native device, but can alternatively be newly determined each time the user accesses the chat room.
  • The presentation settings are preferably used to determine the target presentation parameter values for each chat group. Presentation parameters preferably include audio parameters and video parameters. Audio parameters include an audio stream selection, the volume of an audio stream, and the discernibility of the audio stream (or conversely, the degree of audio stream dampening, muffling, or reduction in intelligibility). Video parameters include a video stream selection, the size of the displayed video stream, and the number of participant video streams displayed. However, any other suitable audio and video parameters can be determined. The presentation parameters can additionally include a virtual chat group position of the chat group relative to a virtual reference point (e.g., the virtual user location). The chat group video is preferably displayed at a position on the user device corresponding to the respective virtual position, and the audio is preferably encoded or mixed to have an aural position corresponding to the virtual position. However, any other suitable presentation parameter can be determined by the presentation settings.
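One way the audio and video parameters above could be derived from a priority score is a continuous mapping in which volume rises with priority and dampening (loss of discernibility) falls. The formulas and parameter names below are illustrative assumptions; the patent specifies the parameters but not how they are computed.

```python
def presentation_parameters(priority, max_priority):
    """Hypothetical continuous mapping from a chat group's priority score
    to presentation parameters (volume, dampening, relative video size)."""
    if max_priority <= 0:
        raise ValueError("max_priority must be positive")
    p = max(0.0, min(priority / max_priority, 1.0))  # normalize to [0, 1]
    return {
        "volume": p,                   # louder for higher-priority groups
        "dampening": 1.0 - p,          # lower-priority groups lose discernibility
        "video_scale": 0.5 + 0.5 * p,  # displayed size relative to other groups
    }
```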
  • The presentation settings of each chat group for each user are preferably determined by the primary native application, but can alternatively be determined by the server or by the user. The presentation settings are preferably stored and controlled by the server, but can alternatively be managed by the primary native application. The server preferably sets the presentation settings for each chat group based on the respective priorities, and displays each chat group according to the presentation settings. The server can additionally control the priority of the chat group for each user. However, the primary native application or any other suitable component can alternatively control the aforementioned parameters. The primary native application or server can additionally adjust the chat group presentation settings in response to a user action. The primary native application or server can additionally map the chat groups to virtual locations within the chat room.
  • Determining the presentation settings for each chat group preferably includes determining a priority for each chat group for the user and setting the presentation settings for each chat group based on the respective priority. However, determining the presentation settings for each chat group can alternatively include setting the presentation setting to a default setting, setting the presentation settings based on user preferences, setting the presentation settings in accordance with presentation settings received from the user, or determining the presentation settings in any other suitable manner.
  • Determining the priority of each chat group for the user preferably functions to determine the degree of user interest in the chat group. The chat group priorities can be calculated through scores, ranking assigned on a continuous scale, tiers (e.g., high, medium, and low priority), or any other suitable priority setting. The chat group priorities are preferably determined from user actions performed within the instantaneous session, but can additionally and/or alternatively be determined from a priority selection received from a user, the similarity between the chat group and a user profile associated with the user, the degree of chat group association with one or more user social network connections, historical user actions with the chat group or related chat groups from past sessions, the presence of shared media within the chat group (e.g., as determined based on the source of the audio or video, wherein chat groups having video captured from an API can have higher priority than chat groups with only video from the native camera), the type of media shared within the chat group (e.g., chat groups sharing movies can have higher priorities than chat groups sharing images), the geographic proximity of the audio and video stream to a given physical location (e.g., based on the associated geotags), or from any other suitable user parameter. The chat group priority can alternatively or additionally be determined based on user preferences, based on the user profile, based on similarities between the chat group content and the user interests, a historical relationship between the chat group and the user (e.g., the user had previously visited the chat group), a relationship between the users in the chat group and the user (e.g., a social network connection of the user, as determined from a social networking service, is in the chat group), or based on any other suitable parameter. 
The user profile is preferably extracted from historical user actions (e.g., include themes or keywords associated with the chat groups that the user has previously participated in), but can alternatively be a profile received from the user (e.g., a profile input by the user), a profile from a third party system (e.g., from a social networking system), or any other suitable user profile.
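The many priority signals enumerated above (session actions, profile similarity, social connections, shared media, and so on) suggest a weighted combination. The sketch below assumes hypothetical signal names and weights; the patent lists the signals but prescribes no scoring formula.

```python
def priority_score(signals, weights=None):
    """Hypothetical weighted combination of chat group priority signals.
    Each signal is a normalized value the caller supplies; missing
    signals contribute zero."""
    weights = weights or {
        "session_actions": 3.0,     # user actions in the instantaneous session
        "profile_similarity": 2.0,  # similarity between group and user profile
        "social_connections": 2.0,  # user's connections participating in the group
        "shared_media": 1.0,        # presence/type of shared media in the group
    }
    return sum(w * signals.get(name, 0.0) for name, w in weights.items())
```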
  • In one variation of the method, chat groups that the user has explicitly selected (e.g., clicked on with a mouse), or otherwise indicated explicit interest in, preferably have the highest priority. Chat groups in which the user has indicated a passing interest (e.g., the user moused over the chat group video or otherwise transiently selected the chat group) preferably have the second highest priority. Chat groups that are associated with the user, such as those with which the user has interacted in the past, or those that the user could be interested in, preferably have the third highest priority. Chat groups that are within the same chat room, but have not received user action, can have a fourth highest priority. Chat groups can additionally or alternatively be prioritized based on the relevance to the user. For example, chat groups having participants that have low degrees of separation from the user (e.g., along a social graph) can be prioritized higher than chat groups with participants having a high degree of separation from the user. In another example, within a chat group, other participants having low degrees of separation from the user can be prioritized higher than participants having high degrees of separation from the user (e.g., when the user is participating in a chat group with a large number of users, such as a chat group having more than 50 participants).
  • The chat groups of a chat room are presented according to the respective priorities in response to a user selection of the chat room, wherein the chat groups displayed on the user device have at least a medium priority and chat groups not displayed on the user device have a low priority. In response to a user selection of a first chat group, the chat group priority of the first chat group increases beyond a first threshold and is categorized as a high priority group. The remainder of the chat groups can retain the previous respective chat group priorities, or the respective chat group priorities can be adjusted based on the properties of the selected chat group. For example, chat groups related to the first chat group can increase in priority whereas chat groups unrelated to the first chat group can decrease in priority.
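The threshold behavior above (a selection pushes the group past a priority threshold into the high-priority category, while related groups may rise and unrelated groups fall) can be sketched directly. The threshold value and step size are illustrative; the patent names neither.

```python
HIGH_PRIORITY_THRESHOLD = 0.8  # illustrative value; the patent names no number

def select_chat_group(priorities, selected, related=(), step=0.1):
    """On user selection of a chat group, raise it past the high-priority
    threshold; nudge related groups up and unrelated groups down,
    clamping all priorities to [0, 1]."""
    updated = dict(priorities)
    updated[selected] = max(updated.get(selected, 0.0),
                            HIGH_PRIORITY_THRESHOLD + 0.01)
    for group in priorities:
        if group == selected:
            continue
        delta = step if group in related else -step
        updated[group] = max(0.0, min(updated[group] + delta, 1.0))
    return updated
```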
  • Alternatively, the chat group priority can be assigned based on determined user preferences. User preferences are preferably determined from the themes, keywords, users, or any other suitable parameter of previous actions by the user, wherein chat groups with similar or shared parameters are preferably prioritized higher than chat groups with weaker associations. The user preferences can additionally be predicted based on prior actions through statistical analysis of user activity, correlation of users to persona descriptors (e.g., an abstract user construct used to summarize a group of users), or through any other suitable means of predicting or determining user preferences. The strength of chat group association with the user preferably determines the relative priority of the aforementioned groups. The strength of chat group association with the user is preferably proportional to the frequency of past user actions associated with the chat group (e.g., past participation in the group), strength of parameter associations with the user profile (e.g., degree of similarity between keywords extracted from the chat group conversation and keywords within the user profile, etc.), strength of network connection with the user (e.g., a first degree social network connection, based on a social networking service such as Facebook, is in the chat group), or degree of association of any other suitable user parameter, but can alternatively be inversely proportional, weighted, or otherwise determined. Chat groups that are unassociated with the user preferably have the lowest priority.
  • The chat group priority can alternatively be determined based on the virtual position of the chat group relative to a virtual user position 60 (e.g., virtual reference position) within the chat room. In this variation, the method preferably additionally includes mapping the chat groups to virtual positions within a virtual space (e.g., the virtual chat room) and determining the chat group priority for each of the chat groups based on the distance between the virtual position of the chat group and a virtual position of the user (e.g., virtual reference point). The virtual chat group positions can be randomly determined, pre-set, set by secondary users, or otherwise determined. The virtual position of the user can be constant within the space (e.g., wherein the virtual space adjusts about the user) or can move within the space. The initial virtual position of the user when the user first enters the chat room is preferably set at a predetermined virtual position, but can alternatively be randomly generated. The user is preferably considered an active participant in a chat group when the virtual user position coincides with the chat group position. However, the chat group priority can be otherwise determined based on the respective virtual chat group position. Alternatively, the chat groups can be mapped to virtual positions within the virtual space based on the respective chat group priority. For example, the chat groups can be mapped such that the distance between the chat group and the virtual reference position varies inversely with the chat group priority (e.g., higher priority chat groups are closer to the virtual reference position and lower priority chat groups are further from the virtual reference position).
However, the chat groups can be mapped such that the distance between the chat group and the virtual position varies directly with chat group priority, such that chat groups of a given priority are grouped together, or mapped within the virtual space in any other suitable configuration based on the respective chat group priority. The chat group priority can be otherwise determined using a combination of the aforementioned methods or determined in any other suitable manner.
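One way the inverse priority-to-distance mapping could be realized is sketched below. The ring layout, radius bounds, and names are illustrative assumptions; the method only requires that higher-priority groups sit closer to the virtual reference position.

```python
import math

# Illustrative sketch (not from the patent text): place higher-priority chat
# groups closer to the virtual reference position (the origin) by assigning
# each group an angle on a ring whose radius varies inversely with priority.

def map_groups_to_positions(priorities, min_radius=1.0, max_radius=10.0):
    """Map {group_id: priority in (0, 1]} to 2-D positions around the origin.

    Higher priority -> smaller radius -> closer to the virtual reference."""
    positions = {}
    n = len(priorities)
    for i, (gid, p) in enumerate(sorted(priorities.items(),
                                        key=lambda kv: -kv[1])):
        radius = min_radius + (max_radius - min_radius) * (1.0 - p)
        angle = 2 * math.pi * i / n  # spread groups evenly around the user
        positions[gid] = (radius * math.cos(angle), radius * math.sin(angle))
    return positions

pos = map_groups_to_positions({"a": 1.0, "b": 0.5, "c": 0.1})
# "a" (highest priority) sits closest to the reference position at the origin
dist = {g: math.hypot(x, y) for g, (x, y) in pos.items()}
assert dist["a"] < dist["b"] < dist["c"]
```

The direct-variation alternative mentioned above would simply drop the `1.0 - p` inversion when computing the radius.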
  • Determining the presentation settings for a chat group based on the respective chat group priority preferably includes selecting the presentation settings for each of the priorities. Different settings are preferably determined for different priority levels, but the settings can alternatively vary along a substantially continuous continuum for a continuum of priorities. Different priority tiers are preferably associated with a different set of presentation settings. The parameter values for each priority tier are preferably automatically determined by the system (e.g., based on predetermined values, automatically calculated, etc.), but can alternatively be based on values received from the user, values determined from user preferences (e.g., from historical user actions), or otherwise determined. Alternatively, the parameter values for each priority tier can be calculated or otherwise determined based on the chat group priority.
  • In one variation of the method, the priorities are preferably divided into a high, medium, and low priority tier using a first and a second threshold, wherein each tier has a respective set of presentation settings. For example, chat groups having priorities above a first threshold preferably have a first set of presentation settings, chat groups having priorities between a first and second threshold preferably have a second set of presentation settings, and chat groups having priorities below the second threshold preferably have a third set of presentation settings, wherein the first, second, and third set of presentation settings preferably include different presentation parameter values. However, any suitable number of thresholds can be used, such that the set of chat groups can be divided into any suitable degree of resolution. The thresholds can be predetermined or be dynamically determined by the system. The thresholds can be set or adjusted such that a singular chat group has the highest priority while multiple chat groups can share lower priority rankings. Alternatively, the thresholds can be set or adjusted such that the chat groups all remain within a given priority tier until a user selection is received. However, the threshold can be selected to include any other suitable number of chat groups in any suitable tier bounded by an upper and lower threshold.
  • Determining the presentation settings based on the chat group priority preferably includes determining the audio settings for each chat group based on the respective chat group priority. Determining the audio settings for each chat group preferably includes determining the discernibility of the chat group audio stream. While the audio streams for all chat groups are preferably played to the user, the audio streams are preferably processed to varying degrees to adjust the discernibility of the chat group audio stream, dependent on the chat group priority. The discernibility of the chat group audio streams preferably varies directly with the assigned priorities, wherein the discernibility of the chat group audio stream increases with increasing priority (e.g., the highest priority chat groups are preferably the most discernible and the lowest priority chat groups are preferably the least discernible), an example of which is shown in FIG. 2. However, the respective contribution of the chat group audio streams to the audio output can be otherwise determined.
  • In one variation of the method, the chat group with the highest priority preferably contributes a highly discernible, clear audio stream to the audio output, wherein the audio streams of the users within the chat group are preferably processed to increase discernibility (e.g., reduce noise, increased volume, etc.). The chat groups with medium priority preferably contribute less discernible audio streams to the audio output, wherein the audio streams of the users within these chat groups are preferably muffled, lowered in volume, scrambled, or otherwise processed to decrease discernibility. The chat groups with low priority preferably contribute the least discernible audio streams to the audio output, wherein the audio streams of the users within these chat groups are preferably substantially muffled, lowered in volume, scrambled, or otherwise processed to substantially decrease discernibility (e.g., the discernibility is decreased to a second degree lower than that of medium priority chat groups).
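A minimal sketch of priority-dependent discernibility, assuming attenuation plus a moving-average low-pass as the "muffling" step; the specific filters, window sizes, and gains are assumptions, since the text leaves the processing technique open.

```python
# Illustrative sketch: reduce discernibility of lower-priority streams by
# attenuating and "muffling" (simple moving-average low-pass) the samples.
# Window sizes and gains are assumed, not taken from the patent.

def muffle(samples, window):
    """Moving-average low-pass: a larger window yields more muffled speech."""
    if window <= 1:
        return list(samples)
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        out.append(sum(samples[lo:i + 1]) / (i + 1 - lo))
    return out

def process_for_tier(samples, tier):
    gain, window = {"high": (1.0, 1),       # clear, full volume
                    "medium": (0.4, 4),     # quieter, slightly muffled
                    "low": (0.1, 16)}[tier]  # barely discernible
    return [s * gain for s in muffle(samples, window)]

tone = [1.0, -1.0] * 8  # crude stand-in for a speech waveform
high = process_for_tier(tone, "high")
low = process_for_tier(tone, "low")
assert max(map(abs, high)) > max(map(abs, low))  # low tier is least discernible
```

Scrambling, noise reduction, or any other discernibility adjustment named in the text could be substituted for the low-pass step without changing the tier structure.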
  • Determining the audio settings for each chat group preferably includes determining the volume of the chat group audio stream. The audio settings can be determined individually for each chat group, or can be determined for a subset of chat groups and applied to the chat group subset. While the audio streams for all chat groups are preferably played to the user, the audio streams are preferably processed to varying degrees to adjust the respective volume of the chat group audio stream, dependent on the chat group priority. The volume of each chat group audio stream preferably varies directly with the assigned priorities, wherein the highest priority chat groups are preferably the loudest (e.g., has the highest amplitude limit) and the lowest priority chat groups are preferably the quietest (e.g., has the lowest amplitude limit). However, the respective contribution of the chat group audio streams to the audio output can be otherwise determined.
  • In one variation, the chat group with the highest priority preferably contributes an audio stream of a first volume to the audio output. The chat groups with medium priority preferably contribute audio streams at a second volume, less than the first volume, to the audio output. The chat groups with low priority preferably contribute audio streams at a third volume, lower than the second volume, to the audio output. Presenting the chat group audio can include selecting a volume limit based on the chat group priority, mixing the constituent audio streams of the chat group into a chat group audio stream, processing the mixed chat group audio stream to meet the volume limit, and playing the processed chat group audio stream at the user device. Alternatively, adjusting the volume of the chat group audio stream can include processing the constituent audio streams to meet the respective volume limit and playing the dampened audio streams at the user device. Alternatively, the dampened audio streams can be mixed into a chat group audio stream 42, wherein the chat group audio stream is played at the user device. The constituent audio streams can be dampened, mixed, and/or muted by the server or by the primary native application, preferably without interruption to other audio streams, but alternatively as a whole with other audio streams. The constituent audio streams are preferably audio streams of the users participating in the chat group that are received from the user devices associated with the participating users (participant audio streams), but can alternatively be an ambient audio stream 32 (e.g., chat room audio stream), an additional sound track 22 retrieved from the server, the user device, or a third party source, or any other suitable audio stream. However, any other suitable audio parameter can be otherwise determined based on the respective chat group priority.
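The mix-then-limit path in this variation can be sketched as follows; the function names and the peak-scaling rule are assumptions.

```python
# Sketch of the mix-then-limit path described above (all names assumed):
# constituent participant streams are summed into a chat group stream, which
# is then scaled so its peak amplitude meets the tier's volume limit.

def mix(streams):
    """Sample-wise sum of equal-length participant streams."""
    return [sum(frame) for frame in zip(*streams)]

def apply_volume_limit(stream, limit):
    """Scale the stream down if its peak amplitude exceeds the limit."""
    peak = max((abs(s) for s in stream), default=0.0)
    if peak <= limit or peak == 0.0:
        return list(stream)
    scale = limit / peak
    return [s * scale for s in stream]

participants = [[0.5, -0.5, 0.25], [0.5, 0.5, -0.25]]
group_stream = mix(participants)                 # summed chat group stream
limited = apply_volume_limit(group_stream, 0.6)  # peak brought down to 0.6
assert max(map(abs, limited)) <= 0.6 + 1e-9
```

The alternative ordering described above, dampening each constituent stream first and mixing afterward, would call `apply_volume_limit` on each participant stream before `mix`.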
  • Determining the presentation settings preferably additionally includes determining the video settings for each chat group based on the respective chat group priority. Determining the video settings for each chat group preferably includes determining the display properties of the chat group video stream. The display properties can include whether or not the video stream is to be displayed on the user display, the relative size of the chat group video stream on the user display, the number of participant video streams to display, the amount of participant profile data to display, the transparency of the video stream, or any other suitable display property of the video stream. The display properties of video streams associated with each chat group are preferably additionally associated with the priority of the chat group to the user. In one variation, only video streams associated with chat groups above a threshold priority are displayed. In another variation, the size of the displayed video stream is directly proportional to the priority, wherein chat groups with high priority are preferably larger than chat groups with low priority. In another variation, the amount of chat group detail directly varies with the priority, wherein video streams of chat groups with high priority are preferably displayed with a large amount of chat group detail (e.g., all the video streams of all the users within the chat group are displayed, profile details about the users can be displayed, etc.), video streams of chat groups with medium priority are preferably displayed with a low amount of chat group detail (e.g., only the video stream of a single user is shown, only the video stream of the primary speaker is shown, etc.), and video streams of chat groups with low priority are preferably not displayed at all. 
In another variation, the chat group with the highest priority is preferably displayed proximal the video-capture device (e.g., the camera), wherein chat groups with lower priority are preferably displayed distal the video-capture device. However, the video streams of the chat groups can be otherwise adjusted based on the priority of the chat group to the user.
  • In a more specific example, the relative size of the chat group displayed on the user device is preferably correlated with the respective chat group priority. For example, the video streams of high priority chat groups (e.g., chat groups having a chat group priority over a first threshold) are displayed larger than the video streams of medium priority chat groups (e.g., chat groups having a chat group priority between a first threshold and a second threshold), and the video streams of low priority chat groups (e.g., chat groups having a chat group priority under a second threshold) are not displayed. The number of video streams of the chat group that are displayed is preferably also dependent upon the respective chat group priority, wherein more video streams are displayed for higher priority chat groups than for lower priority chat groups. For example, all the participant video streams of a high priority chat group are preferably displayed, one participant video stream of a medium priority chat group is preferably displayed, and no video streams of a low priority chat group are displayed.
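The size and stream-count rules of this example can be sketched as a simple per-tier lookup; the scale factors, stream counts, and thresholds are illustrative assumptions.

```python
# Illustrative layout rule for the example above: how large to draw a chat
# group's video and how many participant streams to show, per priority tier.
# Thresholds, scales, and counts are assumed values.

def video_layout(priority, first_threshold=0.7, second_threshold=0.3,
                 participants=4):
    if priority > first_threshold:       # high: all participant streams, large
        return {"displayed": True, "scale": 1.0, "streams": participants}
    if priority >= second_threshold:     # medium: one stream, smaller
        return {"displayed": True, "scale": 0.5, "streams": 1}
    return {"displayed": False, "scale": 0.0, "streams": 0}  # low: hidden

assert video_layout(0.9)["streams"] == 4
assert video_layout(0.5)["streams"] == 1
assert video_layout(0.1)["displayed"] is False
```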
  • Determining the presentation settings for each chat group can alternatively include determining the presentation settings based on a user selection of a parameter value for a chat group. In response to receipt of the user selection of a parameter value for a chat group (e.g., wherein the user increases the size of the video display, increases or decreases the chat group volume, etc.) the received parameter value is preferably stored as the respective parameter setting for the chat group. Using the received user settings as the parameter settings for the chat group can additionally include determining the relative priority of the primary chat group (e.g., chat group that was acted upon) and storing the received parameter value as the respective parameter setting for the respective chat group priority. Using the received user selection as the parameter settings for the chat group can additionally include determining secondary chat groups that are similar to the primary chat group (e.g., includes users having similar interests, includes shared media having a similar media type, includes shared media of a similar topic, etc.) and storing the received parameter value as the respective parameter setting for the secondary chat groups.
  • Alternatively, the presentation settings for the chat group can be determined based on user preferences, based on the user profile, based on similarities between the chat group content and the user interests, a historical relationship between the chat group and the user (e.g., the user had previously visited the chat group), a relationship between the users in the chat group and the user (e.g., a social network connection of the user, as determined from a social networking service, is in the chat group), or based on any other suitable parameter. The presentation settings for a chat group can additionally or alternatively be determined based on the presentation settings or priority of the chat room that the chat group is located within. However, the presentation settings can be determined using a combination of the aforementioned methods, or determined in any other suitable manner.
  • Determining the presentation settings for each chat group can additionally include determining the presentation settings based on the respective virtual position of the chat group. The virtual position to which the chat group is mapped preferably determines where the chat group video is displayed on the screen, wherein the chat group is preferably displayed at a position on the screen corresponding to the virtual position of the user within the virtual space. The virtual chat group position can alternatively determine whether the chat group video is to be displayed, the priority of the chat group, or determine any other suitable presentation setting for the chat group. The virtual position of the chat group can additionally determine the aural position of the audio stream within the chat group audio stream (e.g., determines the position that the chat group audio stream is encoded to be played from, such as stereo left or stereo right). The chat group is preferably mapped to a two-dimensional virtual space (e.g., an array), but can alternatively be mapped to a three-dimensional virtual space or a virtual space of any other suitable dimensions. The virtual user position is preferably laterally centered on the user display, but can alternatively be located elsewhere on the user display (e.g., lower right corner).
  • In one variation of the method, the chat group virtual positions are determined prior to user entry into the chat room, wherein the user is preferably placed in a virtual position that maximizes the relevancy of the chat groups surrounding the user (e.g., maximizes the total priority of the chat groups proximal the user position). In another variation of the method, the chat group virtual positions are individually determined for each user, such that the chat group virtual positions for a first user are different than the virtual positions of the same chat groups for a second user. The virtual proximity of the chat group to the virtual user position preferably varies directly with the chat group priority. In another variation of the method, the chat group with the highest priority is preferably substantially laterally centered in the display and the virtual distance of each of the remaining chat groups from the highest priority chat group is preferably directly related to the difference in respective priority (e.g., medium priority chat groups are closer than low priority chat groups). In another variation of the method wherein there are two or more high priority chat groups, the two or more chat groups are displayed about (e.g., are substantially centered about) the point on the display indicative of the user position. However, the chat groups can be otherwise displayed.
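The first variation, placing the user where the surrounding relevancy is maximized, can be sketched for a one-dimensional virtual space; the slot model and neighborhood radius are assumptions made for illustration.

```python
# Hypothetical sketch of "place the user where surrounding relevancy is
# maximized": with chat groups already mapped to slots in a one-dimensional
# virtual space, pick the user slot whose neighborhood has the greatest
# total chat group priority. The slot/radius model is an assumption.

def best_user_slot(slot_priorities, radius=1):
    """slot_priorities[i] is the priority of the group at virtual slot i."""
    best, best_total = 0, float("-inf")
    n = len(slot_priorities)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        total = sum(slot_priorities[lo:hi])
        if total > best_total:
            best, best_total = i, total
    return best

# Groups at slots 3-4 are the most relevant, so the user lands between them.
assert best_user_slot([0.1, 0.2, 0.1, 0.9, 0.8, 0.1]) in (3, 4)
```

A two-dimensional virtual space would use the same maximization over a disk of virtual positions rather than a window of slots.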
  • As shown in FIG. 1, simultaneously presenting a plurality of representative audio and video streams from each of a plurality of chat groups S200 functions to present a user with a plurality of conversations, such that the user can simultaneously access content that is shared within the conversations. More preferably, the display properties of the presented audio and video streams for each chat group are dependent upon the chat group presentation settings that are specific to the user.
  • Simultaneously presenting a plurality of audio and video streams from each of the plurality of chat groups preferably includes presenting the audio streams from each of the set of chat groups at the user device S220 and displaying the video streams of a subset of the chat groups at the user device based on the respective presentation settings S240. Simultaneously displaying a plurality of audio and video streams from the plurality of chat groups preferably additionally includes receiving the audio and video streams from the user devices associated with each participating user within each chat group (participant audio and video streams, respectively) and synchronizing the audio and video streams. The participant audio and video streams are preferably audio and video streams received from the user devices associated with the participating users of the chat groups, but can alternatively be an ambient audio stream retrieved from the server, user device, or a third party source, or be any other suitable audio stream. The participant audio and video streams can alternatively be received as a single audio stream and single video stream, wherein the constituent participant streams are individually tagged or otherwise identified within the single stream. The participant audio and/or video streams are preferably flattened to decrease bandwidth, but can alternatively be unprocessed or otherwise processed. The participant audio and/or video is preferably flattened by the client on the originating device prior to transmission to the server, but can alternatively be flattened by the server after receipt and before transmission to the final user device, or flattened at any other suitable stage in the method.
  • Simultaneously presenting the audio streams from each of the plurality of chat groups S220 enables the user to discover new content and users based on audio. Simultaneously presenting the audio streams preferably includes playing all the participant audio streams of all the chat groups, but can alternatively include playing the participant audio streams of a subset of the chat groups. Simultaneously presenting the audio streams from each of the plurality of chat groups preferably includes concurrently playing the audio streams received from each of the user devices associated with chat group participants, wherein the audio streams are processed to varying degrees based on the respective chat group presentation settings. Simultaneously playing the participant audio streams can additionally include concurrently playing an additional sound track 22 selected from the server or from the user device. Simultaneously playing the participant audio streams can additionally include concurrently playing participant audio streams of chat groups in other chat rooms. However, any other suitable audio stream can be played with the participant audio streams of the chat group.
  • The participant audio streams can be mixed into the chat group stream S224 at the server or at the primary native application. The participant audio streams can be processed to meet the audio settings at the server or at the primary native application S222, an example of which is shown in FIG. 7. The chat group streams can be mixed into the final stream at the server or at the primary native application. The server preferably mixes all the participant audio streams of all the chat groups into an ambient audio stream and sends the ambient audio stream and the participant audio streams of high priority chat groups to the user device. The server can additionally send participant or chat group audio streams of medium priority chat groups (e.g., chat groups adjacent the user virtual position) to the user device. The server preferably sends the pre-mixed chat group audio stream and/or participant audio stream to the user device in response to a request 16 received from the user device. The request preferably identifies the chat groups for which chat group audio streams and participant audio streams should be sent. The request is preferably generated by the primary native application based on the respective chat group priorities as determined by the primary native application, wherein chat group audio streams are preferably requested for medium priority chat groups and the participant audio streams are requested for high priority chat groups. However, any other suitable audio for a chat group can be requested. Alternatively, the server can determine which audio stream should be sent for the chat group based on the chat group priority (e.g., no audio stream, the chat group audio stream, the participant audio streams, etc.), wherein the server stores the chat group priorities for the user. The server can alternatively send the chat group stream or participant audio streams of all or any suitable chat groups to the user device.
  • Simultaneously presenting the audio streams S220 preferably includes mixing the audio streams received from the user devices of participants within a chat group into a chat group stream, processing the participant audio streams to meet the audio settings for the respective chat group, and mixing the chat group streams into a final stream, an example of which is shown in FIG. 5. Processing the participant audio stream preferably includes dampening the audio stream by an amount or degree determined by the audio setting, and can additionally include adjusting the volume of the audio stream. In a first variation of the method, the participant audio streams are mixed into a chat group stream, and the resultant chat group stream is preferably subsequently processed to meet the audio settings determined for the respective chat group and mixed into the final stream. In a second variation of the method, the participant audio streams can be processed to meet the audio settings for the respective chat group and subsequently mixed into the chat group stream, which is subsequently mixed into the final stream. In a third variation of the method, the participant audio streams of all chat groups of a given priority or tier can be mixed into an aggregate chat group stream, processed to meet the audio settings for the given tier, and mixed into the final stream. In a fourth variation of the method, the participant audio streams can be processed to meet the audio settings for the respective chat group and played at the user device. However, the participant audio streams can be mixed using a combination of the aforementioned methods, or can be otherwise mixed and processed into the final stream.
  • The final stream is preferably a stereophonic audio stream centered about the virtual user position, wherein the aural position of a chat group audio stream is preferably based on the virtual location of the respective chat group relative to the virtual user location. However, the final stream can be monophonic, or have any other suitable number of channels. Mixing the final stream preferably includes mixing the audio streams of chat groups below a priority threshold into a monophonic ambient track and mixing the audio streams of chat groups above the priority threshold into a stereophonic adjacent track that is subsequently mixed with the ambient track. Mixing the chat group audio streams preferably includes mixing the participant audio streams into a monophonic chat group audio stream, but can alternatively include mixing the participant audio streams into a stereophonic chat group audio stream, particularly when the chat group priority is beyond a priority threshold. The aural positions of the participant audio streams within the stereophonic chat group audio stream are preferably located in front of the user (e.g., wherein the user is assumed to be facing the user device), but can alternatively be arranged about the virtual user position or otherwise positioned. However, the final stream and respective aural positions of the chat group audio streams can be otherwise mixed.
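A sketch of the stereophonic final mix: each chat group's mono stream is panned to an aural position derived from its virtual position relative to the user, and the panned streams are summed. Constant-power panning is an assumption; the text only requires that aural position follow virtual position.

```python
import math

# Sketch of the stereophonic final mix (constant-power panning is assumed).

def pan(stream, position):
    """position in [-1, 1]: -1 = hard left, 0 = center, 1 = hard right."""
    theta = (position + 1) * math.pi / 4  # maps [-1, 1] to [0, pi/2]
    l_gain, r_gain = math.cos(theta), math.sin(theta)
    return [(s * l_gain, s * r_gain) for s in stream]

def mix_final(chat_streams):
    """chat_streams: list of (mono samples, aural position) per chat group."""
    panned = [pan(s, p) for s, p in chat_streams]
    return [(sum(f[0] for f in frames), sum(f[1] for f in frames))
            for frames in zip(*panned)]

left_group = ([0.5, 0.5], -1.0)    # group virtually to the user's left
right_group = ([0.25, 0.25], 1.0)  # group virtually to the user's right
out = mix_final([left_group, right_group])
l, r = out[0]
assert l > r  # the left-positioned group dominates the left channel
```

The monophonic ambient track described above would be produced by summing the below-threshold chat group streams without panning and then feeding that sum in at a center position.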
  • Simultaneously presenting the video streams from each of the plurality of chat groups S240 enables the user to discover new content and users based on video. Simultaneously presenting the video streams preferably includes displaying a subset of the chat group video streams of the chat room, but can alternatively include displaying all the video streams of all the chat groups within the chat room or displaying any other suitable number of chat group video streams, as shown in FIG. 4. Simultaneously presenting the video streams from each of the plurality of chat groups preferably includes concurrently playing a set of chat group video streams based on the presentation settings for the respective chat group. More preferably, simultaneously playing the video streams includes simultaneously playing the video streams of chat groups having priorities beyond a priority threshold. The set of chat groups for which the video streams are displayed is preferably the same set that contribute aurally positioned audio streams to the stereophonic final stream (e.g., as determined by the priority threshold), but can alternatively be a subset of the set or a superset of the set.
  • Simultaneously presenting the video streams from a set of chat groups S240 preferably includes compositing participant video streams into a chat group video stream S242. The chat group video stream is preferably composited by the primary native application, but can alternatively be composited by the server, as shown in FIG. 6. The chat group video stream is preferably individual to a user (e.g., viewer), but can alternatively be generic to a plurality of users. In one variation of the method, the chat group video stream of a first chat group in which the user is a participant (e.g., streaming audio and video to the other participants of the chat group) is individually composited for the user, while the chat group video stream of a second chat group is generic (e.g., appears the same for a second user who is also not a participant in the second chat group). The server preferably sends the chat group video stream and/or participant video stream to the user device in response to a request received from the user device. The request preferably identifies the chat groups for which chat group video streams and participant video streams should be sent. The request is preferably generated by the primary native application based on the respective chat group priorities as determined by the primary native application, wherein chat group video streams are preferably requested for medium priority chat groups and both the participant video streams and the chat group video streams are requested for high priority chat groups. However, any other suitable video for a chat group can be requested. Alternatively, the server can determine which video stream should be sent for the chat group based on the chat group priority (e.g., no video stream, the chat group video stream, both the chat group video stream and the participant video streams, etc.), wherein the server stores the chat group priorities for the user.
  • Compositing the chat group video stream S242 preferably includes selecting a participant video stream of the chat group and streaming the selected participant video stream as the chat group video stream. Compositing the chat group video stream can additionally include selecting a second participant video stream in response to a switch condition being met and streaming the second participant video stream as the chat group video stream. The selected participant video stream is preferably representative of the user that the other participants of the chat group are focused on (e.g., the user that is speaking), but can alternatively be a video stream of interest to the user (e.g., shared media) or any other suitable video stream. The first participant video stream is streamed as the chat group video stream until the audio stream associated with the second participant video stream satisfies the selection criteria. Upon the second participant audio stream satisfying the selection criteria, the second video stream can replace the first video stream (e.g., with or without a video transition in between) as the chat group video stream, be displayed adjacent the first video stream, or be displayed in any other suitable manner as the chat group video stream. Participant video streams are preferably removed from the chat group video stream when the respective audio stream fails to satisfy the selection criteria.
  • Selecting a participant video stream preferably includes selecting the video stream based on the respective audio stream. The respective audio stream is preferably the audio stream received from the same user device as the video stream, but can be an audio stream otherwise associated with the video stream. Selecting the participant video stream preferably includes selecting the video stream associated with an audio stream having a volume above a predetermined threshold (e.g., an ambient volume threshold), but can alternatively or additionally include selecting an audio stream having a dominant frequency within a given range (e.g., between 1 kHz and 4 kHz, alternatively higher or lower), a sustained volume above an ambient volume threshold for more than a threshold period of time (e.g., more than 1.3 seconds, 2 seconds, etc.), a consistently increasing volume (e.g., increasing faster than a threshold rate), or any other suitable audio parameter indicative of human speech.
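The volume-plus-duration criterion above can be sketched as follows; the frame period is an assumption, while the 1.3-second duration mirrors the example value in the text.

```python
# Illustrative active-speaker picker: select the participant whose audio
# volume has stayed above an ambient threshold for a minimum duration.
# The frame period and threshold values are assumed units.

def is_speaking(volume_history, ambient_threshold=0.2,
                min_duration_s=1.3, sample_period_s=0.1):
    """volume_history: per-frame volume estimates, most recent last."""
    needed = round(min_duration_s / sample_period_s)
    recent = volume_history[-needed:]
    return len(recent) >= needed and all(v > ambient_threshold for v in recent)

def select_participant(histories):
    """Return the id of a speaking participant, or None if nobody qualifies."""
    for pid, hist in histories.items():
        if is_speaking(hist):
            return pid
    return None

quiet = [0.05] * 20
talking = [0.05] * 5 + [0.5] * 15  # 1.5 s above the ambient threshold
assert select_participant({"alice": quiet, "bob": talking}) == "bob"
```

The dominant-frequency and rising-volume criteria named in the text could be added as further predicates combined with `is_speaking`.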
  • Alternatively, the participant video stream can be selected based on the video stream source. In this variation, different video sources can have different priorities, wherein the participant video streams from high priority video sources are selected over participant video streams from low priority video sources. Video stream sources can include primary video sources, such as an integrated camera of the user device (e.g., the front camera, back camera, etc.) and a camera removably connected to the user device, and secondary sources, such as the API of a native application (e.g., the application rendering the graphics, etc.), a third party source, or any other suitable video source. The video source is preferably determined from metadata or other data encoded within the video stream, but can be otherwise determined. In one variation of the method, secondary video sources have a higher priority than primary video sources. In another variation of the method, the back camera of a device can be prioritized higher than secondary sources, which are prioritized higher than the front camera of a device. However, any other suitable prioritization of video sources can be used.
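The source-priority variation in which the back camera outranks secondary sources, which in turn outrank the front camera, can be sketched as a lookup; the source labels and numeric priorities are assumed.

```python
# Assumed priority ordering from the second variation above: back camera >
# secondary sources (e.g., an application API) > front camera. Labels and
# numeric values are illustrative.
SOURCE_PRIORITY = {"back_camera": 3, "secondary": 2, "front_camera": 1}

def select_by_source(streams):
    """streams: {participant_id: source_type}. Pick the highest-priority
    source; unknown sources rank lowest."""
    return max(streams, key=lambda pid: SOURCE_PRIORITY.get(streams[pid], 0))

assert select_by_source({"p1": "front_camera", "p2": "secondary"}) == "p2"
```

The first variation, in which secondary sources outrank all primary cameras, would simply use a different `SOURCE_PRIORITY` table.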
  • Alternatively, the participant video stream can be selected based on the number of participants within the chat group. This can be particularly relevant when the user is one of the two users. In particular, when there are two or fewer users within the chat group, the chat group video is preferably the second video stream, wherein the first video stream is that of the user. However, the chat group video stream for users outside of the chat group can be selected in any suitable manner, such as those described above.
  • Selecting a first participant video stream as the chat group video stream can additionally include processing the associated first audio stream to increase discernibility, such as increasing the first audio stream volume, processing the first audio stream to remove noise, or otherwise processing the first audio stream. Selecting the first participant video stream can additionally include processing the secondary audio streams of the secondary video streams (e.g., video streams of other participants within the chat group) to decrease the discernibility, such as decreasing the volume of the secondary audio streams, dampening the secondary audio streams, or otherwise processing the secondary audio streams. However, the secondary audio streams can alternatively be processed in a similar manner as the first audio stream, unprocessed, or processed in any other suitable manner.
  • Displaying the composite video streams of the chat groups preferably includes displaying the composite video streams of chat groups of the set at positions corresponding to the virtual location of the respective chat group relative to the virtual user location, wherein the virtual user location corresponds to a center of a display of the user device. However, the composite video streams can be randomly arranged or otherwise displayed at the user device.
  • In one variation of the method, simultaneously presenting the video streams from each of the plurality of chat groups includes selecting a primary chat group having the highest chat group priority of the set, playing the participant video streams of the primary chat group, playing the chat group video stream of the primary chat group, and playing the chat group video streams of adjacent chat groups or chat groups having a chat group priority over a priority threshold.
  • Simultaneously presenting the audio and video streams from each of the plurality of chat groups can additionally include synchronizing the audio and video streams. Synchronizing the audio and video streams functions to align the audio and video within a predetermined time tolerance. The audio and video streams are preferably synchronized by the server, but can alternatively be synchronized by the primary native application. Synchronizing the audio and video streams can include synchronizing only the participant and chat group video streams that are to be displayed with the respective audio streams, but can alternatively include synchronizing all the participant video streams with the respective audio streams, synchronizing the participant video streams of chat groups having priorities over a priority threshold with the respective audio streams, or synchronizing any other suitable video streams with the respective audio streams.
  • Synchronizing the audio and video streams preferably includes extracting the timestamp from the audio stream packet, extracting the timestamp from the video stream packet, and determining the difference between the timestamps. Lag is detected when the difference between the timestamps exceeds a predetermined time threshold (e.g., 10 milliseconds, 30 milliseconds, etc.). In response to a determination that the video stream is lagging behind the audio stream (e.g., the audio stream timestamp is for a later time than the video stream timestamp), synchronizing the audio and video streams preferably additionally includes dropping (e.g., skipping) video frames until the video timestamp substantially matches the audio timestamp. However, the audio can be paused until the video frame timestamp matches the audio timestamp, or synchronized in any other suitable manner. In response to a determination that the audio stream is lagging behind the video stream (e.g., the video frame timestamp is for a later time than the audio stream timestamp), synchronizing the audio and video streams preferably additionally includes freezing the video frames until the audio timestamp substantially matches the video timestamp. However, audio packets can be dropped until the audio timestamp matches the video timestamp, or the audio and video can be synchronized in any other suitable manner.
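The timestamp-comparison logic above can be sketched as a small decision function. The function and return-value names are illustrative; the 30 ms threshold is one of the example values given in the specification.

```python
def sync_action(audio_ts_ms, video_ts_ms, lag_threshold_ms=30):
    """Decide how to realign audio and video from packet timestamps.

    Mirrors the preferred behavior described above: when the video
    lags the audio, drop video frames; when the audio lags the
    video, freeze the current video frame."""
    delta = audio_ts_ms - video_ts_ms
    if abs(delta) <= lag_threshold_ms:
        return "in_sync"
    if delta > 0:            # audio timestamp is later: video lagging
        return "drop_video_frames"
    return "freeze_video"    # video timestamp is later: audio lagging
```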
  • Adjusting a chat group priority in response to a change in user action S300 functions to re-determine the chat group's priority relative to the user in response to a user action. The chat group priority can additionally be changed in response to a chat room participant action (e.g., other user action). Chat group priority is preferably increased in response to positive user actions, and decreased in response to negative user actions. Positive user actions can include explicit selections (e.g., selecting a chat room icon, such as a video, as shown in FIG. 9), temporary selections (e.g., mousing over a chat room icon, as shown in FIG. 8), selection of a notification, or any other suitable action indicative of user interest. Chat group priority can be increased if the chat group is related to content that is shared by the user, as determined through matching of keywords associated with the chat group and the content or through any other suitable means. Negative user actions can include non-selection of a displayed chat group for a threshold period of time (e.g., the chat group priority is decreased if the user has not explicitly or temporarily selected said chat group in a week), cancelling or hiding of a notification, leaving a chat group within a threshold period of time after joining the chat group, or any other suitable action indicative of user disinterest. Notifications are preferably generated in response to an occurrence of an event of potential interest to the user, such as a movie of potential interest to the user being shown, or a social network connection joining the chat room. The notification preferably links the user to the chat group in which the event occurred, thereby reassigning said chat group to the highest priority. The notification is preferably a pop-up notification, but can be any suitable notification.
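A priority update of this kind can be sketched as below. The action labels and step sizes are illustrative assumptions; the specification only requires that positive actions raise priority, negative actions lower it, and a notification selection reassigns the group to the highest priority.

```python
def adjust_priority(priorities, group, action):
    """Return an updated copy of `priorities` (group id -> score)
    after a user action on `group`. Deltas are illustrative."""
    deltas = {
        "explicit_select": 2.0,        # e.g., clicking a chat room icon
        "mouse_over": 0.5,             # temporary selection
        "ignored_for_week": -1.0,      # non-selection over a threshold
        "notification_dismissed": -0.5,
        "left_quickly": -2.0,          # left soon after joining
    }
    updated = dict(priorities)
    if action == "notification_select":
        # Selecting a notification reassigns the group to the top.
        updated[group] = max(priorities.values(), default=0.0) + 1.0
    else:
        updated[group] = max(0.0, updated.get(group, 0.0)
                             + deltas.get(action, 0.0))
    return updated
```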
  • The method can additionally include receiving a participant audio and video stream S400. The participant audio and video stream is preferably received from a user device, but can alternatively be received from a third party source, such as a video streaming system. The participant audio and video stream is preferably received by a native application of a second user indirectly from the first user device through the server, but can alternatively be received at the second native application directly from the first user device, received at the server, or received at any other suitable component of the system. Sharing the audio and video from the user device can additionally include capturing the audio and video at the user device.
  • The method can additionally include capturing the audio and video stream at the user device, which functions to provide a participant audio stream and a participant video stream to the system. The audio and video streams are preferably captured by a native application on the user device but can alternatively be captured by an application executed on a browser or captured in any other suitable manner. The audio and video streams are preferably encoded with a timestamp and can additionally be encoded with the media source, but can alternatively be encoded with any other suitable metadata. The audio and video streams can be encoded by the media source (e.g., camera or microphone), by the user device, by the primary native application, or by any other suitable component of the system.
  • In one variation of the method, capturing the audio and video stream includes capturing the video stream from a video input device and capturing the audio stream from an audio input device. The video input device is preferably a camera, but can alternatively be any other suitable video capture device. The camera can be a camera that is built into the device, or a camera that is connected to the device through a wired or wireless connection. The video stream can be received from one or more video input devices. The video stream is preferably captured by the primary native application (e.g., through the API of the secondary native application rendering the graphics to be shared), but can alternatively be captured at the graphics card and subsequently extracted by the primary native application, or be captured at any other suitable device component. Capturing the video stream can additionally include processing the video stream to improve the apparent definition of video signals.
  • Capturing the audio stream can include capturing an audio input stream from an audio input device, which functions to capture external audio (audio external from the running applications on the originating device). The audio input stream is preferably an audio stream generated by the user (e.g., voice, music, etc.), but can be any other suitable audio input stream. The audio input stream can be received from one or more audio input devices. The audio input device can be a microphone connected to or integrated within the originating device, an audio input jack, or any other suitable audio input device. The audio input stream is preferably captured by the primary module, but can alternatively be captured by any other suitable module. The audio input stream is preferably sent to the server and subsequently sent to receiving devices, but can alternatively be directly sent to the receiving devices. Capturing the audio input can additionally include processing the audio input stream to cancel the system audio stream echoes from the audio input stream (echo cancellation) prior to sending the audio input stream to the server. Alternatively or additionally, the audio input stream can be processed to cancel the received audio stream echoes from the audio input stream, and can be particularly desirable when no system audio streams are playing. Alternatively or additionally, the audio input stream can be processed to automatically control gain, which functions to substantially maintain the output audio level at a given volume despite fluctuations in the audio input level. Alternatively or additionally, the audio input stream can be processed to de-noise the audio input stream to remove background noise. However, any other suitable filtering mechanism or method can be applied to improve or enhance perceived audio quality. 
In some variations of the method, the audio parameters for system audio stream playback can be dynamically adjusted based on the captured audio input stream. In one example, when the captured audio input stream is above an amplitude threshold (e.g., the user associated with the originating device is speaking louder than a threshold volume), the volume setting (amplitude) of the system audio stream playback can be decreased. The volume of the system audio stream playback can be reset to the previous volume setting once the captured audio input stream falls below the amplitude threshold.
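The ducking behavior described in this example can be sketched as a small state update. The threshold and ducked volume values are illustrative, and the two-value return (current volume plus the remembered setting) is an assumption made for the sketch.

```python
def duck_system_volume(mic_amplitude, current_volume, saved_volume,
                       amp_threshold=0.2, ducked_volume=0.3):
    """Lower system-audio playback while the captured input is above
    the amplitude threshold (the user is speaking loudly), and
    restore the prior setting once the input falls back below it.

    Returns (new_volume, saved_volume); `saved_volume` is None when
    no ducking is in effect."""
    if mic_amplitude > amp_threshold:
        if saved_volume is None:
            saved_volume = current_volume  # remember the user's setting
        return min(current_volume, ducked_volume), saved_volume
    if saved_volume is not None:
        return saved_volume, None          # reset to the previous setting
    return current_volume, None
```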
  • In another variation of the method, capturing the video and audio stream includes capturing the video and audio of content to be shared. The shared content is preferably any suitable content that can be shared from the user device, and can include content streamed from the internet (e.g., movies from a video hosting site), screenshares (e.g., of a specific portion of the user display can be cropped and shared), one or more native applications (e.g., wherein the view of the primary native application is shared and other portions of the user display are not), or any other suitable content. The audio and video stream of the shared content is preferably captured in response to receipt of a sharing selection received from the user. The sharing selection preferably includes a reference to the content to be shared, such as a reference to a second native application, a selection of a desktop portion, or a reference to a third party source (e.g., a URL). However, the content can be shared in response to the occurrence of any other suitable sharing event.
  • The shared content can be displayed on the originating user device directly from the graphics and sound card (e.g., directly through the second native application), routed through the primary native application, or sent to and re-received from the server prior to display on the user device (e.g., by the primary native application). The latter variation allows the system to accommodate for display delays between the user device and the participant user devices. Alternatively, the content graphics (e.g., video frames) can be displayed after a predetermined delay. The delay period can be determined by the primary native application based on the substantially instantaneous timestamp of the user device and the timestamps of the most recently received audio or video stream frames, wherein the delay period is preferably approximately the difference between the substantially instantaneous timestamp and the timestamp of the audio or video frame. Alternatively, the delay period can be received by the primary native application from the server, wherein the server can estimate the delay period. The delay period can be estimated by the server based on the network connectivity parameters of the originating and secondary user devices, based on the difference between the encoded timestamp of the most recently received audio or video frame (e.g., packet) and substantially instantaneous time of receipt, or based on any other suitable parameter indicative of a delay between data transmission and receipt at the primary and secondary user devices. However, the content can be presented at any other suitable time.
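The delay-period estimate described above — approximately the difference between the device's substantially instantaneous timestamp and the timestamp of the most recently received frame — can be sketched in one line; the names are illustrative.

```python
def estimate_delay_ms(now_ms, last_frame_ts_ms):
    """Approximate the display delay as the gap between the current
    device timestamp and the timestamp encoded in the most recently
    received audio or video frame, clamped at zero."""
    return max(0, now_ms - last_frame_ts_ms)
```

The originating device would then hold each locally rendered frame for roughly this period before display, so local and remote playback stay approximately aligned.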
  • Sharing content preferably includes capturing the video stream of the content. The video stream is preferably captured by a video sharing system. The video sharing system preferably additionally includes a video capture module that captures the video stream of a running application or a stream of screen images of the device.
  • The video stream is preferably captured from a second native application executed on the user device. Capturing the video stream from the second native application preferably includes accessing the second native application through an application programming interface provided by the primary native application or by the operating system (e.g., a specific API, Open GL, or any other suitable interface) and capturing the video, images, or any other suitable graphics rendered by the second native application.
  • Alternatively, capturing the video stream of the content includes capturing a series of screenshots of the user desktop or screen. The series of screenshots can be captured by a digital frame superimposed over a portion of the user device desktop, wherein the digital frame captures and sends any images within the frame to the primary native application. The digital frame is preferably defined by the user, but can alternatively be automatically defined and positioned.
  • Alternatively, capturing the video stream of the content includes accessing the graphics card, rerouting the content graphics output (e.g., the graphics for the second native application) to the primary native application, and sending the graphics output to the server.
  • Capturing the audio and video of content to be shared preferably includes capturing the audio stream of the content S520, as shown in FIG. 10. Capturing the content audio stream is preferably performed by an audio sharing system including a primary module and a routing module. The audio sharing system is preferably part of the video sharing system, but can alternatively be a stand-alone system. The audio sharing system is preferably a native application stored and installed on the device, but can alternatively be a web application (e.g., browser based application) or any other suitable application. The primary module of the audio sharing system preferably functions to receive the captured audio stream from the routing module, to receive one or more audio streams from a server or to locally generate one or more audio streams, to mix the captured audio stream with one or more audio streams received from the server, to play the mixed audio stream through the selected audio device, and to store the audio device selection. However, the primary module of the audio sharing system can additionally perform any other suitable functions. The routing module of the audio sharing system preferably functions as a virtual pass-through audio component that captures the audio output of running applications on the device and passes the captured audio to the primary module. The routing module can additionally store user audio settings (e.g., volume level, playback mode, etc.). However, the routing module of the audio sharing system can function to perform any other suitable audio capture functions, such as audio filtering (de-noising, signal processing, etc.). 
The routing module is preferably run in response to the detection of a trigger event, such as a receipt of a video or audio share indicator (e.g., selection of a share button), and is preferably shut off once a shutoff event is detected, such as the end of the shared video, the end of an audio track, or the receipt of an end sharing indicator. However, the routing module can run continuously during primary module operation, or run at any other suitable time.
  • Capturing the system audio stream S520 preferably functions to capture substantially unaltered audio streams from the applications running on the originating device. The system audio stream is preferably captured with the routing module, wherein the audio streams of the running applications are preferably rerouted from the default audio output (e.g., the audio device of the device) to the routing module. The routing module preferably passes each captured audio stream in an unaltered form to the primary module. Alternatively, the routing module can mix the disparate audio streams into a captured audio stream. In one variation of the method, capturing the system audio stream includes detecting a pre-assigned audio output device, saving the audio parameters (e.g., volume level and mute state) for the pre-assigned audio output device to the routing module, assigning the routing module as the audio output for the device, and assigning the pre-assigned audio output device to play the mixed audio stream from the primary module. The pre-assigned audio output device can be selected by the user or be a default audio output device. The audio parameters can be selected by the user or be default audio settings (e.g., volume level, mute state, etc.). In response to the receipt of a change in the audio parameters (e.g., received from a user or another application on the originating device), the audio parameters stored in the routing module are preferably changed. The audio parameter changes are preferably applied at the audio output device, wherein the mixed audio stream is preferably synchronized with the audio parameters at the audio output device. In response to receipt of a change in the selected audio output device (e.g., as received from a user), the audio output of the primary module is preferably reassigned to the newly selected audio output device.
  • Generating a mixed audio stream from the system audio stream functions to produce an audio stream for the chat group associated with the originating device. The mixed audio stream is preferably generated by the primary module, wherein generating the mixed audio stream further includes sending the primary module the captured system audio stream by the routing module. Generating a mixed audio stream preferably includes receiving an audio stream from the server at the primary module and mixing the received audio stream with the system audio stream at the primary module, such that the mixed audio stream includes the system audio stream and the received audio stream. The received audio stream is preferably the system audio stream from a second originating device (e.g., wherein the first originating device functions as the receiving device), but can alternatively be an audio track saved on and sent by the server. The system audio stream can additionally or alternatively be mixed with a saved audio track (e.g., an ambient noise track) that is saved on the originating device. As shown in FIG. 11, generating the mixed audio stream can additionally include decoding the data for the received audio stream, buffering the data for the received audio stream (e.g., jitter buffer and circular buffer), converting the received data for the received audio stream into an audio stream, mixing one or more received audio streams and the system audio stream with a group mixer, and/or mixing the resultant audio stream with the audio preferences, which are preferably received or retrieved from the routing module. The mixers can be stereo mixers, multichannel mixers, or any other suitable mixer.
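The core mixing step — summing the system audio stream with one or more streams received from the server and applying the stored audio parameters — can be sketched per sample as below. This is illustrative only: real mixers operate on interleaved PCM buffers, and the float sample range and simple clamping are assumptions of the sketch.

```python
def mix_streams(system_block, received_blocks, volume=1.0):
    """Mix one block of system audio with equally sized blocks
    received from the server, apply the stored volume setting, and
    clamp each sample to the nominal [-1.0, 1.0] float range."""
    mixed = []
    for i, s in enumerate(system_block):
        total = s + sum(block[i] for block in received_blocks)
        mixed.append(max(-1.0, min(1.0, total * volume)))
    return mixed
```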
  • As shown in FIG. 10, playing the mixed audio stream through an audio output device preferably functions to play the mixed stream through the audio output device of the originating device. As previously mentioned, the mixed stream preferably includes the system audio stream and the audio stream received from the server (e.g., a second system audio stream from a second originating device, audio input from the second originating device, etc.). Playing the mixed audio stream through the audio output device preferably includes applying the audio parameters to the mixed audio stream and sending the mixed audio stream to the selected audio output device. Alternatively, the mixed audio stream and the audio parameters can be sent to the audio output device, wherein the audio output device applies or synchronizes the audio parameters to the mixed audio stream.
  • Capturing the audio stream can additionally include sending the system audio stream of the originating device to a server. The server can be the same server from which the audio stream is received, but can alternatively be a different server. The system audio stream is preferably sent by the primary module to the server, but can alternatively be sent by the routing module. Sending the system audio stream to the server preferably additionally includes processing the system audio stream to cancel mixed audio stream echoes (e.g., echoes from the audio stream received from the server) prior to sending the system audio stream to the server.
  • In one variation of the method as shown in FIG. 12, sending the system audio stream of the originating device to the server includes sending the system audio stream to the server before, after, or simultaneously with system audio stream playback through the audio output device of the originating device.
  • In another variation of the method as shown in FIG. 13, the system audio stream is sent to the server after capture, is sent back by the server to the primary module as a portion of the audio stream, and is played through the audio output device after receipt, such that the sharing audio additionally includes sending a system audio stream of an originating device to a server and receiving an audio stream including the system audio stream from the server. This variation can function to accommodate for the delay in system audio stream playback between the originating device and the receiving device, as the originating device and receiving device will receive and subsequently play the system audio stream (from the originating device) at approximately the same time. In this variation, the primary module can synchronize latencies between the originating and receiving devices, wherein the audio and/or video streams received from the server preferably include metadata that enables synchronization. Alternatively, the server can additionally synchronize latencies between the originating and receiving devices, and can dynamically adjust for delays such that the originating device and receiving device receive and/or play the audio streams at substantially the same time. The video sharing system can additionally function to delay the video playback on the originating device to substantially match the delay on the system audio stream playback.
  • Alternatively, the audio of an application can be individually captured and sent to the server. In this variation of the method, the primary native application can access the audio of the secondary native application through the application programming interface of the secondary native application. Alternatively, the primary native application can access the sound card or graphics card and reroute the audio associated with the secondary native application to the primary native application, server, and/or audio output. However, the content audio can be otherwise captured and shared.
  • Sharing content can additionally include sharing content from a third party source. The third party source is preferably a video hosting system, such as YouTube or Vimeo, but can alternatively be any other suitable content hosting system. The shared content preferably replaces the audio and video stream of the user, but can alternatively only replace the audio and video stream of the user within the chat group video stream, be streamed in addition to the audio and video stream of the user, be streamed as an additional participant in the chat group, or be otherwise presented to the chat group participants. Sharing content from a third party source preferably includes receiving a sharing request including a link or reference to the third party source, receiving the referenced content from the third party source at the server or the native application, and streaming the content to the participant user devices.
  • Sharing content from the third party source can additionally include receiving a sharing request from the user at the server or user device, wherein the sharing request includes a link or reference to the third party source, sending the reference to the participant user devices of the chat group by the server or user device, and sending a synchronization timestamp to the participant user devices of the chat group by the server or user device. The participant user devices receive the referenced content from the third party source, and use the synchronization timestamp to determine the substantially current play position. The synchronization timestamp can be the time at which the content should start playing, or can be the content timestamp of the current play position. In the latter instance, the server or the primary native application of the originating user can track the current play position or estimate the current play position. In one variation, the primary native application of the originating user device tracks the current play position or any media action selections (e.g., pause, fast forward, rewind, etc.) and sends the current play position or media action selection and the corresponding device time to the server. The current play position can be sent at a predetermined frequency or in response to a request received from the server (e.g., wherein the server sends the request in response to a monitoring event occurrence, such as the entry of a new user to the chat room or group). The media action selection can be sent in response to media action selection receipt, can be sent at a predetermined frequency, or can be sent in response to any other suitable event. In another variation, the server estimates the current play position based on the duration from the start time. However, the synchronization timestamp can alternatively be determined in any other suitable manner. Content from a third party source can be shared in any other suitable manner.
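The play-position estimate a joining participant device would compute from the synchronization timestamp can be sketched as below. The sketch assumes the second form of the timestamp (the content timestamp at the current play position, paired with the device time at which it was reported) and ignores clock offset between devices.

```python
def current_play_position(sync_ts_s, reported_at_s, now_s, paused=False):
    """Estimate the current play position of shared third-party
    content: the reported content timestamp plus the wall-clock time
    elapsed since it was reported, unless playback is paused."""
    if paused:
        return sync_ts_s
    return sync_ts_s + (now_s - reported_at_s)
```

A media action selection (pause, fast forward, rewind) would simply replace the stored `(sync_ts_s, reported_at_s)` pair and pause flag before the next estimate.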
  • An alternative embodiment preferably implements the above methods in a computer-readable medium storing computer-readable instructions. The instructions are preferably executed by computer-executable components preferably integrated with a communication routing system. The communication routing system may include a communication system, routing system and a pricing system. The computer-readable medium may be stored on any suitable computer readable media such as RAMs, ROMs, flash memory, EEPROMs, optical devices (CD or DVD), hard drives, floppy drives, or any suitable device. The computer-executable component is preferably a processor but the instructions may alternatively or additionally be executed by any suitable dedicated hardware device.
  • As a person skilled in the art will recognize from the previous detailed description and from the figures and claims, modifications and changes can be made to the preferred embodiments of the invention without departing from the scope of this invention defined in the following claims.

Claims (20)

What is claimed is:
1. A method, comprising:
providing a virtual chat room that is accessible by a first user device and a second user device;
receiving, from the first user device, a first media item generated by an application executed on the first user device to share within the virtual chat room;
sending the first media item for distribution to the second user device via the virtual chat room;
receiving a second media item from the second user device for provision in the virtual chat room; and
sending the second media item for distribution to the first user device via the virtual chat room.
2. The method of claim 1 further comprising receiving a presentation setting for the virtual chat room from the first user device, wherein the virtual chat room is provided based, at least in part, on the presentation setting.
3. The method of claim 2, wherein the presentation setting includes a target presentation parameter for the virtual chat room.
4. The method of claim 2, wherein the presentation setting includes a list of users that have been invited to access the virtual chat room, wherein the second user device is associated with a user that is included in the list of users that have been invited to access the virtual chat room.
5. The method of claim 2, wherein the presentation setting includes a content sharing setting, wherein the content sharing setting indicates that the second user device is permitted to share content with the virtual chat group.
6. The method of claim 2, wherein receiving the second media item from the second user device for provision in the virtual chat room comprises identifying a switch condition, wherein the second media item is sent for distribution to the first user device via the virtual chat room responsive to identifying the switch condition.
7. The method of claim 1, wherein providing the virtual chat room comprises providing the virtual chat room in a virtual position on a graphical user interface, wherein the second media item is provided to the first user device via the graphical user interface.
8. The method of claim 1, wherein receiving the first media item generated by the application executed on the user device comprises extracting the first media item from the application through an application programming interface for the application.
9. The method of claim 8, wherein receiving the first media item generated by the application executed on the user device comprises accessing graphics rendering instructions for the application at a graphics card.
10. A system, comprising:
one or more processors; and
one or more computer-readable non-transitory storage media coupled to one or more of the processors and comprising instructions operable when executed by one or more of the processors to cause the system to:
provide a virtual chat room that is accessible by a first user device and a second user device;
receive, from the first user device, a first media item generated by an application executed on the first user device to share within the virtual chat room;
send the first media item for distribution to the second user device via the virtual chat room;
receive a second media item from the second user device for provision in the virtual chat room; and
send the second media item for distribution to the first user device via the virtual chat room.
11. The system of claim 10, wherein execution of the instructions further causes the system to receive a presentation setting for the virtual chat room from the first user device, wherein the virtual chat room is provided based, at least in part, on the presentation setting.
12. The system of claim 11, wherein the presentation setting includes a target presentation parameter for the virtual chat room.
13. The system of claim 11, wherein the presentation setting includes a list of users that have been invited to access the virtual chat room, wherein the second user device is associated with a user that is included in the list of users that have been invited to access the virtual chat room.
14. The system of claim 11, wherein the presentation setting includes a content sharing setting, wherein the content sharing setting indicates that the second user device is permitted to share content with the virtual chat group.
14. The system of claim 11, wherein the presentation setting includes a content sharing setting, wherein the content sharing setting indicates that the second user device is permitted to share content within the virtual chat room.
16. The system of claim 11, wherein when providing the virtual chat room, the system is to provide the virtual chat room in a virtual position on a graphical user interface, wherein the second media item is provided to the first user device via the graphical user interface.
17. A plurality of non-transitory computer-readable storage media embodying software that is operative when executed to:
provide a virtual chat room that is accessible by a first user device and a second user device;
receive, from the first user device, a first media item generated by an application executed on the first user device to share within the virtual chat room;
send the first media item for distribution to the second user device via the virtual chat room;
receive a second media item from the second user device for provision in the virtual chat room; and
send the second media item for distribution to the first user device via the virtual chat room.
18. The non-transitory computer-readable storage media of claim 17, wherein the software is further operative when executed to receive a presentation setting for the virtual chat room from the first user device, wherein the virtual chat room is provided based, at least in part, on the presentation setting.
19. The non-transitory computer-readable storage media of claim 18, wherein the presentation setting includes a target presentation parameter for the virtual chat room.
20. The non-transitory computer-readable storage media of claim 18, wherein the presentation setting includes a list of users that have been invited to access the virtual chat room, wherein the second user device is associated with a user that is included in the list of users that have been invited to access the virtual chat room.
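The presentation settings recited in claims 2, 4, and 5 (an invited-user list gating access, and a content-sharing setting gating who may share) amount to simple permission checks. A hypothetical sketch, with all field and function names invented for illustration only:

```python
def can_join(settings, user):
    """Claim 4: a device may access the room only if its user is on the invite list."""
    return user in settings.get("invited_users", [])


def can_share(settings, user):
    """Claim 5: a device may share content only if the content sharing
    setting permits it and the user has access to the room."""
    return settings.get("content_sharing", False) and can_join(settings, user)


# A presentation setting received from the first user device (claim 2).
presentation_setting = {
    "invited_users": ["alice", "bob"],  # claim 4: invited-user list
    "content_sharing": True,            # claim 5: sharing permitted
}

print(can_join(presentation_setting, "bob"))       # True
print(can_share(presentation_setting, "mallory"))  # False (not invited)
```

The claims do not specify a representation for the setting; a key-value structure is used here purely for concreteness.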
US16/788,199 2012-12-19 2020-02-11 Method and system for sharing and discovery Pending US20200186373A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US201261739544P 2012-12-19 2012-12-19
US201361777275P 2013-03-12 2013-03-12
US14/134,240 US9755847B2 (en) 2012-12-19 2013-12-19 Method and system for sharing and discovery
US15/694,038 US10560276B2 (en) 2012-12-19 2017-09-01 Method and system for sharing and discovery
US16/788,199 US20200186373A1 (en) 2012-12-19 2020-02-11 Method and system for sharing and discovery

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US16/788,199 US20200186373A1 (en) 2012-12-19 2020-02-11 Method and system for sharing and discovery

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US15/694,038 Continuation US10560276B2 (en) 2012-12-19 2017-09-01 Method and system for sharing and discovery

Publications (1)

Publication Number Publication Date
US20200186373A1 true US20200186373A1 (en) 2020-06-11

Family

ID=50932481

Family Applications (4)

Application Number Title Priority Date Filing Date
US14/134,229 Pending US20140173467A1 (en) 2012-12-19 2013-12-19 Method and system for content sharing and discovery
US14/134,240 Active 2036-07-06 US9755847B2 (en) 2012-12-19 2013-12-19 Method and system for sharing and discovery
US15/694,038 Active 2034-02-20 US10560276B2 (en) 2012-12-19 2017-09-01 Method and system for sharing and discovery
US16/788,199 Pending US20200186373A1 (en) 2012-12-19 2020-02-11 Method and system for sharing and discovery

Family Applications Before (3)

Application Number Title Priority Date Filing Date
US14/134,229 Pending US20140173467A1 (en) 2012-12-19 2013-12-19 Method and system for content sharing and discovery
US14/134,240 Active 2036-07-06 US9755847B2 (en) 2012-12-19 2013-12-19 Method and system for sharing and discovery
US15/694,038 Active 2034-02-20 US10560276B2 (en) 2012-12-19 2017-09-01 Method and system for sharing and discovery

Country Status (2)

Country Link
US (4) US20140173467A1 (en)
WO (1) WO2014100374A2 (en)

Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9448708B1 (en) * 2011-10-19 2016-09-20 Google Inc. Theming for virtual collaboration
US9083816B2 (en) * 2012-09-14 2015-07-14 Microsoft Technology Licensing, Llc Managing modality views on conversation canvas
US20140173467A1 (en) * 2012-12-19 2014-06-19 Rabbit, Inc. Method and system for content sharing and discovery
US9369670B2 (en) * 2012-12-19 2016-06-14 Rabbit, Inc. Audio video streaming system and method
JP6187054B2 (en) * 2013-09-03 2017-08-30 ソニー株式会社 Information processing apparatus, information processing method, and program
US10027731B2 (en) * 2013-10-25 2018-07-17 Louis Gurtowski Selective capture with rapid sharing of user computer or mixed reality actions, states using interactive virtual streaming
US9565224B1 (en) * 2013-12-31 2017-02-07 Google Inc. Methods, systems, and media for presenting a customized user interface based on user actions
US20150200785A1 (en) * 2014-01-10 2015-07-16 Adobe Systems Incorporated Method and apparatus for managing activities in a web conference
KR20150096248A (en) * 2014-02-14 2015-08-24 삼성전자주식회사 Method and apparatus for creating communication group of electronic device
JP5791744B1 (en) * 2014-03-18 2015-10-07 株式会社ドワンゴ Terminal apparatus, moving image display method, and program
RU2646351C2 (en) * 2014-03-27 2018-03-02 Общество С Ограниченной Ответственностью "Яндекс" Method for transmitting a notification of an unread e-mail message (options) to the user and an electronic device used therefor
US9769097B2 (en) * 2014-05-29 2017-09-19 Multi Media, LLC Extensible chat rooms in a hosted chat environment
US20150365371A1 (en) * 2014-06-17 2015-12-17 George McDowell Agnew Channels: Real Time, Internet Social Networking, Multichannel, Communication and Multimedia Sharing Application
US20160098180A1 (en) * 2014-10-01 2016-04-07 Sony Corporation Presentation of enlarged content on companion display device
KR101626474B1 (en) * 2015-02-09 2016-06-01 라인 가부시키가이샤 Apparatus for providing document sharing service based messenger and method using the same
JP2016162143A (en) * 2015-02-27 2016-09-05 インフォサイエンス株式会社 Member information management system and member information management program
US10025771B2 (en) * 2015-05-07 2018-07-17 Here Global B.V. Method and apparatus for providing shared annotations and recall of geospatial information
US9652196B2 (en) * 2015-06-29 2017-05-16 Microsoft Technology Licensing, Llc Smart audio routing management
CN105024835B (en) * 2015-07-23 2017-07-11 腾讯科技(深圳)有限公司 Group management and device
WO2017086876A1 (en) 2015-11-18 2017-05-26 Razer (Asia-Pacific) Pte. Ltd. Interlacing methods, computer-readable media, and interlacing devices
US10171843B2 (en) 2017-01-19 2019-01-01 International Business Machines Corporation Video segment manager
CN107124622A (en) * 2017-04-14 2017-09-01 武汉鲨鱼网络直播技术有限公司 A kind of audio frequency and video interflow compact system and method
US10541824B2 (en) * 2017-06-21 2020-01-21 Minerva Project, Inc. System and method for scalable, interactive virtual conferencing
US10726603B1 (en) * 2018-02-28 2020-07-28 Snap Inc. Animated expressive icon
US20200045095A1 (en) * 2018-08-06 2020-02-06 NetTalk.com, Inc. Method and Apparatus for Coviewing Video
US20200099962A1 (en) * 2018-09-20 2020-03-26 Facebook, Inc. Shared Live Audio
US10791224B1 (en) * 2019-08-20 2020-09-29 Motorola Solutions, Inc. Chat call within group call

Family Cites Families (115)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5420860A (en) 1990-06-18 1995-05-30 Intelect, Inc. Volume control for digital communication system
US5736982A (en) * 1994-08-03 1998-04-07 Nippon Telegraph And Telephone Corporation Virtual space apparatus with avatars and speech
US5533112A (en) 1994-03-31 1996-07-02 Intel Corporation Volume control in digital teleconferencing
US6219045B1 (en) * 1995-11-13 2001-04-17 Worlds, Inc. Scalable virtual world chat client-server system
US5956491A (en) 1996-04-01 1999-09-21 Marks; Daniel L. Group communications multiplexing system
US7379961B2 (en) 1997-04-30 2008-05-27 Computer Associates Think, Inc. Spatialized audio in a three-dimensional computer-based scene
US6396509B1 (en) * 1998-02-21 2002-05-28 Koninklijke Philips Electronics N.V. Attention-based interaction in a virtual environment
US6329986B1 (en) * 1998-02-21 2001-12-11 U.S. Philips Corporation Priority-based virtual environment
US6212548B1 (en) 1998-07-30 2001-04-03 At & T Corp System and method for multiple asynchronous text chat conversations
TW463503B (en) * 1998-08-26 2001-11-11 United Video Properties Inc Television chat system
AU6392899A 1998-09-15 2000-04-03 Local2Me.Com, Inc. Dynamic Matching™ of users for group communication
US7536705B1 (en) 1999-02-22 2009-05-19 Tvworks, Llc System and method for interactive distribution of selectable presentations
US6697476B1 (en) 1999-03-22 2004-02-24 Octave Communications, Inc. Audio conference platform system and method for broadcasting a real-time audio conference over the internet
JP4425407B2 (en) * 1999-05-13 2010-03-03 富士通株式会社 Conversation sending method and conversation system
US6442590B1 (en) * 1999-05-27 2002-08-27 Yodlee.Com, Inc. Method and apparatus for a site-sensitive interactive chat network
US8145776B1 (en) * 1999-10-15 2012-03-27 Sony Corporation Service providing apparatus and method, and information processing apparatus and method as well as program storage medium
US6772195B1 (en) * 1999-10-29 2004-08-03 Electronic Arts, Inc. Chat clusters for a virtual world application
US6519771B1 (en) 1999-12-14 2003-02-11 Steven Ericsson Zenith System for interactive chat without a keyboard
JP3434487B2 (en) * 2000-05-12 2003-08-11 株式会社イサオ Position-linked chat system, position-linked chat method therefor, and computer-readable recording medium recording program
US6501739B1 (en) 2000-05-25 2002-12-31 Remoteability, Inc. Participant-controlled conference calling system
JP2002123478A (en) * 2000-10-17 2002-04-26 Isao:Kk Chat system, device and method for processing chat information and recording medium
US7313593B1 (en) * 2000-10-24 2007-12-25 International Business Machines Corporation Method and apparatus for providing full duplex and multipoint IP audio streaming
CA2379782C (en) * 2001-04-20 2010-11-02 Evertz Microsystems Ltd. Circuit and method for live switching of digital video programs containing embedded audio data
DE10145490B4 (en) * 2001-09-14 2006-08-31 Siemens Ag Method for exchanging messages in a chat group
US7328242B1 (en) 2001-11-09 2008-02-05 Mccarthy Software, Inc. Using multiple simultaneous threads of communication
AUPR989802A0 (en) * 2002-01-09 2002-01-31 Lake Technology Limited Interactive spatialized audiovisual system
US6813360B2 (en) 2002-01-22 2004-11-02 Avaya, Inc. Audio conferencing with three-dimensional audio encoding
US7068792B1 (en) 2002-02-28 2006-06-27 Cisco Technology, Inc. Enhanced spatial mixing to enable three-dimensional audio deployment
US7844662B2 (en) * 2002-10-17 2010-11-30 At&T Intellectual Property Ii, L.P. Merging instant messaging (IM) chat sessions
US7386799B1 (en) * 2002-11-21 2008-06-10 Forterra Systems, Inc. Cinematic techniques in avatar-centric communication during a multi-user online simulation
CN1720740A (en) * 2002-12-04 2006-01-11 皇家飞利浦电子股份有限公司 Recommendation of video content based on the user profile of users with similar viewing habits
US9339728B2 (en) * 2002-12-10 2016-05-17 Sony Interactive Entertainment America Llc System and method for managing audio and video channels for video game players and spectators
US7454460B2 (en) * 2003-05-16 2008-11-18 Seiko Epson Corporation Method and system for delivering produced content to passive participants of a videoconference
US8873561B2 (en) 2003-08-18 2014-10-28 Cisco Technology, Inc. Supporting enhanced media communications using a packet-based communication link
WO2005104433A1 (en) * 2004-04-21 2005-11-03 Koninklijke Philips Electronics, N.V. System and method for managing threads in a network chat environment
US7945006B2 (en) 2004-06-24 2011-05-17 Alcatel-Lucent Usa Inc. Data-driven method and apparatus for real-time mixing of multichannel signals in a media server
US8443041B1 (en) 2004-07-02 2013-05-14 Aol Inc. Chat preview
US20060174207A1 (en) * 2005-01-31 2006-08-03 Sharp Laboratories Of America, Inc. Systems and methods for implementing a user interface for multiple simultaneous instant messaging, conference and chat room sessions
US7864209B2 (en) 2005-04-28 2011-01-04 Apple Inc. Audio processing in a multi-participant conference
US7949117B2 (en) 2005-04-28 2011-05-24 Apple Inc. Heterogeneous video conferencing
US20070043822A1 (en) * 2005-08-18 2007-02-22 Brumfield Sara C Instant messaging prioritization based on group and individual prioritization
US20070168511A1 (en) * 2006-01-17 2007-07-19 Brochu Jason M Method and apparatus for user moderation of online chat rooms
US8001184B2 (en) 2006-01-27 2011-08-16 International Business Machines Corporation System and method for managing an instant messaging conversation
US7577711B2 (en) * 2006-02-07 2009-08-18 International Business Machines Corporation Chat room communication network implementation enabling senders to restrict the display of messages to the chat room chronological displays of only designated recipients
US8151323B2 (en) 2006-04-12 2012-04-03 Citrix Systems, Inc. Systems and methods for providing levels of access and action control via an SSL VPN appliance
US7945620B2 (en) 2006-06-13 2011-05-17 International Business Machines Corporation Chat tool for concurrently chatting over more than one interrelated chat channels
US20070300165A1 (en) * 2006-06-26 2007-12-27 Microsoft Corporation, Corporation In The State Of Washington User interface for sub-conferencing
US8120637B2 (en) * 2006-09-20 2012-02-21 Cisco Technology, Inc. Virtual theater system for the home
US8126129B1 (en) 2007-02-01 2012-02-28 Sprint Spectrum L.P. Adaptive audio conferencing based on participant location
US8006191B1 (en) 2007-03-21 2011-08-23 Google Inc. Chat room with thin walls
US20080294721A1 (en) * 2007-05-21 2008-11-27 Philipp Christian Berndt Architecture for teleconferencing with virtual representation
US20090037006A1 (en) * 2007-08-03 2009-02-05 Transtechnology, Inc. Device, medium, data signal, and method for obtaining audio attribute data
US8683068B2 (en) * 2007-08-13 2014-03-25 Gregory J. Clary Interactive data stream
WO2009034412A1 (en) * 2007-09-13 2009-03-19 Alcatel Lucent Method of controlling a video conference
US8954178B2 (en) * 2007-09-30 2015-02-10 Optical Fusion, Inc. Synchronization and mixing of audio and video streams in network-based video conferencing call systems
US8009619B1 (en) 2007-10-23 2011-08-30 Phunware, Inc. Server-side wireless communications link support for mobile handheld devices
US8169916B1 (en) 2007-11-23 2012-05-01 Media Melon, Inc. Multi-platform video delivery configuration
US20090172557A1 (en) * 2008-01-02 2009-07-02 International Business Machines Corporation Gui screen sharing between real pcs in the real world and virtual pcs in the virtual world
US8412171B2 (en) 2008-01-21 2013-04-02 Alcatel Lucent Voice group sessions over telecommunication networks
US8223185B2 (en) * 2008-03-12 2012-07-17 Dish Network L.L.C. Methods and apparatus for providing chat data and video content between multiple viewers
US8397168B2 (en) * 2008-04-05 2013-03-12 Social Communications Company Interfacing with a spatial virtual communication environment
US8271579B2 (en) 2008-04-07 2012-09-18 Phunware, Inc. Server method and system for executing applications on a wireless device
US20090251488A1 (en) 2008-04-07 2009-10-08 Hands-On Mobile, Inc. Method and system for executing applications on a wireless device
US20090292608A1 (en) * 2008-05-22 2009-11-26 Ruth Polachek Method and system for user interaction with advertisements sharing, rating of and interacting with online advertisements
US8375308B2 (en) * 2008-06-24 2013-02-12 International Business Machines Corporation Multi-user conversation topic change
US20100058381A1 (en) * 2008-09-04 2010-03-04 At&T Labs, Inc. Methods and Apparatus for Dynamic Construction of Personalized Content
CN102165767A (en) * 2008-09-26 2011-08-24 惠普开发有限公司 Event management system for creating a second event
US20100091687A1 (en) * 2008-10-15 2010-04-15 Ted Beers Status of events
US20100145775A1 (en) * 2008-12-10 2010-06-10 Maria Alejandra Torres System and method for computer program implemented internet environmental information exchange and marketplace
US8863173B2 (en) * 2008-12-11 2014-10-14 Sony Corporation Social networking and peer to peer for TVs
DE102009002150A1 (en) * 2009-04-02 2010-10-07 BSH Bosch und Siemens Hausgeräte GmbH Method for operating a water-conducting household appliance
US8407287B2 (en) 2009-07-14 2013-03-26 Radvision Ltd. Systems, methods, and media for identifying and associating user devices with media cues
US8477661B2 (en) 2009-08-14 2013-07-02 Radisys Canada Ulc Distributed media mixing and conferencing in IP networks
US8448073B2 (en) 2009-09-09 2013-05-21 Viewplicity, Llc Multiple camera group collaboration system and method
US20110066438A1 (en) * 2009-09-15 2011-03-17 Apple Inc. Contextual voiceover
US8144633B2 (en) * 2009-09-22 2012-03-27 Avaya Inc. Method and system for controlling audio in a collaboration environment
US9111538B2 (en) * 2009-09-30 2015-08-18 T-Mobile Usa, Inc. Genius button secondary commands
KR20110037590A (en) 2009-10-07 2011-04-13 삼성전자주식회사 P2p network system and data transmission and reception method thereof
US8442198B2 (en) * 2009-10-20 2013-05-14 Broadcom Corporation Distributed multi-party conferencing system
US8782562B2 (en) * 2009-12-02 2014-07-15 Dell Products L.P. Identifying content via items of a navigation system
WO2011068919A1 (en) 2009-12-02 2011-06-09 Astro Gaming, Inc. Wireless game/audio system and method
US9191425B2 (en) 2009-12-08 2015-11-17 Citrix Systems, Inc. Systems and methods for remotely presenting a multimedia stream
US20110149809A1 (en) * 2009-12-23 2011-06-23 Ramprakash Narayanaswamy Web-Enabled Conferencing and Meeting Implementations with Flexible User Calling and Content Sharing Features
KR20120139666A (en) * 2010-01-29 2012-12-27 휴렛-팩커드 디벨롭먼트 컴퍼니, 엘.피. Portable computer having multiple embedded audio controllers
US8570907B2 (en) 2010-04-07 2013-10-29 Apple Inc. Multi-network architecture for media data exchange
US20130298040A1 (en) * 2010-04-30 2013-11-07 American Teleconferencing Services, Ltd. Systems, Methods, and Computer Programs for Providing Simultaneous Online Conferences
US20110271209A1 (en) * 2010-04-30 2011-11-03 American Teleconferncing Services Ltd. Systems, Methods, and Computer Programs for Providing a Conference User Interface
US20110271213A1 (en) * 2010-05-03 2011-11-03 Alcatel-Lucent Canada Inc. Event based social networking application
US8482593B2 (en) 2010-05-12 2013-07-09 Blue Jeans Network, Inc. Systems and methods for scalable composition of media streams for real-time multimedia communication
US8458085B1 (en) * 2010-06-03 2013-06-04 Zelman Yakubov Investor social networking website
US8458084B2 (en) * 2010-06-03 2013-06-04 Zelman Yakubov Investor social networking website
US8438226B2 (en) * 2010-06-22 2013-05-07 International Business Machines Corporation Dynamic adjustment of user-received communications for a real-time multimedia communications event
US9262531B2 (en) * 2010-07-23 2016-02-16 Applied Minds, Llc System and method for chat message prioritization and highlighting
US8607146B2 (en) * 2010-09-30 2013-12-10 Google Inc. Composition of customized presentations associated with a social media application
US9153000B2 (en) * 2010-12-13 2015-10-06 Microsoft Technology Licensing, Llc Presenting content items shared within social networks
US20120150971A1 (en) * 2010-12-13 2012-06-14 Microsoft Corporation Presenting notifications of content items shared by social network contacts
US8848025B2 (en) * 2011-04-21 2014-09-30 Shah Talukder Flow-control based switched group video chat and real-time interactive broadcast
US8934015B1 (en) * 2011-07-20 2015-01-13 Google Inc. Experience sharing
US10063430B2 (en) * 2011-09-09 2018-08-28 Cloudon Ltd. Systems and methods for workspace interaction with cloud-based applications
US10217117B2 (en) * 2011-09-15 2019-02-26 Stephan HEATH System and method for social networking interactions using online consumer browsing behavior, buying patterns, advertisements and affiliate advertising, for promotions, online coupons, mobile services, products, goods and services, entertainment and auctions, with geospatial mapping technology
US8509816B2 (en) * 2011-11-11 2013-08-13 International Business Machines Corporation Data pre-fetching based on user demographics
SG11201402546WA (en) * 2011-11-23 2014-06-27 Calgary Scient Inc Methods and systems for collaborative remote application sharing and conferencing
US8754926B1 (en) * 2011-11-29 2014-06-17 Google Inc. Managing nodes of a synchronous communication conference
US8972262B1 (en) * 2012-01-18 2015-03-03 Google Inc. Indexing and search of content in recorded group communications
US9001178B1 (en) * 2012-01-27 2015-04-07 Google Inc. Multimedia conference broadcast system
US8812602B2 (en) * 2012-04-03 2014-08-19 Python4Fun, Inc. Identifying conversations in a social network system having relevance to a first file
US20130346867A1 (en) * 2012-06-25 2013-12-26 United Video Properties, Inc. Systems and methods for automatically generating a media asset segment based on verbal input
US20140047049A1 (en) * 2012-08-07 2014-02-13 Milyoni, Inc. Methods and systems for linking and prioritizing chat messages
US8681203B1 (en) * 2012-08-20 2014-03-25 Google Inc. Automatic mute control for video conferencing
US9094524B2 (en) * 2012-09-04 2015-07-28 Avaya Inc. Enhancing conferencing user experience via components
WO2014047425A1 (en) * 2012-09-21 2014-03-27 Comment Bubble, Inc. Timestamped commentary system for video content
US8983836B2 (en) * 2012-09-26 2015-03-17 International Business Machines Corporation Captioning using socially derived acoustic profiles
US9055021B2 (en) * 2012-11-30 2015-06-09 The Nielsen Company (Us), Llc Methods and apparatus to monitor impressions of social media messages
US20140173467A1 (en) * 2012-12-19 2014-06-19 Rabbit, Inc. Method and system for content sharing and discovery
US9369670B2 (en) * 2012-12-19 2016-06-14 Rabbit, Inc. Audio video streaming system and method

Also Published As

Publication number Publication date
US20140173467A1 (en) 2014-06-19
US20140173430A1 (en) 2014-06-19
WO2014100374A2 (en) 2014-06-26
US10560276B2 (en) 2020-02-11
US9755847B2 (en) 2017-09-05
US20170366366A1 (en) 2017-12-21
WO2014100374A3 (en) 2014-10-09

Similar Documents

Publication Publication Date Title
US10419721B2 (en) Method and apparatus for providing video conferencing
US10572117B2 (en) System for universal remote media control in a multi-user, multi-platform, multi-device environment
US10313631B2 (en) System and method to enable layered video messaging
US10687161B2 (en) Smart hub
US9800622B2 (en) Virtual socializing
US10524006B2 (en) Automatic transition of content based on facial recognition
US9485459B2 (en) Virtual window
US10802689B2 (en) Continuation of playback of media content by different output devices
US10650862B2 (en) Method and device for transmitting audio and video for playback
US9282129B2 (en) Multi-user interactive virtual environment including broadcast content and enhanced social layer content
EP2642753B1 (en) Transmission terminal, transmission system, display control method, and display control program
US8725125B2 (en) Systems and methods for controlling audio playback on portable devices with vehicle equipment
US9590837B2 (en) Interaction of user devices and servers in an environment
US9800939B2 (en) Virtual desktop services with available applications customized according to user type
TWI475855B (en) Synchronized wireless display devices
US10579243B2 (en) Theming for virtual collaboration
US20160134690A1 (en) System and Method for Providing a Virtual Environment with Shared Video on Demand
US9030523B2 (en) Flow-control based switched group video chat and real-time interactive broadcast
US9686329B2 (en) Method and apparatus for displaying webcast rooms
US8429704B2 (en) System architecture and method for composing and directing participant experiences
US8582565B1 (en) System for streaming audio to a mobile device using voice over internet protocol
US9172979B2 (en) Experience or “sentio” codecs, and methods and systems for improving QoE and encoding based on QoE experiences
US7760659B2 (en) Transmission optimization for application-level multicast
KR101951975B1 (en) communication system
US9661270B2 (en) Multiparty communications systems and methods that optimize communications based on mode and available bandwidth

Legal Events

Date Code Title Description
AS Assignment

Owner name: RABBIT (ASSIGNMENT FOR THE BENEFIT OF CREDITORS), LLC, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:RABBIT, INC.;REEL/FRAME:051789/0772

Effective date: 20190524

Owner name: RABBIT, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CLAVEL, PHILIPPE;ZAITSEV, TIMOPHEY;BIRRER, STEFAN;AND OTHERS;SIGNING DATES FROM 20140122 TO 20140127;REEL/FRAME:051789/0684

Owner name: RABBIT ASSET PURCHASE CORP., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:RABBIT (ASSIGNMENT FOR THE BENEFIT OF CREDITORS), LLC;REEL/FRAME:051900/0313

Effective date: 20190725

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER