CN107256136B - Facilitating simultaneous consumption of media content by multiple users using superimposed animations - Google Patents

Facilitating simultaneous consumption of media content by multiple users using superimposed animations

Info

Publication number
CN107256136B
Authority
CN
China
Prior art keywords
user
computing
media content
animation
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710450507.0A
Other languages
Chinese (zh)
Other versions
CN107256136A (en)
Inventor
P·I·费尔考伊
A·哈珀
R·亚戈迪奇
R·K·蒙贾
G·休梅克
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intel Corp
Original Assignee
Intel Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US13/532,612 (US9456244B2)
Application filed by Intel Corp
Priority to CN201380027047.0A (CN104335242B)
Publication of CN107256136A
Application granted
Publication of CN107256136B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1454Digital output to display device ; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1423Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/60Editing figures and text; Combining figures or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/802D [Two Dimensional] animation, e.g. using sprites
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/26Speech to text systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44218Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20224Image subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/22Cropping
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/001Image restoration
    • G06T5/002Denoising; Smoothing
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/12Overlay of images, i.e. displayed pixel being the result of switching between the corresponding input pixels
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2358/00Arrangements for display data security
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2370/00Aspects of data communication
    • G09G2370/04Exchange of auxiliary data, i.e. other than image data, between monitor and graphics controller

Abstract

Embodiments of apparatus, computer-implemented methods, systems, devices, and computer-readable media are described herein for facilitating simultaneous consumption of media content by a first user of a first computing device and a second user of a second computing device. In various embodiments, the facilitating may include superimposing an animation of the second user on media content presented on the first computing device, based on captured visual data of the second user received from the second computing device. The animation may be visually emphasized after a determination that the first user is interested in the second user. The facilitating may further include conditionally altering captured visual data of the first user based at least in part on whether the second user has been assigned a trusted status by the first user, and transmitting the altered or unaltered visual data of the first user to the second computing device.

Description

Facilitating simultaneous consumption of media content by multiple users using superimposed animations
The present application is a divisional application of Chinese patent application No. 201380027047.0, filed on May 20, 2013.
Cross Reference to Related Applications
This application claims priority to U.S. patent application No. 13/532,612, filed on June 25, 2012, the entire contents of which are hereby incorporated by reference for all purposes.
Technical Field
Embodiments of the invention relate generally to the field of data processing, and more particularly, to facilitating simultaneous consumption of media content by multiple users using superimposed animations.
Background
The background description provided herein is for the purpose of generally presenting the context of the disclosure. Work of the presently named inventors, to the extent it is described in this background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present disclosure. Unless otherwise indicated herein, the approaches described in this section are not prior art to the claims in this disclosure and are not admitted to be prior art by inclusion in this section.
People may wish to consume media content together. For example, a group of friends may gather to watch a movie, television program, sporting event, home video, or other similar media content. The friends may communicate with each other during the presentation to enhance the media consumption experience. Two or more people who are physically separated from each other and cannot gather in a single location may still wish to share a media content consumption experience.
Drawings
The embodiments will be readily understood by the following detailed description in conjunction with the accompanying drawings. To facilitate this description, like reference numerals designate like structural elements. Embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings.
FIG. 1 schematically illustrates an example computing device configured using applicable portions of the teachings of the present disclosure to communicate with other similarly configured remote computing devices, in accordance with various embodiments.
FIG. 2 schematically depicts the scenario of FIG. 1, in which a user of a computing device has indicated an interest in a particular overlay animation of a remote user, in accordance with various embodiments.
FIG. 3 schematically depicts the scenario of FIG. 1, in which a user of a computing device has indicated greater interest in the media content than in the animations of the remote users superimposed thereon, in accordance with various embodiments.
Fig. 4 schematically depicts an example method that may be implemented by a computing device in accordance with various embodiments.
Fig. 5 schematically depicts another example method that may be implemented by a computing device in accordance with various embodiments.
Fig. 6 schematically depicts an example computing device on which the disclosed methods and computer-readable media may be implemented, in accordance with various embodiments.
Detailed Description
In the following detailed description, reference is made to the accompanying drawings which form a part hereof wherein like numerals designate like parts throughout, and in which is shown by way of illustration embodiments which may be practiced. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the embodiments is defined by the appended claims and their equivalents.
Various operations may be described as multiple discrete actions or operations performed in turn, in a manner that is most helpful in understanding the claimed subject matter. However, the order of description should not be construed as to imply that these operations are necessarily order dependent. In particular, these operations may not be performed in the order of presentation. The described operations may be performed in an order different than the described embodiments. In additional embodiments, additional operations may be performed and/or the described operations may be omitted.
For the purposes of this disclosure, the phrase "a and/or B" means (a), (B), or (a and B). For the purposes of this disclosure, the phrase "A, B and/or C" means (a), (B), (C), (a and B), (a and C), (B and C), or (A, B and C).
The description may use the phrases "in one embodiment" or "in an embodiment," each of which may refer to one or more of the same or different embodiments. Furthermore, the terms "comprising," "including," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous.
As used herein, the term "module" may refer to, be a part of, or include the following components: an application specific integrated circuit ("ASIC"), an electronic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group) that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.
FIG. 1 schematically depicts an example computing device 100 configured using applicable portions of the teachings of the present disclosure, in accordance with various embodiments. Computing device 100 is depicted as a tablet computing device, but this is not meant to be limiting. Computing device 100 may be various other types of computing devices (or combinations thereof), including but not limited to: laptop computers, netbook computers, notebook computers, ultrabooks, smart phones, personal digital assistants ("PDAs"), ultra mobile PCs, mobile phones, desktop computers, servers, printers, scanners, monitors, set-top boxes, entertainment control units (e.g., gaming consoles), digital cameras, portable music players, digital video recorders, televisions (e.g., plasma, liquid crystal displays or "LCDs," cathode ray tube "CRTs," projection screens), and so forth.
The computing device 100 may include a display 102. The display 102 may be various types of displays including, but not limited to: plasma, LCD, CRT, etc. In some embodiments (not shown), the display may include a projection surface onto which the projector may project graphics with superimposed animations as described herein. In various embodiments, display 102 may be a touch screen display, which may be used to provide input to computing device 100 and operate computing device 100. In various embodiments, computing device 100 may include additional input controls (not shown) to facilitate input in addition to or instead of via the touch screen display.
In various embodiments, computing device 100 may include a camera 104 configured to capture visual data, such as one or more frames and/or digital images. As will be described below, the captured visual data may be transmitted to a remote computing device and used to facilitate animated overlays on other content by the remote device.
Although camera 104 is shown as an integral part of computing device 100 in fig. 1-3, this is not meant to be limiting. In various embodiments, the camera 104 may be separate from the computing device 100. For example, camera 104 may be an external camera (e.g., a webcam) that connects to computing device 100 using one or more wires or wirelessly.
In various embodiments, the computing device 100 may include an eye tracking device 106. In various embodiments, such as the computing tablet shown in fig. 1, the camera 104 also serves as an eye tracking device 106. However, this is not essential. In various embodiments, the eye tracking device 106 may be separate from the camera 104 and may be a different type of device and/or a different type of camera. For example, in embodiments where computing device 100 is a television or a gaming machine connected to a television, eye tracking device 106 may be a camera or other device (e.g., a motion capture device) operatively coupled to the television or gaming machine. Such an example is shown in fig. 3 and will be described below.
In various embodiments, the visual data captured by the camera 104 and/or the eye tracking device 106 may be analyzed using software, hardware, or any combination of the two to determine and/or estimate which portion (if any) of the display 102 the user is looking at. This determination may include various operations, including but not limited to: determining a distance of the user's face and/or eyes from the display 102, identifying one or more features of the user's eyes (e.g., pupils) in the visual data, measuring distances between the identified features, and so forth. As will be discussed below, the determination of which portion of the display 102 the user is looking at (and is thus indicating interest in) and which portion of the display 102 the user is not looking at (and is thus indicating no interest in) may be used in various ways.
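By way of illustration only, the mapping from an estimated gaze point to a portion of the display 102 might resemble the following Python sketch. The normalized coordinate system, the example region layout, and the idea that an upstream step supplies a gaze point are assumptions made for illustration; the actual pupil detection and calibration are device specific.

    from dataclasses import dataclass

    @dataclass
    class Region:
        """A rectangular portion of the display, in normalized [0, 1] coordinates."""
        name: str
        x0: float
        y0: float
        x1: float
        y1: float

        def contains(self, x: float, y: float) -> bool:
            return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

    def region_of_interest(gaze_xy, regions):
        """Return the first region containing the estimated gaze point, or None."""
        if gaze_xy is None:          # no face/eyes found in the captured visual data
            return None
        x, y = gaze_xy
        for region in regions:
            if region.contains(x, y):
                return region
        return None

    # Example layout: two animations near the bottom of the display; the media
    # content spans the whole display, so it is tested last.
    regions = [
        Region("animation_124", 0.05, 0.75, 0.25, 1.00),
        Region("animation_126", 0.30, 0.75, 0.50, 1.00),
        Region("media_content", 0.00, 0.00, 1.00, 1.00),
    ]

Because the media content region spans the full display, the animation regions are tested first; a gaze point falling outside both animation regions is attributed to the media content itself, which may be treated as an indication of disinterest in the superimposed animations.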
Computing device 100 may communicate with various remote computing devices via one or more networks. For example, in FIGS. 1 and 2, the computing device 100 is in wireless communication with a first wireless network access node 108, which itself is in communication with a network 110. In various embodiments, the first wireless access node 108 may be an evolved Node B, a WiMAX (IEEE 802.16 family) access point, a Wi-Fi (IEEE 802.11 family) access point, or any other node to which the computing device 100 may wirelessly connect. Network 110 may include one or more local or wide area networks, private networks, and/or public networks, including but not limited to the Internet. Although computing device 100 is shown wirelessly connected to network 110, this is not meant to be limiting, and computing device 100 may be connected to one or more networks in any manner, including via so-called "wired" connections.
Computing device 100 may be in network communication with any number of remote computing devices. For example, in fig. 1 and 2, computing device 100 is in network communication with a first remote computing device 112 and a second remote computing device 114. As with computing device 100, the first and second remote computing devices 112, 114 may be any type of computing device, such as those previously mentioned. For example, in fig. 1, the first remote computing device 112 is a smartphone and the second remote computing device 114 is a laptop computer.
The first remote computing device 112 is shown wirelessly connected to another wireless network access node 116. A second remote computing device 114 is shown connected to the network 110 via a wired connection. However, the type of network connection used by the remote computing device is not material. Any computing device may communicate with any other computing device in the manner described herein using any type of network connection.
In various embodiments, the computing device 100 may be configured to facilitate the simultaneous consumption of media content by a user (not shown) of the computing device 100 and one or more users of one or more remote computing devices (e.g., a first remote user 118 of a first remote computing device 112 and/or a second remote user 120 of a second remote computing device 114). In various embodiments, the computing device 100 may be configured to superimpose one or more animations of the remote user on the media content 122 presented on the computing device 100.
In various embodiments, one or more superimposed animations may be rendered by the computing device 100 based on visual data received from a remote computing device. In various embodiments, the visual data received from the remote computing device may be based on visual data of the remote user (e.g., 118, 120) captured at the remote computing device.
As used herein, the term "animation" may refer to a moving visual representation rendered from captured visual data. This may include, but is not limited to, video (e.g., bitmap) renderings of the captured visual data, artistic interpretations of the visual data (e.g., a cartoon rendered based on captured visual data of the user), and so forth. In other words, in this document "animation" is used as the noun form of the verb "animate" (meaning "to bring to life"). Thus, "animation" refers to a depiction or rendering of something "animate" (as opposed to "inanimate"). An "animation" is not limited to drawings created by an artist.
In various embodiments, the media content 122 may include, but is not limited to, audio and/or visual content such as videos (e.g., streaming), video games, web pages, slides, presentations (presentations), and the like.
By superimposing the animation of a remote user on the media content, two or more users that are remote from each other may be able to consume the media content "together". Each user may see animations of other users superimposed on the media content. Thus, for example, two or more friends that are remote from each other may share in the experience of watching a movie, a television program, a sporting event, and so forth.
In fig. 1, a first animation 124 and a second animation 126 (representing the first remote user 118 and the second remote user 120, respectively) may be superimposed on the media content 122 on the display 102 of the computing device 100. The first animation 124 may be based on captured visual data of the first remote user 118 received by the computing device 100 from the first remote computing device 112 (e.g., from a camera (not shown) of the first remote computing device 112). For example, the first animation 124 may be a video stream depicting the first remote user 118. Similarly, the second animation 126 may be based on captured visual data of the second remote user 120 from the second remote computing device 114 received at the computing device 100.
In various embodiments, the visual data from which the animations are rendered may be transmitted between computing devices in various forms. In some embodiments, one computing device may transfer captured visual data to another in the form of bitmaps (e.g., a stream of .png or other image files with an alpha mask). In other embodiments, captured visual data may be transmitted as streaming video with an alpha channel incorporated. In yet other embodiments, the captured visual data may be transmitted as a stream of bitmap (e.g., RGB) frames and depth frames, from which two-dimensional ("2D") or three-dimensional ("3D") animations may be rendered.
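For instance, when the captured visual data arrives as bitmap frames with an alpha mask, the superimposition at the receiving device may reduce to a per-pixel alpha blend. The following Python/NumPy sketch is illustrative only; the array shapes and placement parameters are assumptions.

    import numpy as np

    def overlay_animation(media_frame: np.ndarray, animation_rgba: np.ndarray,
                          x: int, y: int) -> np.ndarray:
        """Alpha-blend an RGBA animation frame onto an RGB media content frame.

        media_frame:    (H, W, 3) uint8 frame of the media content
        animation_rgba: (h, w, 4) uint8 frame of a remote user's animation
        (x, y):         top-left corner at which the animation is superimposed
        """
        h, w = animation_rgba.shape[:2]
        roi = media_frame[y:y + h, x:x + w].astype(np.float32)
        rgb = animation_rgba[..., :3].astype(np.float32)
        alpha = animation_rgba[..., 3:4].astype(np.float32) / 255.0

        blended = alpha * rgb + (1.0 - alpha) * roi
        out = media_frame.copy()
        out[y:y + h, x:x + w] = blended.astype(np.uint8)
        return out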
In fig. 1-3, the animation is rendered near the bottom of the display 102 so that the user of the computing device 100 can still view the media content 122. However, this is not meant to be limiting. Animations, such as the first animation 124 and the second animation 126, may be rendered on any portion of the display 102. In some embodiments, animations may be displayed on multiple displays. For example, if a desktop computer user has multiple monitors, one or more of the animations may be displayed on one monitor or another. In various embodiments, these animations may be superimposed on the content 122 on one or both monitors.
In various embodiments, after a user's interest in a particular animation is determined by computing device 100, the animation may be visually emphasized. As used herein, "visually emphasizing" an animation may refer to rendering the animation differently than other superimposed animations or media content, so as to draw attention to one animation relative to one or more other animations, or otherwise distinguish one animation from one or more other animations.
For example, in FIG. 1, the first and second animations 124, 126 are depicted with white-and-black outlines to indicate that the two animations are equally visually emphasized, such that the user's attention is not drawn to one animation more than to the other. For instance, both animations may depict the first and second remote users in real time and may be rendered in an almost equally noticeable manner. In other words, neither animation is "visually faded".
"visually fading" may refer to rendering an animation in the following manner: the animation is distinguished from other animations or media content in a manner that draws no attention to the animation or draws attention away from the animation (e.g., to another animation that is visually enhanced or to the underlying media content). An example of a visual fade is shown in fig. 2. The first animation 124 is shown in full black to indicate that it is visually obscured. The second animation 126 is shown with white and black outlines to indicate that it is visually emphasized.
In various embodiments, the animation of a remote user may be visually faded in various ways. For example, rather than rendering the animation of the user in full color or with all features, a silhouette of the remote user may be rendered, for example, in a single color (e.g., gray, black, or any other color or shade). In various embodiments, the remote user may be rendered as a shadow. In some embodiments, a visually faded animation may not be animated at all, or may be animated at a slower frame rate than a visually emphasized animation.
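As a purely illustrative example, visual fading of an RGBA animation frame might be realized as a single-shade silhouette and, optionally, a reduced frame rate, along the lines of the following sketch (the shade value and decimation factor are assumptions):

    import numpy as np

    def fade_to_silhouette(animation_rgba: np.ndarray, shade: int = 40) -> np.ndarray:
        """Replace every visible pixel with one shade, preserving only the outline."""
        faded = animation_rgba.copy()
        visible = faded[..., 3] > 0          # the alpha mask marks the user's extent
        faded[..., :3][visible] = shade      # single-color silhouette
        return faded

    def decimate_frames(frames, keep_every: int = 4):
        """Animate a visually faded overlay at a reduced frame rate."""
        return frames[::keep_every]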
In FIG. 3, both the first animation 124 and the second animation 126 are visually faded. This may occur when the user of computing device 100 has not indicated an interest in either of the remote users. For example, the user may have indicated an interest in viewing the media content 122, rather than an interest in the animations of the remote users. When the user later indicates interest in one animation or the other, the animation in which the user expresses interest may be visually emphasized by computing device 100.
The user may indicate interest or disinterest in a particular animation or other portion of the display 102 in various ways. For example, the camera 104 and/or the eye tracking device 106 may be configured to collect data related to the eye movements of the user. Based on this data, the computing device 100 may calculate which portion (if any) of the display 102 the user is viewing.
For example, in fig. 2, based on input from the eye tracking device 106, the computing device 100 may have determined that the user is focusing on (or viewing) the second animation 126. Thus, the computing device 100 may visually emphasize the second animation 126 and visually deemphasize the first animation 124.
As another example, in fig. 3, based on input from the eye tracking device 106, the computing device may have determined that the user is focusing on the media content 122, and/or is not focusing on the first animation 124 or the second animation 126. As such, the computing device 100 may visually fade both the first animation 124 and the second animation 126 to facilitate less distracting viewing of the media content 122.
Although not shown in fig. 1-3, similar to computing device 100, first remote computing device 112 and second remote computing device 114 may simultaneously display media content 122 and overlays of animations to other remote users. For example, the first remote computing device 112 may superimpose an animation of a user (not shown) of the computing device 100 and an animation of the second remote user 120 on the media content 122. Similarly, the second remote computing device 114 may overlay an animation of the user (not shown) of the computing device 100 and the first remote user 118 on the media content 122. Additionally, although three computing devices are shown, it should be understood that any number of computing devices configured with applicable portions of the present disclosure may participate in a media content simultaneous viewing session.
Although the animations shown in the figures depict the entire body of the remote user, this is not meant to be limiting. In various embodiments, less than the entire body of the remote user may be rendered. For example, in some embodiments, a portion of the remote user may be depicted, such as from the torso up (e.g., a bust of the remote user). In some cases, the animation may be rendered adjacent to the bottom of the display, so that the animation of the remote user appears to "pop up" from the bottom of the display. Other portions of the remote user (e.g., only the head, from the chest up, from the knees up, one half of the remote user, etc.) may also be animated.
In some embodiments, computing device 100 may be configured to crop captured visual data and/or the resulting animation of a remote user. For example, the captured visual data of the remote user may include the remote user's entire body and background. In various embodiments, the computing device 100 may be configured to automatically crop out unwanted portions, such as the remote user's legs and/or blank space in the background.
In various embodiments, the computing device 100 may be configured to dynamically and/or automatically crop the captured visual data of its own local user or of a remote user based on various criteria. For example, the computing device 100 may dynamically crop at least a portion of the visual data of the local user or of a remote user based on a determination that the region of the visual data in which the local or remote user is represented occupies less than a predetermined portion of the visual data. If the local or remote user moves around, for example closer to his or her camera, the user may become larger within the viewing area. In that case, the computing device 100 may dynamically reduce the cropping as needed. In this manner, the computing device 100 can ensure that the animation of a user (local or remote) is of an appropriate size and scale, both in the visual data it provides to remote computing devices and in the visual data it receives from them.
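A minimal sketch of such dynamic cropping, assuming the user's extent is available from the alpha mask of a captured frame, might look as follows (the fill threshold and margin are illustrative assumptions):

    import numpy as np

    def dynamic_crop(frame_rgba: np.ndarray, min_fill: float = 0.4,
                     margin: int = 10) -> np.ndarray:
        """Crop a captured frame down to the region occupied by the user when that
        region fills less than `min_fill` of the frame; otherwise leave it alone."""
        mask = frame_rgba[..., 3] > 0            # non-zero alpha marks the user
        if not mask.any():
            return frame_rgba                    # nothing recognizable to crop to
        if mask.mean() >= min_fill:
            return frame_rgba                    # user already fills enough of the frame
        ys, xs = np.nonzero(mask)
        h, w = mask.shape
        y0, y1 = max(ys.min() - margin, 0), min(ys.max() + margin, h)
        x0, x1 = max(xs.min() - margin, 0), min(xs.max() + margin, w)
        return frame_rgba[y0:y1, x0:x1]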
In various embodiments, in addition to rendering animations of remote users, computing device 100 may also render an animation of the local user of computing device 100. This may allow the local user to see what the remote users will see. It may also increase the sense of togetherness by placing the local user's animation in a "common area" with the remote users' animations. It may also facilitate decisions by the user regarding his or her privacy, as will be discussed further below.
In various embodiments, the media content simultaneous sharing session may be implemented using peer-to-peer and/or client-server software installed on each computing device. In various embodiments, a session may persist even if one or more users exit the media content simultaneous sharing session. For example, in FIG. 1, the first animation 124 on the computing device 100 may disappear if the first remote user 118 were to exit, but the second animation 126 may persist as long as the computing device 100 and the second remote computing device 114 maintain the media content simultaneous sharing session.
In various embodiments, a user may be able to join (or rejoin) an existing media content simultaneous sharing session. For example, in FIGS. 1 and 2, the second remote user 120 participates via a laptop computer. However, in FIG. 3, the second remote user 120 may have exited the media content simultaneous sharing session on the laptop and may have rejoined using the third remote computing device 128 (which is configured using applicable portions of the present disclosure).
In FIG. 3, the third remote computing device 128 is in the form of a gaming machine connected to a television 130. In this arrangement, the television 130 may function similarly to the display 102 of the computing device 100. The third remote computing device 128 may also be operatively coupled to a motion sensing device 132. In various embodiments, the motion-sensing device 132 may include a camera (not shown). In various embodiments, motion-sensing device 132 may include an eye tracking device (not shown).
In various embodiments, in addition to superimposing the animation, the computing device 100 may receive audio or other data from a remote computing device and present it to the user. For example, a remote computing device (e.g., 112, 114, 128) may be equipped with a microphone (not shown) to record the sound of a remote user (e.g., 118, 120). The remote computing device may digitize the received audio and transmit it to the computing device 100. The computing device 100 may audibly render the received audio data, such as in conjunction with animations (e.g., 124, 126).
When multiple users are sharing media content simultaneously, the users may wish to prevent audio from remote users from disrupting the audio portion of the media content. Thus, in various embodiments, a user may be able to disable (e.g., mute) audio from one or more remote users, even though animations of those remote users are still allowed to appear on the display 102. In various embodiments, the computing device 100 may be configured to superimpose textual representations of the speech of one or more remote users on the media content on the display 102. An example of this is seen in fig. 3, where a call bubble 140 has been superimposed over the media content 122 to display a textual representation of the comment made by the second remote user 120.
In various embodiments, the textual representation of the remote user's speech at the computing device 100 may be based on speech-to-text data received from the remote computing device. In various other embodiments, the textual representation of the remote user's speech may be based on audio data received by computing device 100 from the remote computing device. In the latter case, the computing device 100 may be configured to utilize speech-to-text software to convert the received audio to text.
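A minimal sketch of this text path is shown below; speech_to_text() here is an assumed placeholder for whatever speech recognition engine is used, not an API of any particular library, and the payload format is likewise an assumption for illustration.

    def caption_for_remote_user(payload: dict) -> str:
        """Return text to display in a call bubble (e.g., 140) for one remote user."""
        if "text" in payload:                  # remote device already produced text
            return payload["text"]
        if "audio" in payload:                 # convert received audio data locally
            return speech_to_text(payload["audio"])
        return ""

    def speech_to_text(audio_bytes: bytes) -> str:
        """Assumed placeholder for an engine-specific speech-to-text conversion."""
        raise NotImplementedError("engine specific")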
Media content may be consumed simultaneously by multiple users in various ways. In various embodiments, streaming video or other media content may be synchronized among multiple computing devices (e.g., 100, 112, 114, 128) such that all users see the same content at the same time. Media content may be distributed in various ways. In some embodiments, a first user may have the media content and may provide it to the other users. For example, a user of computing device 100 may have an account for streaming video (e.g., a subscription to an on-demand video stream) and may forward a copy of the stream to the remote computing devices (e.g., 112, 114, 128). In this case, the first user's computing device may insert a delay into its own playback of the video stream so that it does not play ahead of the playback of the video stream on the remote computing devices.
In other embodiments, the media content may be hosted centrally (e.g., at a content server), and each computing device may connect to the content server and stream the content from it separately. In this case, the computing devices may exchange synchronization signals to ensure that each user views the same content at the same time. In some embodiments, if the user pauses the playing of the media content on the computing device 100, the playing of the content on the other participating computing devices (e.g., remote computing devices 112, 114, 128) may also be paused.
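One possible shape for such synchronization signals is sketched below. The wire format, the peer list, and the player interface are assumptions made only for illustration.

    import json
    import socket
    import time

    def broadcast_sync(peers, position_s: float, paused: bool) -> None:
        """Send a small synchronization signal to every participating device.

        `peers` is a list of (host, port) tuples; the JSON wire format here is
        an illustrative assumption.
        """
        message = json.dumps({
            "position": position_s,   # current playback position in seconds
            "paused": paused,         # pausing locally pauses everyone
            "sent_at": time.time(),
        }).encode("utf-8")
        for host, port in peers:
            with socket.create_connection((host, port), timeout=1.0) as conn:
                conn.sendall(message)

    def apply_sync(player, message: dict, max_drift_s: float = 0.5) -> None:
        """Nudge the local (assumed) player toward the sender's position if drift is large."""
        if message["paused"]:
            player.pause()
        elif abs(player.position() - message["position"]) > max_drift_s:
            player.seek(message["position"])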
In various embodiments, privacy mechanisms may be employed to protect the privacy of the user. For example, a user of the computing device 100 may instruct the computing device 100 to provide a remote computing device (e.g., 112, 114) with only visual data sufficient for the remote computing device to render a silhouette or shadow of the user's animation. In some embodiments, the user may direct computing device 100 to not provide captured visual data at all. In some embodiments, the user may direct computing device 100 to capture visual data only during certain periods of time and/or to limit capture, or alter/distort the visual data, during other periods of time.
In some embodiments, the computing device 100 may employ one or more image processing filters so that the animation of the user rendered on a remote computing device is unrecognizable and/or not fully rendered. For example, visual data captured by the camera 104 of the computing device 100 may be passed through one or more image processing filters to blur, pixelate, or otherwise alter the data. In some embodiments, the user may direct computing device 100 to remove some frames from the captured visual data so that the resulting animation has a reduced frame rate. Additionally or alternatively, the computing device 100 may reduce the sampling rate of the camera 104 to capture coarser visual data.
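By way of illustration, pixelation and frame removal might be realized as simple array operations such as the following sketch (the block size and decimation factor are assumptions):

    import numpy as np

    def pixelate(frame_rgb: np.ndarray, block: int = 16) -> np.ndarray:
        """Coarsen a captured frame so the resulting animation is not fully recognizable."""
        h, w = frame_rgb.shape[:2]
        small = frame_rgb[::block, ::block]              # subsample the frame
        return np.repeat(np.repeat(small, block, axis=0), block, axis=1)[:h, :w]

    def drop_frames(frames, keep_every: int = 3):
        """Remove frames so the remote rendering runs at a reduced frame rate."""
        return frames[::keep_every]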
In some embodiments, the computing device 100 may be configured to protect the privacy of a remote user, for example, in response to instructions received from a remote computing device (e.g., 112, 114). For example, the computing device 100 may be configured to alter (e.g., by passing through an image processing filter) visual data representing the remote user that would otherwise be fully rendered, so that the resulting animation of the remote user is unrecognizable or otherwise not fully rendered.
In various embodiments, a user may assign a trusted status to one or more remote users. Thereafter, those remote users may be considered the user's "contacts". When one of the user's contacts joins or rejoins a media content simultaneous viewing session, the animation of that contact may appear, become animated, or otherwise change in appearance. When one of the user's contacts leaves the media content simultaneous viewing session, the animation of that contact may disappear, fade, or otherwise change in appearance.
In some embodiments, computing device 100 may conditionally alter the visual data transmitted to a remote computing device depending on whether the remote user of the target remote computing device has been assigned a trusted status. For example, computing device 100 may send "complete" or unaltered visual data to the user's contacts, or to particular contacts that are assigned a higher trust status than other contacts (e.g., "close friends"). Computing device 100 may send incomplete visual data (e.g., visual data with frames removed, or visual data captured at a reduced sampling rate) or altered visual data (e.g., blurred, pixelated, etc.) to contacts deemed more distant (e.g., acquaintances). In some embodiments, computing device 100 may send little or no visual data, or severely altered visual data, to a remote computing device of a user who has not been assigned a trusted status.
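A minimal sketch of such conditional alteration is shown below. The trust tiers and the specific transforms are assumptions, and pixelate() and drop_frames() refer to the illustrative helpers sketched above.

    from enum import Enum

    class Trust(Enum):
        CLOSE_FRIEND = 3      # full, unaltered visual data
        CONTACT = 2           # reduced frame rate and coarsened data
        UNTRUSTED = 1         # little or no visual data

    def prepare_visual_data(frames, trust: Trust):
        """Conditionally alter captured visual data before sending it to one peer."""
        if trust is Trust.CLOSE_FRIEND:
            return frames
        if trust is Trust.CONTACT:
            return [pixelate(f) for f in drop_frames(frames)]
        return []             # untrusted: send nothing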
In various embodiments, the computing device 100 may require that a handshaking procedure be performed with a remote computing device (e.g., 112, 114) before the computing device 100 will overlay a remote user's animation on media content or provide the remote computing device with captured visual data of the user. For example, a user of the computing device 100 may be required to click on or otherwise select an icon or other graphic representing a remote user before the computing device 100 will overlay an animation of that remote user on media content or provide visual data to the remote computing device. In some embodiments, the computing device 100 may overlay animations of the user's "closest" contacts (e.g., contacts that have been assigned a relatively high trust level by the user), or provide those contacts with captured visual data of the user, without requiring any handshake.
In some embodiments, image processing may be applied to visual data for purposes other than privacy. For example, in some embodiments, background subtraction may be implemented by computing device 100 to subtract the background from the captured visual data, leaving only the user. When a remote computing device overlays the user's animation using such visual data, the user may be rendered alone, without any background.
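One simple form of background subtraction, assuming a reference frame captured while the user is out of the shot, is sketched below; the difference threshold is an assumption.

    import numpy as np

    def subtract_background(frame_rgb: np.ndarray, background_rgb: np.ndarray,
                            threshold: int = 30) -> np.ndarray:
        """Produce an RGBA frame containing only the user, with the background removed.

        Pixels that differ little from the reference background frame become transparent.
        """
        diff = np.abs(frame_rgb.astype(np.int16) - background_rgb.astype(np.int16))
        foreground = diff.max(axis=-1) > threshold       # per-pixel change test
        alpha = np.where(foreground, 255, 0).astype(np.uint8)
        return np.dstack([frame_rgb, alpha])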
In various embodiments, superimposed animations, such as the first animation 124 and the second animation 126, may be rendered in 2D and/or 3D. In embodiments where the animations are rendered in 3D, the computing device 100 may be configured to apply parallax correction to the superimposition of the animation of the remote user. In some 3D embodiments, the captured visual data on which the animations are based may be transmitted between computing devices as a point cloud, a list of vertices, or a list of triangles.
In embodiments where the computing device 100 is configured to render animations in 3D, the computing device 100 may do so in a variety of ways. For example, in some embodiments, the computing device 100 may render 3D geometry on a 2D screen. In other 3D embodiments, the computing device 100 may render 3D geometry in 3D on a stereoscopic display, and the user may wear 3D glasses.
The superimposition of the animation of the remote user may be rendered on the display 102 in various ways. In various embodiments, the overlay of the remote user's animation may be rendered in a transparent window that itself is overlaid on all or a portion of the other content displayed on the display 102.
Fig. 4 depicts an example method 400 that may be implemented on a computing device (e.g., computing device 100, first remote computing device 112, second remote computing device 114, and/or third remote computing device 128). At block 402, captured visual data of a remote user of a remote computing device may be received, for example, by computing device 100 from the remote computing device. At block 404, media content (e.g., videos, shared web browsing sessions, slideshows, etc.) may be presented, for example, by computing device 100, while the media content is being presented on a remote computing device.
At block 406, it may be determined, for example, by computing device 100, whether a user of the computing device is interested or not interested in the remote user. For example, computing device 100 may receive data from an eye tracking device (e.g., 106), which computing device 100 may use to determine where the user is looking. If the animation of the remote user is at that location, or within a particular distance of that location, it may be determined, for example by computing device 100, that the user is interested in the remote user.
At block 408, based on the received visual data, an animation of the remote user may be superimposed on the media content (e.g., 122), for example, by the computing device 100. At block 410, the animation may be visually emphasized or faded, for example, by computing device 100, based on the determination of the user's interest. For example, if the user is interested in the remote user, the animation of the remote user may be fully rendered. If the user is not interested in the remote user, the animation of the remote user may not be fully rendered, e.g., it may be rendered as a shadow, at a lower frame rate, pixelated, etc. After block 410, if the media simultaneous sharing session is still in progress, the method 400 may return to block 402. If the session has terminated, the method 400 may proceed to the end block.
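For illustration only, the flow of method 400 might be organized as a loop along the following lines. Every object here (session, display, eye_tracker) is an assumed stand-in for device-specific components, and overlay_animation() and fade_to_silhouette() refer to the earlier sketches.

    def media_sharing_loop(session, display, eye_tracker):
        """Illustrative control loop corresponding to blocks 402-410 of method 400."""
        while session.active():
            remote_frames = session.receive_visual_data()      # block 402
            media_frame = session.next_media_frame()           # block 404
            focused_user = eye_tracker.user_of_interest()      # block 406

            for user_id, frame in remote_frames.items():
                if user_id != focused_user:                    # block 410: fade
                    frame = fade_to_silhouette(frame)
                x, y = session.anchor(user_id)
                media_frame = overlay_animation(media_frame, frame, x, y)  # block 408

            display.show(media_frame)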
Fig. 5 depicts an example method 500 that may be implemented on a computing device (e.g., computing device 100, first remote computing device 112, second remote computing device 114, and/or third remote computing device 128). At block 502, visual data may be captured, for example, by the camera 104. At block 504, it may be determined, for example, by computing device 100, whether the one or more remote users with whom the user of computing device 100 wishes to simultaneously consume media content are contained in a list of remote users (e.g., contacts) having a trusted status. If the answer is no, then at block 506 the captured visual data of the user may be altered, for example, by computing device 100, to maintain the privacy of the user. For example, the visual data may be passed through one or more image processing filters (e.g., blur filters, pixelation filters) or otherwise altered so that the resulting animation is unrecognizable, distorted, and/or not fully rendered on the remote computing device. At block 508, the altered visual data may be transmitted, for example, by computing device 100, to the remote computing device (e.g., 112, 114, 128). Following block 508, if the media simultaneous sharing session is still in progress, the method 500 may return to block 502. If the session has terminated, the method 500 may proceed to the end block.
If the answer at block 504 is yes, then at block 510 it may be determined, for example, by computing device 100, whether the user requires privacy. For example, the computing device 100 may determine whether a privacy flag is set, or whether the current time falls within a period during which the user has indicated that privacy is desired. If the user requires privacy, the method 500 may proceed to block 506 and the visual data may be altered prior to transmission to protect the user's privacy. If the answer at block 510 is no, then at block 508 the unaltered visual data may be transmitted, for example, by computing device 100, to one or more remote computing devices (e.g., 112, 114, 128).
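Similarly, the decisions of method 500 might be sketched as follows; camera, session, and contacts are assumed stand-ins, and pixelate() refers to the earlier sketch.

    def capture_and_transmit(camera, session, contacts, privacy_requested: bool):
        """Illustrative flow corresponding to blocks 502-510 of method 500."""
        frames = camera.capture()                                          # block 502
        all_trusted = all(u in contacts for u in session.remote_users())   # block 504
        if not all_trusted or privacy_requested:                           # block 510
            frames = [pixelate(f) for f in frames]                         # block 506
        session.send(frames)                                               # block 508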
Fig. 6 illustrates an example computing device 600 in accordance with various embodiments. Computing device 600 may include a number of components, such as a processor 604 and at least one communication chip 606. In various embodiments, the processor 604 may include one or more processor cores. In various embodiments, the at least one communication chip 606 may also be physically and electrically coupled to the processor 604. In further implementations, the communication chip 606 may be part of the processor 604. In various embodiments, computing device 600 may include a printed circuit board ("PCB") 602, on which the processor 604 and the communication chip 606 may be disposed. In alternative embodiments, the various components may be coupled without the use of the PCB 602.
Depending on its applications, computing device 600 may include other components that may or may not be physically and electrically coupled to PCB 602. These other components include, but are not limited to: volatile memory (e.g., dynamic random access memory 608, also referred to as "DRAM"), non-volatile memory (e.g., read only memory 610, also referred to as "ROM"), flash memory 612, graphics processor 614, digital signal processor (not shown), crypto processor (not shown), input/output ("I/O") controller 616, antenna 618, display (not shown), touchscreen display 620, touchscreen controller 622, a battery 624, an audio codec (not shown), a video codec (not shown), a global positioning system ("GPS") device 628, a compass 630, an accelerometer (not shown), a gyroscope (not shown), a speaker 632, a camera 634, and a mass storage device (e.g., a hard drive, a solid state drive, a compact disc ("CD"), a digital versatile disc ("DVD")) (not shown), and so forth. In various embodiments, processor 604 may be integrated with other components on the same die to form a system on a chip ("SoC").
In various embodiments, the volatile memory (e.g., DRAM 608), non-volatile memory (e.g., ROM 610), flash memory 612, and mass storage devices may include program instructions configured to enable the computing device 600, in response to execution by the processor 604, to implement all or selected aspects of the methods 400 and/or 500. For example, one or more of the memory components, such as volatile memory (e.g., DRAM 608), non-volatile memory (e.g., ROM 610), flash memory 612, and mass storage devices, may include temporary and/or permanent copies of instructions (shown as control module 636 in fig. 6) configured to enable computing device 600 to implement all or selected aspects of the disclosed techniques, e.g., method 400 and/or method 500.
The communication chip 606 may enable wired and/or wireless communications for the transfer of data to and from the computing device 600. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip 606 may implement any of a number of wireless standards or protocols, including but not limited to: IEEE 802.11 ("Wi-Fi"), IEEE 802.16 ("WiMAX"), IEEE 802.20, long term evolution ("LTE"), general packet radio service ("GPRS"), evolution-data optimized ("Ev-DO"), evolved high speed packet access ("HSPA+"), evolved high speed downlink packet access ("HSDPA+"), evolved high speed uplink packet access ("HSUPA+"), global system for mobile communications ("GSM"), enhanced data rates for GSM evolution ("EDGE"), code division multiple access ("CDMA"), time division multiple access ("TDMA"), digital enhanced cordless telecommunications ("DECT"), Bluetooth, derivatives thereof, as well as any other wireless protocols designated as 3G, 4G, 5G, and beyond. The computing device 600 may include a plurality of communication chips 606. For example, a first communication chip 606 may be dedicated to shorter-range wireless communications such as Wi-Fi and Bluetooth, and a second communication chip 606 may be dedicated to longer-range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.
In various embodiments, computing device 600 may be a laptop computer, a netbook computer, a notebook computer, an ultrabook, a smartphone, a computing tablet, a personal digital assistant ("PDA"), an ultra-portable PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit (e.g., a gaming console), a digital camera, a portable music player, or a digital video recorder. In further implementations, the computing device 600 may be any other electronic device that processes data.
Embodiments of apparatus, computer-implemented methods, systems, devices, and computer-readable media are described herein for facilitating simultaneous consumption of media content by a first user of a first computing device and a second user of a second computing device. In various embodiments, the facilitating may include superimposing an animation of the second user on the media content presented on the first computing device based on captured visual data of the second user received from the second computing device. In various embodiments, the animation may be visually emphasized upon a determination that the first user is interested in the second user. In various embodiments, the determination of the first user's interest may be based on data received from an eye tracking input device associated with the first computing device.
In various embodiments, upon a determination that the first user is interested in the second user or is not interested in a third user of a third computing device, a superimposition of an animation of the third user on the media content presented on the first computing device may be visually faded. In various embodiments, the first computing device may render the superimposition of the animation of the third user as a shadow to visually fade the superimposition of the animation of the third user.
In various embodiments, the superimposition of the animation of the second user may comprise superimposing the animation of the second user adjacent to a bottom of a display of the first computing device. In various embodiments, parallax correction may be applied to the superimposition of the animation of the second user.
In various embodiments, a textual representation of the second user's speech may be superimposed on the media content. In various embodiments, the textual representation of the second user's speech may be based on speech-to-text data received from the second computing device or audio data received from the second computing device. In various embodiments, the overlay of the animation of the second user may be rendered in a transparent window.
In various embodiments, the captured visual data of the first user may be conditionally altered based at least in part on whether the second user has been assigned a trusted status by the first user. In various embodiments, the captured visual data of the first user may be transmitted to the second computing device. In various embodiments, the captured visual data may be configured to cause the second computing device to superimpose an animation of the first user on the media content displayed on the second computing device.
In various embodiments, the conditional alteration may include image processing of the captured visual data of the first user, the image processing including blurring, pixelation, background subtraction, or frame removal. In various embodiments, the captured visual data of the first user may be altered in response to a determination that the second user has not been assigned a trusted status by the first user.
In various embodiments, at least some of the captured visual data of the first or second user may be automatically cropped. In various embodiments, at least some of the captured visual data of the first or second user may be dynamically cropped based on a determination that a region of the visual data in which the first or second user is represented occupies less than a predetermined portion of the captured visual data of the first or second user.
Although certain embodiments have been illustrated and described herein for purposes of description, this application is intended to cover any adaptations or variations of the embodiments discussed herein. It is therefore manifestly intended that the embodiments described herein be limited only by the claims.
Where the disclosure recites "a" or "a first" element or the equivalent thereof, the disclosure includes one or more such elements, neither requiring nor excluding two or more such elements. Further, ordinal indicators (e.g., first, second or third) for identified elements are used to distinguish between the elements, and do not indicate or imply a required or limited number of such elements, nor do they indicate a particular position or order of particular such elements unless otherwise specifically stated.

Claims (28)

1. A computer-implemented method, comprising:
receiving, by a first computing device used by a first user, an indication of a visual representation of a second user from a second computing device;
accessing, on the first computing device, media content to be displayed on the first computing device;
augmenting the media content to include the visual representation of the second user;
presenting the added media content on a display of the first computing device;
capturing data from a sensing device of the first computing device;
determining, based on data from the sensing device, whether the first user of the first computing device is looking at: (i) the media content or (ii) the visual representation of the second user, wherein determining whether the first user is looking at the media content comprises determining which portion of the media content the first user is looking at.
2. The computer-implemented method of claim 1, further comprising:
conditionally altering, by the first computing device, the captured visual data of the first user based at least in part on whether the first user has assigned a trusted status to the second user; and
transmitting, by the first computing device, the captured visual data to the second computing device, the captured visual data being configured to cause the second computing device to superimpose an animation of the first user on the media content displayed on the second computing device.
3. The computer-implemented method of claim 2, wherein conditionally altering the captured visual data of the first user comprises: performing image processing on the captured visual data, the image processing including blurring, pixelation, background subtraction, or frame removal.
4. The computer-implemented method of claim 2, wherein conditionally altering the captured visual data of the first user comprises: altering the visual data in response to a determination that the second user has not been assigned a trusted status by the first user.
5. The computer-implemented method of claim 2, further comprising: automatically cropping, by the first computing device, at least some of the captured visual data of the first or second user.
6. The computer-implemented method of claim 2, wherein determining the first user's interest comprises:
receiving eye tracking data from an eye tracker input device associated with the first computing device; and
determining a location at which the first user is looking based on the eye tracking data.
7. At least one non-transitory computer-readable medium comprising instructions that, in response to execution by a first computing device of a first user, cause the first computing device to:
receive, from a second computing device, an indication of a visual representation of a second user;
access, on the first computing device, media content to be displayed on the first computing device;
augment the media content to include the visual representation of the second user;
present the augmented media content on a display of the first computing device;
capture data from a sensing device of the first computing device; and
determine, based on data from the sensing device, whether the first user of the first computing device is looking at: (i) the media content or (ii) the visual representation of the second user, wherein determining whether the first user is looking at the media content comprises determining which portion of the media content the first user is looking at.
8. The at least one non-transitory computer-readable medium of claim 7, further comprising instructions that, in response to execution by the first computing device, cause the first computing device to:
visually fade an animation of a third user of a third computing device on the media content presented on the first computing device upon a determination that the first user is interested in the second user or is not interested in the third user.
9. The at least one non-transitory computer-readable medium of claim 8, further comprising instructions that, in response to execution by the first computing device, cause the first computing device to:
render the overlay of the animation of the third user in shadow to visually fade the overlay of the animation of the third user.
10. The at least one non-transitory computer-readable medium of claim 7, wherein the overlay of the animation of the second user comprises an overlay of the animation of the second user adjacent a bottom side of a display of the first computing device.
11. The at least one non-transitory computer-readable medium of claim 7, further comprising instructions that, in response to execution by the first computing device, cause the first computing device to:
employ parallax correction for the superimposition of the animation of the second user.
12. The at least one non-transitory computer-readable medium of claim 7, further comprising instructions that, in response to execution by the first computing device, cause the first computing device to:
superimpose a textual representation of the second user's speech on the media content presented on the first computing device.
13. The at least one non-transitory computer-readable medium of claim 12, wherein the text representation of the second user's speech is based on speech-to-text data received from the second computing device.
14. The at least one non-transitory computer-readable medium of claim 12, wherein the textual representation of the second user's speech is based on audio data received from the second computing device.
15. The at least one non-transitory computer-readable medium of claim 7, further comprising instructions that, in response to execution by the first computing device, cause the first computing device to:
conditionally alter captured visual data of the first user based at least in part on whether the first user has assigned a trusted status to the second user; and
transmit the captured visual data of the first user to the second computing device to cause the second computing device to superimpose an animation of the first user on the media content displayed on the second computing device.
16. The at least one non-transitory computer-readable medium of claim 15, wherein the conditional altering comprises image processing of the captured visual data of the first user, the image processing including blurring, pixelation, background subtraction, or frame removal.
17. The at least one non-transitory computer-readable medium of claim 15, wherein altering the captured visual data of the first user comprises: changing the captured visual data of the first user in response to a determination that the second user has not been assigned a trusted status by the first user.
18. The at least one non-transitory computer-readable medium of claim 15, further comprising instructions that, in response to execution by the first computing device, cause the first computing device to:
crop at least some of the captured visual data of the first or second user.
19. The at least one non-transitory computer-readable medium of claim 18, further comprising instructions that, in response to execution by the first computing device, cause the first computing device to:
crop at least some of the captured visual data of the first or second user based on a determination that a region of the visual data in which the first or second user is represented occupies less than a predetermined portion of the captured visual data of the first or second user.
20. The at least one non-transitory computer-readable medium of claim 7, wherein determining the first user's interest comprises:
receiving eye tracking data from an eye tracker input device associated with the first computing device; and
determining a location at which the first user is looking based on the eye tracking data.
21. The at least one non-transitory computer-readable medium of claim 7, wherein the data from the sensing device comprises data from an accelerometer of the first computing device.
22. The at least one non-transitory computer-readable medium of claim 7, wherein augmenting the media content to include the visual representation of the second user comprises augmenting the media content to include a cartoon rendering of the second user.
23. The at least one non-transitory computer-readable medium of claim 7, wherein the instructions further cause the first computing device to augment the media content based on a determination of which portion of the media content the first user is looking at.
24. A system that facilitates simultaneous consumption of media content by a plurality of users, comprising:
one or more processors;
a memory operatively coupled to the one or more processors;
a display; and
a control module contained in the memory and operated by the one or more processors to facilitate simultaneous consumption of media content by a first user of the system and a second user of a remote computing device, wherein the facilitating comprises: augmenting the media content presented on the display of the system to include a visual representation of the second user received from the remote computing device, and determining, based on data from a sensing device, whether the first user of the system is looking at: (i) the media content or (ii) the visual representation of the second user, wherein determining whether the first user is looking at the media content comprises determining which portion of the media content the first user is looking at.
25. The system of claim 24, wherein the remote computing device is a first remote computing device and the control module is further configured to visually fade an animation of a third user of a second remote computing device on the media content presented on the display upon a determination that the first user is interested in the second user or is not interested in the third user.
26. The system of claim 25, wherein the control module is further configured to render the animation of the third user in shadow.
27. The system of claim 24, wherein the overlay of the animation of the second user comprises an overlay of the animation of the second user adjacent a bottom side of the display.
28. The system of claim 24, wherein the determination of the first user's interest is based on data received from an eye tracking device.
CN201710450507.0A 2012-06-25 2013-05-20 Facilitating simultaneous consumption of media content by multiple users using superimposed animations Active CN107256136B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US13/532,612 2012-06-25
US13/532,612 US9456244B2 (en) 2012-06-25 2012-06-25 Facilitation of concurrent consumption of media content by multiple users using superimposed animation
CN201380027047.0A CN104335242B (en) 2012-06-25 2013-05-20 Facilitating simultaneous consumption of media content by multiple users using superimposed animations

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201380027047.0A Division CN104335242B (en) 2012-06-25 2013-05-20 Facilitating simultaneous consumption of media content by multiple users using superimposed animations

Publications (2)

Publication Number Publication Date
CN107256136A CN107256136A (en) 2017-10-17
CN107256136B true CN107256136B (en) 2020-08-21

Family

ID=49775155

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201380027047.0A CN104335242B (en) 2012-06-25 2013-05-20 Facilitating simultaneous consumption of media content by multiple users using superimposed animations
CN201710450507.0A Active CN107256136B (en) 2012-06-25 2013-05-20 Facilitating simultaneous consumption of media content by multiple users using superimposed animations

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201380027047.0A CN104335242B (en) 2012-06-25 2013-05-20 Facilitating simultaneous consumption of media content by multiple users using superimposed animations

Country Status (4)

Country Link
US (3) US9456244B2 (en)
JP (1) JP6022043B2 (en)
CN (2) CN104335242B (en)
WO (1) WO2014003915A1 (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9361295B1 (en) 2006-11-16 2016-06-07 Christopher C. Andrews Apparatus, method and graphical user interface for providing a sound link for combining, publishing and accessing websites and audio files on the internet
US10296561B2 (en) 2006-11-16 2019-05-21 James Andrews Apparatus, method and graphical user interface for providing a sound link for combining, publishing and accessing websites and audio files on the internet
US20120253493A1 (en) * 2011-04-04 2012-10-04 Andrews Christopher C Automatic audio recording and publishing system
CN103369289B (en) * 2012-03-29 2016-05-04 深圳市腾讯计算机系统有限公司 A kind of communication means of video simulation image and device
US9456244B2 (en) * 2012-06-25 2016-09-27 Intel Corporation Facilitation of concurrent consumption of media content by multiple users using superimposed animation
US8867841B2 (en) * 2012-08-08 2014-10-21 Google Inc. Intelligent cropping of images based on multiple interacting variables
US8996616B2 (en) * 2012-08-29 2015-03-31 Google Inc. Cross-linking from composite images to the full-size version
US10825058B1 (en) * 2015-10-02 2020-11-03 Massachusetts Mutual Life Insurance Company Systems and methods for presenting and modifying interactive content
US10871821B1 (en) * 2015-10-02 2020-12-22 Massachusetts Mutual Life Insurance Company Systems and methods for presenting and modifying interactive content
US10140987B2 (en) * 2016-09-16 2018-11-27 International Business Machines Corporation Aerial drone companion device and a method of operating an aerial drone companion device
US10169850B1 (en) * 2017-10-05 2019-01-01 International Business Machines Corporation Filtering of real-time visual data transmitted to a remote recipient
CN108551587B (en) * 2018-04-23 2020-09-04 刘国华 Method, device, computer equipment and medium for automatically collecting data of television
CN109327608B (en) * 2018-09-12 2021-01-22 广州酷狗计算机科技有限公司 Song sharing method, terminal, server and system

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4503945B2 (en) * 2003-07-04 2010-07-14 独立行政法人科学技術振興機構 Remote observation equipment

Family Cites Families (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7297856B2 (en) * 1996-07-10 2007-11-20 Sitrick David H System and methodology for coordinating musical communication and display
US6742083B1 (en) * 1999-12-14 2004-05-25 Genesis Microchip Inc. Method and apparatus for multi-part processing of program code by a single processor
US6559866B2 (en) * 2001-05-23 2003-05-06 Digeo, Inc. System and method for providing foreign language support for a remote control device
JP2003259336A (en) * 2002-03-04 2003-09-12 Sony Corp Data generating method, data generating apparatus, data transmission method, video program reproducing apparatus, video program reproducing method, and recording medium
JP2004112511A (en) 2002-09-19 2004-04-08 Fuji Xerox Co Ltd Display controller and method therefor
JP2006197217A (en) 2005-01-13 2006-07-27 Matsushita Electric Ind Co Ltd Videophone and image data transmission method
JP4848727B2 (en) 2005-10-03 2011-12-28 日本電気株式会社 Video distribution system, video distribution method, and video synchronization sharing apparatus
WO2007130693A2 (en) 2006-05-07 2007-11-15 Sony Computer Entertainment Inc. Methods and systems for processing an interchange of real time effects during video communication
JP4884918B2 (en) * 2006-10-23 2012-02-29 株式会社野村総合研究所 Virtual space providing server, virtual space providing system, and computer program
FR2910770A1 (en) * 2006-12-22 2008-06-27 France Telecom Videoconference device for e.g. TV, has light source illuminating eyes of local user viewing screen during communication with remote user, such that image sensor captures local user's image with reflection of light source on eyes
CN101500125B (en) * 2008-02-03 2011-03-09 突触计算机系统(上海)有限公司 Method and apparatus for providing user interaction during displaying video on customer terminal
FR2928805B1 (en) * 2008-03-14 2012-06-01 Alcatel Lucent Method for implementing video enriched on mobile terminals
US9003315B2 (en) * 2008-04-01 2015-04-07 Litl Llc System and method for streamlining user interaction with electronic content
US20100115426A1 (en) 2008-11-05 2010-05-06 Yahoo! Inc. Avatar environments
US20100191728A1 (en) 2009-01-23 2010-07-29 James Francis Reilly Method, System Computer Program, and Apparatus for Augmenting Media Based on Proximity Detection
JP2010200150A (en) * 2009-02-26 2010-09-09 Toshiba Corp Terminal, server, conference system, conference method, and conference program
US20100306671A1 (en) 2009-05-29 2010-12-02 Microsoft Corporation Avatar Integrated Shared Media Selection
US20110202603A1 (en) * 2010-02-12 2011-08-18 Nokia Corporation Method and apparatus for providing object based media mixing
US20110234481A1 (en) * 2010-03-26 2011-09-29 Sagi Katz Enhancing presentations using depth sensing cameras
US9582166B2 (en) * 2010-05-16 2017-02-28 Nokia Technologies Oy Method and apparatus for rendering user interface for location-based service having main view portion and preview portion
US8913056B2 (en) 2010-08-04 2014-12-16 Apple Inc. Three dimensional user interface effects on a display by using properties of motion
US9531803B2 (en) * 2010-11-01 2016-12-27 Google Inc. Content sharing interface for sharing content in social networks
US20120159527A1 (en) * 2010-12-16 2012-06-21 Microsoft Corporation Simulated group interaction with multimedia content
US9197848B2 (en) * 2012-06-25 2015-11-24 Intel Corporation Video conferencing transitions among a plurality of devices
US9456244B2 (en) 2012-06-25 2016-09-27 Intel Corporation Facilitation of concurrent consumption of media content by multiple users using superimposed animation
US10034049B1 (en) * 2012-07-18 2018-07-24 Google Llc Audience attendance monitoring through facial recognition

Also Published As

Publication number Publication date
CN104335242A (en) 2015-02-04
JP6022043B2 (en) 2016-11-09
JP2015523001A (en) 2015-08-06
CN107256136A (en) 2017-10-17
US20170046114A1 (en) 2017-02-16
US9456244B2 (en) 2016-09-27
US10048924B2 (en) 2018-08-14
WO2014003915A1 (en) 2014-01-03
US20130346075A1 (en) 2013-12-26
CN104335242B (en) 2017-07-14
US10956113B2 (en) 2021-03-23
US20190042178A1 (en) 2019-02-07

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant