US20200228864A1 - Generating highlight videos in an online game from user expressions - Google Patents
- Publication number
- US20200228864A1 (application US 16/833,284)
- Authority
- US
- United States
- Prior art keywords
- online game
- user
- time
- video
- gameplay
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- H04N21/44218—Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
- A63F13/213—Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
- A63F13/35—Details of game servers
- A63F13/65—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
- A63F13/86—Watching games played by other players
- G06K9/00302—
- G06K9/00711—
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V40/174—Facial expression recognition
- H04L67/131—Protocols for games, networked simulations or virtual reality
- H04L67/22—
- H04L67/38—
- H04L67/535—Tracking the activity of the user
- H04N21/231—Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
- H04N21/23418—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
- H04N21/4334—Recording operations
- H04N21/44008—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
- H04N21/4781—Games
- H04N21/4788—Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
- H04N21/84—Generation or processing of descriptive data, e.g. content descriptors
- H04N21/8547—Content authoring involving timestamps for synchronizing content
- H04N21/8549—Creating video summaries, e.g. movie trailer
- A63F13/85—Providing additional services to players
- H04L67/01—Protocols
- H04L67/42—
Definitions
- the present disclosure relates to generating video edits.
- the video edits may be highlight videos of gameplay of an online game.
- the video edits may comprise one or more video clips of the online game.
- the video clips may be selected based on events of interest and user emotional manifestations in the online game.
- Highlight videos may be created from the important moments of gameplay in the online game. The highlight videos are an efficient way to review the important moments of gameplay in the online game.
- the video edits may be highlight videos of gameplay of an online game.
- the video edits may comprise one or more video clips of the online game.
- a video clip may include a portion of a recording of the gameplay of the online game and a video of a user participating in the online game.
- the online game may include the video of the user.
- the gameplay of the online game may include the video of the user.
- the video of the user may be a video chat between the user and another user participating in the online game.
- the online game may include game elements, a communication interface, and/or other components of the online game.
- the communication interface may allow the user to communicate with other users participating in the online game.
- the portion of the recording of the online game to include in the video clip may be determined.
- the user emotional manifestations may be evidence of one or more emotions being experienced by the user during the gameplay of the online game. For example, the user cheering with a smile may be an example of a user emotional manifestation.
- the events of interest may be in-game events that occur during the gameplay of the online game.
- the user scoring a point in the online game may be an example of an in-game event that occurs during the gameplay of the online game.
- a first video clip may include a first user emotional manifestation that occurred during a first point in time during the gameplay of the online game.
- a second video clip may include the first user emotional manifestation that occurred during the first point in time during the gameplay of the online game and/or a first event of interest that occurred during a second point in time during the gameplay of the online game.
- the user may be presented with access to the video edits, including the first video clip, the second video clip, and/or other video clips, and/or previews to the video edits.
- a system for generating video edits may include one or more servers, one or more client computing devices, one or more external resources, and/or other components.
- the one or more servers may be configured to communicate with the one or more client computing devices according to a client/server architecture. Users of the system may access the system via the client computing device(s).
- the server(s) may be configured to execute one or more computer program components.
- the computer program components may include one or more of a capture component, an identification component, a determination component, a generation component, a presentation component, and/or other components.
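- As a concrete illustration (not part of the patent's disclosure), the five components above form a pipeline from raw gameplay to finished highlight reels. The sketch below shows one way such a pipeline might be organized; the class and method names are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Recording:
    """Timestamped visual/audio capture of one gameplay session."""
    frames: list = field(default_factory=list)  # (timestamp_sec, frame) pairs
    audio: list = field(default_factory=list)   # (timestamp_sec, samples) pairs

@dataclass
class Moment:
    """A user emotional manifestation or an event of interest."""
    kind: str        # "manifestation" or "event_of_interest"
    time_sec: float  # point in time within the recording
    label: str       # e.g. "joy" or "match_won"

class HighlightPipeline:
    """Illustrative orchestration of the components named in the patent."""
    def __init__(self, capture, identification, determination, generation, presentation):
        self.capture = capture
        self.identification = identification
        self.determination = determination
        self.generation = generation
        self.presentation = presentation

    def run(self, session) -> None:
        recording = self.capture.record(session)                  # capture component
        moments = self.identification.identify(recording)         # identification component
        windows = self.determination.select(recording, moments)   # determination component
        edits = self.generation.generate(recording, windows)      # generation component
        self.presentation.present(edits)                          # presentation component
```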
- the capture component may be configured to record the online game, the gameplay of the online game, and/or other information. In some implementations, the capture component may be configured to record visual and/or audio content of the online game, the gameplay of the online game, and/or other information.
- the recording of the online game may include visual and/or audio content of the online game, including the game elements, the communication interface, and/or other components of the online game.
- the communication interface may include a live stream of the user participating in the online game, and/or other communication elements.
- the live stream of the user participating in the online game may include a real-time or near real-time video of the user participating in the online game, and/or other contents.
- the identification component may be configured to make identifications from the online game, the recording of the gameplay of the online game, and/or from other information relating to the online game. In some implementations, the identification component may be configured to make identifications of the user emotional manifestations, the events of interest, and/or other information of the online game. In some implementations, the identification component may be configured to make identifications of the user emotional manifestations by identifying evidence of one or more emotions being experienced by the user during the gameplay of the online game. In some implementations, the identification component may determine when the user emotional manifestations occurred during the gameplay of the online game and/or the recording of the gameplay of the online game.
- the identification component may be configured to make identifications of the events of interest by identifying the in-game events that occurred in the gameplay of the online game. In some implementations, the identification component may determine when the events of interest occurred during the gameplay of the online game and/or the recording of the gameplay of the online game.
- the determination component may be configured to determine a portion of the recording of the gameplay of the online game to include in one or more video clips of the video edits. In some implementations, the determination component may determine to include a portion of the recording including at least one of the user emotional manifestations in a video clip. For example, the determination component may determine to include a portion of the recording that includes a point in time the user emotional manifestations occurred in the video clip. In some implementations, the determination component may determine to include a portion of the recording including at least one of the events of interest in a video clip. In some implementations, the determination component may determine to include a portion of the recording including at least one of the user emotional manifestations and at least one of the events of interest in a video clip.
- the determination component may be configured to determine an association between the user emotional manifestations and the events of interest, and/or other information. For example, the determination component may be configured to determine if an event of interest caused a user emotional manifestation. If the event of interest caused the user emotional manifestation, the event of interest may be associated with the user emotional manifestation.
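- Concretely, both determinations can be sketched as simple time arithmetic: pad each identified point in time into a clip window, and associate an event of interest with a manifestation when the event shortly precedes it. The padding and causality window below are assumed parameters, not values from the patent.

```python
from typing import List, Optional, Tuple

def clip_window(time_sec: float, pad_before: float = 10.0,
                pad_after: float = 10.0) -> Tuple[float, float]:
    """Portion of the recording to include for a moment at time_sec."""
    return (max(0.0, time_sec - pad_before), time_sec + pad_after)

def associated_event(manifestation_sec: float, event_times: List[float],
                     causal_window: float = 30.0) -> Optional[float]:
    """Heuristic: the nearest event of interest that precedes the
    manifestation within causal_window seconds is its likely cause."""
    candidates = [t for t in event_times
                  if 0.0 <= manifestation_sec - t <= causal_window]
    return max(candidates) if candidates else None

# A manifestation at 300 s preceded by an event at 285 s associates,
# and the clip window is extended back to cover the causing event:
event = associated_event(300.0, [180.0, 285.0])   # -> 285.0
start, end = clip_window(300.0)                   # -> (290.0, 310.0)
if event is not None:
    start = min(start, clip_window(event)[0])     # -> 275.0
```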
- the generation component may be configured to generate video edits and/or other content.
- the video edits may include one or more of the video clips, and/or other information.
- the generation component may be configured to generate a first video edit that may include all the video clips, including the first video clip, the second video clip, the third video clip, and/or other video clips.
- the generation component may be configured to generate a second video edit that may include some of the video clips.
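- A minimal sketch of the two edit variants just described — one edit containing every clip and one containing a filtered subset — might look like this; the overlap-merging step and the filtering rule are assumptions:

```python
from typing import Callable, List, Tuple

Window = Tuple[float, float]  # (start_sec, end_sec) of a clip in the recording

def merge_overlapping(windows: List[Window]) -> List[Window]:
    """Merge overlapping clip windows so an edit contains no duplicate footage."""
    merged: List[Window] = []
    for start, end in sorted(windows):
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

def generate_edits(windows: List[Window],
                   keep: Callable[[Window], bool] = lambda w: True):
    first_edit = merge_overlapping(windows)                           # all video clips
    second_edit = merge_overlapping([w for w in windows if keep(w)])  # some of the clips
    return first_edit, second_edit
```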
- the presentation component may be configured to present the video edits and/or other information. In some implementations, the presentation component may be configured to present the video edits and/or other information to the client computing device and/or other devices. In some implementations, the presentation component may transmit information that facilitates the presentation of the video edits through the client computing device. In some implementations, the presentation component may provide the client computing device with access to the video edits. The presentation component may provide the client computing device with previews of the video edits.
- FIG. 1 illustrates a system for generating video edits, in accordance with one or more implementations.
- FIG. 2 illustrates a client computing device playing an online game at the start of a match in the online game, in accordance with one or more implementations.
- FIG. 3 illustrates the client computing device playing the online game at the end of a match in the online game, in accordance with one or more implementations.
- FIG. 4 illustrates the client computing device presented with video edits, in accordance with one or more implementations.
- FIG. 5 illustrates a method for generating video edits, in accordance with one or more implementations.
- FIG. 1 illustrates a system 100 for generating video edits.
- the video edits may be highlight videos of gameplay of an online game.
- the video edits may comprise one or more video clips of a recording of the online game.
- a video clip may include a portion of a recording of the gameplay of the online game and a video of a user participating in the online game.
- the online game may include the video of the user.
- the gameplay of the online game may include the video of the user.
- the video of the user may be a video chat between the user and another user participating in the online game. From user emotional manifestations, events of interest, and/or other information of the online game, the portion of the recording of the online game selected for the video clip may be determined.
- a user emotional manifestation may be evidence of one or more emotions experienced by the user during gameplay of the online game.
- An event of interest may be one or more in-game events that occurred during the gameplay of the online game.
- a first video clip may include a first user emotional manifestation that occurred during a first point in time during the gameplay of the online game.
- a second video clip may include the first user emotional manifestation that occurred during the first point in time during the gameplay of the online game and/or a first event of interest that occurred during a second point in time during the gameplay of the online game.
- the user may be presented with access to the video edits, including the first video clip, the second video clip, and/or other video clips, and/or previews to the video edits.
- system 100 may include one or more of one or more servers 102 , one or more client computing devices 104 , one or more external resources 120 , and/or other components.
- Server(s) 102 may be configured to communicate with client computing device(s) 104 according to a client/server architecture. The user of system 100 may access system 100 via client computing device(s) 104 .
- Server(s) 102 may include one or more physical processors 124 , one or more electronic storages 122 , and/or other components.
- the one or more physical processors 124 may be configured by machine-readable instructions 105 . Executing machine-readable instructions 105 may cause server(s) 102 to generate video edits.
- Machine-readable instructions 105 may include one or more computer program components.
- the computer program components may include one or more of a capture component 106 , an identification component 108 , a determination component 110 , a generation component 112 , a presentation component 114 , and/or other components.
- electronic storage(s) 122 and/or other components may be configured to store recordings of the online game, portions of the recording of the online game, the video edits, the video clips, and/or other information.
- the recording of the online game may include visual and/or audio content of the online game, and/or other information.
- client computing device(s) 104 may include one or more of a mobile computing device, a game console, a personal computer, and/or other computing platforms.
- the mobile computing device may include one or more of a smartphone, a smartwatch, a tablet, and/or other mobile computing devices.
- client computing device(s) 104 may carry one or more sensors.
- the one or more sensors may include one or more image sensors, one or more audio sensors, one or more infrared sensors, one or more depth sensors, and/or other sensors.
- the one or more sensors may be coupled to client computing device(s) 104 .
- an image sensor may be configured to generate output signals conveying visual information, and/or other information.
- the visual information may define visual content within a field of view of the image sensor and/or other content.
- the visual content may include depictions of real-world objects and/or surfaces.
- the visual content may be in the form of one or more of images, videos, and/or other visual information.
- the field of view of the image sensor may be a function of a position and an orientation of a client computing device.
- an image sensor may comprise one or more of a photosensor array (e.g., an array of photosites), a charge-coupled device sensor, an active pixel sensor, a complementary metal-oxide semiconductor sensor, an N-type metal-oxide-semiconductor sensor, and/or other devices.
- an audio sensor may be configured to generate output signals conveying audio information, and/or other information.
- the audio information may define audio from a user of the audio sensor (e.g., utterances of the user), audio around the user (such as ambient audio), and/or other information.
- an audio sensor may include one or more of a microphone, a micro-electro-mechanical microphone, and/or other devices.
- a depth sensor may be configured to generate output signals conveying depth information within a field of view of the depth sensor, and/or other information.
- the depth information may define depths of real-world objects and/or surfaces, and/or other information.
- a field of view of the depth sensor may be a function of a position and an orientation of a client computing device.
- the depth information may define a three-dimensional depth map of real-world objects and/or a face of a user.
- the depth sensor may comprise one or more ultrasound devices, infrared devices, light detection and ranging (LiDAR) devices, time-of-flight cameras, and/or other depth sensors and/or ranging devices.
- the infrared devices may include one or more infrared sensors. The infrared sensors may generate output signals conveying the depth information.
- client computing device(s) 104 may include one or more processors configured by machine-readable instructions, and/or other components.
- Machine-readable instructions of client computing device(s) 104 may include computer program components.
- the computer program components may be configured to enable the user associated with client computing device(s) 104 to interface with system 100 , the one or more sensors, and/or external resources 120 , and/or provide other functionality attributed herein to client computing device(s) 104 and/or server(s) 102 .
- capture component 106 may be configured to record the online game, the gameplay of the online game, and/or other information.
- the online game, the gameplay of the online game, and/or other information may be presented through a client interface of the client computing device(s) 104 .
- the online game, the gameplay of the online game, and/or other information may be viewed by the user through the client interface.
- the client interface may display visual content of the online game through a digital screen of the client computing device(s) 104 .
- the client interface of the client computing device(s) 104 may include visual and/or audio content of the online game, the gameplay of the online game, and/or other information.
- capture component 106 may be configured to record the visual and/or audio content of the client interface of the client computing device(s) 104 . In some implementations, capture component 106 may be configured to record the online game being played by the user through the client interface of client computing device(s) 104 . In some implementations, the recording of the online game includes the video of the user. The video of the user may be of the video chat between the user and the other user participating in the online game.
- the online game may include one or more game elements, a communication interface, and/or other components of the online game.
- capture component 106 may be configured to record visual and/or audio content of the online game, the gameplay of the online game, and/or other information.
- capture component 106 may be configured to record visual and/or audio content of the gameplay of the online game and/or other information.
- the visual and/or audio content of the gameplay of the online game may include the game elements, the communication interface, and/or other components of the online game.
- the visual and/or audio content of the gameplay of the online game may be stored within electronic storage 122, non-transitory storage media, and/or other storage media.
- capture component 106 may be configured to record the visual and/or audio content of the gameplay of the online game from a start to an end of the online game.
- the start of the online game may be when the user begins to interact with the online game.
- the start of the online game may be the moment when the user opens the online game.
- the end of the online game may be when the user closes the online game.
- the end of the online game may be the moment when the user leaves the online game.
- the start of the online game may be a beginning of a new phase of the online game.
- the start of the online game may be a beginning of a match in the online game, and/or a beginning of a new instance in the online game.
- the end of the online game may be the ending of the new phase of the online game.
- the end of the online game may be an ending of the match in the online game, and/or an ending of the new instance in the online game.
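- The bullets above describe a recording that starts when the user opens the game or a match begins and ends when the user leaves or the match ends. A minimal sketch of a session-scoped recorder driven by those start/end signals follows; the class shape and callback names are assumptions, not the patent's implementation.

```python
import time

class SessionRecorder:
    """Records timestamped frames between a start signal and an end signal."""
    def __init__(self):
        self.frames = []
        self._t0 = None

    def on_start(self):         # e.g. the game is opened, or a match begins
        self._t0 = time.monotonic()
        self.frames.clear()

    def on_frame(self, frame):  # called for each captured visual/audio frame
        if self._t0 is not None:
            self.frames.append((time.monotonic() - self._t0, frame))

    def on_end(self):           # e.g. the game is closed, or the match ends
        self._t0 = None
        return self.frames
```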
- the game elements may include virtual content that makes up the online game, and/or other information.
- the game elements may include one or more game interfaces, game environments, virtual objects, virtual entities, and/or other virtual content.
- the virtual entities may include one or more non-player entities, player entities, and/or other entities.
- a player entity may be associated with the user.
- the user may control the player entity through client computing device(s) 104 .
- the user may interact with the online game. In some implementations, the user may interact with and/or control components of the online game. In some implementations, the user may interact with and/or control the game elements, and/or other components of the online game. For example, the user may interact with and/or control the virtual content that makes up the online game, and/or other components of the online game. In some implementations, the user may interact with and/or control the game elements by inputting user inputs through client computing device(s) 104 and/or through other devices.
- the user input may comprise one or more of a gesture input received through the image sensor and/or other sensors of the given client computing device(s) 104, a voice input received through the audio sensors of the given client computing device(s) 104, a touch input received through a touch-enabled display of the given client computing device(s) 104, a controller input received through game controllers of the given client computing device(s) 104, and/or other user inputs.
- the communication interface may include communication between users participating in the online game, and/or other information.
- the communication interface may include communication between the user participating in the online game and another user participating in the online game.
- the communication between the user and the other user may include one or more of an instant message, a voice chat, a video chat, and/or other forms of communication.
- the communication between the user and the other user through the online game may include a live stream of the user and/or the other user participating in the online game.
- the live stream of the user and/or the other user participating in the online game may include a real-time or near real-time video of the user and/or the other user participating in the online game, and/or other content.
- the real-time or near real-time video of the user and/or the other user may include visual and/or audio content of the user and/or the other user.
- the visual content of the user may include a face of the user, and/or other visual content.
- the visual content of the user may include facial features of the face of the user, and/or other visual content.
- the image sensor carried by client computing device(s) 104 may capture the visual content of the user.
- the output signals of the image sensor may convey visual information defining the visual content of the user.
- the visual content of the other user may include a face of the other user, and/or other visual content.
- the visual content of the other user may include facial features of the face of the other user, and/or other visual content.
- the image sensors carried by another client computing device may capture the visual content of the other user.
- the other client computing device may have similar functionalities as client computing device(s) 104 .
- output signals of an image sensor of the other client computing device may convey visual information defining the visual content of the other user.
- the audio content may include audio information of the user (e.g., the user speaking) and/or other audio content.
- the audio sensors carried by client computing device(s) 104 may capture the audio content of the user.
- the output signals of the audio sensor carried by client computing device(s) 104 may convey audio information defining the audio content of the user.
- the audio content may include audio information of the other user (e.g., the other user speaking) and/or other audio content.
- an audio sensor carried by the other client computing device may capture the audio content of the other user.
- output signals of an audio sensor carried by the other client computing device may convey audio information defining the audio content of the other user.
- identification component 108 may be configured to make identifications from the online game, the recording of the gameplay of the online game, and/or from other information relating to the online game. In some implementations, identification component 108 may be configured to make identifications of the user emotional manifestations, the events of interest, and/or other information from the online game. In some implementations, identification component 108 may be configured to make identifications of the user emotional manifestations, the events of interest, and/or other information from the online game from the recording of the gameplay of the online game, and/or from other information relating to the online game.
- identification component 108 may be configured to make identifications of the user emotional manifestations, the events of interest, and/or other information of the online game concurrently with the gameplay of the online game, and/or during other time periods. In some implementations, identification component 108 may be configured to make identifications of the user emotional manifestations, the events of interest, and/or other information of the online game after the end of the gameplay of the online game, and/or during other time periods. In some implementations, identification component 108 may be configured to make identifications of the user emotional manifestations, the events of interest, and/or other information of the online game concurrently with the recording of the gameplay of the online game, and/or during other time periods. In some implementations, identification component 108 may be configured to make identifications of the user emotional manifestations, the events of interest, and/or other information of the online game from the recorded gameplay of the online game.
- identification component 108 may be configured to make identifications of the user emotional manifestations, the events of interest, and/or other information of the online game through one or more machine learning techniques, image-processing techniques, computer vision techniques, and/or other techniques.
- the machine learning techniques may include one or more of a convolutional neural network, decision tree learning, supervised learning, minimax algorithm, unsupervised learning, semi-supervised learning, reinforcement learning, deep learning, and/or other techniques.
- the image-processing techniques may include one or more of bundle adjustment, SURF, ORB, computer vision, and/or other techniques.
- the computer vision techniques may include one or more recognition techniques, motion analysis techniques, image restoration techniques, and/or other techniques.
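- The patent names these technique families without an implementation. As one concrete illustration of the convolutional-neural-network option listed above, here is a tiny classifier skeleton in PyTorch; the library choice, input size, and architecture are assumptions, not part of the disclosure.

```python
import torch
import torch.nn as nn

class ExpressionCNN(nn.Module):
    """Tiny convolutional classifier over 48x48 grayscale face crops,
    in the spirit of the CNN technique named above (illustrative sizes)."""
    def __init__(self, num_emotions: int = 8):  # fear, anger, sadness, joy, ...
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48 -> 24
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
        )
        self.classifier = nn.Linear(32 * 12 * 12, num_emotions)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# logits = ExpressionCNN()(torch.randn(1, 1, 48, 48))  # one face crop -> emotion scores
```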
- identification component 108 may be configured to make identifications of the user emotional manifestations by identifying evidence of one or more emotions being experienced by the user during the gameplay of the online game. In some implementations, identification component 108 may identify the first user emotional manifestation, the second user emotional manifestation, and/or other user emotional manifestations.
- the one or more emotions being experienced by the user during the gameplay of the online game may include fear, anger, sadness, joy, disgust, surprise, trust, anticipation, and/or other emotions.
- identification component 108 may be configured to identify the evidence of the one or more emotions being experienced by the user from the communication interface, and/or other content of the online game. In some implementations, identification component 108 may be configured to identify the evidence of the one or more emotions being experienced by the user from the communication interface during the gameplay of the online game. In some implementations, identification component 108 may be configured to identify the evidence of the one or more emotions being experienced by the user from the communication interface of the recording of the gameplay of the online game. In some implementations, identification component 108 may be configured to identify the evidence of the one or more emotions being experienced by the user from the visual and/or audio content of the communication interface during the gameplay of the online game. In some implementations, identification component 108 may be configured to identify the evidence of the one or more emotions being experienced by the user from the visual and/or audio content of the communication interface of the recording of the gameplay of the online game.
- identification component 108 may be configured to identify the evidence of the one or more emotions being experienced by the user from the visual content of the communication interface.
- the visual content of the communication interface may include physical communication from the user.
- the physical communication from the user may be communication from the user via their body parts and/or face.
- the physical communication may be a gesture and/or facial expression made by the user.
- the visual content of the communication interface may include the face of the user, gestures made by the user, and/or other information.
- the face of the user may include one or more facial expressions, and/or other information.
- facial features of the face may define the facial expressions, and/or other features of the face.
- the facial expressions of the face of the user may convey information about the emotions being experienced by the user.
- the gestures made by the user may convey information about the emotions being experienced by the user.
- identification component 108 may identify the one or more emotions being experienced by the user from the facial expressions of the face of the user, and/or from other information. For example, if the facial expressions on the face of the user convey a smile, identification component 108 may identify evidence of joy being experienced by the user. If the facial expressions on the face of the user convey a frown, identification component 108 may identify evidence of anger or sadness being experienced by the user. In some implementations, identification component 108 may use the one or more machine learning techniques, the one or more image-processing techniques, the one or more computer vision techniques, and/or other techniques to identify the facial expressions of the user's face.
- identification component 108 may determine the emotions associated with the facial expressions. The association between the emotions and the facial expressions may help identify the one or more emotions being experienced by the user, and/or other information.
- specific facial expressions may be associated with specific emotions.
- the association between the specific facial expressions and the specific emotions may be predetermined. For example, a smile may be associated with joy, a frown may be associated with anger or sadness, and/or other facial expressions may be associated with other emotions.
- identification component 108 may use the one or more machine learning techniques, the one or more image-processing techniques, the one or more computer vision techniques, and/or other techniques to determine the association between the specific facial expressions and the specific emotions.
- identification component 108 may identify the one or more emotions being experienced from the gestures made by the user, and/or from other information. For example, if the gestures made by the user convey a clap, identification component 108 may identify evidence of joy being experienced by the user. If the gestures made by the user convey a hateful message, identification component 108 may identify evidence of anger being experienced by the user. In some implementations, identification component 108 may use the one or more machine learning techniques, the one or more image-processing techniques, the one or more computer vision techniques, and/or other techniques to identify the gestures made by the user.
- identification component 108 may determine the emotions associated with the gestures made by the user. The association between the emotions and the gestures may help identify the one or more emotions being experienced by the user. In some implementations, specific gestures made by the user may be associated with specific emotions. In some implementations, the association between the specific gestures made by the user and the specific emotions may be predetermined. For example, a clap may be associated with joy, a hateful gesture may be associated with anger, and/or other gestures made by the user may be associated with other emotions. In some implementations, identification component 108 may use the one or more machine learning techniques, the one or more image-processing techniques, the one or more computer vision techniques, and/or other techniques to determine the association between the specific gestures made by the user and the specific emotions.
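- The predetermined associations described in the last few bullets (a smile with joy, a frown with anger or sadness, a clap with joy, a hateful gesture with anger) reduce to a lookup from detected labels to candidate emotions. A minimal sketch, assuming an upstream detector supplies the expression and gesture labels:

```python
from typing import List, Optional

EXPRESSION_TO_EMOTIONS = {
    "smile": ["joy"],
    "frown": ["anger", "sadness"],
}
GESTURE_TO_EMOTIONS = {
    "clap": ["joy"],
    "hateful_gesture": ["anger"],
}

def emotions_from_visual(expression_label: Optional[str] = None,
                         gesture_label: Optional[str] = None) -> List[str]:
    """Map detected facial-expression and gesture labels to candidate emotions."""
    emotions = (EXPRESSION_TO_EMOTIONS.get(expression_label, [])
                + GESTURE_TO_EMOTIONS.get(gesture_label, []))
    return list(dict.fromkeys(emotions))  # de-duplicate, preserving order

# emotions_from_visual("smile", "clap") -> ["joy"]
# emotions_from_visual("frown")         -> ["anger", "sadness"]
```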
- identification component 108 may be configured to identify the evidence of the one or more emotions being experienced by the user from a vocal communication from the user.
- the vocal communication from the user may be obtained from the communication interface.
- the vocal communication from the user may include audio content from the user.
- identification component 108 may be configured to identify the evidence of the one or more emotions being experienced by the user from the nature of the audio content from the user through the communication interface.
- the audio from the user may include one or more spoken words, tonality, speed, volume, and/or other information.
- the nature of the audio from the user may convey information about the emotions being experienced by the user.
- identification component 108 may identify the one or more emotions being experienced by the user from the nature of the audio from the user, and/or from other information. For example, if the audio of the user conveys a laugh, identification component 108 may identify evidence of joy being experienced by the user. If the audio of the user conveys a shout, identification component 108 may identify evidence of anger being experienced by the user. If the audio of the user conveys profanity, identification component 108 may identify evidence of anger being experienced by the user. If the audio of the user conveys a sudden increase in volume, identification component 108 may identify evidence of surprise being experienced by the user. If the audio of the user conveys a sudden increase in speed, identification component 108 may identify evidence of anticipation being experienced by the user. In some implementations, identification component 108 may use the one or more machine learning techniques and/or other techniques to identify the nature of the audio from the user.
- identification component 108 may determine the emotions associated with the nature of the audio from the user. The association between the emotions and the nature of the audio from the user may help identify the one or more emotions being experienced by the user. In some implementations, a specific nature of the audio from the user may be associated with specific emotions. In some implementations, the association between the specific nature of the audio from the user and the specific emotions may be predetermined. For example, laughter may be associated with joy, shouting and/or profanity may be associated with anger, a sudden increase in volume may be associated with surprise, a sudden increase in speed may be associated with anticipation, and/or other natures of the audio from the user may be associated with other emotions. In some implementations, identification component 108 may use the one or more machine learning techniques and/or other techniques to determine the association between the specific nature of the audio from the user and the specific emotions.
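- Of the audio cues just listed, a sudden increase in volume can be detected directly from the signal. The sketch below flags frames whose RMS loudness jumps sharply over the previous frame as candidate surprise moments; the frame length and ratio threshold are assumed parameters, and cues such as laughter or profanity would require speech models not shown here.

```python
import numpy as np

def volume_spikes(samples: np.ndarray, frame_len: int = 2048,
                  ratio: float = 3.0) -> list:
    """Indices of audio frames whose RMS loudness jumps by `ratio`x
    over the previous frame -- a crude cue for surprise."""
    n = len(samples) // frame_len
    frames = samples[: n * frame_len].reshape(n, frame_len)
    rms = np.sqrt((frames.astype(np.float64) ** 2).mean(axis=1)) + 1e-9
    return [i for i in range(1, n) if rms[i] / rms[i - 1] >= ratio]
```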
- identification component 108 may determine when the user emotional manifestations occurred during the gameplay of the online game and/or the recording of the gameplay of the online game. In some implementations, identification component 108 may determine points in time the user emotional manifestations occurred during the gameplay of the online game and/or the recording of the gameplay of the online game. In some implementations, identification component 108 may determine the individual points in time the first user emotional manifestation, the second user emotional manifestation, and/or other user emotional manifestations occurred during the gameplay of the online game and/or the recording of the gameplay of the online game.
- the points in time the user emotional manifestations occurred during the gameplay of the online game may be when the evidence of the one or more emotions being experienced by the user was identified during the gameplay of the online game. For example, if the first user emotional manifestation occurred 5 minutes (e.g., the first point in time) into the gameplay of the online game, identification component 108 may determine that the first point in time may be 5 minutes into the gameplay of the online game. If the second user emotional manifestation occurred 6 minutes (e.g., the second point in time) into the gameplay of the online game, identification component 108 may determine that the second point in time may be 6 minutes into the gameplay of the online game.
- the points in time the user emotional manifestations occurred in the recording of the gameplay of the online game may be when the evidence of the one or more emotions being experienced by the user was identified during the recording of the gameplay of the online game. For example, if the first user emotional manifestation occurred 5 minutes into the recording of the gameplay of the online game, identification component 108 may determine that the first point in time may be 5 minutes into the recording of the gameplay of the online game. If the second user emotional manifestation occurred 6 minutes into the recording of the gameplay of the online game, identification component 108 may determine that the second point in time may be 6 minutes into the recording of the gameplay of the online game.
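- When identification runs over the recording frame by frame, the point in time falls out of the frame index and the recording's frame rate. A worked example matching the 5-minute figure above, with an assumed 30 fps recording:

```python
def point_in_time_sec(frame_index: int, fps: float = 30.0) -> float:
    """Point in time of a frame within the recording, in seconds."""
    return frame_index / fps

# A manifestation identified at frame 9000 of a 30 fps recording occurred
# 9000 / 30 = 300 s, i.e. 5 minutes into the gameplay.
assert point_in_time_sec(9000) == 300.0
```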
- identification component 108 may be configured to make identifications of the events of interest by identifying the in-game events that occurred in the gameplay of the online game. In some implementations, identification component 108 may identify the first event of interest, the second event of interest, and/or other events of interest.
- the in-game events that occurred in the gameplay of the online game may be one or more actions, activities, events, and/or other occurrences during the gameplay of the online game.
- identification component 108 may be configured to identify the in-game events from the game elements, and/or other information of the online game. In some implementations, identification component 108 may be configured to identify the in-game events from the game elements during the gameplay of the online game, and/or other information of the online game. In some implementations, identification component 108 may be configured to identify the in-game events from the game elements of the recording of the gameplay of the online game, and/or other information of the online game. In some implementations, identification component 108 may be configured to identify the in-game events from the visual and/or audio content of the game elements during the gameplay of the online game. In some implementations, identification component 108 may be configured to identify the in-game events from the visual and/or audio content of the game elements of the recording of the gameplay of the online game.
- the in-game events may include one or more changes in game state, criteria being met, temporal occurrences, interactions within the online game, and/or other in-game events.
- identification component 108 may be configured to identify one or more of the changes in game state, the criteria being met, the temporal occurrences, the interactions within the online game, and/or other in-game events.
- identification component 108 may be configured to identify one or more of the changes in game state, the criteria being met, the temporal occurrences, the interactions within the online game, and/or other in-game events through the one or more machine learning techniques, the one or more image-processing techniques, the one or more computer vision techniques, and/or other techniques.
- identification component 108 may identify the events of interest from the one or more changes in game state, and/or other information.
- the one or more changes in game state may signify a change in a phase of the online game, a change in a stage of the online game, and/or other information.
- the one or more changes in game state may include one or more of a start of the online game, an end of the online game, a pause in the online game, a start of a match in the online game, the end of a match in the online game, a change in the game environment, progression in the online game, and/or other changes in game states.
- the change in the game environment may include one or more changes in the environment of the online game.
- the change in the game environment may include a change in the weather system in the online game, a change in the game board of the online game, a change in the theme of the online game, and/or other changes of the game environment.
- the progression in the online game may include one or more of a virtual entity of the online game advancing (e.g., leveling up), the virtual entity obtaining an item in the online game, the virtual entity obtaining a reward in the online game, the user finishing a portion of the online game, the user advancing to another portion of the online game, and/or other progression in the online game.
- identification component 108 may identify the events of interest from the one or more criteria being met, and/or other information.
- the one or more criteria may define one or more objectives in the online game, one or more conditional events in the online game, and/or other information.
- the one or more criteria may include an objective to reach a checkpoint, an objective to reach a milestone, a condition for the user to win the online game, a condition for the user to lose the online game, a condition for the user to not win the online game (e.g., a draw), and/or other criteria.
- the one or more criteria being met may trigger other in-game events. For example, when the objective to reach a milestone is met, the game state may change, and the user may advance to another portion of the online game. When the condition for the user to lose the online game is met, the game state may change, and the online game may end.
- identification component 108 may identify the events of interest from the one or more temporal occurrences, and/or other information.
- the one or more temporal occurrences may define one or more timed events.
- the one or more temporal occurrences may include one or more timers running out in the online game, reaching time limits in the online game, reaching a time duration in the online game, and/or other temporal occurrences.
- the timers running out in the online game may include a timer running out for a match in the online game, a timer running out for a game session in the online game, a timer running out for some features of the online game, and/or other timers running out in the online game.
- the timer running out for some features of the online game may include the timer running out for an ability of the virtual entity of the online game, and/or other features.
- In some implementations, identification component 108 may identify the events of interest from the one or more interactions within the online game, and/or other information.
- In some implementations, the one or more interactions within the online game may include an interaction between the users participating in the online game, an interaction between the users participating in the online game and the virtual content, an interaction between the virtual entities in the online game, an interaction between the virtual entities and the virtual objects in the online game, an interaction between the virtual entities and the virtual environment in the online game, and/or other interactions.
- In some implementations, the interaction between the users participating in the online game may include communication between the users participating in the online game through the communication interface, and/or other interfaces.
- In some implementations, the interaction between the users participating in the online game and the virtual content may include communication between the users and the virtual content, and/or other interactions.
- In some implementations, the interaction between the virtual entities in the online game may include communication between the virtual entities, contact between the virtual entities, and/or other interactions between the virtual entities.
- In some implementations, the interaction between the virtual entities and the virtual objects in the online game may include communication between the virtual entities and the virtual objects, contact between the virtual entities and the virtual objects, and/or other interactions.
- In some implementations, the interaction between the virtual entities and the virtual environment in the online game may include the virtual entities traversing across or to a particular portion of the virtual environment, and/or other interactions.
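- As one concrete case, communication through the communication interface could itself be flagged as a candidate event of interest. The sketch below is illustrative only; the ChatMessage record is a hypothetical structure, not one named by the disclosure.

```python
# Illustrative sketch: treating user-to-user communication as candidate
# events of interest.
from dataclasses import dataclass

@dataclass
class ChatMessage:
    t: float        # seconds into the recording
    sender: str
    text: str

def communication_events(messages):
    """Return one (timestamp, kind) tuple per message exchanged through
    the communication interface."""
    return [(m.t, "user_communication") for m in messages]
```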
- In some implementations, identification component 108 may determine when the events of interest occurred during the gameplay of the online game and/or the recording of the gameplay of the online game. In some implementations, identification component 108 may determine the points in time the events of interest occurred during the gameplay of the online game and/or the recording of the gameplay of the online game. In some implementations, identification component 108 may determine the individual points in time the first event of interest, the second event of interest, and/or other events of interest occurred during the gameplay of the online game and/or the recording of the gameplay of the online game.
- In some implementations, the points in time the events of interest occurred during the gameplay of the online game may be when the in-game events were identified during the gameplay of the online game. For example, if the first event of interest occurred 3 minutes (e.g., the third point in time) into the recording of the gameplay of the online game, identification component 108 may determine that the third point in time may be 3 minutes into the recording of the gameplay of the online game. If the second event of interest occurred 4 minutes (e.g., the fourth point in time) into the recording of the gameplay of the online game, identification component 108 may determine that the fourth point in time may be 4 minutes into the recording of the gameplay of the online game.
- In some implementations, the points in time the events of interest occurred in the recording of the gameplay of the online game may be when the in-game events were identified in the recording of the gameplay of the online game. For example, if the first event of interest occurred 3 minutes into the recording of the gameplay of the online game, identification component 108 may determine that the third point in time may be 3 minutes into the recording of the gameplay of the online game. If the second event of interest occurred 4 minutes into the recording of the gameplay of the online game, identification component 108 may determine that the fourth point in time may be 4 minutes into the recording of the gameplay of the online game.
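- The mapping from when an event happened to a point in time in the recording amounts to a simple offset, as the sketch below shows. It is illustrative only and assumes both moments are measured against a common wall clock; the function and variable names are invented for this example.

```python
# Illustrative sketch: converting an event's occurrence into a timestamp
# measured from the start of the recording.
def recording_timestamp(event_wall_time, recording_start_wall_time):
    """Return seconds into the recording at which the event occurred,
    e.g. 180.0 for an event 3 minutes in."""
    offset = event_wall_time - recording_start_wall_time
    if offset < 0:
        raise ValueError("event occurred before the recording started")
    return offset

# A first event of interest 3 minutes in and a second 4 minutes in:
third_point_in_time = recording_timestamp(1_000_180.0, 1_000_000.0)   # 180.0 s
fourth_point_in_time = recording_timestamp(1_000_240.0, 1_000_000.0)  # 240.0 s
```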
- In some implementations, determination component 110 may be configured to determine a portion of the recording of the gameplay of the online game to include in one or more video clips of the video edits. In some implementations, determination component 110 may be configured to generate the one or more video clips of the video edits based on the determination made for the portion of the recording of the gameplay of the online game to include in the one or more video clips.
- The recording of the gameplay of the online game is hereafter referred to as “the recording.”
- In some implementations, determination component 110 may determine to include a portion of the recording including at least one of the user emotional manifestations in a video clip. In some implementations, determination component 110 may determine to include a portion of the recording including at least one of the events of interest in a video clip. In some implementations, determination component 110 may determine to include a portion of the recording including at least one of the user emotional manifestations and at least one of the events of interest in a video clip.
- In some implementations, determination component 110 may be configured to determine a portion of the recording to include in a first video clip, a second video clip, a third video clip, and/or other video clips.
- In some implementations, the first video clip, the second video clip, and the third video clip may include different portions of the recording of the online game.
- In some implementations, the first video clip may be the portion of the recording including at least one of the user emotional manifestations.
- In some implementations, the second video clip may be the portion of the recording including at least one of the events of interest.
- In some implementations, the third video clip may be the portion of the recording including at least one of the user emotional manifestations and at least one of the events of interest.
- In some implementations, the portion of the recording including the user emotional manifestation may be a portion of the recording when the user emotional manifestations occurred.
- In some implementations, determination component 110 may determine to include, in the video clip, the portion of the recording including at least one point in time at which the user emotional manifestations occurred. For example, if the first user emotional manifestation occurred at the first point in time (e.g., 5 minutes into the recording), determination component 110 may determine to include the portion of the recording including the first point in time of the recording, such as the recording between 4.5 minutes and 5.5 minutes.
- In some implementations, the duration of the portion of the recording to include in the video clip may be dependent on an expected duration of the first user emotional manifestation. Determination component 110 may determine the expected duration of the first user emotional manifestation through one or more machine-learning techniques.
- In some implementations, determination component 110 may determine to include, in the video clip, a portion of the recording including more than one point in time at which the user emotional manifestations occurred. In some implementations, if the first user emotional manifestation occurred at the first point in time (e.g., 5 minutes into the recording) and the second user emotional manifestation occurred at the second point in time (e.g., 6 minutes into the recording of the gameplay of the online game), determination component 110 may determine to include the portion of the recording including the first point in time and the second point in time of the recording. For example, the portion of the recording including the first point in time and the second point in time of the recording may include the recording between 4.5 minutes and 6.5 minutes.
- In some implementations, the duration of the portion of the recording to include in the video clip may be dependent on an expected duration of the first user emotional manifestation and the second user emotional manifestation. Determination component 110 may determine the expected duration of the first user emotional manifestation and the second user emotional manifestation through one or more machine-learning techniques.
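- The window arithmetic in the examples above can be captured in a few lines. This is a minimal sketch under assumptions the disclosure does not fix: a default 30-second padding on each side, and an optional expected-duration estimate (e.g., from a machine-learning model) that widens the tail of the window.

```python
# Illustrative sketch: computing clip boundaries around one or more points
# in time (all values in seconds into the recording).
def clip_window(points_in_time, padding=30.0, expected_duration=None):
    """Return (start, end) seconds for a clip spanning the given points.

    `expected_duration`, when provided (e.g. predicted by a model of how
    long a manifestation tends to last), replaces the fixed tail padding.
    """
    start = max(0.0, min(points_in_time) - padding)
    tail = expected_duration if expected_duration is not None else padding
    end = max(points_in_time) + tail
    return start, end

# A manifestation at 5 minutes yields the 4.5-5.5 minute span:
assert clip_window([300.0]) == (270.0, 330.0)
# Manifestations at 5 and 6 minutes yield the 4.5-6.5 minute span:
assert clip_window([300.0, 360.0]) == (270.0, 390.0)
```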
- In some implementations, the portion of the recording including the events of interest may be a portion of the recording when the events of interest occurred.
- In some implementations, determination component 110 may determine to include, in the video clip, the portion of the recording including at least one point in time at which the events of interest occurred. For example, if the first event of interest occurred at the third point in time (e.g., 3 minutes into the recording), determination component 110 may determine to include the portion of the recording including the third point in time of the recording, such as the recording between 2.5 minutes and 3.5 minutes.
- In some implementations, the duration of the portion of the recording to include in the video clip may be dependent on an expected duration of the first event of interest. Determination component 110 may determine the expected duration of the first event of interest through one or more machine-learning techniques.
- In some implementations, determination component 110 may determine to include, in the video clip, a portion of the recording including more than one point in time at which the events of interest occurred. In some implementations, if the first event of interest occurred at the third point in time (e.g., 3 minutes into the recording) and the second event of interest occurred at the fourth point in time (e.g., 4 minutes into the recording), determination component 110 may determine to include the portion of the recording including the third point in time and the fourth point in time of the recording. For example, the portion of the recording including the third point in time and the fourth point in time of the recording may include the recording between 2.5 minutes and 4.5 minutes. The duration of the portion of the recording to include in the video clip may be dependent on an expected duration of the first event of interest and the second event of interest. Determination component 110 may determine the expected duration of the first event of interest and the second event of interest through one or more machine-learning techniques.
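- Cutting the 2.5-4.5 minute span above out of the recording could look like the following. The disclosure names no particular video library; this sketch assumes the moviepy library (1.x API) and a recording stored as "recording.mp4", both of which are assumptions for the example.

```python
# Illustrative sketch only: extracting the clip that covers the third
# (3 min) and fourth (4 min) points in time.
from moviepy.editor import VideoFileClip

# 2.5 to 4.5 minutes, i.e. the window the clip_window sketch above
# produces for points at 180.0 s and 240.0 s.
start, end = 150.0, 270.0

recording = VideoFileClip("recording.mp4")   # assumed path of the recording
video_clip = recording.subclip(start, end)
video_clip.write_videofile("second_video_clip.mp4")
```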
- In some implementations, the portion of the recording including the user emotional manifestation and the events of interest may be a portion of the recording when the user emotional manifestations and the events of interest occurred.
- In some implementations, determination component 110 may determine to include, in the video clip, the portion of the recording including at least one point in time at which the user emotional manifestations occurred and at least one point in time at which the events of interest occurred. In some implementations, if the first user emotional manifestation occurred at the first point in time (e.g., 5 minutes into the recording) and the first event of interest occurred at the third point in time (e.g., 3 minutes into the recording), determination component 110 may determine to include the portion of the recording including the third point in time and the first point in time of the recording.
- For example, the portion of the recording including the third point in time and the first point in time of the recording may include the recording between 2.5 minutes and 5.5 minutes.
- In some implementations, the duration of the portion of the recording to include in the video clip may be dependent on an expected duration of the first user emotional manifestation and the first event of interest. Determination component 110 may determine the expected duration of the first user emotional manifestation and the first event of interest through one or more machine-learning techniques.
- In some implementations, determination component 110 may determine to include, in the video clip, a portion of the recording including more than one point in time at which the user emotional manifestations occurred and more than one point in time at which the events of interest occurred. In some implementations, if the first user emotional manifestation occurred at the first point in time, the second user emotional manifestation occurred at the second point in time, the first event of interest occurred at the third point in time, and the second event of interest occurred at the fourth point in time, determination component 110 may determine to include the portion of the recording including the third point in time, the fourth point in time, the first point in time, and the second point in time of the recording.
- For example, the portion of the recording including the third point in time, the fourth point in time, the first point in time, and the second point in time of the recording may include the recording between 2.5 minutes and 6.5 minutes.
- In some implementations, the duration of the portion of the recording to include in the video clip may be dependent on an expected duration of the first user emotional manifestation, the second user emotional manifestation, the first event of interest, and the second event of interest. Determination component 110 may determine the expected duration through one or more machine-learning techniques.
- In some implementations, determination component 110 may determine to include, in the video clip, a portion of the recording including more than one point in time at which the user emotional manifestations occurred and at least one point in time at which the events of interest occurred. In some implementations, determination component 110 may determine to include the portion of the recording including at least one point in time at which the user emotional manifestations occurred and more than one point in time at which the events of interest occurred. In some implementations, the user may determine the number of the user emotional manifestations and/or the events of interest to include in the video clip. In some implementations, the number of the user emotional manifestations and/or the events of interest to include in the video clip may be predetermined by server(s) 102 or the online game.
- In some implementations, determination component 110 may be configured to determine an association between the user emotional manifestations and the events of interest, and/or other information. For example, determination component 110 may be configured to determine if an event of interest caused a user emotional manifestation. If the event of interest caused the user emotional manifestation, the event of interest may be associated with the user emotional manifestation. In some implementations, determination component 110 determines the association between the user emotional manifestations and the events of interest based on the temporal relationship between the user emotional manifestations and the events of interest. For example, if a user emotional manifestation occurred shortly after an event of interest, determination component 110 may determine that the user emotional manifestation may be associated with the event of interest. In some implementations, determination component 110 determines the association between the user emotional manifestations and the events of interest using the one or more machine-learning techniques, and/or other techniques.
- In some implementations, determination component 110 may determine to include in a video clip a portion of the recording including an event of interest and the user emotional manifestation associated with that event of interest. In some implementations, determination component 110 may determine to include the portion of the recording including the points in time at which the associated event of interest and user emotional manifestation occurred.
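- The temporal-proximity rule ("a manifestation shortly after an event is likely caused by it") can be sketched directly. The 10-second window below is an assumed threshold, not a value given in the disclosure, and the function name is invented for this example.

```python
# Illustrative sketch: associating each manifestation with the closest
# event of interest that occurred shortly before it (times in seconds).
def associate(manifestation_times, event_times, window=10.0):
    """Pair each manifestation with the closest event of interest that
    occurred at most `window` seconds before it."""
    pairs = []
    for m in manifestation_times:
        candidates = [e for e in event_times if 0.0 <= m - e <= window]
        if candidates:
            pairs.append((max(candidates), m))   # closest preceding event
    return pairs

# An event at 180 s and a manifestation at 185 s are associated:
assert associate([185.0], [180.0]) == [(180.0, 185.0)]
```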
- In some implementations, generation component 112 may be configured to generate video edits and/or other content.
- In some implementations, the video edits may include one or more of the video clips, and/or other information.
- In some implementations, the video edits may include one or more of the video clips generated by determination component 110, and/or other information.
- In some implementations, generation component 112 may be configured to generate a first video edit that may include all the video clips, including the first video clip, the second video clip, the third video clip, and/or other video clips.
- In some implementations, generation component 112 may be configured to generate a second video edit that may include some of the video clips.
- In some implementations, generation component 112 may determine the one or more video clips to include in a video edit. In some implementations, generation component 112 may determine to include one or more video clips with content from a period of the recording. The period of the recording may be determined by the user, or predetermined by server(s) 102 or the online game. For example, generation component 112 may determine to include the one or more video clips with content from the recording between the start of the recording and the middle of the recording, or between the start of the recording and the end of the recording.
- In some implementations, generation component 112 may determine to include in the video edits the one or more video clips with similar user emotional manifestations, similar events of interest, similar time durations, a combination of different user emotional manifestations, a combination of different events of interest, a combination of similar user emotional manifestations and events of interest, and/or other combinations of video clips.
- In some implementations, the one or more video clips with the similar user emotional manifestations may be the one or more video clips where similar emotions are being experienced by the user in the one or more video clips.
- In some implementations, the video edit may include more than one of the video clips where similar emotions are being experienced by the user.
- In some implementations, the one or more video clips with the similar events of interest may be the one or more video clips with similar in-game events that occurred.
- In some implementations, the video edit may include more than one of the video clips where similar in-game events occurred.
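- Grouping clips that share an emotion label into one prospective edit is a straightforward bucketing step, sketched below. The (label, clip) pairing is an assumption for this example; the clips of one group could then be joined, for instance with moviepy's concatenate_videoclips.

```python
# Illustrative sketch: grouping labeled clips into prospective video edits.
from collections import defaultdict

def group_clips_by_emotion(labeled_clips):
    """Group (emotion_label, clip) pairs so that one video edit holds the
    clips where similar emotions were experienced by the user."""
    edits = defaultdict(list)
    for emotion, clip in labeled_clips:
        edits[emotion].append(clip)
    return edits

# e.g. {"joy": [clip_a, clip_c], "surprise": [clip_b]}
```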
- In some implementations, presentation component 114 may be configured to present the video edits and/or other information. In some implementations, presentation component 114 may be configured to present the video edits and/or other information to client computing device(s) 104 and/or other devices. In some implementations, presentation component 114 may be configured to present a first video edit, a second video edit, and/or other video edits to client computing device(s) 104 and/or other devices. In some implementations, presentation component 114 may transmit information that facilitates the presentation of the video edits through client computing device(s) 104. In some implementations, presentation component 114 may provide client computing device(s) 104 with access to the video edits.
- Presentation component 114 may provide client computing device(s) 104 with previews of the video edits.
- In some implementations, the previews of the video edits may include a preview of the individual video clips of the video edits.
- In some implementations, the user of client computing device(s) 104 may have access to the video edits through client computing device(s) 104.
- In some implementations, the user of client computing device(s) 104 may preview the video edits through client computing device(s) 104.
- In some implementations, the user may access the video edits through one or more user inputs through client computing device(s) 104.
- In some implementations, access to the video edits may include access to the individual video clips of the video edits. Access to the individual video clips of the video edits may include access to the visual and/or audio content of the video clip.
- In some implementations, the user may view the video edits through client computing device(s) 104. In some implementations, the user may view the individual video clips of the video edits through client computing device(s) 104. In some implementations, the user may modify the video edits and/or the individual video clips of the video edits through client computing device(s) 104. The video edits and/or the individual video clips of the video edits modified by the user through client computing device(s) 104 may be stored in client computing device(s) 104 and/or other storage media.
- In some implementations, presentation component 114 may be configured to transmit the video edits or the individual video clips of the video edits to a different device.
- In some implementations, the user may instruct presentation component 114, through client computing device(s) 104, to transmit the video edits or the individual video clips of the video edits to a different device.
- For example, the user may instruct presentation component 114, through client computing device(s) 104, to save the video edits or the individual video clips of the video edits to an external storage medium or to client computing device(s) 104.
- In some implementations, presentation component 114 may be configured to transmit the video edits or the individual video clips of the video edits to one or more external sources, and/or other devices.
- In some implementations, the external sources may be one or more of a social media platform, a video-sharing platform, and/or other external sources.
- In some implementations, the user may instruct presentation component 114 to transmit the video edits or the individual video clips of the video edits to one or more external sources through client computing device(s) 104.
- For example, the user may instruct presentation component 114, through client computing device(s) 104, to transmit the individual video clips of the video edits to a social media account associated with the user.
- In some implementations, server(s) 102, client device(s) 104, and/or external resources 120 may be operatively linked via one or more electronic communication links.
- For example, electronic communication links may be established, at least in part, via the network 103, such as the Internet and/or other networks. It will be appreciated that this is not intended to be limiting and that the scope of this disclosure includes implementations in which server(s) 102, client device(s) 104, and/or external resources 120 may be operatively linked via some other communication media.
- In some implementations, external resources 120 may include sources of information, hosts and/or providers of virtual environments outside of system 100, external entities participating with system 100, and/or other resources. In some implementations, some or all of the functionality attributed herein to external resources 120 may be provided by resources included in system 100.
- Server(s) 102 may include electronic storage 122, one or more processors 124, and/or other components. Server(s) 102 may include communication lines or ports to enable the exchange of information with a network and/or other computing devices. Illustration of server(s) 102 in FIG. 1 is not intended to be limiting. Server(s) 102 may include a plurality of hardware, software, and/or firmware components operating together to provide the functionality attributed herein to server(s) 102. For example, server(s) 102 may be implemented by a cloud of computing devices operating together as server(s) 102.
- In some implementations, electronic storage 122 may include electronic storage media that electronically stores information.
- The electronic storage media of electronic storage 122 may include one or both of system storage that is provided integrally (i.e., substantially nonremovable) with server(s) 102 and removable storage that is removably connectable to server(s) 102 via, for example, a port (e.g., a USB port, a FireWire port, etc.) or a drive (e.g., a disk drive, etc.).
- Electronic storage 122 may include one or more of optically readable storage media (e.g., optical disks, etc.), magnetically readable storage media (e.g., magnetic tape, magnetic hard drive, floppy drive, etc.), electrical charge-based storage media (e.g., EEPROM, RAM, etc.), solid-state storage media (e.g., flash drive, etc.), and/or other electronically readable storage media.
- In some implementations, electronic storage 122 may include one or more virtual storage resources (e.g., cloud storage, a virtual private network, and/or other virtual storage resources).
- Electronic storage 122 may store software algorithms, information determined by processor(s) 124, information received from server(s) 102, information received from client computing device(s) 104, and/or other information that enables server(s) 102 to function as described herein.
- In some implementations, processor(s) 124 may be configured to provide information processing capabilities in server(s) 102.
- Processor(s) 124 may include one or more of a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information.
- Although processor(s) 124 is shown in FIG. 1 as a single entity, this is for illustrative purposes only.
- Processor(s) 124 may include a plurality of processing units. These processing units may be physically located within the same client computing device, or processor(s) 124 may represent processing functionality of a plurality of devices operating in coordination.
- Processor(s) 124 may be configured to execute computer-readable instruction components 106, 108, 110, 112, 114, and/or other components.
- Processor(s) 124 may be configured to execute components 106, 108, 110, 112, 114, and/or other components by software; hardware; firmware; some combination of software, hardware, and/or firmware; and/or other mechanisms for configuring processing capabilities on processor(s) 124.
- Although components 106, 108, 110, 112, and 114 are illustrated in FIG. 1 as being co-located within a single processing unit, in implementations in which processor(s) 124 includes multiple processing units, one or more of components 106, 108, 110, 112, and/or 114 may be located remotely from the other components.
- The description of the functionality provided by the different components 106, 108, 110, 112, and/or 114 described herein is for illustrative purposes, and is not intended to be limiting, as any of components 106, 108, 110, 112, and/or 114 may provide more or less functionality than is described.
- In some implementations, processor(s) 124 may be configured to execute one or more additional components that may perform some or all of the functionality attributed herein to one of components 106, 108, 110, 112, and/or 114.
- FIG. 2 illustrates a client computing device 104 playing an online game.
- Client computing device 104 may carry a camera 202, and/or other devices.
- Camera 202 may be an image sensor.
- Camera 202 may be configured to generate output signals conveying visual information within a field of view of camera 202.
- The visual information may define visual content.
- The visual content may include visuals of a face of a first user 216.
- First user 216 may be a user of client computing device 104.
- The online game may be presented through a client interface 205.
- The online game may include one or more game elements 208, a communication interface 207, and/or other components.
- Game element(s) 208 may include one or more game environments 213, one or more game entities, and/or other contents.
- The one or more game entities may include a first game entity 212, a second game entity 214, and/or other game entities.
- In some implementations, game element(s) 208 may include messages indicating the state of the game.
- The messages indicating the state of the game may include a first message 210a, a second message 210b (as illustrated in FIG. 3), and/or other messages.
- Communication interface 207 may include a first view 204 of the face of first user 216, a second view 206 of a face of a second user 218, and/or other views of other information.
- In some implementations, first user 216 may request video edits from client computing device 104.
- In some implementations, client computing device 104 may obtain the video edits from a system similar to system 100.
- In some implementations, client computing device 104 may generate video edits by executing one or more computer program components of client computing device 104.
- The one or more computer program components of client computing device 104 may be similar to the computer program components of system 100.
- The one or more computer program components of client computing device 104 may include one or more of a capture component, an identification component, a determination component, a generation component, a presentation component, and/or other components.
- In some implementations, the capture component may record gameplay of the online game.
- The recording of the gameplay of the online game may include a view of client interface 205.
- The recording of the gameplay of the online game may include the view of client interface 205 from the start of the online game (as illustrated in FIG. 2) to the end of the online game (as illustrated in FIG. 3).
- The capture component may be similar to capture component 106 (as illustrated in FIG. 1).
- In some implementations, the identification component may make identifications of user emotional manifestations, events of interest, and/or other information from the online game and/or the recording of the gameplay of the online game.
- The identification component may make identifications of the user emotional manifestations, the events of interest, and/or other information from the recording of client interface 205.
- The identification component may identify the user emotional manifestations from the face of the first user 216 and/or the face of the second user 218.
- The identification component may identify a first user emotional manifestation from audio signal 216a from first user 216, a second user emotional manifestation from audio signal 216b from first user 216, and/or other user emotional manifestations.
- The identification component may identify the events of interest from game element(s) 208, and/or other information.
- The identification component may identify a first event of interest from message 210a, a second event of interest from message 210b, and/or other events of interest.
- The identification component may be similar to identification component 108 (as illustrated in FIG. 1).
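- Identifying manifestations from the user's face in the communication-interface video can be pictured with a minimal sketch. The `emotion_classifier` below stands in for any pretrained facial-emotion model; it is a hypothetical callable, not an API named by the disclosure, and the frame format is an assumption for this example.

```python
# Illustrative sketch only: flagging user emotional manifestations in the
# frames of the communication-interface video (e.g. first view 204).
def find_manifestations(frames, emotion_classifier, neutral_label="neutral"):
    """Return (timestamp, emotion) pairs for frames showing a non-neutral
    emotion on the user's face."""
    manifestations = []
    for t, frame in frames:                  # frames: iterable of (seconds, image)
        label = emotion_classifier(frame)    # e.g. "joy", "anger", "neutral"
        if label != neutral_label:
            manifestations.append((t, label))
    return manifestations
```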
- In some implementations, the determination component may determine a portion of the recording of the gameplay of the online game to include in one or more video clips of the video edits, and/or generate the one or more video clips of the video edits.
- The determination component may be similar to determination component 110 (as illustrated in FIG. 1).
- In some implementations, the generation component may be configured to generate one or more video edits and/or generate other content.
- The one or more video edits may include a first video edit 302, a second video edit 304, a third video edit 306 (as illustrated in FIG. 4), and/or other video edits.
- The generation component may be similar to generation component 112 (as illustrated in FIG. 1).
- In some implementations, the presentation component may be configured to present the video edits and/or other information.
- The presentation of the video edits to client computing device 104 can be seen in an example illustrated in FIG. 4.
- The presentation component may present one or more video edits, including first video edit 302, second video edit 304, and third video edit 306, to the first user 216 through client computing device 104.
- The presentation component may be similar to presentation component 114 (as illustrated in FIG. 1).
- FIG. 5 illustrates a method 500 for generating video edits, in accordance with one or more implementations.
- The operations of method 500 presented below are intended to be illustrative. In some implementations, method 500 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method 500 are illustrated in FIG. 5 and described below is not intended to be limiting.
- In some implementations, method 500 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information).
- The one or more processing devices may include one or more devices executing some or all of the operations of method 500 in response to instructions stored electronically on an electronic storage medium.
- The one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 500.
- The method 500 includes operations for generating video edits.
- The video edits may be highlight videos of gameplay of an online game.
- The video edits may comprise one or more video clips of a recording of the online game.
- A video clip may include a portion of a recording of the gameplay of the online game.
- The portion of the recording of the online game selected for the video clip may be determined based on user emotional manifestations, events of interest, and/or other information.
- In some implementations, the recording of the online game is analyzed to identify the user emotional manifestations, the events of interest, and/or other information.
- The user emotional manifestations may be associated with the user participating in the online game.
- The events of interest may be associated with in-game events of the online game.
- In some implementations, the recording of the online game is analyzed to determine when the user emotional manifestations and/or events of interest occurred.
- In some implementations, operation 502 is performed by an identification component the same as or similar to identification component 108 (shown in FIG. 1 and described herein).
- In some implementations, video edits are generated.
- The video edits include selected user emotional manifestations and/or events of interest.
- The video edits may include a first video edit.
- The first video edit comprises a first video clip, and/or other video clips.
- The first video clip may include a portion of the recording of the online game that includes at least one of the user emotional manifestations and/or events of interest.
- The first video clip may include a portion of the recording of the online game that includes a first point in time of the recording of the online game that includes at least one of the user emotional manifestations and/or events of interest.
- The first video clip may be generated by a determination component the same as or similar to determination component 110 (shown in FIG. 1 and described herein).
- In some implementations, operation 506 is performed by a generation component the same as or similar to generation component 112 (shown in FIG. 1 and described herein).
- In some implementations, presentation of the video edits is effectuated through a client computing device.
- The video edits are presented such that the user can preview the video edits through the client computing device.
- In some implementations, operation 508 is performed by a presentation component the same as or similar to presentation component 114 (shown in FIG. 1 and described herein).
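- At a very small scale, the flow of method 500 (identify points of interest, determine the clip portion, generate the edit, present it) can be stitched together as follows. This end-to-end sketch is illustrative only: it assumes the moviepy library (1.x API), an assumed recording path, and already-identified timestamps; none of these are prescribed by the disclosure.

```python
# Illustrative end-to-end sketch mirroring method 500.
from moviepy.editor import VideoFileClip

def generate_highlight_edit(recording_path, manifestation_times, event_times,
                            padding=30.0):
    """Cut one highlight clip spanning every identified point in time
    (all times in seconds into the recording)."""
    points = sorted(manifestation_times + event_times)
    if not points:
        return None
    start = max(0.0, points[0] - padding)    # determine the clip portion
    end = points[-1] + padding
    recording = VideoFileClip(recording_path)
    return recording.subclip(start, end)     # ready to present or write out

# edit = generate_highlight_edit("recording.mp4", [300.0], [180.0, 240.0])
# edit.write_videofile("first_video_edit.mp4")
```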
Abstract
Description
- The present disclosure relates to generating video edits. The video edits may be highlight videos of gameplay of an online game. The video edits may comprise one or more video clips of the online game. The video clips may be selected based on events of interest and user emotional manifestations in the online game.
- Capturing important moments of gameplay in an online game and sharing the important moments are increasing in popularity. Highlight videos may be created from the important moments of gameplay in the online game. The highlight videos are an efficient way to review the important moments of gameplay in the online game.
- One aspect of this disclosure relates to generating video edits. For example, the video edits may be highlight videos of gameplay of an online game. The video edits may comprise one or more video clips of the online game. A video clip may include a portion of a recording of the gameplay of the online game and a video of a user participating in the online game. The online game may include the video of the user. The gameplay of the online game may include the video of the user. The video of the user may be a video chat between the user and another user participating in the online game. The online game may include game elements, a communication interface, and/or other components of the online game. The communication interface may allow the user to communicate with other users participating in the online game. From user emotional manifestations, events of interest, and/or other information of the online game, the portion of the recording of the online game to include in the video clip may be determined. The user emotional manifestations may be evidence of one or more emotions being experienced by the user during the gameplay of the online game. For example, the user cheering with a smile may be an example of a user emotional manifestation. The events of interest may be in-game events that occur during the gameplay of the online game. For example, the user scoring a point in the online game may be an example of an in-game event that occurs during the gameplay of the online game. In some implementations, a first video clip may include a first user emotional manifestation that occurred during a first point in time during the gameplay of the online game. In some implementations, a second video clip may include the first user emotional manifestation that occurred during the first point in time during the gameplay of the online game and/or a first event of interest that occurred during a second point in time during the gameplay of the online game. The user may be presented with access to the video edits, including the first video clip, the second video clip, and/or other video clips, and/or previews of the video edits.
- In some implementations, a system for generating video edits may include one or more servers, one or more client computing devices, one or more external resources, and/or other components. The one or more servers may be configured to communicate with the one or more client computing devices according to a client/server architecture. Users of the system may access the system via the client computing device(s). The server(s) may be configured to execute one or more computer program components. The computer program components may include one or more of a capture component, an identification component, a determination component, a generation component, a presentation component, and/or other components.
- In some implementations, the capture component may be configured to record the online game, the gameplay of the online game, and/or other information. In some implementations, the capture component may be configured to record visual and/or audio content of the online game, the gameplay of the online game, and/or other information. The recording of the online game may include visual and/or audio content of the online game, including the game elements, the communication interface, and/or other components of the online game. The communication interface may include a live stream of the user participating in the online game, and/or other communication elements. The live stream of the user participating in the online game may include a real-time or near real-time video of the user participating in the online game, and/or other contents.
- In some implementations, the identification component may be configured to make identifications from the online game, the recording of the gameplay of the online game, and/or from other information relating to the online game. In some implementations, the identification component may be configured to make identifications of the user emotional manifestations, the events of interest, and/or other information of the online game. In some implementations, the identification component may be configured to make identifications of the user emotional manifestations by identifying evidence of one or more emotions being experienced by the user during the gameplay of the online game. In some implementations, the identification component may determine when the user emotional manifestations occurred during the gameplay of the online game and/or the recording of the gameplay of the online game. In some implementations, the identification component may be configured to make identifications of the events of interest by identifying the in-game events that occurred in the gameplay of the online game. In some implementations, the identification component may determine when the events of interest occurred during the gameplay of the online game and/or the recording of the gameplay of the online game.
- In some implementations, the determination component may be configured to determine a portion of the recording of the gameplay of the online game to include in one or more video clips of the video edits. In some implementations, the determination component may determine to include a portion of the recording including at least one of the user emotional manifestations in a video clip. For example, the determination component may determine to include, in the video clip, a portion of the recording that includes a point in time at which the user emotional manifestations occurred. In some implementations, the determination component may determine to include a portion of the recording including at least one of the events of interest in a video clip. In some implementations, the determination component may determine to include a portion of the recording including at least one of the user emotional manifestations and at least one of the events of interest in a video clip.
- In some implementations, the determination component may be configured to determine an association between the user emotional manifestations and the events of interest, and/or other information. For example, the determination component may be configured to determine if an event of interest caused a user emotional manifestation. If the event of interest caused the user emotional manifestation, the event of interest may be associated with the user emotional manifestation.
- In some implementations, the generation component may be configured to generate video edits and/or other content. The video edits may include one or more of the video clips, and/or other information. In some implementations, the generation component may be configured to generate a first video edit that may include all the video clips, including the first video clip, the second video clip, the third video clip, and/or other video clips. In some implementations, the generation component may be configured to generate a second video edit that may include some of the video clips.
- In some implementations, the presentation component may be configured to present the video edits and/or other information. In some implementations, the presentation component may be configured to present the video edits and/or other information to the client computing device and/or other devices. In some implementations, the presentation component may transmit information that facilitates the presentation of the video edits through the client computing device. In some implementations, the presentation component may provide the client computing device with access to the video edits. The presentation component may provide the client computing device 104 with previews of the video edits.
- These and other features, and characteristics of the present technology, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, will become more apparent upon consideration of the following description and the appended claims with reference to the accompanying drawings, all of which form a part of this specification, wherein like reference numerals designate corresponding parts in the various figures. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended as a definition of the limits of the invention. As used in the specification and in the claims, the singular form of “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise.
- FIG. 1 illustrates a system for generating video edits, in accordance with one or more implementations.
- FIG. 2 illustrates a client computing device playing an online game at the start of a match in the online game, in accordance with one or more implementations.
- FIG. 3 illustrates the client computing device playing the online game at the end of a match in the online game, in accordance with one or more implementations.
- FIG. 4 illustrates the client computing device presented with video edits.
- FIG. 5 illustrates a method for generating video edits, in accordance with one or more implementations.
- FIG. 1 illustrates a system 100 for generating video edits. For example, the video edits may be highlight videos of gameplay of an online game. The video edits may comprise one or more video clips of a recording of the online game. A video clip may include a portion of a recording of the gameplay of the online game and a video of a user participating in the online game. The online game may include the video of the user. The gameplay of the online game may include the video of the user. The video of the user may be a video chat between the user and another user participating in the online game. From user emotional manifestations, events of interest, and/or other information of the online game, the portion of the recording of the online game selected for the video clip may be determined. A user emotional manifestation may be evidence of one or more emotions experienced by the user during gameplay of the online game. An event of interest may be one or more in-game events that occurred during the gameplay of the online game. In some implementations, a first video clip may include a first user emotional manifestation that occurred during a first point in time during the gameplay of the online game. In some implementations, a second video clip may include the first user emotional manifestation that occurred during the first point in time during the gameplay of the online game and/or a first event of interest that occurred during a second point in time during the gameplay of the online game. The user may be presented with access to the video edits, including the first video clip, the second video clip, and/or other video clips, and/or previews of the video edits.
- As illustrated in FIG. 1, system 100 may include one or more of one or more servers 102, one or more client computing devices 104, one or more external resources 120, and/or other components. Server(s) 102 may be configured to communicate with client computing device(s) 104 according to a client/server architecture. The user of system 100 may access system 100 via client computing device(s) 104. Server(s) 102 may include one or more physical processors 124, one or more electronic storages 122, and/or other components. The one or more physical processors 124 may be configured by machine-readable instructions 105. Executing machine-readable instructions 105 may cause server(s) 102 to generate video edits. Machine-readable instructions 105 may include one or more computer program components. The computer program components may include one or more of a capture component 106, an identification component 108, a determination component 110, a generation component 112, a presentation component 114, and/or other components.
- In some implementations, electronic storage(s) 122 and/or other components may be configured to store recordings of the online game, portions of the recording of the online game, the video edits, the video clips, and/or other information. The recording of the online game may include visual and/or audio content of the online game, and/or other information.
- In some implementations, client computing device(s) 104 may include one or more of a mobile computing device, a game console, a personal computer, and/or other computing platforms. The mobile computing device may include one or more of a smartphone, a smartwatch, a tablet, and/or other mobile computing devices. In some implementations, client computing device(s) 104 may carry one or more sensors. The one or more sensors may include one or more image sensors, one or more audio sensors, one or more infrared sensors, one or more depth sensors, and/or other sensors. In some implementations, the one or more sensors may be coupled to client computing device(s) 104.
- In some implementations, an image sensor may be configured to generate output signals conveying visual information, and/or other information. The visual information may define visual content within a field of view of the image sensor and/or other content. The visual content may include depictions of real-world objects and/or surfaces. The visual content may be in the form of one or more of images, videos, and/or other visual information. The field of view of the image sensor may be a function of a position and an orientation of a client computing device. In some implementations, an image sensor may comprise one or more of a photosensor array (e.g., an array of photosites), a charge-coupled device sensor, an active pixel sensor, a complementary metal-oxide semiconductor sensor, an N-type metal-oxide-semiconductor sensor, and/or other devices.
- In some implementations, an audio sensor may be configured to generate output signals conveying audio information, and/or other information. The audio information may define audio from a user of the audio sensor (e.g., utterances of the user), audio around the user (such as ambient audio), and/or other information. In some implementations, an audio sensor may include one or more of a microphone, a micro-electro-mechanical microphone, and/or other devices.
- In some implementations, a depth sensor may be configured to generate output signals conveying depth information within a field of view of the depth sensor, and/or other information. The depth information may define depths of real-world objects and/or surfaces, and/or other information. A field of view of the depth sensor may be a function of a position and an orientation of a client computing device. In some implementations, the depth information may define a three-dimensional depth map of real-world objects and/or a face of a user. In some implementations, the depth sensor may comprise one or more ultrasound devices, infrared devices, light detection and ranging (LiDAR) devices, time-of-flight cameras, and/or other depth sensors and/or ranging devices. In some implementations, the infrared devices may include one or more infrared sensors. The infrared sensors may generate output signals conveying the depth information.
- In some implementations, client computing device(s) 104 may include one or more processors configured by machine-readable instructions and/or other components. Machine-readable instructions of client computing device(s) 104 may include computer program components. The computer program components may be configured to enable the user associated with client computing device(s) 104 to interface with system 100, the one or more sensors, and/or external resources 120, and/or provide other functionality attributed herein to client computing device(s) 104 and/or server(s) 102.
- In some implementations, capture component 106 may be configured to record the online game, the gameplay of the online game, and/or other information. In some implementations, the online game, the gameplay of the online game, and/or other information may be presented through a client interface of the client computing device(s) 104. The online game, the gameplay of the online game, and/or other information may be viewed by the user through the client interface. For example, the client interface may display visual content of the online game through a digital screen of the client computing device(s) 104. The client interface of the client computing device(s) 104 may include visual and/or audio content of the online game, the gameplay of the online game, and/or other information. In some implementations, capture component 106 may be configured to record the visual and/or audio content of the client interface of the client computing device(s) 104. In some implementations, capture component 106 may be configured to record the online game being played by the user through the client interface of client computing device(s) 104. In some implementations, the recording of the online game includes the video of the user. The video of the user may be of the video chat between the user and the other user participating in the online game.
- In some implementations, the online game may include one or more game elements, a communication interface, and/or other components of the online game. In some implementations, capture component 106 may be configured to record visual and/or audio content of the online game, the gameplay of the online game, and/or other information. In some implementations, capture component 106 may be configured to record visual and/or audio content of the gameplay of the online game and/or other information. In some implementations, the visual and/or audio content of the gameplay of the online game may include the game elements, the communication interface, and/or other components of the online game. In some implementations, the visual and/or audio content of the gameplay of the online game may be stored within electronic storage 122, non-transitory storage media, and/or other storage media.
- In some implementations, capture component 106 may be configured to record the visual and/or audio content of the gameplay of the online game from a start to an end of the online game. In some implementations, the start of the online game may be when the user begins to interact with the online game. For example, the start of the online game may be the moment when the user opens the online game. In some implementations, the end of the online game may be when the user closes the online game. For example, the end of the online game may be the moment when the user leaves the online game. In some implementations, the start of the online game may be a beginning of a new phase of the online game. For example, the start of the online game may be a beginning of a match in the online game, and/or a beginning of a new instance in the online game. In some implementations, the end of the online game may be the ending of the new phase of the online game. For example, the end of the online game may be an ending of the match in the online game, and/or an ending of the new instance in the online game.
- In some implementations, the user may interact with the online game. In some implementations, the user may interact with and/or control components of the online game. In some implementations, the user may interact with and/or control the game elements, and/or other components of the online game. For example, the user may interact with and/or control the virtual content that makes up the online game, and/or other components of the online game. In some implementations, the user may interact with and/or control the game elements through inputting user inputs through client computing device(s) 104, and/or through other inputs through other devices. The user input may comprise of one or more of a gesture input received through the image sensor and/or other sensors of the given client computing device(s) 104, one or more of a voice input received through the audio sensors of the given client computing device(s) 104, one or more of a touch input received though a touch-enabled display of the given client computing device(s) 104, one or more of a controller input received through game controllers of the given client computing device(s) 104 and/or other user inputs.
- In some implementations, the communication interface may include communication between users participating in the online game, and/or other information. The communication interface may include communication between the user participating in the online game and another user participating in the online game. For example, the communication between the user and the other user may include one or more of an instant message, a voice chat, a video chat, and/or other forms of communication.
- In some implementations, the communication between the user and the other user through the online game may include a live stream of the user and/or the other user participating in the online game. The live stream of the user and/or the other user participating in the online game may include a real-time or near real-time video of the user and/or the other user participating in the online game, and/or other content. In some implementations, the real-time or near real-time video of the user and/or the other user may include visual and/or audio content of the user and/or the other user.
- In some implementations, the visual content of the user may include a face of the user, and/or other visual content. In some implementations, the visual content of the user may include facial features of the face of the user, and/or other visual content. In some implementations, the image sensor carried by client computing device(s) 104 may capture the visual content of the user. In some implementations, the output signals of the image sensor may convey visual information defining the visual content of the user.
- In some implementations, the visual content of the other user may include a face of the other user, and/or other visual content. In some implementations, the visual content of the other user may include facial features of the face of the other user, and/or other visual content. In some implementations, the image sensors carried by another client computing device may capture the visual content of the other user. The other client computing device may have similar functionalities as client computing device(s) 104. In some implementations, output signals of an image sensor of the other client computing device may convey visual information defining the visual content of the other user.
- In some implementations, the audio content may include audio information of the user (e.g., the user speaking) and/or other audio content. In some implementations, the audio sensors carried by client computing device(s) 104 may capture the audio content of the user. In some implementations, the output signals of the audio sensor carried by client computing device(s) 104 may convey audio information defining the audio content of the user.
- In some implementations, the audio content may include audio information of the other user (e.g., the other user speaking) and/or other audio content. In some implementations, an audio sensor carried by the other client computing device may capture the audio content of the other user. In some implementations, output signals of an audio sensor carried by the other client computing device may convey audio information defining the audio content of the other user.
- In some implementations,
identification component 108 may be configured to make identifications from the online game, the recording of the gameplay of the online game, and/or from other information relating to the online game. In some implementations, identification component 108 may be configured to make identifications of the user emotional manifestations, the events of interest, and/or other information from the online game. In some implementations, identification component 108 may be configured to make identifications of the user emotional manifestations, the events of interest, and/or other information of the online game from the recording of the gameplay of the online game, and/or from other information relating to the online game. - In some implementations,
identification component 108 may be configured to make identifications of the user emotional manifestations, the events of interest, and/or other information of the online game concurrently with the gameplay of the online game, and/or during other time periods. In some implementations, identification component 108 may be configured to make identifications of the user emotional manifestations, the events of interest, and/or other information of the online game after the end of the gameplay of the online game, and/or during other time periods. In some implementations, identification component 108 may be configured to make identifications of the user emotional manifestations, the events of interest, and/or other information of the online game concurrently with the recording of the gameplay of the online game, and/or during other time periods. In some implementations, identification component 108 may be configured to make identifications of the user emotional manifestations, the events of interest, and/or other information of the online game from the recorded gameplay of the online game. - In some implementations,
identification component 108 may be configured to make identifications of the user emotional manifestations, the events of interest, and/or other information of the online game through one or more machine learning techniques, image-processing techniques, computer vision techniques, and/or other techniques. The machine learning techniques may include one or more of a convolutional neural network, decision tree learning, supervised learning, minimax algorithm, unsupervised learning, semi-supervised learning, reinforcement learning, deep learning, and/or other techniques. The image-processing techniques may include one or more of bundle adjustment, SURF, ORB, computer vision, and/or other techniques. The computer vision techniques may include one or more recognition techniques, motion analysis techniques, image restoration techniques, and/or other techniques.
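- By way of non-limiting illustration, the sketch below shows one way such identifications might be routed through a pluggable classifier. The frame iteration, the classifier interface, and the stand-in lookup table are illustrative assumptions, not part of this disclosure; any of the machine learning or computer vision techniques named above could stand behind the classifier callable.

```python
# Hypothetical sketch: scan recorded frames with a pluggable classifier.
def identify_manifestations(frames, classify_face):
    """Return (minutes_into_recording, emotion) pairs where evidence was found."""
    identifications = []
    for minutes, frame in frames:
        emotion = classify_face(frame)  # returns an emotion label, or None
        if emotion is not None:
            identifications.append((minutes, emotion))
    return identifications

# Stand-in classifier: a lookup table playing the role of a trained model.
frames = [(5.0, "frame-a"), (6.0, "frame-b"), (7.0, "frame-c")]
stub_model = {"frame-a": "joy", "frame-b": "anger"}.get
print(identify_manifestations(frames, stub_model))  # [(5.0, 'joy'), (6.0, 'anger')]
```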
- In some implementations, identification component 108 may be configured to make identifications of the user emotional manifestations by identifying evidence of one or more emotions being experienced by the user during the gameplay of the online game. In some implementations, identification component 108 may identify the first user emotional manifestation, the second user emotional manifestation, and/or other user emotional manifestations. The one or more emotions being experienced by the user during the gameplay of the online game may include fear, anger, sadness, joy, disgust, surprise, trust, anticipation, and/or other emotions. - In some implementations,
identification component 108 may be configured to identify the evidence of the one or more emotions being experienced by the user from the communication interface, and/or other content of the online game. In some implementations, identification component 108 may be configured to identify the evidence of the one or more emotions being experienced by the user from the communication interface during the gameplay of the online game. In some implementations, identification component 108 may be configured to identify the evidence of the one or more emotions being experienced by the user from the communication interface of the recording of the gameplay of the online game. In some implementations, identification component 108 may be configured to identify the evidence of the one or more emotions being experienced by the user from the visual and/or audio content of the communication interface during the gameplay of the online game. In some implementations, identification component 108 may be configured to identify the evidence of the one or more emotions being experienced by the user from the visual and/or audio content of the communication interface of the recording of the gameplay of the online game. - In some implementations,
identification component 108 may be configured to identify the evidence of the one or more emotions being experienced by the user from the visual content of the communication interface. The visual content of the communication interface may include physical communication from the user. The physical communication from the user may be communication from the user via their body parts and/or face. For example, the physical communication may be a gesture and/or a facial expression made by the user. The visual content of the communication interface may include the face of the user, gestures made by the user, and/or other information. The face of the user may include one or more facial expressions, and/or other information. In some implementations, facial features of the face may define the facial expressions, and/or other features of the face. In some implementations, the facial expressions of the face of the user may convey information about the emotions being experienced by the user. In some implementations, the gestures made by the user may convey information about the emotions being experienced by the user. - In some implementations,
identification component 108 may identify the one or more emotions being experienced by the user from the facial expressions of the face of the user, and/or from other information. For example, if the facial expressions on the face of the user convey a smile, identification component 108 may identify evidence of joy being experienced by the user. If the facial expressions on the face of the user convey a frown, identification component 108 may identify evidence of anger or sadness being experienced by the user. In some implementations, identification component 108 may use the one or more machine learning techniques, the one or more image-processing techniques, the one or more computer vision techniques, and/or other techniques to identify the facial expressions of the user's face. - In some implementations,
identification component 108 may determine the emotions associated with the facial expressions. The association between the emotions and the facial expressions may help identify the one or more emotions being experienced by the user, and/or other information. In some implementations, specific facial expressions may be associated with specific emotions. In some implementations, the association between the specific facial expressions and the specific emotions may be predetermined. For example, a smile may be associated with joy, a frown may be associated with anger or sadness, and/or other facial expressions may be associated with other emotions. In some implementations, identification component 108 may use the one or more machine learning techniques, the one or more image-processing techniques, the one or more computer vision techniques, and/or other techniques to determine the association between the specific facial expressions and the specific emotions. - In some implementations,
identification component 108 may identify the one or more emotions being experienced from the gestures made by the user, and/or from other information. For example, if the gestures made by the user convey a clap, identification component 108 may identify evidence of joy being experienced by the user. If the gestures made by the user convey a hateful message, identification component 108 may identify evidence of anger being experienced by the user. In some implementations, identification component 108 may use the one or more machine learning techniques, the one or more image-processing techniques, the one or more computer vision techniques, and/or other techniques to identify the gestures made by the user. - In some implementations,
identification component 108 may determine the emotions associated with the gestures made by the user. The association between the emotions and the gestures may help identify the one or more emotions being experienced by the user. In some implementations, specific gestures made by the user may be associated with specific emotions. In some implementations, the association between the specific gestures made by the user and the specific emotions may be predetermined. For example, a clap may be associated with joy, a hateful gesture may be associated with anger, and/or other gestures made by the user may be associated with other emotions. In some implementations, identification component 108 may use the one or more machine learning techniques, the one or more image-processing techniques, the one or more computer vision techniques, and/or other techniques to determine the association between the specific gestures made by the user and the specific emotions.
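- By way of non-limiting illustration, the predetermined associations described above might be represented as simple lookup tables, as in the following sketch. The label strings, which an upstream expression or gesture recognizer (not shown) would produce, are illustrative assumptions.

```python
# Predetermined visual associations, per the examples above.
EXPRESSION_TO_EMOTION = {"smile": "joy", "frown": "anger or sadness"}
GESTURE_TO_EMOTION = {"clap": "joy", "hateful gesture": "anger"}

def emotion_from_visual(expression=None, gesture=None):
    """Return the associated emotion, or None when no evidence is identified."""
    return EXPRESSION_TO_EMOTION.get(expression) or GESTURE_TO_EMOTION.get(gesture)

print(emotion_from_visual(expression="smile"))    # joy
print(emotion_from_visual(gesture="clap"))        # joy
print(emotion_from_visual(expression="neutral"))  # None
```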
- In some implementations, identification component 108 may be configured to identify the evidence of the one or more emotions being experienced by the user from a vocal communication from the user. The vocal communication from the user may be obtained from the communication interface. The vocal communication from the user may include audio content from the user. In some implementations, identification component 108 may be configured to identify the evidence of the one or more emotions being experienced by the user from the nature of the audio content from the user through the communication interface. The audio from the user may include one or more spoken words, tonality, speed, volume, and/or other information. In some implementations, the nature of the audio from the user may convey information about the emotions being experienced by the user. - In some implementations,
identification component 108 may identify the one or more emotions being experienced by the user from the nature of the audio from the user, and/or from other information. For example, if the audio of the user conveys a laugh, identification component 108 may identify evidence of joy being experienced by the user. If the audio of the user conveys a shout, identification component 108 may identify evidence of anger being experienced by the user. If the audio of the user conveys profanity, identification component 108 may identify evidence of anger being experienced by the user. If the audio of the user conveys a sudden increase in volume, identification component 108 may identify evidence of surprise being experienced by the user. If the audio of the user conveys a sudden increase in speed, identification component 108 may identify evidence of anticipation being experienced by the user. In some implementations, identification component 108 may use the one or more machine learning techniques, and/or other techniques to identify the nature of the audio from the user. - In some implementations,
identification component 108 may determine the emotions associated with the nature of the audio from the user. The association between the emotions and the nature of the audio from the user may help identify the one or more emotions being experienced by the user. In some implementations, a specific nature of the audio from the user may be associated with specific emotions. In some implementations, the association between the specific nature of the audio from the user and the specific emotions may be predetermined. For example, laughter may be associated with joy, shouting and/or profanity may be associated with anger, a sudden increase in volume may be associated with surprise, a sudden increase in speed may be associated with anticipation, and/or other natures of the audio from the user may be associated with other emotions. In some implementations, identification component 108 may use the one or more machine learning techniques, and/or other techniques to determine the association between the specific nature of the audio from the user and the specific emotions.
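- By way of non-limiting illustration, the predetermined audio associations described above might be applied to per-utterance features as in the following sketch. The transcript, volume, and speed features, the numeric thresholds, and the placeholder profanity vocabulary are illustrative assumptions; in practice the associations could be learned through the machine learning techniques described herein.

```python
# Predetermined audio associations, per the examples above.
PROFANITY = {"expletive"}  # placeholder vocabulary, application-specific

def emotion_from_audio(transcript, volume_delta_db=0.0, speed_delta=0.0):
    """Return the emotion associated with the nature of an utterance, or None."""
    words = transcript.lower().split()
    if "haha" in words:
        return "joy"           # laughter is associated with joy
    if any(word in PROFANITY for word in words) or transcript.isupper():
        return "anger"         # shouting and/or profanity is associated with anger
    if volume_delta_db > 6.0:
        return "surprise"      # a sudden increase in volume suggests surprise
    if speed_delta > 0.5:
        return "anticipation"  # a sudden increase in speed suggests anticipation
    return None

print(emotion_from_audio("haha nice"))                # joy
print(emotion_from_audio("oh", volume_delta_db=9.0))  # surprise
```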
- In some implementations, identification component 108 may determine when the user emotional manifestations occurred during the gameplay of the online game and/or the recording of the gameplay of the online game. In some implementations, identification component 108 may determine points in time the user emotional manifestations occurred during the gameplay of the online game and/or the recording of the gameplay of the online game. In some implementations, identification component 108 may determine the individual points in time the first user emotional manifestation, the second user emotional manifestation, and/or other user emotional manifestations occurred during the gameplay of the online game and/or the recording of the gameplay of the online game. - In some implementations, the points in time the user emotional manifestations occurred during the gameplay of the online game may be when the evidence of the one or more emotions being experienced by the user was identified during the gameplay of the online game. For example, if the first user emotional manifestation occurred 5 minutes (e.g., the first point in time) into the gameplay of the online game,
identification component 108 may determine that the first point in time may be 5 minutes into the gameplay of the online game. If the second user emotional manifestation occurred 6 minutes (e.g., the second point in time) into the gameplay of the online game, identification component 108 may determine that the second point in time may be 6 minutes into the gameplay of the online game. - In some implementations, the points in time the user emotional manifestations occurred in the recording of the gameplay of the online game may be when the evidence of the one or more emotions being experienced by the user was identified during the recording of the gameplay of the online game. For example, if the first user emotional manifestation occurred 5 minutes into the recording of the gameplay of the online game,
identification component 108 may determine that the first point in time may be 5 minutes into the recording of the gameplay of the online game. If the second user emotional manifestation occurred 6 minutes into the recording of the gameplay of the online game, identification component 108 may determine that the second point in time may be 6 minutes into the recording of the gameplay of the online game.
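- By way of non-limiting illustration, a point in time may be expressed as an offset from the start of the gameplay or of the recording, as in the following sketch; the labeled detection records are an assumed representation.

```python
# Hypothetical sketch: express each identification as a point in time,
# i.e., an offset in minutes into the gameplay and into the recording.
def points_in_time(detections, gameplay_start_min, recording_start_min):
    """Map each labeled detection time to (minutes into gameplay, minutes into recording)."""
    return {
        label: (t - gameplay_start_min, t - recording_start_min)
        for label, t in detections
    }

# The first manifestation was identified 5 minutes in and the second 6
# minutes in; here the gameplay and the recording started together, so
# both offsets agree, as in the examples above.
detections = [("first manifestation", 65.0), ("second manifestation", 66.0)]
print(points_in_time(detections, 60.0, 60.0))
# {'first manifestation': (5.0, 5.0), 'second manifestation': (6.0, 6.0)}
```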
- In some implementations, identification component 108 may be configured to make identifications of the events of interest by identifying the in-game events that occurred in the gameplay of the online game. In some implementations, identification component 108 may identify the first event of interest, the second event of interest, and/or other events of interest. The in-game events that occurred in the gameplay of the online game may be one or more actions, activities, events, and/or other occurrences during the gameplay of the online game. - In some implementations,
identification component 108 may be configured to identify the in-game events from the game elements, and/or other information of the online game. In some implementations, identification component 108 may be configured to identify the in-game events from the game elements during the gameplay of the online game, and/or other information of the online game. In some implementations, identification component 108 may be configured to identify the in-game events from the game elements of the recording of the gameplay of the online game, and/or other information of the online game. In some implementations, identification component 108 may be configured to identify the in-game events from the visual and/or audio content of the game elements during the gameplay of the online game. In some implementations, identification component 108 may be configured to identify the in-game events from the visual and/or audio content of the game elements of the recording of the gameplay of the online game. - In some implementations, the in-game events may include one or more changes in game state, criteria being met, temporal occurrences, interactions within the online game, and/or other in-game events. In some implementations,
identification component 108 may be configured to identify one or more of the changes in game state, the criteria being met, the temporal occurrences, the interactions within the online game, and/or other in-game events. In some implementations, identification component 108 may be configured to identify one or more of the changes in game state, the criteria being met, the temporal occurrences, the interactions within the online game, and/or other in-game events through the one or more machine learning techniques, the one or more image-processing techniques, the one or more computer vision techniques, and/or other techniques. - In some implementations,
identification component 108 may identify the events of interest from the one or more changes in game state, and/or other information. The one or more changes in game state may signify a change in a phase of the online game, a change in a stage of the online game, and/or other information. For example, the one or more changes in game state may include one or more of a start of the online game, an end of the online game, a pause in the online game, a start of a match in the online game, the end of a match in the online game, a change in the game environment, progression in the online game, and/or other changes in game states. - In some implementations, the change in the game environment may include one or more changes in the environment of the online game. For example, the change in the game environment may include a change in the weather system in the online game, a change in the game board of the online game, a change in the theme of the online game, and/or other changes of the game environment. In some implementations, the progression in the online game may include one or more of a virtual entity of the online game advancing (e.g., leveling up), the virtual entity obtaining an item in the online game, the virtual entity obtaining a reward in the online game, the user finishing a portion of the online game, the user advancing to another portion of the online game, and/or other progression in the online game.
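- By way of non-limiting illustration, changes in game state might be detected by comparing consecutive state snapshots, as in the following sketch; the snapshot fields are illustrative assumptions.

```python
# Hypothetical sketch: detect changes in game state between snapshots.
def detect_state_changes(previous, current):
    """Return (description, new value) pairs for each detected change."""
    changes = []
    if previous["phase"] != current["phase"]:
        changes.append(("change in phase", current["phase"]))
    if current["level"] > previous["level"]:
        changes.append(("progression (leveling up)", current["level"]))
    if previous["weather"] != current["weather"]:
        changes.append(("change in game environment", current["weather"]))
    return changes

before = {"phase": "match", "level": 3, "weather": "clear"}
after = {"phase": "match", "level": 4, "weather": "storm"}
print(detect_state_changes(before, after))
# [('progression (leveling up)', 4), ('change in game environment', 'storm')]
```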
- In some implementations,
identification component 108 may identify the events of interest from the one or more criteria being met, and/or other information. The one or more criteria may define one or more objectives in the online game, one or more conditional events in the online game, and/or other information. For example, the one or more criteria may include an objective to reach a checkpoint, an objective to reach a milestone, a condition for the user to win the online game, a condition for the user to lose the online game, a condition for the user to not win the online game (e.g., a draw), and/or other criteria. In some implementations, the one or more criteria being met may trigger other in-game events. For example, when the objective to reach a milestone is met, the game state may change, and the user may advance to another portion of the online game. When the condition for the user to lose the online game is met, the game state may change, and the online game may end.
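- By way of non-limiting illustration, such criteria might be represented as named predicates over a game state, as in the following sketch; the state fields and the example criteria are illustrative assumptions.

```python
# Hypothetical sketch: criteria as named predicates over a game state.
CRITERIA = {
    "objective: reached checkpoint": lambda s: s["position"] >= s["checkpoint"],
    "condition: lost the online game": lambda s: s["lives"] == 0,
    "condition: won the online game": lambda s: s["score"] >= s["winning_score"],
}

def met_criteria(state):
    """Return the names of all criteria met by the given game state."""
    return [name for name, test in CRITERIA.items() if test(state)]

state = {"position": 120, "checkpoint": 100, "lives": 2,
         "score": 40, "winning_score": 100}
print(met_criteria(state))  # ['objective: reached checkpoint']
```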
- In some implementations, identification component 108 may identify the events of interest from the one or more temporal occurrences, and/or other information. The one or more temporal occurrences may define one or more timed events. For example, the one or more temporal occurrences may include one or more timers running out in the online game, reaching time limits in the online game, reaching a time duration in the online game, and/or other temporal occurrences. In some implementations, the timers running out in the online game may include a timer running out for a match in the online game, a timer running out for a game session in the online game, a timer running out for some features of the online game, and/or other timers running out in the online game. The timer running out for some features of the online game may include the timer running out for an ability of the virtual entity of the online game, and/or other features. - In some implementations,
identification component 108 may identify the events of interest from the one or more interactions within the online game, and/or other information. The one or more interactions within the online game may include the interaction between the users participating in the online game, an interaction between the users participating in the online game and the virtual content, an interaction between the virtual entities in the online game, an interaction between the virtual entities and the virtual objects in the online game, an interaction between the virtual entities and the virtual environment in the online game, and/or other interactions. - For example, the interaction between the users participating in the online game may include communication between the users participating in the online game through the communication interface, and/or other interfaces. The interaction between the users participating in the online game and the virtual content may include communication between the users and the virtual content, and/or other interactions. The interaction between the virtual entities in the online game may include communication between the virtual entities, contact between the virtual entities, and/or other interactions between the virtual entities. The interaction between the virtual entities and the virtual objects in the online game may include communication between the virtual entities and the virtual objects, contact between the virtual entities and the virtual objects, and/or other interaction. The interaction between the virtual entities and the virtual environment in the online game may include the virtual entities traversing across or to a particular portion of the virtual environment, and/or other interactions.
- In some implementations,
identification component 108 may determine when the events of interest occurred during the gameplay of the online game and/or the recording of the gameplay of the online game. In some implementations, identification component 108 may determine the points in time the events of interest occurred during the gameplay of the online game and/or the recording of the gameplay of the online game. In some implementations, identification component 108 may determine the individual points in time the first event of interest, the second event of interest, and/or other events of interest occurred during the gameplay of the online game and/or the recording of the gameplay of the online game. - In some implementations, the points in time the events of interest occurred during the gameplay of the online game may be when the in-game events were identified during the gameplay of the online game. For example, if the first event of interest occurred 3 minutes (e.g., the third point in time) into the gameplay of the online game,
identification component 108 may determine that the third point in time may be 3 minutes into the gameplay of the online game. If the second event of interest occurred 4 minutes (e.g., the fourth point in time) into the gameplay of the online game, identification component 108 may determine that the fourth point in time may be 4 minutes into the gameplay of the online game. - In some implementations, the points in time the events of interest occurred in the recording of the gameplay of the online game may be when the in-game events were identified in the recording of the gameplay of the online game. For example, if the first event of interest occurred 3 minutes into the recording of the gameplay of the online game,
identification component 108 may determine that the third point in time may be 3 minutes into the recording of the gameplay of the online game. If the second event of interest occurred 4 minutes into the recording of the gameplay of the online game, identification component 108 may determine that the fourth point in time may be 4 minutes into the recording of the gameplay of the online game. - In some implementations,
determination component 110 may be configured to determine a portion of the recording of the gameplay of the online game to include in one or more video clips of the video edits. In some implementations, determination component 110 may be configured to generate the one or more video clips of the video edits based on the determination made for the portion of the recording of the gameplay of the online game to include in one or more video clips. The recording of the gameplay of the online game may hereinafter be referred to as "the recording." In some implementations, determination component 110 may determine to include a portion of the recording including at least one of the user emotional manifestations in a video clip. In some implementations, determination component 110 may determine to include a portion of the recording including at least one of the events of interest in a video clip. In some implementations, determination component 110 may determine to include a portion of the recording including at least one of the user emotional manifestations and at least one of the events of interest in a video clip. - In some implementations,
determination component 110 may be configured to determine a portion of the recording to include in a first video clip, a second video clip, a third video clip, and/or other video clips. The first video clip, the second video clip, and the third video clip may include different portions of the recording of the online game. For example, the first video clip may be the portion of the recording including at least one of the user emotional manifestations. The second video clip may be the portion of the recording including at least one of the events of interest. The third video clip may be the portion of the recording including at least one of the user emotional manifestations and at least one of the events of interest. - In some implementations, the portion of the recording including the user emotional manifestation may be a portion of the recording when the user emotional manifestations occurred. In some implementations,
determination component 110 may determine to include, in the video clip, the portion of the recording including at least one point in time at which the user emotional manifestations occurred. In some implementations, if the first user emotional manifestation occurred at the first point in time (e.g., 5 minutes into the recording), determination component 110 may determine to include the portion of the recording including the first point in time of the recording. For example, the portion of the recording including the first point in time of the recording may include the recording between 4.5 minutes and 5.5 minutes. The duration of the portion of the recording to include in the video clip may be dependent on an expected duration of the first user emotional manifestation. Determination component 110 may determine the expected duration of the first user emotional manifestation through one or more machine-learning techniques. - In some implementations,
determination component 110 may determine to include, in the video clip, a portion of the recording including more than one point in time at which the user emotional manifestations occurred. In some implementations, if the first user emotional manifestation occurred at the first point in time (e.g., 5 minutes into the recording) and the second user emotional manifestation occurred at the second point in time (e.g., 6 minutes into the recording), determination component 110 may determine to include the portion of the recording including the first point in time and the second point in time of the recording. For example, the portion of the recording including the first point in time and the second point in time of the recording may include the recording between 4.5 minutes and 6.5 minutes. The duration of the portion of the recording to include in the video clip may be dependent on an expected duration of the first user emotional manifestation and the second user emotional manifestation. Determination component 110 may determine the expected duration of the first user emotional manifestation and the second user emotional manifestation through one or more machine-learning techniques. - In some implementations, the portion of the recording including the events of interest may be a portion of the recording when the events of interest occurred. In some implementations,
determination component 110 may determine to include, in the video clip, the portion of the recording including at least one point in time at which the events of interest occurred. In some implementations, if the first event of interest occurred at the third point in time (e.g., 3 minutes into the recording), determination component 110 may determine to include the portion of the recording including the third point in time of the recording. For example, the portion of the recording including the third point in time of the recording may include the recording between 2.5 minutes and 3.5 minutes. The duration of the portion of the recording to include in the video clip may be dependent on an expected duration of the first event of interest. Determination component 110 may determine the expected duration of the first event of interest through one or more machine-learning techniques. - In some implementations,
determination component 110 may determine to include, in the video clip, a portion of the recording including more than one point in time at which the events of interest occurred. In some implementations, if the first event of interest occurred at the third point in time (e.g., 3 minutes into the recording) and the second event of interest occurred at the fourth point in time (e.g., 4 minutes into the recording), determination component 110 may determine to include the portion of the recording including the third point in time and the fourth point in time of the recording. For example, the portion of the recording including the third point in time and the fourth point in time of the recording may include the recording between 2.5 minutes and 4.5 minutes. The duration of the portion of the recording to include in the video clip may be dependent on an expected duration of the first event of interest and the second event of interest. Determination component 110 may determine the expected duration of the first event of interest and the second event of interest through one or more machine-learning techniques. - In some implementations, the portion of the recording including the user emotional manifestation and the events of interest may be a portion of the recording when the user emotional manifestations and the events of interest occurred. In some implementations,
determination component 110 may determine to include, in the video clip, the portion of the recording including at least one point in time at which the user emotional manifestations occurred and at least one point in time at which the events of interest occurred. In some implementations, if the first user emotional manifestation occurred at the first point in time (e.g., 5 minutes into the recording) and the first event of interest occurred at the third point in time (e.g., 3 minutes into the recording), determination component 110 may determine to include the portion of the recording including the third point in time and the first point in time of the recording. For example, the portion of the recording including the third point in time and the first point in time of the recording may include the recording between 2.5 minutes and 5.5 minutes. The duration of the portion of the recording to include in the video clip may be dependent on an expected duration of the first user emotional manifestation and the first event of interest. Determination component 110 may determine the expected duration of the first user emotional manifestation and the first event of interest through one or more machine-learning techniques. - In some implementations,
determination component 110 may determine to include, in the video clip, a portion of the recording including more than one point in time at which the user emotional manifestations occurred and more than one point in time at which the events of interest occurred. In some implementations, if the first user emotional manifestation occurred at the first point in time, the second user emotional manifestation occurred at the second point in time, the first event of interest occurred at the third point in time, and the second event of interest occurred at the fourth point in time, determination component 110 may determine to include the portion of the recording including the third point in time, the fourth point in time, the first point in time, and the second point in time of the recording. For example, the portion of the recording including the third point in time, the fourth point in time, the first point in time, and the second point in time of the recording may include the recording between 2.5 minutes and 6.5 minutes. The duration of the portion of the recording to include in the video clip may be dependent on an expected duration of the first user emotional manifestation, the second user emotional manifestation, the first event of interest, and the second event of interest. Determination component 110 may determine the expected duration through one or more machine-learning techniques.
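- By way of non-limiting illustration, the windows in the examples above can be reproduced by padding the earliest and latest points in time with half of the expected duration, as in the following sketch; the assumed 1-minute total expected duration is illustrative, and in practice would come from the machine-learning estimate described above.

```python
# Hypothetical sketch: bound a video clip around one or more points in time.
def clip_bounds(points_in_time_min, expected_duration_min=1.0):
    """Return (start, end) minutes of the portion of the recording to include."""
    pad = expected_duration_min / 2.0
    return (min(points_in_time_min) - pad, max(points_in_time_min) + pad)

print(clip_bounds([5.0]))                 # (4.5, 5.5)
print(clip_bounds([5.0, 6.0]))            # (4.5, 6.5)
print(clip_bounds([3.0]))                 # (2.5, 3.5)
print(clip_bounds([3.0, 5.0]))            # (2.5, 5.5)
print(clip_bounds([3.0, 4.0, 5.0, 6.0]))  # (2.5, 6.5)
```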
- In some implementations, determination component 110 may determine to include, in the video clip, a portion of the recording including more than one point in time at which the user emotional manifestations occurred and at least one point in time at which the events of interest occurred. In some implementations, determination component 110 may determine to include, in the video clip, the portion of the recording including at least one point in time at which the user emotional manifestations occurred and more than one point in time at which the events of interest occurred. In some implementations, the user may determine the number of the user emotional manifestations and/or the events of interest to include in the video clip. In some implementations, the number of the user emotional manifestations and/or the events of interest to include in the video clip may be predetermined by server(s) 102 or the online game. - In some implementations,
determination component 110 may be configured to determine an association between the user emotional manifestations and the events of interest, and/or other information. For example, determination component 110 may be configured to determine if an event of interest caused a user emotional manifestation. If the event of interest caused the user emotional manifestation, the event of interest may be associated with the user emotional manifestation. In some implementations, determination component 110 may determine the association between the user emotional manifestations and the events of interest based on the temporal relationship between the user emotional manifestations and the events of interest. For example, if a user emotional manifestation occurred shortly after an event of interest, determination component 110 may determine that the user emotional manifestation may be associated with the event of interest. In some implementations, determination component 110 may determine the association between the user emotional manifestations and the events of interest using the one or more machine-learning techniques, and/or other techniques.
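- By way of non-limiting illustration, the temporal relationship described above might be evaluated as in the following sketch; the 0.5-minute association window is an illustrative assumption.

```python
# Hypothetical sketch: associate each manifestation with the closest
# preceding event of interest within a short window.
def associate(manifestation_times, event_times, max_lag_min=0.5):
    """Return (event, manifestation) pairs of points in time, in minutes."""
    pairs = []
    for m in manifestation_times:
        preceding = [e for e in event_times if 0.0 <= m - e <= max_lag_min]
        if preceding:
            pairs.append((max(preceding), m))
    return pairs

# A manifestation at 5.0 minutes shortly follows an event at 4.8 minutes,
# so the two are associated; the event at 3.0 minutes is too far back.
print(associate([5.0, 6.0], [3.0, 4.8]))  # [(4.8, 5.0)]
```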
- In some implementations, determination component 110 may determine to include a portion of the recording including the event of interest and the user emotional manifestation associated with that event of interest. In some implementations, determination component 110 may determine to include the portion of the recording including the points in time at which the event of interest and its associated user emotional manifestation occurred. - In some implementations,
generation component 112 may be configured to generate video edits and/or other content. The video edits may include one or more of the video clips, and/or other information. The video edits may include one or more of the video clips generated by determination component 110, and/or other information. In some implementations, generation component 112 may be configured to generate a first video edit that may include all the video clips, including the first video clip, the second video clip, the third video clip, and/or other video clips. In some implementations, generation component 112 may be configured to generate a second video edit that may include some of the video clips. - In some implementations,
generation component 112 may determine the one or more video clips to include in a video edit. In some implementations, generation component 112 may determine to include one or more video clips with content from a period of the recording. The period of the recording may be predetermined, or determined by the user, server(s) 102, or the online game. For example, generation component 112 may determine to include the one or more video clips with content from the recording between the start of the recording and the middle of the recording. Generation component 112 may determine to include the one or more video clips with content from the recording between the start of the recording and the end of the recording. In some implementations, generation component 112 may determine to include the one or more video clips with similar user emotional manifestations, events of interest, time durations, a combination of different user emotional manifestations, a combination of different events of interest, a combination of similar user emotional manifestations and events of interest, and/or other combinations of video clips in the video edits. - In some implementations, the one or more video clips with the similar user emotional manifestations may be the one or more video clips where similar emotions are being experienced by the user in the one or more video clips. In some implementations, the video edit may include more than one of the video clips where similar emotions are being experienced by the user. In some implementations, the one or more video clips with the similar events of interest may be the one or more video clips with similar in-game events that occurred. In some implementations, the video edit may include more than one of the video clips where similar in-game events occurred.
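- By way of non-limiting illustration, clips sharing a similar user emotional manifestation or event of interest might be grouped into video edits as in the following sketch; the (label, start, end) clip representation is an illustrative assumption.

```python
from collections import defaultdict

# Hypothetical sketch: assemble one video edit per shared label.
def group_clips(clips):
    """Group (label, start_min, end_min) clips into edits keyed by label."""
    edits = defaultdict(list)
    for label, start_min, end_min in clips:
        edits[label].append((start_min, end_min))
    return dict(edits)

clips = [("joy", 4.5, 5.5), ("anger", 6.0, 6.5), ("joy", 8.0, 9.0)]
print(group_clips(clips))
# {'joy': [(4.5, 5.5), (8.0, 9.0)], 'anger': [(6.0, 6.5)]}
```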
- In some implementations,
presentation component 114 may be configured to present the video edits and/or other information. In some implementations, presentation component 114 may be configured to present the video edits and/or other information to client computing device(s) 104 and/or other devices. In some implementations, presentation component 114 may be configured to present a first video edit, a second video edit, and/or other video edits to client computing device(s) 104 and/or other devices. In some implementations, presentation component 114 may transmit information that facilitates the presentation of the video edits through client computing device(s) 104. In some implementations, presentation component 114 may provide client computing device(s) 104 with access to the video edits. Presentation component 114 may provide client computing device(s) 104 with previews of the video edits. The previews of the video edits may include a preview of the individual video clips of the video edits. The user of client computing device(s) 104 may have access to the video edits through client computing device(s) 104. The user of client computing device(s) 104 may preview the video edits through client computing device(s) 104. In some implementations, the user may access the video edits through one or more user inputs through client computing device(s) 104. - In some implementations, access to the video edits may include access to the individual video clips of the video edits. Access to the individual video clips of the video edits may include access to the visual and/or audio content of the video clip. In some implementations, the user may view the video edits through client computing device(s) 104. In some implementations, the user may view the individual video clips of the video edits through client computing device(s) 104. In some implementations, the user may modify the video edits and/or the individual video clips of the video edits through client computing device(s) 104. The video edits and/or the individual video clips of the video edits modified by the user through client computing device(s) 104 may be stored in client computing device(s) 104 and/or other storage media.
- In some implementations,
presentation component 114 may be configured to transmit the video edits or the individual video clips of the video edits to a different device. The user may instruct presentation component 114 to transmit the video edits or the individual video clips of the video edits to a different device through client computing device(s) 104. For example, the user may instruct presentation component 114 through client computing device(s) 104 to save the video edits or the individual video clips of the video edits to an external storage media or to client computing device(s) 104. - In some implementations,
presentation component 114 may be configured to transmit the video edits or the individual video clips of the video edits to one or more external sources, and/or other devices. The external sources may be one or more of a social media platform, a video-sharing platform, and/or other external sources. In some implementations, the user may instruct presentation component 114 to transmit the video edits or the individual video clips of the video edits to one or more external sources through client computing device(s) 104. For example, the user may instruct presentation component 114 through client computing device(s) 104 to transmit the individual video clips of the video edits to a social media account associated with the user. - In some implementations, server(s) 102, client device(s) 104, and/or
external resources 120 may be operatively linked via one or more electronic communication links. For example, such electronic communication links may be established, at least in part, via the network 103 such as the Internet and/or other networks. It will be appreciated that this is not intended to be limiting and that the scope of this disclosure includes implementations in which server(s) 102, client device(s) 104, and/or external resources 120 may be operatively linked via some other communication media. - In some implementations,
external resources 120 may include sources of information, hosts and/or providers of virtual environments outside of system 100, external entities participating with system 100, and/or other resources. In some implementations, some or all of the functionality attributed herein to external resources 120 may be provided by resources included in system 100. - In some implementations, server(s) 102 may include
electronic storage 122, one or more processors 124, and/or other components. Server(s) 102 may include communication lines or ports to enable the exchange of information with a network and/or other computing devices. Illustration of server(s) 102 in FIG. 1 is not intended to be limiting. Server(s) 102 may include a plurality of hardware, software, and/or firmware components operating together to provide the functionality attributed herein to server(s) 102. For example, server(s) 102 may be implemented by a cloud of computing devices operating together as server(s) 102. - In some implementations,
electronic storage 122 may include electronic storage media that electronically stores information. The electronic storage media of electronic storage 122 may include one or both of system storage that is provided integrally (i.e., substantially nonremovable) with server(s) 102 and/or removable storage that is removably connectable to server(s) 102 via, for example, a port (e.g., a USB port, a firewire port, etc.) or a drive (e.g., a disk drive, etc.). Electronic storage 122 may include one or more of optically readable storage media (e.g., optical disks, etc.), magnetically readable storage media (e.g., magnetic tape, magnetic hard drive, floppy drive, etc.), electrical charge-based storage media (e.g., EEPROM, RAM, etc.), solid-state storage media (e.g., flash drive, etc.), and/or other electronically readable storage media. The electronic storage 122 may include one or more virtual storage resources (e.g., cloud storage, a virtual private network, and/or other virtual storage resources). Electronic storage 122 may store software algorithms, information determined by processor(s) 124, information received from server(s) 102, information received from client computing device(s) 104, and/or other information that enables server(s) 102 to function as described herein. - In some implementations, processor(s) 124 may be configured to provide information processing capabilities in server(s) 102. As such, processor(s) 124 may include one or more of a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information. Although processor(s) 124 is shown in
FIG. 1 as a single entity, this is for illustrative purposes only. In some implementations, processor(s) 124 may include a plurality of processing units. These processing units may be physically located within the same client computing device, or processor(s) 124 may represent processing functionality of a plurality of devices operating in coordination. The processor(s) 124 may be configured to execute computer-readable instruction components 106, 108, 110, 112, 114, and/or other components. - It should be appreciated that although
components 106, 108, 110, 112, and 114 are illustrated in FIG. 1 as being co-located within a single processing unit, in implementations in which processor(s) 124 includes multiple processing units, one or more of components 106, 108, 110, 112, and/or 114 may be implemented remotely from the other components. The description of the functionality provided by the different components 106, 108, 110, 112, and/or 114 is for illustrative purposes, and is not intended to be limiting, as any of components 106, 108, 110, 112, and/or 114 may provide more or less functionality than is described. For example, one or more of components 106, 108, 110, 112, and/or 114 may be eliminated, and some or all of its functionality may be provided by other ones of components 106, 108, 110, 112, and/or 114. - By way of non-limiting illustration,
FIG. 2 illustrates a client computing device 104 playing an online game. Client computing device 104 may carry a camera 202, and/or other devices. Camera 202 may be an image sensor. Camera 202 may be configured to generate output signals conveying visual information within a field of view of camera 202. In some implementations, the visual information may define visual content. The visual content may include visuals of a face of a first user 216. First user 216 may be a user of client computing device 104. The online game may be presented through a client interface 205. The online game may include one or more game elements 208, a communication interface 207, and/or other components. Game element(s) 208 may include one or more game environments 213, one or more game entities, and/or other contents. The one or more game entities may include a first game entity 212, a second game entity 214, and/or other game entities. In some implementations, game element(s) 208 may include messages indicating the state of the game. The messages indicating the state of the game may include a first message 210a, a second message 210b (as illustrated in FIG. 3), and/or other messages. Communication interface 207 may include a first view 204 of the face of first user 216, a second view 206 of a face of a second user 218, and/or other views of other information. - In some implementations,
first user 216 may request video edits from client computing device 104. In some implementations, client computing device 104 may obtain the video edits from a system similar to system 100. In some implementations, client computing device 104 may generate video edits by executing one or more computer program components of client computing device 104. The one or more computer program components of client computing device 104 may be similar to the computer program components of system 100. The one or more computer program components of client computing device 104 may include one or more of a capture component, an identification component, a determination component, a generation component, a presentation component, and/or other components. - In some implementations, the capture component may record gameplay of the online game. The recording of the gameplay of the online game may include a view of
client interface 205. The recording of the gameplay of the online game may include the view of client interface 205 from the start of the online game (as illustrated in FIG. 2) to the end of the online game (as illustrated in FIG. 3). In some implementations, the capture component may be similar to capture component 106 (as illustrated in FIG. 1). - In some implementations, the identification component may make identifications of user emotional manifestations, events of interest, and/or other information of the online game from the recording of the gameplay of the online game. The identification component may make identifications of the user emotional manifestations, the events of interest, and/or other information from the recording of
client interface 205. For example, the identification component may identify the user emotional manifestations from the face of the first user 216 and/or the face of the second user 218. In some implementations, the identification component may identify a first user emotional manifestation from audio signal 216a from first user 216, a second user emotional manifestation from audio signal 216b from first user 216, and/or other user emotional manifestations. In some implementations, the identification component may identify the events of interest from game element(s) 208, and/or other information. The identification component may identify a first event of interest from message 210a, a second event of interest from message 210b, and/or other events of interest. The identification component may be similar to identification component 108 (as illustrated in FIG. 1). - In some implementations, the determination component may determine a portion of the recording of the gameplay of the online game to include in one or more video clips of the video edits, and/or generate the one or more video clips of the video edits. The determination component may be similar to determination component 110 (as illustrated in
FIG. 1). - In some implementations, the generation component may be configured to generate one or more video edits and/or generate other content. The one or more video edits may include a
first video edit 302, a second video edit 304, a third video edit 306 (as illustrated in FIG. 4), and/or other video edits. The generation component may be similar to generation component 112 (as illustrated in FIG. 1). - In some implementations, the presentation component may be configured to present the video edits and/or other information. The presentation of the video edits to
client computing device 104 can be seen in an example illustrated in FIG. 4. In some implementations, the presentation component may present one or more video edits, including first video edit 302, second video edit 304, and third video edit 306, to the first user 216 through client computing device 104. The presentation component may be similar to presentation component 114 (as illustrated in FIG. 1). -
FIG. 5 illustrates a method 500 for generating video edits, in accordance with one or more implementations. The operations of method 500 presented below are intended to be illustrative. In some implementations, method 500 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method 500 are illustrated in FIG. 5 and described below is not intended to be limiting. - In some implementations,
method 500 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information). The one or more processing devices may include one or more devices executing some or all of the operations of method 500 in response to instructions stored electronically on an electronic storage medium. The one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 500. - In some implementations, the
method 500 includes operations for generating video edits. The video edits may be highlight videos of gameplay of an online game. The video edits may comprise one or more video clips of a recording of the online game. A video clip may include a portion of a recording of the gameplay of the online game. The portion of the recording of the online game selected for the video clip may be determined based on user emotional manifestations, events of interest, and/or other information. The operations of method 500 presented below are intended to be illustrative. In some implementations, method 500 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method 500 are illustrated in FIG. 5 and described below is not intended to be limiting.
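- By way of non-limiting illustration only, the operations described below might compose as in the following sketch. Every helper here is a hypothetical stand-in, stubbed with the running examples of this disclosure rather than a real implementation.

```python
# Hypothetical end-to-end sketch of method 500 with stubbed helpers.
def analyze(recording):  # operation 502: identify manifestations and events
    manifestation_points = [5.0, 6.0]  # points in time, in minutes
    event_points = [3.0, 4.0]
    return manifestation_points, event_points

def generate_video_edits(manifestations, events):  # operation 504
    pad = 0.5  # half of an assumed 1-minute expected duration
    points = sorted(manifestations + events)
    first_clip = (points[0] - pad, points[-1] + pad)  # (2.5, 6.5)
    return [("first video edit", [first_clip])]

def present(video_edits):  # operation 506: effectuate presentation
    for name, clips in video_edits:
        print(name, clips)

manifestations, events = analyze("recording of the gameplay")
present(generate_video_edits(manifestations, events))
# first video edit [(2.5, 6.5)]
```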
- At an operation 502, the recording of the online game is analyzed to identify the user emotional manifestations, the events of interest, and/or other information. The user emotional manifestations may be associated with the user participating in the online game. The events of interest may be associated with in-game events of the online game. In some implementations, the recording of the online game is analyzed to determine when the user emotional manifestations and/or events of interest occurred. In some embodiments, operation 502 is performed by an identification component the same as or similar to identification component 108 (shown in FIG. 1 and described herein).
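To make operation 502 concrete for the in-game communications case, the sketch below scans timestamped chat messages for events of interest. The keyword heuristic is a toy stand-in, labeled as such, for whatever event detection an implementation would actually employ.

```python
# Toy heuristic, for illustration only.
EXCITEMENT_KEYWORDS = {"gg", "wow", "unbelievable", "no way", "clutch"}

def find_event_times(messages):
    """messages: iterable of (timestamp_seconds, text) pairs from game chat.

    Returns timestamps at which events of interest appear to occur.
    """
    times = []
    for timestamp, text in messages:
        lowered = text.lower()
        if any(keyword in lowered for keyword in EXCITEMENT_KEYWORDS):
            times.append(timestamp)
    return times

# find_event_times([(42.0, "NO WAY that worked"), (90.0, "brb")]) -> [42.0]
```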
- At an operation 504, video edits are generated. The video edits include selected user emotional manifestations and/or events of interest. The video edits include a first video edit. The first video edit comprises a first video clip and/or other video clips. The first video clip may include a portion of the recording of the online game that spans a first point in time at which at least one of the user emotional manifestations and/or events of interest occurred. The first video clip may be generated by a determination component the same as or similar to determination component 110 (shown in FIG. 1 and described herein). In some embodiments, operation 504 is performed by a generation component the same as or similar to generation component 112 (shown in FIG. 1 and described herein).
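Tying the earlier sketches together, a hypothetical driver for operations 502 and 504 might look like the following; every function name refers to the illustrative code above, not to components defined by the disclosure.

```python
def run_method_500(recording_path, webcam_path, chat_messages,
                   recording_length, output_path="video_edit.mp4"):
    # Operation 502: identify manifestations/events and when they occurred.
    manifestations = find_emotional_manifestations(webcam_path)
    event_times = find_event_times(chat_messages)
    timestamps = [t for t, _, _ in manifestations] + event_times
    # Operation 504: choose clip windows and generate the video edit.
    windows = determine_clip_windows(timestamps, recording_length)
    generate_video_edit(recording_path, windows, output_path)
    return output_path
```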
- At an operation 506, presentation of the video edits is effectuated through a client computing device. The video edits are presented such that the user can preview the video edits through the client computing device. In some embodiments, operation 506 is performed by a presentation component the same as or similar to presentation component 114 (shown in FIG. 1 and described herein).
- Although the system(s) and/or method(s) of this disclosure have been described in detail for the purpose of illustration based on what is currently considered to be the most practical and preferred implementations, it is to be understood that such detail is solely for that purpose and that the disclosure is not limited to the disclosed implementations, but, on the contrary, is intended to cover modifications and equivalent arrangements that are within the spirit and scope of the appended claims. For example, it is to be understood that the present disclosure contemplates that, to the extent possible, one or more features of any implementation can be combined with one or more features of any other implementation.
Claims (16)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/833,284 US20200228864A1 (en) | 2018-02-15 | 2020-03-27 | Generating highlight videos in an online game from user expressions |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/897,979 US10237615B1 (en) | 2018-02-15 | 2018-02-15 | Generating highlight videos in an online game from user expressions |
US16/279,705 US10462521B2 (en) | 2018-02-15 | 2019-02-19 | Generating highlight videos in an online game from user expressions |
US16/656,410 US10645452B2 (en) | 2018-02-15 | 2019-10-17 | Generating highlight videos in an online game from user expressions |
US16/833,284 US20200228864A1 (en) | 2018-02-15 | 2020-03-27 | Generating highlight videos in an online game from user expressions |
Related Parent Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/656,410 Continuation US10645452B2 (en) | 2018-02-15 | 2019-10-17 | Generating highlight videos in an online game from user expressions |
Publications (1)
Publication Number | Publication Date |
---|---|
US20200228864A1 (en) | 2020-07-16 |
Family
ID=65722125
Family Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/897,979 (granted as US10237615B1; Expired - Fee Related) | 2018-02-15 | 2018-02-15 | Generating highlight videos in an online game from user expressions |
US16/279,705 (granted as US10462521B2; Expired - Fee Related) | 2018-02-15 | 2019-02-19 | Generating highlight videos in an online game from user expressions |
US16/656,410 (granted as US10645452B2; Expired - Fee Related) | 2018-02-15 | 2019-10-17 | Generating highlight videos in an online game from user expressions |
US16/833,284 (published as US20200228864A1; Abandoned) | 2018-02-15 | 2020-03-27 | Generating highlight videos in an online game from user expressions |
Family Applications Before (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/897,979 (granted as US10237615B1; Expired - Fee Related) | 2018-02-15 | 2018-02-15 | Generating highlight videos in an online game from user expressions |
US16/279,705 (granted as US10462521B2; Expired - Fee Related) | 2018-02-15 | 2019-02-19 | Generating highlight videos in an online game from user expressions |
US16/656,410 (granted as US10645452B2; Expired - Fee Related) | 2018-02-15 | 2019-10-17 | Generating highlight videos in an online game from user expressions |
Country Status (1)
Country | Link |
---|---|
US (4) | US10237615B1 (en) |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10237615B1 (en) * | 2018-02-15 | 2019-03-19 | Teatime Games, Inc. | Generating highlight videos in an online game from user expressions |
US11185786B2 (en) * | 2018-08-21 | 2021-11-30 | Steelseries Aps | Methods and apparatus for monitoring actions during gameplay |
US11185465B2 (en) | 2018-09-24 | 2021-11-30 | Brian Sloan | Automated generation of control signals for sexual stimulation devices |
US11071914B2 (en) | 2018-11-09 | 2021-07-27 | Steelseries Aps | Methods, systems, and devices of providing portions of recorded game content in response to a trigger |
CN109951743A (en) * | 2019-03-29 | 2019-06-28 | 上海哔哩哔哩科技有限公司 | Barrage information processing method, system and computer equipment |
US11064175B2 (en) | 2019-12-11 | 2021-07-13 | At&T Intellectual Property I, L.P. | Event-triggered video creation with data augmentation |
CN111199210B (en) * | 2019-12-31 | 2023-05-30 | 武汉星巡智能科技有限公司 | Expression-based video generation method, device, equipment and storage medium |
US20210350139A1 (en) * | 2020-05-11 | 2021-11-11 | Nvidia Corporation | Highlight determination using one or more neural networks |
US11554324B2 (en) * | 2020-06-25 | 2023-01-17 | Sony Interactive Entertainment LLC | Selection of video template based on computer simulation metadata |
US11318386B2 (en) | 2020-09-21 | 2022-05-03 | Zynga Inc. | Operator interface for automated game content generation |
US12083436B2 (en) | 2020-09-21 | 2024-09-10 | Zynga Inc. | Automated assessment of custom game levels |
US11291915B1 (en) * | 2020-09-21 | 2022-04-05 | Zynga Inc. | Automated prediction of user response states based on traversal behavior |
US11465052B2 (en) | 2020-09-21 | 2022-10-11 | Zynga Inc. | Game definition file |
US11420115B2 (en) | 2020-09-21 | 2022-08-23 | Zynga Inc. | Automated dynamic custom game content generation |
US11738272B2 (en) | 2020-09-21 | 2023-08-29 | Zynga Inc. | Automated generation of custom content for computer-implemented games |
US11806624B2 (en) | 2020-09-21 | 2023-11-07 | Zynga Inc. | On device game engine architecture |
US11565182B2 (en) | 2020-09-21 | 2023-01-31 | Zynga Inc. | Parametric player modeling for computer-implemented games |
US11865443B2 (en) * | 2021-09-02 | 2024-01-09 | Steelseries Aps | Selecting head related transfer function profiles for audio streams in gaming systems |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7154510B2 (en) | 2002-11-14 | 2006-12-26 | Eastman Kodak Company | System and method for modifying a portrait image in response to a stimulus |
US8462996B2 (en) | 2008-05-19 | 2013-06-11 | Videomining Corporation | Method and system for measuring human response to visual stimulus based on changes in facial expression |
US11056225B2 (en) * | 2010-06-07 | 2021-07-06 | Affectiva, Inc. | Analytics for livestreaming based on image analysis within a shared digital environment |
US20170238859A1 (en) * | 2010-06-07 | 2017-08-24 | Affectiva, Inc. | Mental state data tagging and mood analysis for data collected from multiple sources |
TW201220216A (en) | 2010-11-15 | 2012-05-16 | Hon Hai Prec Ind Co Ltd | System and method for detecting human emotion and appeasing human emotion |
US9569986B2 (en) | 2012-02-27 | 2017-02-14 | The Nielsen Company (Us), Llc | System and method for gathering and analyzing biometric user feedback for use in social media and advertising applications |
US20130339433A1 (en) * | 2012-06-15 | 2013-12-19 | Duke University | Method and apparatus for content rating using reaction sensing |
US9251406B2 (en) * | 2012-06-20 | 2016-02-02 | Yahoo! Inc. | Method and system for detecting users' emotions when experiencing a media program |
US9134215B1 (en) * | 2012-11-09 | 2015-09-15 | Jive Software, Inc. | Sentiment analysis of content items |
US9516259B2 (en) * | 2013-10-22 | 2016-12-06 | Google Inc. | Capturing media content in accordance with a viewer expression |
US9679380B2 (en) * | 2014-01-30 | 2017-06-13 | Futurewei Technologies, Inc. | Emotion modification for image and video content |
US20170228600A1 (en) * | 2014-11-14 | 2017-08-10 | Clipmine, Inc. | Analysis of video game videos for information extraction, content labeling, smart video editing/creation and highlights generation |
US10632372B2 (en) * | 2015-06-30 | 2020-04-28 | Amazon Technologies, Inc. | Game content interface in a spectating system |
US9918128B2 (en) * | 2016-04-08 | 2018-03-13 | Orange | Content categorization using facial expression recognition, with improved detection of moments of interest |
US10154191B2 (en) * | 2016-05-18 | 2018-12-11 | Microsoft Technology Licensing, Llc | Emotional/cognitive state-triggered recording |
US10529379B2 (en) * | 2016-09-09 | 2020-01-07 | Sony Corporation | System and method for processing video content based on emotional state detection |
US10237615B1 (en) | 2018-02-15 | 2019-03-19 | Teatime Games, Inc. | Generating highlight videos in an online game from user expressions |
- 2018-02-15: US application 15/897,979 filed; granted as US10237615B1 (not active, Expired - Fee Related)
- 2019-02-19: US application 16/279,705 filed; granted as US10462521B2 (not active, Expired - Fee Related)
- 2019-10-17: US application 16/656,410 filed; granted as US10645452B2 (not active, Expired - Fee Related)
- 2020-03-27: US application 16/833,284 filed; published as US20200228864A1 (not active, Abandoned)
Also Published As
Publication number | Publication date |
---|---|
US10462521B2 (en) | 2019-10-29 |
US10237615B1 (en) | 2019-03-19 |
US10645452B2 (en) | 2020-05-05 |
US20200053425A1 (en) | 2020-02-13 |
US20190253756A1 (en) | 2019-08-15 |
Similar Documents
Publication | Title |
---|---|
US10645452B2 (en) | Generating highlight videos in an online game from user expressions |
US10987596B2 (en) | Spectator audio analysis in online gaming environments |
JP6708689B2 (en) | 3D gameplay sharing |
US11176967B2 (en) | Automatic generation of video playback effects |
JP7018312B2 (en) | How computer user data is collected and processed while interacting with web-based content |
CN107029429B (en) | System, method, and readable medium for implementing time-shifting tutoring for cloud gaming systems |
US10293260B1 (en) | Player audio analysis in online gaming environments |
US20150194187A1 (en) | Telestrator system |
US20170106283A1 (en) | Automated generation of game event recordings |
US20200413135A1 (en) | Methods and devices for robotic interactions |
US20230163987A1 (en) | Personal space bubble in vr environments |
US10913002B2 (en) | Joining gameplay of a game through video content |
JP2020039029A (en) | Video distribution system, video distribution method, and video distribution program |
US10363488B1 (en) | Determining highlights in a game spectating system |
US10864447B1 (en) | Highlight presentation interface in a game spectating system |
WO2022095516A1 (en) | Livestreaming interaction method and apparatus |
US20240187679A1 (en) | Group party view and post viewing digital content creation |
JP2020202575A (en) | Video distribution system, video distribution method, and video distribution program |
US20230029894A1 (en) | Sharing movement data |
US20240329747A1 (en) | Hand gesture magnitude analysis and gearing for communicating context-correct communication |
US20240029726A1 (en) | Intent Identification for Dialogue Support |
US20240029725A1 (en) | Customized dialogue support |
JP2024517709A (en) | Rhythm-based content creation |
WO2013168089A2 (en) | Changing states of a computer program, game, or a mobile app based on real time non-verbal cues of user |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: TEATIME GAMES, INC., CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GUDMUNDSSON, GUNNAR HOLMSTEINN;BERGTHORSSON, JOHANN THORVALDUR;FRIDRIKSSON, THORSTEINN BALDUR;AND OTHERS;REEL/FRAME:052250/0199. Effective date: 20180212 |
| STPP | Information on status: patent application and granting procedure in general | Free format text: APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE |