US11330313B2 - Crowd rating media content based on micro-expressions of viewers - Google Patents

Crowd rating media content based on micro-expressions of viewers

Info

Publication number
US11330313B2
US11330313B2 (application US16/530,196)
Authority
US
United States
Prior art keywords
sentiment
playback
media content
computing device
timestamp
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US16/530,196
Other versions
US20210037271A1 (en)
Inventor
Sathish Kumar Bikumala
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dell Products LP
Original Assignee
Dell Products LP
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dell Products LP filed Critical Dell Products LP
Assigned to DELL PRODUCTS L. P. reassignment DELL PRODUCTS L. P. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BIKUMALA, SATHISH KUMAR
Priority to US16/530,196 priority Critical patent/US11330313B2/en
Assigned to CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH reassignment CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH SECURITY AGREEMENT Assignors: DELL PRODUCTS L.P., EMC CORPORATION, EMC IP Holding Company LLC
Assigned to THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT reassignment THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT PATENT SECURITY AGREEMENT (NOTES) Assignors: DELL PRODUCTS L.P., EMC CORPORATION, EMC IP Holding Company LLC
Assigned to THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A. reassignment THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A. SECURITY AGREEMENT Assignors: CREDANT TECHNOLOGIES INC., DELL INTERNATIONAL L.L.C., DELL MARKETING L.P., DELL PRODUCTS L.P., DELL USA L.P., EMC CORPORATION, EMC IP Holding Company LLC, FORCE10 NETWORKS, INC., WYSE TECHNOLOGY L.L.C.
Assigned to THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT reassignment THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: DELL PRODUCTS L.P., EMC CORPORATION, EMC IP Holding Company LLC
Publication of US20210037271A1 publication Critical patent/US20210037271A1/en
Assigned to EMC IP Holding Company LLC, DELL PRODUCTS L.P., EMC CORPORATION reassignment EMC IP Holding Company LLC RELEASE OF SECURITY INTEREST AT REEL 050406 FRAME 421 Assignors: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH
Publication of US11330313B2 publication Critical patent/US11330313B2/en
Application granted
Assigned to EMC IP Holding Company LLC, EMC CORPORATION, DELL PRODUCTS L.P. reassignment EMC IP Holding Company LLC RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (050724/0571) Assignors: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT
Assigned to EMC CORPORATION, EMC IP Holding Company LLC, DELL PRODUCTS L.P. reassignment EMC CORPORATION RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (053311/0169) Assignors: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT
Assigned to EMC CORPORATION, DELL USA L.P., DELL MARKETING L.P. (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO CREDANT TECHNOLOGIES, INC.), DELL INTERNATIONAL L.L.C., EMC IP Holding Company LLC, DELL PRODUCTS L.P., DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO FORCE10 NETWORKS, INC. AND WYSE TECHNOLOGY L.L.C.) reassignment EMC CORPORATION RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (053546/0001) Assignors: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06K9/00302
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/238 Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
    • H04N21/2387 Stream processing in response to a playback request from an end-user, e.g. for trick-play
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258 Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25866 Management of end-user data
    • H04N21/25891 Management of end-user data being end-user preferences
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/4223 Cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442 Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213 Monitoring of end-user related data
    • H04N21/44218 Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N21/65 Transmission of management data between client and server
    • H04N21/658 Transmission by the client directed to the server
    • H04N21/6582 Data stored in the client, e.g. viewing habits, hardware capabilities, credit card number
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456 Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Definitions

  • This invention relates generally to determining a user's micro-expression when media content is being played back on a display device (e.g., a television) and, more particularly, to sending the captured micro-expressions to a server to create a crowd-based sentiment map of the media content.
  • An information handling system generally processes, compiles, stores, and/or communicates information or data for business, personal, or other purposes thereby allowing users to take advantage of the value of the information.
  • information handling systems may also vary regarding what information is handled, how the information is handled, how much information is processed, stored, or communicated, and how quickly and efficiently the information may be processed, stored, or communicated.
  • the variations in information handling systems allow for information handling systems to be general or configured for a specific user or specific use such as financial transaction processing, airline reservations, enterprise data storage, or global communications.
  • information handling systems may include a variety of hardware and software components that may be configured to process, store, and communicate information and may include one or more computer systems, data storage systems, and networking systems.
  • media content may include audio content, video content, or both.
  • the media content may include audio-playbacks, music, songs, news, podcasts, shows such as comedy shows, or three-dimensional media content.
  • a media content item may be rated “R” (restricted) because of one or two particular scenes.
  • Adults viewing “R” rated media content may not want a child who walks into the room to view the particular scenes but may be okay with the child temporarily viewing other portions of the media content.
  • a computing device initiates playback of media content on a display device.
  • the computing device receives one or more images from a camera having a field of view that includes one or more viewers of the display device.
  • the computing device may analyze at least one of the images and determine, based on the analysis, a micro-expression being expressed by at least one of the viewers.
  • the computing device may determine a sentiment based on the micro-expression.
  • a timestamp derived from the one or more images may be associated with the sentiment and sent to a server to create a sentiment map of the media content. If the sentiment matches a pre-specified sentiment, then the computing device may skip playback of a remainder of a current portion of the media content that is being displayed and initiate playback of a next portion of the media content.
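A minimal, self-contained Python sketch of the flow in this example follows: classify a viewer's micro-expression in a camera frame, map it to a sentiment, report the (timestamp, sentiment) pair to the server, and skip ahead when the sentiment matches a pre-specified one. Every name here (Frame, classify_sentiment, send_to_server) and both stub bodies are illustrative assumptions, not APIs defined by the patent.

```python
from dataclasses import dataclass
from typing import Optional

# Pre-specified sentiments that trigger skipping (one possible choice).
SKIP_SENTIMENTS = {"fear", "disgust", "anger", "contempt"}

@dataclass
class Frame:
    timestamp: float  # playback position (seconds) when the frame was captured
    pixels: bytes     # raw image data from the camera

def classify_sentiment(frame: Frame) -> Optional[str]:
    """Stub for the machine-learning model that maps a micro-expression
    in a frame to a sentiment label, or None if no expression is found."""
    return None

def send_to_server(timestamp: float, sentiment: str) -> None:
    """Stub for reporting (timestamp, sentiment) to the crowd-map server."""
    print(f"report: t={timestamp:.1f}s sentiment={sentiment}")

def process_frame(frame: Frame, skip_to_next_portion) -> None:
    sentiment = classify_sentiment(frame)
    if sentiment is None:
        return
    send_to_server(frame.timestamp, sentiment)  # contributes to the sentiment map
    if sentiment in SKIP_SENTIMENTS:
        skip_to_next_portion()                  # skip remainder of current portion
```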
  • FIG. 1 is a block diagram of a system that includes a computing device to determine a sentiment associated with an event, according to some embodiments.
  • FIG. 2 is a block diagram illustrating a sentiment map, according to some embodiments.
  • FIG. 3 is a flowchart of a process that includes sending a timestamp and a sentiment associated with a portion of media content to a server, according to some embodiments.
  • FIG. 4 illustrates an example configuration of a computing device that can be used to implement the systems and techniques described herein.
  • an information handling system may include any instrumentality or aggregate of instrumentalities operable to compute, calculate, determine, classify, process, transmit, receive, retrieve, originate, switch, store, display, communicate, manifest, detect, record, reproduce, handle, or utilize any form of information, intelligence, or data for business, scientific, control, or other purposes.
  • an information handling system may be a personal computer (e.g., desktop or laptop), tablet computer, mobile device (e.g., personal digital assistant (PDA) or smart phone), server (e.g., blade server or rack server), a network storage device, or any other suitable device and may vary in size, shape, performance, functionality, and price.
  • the information handling system may include random access memory (RAM), one or more processing resources such as a central processing unit (CPU) or hardware or software control logic, ROM, and/or other types of nonvolatile memory. Additional components of the information handling system may include one or more disk drives, one or more network ports for communicating with external devices as well as various input and output (I/O) devices, such as a keyboard, a mouse, touchscreen and/or video display. The information handling system may also include one or more buses operable to transmit communications between the various hardware components.
  • the systems and techniques described herein may monitor a user's facial expressions (e.g., micro-expressions) when viewing media content being displayed on a display device (e.g., a television).
  • the media content may be sent to the display device by a computing device (e.g., IHS), such as, for example, a set-top box or other media streaming device (e.g., Amazon® fire stick, Roku® media box, or the like).
  • the systems and techniques may capture the user's facial expressions in a set of images (e.g., one or more video frames) and timestamp each set of images.
  • a machine learning module may analyze the set of images to identify a micro-expression and to determine a sentiment (e.g., happy, sad, puzzled, disgust, or the like).
  • the systems and techniques may associate the sentiment with a timestamp and send the data (e.g., sentiment and associated timestamp) to a server.
  • the server may receive such data from multiple (e.g., hundreds of thousands of) computing devices and analyze the data to determine a particular sentiment associated with each particular portion of the media content. In this way, each scene in the media content may have an associated sentiment (e.g., average sentiment) based on data received from multiple users.
  • the computing device may, substantially in real-time (e.g., less than one second after determining the user's micro-expression), modify the playback of the media content based on a micro-expression of one or more users. For example, if one or more users have a particular type of micro-expression (e.g., disgust), then the computing device may skip a current portion (e.g., scene) of the media content that is being played back and advance playback of the media content to a next portion (e.g., next scene) of the media content.
  • a camera may capture micro-expressions associated with the users and associate a timestamp with each micro-expression.
  • the micro-expressions may be associated with a particular portion of the media content, such as a scene in a movie or a show or an episode in an audio-playback or a news broadcast.
  • the micro-expressions may be summarized in the form of a sentiment (e.g., happy, sad, disgust, confused, and the like) and sent to a server. Data from multiple cameras may be sent to the server to enable the server to create a crowd-based sentiment map in which individual portions of the media content have an associated sentiment.
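The patent does not prescribe how the server combines reports into a crowd-based sentiment map. One plausible aggregation, sketched below under that assumption, is a per-portion majority vote over (timestamp, sentiment) reports from many devices; the function and variable names are invented for illustration.

```python
from bisect import bisect_right
from collections import Counter, defaultdict

def build_sentiment_map(reports, portion_starts):
    """reports: (timestamp_seconds, sentiment) pairs from many devices.
    portion_starts: sorted start times of each portion (scene/chapter).
    Returns {portion_index: most common sentiment for that portion}."""
    votes = defaultdict(Counter)
    for ts, sentiment in reports:
        portion = bisect_right(portion_starts, ts) - 1  # portion containing ts
        if portion >= 0:
            votes[portion][sentiment] += 1
    return {p: c.most_common(1)[0][0] for p, c in votes.items()}

# Three viewers' reports over a two-portion clip starting at 0 s and 90 s.
reports = [(12.0, "happy"), (15.5, "happy"), (14.2, "neutral"),
           (95.0, "disgust"), (97.3, "disgust")]
print(build_sentiment_map(reports, portion_starts=[0.0, 90.0]))
# -> {0: 'happy', 1: 'disgust'}
```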
  • the computing device may modify playback of the media content substantially in real-time based on the user's micro-expression (or sentiment). For example, if the micro-expression indicates disgust or surprise (e.g., a child walked into the room during an adult-oriented scene), the computing device may skip playback of a current portion (e.g., scene or chapter) and advance playback of the media content to a next portion (e.g., a next scene or a next chapter).
  • a computing device such as a media playback device or a media streaming device may send media content to a display device (e.g., a television or a display monitor).
  • a camera may be connected to (e.g., attached to or integrated with) the display device.
  • the computing device may receive images from the camera and analyze (e.g., using machine learning) the images to determine a micro-expression of one or more users present in the images (e.g., in a field of view of the camera).
  • the machine learning may determine a sentiment associated with the portion of the media content that is being played back, determine a timestamp (e.g., each of the images may include a timestamp) associated with the images (e.g., a timestamp of the first image), associate the sentiment with the timestamp and send the sentiment and timestamp to a server.
  • the server may store a sentiment map of the media content.
  • the sentiment map may identify, based on the timestamp, a portion (e.g., a scene or a chapter) of the media content and a sentiment associated with the portion.
  • if the sentiment associated with a current portion of the media content that is being played back matches a pre-specified sentiment (e.g., one or more of fear, disgust, anger, or contempt), playback of a remainder of the current portion of the media content may be skipped and playback of a next portion of the media content may be initiated.
  • otherwise, playback of the current portion of the media content may continue. In this way, the computing device may skip zero or more portions of the media content that the user does not enjoy viewing.
  • a computing device may include one or more processors and one or more non-transitory computer-readable storage media to store instructions executable by the one or more processors to perform various operations.
  • the operations may include initiating playback of media content on a display device that is connected to the computing device.
  • the computing device may be a set-top box device, a media streaming device, or a combination of both.
  • the operations may include receiving one or more images from a camera connected to the computing device.
  • the camera may have a field of view that includes one or more viewers viewing the display device.
  • the operations may include performing an analysis of at least one image of the one or more images and determining, based on the analysis, that the at least one image includes a micro-expression being expressed by at least one viewer of the one or more viewers.
  • the operations may include determining a sentiment corresponding to the micro-expression.
  • the operations may include determining a timestamp associated with the at least one image and associating the sentiment with the timestamp.
  • the operations may include sending the sentiment and the timestamp to a server.
  • the operations may include determining that the sentiment comprises a pre-specified sentiment, automatically skipping playback of a remainder of a current portion of the media content that is being played back on the display device, and automatically initiating playback of a next portion of the media content.
  • the current portion may include a particular chapter of a movie and the next portion may include a next chapter of the movie.
  • the current portion may include a particular scene of a show and the next portion may include a next scene of the show.
  • the computing device, the server, or both may create a sentiment map associated with the media content.
  • the sentiment map includes a particular sentiment associated with individual portions of a plurality of portions of the media content.
  • the sentiment may be one of: a neutral sentiment, a surprise sentiment, a fear sentiment, a disgust sentiment, an angry sentiment, a happy sentiment, a sad sentiment, or a contempt sentiment.
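The eight sentiment categories named above, written out as a simple enumeration. Only the category names come from the text; the string values and the example pre-specified subset are assumptions.

```python
from enum import Enum

class Sentiment(Enum):
    NEUTRAL = "neutral"
    SURPRISE = "surprise"
    FEAR = "fear"
    DISGUST = "disgust"
    ANGRY = "angry"
    HAPPY = "happy"
    SAD = "sad"
    CONTEMPT = "contempt"

# Example pre-specified subset that might trigger skipping (an assumption).
PRE_SPECIFIED = {Sentiment.SURPRISE, Sentiment.FEAR, Sentiment.DISGUST,
                 Sentiment.ANGRY, Sentiment.CONTEMPT}
```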
  • a computing device may include one or more processors and one or more non-transitory computer-readable storage media to store instructions executable by the one or more processors to perform various operations.
  • the operations may include initiating playback of media content on a display device that is connected to the computing device.
  • the computing device may be a set-top box device, a media streaming device, or a combination of both.
  • the operations may include receiving one or more images from a camera connected to the computing device.
  • the camera may have a field of view that includes one or more viewers viewing the display device.
  • the operations may include performing an analysis of at least one image of the one or more images and determining that the at least one image includes a particular micro-expression being expressed by the one or more viewers.
  • the operations may include determining a sentiment corresponding to the micro-expression.
  • the operations may include determining a timestamp associated with the at least one image, associating the sentiment with the timestamp, and sending the sentiment and the timestamp to a server.
  • the operations may include determining that the sentiment is one of a pre-specified set of (one or more) sentiments.
  • the computing device may automatically skip playback of a remainder of a current portion of the media content and automatically initiate playback of a next portion of the media content.
  • the current portion may include a particular chapter of a movie and the next portion may include a next chapter of the movie.
  • the current portion may include a particular scene of a show and the next portion may include a next scene of the show.
  • the pre-specified sentiment may include at least one of: a surprise sentiment, a fear sentiment, a disgust sentiment, an angry sentiment, or a contempt sentiment.
  • the server, the computing device, or both may create a sentiment map in which a particular sentiment is associated with an individual portion of a plurality of portions of the media content.
  • FIG. 1 is a block diagram of a system 100 that includes a computing device to determine a sentiment associated with an event, according to some embodiments.
  • the system 100 may include a computing device 102 (e.g., an information handling system) connected to a server 104 via a network 106 .
  • the computing device 102 may be a media playback device, such as a set-top box device, streaming media device, or the like, that is capable of sending media content 122 to a display device 108 that is connected to the computing device 102 .
  • a camera 110 may be connected to or integrated into the display device 108 .
  • the camera 110 may capture and send one or more images 112 (e.g., video frames) at a predetermined interval (e.g., 1, 15, 30, or 60 frames per second (fps)), to the computing device 102 .
  • the network 106 may include multiple networks using multiple technologies, such as wired and wireless technologies.
  • a media content distribution company may make media content available to a home via a cable connection, a fiber connection, a satellite connection, an internet connection, or the like.
  • the computing device 102 may receive the one or more images 112 at a predetermined time interval from the camera 110 .
  • the one or more images 112 may include a micro-expression 116 of a user 150 when viewing a portion 134 (Q) (e.g., a set of video frames, such as a scene) of the media content 122 (e.g., a movie, a show such as a comedy show, an audio-playback, a news broadcast or the like).
  • Q≤P, where Q is a current portion of the media content 122 .
  • the computing device 102 may use machine learning 114 to identify a sentiment based on the micro-expression 116 , determine a timestamp of a first image of the images 112 , and associate the sentiment with the timestamp. In this way, after playing back the media content 122 , the computing device 102 may have created a sentiment map 130 .
  • the media content 122 has multiple portions 134 (e.g., scenes), and a timestamp 126 (e.g., a time when the portion starts) may be associated with a sentiment 128 that is determined based on the micro-expression of the user during each of the portions 134 of the media content 122 .
  • a timestamp 126 ( 1 ) may have an associated sentiment 128 ( 1 ) and a timestamp 126 (N) may have an associated sentiment 128 (N).
  • a typical movie may have between about 40 and about 60 scenes (portions 134 )
  • a one-hour show may have about 20 to about 30 scenes (portions 134 )
  • a half-hour show may have about 10 to about 15 scenes (portions 134 ).
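One way to represent the sentiment map just described is a sequence of pairs, each portion's starting timestamp 126(n) paired with the sentiment 128(n) observed during that portion. The timestamps and labels below are invented for illustration.

```python
# (timestamp in seconds, sentiment) pairs, one per portion, in playback order.
sentiment_map = [
    (0.0,   "neutral"),
    (95.0,  "happy"),
    (210.0, "surprise"),
    (340.0, "disgust"),  # a portion a viewer may want skipped
]

def sentiment_at(t: float) -> str:
    """Return the sentiment of the portion containing playback time t."""
    current = sentiment_map[0][1]
    for start, sentiment in sentiment_map:
        if start <= t:
            current = sentiment
        else:
            break
    return current

print(sentiment_at(120.0))  # -> "happy"
```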
  • the user 150 may use an input device 118 , such as a remote control, to provide input data 120 to the computing device 102 .
  • Each of the sentiments 128 may correspond to one of multiple micro-expressions, such as neutral, surprise, fear, disgust, anger, happiness, sadness, or contempt.
  • the neutral micro-expression may include eyes and eyebrows neutral and the mouth opened or closed with few wrinkles.
  • the surprise micro-expression may include raised eyebrows, stretched skin below the brow, horizontal wrinkles across the forehead, open eyelids, whites of the eye (both above and below the eye) showing, jaw open and teeth parted, or any combination thereof.
  • the fear micro-expression may include one or more eyebrows that are raised and drawn together (often in a flat line), wrinkles in the forehead between (but not across) the eyebrows, raised upper eyelid, tense (e.g., drawn up) lower eyelid, upper (but not lower) whites of eyes showing, mouth open, lips slightly tensed or stretched and drawn back, or any combination thereof.
  • the disgust micro-expression may include a raised upper eyelid, raised lower lip, wrinkled nose, raised cheeks, lines below the lower eyelid, or any combination thereof.
  • the anger micro-expression may include eyebrows that are lowered and drawn together, vertical lines between the eyebrows, tense lower eyelid(s), eyes staring or bulging, lips pressed firmly together (with corners down or in a square shape), nostrils flared (e.g., dilated), lower jaw jutting out, or any combination thereof.
  • the happiness micro-expression may include the corners of the lips drawn back and up, the mouth parted with teeth exposed, a wrinkle running from the outer nose to the outer lip, raised cheeks, wrinkles in the lower eyelid, crow's feet near the eyes, or any combination thereof.
  • the sadness micro-expression may include the inner corners of the eyebrows drawn in and up, triangulated skin below the eyebrows, one or both corners of the lips drawn down, jaw up, lower lip pouts out, or any combination thereof.
  • the contempt (e.g., hate) micro-expression may include one side of the mouth raised.
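A toy lookup relating a few of the facial cues listed above to sentiment labels. A deployed system would classify facial action units with a trained model; this best-overlap rule table is only meant to make the cue-to-sentiment mapping concrete, and the cue names are ad hoc.

```python
# Each rule: (set of facial cues, sentiment suggested by those cues).
CUE_RULES = [
    ({"raised_eyebrows", "open_jaw", "horizontal_forehead_wrinkles"}, "surprise"),
    ({"wrinkled_nose", "raised_upper_eyelid", "raised_cheeks"}, "disgust"),
    ({"lowered_drawn_brows", "pressed_lips", "flared_nostrils"}, "anger"),
    ({"lip_corners_up", "raised_cheeks", "crow_feet"}, "happiness"),
    ({"one_lip_corner_raised"}, "contempt"),
]

def sentiment_from_cues(observed: set) -> str:
    """Pick the rule whose cue set best overlaps the observed cues."""
    best_cues, best_label = max(CUE_RULES, key=lambda r: len(r[0] & observed))
    return best_label if best_cues & observed else "neutral"

print(sentiment_from_cues({"wrinkled_nose", "raised_cheeks"}))  # -> "disgust"
```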
  • the computing device 102 may send data 140 to the server 104 .
  • the data 140 may include a current one of the timestamps 126 and the associated one of the sentiments 128 .
  • the server 104 may have a sentiment map 130 associated with the media content 122 .
  • the server 104 may use a database 140 to store multiple media content items 132 ( 1 ) to 132 (M) (M>0) and associated sentiment maps 136 ( 1 ) to 136 (M), respectively.
  • an analyzer 138 may analyze the sentiment maps 136 to identify the types of sentiments that are popular. For example, the media content provider may determine that particular locations (e.g., zip codes) predominantly view horror movies (e.g., movies that cause micro-expressions associated with the sentiment of fear) and the like. In this way, the media content provider may customize commercials or other advertisements based on the type of content being viewed in each location.
  • the user 150 may use the input device 118 to instruct the computing device 102 to initiate playback of the media content 122 on the display device 108 , causing the display device 108 to display the portion 134 (Q) of the media content 122 .
  • the computing device 102 may determine that the camera 110 is available and use the camera 110 to capture the images 112 when the user 150 is viewing the portion 134 (Q) of the media content 122 .
  • the computing device 102 may receive the images 112 and use the machine learning 114 to identify the micro-expression 116 and the corresponding one of the sentiments 128 associated with the portion 134 (Q).
  • the computing device 102 may associate a timestamp with the sentiment.
  • the computing device 102 may send the timestamp 126 and the associated sentiment 128 to the server 104 .
  • the computing device 102 may skip playback of a remainder of the current portion 134 (Q) of the media content 122 and initiate playback of a next portion 134 of the media content 122 .
  • for example, if the pre-specified sentiment 142 is disgust, then when the micro-expression 116 indicates disgust, the computing device 102 may automatically skip playback of the current portion 134 (Q) of the media content 122 and initiate playback of a next portion of the media content 122 .
  • a different sentiment besides disgust may be the pre-specified sentiment 142 to cause automatic skipping of a portion of the media content 122 .
  • determining one or more of surprise, fear, anger, or contempt in the micro-expression 116 may be used to cause the computing device 102 to skip playback of the current portion 134 (Q) of the media content 122 and initiate playback of a next one of the portions 134 of the media content 122 .
  • the machine learning 114 may learn which portions of media content to skip based on the behavior of the user 150 .
  • the user 150 may be uncomfortable viewing certain types of scenes (e.g., graphic sexual scene, masochism, nudity, or the like) and use the input device 118 to instruct the computing device 102 to skip to a next one of the portions 134 (e.g., a next chapter marker in a video).
  • the user 150 may exhibit one or more particular micro-expressions corresponding to one or more particular sentiments when the user 150 is uncomfortable viewing certain types of scenes.
  • the machine learning 114 may correlate the input data 120 (e.g., instructing, using the input device 118 , the computing device 102 to skip to a next scene or chapter) with a current sentiment (one of the sentiments 128 ) and perform machine learning. In this way, the machine learning 114 may, in response to determining a particular micro-expression in the images 112 associated with the portion 134 (Q), determine that the user 150 is uncomfortable (e.g., one or more of disgust, contempt, surprise, fear, or anger), and automatically (e.g., without human interaction) skip the current portion 134 (Q) of the media content 122 and initiate playback of a next one of the portions 134 of the media content 122 .
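A minimal sketch of this learning loop: count how often a manual "skip" press from the input device co-occurs with each sentiment, and begin auto-skipping a sentiment once the count crosses a threshold. The count-and-threshold rule (and the threshold of 3) are stand-ins for the patent's unspecified machine-learning method; all names are illustrative.

```python
from collections import Counter

class SkipLearner:
    """Correlates manual skip presses with the sentiment shown at the time."""

    def __init__(self, threshold: int = 3):  # the threshold is an assumption
        self.skip_counts = Counter()
        self.threshold = threshold

    def record_manual_skip(self, current_sentiment: str) -> None:
        """Call when the user presses 'next scene/chapter' on the remote."""
        self.skip_counts[current_sentiment] += 1

    def should_auto_skip(self, sentiment: str) -> bool:
        return self.skip_counts[sentiment] >= self.threshold

learner = SkipLearner()
for _ in range(3):
    learner.record_manual_skip("disgust")
print(learner.should_auto_skip("disgust"))  # -> True
```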
  • the user 150 may be viewing adult media content and another person (e.g., a spouse, a child, one of the user's parents, one of the user's grandparents, or the like) may walk in to the room.
  • the computing device 102 may automatically (e.g., without the user 150 doing anything) skip the current portion 134 (Q) and initiate playback of a next one of the portions 134 (e.g., next chapter) of the media content 122 .
  • the machine learning 114 may learn what types of scenes the user 150 is interested in viewing and which types of scenes the user 150 is uninterested in viewing. For example, a car enthusiast may enjoy watching scenes that include car chases.
  • the computing device 102 may learn this by determining that the user 150 uses the input device 118 to skip scenes that do not include car chases. After the machine learning 114 has learned this behavior, the machine learning 114 may automatically skip playback of the portion 134 (Q) and skip to a next one of the portions 134 of the media content 122 when the micro-expression 116 indicates particular sentiments (e.g., one or more of neutral, disgust, anger, or contempt) that indicate that the user is not interested in viewing the current portion 134 (Q).
  • a computing device such as a media playback device or a media streaming device may send media content to a display device (e.g., an audio player, a television or a display monitor).
  • a camera may be connected to (e.g., attached to or integrated with) the display device.
  • the computing device may receive images from the camera and analyze (e.g., using machine learning) the images to determine a micro-expression of one or more users present in the images (e.g., in a field of view of the camera).
  • the machine learning may determine a sentiment associated with the portion of the media content that is being played back, determine a timestamp (e.g., each of the images may include a timestamp) associated with the images (e.g., a timestamp of the first image), associate the sentiment with the timestamp and send the sentiment and timestamp to a server.
  • the server may store a sentiment map of the media content.
  • the sentiment map may identify, based on the timestamp, a portion (e.g., a scene or a chapter) of the media content and a sentiment associated with the portion.
  • the computing device may skip zero or more portions of the media content that the user does not enjoy viewing, thereby providing a more enjoyable media content viewing experience.
  • FIG. 2 is a block diagram 200 illustrating a sentiment map, according to some embodiments.
  • the machine learning 114 may analyze one or more images 202 ( 1 ) (e.g., the images 112 of FIG. 1 ) from the camera 110 and determine a sentiment 128 ( 1 ) (e.g., neutral). The machine learning 114 may associate the timestamp 126 ( 1 ) from the images 202 ( 1 ) to indicate when the user displayed the sentiment 128 ( 1 ). In this way, the sentiment map 136 (M) indicates that when the portion 134 ( 1 ) is being played back, the user 150 displays the sentiment 128 ( 1 ) at the time 124 ( 1 ).
  • the machine learning 114 may analyze one or more images 202 ( 2 ) (e.g., the images 112 of FIG. 1 ) from the camera 110 and determine a sentiment 128 ( 2 ) (e.g., contempt or disgust).
  • the machine learning 114 may associate the timestamp 126 ( 2 ) from the images 202 ( 2 ) to indicate when the user displayed the sentiment 128 ( 2 ).
  • the sentiment map 136 (M) indicates that when the portion 134 ( 2 ) is being played back, the user 150 displays the sentiment 128 ( 2 ) at the time 124 ( 2 ).
  • the machine learning 114 may automatically skip a remaining playback of the portion 134 ( 2 ) and skip to the next portion 134 ( 3 ).
  • the machine learning 114 may analyze one or more images 202 ( 3 ) (e.g., the images 112 of FIG. 1 ) from the camera 110 and determine a sentiment 128 ( 3 ) (e.g., happy). The machine learning 114 may associate the timestamp 126 ( 3 ) from the images 202 ( 3 ) to indicate when the user displayed the sentiment 128 ( 3 ). In this way, the sentiment map 136 (M) indicates that when the portion 134 ( 3 ) is being played back, the user 150 displays the sentiment 128 ( 3 ) at the time 124 ( 3 ).
  • the machine learning 114 may analyze one or more images 202 ( 4 ) (e.g., the images 112 of FIG. 1 ) from the camera 110 and determine a sentiment 128 ( 4 ) (e.g., contempt or disgust).
  • the machine learning 114 may associate the timestamp 126 ( 4 ) from the images 202 ( 4 ) to indicate when the user displayed the sentiment 128 ( 4 ).
  • the sentiment map 136 (M) indicates that when the portion 134 ( 4 ) is being played back, the user 150 displays the sentiment 128 ( 4 ) at the time 124 ( 4 ).
  • the machine learning 114 may automatically skip a remaining playback of the portion 134 ( 4 ) and skip to the next portion 134 ( 5 ).
  • the machine learning 114 may analyze one or more images 202 ( 5 ) (e.g., the images 112 of FIG. 1 ) from the camera 110 and determine a sentiment 128 ( 5 ) (e.g., neutral). The machine learning 114 may associate the timestamp 126 ( 5 ) from the images 202 ( 5 ) to indicate when the user displayed the sentiment 128 ( 5 ). In this way, the sentiment map 136 (M) indicates that when the portion 134 ( 5 ) is being played back, the user 150 displays the sentiment 128 ( 5 ) at the time 124 ( 5 ).
  • the machine learning 114 may analyze one or more images 202 (P) (e.g., the images 112 of FIG. 1 ) from the camera 110 and determine a sentiment 128 (P) (e.g., happy). The machine learning 114 may associate the timestamp 126 (P) from the images 202 (P) to indicate when the user displayed the sentiment 128 (P). In this way, the sentiment map 136 (M) indicates that when the portion 134 (P) is being played back, the user 150 displays the sentiment 128 (P) at the time 124 (P).
  • images may be analyzed to identify a sentiment of a user, at what time the user displayed the sentiment, which portion of the media content the user was viewing, and the like. If the user displays one or more pre-specified (or learned) sentiments, then the machine learning 114 may automatically skip playback of a remainder of the current portion and initiate playback of a next portion of the media content.
  • each block represents one or more operations that can be implemented in hardware, software, or a combination thereof.
  • the blocks represent computer-executable instructions that, when executed by one or more processors, cause the processors to perform the recited operations.
  • computer-executable instructions include routines, programs, objects, modules, components, data structures, and the like that perform particular functions or implement particular abstract data types.
  • the order in which the blocks are described is not intended to be construed as a limitation, and any number of the described operations can be combined in any order and/or in parallel to implement the processes.
  • the process 300 is described with reference to FIGS. 1 and 2 as described above, although other models, frameworks, systems and environments may be used to implement this process.
  • FIG. 3 is a flowchart of a process 300 that includes sending a timestamp and a sentiment associated with a portion of media content to a server, according to some embodiments.
  • the process may be performed by the computing device 102 of FIGS. 1 and 2 .
  • the process may initiate playback of media content on a display device.
  • the process may determine that a camera is accessible.
  • the process may capture one or more images using the camera.
  • the user 150 may use the input device 118 to instruct the computing device 102 to initiate playback of the media content 122 on the display device 108 , causing the display device 108 to display the portion 134 (Q) of the media content 122 .
  • the computing device 102 may determine that the camera 110 is available and use the camera 110 to capture the images 112 when the user 150 is viewing the portion 134 (Q) of the media content 122 .
  • the process may determine a micro-expression expressed in the one or more images.
  • the process may determine a sentiment based on the micro-expression.
  • the computing device 102 may receive the images 112 and use the machine learning 114 to identify the micro-expression 116 and the corresponding one of the sentiments 128 associated with the portion 134 (Q).
  • the computing device 102 may associate a timestamp with the sentiment.
  • the computing device 102 may send the timestamp 126 and the associated sentiment 128 to the server 104 .
  • the process may determine a timestamp and associate the timestamp with the sentiment.
  • the process may send the timestamp and the sentiment to a server.
  • the computing device 102 may associate a timestamp with the sentiment.
  • the computing device 102 may send the timestamp 126 and the associated sentiment 128 to the server 104 .
  • the process may determine whether the sentiment is one of the pre-specified sentiments. If the process determines, at 316 , that "yes" the sentiment is one of the pre-specified sentiments, then the process may proceed to 318 , where a remainder of a current portion of the media content may be skipped and playback of a next portion of the media content may be initiated. If the process determines, at 316 , that "no" the sentiment is not one of the pre-specified sentiments, then the process may proceed to 320 . At 320 , the process may continue playback of the media content and the process may proceed to 306 to capture additional images using the camera. For example, in FIG. 1 :
  • if the sentiment matches the pre-specified sentiment 142 , the computing device 102 may skip playback of a remainder of the current portion 134 (Q) of the media content 122 and initiate playback of a next portion 134 of the media content 122 .
  • for example, if the pre-specified sentiment 142 is disgust and the micro-expression 116 indicates disgust, the computing device 102 may skip playback of the current portion 134 (Q) of the media content 122 and initiate playback of a next portion of the media content 122 .
  • otherwise, the computing device 102 may continue playback of the current portion 134 (Q) of the media content 122 .
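The FIG. 3 loop rendered as a control-flow sketch. The playback, camera, classify, and send objects are hypothetical stand-ins; the step numbers in the comments follow the flowchart references quoted above (306, 316, 318, 320), and the remaining numbers are inferred from the step order.

```python
def run_process_300(playback, camera, classify, send, pre_specified):
    playback.start()                        # 302: initiate playback
    if not camera.is_accessible():          # 304: determine camera is accessible
        return
    while playback.is_playing():
        frames = camera.capture()           # 306: capture one or more images
        sentiment = classify(frames)        # 308/310: micro-expression -> sentiment
        if sentiment is None:
            continue
        ts = frames[0].timestamp            # 312: timestamp from the first image
        send(ts, sentiment)                 # 314: send (timestamp, sentiment)
        if sentiment in pre_specified:      # 316: pre-specified sentiment?
            playback.skip_to_next_portion() # 318: skip remainder, next portion
        # 320: otherwise playback continues; loop back to 306
```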
  • a computing device such as a media playback device or a media streaming device may send media content to a display device (e.g., a television or a display monitor).
  • a camera may be connected to (e.g., attached to or integrated with) the display device.
  • the computing device may receive images from the camera and analyze (e.g., using machine learning) the images to determine a micro-expression of one or more users present in the images (e.g., in a field of view of the camera).
  • the machine learning may determine a sentiment associated with the portion of the media content that is being played back, determine a timestamp (e.g., each of the images may include a timestamp) associated with the images (e.g., a timestamp of the first image), associate the sentiment with the timestamp and send the sentiment and timestamp to a server.
  • the server may store a sentiment map of the media content.
  • the sentiment map may identify, based on the timestamp, a portion (e.g., a scene or a chapter) of the media content and a sentiment associated with the portion.
  • if the sentiment associated with a current portion of the media content that is being played back matches a pre-specified sentiment (e.g., one or more of fear, disgust, anger, or contempt), playback of a remainder of the current portion of the media content may be skipped and playback of a next portion of the media content may be initiated.
  • otherwise, playback of the current portion of the media content may continue. In this way, the computing device may skip zero or more portions of the media content that the user does not enjoy viewing.
  • portions of the media content that users feel uncomfortable viewing may be automatically skipped to improve the users' experience when viewing media content.
  • FIG. 4 illustrates an example configuration of a computing device 400 that can be used to implement the computing device 102 or the server 104 of FIGS. 1 and 2 .
  • the computing device 400 is shown implementing the computing device 102 of FIGS. 1 and 2 .
  • the computing device 102 may include one or more processors 402 (e.g., CPU, GPU, or the like), a memory 404 , communication interfaces 406 , a display device 408 , input devices 408 (e.g., the input device 118 of FIG. 1 ), other input/output (I/O) devices 410 (e.g., trackball and the like), and one or more mass storage devices 412 (e.g., disk drive, solid state disk drive, or the like), configured to communicate with each other, such as via one or more system buses 414 or other suitable connections.
  • system buses 414 may include multiple buses, such as a memory device bus, a storage device bus (e.g., serial ATA (SATA) and the like), data buses (e.g., universal serial bus (USB) and the like), video signal buses (e.g., ThunderBolt®, DVI, HDMI, and the like), power buses, etc.
  • the processors 402 are one or more hardware devices that may include a single processing unit or a number of processing units, all of which may include single or multiple computing units or multiple cores.
  • the processors 402 may include a graphics processing unit (GPU) that is integrated into the CPU or the GPU may be a separate processor device from the CPU.
  • the processors 402 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, graphics processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions.
  • the processors 402 may be configured to fetch and execute computer-readable instructions stored in the memory 404 , mass storage devices 412 , or other computer-readable media.
  • Memory 404 and mass storage devices 412 are examples of computer storage media (e.g., memory storage devices) for storing instructions that can be executed by the processors 402 to perform the various functions described herein.
  • memory 404 may include both volatile memory and non-volatile memory (e.g., RAM, ROM, or the like) devices.
  • mass storage devices 412 may include hard disk drives, solid-state drives, removable media, including external and removable drives, memory cards, flash memory, floppy disks, optical disks (e.g., CD, DVD), a storage array, a network attached storage, a storage area network, or the like.
  • Both memory 404 and mass storage devices 412 may be collectively referred to as memory or computer storage media herein and may be any type of non-transitory media capable of storing computer-readable, processor-executable program instructions as computer program code that can be executed by the processors 402 as a particular machine configured for carrying out the operations and functions described in the implementations herein.
  • the computing device 400 may include one or more communication interfaces 406 for exchanging data via the network 106 (e.g., with the server 104 ).
  • the communication interfaces 406 can facilitate communications within a wide variety of networks and protocol types, including wired networks (e.g., Ethernet, DOCSIS, DSL, Fiber, USB etc.) and wireless networks (e.g., WLAN, GSM, CDMA, 802.11, Bluetooth, Wireless USB, ZigBee, cellular, satellite, etc.), the Internet and the like.
  • Communication interfaces 406 can also provide communication with external storage, such as a storage array, network attached storage, storage area network, cloud storage, or the like.
  • the display device 408 may be used for displaying content (e.g., information and images) to users.
  • Other I/O devices 410 may be devices that receive various inputs from a user and provide various outputs to the user, and may include a keyboard, a touchpad, a mouse, a printer, audio input/output devices, and so forth.
  • the computer storage media, such as the memory 404 and the mass storage devices 412 , may be used to store software and data, such as, for example, the images 112 , the micro-expression 116 , the machine learning 114 , the media content 122 , the sentiment map 130 , and the like.
  • the computing device 400 may be a media playback device or a media streaming device.
  • the computing device 400 may send media content 122 to the display device 108 (e.g., a television or a display monitor).
  • the camera 110 may be connected to (e.g., attached to or integrated with) the display device 108 .
  • the computing device 400 may receive the images 112 from the camera and analyze (e.g., using machine learning 114 ) the images 112 to determine the micro-expression 116 of one or more users present in the images 112 (e.g., in a field of view of the camera 110 ).
  • the machine learning 114 may determine a sentiment (e.g., one of the sentiments 128 ) associated with the portion of the media content 122 that is being played back on the display device 108 , determine a timestamp (e.g., each of the images 112 may include a timestamp) associated with the images 112 (e.g., a timestamp of the first image), associate the sentiment 128 with the timestamp 126 and send the data 140 that includes the sentiment 128 and the timestamp 126 to the server 104 .
  • the server 104 may store a sentiment map 130 of the media content 122 .
  • the sentiment map 130 may identify, based on the timestamp 126 , a portion (e.g., a scene or a chapter) of the media content 122 and a sentiment 128 associated with the portion 134 .
  • if the sentiment 128 associated with a current portion of the media content 122 that is being played back matches a pre-specified sentiment 142 (e.g., one or more of fear, disgust, anger, or contempt), playback of a remainder of the current portion of the media content 122 may be skipped and playback of a next portion of the media content 122 may be initiated.
  • if the sentiment associated with a current portion of the media content 122 that is being played back on the display device 108 does not match the pre-specified sentiment 142 , playback of the current portion of the media content 122 may continue.
  • the computing device 400 may automatically skip portions of the media content 122 that the user does not enjoy viewing (according to the user's micro-expressions).
  • the term "module" can represent program code (and/or declarative-type instructions) that performs specified tasks or operations when executed on a processing device or devices (e.g., CPUs or processors).
  • the program code can be stored in one or more computer-readable memory devices or other computer storage devices.
  • this disclosure provides various example implementations, as described and as illustrated in the drawings. However, this disclosure is not limited to the implementations described and illustrated herein, but can extend to other implementations, as would be known or as would become known to those skilled in the art. Reference in the specification to “one implementation,” “this implementation,” “these implementations” or “some implementations” means that a particular feature, structure, or characteristic described is included in at least one implementation, and the appearances of these phrases in various places in the specification are not necessarily all referring to the same implementation.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Social Psychology (AREA)
  • Computational Linguistics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Graphics (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

In some examples, a computing device initiates playback of media content on a display device. The computing device receives one or more images from a camera having a field of view that includes one or more viewers of the display device. The computing device may analyze at least one of the images and determine, based on the analysis, a micro-expression being expressed by at least one of the viewers. The computing device may determine a sentiment based on the micro-expression. A timestamp derived from the one or more images may be associated with the sentiment and sent to a server to create a sentiment map of the media content. If the sentiment matches a pre-specified sentiment, then the computing device may skip playback of a remainder of a current portion of the media content that is being displayed and initiate playback of a next portion of the media content.

Description

BACKGROUND OF THE INVENTION
Field of the Invention
This invention relates generally to determining a user's micro-expression when media content is being played back on a display device (e.g., a television) and, more particularly, to sending the captured micro-expressions to a server to create a crowd-based sentiment map of the media content.
Description of the Related Art
As the value and use of information continues to increase, individuals and businesses seek additional ways to process and store information. One option available to users is information handling systems (IHS). An information handling system generally processes, compiles, stores, and/or communicates information or data for business, personal, or other purposes thereby allowing users to take advantage of the value of the information. Because technology and information handling needs and requirements vary between different users or applications, information handling systems may also vary regarding what information is handled, how the information is handled, how much information is processed, stored, or communicated, and how quickly and efficiently the information may be processed, stored, or communicated. The variations in information handling systems allow for information handling systems to be general or configured for a specific user or specific use such as financial transaction processing, airline reservations, enterprise data storage, or global communications. In addition, information handling systems may include a variety of hardware and software components that may be configured to process, store, and communicate information and may include one or more computer systems, data storage systems, and networking systems.
Content ratings are usually associated with an entire media content item (e.g., a movie, a show, or the like). Media content may include audio, video, or both. For example, media content may include audio playbacks, music, songs, news, podcasts, shows (such as comedy shows), or three-dimensional media content. A media content item may be rated “R” (restricted) because of one or two particular scenes. Adults viewing “R” rated media content may not want a child who walks into the room to view the particular scenes but may be okay with the child temporarily viewing other portions of the media content.
SUMMARY OF THE INVENTION
This Summary provides a simplified form of concepts that are further described below in the Detailed Description. This Summary is not intended to identify key or essential features and should therefore not be used for determining or limiting the scope of the claimed subject matter.
In some examples, a computing device initiates playback of media content on a display device. The computing device receives one or more images from a camera having a field of view that includes one or more viewers of the display device. The computing device may analyze at least one of the images and determine, based on the analysis, a micro-expression being expressed by at least one of the viewers. The computing device may determine a sentiment based on the micro-expression. A timestamp derived from the one or more images may be associated with the sentiment and sent to a server to create a sentiment map of the media content. If the sentiment matches a pre-specified sentiment, then the computing device may skip playback of a remainder of a current portion of the media content that is being displayed and initiate playback of a next portion of the media content.
BRIEF DESCRIPTION OF THE DRAWINGS
A more complete understanding of the present disclosure may be obtained by reference to the following Detailed Description when taken in conjunction with the accompanying Drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same reference numbers in different figures indicate similar or identical items.
FIG. 1 is a block diagram of a system that includes a computing device to determine a sentiment associated with an event, according to some embodiments.
FIG. 2 is a block diagram illustrating a sentiment map, according to some embodiments.
FIG. 3 is a flowchart of a process that includes sending a timestamp and a sentiment associated with a portion of media content, according to some embodiments.
FIG. 4 illustrates an example configuration of a computing device that can be used to implement the systems and techniques described herein.
DETAILED DESCRIPTION
For purposes of this disclosure, an information handling system (IHS) may include any instrumentality or aggregate of instrumentalities operable to compute, calculate, determine, classify, process, transmit, receive, retrieve, originate, switch, store, display, communicate, manifest, detect, record, reproduce, handle, or utilize any form of information, intelligence, or data for business, scientific, control, or other purposes. For example, an information handling system may be a personal computer (e.g., desktop or laptop), tablet computer, mobile device (e.g., personal digital assistant (PDA) or smart phone), server (e.g., blade server or rack server), a network storage device, or any other suitable device and may vary in size, shape, performance, functionality, and price. The information handling system may include random access memory (RAM), one or more processing resources such as a central processing unit (CPU) or hardware or software control logic, ROM, and/or other types of nonvolatile memory. Additional components of the information handling system may include one or more disk drives, one or more network ports for communicating with external devices as well as various input and output (I/O) devices, such as a keyboard, a mouse, touchscreen and/or video display. The information handling system may also include one or more buses operable to transmit communications between the various hardware components.
The systems and techniques described herein may monitor a user's facial expressions (e.g., micro-expressions) when viewing media content being displayed on a display device (e.g., a television). The media content may be sent to the display device by a computing device (e.g., IHS), such as, for example, a set-top box or other media streaming device (e.g., Amazon® fire stick, Roku® media box, or the like). The systems and techniques may capture the user's facial expressions in a set of images (e.g., one or more video frames) and timestamp each set of images. A machine learning module may analyze the set of images to identify a micro-expression and to determine a sentiment (e.g., happy, sad, puzzled, disgust, or the like). The systems and techniques may associate the sentiment with a timestamp and send the data (e.g., sentiment and associated timestamp) to a server. The server may receive such data from multiple (e.g., hundreds of thousands of) computing devices and analyze the data to determine a particular sentiment associated with each particular portion of the media content. In this way, each scene in the media content may have an associated sentiment (e.g., average sentiment) based on data received from multiple users.
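The flow just described can be summarized in a short sketch. The following Python is a minimal, hypothetical illustration; the stubbed classifier, the print-based transport, and every identifier in it are assumptions for exposition, not elements of the disclosure:

```python
# A minimal, self-contained sketch of the client-side flow described above.
# The classifier and transport are stubbed; a real device would run a trained
# micro-expression model and report over the network.
SENTIMENTS = ("neutral", "surprise", "fear", "disgust",
              "anger", "happy", "sad", "contempt")

def classify_micro_expression(frame):
    # Stub for the machine-learning inference step (assumption).
    return SENTIMENTS[hash(frame) % len(SENTIMENTS)]

def send_to_server(media_id, timestamp_sec, sentiment):
    # Stub for reporting one (sentiment, timestamp) pair to the server.
    print(f"media={media_id} t={timestamp_sec:.1f}s sentiment={sentiment}")

def monitor(media_id, frames, frame_period_sec=1.0):
    # Each analyzed frame gets a timestamp relative to the start of playback.
    for i, frame in enumerate(frames):
        sentiment = classify_micro_expression(frame)
        send_to_server(media_id, i * frame_period_sec, sentiment)

monitor("movie-123", frames=["f0", "f1", "f2"])
```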
In some cases, the computing device may, substantially in real-time (e.g., less than one second after determining the user's micro-expression), modify the playback of the media content based on a micro-expression of one or more users. For example, if one or more users have a particular type of micro-expression (e.g., disgust), then the computing device may skip a current portion (e.g., scene) of the media content that is being played back and advance playback of the media content to a next portion (e.g., next scene) of the media content.
Thus, when one or more users are viewing media content, a camera may capture micro-expressions associated with the users and associate a timestamp with each micro-expression. The micro-expressions may be associated with a particular portion of the media content, such as a scene in a movie or a show, or an episode in an audio playback or a news broadcast. The micro-expressions may be summarized in the form of a sentiment (e.g., happy, sad, disgust, confused, and the like) and sent to a server. Data from multiple cameras may be sent to the server to enable the server to create a crowd-based sentiment map in which individual portions of the media content may have an associated sentiment.
In addition, the computing device may modify playback of the media content substantially in real-time based on the user's micro-expression (or sentiment). For example, if the micro-expression indicates disgust or surprise (e.g., a child walked into the room during an adult-oriented scene), the computing device may skip playback of a current portion (e.g., scene or chapter) and advance playback of the media content to a next portion (e.g., a next scene or a next chapter).
Thus, a computing device, such as a media playback device or a media streaming device may send media content to a display device (e.g., a television or a display monitor). A camera may be connected to (e.g., attached to or integrated with) the display device. The computing device may receive images from the camera and analyze (e.g., using machine learning) the images to determine a micro-expression of one or more users present in the images (e.g., in a field of view of the camera). The machine learning may determine a sentiment associated with the portion of the media content that is being played back, determine a timestamp (e.g., each of the images may include a timestamp) associated with the images (e.g., a timestamp of the first image), associate the sentiment with the timestamp and send the sentiment and timestamp to a server. In this way, the server may store a sentiment map of the media content. The sentiment map may identify, based on the timestamp, a portion (e.g., a scene or a chapter) of the media content and a sentiment associated with the portion.
During playback of the media content, if the sentiment associated with a current portion of the media content that is being played back matches a pre-specified sentiment (e.g., one or more of fear, disgust, anger, or contempt), then playback of a remainder of the current portion of the media content may be skipped and playback of a next portion of the media content may be initiated. Otherwise, e.g., if the sentiment associated with a current portion of the media content that is being played back does not match a pre-specified sentiment, then playback of the current portion of the media content may continue. In this way, the computing device may skip zero or more portions of the media content that the user does not enjoy viewing.
As an example, a computing device may include one or more processors and one or more non-transitory computer-readable storage media to store instructions executable by the one or more processors to perform various operations. For example, the operations may include initiating playback of media content on a display device that is connected to the computing device. The computing device may be a set-top box device, a media streaming device, or a combination of both. The operations may include receiving one or more images from a camera connected to the computing device. The camera may have a field of view that includes one or more viewers viewing the display device. The operations may include performing an analysis of at least one image of the one or more images and determining, based on the analysis, that the at least one image includes a micro-expression being expressed by at least one viewer of the one or more viewers. The operations may include determining a sentiment corresponding to the micro-expression. The operations may include determining a timestamp associated with the at least one image and associating the sentiment with the timestamp. The operations may include sending the sentiment and the timestamp to a server. The operations may include determining that the sentiment comprises a pre-specified sentiment, automatically skipping playback of a remainder of a current portion of the media content that is being played back on the display device, and automatically initiating playback of a next portion of the media content. For example, the current portion may include a particular chapter of a movie and the next portion may include a next chapter of the movie. As another example, the current portion may include a particular scene of a show and the next portion may include a next scene of the show. After playback of the media content has completed, the computing device, the server, or both may create a sentiment map associated with the media content. The sentiment map includes a particular sentiment associated with individual portions of a plurality of portions of the media content. The sentiment may be one of: a neutral sentiment, a surprise sentiment, a fear sentiment, a disgust sentiment, an angry sentiment, a happy sentiment, a sad sentiment, or a contempt sentiment.
As a second example, a computing device may include one or more processors and one or more non-transitory computer-readable storage media to store instructions executable by the one or more processors to perform various operations. For example, the operations may include initiating playback of media content on a display device that is connected to the computing device. The computing device may be a set-top box device, a media streaming device, or a combination of both. The operations may include receiving one or more images from a camera connected to the computing device. The camera may have a field of view that includes one or more viewers viewing the display device. The operations may include performing an analysis of at least one image of the one or more images and determining that the at least one image includes a particular micro-expression being expressed by the one or more viewers. The operations may include determining a sentiment corresponding to the micro-expression. The operations may include determining a timestamp associated with the at least one image, associating the sentiment with the timestamp, and sending the sentiment and the timestamp to a server. The operations may include determining that the sentiment is one of a pre-specified set of (one or more) sentiments. In response, the computing device may automatically skip playback of a remainder of a current portion of the media content and automatically initiate playback of a next portion of the media content. For example, the current portion may include a particular chapter of a movie and the next portion may include a next chapter of the movie. As another example, the current portion may include a particular scene of a show and the next portion may include a next scene of the show. The pre-specified sentiment may include at least one of: a surprise sentiment, a fear sentiment, a disgust sentiment, an angry sentiment, or a contempt sentiment. After the computing device has completed playback of the media content, the server, the computing device, or both may create a sentiment map in which a particular sentiment is associated with an individual portion of a plurality of portions of the media content.
FIG. 1 is a block diagram of a system 100 that includes a computing device to determine a sentiment associated with an event, according to some embodiments. The system 100 may include a computing device 102 (e.g., an information handling system) connected to a server 104 via a network 106. The computing device 102 may be a media playback device, such as a set-top box device, a streaming media device, or the like, that is capable of sending media content 122 to a display device 108 that is connected to the computing device 102. A camera 110 may be connected to or integrated into the display device 108. The camera 110 may capture and send one or more images 112 (e.g., video frames), at a predetermined rate (e.g., 1, 15, 30, or 60 frames per second (fps)), to the computing device 102.
The network 106 may include multiple networks using multiple technologies, such as wired and wireless technologies. For example, a media content distribution company may make media content available to a home via a cable connection, a fiber connection, a satellite connection, an internet connection, or the like.
The computing device 102 may receive the one or more images 112 at a predetermined time interval from the camera 110. The one or more images 112 may include a micro-expression 116 of a user 150 when viewing a portion 134(Q) (e.g., a set of video frames, such as a scene) of the media content 122 (e.g., a movie, a show such as a comedy show, an audio playback, a news broadcast, or the like). The media content 122 may include multiple portions 134(1) to 134(P), where Q<=P and P>1. For illustration purposes, the examples discussed herein assume Q<P, where 134(Q) is the current portion of the media content 122. The computing device 102 may use machine learning 114 to identify a sentiment based on the micro-expression 116, determine a timestamp of a first image of the images 112, and associate the sentiment with the timestamp. In this way, after playing back the media content 122, the computing device 102 may have created a sentiment map 130. For example, the media content 122 has multiple portions 134 (e.g., scenes), and a timestamp 126 (e.g., a time when the portion starts) may be associated with a sentiment 128 that is determined based on the micro-expression of the user during each of the portions 134 of the media content 122. For example, for the media content 122, a timestamp 126(1) may have an associated sentiment 128(1) and a timestamp 126(N) may have an associated sentiment 128(N). To illustrate, a typical movie may have between about 40 and about 60 scenes (portions 134), a one-hour show may have about 20 to about 30 scenes (portions 134), and a half-hour show may have about 10 to about 15 scenes (portions 134). The user 150 may use an input device 118, such as a remote control, to provide input data 120 to the computing device 102.
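For illustration only, the sentiment map 130 can be pictured as an ordered list of (timestamp, sentiment) pairs per media item. The sketch below shows one possible in-memory shape; its class, field, and method names are assumptions, not reference numerals from the figures:

```python
# One possible in-memory shape for a per-title sentiment map: an ordered list
# of (timestamp, sentiment) pairs. All names here are illustrative.
from dataclasses import dataclass, field

@dataclass
class SentimentMap:
    media_id: str
    entries: list = field(default_factory=list)  # [(timestamp_sec, sentiment)]

    def record(self, timestamp_sec, sentiment):
        self.entries.append((timestamp_sec, sentiment))

    def sentiment_at(self, t_sec):
        # Last sentiment recorded at or before t_sec, if any.
        latest = None
        for ts, s in self.entries:
            if ts <= t_sec:
                latest = s
        return latest

m = SentimentMap("movie-123")
m.record(0.0, "neutral")      # first scene starts
m.record(95.0, "happy")       # second scene starts
print(m.sentiment_at(100.0))  # -> happy
```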
Each of the sentiments 128 may correspond to one of multiple micro-expressions, such as neutral, surprise, fear, disgust, anger, happiness, sadness, or contempt. The neutral micro-expression may include eyes and eyebrows neutral and the mouth opened or closed with few wrinkles. The surprise micro-expression may include raised eyebrows, stretched skin below the brow, horizontal wrinkles across the forehead, open eyelids, whites of the eye (both above and below the eye) showing, jaw open and teeth parted, or any combination thereof. The fear micro-expression may include one or more eyebrows that are raised and drawn together (often in a flat line), wrinkles in the forehead between (but not across) the eyebrows, raised upper eyelid, tense (e.g., drawn up) lower eyelid, upper (but not lower) whites of eyes showing, mouth open, lips slightly tensed or stretched and drawn back, or any combination thereof. The disgust micro-expression may include a raised upper eyelid, raised lower lip, wrinkled nose, raised cheeks, lines below the lower eyelid, or any combination thereof. The anger micro-expression may include eyebrows that are lowered and drawn together, vertical lines between the eyebrows, tense lower eyelid(s), eyes staring or bulging, lips pressed firmly together (with corners down or in a square shape), nostrils flared (e.g., dilated), lower jaw jutting out, or any combination thereof. The happiness micro-expression may include the corners of the lips drawn back and up, the mouth may be parted with teeth exposed, a wrinkle may run from the outer nose to the outer lip, cheeks may be raised, lower eyelid may show wrinkles, crow's feet near the eyes, or any combination thereof. The sadness micro-expression may include the inner corners of the eyebrows drawn in and up, triangulated skin below the eyebrows, one or both corners of the lips drawn down, jaw up, lower lip pouts out, or any combination thereof. The contempt (e.g., hate) micro-expression may include one side of the mouth raised.
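The cues above can be pictured as a lookup table that a naive, rule-based matcher might consult. The cue strings below paraphrase the preceding paragraph, and the table-plus-overlap matcher is only an illustrative stand-in for the trained machine learning 114:

```python
# Facial cues per sentiment, condensed from the description above. The
# overlap-counting matcher is a toy assumption, not the disclosed model.
MICRO_EXPRESSION_CUES = {
    "neutral":  {"eyes and eyebrows neutral", "few wrinkles"},
    "surprise": {"raised eyebrows", "horizontal forehead wrinkles", "jaw open"},
    "fear":     {"eyebrows raised and drawn together", "raised upper eyelid"},
    "disgust":  {"wrinkled nose", "raised cheeks", "raised lower lip"},
    "anger":    {"eyebrows lowered and drawn together", "lips pressed together"},
    "happy":    {"lip corners back and up", "raised cheeks", "crow's feet"},
    "sad":      {"inner eyebrow corners in and up", "lip corners down"},
    "contempt": {"one side of the mouth raised"},
}

def match_sentiment(observed):
    # Pick the sentiment whose cue set overlaps the observed cues the most.
    return max(MICRO_EXPRESSION_CUES,
               key=lambda s: len(observed & MICRO_EXPRESSION_CUES[s]))

print(match_sentiment({"wrinkled nose", "raised cheeks"}))  # -> disgust
```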
After the computing device 102 determines a particular sentiment associated with a particular portion 134 of the media content 122, the computing device 102 may send data 140 to the server 104. The data 140 may include a current one of the timestamps 126 and the associated one of the sentiments 128. In this way, after the computing device 102 has completed playback of the media content 122, the server 104 may have a sentiment map 130 associated with the media content 122. For example, the server 104 may use a database to store multiple media content items 132(1) to 132(M) (M>0) and associated sentiment maps 136(1) to 136(M), respectively. In some cases, an analyzer 138 may analyze the sentiment maps 136 to identify the types of sentiments that are popular. For example, the media content provider may determine that particular locations (e.g., zip codes) predominantly view horror movies (e.g., movies that cause micro-expressions associated with the sentiment of fear) and the like. In this way, the media content provider may customize commercials or other advertisements based on the type of content being viewed in each location.
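As a sketch of the crowd-level aggregation the analyzer 138 might perform, the following bins (timestamp, sentiment) reports from many devices by portion start time and takes a majority vote per portion. The binning scheme and the majority rule are assumptions, since the disclosure leaves the aggregation method open:

```python
# Collapse (timestamp, sentiment) reports from many devices into a majority
# sentiment per portion. Assumes portion_starts[0] covers the earliest report.
from collections import Counter, defaultdict

def crowd_sentiment(reports, portion_starts):
    # reports: iterable of (timestamp_sec, sentiment) pairs from many devices.
    # portion_starts: sorted start times (seconds) of the portions.
    votes = defaultdict(Counter)
    for ts, sentiment in reports:
        # Assign each report to the last portion starting at or before ts.
        portion = max(p for p in portion_starts if p <= ts)
        votes[portion][sentiment] += 1
    return {p: c.most_common(1)[0][0] for p, c in votes.items()}

reports = [(12, "happy"), (14, "happy"), (15, "neutral"), (70, "disgust")]
print(crowd_sentiment(reports, portion_starts=[0, 60]))
# -> {0: 'happy', 60: 'disgust'}
```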
The user 150 may use the input device 118 to instruct the computing device 102 to initiate playback of the media content 122 on the display device 108, causing the display device 108 to display the portion 134(Q) of the media content 122. The computing device 102 may determine that the camera 110 is available and use the camera 110 to capture the images 112 when the user 150 is viewing the portion 134(Q) of the media content 122. The computing device 102 may receive the images 112 and use the machine learning 114 to identify the micro-expression 116 and the corresponding one of the sentiments 128 associated with the portion 134(Q). The computing device 102 may associate a timestamp with the sentiment. The computing device 102 may send the timestamp 126 and the associated sentiment 128 to the server 104.
If the computing device 102 determines that a current one of the sentiments 128 associated with the portion 134(Q) is a pre-specified sentiment 142, then the computing device 102 may skip playback of a remainder of the current portion 134(Q) of the media content 122 and initiate playback of a next portion 134 of the media content 122. For example, if the pre-specified sentiment 142 is disgust and the computing device 102 determines that the micro-expression 116 expresses disgust, then the computing device 102 may skip playback of the current portion 134(Q) of the media content 122 and initiate playback of a next portion of the media content 122. To illustrate, when the user 150 is viewing adult content and a child walks into the room, the user 150, the child, or both may have the micro-expression 116 that expresses the sentiment of disgust. In response to detecting the sentiment of disgust in the micro-expression 116 of the images 112, the computing device 102 may automatically skip playback of the current portion 134(Q) of the media content 122 and initiate playback of a next portion of the media content 122.
In other cases, a different sentiment besides disgust may be the pre-specified sentiment 142 to cause automatic skipping of a portion of the media content 122. For example, depending on the implementation, determining one or more of surprise, fear, anger, or contempt in the micro-expression 116 may be used to cause the computing device 102 to skip playback of the current portion 134(Q) of the media content 122 and initiate playback of a next one of the portions 134 of the media content 122.
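The skip rule in the two preceding paragraphs reduces to a small predicate plus a playback hook, sketched below; the player interface (skip_to_next_portion) is an assumed abstraction over scene or chapter markers, not an API from this disclosure:

```python
# Skip the remainder of the current portion when the inferred sentiment is in
# the pre-specified set; otherwise let playback continue.
PRE_SPECIFIED_SENTIMENTS = {"surprise", "fear", "disgust", "anger", "contempt"}

def should_skip(sentiment, pre_specified=PRE_SPECIFIED_SENTIMENTS):
    return sentiment in pre_specified

def on_sentiment(player, sentiment):
    # Called each time a new sentiment is inferred during playback.
    if should_skip(sentiment):
        player.skip_to_next_portion()  # jump past the remainder of the scene
    # otherwise playback of the current portion simply continues
```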
In some cases, the machine learning 114 may learn which portions of media content to skip based on the behavior of the user 150. For example, the user 150 may be uncomfortable viewing certain types of scenes (e.g., a graphic sexual scene, masochism, nudity, or the like) and use the input device 118 to instruct the computing device 102 to skip to a next one of the portions 134 (e.g., a next chapter marker in a video). The user 150 may exhibit one or more particular micro-expressions corresponding to one or more particular sentiments when the user 150 is uncomfortable viewing certain types of scenes. The machine learning 114 may correlate the input data 120 (e.g., instructing, using the input device 118, the computing device 102 to skip to a next scene or chapter) with a current sentiment (one of the sentiments 128) and perform machine learning. In this way, the machine learning 114 may, in response to determining a particular micro-expression in the images 112 associated with the portion 134(Q), determine that the user 150 is uncomfortable (e.g., one or more of disgust, contempt, surprise, fear, or anger), and automatically (e.g., without human interaction) skip the current portion 134(Q) of the media content 122 and initiate playback of a next one of the portions 134 of the media content 122. Of course, this may apply to all users whose faces are within the field of view of the camera 110. For example, the user 150 may be viewing adult media content and another person (e.g., a spouse, a child, one of the user's parents, one of the user's grandparents, or the like) may walk into the room. If the camera 110 captures the images 112 that include the other person and the other person's micro-expression 116 expresses one or more of the pre-specified sentiments 142 (e.g., one or more of disgust, contempt, surprise, fear, or anger), then the computing device 102 may automatically (e.g., without the user 150 doing anything) skip the current portion 134(Q) and initiate playback of a next one of the portions 134 (e.g., next chapter) of the media content 122.
The machine learning 114 may learn what types of scenes the user 150 is interested in viewing and which types of scenes the user 150 is uninterested in viewing. For example, a car enthusiast may enjoy watching scenes that include car chases. The computing device 102 may learn this by determining that the user 150 uses the input device 118 to skip scenes that do not include car chases. After the machine learning 114 has learned this behavior, the machine learning 114 may automatically skip playback of the portion 134(Q) and skip to a next one of the portions 134 of the media content 122 when the micro-expression 116 indicates particular sentiments (e.g., one or more of neutral, disgust, anger, or contempt) that indicate that the user is not interested in viewing the current portion 134(Q).
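One simple form this learning could take is to count how often a manual skip from the input device 118 co-occurs with each inferred sentiment, and to promote a sentiment to the auto-skip set after enough co-occurrences. The counter-and-threshold scheme below is an assumption; the disclosure does not fix a particular learning method:

```python
# Correlate manual skip events with the sentiment inferred at the time the
# user pressed skip; after `threshold` co-occurrences, skip automatically.
from collections import Counter

class SkipLearner:
    def __init__(self, threshold=3):
        self.votes = Counter()
        self.threshold = threshold
        self.auto_skip = set()

    def observe(self, sentiment, user_pressed_skip):
        # Record the user's skip input alongside the current sentiment.
        if user_pressed_skip:
            self.votes[sentiment] += 1
            if self.votes[sentiment] >= self.threshold:
                self.auto_skip.add(sentiment)

    def should_auto_skip(self, sentiment):
        return sentiment in self.auto_skip

learner = SkipLearner()
for _ in range(3):
    learner.observe("disgust", user_pressed_skip=True)
print(learner.should_auto_skip("disgust"))  # -> True
```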
Thus, a computing device, such as a media playback device or a media streaming device may send media content to a display device (e.g., an audio player, a television or a display monitor). A camera may be connected to (e.g., attached to or integrated with) the display device. The computing device may receive images from the camera and analyze (e.g., using machine learning) the images to determine a micro-expression of one or more users present in the images (e.g., in a field of view of the camera). The machine learning may determine a sentiment associated with the portion of the media content that is being played back, determine a timestamp (e.g., each of the images may include a timestamp) associated with the images (e.g., a timestamp of the first image), associate the sentiment with the timestamp and send the sentiment and timestamp to a server. In this way, the server may store a sentiment map of the media content. The sentiment map may identify, based on the timestamp, a portion (e.g., a scene or a chapter) of the media content and a sentiment associated with the portion.
During playback of the media content, if the sentiment associated with a current portion of the media content that is being played back matches a pre-specified sentiment (e.g., one or more of fear, disgust, anger, or contempt), then playback of a remainder of the current portion of the media content may be skipped and playback of a next portion of the media content may be initiated. Otherwise, e.g., if the sentiment associated with a current portion of the media content that is being played back does not match a pre-specified sentiment, then playback of the current portion of the media content may continue. In this way, the computing device may skip zero or more portions of the media content that the user does not enjoy viewing, thereby providing a more enjoyable media content viewing experience.
FIG. 2 is a block diagram 200 illustrating a sentiment map, according to some embodiments.
The machine learning 114 may analyze one or more images 202(1) (e.g., the images 112 of FIG. 1) from the camera 110 and determine a sentiment 128(1) (e.g., neutral). The machine learning 114 may associate the timestamp 126(1) from the images 202(1) to indicate when the user displayed the sentiment 128(1). In this way, the sentiment map 136(M) indicates that when the portion 134(1) is being played back, the user 150 displays the sentiment 128(1) at the time 124(1).
The machine learning 114 may analyze one or more images 202(2) (e.g., the images 112 of FIG. 1) from the camera 110 and determine a sentiment 128(2) (e.g., contempt or disgust). The machine learning 114 may associate the timestamp 126(2) from the images 202(2) to indicate when the user displayed the sentiment 128(2). In this way, the sentiment map 136(M) indicates that when the portion 134(2) is being played back, the user 150 displays the sentiment 128(2) at the time 124(2). Here, because the user 150 displays the sentiment of disgust or contempt, the machine learning 114 may automatically skip playback of a remainder of the portion 134(2) and skip to the next portion 134(3).
The machine learning 114 may analyze one or more images 202(3) (e.g., the images 112 of FIG. 1) from the camera 110 and determine a sentiment 128(3) (e.g., happy). The machine learning 114 may associate the timestamp 126(3) from the images 202(3) to indicate when the user displayed the sentiment 128(3). In this way, the sentiment map 136(M) indicates that when the portion 134(3) is being played back, the user 150 displays the sentiment 128(3) at the time 124(3).
The machine learning 114 may analyze one or more images 202(4) (e.g., the images 112 of FIG. 1) from the camera 110 and determine a sentiment 128(4) (e.g., contempt or disgust). The machine learning 114 may associate the timestamp 126(4) from the images 202(4) to indicate when the user displayed the sentiment 128(4). In this way, the sentiment map 136(M) indicates that when the portion 134(4) is being played back, the user 150 displays the sentiment 128(4) at the time 124(4). Here, because the user 150 displays the sentiment of disgust or contempt, the machine learning 114 may automatically skip playback of a remainder of the portion 134(4) and skip to the next portion 134(5).
The machine learning 114 may analyze one or more images 202(5) (e.g., the images 112 of FIG. 1) from the camera 110 and determine a sentiment 128(5) (e.g., neutral). The machine learning 114 may associate the timestamp 126(5) from the images 202(5) to indicate when the user displayed the sentiment 128(5). In this way, the sentiment map 136(M) indicates that when the portion 134(5) is being played back, the user 150 displays the sentiment 128(5) at the time 124(5).
The machine learning 114 may analyze one or more images 202(P) (e.g., the images 112 of FIG. 1) from the camera 110 and determine a sentiment 128(P) (e.g., happy). The machine learning 114 may associate the timestamp 126(P) from the images 202(P) to indicate when the user displayed the sentiment 128(P). In this way, the sentiment map 136(M) indicates that when the portion 134(P) is being played back, the user 150 displays the sentiment 128(P) at the time 124(P).
Thus, images may be analyzed to identify a sentiment of a user, at what time the user displayed the sentiment, which portion of the media content the user was viewing, and the like. If the user displays one or more pre-specified (or learned) sentiments, then the machine learning 114 may automatically skip playback of a remainder of the current portion and initiate playback of a next portion of the media content.
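The FIG. 2 walkthrough can be replayed in a few lines: each analyzed image set yields one (timestamp, sentiment) map entry, and a disgust or contempt reading triggers a skip of the remainder of that portion. The timestamps below are made-up example values:

```python
# Replay of the FIG. 2 walkthrough with illustrative timestamps.
observations = [             # (portion index, timestamp_sec, inferred sentiment)
    (1, 0.0,   "neutral"),
    (2, 95.0,  "disgust"),   # remainder of portion 2 skipped
    (3, 140.0, "happy"),
    (4, 260.0, "contempt"),  # remainder of portion 4 skipped
    (5, 300.0, "neutral"),
]

SKIP_ON = {"disgust", "contempt"}
sentiment_map = []
for portion, ts, sentiment in observations:
    sentiment_map.append((ts, sentiment))  # entry also sent to the server
    if sentiment in SKIP_ON:
        print(f"skipping remainder of portion {portion}")
```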
In the flow diagram of FIG. 3, each block represents one or more operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the blocks represent computer-executable instructions that, when executed by one or more processors, cause the processors to perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, modules, components, data structures, and the like that perform particular functions or implement particular abstract data types. The order in which the blocks are described is not intended to be construed as a limitation, and any number of the described operations can be combined in any order and/or in parallel to implement the processes. For discussion purposes, the process 300 is described with reference to FIGS. 1 and 2 as described above, although other models, frameworks, systems and environments may be used to implement this process.
FIG. 3 is a flowchart of a process 300 that includes sending a timestamp and a sentiment associated with a portion of media content, according to some embodiments. The process 300 may be performed by the computing device 102 of FIGS. 1 and 2.
At 302, the process may initiate playback of media content on a display device. At 304, the process may determine that a camera is accessible. At 306, the process may capture one or more images using the camera. For example, in FIG. 1, the user 150 may use the input device 118 to instruct the computing device 102 to initiate playback of the media content 122 on the display device 108, causing the display device 108 to display the portion 134(Q) of the media content 122. The computing device 102 may determine that the camera 110 is available and use the camera 110 to capture the images 112 when the user 150 is viewing the portion 134(Q) of the media content 122.
At 308, the process may determine a micro-expression expressed in the one or more images. At 310, the process may determine a sentiment based on the micro-expression. For example, in FIG. 1, the computing device 102 may receive the images 112 and use the machine learning 114 to identify the micro-expression 116 and the corresponding one of the sentiments 128 associated with the portion 134(Q). The computing device 102 may associate a timestamp with the sentiment. The computing device 102 may send the timestamp 126 and the associated sentiment 128 to the server 104.
At 312, the process may determine a timestamp and associate the timestamp with the sentiment. At 314, the process may send the timestamp and the sentiment to a server. For example, in FIG. 1, the computing device 102 may associate a timestamp with the sentiment. The computing device 102 may send the timestamp 126 and the associated sentiment 128 to the server 104.
At 316, the process may determine whether the sentiment is one of the pre-specified sentiments. If the process determines, at 316, that "yes" the sentiment is one of the pre-specified sentiments, then the process may proceed to 318, where playback of a remainder of a current portion of the media content may be skipped and playback of a next portion of the media content may be initiated. If the process determines, at 316, that "no" the sentiment is not one of the pre-specified sentiments, then the process may proceed to 320. At 320, the process may continue playback of the media content and the process may proceed to 306 to capture additional images using the camera. For example, in FIG. 1, if the computing device 102 determines that a current one of the sentiments 128 associated with the portion 134(Q) is a pre-specified sentiment 142, then the computing device 102 may skip playback of a remainder of the current portion 134(Q) of the media content 122 and initiate playback of a next portion 134 of the media content 122. For example, if the pre-specified sentiment 142 is disgust and the computing device 102 determines that the micro-expression 116 expresses disgust, then the computing device 102 may skip playback of the current portion 134(Q) of the media content 122 and initiate playback of a next portion of the media content 122. If the computing device 102 determines that a current one of the sentiments 128 associated with the portion 134(Q) is not one of the pre-specified sentiments 142, then the computing device 102 may continue playback of the current portion 134(Q) of the media content 122.
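Putting the blocks of process 300 together, a minimal sketch (with the same stubbed camera, classifier, and server assumptions as the earlier examples) looks like this:

```python
# Process 300 end to end. `portions` is a list of frame lists; `classify`
# maps a frame to a sentiment; `send` reports (sentiment, timestamp) to the
# server. All names are illustrative stubs.
def process_300(portions, classify, pre_specified, send, frame_period=1.0):
    t = 0.0
    for frames in portions:                 # 302: sequential playback
        for frame in frames:                # 306: capture images
            sentiment = classify(frame)     # 308-310: expression -> sentiment
            send(sentiment, t)              # 312-314: timestamp and report
            t += frame_period
            if sentiment in pre_specified:  # 316: pre-specified match?
                break                       # 318: skip remainder; next portion
        # 320: otherwise the portion simply plays to completion

process_300(
    portions=[["f0", "f1"], ["f2"]],
    classify=lambda f: "disgust" if f == "f1" else "neutral",
    pre_specified={"disgust"},
    send=lambda s, t: print(f"t={t:.0f}s sentiment={s}"),
)
```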
Thus, a computing device, such as a media playback device or a media streaming device may send media content to a display device (e.g., a television or a display monitor). A camera may be connected to (e.g., attached to or integrated with) the display device. The computing device may receive images from the camera and analyze (e.g., using machine learning) the images to determine a micro-expression of one or more users present in the images (e.g., in a field of view of the camera). The machine learning may determine a sentiment associated with the portion of the media content that is being played back, determine a timestamp (e.g., each of the images may include a timestamp) associated with the images (e.g., a timestamp of the first image), associate the sentiment with the timestamp and send the sentiment and timestamp to a server. In this way, the server may store a sentiment map of the media content. The sentiment map may identify, based on the timestamp, a portion (e.g., a scene or a chapter) of the media content and a sentiment associated with the portion.
During playback of the media content, if the sentiment associated with a current portion of the media content that is being played back matches a pre-specified sentiment (e.g., one or more of fear, disgust, anger, or contempt), then playback of a remainder of the current portion of the media content may be skipped and playback of a next portion of the media content may be initiated. Otherwise, e.g., if the sentiment associated with a current portion of the media content that is being played back does not match a pre-specified sentiment, then playback of the current portion of the media content may continue. In this way, the computing device may skip zero or more portions of the media content that the user does not enjoy viewing.
In this way, portions of the media content that the user(s) feel uncomfortable viewing may be automatically skipped to improve the user(s) experience when viewing media content.
FIG. 4 illustrates an example configuration of a computing device 400 that can be used to implement the computing device 102 or the server 104 of FIGS. 1 and 2. For illustration purposes, in FIG. 4, the computing device 400 is shown implementing the computing device 102 of FIGS. 1 and 2.
The computing device 102 may include one or more processors 402 (e.g., CPU, GPU, or the like), a memory 404, communication interfaces 406, a display device 408, input devices (e.g., the input device 118 of FIG. 1), other input/output (I/O) devices 410 (e.g., trackball and the like), and one or more mass storage devices 412 (e.g., disk drive, solid state disk drive, or the like), configured to communicate with each other, such as via one or more system buses 414 or other suitable connections. While a single system bus 414 is illustrated for ease of understanding, it should be understood that the system buses 414 may include multiple buses, such as a memory device bus, a storage device bus (e.g., serial ATA (SATA) and the like), data buses (e.g., universal serial bus (USB) and the like), video signal buses (e.g., ThunderBolt®, DVI, HDMI, and the like), power buses, etc.
The processors 402 are one or more hardware devices that may include a single processing unit or a number of processing units, all of which may include single or multiple computing units or multiple cores. The processors 402 may include a graphics processing unit (GPU) that is integrated into the CPU or the GPU may be a separate processor device from the CPU. The processors 402 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, graphics processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processors 402 may be configured to fetch and execute computer-readable instructions stored in the memory 404, mass storage devices 412, or other computer-readable media.
Memory 404 and mass storage devices 412 are examples of computer storage media (e.g., memory storage devices) for storing instructions that can be executed by the processors 402 to perform the various functions described herein. For example, memory 404 may include both volatile memory and non-volatile memory (e.g., RAM, ROM, or the like) devices. Further, mass storage devices 412 may include hard disk drives, solid-state drives, removable media, including external and removable drives, memory cards, flash memory, floppy disks, optical disks (e.g., CD, DVD), a storage array, a network attached storage, a storage area network, or the like. Both memory 404 and mass storage devices 412 may be collectively referred to as memory or computer storage media herein and may be any type of non-transitory media capable of storing computer-readable, processor-executable program instructions as computer program code that can be executed by the processors 402 as a particular machine configured for carrying out the operations and functions described in the implementations herein.
The computing device 400 may include one or more communication interfaces 406 for exchanging data via the network 106 (e.g., with the server 104). The communication interfaces 406 can facilitate communications within a wide variety of networks and protocol types, including wired networks (e.g., Ethernet, DOCSIS, DSL, Fiber, USB etc.) and wireless networks (e.g., WLAN, GSM, CDMA, 802.11, Bluetooth, Wireless USB, ZigBee, cellular, satellite, etc.), the Internet and the like. Communication interfaces 406 can also provide communication with external storage, such as a storage array, network attached storage, storage area network, cloud storage, or the like.
The display device 408 may be used for displaying content (e.g., information and images) to users. Other I/O devices 410 may be devices that receive various inputs from a user and provide various outputs to the user, and may include a keyboard, a touchpad, a mouse, a printer, audio input/output devices, and so forth. The computer storage media, such as the memory 404 and mass storage devices 412, may be used to store software and data, such as, for example, the images 112, the micro-expression 116, the machine learning 114, the media content 122, the sentiment map 130, and the like.
Thus, the computing device 400 may be a media playback device or a media streaming device. The computing device 400 may send media content 122 to the display device 108 (e.g., a television or a display monitor). The camera 110 may be connected to (e.g., attached to or integrated with) the display device 108. The computing device 400 may receive the images 112 from the camera 110 and analyze (e.g., using machine learning 114) the images 112 to determine the micro-expression 116 of one or more users present in the images 112 (e.g., in a field of view of the camera 110). The machine learning 114 may determine a sentiment (e.g., one of the sentiments 128) associated with the portion of the media content 122 that is being played back on the display device 108, determine a timestamp (e.g., each of the images 112 may include a timestamp) associated with the images 112 (e.g., a timestamp of the first image), associate the sentiment 128 with the timestamp 126, and send the data 140 that includes the sentiment 128 and the timestamp 126 to the server 104. In this way, the server 104 may store a sentiment map 130 of the media content 122. The sentiment map 130 may identify, based on the timestamp 126, a portion (e.g., a scene or a chapter) of the media content 122 and a sentiment 128 associated with the portion 134.
During playback of the media content 122, if the sentiment 128 associated with a current portion of the media content 122 that is being played back matches a pre-specified sentiment 142 (e.g., one or more of fear, disgust, anger, or contempt), then playback of a remainder of the current portion of the media content 122 may be skipped and playback of a next portion of the media content 122 may be initiated. Otherwise, e.g., if the sentiment associated with a current portion of the media content 122 that is being played back on the display device 108 does not match the pre-specified sentiment 142, then playback of the current portion of the media content 122 may continue. In this way, the computing device 400 may automatically skip portions of the media content 122 that the user does not enjoy viewing (according to the user's micro-expressions).
The example systems and computing devices described herein are merely examples suitable for some implementations and are not intended to suggest any limitation as to the scope of use or functionality of the environments, architectures and frameworks that can implement the processes, components and features described herein. Thus, implementations herein are operational with numerous environments or architectures, and may be implemented in general purpose and special-purpose computing systems, or other devices having processing capability. Generally, any of the functions described with reference to the figures can be implemented using software, hardware (e.g., fixed logic circuitry) or a combination of these implementations. The term “module,” “mechanism” or “component” as used herein generally represents software, hardware, or a combination of software and hardware that can be configured to implement prescribed functions. For instance, in the case of a software implementation, the term “module,” “mechanism” or “component” can represent program code (and/or declarative-type instructions) that performs specified tasks or operations when executed on a processing device or devices (e.g., CPUs or processors). The program code can be stored in one or more computer-readable memory devices or other computer storage devices. Thus, the processes, components and modules described herein may be implemented by a computer program product.
Furthermore, this disclosure provides various example implementations, as described and as illustrated in the drawings. However, this disclosure is not limited to the implementations described and illustrated herein, but can extend to other implementations, as would be known or as would become known to those skilled in the art. Reference in the specification to “one implementation,” “this implementation,” “these implementations” or “some implementations” means that a particular feature, structure, or characteristic described is included in at least one implementation, and the appearances of these phrases in various places in the specification are not necessarily all referring to the same implementation.
Although the present invention has been described in connection with several embodiments, the invention is not intended to be limited to the specific forms set forth herein. On the contrary, it is intended to cover such alternatives, modifications, and equivalents as can be reasonably included within the scope of the invention as defined by the appended claims.

Claims (18)

What is claimed is:
1. A method comprising:
initiating, by one or more processors of a computing device, sequential playback of a plurality of portions of media content on a display device that is connected to the computing device;
after the sequential playback initiation, initiating, by the one or more processors, playback of a portion of the plurality of portions on the display device;
receiving, by the one or more processors, one or more images from a camera connected to the computing device, the camera having a field of view that includes one or more viewers viewing the display device, wherein the one or more images of the one or more viewers are captured by the camera as the portion is played back on the display, and received as the portion is played back on the display;
performing an analysis, by the one or more processors, of at least one image of the one or more images;
determining, by the one or more processors and based on the analysis, that the at least one image includes a micro-expression being expressed by the one or more viewers;
determining, by the one or more processors, a sentiment corresponding to the micro-expression;
automatically skipping playback of a remainder of the portion that is being played back on the display device based on the micro-expression expressed by the one or more viewers captured by the camera as the portion is played back on the display, and received as the portion is played back on the display;
determining, by the one or more processors, a timestamp associated with the at least one image, wherein the timestamp is defined as a time interval between a time when the sequential playback of the plurality of portions was initiated and a time when the playback of the portion was initiated, or the timestamp is defined as a time interval between the time when the sequential playback of the plurality of portions was initiated and a time when the one or more images are received from the camera;
associating, by the one or more processors, the sentiment with the timestamp; and
sending, by the one or more processors, the sentiment and the timestamp to a server;
the server creating a sentiment map;
the server identifying the portion based on the timestamp it receives;
wherein the server updates the sentiment map by mapping the sentiment it receives to the identified portion.
2. The method of claim 1, further comprising:
determining that the sentiment comprises a pre-specified sentiment;
automatically skipping playback of a remainder of the portion that is being played back on the display device; and
automatically initiating playback of another portion of the plurality of portions, wherein the other portion is linked with another sentiment that is different from the sentiment.
3. The method of claim 2, wherein:
the portion comprises a particular chapter of a movie; and
the other portion comprises a next chapter of the movie.
4. The method of claim 2, wherein:
the portion comprises a particular scene of a show; and
the other portion comprises another scene of the show.
5. The method of claim 1:
wherein the sentiment map includes distinct sentiments mapped to one portion of the plurality of portions.
6. The method of claim 1:
wherein the sentiment map includes distinct sentiments mapped to respective portions of the plurality of portions.
7. The method of claim 1, wherein the sentiment comprises one of:
a neutral sentiment, a surprise sentiment, a fear sentiment, a disgust sentiment, an angry sentiment, a happy sentiment, a sad sentiment, or a contempt sentiment.
8. A computing device comprising:
a display device for displaying a plurality of portions of media content;
one or more processors; and
one or more non-transitory computer-readable storage media to store instructions executable by the one or more processors to perform operations comprising:
initiating sequential playback of a plurality of portions of media content on the display device;
after the sequential playback initiation, initiating playback of a portion of the plurality of portions on the display device;
receiving one or more images from a camera connected to the computing device, the camera having a field of view that includes one or more viewers viewing the display device as the display device is displaying the portion, wherein the one or more images of the one or more viewers are captured by the camera as the portion is played back on the display, and received as the portion is played back on the display;
performing an analysis of at least one image of the one or more images;
determining that the at least one image includes a particular micro-expression being expressed by the one or more viewers;
determining a sentiment corresponding to the micro-expression;
automatically skipping playback of a remainder of the portion that is being played back on the display device based on the micro-expression expressed by the one or more viewers captured by the camera as the portion is played back on the display, and received as the portion is played back on the display;
determining a timestamp associated with the at least one image, wherein the timestamp is determined based on a time when the sequential playback was initiated;
mapping the sentiment with the timestamp in a sentiment map in memory of the computing device; and
sending the sentiment and the timestamp to a server.
9. The computing device of claim 8, wherein the operations further comprise:
determining that the sentiment comprises a pre-specified sentiment;
automatically skipping playback of a remainder of the portion of the media content; and
automatically initiating playback of another portion of the plurality of portions, wherein the other portion is linked with another sentiment that is different from the sentiment.
10. The computing device of claim 9, wherein:
the portion comprises a particular chapter of a movie; and
the other portion comprises another chapter of the movie.
11. The computing device of claim 9, wherein:
the portion comprises a particular scene of a show; and
the other portion comprises another scene of the show.
12. The computing device of claim 9, wherein the pre-specified sentiment comprises at least one of:
a surprise sentiment, a fear sentiment, a disgust sentiment, an angry sentiment, or a contempt sentiment.
13. The computing device of claim 8, wherein:
the computing device comprises one of a set-top box device or a media streaming device.
14. One or more non-transitory computer readable media storing instructions executable by an embedded controller to perform operations comprising:
initiating sequential playback of a plurality of portions of media content on a display device;
after the sequential playback initiation, initiating playback of a portion of the plurality of portions on the display device;
receiving one or more images from a camera as the portion is displayed on the display device, the camera having a field of view that includes one or more viewers viewing the display device, wherein the one or more images of the one or more viewers are captured by the camera as the portion is played back on the display, and received as the portion is played back on the display;
performing an analysis of at least one image of the one or more images;
determining that the at least one image includes a particular micro-expression being expressed by the one or more viewers;
determining a sentiment corresponding to the micro-expression;
automatically skipping playback of a remainder of the portion that is being played back on the display device based on the micro-expression expressed by the one or more viewers captured by the camera as the portion is played back on the display, and received as the portion is played back on the display;
determining a timestamp associated with the at least one image, wherein the timestamp is defined as a time interval between a time when the sequential playback of the plurality of portions was initiated and a time when the playback of the portion was initiated, or the timestamp is defined as a time interval between the time when the sequential playback of the plurality of portions was initiated and a time when the one or more images are received from the camera;
mapping the sentiment with the timestamp in a sentiment map in memory; and
sending the sentiment and the timestamp to a server.
15. The one or more non-transitory computer readable media of claim 14, wherein the operations further comprise:
determining that the sentiment comprises a pre-specified sentiment;
automatically skipping playback of a remainder of the portion of the media content; and
automatically initiating playback of another portion of the media content.
16. The one or more non-transitory computer readable media of claim 15, wherein:
the portion comprises a particular chapter of a movie; and
the other portion comprises a next chapter of the movie.
17. The one or more non-transitory computer readable media of claim 15, wherein:
the portion comprises a particular scene of a show; and
the other portion comprises a next scene of the show.
18. The one or more non-transitory computer readable media of claim 14, wherein the sentiment comprises one of:
a neutral sentiment, a surprise sentiment, a fear sentiment, a disgust sentiment, an angry sentiment, a happy sentiment, a sad sentiment, or a contempt sentiment.
US16/530,196 2019-08-02 2019-08-02 Crowd rating media content based on micro-expressions of viewers Active US11330313B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/530,196 US11330313B2 (en) 2019-08-02 2019-08-02 Crowd rating media content based on micro-expressions of viewers

Publications (2)

Publication Number Publication Date
US20210037271A1 (en) 2021-02-04
US11330313B2 (en) 2022-05-10

Family

ID=74260446

Family Applications (1)

Application Number Title Priority Date Filing Date
US16/530,196 Active US11330313B2 (en) 2019-08-02 2019-08-02 Crowd rating media content based on micro-expressions of viewers

Country Status (1)

Country Link
US (1) US11330313B2 (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11284063B2 (en) * 2012-12-28 2022-03-22 Promptlink Communications, Inc. Video quality analysis and detection of blockiness, artifacts and color variation for high-volume testing of devices using automated video testing system
KR101932844B1 (en) 2017-04-17 2018-12-27 주식회사 하이퍼커넥트 Device and method of making video calls and method of mediating video calls
KR102282963B1 (en) * 2019-05-10 2021-07-29 주식회사 하이퍼커넥트 Mobile, server and operating method thereof
CN113692563A (en) * 2019-06-27 2021-11-23 苹果公司 Modifying existing content based on target audience
KR102293422B1 (en) 2020-01-31 2021-08-26 주식회사 하이퍼커넥트 Mobile and operating method thereof
KR102287704B1 (en) 2020-01-31 2021-08-10 주식회사 하이퍼커넥트 Terminal, Operation Method Thereof and Computer Readable Recording Medium
KR20210115442A (en) 2020-03-13 2021-09-27 주식회사 하이퍼커넥트 Report evaluation device and operating method thereof
US11829413B1 (en) * 2020-09-23 2023-11-28 Amazon Technologies, Inc. Temporal localization of mature content in long-form videos using only video-level labels
US11457249B2 (en) * 2020-11-05 2022-09-27 At & T Intellectual Property I, L.P. Method and apparatus for smart video skipping
US20220295131A1 (en) * 2021-03-09 2022-09-15 Comcast Cable Communications, Llc Systems, methods, and apparatuses for trick mode implementation
US20230008492A1 (en) * 2021-07-07 2023-01-12 At&T Intellectual Property I, L.P. Aggregation of unconscious and conscious behaviors for recommendations and authentication
US20240388744A1 (en) * 2022-12-09 2024-11-21 Google Llc Method of enabling enhanced content consumption

Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040199923A1 (en) * 2003-04-07 2004-10-07 Russek David J. Method, system and software for associating atributes within digital media presentations
US20090006288A1 (en) * 2007-06-26 2009-01-01 Noriyuki Yamamoto Information Processing Apparatus, Information Processing Method, and Program
US20090317060A1 (en) * 2008-06-24 2009-12-24 Samsung Electronics Co., Ltd. Method and apparatus for processing multimedia
US20100251295A1 (en) * 2009-03-31 2010-09-30 At&T Intellectual Property I, L.P. System and Method to Create a Media Content Summary Based on Viewer Annotations
US20110098056A1 (en) * 2009-10-28 2011-04-28 Rhoads Geoffrey B Intuitive computing methods and systems
US20110243530A1 (en) * 2010-03-31 2011-10-06 Sony Corporation Electronic apparatus, reproduction control system, reproduction control method, and program therefor
US20120066705A1 (en) * 2009-06-12 2012-03-15 Kumi Harada Content playback apparatus, content playback method, program, and integrated circuit
US20140007147A1 (en) * 2012-06-27 2014-01-02 Glen J. Anderson Performance analysis for combining remote audience responses
US20140098986A1 (en) * 2012-10-08 2014-04-10 The Procter & Gamble Company Systems and Methods for Performing Video Analysis
US20140108309A1 (en) * 2012-10-14 2014-04-17 Ari M. Frank Training a predictor of emotional response based on explicit voting on content and eye tracking to verify attention
US20140195919A1 (en) * 2003-11-03 2014-07-10 James W. Wieder Adaptive Personalized Playback or Presentation using Cumulative Time
US20140282721A1 (en) * 2013-03-15 2014-09-18 Samsung Electronics Co., Ltd. Computing system with content-based alert mechanism and method of operation thereof
US20140359647A1 (en) * 2012-12-14 2014-12-04 Biscotti Inc. Monitoring, Trend Estimation, and User Recommendations
US20150026708A1 (en) * 2012-12-14 2015-01-22 Biscotti Inc. Physical Presence and Advertising
US20150070516A1 (en) * 2012-12-14 2015-03-12 Biscotti Inc. Automatic Content Filtering
US20150281595A1 (en) * 2014-03-27 2015-10-01 Sony Corporation Apparatus and method for video generation
US20160029057A1 (en) * 2014-07-23 2016-01-28 United Video Properties, Inc. Systems and methods for providing media asset recommendations for a group
US20160066036A1 (en) * 2014-08-27 2016-03-03 Verizon Patent And Licensing Inc. Shock block
US20160366203A1 (en) * 2015-06-12 2016-12-15 Verizon Patent And Licensing Inc. Capturing a user reaction to media content based on a trigger signal and using the user reaction to determine an interest level associated with a segment of the media content
US20170257410A1 (en) * 2016-03-03 2017-09-07 Comcast Cable Holdings, Llc Determining Points of Interest in a Content Item
US20180253222A1 (en) * 2017-03-06 2018-09-06 Massachusetts Institute Of Technology Methods and Apparatus for Multimedia Presentation
US20200092610A1 (en) * 2018-09-19 2020-03-19 International Business Machines Corporation Dynamically providing customized versions of video content

Also Published As

Publication number Publication date
US20210037271A1 (en) 2021-02-04

Similar Documents

Publication Publication Date Title
US11330313B2 (en) Crowd rating media content based on micro-expressions of viewers
US11625920B2 (en) Method for labeling performance segment, video playing method, apparatus and system
US10810434B2 (en) Movement and transparency of comments relative to video frames
EP3726471B1 (en) Augmented reality method and device
US9870755B2 (en) Prioritized display of visual content in computer presentations
TWI581128B (en) Method, system, and computer-readable storage memory for controlling a media program based on a media reaction
US20200312327A1 (en) Method and system for processing comment information
US11451858B2 (en) Method and system of processing information flow and method of displaying comment information
KR102045575B1 (en) Smart mirror display device
CN112199016B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN114025219B (en) Rendering method, device, medium and equipment for augmented reality special effects
US11463718B2 (en) Image compression method and image decompression method
US20170235828A1 (en) Text Digest Generation For Searching Multiple Video Streams
CN113870133B (en) Multimedia display and matching method, device, equipment and medium
US20180143741A1 (en) Intelligent graphical feature generation for user content
EP3525475A1 (en) Electronic device and method for generating summary image of electronic device
WO2023045635A1 (en) Multimedia file subtitle processing method and apparatus, electronic device, computer-readable storage medium, and computer program product
EP4415356A1 (en) Video quality assessment method and apparatus, and computer device, computer storage medium and computer program product
US20240430524A1 (en) Systems and methods for recommending content items based on an identified posture
US20240251127A1 (en) Method and System for Generating a Visual Composition of User Reactions in a Shared Content Viewing Session
CN115086710B (en) Video playing method, terminal equipment, device, system and storage medium
CN114153342B (en) Visual information display method, device, computer equipment and storage medium
US11062359B2 (en) Dynamic media content for in-store screen experiences
CN115426505B (en) Preset expression special effect triggering method based on face capture and related equipment
CN111800651B (en) Information processing method and information processing device

Legal Events

Date Code Title Description
AS Assignment

Owner name: DELL PRODUCTS L. P., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BIKUMALA, SATHISH KUMAR;REEL/FRAME:049950/0630

Effective date: 20190801

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, NORTH CAROLINA

Free format text: SECURITY AGREEMENT;ASSIGNORS:DELL PRODUCTS L.P.;EMC CORPORATION;EMC IP HOLDING COMPANY LLC;REEL/FRAME:050406/0421

Effective date: 20190917

AS Assignment

Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT, TEXAS

Free format text: PATENT SECURITY AGREEMENT (NOTES);ASSIGNORS:DELL PRODUCTS L.P.;EMC CORPORATION;EMC IP HOLDING COMPANY LLC;REEL/FRAME:050724/0571

Effective date: 20191010

AS Assignment

Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., TEXAS

Free format text: SECURITY AGREEMENT;ASSIGNORS:CREDANT TECHNOLOGIES INC.;DELL INTERNATIONAL L.L.C.;DELL MARKETING L.P.;AND OTHERS;REEL/FRAME:053546/0001

Effective date: 20200409

AS Assignment

Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT, TEXAS

Free format text: SECURITY INTEREST;ASSIGNORS:DELL PRODUCTS L.P.;EMC CORPORATION;EMC IP HOLDING COMPANY LLC;REEL/FRAME:053311/0169

Effective date: 20200603

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

AS Assignment

Owner name: EMC IP HOLDING COMPANY LLC, TEXAS

Free format text: RELEASE OF SECURITY INTEREST AT REEL 050406 FRAME 421;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058213/0825

Effective date: 20211101

Owner name: EMC CORPORATION, MASSACHUSETTS

Free format text: RELEASE OF SECURITY INTEREST AT REEL 050406 FRAME 421;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058213/0825

Effective date: 20211101

Owner name: DELL PRODUCTS L.P., TEXAS

Free format text: RELEASE OF SECURITY INTEREST AT REEL 050406 FRAME 421;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058213/0825

Effective date: 20211101

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: ALLOWED -- NOTICE OF ALLOWANCE NOT YET MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: EMC IP HOLDING COMPANY LLC, TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (053311/0169);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060438/0742

Effective date: 20220329

Owner name: EMC CORPORATION, MASSACHUSETTS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (053311/0169);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060438/0742

Effective date: 20220329

Owner name: DELL PRODUCTS L.P., TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (053311/0169);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060438/0742

Effective date: 20220329

Owner name: EMC IP HOLDING COMPANY LLC, TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (050724/0571);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060436/0088

Effective date: 20220329

Owner name: EMC CORPORATION, MASSACHUSETTS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (050724/0571);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060436/0088

Effective date: 20220329

Owner name: DELL PRODUCTS L.P., TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (050724/0571);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060436/0088

Effective date: 20220329

Owner name: DELL MARKETING L.P. (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO CREDANT TECHNOLOGIES, INC.), TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (053546/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:071642/0001

Effective date: 20220329

Owner name: DELL INTERNATIONAL L.L.C., TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (053546/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:071642/0001

Effective date: 20220329

Owner name: DELL PRODUCTS L.P., TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (053546/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:071642/0001

Effective date: 20220329

Owner name: DELL USA L.P., TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (053546/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:071642/0001

Effective date: 20220329

Owner name: EMC CORPORATION, MASSACHUSETTS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (053546/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:071642/0001

Effective date: 20220329

Owner name: DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO FORCE10 NETWORKS, INC. AND WYSE TECHNOLOGY L.L.C.), TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (053546/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:071642/0001

Effective date: 20220329

Owner name: EMC IP HOLDING COMPANY LLC, TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (053546/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:071642/0001

Effective date: 20220329