US20200304839A1 - Personalized key object identification in a live video stream - Google Patents
- Publication number: US20200304839A1 (application US 16/362,678)
- Authority: US (United States)
- Prior art keywords
- video stream
- live video
- end user
- presented
- interest
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06F40/205 — Natural language analysis; parsing
- G06F40/216 — Parsing using statistical methods
- G06F40/279 — Recognition of textual entities
- G06F40/284 — Lexical analysis, e.g. tokenisation or collocates
- G06V20/41 — Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
- G06V40/174 — Facial expression recognition
- H04N21/2187 — Live feed
- H04N21/25841 — Management of client data involving the geographical location of the client
- H04N21/42201 — Input-only peripherals connected to specially adapted client devices, e.g. biosensors such as heart-rate, EEG or limb-activity sensors worn by the user
- H04N21/4223 — Cameras
- H04N21/44008 — Operations for analysing video streams, e.g. detecting features or characteristics in the video stream
- H04N21/44222 — Analytics of user selections, e.g. selection of programs or purchase activity
- H04N21/4532 — Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
- H04N21/4788 — Supplemental services communicating with other users, e.g. chatting
- H04N21/812 — Monomedia components involving advertisement data
- H04N21/8405 — Descriptive data represented by keywords
- H04N21/8547 — Content authoring involving timestamps for synchronizing content
- Additional codes: G06F17/2705; G06F17/2765; G06K9/00302; G06K9/00718
Definitions
- a method for personalized key object detection in a live video stream includes streaming a live video stream in a window of a computing device and, during the streaming, collecting biophysical data of an end user viewing the stream, such as a heartbeat or facial expression of the end user, and responding to collected biophysical data that indicates a positive reaction by associating a contemporaneously displayed frame of the live video stream with the positive reaction.
- each corresponding frame associated with positive feedback can be processed by identifying key words presented in text of the feedback to the corresponding frame, matching the identified key words to a tag of an object visually presented in the corresponding frame, and storing a reference to the object in connection with the end user as an object of interest for which targeted marketing may be presented.
- the key words of the corresponding frame are determined by parsing the text of the feedback, computing a frequency of presence in the text of a set of words, and selecting as key words, only those in the set having a corresponding frequency that exceeds a threshold value.
- the key words may be selected only in respect to those in the set that both have a corresponding frequency that exceeds a threshold value and also that match data in a pre-stored profile of the end user.
- the contemporaneously displayed frame of the live video stream is associated with the positive reaction by applying a time stamp to the positive reaction and correlating the time stamp to a time location in the live video stream.
- the corresponding frame may be image processed to identify a brand identification of the object of interest so as to store the brand identification along with the stored reference to the object in connection with the end user as an object of interest for which targeted marketing may be presented.
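The key word selection described above — parse the feedback text, count word frequencies, keep only words over a threshold, and optionally intersect with the end user's profile — can be sketched as below. The tokenizer, stop-word list, and exact threshold semantics are illustrative assumptions, not the patent's specified implementation.

```python
from collections import Counter

# Illustrative stop-word list; the disclosure only says words such as
# articles and pronouns are excluded from the frequency computation.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "that", "it", "is"}

def select_key_words(commentary, threshold, profile_terms=None):
    """Return words whose frequency in the commentary exceeds the
    threshold, optionally restricted to terms in the user's profile."""
    tokens = [t.strip(".,!?").lower() for t in commentary.split()]
    counts = Counter(t for t in tokens if t and t not in STOPWORDS)
    key_words = {w for w, n in counts.items() if n > threshold}
    if profile_terms is not None:
        key_words &= {p.lower() for p in profile_terms}
    return key_words
```

With `threshold=1`, only words appearing at least twice in the commentary survive; passing `profile_terms` implements the optional second filter against pre-stored profile data.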
- a data processing system is configured for personalized key object detection in a live video stream.
- the system includes a host computing platform that includes one or more computers, each with memory and at least one processor, and a media player executing in the memory of the host computing platform streaming a live video stream.
- the system additionally includes a personalized key object detection module coupled to the media player.
- the module includes computer program instructions operable during execution in the memory of the host computing platform to perform, during the streaming, the collection of biophysical data of an end user viewing the stream and the response to collected biophysical data indicating a positive reaction by associating a contemporaneously displayed frame of the live video stream with the positive reaction.
- the program instructions are further operable to process each corresponding frame associated with positive feedback by identifying key words presented in text of the feedback to the corresponding frame, matching the identified key words to a tag of an object visually presented in the corresponding frame, and storing a reference to the object in connection with the end user as an object of interest for which targeted marketing may be presented.
- FIG. 1 is a pictorial illustration of a process for personalized key object detection in a live video stream.
- FIG. 2 is a schematic illustration of a data processing system configured for personalized key object detection in a live video stream.
- FIG. 3 is a flow chart illustrating a process for personalized key object detection in a live video stream.
- Embodiments of the invention provide for personalized key object detection in a live video stream.
- biophysical data of an end user is collected while the end user views video imagery.
- upon detection of a positive emotional reaction, a time stamp of the reaction is recorded in connection with one or more frames of the video imagery presented at the time indicated in the time stamp.
- user-contributed text commentary regarding the frames is retrieved, parsed and statistically analyzed to filter the text to only those words appearing with a threshold frequency.
- Each word of the filtered text is then compared to the tags for visual objects in the frames, and at least one visual object with a tag matching one of the words of the filtered text is then recorded as an object of interest of the end user. In this way, targeted marketing pertaining to the object of interest may be presented to the end user.
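The time-stamp step above can be sketched as follows for a hypothetical fixed-frame-rate stream; a real live stream would correlate the reaction against presentation timestamps rather than a constant rate, so the arithmetic here is an illustrative assumption.

```python
def frame_for_timestamp(reaction_ts, stream_start_ts, fps):
    """Correlate the wall-clock time of a detected reaction with the
    index of the frame displayed at that moment (constant frame rate
    assumed for illustration)."""
    offset = reaction_ts - stream_start_ts  # seconds into the stream
    if offset < 0:
        raise ValueError("reaction precedes the start of the stream")
    return int(offset * fps)
```

For example, a reaction half a second into a 30 fps stream resolves to frame 15.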
- FIG. 1 is a pictorial illustration of a process for personalized key object detection in a live video stream.
- video imagery 120 streams in a media player 110 of an end user 160.
- the video imagery 120 includes a multiplicity of objects 130, such as an individual engaging in actions in the video imagery 120, clothing and accessories worn by the individual, and equipment utilized by the individual while engaging in the actions.
- Each of the objects 130 includes corresponding meta-information 140, such as a tag, describing a corresponding one of the objects 130.
- the video imagery 120 also includes textual commentary 150 pertaining to the objects 130 and provided by different viewers of the video imagery 120.
- Key object detection logic 100 monitors biophysical data detected in respect to the end user 160, such as a heart rate, a facial expression, or an eye gaze of the end user 160. Key object detection logic 100 then responds to a positive emotional response by the end user 160, as indicated by the biophysical data, by retrieving the textual commentary 150 and parsing the substantive words of the textual commentary 150 in order to compute a frequency of appearance of each of the words. Those words having a threshold frequency are then located, through mapping 180, among the different tags 140 of the video imagery 120 in order to determine an object of interest 190 of the end user 160 within the video imagery 120. Optionally, the words can be further filtered to a smaller subset by including only those words matching profile information in a user profile 170 of the end user 160.
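The final step — locating filtered commentary words among the tags of the visually presented objects — reduces to a case-insensitive lookup. The `(object, tag)` pair representation below is an assumed stand-in for the meta-information 140, not a structure specified in the disclosure.

```python
def match_words_to_objects(filtered_words, tagged_objects):
    """Return the objects whose tag matches one of the filtered
    commentary words (case-insensitive).

    tagged_objects -- list of (object_id, tag) pairs, the assumed
    representation of each object's accompanying meta-information."""
    words = {w.lower() for w in filtered_words}
    return [obj for obj, tag in tagged_objects if tag.lower() in words]
```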
- FIG. 2 schematically shows a data processing system configured for personalized key object detection in a live video stream.
- the system includes a host computing platform 210 that includes one or more computers, each with memory and at least one processor.
- the host computing platform 210 is communicatively coupled to different client computers 250 over a computer communications network 240.
- Each of the client computers 250 includes a media player 270 adapted to display streaming media streamed over the computer communications network 240, and also a biophysical sensing system 260 adapted to sense biophysical data of an end user, such as a heart rate, a facial expression, or a direction of gaze as in a gaze tracking system.
- the system includes a key object detection module 300.
- the key object detection module 300 includes computer program instructions enabled during execution in the memory of the host computing platform 210 to receive biophysical data of different end users at different, respective ones of the client computers 250 as the different end users view streaming video within corresponding ones of the media players 270.
- the biophysical data is determined, from time to time, to reflect a positive reaction between an end user and a portion of the streaming video.
- the program instructions of the module 300 record a timestamp in respect to the streaming video and the end user.
- the program instructions of the key object detection module 300 then process the timestamps to identify frames of the streaming video associated with the positive responses by the end users and process the companion text for those frames in order to identify a most frequent set of one or more words in the companion text.
- the program instructions of the key object detection module 300 further filter those words in the most frequent set to only those words mapping to one or more profile values of a corresponding end user stored in profile data store 220 , such as demographic data or user preferences.
- the program instructions of the key object detection module 300 map those remaining words to meta-data for one or more objects present in the streaming video so as to store, in the profile data store 220, one or more objects determined to be of interest to the end user.
- targeted marketing can be formulated for the end user based upon the objects of interest stored in association with the end user.
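One simple way a biophysical sensing system like item 260 might flag a positive reaction from heart-rate samples is a spike over a running baseline. The disclosure does not specify a detection rule; the window size and 15% spike ratio below are illustrative assumptions only, not clinically derived values.

```python
from statistics import mean

def positive_reaction(heart_rates, window=5, spike_ratio=1.15):
    """Flag a reaction when the mean of the latest `window` samples
    rises `spike_ratio` above the baseline of the earlier samples."""
    if len(heart_rates) <= window:
        return False  # not enough history to establish a baseline
    baseline = mean(heart_rates[:-window])
    recent = mean(heart_rates[-window:])
    return recent >= baseline * spike_ratio
```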
- FIG. 3 is a flow chart illustrating a process for personalized key object detection in a live video stream.
- biophysical data is received in connection with an end user and streaming video.
- a first timestamp is retrieved from memory and, in block 350, an end user and streaming video are identified in respect to the time stamp.
- a frame corresponding to the timestamp is identified and, in block 370, textual commentary associated with the frame is retrieved into memory.
- one or more keywords are identified in the textual commentary.
- a frequency of utilization of each of the substantive words of the textual commentary (excluding articles and pronouns, for example) is computed.
- in block 390, a set of the words meeting a threshold frequency is then mapped to the tags for one or more objects identified in the frame.
- the objects mapped to the keywords are stored in connection with the end user so as to indicate objects of interest to the end user suitable for formulating marketing messaging to the end user in the future.
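Under an assumed plain-dict stand-in for the hosted stream service, the blocks of the flow chart compose end to end as follows; the data layout and function name are hypothetical, and only the block numbers named in the text are referenced.

```python
from collections import Counter

def process_reaction(timestamp, stream, user_profile, threshold=2):
    """End-to-end sketch of the FIG. 3 flow. `stream` is a plain dict
    standing in for the hosted video service (assumed layout):
    {"start": t0, "fps": f, "frames": {idx: {"comments": [...],
                                             "tags": {tag: obj_id}}}}"""
    # Blocks 350-360: resolve the reaction timestamp to a frame.
    frame_idx = int((timestamp - stream["start"]) * stream["fps"])
    frame = stream["frames"][frame_idx]
    # Block 370: retrieve the textual commentary for that frame.
    words = [w.strip(".,!?").lower()
             for c in frame["comments"] for w in c.split()]
    # Block 390: keep threshold-frequency words and map them to tags.
    frequent = {w for w, n in Counter(words).items() if n >= threshold}
    matched = {obj for tag, obj in frame["tags"].items()
               if tag.lower() in frequent}
    # Finally, store the matched objects of interest for the user.
    user_profile.setdefault("objects_of_interest", set()).update(matched)
    return matched
```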
- the present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration.
- the computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
- the computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device.
- the computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
- a non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing.
- a computer readable storage medium is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
- Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network.
- the network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers.
- a network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
- Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages.
- the computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
- the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
- electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
- These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
- the computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
- each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
- the functions noted in the blocks may occur out of the order noted in the Figures.
- two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
Abstract
Embodiments of the invention provide a method, system and computer program product for personalized key object detection in a live video stream. The method includes streaming a live video stream in a window of a computing device and, during the streaming, collecting biophysical data of an end user viewing the stream and responding to collected biophysical data that indicates a positive reaction by associating a contemporaneously displayed frame of the live video stream with the positive reaction. Thereafter, each frame associated with positive feedback can be processed by identifying key words presented in the text of the feedback to the corresponding frame, matching the identified key words to a tag of an object visually presented in the frame, and storing a reference to the object in connection with the end user as an object of interest for which targeted marketing may be presented.
Description
- The present invention relates to the field of emotional response detection during a video media presentation and more particularly to object detection in a video media presentation.
- The Internet has created an environment in which nearly unlimited amounts of content may be published for viewing by countless viewers in many different forms, including static text, still imagery and video imagery. As a medium through which goods or services may be promoted for sale, the Internet, and especially the World Wide Web, provides a nearly limitless canvas by way of which marketing messages may be delivered to prospective consumers. Even so, the effective use of the Internet as a marketing vehicle is not without cost.
- In this regard, the nearly limitless canvas of both the World Wide Web and private applications accessible through the Internet is tempered by the private management of Web sites through which marketing messages may be delivered and by which fees are levied on advertisers seeking access to the end users of those Web sites. As a general rule, the more likely it is that a subscriber will view a marketing message on a Web site or through an Internet accessible application, the higher the fee assessed by the publisher of the Web site or the host of the Internet accessible application. Thus, one must be strategic in selecting which marketing content to provide to which end user through which Internet accessible application or Web site.
- With the ever-expanding bandwidth available to end users interacting with network distributable content, video imagery has become a prime medium through which marketing material may be presented. Paramount to successfully delivering marketing content through video imagery is the ability to detect an emotional response by a viewer of the video imagery. To that end, it is known to sense the physical feedback of the end user during the viewing by the end user of the video content in order to compute the impact of the message contained within the video imagery. To wit, the detected physical feedback may come in the form of eye-gaze tracking, facial expression tracking, laugh detection and the like.
- Embodiments of the present invention address deficiencies of the art in respect to the intelligent distribution of video imagery for marketing and provide a novel and non-obvious method, system and computer program product for personalized key object detection in a live video stream. In an embodiment of the invention, a method for personalized key object detection in a live video stream includes streaming a live video stream in a window of a computing device and, during the streaming, collecting biophysical data of an end user viewing the streaming, such as a heartbeat or facial expression of the end user, and responding to ones of the collected biophysical data that indicate a positive reaction by associating a contemporaneously displayed frame of the live video stream with the positive reaction. Thereafter, each corresponding frame associated with positive feedback can be processed by identifying key words presented in text of the feedback to the corresponding frame, matching the identified key words to a tag of an object visually presented in the corresponding frame, and storing a reference to the object in connection with the end user as an object of interest for which targeted marketing may be presented.
- In one aspect of the embodiment, the key words of the corresponding frame are determined by parsing the text of the feedback, computing a frequency of presence in the text of a set of words, and selecting as key words, only those in the set having a corresponding frequency that exceeds a threshold value. In this regard, the key words may be selected only in respect to those in the set that both have a corresponding frequency that exceeds a threshold value and also that match data in a pre-stored profile of the end user. In another aspect of the embodiment, the contemporaneously displayed frame of the live video stream is associated with the positive reaction by applying a time stamp to the positive reaction and correlating the time stamp to a time location in the live video stream. In yet another aspect of the embodiment, for the object of interest, the corresponding frame may be image processed to identify a brand identification of the object of interest so as to store the brand identification along with the stored reference to the object in connection with the end user as an object of interest for which targeted marketing may be presented.
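The key word selection aspect described above, namely parsing the feedback text, computing word frequencies, applying a threshold cut, and optionally intersecting the result with the end user's stored profile, can be sketched as follows. This is a minimal illustrative sketch only, not part of the claimed embodiments; the function name, stop-word list and threshold handling are assumptions.

```python
from collections import Counter
import re

# Illustrative stop-word list; articles and pronouns are excluded from
# the frequency count, as the description suggests.
STOPWORDS = {"the", "a", "an", "it", "he", "she", "they", "this", "that", "is"}

def select_keywords(feedback_text, threshold, profile_terms=None):
    """Parse the feedback text, compute each word's frequency of presence,
    and keep only words whose frequency exceeds the threshold, optionally
    further restricted to words matching the end user's stored profile."""
    words = [w for w in re.findall(r"[a-z']+", feedback_text.lower())
             if w not in STOPWORDS]
    counts = Counter(words)
    keywords = {w for w, n in counts.items() if n > threshold}
    if profile_terms is not None:
        # Optional further filtering against pre-stored profile data.
        keywords &= {t.lower() for t in profile_terms}
    return keywords
```

For example, `select_keywords("love that bike, the bike is fast, great bike", 1)` keeps only the repeated word `bike`, and supplying `profile_terms` narrows the result to words the profile also contains.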
- In another embodiment of the invention, a data processing system is configured for personalized key object detection in a live video stream. The system includes a host computing platform that includes one or more computers, each with memory and at least one processor, and a media player executing in the memory of the host computing platform streaming a live video stream. The system additionally includes a personalized key object detection module coupled to the media player. The module includes computer program instructions operable during execution in the memory of the host computing platform to perform, during the streaming, the collection of biophysical data of an end user viewing the streaming and the response to ones of the collected biophysical data that indicate a positive reaction by associating a contemporaneously displayed frame of the live video stream with the positive reaction. The program instructions are further operable to process each corresponding frame associated with positive feedback by identifying key words presented in text of the feedback to the corresponding frame, matching the identified key words to a tag of an object visually presented in the corresponding frame, and storing a reference to the object in connection with the end user as an object of interest for which targeted marketing may be presented.
- Additional aspects of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The aspects of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the appended claims. It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
- The accompanying drawings, which are incorporated in and constitute part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention. The embodiments illustrated herein are presently preferred, it being understood, however, that the invention is not limited to the precise arrangements and instrumentalities shown, wherein:
- FIG. 1 is a pictorial illustration of a process for personalized key object detection in a live video stream;
- FIG. 2 is a schematic illustration of a data processing system configured for personalized key object detection in a live video stream; and,
FIG. 3 is a flow chart illustrating a process for personalized key object detection in a live video stream.
- Embodiments of the invention provide for personalized key object detection in a live video stream. In accordance with an embodiment of the invention, biophysical data of an end user is collected while the end user views video imagery. To the extent that the biophysical data indicates a positive emotional reaction by the end user, a time stamp of the positive emotional reaction is recorded in connection with one or more frames of the video imagery presented at the time indicated in the time stamp. Then, user-contributed text commentary regarding the frames is retrieved, parsed and statistically analyzed in order to filter the text to only those words appearing with a threshold frequency. Each word of the filtered text is then compared to the tags of visual objects in the frames, and at least one visual object with a tag matching one of the words of the filtered text is recorded as an object of interest of the end user. In this way, targeted marketing pertaining to the object of interest may be presented to the end user.
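The tag comparison described above, matching filtered commentary words against the tags of objects visible in a frame to determine an object of interest, might be sketched as below. The data shapes and the function name are assumptions for illustration and not the claimed implementation.

```python
def find_objects_of_interest(filtered_words, frame_tags):
    """Compare each filtered commentary word against the descriptive tag
    words of the objects visually presented in a frame, returning the
    identifiers of objects whose tags match at least one word."""
    filtered = {w.lower() for w in filtered_words}
    return {obj_id for obj_id, tag_words in frame_tags.items()
            if filtered & {t.lower() for t in tag_words}}
```

A matched identifier could then be stored in association with the end user as an object of interest for later targeted marketing.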
- In further illustration,
FIG. 1 is a pictorial illustration of a process for personalized key object detection in a live video stream. As shown in FIG. 1, video imagery 120 streams in a media player 110 of an end user 160. The video imagery 120 includes a multiplicity of objects 130, such as an individual engaging in actions in the video imagery 120, clothing and accessories worn by the individual, and equipment utilized by the individual while engaging in the actions. Each of the objects 130 includes corresponding meta-information 140, such as a tag, describing a corresponding one of the objects 130. The video imagery 120 also includes textual commentary 150 pertaining to the objects 130 and provided by different viewers of the video imagery 120.
- Key object detection logic 100 monitors biophysical data detected in respect to the end user 160, such as a heart rate of the end user 160, a facial expression of the end user 160, or an eye gaze of the end user 160. Key object detection logic 100 then responds to a positive emotional response by the end user 160, indicated by the biophysical data, by retrieving the textual commentary 150 and parsing the substantive words of the textual commentary 150 in order to compute a frequency of appearance of each of the words in the textual commentary 150. Those of the words having a threshold frequency are then located in mapping 180 to the different tags 140 of the video imagery 120 in order to determine an object of interest 190 of the end user 160 within the video imagery 120. Optionally, the words can be further filtered to a smaller subset by including only those words matching profile information in a user profile 170 of the end user 160.
- The process described in connection with
FIG. 1 may be implemented in a data processing system. In further illustration, FIG. 2 schematically shows a data processing system configured for personalized key object detection in a live video stream. The system includes a host computing platform 210 that includes one or more computers, each with memory and at least one processor. The host computing platform 210 is communicatively coupled to different client computers 250 over a computer communications network 240. Each of the client computers 250 includes a media player 270 adapted to display streaming media streamed over the computer communications network 240, and also a biophysical sensing system 260 adapted to sense biophysical data of an end user, such as a heart rate, a facial expression or a direction of a gaze as in a gaze tracking system.
- Of note, the system includes a key object detection module 300. The key object detection module 300 includes computer program instructions enabled during execution in the memory of the host computing platform 210 to receive biophysical data of different end users at different, respective ones of the client computers 250 as the different end users view streaming video within corresponding ones of the media players 270. The biophysical data is determined, from time to time, to reflect a positive reaction as between an end user and a portion of the streaming video. As it is determined that the biophysical data received from over the computer communications network 240 indicates a positive response to a portion of the streaming video, the program instructions of the module 300 record a timestamp in respect to the streaming video and the end user.
- The program instructions of the key
object detection module 300 then process the timestamps to identify frames of the streaming video associated with the positive responses by the end users and process companion text for the frames in order to identify a most frequent set of one or more words in the companion text. The program instructions of the key object detection module 300 further filter the words in the most frequent set to only those words mapping to one or more profile values of a corresponding end user stored in a profile data store 220, such as demographic data or user preferences. Then, the program instructions of the key object detection module 300 map the remaining words to meta-data for one or more objects present in the streaming video so as to store, in the profile data store 220, one or more objects determined to be of interest to the end user. As such, targeted marketing can be formulated for the end user based upon the objects of interest stored in association with the end user.
- In even yet further illustration of the operation of the key
object detection module 300,FIG. 3 is a flow chart illustrating a process for personalized key object detection in a live video stream. Beginning inblock 310, biophysical data is received in connection with an end user and streaming video. Inblock 320, it is determined whether or not the biophysical data indicates a positive response by the end user to the streaming video. If so, inblock 330, a timestamp is created in memory in respect to the end user and the streaming video. The process then continues inblock 310 for additional biophysical data from the same end user or other end users in respect to the same streaming video or other streaming video. - In
block 340, a first timestamp is retrieved from the memory and, in block 350, an end user and streaming video are identified in respect to the time stamp. In block 360, a frame corresponding to the timestamp is identified and, in block 370, textual commentary associated with the frame is retrieved into memory. In block 380, one or more keywords are identified in the textual commentary. In this regard, a frequency of utilization of each of the substantive words of the textual commentary (excluding articles and pronouns, for example) is computed. In block 390, a set of the words enjoying a threshold frequency is then mapped to the tags for one or more objects in the frame. Finally, in block 400, the objects mapped to the keywords are stored in connection with the end user so as to indicate objects of interest to the end user suitable for formulating marketing messaging to the end user in the future.
- The present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
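The correlation of a reaction time stamp to a time location in the live video stream, as in blocks 330 through 360 above, reduces under the simplifying assumption of a constant frame rate to an elapsed-time calculation. The function below is an illustrative sketch only; its name, parameters and default frame rate are assumptions, not the claimed implementation.

```python
def frame_for_timestamp(reaction_ts, stream_start_ts, fps=30.0):
    """Map the wall-clock time stamp of a positive reaction to a frame
    index in the live video stream, assuming a constant frame rate."""
    elapsed = reaction_ts - stream_start_ts
    if elapsed < 0:
        raise ValueError("reaction time stamp precedes the stream start")
    # Frame index at the time location corresponding to the time stamp.
    return int(elapsed * fps)
```

The returned frame index could then be used to retrieve the contemporaneously displayed frame and its associated textual commentary.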
- The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
- Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
- Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
- In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
- Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
- These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
- The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
- The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
Claims (23)
1. A method for personalized key object detection in a live video stream, the method comprising:
streaming a live video stream in a window of a computing device;
during the streaming:
collecting biophysical data of an end user viewing the streaming, and
responding to ones of the collected biophysical data that indicate a positive reaction by associating a contemporaneously displayed frame of the live video stream with the positive reaction;
processing each corresponding frame associated with positive feedback by identifying key words presented in text of the feedback to the corresponding frame, wherein the key words of the corresponding frame are determined by:
parsing the text of the feedback;
computing a frequency of presence of a set of words in the text;
selecting as key words only those in the set having a corresponding frequency that exceeds a threshold value; and
filtering those words in the most frequent set to only those words mapping to one or more profile values of the end user stored in a profile data store;
matching the identified key words to a tag of an object visually presented in the corresponding frame, wherein the tag includes corresponding meta-information describing the object; and
storing a reference to the object in connection with the end user as an object of interest for which targeted marketing may be presented.
2. The method of claim 1 , wherein the biophysical data is a heartbeat.
3. The method of claim 1 , wherein the biophysical data is a facial expression.
4. (canceled)
5. (canceled)
6. The method of claim 1 , wherein the contemporaneously displayed frame of the live video stream is associated with the positive reaction by applying a time stamp to the positive reaction and correlating the time stamp to a time location in the live video stream.
7. The method of claim 1 , further comprising, for the object of interest, image processing the corresponding frame to identify a brand identification of the object of interest and storing the brand identification along with the stored reference to the object in connection with the end user as the object of interest for which targeted marketing may be presented.
8. A data processing system configured for personalized key object detection in a live video stream, the system comprising:
a host computing platform comprising one or more computers, each with memory and at least one processor;
a media player executing in the memory of the host computing platform streaming a live video stream; and,
a computer-readable storage medium communicatively coupled to the at least one processor, the computer-readable storage medium comprising computer program instructions which, when executed by the at least one processor, cause the at least one processor to perform a method comprising:
during the streaming:
collecting biophysical data of an end user viewing the streaming, and
responding to ones of the collected biophysical data that indicate a positive reaction by associating a contemporaneously displayed frame of the live video stream with the positive reaction;
processing each corresponding frame associated with positive feedback by identifying key words presented in text of the feedback to the corresponding frame, wherein the key words of the corresponding frame are determined by:
parsing the text of the feedback;
computing a frequency of presence of a set of words in the text;
selecting as key words only those in the set having a corresponding frequency that exceeds a threshold value; and
filtering those words in the most frequent set to only those words mapping to one or more profile values of the end user stored in a profile data store;
matching the identified key words to a tag of an object visually presented in the corresponding frame, wherein the tag includes corresponding meta-information describing the object; and
storing a reference to the object in connection with the end user as an object of interest for which targeted marketing may be presented.
9. The system of claim 8 , wherein the biophysical data is a heartbeat.
10. The system of claim 8 , wherein the biophysical data is a facial expression.
11. (canceled)
12. The system of claim 8 , wherein the contemporaneously displayed frame of the live video stream is associated with the positive reaction by applying a time stamp to the positive reaction and correlating the time stamp to a time location in the live video stream.
13. The system of claim 8 , wherein the method performed by the at least one processor further comprises, for the object of interest, image processing the corresponding frame to identify a brand identification of the object of interest and storing the brand identification along with the stored reference to the object in connection with the end user as the object of interest for which targeted marketing may be presented.
14. A computer program product for personalized key object detection in a live video stream, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, wherein the computer readable storage medium is not a transitory signal per se, the program instructions executable by a device to cause the device to perform a method comprising:
streaming a live video stream in a window of a computing device;
during the streaming:
collecting biophysical data of an end user viewing the streaming, and
responding to ones of the collected biophysical data that indicate a positive reaction by associating a contemporaneously displayed frame of the live video stream with the positive reaction;
processing each corresponding frame associated with positive feedback by identifying key words presented in text of the feedback to the corresponding frame, wherein the key words of the corresponding frame are determined by:
parsing the text of the feedback;
computing a frequency of presence of a set of words in the text;
selecting as key words only those in the set having a corresponding frequency that exceeds a threshold value; and
filtering those words in the most frequent set to only those words mapping to one or more profile values of the end user stored in a profile data store;
matching the identified key words to a tag of an object visually presented in the corresponding frame, wherein the tag includes corresponding meta-information describing the object; and
storing a reference to the object in connection with the end user as an object of interest for which targeted marketing may be presented.
15. The computer program product of claim 14 , wherein the biophysical data is a heartbeat.
16. The computer program product of claim 14 , wherein the biophysical data is a facial expression.
17. (canceled)
18. (canceled)
19. The computer program product of claim 14 , wherein the contemporaneously displayed frame of the live video stream is associated with the positive reaction by applying a time stamp to the positive reaction and correlating the time stamp to a time location in the live video stream.
20. The computer program product of claim 14 , wherein the method further includes, for the object of interest, image processing the corresponding frame to identify a brand identification of the object of interest and storing the brand identification along with the stored reference to the object in connection with the end user as the object of interest for which targeted marketing may be presented.
21. The method of claim 1 , wherein the positive feedback includes textual commentary pertaining to the object provided by one or more other end users.
22. The method of claim 1 , wherein the positive feedback includes textual commentary pertaining to the object provided by the end user.
23. The method of claim 1 , wherein the biophysical data is an eye gaze of the end user.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/362,678 US10798425B1 (en) | 2019-03-24 | 2019-03-24 | Personalized key object identification in a live video stream |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/362,678 US10798425B1 (en) | 2019-03-24 | 2019-03-24 | Personalized key object identification in a live video stream |
Publications (2)
Publication Number | Publication Date |
---|---|
US20200304839A1 true US20200304839A1 (en) | 2020-09-24 |
US10798425B1 US10798425B1 (en) | 2020-10-06 |
Family
ID=72515872
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/362,678 Active US10798425B1 (en) | 2019-03-24 | 2019-03-24 | Personalized key object identification in a live video stream |
Country Status (1)
Country | Link |
---|---|
US (1) | US10798425B1 (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113159870A (en) * | 2021-05-21 | 2021-07-23 | 口碑(上海)信息技术有限公司 | Display method and device of push information and computer equipment |
CN113283480A (en) * | 2021-05-13 | 2021-08-20 | 北京奇艺世纪科技有限公司 | Object identification method and device, electronic equipment and storage medium |
CN113438520A (en) * | 2021-06-29 | 2021-09-24 | 北京奇艺世纪科技有限公司 | Data processing method, device and system |
CN115037988A (en) * | 2021-03-05 | 2022-09-09 | 北京字节跳动网络技术有限公司 | Page display method, device and equipment |
CN115379246A (en) * | 2021-05-21 | 2022-11-22 | 北京字节跳动网络技术有限公司 | Live video stream playing method and device, electronic equipment and storage medium |
US20240040205A1 (en) * | 2020-12-16 | 2024-02-01 | Petal Cloud Technology Co., Ltd. | Method for Displaying Label in Image Picture, Terminal Device, and Storage Medium |
US20240244302A1 (en) * | 2023-01-17 | 2024-07-18 | Bank Of America Corporation | Systems and methods for embedding extractable metadata elements within a channel-agnostic layer of audio-visual content |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4062908B2 (en) * | 2001-11-21 | 2008-03-19 | 株式会社日立製作所 | Server device and image display device |
US8037496B1 (en) * | 2002-12-27 | 2011-10-11 | At&T Intellectual Property Ii, L.P. | System and method for automatically authoring interactive television content |
US7889073B2 (en) | 2008-01-31 | 2011-02-15 | Sony Computer Entertainment America Llc | Laugh detector and system and method for tracking an emotional response to a media presentation |
US20110225515A1 (en) | 2010-03-10 | 2011-09-15 | Oddmobb, Inc. | Sharing emotional reactions to social media |
US20120324491A1 (en) | 2011-06-17 | 2012-12-20 | Microsoft Corporation | Video highlight identification based on environmental sensing |
US9100685B2 (en) | 2011-12-09 | 2015-08-04 | Microsoft Technology Licensing, Llc | Determining audience state or interest using passive sensor data |
US9015737B2 (en) * | 2013-04-18 | 2015-04-21 | Microsoft Technology Licensing, Llc | Linked advertisements |
US9536329B2 (en) | 2014-05-30 | 2017-01-03 | Adobe Systems Incorporated | Method and apparatus for performing sentiment analysis based on user reactions to displayable content |
US9671862B2 (en) | 2014-10-15 | 2017-06-06 | Wipro Limited | System and method for recommending content to a user based on user's interest |
US10390064B2 (en) * | 2015-06-30 | 2019-08-20 | Amazon Technologies, Inc. | Participant rewards in a spectating system |
US9916866B2 (en) * | 2015-12-22 | 2018-03-13 | Intel Corporation | Emotional timed media playback |
US10149008B1 (en) * | 2017-06-30 | 2018-12-04 | Rovi Guides, Inc. | Systems and methods for assisting a user with identifying and replaying content missed by another user based on an alert alerting the other user to the missed content |
- 2019-03-24: US application US16/362,678 filed; granted as US10798425B1 (Active)
Also Published As
Publication number | Publication date |
---|---|
US10798425B1 (en) | 2020-10-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10798425B1 (en) | Personalized key object identification in a live video stream | |
US11356746B2 (en) | Dynamic overlay video advertisement insertion | |
JP6968100B2 (en) | How to identify user behavioral preferences and how to present recommended information and devices | |
US10284884B2 (en) | Monitoring individual viewing of television events using tracking pixels and cookies | |
US11140458B2 (en) | System and method for dynamic advertisements driven by real-time user reaction based AB testing and consequent video branching | |
EP3090396B1 (en) | Tracking pixels and cookies for television event viewing | |
US20170318336A1 (en) | Influencing content or access to content | |
US20150382075A1 (en) | Monitoring individual viewing of television events using tracking pixels and cookies | |
US20090319516A1 (en) | Contextual Advertising Using Video Metadata and Chat Analysis | |
CN106471539A (en) | System and method for obscuring audience measurement | |
US10397661B2 (en) | Video frame selection for targeted content | |
CN107547922B (en) | Information processing method, device, system and computer readable storage medium | |
CN104185041A (en) | Video interaction advertisement automatic generation method and system | |
US9538209B1 (en) | Identifying items in a content stream | |
CN110602534B (en) | Information processing method and device and computer readable storage medium | |
US10846738B1 (en) | Engaged view rate analysis | |
US11589125B2 (en) | Dynamic content generation | |
US20210021898A1 (en) | Rating and an overall viewership value determined based on user engagement | |
EP2680164A1 (en) | Content data interaction | |
Solanki et al. | Artificial Intelligence Powered Brand Identification and Attribution for On Screen Content | |
GB2542596A (en) | Monitoring the influence of media content on website traffic |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment | | Owner: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK. Assignors: CHEN, DI LING; SU, SHI; ZHONG, WU MI. Reel/Frame: 048681/0924. Effective date: 2019-03-05 |
FEPP | Fee payment procedure | | Entity status set to undiscounted (original event code: BIG.); entity status of patent owner: large entity |
STCF | Information on status: patent grant | | Patented case |
FEPP | Fee payment procedure | | Maintenance fee reminder mailed (original event code: REM.); entity status of patent owner: large entity |