US9154853B1 - Web identity to social media identity correlation - Google Patents
- Publication number
- US9154853B1 (application US13/975,551)
- Authority
- US
- United States
- Prior art keywords
- event
- media
- web
- social media
- advertisement
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/812—Monomedia components thereof involving advertisement data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0241—Advertisements
- G06Q30/0251—Targeted advertisements
- G06Q30/0269—Targeted advertisements based on user profile or attribute
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0241—Advertisements
- G06Q30/0277—Online advertisement
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/01—Social networking
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
-
- H04L51/32—
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/52—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/23424—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/435—Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/442—Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
- H04N21/44213—Monitoring of end-user related data
- H04N21/44222—Analytics of user selections, e.g. selection of programs or purchase activity
- H04N21/44224—Monitoring of user activity on external systems, e.g. Internet browsing
- H04N21/44226—Monitoring of user activity on external systems, e.g. Internet browsing on social networks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/845—Structuring of content, e.g. decomposing content into time segments
- H04N21/8456—Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/258—Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
- H04N21/25866—Management of end-user data
- H04N21/25875—Management of end-user data involving end-user authentication
Definitions
- the present invention relates generally to correlating web cookie content with social media content and using those correlations to send targeted advertisements to web users.
- Online social media services such as social networking sites, search engines, news aggregators, blogs, and the like provide a rich environment for users to comment on events of interest and communicate with other users.
- Social media content items authored by users of social networking systems often include references to events that appear in time based media such as television shows, news reports, sporting events, movies, concert performances, and the like.
- the social media content items themselves typically are isolated from the events and time-based media that those content items refer to; for example, the social content items appear in online social networks provided over the Internet, while the events occur in other contexts and systems, such as television programming provided on broadcast systems.
- identities and behavior of social media users are isolated within the social network, and are not connected with identities and behavior of users of the internet more generally.
- An identification server matches the web browsing behavior of an individual with their use of social media systems to correlate the individual's social media (SM) identity (ID) to the individual's web ID. To determine this correlation, the identification server matches the website browsing behavior contained in a cookie for a web ID with the content of SM content items authored by a user with the SM ID. The correlation may be expressed as a confidence score that a web ID corresponds to one or more SM IDs, or vice versa. In one embodiment, web IDs and SM IDs are correlated by matching the uniform resource locators (URLs) of websites visited by a web ID along with the times those websites were visited to URLs contained within SM content items authored by a user with the SM ID, and the times when those SM content items were posted.
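The URL-and-timestamp matching described above can be reduced to a simple scoring function. The sketch below is illustrative only, not the claimed method: the function name, the 30-minute window, and the fraction-of-matches confidence score are all assumptions.

```python
from datetime import datetime, timedelta

def correlation_confidence(cookie_visits, sm_posts, window=timedelta(minutes=30)):
    """Score how likely a web ID and an SM ID belong to the same user.

    cookie_visits: list of (url, datetime) pairs from the web ID's cookie.
    sm_posts:      list of (url, datetime) pairs extracted from SM content
                   items authored by the SM ID.
    Returns the fraction of SM-posted URLs that also appear in the cookie's
    browsing history within the given time window.
    """
    if not sm_posts:
        return 0.0
    matches = 0
    for post_url, post_time in sm_posts:
        for visit_url, visit_time in cookie_visits:
            # A match requires both the same URL and temporal proximity.
            if post_url == visit_url and abs(post_time - visit_time) <= window:
                matches += 1
                break
    return matches / len(sm_posts)
```

In practice such a score would be computed between one web ID and many candidate SM IDs (or vice versa), keeping the highest-scoring pairings above some threshold.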
- the identity correlations between web IDs and SM IDs may be used along with other alignments to provide messages containing information regarding the time-based media (e.g., ads, TV shows) to which those individuals have likely been exposed.
- the identification server can detect the airing of advertisements within time-based media streams (referred to as the airing overlap).
- the identification server may also determine alignments between SM content items authored by SM IDs and the time-based media events (e.g., television shows and advertisements) to which those content items refer. As a result, the identification server can identify what shows or advertisements a user with a given SM ID has most likely seen.
- This identification thereby links the user's activities in the social media context (the user's social media content) with the user's activity in an entirely unrelated context of television programming (watching television programming and advertisements), in the absence of any formal, predefined relationship between these two contexts or events occurring therein.
- the identification server may use the determination of what shows or advertisements a user with a particular SM ID has likely seen and the correlations between web IDs and SM IDs to send (or assist in the sending) of targeted messaging to the web ID associated with the particular user.
- the identification server may act as a resource for a website host (e.g., Comcast™, AOL™, GoDaddy™), a social networking system (e.g., Facebook™, Twitter™), or an advertisement bidding system (e.g., Google™ AdWords, DataXu™) that sends advertisements to web IDs, for example by displaying ads in a website browser in use by a user.
- the input and output of the identification server depends upon the implementation.
- the identification server may be configured to push data, for example by pushing individual messages, pushing messages in batches, sending a data feed, and/or sending a message responsive to the airing of an advertisement or time-based media stream.
- Data may also be pulled from the identification server, for example in response to a request containing a cookie, a web ID, a SM ID, or demographic or targeting information for a group of users.
- the data output by the identification server may include a fully constructed advertisement, advertising material for custom-tailoring an advertisement to a recipient user, a list of one or more SM IDs or web IDs, and/or targeting criteria indicating who the recipient(s) of a given message should be.
- the recipient of a message sent by the identification server may be an individual user associated with a web or SM ID, or a group of users.
- the intended recipient(s) of a message may be specified directly by SM IDs or web IDs, or indirectly by targeting criteria contained in the message.
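The pull-style interaction above (a request carrying a cookie or web ID, answered with correlated IDs and targeting data) might look roughly as follows. The function name, the store layouts, and the confidence threshold are hypothetical; the patent does not specify a concrete payload format.

```python
def handle_pull_request(request, web_to_sm_store, sm_to_event_store,
                        min_confidence=0.8):
    """Resolve a pull-style request into a targeting payload.

    request: dict that may carry a "web_id" (e.g., read from a cookie).
    web_to_sm_store: maps web ID -> list of (sm_id, confidence) pairs.
    sm_to_event_store: maps SM ID -> list of event IDs the user likely saw.
    Returns the SM IDs correlated above min_confidence, plus the events
    (shows/ads) those SM IDs have likely been exposed to.
    """
    web_id = request.get("web_id")
    sm_ids = [sm for sm, conf in web_to_sm_store.get(web_id, [])
              if conf >= min_confidence]
    events = sorted({e for sm in sm_ids for e in sm_to_event_store.get(sm, [])})
    return {"web_id": web_id, "sm_ids": sm_ids, "likely_seen": events}
```

A push-style deployment would invert this flow, with the server emitting such payloads when an advertisement airing is detected rather than waiting for a request.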
- FIG. 1 illustrates the computing environment of one embodiment of a system for correlating web identities and social media identities.
- FIG. 2 is a block diagram of one embodiment of an identification server.
- FIG. 3 is a block diagram and flowchart of an alignment and identification process at a high level according to one embodiment.
- FIG. 4A is a conceptual diagram illustrating the video to metadata and social media to event alignment processes at a high level according to one embodiment.
- FIG. 4B is a block diagram and flowchart illustrating one embodiment of a method for determining the airings of time-based media events.
- FIG. 4C is a block diagram and flowchart illustrating one embodiment of a video event segmentation process.
- FIG. 4D is a block diagram and flowchart illustrating one embodiment of video event to metadata alignment.
- FIG. 5 is a block diagram and flowchart illustrating one embodiment of social media to event alignment.
- FIG. 6 is an illustration of one embodiment of web identity to social media identity alignment.
- FIG. 7 is an interaction diagram for using the identification server to send messages, according to one embodiment.
- FIG. 1 illustrates the computing environment of one embodiment of a system for identifying a web identity (ID or identifier) and a social media (SM) ID of a user.
- the environment 100 includes social networking sources 110 , time-based media sources 120 , the identification server 130 , a network 140 , client devices 150 , advertisers 160 , web servers 170 , and ad server 180 .
- the social media sources 110 include social networks, blogs, news media, forums, user groups, etc.
- Examples of SM sources include social networking systems such as Facebook™ and Twitter™. These systems generally provide a plurality of SM users, each having a SM identity (SM ID), with the ability to communicate and interact with other users of the system (i.e., individuals with other SM IDs).
- the term “SM ID” will be used herein as both a literal referring to actual data comprising the social media identifier, as well as a reference to a user associated with the SM ID (e.g., as in “An SM ID can post to a social media network”).
- SM IDs can typically author various SM content items (e.g., posts, videos, photos, links, status updates, blog entries, tweets, profiles, and the like), which may refer to media events (e.g., TV shows, advertisements) or other SM content items (e.g., other posts, etc., pages associated with TV shows or advertisements), and can engage in discussions, games, online events, and other participatory services.
- the SM ID may be referred to as the author of a particular SM content item.
- the time-based media sources 120 include broadcasters, direct content providers, advertisers, and any other third-party providers of time-based media content. These sources 120 typically publish content such as TV shows, commercials, videos, movies, serials, audio recordings, and the like.
- the network 140 may comprise any combination of local area and/or wide area networks, the Internet, or one or more intranets, using both wired and wireless communication systems.
- the client devices 150 comprise computing devices that can receive input from a user and can transmit and receive data via the network 140 .
- a client device 150 may be a desktop computer, a laptop computer, a smart phone, a personal digital assistant (PDA), or any other device including computing functionality and data communication capabilities.
- a client device 150 is configured to communicate with web servers 170 , SM sources 110 , time-based media sources 120 , and ad servers 180 via the network 140 .
- Advertisers 160 include companies, advertising agencies, or any other third-party organizations that create, distribute, or promote advertisements for web or SM users. Advertisements include not only individual advertisements (e.g., video ads, banner ads, links or other creatives), but also brands, advertising campaigns, and flights, and targeted advertisements. Advertisements may be published in the social networks 110 alongside other content, posted in websites hosted by web servers 170 , sent directly to client devices 150 , or inserted into time-based media sources 120 . Advertisements may be stored on servers maintained by the advertisers 160 , they may be sent to the identification server 130 and stored there, they may be sent to the SM sources 110 and stored there, and/or they may be sent to the ad servers 180 or web server 170 and stored there. Advertisements may be sent to users by the ad servers 180 , by the web servers 170 , by the SM sources 110 , by the advertisers 160 , or by the client devices 150 . These systems may also work in conjunction to request, create, and send advertisements.
- the identification server 130 determines the web ID of a user in terms of one or more SM IDs, and uses these correlations between identities to send messages, as further described in conjunction with FIGS. 2-7 .
- FIG. 2 is a block diagram of one embodiment of an identification server.
- the identification server 130 shown in FIG. 2 is a computer system that includes a web server 200 and associated API 202 , an event airing detection 314 system, a TV show/ad overlap 318 engine, a SM to event alignment 322 engine, a web ID to SM ID alignment 326 engine, a message selection 330 engine, an annotated event store 316 , a TV show/ad overlap store 320 , a SM ID to event mapping store 324 , and a web ID to SM ID mapping store 328 .
- the identification server 130 may be implemented using a single computer, or a network of computers, including cloud-based computer implementations.
- the computers are preferably server class computers including one or more high-performance CPUs, 1 GB or more of main memory, as well as 500 GB to 2 TB of computer-readable persistent storage, and running an operating system such as LINUX or variants thereof.
- the operations of the server 130 as described can be controlled through either hardware or through computer programs installed in computer storage and executed by the processors of such servers to perform the functions described herein.
- the server 130 includes other hardware elements necessary for the operations described here, including network interfaces and protocols, security systems, input devices for data entry, and output devices for display, printing, or other presentations of data; these and other conventional components are not shown so as to not obscure the relevant details.
- server 130 comprises a number of “engines,” which refers to computational logic for providing the specified functionality.
- An engine can be implemented in hardware, firmware, and/or software.
- An engine may sometimes be equivalently referred to as a “module,” “system”, or a “server.”
- server 130 may lack the components described herein and/or distribute the described functionality among the components in a different manner. Additionally, the functionalities attributed to more than one component can be incorporated into a single component.
- the engine can be implemented as a standalone program, but can also be implemented through other means, for example as part of a larger program, as a plurality of separate programs, or as one or more statically or dynamically linked libraries.
- the engines are stored on the computer readable persistent storage devices of the server 130 , loaded into memory, and executed by the one or more processors of the system's computers.
- the operations of the server 130 and its various components will be further described below with respect to the remaining figures.
- the various data processing operations described herein are sufficiently complex and time consuming as to require the operation of a computer system such as the server 130 , and cannot be performed merely by mental steps.
- the web server 200 links the server 130 to the network 140 and the other systems described in FIG. 1 .
- the web server 200 serves web pages, as well as other web related content, such as Java, Flash, XML, and so forth.
- the web server 200 may include a mail server or other messaging functionality for receiving and routing messages between the server 130 and the other systems described in FIG. 1 .
- the API 202 allows one or more external entities to access information from the server 130 .
- the web server 200 may also allow external entities to send information to the server 130 calling the API 202 .
- an external entity sends an API request to the server 130 via the network 140 and the web server 200 receives the API request.
- the web server 200 processes the request by calling an API 202 associated with the API request to generate an appropriate response, which the web server 200 communicates to the external entity via the network 140 .
- the API 202 may be used by a social networking source (SNS) 110 to communicate information and requests to the server 130 .
- FIG. 3 is a block diagram and flowchart of an alignment and identification process at a high level according to one embodiment.
- the identification server 130 accesses and stores a number of different items of information through data ingestion 302 , which may be performed by the web server 200 .
- the ingested data includes time-based media streams (not shown), TV programming guide data stored in store 304 , SM content items stored in SM content store 306 , SM author information stored in SM ID store 308 , cookies of web behavior for web users stored in cookie store 310 , and web user information stored in web ID store 312 .
- the time-based media is used in an event airing detection 314 process to identify the airings of individual events (e.g., advertisements, TV shows).
- the events are stored in the annotated event store 316 .
- Event airing detection 314 is described further below with respect to FIGS. 4A-4D .
- the annotated events 316 are used in two distinct processes.
- the annotated events 316 are used, in conjunction with the TV programming guide data 304 in a TV show to advertisement overlap process 318 , described further below, that determines which advertisements aired during which TV shows.
- the annotated events 316 and TV programming guide data 304 are also used to align SM content items and their authors (i.e., SM IDs) with the annotated events.
- mappings between SM content items, SM IDs, and annotated events indicate which events are likely to have been seen by which SM IDs. These mappings are stored in mapping store 324 .
- SM to event alignment 322 is described further below with respect to FIG. 5 .
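One plausible shape for the SM ID to event mapping store 324 is a map from each SM ID to the set of annotated events that the user's content items were aligned to. The alignment-triple format and helper name below are assumptions for illustration, not a structure the patent specifies.

```python
from collections import defaultdict

def build_sm_to_event_mapping(alignments):
    """Fold (sm_id, content_item_id, event_id) alignment triples into a
    mapping store: each SM ID maps to the set of annotated events its
    content items refer to (i.e., events the author likely saw)."""
    store = defaultdict(set)
    for sm_id, _item_id, event_id in alignments:
        store[sm_id].add(event_id)
    return dict(store)
```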
- the identification server 130 is also configured to align 326 the SM ID of a user to the web ID of the user. To align 326 these two IDs, the identification server 130 matches the web browsing behavior associated with a web ID with the web links contained in the SM content items authored by a SM ID. SM ID to web ID alignments are stored in mapping store 328 . Web ID to SM ID alignment 326 is described further below with respect to FIG. 6 .
- the web ID to SM ID alignments, SM to event alignments, and TV show to advertisement overlaps are used by a message selection engine 330 to either send targeted messages directly to SM users or web users, or to assist other systems in sending targeted messages to those users. Examples of the various use cases for message selection 330 are described further below with respect to FIG. 7 .
- FIG. 4A is a conceptual diagram illustrating the video to metadata and SM to event alignment processes at a high level according to one embodiment. Beginning with metadata instances 457 and events in time-based media 301 as input, annotated events 459 are formed. As shown, time-based media (TBM) 451 includes multiple segments (seg. 1 -M) 453 , which contain events in the time-based media, as described herein.
- the video to metadata alignment 416 process aligns one or more metadata instances ( 1 -N) 457 with the events to form annotated events 459 , as further described in conjunction with FIG. 4D .
- the SM to event alignment 322 process aligns, or “maps,” the annotated events 459 resulting from the video to metadata alignment 416 to one or more SM content items (A-O) 461 , as further described in conjunction with FIG. 5 .
- the various alignments are one-to-one, many-to-one, and/or many-to-many.
- a given SM content item 461 can be mapped to multiple different annotated events 459 (e.g., SM content items C, D, and F), and an annotated event 459 can be mapped to multiple different SM content items 461 .
- the relationships between content items and events can be quantified to estimate social interest, as further explained below.
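As a minimal illustration of quantifying that relationship, one could invert the many-to-many item-to-event alignment and count distinct content items per event as a crude proxy for social interest. The function name and data shapes are assumptions; the patent's actual social interest estimation is not reproduced here.

```python
from collections import defaultdict

def interest_by_event(item_to_events):
    """Invert a many-to-many content-item -> events alignment and count,
    per annotated event, how many distinct SM content items map to it,
    as a simple proxy for social interest in that event."""
    counts = defaultdict(int)
    for _item, events in item_to_events.items():
        for event in set(events):  # de-duplicate within one item
            counts[event] += 1
    return dict(counts)
```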
- FIG. 4B is a block diagram and flowchart illustrating one embodiment of a method for determining the airings of time-based media events.
- Multiple streams of data are ingested 302 at the server 130 for processing.
- Data may be received at the server 130 from any of the systems described in FIG. 1 . In particular, the data ingested includes time-based media streams, e.g., from broadcast television feeds, radio feeds, internet streams, directly from content producers, and/or from other third parties.
- web server 200 is one means for ingesting 302 the data.
- the ingested data may also include, but is not limited to, electronic programming guide 304 data, closed captioning data, statistics, SM posts, mainstream news media, and usage statistics.
- the ingested data may be stored in data stores specific to the type of data; for example, time-based media data is stored in the multimedia store 402 .
- the time-based media in the multimedia store 402 may undergo additional processing before being used within the methods shown in FIGS. 3-6 .
- closed captioning data can be extracted from data using extractor 404 , and stored in a closed caption store 406 separately or in conjunction with the multimedia store 402 .
- time-based media event metadata associated with media events is stored in the event metadata store 412 .
- Closed captioning data typically can be extracted from broadcast video or other sources encoded with closed captions using open source software such as CCExtractor available via SourceForge.net.
- Where the source media is not encoded with closed captions, imperfect methods such as automatic speech recognition can be used to capture and convert the audio data into a text stream comparable to closed captioning text. This can be done, for example, using open source software such as Sphinx 3 available via SourceForge.net.
- Once the closed captioning is ingested, it is preferably correlated to the speech in the video.
- Various alignment methods are known in the art. One such method is described in Hauptmann, A.
- Time-based media includes any data that changes meaningfully with respect to time. Examples include, but are not limited to, videos (e.g., TV shows or portions thereof, movies or portions thereof), audio recordings, MIDI sequences, animations, and combinations thereof. Time-based media can be obtained from a variety of sources, such as local or network stores, as well as directly from capture devices such as cameras, microphones, and live broadcasts. It is anticipated that other types of time-based media within the scope of the invention will be developed in the future (e.g., 3D media, holographic presentations, immersive media, and so forth).
- the event metadata store 412 stores metadata related to time-based media events.
- metadata can include, but is not limited to: the type of event occurring, the brand/product for which an advertisement event is advertising, the agents (actors/characters) involved in the event, the scene/location of the event, the time of occurrence and time length of the event, the results/causes of the event, etc.
- metadata for an advertisement event may include information such as “Brand: Walmart; Scene: father dresses up as clown; Mood: comic.”
- the metadata can be structured as tuples of <name, value> pairs.
- Metadata may also include low level features for an event, e.g., image or audio features or content features, hand annotations with text descriptions, or both. Metadata may be represented as text descriptions of time-based media events and as feature vector representations of audio and/or video content extracted from examples of events. Examples of such metadata include a number and length of each shot, histograms of each shot (e.g., color, texture, edges, gradients, brightness, etc.), and spectral information (e.g., frequency coefficients, energy levels) of the associated audio. Metadata may be generated using human annotation (e.g., via human annotators watching events or samples thereof) and may be supplemented with automatic annotations. Metadata may also include different types of features including but not limited to scale-invariant feature transform (SIFT), speeded up robust features (SURF), local energy based shape histogram (LESH), color histogram, and gradient location orientation histogram (GLOH).
- a video event segmentation process 408 segments time-based media streams (e.g., raw video and/or audio) into semantically meaningful segments corresponding to discrete events depicted in video at semantically meaningful boundaries. This process is described with respect to FIG. 4C below.
- the output of video event segmentation 408 is stored in the video event store 410 .
- the events and event metadata are used to perform video metadata alignment 416 , in which events are annotated with semantically meaningful information relevant to the event. This process is described with respect to FIG. 4D below. The intervening step of feature extraction 414 is also described with respect to FIG. 4D .
- the annotations (i.e., the metadata aligned to events) generated using video metadata alignment 416 are stored in the annotated event store 316 .
- event airing detection 314 could be performed by a separate entity, such as a content provider or owner, e.g., which does not want to release the video content to others.
- the identification server 130 would provide software, including the software modules and engines described herein, to the separate entity to allow them to perform these processes on the raw time-based media.
- the separate entity in return could provide the server 130 with the extracted features, video events, and their respective metadata for use by the server 130 .
- These data exchanges could take place via API 202 exposed to the separate entity via web server 200 .
- FIG. 4C is a block diagram and flowchart illustrating one embodiment of a video event segmentation process.
- video event segmentation 408 segments time-based media into semantically meaningful segments corresponding to discrete portions or “events.”
- Input to the video event segmentation process 408 is a video stream 418 from the multimedia store 402 .
- Video event segmentation 408 may include shot boundary detection 420 , event detection 422 , and event boundary determination 424 , each of which is described in greater detail below.
- the output of video event segmentation 408 is an event 426 , which is stored in the video event store 410 .
- the first step in segmenting is shot boundary detection 420 , which identifies discrete segments (or “shots”) within a video.
- Shot boundaries are points of non-continuity in the video, e.g., associated with a change in a camera angle or scene. Shot boundaries may be determined by comparing color histograms of adjacent video frames and applying a threshold to that difference. Shot boundaries may be determined to exist wherever the difference in the color histograms of adjacent frames exceeds this threshold.
- Many techniques are known in the art for shot boundary detection.
- One exemplary algorithm is described in Tardini et al., Shot Detection and Motion Analysis for Automatic MPEG -7 Annotation of Sports Videos, 13th International Conference on Image Analysis and Processing (November 2005).
- Other techniques for shot boundary detection 420 may be used as well, such as using motion features.
- Another known technique is described in A. Jacobs, et al., Automatic shot boundary detection combining color, edge, and motion features of adjacent frames , Center for Computing Technologies, Bremen, Germany (2004).
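The color-histogram approach to shot boundary detection described above can be sketched in a few lines. This is an illustrative sketch, not the patent's implementation: the bin count and threshold are assumptions, and frames are simplified to lists of `(r, g, b)` pixel tuples rather than decoded video.

```python
# Hedged sketch: shot boundaries found by thresholding the difference between
# color histograms of adjacent frames. Bin count and threshold are assumptions.

def color_histogram(frame, bins=8):
    """Quantize a frame (a list of (r, g, b) pixels) into a normalized histogram."""
    hist = [0.0] * (bins * 3)
    step = 256 // bins
    for r, g, b in frame:
        hist[r // step] += 1
        hist[bins + g // step] += 1
        hist[2 * bins + b // step] += 1
    total = len(frame) * 3
    return [h / total for h in hist]

def histogram_difference(h1, h2):
    """L1 distance between two normalized histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def shot_boundaries(frames, threshold=0.5):
    """Indices where the adjacent-frame histogram difference exceeds the threshold."""
    hists = [color_histogram(f) for f in frames]
    return [i for i in range(1, len(hists))
            if histogram_difference(hists[i - 1], hists[i]) > threshold]
```

A cut from a dark shot to a bright shot produces a large histogram difference at the transition frame, which is reported as a boundary.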
- Event detection 422 identifies the presence of an event in a stream of (one or more) segments using various features corresponding, for example, to the image, audio, and/or camera motion for a given segment.
- a classifier using such features may be optimized by hand or trained using machine learning techniques such as those implemented in the WEKA machine learning package described in Witten, I. and Frank, E., Data Mining: Practical machine learning tools and techniques (2nd Edition), Morgan Kaufmann, San Francisco, Calif. (June 2005).
- the details of the event detection process 422 may vary by domain.
- Image features are features generated from individual frames within a video. They include low level and higher level features based on those pixel values. Image features include, but are not limited to, color distributions, texture measurements, entropy, motion, detection of lines, detection of faces, presence of all black frames, graphics detection, aspect ratio, and shot boundaries.
- Speech and audio features describe information extracted from the audio and closed captioning streams. Audio features are based on the presence of music, cheering, excited speech, silence, detection of volume change, presence/absence of closed captioning, etc. According to one embodiment, these features are detected using boosted decision trees. Classification operates on a sequence of overlapping frames (e.g., 30 ms overlap) extracted from the audio stream. For each frame, a feature vector is computed using Mel-frequency cepstral coefficients (MFCCs), as well as energy, the number of zero crossings, spectral entropy, and relative power between different frequency bands. The classifier is applied to each frame, producing a sequence of class labels. These labels are then smoothed using a dynamic programming cost minimization algorithm, similar to those used in hidden Markov models.
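The label-smoothing step described above can be illustrated with a minimal dynamic-programming sketch. The class names, mismatch cost, and switching penalty below are assumptions for illustration, not the patent's actual parameters; a full system would operate on classifier outputs over MFCC feature vectors.

```python
# Hedged sketch: smoothing a sequence of per-frame class labels with a
# dynamic-programming cost minimization, penalizing label switches.

def smooth_labels(raw_labels, classes, switch_penalty=2.0, mismatch_cost=1.0):
    """Return the label sequence minimizing mismatch cost plus switching penalty."""
    # best[c] = minimal cost of a smoothed sequence ending in class c
    best = {c: (0.0 if raw_labels[0] == c else mismatch_cost) for c in classes}
    back = []
    for label in raw_labels[1:]:
        prev = best
        best, step = {}, {}
        for c in classes:
            emit = 0.0 if label == c else mismatch_cost
            # stay in c for free, or switch from another class at a penalty
            cost, argp = min((prev[p] + (0.0 if p == c else switch_penalty), p)
                             for p in classes)
            best[c] = cost + emit
            step[c] = argp
        back.append(step)
    # backtrack from the cheapest final class
    c = min(best, key=best.get)
    out = [c]
    for step in reversed(back):
        c = step[c]
        out.append(c)
    return list(reversed(out))
```

A single spurious "speech" frame inside a run of "music" frames is cheaper to relabel than to keep, so the smoothed sequence removes it.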
- features may be extracted from the words or phrases spoken by narrators and/or announcers. From a domain specific ontology (not shown), a predetermined list of words and phrases is selected and the speech stream is monitored for the utterance of such terms. A feature vector representation is created in which the value of each element represents the number of times a specific word from the list was uttered. The presence of such terms in the feature vector correlates with the occurrence of an event associated with the predetermined list of words. For example, the uttering of the phrase “Travelocity” is correlated with the occurrence of an advertisement for Travelocity.
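The term-count feature vector described above can be sketched as follows; the helper name and the example term list are illustrative assumptions.

```python
# Hedged sketch: a feature vector whose elements count how many times each term
# from a predetermined domain list is uttered in the monitored speech stream.

def term_feature_vector(transcript, term_list):
    """Value of each element = number of occurrences of that term in the transcript."""
    text = transcript.lower()
    return [text.count(term.lower()) for term in term_list]

# e.g., utterances of "Travelocity" correlate with a Travelocity advertisement event
vector = term_feature_vector(
    "book now on Travelocity ... Travelocity dot com",
    ["travelocity", "touchdown", "commercial"],
)
```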
- camera motion features represent more precise information about the actions occurring in a video.
- the camera acts as a stand in for a viewer's focus. As actions occur in a video, the camera moves to follow it; this camera motion thus mirrors the actions themselves, providing informative features for event identification.
- As with shot boundary detection, there are various methods for detecting the motion of the camera in a video (i.e., the amount it pans left to right, tilts up and down, and zooms in and out).
- Bouthemy, P., et al. A unified approach to shot change detection and camera motion characterization , IEEE Trans.
- this system computes the camera motion using the parameters of a two-dimensional affine model to fit every pair of sequential frames in a video.
- a 15-state first-order hidden Markov model is used, implemented with the Graphical Modeling Toolkit; the output of the Bouthemy method is then converted into a stream of clustered characteristic camera motions (e.g., state 12 clusters together motions of zooming in fast while panning slightly left).
- once an event has been detected, its beginning and ending boundaries must be determined 424 .
- the shot boundaries determined in 420 are estimates of the beginning and end of an event. The estimates can be improved by exploiting additional features of the video and audio streams to further refine the boundaries of video segments.
- Event boundary determination 424 may be performed using a classifier that may be optimized by hand or using supervised learning techniques. The classifier may make decisions based on a set of rules applied to a feature vector representation of the data. The features used to represent video overlap with those used in the previous processes.
- Events have beginning and end points (or offsets), and those boundaries may be determined based on the presence/absence of black frames, shot boundaries, aspect ratio changes, etc., and have a confidence measure associated with the segmentation.
- the result of event boundary determination 424 (concluding video event segmentation 408 ) is a (set of) segmented video event 426 that is stored in the video event store 410 .
- FIG. 4D is a block diagram and flowchart illustrating one embodiment of video event to metadata alignment.
- the video metadata alignment 416 process produces annotations of the events from video event segmentation 408 , where annotations include semantically meaningful information regarding the event.
- Video metadata alignment 416 includes feature extraction 414 and video metadata alignment 432 .
- the event is converted into a feature vector representation via feature extraction 414 .
- Video events 426 are retrieved from the video event store 410 .
- Output from feature extraction 414 is a video event feature representation 430 .
- Features may be identical to (or a subset of) the image/audio properties discussed above for metadata as stored in the event metadata store 412 , and may vary by domain (e.g., television, radio, TV show, advertisement, sitcom, sports show).
- Video metadata alignment 416 takes as input the feature vector representation 430 of an event and an instance of metadata 428 .
- Metadata instances are metadata corresponding to a single event.
- Video metadata alignment cycles through each metadata instance 428 in the event metadata store 412 and uses an alignment function to estimate the likelihood that a particular event may be described by a particular metadata instance for an event.
- the alignment function may be a simple cosine similarity function that compares the feature representation 430 of the event to the low level properties described in the metadata instance 428 .
- the most likely alignment 434 (i.e., the alignment with the highest probability or score) is selected; if its score exceeds a threshold, the event associated with the feature representation 430 is annotated with the metadata instance 428 and the resulting annotated event 436 is stored in the annotated event store 316 along with a score describing the confidence of the annotation. If no alignment passes the threshold, the event is marked as not annotated.
- a set of results from the process is hand annotated into two categories: correct and incorrect results. Cross-validation may then be used to find the threshold that maximizes the precision/recall of the system over the manually annotated result set.
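The cosine-similarity alignment and thresholding steps described above can be sketched as follows. This is an illustrative sketch under assumed inputs: events and metadata instances are pre-computed numeric feature vectors, and the threshold value is a placeholder to be tuned by cross-validation as described.

```python
import math

# Hedged sketch: score each metadata instance against an event's feature vector
# with cosine similarity; keep the best instance only if it passes a threshold.

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def best_alignment(event_features, metadata_instances, threshold=0.7):
    """Return (index, score) of the most likely metadata instance, or None if no
    instance passes the threshold (the event is then marked as not annotated)."""
    scored = [(cosine_similarity(event_features, m), i)
              for i, m in enumerate(metadata_instances)]
    score, index = max(scored)
    return (index, score) if score >= threshold else None
```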
- FIG. 5 is a block diagram and flowchart illustrating one embodiment of SM to event alignment.
- SM to event alignment 322 aligns (or maps) the annotated events with SM content items authored by SM users.
- the annotated events are drawn from the annotated event store 316 , as well as from TV programming guide data 304 .
- the TV programming guide data 304 is stored as a set of mappings between metadata (e.g., TV show and advertisement names, casts, characters, genres, episode descriptions, etc.) and specific airing information (e.g., time, time zone, channel, network, geographic region, etc.).
- SM content items generally contain content created or added by an authoring SM user.
- SM content items include long form and short form items such as posts, videos, photos, links, status updates, blog entries, tweets, and the like.
- Other examples of SM content items include audio of commentators on, or participants of, another event or topic (e.g., announcers on TV or radio) and text transcriptions thereof (generated manually or automatically), event-related information (e.g., recipes, instructions, scripts, etc.), statistical data (e.g., sports statistics or financial data streams), news articles, and media usage statistics (e.g., user behavior such as viewing, rewind, pausing, etc.).
- SM content items may undergo SM filtering 502 prior to SM to event alignment 322 .
- SM content items are filtered 502 in order to create a set of candidate content items with a high likelihood that they are relevant to a specific event.
- content items can be relevant to an event if they include a reference to the event.
- a candidate set of content items is compiled based on the likelihood that those content items are relevant to the events, for example, by including at least one reference to a specific event.
- a comparative feature extraction engine 510 is one mechanism for doing this, and is described with respect to SM to event alignment 322 .
- this candidate set of content items can be the result of filtering 502 associated with a given time frame of the event in question.
- Temporal filters often are, however, far too general, as many content items will only coincidentally co-occur in time with a given event.
- In broadcast television, e.g., the increasing use of digital video recorders has significantly broadened the relevant timeframe for events.
- Additional filters 502 are applied based on terms used in the content item's text content (e.g., actual texts or extracted text from closed caption or audio) that appear in the metadata for an event. Additional filters may also include domain specific terms from domain ontologies 504 . For example, content item of a social network posting of “Touchdown Brady! Go Patriots” has a high probability that it refers to an event in a Patriots football game due to the use of the player name, team name, and play name, and this content item would be relevant to the event. In another example, a content item of a post “I love that Walmart commercial” has a high probability that it refers to an advertisement event for Walmart due to the use of the store name, and the term “commercial,” and thus would likewise be relevant to this event.
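The term-based content filter described above can be sketched as follows; the function name and the simple whole-word matching are illustrative assumptions, and a real filter would draw its vocabulary from event metadata and the domain ontologies 504.

```python
# Hedged sketch: a content item passes the filter if its text shares terms with
# the event metadata or with domain-specific terms from an ontology.

def passes_content_filter(item_text, event_terms, ontology_terms=()):
    words = set(item_text.lower().replace("!", " ").split())
    vocabulary = {t.lower() for t in event_terms} | {t.lower() for t in ontology_terms}
    return bool(words & vocabulary)

# "Touchdown Brady! Go Patriots" shares player, team, and play terms with a
# Patriots football game event, so it passes the filter.
passes = passes_content_filter(
    "Touchdown Brady! Go Patriots",
    event_terms=["Brady", "Patriots"],
    ontology_terms=["touchdown"],
)
```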
- a SM content item can be relevant to an event without necessarily including a direct textual reference to the event.
- Various information retrieval and scoring methods can be applied to the content items to determine relevancy, based on set-theoretic (e.g., Boolean search), algebraic (e.g., vector space models, neural networks, latent semantic analysis), or probabilistic models (e.g., binary independence, or language models), and the like.
- SM content items that do not pass certain of these initial filters (e.g., temporal or content filters) may be removed from the candidate set.
- the output of SM filtering 502 is an updated SM content store 306 , which indicates, for each content item, whether that content item was filtered by temporal or content filters. Additional filters may apply in additional domains.
- the SM to annotated event alignment 322 includes a comparative feature extraction 510 and an alignment function 512 .
- the comparative feature extraction 510 converts input of an annotated event 508 (and/or events stored in the TV programming guide data 304 ) and a SM content item 506 into a feature vector representation, which is then input to the alignment function 512 .
- the alignment function uses the received features to create a relationship between the event features and SM features. The relationship may be co-occurrence, correlation, or other relationships as described herein.
- the comparative feature extraction 510 also may receive input from the SM author store 308 and the domain ontologies 504 .
- the three major types of features extracted are content features 510 c , geo-temporal features 510 b , and authority features 510 a.
- Content features 510 c refer to co-occurring information within the content of the SM content items and the metadata for the video events, e.g., terms that exist both in the content item and in the metadata for the video event.
- Domain ontologies 504 may be used to expand the set of terms used when generating content features.
- Geo-temporal features 510 b refer to the difference between the location (e.g., geographic region of airing) and time at which the input media was generated and the location and time associated with the SM content item about the event. Such information is useful as the relevance of SM to an event is often inversely correlated with the distance from the event (in time and space) at which the media was produced. In other words, SM relevant to an event is often produced during or soon after that event, and sometimes by people at or near the event (e.g., a sporting event) or exposed to it (e.g., within the broadcast area for a television-based event).
- geo-temporal information can be determined based on the location and/or time zone of the event or broadcast of the event, the time it started, the offset in the video at which the start of the event is determined, and the channel on which it was broadcast.
- geo-temporal information can be part of the content of the media itself (e.g., a time stamp on a blog entry or status update) or as metadata of the media or its author.
- the temporal features describe the difference in time between when the SM content item was created and the time that the event itself took place. In general, smaller differences in time of production are indicative of more confident alignments. Such differences can be passed through a sigmoid function such that as the difference in time increases, the probability of alignment decreases, but plateaus at a certain point.
- the parameters of this function may be tuned based on an annotated verification data set.
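The decaying, plateauing sigmoid described above can be sketched as follows. The midpoint, steepness, and floor values are assumptions standing in for parameters that would be tuned on an annotated verification data set.

```python
import math

# Hedged sketch: temporal alignment score as a decaying sigmoid of the time
# difference, plateauing at a floor value. All parameter values are assumptions.

def temporal_alignment_score(minutes_elapsed, midpoint=120.0, steepness=0.05, floor=0.1):
    """Near 1.0 for small time differences; decays with elapsed time; plateaus at `floor`."""
    decay = 1.0 / (1.0 + math.exp(steepness * (minutes_elapsed - midpoint)))
    return floor + (1.0 - floor) * decay
```

A post made minutes after an event scores near 1.0; a post made hours later scores near the floor, rather than zero, reflecting that late posts can still be relevant.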
- the spatial features describe the distance between the location of the content item's author and the geographical area of the event or broadcast. Spatial differences are less indicative because people often comment on events that take place far from their location. A sigmoid function may be used to model this relationship as well, although its parameters are tuned on different held-out data.
- Authority features 510 a describe information related to the author of the SM and help to increase the confidence that a SM content item refers to a video event.
- the probability that any ambiguous post refers to a particular event is dependent upon the prior probability that the author would post about a similar type of event (e.g., a basketball game for an author who has posted content about prior basketball games).
- the prior probability can be approximated based on a number of features including: the author's self-generated user profile (e.g., mentions of a brand, team, etc.), the author's previous content items (e.g., about similar or related events), and the author's friends (e.g., their content contributions, profiles, etc.). These prior probability features may be used as features for the mapping function.
- the alignment function 512 takes the set of extracted features 510 a - c and outputs a mapping 514 and a confidence score 516 representing the confidence that the SM content item refers (or references) to the video event. For each feature type 510 a - c , a feature specific sub-function generates a score indicating whether the SM content item refers to the annotated event. Each sub-function's score is based only on the information extracted in that particular feature set.
- the output of the SM to event alignment 322 is a mapping between an annotated event and a SM content item. This mapping, along with the real-value confidence score, is stored in the mapping store 324 .
- the alignments between SM content items and events produced by SM to event alignment 322 may be translated into alignments between SM IDs and events.
- a total confidence score may be determined that represents the confidence that an event is relevant to an SM ID. This total confidence score may be interpreted as the likelihood that the event (e.g., a television program or commercial) has been viewed by the user associated with the SM ID.
- the total confidence score may be determined using a function incorporating the confidence scores, determined during alignment 322 , between SM content items authored by the SM ID and the event. For example, the function may sum these individual confidence scores.
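The scoring scheme above can be sketched as follows: each feature type (content, geo-temporal, authority) contributes a sub-score, a combining function produces a per-item confidence, and per-item confidences for one SM ID are summed into a total confidence. The weighted-sum combiner and its weights are illustrative assumptions; only the summing of item scores follows the example given in the text.

```python
# Hedged sketch: combine per-feature sub-scores into an item confidence, then
# sum item confidences per SM ID. Weights are illustrative assumptions.

def item_confidence(content, geo_temporal, authority, weights=(0.5, 0.3, 0.2)):
    """Weighted sum of the three feature-specific sub-function scores."""
    return sum(w * s for w, s in zip(weights, (content, geo_temporal, authority)))

def total_confidence(item_scores):
    """Sum the individual confidence scores of items authored by one SM ID."""
    return sum(item_scores)
```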
- alignment process 322 described in FIG. 5 may be used outside of the context of social media content item to event alignment.
- alignment process 322 may be used to align social media content items with keyword topics or interests provided by a third party source, such as an advertiser 160 . This may be useful if an advertiser 160 is interested in determining whether a given SM ID is interested in a given topic.
- either the identification server 130 may create, or alternatively the advertiser 160 may provide, keywords associated with a topic to the identification server 130 .
- the identification server 130 may perform the alignment process 322 using these keywords to perform comparative feature extraction 510 on the SM content items.
- the extracted features and keywords may then be aligned 512 to identify SM content items associated with the keywords.
- a confidence score may be determined regarding the alignment between the one or more keywords and the SM content item.
- the identification server may also identify the authors of SM content items.
- the individual confidence scores of SM content items authored by the user may be aggregated.
- a total confidence score that a user is aligned with a topic may be determined based on the individual confidence scores of the SM content items authored by that user.
- the identification server 130 may store (not shown) and output the SM IDs associated with a given topic by returning the SM IDs aligned with the topic based on their respective total confidence scores.
- a topic may include general categories such as politics, sports, and fashion, specific personalities such as Justin Bieber or Joss Whedon, or specific brands such as Harley Davidson motorcycles and Porsche cars.
- Any other word or set of words may be used as a keyword associated with a topic.
- a keyword may include a single word, or a series of words, such as a phrase.
- the identification server 130 may use keywords commonly associated with being a pet owner to determine whether the user is a pet owner. These keywords might include, for example, “my dog”, “my cat”, “my kitten”, “our dog”, “our puppy”, and so on.
- the SM content items authored by a given SM ID may contain an example SM content item stating “My dog slobbered all over the couch!”. Comparative feature extraction 510 may extract several features from this content item based on the presence of several of the keywords in the example SM content item.
- An example of a feature in this SM content item may include “my dog.” Consequently, alignment 512 may indicate that there is a high level of confidence (e.g., a high confidence score) that the SM content item is associated with the topic of being a pet owner. Based on this and other SM content items authored by the SM ID, a total confidence score may be determined regarding whether the SM ID is aligned with the pet owner topic.
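The pet-owner example above can be sketched as a keyword-matching topic score aggregated over a user's content items. The scoring function (a simple count of matched keywords) is an assumption; the keyword list comes from the example in the text.

```python
# Hedged sketch: topic alignment via keywords ("pet owner" keywords from the
# example above); counting matched keywords as the score is an assumption.

PET_OWNER_KEYWORDS = ["my dog", "my cat", "my kitten", "our dog", "our puppy"]

def item_topic_score(item_text, keywords):
    """Number of topic keywords appearing in one SM content item."""
    text = item_text.lower()
    return sum(1 for kw in keywords if kw in text)

def user_topic_confidence(items, keywords):
    """Aggregate per-item scores for all items authored by one SM ID."""
    return sum(item_topic_score(t, keywords) for t in items)

score = user_topic_confidence(
    ["My dog slobbered all over the couch!", "Great game last night"],
    PET_OWNER_KEYWORDS,
)
```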
- multiple streams of data are ingested as a preliminary step in the method.
- the time-based media is segmented into semantically meaningful segments corresponding to discrete “events” which are identified with advertisements (i.e. commercials).
- Event detection 422 in the advertising domain may operate by identifying one or more shots that may be part of an advertisement. Advertisements can be detected using image features such as the presence of all black frames, graphics detection (e.g. presence of a channel logo in the frame), aspect ratio, shot boundaries, etc. Speech/audio features may be used including detection of volume change, and the presence/absence of closed captioning.
- Event boundary detection 424 operates on an advertisement block and identifies the beginning and ending boundaries of individual ads within the block.
- Event boundary determination may be performed using a classifier based on features such as the presence/absence of black frames, shot boundaries, aspect ratio changes, typical/expected length of advertisements. Classifiers may be optimized by hand or using machine learning techniques.
- the video metadata alignment 416 process is domain dependent.
- metadata for an advertisement may include information such as “Brand: Walmart, Scene: father dresses up as clown, Mood: comic.” This metadata is generated by human annotators who watch sample ad events and log metadata for ads, including, the key products/brands involved in the ad, the mood of the ad, the story/creative aspects of the ad, the actors/celebrities in the ad, etc.
- Metadata for advertisements may also include low level image and audio properties of the ad (e.g. number and length of shots, average color histograms of each shot, power levels of the audio, etc.).
- Video metadata alignment 432 then takes as input the feature vector representation 430 of an advertisement and a metadata instance 428 . It cycles through each metadata instance 428 in the event metadata store 412 and estimates the likelihood that the particular advertisement may be described by a particular metadata instance using, for example, a simple cosine similarity function that compares the low level feature representation of the ad event to the low level properties in the metadata.
- the particular start and end times, channel and location in which the specific advertisement appeared is included with the metadata that is stored in the Annotated Event Store 316 .
- SM to event alignment 322 generates geo-temporal features, content features, and authority features.
- Content feature representations express the amount of co-occurring content between television show or advertisement metadata, as stored in the TV programming guide data 304 and annotated event store 316 , and terms within SM content items. For example, the content item “I loved this Glee episode. Can you believe what Quinn just did” and the metadata for the television show “Glee” {“Show: Glee; Cast: Dianne Agron, Chris Colfer, etc.; Characters: Quinn, Kurt, etc.; Description: In this episode . . . ”} have co-occurring (e.g., matching) content terms (e.g., “Glee” and “Quinn”).
- Similarly, the content item “I loved that Walmart clown commercial” and the metadata for an advertisement for Walmart {“Brand: Walmart, Scene: father dresses up as clown, Mood: comic”} have co-occurring content terms (e.g., “Walmart” and “clown”).
- the matches may be considered generally, so that content appearing anywhere in a SM message can be matched against any terms or elements of the television show or advertisement metadata, or may be restricted to certain sub-parts thereof.
- the domain ontologies 504 that encode information relevant to the television show and/or advertising domain may be used to expand the term set to include synonyms and hypernyms (e.g., “hilarious” for “comic”), names of companies, products, stores, etc., as well as TV show associated words (e.g., “episode”) and advertisement associated words (e.g., “commercial”).
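The co-occurrence content features with ontology expansion can be sketched as follows; the ontology entries, tokenization, and function names are illustrative assumptions.

```python
# Hedged sketch: count co-occurring terms between a SM content item and show
# metadata, after expanding metadata terms with ontology synonyms/hypernyms.

ONTOLOGY_SYNONYMS = {"comic": {"hilarious", "funny"}}  # illustrative entries

def expand_terms(terms, synonyms=ONTOLOGY_SYNONYMS):
    """Add ontology synonyms/hypernyms to the metadata term set."""
    expanded = set(terms)
    for term in terms:
        expanded |= synonyms.get(term, set())
    return expanded

def cooccurring_terms(item_text, metadata_terms):
    """Terms from the (expanded) metadata set appearing in the content item."""
    words = set(item_text.lower().replace(".", " ").split())
    return {t for t in expand_terms({m.lower() for m in metadata_terms}) if t in words}

matches = cooccurring_terms(
    "I loved this Glee episode. Can you believe what Quinn just did",
    ["Glee", "Quinn", "comic"],
)
```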
- the output of SM to event alignment 322 is a mapping between the annotated TV show or advertisement and each SM content item, with an associated confidence score. This information is stored in the mapping store 324 .
- the TV show to advertisement overlap 318 engine creates mappings between the detected airings of advertisements and the TV shows in which those airings occurred. Put another way, TV show to advertisement overlap 318 engine determines which advertisements aired during which TV shows. Similarly to the SM to event alignment 322 , TV show to advertisement overlap 318 accesses annotated events from the annotated events store 316 and the TV programming guide 304 data, and uses this information to determine the overlap of airings between advertisements and other types of time-based media.
- the engine 318 is configured to compare the temporal extent of the airing times of the TV shows and advertisements. If an advertisement airs within the total temporal extent of the TV show, the airing advertisement is determined to match (or overlap) the airing of the TV show. When an airing of an advertisement occurs on the same channel, in the same TV market, and within the same airing time window as a TV show, a mapping indicative of this occurrence is stored in the TV show/ad overlap store 320 by the engine 318 .
- a mapping may be created between an ad for laundry detergent airing at 7:15 pm PST on FOX™ on Comcast™ cable and an episode of the TV show Glee from 7:00 pm to 8:00 pm PST, also on FOX™ on Comcast™ cable.
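The overlap test described above can be sketched as an interval-containment check with channel and market equality. The dict-based airing representation, with times in minutes since midnight, is an assumption for illustration.

```python
# Hedged sketch: an ad airing matches a show when channel and market agree and
# the ad airs within the show's temporal extent. Data shapes are assumptions.

def ad_overlaps_show(ad, show):
    """`ad` and `show` are dicts with 'channel', 'market', 'start', 'end' (minutes)."""
    return (ad["channel"] == show["channel"]
            and ad["market"] == show["market"]
            and show["start"] <= ad["start"]
            and ad["end"] <= show["end"])

# e.g., a detergent ad at 7:15-7:16 pm during a show airing 7:00-8:00 pm on FOX
overlap = ad_overlaps_show(
    {"channel": "FOX", "market": "Comcast-PST", "start": 435, "end": 436},
    {"channel": "FOX", "market": "Comcast-PST", "start": 420, "end": 480},
)
```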
- a web ID to SM ID 326 engine correlates the web browsing behavior of individuals with their use of SM systems to identify (or map or align) the user's web ID to their SM ID.
- a given web ID may be mapped to one or more SM IDs, and a given SM ID may be mapped to one or more web IDs.
- the web ID to SM ID alignment 326 engine receives input from a SM content store 306 containing SM content items, and a SM author store 308 containing the SM IDs of the users who authored each SM content item in the SM content store 306 .
- the web ID to SM ID alignment 326 engine also receives input from a cookie store 310 and a web ID store 312 .
- the cookie store 310 stores cookies (or HTTP cookies, web cookies, or browser cookies) containing text regarding the behavior of a web ID on the internet.
- the behavior stored in a cookie may include a list of websites visited, times when the websites were visited, website authentication information, user preference information for the browser generally or for specific websites, shopping cart content, or any other textual information.
- the cookies stored in cookie store 310 may be received by the identification server from any one of a number of different sources, including any of the systems described with respect to FIG. 3 .
- the list of web IDs of the cookies in store 310 may be separately stored in a web ID store 312 .
- FIG. 6 is an illustration of one embodiment of web ID to SM ID alignment 326 .
- a web user's ID is matched with one or more SM IDs by matching uniform resource locators (URLs) that appear in tracking cookies and also in SM content items, and by also matching the times that those URLs appear.
- one or more cookies storing the website URL browsing behavior of a single web ID are used as a baseline to compare against the SM content items stored in the SM content store 306 .
- the SM content items used in the matching each contain at least one URL link. The exact manner in which URLs and times in the cookie/s and the SM content items are compared may vary depending upon the implementation.
- the SM content items are time indexed.
- the time index contains a number of time bins, where each bin covers a distinct, non-overlapping time range (e.g., one hour periods).
- the SM content items are added to the time bins depending upon when they were authored (e.g., the date and time when they were posted to a social networking system).
- Each entry in one of the time bins of the time index may, for example, include two values, a value indicating the SM ID of the user who authored the SM content item, and the URL or URLs contained in the SM content item.
- the entries in a given bin are arranged in reverse chronological order.
- the comparison is performed by taking each URL in a cookie, examining the time that the URL was visited, and comparing that time against the time index to match a particular time bin. Then, searching within that time bin, the URL from the cookie is compared against the URLs of the SM content items in the time bin. If the URL from the cookie matches a URL from a SM content item in the time bin, then it is determined that there is an instance of a match between the web ID of the cookie and the SM ID of the matching content item. This process may be repeated for each URL in the web cookie, against the SM content items in each matching time bin.
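For illustration, the bin-and-compare process just described can be sketched as follows; the tuple shapes and the one-hour bin width are assumptions for this example, not details taken from the patent.

```python
from collections import defaultdict

BIN_SECONDS = 3600  # one-hour time bins, as in the example above


def build_time_index(sm_items):
    """Index SM content items into non-overlapping time bins.

    sm_items: list of (sm_id, author_time_epoch, urls) tuples.
    Each bin entry keeps the author's SM ID and the URLs in the item.
    """
    index = defaultdict(list)
    for sm_id, ts, urls in sm_items:
        index[ts // BIN_SECONDS].append((sm_id, urls))
    return index


def match_cookie(cookie_visits, index):
    """Count URL/time matches between a cookie and the time index.

    cookie_visits: list of (url, visit_time_epoch) pairs from one cookie.
    Returns a dict mapping SM ID -> number of matches found.
    """
    matches = defaultdict(int)
    for url, ts in cookie_visits:
        # look only inside the bin matching the visit time
        for sm_id, urls in index.get(ts // BIN_SECONDS, ()):
            if url in urls:
                matches[sm_id] += 1
    return dict(matches)
```

More matches for a given SM ID indicate a more likely correlation with the web ID of the cookie, as discussed below.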
- Basing the alignment between web IDs and SM IDs on the comparison above relies on the assumption that if a user authors a SM content item containing a URL link to a website on the internet, the user likely visited that URL using a web browser near in time to when they authored the SM content item. While this assumption is not expected to hold in every single instance, it is expected to hold in many cases.
- a single match between a SM ID and a web ID does not necessarily guarantee that the SM ID is definitively correlated with the web ID. Even multiple matches may not guarantee correlation. However, the greater the number of matches that are detected between a web ID and a SM ID, the more likely it is that the two are correlated.
- the alignment 326 between a web ID and SM IDs may be expressed as a list of all SM IDs that contain at least a threshold number of matches to the web ID. Further, each web ID to SM ID alignment may be represented as a confidence value. The more matches between a web ID and a SM ID, the greater the confidence value. The confidence may be determined as any numerical value (e.g., ranks, probabilities, percentages, real number values). Confidence values may also be normalized, for example using the confidence values of the other SM IDs in the list.
- the contribution that a match makes to the confidence value may be a fixed value, such that each match between URLs in SM content items and URLs in a cookie contributes the same amount as any other match.
- the contribution a match makes to the confidence value may vary depending upon the popularity of the website. For example, if a website is very rarely visited, then a match to a shared link in a SM content item may increase the confidence value a larger amount versus the contribution of a match of a very commonly visited website. Contributions of matches to the confidence value may also vary depending upon other factors including, for example, the time of day of the match, and the number of simultaneous visitors (i.e., density of visitors) at a given URL at the time of the match.
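One plausible realization of popularity-weighted contributions scales each match by the inverse logarithm of the site's visit count; both the log scaling and the visit-count map are illustrative assumptions rather than details from the patent.

```python
import math


def match_contribution(url, visit_counts, base=1.0):
    """Weight a match by rarity: rarely visited sites contribute more.

    visit_counts: assumed map of URL -> observed visit count
    (a proxy for website popularity).
    """
    popularity = visit_counts.get(url, 1)
    # assumption: log-scaled popularity dampens very popular sites
    return base / math.log2(popularity + 1)


def confidence_scores(matches_per_sm_id, visit_counts):
    """Sum weighted contributions per SM ID, normalized across the list."""
    raw = {
        sm_id: sum(match_contribution(u, visit_counts) for u in urls)
        for sm_id, urls in matches_per_sm_id.items()
    }
    total = sum(raw.values()) or 1.0
    return {sm_id: score / total for sm_id, score in raw.items()}
```

A match on a rarely visited site thus raises the confidence value more than a match on a commonly visited one, and the normalization step mirrors the list-relative normalization described above.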
- FIG. 6 illustrates an example alignment between three example candidate SM users 1 , 2 , and 3 , versus a single example web ID cookie 604 .
- the cookie and content items are represented as temporal extents from T 0 to T 5 by the rectangular boxes, and the appearance of a URL at a particular time is represented by a dark vertical mark.
- Candidate user SM 1 matches the web ID cookie 604 twice (for URL 1 and URL 3 )
- candidate user SM 2 matches the web ID cookie 604 three times (for URL 1 , URL 2 and URL 3 )
- candidate user SM 3 matches the web ID cookie 604 only once (URL 3 ).
- candidate SM user 2 has the highest confidence as being the same user from web ID cookie 604
- candidate SM user 1 has the second highest confidence
- candidate SM user 3 has the lowest confidence.
- the matches for all three candidate SM users are displayed in a single time index 62 at the top of FIG. 6 , illustrating how an example time index may be structured.
- the time index may contain all SM content items under consideration, not just the matches, but for clarity of the example only matches are shown.
- the matching of URLs may vary depending upon the implementation. As URLs may contain a great deal of information, the portion of the URL that is used to match will affect how many other URLs match that URL. On average, if the entire URL string is matched, fewer matches will be determined than if URLs are truncated prior to matching. Truncation may be used on either (or both) of SM content item URLs and website URLs to match any portion of each URL.
- a URL in a cookie may be, in its entirety, “http://forum.site.com/showthread.php?4819133-myThread/page51.” Truncation may be used to match “http://forum.site.com/showthread.php?4819133-myThread/” or “http://forum.site.com/”. Truncation of URLs may be fixed in advance, adjusted dynamically, or externally controlled as an input to the matching process. Truncation thus may act as a tunable parameter that widens or narrows the scope of a potential match. More truncation will result, generally, in more matches and higher confidence values, whereas less truncation will result, generally, in fewer matches and lower confidence values.
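A minimal sketch of tunable truncation follows, under the simplifying assumption that truncation depth counts path segments and that query strings are discarded (the example above, by contrast, retains part of the query).

```python
from urllib.parse import urlsplit


def truncate_url(url, depth):
    """Keep scheme/host plus the first `depth` path segments.

    depth=0 keeps only "scheme://host/"; larger depths narrow the match
    scope. `depth` plays the role of the tunable truncation parameter.
    """
    parts = urlsplit(url)
    segments = [s for s in parts.path.split("/") if s][:depth]
    path = "/" + "/".join(segments) + ("/" if segments else "")
    return f"{parts.scheme}://{parts.netloc}{path}"


def urls_match(a, b, depth):
    """Match two URLs after truncating both to the same depth."""
    return truncate_url(a, depth) == truncate_url(b, depth)
```

Lowering `depth` widens the scope of a potential match, consistent with the observation that more truncation generally yields more matches.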
- the web ID to SM ID alignment process 326 may also be configured to expand shortened URLs to create expanded URLs for use in matching.
- a content item may contain a URL that has been shortened using a website such as Bit.ly™.
- the original URL prior to shortening may be of the form “http://forum.site.com/showthread.php?4819133-myThread/page51.”
- the shortened version may be contained in a SM content item or cookie as http://bit.ly/JLzAzK.
- the expanded URLs may then be used in matching in place of, or in addition to, the shortened URLs.
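Expansion might be sketched with a pluggable resolver, where the HTTP resolver shown simply follows redirects to their destination; the set of known shortener hosts is an assumption for this example.

```python
from urllib.parse import urlsplit
from urllib.request import Request, urlopen

# assumption: a list of known URL-shortener domains
SHORTENER_HOSTS = {"bit.ly", "t.co", "goo.gl"}


def http_resolver(url):
    """Follow HTTP redirects and return the final (expanded) URL."""
    with urlopen(Request(url, method="HEAD")) as resp:
        return resp.geturl()


def expand_url(url, resolver=http_resolver):
    """Return the expanded form of a shortened URL, or the URL unchanged.

    `resolver` is any callable mapping a short URL to its expansion,
    allowing a cache or batch service to be substituted for live HTTP.
    """
    if urlsplit(url).netloc.lower() in SHORTENER_HOSTS:
        return resolver(url)
    return url
```

The expanded URLs can then be fed into the same truncation and matching steps as ordinary URLs.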
- the scope of URL matching may further be controlled by abstracting URLs into a general type prior to matching. Once URLs have been abstracted to types, matching is performed by comparing the types of the cookie URLs to the types of SM content item URLs. For example, a URL in a cookie may be for www.espn.com. This may be abstracted to be a URL of type SPORTS-SITE. A URL in a content item may be for www.ncaa.org. This URL may also be abstracted to be a URL of type SPORTS-SITE. Without performing abstraction, these two URLs would generally not match.
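Type abstraction could be sketched with a host-to-type lookup table; a production system would presumably rely on a website categorization service rather than the hand-built table assumed here.

```python
from urllib.parse import urlsplit

# assumption: a hand-built host -> type table for illustration only
HOST_TYPES = {
    "www.espn.com": "SPORTS-SITE",
    "www.ncaa.org": "SPORTS-SITE",
    "www.imdb.com": "ENTERTAINMENT-SITE",
}


def url_type(url):
    """Abstract a URL to a general type based on its host."""
    return HOST_TYPES.get(urlsplit(url).netloc.lower(), "UNKNOWN")


def types_match(cookie_url, sm_url):
    """Match abstracted types rather than the literal URLs."""
    t1, t2 = url_type(cookie_url), url_type(sm_url)
    return t1 != "UNKNOWN" and t1 == t2
```

Under this abstraction, a visit to www.espn.com and a shared link to www.ncaa.org both resolve to SPORTS-SITE and therefore match, even though the literal URLs do not.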
- time range covered by each time bin in the time index may be adjusted to control how close in time SM content items must be authored to the time when websites are visited in order to generate a match.
- increasing the time range covered by a bin in the time index increases the amount of time needed to find matches, as larger bins contain more SM content items to search.
- time ranges may be fixed in advance, dynamically adjusted, or externally controlled as an input to the matching process.
- web ID to SM ID alignment 326 has been described above in terms of indexing SM content items and matching the web cookie against those SM content items, the alignment 326 may also be performed in reverse. For example, web cookies may be indexed, and the SM content items authored by a SM ID could be compared against those cookies. Additionally, alignment 326 has been described in terms of indexing based on time in order to facilitate URL matching. Indexing may also be performed by URL to facilitate time matching.
- the web ID to SM ID 326 engine correlates the web browsing behavior of individuals with their use of SM systems to map the user's web ID to their SM ID.
- the engine 326 maps web IDs against a model user constructed to represent a group of SM IDs who share one or more traits.
- the engine 326 is instead determining whether the user of a given web ID matches the kind of person represented by a model.
- a model user is constructed by aggregating together all of the SM IDs of users who are known to have certain specified traits (e.g., interests, hobbies, activities, characteristics, television viewing habits, brand affinities). These traits may be extracted directly from SM content items associated with each SM ID.
- the model user may be associated with all of the SM IDs that have a SM content item indicating that the user of the SM ID shares the specified trait.
- the model user may also be associated with all of the SM content items of all of the SM IDs that share the specified trait.
- the model user is similar to any other SM ID in that both have associated SM content items that may be used for matching.
- the web ID to SM ID 326 engine performs the same matching process as described above, except that in this case the model user is substituted in place of a SM ID.
- the URLs and times of the cookie/s associated with the web ID are compared against the URLs and times from the SM content items associated with the model user.
- a confidence value may be determined that a web ID matches a model user.
- An example of a model user may be a model user who has the trait of being a Joss Whedon fan.
- the model user may be constructed to include all SM IDs who have expressed interest in Joss Whedon directly, or any of the projects he has worked on (e.g., Dr. Horrible's Sing-Along Blog, Firefly). If one or more of the content items of a given SM ID mentions any of the projects delineated as being associated with the trait, that SM ID may be incorporated into the model user.
- Web ID to model user alignment may then be performed using engine 326 to determine whether or not a user associated with a web ID is a Joss Whedon fan.
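Constructing such a model user might be sketched as follows; the keyword-matching heuristic and the data shapes are assumptions for illustration, not the patent's specified method.

```python
def build_model_user(trait_keywords, sm_items):
    """Aggregate SM IDs whose content mentions any trait keyword.

    trait_keywords: terms delineating the trait (e.g., the names of
    Joss Whedon projects such as "Firefly").
    sm_items: list of (sm_id, text, urls) tuples.
    Returns the member SM IDs and ALL of their content items, so the
    model user can be matched against cookies like a single SM ID.
    """
    keywords = {k.lower() for k in trait_keywords}
    member_ids = {
        sm_id for sm_id, text, _ in sm_items
        if any(k in text.lower() for k in keywords)
    }
    # the model user inherits every content item of its member SM IDs,
    # not just the items that mentioned the trait
    items = [(sm_id, text, urls) for sm_id, text, urls in sm_items
             if sm_id in member_ids]
    return member_ids, items
```

The returned item list can then be indexed and matched against a web ID's cookies exactly as in ordinary web ID to SM ID alignment.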
- web ID to SM ID alignment may instead be performed by a third party (not shown) externally.
- Identification server 130 may make requests for alignment from the third party, and may receive responses containing the alignments.
- FIG. 7 is an interaction diagram for using the identification server to send messages, according to one embodiment.
- a client device 150 operated by a user, sends 702 a web page request to a web server 170 requesting content for display in a web browser running on the client device.
- the request 702 may be for SM information
- the recipient of the request 702 may be a SM source 110 such as a social networking system, rather than a web server 170 .
- the web page request 702 may also include a request for time-based media from a time-based media source 120 .
- the recipient of the web page request 702 retrieves the content to be returned in response 716 to the request 702 .
- the web server 170 may also request 704 advertising content, for example, where the web page includes an advertisement placement, such as banner ad, sidebar ad, overlay ad, or the like.
- the web server 170 obtains the advertising content 714 using the ad request 704 from an ad server 180 .
- the ad server 180 is an advertisement bidding system that allows advertisers 160 (not shown in FIG. 7 ) to place bids on advertisement placements in web pages.
- the ad server 180 interacts with one or more advertisers 160 to coordinate the purchase and display of advertisements. This is not shown as a separate step in FIG. 7 , as the identification server 130 provides services to both the ad server 180 and the advertisers 160 in order to facilitate the return of advertising material 714 to the web server 170 .
- Ad servers 180 may send a request 706 to the identification server 130 to determine how much to charge for advertisement space to be displayed to the requesting user.
- Advertisers 160 may send a request 706 to the identification server 130 to determine how much to bid for advertisement space to be displayed to the requesting user, and what advertisement to show to the requesting user.
- the identification server 130 responds to received requests 706 from any entity with one or more messages 710 a - c , sent either separately or as part of a regularized feed.
- the recipient/s and contents of the message 710 vary depending upon the requestor and the contents of the message request 706 .
- a message 710 a may be sent to the advertiser 160 or ad server 180 , a message 710 b may be sent to web server 170 , or a message 710 c may be sent to the client 150 .
- Various use cases for the identification server 130 including the contents of the request 706 , the recipient of the message 710 , and actions performed by the identification server 130 are described further below.
- Processing 712 the message 710 a may include, for example, determining which advertisement to send 714 to the web server 170 , constructing a tailored advertisement 714 to send to the web server 170 , sending an advertisement referenced or stored in the message 710 to the web server 170 , and/or pricing or placing a bid on advertisement space in the page response 716 .
- the web server 170 sends the ad, in both cases, to the client 150 .
- the message 710 c contains an advertisement to be presented to the user of the client 150 .
- the message request 706 received by the identification server 130 may include a web ID, a SM ID, a cookie, instructions for receiving targeting criteria, and tolerance parameters.
- the message 708 sent by the identification server 130 may include, for example, a specific advertisement to send to a specific web ID or SM ID, advertising material that the advertiser 160 may use to create an advertisement to be sent to the user, an identification of a user in terms of one or more web IDs or SM IDs, targeting criteria, and/or a listing of the time-based media events (e.g., ads, TV shows) the user is likely to have seen.
- the web server 170 , ad server 180 , advertisers 160 , SM sources 110 , and identification server 130 may not be separate, and functions performed by each of these entities may be combined together.
- the identification server 130 may communicate with the web server 170 , SM sources 110 , or the clients directly 150 .
- the identification server 130 is configured to use event airing detection 314 , TV show to ad overlap 318 , SM to event alignment 322 , and web ID to SM ID alignment 326 as described above. There are a number of different use cases with different inputs that affect the content of the message 710 output by the identification server 130 .
- Advertisers 160 , including, for example, a social networking system 110 , may want to determine whether to bid on a particular advertisement space to be presented to a user associated with a web ID. To determine whether or not to place the bid, the advertisers 160 may want to know who the user associated with the web ID is. The identification server 130 can provide this information in the form of one or more correlated SM IDs of the user. Alternatively, advertisers 160 may wish to tailor advertisements to be sent to a specific user associated with a particular web ID. The advertiser 160 may request SM IDs correlated with a given web ID to determine what advertising content will be sent to the user associated with the web ID.
- the identification server 130 receives a request 706 containing a cookie and a web ID, and requesting one or more SM IDs corresponding to the web ID.
- the advertiser 160 may, for example, use the returned SM IDs to determine what advertisement to send to the user.
- the identification server 130 uses the received web ID and the cookie to perform web ID to SM ID alignment 326 .
- the identification server 130 responds with a message 710 comprising one or more SM IDs, as well as confidences indicative of the chance that each returned SM ID corresponds to the received web ID.
- web ID to SM ID alignment 326 may also return one or more model users who match a web ID. Consequently, the message 710 may also comprise one or more model users (with confidences) who match the web ID from the request.
- the advertisers 160 can make use of the returned SM IDs.
- the advertisers 160 may instead want to know what TV advertisements (or other time-based media) the user associated with the web ID has been exposed to.
- the identification server 130 can provide this information.
- the advertisers 160 may, again, use this information to determine what advertisements to bid on, or to tailor their advertisements to the user who will be receiving the advertisement. For example, advertisers 160 may want to send an advertisement to a user, where the advertisement sent is related to an advertisement or TV show that has aired and that the user has likely been exposed to.
- the identification server 130 receives a request 706 containing a cookie and a web ID, and requesting a listing of advertisements (e.g., specific advertisements, brands, ad creatives) that the user corresponding to the web ID is likely to have seen.
- the identification server 130 uses the web ID and the cookie to perform web ID to SM ID alignment 326 .
- Asynchronously with the alignment 326 , the identification server 130 also performs event airing detection 314 , the TV show to ad overlap 318 process, and SM to event alignment 322 .
- the identification server 130 obtains the events that are aligned with those SM IDs from SM to event mapping store 324 . This identifies the annotated time-based media events that are correlated with the web ID.
- the identification server 130 obtains the advertisements that aired during those events from the TV show to ad overlap store 320 . This identifies the advertisements that the user associated with the web ID is likely to have seen.
- the identification server 130 responds with a message including the advertisements that the user associated with the web ID is likely to have seen, or with a message including an advertisement related to the events likely seen.
- advertisers 160 may want to know whether a web ID or SM ID is interested in or associated with a particular topic. For example, given a web ID, an advertiser may wish to know whether a user is a pet owner. The identification server 130 can provide this information. The advertisers 160 may, again, use this information to determine what advertisements to bid on, or to tailor their advertisements to the user who will be receiving the advertisement. For example, advertisers 160 may want to send advertisements to users already known to be pet owners, in order to maximize the efficacy of their ad campaign.
- the identification server 130 receives a request 706 containing a cookie and a web ID, and a request for a determination of whether the user associated with the web ID is interested in a topic.
- the advertiser 160 provides the identification server 130 with a rule, and may provide one or more keywords for assisting in the determination of whether or not a user has interest in the designated topic.
- the identification server 130 may itself determine one or more keywords to associate with a topic for the purpose of determining whether or not a user has interest in the designated topic.
- the identification server 130 uses the web ID and the cookie to perform web ID to SM ID alignment 326 .
- the identification server 130 further performs SM content item to keyword alignment 322 , including filtering 502 , comparative feature extraction 510 , and alignment 512 , to determine whether the user associated with the web ID and SM ID is interested in the topic.
- the identification server 130 then responds with a message 710 based on the rule and the user's determined interest in the topic.
- the request 706 received by the identification server 130 comprises only a web ID
- the identification server 130 uses stored cookies 310 associated with the web ID to perform web ID to SM ID alignment 326 .
- This is beneficial as an alternative to including cookies as part of requests 706 .
- advertisers 160 may more easily make requests 706 of the identification server 130 without having to supply as much input.
- this embodiment covers the case where the source of the cookie is someone other than the requestor.
- the requestor of a request 706 has information regarding a user in a website browsing context (e.g., their web ID or cookies), and uses this information to obtain, from the identification server, information about the user's behavior in a social media context.
- the identification server is also configured to operate in the reverse situation, where the requestor has information about the user in a social media context, and requests information about the user in a website browsing context. This facilitates use of the identification server by a wider variety of possible consumers.
- the request 706 includes a SM ID and/or a list of SM content items authored by the user associated with that SM ID.
- the identification server 130 compares the SM content items to stored cookies 310 associated with one or more stored web IDs 312 to perform web ID to SM ID alignment 326 .
- the identification server 130 may return a message containing a list of one or more web IDs corresponding to the received SM ID, along with confidences indicative of the chance that each web ID corresponds to the received SM ID.
- the identification server may also return a message including the advertisements that the user associated with the received SM ID is likely to have seen previously, as above.
- the identification server 130 may also provide advertisers 160 and ad servers 180 with additional input options to control the behavior of the identification server 130 .
- a request 706 for a message may include tolerance parameters to be used as part of web ID to SM ID alignment 326 .
- the tolerance parameters may, on average, increase or decrease the chance of a match between a SM ID and a web ID by altering the conditions for a match.
- the tolerance parameters may also tune the amount of time taken to perform matches during web ID to SM ID alignment 326 . Examples of tolerance parameters include the extent to which URLs are truncated for matching, whether truncation or shortened URL expansion is used, and the time range covered by each bin in the time index. Other examples of tolerance parameters are also contemplated, particularly if other types of indices are used to perform matches, and/or if other items of data are used to perform the match.
- The Identification Server May Be Configured to Send Messages Automatically
- the identification server 130 may also be configured to provide messages 710 to recipients automatically. This may be useful, for example, if an advertiser wishes to send an advertisement based on the airing of one of their own advertisements during a television show. For example, an advertiser 160 may air an ad on TV, and may use the identification server 130 to tell the advertiser 160 when a related advertisement should be sent using a message 710 to users via a website browser as well. The identification server's 130 ability to detect airings of advertisements 314 on TV and notify the advertiser 160 accordingly facilitates this business strategy. Additionally, advertisers 160 may wish to be continually updated, for example using a feed, regarding other information, for example correlations between web IDs and SM IDs, or what advertisements various users have likely been exposed to.
- the identification server 130 generates 708 messages 710 on its own initiative. Messages 710 may be generated 708 so as to be part of a regularized feed, or in response to the detection 314 of airing of a particular time-based media event. To determine what messages 710 to send, the identification system may store rules (not shown) for when and to whom messages 710 are to be sent. Rules are described further below.
- the identification server 130 is configured to keep track of the TV shows and advertisements that are currently airing or have aired.
- the identification server 130 may do this by monitoring information from the TV show/ad overlap store 320 as provided by event airing detection 314 , and/or from the TV programming guide 304 .
- message selection 330 queries for rules wherein the detected advertisements or TV shows are used in the rule.
- the process for the detection of airings and sending of messages 710 in response may be performed in batches one or more times per day.
- message selection 330 creates a message 710 associated with the matched rule. If more than one rule is matched, the identification server 130 may select between the possible matched rules. The selection may, for example, be based on how recently the user is expected to have seen the ad, the amount of time since a user or group of users received a message, and/or how much an advertiser associated with a rule and message paid or bid for the advertising space for that message.
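A selection among multiple matched rules along the factors listed above might be sketched as follows; the field names and the bid-then-recency ordering are illustrative assumptions, not the patent's prescribed policy.

```python
from collections import namedtuple

# illustrative candidate shape: `bid` is what the advertiser paid or bid
# for the advertising space, and `exposure_recency_s` is the number of
# seconds since the user is expected to have seen the related ad
CandidateRule = namedtuple("CandidateRule", "name bid exposure_recency_s")


def select_rule(matched_rules):
    """Pick one rule among several matches: the highest bid wins, and
    ties are broken in favor of the most recent expected ad exposure."""
    return max(matched_rules, key=lambda r: (r.bid, -r.exposure_recency_s))
```

Other selection factors, such as the time since a user last received a message, could be folded into the sort key in the same way.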
- the identification server 130 may make use of rules to determine what messages 710 to send, when to send messages, and to whom to send messages.
- the use of rules allows the identification server not only to identify users, as described in the use cases above, but also to send messages 710 containing advertising content back to advertisers 160 , ad servers 180 , or directly to client devices 150 .
- Rules may be used both to send messages 710 automatically, and also to respond to requests 706 for advertising material. Rules may be stored in a store or database (not shown).
- Rules for the sending of messages 710 may be specified using one or more rule antecedents and one or more rule consequents.
- a rule may specify an airing criteria, a temporal criteria, a geographical criteria, a demographic criteria, and a viewed content criteria.
- a rule may embody the logic of “If advertisement X airs during show Y, then send message N to web ID M.”
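The antecedent/consequent structure could be sketched as a record with optional criteria fields; the field names below are illustrative, not drawn from the patent.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Rule:
    """One rule: airing antecedent plus message consequent."""
    ad_id: str                         # airing criteria: "if advertisement X..."
    show_id: str                       # "...airs during show Y"
    message: str                       # consequent: "send message N"
    max_delay_s: Optional[int] = None  # temporal criteria, if any
    region: Optional[str] = None       # geographical criteria, if any
    demographic: Optional[str] = None  # demographic criteria, if any


def rule_matches_airing(rule, ad_id, show_id):
    """Check the airing-criteria antecedent against a detected airing."""
    return rule.ad_id == ad_id and rule.show_id == show_id
```

The optional fields correspond to the additional criteria described below; a `None` field means the rule imposes no requirement of that kind.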
- Rules may be provided by advertisers 160 to the identification server 130 to determine when messages 710 containing their advertising content are sent to client devices 150 .
- some criteria, such as airing criteria and temporal criteria, indicate under what conditions a message 710 is to be sent, while other criteria, such as geographical criteria, demographic criteria, and viewed content criteria, indicate the population of web IDs who will receive the message 710 .
- messages 710 may be sent without performing SM to event alignment 322 , or web ID to SM ID alignment 326 .
- Messages 710 may be sent according to these rules using message selection 330 , event airing detection 314 , and the TV show to ad overlap process 318 .
- SM to event alignment 322 and web ID to SM ID alignment 326 are used to send messages 710 as well.
- Airing criteria specifies the trigger for when a message 710 is to be sent.
- Airing criteria in a rule may take the form “if advertisement X airs during show Y.” Generally, an airing criteria specifies that if a given advertisement or TV show has aired, then a message 710 is to be sent responsive to that airing. The remainder of the rule may specify the content of the message 710 and who the recipients of the message 710 will be. Whether an airing criteria of a rule is met may be determined using the event airing detection 314 engine as well as the TV show to ad overlap 318 engine.
- Temporal criteria specifies how close in time to the airing of a time-based media event a web ID in a request 706 must have visited a website requesting advertising content in order to receive the message 710 from the identification server 130 .
- the identification server 130 can ensure that the messages 710 sent occur close in time (e.g., within 30 seconds, within 5 minutes, within 2 days) to the actual airing of the event. For example, a temporal criteria may be “if a website request is received from a web ID within X seconds of the airing of TV show Y, send message Z to that web ID in response to the request.”
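The temporal check reduces to a simple window comparison; the sketch below assumes epoch-second timestamps for both the airing and the web request.

```python
def temporal_criteria_met(airing_time_s, request_time_s, window_s):
    """True if the web request arrived within `window_s` seconds after
    the airing. Per the example above, window_s might be 30 seconds,
    5 minutes (300), or 2 days (172800)."""
    return 0 <= request_time_s - airing_time_s <= window_s
```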
- Geographical criteria specifies a geographical region requirement for the potential recipients of a message 710 .
- This analysis may be performed by the message selection engine 330 .
- the geographical location of the user associated with the web ID may be compared to a geographical region requirement in a rule to determine if the user's location is within the specified region. If it is, they may be sent the message 710 . If not, they will not receive the message 710 .
- Geographical criteria is useful for advertisers 160 who are only located in particular real world geographic regions, and who wish to target their advertising to those regions.
- An example of a geographic criteria may be “if advertisement Y airs during TV show Z, then send message N to web IDs in geographic region M.”
- Demographic criteria specifies that a message 710 should only be sent to web IDs associated with users of a certain demographic.
- Demographic criteria may include, for example, age, gender, socioeconomic status, interests, hobbies, and group membership.
- the demographic of a user associated with a web ID may be determined from one or more cookies that may be associated with the web ID.
- Demographics may be determined internally by the identification server 130 , or may be determined externally. For example, demographic information may be included in requests 706 received from advertisers 160 , ad servers 180 , and web servers 170 .
- An example of a demographic criteria may be “if the user associated with a web ID in a request is of demographic W, then send message N in response to the request”, or alternatively the demographic criteria may be “if advertisement Y airs during TV show Z, then send message N to web IDs of demographic W.”
- Demographic criteria may alternatively require that a time-based media event match a particular demographic before a message 710 may be sent.
- the demographic of an advertisement may be provided by the advertiser 160 .
- the demographic of a TV show may be part of the electronic programming guide data 304 , or it may be received from external sources. For example, entities such as NIELSEN and KANTAR organize data about the demographics of people who watch various TV shows. For example, it may be specified that a particular TV show is associated with watches within the age range of 18-29.
- An example of a rule that incorporates both the demographic of the recipient of the message 710 and the demographic of a TV show may be “if the user associated with a web ID in a request is of demographic W, and if advertisement Y airs during TV show Z, and TV show Z is also associated with demographic W, then send message N in response to the request.” Alternatively, the demographic criterion may be “if advertisement Y airs during TV show Z of demographic W, then send message N to web IDs also of demographic W.”
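The combined rule above can be sketched as a single predicate over the request's demographic and the show's demographic. The dict-based rule format is a hypothetical convenience, not a format disclosed in the patent.

```python
def demographic_rule(rule, request_demo, ad, show, show_demo):
    """Return message N only if advertisement Y airs during show Z, and
    both the requesting user and show Z match demographic W of the rule."""
    if (ad == rule["ad"] and show == rule["show"]
            and request_demo == rule["demo"] == show_demo):
        return rule["message"]
    return None

rule = {"ad": "ad-Y", "show": "show-Z", "demo": "W", "message": "msg-N"}
```

Dropping either demographic check recovers the two simpler rule variants described earlier.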
- The demographic may also be determined using SM to event alignment 322 .
- The SM content items contained in the SM to event mapping store 324 for the time-based media events of a TV show may be analyzed to determine the entire population of SM IDs who have authored a SM content item regarding a given TV show.
- The SM content items of those SM IDs may be analyzed to determine the demographics of those SM IDs. For example, for the TV show “Top Gear” it may be determined that the majority of SM content items related to “Top Gear” are posted by males. As a consequence, “Top Gear” events may be associated with the male demographic.
- A viewed content criterion specifies that a message 710 should be sent only when the potential recipient is determined to be likely to have seen a particular TV show or advertisement. For example, it may be specified that a request 706 containing a web ID is associated with a SM ID that is likely to have seen a particular time-based media event (e.g., a specific TV show or advertisement), or at least one in a series of related time-based media events (e.g., any episode of a particular TV show). The determination of whether a viewed content criterion is met may be performed using SM to event alignment 322 and web ID to SM ID alignment 326 to determine what events a user is likely to have seen, as described above.
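Deriving a show's demographic from its aligned SM authors, as in the “Top Gear” example, amounts to a majority vote over unique SM IDs. A minimal sketch; the pair-based input format is an assumption for illustration.

```python
from collections import Counter

def show_demographic(sm_items):
    """sm_items: (sm_id, demographic) pairs for content items aligned to
    one show's events. Returns the majority demographic, counting each
    SM ID once regardless of how many items it authored."""
    by_author = dict(sm_items)            # one vote per unique SM ID
    counts = Counter(by_author.values())
    demo, _ = counts.most_common(1)[0]
    return demo
```

With two male authors and one female author, the show is tagged male, even if the female author posted many items.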
- An example of a viewed content criterion may be “if the user associated with a web ID in a request is likely to have seen episode X of TV show Y, send message N.”
- Alternatively, the viewed content criterion may be “if advertisement Y airs during TV show Z, send message N to all web IDs likely to have seen advertisement Y.”
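Evaluating a viewed content criterion composes the two alignments: SM IDs aligned to the event, then web IDs correlated to those SM IDs. A hypothetical sketch with dict/set stand-ins for the mapping stores.

```python
def likely_viewers(event, sm_ids_for_event, web_id_to_sm_id):
    """Web IDs whose correlated SM ID is likely to have seen the event,
    combining SM-to-event alignment with web-ID-to-SM-ID alignment."""
    seen = sm_ids_for_event.get(event, set())
    return {w for w, s in web_id_to_sm_id.items() if s in seen}

sm_map = {"show-Y/ep-X": {"sm-1", "sm-2"}}
web_map = {"web-1": "sm-1", "web-2": "sm-3"}
```

Here only `web-1` qualifies to receive the message, since its SM ID is among the event's likely viewers.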
- Rules may also contain other qualifiers. As some advertisers show the same advertisement multiple times during a TV show, a rule can also precisely identify the time (or time window) at which an advertisement aired, the number of messages to be sent in response to the advertisement, or the advertisement's sequence position (e.g., first appearance, second appearance, etc.). Sequence position is useful where the advertiser does not know in advance exactly when its advertisements will appear, and to overcome variations in program scheduling. Rules may also specify that a message 710 is to be sent the next time the recipient user logs into the SNS, the next time the user authors a content item on the relevant TV show or advertisement, or at any time in the future.
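A sequence-position qualifier can be checked by ordering the detected airings of the same advertisement by air time. The airing record format below is a hypothetical illustration.

```python
def sequence_position(airings, this_airing):
    """1-based position of this airing among detected airings of the same
    advertisement, ordered by air time."""
    same_ad = sorted((a for a in airings if a["ad"] == this_airing["ad"]),
                     key=lambda a: a["time"])
    return same_ad.index(this_airing) + 1

airings = [{"ad": "ad-Y", "time": 10},
           {"ad": "ad-Y", "time": 40},
           {"ad": "ad-Q", "time": 25}]
```

A rule such as “send message N on the second appearance of ad-Y” fires only when `sequence_position(...) == 2`, regardless of when in the show that appearance falls.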
- sequence position e.g., first appearance, second appearance, etc.
- Rules may be created for any number of purposes.
- Rules may be provided by advertisers 160 for determining whether a user has an interest in a topic.
- For example, an advertiser 160 may be interested in knowing whether a user is a pet owner.
- A received web ID may be correlated with a SM ID using web ID to SM ID alignment 326 , and the interest of a SM ID in a topic may be determined using the process described in FIG. 5 above.
- A rule associated with this determination of interest may, for example, be “If web ID X has sufficient interest in topic Y, send message Z to web ID X.”
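The interest rule above reduces to a threshold test over a precomputed interest score. A minimal sketch; the score store and threshold value are assumptions, since the scoring itself is the process described with reference to FIG. 5.

```python
def interest_rule(web_id, interest_scores, topic, threshold, message):
    """'If web ID X has sufficient interest in topic Y, send message Z
    to web ID X.' Returns the message or None."""
    score = interest_scores.get((web_id, topic), 0.0)
    return message if score >= threshold else None

scores = {("web-X", "pets"): 0.8}
```

An unscored (web ID, topic) pair defaults to zero interest, so no message is sent.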
- Although the TV and advertising domains are described above, the methods described herein can be adapted to any domain using time-based media (e.g., radio).
- The method of adaptation is general across different domains.
- Techniques and features used for event segmentation and annotation are adapted to reflect domain-specific characteristics. For example, detecting events in football exploits the visibility of grass as represented in the color distributions of a video frame, while detecting events in a news video or audio clip may exploit cues in the closed captioning stream.
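The grass cue mentioned above can be illustrated with a deliberately crude heuristic: the fraction of frame pixels dominated by green. This is a toy stand-in for the color-distribution features used in real sports segmentation, not the patent's method; the threshold is an assumption.

```python
def grass_fraction(frame):
    """Fraction of pixels whose green channel dominates; frame is a
    list of (r, g, b) tuples."""
    green = sum(1 for r, g, b in frame if g > r and g > b)
    return green / len(frame)

def looks_like_field(frame, threshold=0.5):
    """Crude domain-specific cue: mostly-green frames suggest a
    football field rather than, say, a news studio."""
    return grass_fraction(frame) >= threshold

field_frame = [(30, 180, 40)] * 9 + [(200, 200, 200)]
studio_frame = [(120, 110, 100)] * 10
```

A news-domain detector would instead ignore color entirely and key on closed-caption text, which is the point of adapting features per domain.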
- A software module or engine is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described.
- Embodiments of the invention may also relate to an apparatus for performing the operations herein.
- This apparatus may be specially constructed for the required purposes, and/or it may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computer.
- A computer program may be persistently stored in a non-transitory, tangible computer-readable storage medium, or any type of media suitable for storing electronic instructions, which may be coupled to a computer system bus.
- Any computing systems referred to in the specification may include a single processor, or may be architectures employing multiple processor designs for increased computing capability.
- Embodiments of the invention may also relate to a product that is produced by a computing process described herein.
- A product may comprise information resulting from a computing process, where the information is stored on a non-transitory, tangible computer-readable storage medium and may include any embodiment of a computer program product or other data combination described herein.
Landscapes
- Engineering & Computer Science (AREA)
- Business, Economics & Management (AREA)
- Marketing (AREA)
- Strategic Management (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Finance (AREA)
- Development Economics (AREA)
- Accounting & Taxation (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- General Business, Economics & Management (AREA)
- Health & Medical Sciences (AREA)
- Economics (AREA)
- Game Theory and Decision Science (AREA)
- Entrepreneurship & Innovation (AREA)
- Social Psychology (AREA)
- Computing Systems (AREA)
- Computer Networks & Wireless Communication (AREA)
- Human Resources & Organizations (AREA)
- Primary Health Care (AREA)
- Tourism & Hospitality (AREA)
- Databases & Information Systems (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
- Information Transfer Between Computers (AREA)
Abstract
Description
align(feat(x,y))=[α·content(feat(x,y))]+[β·geoTemp(feat(x,y))]+[γ·author(feat(x,y))]
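The alignment function above combines a content score, a geo-temporal score, and an author score with weights α, β, and γ. A minimal sketch of the weighted sum; the particular weight values are illustrative assumptions, since the excerpt does not fix them.

```python
def align(content_score, geo_temp_score, author_score,
          alpha=0.5, beta=0.3, gamma=0.2):
    """Weighted combination of the three feature scores for aligning a
    SM content item with a time-based media event. Weights are
    hypothetical; any normalization would be chosen during training."""
    return alpha * content_score + beta * geo_temp_score + gamma * author_score
```

With these weights, a perfect score on all three features yields 1.0, while a content-only match contributes at most 0.5.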
Claims (35)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/975,551 US9154853B1 (en) | 2012-05-09 | 2013-08-26 | Web identity to social media identity correlation |
US14/873,687 US9471936B2 (en) | 2012-05-09 | 2015-10-02 | Web identity to social media identity correlation |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/467,281 US8566866B1 (en) | 2012-05-09 | 2012-05-09 | Web identity to social media identity correlation |
US13/975,551 US9154853B1 (en) | 2012-05-09 | 2013-08-26 | Web identity to social media identity correlation |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/467,281 Continuation US8566866B1 (en) | 2012-05-09 | 2012-05-09 | Web identity to social media identity correlation |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/873,687 Continuation US9471936B2 (en) | 2012-05-09 | 2015-10-02 | Web identity to social media identity correlation |
Publications (1)
Publication Number | Publication Date |
---|---|
US9154853B1 true US9154853B1 (en) | 2015-10-06 |
Family
ID=49355416
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/467,281 Active US8566866B1 (en) | 2012-05-09 | 2012-05-09 | Web identity to social media identity correlation |
US13/900,206 Active US8819728B2 (en) | 2012-05-09 | 2013-05-22 | Topic to social media identity correlation |
US13/975,551 Expired - Fee Related US9154853B1 (en) | 2012-05-09 | 2013-08-26 | Web identity to social media identity correlation |
US14/873,687 Active US9471936B2 (en) | 2012-05-09 | 2015-10-02 | Web identity to social media identity correlation |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/467,281 Active US8566866B1 (en) | 2012-05-09 | 2012-05-09 | Web identity to social media identity correlation |
US13/900,206 Active US8819728B2 (en) | 2012-05-09 | 2013-05-22 | Topic to social media identity correlation |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/873,687 Active US9471936B2 (en) | 2012-05-09 | 2015-10-02 | Web identity to social media identity correlation |
Country Status (1)
Country | Link |
---|---|
US (4) | US8566866B1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10733667B1 (en) | 2016-12-29 | 2020-08-04 | Wells Fargo Bank, N.A. | Online social media network analyzer |
Families Citing this family (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101672445B1 (en) * | 2010-03-19 | 2016-11-04 | 삼성전자주식회사 | Method and apparatus for controlling content play in content transmission system |
EP2732424A4 (en) * | 2011-07-13 | 2015-03-25 | Bluefin Labs Inc | Topic and time based media affinity estimation |
US20130263166A1 (en) * | 2012-03-27 | 2013-10-03 | Bluefin Labs, Inc. | Social Networking System Targeted Message Synchronization |
US8949240B2 (en) * | 2012-07-03 | 2015-02-03 | General Instrument Corporation | System for correlating metadata |
US20140081909A1 (en) * | 2012-09-14 | 2014-03-20 | Salesforce.Com, Inc. | Linking social media posts to a customers account |
US8887197B2 (en) * | 2012-11-29 | 2014-11-11 | At&T Intellectual Property I, Lp | Method and apparatus for managing advertisements using social media data |
US10235683B2 (en) | 2014-07-18 | 2019-03-19 | PlaceIQ, Inc. | Analyzing mobile-device location histories to characterize consumer behavior |
GB2510424A (en) * | 2013-02-05 | 2014-08-06 | British Broadcasting Corp | Processing audio-video (AV) metadata relating to general and individual user parameters |
US10628858B2 (en) | 2013-02-11 | 2020-04-21 | Facebook, Inc. | Initiating real-time bidding based on expected revenue from bids |
US9866648B2 (en) * | 2013-05-10 | 2018-01-09 | Laurent Bortolamiol | Automatic transmission of user profile information to a web server |
US10187674B2 (en) * | 2013-06-12 | 2019-01-22 | Netflix, Inc. | Targeted promotion of original titles |
US10349140B2 (en) * | 2013-11-18 | 2019-07-09 | Tagboard, Inc. | Systems and methods for creating and navigating broadcast-ready social content items in a live produced video |
US9544655B2 (en) | 2013-12-13 | 2017-01-10 | Nant Holdings Ip, Llc | Visual hash tags via trending recognition activities, systems and methods |
US9471671B1 (en) | 2013-12-18 | 2016-10-18 | Google Inc. | Identifying and/or recommending relevant media content |
US10002127B2 (en) * | 2014-01-17 | 2018-06-19 | Intel Corporation | Connecting people based on content and relational distance |
US10929858B1 (en) * | 2014-03-14 | 2021-02-23 | Walmart Apollo, Llc | Systems and methods for managing customer data |
US9578116B1 (en) | 2014-08-08 | 2017-02-21 | Cox Communications | Representing video client in social media |
US20160055546A1 (en) * | 2014-08-21 | 2016-02-25 | Oracle International Corporation | Managing progressive statistical ids |
US10108672B2 (en) * | 2014-10-03 | 2018-10-23 | Netscout Systems Texas, Llc | Stream-based object storage solution for real-time applications |
US9277257B1 (en) * | 2014-11-03 | 2016-03-01 | Cox Communications, Inc. | Automatic video service actions based on social networking affinity relationships |
US10051069B2 (en) | 2014-11-26 | 2018-08-14 | International Business Machines Corporation | Action based trust modeling |
US10860669B2 (en) * | 2015-06-05 | 2020-12-08 | Nippon Telegraph And Telephone Corporation | User estimation apparatus, user estimation method, and user estimation program |
US9924222B2 (en) * | 2016-02-29 | 2018-03-20 | Gracenote, Inc. | Media channel identification with multi-match detection and disambiguation based on location |
US20170257678A1 (en) * | 2016-03-01 | 2017-09-07 | Comcast Cable Communications, Llc | Determining Advertisement Locations Based on Customer Interaction |
US11228817B2 (en) * | 2016-03-01 | 2022-01-18 | Comcast Cable Communications, Llc | Crowd-sourced program boundaries |
US11107150B2 (en) * | 2016-07-10 | 2021-08-31 | Beachy Co. | Ad Hoc item Geo temporal location and allocation apparatuses, methods and systems |
US10158897B2 (en) * | 2017-03-28 | 2018-12-18 | International Business Machines Corporation | Location-based event affinity detangling for rolling broadcasts |
US10386923B2 (en) * | 2017-05-08 | 2019-08-20 | International Business Machines Corporation | Authenticating users and improving virtual reality experiences via ocular scans and pupillometry |
CN109547859B (en) * | 2017-09-21 | 2021-12-07 | 腾讯科技(深圳)有限公司 | Video clip determination method and device |
CN109919677A (en) * | 2019-03-06 | 2019-06-21 | 厦门清谷信息技术有限公司 | The method, apparatus and intelligent terminal of advertising strategy Optimized Iterative |
US12045818B2 (en) | 2022-02-02 | 2024-07-23 | Capital One Services, Llc | Identity verification using a virtual credential |
Citations (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20010029525A1 (en) * | 2000-01-28 | 2001-10-11 | Lahr Nils B. | Method of utilizing a single uniform resource locator for resources with multiple formats |
US20020059094A1 (en) | 2000-04-21 | 2002-05-16 | Hosea Devin F. | Method and system for profiling iTV users and for providing selective content delivery |
US20020104083A1 (en) * | 1992-12-09 | 2002-08-01 | Hendricks John S. | Internally targeted advertisements using television delivery systems |
US20030066077A1 (en) * | 2001-10-03 | 2003-04-03 | Koninklijke Philips Electronics N.V. | Method and system for viewing multiple programs in the same time slot |
US20050076230A1 (en) * | 2003-10-02 | 2005-04-07 | George Redenbaugh | Fraud tracking cookie |
US7064796B2 (en) * | 2001-12-21 | 2006-06-20 | Eloda Inc. | Method and system for re-identifying broadcast segments using statistical profiles |
US20060212900A1 (en) * | 1998-06-12 | 2006-09-21 | Metabyte Networks, Inc. | Method and apparatus for delivery of targeted video programming |
US20070186254A1 (en) * | 2006-02-06 | 2007-08-09 | Kabushiki Kaisha Toshiba | Video distribution system and method of managing receiving terminal of video distribution service |
US20080028036A1 (en) | 2006-07-31 | 2008-01-31 | Microsoft Corporation | Adaptive dissemination of personalized and contextually relevant information |
US20080147482A1 (en) * | 2006-10-27 | 2008-06-19 | Ripl Corp. | Advertisement selection and propagation of advertisements within a social network |
US20090070219A1 (en) * | 2007-08-20 | 2009-03-12 | D Angelo Adam | Targeting advertisements in a social network |
US20090144385A1 (en) * | 2008-03-03 | 2009-06-04 | Harry Gold | Sequential Message Transmission System |
US20090164897A1 (en) | 2007-12-20 | 2009-06-25 | Yahoo! Inc. | Recommendation System Using Social Behavior Analysis and Vocabulary Taxonomies |
US20100023399A1 (en) * | 2008-07-22 | 2010-01-28 | Saurabh Sahni | Personalized Advertising Using Lifestreaming Data |
US20100274815A1 (en) * | 2007-01-30 | 2010-10-28 | Jonathan Brian Vanasco | System and method for indexing, correlating, managing, referencing and syndicating identities and relationships across systems |
US20100333127A1 (en) * | 2009-06-30 | 2010-12-30 | At&T Intellectual Property I, L.P. | Shared Multimedia Experience Including User Input |
US20110040760A1 (en) * | 2009-07-16 | 2011-02-17 | Bluefin Lab, Inc. | Estimating Social Interest in Time-based Media |
US20110055017A1 (en) | 2009-09-01 | 2011-03-03 | Amiad Solomon | System and method for semantic based advertising on social networking platforms |
US8019875B1 (en) * | 2004-06-04 | 2011-09-13 | Google Inc. | Systems and methods for indicating a user state in a social network |
US20120110071A1 (en) | 2010-10-29 | 2012-05-03 | Ding Zhou | Inferring user profile attributes from social information |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7302404B2 (en) | 2000-02-14 | 2007-11-27 | Auctionkiller | Method and apparatus for a network system designed to actively match buyers and sellers in a buyer-driven environment |
US9117219B2 (en) | 2007-12-31 | 2015-08-25 | Peer 39 Inc. | Method and a system for selecting advertising spots |
US20110029505A1 (en) * | 2009-07-31 | 2011-02-03 | Scholz Martin B | Method and system for characterizing web content |
-
2012
- 2012-05-09 US US13/467,281 patent/US8566866B1/en active Active
-
2013
- 2013-05-22 US US13/900,206 patent/US8819728B2/en active Active
- 2013-08-26 US US13/975,551 patent/US9154853B1/en not_active Expired - Fee Related
-
2015
- 2015-10-02 US US14/873,687 patent/US9471936B2/en active Active
Patent Citations (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020104083A1 (en) * | 1992-12-09 | 2002-08-01 | Hendricks John S. | Internally targeted advertisements using television delivery systems |
US20060212900A1 (en) * | 1998-06-12 | 2006-09-21 | Metabyte Networks, Inc. | Method and apparatus for delivery of targeted video programming |
US20010029525A1 (en) * | 2000-01-28 | 2001-10-11 | Lahr Nils B. | Method of utilizing a single uniform resource locator for resources with multiple formats |
US20020059094A1 (en) | 2000-04-21 | 2002-05-16 | Hosea Devin F. | Method and system for profiling iTV users and for providing selective content delivery |
US20030066077A1 (en) * | 2001-10-03 | 2003-04-03 | Koninklijke Philips Electronics N.V. | Method and system for viewing multiple programs in the same time slot |
US7064796B2 (en) * | 2001-12-21 | 2006-06-20 | Eloda Inc. | Method and system for re-identifying broadcast segments using statistical profiles |
US20050076230A1 (en) * | 2003-10-02 | 2005-04-07 | George Redenbaugh | Fraud tracking cookie |
US8019875B1 (en) * | 2004-06-04 | 2011-09-13 | Google Inc. | Systems and methods for indicating a user state in a social network |
US20070186254A1 (en) * | 2006-02-06 | 2007-08-09 | Kabushiki Kaisha Toshiba | Video distribution system and method of managing receiving terminal of video distribution service |
US20080028036A1 (en) | 2006-07-31 | 2008-01-31 | Microsoft Corporation | Adaptive dissemination of personalized and contextually relevant information |
US20080147482A1 (en) * | 2006-10-27 | 2008-06-19 | Ripl Corp. | Advertisement selection and propagation of advertisements within a social network |
US20100274815A1 (en) * | 2007-01-30 | 2010-10-28 | Jonathan Brian Vanasco | System and method for indexing, correlating, managing, referencing and syndicating identities and relationships across systems |
US20090070219A1 (en) * | 2007-08-20 | 2009-03-12 | D Angelo Adam | Targeting advertisements in a social network |
US20090164897A1 (en) | 2007-12-20 | 2009-06-25 | Yahoo! Inc. | Recommendation System Using Social Behavior Analysis and Vocabulary Taxonomies |
US20090144385A1 (en) * | 2008-03-03 | 2009-06-04 | Harry Gold | Sequential Message Transmission System |
US20100023399A1 (en) * | 2008-07-22 | 2010-01-28 | Saurabh Sahni | Personalized Advertising Using Lifestreaming Data |
US20100333127A1 (en) * | 2009-06-30 | 2010-12-30 | At&T Intellectual Property I, L.P. | Shared Multimedia Experience Including User Input |
US20110040760A1 (en) * | 2009-07-16 | 2011-02-17 | Bluefin Lab, Inc. | Estimating Social Interest in Time-based Media |
US20110041080A1 (en) | 2009-07-16 | 2011-02-17 | Bluefin Lab, Inc. | Displaying Estimated Social Interest in Time-based Media |
US20110055017A1 (en) | 2009-09-01 | 2011-03-03 | Amiad Solomon | System and method for semantic based advertising on social networking platforms |
US20120110071A1 (en) | 2010-10-29 | 2012-05-03 | Ding Zhou | Inferring user profile attributes from social information |
Non-Patent Citations (8)
Title |
---|
Bouthemy, P., et al., A unified approach to shot change detection and camera motion characterization, IEEE Trans. on Circuits and Systems for Video Technology, 9(7) (Oct. 1999). |
Hauptmann, A. and Witbrock, M., Story Segmentation and Detection of Commercials in Broadcast News Video, ADL-98 Advances in Digital Libraries Conference, Santa Barbara, CA (Apr. 1998), 12 pages. |
Jacobs, A., et al., Automatic shot boundary detection combining color, edge, and motion features of adjacent frames, Center for Computing Technologies, Bremen, Germany (2004). |
Tardini et al., Shot Detection and Motion Analysis for Automatic MPEG-7 Annotation of Sports Videos, 13th International Conference on Image Analysis and Processing (Nov. 2005). |
U.S. Office Action, U.S. Appl. No. 13/467,281, Mar. 11, 2013, 38 pages. |
U.S. Office Action, U.S. Appl. No. 13/900,206, Jul. 31, 2013, 21 pages. |
U.S. Office Action, U.S. Appl. No. 13/900,206, Sep. 27, 2013, 17 pages. |
Witten, I. and Frank, E., Data Mining: Practical machine learning tools and techniques (2nd Edition), Morgan Kaufmann, San Francisco, CA (Jun. 2005). |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10733667B1 (en) | 2016-12-29 | 2020-08-04 | Wells Fargo Bank, N.A. | Online social media network analyzer |
US11375804B1 (en) | 2016-12-29 | 2022-07-05 | Wells Fargo Bank, N.A. | Online social media network analyzer |
Also Published As
Publication number | Publication date |
---|---|
US9471936B2 (en) | 2016-10-18 |
US20130305282A1 (en) | 2013-11-14 |
US20160027065A1 (en) | 2016-01-28 |
US20130305280A1 (en) | 2013-11-14 |
US8819728B2 (en) | 2014-08-26 |
US8566866B1 (en) | 2013-10-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9471936B2 (en) | Web identity to social media identity correlation | |
US11381856B2 (en) | Social networking system targeted message synchronization | |
US11048752B2 (en) | Estimating social interest in time-based media | |
US11301505B2 (en) | Topic and time based media affinity estimation | |
US9432721B2 (en) | Cross media targeted message synchronization |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: BLUEFIN LABS, INC., MASSACHUSETTS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:FLEISCHMAN, MICHAEL BEN;REEL/FRAME:034669/0001 Effective date: 20120509 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |
|
AS | Assignment |
Owner name: MORGAN STANLEY SENIOR FUNDING, INC., MARYLAND Free format text: SECURITY INTEREST;ASSIGNOR:TWITTER, INC.;REEL/FRAME:062079/0677 Effective date: 20221027 Owner name: MORGAN STANLEY SENIOR FUNDING, INC., MARYLAND Free format text: SECURITY INTEREST;ASSIGNOR:TWITTER, INC.;REEL/FRAME:061804/0086 Effective date: 20221027 Owner name: MORGAN STANLEY SENIOR FUNDING, INC., MARYLAND Free format text: SECURITY INTEREST;ASSIGNOR:TWITTER, INC.;REEL/FRAME:061804/0001 Effective date: 20221027 |
|
FEPP | Fee payment procedure |
Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
LAPS | Lapse for failure to pay maintenance fees |
Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCH | Information on status: patent discontinuation |
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
|
FP | Lapsed due to failure to pay maintenance fee |
Effective date: 20231006 |