US20130145388A1 - System and method for applying a database to video multimedia - Google Patents


Info

Publication number
US20130145388A1
Authority
US
United States
Prior art keywords
category
user
advertisement
video
video content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US13/750,746
Other versions
US9338520B2
Inventor
David Girouard
Bradley Horowitz
Richard Humphrey
Charles Fuller
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vl Collective Ip LLC
Vl Ip Holdings LLC
Original Assignee
Virage Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US 13/750,746
Application filed by Virage Inc
Publication of US20130145388A1
Assigned to VIRAGE, INC. Assignment of assignors' interest (see document for details). Assignors: HOROWITZ, BRADLEY; GIROUARD, DAVID; FULLER, CHARLES; HUMPHREY, RICHARD
Assigned to HEWLETT-PACKARD COMPANY. Merger (see document for details). Assignor: VIRAGE, INC.
Assigned to HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P. Assignment of assignors' interest (see document for details). Assignor: HEWLETT-PACKARD COMPANY
Assigned to HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP. Assignment of assignors' interest (see document for details). Assignor: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.
Publication of US9338520B2
Application granted
Assigned to VIDEOLABS, INC. Assignment of assignors' interest (see document for details). Assignors: HEWLETT PACKARD ENTERPRISE COMPANY; HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Assigned to VL COLLECTIVE IP LLC. Assignment of assignors' interest (see document for details). Assignor: VL IP HOLDINGS LLC
Assigned to VL IP HOLDINGS LLC. Assignment of assignors' interest (see document for details). Assignor: VIDEOLABS, INC.
Security interest granted to PRAETOR FUND I, A SUB-FUND OF PRAETORIUM FUND I ICAV (see document for details). Assignor: VL COLLECTIVE IP LLC
Adjusted expiration
Release by secured party PRAETOR FUND I, A SUB-FUND OF PRAETORIUM FUND I ICAV (see document for details). Assigned to VL COLLECTIVE IP LLC
Legal status: Expired - Fee Related

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/812Monomedia components thereof involving advertisement data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241Advertisements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241Advertisements
    • G06Q30/0251Targeted advertisements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241Advertisements
    • G06Q30/0251Targeted advertisements
    • G06Q30/0255Targeted advertisements based on user history
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241Advertisements
    • G06Q30/0251Targeted advertisements
    • G06Q30/0269Targeted advertisements based on user profile or attribute
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/4508Management of client data or end-user data
    • H04N21/4532Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/47815Electronic shopping
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/84Generation or processing of descriptive data, e.g. content descriptors

Definitions

  • the present invention generally relates to the field of applying a database to video multimedia. More particularly, the invention relates to techniques for applying a database for accessing and processing digital video on a network.
  • Video logging is a process that incorporates both automated indexing and manual annotation facilities to create a rich, fine-grained (in a temporal sense) index into a body of video content.
  • the index typically consists of a combination of visual and textual indices that permit time-based searching of video content.
  • the index may incorporate spoken text, speaker identifications, facial identifications, on-screen text, and additional annotations, keywords, and descriptions that may be applied by a human user executing the video logging application.
  • the Virage VideoLogger® is one example of this type of video logging technology that is commercially available.
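  • As an illustration of the kind of time-based index described above, the following minimal Java sketch (all class and field names are invented here, not taken from the patent or from the VideoLogger product) stores metadata entries that are each valid over a span of time-codes and resolves a text query to the spans in which it occurs:

```java
// Minimal sketch of a time-coded video index; names are illustrative only.
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

public class VideoIndexSketch {

    // One index entry: a piece of metadata valid over a span of time-codes.
    record IndexEntry(String metadataType, double startSec, double endSec, String value) {}

    private final List<IndexEntry> entries = new ArrayList<>();

    void add(String metadataType, double startSec, double endSec, String value) {
        entries.add(new IndexEntry(metadataType, startSec, endSec, value));
    }

    // Time-based search: return the spans whose metadata mentions the query term.
    List<IndexEntry> search(String query) {
        String q = query.toLowerCase(Locale.ROOT);
        return entries.stream()
                .filter(e -> e.value().toLowerCase(Locale.ROOT).contains(q))
                .toList();
    }

    public static void main(String[] args) {
        VideoIndexSketch index = new VideoIndexSketch();
        index.add("spoken-text", 12.0, 18.5, "the team scored in the final period");
        index.add("speaker-id", 12.0, 45.0, "Commentator A");
        index.add("on-screen-text", 30.0, 33.0, "Final Score 3-2");

        // A query such as "score" resolves to the time spans where it occurs,
        // which is what permits non-linear, time-based access into the video.
        index.search("score").forEach(e ->
                System.out.printf("%s [%.1f-%.1f]: %s%n",
                        e.metadataType(), e.startSec(), e.endSec(), e.value()));
    }
}
```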
  • the delivery of streaming media on the Internet typically involves the encoding of video content into one or more streaming video formats and efficient delivery of that content for display to the end users.
  • Common streaming formats presently in use include RealVideo, Microsoft Windows Media, QuickTime, and MPEG.
  • the video logging technology may coordinate the encoding of one or more of these formats while the video is being indexed to ensure that the video index is time-synchronized with the encoded content.
  • the final delivery of streaming media content for display to an end user is typically accomplished with a wide variety of video serving mechanisms and infrastructure.
  • These mechanisms may include basic video servers (such as those from Real, Microsoft, or Apple), caching appliances (such as those from CacheFlow, Network Appliance, Inktomi, or Cisco), and content distribution networks (herein “CDN's”, such as those from Akamai, Digital Island, iBeam, or Adero). These types of video serving mechanisms ultimately deliver media content for display to the end user.
  • E-commerce (advertising and electronic commerce)
  • E-commerce-based websites exploiting video share the common goal of using rich, interactive media content (such as video) to sell products and services more effectively. Compelling video content can be used to create web experiences that communicate value and relevance to the (potential) customer more efficiently. Highly-targeted advertising and e-commerce are made possible by associating demographic and product/service information with video content. Consumers are more likely to respond to targeted offerings than to random offerings, making the website more productive.
  • Video content is indexed and encoded using applications such as, for example, the VideoLogger available from Virage.
  • the index provides a rich, fine-grained search mechanism to access the video in a non-linear fashion. This turns interactive video into a useful and attractive feature on a website.
  • auto-categorization technology allows the system to automatically identify category designations of the content during the indexing phase, where the categories are useful in the process of selecting relevant ads and commerce options to be presented to the user.
  • the index is structured to also provide higher level topic and category information.
  • a video search and retrieval application gives website visitors the ability to search media content to find segments that are of interest. Utilizing these search and retrieval capabilities and a repository of engaging content, various mechanisms can be added.
  • a method of applying a database to video multimedia comprising indexing video content; storing the indexed video content in an index database, the indexed video content comprising metadata; encoding the video content concurrent with the indexing of the video content, wherein the index database does not contain the encoded video content; and storing in the index database at least one tag correlated with the video content on a time-code basis, wherein the tag is valid for a certain span of time within the video, and wherein the tag is configured to be associated with an advertisement or ecommerce opportunity, wherein the method is carried out in a computing environment.
  • the method may additionally comprise making associations between the video content and at least one of ad banners, product offerings, and service offerings so that such items are associated with the tags.
  • the method may additionally comprise collecting a user profile describing the content that is most of interest to the user.
  • the method may additionally comprise learning the user profile by monitoring usage patterns of the user.
  • the user profile may be combined with the tags so as to make targeted associations between at least one of ads, products, services, and a person viewing the video content.
  • the method may additionally comprise storing a plurality of indices that result from the indexing in the index database, wherein each stored index may be associated with one of a plurality of different metadata types and at least a portion of the stored indices are associated with different ones of the metadata types.
  • the method may additionally comprise algorithmically selecting a metadata element from a plurality of metadata elements in the user profile, wherein the algorithmic selecting utilizes one of cyclic, least-recently used, or random selection.
  • the method may additionally comprise algorithmically selecting an advertisement or ecommerce opportunity based on the selection of the metadata element.
  • the algorithmic selecting of the advertisement or ecommerce opportunity may utilize at least one of heuristics, fuzzy logic or hidden Markov models.
  • the method may additionally comprise algorithmically selecting an advertisement or ecommerce opportunity based on selected metadata of the video content.
  • the selected advertisement or ecommerce opportunity may be configured for display concurrently with viewing of video content that is played.
  • the data corresponding with a metadata type may have a time span that is different than the data corresponding with another metadata type.
  • a non-transitory computer readable medium containing program instructions for applying a database to video multimedia, wherein execution of the program instructions by a computing environment carries out a method, comprising indexing video content; storing a plurality of indices that result from the indexing in an index database, the indices comprising metadata; encoding the video content concurrent with the indexing of the video content, wherein the index database does not contain the encoded video content; storing in the index database a plurality of tags correlated with the video content on a time-code basis via the index database; collecting, with a personalization agent, a user profile describing the content that is most of interest to the user; algorithmically selecting a single metadata type from a plurality of metadata types in the user profile; and algorithmically selecting an advertisement or ecommerce opportunity associated with the selected single metadata type.
  • the method embodied by program instructions may additionally comprise combining the user profile with the tags so as to make targeted associations between at least one of ads, products, services, and the person viewing the video content.
  • the method embodied by program instructions may additionally comprise making associations between the video content and at least one of ad banners, product offerings, and service offerings so that such items are associated with the tags.
  • a system for applying a database to video multimedia comprising a computing environment configured to index video content; a computer database accessed by the computing environment, the computer database storing a plurality of indices that result from indexing the video content, the indices comprising metadata, wherein each stored index is associated with one of a plurality of different metadata types and at least a portion of the stored indices are associated with different ones of the metadata types; the computing environment further configured to encode the video content concurrent with the indexing of the video content, wherein the database does not contain the encoded video content; store in the database a plurality of tags correlated with the video content on a time-code basis via the computer database; make associations between the video content and at least one of ad banners, product offerings, and service offerings so that such items are synchronized via the tags; and algorithmically select an advertisement or ecommerce opportunity based on metadata of the video content.
  • At least one of the tags may be valid for a certain span of time within the video.
  • the computing environment may be further configured to collect a user profile describing the content that is most of interest to the user.
  • the user profile may be combined with the tags so as to make targeted associations between at least one of ads, products, services, and a person viewing the video content.
  • a method of applying a database to video multimedia comprising indexing video content to generate an index; storing the index in an index database; encoding the video content concurrent with the indexing of the video content, wherein the index database does not contain the encoded video content; and associating a plurality of tags with the video content on a time-code basis via the database, wherein at least one of the tags is valid for a certain span of time within the video.
  • FIG. 1 is a diagram of an example network configuration in which certain embodiments may operate.
  • FIG. 2 is a block diagram of an example system architecture in accordance with certain embodiments.
  • FIG. 3 is a block diagram showing a high-level system view of the example video application server embodiment shown in FIG. 2 and the server's interaction with e-commerce subsystems.
  • FIG. 4 is a flowchart showing an example process of gathering and managing personalization profile information from the user to define their static personal profile such as performed on the example architecture embodiment shown in FIG. 2 .
  • FIG. 5 is a flowchart showing an example process of gathering and managing personalization profile information based on the user's viewing habits to define their dynamic personal profile such as performed on the example architecture embodiment shown in FIG. 2 .
  • FIGS. 6 a , 6 b and 6 c are flowcharts showing an example delivery and response to a targeted e-commerce offering such as performed on the example architecture embodiment shown in FIG. 2 .
  • FIGS. 7 a , 7 b and 7 c are flowcharts showing example processes for using content-based and personalization-based information to deliver a targeted advertisement such as performed on the architecture embodiment shown in FIG. 2 .
  • a network may refer to a network or combination of networks spanning any geographical area, such as a local area network, wide area network, regional network, national network, and/or global network.
  • the Internet is an example of a current global computer network.
  • Those terms may refer to hardwire networks, wireless networks, or a combination of hardwire and wireless networks.
  • Hardwire networks may include, for example, fiber optic lines, cable lines, ISDN lines, copper lines, etc.
  • Wireless networks may include, for example, cellular systems, personal communications service (PCS) systems, satellite communication systems, packet radio systems, and mobile broadband systems.
  • a cellular system may use, for example, code division multiple access (CDMA), time division multiple access (TDMA), personal digital phone (PDC), Global System Mobile (GSM), or frequency division multiple access (FDMA), among others.
  • a website may refer to one or more interrelated web page files and other files and programs on one or more web servers.
  • the files and programs are accessible over a computer network, such as the Internet, by sending a hypertext transfer protocol (HTTP) request specifying a uniform resource locator (URL) that identifies the location of one of said web page files, wherein the files and programs are owned, managed or authorized by a single business entity.
  • Such files and programs can include, for example, hypertext markup language (HTML) files, common gateway interface (CGI) files, and Java applications.
  • the web page files preferably include a home page file that corresponds to a home page of the website. The home page can serve as a gateway or access point to the remaining files and programs contained within the website.
  • all of the files and programs are located under, and accessible within, the same network domain as the home page file.
  • the files and programs can be located and accessible through several different network domains.
  • a web page or electronic page may comprise that which is presented by a standard web browser in response to an HTTP request specifying the URL by which the web page file is identified.
  • a web page can include, for example, text, images, sound, video, and animation.
  • Content, media content and streaming media content may refer to the delivery of electronic materials such as music, videos, software, books, multimedia presentations, images, and other electronic data, for example over a network to one or more users.
  • Content data will typically be in the form of computer files for video, audio, program, data and other multimedia type content as well as actual physical copies of valuable content, for example CD-ROM, DVD, VCR, audio, TV or radio broadcast signals, streaming audio and video over networks, or other forms of conveying such information.
  • the terms content, media content and streaming media content may be used interchangeably.
  • a computer or computing device may be any processor controlled device that permits access to the Internet, including terminal devices, such as personal computers, workstations, servers, clients, mini-computers, main-frame computers, laptop computers, a network of individual computers, mobile computers, palm-top computers, hand-held computers, set top boxes for a television, other types of web-enabled televisions, interactive kiosks, personal digital assistants, interactive or web-enabled wireless communications devices, mobile web browsers, or a combination thereof.
  • the computers may further possess one or more input devices such as a keyboard, mouse, touch pad, joystick, pen-input-pad, and the like.
  • the computers may also possess an output device, such as a visual display and an audio output.
  • One or more of these computing devices may form a computing environment.
  • These computers may be uni-processor or multi-processor machines. Additionally, these computers may include an addressable storage medium or computer accessible medium, such as random access memory (RAM), an electronically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), hard disks, floppy disks, laser disk players, digital video devices, compact disks, video tapes, audio tapes, magnetic recording tracks, electronic networks, and other techniques to transmit or store electronic content such as, by way of example, programs and data.
  • the computers are equipped with a network communication device such as a network interface card, a modem, or other network connection device suitable for connecting to the communication network.
  • the computers execute an appropriate operating system such as Linux, Unix, any of the versions of Microsoft Windows, Apple MacOS, IBM OS/2 or other operating system.
  • the appropriate operating system may include a communications protocol implementation that handles all incoming and outgoing message traffic passed over the Internet.
  • although the operating system may differ depending on the type of computer, it will continue to provide the appropriate communications protocols to establish communication links with the Internet.
  • the computers may contain program logic, or other substrate configuration representing data and instructions, which cause the computer to operate in a specific and predefined manner, as described herein.
  • the program logic may be implemented as one or more object frameworks or modules. These modules may be configured to reside on the addressable storage medium and configured to execute on one or more processors.
  • the modules include, but are not limited to, software or hardware components that perform certain tasks.
  • a module may include, by way of example, components, such as, software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.
  • the various components of the system may communicate with each other and other components comprising the respective computers through mechanisms such as, by way of example, interprocess communication, remote procedure call, distributed object interfaces, and other various program interfaces.
  • the functionality provided for in the components, modules, and databases may be combined into fewer components, modules, or databases or further separated into additional components, modules, or databases.
  • the components, modules, and databases may be implemented to execute on one or more computers.
  • some of the components, modules, and databases may be implemented to execute on one or more computers external to the website.
  • the website includes program logic, which enables the website to communicate with the externally implemented components, modules, and databases to perform the functions as disclosed herein.
  • Video logging applications can typically accept novel signal and linguistic analysis algorithms to further refine and extend the metadata index generated during the logging phase.
  • Examples include categorization algorithms and technology such as Webmind, Verity, Autonomy, and Semio. Extensibility, and how it is used to integrate additional technology such as categorization, is described in Applicant's copending U.S. Patent application Ser. No. 09/134,497, entitled “Video Cataloger System With Synchronized Encoders”, which is hereby incorporated by reference. Categorization technology from any of these vendors can thus be integrated into the logging phase.
  • Each of these technologies accepts an input stream of text and responds with a category designation.
  • the categories are used in the process of selecting relevant ads and commerce options to be presented to the user.
  • Most of these offerings require a training phase whereby a known body of content and corresponding categories are provided to the categorization engine, and a linguistic model is constructed. Thereafter, as new content is submitted to the engine, it can reliably generate category designations.
  • These systems are effective across multiple languages, and are relatively new and rapidly maturing. Auto-categorization of content is utilized because it offers the ability to scale the content processing up to large volumes within an automatic process. Manual solutions are also available (e.g., human editors making judgment calls on the content) but are much less scalable in a business sense.
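  • The following toy Java sketch illustrates the training/designation cycle described above; it scores simple term overlap per category, whereas commercial engines build far richer linguistic models, and every name in it is hypothetical:

```java
// Toy auto-categorization sketch: "train" on example text per category,
// then designate the best-matching category for new text.
import java.util.*;

public class CategorizerSketch {

    // category -> term frequencies learned during the training phase
    private final Map<String, Map<String, Integer>> model = new HashMap<>();

    void train(String category, String exampleText) {
        Map<String, Integer> counts = model.computeIfAbsent(category, c -> new HashMap<>());
        for (String term : exampleText.toLowerCase().split("\\W+")) {
            counts.merge(term, 1, Integer::sum);
        }
    }

    // Designate the category whose training terms overlap the input most.
    String categorize(String text) {
        String best = "unknown";
        int bestScore = 0;
        Set<String> terms = new HashSet<>(Arrays.asList(text.toLowerCase().split("\\W+")));
        for (var entry : model.entrySet()) {
            int score = 0;
            for (String t : terms) score += entry.getValue().getOrDefault(t, 0);
            if (score > bestScore) { bestScore = score; best = entry.getKey(); }
        }
        return best;
    }

    public static void main(String[] args) {
        CategorizerSketch engine = new CategorizerSketch();
        engine.train("sports", "hockey goal score team playoff sneaker highlights");
        engine.train("science", "space planet moons jupiter telescope research");
        // Prints "sports" for this transcript fragment.
        System.out.println(engine.categorize("third period goal puts the team into the playoff"));
    }
}
```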
  • a video search and retrieval (e.g., browse) application gives website visitors the ability to search media content to find segments that are of interest. Utilizing these search and retrieval capabilities and a repository of engaging content, various e-commerce mechanisms can be added on. Ad banners, product offerings, and service offerings can each be triggered to appear in a synchronized fashion with video content being viewed by the end-user. For example, a product demonstration video can be viewed with associated links and mechanisms to purchase the product. A sports video can have sneaker ads automatically interspersed. These associations are made possible by associating keyword ‘tags’ with video content on a time-code basis. The tag is ‘valid’ for a certain span of time within the video.
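  • A minimal Java sketch of this tagging mechanism follows; the class and field names are assumptions made for illustration. Each tag carries a keyword, the time span for which it is valid, and the identifier of an associated ad or commerce offering, so the offerings active at a given playback position can be looked up directly:

```java
// Sketch of keyword tags associated with video content on a time-code basis.
import java.util.List;

public class CommerceTagSketch {

    record CommerceTag(String keyword, double validFromSec, double validToSec, String offerId) {
        boolean activeAt(double positionSec) {
            return positionSec >= validFromSec && positionSec <= validToSec;
        }
    }

    public static void main(String[] args) {
        List<CommerceTag> tags = List.of(
                new CommerceTag("sneakers", 30.0, 90.0, "ad-banner-sneakers"),
                new CommerceTag("hockey", 0.0, 300.0, "offer-hockey-highlights-dvd"));

        double playbackPosition = 45.0; // seconds into the clip
        tags.stream()
            .filter(t -> t.activeAt(playbackPosition))
            .forEach(t -> System.out.println("Show " + t.offerId() + " (tag: " + t.keyword() + ")"));
    }
}
```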
  • a related mechanism for targeting advertising and e-commerce is also disclosed. Given a repository of indexed video as described above, it is also possible to create ‘personalization agents’ to exploit user information, in addition to content-based information, in the targeting process.
  • a personalization agent gathers a specification (a ‘profile’) from the user describing the content, products, and services that are most of interest to the user. Additionally, a personalization agent has the ability to ‘learn’ the personalization profile by monitoring the usage patterns of the user.
  • the personalization profile, combined with content-based tagging, can be used to make highly-targeted associations between ads, products, services, and the person viewing the content.
  • Video server and search server technologies are integrated with ad serving personalization agents to make the final presentations of content, advertising, and commerce.
  • the algorithms for making the final presentation decisions may be made using combinations of any of the following: look-up tables, keyword intersections, heuristics, fuzzy-logic, Hidden Markov Models (HMM's), and so forth.
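  • As one concrete example of the simplest of these techniques, the Java sketch below uses a keyword intersection between the clip's tags, the viewer's profile, and each candidate ad; heuristics, fuzzy logic, or HMM's could replace the scoring step, and every name in the sketch is illustrative:

```java
// Keyword-intersection sketch for the final presentation decision.
import java.util.*;

public class KeywordIntersectionSketch {

    static String pickAd(Set<String> contentKeywords, Set<String> profileKeywords,
                         Map<String, Set<String>> adKeywordsById) {
        Set<String> context = new HashSet<>(contentKeywords);
        context.addAll(profileKeywords);

        String bestAd = null;
        int bestOverlap = 0;
        for (var entry : adKeywordsById.entrySet()) {
            Set<String> overlap = new HashSet<>(entry.getValue());
            overlap.retainAll(context);                 // keyword intersection
            if (overlap.size() > bestOverlap) {
                bestOverlap = overlap.size();
                bestAd = entry.getKey();
            }
        }
        return bestAd;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> ads = Map.of(
                "sneaker-banner", Set.of("sports", "hockey", "sneakers"),
                "telescope-banner", Set.of("science", "space", "astronomy"));
        // Content tags plus profile keywords favor the sneaker banner here.
        System.out.println(pickAd(Set.of("hockey", "highlights"),
                                  Set.of("sports", "travel"), ads));
    }
}
```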
  • FIG. 1 is a diagram of an example network configuration 100 in which certain embodiments may operate.
  • An end user 102 communicates with a computing environment, which may include multiple server computers 108 or a single server computer 110 in a client/server relationship on a network communication medium 116 .
  • each of the server computers 108 , 110 may include a server program that communicates with a user device 115 , which may be a personal computer (PC), a hand-held electronic device, a mobile or cellular phone, a TV set or any number of other electronic devices.
  • the server computers 108 , 110 , and the user device 115 may each have any conventional general purpose single- or multi-chip microprocessor, for example a Pentium processor, a Pentium Pro processor, a MIPS processor, a Power PC processor, an ALPHA processor, or other processor.
  • the microprocessor may be any conventional special purpose microprocessor such as a digital signal processor or a graphics processor.
  • the server computers 108 , 110 and the user device 115 may be desktop, server, portable, hand-held, set-top, or other desired type of computing device.
  • server computers 108 , 110 and the user device 115 each may be used in connection with various operating systems, including, for example, UNIX, LINUX, Disk Operating System (DOS), VxWorks, PalmOS, OS/2, any version of Microsoft Windows, or other operating system.
  • the server computers 108 , 110 and the user device 115 may each include a network terminal equipped with a video display, keyboard and pointing device.
  • the user device 115 includes a network browser 120 used to access the server computers 108 , 110 .
  • the network browser 120 may be, for example, Microsoft Internet Explorer or Netscape Navigator.
  • the user 102 at the user device 115 may utilize the browser 120 to remotely access the server program using a keyboard and/or pointing device and a visual display, such as a monitor 118 .
  • FIG. 1 shows only one user device 115 , the network configuration 100 may include any number of client devices.
  • the network 116 may be any type of electronic transmission medium, for example, including but not limited to the following networks: a virtual private network, a public Internet, a private Internet, a secure Internet, a private network, a public network, a value-added network, an intranet, or a wireless gateway.
  • a virtual private network refers to a secure and encrypted communications link between nodes on the Internet, a Wide Area Network (WAN), Intranet, or any other network transmission means.
  • the connectivity to the network 116 may be via, for example, a modem, Ethernet (IEEE 802.3), Token Ring (IEEE 802.5), Fiber Distributed Datalink Interface (FDDI), Asynchronous Transfer Mode (ATM), Wireless Application Protocol (WAP), or other form of network connectivity.
  • the user device 115 may connect to the network 116 by use of a modem or by use of a network interface card that resides in the user device 115 .
  • the server computers 108 may be connected via a wide area network 106 to a network gateway 104 , which provides access to the wide area network 106 via a high-speed, dedicated data circuit.
  • devices other than the hardware configurations described above may be used to communicate with the server computers 108 , 110 .
  • the server computers 108 , 110 are equipped with voice recognition or Dual Tone Multi-Frequency (DTMF) hardware
  • the user 102 may communicate with the server computers by use of a telephone 124 .
  • the telephone may optionally be equipped with a browser 120 and display screen.
  • Other examples of connection devices for communicating with the server computers 108 , 110 include a portable personal computer (PC) 126 or a personal digital assistant (PDA) device with a modem or wireless connection interface, a cable interface device 128 connected to a visual display 130 , or a satellite dish 132 connected to a satellite receiver 134 and a television 136 .
  • Still other methods of allowing communication between the user 102 and the server computers 108 , 110 are additionally contemplated by this application.
  • server computers 108 , 110 and the user device 115 may be located in different rooms, buildings or complexes. Moreover, the server computers 108 , 110 and the user device 115 could be located in different geographical locations, for example in different cities, states or countries. This geographic flexibility which networked communications allows is additionally within the contemplation of this application.
  • FIG. 2 is a block diagram of an example system architecture 200 in accordance with certain embodiments.
  • the system architecture 200 includes a commerce website facility 210 , which further includes a video encoding module 214 and a video logging module 216 , both of which receive media content 212 , in one embodiment.
  • the commerce website facility 210 further includes a video editorial module 218 , which communicates with the video logging module 216 .
  • the commerce website facility 210 further includes a video application server 220 , which communicates with the video editorial module 218 .
  • the commerce website facility 210 further includes a web server 222 , which communicates with the video application server 220 .
  • the commerce website facility 210 further includes a video index 224 , which is produced by the video logging module 216 and the video editorial module 218 , and is maintained by the video application server 220 .
  • the commerce website facility 210 further includes a server administration (“Admin”) module 228 , which communicates with the web server module 222 .
  • the commerce website facility 210 further includes a commerce module 250 and a personalization module 260 , both of which communicate with the video application server 220 and the web server 222 .
  • the commerce module 250 and the personalization module 260 are described in greater detail below with regards to certain embodiments of FIG. 2 , and additionally in reference to FIGS. 4 through 7 .
  • the system architecture 200 further includes the network 116 shown in FIG. 1 , which may be the Internet.
  • Web pages 232 and search forms 234 are accessible via the Internet 116 .
  • Each web page 232 may depict a plurality of pages rendered by various web servers.
  • the search form 234 is also accessible by the commerce website facility web server 222 .
  • results data 238 is also accessible via the Internet 116 .
  • a video player 236 is also accessible via the Internet 116 , which communicates with the web server 222 .
  • the system architecture 200 further includes a content distribution network 240 , which transfers encoded video to the video player 236 .
  • the content distribution network 240 further receives uploaded digital video files from the video encoding module 214 .
  • the content distribution network 240 may be part of a wide variety of video serving mechanisms and infrastructures that serve to deliver encoded media content 242 for display to the end user 102 shown in FIG. 1 .
  • the content distribution network 240 may include a content owner running a simple video server at the content owner facility 220 , a complex edge caching content distribution mechanism, or other mechanisms to transmit video and other media content for display to end users 102 .
  • a commerce website may be hosted internally on the commerce web server 222 as shown in FIG. 2 , or alternatively outsourced to a web-hosting service provider, which delivers commerce features as described herein to end users 102 .
  • the operation of the video encoding module 214 , video logging module 216 , video editorial module 218 , video application server module 220 , video index 224 , administration module 228 , and web server 222 are described with respect to embodiments disclosed in the related application titled “Interactive Video Application Hosting” (U.S. application Ser. No. 09/827,772), which was incorporated by reference above. To the extent that these modules may operate differently in certain embodiments than in the related application, any such differences will be described herein.
  • the video application server module 220 manages the video index containing metadata and annotations produced by the video logging module 216 .
  • the application server 220 receives video and metadata after the video logging 216 and video editorial 218 modules, and transfers video search form 234 queries and results 238 data to the web server 222 for display to an end user 102 ( FIG. 1 ) in a web browser 120 at the user device 115 via the Internet 116 .
  • the communication of the search form 234 queries and results 238 data to the web server 222 includes an exchange of extensible markup language (XML) data, although one skilled in the technology will understand that other data exchange formats may also be utilized.
  • Final HTML rendering of search forms 234 , results 238 presentation, and video player 236 playback windows may be accomplished via templates, whereby such templates dictate the graphical look-and-feel of the final media presentation.
  • Actual metadata results, communicated via XML or other data exchange formats, may be rendered into the template by substituting special keywords with results from the video application server 220 to form an HTML-compliant presentation.
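  • The following Java sketch illustrates such keyword substitution under an invented placeholder syntax (the actual template mechanism is not specified here); metadata values returned by the application server are substituted into an HTML template to form the final presentation:

```java
// Sketch of rendering metadata results into an HTML template by keyword
// substitution. The %%KEYWORD%% placeholder syntax is hypothetical.
import java.util.Map;

public class TemplateRenderSketch {

    static String render(String template, Map<String, String> metadata) {
        String html = template;
        for (var e : metadata.entrySet()) {
            html = html.replace("%%" + e.getKey() + "%%", e.getValue());
        }
        return html;
    }

    public static void main(String[] args) {
        String template = "<html><body><h1>%%TITLE%%</h1>"
                        + "<p>Duration: %%DURATION%%</p></body></html>";
        System.out.println(render(template,
                Map.of("TITLE", "Hockey Highlights", "DURATION", "02:15")));
    }
}
```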
  • Additional communications may be provided with the administration module 228 for server administration, metadata editing, and batch processing. Batch processing may be accomplished for insertion processes, deletion or ‘aging’ processes, metadata editing, or for automated performance of other tasks as well.
  • the administration module 228 further allows system administrators to manage the video application server 220 , including, for example, index management, asset management, editing, and startup and shutdown control.
  • the content 212 is processed by the video logging module 216 to extract index data, for example keyframes, closed-caption text, speaker identifications, facial identifications, or other index data.
  • the content 212 may additionally undergo processing by the video editorial module 218 , whereby humans may elect to add labels to the index of the content 212 by providing additional annotations, descriptions, keywords, or any other marking information such as commerce tags.
  • the index and annotation information is transferred to the video application server 220 , which hosts publishing, search, retrieval, browse, or other related video services.
  • the video application server 220 may maintain the metadata in the video index 224 .
  • the video application server 220 provides the above-described video services to the web server 222 for incorporation into the web pages 232 via the template mechanism described above.
  • the video application server 220 includes the server itself that processes XML-based queries and data management activities, performs searches against the video index, and returns video references and metadata via XML or other data exchange formats.
  • Other modules of the video application server 220 include the search or browse rendering interface which processes HTML requests into XML, and additionally processes XML responses back into HTML for delivery by the web server 222 using templates to format and render the XML data into HTML.
  • the video application server's 220 XML-based open architecture allows for simple integration of additional features and functions, such as, for example, an e-commerce engine as shown in FIG. 3 .
  • Such functions may be implemented in various commonly used programming languages, for example Perl, C, C++, Java, or other programming languages, and may utilize publicly or commercially available packages for parsing and formatting XML or other data exchange formats.
  • FIG. 3 is a block diagram showing a high-level view 300 of the video application server (VAS) and its interaction with e-commerce, targeted advertising, and personalization subsystems.
  • a video application server architecture includes the server that processes XML-based queries and data management activities, performs searches against the video index, and returns video references and metadata via XML.
  • One such architecture is described in U.S. application Ser. No. 09/827,772, filed Apr. 6, 2001 and titled “Interactive Video Application Hosting” and which was incorporated by reference above.
  • modules of the application server include a Search/Browse rendering interface which processes HTML requests into XML, and also processes XML responses back into HTML for delivery by the Web server using templates to format and render the XML data into HTML; and, the Administration module that allows system administrators to manage the application server (index management, asset management, editing, start-up/shut-down, etc.).
  • the video application server's open, XML-based architecture readily allows the integration of additional features and functions, ranging from syndication engines and commerce-building mechanisms to the e-commerce, targeted advertising, and personalization modules contemplated here. Any such modules can be implemented in any of several commonly used languages (Perl, C, C++, Java, etc.), and can utilize publicly and commercially available packages of subroutines for parsing and formatting XML.
  • the Personalization Server in FIG. 3 interacts with the Personalization Interface through any of a number of communication mechanisms, including HTML, XML, and proprietary protocols specific to the Personalization Server employed.
  • the main task of the Personalization Interface is to mediate between the protocol and semantic vocabulary of the chosen Personalization Server and the Video Application Server's XML interface.
  • the VAS serves as a persistent store of state information about individual users to maintain profiles on behalf of the Personalization Server.
  • Personalization features are rendered in HTML for the end user, which allows the user to select categories, topics, and preferences to help define their individual profile.
  • Some Personalization Servers will also allow for monitoring of an individual's activity and behavior to more accurately characterize the preferences of the end user. This is referred to as ‘learning behavior’ and allows the personal profile to grow and change over time.
  • the system can accommodate a range of capabilities within the Personalization Server, and can supply context and monitoring information about the user in question.
  • user-specific profiles and behavior information can be stored within a “cookie” on the user's own computer, set-top box, etc., thus ensuring privacy.
  • the value of personalization technology relevant to the system is in its ability to direct and select the presentation of e-commerce and advertising opportunities for the end user.
  • the Personalization Interface module is the connective mechanism between the detailed information about the user, the detailed information about the content (based on the automatic indexing), and the range of available commerce and advertising opportunities that could be presented to the user at any given point in time, based on the content being viewed and the user in question.
  • the e-commerce engine embodiment shown in FIG. 3 represents any of a number of commercially available engines for processing e-commerce transactions by interfacing with standard transaction infrastructure, represented by the Transaction System module.
  • the Transaction System in reality represents the diverse processing subsystems, typified by offerings from SAP and others, that are present in many commercial enterprises.
  • the Transaction System interfaces with databases, order processing subsystems, shipping, inventory, billing, and customer services systems.
  • the e-commerce interface is responsible for mediating between the various information sources (personalization and video content via the Video Application Server) that determine which e-commerce opportunity should be presented to the end user at any given point in time, based on the video content and the preferences of the user.
  • the e-commerce opportunity is presented to the user in an HTML framework, and should the user select a commercial transaction, control is passed to the e-commerce engine from the e-commerce interface.
  • the ad server mechanism embodiment shown in FIG. 3 represents any of a number of commercial ad server vendors, most of which offer facilities for requesting a topic-specific ad in response to a request that contains category information.
  • the primary task of the ad interface is to mediate between the content-specific and personal profile-specific information (via XML) and the protocol of the ad server.
  • the result of a request is a targeted ad (in the form of a banner, video clip, etc.) that is presented to the user via HTML in context with the video clip being served by the video application server.
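  • A hedged Java sketch of this mediation step appears below; the ad-server URL, parameter names, and banner markup are hypothetical, and the point is only to show category information being folded into an ad request whose result is rendered in HTML alongside the clip:

```java
// Sketch of an ad interface: build a category-specific ad request and
// render the returned ad as a clickable HTML banner. URLs are invented.
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class AdInterfaceSketch {

    // Prefer the content-derived category; fall back to the profile category.
    static String buildAdRequest(String contentCategory, String profileCategory) {
        String category = contentCategory != null ? contentCategory : profileCategory;
        return "https://adserver.example.com/serve?category="
                + URLEncoder.encode(category, StandardCharsets.UTF_8);
    }

    static String renderBanner(String adImageUrl, String clickThroughUrl) {
        return "<a href=\"" + clickThroughUrl + "\"><img src=\"" + adImageUrl + "\"/></a>";
    }

    public static void main(String[] args) {
        System.out.println(buildAdRequest("science", "sports"));
        System.out.println(renderBanner("https://adserver.example.com/banner/space.png",
                                        "https://www.space.com"));
    }
}
```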
  • FIG. 4 is a flowchart showing a process 400 of gathering and managing personalization profile information from the user to define their static personal profile.
  • the exact mechanism and profile information gathered in this process 400 is dependent on the personalization server employed in the system; the process 400 depicted in FIG. 4 is merely offered as an example of the types of profile information that can be gathered and the manner in which a system might interact with a user to gather such information.
  • the process 400 begins when the user elects to define or modify their personal profile. Typically, the user may select categories, topics within those categories, and arbitrary keywords to define their static profile.
  • This profile information is actively defined by the user, and is stored on their behalf, being relatively static in the sense that it does not dynamically update based on their viewing habits.
  • Categories are typically selected from a pre-defined list of available categories, and might include things like ‘politics’, ‘sports’, ‘science’, etc. Topics are a further refinement within a category, and might include things like ‘presidential elections’, ‘hockey highlights’, or ‘the moons of Jupiter’. Topic selection first begins by selecting a category within which specific topics are selected from pre-defined lists. Keywords, unlike categories and topics, are specified with a free-form entry, and are not pre-defined. Keywords are typically any word or set of words that the user deems of interest to them that might appear in the transcript of the video. Examples of keywords include proper nouns (persons, places, locations, or organizations) and other nouns that carry information important to the user.
  • a given personalization server might employ any or all of these methods of defining a personal profile. Additionally, some personalization servers may also allow the specification of a weighting mechanism to identify the importance of each selection. For example, ‘science’ may be more important to the user than ‘politics’, and the user will be offered the ability to indicate this distinction through an importance rating (High, Medium, Low) or a numerical weighting value.
  • the profile information is stored on the user's behalf using a standard ‘cookie’ mechanism to maintain the profile on the user's local computer, thus ensuring privacy. The VAS can then later access this information when the user's profile is required for commerce or advertising purposes.
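  • The Java sketch below illustrates one possible shape for such a static profile, with weighted categories, topics, and free-form keywords serialized into a single cookie value; the field names and encoding are assumptions made for illustration:

```java
// Sketch of a static personal profile serialized into a cookie value.
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class StaticProfileSketch {

    record StaticProfile(Map<String, Integer> categoryWeights,   // e.g. science=3, politics=1
                         List<String> topics,
                         List<String> keywords) {

        // Encode as a compact string suitable for a cookie value.
        String toCookieValue() {
            String cats = categoryWeights.entrySet().stream()
                    .map(e -> e.getKey() + ":" + e.getValue())
                    .collect(Collectors.joining(","));
            return "cats=" + cats
                 + "|topics=" + String.join(",", topics)
                 + "|kw=" + String.join(",", keywords);
        }
    }

    public static void main(String[] args) {
        StaticProfile profile = new StaticProfile(
                Map.of("science", 3, "politics", 1),
                List.of("the moons of Jupiter"),
                List.of("NASA", "telescope"));
        // In a servlet environment this value would be written to the user's
        // browser with response.addCookie(...), keeping the data client-side.
        System.out.println(profile.toCookieValue());
    }
}
```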
  • FIG. 5 is a flowchart showing a process 500 of gathering and managing personalization profile information based on the user's viewing habits to define their dynamic personal profile.
  • the dynamic profile is constantly updated based on the video content that the user views.
  • the process 500 is invoked whenever the user proactively searches for content and chooses to view it.
  • the video metadata previously extracted during the indexing process is consulted to extract category, topic, and keyword information that can contribute to the user's dynamic personal profile. This information is readily available as part of the video index, and can be easily gathered and added to the dynamic profile of the user.
  • the dynamic profile is stored and accessed using the standard ‘cookie’ mechanism previously described for the static profile process 400 described in conjunction with FIG. 4 .
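  • The following Java sketch shows one way such a dynamic profile could accumulate category and keyword counts from the metadata of each clip the user views; all names are illustrative:

```java
// Sketch of a dynamic profile updated from the metadata of viewed clips.
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DynamicProfileSketch {

    private final Map<String, Integer> categoryCounts = new HashMap<>();
    private final Map<String, Integer> keywordCounts = new HashMap<>();

    // Called whenever the user chooses to view a clip; the arguments come
    // from the category, topic, and keyword data already in the video index.
    void recordViewing(String clipCategory, List<String> clipKeywords) {
        categoryCounts.merge(clipCategory, 1, Integer::sum);
        clipKeywords.forEach(k -> keywordCounts.merge(k, 1, Integer::sum));
    }

    public static void main(String[] args) {
        DynamicProfileSketch profile = new DynamicProfileSketch();
        profile.recordViewing("sports", List.of("hockey", "playoffs"));
        profile.recordViewing("sports", List.of("hockey", "goal"));
        profile.recordViewing("science", List.of("Jupiter"));
        // 'sports' now dominates the dynamic profile and would be favored
        // when selecting commerce or advertising categories.
        System.out.println(profile.categoryCounts);
        System.out.println(profile.keywordCounts);
    }
}
```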
  • FIGS. 6 a , 6 b and 6 c are flowcharts showing the delivery and response to a targeted e-commerce offering.
  • FIG. 6 a illustrates a process 600 of making a targeted e-commerce opportunity available based on the subject information of the video being viewed at that moment by the user.
  • FIG. 6 b illustrates a similar process 620 based on using the personal profile information of the user.
  • FIG. 6 c illustrates a combined process 650 of using both the video content and the personal profile information to make an e-commerce opportunity available. In each case, an opportunity to make a purchase of a product or service is offered to the user in conjunction with viewing a video. This is similar to advertising in traditional broadcast video, but with two important differences.
  • the first is that the commerce opportunity is offered concurrently with the viewing of the video.
  • the second is that it is more than an advertisement; if the user selects the opportunity (either interrupting their viewing experience, or after their viewing experience is complete), the user can actually complete a purchase on the spot.
  • the process 600 shown in FIG. 6 a begins with the user viewing a selected video clip.
  • the video index is then consulted to extract the corresponding category information for that clip.
  • the category (for example, ‘sports’) is submitted to the e-commerce server to request a commerce opportunity corresponding to the category (for example, a hockey highlights video for purchase).
  • the commerce server returns the purchase opportunity, typically in the form of a graphic description of the highlights video available for purchase. If the user clicks on the opportunity, the purchase transaction is forwarded to the commerce server for fulfillment. At this point, detailed purchase information is gathered by the commerce server (such as DVD or video tape, billing and shipping information, etc.), and the commerce transaction is completed.
  • the process 620 shown in FIG. 6 b is similar to the process 600 in FIG. 6 a , except that the category information is extracted from the personalization profile(s) of the user. In this case, more than one category selection is usually present. Therefore, the process 620 includes a step to make a single category selection from the plurality of categories present in the personal profile.
  • the selection mechanism can be any of a number of algorithms, including random selection (using a random number generator), cyclic (or ‘round-robin’ selection), or least-recently-used.
  • the selected category is then submitted to the commerce server, and the transaction continues as for FIG. 6 a.
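  • The Java sketch below illustrates the three selection mechanisms named above (random, cyclic or round-robin, and least-recently-used) applied to the categories of a personal profile; the class and its names are assumptions made for illustration:

```java
// Sketch of category selection from a personal profile: random, cyclic,
// and least-recently-used strategies.
import java.util.*;

public class CategorySelectionSketch {

    private final List<String> categories;
    private final Random random = new Random();
    private int cyclicIndex = 0;
    private final Deque<String> lruOrder;   // least recently used at the front

    CategorySelectionSketch(List<String> categories) {
        this.categories = new ArrayList<>(categories);
        this.lruOrder = new ArrayDeque<>(categories);
    }

    String selectRandom() {
        return categories.get(random.nextInt(categories.size()));
    }

    String selectCyclic() {               // round-robin
        String c = categories.get(cyclicIndex);
        cyclicIndex = (cyclicIndex + 1) % categories.size();
        return c;
    }

    String selectLeastRecentlyUsed() {
        String c = lruOrder.pollFirst();  // take the least recently used
        lruOrder.addLast(c);              // it is now the most recently used
        return c;
    }

    public static void main(String[] args) {
        var selector = new CategorySelectionSketch(List.of("sports", "science", "politics"));
        System.out.println(selector.selectCyclic());            // sports
        System.out.println(selector.selectCyclic());            // science
        System.out.println(selector.selectLeastRecentlyUsed()); // sports
        System.out.println(selector.selectRandom());            // any of the three
    }
}
```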
  • FIG. 6 c depicts the combined process 650 that uses the video category information in conjunction with the personal profile information.
  • the system attempts to make a match between the video category and any of the categories present in the personal profile. If a match is found, the matching category is submitted to the commerce server as before. If no match is found, the selection mechanism (random, cyclic, etc.) is used to select a category, and the process 650 proceeds as before.
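  • A short Java sketch of this combined match-or-fallback step follows; names are illustrative, and the fallback shown simply takes the first profile category where a real system would apply one of the selection algorithms described above:

```java
// Sketch of combined targeting: match the clip's category against the
// profile, otherwise fall back to a profile-selection strategy.
import java.util.List;

public class CombinedTargetingSketch {

    static String chooseCategory(String videoCategory, List<String> profileCategories) {
        if (profileCategories.contains(videoCategory)) {
            return videoCategory;                       // content and profile agree
        }
        // No match: fall back to e.g. cyclic, random, or LRU selection.
        return profileCategories.get(0);
    }

    public static void main(String[] args) {
        System.out.println(chooseCategory("sports", List.of("science", "sports")));  // sports
        System.out.println(chooseCategory("travel", List.of("science", "sports")));  // science
    }
}
```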
  • FIGS. 7 a , 7 b and 7 c are flowcharts showing processes using content-based and personalization-based information to deliver a targeted advertisement.
  • FIG. 7 a illustrates a process 700 of making a targeted advertisement available based on the subject information of the video being viewed at that moment by the user.
  • FIG. 7 b illustrates a similar process 720 based on using the personal profile information of the user.
  • FIG. 7 c illustrates a combined process 750 of using both the video content and the personal profile information to make an advertisement available.
  • an advertisement is offered to the user in conjunction with viewing a video. The user can typically click on an advertisement to find out more information, be transported to another website, and potentially make a purchase there.
  • the process 700 shown in FIG. 7 a begins with the user viewing a selected video clip.
  • the video index is then consulted to extract the corresponding category information for that clip.
  • the category (for example, ‘science’) is submitted to the advertising server to request an advertisement corresponding to the category (for example, ‘come learn about space at Space.com’).
  • the ad server returns the advertisement, typically in the form of a clickable banner ad or video clip. If the user clicks on the ad, their browser typically connects to another website pertaining to the advertisement.
  • the process 720 shown in FIG. 7 b is similar to the one in FIG. 7 a , except that the category information is extracted from the personalization profile(s) of the user. In this case, more than one category selection is usually present. Therefore, the process 720 includes a step to make a single category selection from the plurality of categories present in the personal profile.
  • the selection mechanism can be any of a number of algorithms, including random selection (using a random number generator), cyclic (or ‘round-robin’ selection), or least-recently-used.
  • the selected category is then submitted to the advertising server, and the process 720 continues as described for FIG. 7 a.
  • FIG. 7 c depicts a combined process 750 that uses the video category information in conjunction with the personal profile information.
  • the system attempts to make a match between the video category and any of the categories present in the personal profile. If a match is found, the matching category is submitted to the advertising server as before. If no match is found, the selection mechanism (random, cyclic, etc.) is used to select a category, and the process 750 proceeds as before.

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Finance (AREA)
  • Strategic Management (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Marketing (AREA)
  • Databases & Information Systems (AREA)
  • Economics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Game Theory and Decision Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

A system and method for applying a database to video multimedia is disclosed. Certain embodiments provide media content owners the capability to exploit video processing capabilities using rich, interactive and compelling visual content on a network. Mechanisms of associating video with commerce offerings are provided. Video server and search server technologies are integrated with ad serving personalization agents to make the final presentations of content and advertising. Algorithms utilized by the system use a variety of techniques for making the final presentation decisions of which ads, with which content, are served to which user.

Description

    RELATED APPLICATIONS
  • This application is a continuation application of U.S. application Ser. No. 10/872,191, filed Jun. 18, 2004, and issued as U.S. Pat. No. 8,171,509, which is a divisional application of U.S. application Ser. No. 09/828,507, filed Apr. 6, 2001, which claims the benefit of U.S. Provisional Application No. 60/195,535, filed Apr. 7, 2000, each of which are hereby incorporated by reference in their entirety.
  • This application is related to U.S. application Ser. No. 09/827,772, filed Apr. 6, 2001 and titled “SYSTEM AND METHOD FOR HOSTING OF VIDEO CONTENT OVER A NETWORK,” and issued as U.S. Pat. No. 7,222,163, U.S. application Ser. No. 09/828,618, filed Apr. 6, 2001 and titled “VIDEO-ENABLED COMMUNITY BUILDING,” and issued as U.S. Pat. No. 7,962,948, and U.S. application Ser. No. 09/828,506, filed Apr. 6, 2001 and titled “NETWORK VIDEO GUIDE AND SPIDERING,” and issued as U.S. Pat. No. 7,260,564, each of which are hereby incorporated by reference in their entirety.
  • BACKGROUND
  • 1. Field
  • The present invention generally relates to the field of applying a database to video multimedia. More particularly, the invention relates to techniques for applying a database for accessing and processing digital video on a network.
  • 2. Description of the Related Technology
  • A number of techniques have evolved in recent years as the Internet has grown in size and sophistication, including:
      • The use of web servers and HTML delivery to web browsers.
      • The use of the application-server model for connecting database information with web pages and interactive interfaces for end users.
      • The use of dynamically generated HTML that pulls information from a database to dynamically format HTML for delivery to the end user.
      • The use of a template language to merge database output with pre-formatted HTML presentations.
      • The use of ‘cookies’ to track individual user preferences as they interact with the web pages and applications.
      • The use of e-commerce engines and financial transaction processing technology (such as available from IBM, Qpass, Oracle, etc.)
      • The use of agent technology to build and manage personalization profiles (such as available from Autonomy, Semio, Cyber Dialog, Net Perceptions, etc.)
      • The use of auto-categorization technologies to take a segment of transcript or a document, and analyze it using natural language processing techniques to identify category labels that apply to the body of text. Example vendors of these technologies (which also offer search technologies) include Webmind, Verity, Autonomy, and Semio.
  • These and other related web technologies and techniques are in commonplace use and readily accessible on the Internet.
  • In addition to these technologies, video indexing technology has also emerged, herein referred to as ‘video logging’. Video logging is a process that incorporates both automated indexing and manual annotation facilities to create a rich, fine-grained (in a temporal sense) index into a body of video content. The index typically consists of a combination of visual and textual indices that permit time-based searching of video content. The index may incorporate spoken text, speaker identifications, facial identifications, on-screen text, and additional annotations, keywords, and descriptions that may be applied by a human user executing the video logging application. The Virage VideoLogger® is one example of this type of video logging technology that is commercially available.
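To make the shape of such a time-synchronized index concrete, the following is a minimal sketch of one possible in-memory structure; the class names, track labels, and fields are illustrative assumptions and not part of any particular video logging product.

```python
from dataclasses import dataclass, field

@dataclass
class IndexEntry:
    track: str      # e.g. "speech", "speaker_id", "face_id", "ocr", "annotation"
    start: float    # start time code, in seconds
    end: float      # end time code, in seconds
    value: str      # the recognized text, name, or keyword

@dataclass
class VideoIndex:
    clip_id: str
    entries: list = field(default_factory=list)

    def add(self, track, start, end, value):
        self.entries.append(IndexEntry(track, start, end, value))

    def search(self, term):
        """Return (start, end, track) tuples whose text mentions the term."""
        term = term.lower()
        return [(e.start, e.end, e.track) for e in self.entries
                if term in e.value.lower()]

# Example: a clip indexed with spoken text and an on-screen-text hit.
idx = VideoIndex("clip-001")
idx.add("speech", 12.0, 18.5, "the rocket reached the launch pad")
idx.add("ocr", 15.0, 16.0, "NASA Kennedy Space Center")
print(idx.search("rocket"))   # -> [(12.0, 18.5, 'speech')]
```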
  • The delivery of streaming media on the Internet typically involves the encoding of video content into one or more streaming video formats and efficient delivery of that content for display to the end users. Common streaming formats presently in use include RealVideo, Microsoft Windows Media, QuickTime, and MPEG. The video logging technology may coordinate the encoding of one or more of these formats while the video is being indexed to ensure that the video index is time-synchronized with the encoded content. The final delivery of streaming media content for display to an end user is typically accomplished with a wide variety of video serving mechanisms and infrastructure. These mechanisms may include basic video servers (such as those from Real, Microsoft, or Apple), caching appliances (such as those from CacheFlow, Network Appliance, Inktomi, or Cisco), and content distribution networks (herein “CDN's”, such as those from Akamai, Digital Island, iBeam, or Adero). These types of video serving mechanisms ultimately deliver media content for display to the end user.
  • In an Internet/World Wide Web environment, companies frequently attempt to generate revenue through advertising and electronic commerce (hereinafter referred to as e-commerce) within their website. Whether selling products, services, or advertising, they all have a primary need to engage visitors in a compelling presentation of their offering, or of something associated with it, that ultimately convinces the visitor to make a purchase or follow an ad link, thus generating revenue for the company.
  • Increased visitors, repeat visitors, and increased visitation time all contribute to revenue streams derived from standard advertising models. In addition, these increased visitation properties also allow more numerous and frequent opportunities for e-commerce (products and services). E-commerce-based Websites exploiting video share the common goal of using rich and interactive media content (such as video) to more effectively sell products and services. Compelling video content can be used to create web experiences that are more efficient and compelling in terms of communicating value and relevance to the (potential) customer. Highly-targeted advertising and e-commerce is made possible by associating demographic and product/service information with video content. Consumers are more likely to respond to targeted offerings than random offerings, thus making the website more productive.
  • Therefore, what is needed in the technology is a system that effectively uses and manages video in a central role for commerce-oriented websites so as to increase their success. What is desired are mechanisms of associating video with commerce offerings, which in turn, can be used to build the websites and e-commerce tools that many companies and website owners want.
  • SUMMARY OF CERTAIN INVENTIVE ASPECTS
  • The present system and method relate to techniques whereby various traditional mechanisms are combined in an innovative way with an interactive video search and retrieval application environment. Video content is indexed and encoded using applications such as, for example, the VideoLogger available from Virage. The index provides a rich, fine-grained search mechanism to access the video in a non-linear fashion. This turns interactive video into a useful and attractive feature on a website. The use of auto-categorization technology allows the system to automatically identify category designations of the content during the indexing phase, where the categories are useful in the process of selecting relevant ads and commerce options to be presented to the user. Thus, the index is structured to also provide higher level topic and category information.
  • A video search and retrieval application gives website visitors the ability to search media content to find segments that are of interest. Utilizing these search and retrieval capabilities and a repository of engaging content, various mechanisms can be added.
  • In one embodiment, there is a method of applying a database to video multimedia, the method comprising indexing video content; storing the indexed video content in an index database, the indexed video content comprising metadata; encoding the video content concurrent with the indexing of the video content, wherein the index database does not contain the encoded video content; and storing in the index database at least one tag correlated with the video content on a time-code basis, wherein the tag is valid for a certain span of time within the video, and wherein the tag is configured to be associated with an advertisement or ecommerce opportunity, wherein the method is carried out in a computing environment.
  • The method may additionally comprise making associations between the video content and at least one of ad banners, product offerings, and service offerings so that such items are associated with the tags. The method may additionally comprise collecting a user profile describing the content that is most of interest to the user. The method may additionally comprise learning the user profile by monitoring usage patterns of the user. The user profile may be combined with the tags so as to make targeted associations between at least one of ads, products, services, and a person viewing the video content. The method may additionally comprise storing a plurality of indices that result from the indexing in the index database, wherein each stored index may be associated with one of a plurality of different metadata types and at least a portion of the stored indices are associated with different ones of the metadata types. The method may additionally comprise algorithmically selecting a metadata element from a plurality of metadata elements in the user profile, wherein the algorithmic selecting utilizes one of cyclic, least-recently used, or random selection. The method may additionally comprise algorithmically selecting an advertisement or ecommerce opportunity based on the selection of the metadata element. The algorithmic selecting of the advertisement or ecommerce opportunity may utilize at least one of heuristics, fuzzy logic or hidden Markov models. The method may additionally comprise algorithmically selecting an advertisement or ecommerce opportunity based on selected metadata of the video content. The selected advertisement or ecommerce opportunity may be configured for display concurrently with viewing of video content that is played. The data corresponding with a metadata type may have a time span that is different than the data corresponding with another metadata type.
  • In another embodiment, there is a non-transitory computer readable medium containing program instructions for applying a database to video multimedia, wherein execution of the program instructions by a computing environment carries out a method, comprising indexing video content; storing a plurality of indices that result from the indexing in an index database, the indices comprising metadata; encoding the video content concurrent with the indexing of the video content, wherein the index database does not contain the encoded video content; storing in the index database a plurality of tags correlated with the video content on a time-code basis via the index database; collecting, with a personalization agent, a user profile describing the content that is most of interest to the user; algorithmically selecting a single metadata type from a plurality of metadata types in the user profile; and algorithmically selecting an advertisement or ecommerce opportunity associated with the selected single metadata type. The method embodied by program instructions may additionally comprise combining the user profile with the tags so as to make targeted associations between at least one of ads, products, services, and the person viewing the video content. The method embodied by program instructions may additionally comprise making associations between the video content and at least one of ad banners, product offerings, and service offerings so that such items are associated with the tags.
  • In yet another embodiment, there is a system for applying a database to video multimedia, the system comprising a computing environment configured to index video content; a computer database accessed by the computing environment, the computer database storing a plurality of indices that result from indexing the video content, the indices comprising metadata, wherein each stored index is associated with one of a plurality of different metadata types and at least a portion of the stored indices are associated with different ones of the metadata types; the computing environment further configured to encode the video content concurrent with the indexing of the video content, wherein the database does not contain the encoded video content; store in the database a plurality of tags correlated with the video content on a time-code basis via the computer database; make associations between the video content and at least one of ad banners, product offerings, and service offerings so that such items are synchronized via the tags; and algorithmically select an advertisement or ecommerce opportunity based on metadata of the video content. At least one of the tags may be valid for a certain span of time within the video. The computing environment may be further configured to collect a user profile describing the content that is most of interest to the user. The user profile may be combined with the tags so as to make targeted associations between at least one of ads, products, services, and a person viewing the video content.
  • In yet another embodiment, there is a method of applying a database to video multimedia, the method comprising indexing video content to generate an index; storing the index in an index database; encoding the video content concurrent with the indexing of the video content, wherein the index database does not contain the encoded video content; and associating a plurality of tags with the video content on a time-code basis via the database, wherein at least one of the tags is valid for a certain span of time within the video.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The above and other aspects, features and advantages will be better understood by referring to the following detailed description, which should be read in conjunction with the accompanying drawings. These drawings and the associated description are provided to illustrate certain embodiments, and not to limit the scope of the invention.
  • FIG. 1 is a diagram of an example network configuration in which certain embodiments may operate.
  • FIG. 2 is a block diagram of an example system architecture in accordance with certain embodiments.
  • FIG. 3 is a block diagram showing a high-level system view of the example video application server embodiment shown in FIG. 2 and the server's interaction with e-commerce subsystems.
  • FIG. 4 is a flowchart showing an example process of gathering and managing personalization profile information from the user to define their static personal profile such as performed on the example architecture embodiment shown in FIG. 2.
  • FIG. 5 is a flowchart showing an example process of gathering and managing personalization profile information based on the user's viewing habits to define their dynamic personal profile such as performed on the example architecture embodiment shown in FIG. 2.
  • FIGS. 6 a, 6 b and 6 c are flowcharts showing an example delivery and response to a targeted e-commerce offering such as performed on the example architecture embodiment shown in FIG. 2.
  • FIGS. 7 a, 7 b and 7 c are flowcharts showing example processes for using content-based and personalization-based information to deliver a targeted advertisement such as performed on the architecture embodiment shown in FIG. 2.
  • DETAILED DESCRIPTION OF CERTAIN EMBODIMENTS
  • The following detailed description of certain embodiments presents various descriptions of specific embodiments. However, the present invention can be embodied in a multitude of different ways as defined and covered by the claims. In this description, reference is made to the drawings wherein like parts are designated with like numerals throughout.
  • Definitions
  • The following provides a number of useful possible definitions of terms used in describing certain embodiments.
  • A network may refer to a network or combination of networks spanning any geographical area, such as a local area network, wide area network, regional network, national network, and/or global network. The Internet is an example of a current global computer network. Those terms may refer to hardwire networks, wireless networks, or a combination of hardwire and wireless networks. Hardwire networks may include, for example, fiber optic lines, cable lines, ISDN lines, copper lines, etc. Wireless networks may include, for example, cellular systems, personal communications service (PCS) systems, satellite communication systems, packet radio systems, and mobile broadband systems. A cellular system may use, for example, code division multiple access (CDMA), time division multiple access (TDMA), Personal Digital Cellular (PDC), Global System for Mobile Communications (GSM), or frequency division multiple access (FDMA), among others.
  • A website may refer to one or more interrelated web page files and other files and programs on one or more web servers. The files and programs are accessible over a computer network, such as the Internet, by sending a hypertext transfer protocol (HTTP) request specifying a uniform resource locator (URL) that identifies the location of one of said web page files, wherein the files and programs are owned, managed or authorized by a single business entity. Such files and programs can include, for example, hypertext markup language (HTML) files, common gateway interface (CGI) files, and Java applications. The web page files preferably include a home page file that corresponds to a home page of the website. The home page can serve as a gateway or access point to the remaining files and programs contained within the website. In one embodiment, all of the files and programs are located under, and accessible within, the same network domain as the home page file. Alternatively, the files and programs can be located and accessible through several different network domains.
  • A web page or electronic page may comprise that which is presented by a standard web browser in response to an HTTP request specifying the URL by which the web page file is identified. A web page can include, for example, text, images, sound, video, and animation.
  • Content, media content and streaming media content may refer to the delivery of electronic materials such as music, videos, software, books, multimedia presentations, images, and other electronic data, for example over a network to one or more users. Content data will typically be in the form of computer files for video, audio, program, data and other multimedia type content as well as actual physical copies of valuable content, for example CD-ROM, DVD, VCR, audio, TV or radio broadcast signals, streaming audio and video over networks, or other forms of conveying such information. The terms content, media content and streaming media content may be used interchangeably.
  • A computer or computing device may be any processor controlled device that permits access to the Internet, including terminal devices, such as personal computers, workstations, servers, clients, mini-computers, main-frame computers, laptop computers, a network of individual computers, mobile computers, palm-top computers, hand-held computers, set top boxes for a television, other types of web-enabled televisions, interactive kiosks, personal digital assistants, interactive or web-enabled wireless communications devices, mobile web browsers, or a combination thereof. The computers may further possess one or more input devices such as a keyboard, mouse, touch pad, joystick, pen-input-pad, and the like. The computers may also possess an output device, such as a visual display and an audio output. One or more of these computing devices may form a computing environment.
  • These computers may be uni-processor or multi-processor machines. Additionally, these computers may include an addressable storage medium or computer accessible medium, such as random access memory (RAM), an electronically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), hard disks, floppy disks, laser disk players, digital video devices, compact disks, video tapes, audio tapes, magnetic recording tracks, electronic networks, and other techniques to transmit or store electronic content such as, by way of example, programs and data. In one embodiment, the computers are equipped with a network communication device such as a network interface card, a modem, or other network connection device suitable for connecting to the communication network. Furthermore, the computers execute an appropriate operating system such as Linux, Unix, any of the versions of Microsoft Windows, Apple MacOS, IBM OS/2 or other operating system. The appropriate operating system may include a communications protocol implementation that handles all incoming and outgoing message traffic passed over the Internet. In other embodiments, while the operating system may differ depending on the type of computer, the operating system will continue to provide the appropriate communications protocols to establish communication links with the Internet.
  • The computers may contain program logic, or other substrate configuration representing data and instructions, which cause the computer to operate in a specific and predefined manner, as described herein. In one embodiment, the program logic may be implemented as one or more object frameworks or modules. These modules may be configured to reside on the addressable storage medium and configured to execute on one or more processors. The modules include, but are not limited to, software or hardware components that perform certain tasks. Thus, a module may include, by way of example, components, such as, software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.
  • The various components of the system may communicate with each other and other components comprising the respective computers through mechanisms such as, by way of example, interprocess communication, remote procedure call, distributed object interfaces, and other various program interfaces. Furthermore, the functionality provided for in the components, modules, and databases may be combined into fewer components, modules, or databases or further separated into additional components, modules, or databases. Additionally, the components, modules, and databases may be implemented to execute on one or more computers. In another embodiment, some of the components, modules, and databases may be implemented to execute on one or more computers external to the website. In this instance, the website includes program logic, which enables the website to communicate with the externally implemented components, modules, and databases to perform the functions as disclosed herein.
  • Overview of Auto-categorization
  • Auto-categorization of content, specifically applying a category to a given time segment of the video, is particularly useful in certain embodiments. Video logging applications (such as the Virage VideoLogger) can typically accept novel signal and linguistic analysis algorithms to further refine and extend the metadata index generated during the logging phase. Several vendors offer categorization algorithms and technology, such as Webmind, Verity, Autonomy, and Semio. Extensibility and how it is used to integrate additional technology, such as categorization, is described in Applicant's copending U.S. Patent application Ser. No. 09/134,497, entitled “Video Cataloger System With Synchronized Encoders”, which is hereby incorporated by reference. Categorization technology from any of these vendors can thus be integrated into the logging phase. Each of these technologies accepts an input stream of text and responds with a category designation. The categories are used in the process of selecting relevant ads and commerce options to be presented to the user. Most of these offerings require a training phase whereby a known body of content and corresponding categories are provided to the categorization engine, and a linguistic model is constructed. Thereafter, as new content is submitted to the engine, it can reliably generate category designations. These systems are effective across multiple languages, and are relatively new and rapidly maturing. Auto-categorization of content is utilized because it offers the ability to scale the content processing up to large volumes within an automatic process. Manual solutions are also available (e.g., human editors making judgment calls on the content) but are much less scalable in a business sense.
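The categorization engines named above are commercial products with their own interfaces; purely as an illustration of the train-then-categorize pattern described in this paragraph, the sketch below builds a naive keyword-frequency model from labeled example texts and then assigns a category to new text. The function names and scoring rule are assumptions, not any vendor's algorithm.

```python
from collections import Counter, defaultdict
import re

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def train(labeled_texts):
    """labeled_texts: dict mapping category -> list of example documents."""
    model = defaultdict(Counter)
    for category, docs in labeled_texts.items():
        for doc in docs:
            model[category].update(tokenize(doc))
    return model

def categorize(model, text):
    """Return the category whose training vocabulary best overlaps the text."""
    words = Counter(tokenize(text))
    scores = {cat: sum(min(words[w], counts[w]) for w in words)
              for cat, counts in model.items()}
    return max(scores, key=scores.get)

model = train({
    "sports":  ["the goalie stopped the puck in the final period"],
    "science": ["the probe measured the moons of Jupiter"],
})
print(categorize(model, "highlights from last night's hockey period"))  # sports
```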
  • Overview of E-commerce Mechanisms
  • A video search and retrieval (e.g., browse) application gives website visitors the ability to search media content to find segments that are of interest. Utilizing these search and retrieval capabilities and a repository of engaging content, various e-commerce mechanisms can be added on. Ad banners, product offerings, and service offerings can each be triggered to appear in a synchronized fashion with video content being viewed by the end-user. For example, a product demonstration video can be viewed with associated links and mechanisms to purchase the product. A sports video can have sneaker ads automatically interspersed. These associations are made possible by associating keyword ‘tags’ with video content on a time-code basis. The tag is ‘valid’ for a certain span of time within the video. A metadata model, time spans, time stamps and other related concepts are further described in Applicant's copending U.S. patent application Ser. No. 09/134,497, entitled “Video Cataloger System With Synchronized Encoders”, especially in conjunction with FIGS. 6, 7, 8 and 9 of the application.
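The notion of a tag that is ‘valid’ for a span of time can be sketched as follows; the field names and lookup helper are illustrative assumptions, not the patent's data model.

```python
from dataclasses import dataclass

@dataclass
class CommerceTag:
    start: float     # seconds into the clip where the tag becomes valid
    end: float       # seconds where it stops being valid
    keyword: str     # e.g. "sneakers"
    offer_id: str    # identifier of the associated ad or product offer

def active_tags(tags, playback_time):
    """Return the tags whose validity span covers the current playback time."""
    return [t for t in tags if t.start <= playback_time <= t.end]

tags = [
    CommerceTag(0.0, 45.0, "sneakers", "ad-sneaker-001"),
    CommerceTag(30.0, 90.0, "jersey", "sku-jersey-042"),
]
print([t.offer_id for t in active_tags(tags, 40.0)])
# -> ['ad-sneaker-001', 'sku-jersey-042']
```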
  • A related mechanism for targeting advertising and e-commerce is also disclosed. Given a repository of indexed video as described above, it is also possible to create ‘personalization agents’ to exploit user information, in addition to content-based information, in the targeting process. A personalization agent gathers a specification (a ‘profile’) from the user describing the content, products, and services that are most of interest to the user. Additionally, a personalization agent has the ability to ‘learn’ the personalization profile by monitoring the usage patterns of the user. The personalization profile, combined with content-based tagging, can be used to make highly-targeted associations between ads, products, services, and the person viewing the content.
  • Video server and search server technologies are integrated with ad serving personalization agents to make the final presentations of content, advertising, and commerce. The algorithms for making the final presentation decisions (which ads with which content served to which user) may be made using combinations of any of the following: look-up tables, keyword intersections, heuristics, fuzzy-logic, Hidden Markov Models (HMM's), and so forth.
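Of the decision techniques listed, the keyword-intersection variant is the simplest to illustrate. The sketch below, with a hypothetical ad inventory, scores candidate ads by how many of their keywords intersect the content- and profile-derived keywords; it is one possible instance of the technique, not the system's actual algorithm.

```python
def pick_ad(content_keywords, profile_keywords, ad_inventory):
    """Score each candidate ad by how many of its keywords intersect the
    union of content-derived and profile-derived keywords; return the best."""
    context = set(content_keywords) | set(profile_keywords)
    def score(ad):
        return len(context & set(ad["keywords"]))
    return max(ad_inventory, key=score)

ads = [
    {"id": "ad-1", "keywords": {"hockey", "sneakers", "sports"}},
    {"id": "ad-2", "keywords": {"space", "telescope", "science"}},
]
print(pick_ad({"sports", "hockey"}, {"science"}, ads)["id"])  # -> ad-1
```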
  • DESCRIPTION OF THE FIGURES
  • FIG. 1 is a diagram of an example network configuration 100 in which certain embodiments may operate. However, various other types of electronic devices communicating in a networked environment may also be used. An end user 102 communicates with a computing environment, which may include multiple server computers 108 or a single server computer 110 in a client/server relationship on a network communication medium 116. In a typical client/server environment, each of the server computers 108, 110 may include a server program that communicates with a user device 115, which may be a personal computer (PC), a hand-held electronic device, a mobile or cellular phone, a TV set or any number of other electronic devices.
  • The server computers 108, 110, and the user device 115 may each have any conventional general purpose single- or multi-chip microprocessor, for example a Pentium processor, a Pentium Pro processor, a MIPS processor, a Power PC processor, an ALPHA processor, or other processor. In addition, the microprocessor may be any conventional special purpose microprocessor such as a digital signal processor or a graphics processor. Additionally, the server computers 108, 110 and the user device 115 may be desktop, server, portable, hand-held, set-top, or other desired type of computing device. Furthermore, the server computers 108, 110 and the user device 115 each may be used in connection with various operating systems, including, for example, UNIX, LINUX, Disk Operating System (DOS), VxWorks, PalmOS, OS/2, any version of Microsoft Windows, or other operating system.
  • The server computers 108, 110 and the user device 115 may each include a network terminal equipped with a video display, keyboard and pointing device. In one embodiment of the network configuration 100, the user device 115 includes a network browser 120 used to access the server computers 108, 110. The network browser 120 may be, for example, Microsoft Internet Explorer or Netscape Navigator. The user 102 at the user device 115 may utilize the browser 120 to remotely access the server program using a keyboard and/or pointing device and a visual display, such as a monitor 118. Although FIG. 1 shows only one user device 115, the network configuration 100 may include any number of client devices.
  • The network 116 may be any type of electronic transmission medium, for example, including but not limited to the following networks: a virtual private network, a public Internet, a private Internet, a secure Internet, a private network, a public network, a value-added network, an intranet, or a wireless gateway. The term “virtual private network” refers to a secure and encrypted communications link between nodes on the Internet, a Wide Area Network (WAN), Intranet, or any other network transmission means.
  • In addition, the connectivity to the network 116 may be via, for example, a modem, Ethernet (IEEE 802.3), Token Ring (IEEE 802.5), Fiber Distributed Datalink Interface (FDDI), Asynchronous Transfer Mode (ATM), Wireless Application Protocol (WAP), or other form of network connectivity. The user device 115 may connect to the network 116 by use of a modem or by use of a network interface card that resides in the user device 115. The server computers 108 may be connected via a wide area network 106 to a network gateway 104, which provides access to the wide area network 106 via a high-speed, dedicated data circuit.
  • As would be understood by one skilled in the technology, devices other than the hardware configurations described above may be used to communicate with the server computers 108, 110. If the server computers 108, 110 are equipped with voice recognition or Dual Tone Multi-Frequency (DTMF) hardware, the user 102 may communicate with the server computers by use of a telephone 124. The telephone may optionally be equipped with a browser 120 and display screen. Other examples of connection devices for communicating with the server computers 108, 110 include a portable personal computer (PC) 126 or a personal digital assistant (PDA) device with a modem or wireless connection interface, a cable interface device 128 connected to a visual display 130, or a satellite dish 132 connected to a satellite receiver 134 and a television 136. Still other methods of allowing communication between the user 102 and the server computers 108, 110 are additionally contemplated by this application.
  • Additionally, the server computers 108, 110 and the user device 115 may be located in different rooms, buildings or complexes. Moreover, the server computers 108, 110 and the user device 115 could be located in different geographical locations, for example in different cities, states or countries. This geographic flexibility which networked communications allows is additionally within the contemplation of this application.
  • FIG. 2 is a block diagram of an example system architecture 200 in accordance with certain embodiments. In one embodiment, the system architecture 200 includes a commerce website facility 210, which further includes a video encoding module 214 and a video logging module 216, both of which receive media content 212. Although the term facility is used, the components do not necessarily need to be at a common location. The commerce website facility 210 further includes a video editorial module 218, which communicates with the video logging module 216. The commerce website facility 210 further includes a video application server 220, which communicates with the video editorial module 218. The commerce website facility 210 further includes a web server 222, which communicates with the video application server 220. The commerce website facility 210 further includes a video index 224, which is produced by the video logging module 216 and the video editorial module 218, and is maintained by the video application server 220. The commerce website facility 210 further includes a server administration (“Admin”) module 228, which communicates with the web server module 222. The commerce website facility 210 further includes a commerce module 250 and a personalization module 260, both of which communicate with the video application server 220 and the web server 222. The commerce module 250 and the personalization module 260 are described in greater detail below with regard to certain embodiments of FIG. 2, and additionally in reference to FIGS. 4 through 7.
  • In one embodiment, the system architecture 200 further includes the network 116 shown in FIG. 1, which may be the Internet. Web pages 232 and search forms 234 are accessible via the Internet 116. Each web page 232 may depict a plurality of pages rendered by various web servers. The search form 234 is also accessible by the commerce website facility web server 222. Additionally accessible via the Internet 116 is results data 238, which is produced by the web server 222. Also accessible via the Internet 116 is a video player 236, which communicates with the web server 222. The system architecture 200 further includes a content distribution network 240, which transfers encoded video to the video player 236. The content distribution network 240 further receives uploaded digital video files from the video encoding module 214. The content distribution network 240 may be part of a wide variety of video serving mechanisms and infrastructures that serve to deliver encoded media content 242 for display to the end user 102 shown in FIG. 1. The content distribution network 240 may include a content owner running a simple video server at the content owner facility 220, a complex edge caching content distribution mechanism, or other mechanisms to transmit video and other media content for display to end users 102.
  • The following paragraphs provide a description of the operation of one embodiment of the system architecture 200 shown in FIG. 2. A commerce website may be hosted internally on the commerce web server 222 as shown in FIG. 2, or alternatively outsourced to a web-hosting service provider, which delivers commerce features as described herein to end users 102. The operation of the video encoding module 214, video logging module 216, video editorial module 218, video application server module 220, video index 224, administration module 228, and web server 222 are described with respect to embodiments disclosed in the related application titled “Interactive Video Application Hosting” (U.S. application Ser. No. 09/827,772), which was incorporated by reference above. To the extent that these modules may operate differently in certain embodiments than in the related application, any such differences will be described herein.
  • In one embodiment, the video application server module 220 manages the video index containing metadata and annotations produced by the video logging module 216. The application server 220 receives video and metadata after the video logging 216 and video editorial 218 modules, and transfers video search form 234 queries and results 238 data to the web server 222 for display to an end user 102 (FIG. 1) in a web browser 120 at the user device 115 via the Internet 116. In one embodiment, the communication of the search form 234 queries and results 238 data to the web server 222 includes an exchange of extensible markup language (XML) data, although one skilled in the technology will understand that other data exchange formats may also be utilized. Final HTML rendering of search forms 234, results 238 presentation, and video player 236 playback windows may be accomplished via templates, whereby such templates dictate the graphical look-and-feel of the final media presentation. Actual metadata results, communicated via XML or other data exchange formats, may be rendered into the template by substituting special keywords with results from the video application server 220 to form an HTML-compliant presentation. Additional communications may be provided with the administration module 228 for server administration, metadata editing, and batch processing. Batch processing may be accomplished for insertion processes, deletion or ‘aging’ processes, metadata editing, or for automated performance of other tasks as well. The administration module 228 further allows system administrators to manage the video application server 220, including, for example, index management, asset management, editing, and startup and shutdown control.
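As an illustration of the template mechanism just described (special keywords substituted with server results to form HTML), a minimal sketch follows; the double-brace placeholder syntax and field names are assumptions rather than the actual template language.

```python
import re

def render(template, results):
    """Replace placeholders of the form {{NAME}} with values from results."""
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: str(results.get(m.group(1), "")),
                  template)

template = "<li><a href='{{URL}}'>{{TITLE}}</a> ({{DURATION}}s)</li>"
print(render(template, {"TITLE": "Hockey highlights",
                        "URL": "/clips/42",
                        "DURATION": 95}))
# -> <li><a href='/clips/42'>Hockey highlights</a> (95s)</li>
```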
  • In one embodiment, regardless of its original form, the content 212 is processed by the video logging module 216 to extract index data, for example keyframes, closed-caption text, speaker identifications, facial identifications, or other index data. The content 212 may additionally undergo processing by the video editorial module 218, whereby humans may elect to add labels to the index of the content 212 by providing additional annotations, descriptions, keywords, or any other marking information such as commerce tags. The index and annotation information is transferred to the video application server 220, which hosts publishing, search, retrieval, browse, or other related video services. The video application server 220 may maintain the metadata in the video index 224. The video application server 220 provides the above-described video services to the web server 222 for incorporation into the web pages 232 via the template mechanism described above.
  • In another embodiment, the video application server 220 includes the server itself that processes XML-based queries and data management activities, performs searches against the video index, and returns video references and metadata via XML or other data exchange formats. Other modules of the video application server 220 include the search or browse rendering interface which processes HTML requests into XML, and additionally processes XML responses back into HTML for delivery by the web server 222 using templates to format and render the XML data into HTML.
  • In one embodiment, the video application server's 220 XML-based open architecture allows for simple integration of additional features and functions, such as, for example, an e-commerce engine as shown in FIG. 3. Such functions may be implemented in various commonly used programming languages, for example Perl, C, C++, Java, or other programming languages, and may utilize publicly or commercially available packages for parsing and formatting XML or other data exchange formats.
  • FIG. 3 is a block diagram showing a high-level view 300 of the video application server (VAS) and its interaction with e-commerce, targeted advertising, and personalization subsystems. A video application server architecture includes the server that processes XML-based queries and data management activities, performs searches against the video index, and returns video references and metadata via XML. One such architecture is described in U.S. application Ser. No. 09/827,772, filed Apr. 6, 2001 and titled “Interactive Video Application Hosting”, which was incorporated by reference above. Other modules of the application server include a Search/Browse rendering interface which processes HTML requests into XML, and also processes XML responses back into HTML for delivery by the Web server using templates to format and render the XML data into HTML; and the Administration module that allows system administrators to manage the application server (index management, asset management, editing, start-up/shut-down, etc.).
  • The video application server's open, XML-based architecture readily allows the integration of additional features and functions, ranging from syndication engines and commerce-building mechanisms to the e-commerce, targeted advertising, and personalization modules contemplated here. Any such modules can be implemented in any of several commonly used languages (Perl, C, C++, Java, etc.), and can utilize publicly and commercially available packages of subroutines for parsing and formatting XML.
  • The Personalization Server in FIG. 3 interacts with the Personalization Interface through any of a number of communication mechanisms, including HTML, XML, and proprietary protocols specific to the Personalization Server employed. The main task of the Personalization Interface is to mediate between the protocol and semantic vocabulary of the chosen Personalization Server and the Video Application Server's XML interface. The VAS serves as a persistent store of state information about individual users to maintain profiles on behalf of the Personalization Server. Personalization features are rendered in HTML for the end user, which allows the user to select categories, topics, and preferences to help define their individual profile. Some Personalization Servers will also allow for monitoring of an individual's activity and behavior to more accurately characterize the preferences of the end user. This is referred to as ‘learning behavior’ and allows the personal profile to grow and change over time. The system can accommodate a range of capabilities within the Personalization Server, and can supply context and monitoring information about the user in question. Typically, user-specific profiles and behavior information can be stored within a “cookie” on the user's own computer, set-top box, etc., thus ensuring privacy. The value of personalization technology relevant to the system is in its ability to direct and select the presentation of e-commerce and advertising opportunities for the end user. The Personalization Interface module is the connective mechanism between the detailed information about the user, the detailed information about the content (based on the automatic indexing), and the range of available commerce and advertising opportunities that could be presented to the user at any given point in time, based on the content being viewed and the user in question.
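As one way to picture the mediation role of the Personalization Interface, the sketch below translates a profile held client-side as JSON in a cookie into a small XML fragment of the kind the VAS interface might consume; the element and attribute names are assumptions made for illustration.

```python
import json
import xml.etree.ElementTree as ET

def profile_cookie_to_xml(cookie_value):
    """Translate a JSON-encoded profile cookie into an XML profile element."""
    data = json.loads(cookie_value)
    profile = ET.Element("profile", {"user": data.get("user", "anonymous")})
    for cat in data.get("categories", []):
        ET.SubElement(profile, "category", {"name": cat["name"],
                                            "weight": str(cat["weight"])})
    for kw in data.get("keywords", []):
        ET.SubElement(profile, "keyword").text = kw
    return ET.tostring(profile, encoding="unicode")

cookie = '{"user": "u123", "categories": [{"name": "science", "weight": 3}], "keywords": ["Jupiter"]}'
print(profile_cookie_to_xml(cookie))
```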
  • The e-commerce engine embodiment shown in FIG. 3 represents any of a number of commercially available engines for processing e-commerce transactions by interfacing with standard transaction infrastructure, represented by the Transaction System module. The Transaction System in reality represents the diverse processing subsystems typified by offerings from SAP and others present in many commercial enterprises. The Transaction System interfaces with databases, order processing subsystems, shipping, inventory, billing, and customer services systems. In the system, the e-commerce interface is responsible for mediating between the various information sources (personalization and video content via the Video Application Server) that determine which e-commerce opportunity should be presented to the end user at any given point in time, based on the video content and the preferences of the user. The e-commerce opportunity is presented to the user in an HTML framework, and should the user select a commercial transaction, control is passed to the e-commerce engine from the e-commerce interface.
  • The ad server mechanism embodiment shown in FIG. 3 represents any of a number of commercial ad server vendors, most of which offer facilities for requesting a topic-specific ad in response to a request that contains category information. The primary task of the ad interface is to mediate between the content-specific and personal profile-specific information (via XML) and the protocol of the ad server. The result of a request is a targeted ad (in the form of a banner, video clip, etc.) that is presented to the user via HTML in context with the video clip being served by the video application server.
  • FIG. 4 is a flowchart showing a process 400 of gathering and managing personalization profile information from the user to define their static personal profile. The exact mechanism and profile information gathered in this process 400 are dependent on the personalization server employed in the system; the process 400 depicted in FIG. 4 is merely offered as an example of the types of profile information that can be gathered and the manner in which a system might interact with a user to gather such information. The process 400 begins when the user elects to define or modify their personal profile. Typically, the user may select categories, topics within those categories, and arbitrary keywords to define their static profile. This profile information is actively defined by the user, and is stored on their behalf, being relatively static in the sense that it does not dynamically update based on their viewing habits. Categories are typically selected from a pre-defined list of available categories, and might include things like ‘politics’, ‘sports’, ‘science’, etc. Topics are a further refinement within a category, and might include things like ‘presidential elections’, ‘hockey highlights’, or ‘the moons of Jupiter’. Topic selection begins by selecting a category within which specific topics are selected from pre-defined lists. Keywords, unlike categories and topics, are specified with a free-form entry, and are not pre-defined. Keywords are typically any word or set of words that the user deems of interest to them that might appear in the transcript of the video. Examples of keywords include proper nouns (persons, places, locations, or organizations) and other nouns that carry information important to the user. A given personalization server might employ any or all of these methods of defining a personal profile. Additionally, some personalization servers may also allow the specification of a weighting mechanism to identify the importance of each selection. For example, ‘science’ may be more important to the user than ‘politics’, and the user will be offered the ability to indicate this distinction through an importance rating (High, Medium, Low) or a numerical weighting value. In one embodiment, the profile information is stored on the user's behalf using a standard ‘cookie’ mechanism to maintain the profile on the user's local computer, thus ensuring privacy. The VAS can then later access this information when the user's profile is required for commerce or advertising purposes.
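A minimal data-model sketch of such a static profile, with the High/Medium/Low ratings mapped to numeric weights, might look like the following; the names are illustrative only.

```python
from dataclasses import dataclass, field

IMPORTANCE = {"High": 3, "Medium": 2, "Low": 1}

@dataclass
class StaticProfile:
    categories: dict = field(default_factory=dict)   # category -> weight
    topics: dict = field(default_factory=dict)       # category -> list of topics
    keywords: list = field(default_factory=list)     # free-form keywords

    def add_category(self, name, importance="Medium"):
        self.categories[name] = IMPORTANCE[importance]

    def add_topic(self, category, topic):
        self.topics.setdefault(category, []).append(topic)

profile = StaticProfile()
profile.add_category("science", "High")
profile.add_category("politics", "Low")
profile.add_topic("science", "the moons of Jupiter")
profile.keywords.append("hockey")
print(profile.categories)   # {'science': 3, 'politics': 1}
```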
  • FIG. 5 is a flowchart showing a process 500 of gathering and managing personalization profile information based on the user's viewing habits to define their dynamic personal profile. The dynamic profile is constantly updated based on the video content that the user views. The process 500 is invoked whenever the user proactively searches for content and chooses to view it. At this point, the video metadata previously extracted during the indexing process is consulted to extract category, topic, and keyword information that can contribute to the user's dynamic personal profile. This information is readily available as part of the video index, and can be easily gathered and added to the dynamic profile of the user. The dynamic profile is stored and accessed using the standard ‘cookie’ mechanism previously described for the static profile process 400 described in conjunction with FIG. 4.
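Under the assumption that the dynamic profile is simply an accumulation of the category, topic, and keyword metadata of each viewed clip, an update step could be sketched as follows; the function and field names are hypothetical.

```python
from collections import Counter

def update_dynamic_profile(profile_counts, clip_metadata):
    """Fold one viewed clip's index metadata into the running dynamic profile.

    profile_counts: Counter of previously seen categories/topics/keywords.
    clip_metadata:  dict with 'categories', 'topics', 'keywords' lists
                    extracted from the video index at viewing time.
    """
    for key in ("categories", "topics", "keywords"):
        profile_counts.update(clip_metadata.get(key, []))
    return profile_counts

counts = Counter()
update_dynamic_profile(counts, {"categories": ["sports"],
                                "topics": ["hockey highlights"],
                                "keywords": ["goalie", "playoffs"]})
update_dynamic_profile(counts, {"categories": ["sports"]})
print(counts.most_common(2))   # [('sports', 2), ('hockey highlights', 1)]
```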
  • FIGS. 6 a, 6 b and 6 c are flowcharts showing the delivery and response to a targeted e-commerce offering. FIG. 6 a illustrates a process 600 of making a targeted e-commerce opportunity available based on the subject information of the video being viewed at that moment by the user. FIG. 6 b illustrates a similar process 620 based on using the personal profile information of the user. FIG. 6 c illustrates a combined process 650 of using both the video content and the personal profile information to make an e-commerce opportunity available. In each case, an opportunity to make a purchase of a product or service is offered to the user in conjunction with viewing a video. This is similar to advertising in traditional broadcast video, but with two important differences. The first is that the commerce opportunity is offered concurrently with the viewing of the video. The second is that it is more than an advertisement; if the user selects the opportunity (either interrupting their viewing experience, or after their viewing experience is complete), the user can actually complete a purchase on the spot.
  • The process 600 shown in FIG. 6 a begins with the user viewing a selected video clip. The video index is then consulted to extract the corresponding category information for that clip. The category (for example, ‘sports’) is submitted to the e-commerce server to request a commerce opportunity corresponding to the category (for example, a hockey highlights video for purchase). The commerce server returns the purchase opportunity, typically in the form of a graphic description of the highlights video available for purchase. If the user clicks on the opportunity, the purchase transaction is forwarded to the commerce server for fulfillment. At this point, detailed purchase information is gathered by the commerce server (such as DVD or video tape, billing and shipping information, etc.), and the commerce transaction is completed.
  • The process 620 shown in FIG. 6 b is similar to the process 600 in FIG. 6 a, except that the category information is extracted from the personalization profile(s) of the user. In this case, more than one category selection is usually present. Therefore, the process 620 includes a step to make a single category selection from the plurality of categories present in the personal profile. The selection mechanism can be any of a number of algorithms, including random selection (using a random number generator), cyclic (or ‘round-robin’ selection), or least-recently-used. The selected category is then submitted to the commerce server, and the transaction continues as for FIG. 6 a.
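The three selection mechanisms named here (random, cyclic, and least-recently-used) can each be sketched in a few lines; the state kept for the cyclic and least-recently-used variants is an implementation assumption.

```python
import itertools
import random
import time

def select_random(categories):
    return random.choice(categories)

def make_cyclic_selector(categories):
    """Round-robin: each call to the returned selector yields the next category."""
    cycler = itertools.cycle(categories)
    return lambda: next(cycler)

def select_least_recently_used(categories, last_used):
    """last_used maps category -> timestamp of its last selection (0 if never)."""
    choice = min(categories, key=lambda c: last_used.get(c, 0.0))
    last_used[choice] = time.time()
    return choice

cats = ["sports", "science", "politics"]
next_cat = make_cyclic_selector(cats)
print(next_cat(), next_cat())                 # sports science
print(select_least_recently_used(cats, {}))   # sports (none used yet)
```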
  • FIG. 6 c depicts the combined process 650 that uses the video category information in conjunction with the personal profile information. In this case, the system attempts to make a match between the video category and any of the categories present in the personal profile. If a match is found, the matching category is submitted to the commerce server as before. If no match is found, the selection mechanism (random, cyclic, etc.) is used to select a category, and the process 650 proceeds as before.
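The match-then-fall-back logic of the combined process 650 reduces to a small helper; the fallback argument stands in for any of the selection mechanisms above, and the function name is hypothetical.

```python
import random

def select_commerce_category(video_category, profile_categories,
                             fallback=random.choice):
    """Prefer the video's category when the user's profile also contains it;
    otherwise fall back to a selection over the profile categories."""
    if video_category in profile_categories:
        return video_category
    return fallback(list(profile_categories))

print(select_commerce_category("sports", {"science", "sports"}))    # sports
print(select_commerce_category("sports", {"science", "politics"}))  # science or politics
```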
  • FIGS. 7 a, 7 b and 7 c are flowcharts showing processes using content-based and personalization-based information to deliver a targeted advertisement. FIG. 7 a illustrates a process 700 of making a targeted advertisement available based on the subject information of the video being viewed at that moment by the user. FIG. 7 b illustrates a similar process 720 based on using the personal profile information of the user. FIG. 7 c illustrates a combined process 750 of using both the video content and the personal profile information to make an advertisement available. In each case, an advertisement is offered to the user in conjunction with viewing a video. Typical advertisements can be clicked upon by the user to find out more information, be transported to another website, and potentially make a purchase there.
  • The process 700 shown in FIG. 7 a begins with the user viewing a selected video clip. The video index is then consulted to extract the corresponding category information for that clip. The category (for example, ‘science’) is submitted to the advertising server to request an advertisement corresponding to the category (for example, ‘come learn about space at Space.com’). The ad server returns the advertisement, typically in the form of a clickable banner ad or video clip. If the user clicks on the ad, their browser typically connects to another website pertaining to the advertisement.
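Ad server protocols are vendor-specific; the sketch below assumes a hypothetical HTTP endpoint that accepts a category query parameter and returns a JSON description of a banner, purely to illustrate the request/response flow of process 700.

```python
import json
import urllib.parse
import urllib.request

def request_targeted_ad(category, ad_server="https://ads.example.com/serve"):
    """Ask a (hypothetical) ad server for an ad matching the category.

    The endpoint, query parameter, and JSON response shape are assumptions
    made for illustration; real ad servers each define their own protocol.
    """
    url = ad_server + "?" + urllib.parse.urlencode({"category": category})
    with urllib.request.urlopen(url) as resp:
        ad = json.load(resp)   # e.g. {"banner_url": "...", "click_url": "..."}
    return ad

# Example (would require a live server at the assumed endpoint):
# ad = request_targeted_ad("science")
# print(ad["banner_url"], ad["click_url"])
```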
  • The process 720 shown in FIG. 7 b is similar to the one in FIG. 7 a, except that the category information is extracted from the personalization profile(s) of the user. In this case, more than one category selection is usually present. Therefore, the process 720 includes a step to make a single category selection from the plurality of categories present in the personal profile. The selection mechanism can be any of a number of algorithms, including random selection (using a random number generator), cyclic (or ‘round-robin’ selection), or least-recently-used. The selected category is then submitted to the advertising server, and the process 720 continues as described for FIG. 7 a.
  • FIG. 7 c depicts a combined process 750 that uses the video category information in conjunction with the personal profile information. In this case, the system attempts to make a match between the video category and any of the categories present in the personal profile. If a match is found, the matching category is submitted to the advertising server as before. If no match is found, the selection mechanism (random, cyclic, etc.) is used to select a category, and the process 750 proceeds as before.
  • Embodiments of the system and method may use:
      • video indexing tools to automatically extract textual metadata used in search processes and to generate categories automatically for commerce associations.
      • video indexing tools to carefully place commerce tags as time-stamped elements to be associated with the video content during playback.
      • personalization agents to gather and generate user profiles and demographic information to be consulted by the video server technology.
      • video and search serving technology to exploit the commerce tags in the video content and the personalization profile of the user watching the video to make decisions about which ads, products, and/or services should be presented to the user.
      • the presence of commerce tags in the video stream combined with personalization profiles to allow interaction with viewers in a highly targeted manner, achieving true 1-to-1 marketing and sales across large populations.
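As a rough illustration of the time-stamped commerce tags mentioned in the list above, the sketch below models a tag as a span of video time associated with an offer and looks up the tag valid at the current playback position; the CommerceTag and tag_at names, and the use of seconds for time codes, are assumptions rather than details from the specification.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class CommerceTag:
    """A hypothetical time-stamped commerce tag: valid over a span of the
    video (start/end in seconds) and associated with a product or ad offer."""
    start: float
    end: float
    offer: str


def tag_at(tags: List[CommerceTag], playback_time: float) -> Optional[CommerceTag]:
    """Return the tag whose time span covers the current playback position,
    if any, so the player can surface the associated offer."""
    for tag in tags:
        if tag.start <= playback_time < tag.end:
            return tag
    return None


tags = [
    CommerceTag(0.0, 30.0, "Telescope starter kit"),
    CommerceTag(30.0, 75.0, "Space documentary box set"),
]
current = tag_at(tags, 42.0)
print(current.offer if current else "no offer")   # -> Space documentary box set
```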
  • As described herein, certain embodiments fill the longstanding need in the technology for a system that provides commerce-oriented websites with the capability to achieve their e-commerce goals by exploiting video processing capabilities with rich and interactive media content. While the above detailed description has shown, described, and pointed out novel features of the invention as applied to various embodiments, it will be understood that various omissions, substitutions, and changes in the form and details of the device or process illustrated may be made by those skilled in the art without departing from the intent of the invention. The scope of the invention is indicated by the appended claims rather than by the foregoing description. All changes that come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Claims (21)

1-20. (canceled)
21. A method of delivering targeted advertisement, said method comprising:
determining video content being viewed by a user;
determining a category of at least one of the video content and the user;
accessing an advertisement corresponding to the determined category; and
displaying the accessed advertisement to the user.
22. The method according to claim 21, wherein determining the category further comprises consulting a video index of the video content to extract the category corresponding to the video content being viewed by the user.
23. The method according to claim 22, wherein consulting the video index of the video content further comprises consulting a database on which the video index of the video content is stored.
24. The method according to claim 23, wherein the database also stores a tag correlated to the video content on a time-code basis, wherein the tag is valid for a certain span of time within the video content, wherein the tag is associated with a particular advertisement, wherein consulting the database further comprises consulting the database to extract the tag, and wherein accessing the advertisement further comprises accessing the particular advertisement associated with the extracted tag.
25. The method according to claim 22, further comprising:
accessing a user profile category of the user, wherein the user profile category corresponds to a particular category;
determining whether there is a matching category between the extracted category and the particular category of the user profile category; and
wherein determining the category further comprises determining the category to be the matching category in response to a determination that there is a matching category.
26. The method according to claim 25, wherein determining the category further comprises selecting the category from a plurality of available categories through implementation of a selection technique in response to a determination that there is no matching category.
27. The method according to claim 21, further comprising:
accessing a user profile of the user; and
wherein determining the category further comprises determining the category based upon the user profile of the user.
28. The method according to claim 21, wherein accessing the advertisement corresponding to the determined category further comprises:
submitting the determined category to one of an advertising server and an e-commerce server, wherein the advertising server or the e-commerce server is to select the advertisement from the determined category; and
receiving the selected advertisement from the advertising server or the e-commerce server.
29. The method according to claim 21, wherein displaying the accessed advertisement to the user further comprises displaying the accessed advertisement concurrently with the video content being viewed by the user.
30. The method according to claim 21, further comprising:
receiving a selection of the displayed advertisement; and
directing the user to a server associated with the displayed advertisement.
31. An apparatus for delivering targeted advertising, said apparatus comprising:
a memory on which is stored machine readable instructions to:
determine video content being viewed by a user;
determine a category corresponding to at least one of the video content and the user;
access an advertisement corresponding to the determined category; and
display the accessed advertisement to the user; and
a processor to execute the machine readable instructions.
32. The apparatus according to claim 31, wherein, to determine the category corresponding to at least one of the video content and the user, the machine readable instructions are further to:
determine a tag correlated to the video content on a time-code basis, wherein the tag is valid for a certain span of time within the video content and is associated with a particular advertisement; and
wherein, to access the advertisement, the machine readable instructions are further to access the advertisement that is associated with the determined tag.
33. The apparatus according to claim 31, wherein the machine readable instructions are further to:
access a user profile category of the user, wherein the user profile category corresponds to a particular category;
determine whether there is a matching category between the determined category and the particular category of the user profile category; and
wherein, to determine the category, the machine readable instructions are further to determine the category to be the matching category in response to a determination that there is a matching category.
34. The apparatus according to claim 33, wherein, to determine the category, the machine readable instructions are further to:
select the category from a plurality of available categories through implementation of a selection technique in response to a determination that there is no matching category.
35. The apparatus according to claim 31, wherein the machine readable instructions are further to:
access a user profile of the user; and
wherein, to determine the category, the machine readable instructions are further to determine the category based upon the user profile of the user.
36. The apparatus according to claim 31, wherein the machine readable instructions are further to display the accessed advertisement concurrently with the video content being viewed by the user.
37. The apparatus according to claim 31, wherein the machine readable instructions are further to:
receive a selection of the displayed advertisement; and
direct the user to a server associated with the displayed advertisement.
38. The apparatus according to claim 31, wherein the advertisement comprises at least one of an ad banner, a product offering, and a service offering.
39. A non-transitory computer readable storage medium on which is stored machine readable instructions that when executed by a processor are to:
determine video content being viewed by a user;
consult a database containing information pertaining to at least one of the video content and the user to determine a category corresponding to at least one of the video content and the user;
access an advertisement corresponding to the determined category; and
display the accessed advertisement to the user.
40. The non-transitory computer readable storage medium according to claim 39, wherein the database also has stored thereon a tag correlated to the video content on a time-code basis, wherein the tag is valid for a certain span of time within the video content and is associated with a particular advertisement, and wherein, to access the advertisement, the machine readable instructions are further to:
access the advertisement that is associated with the determined tag.
US13/750,746 2000-04-07 2013-01-25 System and method for applying a database to video multimedia Expired - Fee Related US9338520B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/750,746 US9338520B2 (en) 2000-04-07 2013-01-25 System and method for applying a database to video multimedia

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US19553500P 2000-04-07 2000-04-07
US82850701A 2001-04-06 2001-04-06
US10/872,191 US8171509B1 (en) 2000-04-07 2004-06-18 System and method for applying a database to video multimedia
US13/458,971 US8387087B2 (en) 2000-04-07 2012-04-27 System and method for applying a database to video multimedia
US13/750,746 US9338520B2 (en) 2000-04-07 2013-01-25 System and method for applying a database to video multimedia

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/458,971 Continuation US8387087B2 (en) 2000-04-07 2012-04-27 System and method for applying a database to video multimedia

Publications (2)

Publication Number Publication Date
US20130145388A1 true US20130145388A1 (en) 2013-06-06
US9338520B2 US9338520B2 (en) 2016-05-10

Family

ID=45990983

Family Applications (3)

Application Number Title Priority Date Filing Date
US10/872,191 Active 2026-12-21 US8171509B1 (en) 2000-04-07 2004-06-18 System and method for applying a database to video multimedia
US13/458,971 Expired - Fee Related US8387087B2 (en) 2000-04-07 2012-04-27 System and method for applying a database to video multimedia
US13/750,746 Expired - Fee Related US9338520B2 (en) 2000-04-07 2013-01-25 System and method for applying a database to video multimedia

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US10/872,191 Active 2026-12-21 US8171509B1 (en) 2000-04-07 2004-06-18 System and method for applying a database to video multimedia
US13/458,971 Expired - Fee Related US8387087B2 (en) 2000-04-07 2012-04-27 System and method for applying a database to video multimedia

Country Status (1)

Country Link
US (3) US8171509B1 (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040175036A1 (en) * 1997-12-22 2004-09-09 Ricoh Company, Ltd. Multimedia visualization and integration environment
US20080228581A1 (en) * 2007-03-13 2008-09-18 Tadashi Yonezaki Method and System for a Natural Transition Between Advertisements Associated with Rich Media Content
US20090083417A1 (en) * 2007-09-18 2009-03-26 John Hughes Method and apparatus for tracing users of online video web sites
US20090259552A1 (en) * 2008-04-11 2009-10-15 Tremor Media, Inc. System and method for providing advertisements from multiple ad servers using a failover mechanism
US20110093783A1 (en) * 2009-10-16 2011-04-21 Charles Parra Method and system for linking media components
US20120278169A1 (en) * 2005-11-07 2012-11-01 Tremor Media, Inc Techniques for rendering advertisements with rich media
US20150281782A1 (en) * 2014-03-25 2015-10-01 UXP Systems Inc. System and Method for Creating and Managing Individual Users for Personalized Television on Behalf of Pre-Existing Video Delivery Platforms
US9485316B2 (en) 2008-09-17 2016-11-01 Tubemogul, Inc. Method and apparatus for passively monitoring online video viewing and viewer behavior
US9612995B2 (en) 2008-09-17 2017-04-04 Adobe Systems Incorporated Video viewer targeting based on preference similarity

Families Citing this family (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6792575B1 (en) 1999-10-21 2004-09-14 Equilibrium Technologies Automated processing and delivery of media to web servers
US20100145794A1 (en) * 1999-10-21 2010-06-10 Sean Barnes Barger Media Processing Engine and Ad-Per-View
US7260564B1 (en) 2000-04-07 2007-08-21 Virage, Inc. Network video guide and spidering
US8171509B1 (en) 2000-04-07 2012-05-01 Virage, Inc. System and method for applying a database to video multimedia
US7962948B1 (en) * 2000-04-07 2011-06-14 Virage, Inc. Video-enabled community building
US8205237B2 (en) 2000-09-14 2012-06-19 Cox Ingemar J Identifying works, using a sub-linear time search, such as an approximate nearest neighbor search, for initiating a work-based action, such as an action on the internet
US7865498B2 (en) * 2002-09-23 2011-01-04 Worldwide Broadcast Network, Inc. Broadcast network platform system
US8141111B2 (en) 2005-05-23 2012-03-20 Open Text S.A. Movie advertising playback techniques
US9648281B2 (en) 2005-05-23 2017-05-09 Open Text Sa Ulc System and method for movie segment bookmarking and sharing
EP2309737A1 (en) 2005-05-23 2011-04-13 Thomas S. Gilley Distributed scalable media environment
US8145528B2 (en) 2005-05-23 2012-03-27 Open Text S.A. Movie advertising placement optimization based on behavior and content analysis
US20070157228A1 (en) 2005-12-30 2007-07-05 Jason Bayer Advertising with video ad creatives
US7620551B2 (en) * 2006-07-20 2009-11-17 Mspot, Inc. Method and apparatus for providing search capability and targeted advertising for audio, image, and video content over the internet
US20090094113A1 (en) * 2007-09-07 2009-04-09 Digitalsmiths Corporation Systems and Methods For Using Video Metadata to Associate Advertisements Therewith
US9275056B2 (en) * 2007-12-14 2016-03-01 Amazon Technologies, Inc. System and method of presenting media data
JP5549903B2 (en) * 2008-09-14 2014-07-16 雅英 田中 Content receiving device and distribution device
US9179171B2 (en) * 2011-11-30 2015-11-03 Verizon Patent And Licensing Inc. Content recommendation for a unified catalog
CN102447712B (en) * 2012-01-20 2015-07-08 华为技术有限公司 Method and system for interconnecting nodes in content delivery network (CDN) as well as nodes
US8584156B2 (en) * 2012-03-29 2013-11-12 Sony Corporation Method and apparatus for manipulating content channels
US10331661B2 (en) * 2013-10-23 2019-06-25 At&T Intellectual Property I, L.P. Video content search using captioning data
CN103607647B (en) * 2013-11-05 2018-08-14 Tcl集团股份有限公司 Method, system and advertisement playing device are recommended in the advertisement of multimedia video
WO2015131126A1 (en) 2014-02-27 2015-09-03 Cinsay, Inc. Apparatus and method for gathering analytics
US10728603B2 (en) 2014-03-14 2020-07-28 Aibuy, Inc. Apparatus and method for automatic provisioning of merchandise
US10885570B2 (en) 2014-12-31 2021-01-05 Aibuy, Inc. System and method for managing a product exchange
WO2016109810A1 (en) * 2014-12-31 2016-07-07 Cinsay, Inc. System and method for managing a product exchange
US10162879B2 (en) * 2015-05-08 2018-12-25 Nec Corporation Label filters for large scale multi-label classification
US11455549B2 (en) 2016-12-08 2022-09-27 Disney Enterprises, Inc. Modeling characters that interact with users as part of a character-as-a-service implementation
US10755317B2 (en) * 2017-03-11 2020-08-25 International Business Machines Corporation Managing a set of offers using a dialogue
WO2019111120A1 (en) * 2017-12-04 2019-06-13 Filippi Marta System and method for the production, propagation and management of multimedia contents
WO2019125704A1 (en) 2017-12-20 2019-06-27 Flickray, Inc. Event-driven streaming media interactivity
US11252477B2 (en) 2017-12-20 2022-02-15 Videokawa, Inc. Event-driven streaming media interactivity
US11113884B2 (en) 2018-07-30 2021-09-07 Disney Enterprises, Inc. Techniques for immersive virtual reality experiences
CN113127679A (en) 2019-12-30 2021-07-16 阿里巴巴集团控股有限公司 Video searching method and device and index construction method and device
US12003821B2 (en) * 2020-04-20 2024-06-04 Disney Enterprises, Inc. Techniques for enhanced media experience
US11475668B2 (en) 2020-10-09 2022-10-18 Bank Of America Corporation System and method for automatic video categorization

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020184047A1 (en) * 2001-04-03 2002-12-05 Plotnick Michael A. Universal ad queue
US20050076357A1 (en) * 1999-10-28 2005-04-07 Fenne Adam Michael Dynamic insertion of targeted sponsored video messages into Internet multimedia broadcasts
US20080155616A1 (en) * 1996-10-02 2008-06-26 Logan James D Broadcast program and advertising distribution system
US20110209170A1 (en) * 1995-10-02 2011-08-25 Starsight Telecast, Inc. Systems and methods for contextually linking television program information

Family Cites Families (187)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4115805A (en) 1975-05-23 1978-09-19 Bausch & Lomb Incorporated Image analysis indexing apparatus and methods
JPS5923467B2 (en) 1979-04-16 1984-06-02 株式会社日立製作所 Position detection method
JPS57185777A (en) 1981-05-12 1982-11-16 Fuji Photo Film Co Ltd Electronic camera with electronic memo
JP2684695B2 (en) * 1988-08-05 1997-12-03 キヤノン株式会社 Data recording device
JP3002471B2 (en) 1988-08-19 2000-01-24 株式会社日立製作所 Program distribution device
US5045940A (en) 1989-12-22 1991-09-03 Avid Technology, Inc. Video/audio transmission systsem and method
US5446919A (en) * 1990-02-20 1995-08-29 Wilkins; Jeff K. Communication system and method with demographically or psychographically defined audiences
US5136655A (en) 1990-03-26 1992-08-04 Hewlett-Pacard Company Method and apparatus for indexing and retrieving audio-video data
US5335072A (en) 1990-05-30 1994-08-02 Minolta Camera Kabushiki Kaisha Photographic system capable of storing information on photographed image data
US5307456A (en) 1990-12-04 1994-04-26 Sony Electronics, Inc. Integrated multi-media production and authoring system
GB9100732D0 (en) 1991-01-14 1991-02-27 Xerox Corp A data access system
WO1992022983A2 (en) 1991-06-11 1992-12-23 Browne H Lee Large capacity, random access, multi-source recorder player
US5706290A (en) 1994-12-15 1998-01-06 Shaw; Venson Method and apparatus including system architecture for multimedia communication
US5861881A (en) 1991-11-25 1999-01-19 Actv, Inc. Interactive computer system for providing an interactive presentation with personalized video, audio and graphics responses for multiple viewers
IL100252A0 (en) 1991-12-05 1992-09-06 D S P Group Israel Ltd Video cassette directory
US6400996B1 (en) 1999-02-01 2002-06-04 Steven M. Hoffberg Adaptive pattern recognition based control system and method
US5467288A (en) 1992-04-10 1995-11-14 Avid Technology, Inc. Digital audio workstations providing digital storage and display of video information
US5946445A (en) 1992-04-10 1999-08-31 Avid Technology, Inc. Media recorder for capture and playback of live and prerecorded audio and/or video information
US5566290A (en) 1992-04-29 1996-10-15 Canon Kabushiki Kaisha Multi-media device
US5506644A (en) 1992-08-18 1996-04-09 Olympus Optical Co., Ltd. Camera
US5414808A (en) 1992-12-30 1995-05-09 International Business Machines Corporation Method for accessing and manipulating library video segments
US5692104A (en) 1992-12-31 1997-11-25 Apple Computer, Inc. Method and apparatus for detecting end points of speech activity
US5657077A (en) 1993-02-18 1997-08-12 Deangelis; Douglas J. Event recording system with digital line camera
DE69428039T2 (en) 1993-04-16 2002-06-06 Avid Technology, Inc. METHOD AND DEVICE FOR SYNCHRONIZING AN IMAGE DATA CURRENT WITH A CORRESPONDING AUDIO DATA CURRENT
US5680639A (en) 1993-05-10 1997-10-21 Object Technology Licensing Corp. Multimedia control system
US5551016A (en) 1993-07-01 1996-08-27 Queen's University At Kingston Monitoring system and interface apparatus therefor
US5481296A (en) 1993-08-06 1996-01-02 International Business Machines Corporation Apparatus and method for selectively viewing video information
JP2986345B2 (en) 1993-10-18 1999-12-06 インターナショナル・ビジネス・マシーンズ・コーポレイション Voice recording indexing apparatus and method
JP3528214B2 (en) 1993-10-21 2004-05-17 株式会社日立製作所 Image display method and apparatus
US5889578A (en) 1993-10-26 1999-03-30 Eastman Kodak Company Method and apparatus for using film scanning information to determine the type and category of an image
US5485553A (en) 1993-10-29 1996-01-16 Hewlett-Packard Company Method and apparatus for managing and initiating video capture and printing
KR960003651B1 (en) 1993-12-24 1996-03-21 재단법인 한국전자통신연구소 Multi-board circuit for high speed local bus
US5701153A (en) 1994-01-14 1997-12-23 Legal Video Services, Inc. Method and system using time information in textual representations of speech for correlation to a second representation of that speech
US5508940A (en) 1994-02-14 1996-04-16 Sony Corporation Of Japan And Sony Electronics, Inc. Random access audio/video processor with multiple outputs
CA2140850C (en) 1994-02-24 1999-09-21 Howard Paul Katseff Networked system for display of multimedia presentations
US5606655A (en) 1994-03-31 1997-02-25 Siemens Corporate Research, Inc. Method for representing contents of a single video shot using frames
US5521841A (en) 1994-03-31 1996-05-28 Siemens Corporate Research, Inc. Browsing contents of a given video sequence
JP3356537B2 (en) 1994-04-15 2002-12-16 富士写真フイルム株式会社 Camera system and image file system
JP3277679B2 (en) 1994-04-15 2002-04-22 ソニー株式会社 High efficiency coding method, high efficiency coding apparatus, high efficiency decoding method, and high efficiency decoding apparatus
US5613032A (en) 1994-09-02 1997-03-18 Bell Communications Research, Inc. System and method for recording, playing back and searching multimedia events wherein video, audio and text can be searched and retrieved
US5835667A (en) 1994-10-14 1998-11-10 Carnegie Mellon University Method and apparatus for creating a searchable digital video library and a system and method of using such a library
US5664227A (en) 1994-10-14 1997-09-02 Carnegie Mellon University System and method for skimming digital audio/video data
US5926205A (en) 1994-10-19 1999-07-20 Imedia Corporation Method and apparatus for encoding and formatting data representing a video program to provide multiple overlapping presentations of the video program
WO1996017313A1 (en) 1994-11-18 1996-06-06 Oracle Corporation Method and apparatus for indexing multimedia information streams
US5574845A (en) 1994-11-29 1996-11-12 Siemens Corporate Research, Inc. Method and apparatus video data management
US5774170A (en) 1994-12-13 1998-06-30 Hite; Kenneth C. System and method for delivering targeted advertisements to consumers
US5826102A (en) 1994-12-22 1998-10-20 Bell Atlantic Network Services, Inc. Network arrangement for development delivery and presentation of multimedia applications using timelines to integrate multimedia objects and program objects
US5485611A (en) 1994-12-30 1996-01-16 Intel Corporation Video database indexing and method of presenting video database index to a user
US5729279A (en) 1995-01-26 1998-03-17 Spectravision, Inc. Video distribution system
US5642285A (en) 1995-01-31 1997-06-24 Trimble Navigation Limited Outdoor movie camera GPS-position and time code data-logging for special effects production
US5557320A (en) 1995-01-31 1996-09-17 Krebs; Mark Video mail delivery system
US5872865A (en) 1995-02-08 1999-02-16 Apple Computer, Inc. Method and system for automatic classification of video images
US6111604A (en) 1995-02-21 2000-08-29 Ricoh Company, Ltd. Digital camera which detects a connection to an external device
JPH08249348A (en) 1995-03-13 1996-09-27 Hitachi Ltd Method and device for video retrieval
JP3507176B2 (en) 1995-03-20 2004-03-15 富士通株式会社 Multimedia system dynamic interlocking method
US5930446A (en) 1995-04-08 1999-07-27 Sony Corporation Edition system
US5706457A (en) 1995-06-07 1998-01-06 Hughes Electronics Image display and archiving system and method
US5930493A (en) 1995-06-07 1999-07-27 International Business Machines Corporation Multimedia server system and method for communicating multimedia information
US6009507A (en) 1995-06-14 1999-12-28 Avid Technology, Inc. System and method for distributing processing among one or more processors
US5898441A (en) 1995-06-16 1999-04-27 International Business Machines Corporation Method and apparatus for integrating video capture and monitor
US6112226A (en) 1995-07-14 2000-08-29 Oracle Corporation Method and apparatus for concurrently encoding and tagging digital information for allowing non-sequential access during playback
US6119154A (en) 1995-07-14 2000-09-12 Oracle Corporation Method and apparatus for non-sequential access to an in-progress video feed
US6505160B1 (en) 1995-07-27 2003-01-07 Digimarc Corporation Connected audio and other media objects
JP3471526B2 (en) 1995-07-28 2003-12-02 松下電器産業株式会社 Information provision device
JPH11512903A (en) 1995-09-29 1999-11-02 ボストン テクノロジー インク Multimedia architecture for interactive advertising
US5767893A (en) 1995-10-11 1998-06-16 International Business Machines Corporation Method and apparatus for content based downloading of video programs
US5751280A (en) 1995-12-11 1998-05-12 Silicon Graphics, Inc. System and method for media stream synchronization with a base atom index file and an auxiliary atom index file
JPH09163299A (en) 1995-12-13 1997-06-20 Sony Corp Broadcast signal recording device and method
US5633678A (en) 1995-12-20 1997-05-27 Eastman Kodak Company Electronic still camera for capturing and categorizing images
US5794249A (en) 1995-12-21 1998-08-11 Hewlett-Packard Company Audio/video retrieval system that uses keyword indexing of digital recordings to display a list of the recorded text files, keywords and time stamps associated with the system
US5884056A (en) 1995-12-28 1999-03-16 International Business Machines Corporation Method and system for video browsing on the world wide web
GB2312582A (en) 1996-01-19 1997-10-29 Orad Hi Tech Systems Limited Insertion of virtual objects into a video sequence
US5838314A (en) * 1996-02-21 1998-11-17 Message Partners Digital video services system with optional interactive advertisement capabilities
JP3493872B2 (en) 1996-02-29 2004-02-03 ソニー株式会社 Image data processing method and apparatus
US6061056A (en) 1996-03-04 2000-05-09 Telexis Corporation Television monitoring system with automatic selection of program material of interest and subsequent display under user control
US5778181A (en) 1996-03-08 1998-07-07 Actv, Inc. Enhanced video programming system and method for incorporating and displaying retrieved integrated internet information segments
US6018768A (en) 1996-03-08 2000-01-25 Actv, Inc. Enhanced video programming system and method for incorporating and displaying retrieved integrated internet information segments
US5774664A (en) 1996-03-08 1998-06-30 Actv, Inc. Enhanced video programming system and method for incorporating and displaying retrieved integrated internet information segments
JPH09271002A (en) 1996-03-29 1997-10-14 Mitsubishi Electric Corp Video data distribution system
US5893095A (en) 1996-03-29 1999-04-06 Virage, Inc. Similarity engine for content-based retrieval of images
US5983237A (en) 1996-03-29 1999-11-09 Virage, Inc. Visual dictionary
US5913205A (en) 1996-03-29 1999-06-15 Virage, Inc. Query optimization for visual information retrieval system
US5915250A (en) 1996-03-29 1999-06-22 Virage, Inc. Threshold-based comparison
US5918012A (en) 1996-03-29 1999-06-29 British Telecommunications Public Limited Company Hyperlinking time-based data files
US5911139A (en) 1996-03-29 1999-06-08 Virage, Inc. Visual image database search engine which allows for different schema
JPH09282849A (en) 1996-04-08 1997-10-31 Pioneer Electron Corp Information-recording medium and recording apparatus and reproducing apparatus therefor
US5852435A (en) 1996-04-12 1998-12-22 Avid Technology, Inc. Digital multimedia editing and data management system
JPH1056609A (en) 1996-04-15 1998-02-24 Canon Inc Image recording method, communication method, image recording device, communication equipment and medium
US5870754A (en) 1996-04-25 1999-02-09 Philips Electronics North America Corporation Video retrieval of MPEG compressed sequences using DC and motion signatures
US5740388A (en) 1996-05-10 1998-04-14 Custom Communications, Inc. Apparatus for creating individually customized videos
US6370543B2 (en) 1996-05-24 2002-04-09 Magnifi, Inc. Display of media previews
US5983176A (en) 1996-05-24 1999-11-09 Magnifi, Inc. Evaluation of media content in media files
US5903892A (en) 1996-05-24 1999-05-11 Magnifi, Inc. Indexing of media content on a network
US6065050A (en) 1996-06-05 2000-05-16 Sun Microsystems, Inc. System and method for indexing between trick play and normal play video streams in a video delivery system
US8107015B1 (en) 1996-06-07 2012-01-31 Virage, Incorporated Key frame selection
US5903261A (en) 1996-06-20 1999-05-11 Data Translation, Inc. Computer based video system
US5953005A (en) 1996-06-28 1999-09-14 Sun Microsystems, Inc. System and method for on-line multimedia access
PT932398E (en) 1996-06-28 2006-09-29 Ortho Mcneil Pharm Inc USE OF THE SURFACE OR ITS DERIVATIVES FOR THE PRODUCTION OF A MEDICINAL PRODUCT FOR THE TREATMENT OF MANIAC-DEPRESSIVE BIPOLAR DISTURBLES
US5813014A (en) 1996-07-10 1998-09-22 Survivors Of The Shoah Visual History Foundation Method and apparatus for management of multimedia assets
US5969716A (en) 1996-08-06 1999-10-19 Interval Research Corporation Time-based media processing system
US5890175A (en) 1996-09-25 1999-03-30 Wong; Garland Dynamic generation and display of catalogs
US5828809A (en) 1996-10-01 1998-10-27 Matsushita Electric Industrial Co., Ltd. Method and apparatus for extracting indexing information from digital video data
WO1998015887A2 (en) 1996-10-09 1998-04-16 Starguide Digital Networks Aggregate information production and display system
US5974572A (en) 1996-10-15 1999-10-26 Mercury Interactive Corporation Software system and methods for generating a load test using a server access log
US5774666A (en) 1996-10-18 1998-06-30 Silicon Graphics, Inc. System and method for displaying uniform network resource locators embedded in time-based medium
US5917958A (en) 1996-10-31 1999-06-29 Sensormatic Electronics Corporation Distributed video data base with remote searching for image data features
US5872565A (en) 1996-11-26 1999-02-16 Play, Inc. Real-time video processing system
JP2001507541A (en) 1996-12-30 2001-06-05 シャープ株式会社 Sprite-based video coding system
US5901245A (en) 1997-01-23 1999-05-04 Eastman Kodak Company Method and system for detection and characterization of open space in digital images
US6006241A (en) 1997-03-14 1999-12-21 Microsoft Corporation Production of a video stream with synchronized annotations over a computer network
US5875446A (en) 1997-02-24 1999-02-23 International Business Machines Corporation System and method for hierarchically grouping and ranking a set of objects in a query context based on one or more relationships
US6211869B1 (en) 1997-04-04 2001-04-03 Avid Technology, Inc. Simultaneous storage and network transmission of multimedia data with video host that requests stored data according to response time from a server
US6735253B1 (en) * 1997-05-16 2004-05-11 The Trustees Of Columbia University In The City Of New York Methods and architecture for indexing and editing compressed video over the world wide web
US5987454A (en) 1997-06-09 1999-11-16 Hobbs; Allen Method and apparatus for selectively augmenting retrieved text, numbers, maps, charts, still pictures and/or graphics, moving pictures and/or graphics and audio information from a network resource
US5920856A (en) 1997-06-09 1999-07-06 Xerox Corporation System for selecting multimedia databases over networks
US20030040962A1 (en) * 1997-06-12 2003-02-27 Lewis William H. System and data management and on-demand rental and purchase of digital data products
US6285788B1 (en) 1997-06-13 2001-09-04 Sharp Laboratories Of America, Inc. Method for fast return of abstracted images from a digital image database
US5864823A (en) 1997-06-25 1999-01-26 Virtel Corporation Integrated virtual telecommunication system for E-commerce
US6317885B1 (en) 1997-06-26 2001-11-13 Microsoft Corporation Interactive entertainment and information system using television set-top box
US6169573B1 (en) 1997-07-03 2001-01-02 Hotv, Inc. Hypervideo system and method with object tracking in a compressed digital video environment
US6167404A (en) 1997-07-31 2000-12-26 Avid Technology, Inc. Multimedia plug-in using dynamic objects
US6014183A (en) 1997-08-06 2000-01-11 Imagine Products, Inc. Method and apparatus for detecting scene changes in a digital video stream
AU8697598A (en) 1997-08-08 1999-03-01 Pics Previews, Inc. Digital department system
US7295752B1 (en) 1997-08-14 2007-11-13 Virage, Inc. Video cataloger system with audio track extraction
US6567980B1 (en) 1997-08-14 2003-05-20 Virage, Inc. Video cataloger system with hyperlinked output
US6360234B2 (en) 1997-08-14 2002-03-19 Virage, Inc. Video cataloger system with synchronized encoders
US6463444B1 (en) 1997-08-14 2002-10-08 Virage, Inc. Video cataloger system with extensibility
AUPO918697A0 (en) 1997-09-15 1997-10-09 Canon Information Systems Research Australia Pty Ltd Enhanced information gathering apparatus and method
AUPO960197A0 (en) 1997-10-03 1997-10-30 Canon Information Systems Research Australia Pty Ltd Multi-media editing method and apparatus
US5990955A (en) 1997-10-03 1999-11-23 Innovacom Inc. Dual encoding/compression method and system for picture quality/data density enhancement
US6009410A (en) 1997-10-16 1999-12-28 At&T Corporation Method and system for presenting customized advertising to a user on the world wide web
US5969772A (en) 1997-10-30 1999-10-19 Nec Corporation Detection of moving objects in video data by block matching to derive a region motion vector
US6571054B1 (en) 1997-11-10 2003-05-27 Nippon Telegraph And Telephone Corporation Method for creating and utilizing electronic image book and recording medium having recorded therein a program for implementing the method
JP3613543B2 (en) 1997-11-11 2005-01-26 株式会社日立国際電気 Video editing device
US6170065B1 (en) 1997-11-14 2001-01-02 E-Parcel, Llc Automatic system for dynamic diagnosis and repair of computer configurations
US6119123A (en) 1997-12-02 2000-09-12 U.S. Philips Corporation Apparatus and method for optimizing keyframe and blob retrieval and storage
US6363380B1 (en) 1998-01-13 2002-03-26 U.S. Philips Corporation Multimedia computer system with story segmentation capability and operating program therefor including finite automation video parser
US6134243A (en) 1998-01-15 2000-10-17 Apple Computer, Inc. Method and apparatus for media data transmission
US6084595A (en) 1998-02-24 2000-07-04 Virage, Inc. Indexing method for image search engine
US6173287B1 (en) 1998-03-11 2001-01-09 Digital Equipment Corporation Technique for ranking multimedia annotations of interest
US6289346B1 (en) 1998-03-12 2001-09-11 At&T Corp. Apparatus and method for a bookmarking system
IL123819A (en) 1998-03-24 2001-09-13 Geo Interactive Media Group Lt Network media streaming
US6459427B1 (en) 1998-04-01 2002-10-01 Liberate Technologies Apparatus and method for web-casting over digital broadcast TV network
US6006265A (en) 1998-04-02 1999-12-21 Hotv, Inc. Hyperlinks resolution at and by a special network server in order to enable diverse sophisticated hyperlinking upon a digital network
EP1004200B8 (en) 1998-05-22 2005-12-28 Koninklijke Philips Electronics N.V. Recording arrangement having keyword detection means
US6498897B1 (en) 1998-05-27 2002-12-24 Kasenna, Inc. Media server system and method having improved asset types for playback of digital media
US6154771A (en) 1998-06-01 2000-11-28 Mediastra, Inc. Real-time receipt, decompression and play of compressed streaming video/hypervideo; with thumbnail display of past scenes and with replay, hyperlinking and/or recording permissively intiated retrospectively
US6698020B1 (en) * 1998-06-15 2004-02-24 Webtv Networks, Inc. Techniques for intelligent video ad insertion
US20010012062A1 (en) 1998-07-23 2001-08-09 Eric C. Anderson System and method for automatic analysis and categorization of images in an electronic imaging device
KR100636910B1 (en) 1998-07-28 2007-01-31 엘지전자 주식회사 Video Search System
US6295092B1 (en) * 1998-07-30 2001-09-25 Cbs Corporation System for analyzing television programs
US6233389B1 (en) 1998-07-30 2001-05-15 Tivo, Inc. Multimedia time warping system
US6324338B1 (en) 1998-08-07 2001-11-27 Replaytv, Inc. Video data recorder with integrated channel guides
US20020054752A1 (en) 1998-08-07 2002-05-09 Anthony Wood Video data recorder with personal channels
US6833865B1 (en) * 1998-09-01 2004-12-21 Virage, Inc. Embedded metadata engines in digital capture devices
US6457010B1 (en) 1998-12-03 2002-09-24 Expanse Networks, Inc. Client-server based subscriber characterization system
JP3235660B2 (en) 1998-12-24 2001-12-04 日本電気株式会社 Information retrieval apparatus and method, and storage medium storing information retrieval program
US7209942B1 (en) * 1998-12-28 2007-04-24 Kabushiki Kaisha Toshiba Information providing method and apparatus, and information reception apparatus
US6473804B1 (en) 1999-01-15 2002-10-29 Grischa Corporation System for indexical triggers in enhanced video productions by redirecting request to newly generated URI based on extracted parameter of first URI
US6185527B1 (en) 1999-01-19 2001-02-06 International Business Machines Corporation System and method for automatic audio content analysis for word spotting, indexing, classification and retrieval
US6236395B1 (en) 1999-02-01 2001-05-22 Sharp Laboratories Of America, Inc. Audiovisual information management system
US6462754B1 (en) 1999-02-22 2002-10-08 Siemens Corporate Research, Inc. Method and apparatus for authoring and linking video documents
US6462778B1 (en) 1999-02-26 2002-10-08 Sony Corporation Methods and apparatus for associating descriptive data with digital image files
US6480853B1 (en) 1999-03-08 2002-11-12 Ericsson Inc. Systems, methods and computer program products for performing internet searches utilizing bookmarks
US7017173B1 (en) * 1999-03-30 2006-03-21 Sedna Patent Services, Llc System enabling user access to secondary content associated with a primary content stream
US6240553B1 (en) 1999-03-31 2001-05-29 Diva Systems Corporation Method for providing scalable in-band and out-of-band access within a video-on-demand environment
US6654030B1 (en) 1999-03-31 2003-11-25 Canon Kabushiki Kaisha Time marker for synchronized multimedia
US6834083B1 (en) 1999-04-16 2004-12-21 Sony Corporation Data transmitting method and data transmitter
US6801576B1 (en) 1999-08-06 2004-10-05 Loudeye Corp. System for accessing, distributing and maintaining video content over public and private internet protocol networks
US6795863B1 (en) 1999-08-10 2004-09-21 Intline.Com, Inc. System, device and method for combining streaming video with e-mail
US6774926B1 (en) 1999-09-03 2004-08-10 United Video Properties, Inc. Personal television channel system
US7424678B2 (en) * 1999-09-16 2008-09-09 Sharp Laboratories Of America, Inc. Audiovisual information management system with advertising
US6601026B2 (en) 1999-09-17 2003-07-29 Discern Communications, Inc. Information retrieval by natural language querying
US6813384B1 (en) 1999-11-10 2004-11-02 Intel Corporation Indexing wavelet compressed video for efficient data handling
US6963867B2 (en) 1999-12-08 2005-11-08 A9.Com, Inc. Search query processing to provide category-ranked presentation of search results
WO2001047273A1 (en) * 1999-12-21 2001-06-28 Tivo, Inc. Intelligent system and methods of recommending media content items based on user preferences
US7028071B1 (en) 2000-01-28 2006-04-11 Bycast Inc. Content distribution system for generating content streams to suit different users and facilitating e-commerce transactions using broadcast content metadata
US6675174B1 (en) 2000-02-02 2004-01-06 International Business Machines Corp. System and method for measuring similarity between a set of known temporal media segments and a one or more temporal media streams
US6484148B1 (en) * 2000-02-19 2002-11-19 John E. Boyd Electronic advertising device and method of using the same
WO2001067772A2 (en) 2000-03-09 2001-09-13 Videoshare, Inc. Sharing a streaming video
US7222163B1 (en) 2000-04-07 2007-05-22 Virage, Inc. System and method for hosting of video content over a network
US8171509B1 (en) 2000-04-07 2012-05-01 Virage, Inc. System and method for applying a database to video multimedia
US7962948B1 (en) 2000-04-07 2011-06-14 Virage, Inc. Video-enabled community building
US7260564B1 (en) 2000-04-07 2007-08-21 Virage, Inc. Network video guide and spidering
US6760749B1 (en) 2000-05-10 2004-07-06 Polycom, Inc. Interactive conference content distribution device and methods of use thereof
US20050198006A1 (en) 2004-02-24 2005-09-08 Dna13 Inc. System and method for real-time media searching and alerting
US20050234985A1 (en) 2004-04-09 2005-10-20 Nexjenn Media, Inc. System, method and computer program product for extracting metadata faster than real-time

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110209170A1 (en) * 1995-10-02 2011-08-25 Starsight Telecast, Inc. Systems and methods for contextually linking television program information
US20080155616A1 (en) * 1996-10-02 2008-06-26 Logan James D Broadcast program and advertising distribution system
US20050076357A1 (en) * 1999-10-28 2005-04-07 Fenne Adam Michael Dynamic insertion of targeted sponsored video messages into Internet multimedia broadcasts
US20020184047A1 (en) * 2001-04-03 2002-12-05 Plotnick Michael A. Universal ad queue

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8995767B2 (en) * 1997-12-22 2015-03-31 Ricoh Company, Ltd. Multimedia visualization and integration environment
US20040175036A1 (en) * 1997-12-22 2004-09-09 Ricoh Company, Ltd. Multimedia visualization and integration environment
US20120278169A1 (en) * 2005-11-07 2012-11-01 Tremor Media, Inc Techniques for rendering advertisements with rich media
US9563826B2 (en) * 2005-11-07 2017-02-07 Tremor Video, Inc. Techniques for rendering advertisements with rich media
US20080228581A1 (en) * 2007-03-13 2008-09-18 Tadashi Yonezaki Method and System for a Natural Transition Between Advertisements Associated with Rich Media Content
US10270870B2 (en) 2007-09-18 2019-04-23 Adobe Inc. Passively monitoring online video viewing and viewer behavior
US8577996B2 (en) 2007-09-18 2013-11-05 Tremor Video, Inc. Method and apparatus for tracing users of online video web sites
US20090083417A1 (en) * 2007-09-18 2009-03-26 John Hughes Method and apparatus for tracing users of online video web sites
US20090259552A1 (en) * 2008-04-11 2009-10-15 Tremor Media, Inc. System and method for providing advertisements from multiple ad servers using a failover mechanism
US10462504B2 (en) 2008-09-17 2019-10-29 Adobe Inc. Targeting videos based on viewer similarity
US9967603B2 (en) 2008-09-17 2018-05-08 Adobe Systems Incorporated Video viewer targeting based on preference similarity
US9485316B2 (en) 2008-09-17 2016-11-01 Tubemogul, Inc. Method and apparatus for passively monitoring online video viewing and viewer behavior
US9612995B2 (en) 2008-09-17 2017-04-04 Adobe Systems Incorporated Video viewer targeting based on preference similarity
US9781221B2 (en) 2008-09-17 2017-10-03 Adobe Systems Incorporated Method and apparatus for passively monitoring online video viewing and viewer behavior
US20110093783A1 (en) * 2009-10-16 2011-04-21 Charles Parra Method and system for linking media components
US9357265B2 (en) * 2014-03-25 2016-05-31 UXP Systems Inc. System and method for creating and managing individual users for personalized television on behalf of pre-existing video delivery platforms
US20150281782A1 (en) * 2014-03-25 2015-10-01 UXP Systems Inc. System and Method for Creating and Managing Individual Users for Personalized Television on Behalf of Pre-Existing Video Delivery Platforms

Also Published As

Publication number Publication date
US20120215629A1 (en) 2012-08-23
US8387087B2 (en) 2013-02-26
US8171509B1 (en) 2012-05-01
US9338520B2 (en) 2016-05-10

Similar Documents

Publication Publication Date Title
US9338520B2 (en) System and method for applying a database to video multimedia
US11477539B2 (en) Systems and methods for generating media content using microtrends
US9923947B2 (en) Method and system for providing media programming
JP3103070B2 (en) How to customize a tour dynamically
US8495694B2 (en) Video-enabled community building
US6523022B1 (en) Method and apparatus for selectively augmenting retrieved information from a network resource
US8548978B2 (en) Network video guide and spidering
US8180674B2 (en) Targeting of advertisements based on mutual information sharing between devices over a network
US20060173825A1 (en) Systems and methods to provide internet search/play media services
US20110191321A1 (en) Contextual display advertisements for a webpage
KR20010023562A (en) Automated content scheduler and displayer
US20020138331A1 (en) Method and system for web page personalization
CN1610915A (en) Specific internet user target advertising replacement method and system
US20160373513A1 (en) Systems and methods for integrating xml syndication feeds into online advertisement
US20040025191A1 (en) System and method for creating and presenting content packages
WO2008136630A1 (en) System and method for providing multimedia contents
WO2001082621A1 (en) Media and information display systems and methods
JP2002288187A (en) Systems and methods for information accumulation, information providing and electronic mail distribution, methods for information accumulation, information providing and electronic mail distriibution, and recording medium with information processing program reorded thereon
JP2001249927A (en) Advertisement providing method and advertisement providing system
WO2001098911A1 (en) System and method for communicating a variety of multi-media works over a computer network based on user selection
WO2001006380A1 (en) Internet-based multi-media presentation system for customized information
KR20010096398A (en) Advertising system and advertising service method using multimedia broadcasting in communication network
WO2006008719A2 (en) Systems and methods to provide internet search/play media services
KR20020011292A (en) Method Of Information Service Using Mouse Pointer
MXPA00002208A (en) Automated content scheduler and displayer

Legal Events

Date Code Title Description
AS Assignment

Owner name: VIRAGE, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GIROUARD, DAVID;HOROWITZ, BRADLEY;HUMPHREY, RICHARD;AND OTHERS;SIGNING DATES FROM 20010712 TO 20010731;REEL/FRAME:032137/0094

AS Assignment

Owner name: HEWLETT-PACKARD COMPANY, CALIFORNIA

Free format text: MERGER;ASSIGNOR:VIRAGE, INC.;REEL/FRAME:036002/0930

Effective date: 20141101

AS Assignment

Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HEWLETT-PACKARD COMPANY;REEL/FRAME:036737/0587

Effective date: 20150929

AS Assignment

Owner name: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP, TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.;REEL/FRAME:037079/0001

Effective date: 20151027

ZAAA Notice of allowance and fees due

Free format text: ORIGINAL CODE: NOA

ZAAB Notice of allowance mailed

Free format text: ORIGINAL CODE: MN/=.

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: VIDEOLABS, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP;HEWLETT PACKARD ENTERPRISE COMPANY;REEL/FRAME:050910/0061

Effective date: 20190905

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: VL COLLECTIVE IP LLC, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:VL IP HOLDINGS LLC;REEL/FRAME:051392/0412

Effective date: 20191219

Owner name: VL IP HOLDINGS LLC, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:VIDEOLABS, INC.;REEL/FRAME:051391/0919

Effective date: 20191219

AS Assignment

Owner name: PRAETOR FUND I, A SUB-FUND OF PRAETORIUM FUND I ICAV, NEW YORK

Free format text: SECURITY INTEREST;ASSIGNOR:VL COLLECTIVE IP LLC;REEL/FRAME:051748/0267

Effective date: 20191204

AS Assignment

Owner name: PRAETOR FUND I, A SUB-FUND OF PRAETORIUM FUND I ICAV, NEW YORK

Free format text: SECURITY INTEREST;ASSIGNOR:VL COLLECTIVE IP LLC;REEL/FRAME:052272/0435

Effective date: 20200324

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20200510

FEPP Fee payment procedure

Free format text: PETITION RELATED TO MAINTENANCE FEES FILED (ORIGINAL EVENT CODE: PMFP); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: PETITION RELATED TO MAINTENANCE FEES GRANTED (ORIGINAL EVENT CODE: PMFG); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

PRDP Patent reinstated due to the acceptance of a late maintenance fee

Effective date: 20210524

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: VL COLLECTIVE IP LLC, CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:PRAETOR FUND I, A SUB-FUND OF PRAETORIUM FUND I ICAV;REEL/FRAME:062977/0325

Effective date: 20221228

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20240510