US20160171548A1 - Method for identifying advertisements for placement in multimedia content elements - Google Patents

Method for identifying advertisements for placement in multimedia content elements Download PDF

Info

Publication number
US20160171548A1
US20160171548A1 (Application US15/019,223; US201615019223A)
Authority
US
United States
Prior art keywords
multimedia content
signature
content element
advertisement
web
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/019,223
Inventor
Igal RAICHELGAUZ
Karina ODINAEV
Yehoshua Y. Zeevi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cortica Ltd
Original Assignee
Cortica Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from IL173409A external-priority patent/IL173409A0/en
Priority claimed from PCT/IL2006/001235 external-priority patent/WO2007049282A2/en
Priority claimed from IL185414A external-priority patent/IL185414A0/en
Priority claimed from US12/195,863 external-priority patent/US8326775B2/en
Priority claimed from US13/624,397 external-priority patent/US9191626B2/en
Application filed by Cortica Ltd filed Critical Cortica Ltd
Priority to US15/019,223 priority Critical patent/US20160171548A1/en
Publication of US20160171548A1 publication Critical patent/US20160171548A1/en
Assigned to CORTICA LTD reassignment CORTICA LTD ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ODINAEV, KARINA, RAICHELGAUZ, IGAL, ZEEVI, YEHOSHUA Y
Priority to US16/783,187 priority patent/US20200175550A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 - Commerce
    • G06Q30/02 - Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241 - Advertisements
    • G06Q30/0251 - Targeted advertisements
    • G06Q30/0261 - Targeted advertisements based on user location
    • G06Q30/0242 - Determining effectiveness of advertisements
    • G06Q30/0246 - Traffic
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 - Computing arrangements using knowledge-based models
    • G06N5/02 - Knowledge representation; Symbolic representation
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04H - BROADCAST COMMUNICATION
    • H04H20/00 - Arrangements for broadcast or for distribution combined with broadcast
    • H04H20/10 - Arrangements for replacing or switching information during the broadcast or the distribution
    • H04H20/103 - Transmitter-side switching
    • H04H60/00 - Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
    • H04H60/35 - Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users
    • H04H60/37 - Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users, for identifying segments of broadcast information, e.g. scenes or extracting programme ID
    • H04H60/56 - Arrangements characterised by components specially adapted for monitoring, identification or recognition covered by groups H04H60/29-H04H60/54
    • H04H60/59 - Arrangements characterised by components specially adapted for monitoring, identification or recognition covered by groups H04H60/29-H04H60/54, of video
    • H04H60/61 - Arrangements for services using the result of monitoring, identification or recognition covered by groups H04H60/29-H04H60/54
    • H04H60/66 - Arrangements for services using the result of monitoring, identification or recognition covered by groups H04H60/29-H04H60/54, for using the result on distributors' side
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 - Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25 - Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258 - Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25866 - Management of end-user data
    • H04N21/25891 - Management of end-user data being end-user preferences
    • H04N21/266 - Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
    • H04N21/2668 - Creating a channel for a dedicated end-user group, e.g. insertion of targeted commercials based on end-user profiles
    • H04N21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466 - Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/80 - Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81 - Monomedia components thereof
    • H04N21/8106 - Monomedia components thereof involving special audio data, e.g. different tracks for different languages
    • H04N7/00 - Television systems
    • H04N7/16 - Analogue secrecy systems; Analogue subscription systems
    • H04N7/173 - Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal
    • H04N7/17309 - Transmission or handling of upstream communications
    • H04N7/17318 - Direct or substantially direct transmission and handling of requests

Definitions

  • the present disclosure relates generally to the analysis of multimedia content, and more specifically to a method for determining an area within multimedia content over which an advertisement can be displayed.
  • the Internet, also referred to as the worldwide web (WWW), has become a mass medium whereby the content presentation is largely supported by paid advertisements that are added to the web-page content.
  • advertisements are displayed using portions of code written in, for example, hyper-text mark-up language (HTML) or JavaScript that is inserted into, or otherwise called up by HTML documents (web-pages).
  • a web-page typically contains text and multimedia elements, such as images, video clips, audio clips, and the like, that are rendered and displayed by a web browser on a display device.
  • banner advertisements are generally images or animations that are displayed within a web-page. Other advertisements are simply inserted at various locations within the display area of the HTML document forming the web-page. A typical web-page is cluttered with many advertisement banners, which frequently are irrelevant to the content being displayed in the web-page. As a result, the user's attention is not given to the advertised content. Consequently, the price for advertising in a potentially valuable area within a web-page is low because its respective effectiveness is low.
  • Certain embodiments disclosed herein include a method for identifying advertisements for display in a multimedia content element.
  • the method includes: partitioning the multimedia content element into a predefined number of portions; generating at least one signature for each portion of the multimedia content element; analyzing the at least one signature generated for each portion of the multimedia content element; identifying at least one attractive advertising area within the multimedia content element based on the signature analysis; and identifying at least one matching advertisement for the multimedia content element based on the generated at least one signature, wherein the at least one matching advertisement fits within the identified at least one attractive advertising area.
  • Certain embodiments disclosed herein also include a system for identifying advertisements for display in a multimedia content element.
  • the system includes: a processing system; and a memory, the memory containing instructions that, when executed by the processing unit, configure the system to: partition the multimedia content element into a predefined number of portions; generate at least one signature for each portion of the multimedia content element; analyze the at least one signature generated for each portion of the multimedia content element; identify at least one attractive advertising area within the multimedia content element based on the signature analysis; and identify at least one matching advertisement for the multimedia content element based on the generated at least one signature, wherein the at least one matching advertisement fits within the identified at least one attractive advertising area.
  • FIG. 1 is a schematic block diagram of a network system utilized to describe various embodiments of the invention.
  • FIG. 2 is a flowchart describing a process of matching an advertisement to multimedia content displayed on a web-page.
  • FIG. 3 is a block diagram depicting a basic flow of information in the signature generator system.
  • FIG. 4 is a diagram showing a flow of patches generation, response vector generation, and signature generation in a large-scale speech-to-text system.
  • FIG. 5 is a flowchart describing a method for determining an area within the multimedia content of which an advertisement can be displayed according to one embodiment.
  • FIGS. 6 and 7 are screenshots of images showing an area within the image selected for the display of an advertisement according to an embodiment.
  • the disclosed techniques are based on a system designed to allow matching at least an appropriate advertisement that is relevant to a multimedia content displayed in a web-page, and analyzing the multimedia content displayed on the web-page accordingly. Based on the analysis results, for one or more multimedia content elements included in the web-page, one or more matching signatures are generated. The signatures are utilized to search for appropriate advertisements to be displayed in the web-page.
  • an advertisement is matched to a multimedia element displayed in a web-page, based on the content of the element. Furthermore, the disclosed embodiment determines the most attractive area within the multimedia element in which the advertisement can be displayed in order to attract the viewer's attention.
  • FIG. 1 shows an exemplary and non-limiting schematic diagram of a network system 100 utilized to describe the disclosed embodiments.
  • a network 110 is used to communicate between different parts of the system.
  • the network 110 may be the Internet, the world-wide-web (WWW), a local area network (LAN), a wide area network (WAN), a metro area network (MAN), and other networks capable of enabling communication between the elements of the system 100 .
  • a web browser 120 is executed over a computing device including, for example, a personal computer (PC), a personal digital assistant (PDA), a mobile phone, a tablet computer, and other kinds of wired and mobile appliances, equipped with browsing, viewing, listening, filtering, and managing capabilities etc., that are enabled as further discussed herein below.
  • a web server 170 is further connected to the network 110 and may provide to a web browser 120 web-pages containing multimedia content, or references therein, such that upon request by a web browser 120 , such multimedia content is provided to the web browser 120 .
  • the system 100 also includes a signature generator system (SGS) 140 .
  • the SGS 140 is connected to a server 130 .
  • the server 130 is enabled to receive and serve multimedia content and causes the SGS 140 to generate a signature respective of the multimedia content.
  • the server 130 together with the SGS 140 perform the process of matching an advertisement to a multimedia content element displayed in a web-page and determining the most attractive area within the multimedia element to display the advertisement according to various disclosed embodiments discussed in detail below.
  • each of the server 130 and the SGS 140 typically comprises a processing unit, such as a processor (not shown) that is coupled to a memory.
  • the memory contains instructions that can be executed by the processing unit.
  • the server 130 also includes an interface (not shown) to the network 110 .
  • a plurality of publisher servers P 1 150 - 1 through Pm 150 - m are also connected to the network 110 , each of which is configured to generate and send online advertisements to the server 130 and web-server 170 .
  • the publisher servers 150 typically receive the advertised content from advertising agencies that place the advertising campaign.
  • the advertisements may be stored in a data warehouse 160 which is connected to the server 130 (either directly or through the network 110 ) for further use.
  • a user visits a web-page, hosted in the web-server 170 , using a web-browser 120 .
  • a request is sent to the server 130 to analyze the multimedia content elements contained in the web-page.
  • the request to analyze the multimedia elements content can be generated and sent by a script executed in the web-page, an agent installed in the web-browser, or by one of the publisher servers 150 when requested to upload one or more advertisements to the web-page.
  • the request to analyze the multimedia content may include a URL of the web-page or a copy of the web-page.
  • the request may also include multimedia content elements extracted from the web-page.
  • a multimedia content element may include, for example, an image, a graphic, a video stream, a video clip, an audio stream, an audio clip, a video frame, a photograph, and an image of signals (e.g., spectrograms, phasograms, scalograms, etc.), and/or combinations thereof and portions thereof.
  • the server 130 analyzes the multimedia content elements in the web-page to detect one or more matching advertisements for the multimedia content elements. It should be noted that the server 130 may analyze all or a sub-set of the multimedia content elements contained in the web-page. It should be further noted that the number of matching advertisements that are provided for the analysis can be determined based on the number of advertisement banners that can be displayed on the web-page, or in response to a request pre-configured by a campaign manager.
  • the SGS 140 generates for each multimedia content element provided by the server 130 at least one signature.
  • the at least one generated signature may be robust to noise and distortions as discussed below.
  • the server 130 searches the data warehouse 160 for a matching advertisement. For example, if the signature of an image indicates a “sea shore” then an advertisement for a swimsuit can be a potential matching advertisement.
  • a multimedia content element typically includes many details, and is composed of different content portions, each of which may be of a different type and related to a different object.
  • a picture 700 shown in FIG. 7 is composed of the text “Our Fleet” and images of 4 different cars (color or model), a road, and a building as a background.
  • the server 130 matches an advertisement to be placed over the multimedia content element based on the various content portions included in the element.
  • the server 130 further determines an area within the multimedia content element over which an advertisement can be placed, such that it would not distract the viewer's attention away from the advertised content and the displayed element, but rather it would attract the user to the displayed content.
  • the server 130 matches an advertisement for more than one content portion included therein.
  • the server 130 may process the picture 700 by means of the SGS 140 which generates at least one signature for each content portion of the picture. Based on the generated signatures the server matches an advertisement for one or more cars displayed in the picture 700 , and matches another advertisement that relates to all cars. Based on the signatures analysis the server 130 determines an area, for display of each of the advertisements, within the multimedia content element. The determination is based on at least one of the area's texture, visibility, contrast, relativity to the advertisement content, distance from other content portions, and so on. For example, AD-1, AD-2, AD-3, and AD-4 each relate to a specific car's model and are displayed below each model, while AD-5 is for a dealership and is displayed next to the text “Our Fleet.”
  • the signatures generated for the picture 700 would enable accurate recognition of the model of the car because the signatures generated for the multimedia content elements, according to the disclosed embodiments, allow for recognition and classification of multimedia elements, such as, content-tracking, video filtering, multimedia taxonomy generation, video fingerprinting, speech-to-text, audio classification, element recognition, video/image search and any other applications requiring content-based signatures generation and matching for large content volumes such as, web and other large-scale databases.
  • the signatures generated for more than one multimedia content element are clustered.
  • the clustered signatures are used to search for a matching advertisement.
  • the one or more selected matching advertisements are retrieved from the data warehouse 160 and are placed in the one or more determined areas within the multimedia content element by the server 130 .
  • the composed element including the matching advertisements is uploaded to the web-page on the web browser 120 by means of one of the publisher servers 150 .
  • the matching advertisements may be provided to the publisher servers 150 with instructions as to where to place each advertisement in the web-page.
  • the instructions may include the element ID in the web-page, a URL of the web-page, coordinates within the web-page and/or element in which to place the advertisements, and so on.
  • the matching advertisements are overlaid on top of the content element.
  • FIG. 2 depicts an exemplary and non-limiting flowchart 200 describing the process of matching an advertisement to multimedia content displayed on a web-page.
  • the method starts when a web-page is provided responsive of a request by one of the web-browsers (e.g., web-browser 120 - 1 ).
  • a request to match at least one multimedia content element contained in the uploaded web-page to an appropriate advertisement item is received.
  • the request can be received from a publisher server (e.g., a server 150 - 1 ), a script running on the uploaded web-page, or an agent (e.g., an add-on) installed in the web-browser.
  • S210 can also include extracting the multimedia content elements for which a signature should be generated.
  • a signature for the multimedia content element is generated.
  • the signature for the multimedia content element generated by a signature generator is described below.
  • an advertisement item is matched to the multimedia content element respective of its generated signature.
  • the matching process includes searching for at least one advertisement item respective of the signature of the multimedia content and a display of the at least one advertisement item within the display area of the web-page.
  • the matching of an advertisement to a multimedia content element can be performed by the computational cores that are part of a large scale matching discussed in detail below.
  • the advertisement item is uploaded to the web-page and displayed therein.
  • the user's gesture may be: a scroll on the multimedia content element, a press on the multimedia content element, and/or a response to the multimedia content. This ensures that the user's attention is given to the advertised content.
  • in S250, it is checked whether there are additional requests to analyze multimedia content elements, and if so, execution continues with S210; otherwise, execution terminates.
  • the image is then analyzed and a signature is generated respective thereto. Respective of the image signature, an advertisement item (e.g., a banner) is matched to the image, for example, a swimsuit advertisement. Upon detection of a user's gesture, for example, a mouse scrolling over the sea shore image, the swimsuit ad is displayed.
  • the web-page may contain a number of multimedia content elements; however, in some instances only a few advertisement items may be displayed in the web-page. Accordingly, in one embodiment, the signatures generated for the multimedia content elements are clustered and the cluster of signatures is matched to one or more advertisement items.
  • Video content segments 2 from a Master database (DB) 6 and a Target DB 1 are processed in parallel by a large number of independent computational cores 3 that constitute an architecture for generating the signatures (hereinafter the “Architecture”). Further details on the computational cores generation are provided below.
  • the independent cores 3 generate a database of Robust Signatures and Signatures 4 for Target content-segments 5 and a database of Robust Signatures and Signatures 7 for Master content-segments 8 .
  • An exemplary and non-limiting process of signature generation for an audio component is shown in detail in FIG. 4 .
  • Target Robust Signatures and/or Signatures 4 are effectively matched, by a matching algorithm 9 , to Master Robust Signatures and/or Signatures 7 database to find all matches between the two databases.
  • the Matching System is extensible for signatures generation capturing the dynamics in-between the frames.
  • the signatures' generation process is now described with reference to FIG. 4 .
  • the first step in the process of signatures generation from a given speech-segment is to break down the speech-segment to K patches 14 of random length P and random position within the speech segment 12 .
  • the breakdown is performed by the patch generator component 21 .
  • the value of the number of patches K, random length P and random position parameters is determined based on optimization, considering the tradeoff between accuracy rate and the number of fast matches required in the flow process of the server 130 and the SGS 140.
  • all the K patches are injected in parallel into all computational cores 3 to generate K response vectors 22 , which are fed into a signature generator system 23 to produce a database of Robust Signatures and Signatures 4 .
  • Threshold values Th_x are set differently for Signature generation and for Robust Signature generation. For example, for a certain distribution of values (for the set of nodes), the thresholds for Signature (Th_S) and Robust Signature (Th_RS) are set apart, after optimization, according to at least one or more criteria, as detailed in the description below.
  • the cores should be designed so as to obtain maximal independence, i.e., the projection from a signal space should generate a maximal pair-wise distance between any two cores' projections into a high-dimensional space.
  • the cores should be optimally designed for the type of signals, i.e., the cores should be maximally sensitive to the spatio-temporal structure of the injected signal, for example, and in particular, sensitive to local correlations in time and space.
  • a core represents a dynamic system, such as in state space, phase space, edge of chaos, etc., which is uniquely used herein to exploit their maximal computational power.
  • the cores should be optimally designed with regard to invariance to a set of signal distortions, of interest in relevant applications.
  • FIG. 5 depicts an exemplary and non-limiting flowchart 500 describing a method for detecting attractive advertising areas within a multimedia content element and matching advertisements for display in the detected areas according to one embodiment.
  • a web-page is provided responsive of a request by one of the web-browsers (e.g., web-browser 120 - 1 ).
  • a request is received to process at least one multimedia content element contained in the uploaded web-page, for the purpose of detecting attractive advertising areas and matching advertisements.
  • the request and the web-page can be received from a publisher server (e.g., a server 150 - 1 ), a script running on the uploaded web-page, or an agent (e.g., an add-on) installed in the web-browser.
  • the multimedia content element is extracted from the web-page.
  • the multimedia content element is partitioned into a predefined number of portions.
  • This number may be a configurable parameter of the server 130 .
  • an image may be partitioned into blocks having equal or non-equal size.
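  • As a purely illustrative sketch of this partitioning step (not the patent's implementation), the following snippet splits an image into a configurable grid of blocks; the grid dimensions, library choice, and function names are assumptions standing in for the server's configurable "predefined number of portions".

```python
# Hypothetical sketch of the partitioning step: split an image into a
# rows x cols grid of blocks. The grid size is an assumed stand-in for the
# configurable "predefined number of portions".
from PIL import Image
import numpy as np

def partition_image(path, rows=2, cols=2):
    """Return a list of (box, block) tuples covering the whole image."""
    img = np.asarray(Image.open(path).convert("RGB"))
    h, w, _ = img.shape
    portions = []
    for r in range(rows):
        for c in range(cols):
            top, bottom = r * h // rows, (r + 1) * h // rows
            left, right = c * w // cols, (c + 1) * w // cols
            box = (left, top, right, bottom)
            portions.append((box, img[top:bottom, left:right]))
    return portions

# Example: rows=2, cols=2 yields four portions analogous to 600-A..600-D of FIG. 6.
```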
  • At least one signature is generated for each portion of the multimedia content element.
  • each of the at least one signatures is robust to noise and distortion and is generated by the SGS 140 as described hereinabove.
  • an advertisement item is matched to each portion of the multimedia content respective of its generated signature. Alternatively or collectively, all signatures generated for the various portions are clustered, and an advertisement is matched to the clustered signature.
  • the operation of S 540 is described in detail hereinabove, at least with reference to FIG. 2 .
  • the matching of an advertisement to a signature can be performed by the computational cores that are part of a large scale matching discussed in detail hereinabove.
  • one or more attractive advertising areas are identified within the multimedia content element, for display of one or more of the matched advertisements.
  • the at least one signature generated for each portion of the multimedia content element is analyzed.
  • the signature analysis includes determination of the texture uniformity, margin of the respective portion, the location of a portion within the multimedia element, and so on.
  • An image texture is a set of metrics calculated in image processing designed to quantify the perceived texture of an image.
  • the image texture provides information about the spatial arrangement of color or intensities in an image or selected region of an image.
  • each portion is assigned with an attractiveness score, indicating how the portion is likely to attract viewers' attention without damaging the overall appearance of the multimedia content element. For example, a center portion of the element would have a higher score relative to other portions.
  • portions having attractiveness scores above a predefined threshold are determined to be attractive advertising areas.
  • the predefined threshold may be a configurable parameter of the server 130 .
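  • A minimal sketch of how such an attractiveness score might be computed is given below; using intensity variance as a texture-uniformity proxy, weighting by proximity to the element's center, and the 0.6 threshold are illustrative assumptions rather than the patent's formula.

```python
# Hypothetical scoring of portions: flatter texture and proximity to the
# element's center raise the score. The variance-based texture proxy, the
# equal weights, and the threshold are assumptions for illustration only.
import numpy as np

def attractiveness_score(block, box, image_size):
    gray = block.mean(axis=2)                     # collapse RGB to intensity
    uniformity = 1.0 / (1.0 + gray.var())         # flat texture -> close to 1
    w, h = image_size
    cx, cy = (box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0
    center_dist = np.hypot(cx - w / 2.0, cy - h / 2.0) / np.hypot(w / 2.0, h / 2.0)
    return 0.5 * uniformity + 0.5 * (1.0 - center_dist)

def attractive_areas(portions, image_size, threshold=0.6):
    """Keep only portions whose score exceeds the configurable threshold."""
    scored = [(box, attractiveness_score(block, box, image_size))
              for box, block in portions]
    return [(box, score) for box, score in scored if score > threshold]
```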
  • the matching advertisements are overlaid on the determined attractive advertising areas. It should be noted that not all matching advertisements may be used for this purpose, but rather only the number of matching advertisements that can fit within the determined areas.
  • the overlaid advertisements are displayed as part of the received multimedia content element.
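  • The overlay step could then look roughly like the sketch below, which resizes a matched advertisement image to the selected area and pastes it onto the content element; the file names and the resize-to-fit choice are assumptions made for the example.

```python
# Hypothetical overlay of a matched advertisement onto a selected attractive
# area of the content element. File names are placeholders.
from PIL import Image

def overlay_advertisement(element_path, ad_path, box, out_path):
    element = Image.open(element_path).convert("RGBA")
    left, top, right, bottom = box
    ad = Image.open(ad_path).convert("RGBA").resize((right - left, bottom - top))
    element.paste(ad, (left, top), ad)            # the ad's alpha channel is the mask
    element.save(out_path)

# e.g. overlay_advertisement("image_600.png", "ad_630.png", chosen_box, "composed.png")
```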
  • An example of the operation of the method described with reference to FIG. 5, and of the operation of the server 130, is provided in FIG. 6.
  • a request to match an advertisement to an image 600 displayed over a web-page is received.
  • the image 600 is partitioned into 4 different portions 600 -A, 600 -B, 600 -C, and 600 -D.
  • For each portion at least one signature is generated by the SGS 140 , which is then analyzed by the server 130 .
  • the analysis result would determine that the portion 600 -B does not include any displayed object and its texture is flat, thus portion 600 -B is determined as the attractive advertising area and an advertisement 630 may be displayed in this area.
  • the advertisement may be related to sunglasses 620 or a salad bowl 610 .
  • the various embodiments disclosed herein can be implemented as hardware, firmware, software, or any combination thereof.
  • the software is preferably implemented as an application program tangibly embodied on a program storage unit or computer readable medium consisting of parts, or of certain devices and/or a combination of devices.
  • the application program may be uploaded to, and executed by, a machine comprising any suitable architecture.
  • the machine is implemented on a computer platform having hardware such as one or more central processing units (“CPUs”), a memory, and input/output interfaces.
  • the computer platform may also include an operating system and microinstruction code.
  • a non-transitory computer readable medium is any computer readable medium except for a transitory propagating signal.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Strategic Management (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • Game Theory and Decision Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Computer Graphics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Information Transfer Between Computers (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

A system and method for identifying advertisements for display in a multimedia content element. The method includes: partitioning the multimedia content element into a predefined number of portions; generating at least one signature for each portion of the multimedia content element; analyzing the at least one signature generated for each portion of the multimedia content element; identifying at least one attractive advertising area within the multimedia content element based on the signature analysis; and identifying at least one matching advertisement for the multimedia content element based on the generated at least one signature, wherein the at least one matching advertisement fits within the identified at least one attractive advertising area.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of U.S. patent application Ser. No. 13/874,195 filed on Apr. 30, 2013, now allowed, which claims the benefit of U.S. Provisional Application No. 61/789,378 filed on Mar. 15, 2013. The Ser. No. 13/874,195 Application is a continuation-in-part (CIP) of U.S. patent application Ser. No. 13/624,397 filed on Sep. 21, 2012, now U.S. Pat. No. 9,191,626. The Ser. No. 13/624,397 Application is a CIP of:
  • (a) U.S. patent application Ser. No. 13/344,400 filed on Jan. 5, 2012, now U.S. Pat. No. 8,959,037, which is a continuation of U.S. patent application Ser. No. 12/434,221 filed on May 1, 2009, now U.S. Pat. No. 8,112,376.
  • (b) U.S. patent application Ser. No. 12/195,863 filed on Aug. 21, 2008, now U.S. Pat. No. 8,326,775, which claims priority under 35 USC 119 from Israeli Application No. 185414, filed on Aug. 21, 2007, and which is also a continuation-in-part of the below-referenced U.S. patent application Ser. No. 12/084,150; and
  • (c) U.S. patent application Ser. No. 12/084,150 having a filing date of Apr. 7, 2009, now U.S. Pat. No. 8,655,801, which is the National Stage of International Application No. PCT/IL2006/001235, filed on Oct. 26, 2006, which claims foreign priority from Israeli Application No. 171577 filed on Oct. 26, 2005 and Israeli Application No. 173409 filed on Jan. 29, 2006.
  • All of the Applications referenced above are hereby incorporated by reference for all that they contain.
  • TECHNICAL FIELD
  • The present disclosure relates generally to the analysis of multimedia content, and more specifically to a method for determining an area within multimedia content over which an advertisement can be displayed.
  • BACKGROUND
  • The Internet, also referred to as the worldwide web (WWW), has become a mass medium whereby the content presentation is largely supported by paid advertisements that are added to the web-page content. Typically, advertisements are displayed using portions of code written in, for example, hyper-text mark-up language (HTML) or JavaScript that is inserted into, or otherwise called up by, HTML documents (web-pages). A web-page typically contains text and multimedia elements, such as images, video clips, audio clips, and the like, that are rendered and displayed by a web browser on a display device.
  • One of the most common types of advertisements on the Internet is in a form of a banner advertisement. Banner advertisements are generally images or animations that are displayed within a web-page. Other advertisements are simply inserted at various locations within the display area of the HTML document forming the web-page. A typical web-page is cluttered with many advertisement banners, which frequently are irrelevant to the content being displayed in the web-page. As a result, the user's attention is not given to the advertised content. Consequently, the price for advertising in a potentially valuable area within a web-page is low because its respective effectiveness is low.
  • It would therefore be advantageous to provide a solution that would attract viewers' attention to advertised content and thereby increase the price of advertising areas within web-pages.
  • SUMMARY
  • Certain embodiments disclosed herein include a method for identifying advertisements for display in a multimedia content element. The method includes: partitioning the multimedia content element into a predefined number of portions; generating at least one signature for each portion of the multimedia content element; analyzing the at least one signature generated for each portion of the multimedia content element; identifying at least one attractive advertising area within the multimedia content element based on the signature analysis; and identifying at least one matching advertisement for the multimedia content element based on the generated at least one signature, wherein the at least one matching advertisement fits within the identified at least one attractive advertising area.
  • Certain embodiments disclosed herein also include a system for identifying advertisements for display in a multimedia content element. The system includes: a processing system; and a memory, the memory containing instructions that, when executed by the processing unit, configure the system to: partition the multimedia content element into a predefined number of portions; generate at least one signature for each portion of the multimedia content element; analyze the at least one signature generated for each portion of the multimedia content element; identify at least one attractive advertising area within the multimedia content element based on the signature analysis; and identify at least one matching advertisement for the multimedia content element based on the generated at least one signature, wherein the at least one matching advertisement fits within the identified at least one attractive advertising area.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The subject matter disclosed herein is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other objects, features, and advantages of the disclosed embodiments will be apparent from the following detailed description taken in conjunction with the accompanying drawings.
  • FIG. 1 is a schematic block diagram of a network system utilized to describe various embodiments of the invention.
  • FIG. 2 is a flowchart describing a process of matching an advertisement to multimedia content displayed on a web-page.
  • FIG. 3 is a block diagram depicting a basic flow of information in the signature generator system.
  • FIG. 4 is a diagram showing a flow of patches generation, response vector generation, and signature generation in a large-scale speech-to-text system.
  • FIG. 5 is a flowchart describing a method for determining an area within the multimedia content of which an advertisement can be displayed according to one embodiment.
  • FIGS. 6 and 7 are screenshots of images showing an area within the image selected for the display of an advertisement according to an embodiment.
  • DETAILED DESCRIPTION
  • It is important to note that the embodiments disclosed herein are only examples of the many advantageous uses of the innovative teachings herein. In general, statements made in the specification of the present application do not necessarily limit any of the various claimed embodiments. Moreover, some statements may apply to some inventive features but not to others. In general, unless otherwise indicated, singular elements may be in plural and vice versa with no loss of generality. In the drawings, like numerals refer to like parts through several views.
  • The disclosed techniques are based on a system designed to allow matching at least an appropriate advertisement that is relevant to a multimedia content displayed in a web-page, and analyzing the multimedia content displayed on the web-page accordingly. Based on the analysis results, for one or more multimedia content elements included in the web-page, one or more matching signatures are generated. The signatures are utilized to search for appropriate advertisements to be displayed in the web-page. According to disclosed embodiments, an advertisement is matched to a multimedia element displayed in a web-page, based on the content of the element. Furthermore, the disclosed embodiment determines the most attractive area within the multimedia element in which the advertisement can be displayed in order to attract the viewer's attention.
  • FIG. 1 shows an exemplary and non-limiting schematic diagram of a network system 100 utilized to describe the disclosed embodiments. A network 110 is used to communicate between different parts of the system. The network 110 may be the Internet, the world-wide-web (WWW), a local area network (LAN), a wide area network (WAN), a metro area network (MAN), and other networks capable of enabling communication between the elements of the system 100.
  • Further connected to the network 110 are one or more client applications, such as web browsers (WB) 120-1 through 120-n (collectively referred hereinafter as web browsers 120 or individually as a web browser 120). A web browser 120 is executed over a computing device including, for example, a personal computer (PC), a personal digital assistant (PDA), a mobile phone, a tablet computer, and other kinds of wired and mobile appliances, equipped with browsing, viewing, listening, filtering, and managing capabilities etc., that are enabled as further discussed herein below.
  • A web server 170 is further connected to the network 110 and may provide to a web browser 120 web-pages containing multimedia content, or references therein, such that upon request by a web browser 120, such multimedia content is provided to the web browser 120. The system 100 also includes a signature generator system (SGS) 140. In one embodiment, the SGS 140 is connected to a server 130. The server 130 is enabled to receive and serve multimedia content and causes the SGS 140 to generate a signature respective of the multimedia content. Specifically, the server 130 together with the SGS 140 perform the process of matching an advertisement to a multimedia content element displayed in a web-page and determining the most attractive area within the multimedia element to display the advertisement according to various disclosed embodiments discussed in detail below. The process for generating the signatures for multimedia content by the SGS 140, is explained in more detail herein below with respect to FIGS. 3 and 4. It should be noted that each of the server 130 and the SGS 140, typically comprises a processing unit, such as a processor (not shown) that is coupled to a memory. The memory contains instructions that can be executed by the processing unit. The server 130 also includes an interface (not shown) to the network 110.
  • A plurality of publisher servers P1 150-1 through Pm 150-m (collectively referred to hereinafter as publisher servers 150, or individually as a publisher server 150) are also connected to the network 110, each of which is configured to generate and send online advertisements to the server 130 and web-server 170. The publisher servers 150 typically receive the advertised content from advertising agencies that place the advertising campaign. In one embodiment, the advertisements may be stored in a data warehouse 160 which is connected to the server 130 (either directly or through the network 110) for further use.
  • A user visits a web-page, hosted in the web-server 170, using a web-browser 120. When the web-page is uploaded on the user's web-browser 120, a request is sent to the server 130 to analyze the multimedia content elements contained in the web-page. The request to analyze the multimedia elements content can be generated and sent by a script executed in the web-page, an agent installed in the web-browser, or by one of the publisher servers 150 when requested to upload one or more advertisements to the web-page. The request to analyze the multimedia content may include a URL of the web-page or a copy of the web-page. The request may also include multimedia content elements extracted from the web-page. A multimedia content element may include, for example, an image, a graphic, a video stream, a video clip, an audio stream, an audio clip, a video frame, a photograph, and an image of signals (e.g., spectrograms, phasograms, scalograms, etc.), and/or combinations thereof and portions thereof.
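  • As a rough illustration only, a page script or browser agent might issue such an analysis request along the following lines; the endpoint URL, field names, and JSON shape are invented for this sketch and are not defined by the disclosure.

```python
# Hypothetical analysis request sent to the server 130. The endpoint and the
# payload fields are illustrative assumptions, not part of the disclosure.
import json
import urllib.request

payload = {
    "page_url": "https://example.com/fleet",   # URL of the web-page, or ...
    "page_html": None,                         # ... a copy of the web-page instead
    "elements": [                              # optionally, elements extracted from the page
        {"element_id": "img-600", "type": "image", "src": "https://example.com/fleet.jpg"},
    ],
}
request = urllib.request.Request(
    "https://server130.example/analyze",       # placeholder endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# response = urllib.request.urlopen(request)   # not executed here: placeholder host
```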
  • The server 130 analyzes the multimedia content elements in the web-page to detect one or more matching advertisements for the multimedia content elements. It should be noted that the server 130 may analyze all or a sub-set of the multimedia content elements contained in the web-page. It should be further noted that the number of matching advertisements that are provided for the analysis can be determined based on the number of advertisement banners that can be displayed on the web-page, or in response to a request pre-configured by a campaign manager.
  • The SGS 140 generates for each multimedia content element provided by the server 130 at least one signature. The at least one generated signature may be robust to noise and distortions as discussed below. Then, using the generated signature(s) the server 130 searches the data warehouse 160 for a matching advertisement. For example, if the signature of an image indicates a “sea shore” then an advertisement for a swimsuit can be a potential matching advertisement.
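  • One way to picture the signature-based lookup against the data warehouse 160 is the sketch below, which models signatures as binary vectors and ranks stored advertisements by signature overlap; both the representation and the overlap metric are assumptions made for illustration.

```python
# Hypothetical signature-based lookup: advertisements in the warehouse are
# keyed to concept signatures (e.g. a "sea shore" concept) and ranked by
# overlap with the element's signature. Representation and metric are assumed.
import numpy as np

def match_advertisements(element_signature, warehouse, top_k=3):
    """warehouse: dict mapping ad_id -> binary concept signature."""
    scores = {ad_id: int(np.logical_and(element_signature, sig).sum())
              for ad_id, sig in warehouse.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# An image whose signature overlaps the "sea shore" concept would rank the
# swimsuit advertisement first.
```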
  • However, typically a multimedia content element includes many details, and is composed of different content portions, each of which may be of a different type and related to a different object. For example, a picture 700 shown in FIG. 7 is composed of the text “Our Fleet” and images of 4 different cars (color or model), a road, and a building as a background.
  • According to the embodiments disclosed, the server 130 matches an advertisement to be placed over the multimedia content element based on the various content portions included in the element. The server 130 further determines an area within the multimedia content element over which an advertisement can be placed, such that it would not distract the viewer's attention away from the advertised content and the displayed element, but rather it would attract the user to the displayed content. In another embodiment, the server 130 matches an advertisement for more than one content portion included therein.
  • For example, the server 130 may process the picture 700 by means of the SGS 140 which generates at least one signature for each content portion of the picture. Based on the generated signatures the server matches an advertisement for one or more cars displayed in the picture 700, and matches another advertisement that relates to all cars. Based on the signatures analysis the server 130 determines an area, for display of each of the advertisements, within the multimedia content element. The determination is based on at least one of the area's texture, visibility, contrast, relativity to the advertisement content, distance from other content portions, and so on. For example, AD-1, AD-2, AD-3, and AD-4 each relate to a specific car's model and are displayed below each model, while AD-5 is for a dealership and is displayed next to the text “Our Fleet.”
  • It should be noted that the signatures generated for the picture 700 would enable accurate recognition of the model of the car because the signatures generated for the multimedia content elements, according to the disclosed embodiments, allow for recognition and classification of multimedia elements, such as, content-tracking, video filtering, multimedia taxonomy generation, video fingerprinting, speech-to-text, audio classification, element recognition, video/image search and any other applications requiring content-based signatures generation and matching for large content volumes such as, web and other large-scale databases.
  • In one embodiment, the signatures generated for more than one multimedia content element are clustered. The clustered signatures are used to search for a matching advertisement. In one embodiment, the one or more selected matching advertisements are retrieved from the data warehouse 160 and are placed in the one or more determined areas within the multimedia content element by the server 130. Then, the composed element including the matching advertisements is uploaded to the web-page on the web browser 120 by means of one of the publisher servers 150. Alternatively, the matching advertisements may be provided to the publisher servers 150 with instructions as to where to place each advertisement in the web-page. The instructions may include the element ID in the web-page, a URL of the web-page, coordinates within the web-page and/or element in which to place the advertisements, and so on. The matching advertisements are overlaid on top of the content element.
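  • The placement instructions handed to a publisher server might be structured roughly as follows; every field name and value below is a placeholder chosen for the sketch, not a format defined by the disclosure.

```python
# Hypothetical placement instructions sent to a publisher server 150.
# All field names and values are placeholders.
placement_instructions = [
    {
        "page_url": "https://example.com/fleet",
        "element_id": "img-700",                 # element ID within the web-page
        "ad_id": "AD-5",
        "coordinates": {"x": 40, "y": 12, "width": 180, "height": 60},  # within the element
        "overlay": True,                          # overlaid on top of the content element
    },
]
```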
  • FIG. 2 depicts an exemplary and non-limiting flowchart 200 describing the process of matching an advertisement to multimedia content displayed on a web-page. At S205, the method starts when a web-page is provided responsive of a request by one of the web-browsers (e.g., web-browser 120-1). In S210, a request to match at least one multimedia content element contained in the uploaded web-page to an appropriate advertisement item is received. The request can be received from a publisher server (e.g., a server 150-1), a script running on the uploaded web-page, or an agent (e.g., an add-on) installed in the web-browser. S210 can also include extracting the multimedia content elements for which a signature should be generated.
  • In S220, a signature for the multimedia content element is generated. The signature for the multimedia content element generated by a signature generator is described below. In S230, an advertisement item is matched to the multimedia content element respective of its generated signature. In one embodiment, the matching process includes searching for at least one advertisement item respective of the signature of the multimedia content and a display of the at least one advertisement item within the display area of the web-page. In one embodiment, the matching of an advertisement to a multimedia content element can be performed by the computational cores that are part of a large scale matching discussed in detail below.
  • In S240, upon a user's gesture the advertisement item is uploaded to the web-page and displayed therein. The user's gesture may be: a scroll on the multimedia content element, a press on the multimedia content element, and/or a response to the multimedia content. This ensures that the user's attention is given to the advertised content. In S250 it is checked whether there are additional requests to analyze multimedia content elements, and if so, execution continues with S210; otherwise, execution terminates.
  • As a non-limiting example, a user uploads a web-page that contains an image of a sea shore. The image is then analyzed and a signature is generated respective thereto. Respective of the image signature, an advertisement item (e.g., a banner) is matched to the image, for example, a swimsuit advertisement. Upon detection of a user's gesture, for example, a mouse scrolling over the sea shore image, the swimsuit ad is displayed.
  • The web-page may contain a number of multimedia content elements; however, in some instances only a few advertisement items may be displayed in the web-page. Accordingly, in one embodiment, the signatures generated for the multimedia content elements are clustered and the cluster of signatures is matched to one or more advertisement items.
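  • A minimal sketch of this clustering variant, again assuming binary signature vectors, could simply aggregate the element signatures before matching; representing the cluster as the bitwise union of its members is an assumption made for illustration.

```python
# Hypothetical clustering of signatures from several multimedia content
# elements prior to matching. The bitwise-OR aggregation is an assumption.
import numpy as np

def cluster_signature(signatures):
    """Combine a list of binary signature vectors into one cluster signature."""
    cluster = np.zeros_like(signatures[0], dtype=bool)
    for sig in signatures:
        cluster |= sig.astype(bool)
    return cluster

# The cluster signature can then be matched against the warehouse with the
# same routine used for a single element (see the lookup sketch above).
```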
  • FIGS. 3 and 4 illustrate the generation of signatures for the multimedia content by the SGS 140 according to one embodiment. An exemplary high-level description of the process for large scale matching is depicted in FIG. 3. In this example, the matching is for a video content.
  • Video content segments 2 from a Master database (DB) 6 and a Target DB 1 are processed in parallel by a large number of independent computational cores 3 that constitute an architecture for generating the signatures (hereinafter the “Architecture”). Further details on the computational cores generation are provided below. The independent cores 3 generate a database of Robust Signatures and Signatures 4 for Target content-segments 5 and a database of Robust Signatures and Signatures 7 for Master content-segments 8. An exemplary and non-limiting process of signature generation for an audio component is shown in detail in FIG. 4. Finally, Target Robust Signatures and/or Signatures 4 are effectively matched, by a matching algorithm 9, to Master Robust Signatures and/or Signatures 7 database to find all matches between the two databases.
  • To demonstrate an example of signature generation process, it is assumed, merely for the sake of simplicity and without limitation on the generality of the disclosed embodiments, that the signatures are based on a single frame, leading to certain simplification of the computational cores generation. The Matching System is extensible for signatures generation capturing the dynamics in-between the frames.
  • The signatures' generation process is now described with reference to FIG. 4. The first step in the process of signatures generation from a given speech-segment is to break down the speech-segment into K patches 14 of random length P and random position within the speech segment 12. The breakdown is performed by the patch generator component 21. The value of the number of patches K, random length P and random position parameters is determined based on optimization, considering the tradeoff between accuracy rate and the number of fast matches required in the flow process of the server 130 and the SGS 140. Thereafter, all the K patches are injected in parallel into all computational cores 3 to generate K response vectors 22, which are fed into a signature generator system 23 to produce a database of Robust Signatures and Signatures 4.
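  • A sketch of this patch-generation step, assuming the segment is a one-dimensional sample array and that the patch count and length bounds are free tuning parameters, might look like the following; the concrete values are placeholders.

```python
# Hypothetical patch generator (component 21): cut K patches of random length
# and random position out of a 1-D signal. K and the length bounds are tuning
# parameters; the values below are placeholders, not optimized values.
import numpy as np

def generate_patches(segment, k=16, min_len=64, max_len=256, seed=0):
    rng = np.random.default_rng(seed)
    patches = []
    for _ in range(k):
        length = int(rng.integers(min_len, max_len + 1))
        start = int(rng.integers(0, max(1, len(segment) - length)))
        patches.append(segment[start:start + length])
    return patches

# Example: sixteen random patches from one second of 8 kHz audio.
patches = generate_patches(np.random.default_rng(1).standard_normal(8000))
```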
  • In order to generate Robust Signatures, i.e., signatures that are robust to additive noise L (where L is an integer equal to or greater than 1) by the computational cores 3, a frame 'i' is injected into all the cores 3. Then, the cores 3 generate two binary response vectors: $\vec{S}$, a Signature vector, and $\vec{RS}$, a Robust Signature vector.
  • For generation of signatures robust to additive noise, such as White-Gaussian-Noise, scratch, etc., but not robust to distortions, such as crop, shift and rotation, etc., a core Ci = {ni} (1 ≤ i ≤ L) may consist of a single leaky integrate-to-threshold unit (LTU) node or more nodes. The node ni equations are:
  • $$V_i = \sum_j w_{ij}\,k_j, \qquad n_i = \theta\left(V_i - Th_x\right)$$
  • where θ is the Heaviside step function; w_ij is a coupling node unit (CNU) between node i and image component j; k_j is an image component ‘j’ (for example, the grayscale value of a certain pixel j); Th_x is a constant threshold value, where x is ‘S’ for Signature and ‘RS’ for Robust Signature; and V_i is a coupling node value.
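  • For illustration only, a minimal sketch of the node equations above, assuming random coupling node units and grayscale pixel values; the array sizes and the two threshold values are arbitrary stand-ins and are not the optimized thresholds described below.

```python
import numpy as np

def node_responses(weights: np.ndarray, components: np.ndarray, threshold: float) -> np.ndarray:
    """weights: (num_nodes, num_components); components: (num_components,) image components k_j."""
    v = weights @ components                 # V_i = sum_j w_ij * k_j
    return (v > threshold).astype(np.uint8)  # n_i = theta(V_i - Th_x)

rng = np.random.default_rng(1)
weights = rng.normal(size=(128, 1024))  # coupling node units w_ij (random stand-ins)
pixels = rng.random(1024)               # image components k_j, e.g., grayscale pixel values

signature = node_responses(weights, pixels, threshold=2.0)         # Th_S (arbitrary example value)
robust_signature = node_responses(weights, pixels, threshold=9.0)  # Th_RS (arbitrary, set above Th_S)
print(int(signature.sum()), int(robust_signature.sum()))           # bits set in each response vector
```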
  • The Threshold values Thx are set differently for signature generation and for Robust Signature generation. For example, for a certain distribution of values (for the set of nodes), the thresholds for signature (ThS) and Robust Signature (ThRS) are set apart, after optimization, according to at least one or more of the following criteria:

  • 1: For V_i > Th_RS:

  • $$1 - p(V > Th_S) = 1 - (1 - \epsilon)^l \ll 1$$

  • That is, given that l nodes (cores) constitute a Robust Signature of a certain image I, the probability that not all of these l nodes will belong to the Signature of the same, but noisy, image Ĩ is sufficiently low (according to a system's specified accuracy).

  • 2:

  • $$p(V_i > Th_{RS}) \approx l/L$$

  • i.e., approximately l out of the total L nodes can be found to generate a Robust Signature according to the above definition.

  • 3: Both Robust Signature and Signature are generated for a certain frame i.
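  • For illustration only, a short numeric sketch of criteria 1 and 2 above, under the assumption that ε is interpreted as the per-node probability of dropping out of the noisy image's Signature; all numbers are examples, not values from the disclosure.

```python
epsilon = 0.001  # assumed per-node probability of falling below Th_S for the noisy image
l = 30           # nodes (cores) constituting the Robust Signature of image I
L = 1000         # total number of nodes

p_not_all_survive = 1 - (1 - epsilon) ** l  # criterion 1: should be much smaller than 1
target_fraction = l / L                     # criterion 2: p(V_i > Th_RS) should be about l/L

print(f"1 - (1 - eps)^l = {p_not_all_survive:.4f}  (criterion 1, << 1)")
print(f"l / L           = {target_fraction:.4f}  (criterion 2 target for p(V_i > Th_RS))")
```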
  • It should be understood that the generation of a signature is unidirectional and typically yields lossy compression: the characteristics of the compressed data are maintained but the uncompressed data cannot be reconstructed. Therefore, a signature can be used for the purpose of comparison to another signature without the need of comparison to the original data. A detailed description of the signature generation can be found in U.S. Pat. Nos. 8,326,775 and 8,312,031, assigned to the common assignee, which are hereby incorporated by reference for all the useful information they contain.
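  • For illustration only, a minimal sketch of comparing two signatures directly, without access to the original media; the bit-agreement similarity measure and the 0.8 match threshold are illustrative assumptions rather than the patented matching algorithm.

```python
import numpy as np

def signature_similarity(sig_a: np.ndarray, sig_b: np.ndarray) -> float:
    """Fraction of positions on which two equal-length binary signatures agree."""
    return float(np.mean(sig_a == sig_b))

rng = np.random.default_rng(2)
original = rng.integers(0, 2, 256, dtype=np.uint8)  # signature of the original item
noisy = original.copy()
flipped = rng.choice(256, size=10, replace=False)   # simulate a slightly distorted copy
noisy[flipped] ^= 1

score = signature_similarity(original, noisy)
print(f"similarity = {score:.3f}, match = {score >= 0.8}")  # 0.8 is an illustrative threshold
```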
  • A computational core generation is a process of definition, selection, and tuning of the parameters of the cores for a certain realization in a specific system and application. The process is based on several design considerations, such as:
  • (a) The cores should be designed so as to obtain maximal independence, i.e., the projection from a signal space should generate a maximal pair-wise distance between any two cores' projections into a high-dimensional space.
  • (b) The cores should be optimally designed for the type of signals, i.e., the cores should be maximally sensitive to the spatio-temporal structure of the injected signal, for example, and in particular, sensitive to local correlations in time and space. Thus, in some cases a core represents a dynamic system, such as in state space, phase space, edge of chaos, etc., which are uniquely used herein to exploit their maximal computational power.
  • (c) The cores should be optimally designed with regard to invariance to a set of signal distortions, of interest in relevant applications.
  • A detailed description of the computational core generation and the process for configuring such cores is discussed in more detail in U.S. Pat. No. 8,655,801 referenced above.
  • FIG. 5 depicts an exemplary and non-limiting flowchart 500 describing a method for detecting attractive advertising areas within a multimedia content element and matching advertisements for display in the detected areas according to one embodiment.
  • In S510, a web-page is provided responsive to a request by one of the web-browsers (e.g., web-browser 120-1). In S520, a request is received to process at least one multimedia content element contained in the uploaded web-page for the purpose of detecting attractive advertising areas and matching advertisements. The request and the web-page can be received from a publisher server (e.g., a server 150-1), a script running on the uploaded web-page, or an agent (e.g., an add-on) installed in the web-browser. In S525, the multimedia content element is extracted from the web-page.
  • In S530, the multimedia content element is partitioned into a predefined number of portions. This number may be a configurable parameter of the server 130. For example, an image may be partitioned into blocks of equal or unequal size.
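  • For illustration only, a minimal sketch of S530, assuming the multimedia content element is a grayscale image split into a grid of roughly equal blocks; the 2×2 grid stands in for the configurable predefined number of portions.

```python
import numpy as np

def partition_image(image: np.ndarray, rows: int, cols: int) -> list:
    """Split an image into rows*cols portions of (roughly) equal size."""
    h, w = image.shape[:2]
    return [image[r * h // rows:(r + 1) * h // rows,
                  c * w // cols:(c + 1) * w // cols]
            for r in range(rows) for c in range(cols)]

image = np.zeros((480, 640), dtype=np.uint8)       # stand-in grayscale multimedia content element
portions = partition_image(image, rows=2, cols=2)  # predefined number of portions = 4
print([p.shape for p in portions])
```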
  • In S535, at least one signature is generated for each portion of the multimedia content element. In one embodiment, each of the at least one signatures is robust to noise and distortion and is generated by the SGS 140 as described hereinabove. In S540, an advertisement item is matched to each portion of the multimedia content element respective of its generated signature. Alternatively or collectively, all signatures generated for the various portions are clustered, and an advertisement is matched to the clustered signature. The operation of S540 is described in detail hereinabove, at least with reference to FIG. 2. In one embodiment, the matching of an advertisement to a signature can be performed by the computational cores that are part of the large-scale matching discussed in detail hereinabove.
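  • For illustration only, a minimal sketch of S540 in which each portion's signature is matched to the advertisement item whose reference signature overlaps it most; the signatures, the inventory, and the overlap rule are illustrative assumptions, whereas the disclosure performs this matching with the computational cores of the large-scale matching system.

```python
import numpy as np

def best_ad_for_portion(portion_sig: np.ndarray, ad_signatures: dict) -> str:
    """Return the advertisement whose reference signature shares the most set bits with the portion."""
    return max(ad_signatures, key=lambda ad: int(np.sum(portion_sig & ad_signatures[ad])))

rng = np.random.default_rng(3)
portion_sigs = {name: rng.integers(0, 2, 64, dtype=np.uint8)
                for name in ("600-A", "600-B", "600-C", "600-D")}  # one signature per portion
ads = {"sunglasses_620": rng.integers(0, 2, 64, dtype=np.uint8),   # hypothetical ad inventory
       "salad_bowl_610": rng.integers(0, 2, 64, dtype=np.uint8)}
print({portion: best_ad_for_portion(sig, ads) for portion, sig in portion_sigs.items()})
```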
  • In S550, one or more attractive advertising areas are identified within the multimedia content element for the display of one or more of the matched advertisements. With this aim, the at least one signature generated for each portion of the multimedia content element is analyzed. The signature analysis includes determination of the texture uniformity and margin of the respective portion, the location of the portion within the multimedia content element, and so on. Image texture is a set of metrics, calculated in image processing, designed to quantify the perceived texture of an image. Image texture provides information about the spatial arrangement of colors or intensities in an image or a selected region of an image.
  • Based on the analysis, each portion is assigned an attractiveness score indicating how likely the portion is to attract viewers' attention without damaging the overall appearance of the multimedia content element. For example, a center portion of the element would have a higher score relative to other portions.
  • In S555, portions having attractiveness scores above a predefined threshold are determined to be attractive advertising areas. The predefined threshold may be a configurable parameter of the server 130.
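  • For illustration only, a minimal sketch of S550 and S555, assuming a simple texture-uniformity proxy: a portion's attractiveness score is taken as the inverse of its intensity variance, and portions scoring above a fixed threshold are declared attractive advertising areas. The scoring function and the threshold value are illustrative assumptions, not the disclosed signature-based analysis.

```python
import numpy as np

def attractiveness_score(portion: np.ndarray) -> float:
    """Flatter texture -> higher score; 1 / (1 + variance) keeps the score in (0, 1]."""
    return 1.0 / (1.0 + float(portion.astype(np.float64).var()))

def attractive_areas(portions: dict, threshold: float = 0.5) -> list:
    """Portions whose attractiveness score exceeds the predefined threshold (S555)."""
    return [name for name, p in portions.items() if attractiveness_score(p) > threshold]

rng = np.random.default_rng(4)
portions = {
    "600-A": rng.integers(0, 256, size=(240, 320), dtype=np.uint8),  # busy, highly textured area
    "600-B": np.full((240, 320), 180, dtype=np.uint8),               # flat, sky-like area
}
print(attractive_areas(portions))  # only the flat portion passes the threshold
```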
  • In S560, the matching advertisements are overlaid on the determined attractive advertising areas. It should be noted that not all matching advertisements may be used for this purpose, but rather only the number of matching advertisements that can fit within the determined areas. The overlaid advertisements are displayed as part of the received multimedia content element.
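  • For illustration only, a minimal sketch of S560, where a matched advertisement is overlaid on a determined attractive advertising area by writing it into that region of the element; plain array copying stands in for real rendering, and the sizes and coordinates are illustrative.

```python
import numpy as np

def overlay(element: np.ndarray, advertisement: np.ndarray, top: int, left: int) -> np.ndarray:
    """Write the advertisement into the chosen region of the multimedia content element."""
    out = element.copy()
    h, w = advertisement.shape[:2]
    out[top:top + h, left:left + w] = advertisement
    return out

element = np.zeros((480, 640), dtype=np.uint8)           # stand-in multimedia content element
banner = np.full((60, 200), 255, dtype=np.uint8)         # stand-in matched advertisement
composited = overlay(element, banner, top=20, left=340)  # placed inside the determined area
print(composited.shape, float(composited[20:80, 340:540].mean()))
```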
  • In one embodiment, the one or more matching advertisements may be displayed over the determined areas upon a user's gesture. The user's gesture may be, for example, a scroll over the multimedia content element, a mouse click or a tap on the multimedia content, and so on. According to another embodiment, an advertising element may be integrated to be shown as part of the multimedia content element. For example, in order to advertise a soft drink within an image of a man sitting on a beach, a bottle of the soft drink may be displayed in the man's right hand.
  • In S570 it is checked whether there are additional requests, and if so, execution continues with S510; otherwise, execution terminates.
  • An example of the operation of the method described with reference to FIG. 5 and the operation of the server 130 is provided in FIG. 6. A request to match an advertisement to an image 600 displayed over a web-page is received. The image 600 is partitioned into 4 different portions 600-A, 600-B, 600-C, and 600-D. For each portion, at least one signature is generated by the SGS 140, which is then analyzed by the server 130. The analysis result would determine that the portion 600-B does not include any displayed object and its texture is flat; thus, portion 600-B is determined to be the attractive advertising area, and an advertisement 630 may be displayed in this area. The advertisement may be related to sunglasses 620 or a salad bowl 610.
  • The various embodiments disclosed herein can be implemented as hardware, firmware, software, or any combination thereof. Moreover, the software is preferably implemented as an application program tangibly embodied on a program storage unit or computer readable medium consisting of parts, or of certain devices and/or a combination of devices. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. Preferably, the machine is implemented on a computer platform having hardware such as one or more central processing units (“CPUs”), a memory, and input/output interfaces. The computer platform may also include an operating system and microinstruction code. The various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU, whether or not such a computer or processor is explicitly shown. In addition, various other peripheral units may be connected to the computer platform such as an additional data storage unit and a printing unit. Furthermore, a non-transitory computer readable medium is any computer readable medium except for a transitory propagating signal.
  • All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the principles of the disclosed embodiment and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the disclosed embodiments, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.

Claims (19)

What is claimed is:
1. A method for identifying advertisements for display in a multimedia content element, comprising:
partitioning the multimedia content element into a predefined number of portions;
generating at least one signature for each portion of the multimedia content element;
analyzing the at least one signature generated for each portion of the multimedia content element;
identifying at least one attractive advertising area within the multimedia content element based on the signature analysis; and
identifying at least one matching advertisement for the multimedia content element based on the generated at least one signature, wherein the at least one matching advertisement fits within the identified at least one attractive advertising area.
2. The method of claim 1, wherein a number of the at least one matching advertisement is based on any of: a number of advertisement banners that can be displayed on a web-page, and a predetermined request.
3. The method of claim 1, wherein identifying the at least one attractive advertising area further comprises:
assigning each portion with an attractiveness score based on the analysis of the respective at least one signature; and
determining each portion having an attractiveness score above a predefined threshold as an attractive advertising area.
4. The method of claim 1, wherein identifying at least one matching advertisement for the multimedia content element further comprises:
matching at least one advertisement item to each of the portions of the multimedia content element respective of its at least one generated signature.
5. The method of claim 4, further comprising:
overlaying the at least one matching advertisement item over the at least one attractive advertising area.
6. The method of claim 4, wherein the at least one advertisement item is displayed respective of a gesture of a user detected by a user node configured to display a web-page.
7. The method of claim 1, further comprising:
clustering the at least one signature generated for each portion of the multimedia content element, wherein each of the at least one matching advertisement is identified based on the signature cluster.
8. The method of claim 1, wherein each signature is generated by a signature generator system, wherein the signature generator system includes a plurality of computational cores configured to receive a plurality of unstructured data elements, each computational core of the plurality of computational cores having properties that are at least partly statistically independent of the other computational cores, wherein the properties are set independently of each other core.
9. The method of claim 1, wherein the multimedia content element is at least one of: an image, graphics, a video stream, a video clip, an audio stream, an audio clip, a video frame, a photograph, images of signals, and portions thereof.
10. A non-transitory computer readable medium having stored thereon instructions for causing one or more processing units to execute the method according to claim 1.
11. A system for identifying advertisements for display in a multimedia content element, comprising:
a processing system; and
a memory, the memory containing instructions that, when executed by the processing system, configure the system to:
partition the multimedia content element into a predefined number of portions;
generate at least one signature for each portion of the multimedia content element;
analyze the at least one signature generated for each portion of the multimedia content element;
identify at least one attractive advertising area within the multimedia content element based on the signature analysis; and
identify at least one matching advertisement for the multimedia content element based on the generated at least one signature, wherein the at least one matching advertisement fits within the identified at least one attractive advertising area.
12. The system of claim 11, wherein a number of the at least one matching advertisement is based on any of: a number of advertisement banners that can be displayed on a web-page, and a predetermined request.
13. The system of claim 11, wherein the system is further configured to:
assign each portion with an attractiveness score based on the analysis of the respective at least one signature; and
determine each portion having an attractiveness score above a predefined threshold as an attractive advertising area.
14. The system of claim 11, wherein the system is further configured to:
match at least one advertisement item to each of the portions of the multimedia content element respective of its at least one generated signature.
15. The system of claim 14, wherein the system is further configured to:
overlay the at least one matching advertisement item over the at least one attractive advertising area.
16. The system of claim 14, wherein the at least one advertisement item is displayed respective of a gesture of a user detected by a user node configured to display a web-page.
17. The system of claim 11, wherein the system is further configured to:
cluster the at least one signature generated for each portion of the multimedia content element, wherein each of the at least one matching advertisement is identified based on the signature cluster.
18. The system of claim 11, wherein each signature is generated by a signature generator system, wherein the signature generator system includes a plurality of computational cores configured to receive a plurality of unstructured data elements, each computational core of the plurality of computational cores having properties that are at least partly statistically independent of the other computational cores, wherein the properties are set independently of each other core.
19. The system of claim 11, wherein the multimedia content element is at least one of: an image, graphics, a video stream, a video clip, an audio stream, an audio clip, a video frame, a photograph, images of signals, and portions thereof.
US15/019,223 2005-10-26 2016-02-09 Method for identifying advertisements for placement in multimedia content elements Abandoned US20160171548A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US15/019,223 US20160171548A1 (en) 2005-10-26 2016-02-09 Method for identifying advertisements for placement in multimedia content elements
US16/783,187 US20200175550A1 (en) 2005-10-26 2020-02-06 Method for identifying advertisements for placement in multimedia content elements

Applications Claiming Priority (14)

Application Number Priority Date Filing Date Title
IL17157705 2005-10-26
IL171577 2005-10-26
IL173409 2006-01-29
IL173409A IL173409A0 (en) 2006-01-29 2006-01-29 Fast string - matching and regular - expressions identification by natural liquid architectures (nla)
PCT/IL2006/001235 WO2007049282A2 (en) 2005-10-26 2006-10-26 A computing device, a system and a method for parallel processing of data streams
IL185414A IL185414A0 (en) 2005-10-26 2007-08-21 Large-scale matching system and method for multimedia deep-content-classification
IL185414 2007-08-21
US12/195,863 US8326775B2 (en) 2005-10-26 2008-08-21 Signature generation for multimedia deep-content-classification by a large-scale matching system and method thereof
US12/434,221 US8112376B2 (en) 2005-10-26 2009-05-01 Signature based system and methods for generation of personalized multimedia channels
US13/344,400 US8959037B2 (en) 2005-10-26 2012-01-05 Signature based system and methods for generation of personalized multimedia channels
US13/624,397 US9191626B2 (en) 2005-10-26 2012-09-21 System and methods thereof for visual analysis of an image on a web-page and matching an advertisement thereto
US201361789378P 2013-03-15 2013-03-15
US13/874,195 US9286623B2 (en) 2005-10-26 2013-04-30 Method for determining an area within a multimedia content element over which an advertisement can be displayed
US15/019,223 US20160171548A1 (en) 2005-10-26 2016-02-09 Method for identifying advertisements for placement in multimedia content elements

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/874,195 Continuation US9286623B2 (en) 2005-10-26 2013-04-30 Method for determining an area within a multimedia content element over which an advertisement can be displayed

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/783,187 Continuation US20200175550A1 (en) 2005-10-26 2020-02-06 Method for identifying advertisements for placement in multimedia content elements

Publications (1)

Publication Number Publication Date
US20160171548A1 true US20160171548A1 (en) 2016-06-16

Family

ID=49158523

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/874,195 Active 2027-02-14 US9286623B2 (en) 2005-10-26 2013-04-30 Method for determining an area within a multimedia content element over which an advertisement can be displayed
US15/019,223 Abandoned US20160171548A1 (en) 2005-10-26 2016-02-09 Method for identifying advertisements for placement in multimedia content elements

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US13/874,195 Active 2027-02-14 US9286623B2 (en) 2005-10-26 2013-04-30 Method for determining an area within a multimedia content element over which an advertisement can be displayed

Country Status (1)

Country Link
US (2) US9286623B2 (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160085733A1 (en) * 2005-10-26 2016-03-24 Cortica, Ltd. System and method thereof for dynamically associating a link to an information resource with a multimedia content displayed in a web-page
US11003706B2 (en) 2005-10-26 2021-05-11 Cortica Ltd System and methods for determining access permissions on personalized clusters of multimedia content elements
US10621988B2 (en) 2005-10-26 2020-04-14 Cortica Ltd System and method for speech to text translation using cores of a natural liquid architecture system
US9277255B1 (en) * 2013-03-15 2016-03-01 Google Inc. Metering of internet protocol video streams
CN106997390B (en) * 2017-04-05 2020-07-07 安徽机器猫电子商务股份有限公司 Commodity transaction information searching method for equipment accessories or parts
US12330646B2 (en) 2018-10-18 2025-06-17 Autobrains Technologies Ltd Off road assistance
US11488290B2 (en) 2019-03-31 2022-11-01 Cortica Ltd. Hybrid representation of a media unit
US12049116B2 (en) 2020-09-30 2024-07-30 Autobrains Technologies Ltd Configuring an active suspension
US12142005B2 (en) 2020-10-13 2024-11-12 Autobrains Technologies Ltd Camera based distance measurements
US12257949B2 (en) 2021-01-25 2025-03-25 Autobrains Technologies Ltd Alerting on driving affecting signal
US12139166B2 (en) 2021-06-07 2024-11-12 Autobrains Technologies Ltd Cabin preferences setting that is based on identification of one or more persons in the cabin
EP4194300A1 (en) 2021-08-05 2023-06-14 Autobrains Technologies LTD. Providing a prediction of a radius of a motorcycle turn
US12293560B2 (en) 2021-10-26 2025-05-06 Autobrains Technologies Ltd Context based separation of on-/off-vehicle points of interest in videos

Family Cites Families (150)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4972363A (en) 1989-02-01 1990-11-20 The Boeing Company Neural network using stochastic processing
US6052481A (en) 1994-09-02 2000-04-18 Apple Computers, Inc. Automatic method for scoring and clustering prototypes of handwritten stroke-based data
US7289643B2 (en) 2000-12-21 2007-10-30 Digimarc Corporation Method, apparatus and programs for generating and utilizing content signatures
JPH0981566A (en) 1995-09-08 1997-03-28 Toshiba Corp Method and device for translation
CA2166247A1 (en) 1995-12-28 1997-06-29 Ravi Shankar Ananth Supervisory circuit
US6076088A (en) 1996-02-09 2000-06-13 Paik; Woojin Information extraction system and method using concept relation concept (CRC) triples
US6243375B1 (en) 1996-11-08 2001-06-05 Gregory J. Speicher Internet-audiotext electronic communications system with multimedia based matching
US6122628A (en) 1997-10-31 2000-09-19 International Business Machines Corporation Multidimensional data clustering and dimension reduction for indexing and searching
US7313805B1 (en) 1998-11-30 2007-12-25 Sony Corporation Content navigator graphical user interface system and method
US20020123928A1 (en) 2001-01-11 2002-09-05 Eldering Charles A. Targeting ads to subscribers based on privacy-protected subscriber profiles
JP3585844B2 (en) 1999-01-29 2004-11-04 エルジー エレクトロニクス インコーポレーテッド Method for searching and generating multimedia data
US6819797B1 (en) 1999-01-29 2004-11-16 International Business Machines Corporation Method and apparatus for classifying and querying temporal and spatial information in video
CN1229996C (en) 1999-01-29 2005-11-30 三菱电机株式会社 Method of image features encoding and method of image search
US6381656B1 (en) 1999-03-10 2002-04-30 Applied Microsystems Corporation Method and apparatus for monitoring input/output (“I/O”) performance in I/O processors
US6643620B1 (en) 1999-03-15 2003-11-04 Matsushita Electric Industrial Co., Ltd. Voice activated controller for recording and retrieving audio/video programs
US6128651A (en) 1999-04-14 2000-10-03 Americom Usa Internet advertising with controlled and timed display of ad content from centralized system controller
US6763519B1 (en) 1999-05-05 2004-07-13 Sychron Inc. Multiprogrammed multiprocessor system with lobally controlled communication and signature controlled scheduling
US8055588B2 (en) 1999-05-19 2011-11-08 Digimarc Corporation Digital media methods
KR100326400B1 (en) 1999-05-19 2002-03-12 김광수 Method for generating caption location information, method for searching thereby, and reproducing apparatus using the methods
US6411724B1 (en) 1999-07-02 2002-06-25 Koninklijke Philips Electronics N.V. Using meta-descriptors to represent multimedia information
AUPQ206399A0 (en) 1999-08-06 1999-08-26 Imr Worldwide Pty Ltd. Network user measurement system and method
JP2001049923A (en) 1999-08-09 2001-02-20 Aisin Seiki Co Ltd Door closer device
KR100346262B1 (en) 1999-08-27 2002-07-26 엘지전자주식회사 Method of multimedia data keyword self formation
US6601026B2 (en) 1999-09-17 2003-07-29 Discern Communications, Inc. Information retrieval by natural language querying
US20030182567A1 (en) 1999-10-20 2003-09-25 Tivo Inc. Client-side multimedia content targeting system
US6665657B1 (en) 1999-11-19 2003-12-16 Niku Corporation Method and system for cross browsing of various multimedia data sources in a searchable repository
KR100357261B1 (en) 1999-12-30 2002-10-18 엘지전자 주식회사 Multimedia browser and structure to define and express the importance of a multimedia segment based on the semantic entities and the importance of a semantic entity based on the multimedia segments
CN1223181C (en) 2000-01-13 2005-10-12 皇家菲利浦电子有限公司 Noise reduction
US7047033B2 (en) 2000-02-01 2006-05-16 Infogin Ltd Methods and apparatus for analyzing, processing and formatting network information such as web-pages
US6804356B1 (en) 2000-03-20 2004-10-12 Koninklijke Philips Electronics N.V. Hierarchical authentication system for images and video
US6901207B1 (en) 2000-03-30 2005-05-31 Lsi Logic Corporation Audio/visual device for capturing, searching and/or displaying audio/visual material
US7260564B1 (en) 2000-04-07 2007-08-21 Virage, Inc. Network video guide and spidering
US7035873B2 (en) 2001-08-20 2006-04-25 Microsoft Corporation System and methods for providing adaptive media property classification
US7660737B1 (en) 2000-07-18 2010-02-09 Smartpenny.Com, Inc. Economic filtering system for delivery of permission based, targeted, incentivized advertising
US7464086B2 (en) 2000-08-01 2008-12-09 Yahoo! Inc. Metatag-based datamining
JP2004507820A (en) 2000-08-23 2004-03-11 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Method, client system and server system for improving content item rendering
AU2001295591A1 (en) 2000-10-13 2002-04-22 Fraunhofer-Gesellschaft Zur Forderung Der Angewandten Forschung E.V. A method for supervised teaching of a recurrent artificial neural network
US7146349B2 (en) 2000-11-06 2006-12-05 International Business Machines Corporation Network for describing multimedia information
JP3658761B2 (en) 2000-12-12 2005-06-08 日本電気株式会社 Image search system, image search method, and storage medium storing image search program
US20040128511A1 (en) 2000-12-20 2004-07-01 Qibin Sun Methods and systems for generating multimedia signature
US6728681B2 (en) 2001-01-05 2004-04-27 Charles L. Whitham Interactive multimedia book
CN1235408C (en) 2001-02-12 2006-01-04 皇家菲利浦电子有限公司 Generating and matching hashes of multimedia content
US7529659B2 (en) 2005-09-28 2009-05-05 Audible Magic Corporation Method and apparatus for identifying an unknown work
JP4153990B2 (en) 2001-08-02 2008-09-24 株式会社日立製作所 Data distribution method and system
US20030041047A1 (en) 2001-08-09 2003-02-27 International Business Machines Corporation Concept-based system for representing and processing multimedia objects with arbitrary constraints
AU2002329417A1 (en) 2001-09-27 2003-04-07 British Telecommunications Public Limited Company Method and apparatus for data analysis
US7353224B2 (en) 2001-12-04 2008-04-01 Hewlett-Packard Development Company, L.P. System and method for efficiently finding near-similar images in massive databases
US7020654B1 (en) 2001-12-05 2006-03-28 Sun Microsystems, Inc. Methods and apparatus for indexing content
US7392230B2 (en) 2002-03-12 2008-06-24 Knowmtech, Llc Physical neural network liquid state machine utilizing nanotechnology
DE60323086D1 (en) 2002-04-25 2008-10-02 Landmark Digital Services Llc ROBUST AND INVARIANT AUDIO COMPUTER COMPARISON
US20030191764A1 (en) 2002-08-06 2003-10-09 Isaac Richards System and method for acoustic fingerpringting
US7519616B2 (en) 2002-10-07 2009-04-14 Microsoft Corporation Time references for multimedia objects
US7734556B2 (en) 2002-10-24 2010-06-08 Agency For Science, Technology And Research Method and system for discovering knowledge from text documents using associating between concepts and sub-concepts
US20040107181A1 (en) 2002-11-14 2004-06-03 FIORI Product Development, Inc. System and method for capturing, storing, organizing and sharing visual, audio and sensory experience and event records
US7870279B2 (en) 2002-12-09 2011-01-11 Hrl Laboratories, Llc Method and apparatus for scanning, personalizing, and casting multimedia data streams via a communication network and television
US7694318B2 (en) 2003-03-07 2010-04-06 Technology, Patents & Licensing, Inc. Video detection and insertion
EP1668903A4 (en) 2003-09-12 2011-01-05 Nielsen Media Res Inc Digital video signature apparatus and methods for use with video program identification systems
FR2861937A1 (en) 2003-10-30 2005-05-06 Thomson Licensing Sa NAVIGATION METHOD DISPLAYING A MOBILE WINDOW, RECEIVER EMPLOYING THE METHOD
CA2498364C (en) 2004-02-24 2012-05-15 Dna 13 Inc. System and method for real-time media searching and alerting
FR2867561B1 (en) 2004-03-11 2007-02-02 Commissariat Energie Atomique DISTRIBUTED MEASUREMENT SYSTEM OF THE CURVES OF A STRUCTURE
US20070300142A1 (en) 2005-04-01 2007-12-27 King Martin T Contextual dynamic advertising based upon captured rendered text
US7697791B1 (en) 2004-05-10 2010-04-13 Google Inc. Method and system for providing targeted documents based on concepts automatically identified therein
KR20070020256A (en) 2004-05-28 2007-02-20 코닌클리케 필립스 일렉트로닉스 엔.브이. Method and apparatus for content item signature matching
US20080201299A1 (en) 2004-06-30 2008-08-21 Nokia Corporation Method and System for Managing Metadata
US7461312B2 (en) 2004-07-22 2008-12-02 Microsoft Corporation Digital signature generation for hardware functional test
US7487072B2 (en) 2004-08-04 2009-02-03 International Business Machines Corporation Method and system for querying multimedia data where adjusting the conversion of the current portion of the multimedia data signal based on the comparing at least one set of confidence values to the threshold
US7914468B2 (en) 2004-09-22 2011-03-29 Svip 4 Llc Systems and methods for monitoring and modifying behavior
US7526607B1 (en) 2004-09-23 2009-04-28 Juniper Networks, Inc. Network acceleration and long-distance pattern detection using improved caching and disk mapping
US20090253583A1 (en) 2004-09-27 2009-10-08 Med Biogene Inc. Hematological Cancer Profiling System
US7929728B2 (en) 2004-12-03 2011-04-19 Sri International Method and apparatus for tracking a movable object
WO2006075902A1 (en) 2005-01-14 2006-07-20 Samsung Electronics Co., Ltd. Method and apparatus for category-based clustering using photographic region templates of digital photo
US20070050446A1 (en) 2005-02-01 2007-03-01 Moore James F Managing network-accessible resources
US7769221B1 (en) 2005-03-10 2010-08-03 Amazon Technologies, Inc. System and method for visual verification of item processing
US20060236343A1 (en) 2005-04-14 2006-10-19 Sbc Knowledge Ventures, Lp System and method of locating and providing video content via an IPTV network
US8732175B2 (en) 2005-04-21 2014-05-20 Yahoo! Inc. Interestingness ranking of media objects
US10740722B2 (en) 2005-04-25 2020-08-11 Skyword Inc. User-driven media system in a computer network
US20060253423A1 (en) 2005-05-07 2006-11-09 Mclane Mark Information retrieval system and method
US7433895B2 (en) 2005-06-24 2008-10-07 Microsoft Corporation Adding dominant media elements to search results
US7788132B2 (en) 2005-06-29 2010-08-31 Google, Inc. Reviewing the suitability of Websites for participation in an advertising network
US7831582B1 (en) 2005-08-23 2010-11-09 Amazon Technologies, Inc. Method and system for associating keywords with online content sources
US8677377B2 (en) 2005-09-08 2014-03-18 Apple Inc. Method and apparatus for building an intelligent automated assistant
US8312031B2 (en) 2005-10-26 2012-11-13 Cortica Ltd. System and method for generation of complex signatures for multimedia data content
US8326775B2 (en) 2005-10-26 2012-12-04 Cortica Ltd. Signature generation for multimedia deep-content-classification by a large-scale matching system and method thereof
US9087049B2 (en) 2005-10-26 2015-07-21 Cortica, Ltd. System and method for context translation of natural language
US9218606B2 (en) 2005-10-26 2015-12-22 Cortica, Ltd. System and method for brand monitoring and trend analysis based on deep-content-classification
US8655801B2 (en) 2005-10-26 2014-02-18 Cortica, Ltd. Computing device, a system and a method for parallel processing of data streams
US9191626B2 (en) 2005-10-26 2015-11-17 Cortica, Ltd. System and methods thereof for visual analysis of an image on a web-page and matching an advertisement thereto
US9031999B2 (en) 2005-10-26 2015-05-12 Cortica, Ltd. System and methods for generation of a concept based database
US8266185B2 (en) 2005-10-26 2012-09-11 Cortica Ltd. System and methods thereof for generation of searchable structures respective of multimedia data content
US7730405B2 (en) 2005-12-07 2010-06-01 Iac Search & Media, Inc. Method and system to present video content
US11477617B2 (en) 2006-03-20 2022-10-18 Ericsson Evdo Inc. Unicasting and multicasting multimedia services
US20070244902A1 (en) 2006-04-17 2007-10-18 Microsoft Corporation Internet search-based television
HK1094647A2 (en) 2006-04-19 2007-04-04 面面通网络系统有限公司 System and method for distributing targeted content
US8009861B2 (en) 2006-04-28 2011-08-30 Vobile, Inc. Method and system for fingerprinting digital video object based on multiresolution, multirate spatial and temporal signatures
US7536417B2 (en) 2006-05-24 2009-05-19 Microsoft Corporation Real-time analysis of web browsing behavior
US7921116B2 (en) 2006-06-16 2011-04-05 Microsoft Corporation Highly meaningful multimedia metadata creation and associations
US8098934B2 (en) 2006-06-29 2012-01-17 Google Inc. Using extracted image text
US20080040278A1 (en) 2006-08-11 2008-02-14 Dewitt Timothy R Image recognition authentication and advertising system
US20080040277A1 (en) 2006-08-11 2008-02-14 Dewitt Timothy R Image Recognition Authentication and Advertising Method
US20080049629A1 (en) 2006-08-22 2008-02-28 Morrill Robert J System and method for monitoring data link layer devices and optimizing interlayer network performance
US9318108B2 (en) 2010-01-18 2016-04-19 Apple Inc. Intelligent automated assistant
US20080091527A1 (en) 2006-10-17 2008-04-17 Silverbrook Research Pty Ltd Method of charging for ads associated with predetermined concepts
WO2008077119A2 (en) 2006-12-19 2008-06-26 Ortiva Wireless Intelligent video signal encoding utilizing regions of interest information
US8312558B2 (en) 2007-01-03 2012-11-13 At&T Intellectual Property I, L.P. System and method of managing protected video content
US20090245603A1 (en) 2007-01-05 2009-10-01 Djuro Koruga System and method for analysis of light-matter interaction based on spectral convolution
US20080201314A1 (en) 2007-02-20 2008-08-21 John Richard Smith Method and apparatus for using multiple channels of disseminated data content in responding to information requests
JP2008250654A (en) 2007-03-30 2008-10-16 Alpine Electronics Inc Video player and video playback control method
US20080256056A1 (en) 2007-04-10 2008-10-16 Yahoo! Inc. System for building a data structure representing a network of users and advertisers
US7974994B2 (en) 2007-05-14 2011-07-05 Microsoft Corporation Sensitive webpage content detection
US8171030B2 (en) 2007-06-18 2012-05-01 Zeitera, Llc Method and apparatus for multi-dimensional content search and video identification
US8627509B2 (en) 2007-07-02 2014-01-07 Rgb Networks, Inc. System and method for monitoring content
US8417037B2 (en) 2007-07-16 2013-04-09 Alexander Bronstein Methods and systems for representation and matching of video content
US8358840B2 (en) 2007-07-16 2013-01-22 Alexander Bronstein Methods and systems for representation and matching of video content
US20110145068A1 (en) 2007-09-17 2011-06-16 King Martin T Associating rendered advertisements with digital content
US7987194B1 (en) 2007-11-02 2011-07-26 Google Inc. Targeting advertisements based on cached contents
US20090119157A1 (en) 2007-11-02 2009-05-07 Wise Window Inc. Systems and method of deriving a sentiment relating to a brand
US20090125529A1 (en) 2007-11-12 2009-05-14 Vydiswaran V G Vinod Extracting information based on document structure and characteristics of attributes
US20090148045A1 (en) 2007-12-07 2009-06-11 Microsoft Corporation Applying image-based contextual advertisements to images
US20090172730A1 (en) 2007-12-27 2009-07-02 Jeremy Schiff System and method for advertisement delivery optimization
US9117219B2 (en) 2007-12-31 2015-08-25 Peer 39 Inc. Method and a system for selecting advertising spots
US8065143B2 (en) 2008-02-22 2011-11-22 Apple Inc. Providing text input using speech data and non-speech data
US20090216639A1 (en) 2008-02-25 2009-08-27 Mark Joseph Kapczynski Advertising selection and display based on electronic profile information
US8344233B2 (en) 2008-05-07 2013-01-01 Microsoft Corporation Scalable music recommendation by search
US20110055585A1 (en) 2008-07-25 2011-03-03 Kok-Wah Lee Methods and Systems to Create Big Memorizable Secrets and Their Applications in Information Engineering
MX2011001959A (en) 2008-08-18 2012-02-08 Ipharro Media Gmbh Supplemental information delivery.
US8898568B2 (en) 2008-09-09 2014-11-25 Apple Inc. Audio user interface
US20100082684A1 (en) 2008-10-01 2010-04-01 Yahoo! Inc. Method and system for providing personalized web experience
US8000655B2 (en) 2008-12-19 2011-08-16 Telefonaktiebolaget L M Ericsson (Publ) Uplink multi-cell signal processing for interference suppression
US9317684B2 (en) 2008-12-23 2016-04-19 Valve Corporation Protecting against polymorphic cheat codes in a video game
US20100191567A1 (en) 2009-01-26 2010-07-29 At&T Intellectual Property I, L.P. Method and apparatus for analyzing rhetorical content
US10326848B2 (en) 2009-04-17 2019-06-18 Empirix Inc. Method for modeling user behavior in IP networks
US8359315B2 (en) 2009-06-11 2013-01-22 Rovi Technologies Corporation Generating a representative sub-signature of a cluster of signatures by using weighted sampling
US8406532B2 (en) 2009-06-17 2013-03-26 Chevron U.S.A., Inc. Image matching using line signature
US8417096B2 (en) 2009-09-14 2013-04-09 Tivo Inc. Method and an apparatus for determining a playing position based on media content fingerprints
US9710491B2 (en) 2009-11-02 2017-07-18 Microsoft Technology Licensing, Llc Content-based image search
US8875038B2 (en) * 2010-01-19 2014-10-28 Collarity, Inc. Anchoring for content synchronization
US20110208822A1 (en) 2010-02-22 2011-08-25 Yogesh Chunilal Rathod Method and system for customized, contextual, dynamic and unified communication, zero click advertisement and prospective customers search engine
US8682667B2 (en) 2010-02-25 2014-03-25 Apple Inc. User profiling for selecting user specific voice input processing information
US20110251896A1 (en) 2010-04-09 2011-10-13 Affine Systems, Inc. Systems and methods for matching an advertisement to a video
US20120131454A1 (en) 2010-11-24 2012-05-24 Siddharth Shah Activating an advertisement by performing gestures on the advertisement
US20120167133A1 (en) 2010-12-23 2012-06-28 Carroll John W Dynamic content insertion using content signatures
US10409851B2 (en) 2011-01-31 2019-09-10 Microsoft Technology Licensing, Llc Gesture-based search
US20120330869A1 (en) 2011-06-25 2012-12-27 Jayson Theordore Durham Mental Model Elicitation Device (MMED) Methods and Apparatus
US20130067035A1 (en) 2011-09-08 2013-03-14 Bubble Ads Holdings Llc System and method for cloud based delivery and display of content on mobile devices
US20130086499A1 (en) 2011-09-30 2013-04-04 Matthew G. Dyor Presenting auxiliary content in a gesture-based system
US8620718B2 (en) 2012-04-06 2013-12-31 Unmetric Inc. Industry specific brand benchmarking system based on social media strength of a brand
US20140019264A1 (en) 2012-05-07 2014-01-16 Ditto Labs, Inc. Framework for product promotion and advertising using social networking services
US8775442B2 (en) 2012-05-15 2014-07-08 Apple Inc. Semantic search using a single-source semantic model
US9888289B2 (en) 2012-09-29 2018-02-06 Smartzer Ltd Liquid overlay for video content
US8922414B2 (en) 2013-02-12 2014-12-30 Cortica, Ltd. Multi-layer system for symbol-space based compression of patterns

Also Published As

Publication number Publication date
US9286623B2 (en) 2016-03-15
US20130246166A1 (en) 2013-09-19

Similar Documents

Publication Publication Date Title
US20200175550A1 (en) Method for identifying advertisements for placement in multimedia content elements
US9286623B2 (en) Method for determining an area within a multimedia content element over which an advertisement can be displayed
US9191626B2 (en) System and methods thereof for visual analysis of an image on a web-page and matching an advertisement thereto
US9235557B2 (en) System and method thereof for dynamically associating a link to an information resource with a multimedia content displayed in a web-page
US9792620B2 (en) System and method for brand monitoring and trend analysis based on deep-content-classification
US9466068B2 (en) System and method for determining a pupillary response to a multimedia data element
US10742340B2 (en) System and method for identifying the context of multimedia content elements displayed in a web-page and providing contextual filters respective thereto
US10380164B2 (en) System and method for using on-image gestures and multimedia content elements as search queries
US10380623B2 (en) System and method for generating an advertisement effectiveness performance score
US10210257B2 (en) Apparatus and method for determining user attention using a deep-content-classification (DCC) system
US20130191323A1 (en) System and method for identifying the context of multimedia content elements displayed in a web-page
US11537636B2 (en) System and method for using multimedia content as search queries
US20130191368A1 (en) System and method for using multimedia content as search queries
US10387914B2 (en) Method for identification of multimedia content elements and adding advertising content respective thereof
US20140200971A1 (en) System and method for matching informative content to a multimedia content element based on concept recognition of the multimedia content
US9558449B2 (en) System and method for identifying a target area in a multimedia content element
US11954168B2 (en) System and method thereof for dynamically associating a link to an information resource with a multimedia content displayed in a web-page
US11604847B2 (en) System and method for overlaying content on a multimedia content element based on user interest
US9767143B2 (en) System and method for caching of concept structures
JP5013840B2 (en) Information providing apparatus, information providing method, and computer program
US20140258328A1 (en) System and method for visual determination of the correlation between a multimedia content element and a plurality of keywords
US20150128024A1 (en) System and method for matching content to multimedia content respective of analysis of user variables

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

AS Assignment

Owner name: CORTICA LTD, ISRAEL

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:RAICHELGAUZ, IGAL;ODINAEV, KARINA;ZEEVI, YEHOSHUA Y;REEL/FRAME:047952/0252

Effective date: 20181125

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION