US8782689B2 - Method of broadcasting content and at least one complementary element, utilizing a server and a terminal - Google Patents
Method of broadcasting content and at least one complementary element, utilizing a server and a terminal
- Publication number
- US8782689B2 US8782689B2 US12/664,602 US66460208A US8782689B2 US 8782689 B2 US8782689 B2 US 8782689B2 US 66460208 A US66460208 A US 66460208A US 8782689 B2 US8782689 B2 US 8782689B2
- Authority
- US
- United States
- Prior art keywords
- content
- terminal
- contextual index
- server
- complementary
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/435—Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/233—Processing of audio elementary streams
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/23418—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/235—Processing of additional data, e.g. scrambling of additional data or processing content descriptors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/258—Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
- H04N21/25808—Management of client data
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/8126—Monomedia components thereof involving additional data, e.g. news, sports, stocks, weather forecasts
- H04N21/8133—Monomedia components thereof involving additional data, e.g. news, sports, stocks, weather forecasts specifically related to the content, e.g. biography of the actors in a movie, detailed information about an article seen in a video program
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/858—Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot
- H04N21/8586—Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot by using a URL
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/60—Network streaming of media packets
- H04L65/61—Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
- H04L65/611—Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio for multicast or broadcast
Definitions
- the field of the disclosure is that of the broadcasting of contextual information elements from an indexed multi-media content publication platform.
- the technical field of the disclosure relates to multimedia applications and the use of information about content usage in order to send a relevant additional content, or additional data, which will be included in the final presentation made to the user, the relevant additional content being dependent upon the content previously broadcast on the terminal of said user.
- aspects of disclosure may be used in a great number of multi-media applications and particularly those that require a representation of the constituent signals thereof in the form of a spatiotemporal arrangement of graphic objects, such as for example “RichMedia” applications on mobiles or “RichInternet” applications on the Web.
- the disclosure applies to already known graphic scene description formats such as MPEG-4/BIFS (Binary Format Scene), MPEG-4/LASeR (Lightweight Application Scene Representation), SVG (Scalable Vector Graphics), SMIL (Synchronised Multimedia Integration Language), XHTML (eXtensible HyperText Markup Language), etc.
- said method comprises the following steps:
- An embodiment of the invention is thus based on a new and inventive approach to the insertion of complementary elements into a content, based on the detection of indices.
- An inventive method implements index detection algorithms in order to determine whether one or more complementary elements are to be combined with the broadcast content, also known as start content.
- the insertion of complementary elements into a content is based on the presence of certain indices in the content, so as to be in harmony with the content being broadcast. Since these indices can be highly varied, they are detectable by detection functions that are also of varied natures and complexities.
- said detection functions implement at least one of the operations belonging to the group that includes:
- an index detection function may comprise comparing a plurality of elements, in other words detecting whether an element defined in the detection function is present in the content.
- said elements belong to the group that includes:
- a detection function is thus able to detect for example whether a particular geometric shape, or particular texture, or particular colour is present in the content.
- a detection function is also able to detect whether a particular geometric shape and a particular colour are both present.
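The elementary presence tests above can be sketched as a small detection function. The content model, field names and example values below are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class Element:
    """A primitive element of the content (fields are illustrative)."""
    shape: str
    colour: str

def detect(content, shape=None, colour=None):
    """Return True if some element matches every criterion given.

    Criteria left as None are ignored, so the same function covers
    'shape only', 'colour only' and combined 'shape AND colour' detection.
    """
    for elem in content:
        if shape is not None and elem.shape != shape:
            continue
        if colour is not None and elem.colour != colour:
            continue
        return True
    return False

content = [Element("circle", "red"), Element("square", "blue")]
print(detect(content, shape="square"))                 # a square is present
print(detect(content, shape="circle", colour="blue"))  # no blue circle
```

Passing both keyword arguments implements the combined shape-and-colour detection mentioned above.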
- More complex index detection functions may be implemented by the method, such as voice recognition, possibly speaker independent (multi-speaker), or image or character recognition.
- a detection function is also able to comprise visual scene recognition.
- said detection functions take into account at least one predetermined detection criterion belonging to the group that includes:
- a detection function based on these features will not be applied to the content.
- the application of a particular detection function may take into account the localisation of the terminal, for example whether the complementary element to be inserted is linked to a particular localisation (advertising message for a local event for example).
- the application of detection functions may also take into account the user of the terminal, who is pre-identified, so as to take into consideration a particular user-related profile.
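One way such predetermined criteria might gate which detection functions are applied is sketched below. The record fields (`locale`, `profile`) and the example data are assumptions made for illustration:

```python
def select_functions(functions, terminal):
    """Keep only the detection functions whose criteria match the terminal.

    Each function record carries optional 'locale' and 'profile' criteria;
    a record with no criteria always applies.
    """
    selected = []
    for f in functions:
        if f.get("locale") and f["locale"] != terminal["locale"]:
            continue
        if f.get("profile") and f["profile"] != terminal["profile"]:
            continue
        selected.append(f["name"])
    return selected

functions = [
    {"name": "local_event_ad", "locale": "FR"},       # localisation-linked
    {"name": "sports_banner", "profile": "sports_fan"},  # profile-linked
    {"name": "generic_logo"},                          # always applicable
]
terminal = {"locale": "FR", "profile": "music_fan"}
print(select_functions(functions, terminal))  # ['local_event_ad', 'generic_logo']
```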
- said step of applying at least one detection function is implemented in said terminal.
- said at least one detection function is applied to the part of the content rendered on said terminal corresponding to said broadcast content.
- the detection functions can thus be implemented directly in the terminal on which the content is rendered.
- the detection functions may be applied either to the broadcast content, intended to be rendered on the terminal, or to only one part of the broadcast content, said part being able for example to be the part which is rendered on the terminal.
- said step of applying at least one detection function is implemented in a broadcasting system component belonging to the group that includes:
- a broadcasting system including the terminal on which the content is rendered is also able to include other components, such as for example a multiplexer and/or a broadcasting server and/or a services server. The detection functions are therefore also able to be implemented in one of these broadcasting system components, rather than in the terminal.
- said complementary element is of a type belonging to the group that includes:
- a complementary element is thus able to correspond to a multi-media element or RichMedia type represented in space and time, on the terminal screen or the other input-output modules of the terminal such as the loudspeaker, the keys etc.
- a complementary element is also able to be interactive, in other words bring about interactions with the user once combined with the start content.
- a complementary element is also able to be a pointer to a Web address (URL).
- said step of providing said complementary element includes a step of transmitting said complementary element from a server including said database to said terminal, and said combination of said content and said complementary element is implemented dynamically in said terminal.
- the complementary element or elements corresponding to these indices can be combined with the broadcast content.
- These complementary elements may come from a server including the database in which the detection functions are stored and can be transmitted to the terminal, in order to be combined, in the terminal, with broadcast content.
- said step of providing said complementary element includes a step of transmitting said complementary element from a server including said database to said terminal through a multiplexer or a broadcasting server.
- the complementary elements can thus be transmitted from a server to the terminal through another broadcasting system component, for example a multiplexer or a broadcasting server.
- said combination of said content and said complementary element is implemented in a broadcasting system component belonging to the group that includes:
- the complementary elements are thus not transmitted directly to the terminal, so as to be combined therein with the start content, but are combined with the content in one of the system components, such as a multiplexer, a broadcasting server or a services server.
- the combination of complementary elements and start content, known as complete content, is then broadcast to the terminal.
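The combination step can be sketched as follows, modelling the start content as an ordered list of segments (a simplifying assumption) and inserting the complementary element at the point where the index was detected:

```python
def combine(start_content, complementary, position):
    """Insert a complementary element into the start content at the given
    position, yielding the complete content to be broadcast or rendered."""
    return start_content[:position] + [complementary] + start_content[position:]

start = ["intro", "scene_with_index", "outro"]
complete = combine(start, "advert", position=2)
print(complete)  # ['intro', 'scene_with_index', 'advert', 'outro']
```

In the system described, this step may run either in the terminal or in an upstream component such as the multiplexer or a server; the function itself is the same in both cases.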
- An embodiment of the invention also relates to a server including a database storing at least one detection function.
- each of said detection functions is associated with at least one complementary element.
- said server includes means for providing at least one complementary element associated with at least one of said detection functions delivering a positive response.
- said server also includes means for applying, to a content broadcast to a terminal, at least one of said detection functions stored in said database.
- Said server is thus capable of implementing one or more detection functions on the start content, and of providing one or more complementary elements corresponding to these functions.
- said server further includes means for combining said content and at least one complementary element.
- Said server is thus also capable of effecting a combination between a complementary element and a start content.
- an embodiment of the invention also relates to a terminal capable of receiving and rendering multimedia contents, consisting of at least one elementary component.
- said terminal includes means for applying, to a content intended to be rendered on said terminal, at least one contextual index detection function, stored in a database, said detection function implementing an analysis of at least one of said elementary components and delivering an information cue regarding the presence or absence of said contextual index.
- Said terminal is thus capable of implementing one or more index detection functions, thereby making it possible to determine whether one or more complementary elements are to be inserted into the start content.
- FIG. 1 shows an example of a system wherein an inventive method is implemented.
- FIG. 2 shows the main steps in an inventive broadcasting method.
- An example of an inventive principle lies in the detection, in a broadcast content, of indices indicating that one or more additional contents are to be proposed to the content user.
- the indices to be detected are referenced, in the form of detection functions, in a database on a server, and are associated with one or more additional contents.
- the detection of these indices therefore comprises applying to the broadcast content, or a part of said content, one or more detection functions stored in the database.
- the complementary element which is associated with it in the database is provided to the terminal, directly or through other elements of the broadcasting system in which the inventive method is implemented, with a view to combining it with the broadcast content.
- FIG. 1 shows an example of a broadcasting system wherein the inventive broadcasting method is implemented.
- the broadcasting system includes a plurality of servers 2 , 6 , 11 , 12 , a multiplexer, two networks 8 , 10 , and a terminal 1 , to which the content is broadcast.
- the server 2 contains in particular the database 3 including the detection functions associated with the complementary elements to be broadcast.
- the server 12 is a multimedia audiovisual content server, connected to the server 2 , to the distribution server 11 and to the broadcasting server 6 .
- the server 11 is itself connected to the terminal 1 by the distribution network 10 .
- the server 6 is connected to a multiplexer 5 , itself connected to the terminal 1 by the telecommunications network 8 .
- the terminal 1 is also connected to the server 2 .
- the terminal 1 , the multiplexer 5 and the servers 6 , 2 may each contain a combination module denoted 7 a , 7 b , 7 c , 7 d respectively, capable of effecting the combination of one or more complementary elements with the content, and a reference value denoted 9 a , 9 b , 9 c and 9 d respectively, used by some detection functions as described below.
- the different steps described below relate to the implementation of a detection function, and may be repeated, simultaneously or successively, for one and the same content, according to predetermined criteria.
- the method may apply by default a certain number of detection functions, successively, whatever the result thereof, and a plurality of different complementary elements may therefore be inserted into one and the same content, if a plurality of indices are detected.
- different detection functions may be applied successively, in the event of a failure of the previous one, and the method stops implementing detection functions as soon as the first index is detected.
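The two strategies just described, applying every detection function regardless of earlier results versus stopping at the first detected index, might be sketched as follows. The pairing of each function with its complementary element, and the toy predicates, are assumptions:

```python
def detect_all(functions, content):
    """Apply every detection function regardless of earlier results;
    several complementary elements may be gathered for one content."""
    return [elem for fn, elem in functions if fn(content)]

def detect_first(functions, content):
    """Apply functions in turn and stop at the first positive result,
    so at most one complementary element is inserted."""
    for fn, elem in functions:
        if fn(content):
            return [elem]
    return []

# Each entry pairs a detection function with its complementary element.
functions = [
    (lambda c: "goal" in c, "replay_banner"),
    (lambda c: "music" in c, "concert_ad"),
]
content = {"goal", "music"}
print(detect_all(functions, content))    # ['replay_banner', 'concert_ad']
print(detect_first(functions, content))  # ['replay_banner']
```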
- the first step 20 therefore comprises applying a detection function to at least one part of the broadcast content, in order to detect at least one index making it possible to determine whether or not a complementary element is to be inserted into the broadcast content.
- the index detection function or functions are stored in a database, on a server, and are applied to the content, or a part of the content, as a function of predetermined detection criteria.
- the detection functions may be applied, during the broadcasting of the content, to a part of the content which will be broadcast subsequently. This alternative makes it possible to anticipate index detection and the transmission of complementary elements to be inserted into the content.
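Anticipatory detection on parts of the content that will be broadcast later can be sketched as a look-ahead over upcoming segments. The segment/playhead model used here is an assumption:

```python
def anticipate(content_segments, playhead, lookahead, detect):
    """Run detection on segments that will be broadcast after the current
    playhead, so complementary elements can be fetched in advance.
    Returns the indices of upcoming segments in which an index was found."""
    upcoming = content_segments[playhead + 1 : playhead + 1 + lookahead]
    return [i for i, seg in enumerate(upcoming, start=playhead + 1) if detect(seg)]

segments = ["news", "weather", "goal replay", "adverts", "goal replay"]
hits = anticipate(segments, playhead=0, lookahead=3, detect=lambda s: "goal" in s)
print(hits)  # [2] : an index will appear two segments ahead
```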
- one or more complementary elements associated with this index are made available at the next step 21 of providing this (or these) complementary element(s).
- the complementary element or elements associated with the detected index are therefore made available to the broadcasting system component (as described previously in relation to FIG. 1 ) which will be responsible for the process of combining these complementary elements with the start content.
- the complementary elements provided are therefore inserted into the start content, delivering a complete content.
- This last step may therefore be implemented in the terminal, or in one of the other components of the broadcasting system, such as a multiplexer, a broadcasting server or a services server.
- a first detection function example comprises detecting, in the content, the presence of an element, known as a primitive element or elementary component, such as for example an image or a sound, identical or similar to a reference primitive element, associated with the detection function under consideration.
- Said detection function implements a comparison of the primitive elements of the content with the reference primitive element and delivers a positive or negative comparison result.
- the comparison may comprise a calculation of the distance between a reference primitive element and a primitive element detected and presumed similar to the reference primitive element. If the distance between these two elements is below a predetermined threshold, also known as a reference value ( 9 a to 9 d in FIG. 1 ), then the detection function delivers a positive result, otherwise it delivers a negative result.
- the complementary element associated with the detection function is provided for the purpose of combining it with the content.
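The threshold comparison against a reference value can be sketched as follows. Representing primitive elements as numeric feature vectors and using a Euclidean distance are assumptions made for illustration:

```python
import math

def distance(a, b):
    """Euclidean distance between two feature vectors describing
    primitive elements (the feature representation is an assumption)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def detection_function(reference, threshold):
    """Build a detection function: positive when some primitive element
    of the content lies within 'threshold' of the reference element."""
    def detect(content_elements):
        return any(distance(e, reference) < threshold for e in content_elements)
    return detect

# Hypothetical reference element and reference value (threshold).
detect_logo = detection_function(reference=(0.9, 0.1, 0.4), threshold=0.2)
print(detect_logo([(0.85, 0.15, 0.45), (0.1, 0.9, 0.2)]))  # True: close match
print(detect_logo([(0.1, 0.9, 0.2)]))                      # False: too distant
```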
- Another more complex detection function example comprises effecting a speaker independent (multi-speaker) voice recognition.
- This detection function is not a straightforward comparison of two sounds, but a comparison of two utterances by different voices.
- the detection function corresponds to a voice recognition algorithm, applied to the content, delivering a positive response in the event of successful recognition of a key phrase utterance and negative in the contrary event.
- the detection function is not a straightforward comparison between two sounds, but corresponds to a method which extracts a predetermined number of parameters from the content and compares them with the reference parameters associated with the detection function, in order to render the comparison independent of the effects related for example to the encoding, or to the sound montage, in the content.
- Another detection function used in the inventive broadcasting method comprises effecting a character recognition, so as to recognise text “buried” in an image for example.
- the detection function implements a character recognition algorithm applied to a visual content, delivering a positive response in the event of successful recognition of a particular character string and negative in the contrary event.
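The final string-matching stage of such a detection function might look as follows. The OCR step itself is out of scope here, so the function operates on text assumed to have already been recovered from the image, and all names are hypothetical:

```python
def make_string_detector(key_string):
    """Build a detection function over the text recovered by a character
    recognition step; positive when the key character string is found."""
    def detect(recognised_text):
        # Case-insensitive match, since OCR output casing may vary.
        return key_string.lower() in recognised_text.lower()
    return detect

detect_brand = make_string_detector("StadiumCola")
print(detect_brand("Welcome to the STADIUMCOLA arena"))  # True
print(detect_brand("Half-time score 1-0"))               # False
```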
- a detection function may also comprise detecting a complex element known as a “sensory outcome”.
- a sensory outcome may result from the audiovisual composition of primitive elements (images, graphic animation, sounds) and/or from processes conducted on these primitive elements (such as filtering, colour reduction, symmetry, etc).
- a sensory outcome is for example a change of screen background colour, a change of character font etc., and corresponds to a complex graphic element presented to the user.
- the inventive broadcasting method may apply these different detection function types successively, whatever their respective results, if it is wished to complete the content with as many complementary elements as possible, or may interrupt the application of the detection functions as soon as the first positive result is obtained, so as to broadcast only one complementary element.
- whatever the detection function implemented in the inventive broadcasting method, it may be applied to the whole content or only to one part of said content, for example a specific part of the screen in the case of image detection. This distinction is indicated in the detection function itself.
- Another possible alternative comprises applying the detection functions to the part of the content broadcast during rendering: this applies to complex contents that have a plurality of sub-parts which are not rendered at the same time.
- the advantage of this alternative is that it renders the complementary element or elements only on rendering of the broadcast content part which contains the detection source.
- a complementary element may be of the RichMedia type, such as sound, video, animated vector graphics, user interactions etc.
- a complementary element is of the spatiotemporal type, in other words it corresponds to a multimedia element, or RichMedia type, represented in space and time on the screen and the other input-output modules (loudspeaker, ringer, keys, etc) of the terminal on which the broadcast content is rendered.
- An embodiment of the invention proposes a technique suitable for multi-media environments.
- An embodiment provides a technique for broadcasting additional contents which is straightforward to implement and inexpensive in terms of referencing these additional contents.
- An embodiment of the invention allows the implementation of said technique in the case of live contents.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Databases & Information Systems (AREA)
- Computer Graphics (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR0755745A FR2917553B1 (fr) | 2007-06-13 | 2007-06-13 | Method for broadcasting a complementary element, and corresponding server and terminal |
FR0755745 | 2007-06-13 | ||
PCT/EP2008/057043 WO2008155240A2 (fr) | 2007-06-13 | 2008-06-05 | Method for broadcasting a complementary element, and corresponding server and terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
US20100175083A1 US20100175083A1 (en) | 2010-07-08 |
US8782689B2 true US8782689B2 (en) | 2014-07-15 |
Family
ID=39111990
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/664,602 Active 2030-11-21 US8782689B2 (en) | 2007-06-13 | 2008-06-05 | Method of broadcasting content and at least one complementary element, utilizing a server and a terminal |
Country Status (5)
Country | Link |
---|---|
US (1) | US8782689B2 (fr) |
EP (1) | EP2156644A2 (fr) |
CN (1) | CN101715642A (fr) |
FR (1) | FR2917553B1 (fr) |
WO (1) | WO2008155240A2 (fr) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101286351B (zh) * | 2008-05-23 | 2011-02-23 | Guangzhou Shiyuan Electronic Technology Co., Ltd. | Method and system for generating a value-added description file for streaming media and inserting multimedia information |
US9621932B2 (en) * | 2012-02-28 | 2017-04-11 | Google Inc. | Enhancing live broadcast viewing through display of filtered internet information streams |
US20150319506A1 (en) * | 2014-04-30 | 2015-11-05 | Netflix, Inc. | Displaying data associated with a program based on automatic recognition |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5805763A (en) * | 1995-05-05 | 1998-09-08 | Microsoft Corporation | System and method for automatically recording programs in an interactive viewing system |
WO2001022729A1 (fr) | 1999-09-20 | 2001-03-29 | Tivo, Inc. | Closed caption tagging system |
US6389168B2 (en) * | 1998-10-13 | 2002-05-14 | Hewlett Packard Co | Object-based parsing and indexing of compressed video streams |
US20020115047A1 (en) * | 2001-02-16 | 2002-08-22 | Golftec, Inc. | Method and system for marking content for physical motion analysis |
US20040249650A1 (en) * | 2001-07-19 | 2004-12-09 | Ilan Freedman | Method apparatus and system for capturing and analyzing interaction based content |
US20050271251A1 (en) * | 2004-03-16 | 2005-12-08 | Russell Stephen G | Method for automatically reducing stored data in a surveillance system |
WO2006114796A1 (fr) | 2005-04-25 | 2006-11-02 | Hewlett-Packard Development Company, L.P. | Systems and methods for evaluating audience interest levels in broadcast media and for providing supplemental information based on the interest levels |
US20090089064A1 (en) * | 2007-09-28 | 2009-04-02 | International Business Machines Corporation | System, method and architecture for control and multi-modal synchronization of speech browsers |
US20120180083A1 (en) * | 2000-09-08 | 2012-07-12 | Ntech Properties, Inc. | Method and apparatus for creation, distribution, assembly and verification of media |
-
2007
- 2007-06-13 FR FR0755745A patent/FR2917553B1/fr active Active
-
2008
- 2008-06-05 US US12/664,602 patent/US8782689B2/en active Active
- 2008-06-05 EP EP08760617A patent/EP2156644A2/fr not_active Withdrawn
- 2008-06-05 WO PCT/EP2008/057043 patent/WO2008155240A2/fr active Application Filing
- 2008-06-05 CN CN200880019788A patent/CN101715642A/zh active Pending
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5805763A (en) * | 1995-05-05 | 1998-09-08 | Microsoft Corporation | System and method for automatically recording programs in an interactive viewing system |
US20050262539A1 (en) * | 1998-07-30 | 2005-11-24 | Tivo Inc. | Closed caption tagging system |
US6389168B2 (en) * | 1998-10-13 | 2002-05-14 | Hewlett Packard Co | Object-based parsing and indexing of compressed video streams |
WO2001022729A1 (fr) | 1999-09-20 | 2001-03-29 | Tivo, Inc. | Closed caption tagging system |
US20120180083A1 (en) * | 2000-09-08 | 2012-07-12 | Ntech Properties, Inc. | Method and apparatus for creation, distribution, assembly and verification of media |
US20020115047A1 (en) * | 2001-02-16 | 2002-08-22 | Golftec, Inc. | Method and system for marking content for physical motion analysis |
US20040249650A1 (en) * | 2001-07-19 | 2004-12-09 | Ilan Freedman | Method apparatus and system for capturing and analyzing interaction based content |
US20050271251A1 (en) * | 2004-03-16 | 2005-12-08 | Russell Stephen G | Method for automatically reducing stored data in a surveillance system |
WO2006114796A1 (fr) | 2005-04-25 | 2006-11-02 | Hewlett-Packard Development Company, L.P. | Systems and methods for evaluating audience interest levels in broadcast media and for providing supplemental information based on the interest levels |
US20090089064A1 (en) * | 2007-09-28 | 2009-04-02 | International Business Machines Corporation | System, method and architecture for control and multi-modal synchronization of speech browsers |
Non-Patent Citations (3)
Title |
---|
English translation of Preliminary Report on Patentability and Written Opinion for corresponding International Application No. PCT/EP2008/057043, filed Jun. 5, 2008. |
French Search Report of Counterpart Application No. FR 07/55745 Filed on Jun. 13, 2007. |
International Search Report of Counterpart Application No. PCT/EP2008/057043 filed on Jun. 5, 2008. |
Also Published As
Publication number | Publication date |
---|---|
FR2917553A1 (fr) | 2008-12-19 |
EP2156644A2 (fr) | 2010-02-24 |
CN101715642A (zh) | 2010-05-26 |
FR2917553B1 (fr) | 2010-06-18 |
WO2008155240A3 (fr) | 2009-04-23 |
US20100175083A1 (en) | 2010-07-08 |
WO2008155240A2 (fr) | 2008-12-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10476925B2 (en) | Media stream cue point creation with automated content recognition | |
US20210029396A1 (en) | Systems and Methods for Advertising Continuity | |
US20190045240A1 (en) | Method and Device for Generating and Detecting a Fingerprint Functioning as a Trigger Marker in a Multimedia Signal | |
KR100959574B1 (ko) | Extensions to the rich media container format used by mobile broadcast/multicast streaming servers | |
US11770589B2 (en) | Using text data in content presentation and content search | |
US9147291B2 (en) | Method and apparatus of processing data to support augmented reality | |
CN105049896A (zh) * | A streaming media advertisement insertion method and system based on the HLS protocol | |
US11880871B2 (en) | Methods and systems for providing content | |
US20230224552A1 (en) | Timely Addition of Human-Perceptible Audio to Mask an Audio Watermark | |
US8782689B2 (en) | Method of broadcasting content and at least one complementary element, utilizing a server and a terminal | |
US8234158B1 (en) | Analyzing text streams for cue points of advertisements in a media stream | |
US20230276105A1 (en) | Information processing apparatus, information processing apparatus, and program | |
EP3043572A1 (fr) | Hybrid automatic content recognition and watermarking | |
US20100332673A1 (en) | Method and apparatus of referring to stream included in other saf session for laser service and apparatus for providing laser service | |
US20140115484A1 (en) | Apparatus and method for providing n-screen service using depth-based visual object groupings | |
KR20090054139A (ko) | Apparatus and method for sharing multimedia captions in a communication terminal | |
WO2008069503A1 (fr) | Apparatus and method for dynamically processing scalable information in a scalable video coding process |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: STREAMEZZO, FRANCE Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DUFOURD, JEAN-CLAUDE;GEGOUT, CEDRIC;REEL/FRAME:023992/0782 Effective date: 20100120 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
CC | Certificate of correction | ||
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551) Year of fee payment: 4 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |