WO2009053685A1 - Method and apparatus for generating a security signature - Google Patents

Method and apparatus for generating a security signature

Info

Publication number
WO2009053685A1
Authority
WO
WIPO (PCT)
Prior art keywords
video content
face
faces
signature
segment
Prior art date
Application number
PCT/GB2008/003567
Other languages
English (en)
Inventor
Baolin Tan
Original Assignee
Dwight Cavendish Systems Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dwight Cavendish Systems Limited
Publication of WO2009053685A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/10Protecting distributed programs or content, e.g. vending or licensing of copyrighted material ; Digital rights management [DRM]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7837Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
    • G06F16/784Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content the detected or recognised objects being people
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/10Protecting distributed programs or content, e.g. vending or licensing of copyrighted material ; Digital rights management [DRM]
    • G06F21/16Program or content traceability, e.g. by watermarking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04HBROADCAST COMMUNICATION
    • H04H60/00Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
    • H04H60/56Arrangements characterised by components specially adapted for monitoring, identification or recognition covered by groups H04H60/29-H04H60/54
    • H04H60/59Arrangements characterised by components specially adapted for monitoring, identification or recognition covered by groups H04H60/29-H04H60/54 of video
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/16Analogue secrecy systems; Analogue subscription systems

Definitions

  • the invention concerns a method and apparatus for generating a security signature, in particular one that can be used to control access to video content over a network.
  • a method is required which can automatically detect video content consisting of an unauthorised copy of a copyrighted film, television programme, or other commercial audio/visual work.
  • "fingerprints" are short numerical representations of the content which can be generated from the content using a specific algorithm. When potential pirate copies are found, the same algorithm produces a fingerprint from the pirate copy, and if this fingerprint is found in the database then the content can be identified and appropriate action taken. In practice, the matches do not always need to be exact, and a given piece of content may have multiple discrete "fingerprints" for different sections; this is one possible method of enabling even short excerpts to be identified.
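As an illustration of this generate-and-look-up flow, here is a minimal Python sketch. It is not the patent's algorithm: a cryptographic hash stands in for a real perceptual fingerprint (which, as noted above, would tolerate inexact matches), and the segment data and names are invented for the example.

```python
import hashlib

def fingerprint(segment: bytes) -> str:
    """Reduce a content segment to a short numerical representation.
    A stand-in only: real systems use perceptual features, not an exact hash."""
    return hashlib.sha256(segment).hexdigest()[:16]

# Reference database: one fingerprint per section of each protected work,
# so that even short excerpts can be identified.
DATABASE = {
    fingerprint(b"film-1 segment 0"): ("Film #1", "segment 0"),
    fingerprint(b"film-1 segment 1"): ("Film #1", "segment 1"),
}

def identify(suspect_segment: bytes):
    """Run the same algorithm on a suspected pirate copy and look it up."""
    return DATABASE.get(fingerprint(suspect_segment))

print(identify(b"film-1 segment 1"))  # -> ('Film #1', 'segment 1')
```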
  • video encoding can degrade the original footage, introducing all kinds of video coding artefacts, e.g. blockiness, pixelation, stuttering, mosquito noise, etc.
  • video pirates may modify the content, e.g. by cropping, rotating, zooming, resizing, low-pass filtering, median filtering, noise addition, affine transformation, changing brightness/contrast/gamma/colour, etc. Many of these can be accidentally or intentionally introduced by pointing a video camera at another display or projection screen, and so are inherently present in content pirated in a movie theatre or cinema.
  • the invention provides a security signature for video content that is generated based on the faces of the protagonists appearing in the content.
  • the signature is then used in a technique for identifying pirated video content on a network.
  • unlike other protection techniques, it is not necessary to embed identification data, such as a watermark, in the video content in order to achieve protection.
  • because the face information of video content cannot be altered without damaging its entertainment value, the technique is robust to countermeasures that might modify the content in order to evade the protection.
  • Various properties of detected faces may be used to form the signature, such as the presence or absence of faces, coterminous appearances of faces, the location of faces, and the identity of faces.
  • Figure 1 is a schematic illustration of the components of a first embodiment of the invention;
  • Figure 2 is a flowchart illustrating the operation of the first embodiment of the invention;
  • Figure 3 is an illustration of face analysis used in a face detection technique;
  • Figure 4 is an illustration of an analysis step in the face detection technique;
  • Figure 5 is a schematic illustration of face detection in two frames of video content;
  • Figure 6 illustrates an additional face detection technique;
  • Figure 7 is an illustrative diagram showing one example of data that may be used to describe video data;
  • Figure 8 is an illustrative example of a database structure for use in the system of Figure 1; and
  • Figure 9 is a schematic illustration of the components of a second embodiment of the invention.

Detailed Description of the Preferred Embodiments

  • the present invention seeks to use a specific aspect of the content for identification via fingerprinting.
  • the chosen aspect is one which cannot easily be destroyed, either accidentally or intentionally, without destroying much of the entertainment value of the content.
  • the present invention uses the faces of the protagonists, namely the actors or other individuals appearing in the video content, as a means to identify the content.
  • the face detection technique could be combined with text recognition techniques run on the same video content, such as identifying the title of the video content if a title is displayed on screen towards the beginning of the content, or from any credits appearing at the beginning or the end.
  • not all video content comprises text, and pirated feature films or movies could be edited so that the text is obscured.
  • face detection provides a technique that is robust and that cannot easily be circumvented.
  • the first preferred embodiment comprises face detection and authentication modules 2 deployed on a server architecture 4, and/or a video player architecture 6 respectively.
  • the video player architecture 6 may be video player software such as that provided on a Personal Computer, or may be in dedicated video players such as home electronics equipment like DVD players/recorders or hard disk player/recorders.
  • a rights authentication module 8 is illustrated that is accessible to the server 4 and player 6 via a network, such as the Internet.
  • the various modules may be implemented as one or more software programs interacting with the server or host computer network as appropriate for the server or computer platform, as will be known in the art. Alternatively, the modules may be implemented fully or in part as hardware.
  • Face detection and authorisation modules 2 comprise conversion module 10, dedicated face detection module 12 and authorisation module 14.
  • Rights authentication module comprises a determination module 16 and a database 18.
  • the database 18 contains data describing various instances of video content, in particular feature films, movies, television programmes. Such data preferably includes at least one fingerprint or signature identifying the video content, as well as textual data or image data describing the content or acting as a reference material in connection with it.
  • in step s2, conversion module 10 receives video content from a video content source and, in step s4, converts it into a format suitable for parsing by the face detection and analysis module 12.
  • in step s6, face detection and analysis module 12 parses at least a portion of the video data to extract data about the video content and generate a signature. The extracted signature is then transmitted to the rights authentication module 8 for verification.
  • the determination module 16 receives the extracted video content data from the face detection and analysis module 12, as well as an indication of the IP address from which the information is transmitted, and, in step s10, compares this with the information stored in the database.
  • the IP address will be that of the server or the personal computer on which the face detection and authentication module is housed, and so can be extracted from the transmission to the rights management module.
  • the extracted video content data is compared with the information stored in the database to first identify what feature film, movie or television programme title has been received by the server 4 or the player 6 from the video content source. If there is a match with data stored in the database, the IP address is compared with those stored in the database to determine whether or not the IP address is registered as an authorised holder or player of the video content. The results of the determination are then transmitted to the authorisation module 14.
  • the authorisation module either allows further use of the video content to continue, in step s12, or blocks further use of the video content, in step s14.
  • the authorisation module or the determination module may, in step s16, take further action to enforce the rights in the video content, such as deleting the video content so that it cannot be used, requesting payment before the video content can be used, or forwarding the IP address of the unauthorised server or player to a copyright enforcement service.
  • This list is not exhaustive.
  • the conversion module is arranged to receive the video content in its original file encoding, for example DivX, WMV, AVI, On2, FLV, MOV, or MPG, and to convert it to a file encoding suitable for use with the face detection and analysis module.
  • this encoding is MPEG-2.
  • a further advantage of using a conversion module is that the video can be parsed in a more compact form than would be possible with uncompressed video.
  • the conversion module may be omitted.
  • the conversion module is preferably located on the server 4 such that it receives any video content uploaded to the server for storage, and passes the converted video content to the face analysis and detection module. In this way, all video content will be checked, and action can be taken if deemed necessary.
  • if the conversion module is located on a client machine, then it is preferably provided as part of a video content player or browser. Such programs can be modified using plug-ins or components that alter their functionality.
  • the output of the conversion module is data in MPEG-2 format that represents frames of the received video content. It will be appreciated that the output could be in file form or as streamed data, and need not actually be displayed in order for further processing and analysis to occur.
  • Face detection and analysis module receives this data from the conversion module, and parses it in order to identify the primary characteristics of the video content.
  • the video content is analysed frame by frame.
  • the face detection and analysis module detects the faces of the protagonists in the pixel domain of the video content, as well as other information such as coordinates describing the position of the detected faces in the frames of video content. Face colour, edge and movement of the faces may also be used.
  • the method involves breaking the image down into rectangles and subsequently comparing the average brightness in two, three, or four adjacent rectangles of an image.
  • the features of these rectangles which are appropriate to detecting faces can be determined by a machine learning algorithm fed with suitable training data, namely pictures correctly sorted into those which are of faces and those which are not.
  • the learning algorithm uses the AdaBoost technique. This works by finding approximate solutions to the problem (weak hypotheses) and then concentrating on the cases where the solution was incorrect, that is, false positives (a non-face is identified as a face) and false negatives (a face is identified as a non-face).
  • each of the possible solutions is weighted and summed, and the weights are increased for those solutions which correct previous mistakes. It may take about six rounds of processing (running through the entire dataset) to reach a reasonable solution, which allows the vast majority of possible rectangle comparisons to be ignored and those most important to face recognition to be used in order of importance.
  • comparing the brightness in the rectangles comprises subtracting the brightness in one rectangle from the brightness in the other(s), and thresholding the result, where a result above the threshold indicates a face, and a result below the threshold indicates a non-face.
  • the threshold is determined from the training data set to best delineate faces from non-faces.
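A minimal sketch of boosting with threshold classifiers of this kind, assuming the rectangle-difference values have already been computed into a feature matrix (the feature extraction itself is omitted). This is textbook AdaBoost with one-feature threshold stumps, not the patent's exact training procedure; the tiny dataset is invented.

```python
import numpy as np

def adaboost_train(X, y, rounds=6):
    """Minimal AdaBoost over one-feature threshold stumps.
    X: (n_samples, n_features) precomputed rectangle-difference values.
    y: labels in {-1, +1} (non-face / face)."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)              # per-example weights
    classifiers = []
    for _ in range(rounds):
        best = None
        # Exhaustively pick the stump (feature, threshold, polarity)
        # with the lowest weighted error: the "weak hypothesis".
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] > thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign, pred)
        err, j, thr, sign, pred = best
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
        w *= np.exp(-alpha * y * pred)   # concentrate on the mistakes
        w /= w.sum()
        classifiers.append((alpha, j, thr, sign))
    return classifiers

def adaboost_predict(classifiers, X):
    """Weighted vote of the weak hypotheses."""
    total = sum(a * s * np.where(X[:, j] > t, 1, -1)
                for a, j, t, s in classifiers)
    return np.sign(total)

X = np.array([[0.9], [0.8], [0.2], [0.1]])   # one toy feature per sample
y = np.array([1, 1, -1, -1])
print(adaboost_predict(adaboost_train(X, y, rounds=3), X))  # [ 1.  1. -1. -1.]
```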
  • the input images are transformed into 'integral images' where each pixel in the integral image is the sum of all the pixels above and to the left of that pixel in the original image.
  • the brightness of arbitrary rectangles from the original image can be calculated from just four pixels in the integral image (rather than summing all the relevant pixels in the original image). This may be understood by reference to Figure 4.
  • the sum of all pixels in rectangle D, however large, can be calculated by the addition and subtraction of four pixel values from the integral image.
  • to calculate the average pixel brightness we would divide by the number of pixels. In fact there is no necessity to divide by the number of pixels; a more efficient approach can be to work with absolute totals, and simply to scale the thresholds against which these are judged when looking over larger areas than the original dataset.
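A minimal sketch of the integral image and the four-value rectangle sum just described; the inclusive coordinate convention is an assumption.

```python
import numpy as np

def integral_image(img):
    """Each pixel holds the sum of all pixels above and to the left (inclusive)."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom+1, left:right+1] from four integral-image values."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

img = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(img)
assert rect_sum(ii, 1, 1, 3, 3) == img[1:4, 1:4].sum()  # four lookups, same sum
```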
  • the rectangle classifiers are "cascaded": the first level is a very simple classifier using only a few rectangle comparisons, which cheaply rejects the majority of non-face regions so that later, more detailed levels need only examine the remaining candidates.
  • the face detection and analysis module could scan all of the video content, or just a segment of video content having a run time of t, where t is less than or equal to the total length of the video content T. Scanning a segment does, however, assume that the segment itself is indicative of the video content as a whole, that is, it displays at least one key protagonist and lasts long enough for the position of the face throughout the segment to constitute a signature for the content.
  • the preferred embodiment therefore operates as follows: firstly, the face detection module is used to locate "faces" throughout the content. The module merely finds a "face", and may provide information about its orientation (i.e. the direction the person is looking); it does not identify the face at this stage.
  • Figure 5 shows the two faces in two frames before and after movement from initial positions A0 and B0 to final positions A and B. Post-processing is employed to ignore "minor" faces (i.e. smaller faces) in frames where more than a pre-set number of "faces" are found, otherwise crowd scenes (for example) may overload the process.
  • the simplest implementation, and one that works for a reasonably small data set, such as several movies or long clip durations (that is, pirate copies comprising many minutes of a movie), is to form the fingerprint simply from the number of faces and their locations.
  • the location or coordinate data for each face can be quantised to discrete squares/regions (e.g. based on a 24x24 grid).
  • the coordinate data is also transformed into relative rather than absolute positional indicators (e.g. distance and direction relative to other faces, rather than absolute location on screen; one notation is normalised angular co-ordinates, rather than normalised or absolute XY coordinates) to make it robust to cropping, rotation, etc. If the location of one face is also stored absolutely, relative to the top left or the centre of the image, this can be used to make an easier identification of content without significant cropping, rotation, etc.
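A sketch of both notations under stated assumptions: face positions are pixel centres, the 24x24 grid follows the example above, and distance is normalised by frame height to echo the "30% screen height" style of notation used later in this description.

```python
import math

GRID = 24  # quantisation grid from the example above

def quantise(x, y, frame_w, frame_h):
    """Map an absolute face position to a discrete grid square."""
    return (min(int(x / frame_w * GRID), GRID - 1),
            min(int(y / frame_h * GRID), GRID - 1))

def relative_position(face, anchor, frame_h):
    """Describe one face relative to another as (normalised distance, angle),
    which survives cropping and shifting better than absolute XY coordinates."""
    dx, dy = face[0] - anchor[0], face[1] - anchor[1]
    distance = math.hypot(dx, dy) / frame_h   # e.g. 0.30 = 30% of screen height
    angle = math.degrees(math.atan2(dy, dx))  # direction from the anchor face
    return distance, angle

print(quantise(640, 360, 1280, 720))                   # -> (12, 12)
print(relative_position((700, 500), (640, 360), 720))  # -> (~0.21, ~66.8)
```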
  • a "face” is detected within a short distance of a "face” in the previous frame, it can be assumed to be the same face.
  • the movement of the "faces" with time can be described in a compact manner using motion vectors to approximate the movement of each face over several frames.
  • the output for a sequence of frames is a set of location and motion vector data for each major "face" on screen at that time.
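A sketch of this frame-to-frame linking and motion-vector extraction. The greedy nearest-neighbour rule and the distance threshold are illustrative assumptions, not the patent's stated method.

```python
def link_faces(prev_faces, curr_faces, max_dist=40.0):
    """prev_faces: {face_id: (x, y)} from the previous frame.
    curr_faces: list of (x, y) detections in the current frame.
    Returns {face_id: (dx, dy)} motion vectors for faces assumed to persist."""
    motion = {}
    remaining = list(curr_faces)
    for fid, (px, py) in prev_faces.items():
        if not remaining:
            break
        # A face detected near a face in the previous frame is assumed
        # to be the same face.
        nearest = min(remaining, key=lambda c: (c[0] - px)**2 + (c[1] - py)**2)
        if (nearest[0] - px)**2 + (nearest[1] - py)**2 <= max_dist**2:
            motion[fid] = (nearest[0] - px, nearest[1] - py)
            remaining.remove(nearest)
    return motion

prev = {1: (100.0, 120.0), 2: (400.0, 300.0)}
curr = [(108.0, 118.0), (395.0, 310.0)]
print(link_faces(prev, curr))  # {1: (8.0, -2.0), 2: (-5.0, 10.0)}
```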
  • a face recognition algorithm is run on the "faces" found by the face detection process.
  • the objective is not necessarily to identify who the face belongs to, but to match the faces found in each frame of the film with faces in other frames of the film. Thus, where a particular face appears for 5 minutes, is absent for several minutes, and then appears again for another 10 minutes, this is detected and the fact that both appearances are of "the same face" is recorded in the database.
  • each "face” is tagged with an ID which is consistent within that content, e.g. ID1 appears 32 seconds into the movie 15% up and 32% right from the centre of the screen, moves this way and that, and then disappears for several minutes before returning.
  • ID2 appears 40 seconds into the movie, at 15 degrees from ID1 and 30% of the screen height away, moves this way and that, then remains stationary.
  • the actual binary representation has been found to be quite compact, with either fixed or variable length fields storing this data in a compact, quantised manner. Each number above (apart from time) requires at most 8 bits to store, and typically much less.
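A sketch of one such fixed-length record using Python's struct module. The exact field layout is an assumption, chosen so that every value apart from the timestamp fits in a single byte.

```python
import struct

# time in ms (32 bits), then single-byte fields: face ID, quantised grid x/y,
# and signed per-frame motion components. '<' means little-endian, no padding.
RECORD = struct.Struct("<IBBBbb")

def pack_observation(time_ms, face_id, grid_x, grid_y, dx, dy):
    return RECORD.pack(time_ms, face_id, grid_x, grid_y, dx, dy)

rec = pack_observation(32000, 1, 15, 8, -2, 3)
print(RECORD.size)         # 9 bytes per face observation
print(RECORD.unpack(rec))  # (32000, 1, 15, 8, -2, 3)
```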
  • the first two blocks in the process illustrate detecting a face in the video content, and detecting its orientation.
  • the face detection module may also extract information about the features of the face and match these with features stored in the database.
  • the output data represents a signature or fingerprint identifying the video content that was scanned.
  • Ensuring that the signature or fingerprint is unique is a matter of increasing the detail of the descriptive data elements in the data output, while balancing this against the increased processing and storage costs associated with more data.
  • a signature may comprise one or more of the following data features, as shown in Figure 7: a) frame numbers or times describing the presence/absence of a face with ID number #n; b) frame numbers or times describing the presence/absence of a face with ID number #m; c) indications of when the face with ID #n and the face with ID #m share frames; d) position within the frame (absolute or relative) of a face with ID number #n or #m; e) the identity of a detected face.
  • This list is not exhaustive but is intended to show the preferred possibilities for signature generation.
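One possible in-memory arrangement of features a) to e) before serialisation; the type and field names are illustrative assumptions, not the patent's.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

@dataclass
class FaceTrack:
    face_id: int                               # e.g. ID #n
    present: List[Tuple[int, int]]             # a)/b) (start, end) frame spans
    positions: Dict[int, Tuple[float, float]]  # d) frame -> position (abs or rel)
    identity: Optional[str] = None             # e) if recognition names the face

@dataclass
class Signature:
    tracks: List[FaceTrack] = field(default_factory=list)
    # c) (id_n, id_m, start_frame, end_frame) spans where two faces share frames
    shared: List[Tuple[int, int, int, int]] = field(default_factory=list)

sig = Signature(tracks=[FaceTrack(1, [(800, 2400)], {800: (0.15, 0.32)})],
                shared=[(1, 2, 1000, 1500)])
print(sig.tracks[0].face_id)  # 1
```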
  • This generated signature or fingerprint is preferably transmitted to the determination module 16 on the rights authentication module 8, where comparison of the generated signature is made with pre-generated and stored signatures for known video content.
  • the IP address of the server or computer on which the player is located is also transmitted to the rights authentication module.
  • Rights authentication module comprises determination module 16 and a database 18. Determination module 16 receives the generated signature from the face detection and analysis module, and compares it with pre-generated signatures in the database 18.
  • signatures will be stored in the database for nearly all commercial video content that is available and that is to be protected via the preferred system. It is also assumed that the signatures stored in the database are generated by substantially the same face detection module as is provided in the server and the player. In this way, determining the identity of the video content received at the server 4 or being played on player 6 is simply a matter of matching one signature with another. As the signature itself is data representing at least the presence of a face and the coordinates of the face in the video content, matching signatures is furthermore a matter of comparing corresponding data elements in the two signatures and determining which data elements match.
  • the data set forming the signature can be compared against that in the database by means of an equation or correlation, such as that given below.
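The equation itself is not reproduced in this extract. As one plausible form, a normalised correlation over like-for-like signature vectors can rank database entries; the face-counts-per-segment vectors below are invented for illustration.

```python
import numpy as np

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Normalised correlation: near 1.0 for matching signals, near 0 or
    negative for unrelated ones."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.dot(a, b) / len(a))

DATABASE = {  # face counts per time segment for known titles (illustrative)
    "Film #1": np.array([0, 1, 1, 2, 2, 1, 0, 0], dtype=float),
    "Film #2": np.array([3, 3, 2, 1, 0, 0, 1, 1], dtype=float),
}

query = np.array([0, 1, 1, 2, 2, 1, 0, 1], dtype=float)  # slightly corrupted copy
title, score = max(((t, similarity(query, s)) for t, s in DATABASE.items()),
                   key=lambda p: p[1])
print(title, round(score, 3))  # 'Film #1' with a high correlation score
```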
  • the database also stores information relating to rights management. This is illustrated schematically in Figure 8, to which reference should be made. Broadly speaking, the database stores the name, title or other text for the video content 20, information identifying authorised rights holders 22, that is, those who have the right, through payment of a fee or otherwise, to store the video content or play it, as well as data defining the video content signature 24.
  • the information 22 identifying authorised rights holders may be one or more of an individual or corporate name and a postal or street address, but most importantly should contain at least one of an Internet Protocol (IP) address, email address or web address to identify the location where authorised storage or playback of the video content can take place on the network. For example, if a rights holder is authorised to play back or store Film #1 on their server or personal computer, then detection of protected video content at that IP address, email account or web address will not trigger the protection, while detection at a different IP address, email account or web address will.
  • the information identifying the authorised rights holder may optionally include account or debiting information. In some configurations, it may be desired to levy a fee from the right holder each time the video content is played or recorded.
  • the name, title or text information 20 is optional, but is useful so that a human operator can view the database contents and see at a glance which video content is protected and who the authorised rights holders are.
  • the video content signature data 24 may comprise any of a) to e) mentioned above. Where pre-generated signatures have already been recorded in the database for each video content, the data will logically be arranged in rows of the database, wherein each row corresponds to a video content title.
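A minimal sketch of that row-per-title arrangement using SQLite in Python; the column names map onto items 20, 22 and 24 of Figure 8 but are otherwise assumptions, as is the sample data.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE protected_content (
        title          TEXT,   -- optional human-readable name (item 20)
        rights_holders TEXT,   -- authorised IP/email/web addresses (item 22)
        signature      BLOB    -- pre-generated face signature (item 24)
    )""")
con.execute("INSERT INTO protected_content VALUES (?, ?, ?)",
            ("Film #1", "198.51.100.7;owner@example.com", b"\x00\x01\x02"))

# An incoming signature would be matched against the signature column, and the
# sender's IP address checked against rights_holders before authorising access.
print(con.execute("SELECT title, rights_holders FROM protected_content").fetchone())
```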
  • the determination module 16, on receiving a signature from the face detection module in the server or player, checks the database to find a matching entry. This can happen at any level, from using just the face detection data to using the guessed ID names.
  • where the identity of the faces is part of the signature, it makes more sense to work down from the detected IDs so as to concentrate on those stored signatures featuring those protagonists. If necessary, on-screen times, combinations of them, positions and motion vectors could also be checked.
  • based on the signature received from the face detection and analysis module and the information stored in the database, the determination module makes a determination as to whether the detected instance of video content is authorised or legal. This determination is then transmitted to authorisation module 14 located on the server or player. If the video content is determined to be illegal, the authorisation module 14 may take any of a number of actions, as discussed below.
  • the authorisation module may block access to the video content.
  • access includes playback, storage, duplication, transmission or manipulation of the video content.
  • the authorisation module may simply prevent the player from playing back the content, while at the server, the authorisation module may prevent the video content being stored, or may delete any stored copies.
  • the authorisation module may simply cause a warning message to be displayed at the server or player, warning a user that the video content is potentially illegal or pirated material.
  • the display of the warning may be logged as future evidence in any copyright infringement action.
  • the warning may request that the user contact the entity responsible for maintaining the rights authorisation module to negotiate access. This may involve the payment of a fee. It is also desirable if the proprietors of the video content are notified, as in many piracy cases they do not know when and where their content is being reproduced illegally.
  • the database and determination module are located on a dedicated server across the network.
  • the face detection module is configured to transmit the signature to the determination module, and the authorisation module receives a determination from the determination module in return.
  • alternatively, the database and the determination module may be located at the server or at the player. In order to accommodate them, it may be necessary to have a reduced-size implementation of the data signature or analysis, such as the use of face identity where possible. Periodic updates of information to the server or player could be used to install new information.
  • the second embodiment comprises a module 30 for scanning video content available on websites, and determining whether the display or manipulation of that content on the website is authorised or constitutes a breach of the video content owner's rights.
  • the second embodiment comprises similar elements of functionality to the first, and where possible these are given the same reference numbers in Figure 9, to aid clarity.
  • the second embodiment can be seen to comprise a rights authorisation module 8 having determination module 16 and database 18.
  • scanner 30 comprises conversion module 10, face detection module 12, and authorisation module 14. It also comprises scanning module 32.
  • Scanning module 32 scans websites on the Internet 34 for new video content. Such websites may be stored on computers 36 or servers 38. The websites being scanned may be limited to those sites known to carry video content, or further to those that carry video content and are known to disregard copyright issues surrounding such content. A site may be scanned simply by noting each new file that comes online, or by scanning for the titles of known video content: film titles, for example, especially those of newly released or about-to-be-released films. If scanning module 32 finds a new file, or finds a film title of interest, it downloads the file or part of the file to conversion module 10 and subsequently to face detection module 12 for analysis. As before, the face detection module produces a signature which is passed to the determination module for matching with the information stored in the database 18.
  • the database contains additional information that allows a decision to be taken regarding suitable action.
  • the database may: a) state that the title has not been released beyond cinemas (so should not be on websites); b) state that the title has been released for sale (so should only be on official/licensed websites); c) state that the content has been released for wider distribution (such as a movie trailer that can be copied); or d) state that the content is historic and no longer of significant commercial interest.
  • if the determination module concludes from analysis of the database that the video content is unauthorised, a number of actions may be taken, such as denial of service, action via the Internet Service Provider, legal action, or action to corrupt the illegal content.
  • a further alternative embodiment is to monitor traffic within the Internet infrastructure, such as at an Internet Service Provider or caching service, at a backbone node, or at an intercontinental connection. Video content packets passing through the monitoring point would be analysed in the manner described above and action taken.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Technology Law (AREA)
  • Computer Hardware Design (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Television Signal Processing For Recording (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method in which a security signature for video content is generated based on the faces of the protagonists appearing in the content. The signature is then used in a technique for identifying pirated video content on a network. Unlike other protection techniques, it is not necessary to embed identification data, such as a watermark, in the video content in order to achieve protection. Furthermore, since the face information of video content cannot be altered without damaging the entertainment value of the content, the technique is resistant to countermeasures that might modify the content in order to evade the protection. Various properties of detected faces may be used to form the signature, such as the coincident presence or absence of faces, the location of faces, and the identity of faces.
PCT/GB2008/003567 2007-10-22 2008-10-21 Method and apparatus for generating a security signature WO2009053685A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB0720526A GB2455280A (en) 2007-10-22 2007-10-22 Generating a security signature based on face analysis
GB0720526.3 2007-10-22

Publications (1)

Publication Number Publication Date
WO2009053685A1 (fr) 2009-04-30

Family

ID=38814167

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2008/003567 WO2009053685A1 (fr) 2008-10-21 Method and apparatus for generating a security signature

Country Status (2)

Country Link
GB (1) GB2455280A (fr)
WO (1) WO2009053685A1 (fr)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105975964B (zh) * 2016-07-01 2019-04-02 恒东信息科技无锡有限公司 An intelligent integrated application platform based on perceptual channels


Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002082271A1 (fr) * 2001-04-05 2002-10-17 Audible Magic Corporation Copyright detection and protection system and method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
NIKOS NIKOLAIDIS ET AL: "Image and video fingerprinting for digital rights management of multimedia data", INTELLIGENT SIGNAL PROCESSING AND COMMUNICATIONS, 2006. ISPACS '06. INTERNATIONAL SYMPOSIUM ON, IEEE, PI, 1 December 2006 (2006-12-01), pages 801 - 807, XP031092458, ISBN: 978-0-7803-9732-3 *
VIOLA P ET AL: "ROBUST REAL-TIME FACE DETECTION", INTERNATIONAL JOURNAL OF COMPUTER VISION, DORDRECHT, NL, vol. 57, no. 2, 1 January 2004 (2004-01-01), pages 137 - 154, XP008035702 *
ZHAO W ET AL: "FACE RECOGNITION: A LITERATURE SURVEY", ACM COMPUTING SURVEYS, ACM, NEW YORK, NY, US, US, vol. 35, no. 4, 1 December 2003 (2003-12-01), pages 399 - 459, XP001156024, ISSN: 0360-0300 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3346377A1 (fr) * 2009-09-14 2018-07-11 TiVo Solutions Inc. Multifunction multimedia device
US10097880B2 (en) 2009-09-14 2018-10-09 Tivo Solutions Inc. Multifunction multimedia device
US10805670B2 (en) 2009-09-14 2020-10-13 Tivo Solutions, Inc. Multifunction multimedia device
US11653053B2 (en) 2009-09-14 2023-05-16 Tivo Solutions Inc. Multifunction multimedia device
WO2016038103A1 (fr) * 2014-09-09 2016-03-17 Piksel, Inc Automated compliance management
US10298999B2 (en) 2014-09-09 2019-05-21 Piksel, Inc. Automated compliance management

Also Published As

Publication number Publication date
GB2455280A (en) 2009-06-10
GB0720526D0 (en) 2007-11-28

Similar Documents

Publication Publication Date Title
US11693928B2 (en) System and method for controlling content upload on a network
US8761452B2 (en) System, method and computer program product for video fingerprinting
KR101171536B1 (ko) Temporal segment based extraction and robust matching of video fingerprints
Coskun et al. Spatio–temporal transform based video hashing
US7088823B2 (en) System and method for secure distribution and evaluation of compressed digital information
US8587668B2 (en) Method and apparatus for detecting near duplicate videos using perceptual video signatures
JP2003032641A (ja) ビデオ信号に情報を挿入することを容易にするための方法およびビデオ信号を保護することを容易にするための方法
JP4951521B2 (ja) ビデオフィンガープリントのシステム、方法、及びコンピュータプログラム製品
GB2419489A (en) Method of identifying video by creating and comparing motion fingerprints
Lian et al. Content-based video copy detection–a survey
JP2003304388A (ja) 付加情報検出処理装置、コンテンツ再生処理装置、および方法、並びにコンピュータ・プログラム
US20030031318A1 (en) Method and system for robust embedding of watermarks and steganograms in digital video content
Lakshmi et al. Digital video watermarking tools: an overview
WO2009053685A1 (fr) Method and apparatus for generating a security signature
US8411752B2 (en) Video signature
Lefèbvre et al. Image and video fingerprinting: forensic applications
Baudry et al. A framework for video forensics based on local and temporal fingerprints
Schaber et al. Semi-automatic registration of videos for improved watermark detection
Parmar et al. A review on video/image authentication and temper detection techniques
US20230315882A1 (en) Secure Client Watermark
Gavade et al. Review of techniques of digital video forgery detection
Zavaleta et al. Content Multimodal Based Video Copy Detection Method for Streaming Applications
Jang A Study on Extraction and Comparison of Digital Content Key Frame in UCC Service Environment
WO2001013642A1 (fr) Watermarking of data streams at multiple distribution levels
Kushwaha et al. Video Forensic Framework for Video Forgeries

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08842586

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08842586

Country of ref document: EP

Kind code of ref document: A1