WO2011063520A1 - Method and apparatus for providing signatures of audio/video signals and for using them - Google Patents

Method and apparatus for providing signatures of audio/video signals and for using them

Info

Publication number
WO2011063520A1
WO2011063520A1 (PCT/CA2010/001876)
Authority
WO
WIPO (PCT)
Prior art keywords
signal
audio
signature
video
providing
Prior art date
Application number
PCT/CA2010/001876
Other languages
English (en)
Inventor
Pascal Carrieres
Original Assignee
Miranda Technologies Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US12/627,728 external-priority patent/US8860883B2/en
Priority claimed from CA2686869A external-priority patent/CA2686869C/fr
Application filed by Miranda Technologies Inc. filed Critical Miranda Technologies Inc.
Priority to GB1210285.1A priority Critical patent/GB2489133B/en
Publication of WO2011063520A1 publication Critical patent/WO2011063520A1/fr
Priority to HK13103398.7A priority patent/HK1176203A1/xx


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06V20/47Detecting features for summarising video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • G06V40/176Dynamic expression
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/02Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/57Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for processing of video signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4408Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving video stream encryption, e.g. re-encrypting a decrypted video stream for redistribution in a home network
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/04Synchronising
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/06Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
    • G10L21/10Transforming into visible information
    • G10L2021/105Synthesis of the lips movements from speech, e.g. for talking heads

Definitions

  • the invention relates to the processing of audio and video signals. More precisely, this invention pertains to a method and apparatus for providing signatures of audio/video signals and to a method and apparatus for using the provided signatures for detecting lip sync.
  • an audio signal may become desynchronized with a corresponding video signal during the course of a transmission of the audio/video signal.
  • Such a phenomenon is known by the skilled addressee as lip sync.
  • Detecting lip sync may be of great interest for multiple system operators (MSO) that are looking, for example, to improve the quality of their channels and to avoid a phenomenon that becomes unacceptable to the end user beyond a certain degree.
  • many prior art references for detecting lip sync disclose techniques that are either cumbersome to implement or require excessive processing resources.
  • the content of a video signal should be understood as an indication of an evolution of the content of the video signal over time.
  • Yet another object of the invention is to provide a method and apparatus for generating a signature representative of a content of an audio signal.
  • the content of an audio signal should be understood as an indication of an evolution of the content of the audio signal over time.
  • the invention provides a method for generating a video signature representative of a content of a video signal, the method comprising receiving pixel data of each of a plurality of given pixels in a first image of the video signal, receiving pixel data of each of the plurality of given pixels in a second image subsequent to the first image, for each given pixel of the plurality of given pixels, comparing corresponding pixel data of the given pixel of the first image with the corresponding pixel data of the given pixel in the second image to provide a corresponding indication of a difference between the pixel data of the given pixel of the first image and the pixel data of the given pixel in the second image; if the difference is greater than a threshold, incrementing a counter and providing an indication of the counter to thereby generate the video signature.
  • An advantage of the method disclosed is that such method may be used for providing information, i.e. a signature, for characterizing the content of the video signal.
  • those signatures may be advantageously correlated and in turn be used for detecting a delay between the two video signals which is of great advantage.
  • Another advantage of the method disclosed is that it requires limited processing for generating the signature which is therefore of great advantage.
  • the method further comprises selecting the plurality of given pixels according to at least one criterion.
  • the at least one criterion comprises a spatial criterion.
  • the threshold is equal to thirty two (32).
  • the providing of the indication of the counter comprises normalizing the indication of the counter.
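The claimed video-signature method amounts to counting, over a set of selected pixels, how many pixels change significantly between two images, then normalizing the count. A minimal Python sketch (the function and variable names are illustrative; the threshold of 32 and the N/240 normalization come from the embodiments described later in this document):

```python
def video_signature(prev_pixels, curr_pixels, threshold=32):
    """Generate a video signature from two successive images.

    prev_pixels, curr_pixels: 8-bit pixel values sampled at the
    same N selected pixel positions in the first and second image.
    """
    if len(prev_pixels) != len(curr_pixels):
        raise ValueError("both images must supply the same N pixels")
    # Count the given pixels whose value changed by more than the threshold.
    counter = sum(1 for p, c in zip(prev_pixels, curr_pixels)
                  if abs(p - c) > threshold)
    # Normalize by N/240 so signatures computed from different
    # resolutions (different N) remain comparable.
    return counter / (len(prev_pixels) / 240.0)
```

A static scene yields a signature near zero, while a cut or rapid motion yields a large value; streams of such values taken at two points of a distribution chain can then be correlated.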
  • the invention further provides a method for generating an audio signature representative of a content of an audio signal, the method comprising receiving an audio signal; performing a first filtering of the audio signal to provide a first filtered audio signal; performing a second filtering of the audio signal to provide a second filtered audio signal; comparing the first filtered audio signal to the second filtered audio signal and assigning a value depending on the comparison to thereby generate the audio signature representative of the content of the audio signal.
  • An advantage of the method disclosed is that such method may be used for providing information, i.e. a signature, for characterizing the content of the audio signal.
  • those two signatures may be advantageously correlated and in turn be used for detecting a delay between the two audio signals which is of great advantage.
  • An advantage of the method disclosed is that such method may be used for detecting lip sync between a first audio/video signal and a second audio/video signal.
  • the first filtering comprises detecting an envelope signal of the received audio signal.
  • the second filtering comprises computing an average value of the received audio signal.
  • the assigning of a value depending on the comparison to thereby generate the audio signature representative of the content of the audio signal comprises performing decimation.
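The audio counterpart can be sketched in the same spirit. The text fixes only the structure (two filterings of the same signal, a comparison, an optional decimation); the one-pole envelope and mean estimators and the decimation factor below are my own assumptions:

```python
def audio_signature(samples, env_coeff=0.9, mean_coeff=0.999, decimate=8):
    """Generate a binary audio signature by comparing a fast
    envelope estimate of |x| (first filtering) against a slow
    mean estimate of |x| (second filtering)."""
    env = 0.0    # first filtered signal: envelope of |x|
    mean = 0.0   # second filtered signal: long-term mean of |x|
    bits = []
    for x in samples:
        a = abs(x)                                        # absolute value unit
        env = max(a, env_coeff * env)                     # envelope detector
        mean = mean_coeff * mean + (1 - mean_coeff) * a   # mean detector
        bits.append(1 if env > mean else 0)               # comparator
    return bits[::decimate]                               # decimation
```

Silence keeps the envelope at or below the mean and yields zeros, while transients push the envelope above the mean and yield ones, so the bit stream tracks the temporal activity of the audio content.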
  • an audio signature extraction unit for providing an audio signature signal AS_i representative of a content of an audio signal A_i, the audio signature extraction unit comprising a first filtering unit for receiving the audio signal A_i and for providing a first filtered signal; a second filtering unit for receiving the audio signal A_i and for providing a second filtered signal; and a comparator for receiving the first filtered signal and the second filtered signal, for performing a comparison of the first filtered signal with the second filtered signal and for providing a comparison result to thereby provide the audio signature signal AS_i representative of a content of the audio signal.
  • the first filtering unit comprises an envelope detector for providing a signal E_s indicative of an envelope of the audio signal A_i.
  • the second filtering unit comprises a mean detector for providing a signal M_s indicative of a mean of the audio signal A_i.
  • the audio signature extraction unit further comprises an absolute value providing unit for receiving the audio signal A_i and for providing a signal indicative of an absolute value of the audio signal A_i, wherein each of the mean detector and the envelope detector receives the signal indicative of the absolute value of the audio signal A_i.
  • the audio signature extraction unit further comprises a decimator for receiving the audio signature signal AS_i and for decimating the audio signature signal AS_i.
  • a video signature extraction unit for providing a video signature signal VS_i representative of a content of an incoming image signal V_i, the video signature extraction unit comprising a first image pixel data providing unit for receiving the incoming image signal V_i and for providing a corresponding pixel data P_k of a pixel k of a plurality of given pixels 1 to N of a first image; a second image pixel data providing unit for receiving the incoming image signal V_i and for providing a corresponding pixel data C_k of the corresponding given pixel k in a second image; a comparator for receiving the corresponding pixel data P_k and the corresponding pixel data C_k and for comparing the corresponding pixel data P_k with the corresponding pixel data C_k to provide a result signal indicative of the comparison; a counter for receiving and counting the result signal of each of the plurality of given pixels 1 to N and for providing a counter signal; and a video signature providing unit for receiving the counter signal and for providing the video signature signal VS_i.
  • the result signal is a logic value one (1) if |P_k − C_k| is greater than thirty two (32) and a logic value zero (0) otherwise.
  • the video signature providing unit receives the counter signal and divides the counter signal by N/240, wherein N is the number of given pixels.
  • the video signature extraction unit further comprises a filtering unit for receiving the incoming image signal V_i and for providing a corresponding filtered signal to the first image pixel data providing unit and to the second image pixel data providing unit.
  • the video signature extraction unit further comprises a windowing unit, wherein the incoming image signal V_i is received by the windowing unit and a corresponding selected signal W_i is provided to the filtering unit.
  • the video signature extraction unit further comprises a windowing unit for receiving the incoming image signal V_i and for providing a selected signal W_i.
  • an apparatus for providing an indication of a lip sync comprising a first signature extraction unit for receiving a first audio signal and a first video signal and for providing a first video signature and a first audio signature; a second signature extraction unit for receiving a second audio signal and a second video signal and for providing a second video signature and a second audio signature; and a signature analysis unit for receiving the first video signature, the first audio signature, the second video signature and the second audio signature, for correlating the first video signature with the second video signature to provide a video delay and for further correlating the first audio signature with the second audio signature to provide an audio delay, the signature analysis unit further comparing the audio delay with the video delay and providing an indication of a lip sync if the audio delay is different from the video delay.
  • the first signature extraction unit comprises a first audio signature extraction unit for receiving the first audio signal and for providing the first audio signature signal and the first signature extraction unit comprises a first video signature extraction unit for receiving the first video signal and for providing the first video signature signal.
  • the second signature extraction unit comprises a second audio signature extraction unit for receiving the second audio signal and for providing the second audio signature signal and the second signature extraction unit comprises a second video signature unit for receiving the second video signal and for providing the second video signature signal.
  • the first audio signature extraction unit comprises a first filtering unit for receiving the first audio signal and for providing a first filtered signal, a second filtering unit for receiving the first audio signal and for providing a second filtered signal, a comparator for receiving the first filtered signal and the second filtered signal and for performing a comparison of the first filtered with the second filtered signal and for providing a comparison result to thereby provide the first audio signature signal.
  • the first video signature extraction unit comprises a first image pixel data providing unit for receiving the first video signal and for providing a corresponding pixel data P_k of a pixel k of a plurality of given pixels 1 to N of a first image of the first video signal; a second image pixel data providing unit for receiving the first video signal and for providing a corresponding pixel data C_k of the corresponding given pixel k in a second image of the first video signal; a comparator for receiving the corresponding pixel data P_k and the corresponding pixel data C_k and for comparing the corresponding pixel data P_k with the corresponding pixel data C_k to provide a result signal indicative of the comparison; a counter for receiving and counting the result signal of each of the plurality of given pixels 1 to N and for providing a counter signal; and a video signature providing unit for receiving the counter signal and for providing the first video signature signal.
  • the second audio signature extraction unit comprises a first filtering unit for receiving the second audio signal and for providing a first filtered signal; a second filtering unit for receiving the second audio signal and for providing a second filtered signal; a comparator for receiving the first filtered signal and the second filtered signal and for performing a comparison of the first filtered with the second filtered signal and for providing a comparison result to thereby provide the second audio signature signal.
  • the second video signature extraction unit comprises a first image pixel data providing unit for receiving the second video signal and for providing a corresponding pixel data P_k of a pixel k of a plurality of given pixels 1 to N of a first image of the second video signal; a second image pixel data providing unit for receiving the second video signal and for providing a corresponding pixel data C_k of the corresponding given pixel k in a second image of the second video signal; a comparator for receiving the corresponding pixel data P_k and the corresponding pixel data C_k and for comparing the corresponding pixel data P_k with the corresponding pixel data C_k to provide a result signal indicative of the comparison; a counter for receiving and counting the result signal of each of the plurality of given pixels 1 to N and for providing a counter signal; and a video signature providing unit for receiving the counter signal and for providing the second video signature signal.
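The signature analysis described above reduces to finding the lag that best aligns each pair of signature streams and then comparing the two lags. An illustrative Python sketch (the brute-force correlation search is an assumption; the text does not prescribe a particular correlation method):

```python
def best_lag(ref, probe, max_lag=32):
    """Return the lag at which `probe` best matches `ref`,
    using a brute-force cross-correlation search."""
    best, best_score = 0, float("-inf")
    for lag in range(max_lag + 1):
        # Correlate the reference stream with the probe shifted by `lag`.
        score = sum(a * b for a, b in zip(ref, probe[lag:]))
        if score > best_score:
            best, best_score = lag, score
    return best

def lip_sync_error(video_delay, audio_delay):
    """Lip sync is indicated when the audio delay between the two
    signals differs from the video delay; return the difference."""
    return audio_delay - video_delay  # non-zero => lip sync error
```

For example, if the video signatures align at a lag of 2 samples while the audio signatures align at 5, the audio delay differs from the video delay and a lip sync error of 3 samples would be reported.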
  • Another advantage of the embodiment disclosed is that it may be used for content comparison.
  • the embodiment disclosed may be used for detecting pirated copies or for checking that a given ad has been properly inserted in a video signal. This may be done by comparing the video signal carrying the ad with the ad per se.
  • Figure 1 is a flowchart which shows an embodiment of a method for generating a signature representative of a content of a video signal
  • Figure 2 is a flowchart which shows an embodiment of a method for generating a signature representative of a content of an audio signal
  • Figure 3A is a block diagram which shows an embodiment of an apparatus for generating a video signature representative of a content of a video signal
  • Figure 3B is a schematic which shows how the video signature representative of a content of a video signal is generated according to one embodiment
  • Figure 3C is a block diagram which shows an embodiment of a video signature extraction unit
  • Figure 4A is a block diagram which shows an embodiment of an apparatus for generating an audio signature representative of a content of an audio signal
  • Figure 4B is a block diagram which shows an embodiment of an apparatus for generating an audio signature representative of a content of an audio signal; in this embodiment, the apparatus for generating an audio signature representative of a content of an audio signal comprises, inter alia, an envelope detector, a mean detector and a comparator;
  • Figure 5 is a block diagram of an embodiment of an apparatus comprising an audio signal signature extraction unit, a video signal signature extraction unit and a signature analysis unit for providing, inter alia, an indication of a lip sync;
  • Figure 6 is a block diagram which shows an embodiment of the signature analysis unit disclosed in Fig. 5;
  • Figure 7A is a schematic which shows an embodiment of two streams of video signature signals delayed in time
  • Figure 7B is a block diagram which shows an embodiment of a video signature analysis unit used for analyzing two video signature signals
  • Figure 7C is a graph which shows an embodiment of a variation of the result of the convolution of a first video signature signal with a second video signature signal
  • Figure 8A is a schematic which shows an embodiment of two streams of audio signature signals delayed in time
  • Figure 8B is a block diagram which shows an embodiment of an audio signature analysis unit used for analyzing two audio signature signals; and Figure 8C is a graph which shows an embodiment of a variation of the result of the convolution of a first audio signature signal with a second audio signature signal.
  • Now referring to Fig. 1, there is shown an embodiment of a method for generating a video signature representative of a content of a video signal. It will be appreciated that generating a video signature representative of a content of a video signal may be of great advantage for various reasons as further explained herein below.
  • According to processing step 102, pixel data of each of a plurality of given pixels in a first image are received.
  • the image originates from a digital video stream.
  • the image may originate from an analog video stream.
  • the image may originate from a file-based source.
  • the given pixels are pixels that may be selected in the image according to various criteria. For instance, the given pixels may be selected according to spatial criteria. For instance, it has been contemplated that some parts of an image may be of less interest for the purpose of generating a signature.
  • a windowing may be accordingly performed on a part of interest of an image in order to remove those parts of the image that have a limited interest.
  • the given pixels are pixels each separated from one another by a given amount of pixels.
  • In the preferred embodiment, the amount of given pixels does not change when the resolution of the video signal changes.
  • the amount of given pixels changes when the resolution of the video signal changes.
  • According to processing step 104, pixel data of each of the plurality of given pixels in a second image are received.
  • the second image is an image following in time the first image in the video signal.
  • In another embodiment, the second image is not the image immediately following the first image in the video signal, but the next image after that, i.e. the image immediately following the image that immediately follows the first image.
  • a first field of a first image is compared with a first field of a second image immediately following the first image, while a second field of the first image is compared with a second field of the second image immediately following the first image.
  • According to processing step 106, a comparison is performed.
  • the comparison is performed for each given pixel of the plurality of given pixels.
  • corresponding pixel data of the given pixel of the first image is compared with the corresponding pixel data of the given pixel in the second image in order to provide a corresponding indication of a difference between the pixel data of the given pixel of the first image and the pixel data of the given pixel in the second image.
  • the comparison is a subtraction of the pixel data of the given pixel of the first image from the corresponding pixel data of the given pixel of the second image. The comparison is performed for each given pixel of the plurality of given pixels.
  • the comparison may be a combination of operations involving the pixel data of each of the given pixels of the first image with corresponding pixel data of each of the given pixels of the second image.
  • a counter value is incremented based on the result of the comparison.
  • the counter value is incremented in the case where the result of the operation is greater than a given threshold value.
  • a given threshold value may be provided.
  • the threshold value is equal to thirty two (32) on an 8 bit precision video pixel.
  • the given threshold value may be provided according to various criteria such as a type of video signal.
  • According to processing step 110, an indication of the counter value is provided.
  • the indication of the counter value is used as the video signature representative of a content of the video signal.
  • the processing step of providing the indication of a counter value may comprise normalizing the counter value to provide a counter value limited by a given value.
  • a normalizing of the counter value is performed by dividing the counter value by (N/240) where N is the number of given pixels.
  • An advantage of the method disclosed is that such method may be used for providing information, i.e. a signature, for characterizing the content of the video signal.
  • those signatures may be advantageously correlated and in turn be used for detecting a delay between the two video signals which is of great advantage.
  • Another advantage of the method disclosed is that it requires limited processing resources for generating the signature, which is therefore of great advantage. Such method does not require any complex algorithm for analyzing the content of the video signal.
  • an audio signal is received.
  • the audio signal may be received from various sources.
  • the audio signal may be received from an audio stream.
  • the audio signal may be received from a file.
  • the audio signal may be embedded in a video stream.
  • the audio signal may be provided in various forms such as in a digital format or in an analog format. Moreover it will be appreciated that the audio signal may be formatted according to various standards known to the skilled addressee.
  • According to processing step 204, a first filtering of the received audio signal is performed.
  • the first filtering of the received audio signal comprises detecting an envelope of the audio signal.
  • According to processing step 206, a second filtering of the received audio signal is performed.
  • the second filtering of the received audio signal comprises computing an average value of the received audio signal. It will be appreciated by the skilled addressee that processing steps 204 and 206 may be performed in parallel. Alternatively, processing steps 204 and 206 may be performed serially.
  • the first filtered audio signal is compared to the second filtered audio signal. It will be appreciated that the comparison may be a combination of operations involving the first filtered audio signal and the second filtered audio signal.
  • the comparison comprises checking if the first filtered audio signal is greater than the second filtered audio signal.
  • a value is assigned depending on the result of the comparison. It will be appreciated that the value may be any type of value. In a preferred embodiment, the value is a binary value. Still in a preferred embodiment, binary value one (1) is assigned if the first filtered audio signal is greater than the second filtered audio signal, while binary value zero (0) is assigned if the second filtered audio signal is greater than or equal to the first filtered audio signal.
  • the assigned value is provided.
  • the assigned value is used as the audio signature representative of a content of the audio signal.
  • the providing of the assigned value may comprise a decimation processing step.
  • the skilled addressee will appreciate that the purpose of the decimation processing step is to remove a given amount of unwanted/redundant data. The skilled addressee will also appreciate that this will further result in a signature having a shorter size which is also of great advantage.
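As a concrete illustration of the decimation step, keeping only every eighth signature sample (the factor 8 is an arbitrary assumption; the text fixes no factor) shortens the signature while preserving its coarse temporal shape:

```python
def decimate(signature, factor=8):
    """Keep every `factor`-th sample of the signature, discarding
    redundant data so the signature takes less space to store or
    transmit."""
    return signature[::factor]
```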
  • the skilled addressee will appreciate that the method disclosed for providing an audio signature representative of a content of the audio signal is of great advantage for various reasons.
  • An advantage of the method disclosed is that such method may be used for providing information, i.e. a signature, for characterizing the content of the audio signal.
  • those two signatures may be advantageously correlated and in turn be used for detecting a delay between the two audio signals which is of great advantage.
  • Another advantage of the method disclosed above is that it requires limited processing resources for generating the signature, which is of great advantage.
  • a further advantage is that the implementation of the method disclosed may require limited memory resources, which is also of great advantage.
  • the apparatus 300 for providing a video signature comprises an optional windowing unit 302, a filtering unit 303 and a video signature extraction unit 304.
  • the optional windowing unit 302 is used for performing a windowing of an incoming image signal V_i.
  • the optional windowing unit 302 provides a corresponding selected signal W_i.
  • the optional windowing unit 302 may be implemented according to various embodiments known to the skilled addressee.
  • the apparatus 300 for providing a video signature further comprises the filtering unit 303.
  • the filtering unit 303 is used for filtering the corresponding selected signal W_i.
  • the filtering unit 303 provides a filtered signal W_if. It will be appreciated that the filtering unit 303 operates according to various embodiments.
  • a filtered pixel data of a given pixel is equal to the average of the pixel data of the given pixel, the pixel immediately following the given pixel on a same line and the pixel immediately preceding the given pixel on the same line.
  • a filtered pixel data of a given pixel is equal to the average of the pixel data of the given pixel and the pixel immediately following the given pixel on a same line.
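The two filtering embodiments above (a three-tap and a two-tap horizontal average) can be sketched as follows (illustrative Python; leaving edge pixels unfiltered is an assumption, since the text does not specify border handling):

```python
def horizontal_filter(line, taps=3):
    """Average each pixel of a line with its immediate horizontal
    neighbour(s): 3 taps (previous, current, next) or 2 taps
    (current, next). Pixels lacking a neighbour are left as-is."""
    out = list(line)
    if taps == 3:
        for i in range(1, len(line) - 1):
            out[i] = (line[i - 1] + line[i] + line[i + 1]) / 3.0
    elif taps == 2:
        for i in range(len(line) - 1):
            out[i] = (line[i] + line[i + 1]) / 2.0
    return out
```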
  • the apparatus 300 for providing a video signature further comprises the video signature extraction unit 304.
  • the video signature extraction unit 304 is used for extracting a video signature from an incoming video signal.
  • the video signature extraction unit 304 receives the filtered signal W_if provided by the filtering unit 303 and provides a video signature signal VS_i.
  • a corresponding pixel data P_k of a pixel k of a plurality of given pixels 1 to N of a first image is compared with a corresponding pixel data C_k of the corresponding given pixel k in a second image in order to provide a corresponding indication of a difference |P_k − C_k| between the pixel data of the given pixel of the first image and the pixel data of the given pixel in the second image.
  • the video signature signal may be therefore defined as:
  • the video signature extraction unit 304 comprises a first image pixel data providing unit 306, a second image pixel data providing unit 308, a comparator 310, a counter 312 and a video signature providing unit 314.
  • the first image pixel data providing unit 306 is used for receiving the filtered signal W tf provided by the filtering unit 303 and for providing a corresponding pixel data P k of a pixel k of a plurality of given pixels 1 to N of the first image.
  • the second image pixel data providing unit 308 is used for receiving the filtered signal W tf provided by the filtering unit 303 and for providing a corresponding pixel data C k of the corresponding given pixel k in the second image.
  • the comparator 310 is used for receiving the corresponding pixel data P k of a pixel k of a plurality of given pixels 1 to N of the first image and the corresponding pixel data C k of the corresponding given pixel k in the second image and for comparing the corresponding pixel data P k with the corresponding pixel data C k .
  • the comparator 310 outputs a logic value one (1) if |P k - C k| is greater than thirty-two (32) and a logic value zero (0) otherwise.
  • the counter 312 receives the output from the comparator 310 and provides a signal indicative of a number of logic value ones received by the counter 312.
  • the video signature providing unit 314 receives the signal indicative of a number of logic value ones received by the counter 312 and provides the video signature signal VS t . In one embodiment, the video signature providing unit 314 performs a division of the signal indicative of a number of logic value ones received by the counter 312 by N/240 wherein N is the number of given pixels.
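The filtering, comparator, counter and normalization steps described above can be sketched as follows. This is an illustrative Python sketch rather than the patented implementation: the function names and the edge handling in the averaging filter are assumptions, while the threshold of thirty-two and the division by N/240 follow the text.

```python
def horizontal_filter(line):
    """Filtering unit 303 (one embodiment): each filtered pixel is the
    average of the pixel and its immediate neighbours on the same line.
    Edge pixels average over the neighbours that exist (an assumption;
    the text does not specify edge handling)."""
    out = []
    for i in range(len(line)):
        neigh = line[max(0, i - 1): i + 2]
        out.append(sum(neigh) / len(neigh))
    return out

def video_signature(prev_pixels, curr_pixels, threshold=32, norm=240):
    """Units 310/312/314: count pixels whose value changed by more than
    `threshold` between two images, then normalize the count by N/240."""
    assert len(prev_pixels) == len(curr_pixels)
    n = len(prev_pixels)
    # Comparator 310 emits a logic one when |P_k - C_k| > 32;
    # counter 312 accumulates the ones.
    ones = sum(1 for p, c in zip(prev_pixels, curr_pixels)
               if abs(p - c) > threshold)
    # Video signature providing unit 314: divide the count by N/240.
    return ones / (n / norm)
```

With 240 pixels, the normalization divisor is 1, so the signature equals the raw count of changed pixels.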
  • FIG. 4A there is shown an embodiment of an apparatus 402 for generating an audio signature representative of a content of an audio signal.
  • the apparatus 402 for generating an audio signature representative of a content of an audio signal receives an audio signal A t and provides a corresponding audio signature signal AS t .
  • the audio signal A t may be of various forms such as in a digital format or in an analog format. Moreover, it will be appreciated that the audio signal may be formatted according to various standards as already explained above.
  • the corresponding audio signature signal AS t may be of various forms such as in a digital format or in an analog format.
  • the corresponding audio signature signal AS t may be formatted according to various standards as explained above.
  • FIG. 4B there is shown an embodiment of the apparatus 402 for generating an audio signature representative of a content of an audio signal.
  • the apparatus 402 for generating an audio signature representative of a content of an audio signal comprises an absolute value providing unit 404, an envelope detector 406, a mean detector 408, a comparator 410 and a decimator 412.
  • the absolute value providing unit 404 is used to provide a signal indicative of an absolute value of the audio signal A t .
  • the envelope detector 406 is used to provide a signal E s indicative of an envelope of the signal indicative of an absolute value of the audio signal A t . It will be appreciated that the envelope detector 406 is an embodiment of a first filtering unit.
  • the envelope detector 406 may be implemented in various ways as known by the skilled addressee.
  • the envelope detector 406 is implemented using a one-tap Infinite Impulse Response (IIR) filter.
  • the mean detector 408 is used to provide a signal M s indicative of a mean of the signal indicative of an absolute value of the audio signal A t . It will be appreciated that the mean detector 408 is an embodiment of a second filtering unit.
  • mean detector 408 may be implemented in various ways as known by the skilled addressee.
  • the mean detector 408 is implemented using a one-tap Infinite Impulse Response (IIR) filter.
  • the comparator 410 is used for making a comparison between two incoming signals. More precisely, the comparator 410 receives the signal M s indicative of a mean of the signal indicative of an absolute value of the audio signal A t and the signal E s indicative of an envelope of the signal indicative of an absolute value of the audio signal A t and performs a comparison between those two signals.
  • the comparator 410 may be implemented in various ways.
  • the decimator 412 is used for performing decimation on a signal provided by the comparator 410.
  • the decimator 412 provides the audio signature signal AS i .
  • decimator 412 is optional.
  • decimator 412 may be implemented in various ways as known by the skilled addressee. In one embodiment, a decimation by fifty-two (52) is performed by the decimator 412.
  • the decimator 412 takes a sample and ignores the fifty-one (51) following samples.
  • the sample is taken every one (1) ms.
  • Other methods may alternatively be used. For instance, those methods may take into consideration the value of each sample for selecting a given sample.
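The audio signature chain of FIG. 4B can be sketched in Python as follows. The text does not give the one-tap IIR coefficients or the comparator polarity, so `env_alpha`, `mean_alpha` and the `env > mean` test are assumptions; the rectification, the two one-tap filters, the comparison and the decimation by fifty-two follow the description.

```python
def audio_signature(samples, env_alpha=0.99, mean_alpha=0.999, decim=52):
    """Sketch of apparatus 402: rectify the audio, track an envelope and a
    mean with one-tap IIR filters, compare them, then decimate by 52.
    The filter coefficients are illustrative assumptions."""
    env = 0.0   # envelope detector 406 state
    mean = 0.0  # mean detector 408 state
    bits = []
    for s in samples:
        a = abs(s)  # absolute value providing unit 404
        # One-tap IIR filters: the envelope follows rises instantly and
        # decays slowly (illustrative choice); the mean adapts slowly.
        env = max(a, env_alpha * env)
        mean = mean_alpha * mean + (1.0 - mean_alpha) * a
        # Comparator 410: logic one when the envelope exceeds the mean.
        bits.append(1 if env > mean else 0)
    # Decimator 412: keep one sample, ignore the following fifty-one.
    return bits[::decim]
```

The result is a compact binary stream, one bit per decimation period, that tracks where the audio envelope rises above its running mean.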
  • FIG. 5 there is shown an embodiment of an apparatus 500 which uses an audio signal signature extraction unit and a video signal signature extraction unit for providing, inter alia, an indication of a lip sync.
  • a first audio/video content is to be compared to a second audio/video content.
  • the first audio/video content comprises a first audio signal and a first video signal while the second audio/video content comprises a second audio signal and a second video signal.
  • the apparatus 500 comprises a first signature extraction unit 502, a second signature extraction unit 504 and a signature analysis unit 506.
  • the first signature extraction unit 502 is used for providing a first video signature signal VS 1 and a first audio signature signal AS 1 of respectively a first video signal V 1 and a first audio signal A 1 associated with the first video signal V 1 .
  • the first signature extraction unit 502 comprises, although not shown in Fig. 5, an audio signature extraction unit and a video signature extraction unit each responsible for respectively receiving the first audio signal A 1 and providing the first audio signature signal AS 1 and receiving the first video signal V 1 and providing the first video signature signal VS 1 .
  • Such video signature extraction unit and audio signature extraction unit have been already described above.
  • the second signature extraction unit 504 is used for providing a second video signature signal VS 2 and a second audio signature signal AS 2 of respectively a second video signal V 2 and a second audio signal A 2 associated with the second video signal V 2 .
  • the second signature extraction unit 504 comprises, although not shown in Fig. 5, an audio signature extraction unit and a video signature extraction unit each responsible for respectively receiving the second audio signal A 2 and providing the second audio signature signal AS 2 and receiving the second video signal V 2 and providing the second video signature signal VS 2 .
  • Such video signature extraction unit and audio signature extraction unit have also been already described above.
  • the signature analysis unit 506 is used for receiving the first video signature signal VS 1 , the first audio signature signal AS 1 , the second video signature signal VS 2 and the second audio signature signal AS 2 .
  • the signature analysis unit 506 provides a signal VD indicative of a video delay, a signal AD indicative of an audio delay, a signal LS indicative of a lip sync, a signal VMF indicative of a video matching factor and a signal AMF indicative of an audio matching factor.
  • the signal VD indicative of a video delay is generated by comparing the first video signature signal VS 1 with the second video signature signal VS 2 and by determining a delay between those two signals as shown further below.
  • the signal AD indicative of an audio delay is generated by comparing the first audio signature signal AS 1 with the second audio signature signal AS 2 and by determining a delay between those two signals as shown further below.
  • the signal VMF indicative of a video matching factor and the signal AMF indicative of an audio matching factor are generated as further explained below.
  • the signal LS indicative of a lip sync is generated by comparing the signal VD indicative of a video delay with the signal AD indicative of an audio delay.
  • the signal VMF indicative of a video matching factor and the signal AMF indicative of an audio matching factor are further used for generating the signal LS indicative of a lip sync. While the embodiment disclosed in Fig. 5 has been shown to be used for lip sync detection, the skilled addressee will appreciate that such embodiment may be advantageously used for content comparison for instance.
  • the embodiment may be used for detecting pirated copies.
  • the method disclosed may be used for checking that a given ad has been properly inserted in a video signal. This may be done by comparing the signatures of the audio/video signal carrying the ad and the signatures of the ad per se.
  • Fig. 6 there is shown an embodiment of the signature analysis unit 506 disclosed in Fig. 5.
  • the signature analysis unit 506 comprises a video signature analysis unit 602, an audio signature analysis unit 604 and a video signature correlation analysis unit 606.
  • the video signature analysis unit 602 is used for determining an estimation VD' of the signal indicative of a video delay between the first video signature signal VS 1 and the second video signature signal VS 2 .
  • the video signature analysis unit 602 is further used for determining an estimation VMF' of a signal indicative of a video matching factor.
  • the audio signature analysis unit 604 is used for determining an estimation AD' of the signal indicative of an audio delay between the first audio signature signal AS 1 and the second audio signature signal AS 2 .
  • the audio signature analysis unit 604 is further used for determining an estimation AMF' of a signal indicative of an audio matching factor.
  • the video signature correlation analysis unit 606 receives the estimation VD' of the signal indicative of a video delay between the first video signature signal VS 1 and the second video signature signal VS 2 , the estimation AD' of the signal indicative of an audio delay between the first audio signature signal AS 1 and the second audio signature signal AS 2 , the estimation VMF' of a signal indicative of a video matching factor and the estimation AMF' of a signal indicative of an audio matching factor.
  • the video signature correlation analysis unit 606 provides the signal LS indicative of a lip sync, the signal VD indicative of a video delay, the signal AD indicative of an audio delay, the signal VMF indicative of a video matching factor and the signal AMF indicative of an audio matching factor.
  • the threshold1 is equal to 50% while the threshold2 is equal to 40%. The skilled addressee will appreciate that various other values may be used.
  • the signal VD indicative of a video delay and the signal AD indicative of an audio delay are generated using respectively at least the estimation VD' of the signal indicative of a video delay between the first video signature signal VS 1 and the second video signature signal VS 2 and the estimation AD' of the signal indicative of an audio delay between the first audio signature signal AS 1 and the second audio signature signal AS 2 .
  • the estimations are first computed prior to providing the values, for the sake of validating the values.
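The validation logic described above can be sketched in Python as follows. The text states only that the matching factors gate the lip-sync result (50% for video and 40% for audio in one embodiment), so the exact combination rule below, including the function name `lip_sync`, is an assumption.

```python
def lip_sync(vd, ad, vmf, amf, vmf_threshold=50.0, amf_threshold=40.0):
    """Sketch of the lip-sync decision: the delay estimates are only
    trusted when both matching factors clear their thresholds (50% for
    video, 40% for audio, per the text). The difference rule itself is
    an assumption about how VD and AD are compared."""
    if vmf < vmf_threshold or amf < amf_threshold:
        return None  # sources considered different; no valid lip sync
    # Lip-sync error: how far the video delay deviates from the audio delay.
    return vd - ad
```

A return of zero would indicate the two contents are matched and in sync; a non-zero value gives the lip-sync offset in whatever delay units VD and AD use.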
  • FIG. 7A there are shown a graph 702 showing an example of the first video signature signal VS 1 and a graph 704 showing an example of the second video signature signal VS 2 .
  • the first video signature signal VS 1 and the second video signature signal VS 2 are desynchronized in time by an amount of time corresponding to the estimation VD' of the signal indicative of a video delay between the first video signature signal VS 1 and the second video signature signal VS 2 . Still in this embodiment, the first video signature signal VS 1 is delayed in time compared to the second video signature signal VS 2 .
  • the video signature analysis unit 602 comprises a convolution unit 706 and a minimum detection and delay extraction unit 708.
  • the convolution unit 706 is used for performing a convolution of the first video signature signal VS 1 and the second video signature signal VS 2 .
  • the convolution is a time shifted convolution of the second video signature signal VS 2 window across the first video signature signal VS 1 window.
  • the second video signature signal VS 2 window has a value of twenty (20) seconds while the first video signature signal VS 1 window has a value of thirty (30) seconds.
  • the convolution unit 706 provides a convolution signal E v (t) .
  • the convolution may be alternatively performed in various other ways.
  • the convolution unit 706 may be implemented in various ways.
  • the minimum detection and delay extraction unit 708 is used for detecting a minimum in the convolution signal E v (t) . It will be appreciated that the minimum in the convolution signal E v (t) is indicative of the estimation VD' of the signal indicative of a video delay between the first video signature signal VS 1 and the second video signature signal VS 2 .
  • the minimum detection and delay extraction unit 708 is further used for providing the estimation VMF' of a signal indicative of a video matching factor.
  • the signal VMF indicative of a video matching factor is indicative of a level of similarity measured in the video signatures.
  • the signal estimation VMF' of a signal indicative of a video matching factor is a function of the convolution signal E v (t) and the estimation VD' of the signal indicative of a video delay between the first video signature signal VS 1 and the second video signature signal VS 2 . In a preferred embodiment, the signal estimation VMF' of a signal indicative of a video matching factor is defined as:
  • VMF' = 100 * (1 - Min(E v (t)) / Sum(VS 2 )) , wherein VMF' is the estimated video matching factor between the first video signature signal VS 1 and the second video signature signal VS 2 and is expressed in percentage, Min(E v (t)) is the minimum error found in the video correlation graph disclosed at Fig. 7C, and Sum(VS 2 ) is the sum of the signature vector in the window W V2 . It will be appreciated that in that embodiment, when V 1 and V 2 are exactly the same, VMF' is equal to 100% while if one of the two video sources is altered, VMF' will be reduced. In this embodiment, under 50%, the video sources are considered to be different.
  • minimum detection and delay extraction unit 708 may be implemented in various ways.
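The time-shifted convolution of unit 706 and the minimum detection of unit 708 can be sketched together in Python. Using a sum of absolute differences as the per-lag error is an assumption (the text does not define the error metric); the minimum of the error curve yields the delay estimate VD' and the matching factor follows the VMF' formula above.

```python
def estimate_delay_and_match(sig1, sig2):
    """Sketch of units 706/708: slide the shorter second-signature
    window sig2 across the first-signature window sig1, accumulating an
    error per lag (absolute differences, an assumption). The lag with
    minimum error is the delay estimate VD'; the matching factor is
    VMF' = 100 * (1 - min_error / Sum(sig2)) per the formula above."""
    assert len(sig1) >= len(sig2) and sum(sig2) > 0
    errors = []
    for lag in range(len(sig1) - len(sig2) + 1):
        # Time-shifted "convolution": error between sig2 and the
        # portion of sig1 starting at this lag.
        errors.append(sum(abs(a - b) for a, b in zip(sig1[lag:], sig2)))
    min_e = min(errors)                       # minimum detection
    delay = errors.index(min_e)               # delay extraction (in samples)
    match = 100.0 * (1.0 - min_e / sum(sig2)) # estimated matching factor
    return delay, match
```

When the two signatures are identical up to a shift, the minimum error is zero and the matching factor is 100%, matching the behaviour stated for VMF'.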
  • FIG. 8 there are shown a graph 800 showing an example of the first audio signature signal AS 1 and a graph 802 showing an example of the second audio signature signal AS 2 .
  • the first audio signature signal AS 1 and the second audio signature signal AS 2 are desynchronized in time by an amount of time corresponding to the estimation AD' of the signal indicative of an audio delay between the first audio signature signal AS 1 and the second audio signature signal AS 2 . Still in this embodiment, the first audio signature signal AS 1 is delayed in time compared to the second audio signature signal AS 2 .
  • FIG. 8B there is shown an embodiment of audio signature analysis unit 604 for determining the estimation AD' of the signal indicative of an audio delay between the first audio signature signal AS 1 and the second audio signature signal AS 2 and the estimation AMF' of a signal indicative of an audio matching factor.
  • the audio signature analysis unit 604 comprises a convolution unit 804 and a minimum detection and delay extraction unit 806.
  • the convolution unit 804 is used for performing a convolution of the first audio signature signal AS 1 and the second audio signature signal AS 2 .
  • the convolution is a time shifted convolution of the second audio signature signal AS 2 window across the first audio signature signal AS 1 window.
  • the second audio signature signal AS 2 window has a value of one (1) second while the first audio signature signal AS 1 window has a value of ten (10) seconds. It will be appreciated that those values may be changed depending on various needs.
  • the convolution unit 804 provides a convolution signal E A (t) .
  • the convolution may be alternatively performed in various other ways.
  • the convolution unit 804 may be implemented in various ways.
  • the minimum detection and delay extraction unit 806 is used for detecting a minimum in the convolution signal E A (t) . It will be appreciated that the minimum in the convolution signal E A (t) is indicative of the estimation AD' of the signal indicative of an audio delay between the first audio signature signal AS 1 and the second audio signature signal AS 2 .
  • the minimum detection and delay extraction unit 806 is further used for providing the signal estimation AMF' of a signal indicative of an audio matching factor.
  • the signal estimation AMF' of a signal indicative of an audio matching factor is indicative of a level of similarity measured in the audio signatures. It will be appreciated that the signal estimation AMF' of a signal indicative of an audio matching factor is a function of the convolution signal E A (t) and the estimation AD' of the signal indicative of an audio delay between the first audio signature signal AS 1 and the second audio signature signal AS 2 .
  • the signal estimation AMF' of a signal indicative of an audio matching factor is defined as: AMF' = 100 * (1 - Min(E A (t)) / (Size(AS 2 ) / 5)) , wherein AMF' is the estimated audio matching factor between the first audio signature signal AS 1 and the second audio signature signal AS 2 expressed in percentage, Min(E A (t)) is the minimum error found in the audio correlation graph shown in Fig. 8C and Size(AS 2 ) is the size of the audio signature vector in the window W A2 . It will be appreciated that when A 1 and A 2 are exactly the same, the signal estimation AMF' of a signal indicative of an audio matching factor is equal to 100%.
  • if one of the two audio sources is altered, the signal estimation AMF' of a signal indicative of an audio matching factor will be reduced. Still in this embodiment, when the signal estimation AMF' of a signal indicative of an audio matching factor is under 40%, the audio sources are considered to be different.
  • the minimum detection and delay extraction unit 806 may be implemented in various ways. For instance, the minimum detection and delay extraction unit 806 may first perform a pre-scan in order to select a first selection of convoluted samples. In one embodiment the first selection represents one-eighth (1/8) of all the convoluted samples. A search may then be performed in the selected convoluted samples.
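The coarse-to-fine search just described for unit 806 can be sketched in Python: a pre-scan over one-eighth of the convolved error samples, followed by a full-resolution search around the best coarse hit. The refinement neighbourhood width (one stride on each side) and the function name are assumptions, since the text only describes the pre-scan.

```python
def coarse_to_fine_min(errors, fraction=8):
    """Sketch of the pre-scan in unit 806: first inspect every 8th
    convolved error sample (one-eighth of them), then search at full
    resolution around the best coarse candidate. Returns the index of
    the minimum found and its error value."""
    stride = fraction
    # Pre-scan: a first selection of one-eighth of the samples.
    coarse = range(0, len(errors), stride)
    best = min(coarse, key=lambda i: errors[i])
    # Refine at full resolution around the coarse minimum
    # (neighbourhood width of one stride is an assumption).
    lo = max(0, best - stride)
    hi = min(len(errors), best + stride + 1)
    fine = min(range(lo, hi), key=lambda i: errors[i])
    return fine, errors[fine]
```

The pre-scan cuts the number of samples examined roughly by a factor of eight, at the cost of possibly missing a narrow minimum that falls outside the refined neighbourhood, which is why a wider refinement window may be preferred in practice.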


Abstract

The invention relates to a method and apparatus for providing a video signature representative of a content of a video signal. The invention further relates to a method and apparatus for providing an audio signature representative of a content of an audio signal. The invention further relates to a method and apparatus for detecting a lip sync that take advantage of the method and apparatus disclosed in the present application for providing a video signature and an audio signature.
PCT/CA2010/001876 2009-11-30 2010-11-23 Procédé et appareil de délivrance de signatures de signaux audio/vidéo et d'utilisation de celles-ci WO2011063520A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB1210285.1A GB2489133B (en) 2009-11-30 2010-11-23 Method and apparatus for providing signatures of audio/video signals and for making use thereof
HK13103398.7A HK1176203A1 (en) 2009-11-30 2013-03-19 Method and apparatus for providing signatures of audio video signals and for making use thereof

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CA2686869 2009-11-30
US12/627,728 US8860883B2 (en) 2009-11-30 2009-11-30 Method and apparatus for providing signatures of audio/video signals and for making use thereof
CA2686869A CA2686869C (fr) 2009-11-30 2009-11-30 Procede et appareil pour la fourniture de signatures de signaux audio et/ou video et leur utilisation
US12/627,728 2009-11-30

Publications (1)

Publication Number Publication Date
WO2011063520A1 true WO2011063520A1 (fr) 2011-06-03

Family

ID=44065788

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2010/001876 WO2011063520A1 (fr) 2009-11-30 2010-11-23 Procédé et appareil de délivrance de signatures de signaux audio/vidéo et d'utilisation de celles-ci

Country Status (3)

Country Link
GB (3) GB2489133B (fr)
HK (2) HK1198311A1 (fr)
WO (1) WO2011063520A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2562515A (en) 2017-05-17 2018-11-21 Snell Advanced Media Ltd Generation of audio or video hash

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002097792A1 (fr) * 2001-05-25 2002-12-05 Dolby Laboratories Licensing Corporation Segmentation de signaux audio en evenements auditifs
WO2005046201A2 (fr) * 2003-10-16 2005-05-19 Nielsen Media Research, Inc. Appareil de signature audio et procedes associes
WO2008066930A2 (fr) * 2006-11-30 2008-06-05 Dolby Laboratories Licensing Corporation Extraction de particularités d'un contenu de signal vidéo et audio pour fournir une identification fiable des signaux

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5585859A (en) * 1994-05-17 1996-12-17 The University Of British Columbia System for reducing beat type impairments in a TV signal
US7577259B2 (en) * 2003-05-20 2009-08-18 Panasonic Corporation Method and apparatus for extending band of audio signal using higher harmonic wave generator
US7649937B2 (en) * 2004-06-22 2010-01-19 Auction Management Solutions, Inc. Real-time and bandwidth efficient capture and delivery of live video to multiple destinations
GB2457694B (en) * 2008-02-21 2012-09-26 Snell Ltd Method of Deriving an Audio-Visual Signature
US8600531B2 (en) * 2008-03-05 2013-12-03 The Nielsen Company (Us), Llc Methods and apparatus for generating signatures

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002097792A1 (fr) * 2001-05-25 2002-12-05 Dolby Laboratories Licensing Corporation Segmentation de signaux audio en evenements auditifs
WO2005046201A2 (fr) * 2003-10-16 2005-05-19 Nielsen Media Research, Inc. Appareil de signature audio et procedes associes
WO2008066930A2 (fr) * 2006-11-30 2008-06-05 Dolby Laboratories Licensing Corporation Extraction de particularités d'un contenu de signal vidéo et audio pour fournir une identification fiable des signaux

Also Published As

Publication number Publication date
GB201403468D0 (en) 2014-04-16
GB201407414D0 (en) 2014-06-11
HK1176203A1 (en) 2013-07-19
HK1198311A1 (zh) 2015-03-27
GB2508115A8 (en) 2015-09-09
GB2511655B (en) 2014-10-15
GB2489133A (en) 2012-09-19
GB2489133B (en) 2014-05-07
GB2508115A (en) 2014-05-21
GB2508115B (en) 2014-08-27
GB2511655A (en) 2014-09-10
GB201210285D0 (en) 2012-07-25

Similar Documents

Publication Publication Date Title
US8860883B2 (en) Method and apparatus for providing signatures of audio/video signals and for making use thereof
US9536545B2 (en) Audio visual signature, method of deriving a signature, and method of comparing audio-visual data background
TWI442773B (zh) 抽取視訊與音訊信號內容之特徵以提供此等信號之可靠識別的技術
US8406462B2 (en) Signature derivation for images
US10219033B2 (en) Method and apparatus of managing visual content
US8928809B2 (en) Synchronizing videos
WO2013103544A2 (fr) Détection automatisée d'artéfacts vidéo dans un signal d'informations
WO2015168893A1 (fr) Procédé et dispositif de détection de qualité vidéo
US10395121B2 (en) Comparing video sequences using fingerprints
US9852489B2 (en) Method and apparatus for modifying a video stream to encode metadata
WO2011063520A1 (fr) Procédé et appareil de délivrance de signatures de signaux audio/vidéo et d'utilisation de celles-ci
CA2686869C (fr) Procede et appareil pour la fourniture de signatures de signaux audio et/ou video et leur utilisation
US8285051B2 (en) Information processing apparatus and method for detecting associated information from time-sequential information
JP4939860B2 (ja) イメージ・データを分析するための方法および装置
GB2487499A (en) Audio-Visual Signature, Method of Deriving a Signature, and Method of Comparing Audio-Visual Data
JP2010148017A (ja) ノイズ低減装置及びそのプログラム
JP2019125939A (ja) コマーシャル境界判定装置、コマーシャル境界判定方法、及びプログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10832471

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 1210285

Country of ref document: GB

Kind code of ref document: A

Free format text: PCT FILING DATE = 20101123

WWE Wipo information: entry into national phase

Ref document number: 1210285.1

Country of ref document: GB

122 Ep: pct application non-entry in european phase

Ref document number: 10832471

Country of ref document: EP

Kind code of ref document: A1