US11570506B2 - Method for synchronizing an additional signal to a primary signal - Google Patents

Method for synchronizing an additional signal to a primary signal

Info

Publication number
US11570506B2
US11570506B2 (application US16/955,966)
Authority
US
United States
Prior art keywords
signal
time
playback device
additional
primary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US16/955,966
Other languages
English (en)
Other versions
US20200322671A1 (en)
Inventor
Christof Haslauer
Oliver Dumböck
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nativewaves GmbH
Original Assignee
Nativewaves GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from DE102017131266.8A
Priority claimed from ATA50180/2018A
Application filed by Nativewaves GmbH filed Critical Nativewaves GmbH
Assigned to NATIVEWAVES GMBH. Assignors: DUMBÖCK, Oliver; HASLAUER, Christof
Publication of US20200322671A1
Application granted
Publication of US11570506B2

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N21/43074Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen of additional data with content streams on the same device, e.g. of EPG data or interactive icon with a TV program
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42203Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] sound input device, e.g. microphone
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/4223Cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N21/43079Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen of additional data with content streams on multiple devices
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/462Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
    • H04N21/4621Controlling the complexity of the content stream or additional data, e.g. lowering the resolution or bit-rate of the video stream for a mobile client with a small screen

Definitions

  • the invention relates to a method for synchronizing an additional signal to a primary signal and a device for synchronizing an additional signal to a primary signal.
  • the signals are “continuous signals;” continuous signals are understood to be signals that can be described by a feature sequence of chronologically consecutive features.
  • Typical continuous signals are audio signals and video signals, which can be sampled at regular intervals in order to generate corresponding features.
  • Continuous signals can also be signals that are used to transmit digitally encoded text.
  • U.S. Pat. No. 9,609,034 B2 discloses a method for identifying media data by means of metadata.
  • WO 2016/085414 A1 describes a method in which a mobile platform recognizes the station playing on a television and, during commercial breaks, receives information appropriate to it.
  • EP 2 507 790 B1 describes a method and system for channel-invariant robust audio hashing with a subsequent comparison of two audio hashes.
  • the audio signals are first divided into fragments with a typical length of 2 seconds. Then these fragments are divided further into frames with a typical length of 0.36 seconds. The frames are Fourier transformed and the resulting data are then normalized. The hash values are obtained by quantizing these data.
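For illustration, a minimal Python sketch of the hashing scheme just summarized: 2-second fragments, 0.36-second frames, FFT, normalization, quantization. The band pooling and the derivation of hash bits are simplifying assumptions, not the procedure prescribed by EP 2 507 790 B1.

```python
import numpy as np

def audio_hashes(samples, fs, fragment_s=2.0, frame_s=0.36, n_bands=16):
    """Divide audio into ~2 s fragments and ~0.36 s frames, Fourier transform
    each frame, normalize the spectral data, and quantize them into hashes."""
    frame_len = int(frame_s * fs)
    fragment_len = int(fragment_s * fs)
    hashes = []
    for f0 in range(0, len(samples) - fragment_len + 1, fragment_len):
        fragment = samples[f0:f0 + fragment_len]
        for t0 in range(0, fragment_len - frame_len + 1, frame_len):
            spectrum = np.abs(np.fft.rfft(fragment[t0:t0 + frame_len]))
            bands = np.array_split(spectrum, n_bands)          # coarse bands
            energies = np.array([band.mean() for band in bands])
            energies /= energies.sum() + 1e-12                 # normalization
            bits = (energies > np.median(energies)).astype(int)
            hashes.append(int("".join(map(str, bits)), 2))     # quantization
    return hashes
```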
  • WO 2012/049223 A2 describes a method for synchronizing an alternative audio signal to a combined video and audio signal. Two possibilities for this are mentioned. First, a watermarking method is described, in which an additional signal that is not perceptible to humans is added to the video and audio signal and can, for example, be embodied as a modulation of the primary audio signal. The other method described is the fingerprint method.
  • the main audio signal is characterized based on the amplitude, frequency, zero crossing rate, tempo, spectral flatness, bandwidth, and/or audio fingerprints and is compared to the corresponding characteristics of the second signal. If the position in the second signal is detected, then this can be chronologically adapted to the primary signal.
  • WO 2014/018652 A2 describes a method for synchronizing an alternative audio signal to a combined video and audio signal.
  • a fingerprint process is used as the method.
  • the fingerprints of the main audio signal are coupled in their entirety to the second signal.
  • the second signal and the fingerprints of the first signal are loaded in advance onto the device that is to be synchronized so that during the synchronization, only the first signal has to be analyzed and compared to the fingerprints on the device.
  • WO 03003743 A2 discloses a method and device that supply synchronization signals for synchronizing parallel media.
  • a synchronization server is coupled to a communication network and connected to a broadcast media start time database.
  • the synchronization server receives a synchronization request from a user client via the communications network.
  • the synchronization server generates synchronization data using the synchronization request and the broadcast media start time database.
  • WO 2014209179 A1 describes a method and transceiver for network diversity in long distance communications.
  • the method in a main node comprises the following steps for communication with a destination node over long distances:
  • the object of the present invention is to provide a quick, robust, and precise method and corresponding device with which an additional signal can be output synchronously to an arbitrary, even continuous, primary signal.
  • Another object lies in providing a streaming method with a short latency.
  • Another object lies in providing a quick, robust, and precise method with which the delay between the reception and output of a media playback device can be measured and calibrated.
  • Another object lies in providing a method for synchronizing an additional signal to a primary signal, which measures the time delay between these two signals and adapts at least one of these signals such that the time delay is as small as possible.
  • a method for synchronizing an additional signal to a primary signal is provided with the following steps:
  • the primary signal can be a continuously transmitted television signal that does not have a starting point.
  • synchronization information can be allocated to the primary signal at a specified position that relates to the identified signal feature sequence. In other words, this means that the synchronization information relates to the location or position of the identified signal feature sequence in the primary signal.
  • This synchronization information is stored, for example, together with the DB feature sequences in the database.
  • This synchronization information can also include a time, in particular a server time of a synchronization server on which this method is carried out, which is detected for example if the additional signal with this signal feature sequence is received from the synchronization server, the signal feature sequence is extracted, or the additional signal with this signal feature sequence is transmitted from the synchronization server to the playback device.
  • This signal feature sequence therefore constitutes a particular location in the primary signal to which a particular time, in particular the server time, is then allocated. This particular time can also be extracted from time information contained within the signal and can be allocated to the respective signal feature sequence.
  • this method can also be used with a primary signal that has a particular starting point.
  • this time reference is generated on the fly during the passage of the primary signal, for example by a synchronization server, in that the comparison to the DB feature sequences is used to identify at least one signal feature sequence of the primary signal to which corresponding synchronization information can then be allocated.
  • time information allocated to the primary signal is generated, which connects a location or point in the primary signal to a time.
  • Such an allocation of time information to the primary signal can also make sense if the primary signal already includes time information, e.g. in the form of time markers. If need be, this is used to generate a second reference, which can be connected to further information.
  • the DB feature sequence can be allocated time information, which defines a particular DB time relative to the DB feature sequence that is used to generate the synchronization information.
  • This time information is typically stored together with the DB feature sequence in the database. For example, it indicates a particular time when a signal feature sequence, which matches this DB feature sequence, occurs at a particular point in a larger signal segment such as a film. The additional signals can then be synchronized relative to this DB time on the playback device.
  • Synchronization information can also be allocated to the additional signal by extracting a signal feature sequence of the additional signal and comparing it to a DB feature sequence stored in a database; if the signal feature sequence matches one of the DB feature sequences to a predetermined degree, then synchronization information is allocated to the additional signal at a position specified by the signal feature sequence.
  • Synchronization information can also be allocated to the additional signal manually. For example, an operator can allocate to the additional signal time information as to when it is to be broadcast relative to a primary signal.
  • the synchronization information can comprise one or more of the following pieces of data:
  • the synchronization information can be composed very differently depending on the application.
  • the synchronization information of the matching DB feature sequences is allocated to the primary signal at a position specified by the signal feature sequence.
  • Particular information in the database is allocated to the DB feature sequences. But this information does not absolutely have to include time information. It can, for example, be metadata, which describe the meaning (e.g. the title of a piece of music, act of an opera, etc.) of the DB feature sequence or of a segment of the signal in which this DB feature sequence is found.
  • the synchronization information can then be generated, for example, based on time information contained within the primary signal, which is extracted together with the signal feature sequence, or based on the extraction time. These times are, for example, combined with this meta-information, thus yielding synchronization information based on which an additional signal with the same meta-information can be allocated; the time of the allocation and synchronization is derived from the extracted time information or from the extraction time.
  • the invention is also based on the discovery that media signals from different signal sources often have similar segments of features. These feature segments do not have to be exactly the same. For example, if a primary signal is a high-quality audio signal of a concert and an additional signal is a video signal with a low-quality audio signal, then based on the low-quality audio signal, e.g. when the musicians are greeted with applause, the additional signal can be synchronized very exactly since the audio features here are very similar, even if the quality of the two audio signals differs significantly. This is likewise possible with video signals that have been recorded with professional cameras and those that have been recorded with a mobile phone, for example.
  • the inventors have discovered that based on these feature segments, it is possible to carry out an automatic identification of one or more particular signal feature sequences in order to synchronize different signals.
  • the synchronization information can be generated in a synchronization server, which is embodied independently of a playback device. This synchronization information must then be transmitted to the playback device on which the additional signal is output synchronously to the primary signal.
  • a synchronization of the playback device and the synchronization server could also be performed, for example by determining a time interval that is required to transmit the corresponding signal from a predetermined point, in particular from the synchronization server, to the playback device.
  • the synchronization server can also be embodied in the playback device itself. If a digital transmission of data between the synchronization server and the playback device is used, then it is generally not possible to determine this time interval because it varies.
  • the additional signal can be synchronized to the primary signal by outputting the primary signal and the additional signal—which each contain one or more time markers as synchronization information—on the playback device. Based on the time markers, the playback device can synchronously allocate the additional signal to the primary signal and can output them synchronously.
  • the synchronization information can be used in the playback device to allocate the additional signal to this playback time in such a way that the additional signal is output synchronously to the primary signal.
  • the primary signal and the additional signal are output with the same playback device so that the time markers in the two signals are sufficient in order to output the signals synchronously.
  • it is advantageous for the synchronization information to be used to allocate the additional signal to a playback time that is measured by means of a clock in the playback device.
  • the synchronization information includes the corresponding information for allocating the additional signal to the playback time in such a way that the additional signal is output synchronously to the primary signal.
  • the primary signal and the additional signal are output by different playback devices, then it is advantageous to use a first time to which the output of the primary signal is allocated. This time therefore describes the times of the individual features or feature sequences in the primary signal. Basically, this could be accomplished using the clock of the playback device with which the primary signal is output. But if a synchronization server is provided, which is independent of this playback device for the primary signal, then it is also possible to use the clock of the synchronization server, particularly if the primary signal is transmitted continuously from the synchronization server to the playback device.
  • synchronization information is provided, which describes the relationship of the playback time on the playback device to the server time on the synchronization server.
  • the matching of all of the signal feature sequences that meet the predetermined degree of matching within a predetermined time interval is evaluated and the signal feature sequence with the best evaluation is selected for allocating the synchronization information to the primary signal at a position specified by the signal feature sequence.
  • the goal is to achieve a unique allocation of the synchronization information to the primary signal. With the use of a plurality of signal feature sequences, this is not always guaranteed.
  • the use of the signal feature sequence that best matches a DB feature sequence also achieves the best synchronization.
  • the predetermined rules for evaluating the degree of matching of the signal feature sequence with the DB feature sequence include one or more of the following rules:
  • a method for synchronizing an additional signal to a primary signal includes the following steps:
  • This method can be used to create a database for a primary signal as the primary signal is being transmitted from a broadcast station to a playback device; this database is immediately available for synchronization of an additional signal to this primary signal. It is thus possible to analyze and synchronize a live signal. For this reason, a database created in this way is also referred to as a live database.
  • the time information can be generated or extracted by means of time information of a clock provided in a corresponding server and/or based on time information contained within the primary signal.
  • a method for synchronizing an additional signal to a primary signal; to calibrate a transmission path from a server to a playback device and/or to calibrate the latency in a playback device for outputting a media signal on the playback device, a reference signal is output, which is simultaneously received by a corresponding sensor; the output reference signal and the reference signal received by the sensor are compared to each other in order to determine the time interval required for relaying the reference signal and/or for actually outputting it to the playback device and this time interval is used as a time offset in order, based on time information relating to the clock of the server and/or of the playback device, to determine an output time at which a signal is output on the media playback device.
  • This method can be used to automatically calibrate transmission paths or playback devices.
  • the latency in a playback device can differ significantly as a function of whether, for example, an audio signal is output on a hard-wired speaker, on a speaker connected by Bluetooth, or on a subordinate audio system.
  • the time interval required to transmit signals can differ significantly depending on the respective transmission path.
  • This method can be used to calibrate the transmission path and/or the latency of the output device once or several times before or also during the playback of the media signal so that the correct offset with which the signals are output is respectively present.
  • the reference signal can include an audio signal; in this case, the sensor is a microphone.
  • the reference signal can also include a video signal. In that case, a camera is used as the sensor.
  • the time interval can be determined by determining the transmission time and the reception time of the reference signal; the time interval is derived from the time difference between these two times. If the transmission time and the reception time of the reference signal are measured at the same location, then the time interval to be determined is half of the time difference between these two times. If the transmission time of the reference signal is measured at the beginning of the transmission path and the reception time of the reference signal is measured directly at the sensor, then the time interval to be determined is this time difference.
  • One or both times can be determined by comparing an extracted reference feature sequence to one or more previously stored reference feature sequences. This method corresponds to the above-explained method for identifying signal feature sequences based on DB feature sequences. It is thus possible to determine a time using such a comparison of feature sequences. The precision of such a time is limited by the length of the feature in the feature sequence that is used to determine the time. A typical length of such a feature is in the range of approximately 8 ms.
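The calibration arithmetic described in the two preceding paragraphs reduces to the following sketch (hypothetical helper; all times on a common clock):

```python
def calibration_offset(t_sent, t_received, measured_at_same_location):
    """Time interval used as the output offset for the playback device.

    If transmission and reception of the reference signal are measured at
    the same location (the signal travels to the output and back to the
    sensor), the one-way interval is half the difference; if reception is
    measured directly at the sensor, it is the full difference."""
    delta = t_received - t_sent
    return delta / 2.0 if measured_at_same_location else delta
```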
  • a method for synchronizing an additional signal to a primary signal in which an additional signal is transmitted from a synchronization server, which is embodied to be independent of a playback device, to the playback device and in the synchronization server, synchronization information is generated, which relates to a server time of the synchronization server that is measured in the synchronization server with a clock provided there; in the playback device, a playback device clock is provided for measuring a playback time, which is synchronized with the server time at least once, and a time drift of the playback time relative to the server time is measured and this time drift is taken into account in the synchronization of the additional signal to the primary signal.
  • the additional signal can be output on the playback device, controlled by the playback time available from the playback device.
  • the time drift can have different causes.
  • the clock of the playback device is not always able to run at exactly the same speed as the clock of the server or the clocks of different playback devices run at different speeds.
  • the additional signal can have a temporal elongation or compression in comparison to the primary signal so that an additional signal, which is synchronized exactly to the primary signal at a particular time, deviates from the primary signal more as the playback time increases.
  • Such a temporal elongation or compression comes into being, for example, in the conversion of analog signals into digital signals by means of a corresponding analog-to-digital converter.
  • the primary signal and the additional signal or additional signals are at least transmitted via different paths and are therefore converted with different analog-to-digital converters at different locations.
  • Each analog-to-digital converter has a clock generator (clock), which adds time information in the form of time markers to the digital signal.
  • the time of different clock generators can differ slightly. As a result, even if a primary signal and an additional signal are identical in the analog state, they have slightly different time information in the digital state. If they are played back on a playback device simultaneously, then a time offset between the two signals can arise as the playback time increases.
  • the time drift due to clocks or clock generators on different playback devices running at different speeds can be eliminated through a regular comparison to a reference clock (e.g. an atomic clock or NTP).
  • the output unit often has its own clock signal and the playback device has a control unit with its own clock.
  • the clock signal of the playback unit is regularly synchronized with the clock of the control unit of the playback device and the clock of the control unit of the playback device is synchronized with the reference clock at regular intervals.
  • the time drift due to the compression or elongation of signals can be measured.
  • the additional signal can then be perpetually played back synchronously to the primary signal without having to perform a new synchronization between the additional signal and the primary signal at regular intervals.
  • a synchronization of the two signals to each other can also be carried out repeatedly without having to take into account a time drift in order to accomplish this.
  • the determination of the time drift can take place through repeated comparison to a reference time in order to calculate a respective time difference; the time drift is determined based on the deviations in the time differences. The greater the interval between the first and last comparison, the more precisely the time drift can be determined.
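A minimal sketch of this drift determination, assuming pairs of local and reference timestamps collected during playback (a linear fit is one obvious choice; the patent text does not prescribe one):

```python
import numpy as np

def estimate_drift(reference_times, local_times):
    """Fit the offsets (local minus reference) against the reference time:
    the slope is the drift rate, the intercept the initial offset. A longer
    span between the first and last comparison sharpens the estimate."""
    ref = np.asarray(reference_times, dtype=float)
    offsets = np.asarray(local_times, dtype=float) - ref
    drift_rate, initial_offset = np.polyfit(ref, offsets, 1)
    return drift_rate, initial_offset

# A local playback timestamp can then be corrected with:
#   t_corrected = t_local - (initial_offset + drift_rate * t_reference)
```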
  • a method for synchronizing an additional signal to a primary signal is created; in a first step, the time stamps of the available additional signals are transmitted to a playback device. It is thus possible to calculate the available buffer time.
  • the buffer time describes the time that is still available to the additional signal before it has to be played back in order to be synchronous with the primary signal.
  • the available bandwidth is determined.
  • the buffer time is used to encode the additional signal, to transmit it from the additional signal server to the playback device, and then to decode the additional signal again.
  • the quality of the additional signal in this case depends on the available buffer time and on the available bit rate.
  • a high signal quality can be achieved either by selecting the shortest possible encoding/decoding time, but this results in large data quantities that require correspondingly long transmission times, or by selecting a long encoding/decoding time, which reduces the bit rate and accelerates the transmission.
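The trade-off just described could be decided as in this sketch; the codec profiles are invented for illustration:

```python
# Hypothetical codec profiles: (label, encode+decode time in s, bit rate in bit/s)
CODECS = [
    ("fast, large",  0.05, 4_000_000),
    ("balanced",     0.20, 1_500_000),
    ("slow, small",  0.60,   500_000),
]

def pick_codec(buffer_s, bandwidth_bps, chunk_media_s):
    """Choose the highest-bit-rate (highest-quality) codec whose encoding/
    decoding time plus transmission time still fits the buffer time."""
    feasible = []
    for label, codec_s, rate_bps in CODECS:
        transmit_s = rate_bps * chunk_media_s / bandwidth_bps
        if codec_s + transmit_s <= buffer_s:
            feasible.append((rate_bps, label))
    return max(feasible)[1] if feasible else None
```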
  • the encoding/decoding time must be determined again.
  • this method can be embodied in such a way that on the server or servers, the signals (primary signal and/or additional signal) are encoded differently, for example with different codecs, so that the signals are available in different qualities.
  • In the playback device, a decision is then made as to which signal is used and retrieved from the server.
  • the additional signal is transmitted from the additional signal server to the playback device in chunks with time lengths of at most 10 frames, which corresponds to about 400 ms, particularly at most 5 frames, which corresponds to about 200 ms, and preferably at most 1 frame, which corresponds to about 40 ms; in the playback device, the additional signal is received by means of a local web server.
  • By providing the local web server with a direct connection via a web socket, the additional signal can be received essentially without delay.
  • a direct connection is a connection that is retained after a transmission event.
  • the local web server is preferably compatible with the transmission protocol (as a rule: http) used by the playback device so that the playback device itself does not have to be modified, except for the fact that the local web server must be added.
  • the local web server can be embodied so that it requests a plurality of chunks at the same time or in quick succession without having to wait for reception of previously requested chunks.
  • Alternatively, the chunks are requested individually and an additional chunk is requested only when the previously requested chunk has already been received.
  • conventional streaming techniques such as HLS or DASH can be used for this.
  • the additional signal can generally arrive at the recipient within two to three seconds.
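A sketch of the web-socket variant described above, using the third-party Python package websockets; the chunk-request message format is an invention for illustration:

```python
import asyncio
import websockets  # third-party package: pip install websockets

async def fetch_chunks(url, first_chunk, n_chunks):
    """Request several chunks in quick succession over one persistent
    (direct) connection, without waiting for earlier chunks to arrive."""
    async with websockets.connect(url) as ws:
        for i in range(first_chunk, first_chunk + n_chunks):
            await ws.send(f"GET_CHUNK {i}")  # hypothetical message format
        return [await ws.recv() for _ in range(n_chunks)]

# chunks = asyncio.run(fetch_chunks("ws://localhost:8080/additional-signal", 0, 5))
```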
  • an additional signal can be synchronized to a primary signal. It is also possible, however, to synchronize a plurality of additional signals to a primary signal.
  • FIG. 1 shows a system for playing back a plurality of camera signals synchronously to a primary signal
  • FIG. 2 shows a system for loading external additional information relating to a live broadcast
  • FIG. 3 shows a system for loading external additional information relating to a television broadcast
  • FIG. 4 shows a system for loading external additional information relating to a television broadcast with a local server.
  • a first exemplary embodiment relates to a system for broadcasting a live event on a stage 1 with a plurality of cameras 2 and a broadcast studio 3 in which the camera signals of the individual cameras 2 merge in order to be transformed by the director into a primary signal.
  • the broadcast studio 3 is connected to a synchronization server 5 to which the primary signal 4 is transmitted.
  • the synchronization server 5 conveys the primary signal 4 as a data stream to one or more playback devices 6 . Only a single playback device is shown in FIG. 1 . In reality, the primary signal 4 is transmitted to many playback devices, for example in a broadcasting process.
  • the signals of the individual cameras are conveyed as additional signals to an additional signal synchronization server 7 .
  • the additional signal synchronization server 7 is connected to a web server 8 from which the individual additional signals can be retrieved according to an Internet protocol and can be supplied to the respective playback devices 6 via the Internet 18 .
  • With the web server 8, there is a bidirectional data connection so that in the playback devices, an individual selection can be made about the additional signals to be retrieved.
  • the primary signal is pre-processed and optimized.
  • the individual additional signals are output with or without further pre-processing.
  • the two synchronization servers 5 , 7 are each connected to a database server 9 on which a database is provided, in which particular feature sequences are stored along with synchronization information allocated to the feature sequences.
  • only a single database is provided, which is accessed by both synchronization servers 5 , 7 . It can also be advantageous, however, to provide a respective copy of the database in the immediate vicinity of each of the synchronization servers 5 , 7 to enable rapid access or also to provide two databases with somewhat different data contents.
  • the primary signal 4 can be output on the playback device 6 and the user of the playback device should nevertheless also have the possibility of synchronously outputting at least one of the additional signals on the playback device 6 .
  • Both the primary signal 4 and the additional signals each have a video track and an audio track.
  • the audio tracks of the additional signals are each recorded by means of a microphone mounted on the respective camera.
  • the audio signal of the primary signal is recorded by means of a microphone system installed on the stage 1 and is thus of significantly better quality.
  • In the synchronization server 5, successive segments of a predetermined length are read from the audio track and particular features are extracted from them. To accomplish this, a fast Fourier transformation is carried out to transform these segments into the frequency space or Fourier space.
  • the length of the individual segments is 16 ms. But in any case, they should be no longer than 50 ms and in particular, no longer than 32 ms since short segments permit a correspondingly precise synchronization. The shorter the segments and time slots are, the more pronounced the tendency is for low frequencies to no longer be taken into account. Surprisingly, however, it has turned out that with time slots of up to a maximum length of 8-10 ms, a sufficiently large number of high-frequency signals are available for carrying out the synchronization.
  • the read and transformed time slots preferably overlap one another. With an overlapping of e.g. 50% and a time slot length of 32 ms or 16 ms, a resolution of 16 ms or 8 ms can be achieved.
  • the features are intensity values of particular frequencies that lie above the predetermined threshold.
  • the sequence is not a chronological sequence, but rather a listing of features in order of frequency.
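The extraction just described, with overlapping time slots, an FFT per slot, an intensity threshold, and frequency-ordered features, can be sketched as follows; the threshold value is an assumption:

```python
import numpy as np

def extract_feature_sequence(samples, fs, slot_s=0.016, overlap=0.5, threshold=0.1):
    """Per overlapping 16 ms time slot, keep the frequencies whose spectral
    intensity exceeds the threshold; 50% overlap gives 8 ms resolution."""
    slot = int(slot_s * fs)
    hop = max(1, int(slot * (1.0 - overlap)))
    freqs = np.fft.rfftfreq(slot, d=1.0 / fs)
    sequence = []
    for start in range(0, len(samples) - slot + 1, hop):
        spectrum = np.abs(np.fft.rfft(samples[start:start + slot]))
        features = [(f, a) for f, a in zip(freqs, spectrum) if a > threshold]
        sequence.append(features)  # already listed in order of frequency
    return sequence
```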
  • the feature sequences derived from the audio track are referred to below as signal feature sequences. These signal feature sequences are compared to DB feature sequences that are stored in the database.
  • the database 9 contains a multitude of such DB feature sequences that have been stored in advance.
  • a database 9 is used in which all of the songs of the corresponding musical act are converted into DB feature sequences and possibly also songs from other artists that are nevertheless often played live.
  • the feature sequences are characterized by the fact that—even if the signals from which the DB feature sequences have been generated and the live signals are not identical—they nevertheless have a similarity that allows them to be allocated to each other.
  • If the comparison of the signal feature sequences to the DB feature sequences determines a match to a predetermined degree, then this is evaluated as an allocation.
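One possible measure of the degree of matching and the resulting allocation, purely as illustration (the patent does not prescribe a specific similarity measure):

```python
def match_degree(signal_features, db_features, tol_hz=20.0):
    """Fraction of DB features for which a signal feature with a similar
    frequency exists; an illustrative similarity measure only."""
    if not db_features:
        return 0.0
    hits = sum(
        any(abs(f_sig - f_db) <= tol_hz for f_sig, _ in signal_features)
        for f_db, _ in db_features
    )
    return hits / len(db_features)

def allocate_sync_info(signal_features, database, min_degree=0.8):
    """Evaluate a match above the predetermined degree as an allocation."""
    if not database:
        return None
    best = max(database, key=lambda e: match_degree(signal_features, e["features"]))
    if match_degree(signal_features, best["features"]) >= min_degree:
        return best["sync_info"]
    return None
```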
  • the extraction time is measured by means of the synchronization server clock 11 and is allocated to the respective feature sequence.
  • This extraction time is used to describe the time of a particular feature sequence in the corresponding signal.
  • the extraction times can thus be used to uniquely describe the relative time allocation of a plurality of signal feature sequences within a signal. It is also possible, however, for the process of the extraction itself to be subject to time fluctuations. In this case, the extraction times are encumbered by an error caused by the time fluctuations. For this reason, it can be advantageous to use, instead of the time measured by the synchronization server clock 11, time information contained in the primary signal, which describes the time of a particular point in the primary signal. Such time information is inherently contained in the primary signal and is referred to below as signal time information. If the primary signal is a video signal, for example, then it has a particular frame rate at which individual frames are recorded and played back.
  • the time interval between two particular frames of this signal is the number of frames in the signal between these frames multiplied by the inverse of the frame rate.
  • the number of a frame of a video signal therefore constitutes signal time information of this kind.
  • Signal time information can be explicitly encoded in the primary signal. It can, however, also be implicitly contained, for example in that the frames of a video signal are counted.
  • It is thus possible for the feature sequence to be extracted together with the signal time information that indicates the time of this feature sequence in the primary signal. This yields an extraction time that is independent of the chronological sequence of the process of the extraction.
  • the signal time information can, for example, be allocated an absolute time by means of the synchronization server clock 11 . This allocation is carried out once and is then maintained.
  • the identified signal feature sequences are allocated synchronization information that is stored together with the corresponding DB feature sequence in the database.
  • the synchronization information includes identification markers, which describe the respective song and define the point in the song.
  • the synchronization information also includes the extraction time of the corresponding signal feature sequences.
  • On the additional signal synchronization server 7, the same process is carried out with the respective additional signals 10; here, too, the signal feature sequences are extracted from the audio track and compared to the DB feature sequences of the database.
  • the extraction times can be measured using the additional signal synchronization server clock 12 or can be extracted from the corresponding signal time information and the extraction times are transmitted to the playback device together with the synchronization information derived from the database with an allocation to the respective additional signals.
  • time information, which describes the respective time of the extracted feature sequences in the respective signal, is allocated to both the primary signal and the additional signal.
  • This time information can already be synchronized in advance through comparison of the extracted feature sequences to the DB feature sequences stored in the database 9: when the extracted feature sequence matches one of the DB feature sequences to a predetermined degree, the synchronization information or time information of this DB feature sequence is allocated to the extracted feature sequence, a time difference is calculated, and this is allocated to the primary signal and to the additional signal. The time difference is added to all of the extraction times of the primary signal and of the additional signal, as a result of which the same synchronization information or the same time information is allocated to the same feature sequences in the primary signal and in the additional signal.
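The time-difference step might look like this minimal sketch (names hypothetical):

```python
def presynchronize(extraction_times, t_extracted_match, t_db_match):
    """Shift all extraction times of a signal by the difference between the
    DB time of a matched feature sequence and its local extraction time, so
    that matching feature sequences in the primary and the additional signal
    carry the same time information."""
    delta = t_db_match - t_extracted_match
    return [t + delta for t in extraction_times]
```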
  • the synchronization information is coupled to the respective signals.
  • The synchronization information relating to the primary signal 4, which is generated on the synchronization server 5, is coupled to the primary signal 4.
  • the synchronization information generated on the additional signal synchronization server 7 is coupled to the corresponding additional signals.
  • the synchronization information is transmitted to the playback device 6 together with the corresponding signals from the respective server 5 , 7 .
  • Additional signals are transmitted from the web server 8 to the playback device 6 only if the corresponding additional signals have been requested by the playback device 6 .
  • the primary signal 4 and the requested additional signal are then output on the playback device 6 .
  • These two signals are synchronized based on the synchronization information transmitted along with them; the synchronization information includes time markers (e.g. the synchronized extraction times) based on which the playback device can recognize when the additional signal is to be output synchronously to the primary signal.
  • the corresponding signals are provided with the synchronization information in a kind of watermark.
  • Alternatively, the synchronization information is not coupled to the primary signal and the additional signals, but is instead transmitted to the playback device 6 separately.
  • the synchronization information respectively includes time information, which is coupled to a particular identifier of the respective signal. If the signal is a defined segment with a defined beginning, then the time information can refer to this beginning point or starting point. This can be advantageous particularly with additional signals, which each contain only abbreviated additional information that lasts, for example, from a few tens of seconds up to a few minutes and can be output in addition to the primary signal. Then based on the starting point and the respective time information, the playback device can synchronize the additional signal to the primary signal.
  • the time information must refer to another reference point.
  • This reference point can, for example, be a feature sequence in the respective signal. This feature sequence can occur at a different point in the signal.
  • the playback device must be provided with a module that can extract the feature sequence from the respective primary signal and/or additional signal and can compare it to the feature sequence that is supplied along with the synchronization information. It is therefore possible, without a uniquely defined starting point in the primary signal or additional signal, to obtain a unique reference of the time information to the respective primary signal and additional signal.
  • the playback device must be provided with a module for extracting the feature sequence and for comparing the extracted feature sequence to the feature sequences contained in the synchronization information. It is, however, advantageous that in this variant, the additional signal and/or the primary signal does not have to be modified and can be transmitted in the original form.
  • a playback device clock 13 provided in the playback device 6 , the synchronization server clock 11 , and the additional signal synchronization server clock 12 are synchronized.
  • the playback device clock 13 is respectively synchronized pairwise with the synchronization server clock 11 and the additional signal synchronization server clock 12 .
  • the transmission times of the primary signal from the synchronization server 5 to the playback device 6 and the transmission time from the additional signal synchronization server 7 to the playback device 6 are known.
  • the transmission paths in this case are embodied in such a way that the transmission times remain constant. With short transmission paths such as Bluetooth links, the transmission times are generally constant. With longer transmission paths, particularly when data are transmitted via the Internet, the transmission times often vary significantly so that in that case, this variant does not work.
  • the time information contained in the synchronization information relates to a particular event on the synchronization server 5 or on the additional synchronization server 7 .
  • This event is typically the time of the extraction of a particular signal feature sequence, which it has been possible to identify based on the DB feature sequences. It is therefore known when the primary signal or the additional signal has been conveyed through to the corresponding synchronization server 5 , 7 along with the corresponding signal feature sequence. Since the transmission time from the respective synchronization server 5 , 7 to the playback device 6 is also known, this can be used to determine when the signal feature sequences arrive at the playback device 6 .
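Under the assumptions of this variant (synchronized clocks, known and constant transmission times), the scheduling arithmetic reduces to a sketch like this:

```python
def arrival_time(t_extraction, t_transmission):
    """When an identified signal feature sequence reaches the playback
    device: the extraction time on the synchronization server plus the
    known, constant transmission time."""
    return t_extraction + t_transmission

# The additional signal is then scheduled so that its matching reference
# point is output at arrival_time(t_extraction, t_transmission).
```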
  • the additional signal can be time-referenced to the primary signal, i.e. the additional signal can be synchronized with the primary signal.
  • the corresponding time relationship is included in the synchronization information.
  • the playback device clock 13 must be synchronized respectively with the synchronization server clock 11 and with the additional signal synchronization server clock 12 and the transmission times from the individual synchronization servers 5 , 7 to the playback device must be known and stable. In this case, however, it is advantageous that neither the primary signal nor the synchronization signal has to be modified. In addition, a module for extracting feature sequences does not have to be integrated into the playback device. This is a very simple solution, which permits reliable synchronization.
  • Another advantage of the third variant lies in the fact that it can also easily be carried out with two different playback devices; one playback device is provided for playing back the primary signal and a second playback device is provided for playing back the additional signal.
  • a playback device clock of the primary signal playback device must be synchronized with the synchronization server clock 11 of the synchronization server 5 and an additional signal playback device clock must be synchronized with the additional signal synchronization server clock 12 .
  • the two playback device clocks must be synchronized to each other.
  • the primary signal playback device can be a television and the additional signal playback device can be a mobile phone.
  • the primary signal and the additional signal are output synchronously to each other.
  • The variants explained above can be combined with one another, for example by transmitting the primary signal to the playback device according to one of the three variants and transmitting the additional signal to the playback device according to one of the other variants and synchronizing it with the primary signal.
  • the third variant is preferred for the transmission of the primary signal, whereas all three variants are of equal value for the transmission of the additional signals.
  • A second exemplary embodiment (FIG. 2) will be explained below; elements that are the same as those in the first exemplary embodiment are provided with the same reference numerals. For elements that remain the same, the above explanations apply unless otherwise stated below.
  • a stage 1 is once again provided, which is scanned by a plurality of cameras 2 .
  • the signals of the cameras 2 are transformed into a primary signal 4 in a broadcast studio 3 .
  • the broadcast studio 3 is connected to a synchronization server 5 .
  • the synchronization server 5 is coupled to a database server 9 , which has a database containing DB feature sequences and the associated synchronization information.
  • a PS playback device 6/1 is connected to the synchronization server 5 in order to receive and play back the primary signal. Once again, a plurality of PS playback devices 6/1 can be provided.
  • the second exemplary embodiment differs from the first exemplary embodiment in that an independent source for additional information is provided.
  • this source is an additional information database server 15 .
  • the additional information database server 15 can also contain foreign language translations of the corresponding song lyrics as audio tracks.
  • One example is songs that are known in many languages, such as the children's lullaby "Frère Jacques."
  • the additional information stored on the additional information database server 15 is already provided with corresponding synchronization information.
  • this can be the starting time and other time markers during the song.
  • the additional information database server 15 is connected to a web server 8 .
  • the additional information can be retrieved from the web server 8 via the Internet 18 .
  • An AS playback device 6 / 2 for playing back an additional signal is connected to the Internet 14 .
  • the synchronization server 5 also has a connection to the Internet 14 so that synchronization information generated on the synchronization server 5 can be supplied to the AS playback device 6 / 2 via the Internet 14 .
  • On the synchronization server 5, a synchronization clock 11 is provided, which is respectively synchronized with a playback device clock 13/1 of the PS playback device 6/1 and with a playback device clock 13/2 of the AS playback device 6/2.
  • In one variant, the synchronization clock 11 of the synchronization server 5 is the main clock with which all other clocks are synchronized; alternatively, the playback device clock 13 is the main clock.
  • synchronization information is generated by extracting signal feature sequences from the primary signal and comparing them to corresponding DB feature sequences of the database server 9 .
  • the generation of the synchronization information corresponds essentially to that of the first exemplary embodiment.
  • the transmission time for transmitting the primary signal from the synchronization server 5 to the PS playback device 6/1 is known, so that if the time at which a particular segment of the primary signal is conveyed through the synchronization server 5 is known, then the time at which this segment is output on the PS playback device 6/1 is also known.
  • the synchronization information that is transmitted from the synchronization server 5 to the AS playback device 6 / 2 therefore includes time information, which respectively describes a time of the primary signal relative to a detected signal feature sequence, and identification markers, which describe the content of the primary signal.
  • the identification markers indicate which song is played back with the primary signal.
  • the identification markers can optionally also include additional information such as the verse, the line, or lyrics excerpts of the song. These lyrics excerpts are preferably lyrics excerpts from the point at which one of the signal feature sequences has been detected.
  • the time information preferably includes an indication of the time at which the corresponding signal feature sequence on the synchronization server 5 was extracted.
  • Based on this synchronization information, the AS playback device 6/2 knows when each song is output on the PS playback device 6/1. Correspondingly, the AS playback device can output the additional signals—which are received from the additional information database server 15 or from the web server 8 and have already been provided with synchronization information in advance—on the AS playback device 6/2 synchronously to the output of the primary signal on the PS playback device 6/1.
  • an additional signal synchronization server 7 can be provided, which is embodied similarly to the one in the first exemplary embodiment.
  • If the additional information is in the form of song lyrics that are encoded in ASCII, for example, then the additional information does not include any audio signals.
  • audio signal-like feature sequences can be generated from the words contained in the song lyrics, as is known from speech synthesis. These feature sequences can then in turn be compared to DB feature sequences, which have been stored in another database server 16 . This also makes it possible to compare lyrics segments of songs directly to corresponding lyrics segments stored on the database server 16 . In this case, the individual letters of the lyrics segments constitute the corresponding features.
  • the feature sequences stored on the database server 16 are respectively allocated synchronization information, which can be added to the additional information or additional signals.
  • spoken or sung texts can also be converted into text form through speech recognition.
  • the features are then text and/or letter sequences that are likewise stored in the database.
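A letter-level comparison of lyrics segments could be sketched with a standard sequence matcher, purely as illustration:

```python
from difflib import SequenceMatcher

def match_lyrics(segment, db_entries, min_ratio=0.8):
    """Compare a lyrics excerpt letter by letter against stored lyric
    segments; the individual letters serve as the features."""
    def ratio(entry):
        return SequenceMatcher(None, segment.lower(), entry["text"].lower()).ratio()
    if not db_entries:
        return None
    best = max(db_entries, key=ratio)
    return best["sync_info"] if ratio(best) >= min_ratio else None
```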
  • a third exemplary embodiment ( FIG. 3 ) essentially corresponds to the second exemplary embodiment and differs from it in that the synchronization server 5 is embodied independently of the connection between the broadcast station 3 and the PS playback devices 6 / 1 for playing back the primary signal.
  • the AS playback device 6 / 2 has a sensor 17 for detecting at least a part of the primary signal output by the PS playback device 6 / 1 .
  • This sensor 17 can be a microphone for detecting the audio signal of the primary signal 4 or can be a camera for capturing the video output of the primary signal 4 .
  • the AS playback device 6 / 2 is embodied with a module for extracting the signal feature sequences of the primary signal 4 ; these signal feature sequences are extracted from the primary signal 4 that is picked up by the sensor 17 .
  • the extraction time can be measured by means of the AS playback device clock 13 / 2 . Since the process of the extraction itself can be subject to time fluctuations as has already been explained above, it can be advantageous to use the signal time information in order to determine the extraction time. In this embodiment, instead of signal time information inherently contained in the primary signal, it is also possible to use signal time information, which is added during the recording with the sensor 17 (microphone) and which describes the recording time of the signal. Such signal time information is independent of time fluctuations of the extraction process and enables a unique relative time positioning of the extracted signal feature sequences.
  • the signal feature sequences are transmitted to the synchronization server 5 and are analyzed and identified therein based on the DB feature sequences from the database server 9 as in the first and second exemplary embodiments.
  • synchronization information is in turn generated; the synchronization information of the third exemplary embodiment differs from the synchronization information of the preceding exemplary embodiments in that for it, only the time of the AS playback device clock 13 / 2 matters.
  • the synchronization information is transmitted from the synchronization server to the AS playback device 6 / 2 via the Internet 14 .
  • the additional signal 10 is synchronized to the primary signal 4 based on the synchronization information, as in the preceding exemplary embodiments; in this case, however, the synchronization is carried out based solely on the playback time measured with the AS playback device clock 13 / 2 . There is no need to synchronize clocks among the AS playback device 6 / 2 , the PS playback device 6 / 1 , and the synchronization server 5 .
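A sketch of this purely local positioning, under the assumption that the match offset within the primary signal and the local match time are already known (function and variable names are hypothetical):

```python
import time

def additional_signal_seek(match_offset_s: float, match_local_time_s: float) -> float:
    """Seek position for the additional signal 10, using only the AS
    playback device clock (no cross-device clock synchronization)."""
    elapsed = time.monotonic() - match_local_time_s
    return match_offset_s + elapsed

# The sensor picked up a point 30.0 s into the primary signal, and the
# match result arrived 0.8 s ago (both measured on the local clock):
t_match = time.monotonic() - 0.8
print(round(additional_signal_seek(30.0, t_match), 2))  # roughly 30.8
```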
  • the server extracts the signal feature sequences from the transmitted signal sequences and analyzes and identifies them based on the DB feature sequences from the database server 9 .
  • the signal sequences are no longer than 60 s, in particular no longer than 30 s, or even no longer than 15 s.
  • the third exemplary embodiment can also be modified in that a module for extracting the signal feature sequences is provided in the synchronization server 5 instead of in the AS playback device 6 / 2 .
  • the third exemplary embodiment is a very elegant solution for outputting additional signals to a separate AS playback device 6 / 2 .
  • the additional signal can be synchronized to a primary signal whose transmission time, for example between the broadcast station 3 and the PS playback device 6 / 1 , can vary freely within a predetermined scope.
  • a fourth exemplary embodiment corresponds essentially to the third exemplary embodiment and differs from the latter in that the synchronization server 5 has the sensor 17 .
  • the synchronization server 5 is implemented on a local computing unit, e.g. a computer, a minicomputer, or even a game console.
  • the sensor 17 can be a microphone for detecting the audio signal of the primary signal 4 or can be a camera for capturing the video output of the primary signal 4 .
  • the synchronization server 5 is embodied with a module for extracting the signal feature sequences of the primary signal 4 ; these signal feature sequences are extracted from the primary signal 4 that is picked up by the sensor 17 .
  • the extraction time is measured by means of the synchronization clock 11 .
  • the signal feature sequences are analyzed and identified on the synchronization server 5 based on the DB feature sequences from the database server 9 , as in the first, second, and third exemplary embodiments.
  • on the synchronization server 5 , synchronization information is in turn generated; for this synchronization information, only the time of the synchronization clock 11 matters.
  • the synchronization information is transmitted from the synchronization server 5 to the AS playback device 6 / 2 via an intranet 14 or another data connection such as Bluetooth.
  • the additional signal 10 is synchronized to the primary signal 4 based on the synchronization information, as in the preceding exemplary embodiments. In this case, the time of the synchronization clock 11 is synchronized with the AS playback device clock 13 / 2 .
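The clock synchronization mentioned in the preceding point could, for example, be approximated NTP-style. The following sketch (invented names, simulated server clock) estimates the offset between the synchronization clock 11 and the AS playback device clock 13 / 2 :

```python
import time

def estimate_clock_offset(server_time_fn) -> float:
    """Roughly estimate the offset between the local clock and the
    synchronization clock: request the server time and assume the reply
    arrived at the midpoint of the measured round trip (NTP-style)."""
    t0 = time.monotonic()
    server_time = server_time_fn()
    t1 = time.monotonic()
    return server_time - (t0 + t1) / 2.0

# Simulated synchronization clock running 0.25 s ahead of the local clock:
def fake_server_clock() -> float:
    return time.monotonic() + 0.25

print(round(estimate_clock_offset(fake_server_clock), 2))  # roughly 0.25
```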
  • the main difference between the fourth exemplary embodiment and the preceding ones lies in the fact that the synchronization server 5 is not controlled via the Internet, but is instead provided to a user locally. This has the advantage that the synchronization keeps working even if the Internet connection is down.
  • the database server 9 can be accessed via the Internet, or it can likewise be provided in the same computing unit as the synchronization server 5 .
  • the synchronization server 5 , the database server 9 , and the AS playback device 6 / 2 can be embodied on a single device such as a computer (desktop, laptop, etc.) or on a mobile phone.
  • the synchronization server 5 can also be provided on a hardware element that is separate from the playback device 6 / 2 .
  • the synchronization server 5 can be connected to the playback device 6 / 2 via the Internet.
  • the data quantity exchanged between the synchronization server 5 and the playback device is small.
  • synchronization information is generated based on one or more signal feature sequences that are extracted from the primary signal. This makes it possible to synchronize additional signals on the fly to a primary signal for which no particular time, such as a starting time, is known in advance. Naturally, this method can also be used if a previously determined time is indicated in the respective signal and can be used for orientation.
  • Another aspect of the invention is to adjust the quality of the additional signal streaming not only based on the available bandwidth, but also based on the available buffer time.
  • the additional signal playback device 6 / 2 receives the synchronization information and sends a query to the additional information database server 15 as to which additional signals are available. If a corresponding additional signal 10 is found, then the buffer time is also known. The buffer time in this connection describes the remaining time that is still available for the additional signal before it must be played back in order to be synchronous with the primary signal. This query can also roughly check the available bandwidth of the network. Depending on the bandwidth and buffer time, a corresponding encoding step is automatically selected.
  • the additional signal is encoded, transmitted from the additional signal server to the playback device, and then decoded again.
  • depending on the encoding step selected, the file to be transmitted, or the portion of the file, is of different length and requires a different amount of time for the transmission. A balance must therefore be struck between the encoding time and the transmission time so that optimal use is made of the buffer time and the quality of the additional signal is as high as possible.
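One way to sketch this trade-off is shown below; the encoding steps, their bitrates, and their encoding times are invented numbers for illustration only.

```python
# Hypothetical encoding steps: (label, bitrate in kbit/s, encoding time in s).
ENCODING_STEPS = [
    ("high",   4000, 2.0),
    ("medium", 1500, 0.8),
    ("low",     500, 0.2),
]

def select_encoding(buffer_time_s: float, bandwidth_kbps: float,
                    duration_s: float) -> str:
    """Pick the highest-quality step whose encoding time plus transmission
    time still fits into the available buffer time."""
    for label, bitrate_kbps, encode_time_s in ENCODING_STEPS:
        transmit_time_s = duration_s * bitrate_kbps / bandwidth_kbps
        if encode_time_s + transmit_time_s <= buffer_time_s:
            return label
    return ENCODING_STEPS[-1][0]  # fall back to the fastest step

# 10 s of additional signal, 3 s of buffer time, 8 Mbit/s of bandwidth:
print(select_encoding(buffer_time_s=3.0, bandwidth_kbps=8000.0,
                      duration_s=10.0))  # -> "medium"
```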
  • This method can also be carried out such that the server or servers encode the signals in different qualities or encoding steps and simultaneously make all of them available for retrieval; the playback device that is to play back the signal then selects and retrieves the signal in the suitable quality or encoding step.
  • the chunk length of the additional signal to be transmitted is selected to be as short as possible.
  • the signal can be transmitted split into a plurality of chunks, which must first be generated. The shorter the chunks are, the more complex their handling becomes because they are transmitted individually. On the other hand, when retrieving a chunk, it is necessary to wait at least as long as the duration of that chunk; the shorter the chunks are, therefore, the quicker a reaction is possible.
  • the chunk length can be reduced until it corresponds to a single frame. At 25 frames per second, this corresponds to 40 ms. Very rapid transmissions are therefore possible.
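The arithmetic from the preceding point, as a trivial check (25 frames per second assumed, as above):

```python
def chunk_length_s(frames_per_chunk: int, fps: float = 25.0) -> float:
    """Chunk duration when a chunk is reduced to the given number of frames."""
    return frames_per_chunk / fps

print(chunk_length_s(1))   # 0.04 -> a single frame corresponds to 40 ms
print(chunk_length_s(25))  # 1.0  -> a one-second chunk, for comparison
```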
  • “zero latency” settings are also possible. This means that the time for the encoding and the subsequent decoding is very short, for example less than 1 s. A certain amount of latency is unavoidable, but with a “zero latency” setting, the corresponding codec method does not cause any additional latency. The buffer time is thus required almost exclusively for the actual transmission of the signal, which can in turn be very significantly reduced with a correspondingly higher bandwidth. For example, at a live concert at which the organizer provides a camera view via a web server to attendees with smartphones, a corresponding WLAN infrastructure can also be provided so that the video signal can be transmitted almost without delay.
  • the encoding of the additional signal and/or the transmission path for the transmission of the additional signal to the additional signal playback device 6 / 2 can thus be automatically selected as a function of the determined synchronization information. If the synchronization information indicates that not much time is left for transmitting the additional signal, then it is advantageous to reduce the data amount of the additional signal through a correspondingly strong compression, to select a rapid transmission path, and to encode very rapidly. A sharp reduction of the data amount and a rapid compression often negatively affect the quality of the additional signal. But if more time is available, then a more laborious encoding and/or a lower compression rate can be used, which achieves a higher quality of the additional signal.
  • a database server 9 is provided with a previously prepared database containing DB feature sequences and synchronization information.
  • the database can also be created on the database server 9 during operation (live database); a minimal sketch of such a live database follows at the end of this list. This is advantageous primarily if additional signals are to be output synchronously to a primary signal that was not previously known. In such a case, feature sequences are extracted from the primary signal and the respective extraction times are measured. These extracted feature sequences are stored in the database together with their extraction times. Instead of or in addition to the extraction time, it is also possible for time information contained in the primary signal to be extracted and stored together with the feature sequences on the database server 9 .
  • the time information in this case constitutes all or part of the synchronization information.
  • the database generated in this way during operation of the system can be synchronized with another database in which various signals have already been stored in advance as feature sequences; this database can also contain meta-information that describes the content, the times, and the meaning of these feature sequences and/or of this signal.
  • a wide variety of media streams can be stored as feature sequences in this database.
  • the feature sequences of the database that is generated “online” or “on the fly” can be allocated meta-information, in particular semantic information or meanings.
  • Such an online generation of the database on the database server 9 is possible with all of the exemplary embodiments explained above.
  • a user can also locally generate such a live database on site in his user device (computer, mobile phone, etc.).
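As announced above, here is a minimal sketch of such a live database. The in-memory storage layout and the placeholder feature extraction are assumptions made for illustration; a real system would compute audio fingerprints here.

```python
import time

# Minimal in-memory "live database": feature sequences extracted from the
# primary signal are stored together with their extraction time and
# optional meta-information (e.g. semantic information).
live_db: list[dict] = []

def extract_features(samples: bytes) -> list[int]:
    """Placeholder feature extraction; stands in for a real fingerprint."""
    return [b % 16 for b in samples[:8]]

def store_live(samples: bytes, meta: str | None = None) -> None:
    live_db.append({
        "features": extract_features(samples),
        "extraction_time": time.time(),  # measured during the extraction
        "meta": meta,
    })

store_live(b"example primary signal data", meta="verse 1")
print(live_db[0]["features"], live_db[0]["meta"])
```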


Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
DE102017131266.8A DE102017131266A1 (de) 2017-12-22 2017-12-22 Verfahren zum Einspielen von Zusatzinformationen zu einer Liveübertragung
DE102017131266.8 2017-12-22
ATA50180/2018A AT520998B1 (de) 2018-03-02 2018-03-02 Verfahren zum Synchronisieren von einem Zusatzsignal zu einem Hauptsignal
ATA50180/2018 2018-03-02
PCT/EP2018/085831 WO2019121904A1 (de) 2017-12-22 2018-12-19 Verfahren zum synchronisieren von einem zusatzsignal zu einem hauptsignal

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2018/085831 A-371-Of-International WO2019121904A1 (de) 2017-12-22 2018-12-19 Verfahren zum synchronisieren von einem zusatzsignal zu einem hauptsignal

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/147,025 Continuation US20230137315A1 (en) 2017-12-22 2022-12-28 Method for Synchronizing Additional Signal to Primary Signal

Publications (2)

Publication Number Publication Date
US20200322671A1 (en) 2020-10-08
US11570506B2 (en) 2023-01-31

Family

ID=64755569

Family Applications (2)

Application Number Title Priority Date Filing Date
US16/955,966 Active US11570506B2 (en) 2017-12-22 2018-12-19 Method for synchronizing an additional signal to a primary signal
US18/147,025 Pending US20230137315A1 (en) 2017-12-22 2022-12-28 Method for Synchronizing Additional Signal to Primary Signal

Family Applications After (1)

Application Number Title Priority Date Filing Date
US18/147,025 Pending US20230137315A1 (en) 2017-12-22 2022-12-28 Method for Synchronizing Additional Signal to Primary Signal

Country Status (10)

Country Link
US (2) US11570506B2 (ru)
EP (2) EP4178212A1 (ru)
JP (2) JP7362649B2 (ru)
KR (1) KR20200142496A (ru)
CN (1) CN111656795A (ru)
BR (1) BR112020012544A2 (ru)
MX (1) MX2020006551A (ru)
RU (1) RU2020123356A (ru)
WO (1) WO2019121904A1 (ru)
ZA (1) ZA202003761B (ru)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102439201B1 (ko) * 2020-09-14 2022-09-01 네이버 주식회사 멀티미디어 콘텐츠와 음원을 동기화하기 위한 전자 장치 및 그의 동작 방법

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006528859A (ja) * 2003-07-25 2006-12-21 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ オーディオとビデオを同期させるための指紋生成及び検出の方法及び装置
US8311487B2 (en) * 2010-05-06 2012-11-13 Research In Motion Limited Multimedia playback calibration methods, devices and systems
US20130304243A1 (en) * 2012-05-09 2013-11-14 Vyclone, Inc Method for synchronizing disparate content files

Patent Citations (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1030278A1 (de) 1999-02-18 2000-08-23 VIDEOR TECHNICAL Services GmbH Schutzgehäuse für optische Geräte, insbesondere für Video-Kameras
EP1307833B1 (en) 2000-07-31 2006-06-07 Landmark Digital Services LLC Method for search in an audio database
US6990453B2 (en) * 2000-07-31 2006-01-24 Landmark Digital Services Llc System and methods for recognizing sound and music signals in high noise and distortion
WO2002037850A2 (en) * 2000-10-30 2002-05-10 Koninklijke Philips Electronics N.V. Adaptive method and apparatus for automatically customizing enhanced program content to user preferences
WO2003003743A2 (en) 2001-06-29 2003-01-09 Lightmotive Technologies Method and apparatus for synchronization of parallel media networks
US20040114919A1 (en) 2002-12-17 2004-06-17 Raytheon Company Modular thermal security camera system
US9609034B2 (en) 2002-12-27 2017-03-28 The Nielsen Company (Us), Llc Methods and apparatus for transcoding metadata
US20060095942A1 (en) * 2004-10-30 2006-05-04 Van Beek Petrus J Wireless video transmission system
EP1729173A2 (en) 2005-05-27 2006-12-06 Telegraf ApS System for generating synchronized add-on information
WO2007072326A2 (en) 2005-12-23 2007-06-28 Koninklijke Philips Electronics N.V. Script synchronization using fingerprints determined from a content stream
US20090300204A1 (en) * 2008-05-30 2009-12-03 Microsoft Corporation Media streaming using an index file
US20140201769A1 (en) 2009-05-29 2014-07-17 Zeev Neumeier Systems and methods for identifying video segments for displaying contextually relevant content
US20110137976A1 (en) 2009-12-04 2011-06-09 Bob Poniatowski Multifunction Multimedia Device
US20110135283A1 (en) 2009-12-04 2011-06-09 Bob Poniatowki Multifunction Multimedia Device
WO2011069035A1 (en) 2009-12-04 2011-06-09 Tivo Inc. Multifunction multimedia device
US20140360343A1 (en) * 2010-05-04 2014-12-11 Shazam Entertainment Limited Methods and Systems for Disambiguation of an Identification of a Sample of a Media Stream
CA2739104A1 (en) * 2010-05-06 2011-11-06 Research In Motion Limited Multimedia playback calibration methods, devices and systems
US20110286735A1 (en) 2010-05-19 2011-11-24 Flir Systems, Inc. Infrared camera assembly systems and methods
RU2566808C2 (ru) 2010-07-09 2015-10-27 Энновейшнз Холдингз Пте. Лтд Система и способ для приема и синхронизации контента на устройстве связи
WO2012049223A2 (en) 2010-10-12 2012-04-19 Compass Interactive Limited Alternative audio
KR20120063798A (ko) 2010-12-08 2012-06-18 아이플래테아 엘엘씨 방송콘텐츠의 부가정보 제공 시스템 및 그 방법
EP2507790B1 (en) 2011-06-06 2014-01-22 Bridge Mediatech, S.L. Method and system for robust audio hashing.
WO2014018652A2 (en) 2012-07-24 2014-01-30 Adam Polak Media synchronization
WO2014178796A1 (en) 2013-05-03 2014-11-06 Telefun Transmedia Pte Ltd System and method for identifying and synchronizing content
WO2014209179A1 (en) 2013-06-26 2014-12-31 Saab Ab Method and transceiver for network diversity in long distance communications
US20150189347A1 (en) 2013-12-31 2015-07-02 Google Inc. Methods, systems, and media for presenting supplemental information corresponding to on-demand media content
WO2016085414A1 (en) 2014-11-27 2016-06-02 JOHN SMITH s.r.o. Method to lower decline in watching channels during commercial breaks and a connection
US20160227241A1 (en) * 2015-01-29 2016-08-04 Ecole De Technologie Superieure Method and apparatus for video intermodal transcoding
JP2016219979A (ja) * 2015-05-19 2016-12-22 西日本電信電話株式会社 クライアント端末、インターネット動画再生システム、及びプログラム
US20170181113A1 (en) * 2015-12-16 2017-06-22 Sonos, Inc. Synchronization of Content Between Networked Devices
WO2017181852A1 (zh) 2016-04-19 2017-10-26 腾讯科技(深圳)有限公司 一种歌曲确定方法和装置、存储介质

Non-Patent Citations (9)

* Cited by examiner, † Cited by third party
Title
Anonymous, "Accurately synchronizing companion devices with TV programs" Internet Citation, Jan. 10, 2012 (Jan. 10, 2012), pp. 1-2, Retrieved from the Internet: http://www.civolution.com/fileadmin/bestanden/datasheets/VideoSync_-2nd_screen.pdf [retrieved on Jan. 10, 2012] XP007920064.
Brandenburg, K., "MP3 and AAC Explained," Fraunhofer Institute for Integrated Circuits FhG-11S A, 1-12 (1999).
Communication under Article 94(3) EPC dated Feb. 23, 2021 for European Patent Application No. 18825684.6 filed Dec. 19, 2018. 10 pages.
International Preliminary Report on Patentability, dated Jul. 2, 2020, from International Application No. PCT/EP2018/085831, filed on Dec. 19, 2018. 21 pages.
International Search Report of the International Searching Authority, dated Feb. 15, 2019, from International Application No. PCT/EP2018/085831, filed on Dec. 19, 2018. 7 pages.
Khalifeh, A.F., et al., "Perceptual Evaluation of Audio Quality Under Lossy Networks" IEEE WiSPNET 2017 Conference, 1-5 (2017).
Noll, P., "MPEG Digital Audio Coding," IEEE Signal Processing Magazine, 59-81 (1997).
Salomon, D., "Data Compression The Complete Reference," Springer Science + Business Media, LLC, 821-847 (2007).
Written Opinion of the International Searching Authority, dated Feb. 15, 2019, from International Application No. PCT/EP2018/085831, filed on Dec. 19, 2018. 9 pages.

Also Published As

Publication number Publication date
RU2020123356A3 (ru) 2022-01-25
JP7362649B2 (ja) 2023-10-17
US20200322671A1 (en) 2020-10-08
ZA202003761B (en) 2021-04-28
WO2019121904A1 (de) 2019-06-27
US20230137315A1 (en) 2023-05-04
JP2021507654A (ja) 2021-02-22
JP2023171914A (ja) 2023-12-05
RU2020123356A (ru) 2022-01-24
KR20200142496A (ko) 2020-12-22
EP4178212A1 (de) 2023-05-10
EP3729817A1 (de) 2020-10-28
CN111656795A (zh) 2020-09-11
BR112020012544A2 (pt) 2020-11-24
MX2020006551A (es) 2020-11-24

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

AS Assignment

Owner name: NATIVEWAVES GMBH, AUSTRIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HASLAUER, CHRISTOF;DUMBOECK, OLIVER;REEL/FRAME:053021/0701

Effective date: 20200618

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE