EP1030290A2 - Procédé de transfert et/ou de mémorisation d'informations additionelles cachées dans un signal, en particulier en signal audio - Google Patents

Procédé de transfert et/ou de mémorisation d'informations additionelles cachées dans un signal, en particulier en signal audio

Info

Publication number
EP1030290A2
Authority
EP
European Patent Office
Prior art keywords
additional information
quantization
signal
subband
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP00103108A
Other languages
German (de)
English (en)
Other versions
EP1030290A3 (fr)
Inventor
Frank Kurth
Michael Clausen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from DE19906512C2
Application filed by Individual filed Critical Individual
Publication of EP1030290A2
Publication of EP1030290A3
Legal status: Withdrawn

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/02 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
    • G10L19/0204 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders using subband decomposition
    • G10L19/0208 Subband vocoders
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/018 Audio watermarking, i.e. embedding inaudible data in the audio signal

Definitions

  • The invention relates to a method for the unnoticed transmission and/or storage of additional information within a signal, in particular an audio signal, according to the preamble of claim 1. The invention further relates to the application of this method to the cascaded coding and decoding of signals, in particular audio signals.
  • Digital audio signals represent acoustic signals as a sequence of discrete temporal samples. Such a representation is used, for example, on CDs, in DAT devices or on digital computers.
  • Characteristic of such signals are a certain sampling rate, for example 32000, 44100 or 48000 Hz, and a certain quantization accuracy, for example 12 or 16 bits.
  • The sampling rate indicates the number of discrete sample values per second, and the quantization accuracy indicates the number of bits used per sample.
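  • For example, a single CD-quality channel sampled at 44100 Hz with 16-bit accuracy amounts to 44100 × 16 = 705600 bits per second before any data reduction.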
  • The preamble of claim 1 encompasses the use of the components of an audio coding method.
  • A typical audio coding method of this kind is given by the standard ISO 11172-3.
  • Methods according to ISO 11172-3 are used to represent a digital audio signal in a coded, data-reduced form and to recover the digital audio signal from it.
  • The data-reduced form consists of digital code words representing the actual signal content, together with control information that determines the type of data reduction on the decoder side and enables the reconstruction of a digital audio signal.
  • Characteristic of such audio coding methods is the conversion or transformation of digital audio signals into a subband or spectral-component representation, and the corresponding inverse conversion or inverse transformation back from it.
  • The values of the subband or spectral components are interpreted as frequencies or frequency bands contained in the digital audio signal.
  • The terms subband components and spectral components are used synonymously wherever only subband components are referred to.
  • The data reduction is performed on the subband signals, usually by means of quantization.
  • The subband values are represented by code words which characteristically use fewer bits than the subband values themselves.
  • The data reduction can include further steps.
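  • Purely as an illustration of the quantization step described above (not part of the patent text), a minimal uniform quantizer and its dequantizer might look as follows; the step size and bit width are hypothetical parameters, not values prescribed by ISO 11172-3, which additionally uses scale factors and non-uniform quantizer tables.

```python
def quantize(subband_value: float, step: float, bits: int) -> int:
    """Map a non-negative subband value to a code word of `bits` bits
    (uniform quantizer, illustrative sketch only)."""
    levels = 1 << bits                     # number of available code words
    index = int(subband_value // step)     # index of the amplitude interval
    return max(0, min(levels - 1, index))  # clamp to the code word range


def dequantize(code_word: int, step: float) -> float:
    """Map a code word back to a representative in the middle of its amplitude
    interval, bounding the reconstruction error by half the interval size."""
    return (code_word + 0.5) * step
```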
  • The other terms used are taken from the ISO 11172-3 standard.
  • Methods for the unnoticed transmission of additional information within other data sets are also known as steganography methods. These are used particularly in image processing. Numerous basic methods of the prior art essentially replace the least significant bits of a data record with bits of the additional information to be embedded. Methods that aim to store additional information within a data-reduced signal often embed the additional information, based on psychovisual or psychoacoustic considerations, in the code of the data-reduced values.
  • A method for the unnoticed transmission and/or storage of additional information within a coded, data-reduced audio signal that exploits psychoacoustic aspects is known from DE 44 30 864 A1.
  • Corresponding methods store the additional information in the low-order bits of the data-reduced code words at those locations where the reduced code words have more bits available than are necessary, according to the psychoacoustic model, for a coding that is free of subjective impairments.
  • The length of the binary code words is thereby increased by appending the additionally inserted bits at the positions that the decoder regards as the least significant bits.
  • A decoder that has no knowledge of the additional information decodes this extended code word. If this method is extended to include decoding of a given data stream into a temporal audio signal, one obtains a method for transmitting additional information within audio signals.
  • The method according to DE 44 30 864 A1 embeds the additional information in the data-reduced code; however, an extension by a downstream decoder as described above allows just this additional information to be embedded in the temporal audio signal. Especially at low permitted bit rates of the data-reduced code, it can be expected that the amount of additional information that can be transferred per data block is only small. Furthermore, in the exemplary embodiments of DE 44 30 864 A1, the technique used to extend the code words by low-order bits yields, for numerous types of quantization or dequantization, a reconstruction error that generally lies above the reconstruction error occurring when the non-extended code words are used. With this method, the induced noise can therefore exceed the masking threshold and thus cause a loss of quality in the temporal audio signal.
  • The object of the method according to the invention is to embed the additional information in the decoded and dequantized subband signals in such a way that it is no longer perceptible when the temporal audio signal is reconstructed.
  • One variant of the method has the object of embedding the additional information in the subband signals in such a way that the reconstruction error induced by the quantization is not increased by the embedding.
  • Another object of the invention is the optimal use of the capacity available, from a psychoacoustic point of view, for the additional information to be embedded.
  • A further object concerns coding and embedding the additional information in such a way that a corresponding coder or decoder can reconstruct it again. This enables the additional information to be transmitted to a recipient.
  • Another object of the method according to the invention is the robustness of the code used against arithmetic errors of the transformation/inverse-transformation pair, as occur, for example, with the subband transformation used in the standard ISO 11172-3.
  • Audio signals are decomposed into subband signals in consecutive or overlapping blocks of temporally connected signal values, e.g. using windowed Fourier transforms, cosine transforms and/or filter banks.
  • The parameters for data reduction are set in such a way that, often subject to the constraint of the maximum bit rate available for a data block, the noise caused by the data reduction does not exceed the masking threshold, which determines whether certain spectral components are audible.
  • The data reduction comprises the type of quantization, the space made available per block in the form of the bit width of a code word per subband, as well as scale factors.
  • During quantization, signal values from a contiguous amplitude range are mapped onto a single code word.
  • During dequantization, the decoder maps this code word onto a representative within the original amplitude range. To reduce the maximum reconstruction error to half the interval size, a value in the middle of the interval is chosen as such a representative. Since no code word is mapped onto the other values within the amplitude range during dequantization, these values can be used to transport the additional information.
  • The task of embedding the additional information is thus solved by a specific mapping onto certain representatives within the amplitude range. Here, the code word indicates the dequantization representative, and the additional information to be embedded indicates the position within the amplitude range of that representative.
  • In one variant, this selection of representatives is performed by directly replacing the low-order bits with parts of the code to be embedded.
  • This method generally leads to a quantization error that is higher than the maximum error normally caused by the quantization. However, this error is acceptable in some applications.
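  • Purely as an illustration of this bit-replacement variant (a sketch under assumed parameters, not the patent's reference implementation), the dequantized subband value can be treated as an integer whose lowest bits are overwritten with payload bits:

```python
def embed_bit_replacement(dequantized_value: int, payload_bits: int, n_bits: int) -> int:
    """Replace the n_bits least significant bits of a dequantized subband value
    with n_bits of additional information (illustrative sketch)."""
    mask = (1 << n_bits) - 1
    return (dequantized_value & ~mask) | (payload_bits & mask)


def extract_bit_replacement(subband_value: int, n_bits: int) -> int:
    """Read the embedded payload back from the low-order bits."""
    return subband_value & ((1 << n_bits) - 1)
```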
  • On the encoder side, for each subband value relevant to the method, a trend bit is integrated into the code which indicates whether this subband value is greater or smaller than the representative used for reconstruction.
  • The amplitude values available for embedding the code are divided into those greater than the representative used for reconstruction and those smaller than this representative. If the transmitted trend bit indicates that the original subband value is greater than the representative used for reconstruction, one of the amplitude values greater than this representative is used as the coding of the additional information; otherwise, one of the amplitude values smaller than the representative is used.
  • This can be realized by addition or subtraction of the binary representations of the code and the representative. In this way, the original reconstruction error is not increased by the embedding.
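  • A minimal sketch of this trend-bit variant (assumed integer amplitudes and payload offsets, not values from the patent):

```python
def embed_trend_bit(representative: int, trend_bit: int, payload: int) -> int:
    """Encode `payload` as an offset from the dequantization representative.

    A trend bit of 1 is assumed to mean that the original subband value was not
    smaller than the representative, so the offset is added; otherwise it is
    subtracted.  Because the embedded value stays on the same side of the
    representative as the original, the reconstruction error is not increased,
    provided the payload offsets remain within the quantization interval
    (illustrative sketch only).
    """
    return representative + payload if trend_bit else representative - payload


def extract_trend_bit_payload(embedded_value: int, representative: int) -> int:
    """Recover the payload as the magnitude of the offset from the known representative."""
    return abs(embedded_value - representative)
```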
  • After the additional information for the processed block has been embedded, the inverse transformation of that block is completed, and the resulting temporal data blocks are assembled into a temporal audio signal.
  • The recovery of the embedded additional information can be carried out block by block after applying the forward transformation that inverts the above inverse transformation. Assuming invertibility or reversibility of the transformation, a corresponding decoder can read the amplitude values selected by the embedding mechanism from the resulting subband signals and, using the known representatives employed for the reconstruction, extract the additional information.
  • Transformation and inverse-transformation processes occurring in practice usually introduce arithmetic or reconstruction errors into the processed data stream. To avoid impairing the embedded codes, it is appropriate to provide the trend-bit method described above with redundancy. It is likewise useful, for methods that use the direct bit embedding described above, to protect the code words of the additional information against arithmetic or transformation errors by means of an error-protection code.
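  • A minimal extraction loop along these lines, assuming that the embedding positions, bit widths and transform are shared between encoder and decoder (all names below are hypothetical):

```python
def recover_additional_info(time_block, analysis_transform, positions, n_bits):
    """Re-transform a temporal block into subband values and read the payload
    bits from the positions selected by the embedding mechanism.

    Illustrative sketch only: a real system would additionally verify the
    embedded mark and apply the error-protection code mentioned above before
    trusting the extracted bits.
    """
    subbands = analysis_transform(time_block)        # forward (analysis) transform
    payload = []
    for pos in positions:
        value = int(round(subbands[pos]))            # arithmetic errors may shift this slightly
        payload.append(value & ((1 << n_bits) - 1))  # low-order bits carry the payload
    return payload
```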
  • For the purpose of decoding the embedded additional information by an encoder provided for this, it is advantageous to provide the embedded code with characteristic features or marks which give information about the subband values used for embedding, as well as the bit width and position of the code within these subband values. Since arbitrary signals can be input to such an encoder, and incorrect decoding is therefore possible, an embedded mark should be of a quality that allows a decision about the presence of a code with a high probability of success.
  • Furthermore, for audio codecs with the features of the preamble of claim 1, the method does not alter the data-reduced code generated on the encoder side and only requires knowledge of the type of data reduction (type of quantization, scale factors, etc.) and the settings used. The method is therefore easy to adapt to a variety of such codecs. For example, with the standard ISO 11172-3, the control information transmitted to the decoder for recovering the temporal audio signal already contains all parameters required for embedding the additional information. It also follows that both temporal audio signals and data-reduced code can serve as input for the embedding. For methods according to subclaims 2 and 6, however, this applies only if the required trend information has been inserted into the data-reduced code as additional information.
  • As a universal method for the hidden transmission of additional information in conventional digital audio signals, the method according to the invention finds numerous applications, in particular as a steganography method specifically tailored to audio signals.
  • This information can be used, for example, by CD players equipped with decoders conforming to the method according to the invention. This allows, for example, the simultaneous output of acoustic and textual information for pieces of music containing vocal parts. With playback devices not equipped with such an additional decoder, no impairment of the sound quality arises in this case.
  • Further applications arise in audio databases, which nowadays often use data-reduced forms of archiving for audio material. If, for example, a piece of music is extracted from the database of an audio-on-demand provider, decoded and sent to a customer on CD, additional information such as shipping date, distributor, buyer or even copyright information can be embedded unnoticed.
  • Fig. 1 shows a conventional audio codec.
  • The temporal audio signal 1 is decomposed block by block into subbands 3 by the filter bank 2 and fed to the quantizer 4.
  • The temporal audio signal 1 is further subjected block-synchronously to a psychoacoustic analysis 7.
  • The parameters 8 calculated by this analysis determine, in combination with a previously determined required bit rate, the bit allocation 9.
  • These code words 5, together with the quantization parameters 11 required for dequantization, are fed to a multiplexer 6, which encodes them for further transmission 12.
  • On the decoder side, a demultiplexer 13 decodes the code words 14 and the quantization parameters 15 required for dequantization and feeds them to the dequantization stage 16. After dequantization, the subband values 17 are fed to the reconstruction filter bank 18 and transformed into a block of the temporal output signal 19.
  • Fig. 2 shows an audio codec for embedding additional information using the bit-replacement method.
  • The temporal audio signal 1 is decomposed block by block into subbands 3 by the filter bank 2 and fed to the quantizer 4.
  • The temporal audio signal 1 is further subjected block-synchronously to a psychoacoustic analysis 7.
  • The parameters 8 calculated by this analysis determine, in combination with a predetermined bit rate, the bit allocation 9.
  • The quantization parameters 10 are calculated from the bit allocation; using them, the quantizer 4 converts the subband values 3 into data-reduced subband values 5.
  • On the decoder side, a demultiplexer 13 decodes the code words 14 and the quantization parameters 15 required for dequantization.
  • The demultiplexer 13 passes the code words 14 to the dequantizer 16 and the quantization parameters 15 to the dequantizer 16 and the embedding module 20.
  • After dequantization, the subband values 17 are fed to the embedding module 20.
  • Using the quantization parameters 15, the embedding module 20 determines the parameters for embedding by the bit-replacement method and carries out the embedding of the additional information 25.
  • The resulting subband signals 21 are fed to the reconstruction filter bank 18 and transformed into a block of the temporal output signal 19.
  • Fig. 3 shows an audio codec for embedding additional information using the trend-bit method.
  • The temporal audio signal 1 is decomposed block by block into subbands 3 by the filter bank 2 and fed to the quantizer 4.
  • The temporal audio signal 1 is further subjected block-synchronously to a psychoacoustic analysis 7.
  • The parameters 8 calculated by this analysis determine, in combination with a predetermined bit rate, the bit allocation 9.
  • The quantization parameters 10 are calculated from the bit allocation; using them, the quantizer 4 converts the subband values 3 into data-reduced subband values 5.
  • During the data reduction, the quantizer 4 calculates the trend-bit information for the relevant subband values. To determine the relevant subband values, the quantizer 4 calculates the number of subband values required for embedding the additional information 25 and then selects subband values accordingly.
  • The code words 5, together with the quantization parameters 11 required for dequantization and the trend-bit information 22, are fed to a multiplexer 6, which encodes them for further transmission 12.
  • On the decoder side, a demultiplexer 13 decodes the code words 14 and the quantization parameters 15 required for dequantization.
  • The demultiplexer passes the code words 14 to the dequantizer 16, the trend-bit information 23 to the embedding module 20, and the quantization parameters 15 to both the dequantizer 16 and the embedding module 20. After dequantization has been carried out, the subband values 17 are fed to the embedding module.
  • Using the quantization parameters 15 and the trend-bit information 23, the embedding module 20 determines the parameters for embedding by the trend-bit method and carries out the embedding of the additional information 25.
  • The resulting subband signals are fed to the reconstruction filter bank 18 and transformed into a block of the temporal output signal 19.
  • Fig. 4 shows an example of the bit-replacement method. From the first subband T1 of the illustrated subbands 1 of a block, the sixth subband value is considered. The quantization maps the twelve-bit subband value 2 onto a four-bit code word 3. The dequantization 4 maps the code 3 onto a twelve-bit subband value. Embedding by bit replacement replaces the eight least significant bits of 4 with bits of the additional information (a1, ..., a8). The subband value with embedded code 5 is inserted at the corresponding position of the subbands 6 to be inverse-transformed.
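  • A hypothetical numeric instance of this example (the concrete values are illustrative only and are not taken from Fig. 4; a simple truncating quantizer is assumed):

```python
original = 0b1011_0110_1101          # a 12-bit subband value (2925)
code_word = original >> 8            # 0b1011: four-bit code word after quantization
dequantized = code_word << 8         # 0b1011_0000_0000: twelve-bit value after dequantization
payload = 0b1010_0111                # a1..a8, the eight bits of additional information
embedded = dequantized | payload     # 0b1011_1010_0111: low eight bits now carry the payload
assert embedded & 0xFF == payload    # the decoder reads the payload straight back
```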
  • Fig. 5 shows an example of the trend-bit method for a single subband value.
  • The additional information Z to be transmitted is given by one of the values 0, 1 or 2.
  • The underlying quantization is given by steps A to E.
  • The exemplary amplitude interval A from 0 to 9 is divided into the intervals A1 from 0 to 4 and A2 from 5 to 9. Values from interval A1 are mapped onto the code word C1, and values from interval A2 onto the code word C2 (B and C).
  • The reconstruction D maps the code word C1 to the value 2 and the code word C2 to the value 7.
  • A quantizer 4 according to Figure 3 accordingly forms the trend bit T for a subband value quantized into the code word Ci according to rule (A-C), as given in Table 1.
  • From the code word, the additional information Z and the trend bit T, the dequantization stage determines a dequantized value, observing the possible values from Table 2.
  • The possible reconstruction levels according to Table 2 follow from the requirement that the maximum quantization error arising from A-E should, also when the trend-bit method is used, be less than two in magnitude.
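  • The quantization and trend-bit computation of this example can be written down directly; only the convention that the trend bit is 1 when the original value is not smaller than the representative is an assumption, since Table 1 is not reproduced in this text:

```python
def quantize_fig5(x: int) -> str:
    """B and C of Fig. 5: interval A1 = 0..4 maps to code word C1, A2 = 5..9 to C2."""
    return "C1" if x <= 4 else "C2"

REPRESENTATIVE = {"C1": 2, "C2": 7}   # reconstruction D of Fig. 5

def trend_bit(x: int) -> int:
    """Assumed convention for Table 1: 1 if the original subband value is not
    smaller than the representative of its code word, otherwise 0."""
    return int(x >= REPRESENTATIVE[quantize_fig5(x)])
```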
  • Fig. 6 shows a block diagram of a decoder for recovering the embedded additional information.
  • The temporal audio signal 1 is transformed block by block into subbands 3 by the filter bank 2.
  • A detector 4 checks, taking into account the type of embedding used and knowledge of all possible bit or code widths, whether a mark of an embedded code is present.
  • If necessary, the detector 4 can control a translation (time shift) 9 of the temporal audio signal 1 and a repetition of the steps relating to components 2, 3, 4 and 9.
  • Another scenario, which is becoming increasingly important, is the archiving of large amounts of data in digital (music) libraries.
  • The method presented here as an application example is therefore primarily intended for audio data, but it naturally also works for other data, e.g. video data. Given the massive amounts of data arising, for example, in the digital archiving of radio productions, it is natural to convert the resulting data into a space-saving format.
  • The psychoacoustic compression methods described in the next section meet this requirement with data reduction rates of up to 1:12 for HiFi recordings and perceptually transparent quality (no audible quality differences). Because the original data can no longer be reproduced from the code at such high compression rates - the decompressed data matches the original only perceptually - these are lossy methods.
  • Tasks of a music library, e.g. in connection with audio editing and cutting systems, consist in the retrieval and transfer as well as in the processing (e.g. mixing of several audio pieces) and repeated storage of the audio data. If the transfer is uncompressed (e.g. via CD or DAT), the recipient receives first-generation data with the problems described above. If several decompressed records are mixed together or even edited, then for re-saving in the music library, again only first-generation data has been worked on. For the sensible usability of such a digital music library, a method for avoiding such generation effects is therefore necessary.
  • Temporal masking means that, of two signals occurring in temporal succession, one of the signals can make the other inaudible.
  • The effect of forward masking (a signal masks the subsequent one) acts over a larger time interval than the effect of backward masking (a signal masks the preceding one).
  • Frequency masking can be described on the basis of the spectral or Fourier analysis of a signal over a (relatively short) time interval. Here all events are interpreted as occurring simultaneously.
  • The data reduction is essentially performed by a coarser quantization of the digital subband signals. In this respect there is, for this type of audio compression, a relationship to the method according to the preamble of claim 1.
  • The data lost through the coarser quantization can no longer be reconstructed on decompression. The first-generation signal is thereby changed compared to the original, and a recalculation of the psychoacoustic model on the changed signal generally yields a different parameter set. This parameter change represents an important cause of the generation effects in codecs of this type.
  • The method presented here represents a proposed solution which, for psychoacoustic compression methods, allows any repetition of compression and decompression, i.e. any number of generations, while maintaining the perceptual quality of the first generation. More precisely: when suitable encoder parameters are chosen, copies of further generations are theoretically lossless; in practice, any resulting quality losses depend on the accuracy of the computer arithmetic used.
  • With regard to the additional information required, the method works in situ, i.e. no additional data formats are needed.
  • The audio data (PCM) generated by the decoder can be stored on any conventional digital medium and can both be reproduced with standard devices and be compressed losslessly, in the above sense, by an encoder conforming to the proposed method.
  • The method is essentially based on two basic ideas, from which two fundamental sub-algorithms are derived.
  • The side information consists, for example, of information about the quantization levels, quantization types or subbands used.
  • From the control information or coding parameters, the decoder can reconstruct the output signal corresponding to the code, and the encoder can reconstruct the code from the subband-transformed signals.
  • The first basic idea provides a method that encodes PCM data and side information into a common file (hybrid code).
  • This file can be used as an audio file and reproduced on standard media without noticeable loss of quality, and it can also be decoded by a corresponding encoder in such a way that the entire side information can be reconstructed.
  • The main principle here is the use of psychoacoustic parameters in a way that allows the combination of PCM code and side information without loss of quality. Roughly speaking, the masking parameters describe in which subbands the side information can be encoded.
  • The principle developed here can be realized by methods with the features of claim 1. More specifically, the side information is embedded in the transform domain into the subband signals, in accordance with the coarsening of the quantization induced by the psychoacoustic model.
  • The second basic idea uses the encoding into the subband signals, which can be described as "targeted dequantization" and which, in signal-processing terms, corresponds to a kind of modulation of a carrier signal.
  • This idea leads to an algorithm that, using a small amount of additional information, allows targeted dequantization such that the requirements of the psychoacoustic model are observed.
  • The price for this is the slightly larger side information, which results in a slightly larger compressed file.
  • This second principle can advantageously be realized by a method with the features of claim 2.
  • The trend information mentioned in this claim corresponds to the information mentioned above.
  • The scheme of the proposed codec is shown in Fig. 7. To describe how it works, we first consider the decoder part.
  • The side information S and the code words Q of the subband samples are first obtained from the transmission channel.
  • The side information serves to dequantize the code words into subband samples Q'.
  • The module for robust coding R now uses the above-mentioned method to modulate the side information into suitable subband samples. For later detection of the modulation by the encoder, the subband samples are provided with a bit signature. Finally, the inverse subband transformation yields the hybrid PCM code y'.
  • The encoder can process two types of input signal: original data (PCM data) and hybrid PCM code.
  • On original data, the encoder works just like a conventional subband encoder. The decision between original and hybrid code is made in the detector D, which, after the subband transformation of the input, tries to recognize the bit signature. If the bit signature is recognized, the entire side information is extracted from the subband samples. If not, the side information is determined in the conventional way with the help of the psychoacoustic model. The encoder then determines the code words by quantization (labelled Q in Fig. 7).
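  • A control-flow sketch of this per-block encoder decision (every function name below is an assumed interface, not an API from the patent or from ISO 11172-3):

```python
def encode_block(time_block, analysis_transform, detect_signature,
                 extract_side_info, psychoacoustic_model, quantize_block):
    """Decide whether the input block is hybrid PCM code or original data
    and derive the side information accordingly (illustrative sketch)."""
    subbands = analysis_transform(time_block)
    if detect_signature(subbands):
        # Hybrid code: reuse the side information embedded by an earlier
        # generation, so re-quantization reproduces the earlier code words.
        side_info = extract_side_info(subbands)
    else:
        # Original data: derive the side information from the psychoacoustic model.
        side_info = psychoacoustic_model(time_block)
    code_words = quantize_block(subbands, side_info)
    return code_words, side_info
```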
  • Fig. 8 shows a codec stage of the method according to the invention.
  • The temporal audio signal 1 is decomposed block by block into subbands 30 by the filter bank 2 and fed to the detector 31.
  • The detector 31 tries to recognize information from previous codec stages and to check its integrity. If this is successful, it initiates the decoder process 33, passing on the subband values 32. The decoder process extracts the parameters 36 required for quantization and passes them to the quantizer 4 and the multiplexer 6.
  • The decoder 33 also passes the processed subband values 34 to the quantizer 4. If the detection or integrity check by the detector 31 is unsuccessful, the detector initiates (35) a block-synchronous psychoacoustic analysis 7 of the temporal audio signal 1.
  • The parameters 8 calculated by this analysis determine, in combination with a previously determined required bit rate, the bit allocation 9. From the bit allocation, the quantization parameters 10 are calculated; using them, the quantizer 4 converts the subband values (identified 30 and 34, in this case looped through from the filter bank 2 to the quantizer 4) into data-reduced subband values 5. These code words 5, together with the parameters 11 required for quantization or dequantization, are fed to a multiplexer 6, which encodes them for further transmission 12. On the decoder side, a demultiplexer 13 decodes the code words 14 and the quantization parameters 15 required for quantization or dequantization and feeds them to the dequantization stage 16 and the embedding stage 20. After dequantization, the subband values 17 are fed to the embedding stage 20, which performs the embedding of the quantization information 15 into the dequantized subband signals.

EP00103108A 1999-02-17 2000-02-16 Procédé de transfert et/ou de mémorisation d'informations additionelles cachées dans un signal, en particulier en signal audio Withdrawn EP1030290A3 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
DE1999106512 DE19906512C2 (de) 1999-02-17 1999-02-17 Verfahren zum unbemerkten Übertragen und/oder Speichern von Zusatzinformationen innerhalb eines Signals, insbesondere Audiosignals
DE19906513 1999-02-17
DE19906512 1999-02-17
DE19906513 1999-02-17

Publications (2)

Publication Number Publication Date
EP1030290A2 true EP1030290A2 (fr) 2000-08-23
EP1030290A3 EP1030290A3 (fr) 2002-12-11

Family

ID=26051894

Family Applications (1)

Application Number Title Priority Date Filing Date
EP00103108A Withdrawn EP1030290A3 (fr) 1999-02-17 2000-02-16 Procédé de transfert et/ou de mémorisation d'informations additionelles cachées dans un signal, en particulier en signal audio

Country Status (1)

Country Link
EP (1) EP1030290A3 (fr)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0372601A1 (fr) * 1988-11-10 1990-06-13 Koninklijke Philips Electronics N.V. Codeur pour insérer une information supplémentaire dans un signal audio numérique de format préalablement déterminé, décodeur pour déduire cette information supplémentaire de ce signal numérique, dispositif muni d'un tel codeur, pour enregister un signal numérique sur un support d'information et support d'information obtenu avec ce dispositif
DE4405659C1 (de) * 1994-02-22 1995-04-06 Fraunhofer Ges Forschung Verfahren zum kaskadierten Codieren und Decodieren von Audiodaten
DE4430864A1 (de) * 1994-08-31 1996-03-07 Corporate Computer Systems Eur Verfahren zum unbemerktem Übertragen und/oder Speichern von Zusatzinformationen innerhalb eines quellencodierten, datenreduzierten Audiosignals

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KURTH F: "Vermeidung von Generationseffekten in der Audiocodierung", Dissertation, Universität Bonn, [Online] 9 March 1999 (1999-03-09), XP002213969. Retrieved from the Internet: <URL:http://www-mmdb.iai.uni-bonn.de/~frank/diss_kurth.ps.gz> [retrieved on 2002-09-17] *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010103442A1 (fr) * 2009-03-13 2010-09-16 Koninklijke Philips Electronics N.V. Incorporation et extraction de métadonnées
CN114242084A (zh) * 2021-11-12 2022-03-25 合肥工业大学 基于分层的低比特率语音流大容量隐写方法和系统
CN114242084B (zh) * 2021-11-12 2023-03-10 合肥工业大学 基于分层的低比特率语音流大容量隐写方法和系统

Also Published As

Publication number Publication date
EP1030290A3 (fr) 2002-12-11

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE

AX Request for extension of the european patent

Free format text: AL;LT;LV;MK;RO;SI

PUAL Search report despatched

Free format text: ORIGINAL CODE: 0009013

RIC1 Information provided on ipc code assigned before grant

Free format text: 7G 10L 19/02 A, 7G 10L 19/14 B

AK Designated contracting states

Kind code of ref document: A3

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE

AX Request for extension of the european patent

Free format text: AL;LT;LV;MK;RO;SI

AKX Designation fees paid
REG Reference to a national code

Ref country code: DE

Ref legal event code: 8566

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20030612