EP1665234B1 - Information flow transmission method whereby said flow is inserted into a speech data flow, and parametric codec used to implement same
- Publication number
- EP1665234B1 (application number EP04787314A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- bits
- information stream
- stream
- frames
- vocoder
- Prior art date
- Legal status
- Expired - Lifetime
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/04—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
- G10L19/08—Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters
- G10L19/12—Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters the excitation function being a code excitation, e.g. in code excited linear prediction [CELP] vocoders
Definitions
- the present invention relates generally to the field of speech coding, and in particular to a method of inserting an information stream within a speech data stream; the inserted information stream may be a stream of lower-rate speech data or a transparent data stream.
- the invention finds applications, in particular, in public or professional mobile radiocommunication systems (PMR systems, "Professional Mobile Radiocommunication”).
- a speech signal is an acoustic signal emitted by a human voice apparatus.
- a codec is a hardware and / or software unit for coding and decoding a digital stream. Its coding function makes it possible to transcode a digital stream of quantized samples in the time domain of a source signal (for example a speech signal) into a compressed digital stream. Its decoding function makes it possible to perform a pseudo-inverse operation in order to restore attributes representative of the source signal, for example perceptible attributes in a receiver such as the human ear.
- a speech data stream is a data stream generated by a speech codec, from the encoding of a speech signal.
- a transparent data stream is a binary digital sequence whose content type is unspecified, whether it is a computer data stream or a speech data stream. The data are said to be transparent in that, from an external point of view, all the bits have equal importance vis-à-vis, for example of the correction of the transmission errors so that an error-correcting coding must therefore be uniform on all the bits. Conversely, if the stream is a speech data stream, some bits are more important to protect than others.
- a speech codec, also called a vocoder ("Vocoder", "Speech Codec" or "Voice Codec" in English), is a specialized codec adapted to the encoding of a quantized speech signal and to the decoding of a stream of speech frames. In particular, its coding function has a sensitivity which depends on the characteristics of the speaker's speech, and a low bit rate associated with a frequency band more limited than the general audio frequency band (20 Hz-20 kHz).
- there are several families of speech coding techniques, including coding of the speech-signal waveform (for example ITU-T G.711 PCM A/mu-law coding), source-model coding techniques (the best known being CELP coding, "Code-Excited Linear Prediction"), perceptual codings, and hybrid techniques based on the combination of techniques belonging to at least two of the above families.
- the invention is directed to the application of "source model” coding techniques.
- These techniques are also called parametric coding techniques, because they are based on the representation of excitation parameters of the speech source, and/or of parameters describing the spectral envelope of the signal emitted by the speaker (for example according to a linear-prediction coding model exploiting the correlation between consecutive values of the parameters associated with a synthesis filter, or according to a cepstral model), and/or of source-dependent acoustic parameters, for example the amplitude and the perceived fundamental frequency ("pitch"), the pitch period and the amplitude of the energy peaks of the first harmonics of the pitch frequency at different intervals, the degree of voicing ("voicing rate"), the melody and its sequences.
- a parametric vocoder is a vocoder that implements digital speech coding using a parametric model of the speech source.
- a vocoder associates several parameters to each frame of the speech stream.
- first, linear-prediction spectral parameters, also called LP ("Linear Prediction") coefficients or LPC ("Linear Prediction Coding") coefficients, which define the linear prediction filter of the vocoder (short-term filter).
- second, adaptive excitation parameters associated with one (or more) adaptive excitation vector(s), also known as LTP ("Long Term Predictor") parameters or adaptive prediction coefficients, which define a long-term filter in the form of a first excitation vector and an associated gain to be applied at the input of the synthesis filter.
- third, fixed excitation parameters associated with one (or more) fixed excitation vector(s), also called algebraic parameters or stochastic parameters, which define a second excitation vector and an associated gain to be applied at the input of the synthesis filter.
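- as a purely illustrative sketch of how these three parameter families might be grouped per frame (the field names, types and the idea of storing raw quantization indices are assumptions of this example, not part of any particular vocoder specification):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CelpSubframe:
    adaptive_index: int   # index of the adaptive (long-term / LTP) excitation vector
    adaptive_gain: int    # quantized gain applied to the adaptive excitation vector
    fixed_index: int      # index of the fixed (algebraic / stochastic) excitation vector
    fixed_gain: int       # quantized gain applied to the fixed excitation vector

@dataclass
class CelpFrame:
    lp_coefficients: List[int]     # quantized short-term (LP / LPC) spectral parameters
    subframes: List[CelpSubframe]  # M subframes per frame (typically 2 to 6)
```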
- a known method inserts auxiliary information into a main information stream corresponding to a speech signal, the auxiliary information being inserted in the CELP vocoder which encodes the speech signal, in place of the index of the adaptive excitation vector and/or of the index of the fixed excitation vector.
- in that method, the auxiliary information bits are inserted in the vocoder of the transmitter in place of the bits normally encoding the corresponding index, and the value of the associated gain is set to zero in order to inform the vocoder of the receiver.
- a drawback is that the insertion of the auxiliary information stream is not discreet, in the sense that it suffices to notice the null value of the gain to know that the bits normally allocated to the coding of the associated index actually contain the auxiliary information. This is considered a disadvantage for implementing the method in a system in which the confidentiality of the transmissions is important.
- the document US 2001/038643 discloses a method of inserting a secondary information stream into a main information stream, wherein sub-bands of an audio signal corresponding to the main stream that can contain data of the secondary stream are determined. This selection of subbands is performed according to the characteristics of the audio signal in question, such as the signal-to-noise ratio in the subbands under consideration. Then, for a selected subband, the number of coding bits available for inserting data of the secondary stream is determined. Again, this determination makes use of audio signal characteristics such as the difference between the scale factor and the noise floor level in the subband. In the end, values associated with subbands are masked with data of the secondary information stream to be transmitted.
- the main object of the invention is to allow the discreet insertion of a secondary stream into a main stream corresponding to a speech stream.
- other objects of the invention are to maximize the rate of the insertable secondary stream, while best preserving the performance of the coding of the main stream with respect to attributes of the source (i.e. preserving the perceived auditory quality during synthesis of the speech stream).
- Another object of the invention is also to simultaneously preserve the performance of the coding of the secondary flow with respect to attributes of the source of the secondary flow, especially when it is also a speech flow.
- some or all of these objects are achieved, according to a first aspect of the invention, by a method of transmitting a secondary information stream between a transmitter and a receiver according to claim 1.
- the transmitter and the receiver, as well as the transmission, must be interpreted in their broadest sense.
- in an exemplary application to a radiocommunication system, the transmitter and the receiver are terminal equipment of the system, and the transmission is a radio transmission.
- the insertion is performed at a parametric vocoder transmitter that produces said main information stream, without changing the bit rate of the latter compared to what it would be without insertion.
- in other words, the secondary information stream is interpreted as a sequence of constraints on the sequence of values of certain parameters of the parametric coding model of the main information stream.
- compared with the insertion method known from the prior art, the method according to the invention has the advantage that nothing in the transmitted main information stream betrays the presence of the inserted secondary information stream.
- moreover, by limiting the insertion to certain frames only, and/or to certain bits within a frame, the intelligibility of the speech signal encoded in the main information stream is preserved, which is by no means the case with the aforementioned known insertion method.
- the frame mask can be variable. It is then generated according to a common parallel algorithm in the transmitter and in the receiver, in order to synchronize the coding and the decoding of the main information stream, respectively in the transmitter and in the receiver.
- the frame mask may advantageously define a subset of groups of consecutive frames in each of which bits of the secondary information stream are inserted, in order to take advantage of the sliding effect of the coding which results from the storage of frames in the parametric vocoder. This helps to preserve the fidelity of the main information stream to the speech signal.
- the number of frames of a group of consecutive frames is then substantially equal to the depth of storage of the frames in the parametric vocoder.
- the mask of bits may be such that bits of the secondary information stream are inserted into these frames by imposing a priority constraint on the bits belonging to the least sensitive bit class. This also helps to preserve the fidelity of the main information flow to the speech signal.
- the secondary information stream may be a speech data stream having a lower bit rate than that of the main information stream. This is the case when the secondary information stream comes from another vocoder whose bit rate is lower than that of the parametric vocoder.
- the secondary information flow can also be a transparent data stream.
- if the rate of the secondary information stream to be inserted is too high compared to the rate of the parametric vocoder, it may be necessary to remove bits from the secondary information stream, if this is compatible with the application. Conversely, if the secondary information stream rate is too low, it is possible to repeat certain bits or to introduce stuffing bits, as sketched below.
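- a minimal sketch of this rate adaptation, assuming the simplest possible policy (plain truncation and constant stuffing bits; a real system would respect the frame boundaries of the secondary codec):

```python
def fit_secondary_bits(bits, capacity, stuffing_bit=0):
    """Adapt the secondary-stream bits available for one frame to the insertion capacity."""
    if len(bits) > capacity:
        return bits[:capacity]  # drop excess bits, if the application allows it
    # otherwise pad with stuffing bits up to the capacity (repeating bits would be another option)
    return bits + [stuffing_bit] * (capacity - len(bits))
```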
- the secondary information stream is subjected to error-correcting coding before insertion into the main information stream. This makes it possible to overcome the fact that, in the case of parametric vocoders, certain bits of the frames of the main information stream are weakly protected, or even not protected at all, by the error-correcting coding (forming the channel coding) applied prior to transmission.
- bits of the secondary information stream are inserted by imposing values on bits that belong to excitation parameters of a filter of the source model of the parametric vocoder, for example parameters of adaptive excitation and / or fixed excitation parameters of the linear prediction filter of a CELP vocoder.
- bits of the secondary information stream may also be inserted into silence frames of the main information stream, instead of or in addition to insertion into speech frames.
- bits of the secondary information stream may be inserted by imposing constraints on bits that are not encrypted by the end-to-end encryption of the main information stream. This allows a receiving equipment to decode the secondary information stream after extraction, even though it does not have the ability to decrypt the main stream as such.
- the bits concerned may nevertheless undergo one or more encryption / decryption operations in another respect, for example link ciphers or radio interface ciphers.
- the insertion constraint may be a constraint of equality of the bits of the frame of the main information stream with the bits of the inserted secondary information stream.
- a second aspect of the invention relates to a parametric vocoder according to claim 13, adapted for carrying out the method according to the first aspect.
- a parametric vocoder includes insertion means for inserting a secondary information stream into a main information stream that is generated by the parametric vocoder from a speech signal.
- the vocoder includes means for extracting the secondary information stream from the main information stream.
- a third aspect of the invention further relates to a terminal equipment of a radio communication system comprising a parametric vocoder according to the second aspect.
- the figure 1 is a diagram illustrating the general principle of inserting a secondary data stream DS2 into a main data stream DS1 encoding a speech signal VS1.
- This insertion is carried out at a transmitter which, after multiplexing and channel coding, transmits the DS1 stream, and therefore the DS2 stream that it contains, to a remote receiver.
- a transmitter and such a receiver are for example mobile terminals of a public radio system such as GSM or UMTS, or a professional radio system such as TETRA or TETRAPOL.
- the stream DS1 is generated by a vocoder 10 from the speech signal VS1, which is produced by a speech source 1 such as the voice apparatus of an individual.
- the speech signal VS1 is digitized according to a linear PCM (pulse code modulation) encoding, and segmented into frames called speech frames.
- each frame is generally segmented by the vocoder 10 into a fixed number M of segments called subframes, in the time domain (CELP model) or in the frequency domain (MBE model, "Multi-Band Excitation").
- M is typically between 2 and 6, depending on the vocoder.
- Each frame comprises a determined number N of bits.
- the figure 2 illustrates a digitized speech signal segmented into successive F [i] frames, for i ranging from 0 to infinity.
- each frame F [i] can be segmented into M subframes SF [m], for m between 1 and M.
- D is the duration of a frame.
- the secondary data stream DS2 is for example generated by a codec 20, which receives a data stream to be coded from a source 2.
- the source 2 also emits a speech signal, the codec 20 then being a vocoder with a bit rate lower than that of the vocoder 10.
- in this case, the stream DS2 is also a stream of speech frames.
- the invention thus allows the discreet insertion of a secondary communication into a main communication.
- the codec 20, more specifically the vocoder 20, may be an MF-MELP type vocoder ("Multi-Frame Mixed Excitation Linear Prediction") at 1200/2400 bit/s, as described in NATO STANAG 4591.
- the stream DS2 may be subjected to an error-correcting coding, for example a CRC (of the English "Cyclic Redundancy Code”) or a convolutional coding, which forms a channel coding for transmission through the transmission channel.
- the vocoder 10 comprises an encoder 100 which implements a source model coding algorithm (or parametric model), for example of the CELP or MELP type.
- the parameters corresponding to the coding of a transmitter-side speech frame include, among others, excitation vectors which are subjected, on the receiver side, to a filter whose response models the speech.
- parametric coding algorithms use parameters calculated either directly as a function of the stream of incoming speech frames and of an internal state of the vocoder, or by iterations (on successive frames and/or subframes) optimizing a given criterion.
- the first parameters include the linear prediction (LP) parameters defining a short-term filter
- the second parameters include the adaptive excitation (LTP) parameters defining a long-term filter and the fixed excitation parameters. Each iteration corresponds to the coding of a sub-frame in a frame of the input stream.
- the adaptive excitation parameters and the fixed excitation parameters are selected by successive iterations to minimize the quadratic error between the synthesized speech signal and the original VS1 speech signal.
- this iterative selection is sometimes called "Codebook search” or “Analysis by Synthesis Search”, or "Error Minimization Loop” or “Closed Loop Pitch Analysis”.
- the adaptive excitation parameters and / or the fixed excitation parameters may each comprise, on the one hand, an index corresponding to a value of a vector in the adaptive dictionary (depending on the subframe) or in a fixed dictionary, respectively, and on the other hand a gain value associated with said vector.
- the parameters of at least one of the adaptive and fixed excitations directly define the excitation vector to be applied, that is to say without addressing a dictionary by an index.
- depending on the mode of definition of the excitation vectors, the constraints imposed by the bits of the stream DS2 apply either to the index giving the value of the excitation vector in the dictionary, or to the value of the excitation itself.
- the vocoder 10 receives, according to the invention, a stream FS of frame masks and/or a stream BS of bit masks.
- the stream FS is generated by a frame mask generator 3, from a bit stream received from a pseudo-random generator 5, which operates from a secret key Kf known to the transmitter and the receiver.
- a frame mask has the function of selecting, from among a certain number of frames of the stream of speech frames DS1, those frames which alone receive bits of the secondary data stream DS2.
- the generator 3 executes the following process. Let F[i] be the sequence of frames of the main stream DS1, let h be a numerical function with integer values, and let k be a determined integer, preferably substantially equal to the depth of storage of successive frames in the vocoder 10 (see further, number P, with reference to the diagram of figure 3); then the frames F[h(i)], F[h(i)+1], ..., F[h(i)+k] define what is called here a subset of groups of frames of the sequence of frames F[i].
- the frames undergoing the insertion constraint are frames belonging to a subset of groups of consecutive frames of the main stream DS1.
- the number k, which corresponds to the length in frames of a group of frames, is preferably equal to, or at least close to, the storage depth of the vocoder 10, as stated above.
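- purely by way of illustration, the frame-group selection F[h(i)], ..., F[h(i)+k] could be realised as follows (the block structure, the use of SHA-256 and the parameter values are assumptions of this sketch, not the construction claimed by the patent):

```python
import hashlib

def frame_selected(frame_index, key_kf, k=4, period=16):
    """Return True when the frame belongs to a selected group of k+1 consecutive frames.
    h is realised here as a keyed pseudo-random group start inside blocks of 'period'
    frames; transmitter and receiver compute the same value from the shared key Kf."""
    block = frame_index // period
    digest = hashlib.sha256(key_kf + block.to_bytes(8, "big")).digest()
    start = digest[0] % (period - k)       # h(block): start of the selected group in this block
    offset = frame_index % period
    return start <= offset <= start + k    # frames F[h], ..., F[h+k] of the block are selected

# e.g. frame_selected(137, b"shared secret Kf") evaluates identically at both ends
```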
- the stream BS is generated by a bit mask generator 4, from a bit stream received from a pseudo-random generator 6, which operates from a secret key Kb, also known from the transmitter and receiver.
- a bit mask has the function of selecting, from among the N bits of a frame of the speech frame stream DS1 that has been selected by virtue of the frame mask associated with the current frame F[i], those bits which alone are constrained by bits of the secondary data stream DS2.
- the generator 4 executes the following process. It produces a flow of a fixed number Smax of bits, where Smax denotes the maximum number of bits of a current frame F[i] of the main stream DS1 that can be constrained by bits of the secondary stream DS2. A determined number S of these Smax bits, where S is less than or equal to Smax (S ≤ Smax), have the logic value 1, the others having the logic value 0. These Smax bits are inserted in an N-bit string at predefined and fixed positions provided in the software of the vocoder 10, so as to form a mask of bits over the frame. This mask, called a bit mask, thus comprises S bits equal to 1. In one example, when a bit of the bit mask is equal to 1, it indicates a position for inserting a bit of the secondary stream DS2 into the current frame F[i] of the main stream DS1.
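- again as an illustrative sketch only (the 244-bit frame length, the use of SHA-256 to seed the generator and the representation of the candidate positions are assumptions of this example):

```python
import hashlib
import random

N_FRAME_BITS = 244  # assumed frame length, e.g. the 244-bit frame of the GSM EFR codec

def bit_mask(frame_index, key_kb, candidate_positions, s):
    """Build an N-bit mask with exactly s ones, placed pseudo-randomly among the Smax
    predefined candidate positions, deterministically from the shared key Kb. A 1 marks
    a bit of the current frame of DS1 that will carry one bit of the secondary stream DS2."""
    seed = int.from_bytes(hashlib.sha256(key_kb + frame_index.to_bytes(8, "big")).digest(), "big")
    rng = random.Random(seed)                    # same seed at both ends -> same mask
    chosen = rng.sample(candidate_positions, s)  # s positions among the Smax candidates
    mask = [0] * N_FRAME_BITS
    for position in chosen:
        mask[position] = 1
    return mask
```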
- the number Smax is fixed by making a compromise between the maximum number of bits of the secondary stream DS2 that can be inserted into a frame of the main stream DS1, on the one hand, and the concern to preserve the quality of the coding of the speech signal VS1 in the main stream DS1, on the other hand. The number Smax being fixed, the number S depends on the bit rate of the secondary stream DS2.
- the S / N ratio defines what can be called the insertion rate of the secondary stream DS2 in the main stream DS1 for the current frame F [i], the ratio Smax / N defining the maximum insertion rate.
- a mean channel bit rate of 1215 bit/s is obtained for the insertion of the secondary stream.
- such a bit rate allows the insertion of a secondary data stream generated by an MF-MELP type codec at 1200 bit/s (requiring 81 bits per 67.5 ms), as described in NATO STANAG 4591.
- the insertion rate obtained is thus sufficient to discreetly transmit a secondary stream which is itself a speech stream, generated by a secondary vocoder 20 with a rate lower than that of the main vocoder 10.
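- the figures quoted above can be checked by simple arithmetic (no codec involved; the numbers are those of the preceding paragraphs):

```python
melp_bits_per_frame = 81        # MF-MELP at 1200 bit/s: 81 bits every 67.5 ms
melp_frame_duration = 0.0675    # seconds
required_rate = melp_bits_per_frame / melp_frame_duration   # ≈ 1200 bit/s
insertion_channel = 1215        # mean insertion rate quoted for the secondary channel, bit/s
assert required_rate <= insertion_channel   # the MF-MELP stream fits, with a ~15 bit/s margin
```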
- an example of an insertion constraint consists in replacing (i.e. overwriting) the bits of the main stream DS1, normally generated according to the standard coding algorithm implemented by the vocoder 10 from the speech signal VS1, by bits of the secondary stream DS2.
- the constraints applied to the speech coding parameters of the main stream are equality constraints with the bits of the secondary stream, combined with selection constraints realized by a logical AND operation applying a bit mask to the bits forming the main stream.
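- a minimal sketch of this equality-plus-selection constraint on one frame (bit lists and the "number consumed" bookkeeping are conventions of this example):

```python
def constrain_frame(frame_bits, mask_bits, secondary_bits):
    """Overwrite the masked positions of a main-stream frame with secondary-stream bits
    (equality constraint, selected by a logical AND with the bit mask). Returns the
    constrained frame and the number of secondary bits consumed."""
    out = list(frame_bits)
    consumed = 0
    for position, selected in enumerate(mask_bits):
        if selected and consumed < len(secondary_bits):
            out[position] = secondary_bits[consumed]   # equality constraint on this bit
            consumed += 1
    return out, consumed
```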
- the set of indices of excitations in a dictionary generally has a distribution of bits at 0 and at 1 completely neutral with respect to a statistical analysis of occurrences. It is usually possible to encrypt the DS2 secondary stream in a pseudo-random form before insertion, without changing the statistical distribution of the 0s and 1s in the modified bits of the main stream. Assuming a speech coding model leading to a coded stream of which some subframes would have a correlation to 0 or 1, the aforementioned pseudo-random generator or a secondary stream encryption algorithm will also have this bias.
- the number of bits constrained during coding varies from one frame to another according to a known evolution law of the transmitter and the receiver, which are supposed to be synchronized.
- the synchronization of the transmitter and the receiver with regard to the application of the frame masks and / or bit masks results from the general synchronization between these two devices. Typically, this synchronization is ensured by the labeling of the frames using values generated by a frame counter.
- the general synchronization between the transmitter and the receiver may also come, entirely or in addition, from synchronization elements (particular bit patterns) inserted into the main stream DS1.
- the encoder 100 of the transmitter and the decoder of the receiver share the same initial information making it possible to determine the subsequence of the groups of frames and subframes where the insertion of the secondary stream takes place.
- This information may comprise an initialization vector of the pseudo-random generators 5 and 6. It may be fixed. It may also depend, for example, on the average rate imposed by the secondary stream, or may depend on unconstrained parameters of the main codec 10 calculated during the coding of the main stream.
- the encoder 100 comprises a module 11, which is a hardware and/or software module for synthesis of the linear prediction parameters, receiving as input the speech signal VS1 and outputting LP information corresponding to the linear prediction parameters (short-term linear prediction filter coefficients).
- the LP information is passed to the input of a logical unit 12, for example a multiplexer, which is controlled by the frame mask stream FS and the bit mask stream BS.
- the unit 12 outputs information LP' corresponding to the LP information, at least some bits of which, for at least some frames, have been altered by applying the constraints resulting from the secondary stream DS2 via the frame mask and the bit mask associated with the current frame.
- a memorization of the information LP', with a depth of memorization corresponding to a predetermined number P of successive frames, may be provided for the module 11.
- the encoder 100 also comprises a module 21 which is a hardware and / or software module for synthesizing the adaptive excitation parameters, receiving as input the information LP 'and outputting an LTP information corresponding to the adaptive excitation parameters (defining a first quantization vector and an associated gain for the short-term synthesis filter).
- the LTP information is passed to the input of a logic unit 22, for example a multiplexer, which is controlled by the frame mask stream FS and the bit mask stream BS.
- the unit 22 outputs LTP 'information corresponding to the LTP information of which at least some bits for certain frames and / or for at least some subframes have been altered by applying the constraints resulting from the secondary stream DS2 via the frame mask and the bit mask associated with the current frame.
- a storage of the information LTP', with a storage depth corresponding to a given number Q of successive subframes of the current frame (Q ≤ M-1), may be provided for the module 21.
- the encoder 100 finally comprises a module 31 which is a hardware and / or software module for the synthesis of the fixed excitation parameters, receiving as input the information LTP 'and outputting a FIX information corresponding to the fixed excitation parameters (defining a second quantization vector and an associated gain for the short-term synthesis filter).
- the FIX information is passed to the input of a logical unit 32, for example a multiplexer, which is controlled by the frame mask stream FS and the bit mask stream BS.
- the unit 32 outputs information FIX' corresponding to the FIX information, at least some bits of which, for certain frames and/or for at least some subframes, have been altered by applying the constraints resulting from the secondary stream DS2 via the frame mask and the bit mask associated with the current frame.
- a storage of the information FIX', with a storage depth corresponding for example to a determined number W of successive subframes of the current frame (W ≤ M-1), may be provided for the module 21.
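- read as a whole, the constrained coding chain of figure 3 can be summarised by the sketch below; the three "analyse_*" callables stand for the vocoder's own analysis routines (placeholders, not defined here), and splitting the bit mask into one sub-mask per parameter family is an assumption of this sketch:

```python
def encode_frame(pcm_frame, selected, masks, ds2_bits, analyse_lp, analyse_ltp, analyse_fixed):
    """Sketch of the encoder 100: modules 11/21/31 compute LP/LTP/FIX, units 12/22/32
    apply the constraints; each later module works on the already-constrained outputs."""
    def constrain(bits, sub_mask, payload):
        out, used = list(bits), 0
        for position, sel in enumerate(sub_mask):
            if selected and sel and used < len(payload):
                out[position] = payload[used]          # equality constraint on the selected bit
                used += 1
        return out, payload[used:]

    lp = analyse_lp(pcm_frame)                              # module 11
    lp, ds2_bits = constrain(lp, masks["lp"], ds2_bits)     # unit 12 -> LP'
    ltp = analyse_ltp(pcm_frame, lp)                        # module 21 works on LP'
    ltp, ds2_bits = constrain(ltp, masks["ltp"], ds2_bits)  # unit 22 -> LTP'
    fix = analyse_fixed(pcm_frame, lp, ltp)                 # module 31 works on LP' and LTP'
    fix, ds2_bits = constrain(fix, masks["fix"], ds2_bits)  # unit 32 -> FIX'
    return lp, ltp, fix, ds2_bits                           # leftover DS2 bits carry over
```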
- on the receiver side, the recovery of the information coded by the bits of this secondary stream requires synchronization of the receiving equipment with the sending equipment, means for extracting the secondary stream DS2 from the main stream DS1, and, where appropriate, a codec identical to the codec 20 of the sending equipment.
- figure 4 schematically shows the means of a vocoder 10a of a receiving device for processing the secondary stream transmitted by the method according to the invention.
- the vocoder 10a, if necessary after demultiplexing and channel decoding, receives the main stream DS1 at its input and delivers a speech signal VS1' at its output.
- the signal VS1' is less faithful to the source speech signal VS1 (figure 3) than it would be in the absence of the insertion method according to the invention. This reflects the loss of coding quality on the transmitter side, due to the external constraints applied to the vocoder 10 of the transmitting equipment.
- the receiving equipment may also include means for reproducing the speech signal VS1 ', for example a loudspeaker or the like.
- the known transmission protocols provide for a general synchronization of the receiving equipment with the transmitting equipment.
- the implementation of the invention does not require any particular means in this respect.
- for the extraction of the secondary stream, the vocoder 10a comprises a frame mask generator 3a and a bit mask generator 4a, respectively associated with a pseudo-random generator 5a and a pseudo-random generator 6a, which are identical to, and arranged in the same way as, the means 3, 4, 5 and 6 respectively of the vocoder 10 of the sending equipment (figure 3).
- the generators 5a and 6a of the receiver receive the same secret keys, respectively Kf and Kb, as the generators 5 and 6 of the vocoder 10 of the transmitting equipment. These keys are stored in an ad hoc memory of the equipment.
- the generators 3a and 4a respectively generate a frame mask stream FSa and a stream of bit masks BSa, which are supplied at the input of a decoder 100a of the vocoder 10a.
- the extraction of the bits of the secondary stream DS2 is done by synchronous application of the frame masks and bit masks at the input of the decoder 100a (for example via logical AND operations), without affecting the decoding of the main stream DS1 by the latter.
- the stream DS1 is provided at the input of the decoder 100a via a logic unit 7a, which extracts the secondary information stream DS2 from the main information stream DS1 under the control of the frame mask stream FSa and the bit mask stream BSa.
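- on the receiver side, the corresponding extraction step (unit 7a) can be sketched as follows, under the same bit-list conventions as the sketches above:

```python
def extract_secondary_bits(frame_bits, frame_is_selected, mask_bits):
    """Read back the secondary-stream bits at the positions marked by the synchronised
    bit mask; the frame itself is left untouched for normal decoding by the decoder 100a."""
    if not frame_is_selected:
        return []
    return [frame_bits[position] for position, selected in enumerate(mask_bits) if selected]
```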
- the receiving equipment may also include a secondary codec identical to the codec 20 of the transmitting equipment for decoding the secondary stream DS2.
- if this stream is a speech stream, the secondary codec generates a speech signal that can be output via a loudspeaker or the like.
- the fluctuation of the bit transmission rate of the secondary stream DS2 does not pose any particular problem on the receiver side, since the secondary stream DS2 is supplied as input to a variable rate secondary codec, as is the case with all the vocoders of the market.
- a codec comprises an input buffer ("Input Buffer" in English) in which DS2 stream data are stored for decoding. Just make sure that the input buffer is never empty.
- the appropriate insertion rate is determined taking into account, in particular, the bit rates of the encoder 100 and of the secondary vocoder 20, and the objective of preserving the fidelity of the main stream DS1 to the speech signal VS1.
- if the secondary stream is a speech stream, and in order to provide the secondary decoder with a regular stream of frames, it is optionally possible to buffer the sequences and not start the decoding immediately.
- if the secondary stream is a transparent data stream, it may be sent to an encryption module or to a "Text-to-Speech" type transcoding and synthesis module.
- constraints are imposed, during the coding, on the value of one, several or all of the bits of the frame which are associated with an excitation vector of determined type, adaptive or fixed, before performing the iterations making it possible to calculate the parameters which depend on said excitation vector by virtue of the memorizations made in the vocoder.
- These constrained value bits are then the information of the secondary stream transported by the frame and constitute the channel of the secondary information stream DS2.
- the secondary stream is inserted by imposing values on bits forming the parameters of the adaptive or fixed excitation vectors. This may possibly be extended by applying constraints simultaneously to the excitation vectors of the other type, respectively fixed or adaptive.
- the bit mask may advantageously coincide with a set of unencrypted bits of a frame. This allows the receiving equipment acting as gateway to extract the secondary stream inserted in the main stream without having the means to decrypt the main stream.
- this embodiment of the method is characterized in that the secondary information stream is inserted by imposing constraints on unencrypted bits of parameters of the speech pattern of the main stream.
- This mode of implementation is illustrated by an example concerning an EFR vocoder (see above) used as the main codec.
- one chooses to use bits among the unprotected bits of each frame as a channel for the secondary stream, overwriting their value as calculated by the source coding algorithm of the main stream, by applying a bit mask to the 78 unprotected bits of each frame.
- these 78 unprotected bits are identified in Table 6 (entitled "Ordering of enhanced full rate speech parameters for the channel encoder") of ETSI EN 300 909 V8.5.1 (GSM 05.03, "Channel coding"), and relate to a subset of bits describing the fixed excitation vectors.
- the inserted stream may be that of a secondary codec, for example the 1200/2400 bit/s MELP coder described in NATO STANAG 4591, requiring 81 bits per 67.5 ms at 1200 bit/s (respectively 54 bits per 22.5 ms at 2400 bit/s), embedded in its own error-correcting coding (for example a rate-2/3 FEC) which protects 100% of the bits at 1200 bit/s (respectively 50% of the bits at 2400 bit/s), and/or embedded in the NATO-defined Future Narrow Band Digital Terminal (FNBDT) security interoperability frames, or a lighter type of security protocol.
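- orders of magnitude implied by these figures (assuming a 20 ms EFR frame and that all 78 unprotected bits of every frame could be used, which is an upper bound rather than a realistic operating point):

```python
max_insertion_rate = 78 / 0.020            # 3900 bit/s if every 20 ms frame is fully used
melp_coded_rate = 1200 / (2 / 3)           # MELP at 1200 bit/s with rate-2/3 FEC -> 1800 bit/s
assert melp_coded_rate <= max_insertion_rate   # the FEC-protected secondary stream fits with margin
```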
- the constraint consists in imposing a determined excitation value, taken from the dictionary.
- the dictionary is partitioned into several sub-dictionaries, and the constraint consists in imposing one of the sub-dictionaries.
- Another variant comprises the combination of the two types of constraint above.
- the secondary stream defines a differential coding of the excitation vector indices, for example fixed excitation vectors, in the subsequence of successive frames of the main stream.
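- one possible reading of this differential variant (the modular-difference convention below is an assumption; the description does not fix the exact arithmetic):

```python
def embed_differential(previous_index, payload_value, codebook_size):
    """Choose the constrained fixed-codebook index so that its difference with the
    previous index, modulo the codebook size, equals the secondary-stream value."""
    return (previous_index + payload_value) % codebook_size

def recover_differential(previous_index, current_index, codebook_size):
    """Receiver side: the secondary-stream value is the modular difference of indices."""
    return (current_index - previous_index) % codebook_size
```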
- the constrained bits may be the least significant bits of the fixed excitations (i.e. non-adaptive excitations) for each speech frame, and possibly for each subframe defined in the speech frame within the meaning of the coding algorithm of the vocoder 10.
- the number and position of the constrained bits are identified for each successive frame according to an algorithm for calculating a mask and a known secret element of the transmitter and the receiver, to increase the chances of non-detection of the existence of the secondary stream by a third party.
- another variant, applicable for example to a coder such as that of ISO/IEC 14496-3 Subpart 3, in which some fixed excitations of a frame are chosen from previous calculations and other fixed excitations of the same frame are calculated by analysis-by-synthesis on a dictionary (see the ISO/IEC 14496-3 specification, §7.9.3.4 "Multi-Pulse Excitation for the bandwidth extension tool"), consists in imposing the constraint on the dictionary choice of the first fixed excitation and then using the analysis-by-synthesis iterations on the second fixed excitation to catch up the error introduced by the constraint on the first fixed excitation.
- the subset of the frames of the main stream that are concerned by the insertion of the secondary stream includes only the frames that have sufficient energy and speech in the vocoder sense.
- for vocoders which define several levels of voicing, for example MELP vocoders or the HVXC (Harmonic Vector eXcitation Coding) vocoder, the subsequence applies only to the partially or totally unvoiced segments of the frames.
- the sequence of modified fixed excitations may be statistically atypical for human speech or possibly atypical for the speaker recognition method, depending on the constraints applied and the desired fidelity objective.
- a parameter processing comprising a smoothing of the gains of the fixed excitations, combined with a treatment of the isolated pulses of the excitation vectors and followed by post-filtering after speech synthesis, can be applied at decoding.
- the subsequence of frames on which the constraints are applied can be defined according to preliminary statistical analyses of the values of consecutive parameters of the vocoder speech model, for example by taking advantage of the texture of the speech parameters, defined by an inertia, entropy or energy derived from the probability of parameter-value sequences, for example over eight consecutive frames representative of the duration of a phoneme.
- the performance of the synthesis of the main stream DS1, that is to say the fidelity to the signal VS1 is inversely proportional to the relative flow rate of the secondary stream DS2.
- a subjective fidelity performance with respect to the source 1 of the speech signal VS1 can nevertheless be reached when the proposed method keeps invariant some subjective attributes (for example certain psychoacoustic criteria) of the source 1. It can be measured by statistical measurements ("Mean Opinion Score", or MOS) according to a standardized scale (see ITU-T Recommendation P.862 "Perceptual evaluation of speech quality - PESQ").
- the degradation of the subjective quality of the DS1 speech stream from the vocoder 10, which is due to the insertion of the secondary stream DS2, is assumed to be acceptable to justify the application of the proposed method.
- This is particularly the case when the secondary stream is also a speech stream and the auditory content of the main stream is much less important than the content of the secondary stream for the legitimate listener.
- the psycho-acoustic perception of the possible presence of the secondary stream when listening to the decoded and restored main stream does not help to locate the secondary stream within the main stream, and therefore does not provide formal proof of its existence.
- an implementation mode consists in applying the constraints on subframes other than the subframes on which the long-term analysis windows of the frame are concentrated, namely, for example, the second and the fourth subframes for the 12.2 kbit/s coding mode of the AMR vocoder discussed above (see 3GPP specification TS 26.090 V5.0.0, §5.2.1 "Windowing and auto-correlation computation").
- this avoids disturbing many voiced segments, which generally carry the majority of the speaker identification characteristics.
- the error between the signal of the main stream and the signal synthesized by the short-term filter with the contribution of the constrained adaptive vector is compensated by the choice of the fixed excitation vector, which tries to catch up the residual error (for example the residual quadratic error) of the long-term prediction on the same subframe, as well as by the excitation vectors of the successive subframes.
- the constrained excitation vectors encode the secondary stream as an adaptive residue above the response of the short-term synthesis filter of the main stream, corrected by the fixed residue.
- an implementation mode leads to an interest in the least significant bits of the harmonic amplitude parameters of the frame segments, or of the amplitude parameters of the samples of the spectral envelope.
- the excitation parameters are the fundamental frequency as well as the voiced / unvoiced decision for each frequency band.
- the main stream DS1 also contains silence frames, which are frames coded by the vocoder 10 at a lower bit rate and transmitted less frequently than the speech frames, in order to synthesize the periods of silence contained in the speech signal VS1. These silence frames are used to synthesize what is called comfort noise.
- one embodiment of the method may provide, alternatively or in addition, for the insertion of the secondary stream via numerical constraints on the values of the parameters describing the comfort noise to be generated for the main stream.
- This mode of implementation is illustrated by an example concerning an EFR or AMR codec (see above) used as the main codec.
- the frames carrying comfort noise are called SID frames (see, for example, the 3GPP specification TS 26.092 "Mandatory Speech Codec speech processing functions; AMR Speech Codec; Comfort Noise Aspects"). More specifically, the frames considered here are SID_UPDATE frames, which contain 35 bits of comfort noise parameters and a 7-bit error correction code.
- in a GSM or UMTS system, it is the source that controls the transmission of the silence frames, that is to say the codec of the transmitter (subject to interactions with the voice activity detection and discontinuous transmission processes, particularly on the downlink from the relay to the mobile terminal). It is therefore possible to insert the secondary stream according to a method similar to that applicable to a frame containing sufficient speech energy (speech frame).
- the frequency of the silence frames is controlled by the source or the relay, and corresponds either to a silence frame every 20 ms, or to a silence frame every 160 ms, or to a silence frame every 480 ms for the EFR codec of the GSM system. This determines the maximum rate available for the secondary stream in this variant of the method.
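- the maximum secondary rate through comfort-noise frames follows directly from the 35-bit field and the three SID periodicities quoted above (simple arithmetic, rounded to the nearest bit/s):

```python
sid_payload_bits = 35
for sid_period_s in (0.020, 0.160, 0.480):
    print(round(sid_payload_bits / sid_period_s), "bit/s")   # -> 1750, 219, 73 bit/s
```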
- it is possible to use the duplex transmission channel to send silence frames when the speaker is the second participant in the communication, or during silences in a first conversation, that is to say between groups of phonemes emitted in the main stream.
- the 3GPP specification TS 26.090 specifies that the size of the coding field of the comfort noise of the EFR codec, namely 35 bits per silence frame, is identical to the size of the fixed excitation parameter field for this same codec. This means that the same constraints can be applied and a permanent minimum insertion rate obtained, using all the frames of the main stream regardless of their nature, speech or silence.
Abstract
Description
La présente invention se rapporte de façon générale au domaine du codage de la parole, et en particulier à un procédé d'insertion d'un flux d'information à l'intérieur d'un flux de données de parole, le flux d'information inséré pouvant être un flux de données de parole à plus faible débit ou un flux de données transparentes.The present invention relates generally to the field of speech coding, and in particular to a method of inserting an information flow within a speech data stream, the information flow. inserted may be a stream of lower speed speech data or a transparent data stream.
L'invention trouve des applications, en particulier, dans les systèmes de radiocommunication mobile publics ou professionnels (systèmes PMR, de l'anglais "Professional Mobile Radiocommunication").The invention finds applications, in particular, in public or professional mobile radiocommunication systems (PMR systems, "Professional Mobile Radiocommunication").
On appelle signal de parole un signal acoustique émis par un appareil vocal humain.A speech signal is an acoustic signal emitted by a human voice apparatus.
On appelle codec une unité matérielle et/ou logicielle de codage et de décodage d'un flux numérique. Sa fonction de codage permet de transcoder un flux numérique d'échantillons quantifiés dans le domaine temporel d'un signal source (par exemple un signal de parole) en un flux numérique comprimé. Sa fonction de décodage permet d'effectuer une opération pseudo-inverse dans l'objectif de restituer des attributs représentatifs du signal source, par exemple des attributs perceptibles dans un récepteur tel que l'oreille humaine.A codec is a hardware and / or software unit for coding and decoding a digital stream. Its coding function makes it possible to transcode a digital stream of quantized samples in the time domain of a source signal (for example a speech signal) into a compressed digital stream. Its decoding function makes it possible to perform a pseudo-inverse operation in order to restore attributes representative of the source signal, for example perceptible attributes in a receiver such as the human ear.
Un flux de données de parole est un flux de données généré par un codec de parole, à partir du codage d'un signal de parole. Un flux de données transparentes est une suite numérique binaire dont le type de contenu est non spécifié, qu'il soit effectivement un flux de données informatiques ou un flux de données de parole. Les données sont dites transparentes en ce sens que, d'un point de vue externe, tous les bits ont une égale importance vis-à-vis, par exemple de la correction des erreurs de transmission en sorte qu'un codage correcteur d'erreurs doit donc être uniforme sur l'ensemble des bits. A l'inverse, si le flux est un flux de données de parole, certains bits sont plus importants à protéger que d'autres.A speech data stream is a data stream generated by a speech codec, from the encoding of a speech signal. A transparent data stream is a binary digital sequence whose content type is unspecified, whether it is a computer data stream or a speech data stream. The data are said to be transparent in that, from an external point of view, all the bits have equal importance vis-à-vis, for example of the correction of the transmission errors so that an error-correcting coding must therefore be uniform on all the bits. Conversely, if the stream is a speech data stream, some bits are more important to protect than others.
Un codec de parole, aussi appelé vocodeur (en anglais "Vocoder" "Speech Codec" ou "Voice Codec") est un codec spécialisé qui est adapté au codage d'un signal de parole quantifié et au décodage d'un flux de trames de paroles. En particulier, il présente pour sa fonction codage une sensibilité qui dépend des caractéristiques de la parole du locuteur et un bas débit binaire associé à une bande de fréquences plus limitée que la bande de fréquences audio générale (20 Hz-20 kHz).A speech codec, also called vocoder (in English "Vocoder""SpeechCodec" or "Voice Codec") is a specialized codec which is adapted to the encoding of a quantized speech signal and to the decoding of a stream of frames. lyrics. In particular, it presents for its coding function a sensitivity which depends on the characteristics of the speech of the speaker and a low bit rate associated with a more limited frequency band than the general audio frequency band (20 Hz-20 kHz).
Il existe plusieurs familles de techniques de codage de la parole, notamment des techniques de codage de la forme d'onde du signal de parole (par exemple le codage ITU-T G.711 MIC loi A/mu), des techniques de codage à modèle de source (le plus connu étant le codage CELP, de l'anglais "Code-Excited Linear Prediction"), des codages perceptuels, et des techniques hybrides fondées sur la combinaison de techniques appartenant à au moins deux des familles ci-dessus.There are several families of speech coding techniques, including coding techniques of the speech signal waveform (eg ITU-T G.711 MIC A / mu law coding), coding techniques for speech coding. source model (the best known being the CELP coding, Code-Excited Linear Prediction), perceptual encodings, and hybrid techniques based on the combination of techniques belonging to at least two of the above families.
L'invention vise l'application à des techniques de codage "à modèle de source". Ces techniques sont aussi appelées techniques de codage paramétrique, car elles sont basées sur la représentation de paramètres d'excitation de la source de parole et/ou de paramètres décrivant l'enveloppe spectrale du signal émis par le locuteur (par exemple selon un modèle de codage par prédiction linéaire exploitant la corrélation entre les valeurs consécutives des paramètres associés à un filtre de synthèse, ou encore selon un modèle cepstral) et/ou de paramètres acoustiques dépendant de la source, par exemple l'amplitude et la fréquence centrale fondamentale perçue ("Pitch" en anglais), la période ("Pitch period" en anglais) et l'amplitude des pics d'énergie des premières harmoniques d'une fréquence de pitch à différents intervalles, son degré de voisement ("voicing rate" en anglais), sa mélodie et ses enchaînements.The invention is directed to the application of "source model" coding techniques. These techniques are also called parametric coding techniques, because they are based on the representation of excitation parameters of the speech source and / or parameters describing the spectral envelope of the signal emitted by the speaker (for example according to a model of linear prediction coding exploiting the correlation between the consecutive values of the parameters associated with a synthesis filter, or according to a cepstral model) and / or source-dependent acoustic parameters, for example the amplitude and the perceived fundamental central frequency ( "Pitch" in English), the period ("Pitch period" in English) and the amplitude of the energy peaks of the first harmonics of a pitch frequency at different intervals, its degree of voicing ("voicing rate" in English). ), its melody and its sequences.
On appelle vocodeur paramétrique un vocodeur mettant en oeuvre un codage numérique de la parole utilisant un modèle paramétrique de la source de parole. En pratique, un tel vocodeur associe plusieurs paramètres à chaque trame du flux de parole. Premièrement des paramètres spectraux de prédiction linéaire aussi appelés, par exemple, coefficients LP (de l'anglais "Linear Prediction") ou coefficients LPC (de l'anglais "Linear Prediction Coding"), qui définissent le filtre de prédiction linéaire du vocodeur (filtre à court terme). Deuxièmement des paramètres d'excitation adaptative associés à un (ou plusieurs) vecteur(s) d'excitation adaptative, aussi appelés paramètres LTP (de l'anglais "Long Term Predictor") ou encore coefficients de prédiction adaptative, qui définissent un filtre à long terme sous la forme d'un premier vecteur d'excitation et d'un gain associé à appliquer en entrée du filtre de synthèse.. Et, troisièmement, des paramètres d'excitation fixe associés à (ou plusieurs) vecteur(s) d'excitation fixe, aussi appelés paramètres algébriques ou paramètres stochastiques qui définissent un second vecteur d'excitation et un gain associé à appliquer en entrée du filtre de synthèse.A vocoder is called a vocoder that implements digital speech coding using a parametric model of the speech source. In practice, such a vocoder associates several parameters to each frame of the speech stream. First linear prediction spectral parameters also called, for example, LP coefficients (Linear Prediction) or Linear Prediction Coding (LPC) coefficients, which define the vocoder linear prediction filter ( short-term filter). Secondly, adaptive excitation parameters associated with one or more adaptive excitation vectors, also known as LTP (Long Term Predictor) parameters or adaptive prediction coefficients, which define a long term in the form of a first excitation vector and a gain associated with applying the input of the synthesis filter. And, thirdly, fixed excitation parameters associated with (or more) fixed excitation vector (s), also called algebraic parameters or stochastic parameters which define a second excitation vector and an associated gain to be applied at the input of the synthesis filter.
Du document
Selon un inconvénient, l'insertion d'un flux d'information auxiliaire dans le flux n'est pas discrète, en ce sens qu'il suffit de constater la valeur nulle du gain pour savoir que les bits normalement alloués au codage de l'index associé contiennent en fait l'information auxiliaire. Ceci est considéré comme un inconvénient pour la mise en oeuvre de la méthode dans un système dans lequel la confidentialité des transmissions est importante.According to a drawback, the insertion of an auxiliary information stream into the stream is not discrete, in that it is sufficient to note the null value of the gain to know that the bits normally allocated to the coding of the associated index actually contain the auxiliary information. This is considered a disadvantage for the implementation of the method in a system in which the confidentiality of the transmissions is important.
A further method is described in another prior-art document.
The main object of the invention is to allow the discreet insertion of a secondary stream into a main stream corresponding to a speech stream. Other objects of the invention are to maximise the bit rate of the secondary stream that can be inserted, while best preserving the performance of the coding of the main stream with respect to attributes of the source (i.e. preserving the quality perceived by the listener when the speech stream is synthesised). A further object of the invention is to simultaneously preserve the performance of the coding of the secondary stream with respect to attributes of the source of the secondary stream, in particular when the latter is also a speech stream.
Some or all of these objects are achieved, according to a first aspect of the invention, by a method of transmitting a secondary information stream between a transmitter and a receiver according to claim 1.
The transmitter and the receiver, as well as the transmission, must be interpreted in their broadest sense. In an example of application to a radiocommunication system, the transmitter and the receiver are terminal equipment of the system, and the transmission is a radio transmission.
The insertion is performed in a parametric vocoder of the transmitter which produces said main information stream, without modifying the bit rate of the latter compared with what it would be without insertion. In other words, the secondary information stream is interpreted as a sequence of constraints on the sequence of values of certain parameters of the parametric coding model of the main information stream. Compared with the insertion method known in the prior art, the method according to the invention has the advantage that nothing in the transmitted main information stream betrays the presence of the inserted secondary information stream. Moreover, by limiting the insertion to certain frames and/or to certain bits within a frame only, the intelligibility of the speech signal coded in the main information stream is preserved, which is not at all the case with the aforementioned known insertion method.
In order to reinforce the discreetness of the insertion, and therefore the robustness against attempts to intercept the transmission, the frame mask may be variable. It is then generated according to a common algorithm, in parallel in the transmitter and in the receiver, in order to keep the coding and the decoding of the main information stream synchronised in the transmitter and in the receiver respectively.
The frame mask may advantageously define a sub-sequence of groups of consecutive frames, in each of which bits of the secondary information stream are inserted, so as to take advantage of the sliding effect of the coding that results from the storage of frames in the parametric vocoder. This helps to preserve the fidelity of the main information stream to the speech signal.
Preferably, the length, in number of frames, of a group of consecutive frames is then substantially equal to the frame storage depth of the parametric vocoder.
When the source model of the parametric vocoder provides, for at least some of the frames of the main information stream, different classes of bits according to their sensitivity with respect to the quality of the speech coding, the bit mask may be such that bits of the secondary information stream are inserted into these frames by imposing the constraint in priority on the bits belonging to the least sensitive bit class. This also helps to preserve the fidelity of the main information stream to the speech signal.
The secondary information stream may be a speech data stream having a lower bit rate than the main information stream. This is the case when the secondary information stream comes from another vocoder whose bit rate is lower than that of the parametric vocoder.
Of course, the secondary information stream may also be a transparent data stream.
When the bit rate of the secondary information stream to be inserted is too high compared with the bit rate of the parametric vocoder, it may be necessary to remove bits from the secondary information stream, if this is compatible with the application. Conversely, if the bit rate of the secondary information stream is too low, certain bits can be repeated or stuffing bits can be introduced.
The secondary information stream is subjected to error-correcting coding before insertion into the main information stream. This makes it possible to compensate for the fact that, in the context of parametric vocoders, certain bits of the frames of the main information stream are weakly protected, or not protected at all, by error-correcting coding (forming channel coding) before transmission.
In one possible embodiment, bits of the secondary information stream are inserted by imposing values on bits belonging to excitation parameters of a filter of the source model of the parametric vocoder, for example the adaptive excitation parameters and/or the fixed excitation parameters of the linear prediction filter of a CELP vocoder. Not imposing any constraint on the bits of the linear prediction parameters preserves the intelligibility of the main information stream. For the same reason, it is preferred to impose constraints on the bits forming the adaptive excitation parameters rather than on those forming the fixed excitation parameters.
In one embodiment, bits of the secondary information stream may also be inserted into silence frames of the main information stream, instead of or in addition to insertion into speech frames.
In another embodiment, bits of the secondary information stream may be inserted by imposing constraints on bits that are not encrypted under an end-to-end encryption of the main information stream. This allows receiving equipment to decode the secondary information stream after extraction, even though it does not have the corresponding decryption capability. Of course, the bits concerned may nevertheless undergo one or more encryption/decryption operations for other purposes, for example link or radio-interface encryption.
For example, the insertion constraint may be a constraint of equality between the bits of the frame of the main information stream and the inserted bits of the secondary information stream.
A second aspect of the invention relates to a parametric vocoder according to claim 13, adapted for implementing the method according to the first aspect. As regards its coding function, such a parametric vocoder comprises insertion means for inserting a secondary information stream into a main information stream which is generated by the parametric vocoder from a speech signal.
For its decoding function, the vocoder comprises means for extracting the secondary information stream from the main information stream.
A third aspect of the invention relates to terminal equipment of a radiocommunication system comprising a parametric vocoder according to the second aspect.
Other features and advantages of the invention will become apparent on reading the following description. The description is purely illustrative and should be read with reference to the appended drawings, in which:
- figure 1 is a diagram illustrating an example of a coded speech data stream (speech stream) organised into frames and subframes;
- figure 2 is a partial block diagram of an example of transmitter equipment according to the invention;
- figure 3 is a partial block diagram of an example of a vocoder according to the invention; and
- figure 4 is a partial block diagram of an example of a vocoder used in receiver equipment according to the invention.
Figure 2 shows, in partial block diagram form, an example of transmitter equipment according to the invention.
The stream DS1 is generated by a vocoder 10 from the speech signal VS1, which is produced by a speech source 1 such as the vocal apparatus of an individual. To this end, the speech signal VS1 is digitised according to linear PCM (pulse code modulation) coding and segmented into frames called speech frames. Moreover, each frame is generally segmented in the vocoder 10 into a fixed number M of segments called subframes, in the time domain (CELP model) or in the frequency domain (MBE model, "Multi-Band Excitation"). Typically, M is between 2 and 6, depending on the vocoder. Each frame comprises a determined number N of bits.
Figure 1 illustrates an example of such a stream DS1 organised into frames and subframes.
Returning now to figure 2.
The secondary data stream DS2 is, for example, generated by a codec 20, which receives a data stream to be coded from a source 2. In one example of application of the invention, the source 2 also emits a speech signal, the codec 20 then being a vocoder with a bit rate lower than that of the vocoder 10. In this case, the stream DS2 is also a stream of speech frames. In this application, the invention allows the discreet insertion of a secondary communication into a main communication. The codec 20, more specifically the vocoder 20, may be a 1200/2400 bit/s MF-MELP ("Multi-Frame - Mixed Excitation Linear Prediction") vocoder as described in NATO STANAG 4591.
Optionally, the stream DS2 may be subjected to error-correcting coding, for example CRC ("Cyclic Redundancy Code") coding or convolutional coding, which forms channel coding with a view to its transmission through the transmission channel. Indeed, it is known that certain bits of the frames of the speech stream DS1 are little or not at all protected by channel coding, so that specific protection of the bits of the information stream DS2 may be required, depending on the application.
The vocoder 10 comprises an encoder 100 which implements a source-model (parametric-model) coding algorithm, for example of the CELP or MELP type. In such a case, the parameters corresponding to the coding of a speech frame on the transmitter side include, among others, excitation vectors which are applied, on the receiver side, to a filter whose response models speech.
Parametric coding algorithms use parameters that are computed either directly as a function of the incoming stream of speech frames and of an internal state of the vocoder, or by iterations (over successive frames and/or subframes) optimising a given criterion. Typically, the former comprise the linear prediction (LP) parameters defining a short-term filter, and the latter comprise the adaptive excitation (LTP) parameters defining a long-term filter, together with the fixed excitation parameters. Each iteration corresponds to the coding of one subframe of a frame of the input stream.
Thus, for example, the adaptive excitation parameters and the fixed excitation parameters are selected by successive iterations so as to minimise the quadratic error between the synthesised speech signal and the original speech signal VS1. In the English-language literature, this iterative selection is sometimes called "codebook search", "analysis-by-synthesis search", "error minimization loop" or "closed-loop pitch analysis".
In general, the adaptive excitation parameters and/or the fixed excitation parameters may each comprise, on the one hand, an index corresponding to the value of a vector in the adaptive codebook (which depends on the subframe) or in a fixed codebook, respectively, and, on the other hand, a gain value associated with said vector. Nevertheless, in certain vocoders such as the TETRAPOL vocoder, the parameters of at least one of the adaptive and fixed excitations directly define the excitation vector to be applied, that is to say without addressing a codebook by an index. In what follows, no distinction is made between these two ways of defining the excitation vectors: the constraints imposed by the bits of the stream DS2 apply either to the index designating the value of the excitation vector in the codebook, or to the value of the excitation itself.
In addition to the main data stream (stream of speech frames) VS1 and the secondary data stream DS2, the vocoder 10 receives, according to the invention, a stream FS of frame masks and/or a stream BS of bit masks.
The stream FS is generated by a frame mask generator 3, from a bit stream received from a pseudo-random generator 5, which operates from a secret key Kf known to the transmitter and the receiver. The function of a frame mask is to select, among a determined number of frames of the speech frame stream DS1, only those into which bits of the secondary data stream DS2 are inserted.
To this end, the generator 3 executes the following process. Let F[i] be the sequence of frames of the main stream DS1, let h be a numerical function with integer values, and let k be a determined integer, which is preferably substantially equal to the depth of storage of successive frames in the vocoder 10 (see below, the number P, with reference to the diagram of figure 3). The frames F[h(i)] to F[h(i)+k], for i = 0, 1, 2, ..., then undergo the insertion constraint, while the other frames do not.
According to a preferred embodiment of the invention, the frames undergoing the insertion constraint thus belong to a sub-sequence of groups of consecutive frames of the main stream DS1. This makes it possible to take advantage of the sliding effect of the speech coding resulting from the frame storage provided in the vocoder 10, in order to preserve the quality of the coding of the speech signal VS1 in the main stream DS1. This is why the number k, which corresponds to the length in frames of a group of frames, is preferably equal to, or at least close to, the storage depth R of the vocoder 10, as stated above.
For example, by choosing h(i) = 10 × i and k = 5, the frames F[0] to F[5] undergo the insertion constraint, the frames F[6] to F[9] do not, the frames F[10] to F[15] undergo the insertion constraint, the frames F[16] to F[19] do not, and so on. In other words, in this example, 6 consecutive frames out of every 10 undergo the insertion constraint.
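A minimal sketch of this frame-selection rule, assuming only what is stated above (an increasing integer-valued function h, in practice derived from the pseudo-random generator keyed by Kf, and a group length k); the function and variable names are illustrative, not the patent's.

```python
# Sketch of the frame-mask rule: frames F[h(i)] .. F[h(i)+k] undergo the insertion
# constraint, the others do not. h is assumed increasing.
def frame_is_constrained(j: int, h, k: int) -> bool:
    """Return True if frame index j falls inside some group [h(i), h(i) + k]."""
    i = 0
    while h(i) <= j:
        if j <= h(i) + k:
            return True
        i += 1
    return False

# Reproducing the example above: h(i) = 10 * i and k = 5 constrain 6 frames out of every 10.
h = lambda i: 10 * i
assert [frame_is_constrained(j, h, 5) for j in range(10)] == [True] * 6 + [False] * 4
```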
The stream BS is generated by a bit mask generator 4, from a bit stream received from a pseudo-random generator 6, which operates from a secret key Kb, also known to the transmitter and the receiver. The function of a bit mask is to select, among the N bits of a frame of the speech frame stream DS1 selected by virtue of the frame mask associated with the current frame F[i], only those bits which are constrained by bits of the secondary data stream DS2.
To this end, the generator 4 executes the following process. It produces a stream of a fixed number Smax of bits, where Smax denotes the maximum number of bits of a current frame F[i] of the main stream DS1 that may be constrained by bits of the secondary stream DS2. A determined number S of these Smax bits, where S is less than or equal to Smax (S ≤ Smax), have the logic value 1, the others having the logic value 0. These Smax bits are inserted into a string of N bits, at predefined and fixed positions provided for in the software of the vocoder 10, so as to form a binary mask over the frame. This mask, called the bit mask, therefore comprises S bits equal to 1. In one example, when a bit of the bit mask is equal to 1, it indicates a position at which a bit of the secondary stream DS2 is inserted into the current frame F[i] of the main stream DS1.
The number Smax is set as a compromise between the maximum number of bits of the secondary stream DS2 that can be inserted into a frame of the main stream DS1, on the one hand, and the concern to preserve the quality of the coding of the speech signal VS1 in the main stream DS1, on the other hand. With Smax fixed, the number S depends on the bit rate of the secondary stream DS2. The ratio S/N defines what may be called the insertion rate of the secondary stream DS2 into the main stream DS1 for the current frame F[i], the ratio Smax/N defining the maximum insertion rate.
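A sketch of how such an N-bit mask could be built, assuming the Smax eligible positions are fixed in the vocoder software and modelling the stream derived from Kb by a seeded generator; none of this code is taken from the patent.

```python
# Sketch of bit-mask construction: exactly S of the Smax eligible positions are set to 1.
import random
from typing import List, Sequence

def build_bit_mask(n: int, eligible_positions: Sequence[int], s: int, seed: int) -> List[int]:
    """Return an N-bit mask (list of 0/1) with s ones, all at eligible (fixed) positions.

    `seed` stands in for the stream produced by the generator keyed with Kb,
    which the transmitter and the receiver share.
    """
    assert 0 <= s <= len(eligible_positions) <= n
    rng = random.Random(seed)
    mask = [0] * n
    for pos in rng.sample(list(eligible_positions), s):
        mask[pos] = 1
    return mask
```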
In an example using a TETRAPOL vocoder (for which N = 120) with h(i) = 10 × i, k = 5 and S = 50, a channel with an average bit rate of 1215 bit/s is obtained for the insertion of the secondary stream. Such a bit rate allows the insertion of a secondary data stream generated by an MF-MELP type codec at 1200 bit/s (requiring 81 bits every 67.5 ms) as described in NATO STANAG 4591. In other words, the insertion rate obtained is sufficient to discreetly transmit a secondary stream which is itself a speech stream generated by a secondary vocoder 20 of lower bit rate than the main vocoder 10.
One example of an insertion constraint consists in replacing (i.e. overwriting) the bits of the main stream DS1 normally generated according to the standard coding algorithm implemented by the vocoder 10 from the speech signal VS1 with bits of the secondary stream DS2. In other words, the constraints applied to the speech coding parameters of the main stream are equality constraints with the bits of the second stream, combined with selection constraints implemented by a logical AND operation applying a binary mask to the bits forming the main stream.
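This simplest constraint (equality, i.e. overwriting the masked positions) can be sketched as follows; the helper name is illustrative and not taken from the patent.

```python
# Sketch of the equality constraint: positions flagged by the bit mask are
# overwritten with consecutive bits of the secondary stream DS2.
from typing import List

def insert_secondary_bits(frame_bits: List[int], bit_mask: List[int], secondary_bits: List[int]) -> List[int]:
    out = list(frame_bits)
    src = iter(secondary_bits)
    for pos, flag in enumerate(bit_mask):
        if flag:
            out[pos] = next(src)   # equality constraint on this bit of the main stream
    return out
```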
This example is the simplest, but it is not the only one. Indeed, algorithms operating on the main stream and on the secondary stream and using any contextual grammar or linear or non-linear algebra, including Boolean algebra and Allen's temporal interval algebra, may equally be used to define the insertion constraints.
Note in particular that the set of excitation indices in a codebook generally has a distribution of 0 and 1 bits that is completely neutral with respect to a statistical analysis of occurrences. It is generally possible to encipher the secondary stream DS2 into a pseudo-random form before insertion, without modifying the statistical distribution of 0s and 1s in the modified bits of the main stream. Should the speech coding model lead to a coded stream in which certain subframes have a correlation towards 0 or towards 1, the aforementioned pseudo-random generator, or an encryption algorithm applied to the secondary stream, must also exhibit this bias.
As will have been understood, the number of bits constrained during coding varies from one frame to the next according to an evolution law known to the transmitter and the receiver, which are assumed to be synchronised.
The synchronisation of the transmitter and the receiver as regards the application of the frame masks and/or bit masks results from the general synchronisation between these two items of equipment. Typically, this synchronisation is ensured by labelling the frames with values generated by a frame counter. In a known manner, the general synchronisation between the transmitter and the receiver may also be provided, wholly or in part, by synchronisation elements (particular bit patterns) inserted into the main stream DS1.
The encoder 100 of the transmitter and the decoder of the receiver share the same initial information making it possible to determine the sub-sequence of the groups of frames and of the subframes in which the insertion of the secondary stream takes place. This information may comprise an initialisation vector for the pseudo-random generators 5 and 6. It may be fixed. It may also depend, for example, on the average bit rate imposed by the secondary stream, or on unconstrained parameters of the main codec 10 computed during the coding of the main stream.
As shown in figure 3, the encoder 100 comprises a module for the synthesis of the linear prediction parameters, which receives the speech signal VS1 as input and delivers the information LP' corresponding to the linear prediction parameters of the current frame.
The encoder 100 also comprises a module 21, which is a hardware and/or software module for the synthesis of the adaptive excitation parameters; it receives the information LP' as input and delivers as output information LTP corresponding to the adaptive excitation parameters (defining a first quantisation vector and an associated gain for the short-term synthesis filter). The information LTP is passed to the input of a logic unit 22, for example a multiplexer, which is controlled by the frame mask stream FS and the bit mask stream BS. The unit 22 outputs information LTP' corresponding to the information LTP in which at least some bits, for at least certain frames and/or certain subframes, have been altered by applying the constraints resulting from the secondary stream DS2 via the frame mask and the bit mask associated with the current frame. Storage of the information LTP', with a storage depth corresponding to a determined number Q of successive subframes of the current frame (Q ≤ M-1), may be provided for the module 21.
Finally, the encoder 100 comprises a module 31, which is a hardware and/or software module for the synthesis of the fixed excitation parameters; it receives the information LTP' as input and delivers as output information FIX corresponding to the fixed excitation parameters (defining a second quantisation vector and an associated gain for the short-term synthesis filter). The information FIX is passed to the input of a logic unit 32, for example a multiplexer, which is controlled by the frame mask stream FS and the bit mask stream BS. The unit 32 outputs information FIX' corresponding to the information FIX in which at least some bits, for at least certain frames and/or certain subframes, have been altered by applying the constraints resulting from the secondary stream DS2 via the frame mask and the bit mask associated with the current frame. Storage of the information FIX', with a storage depth corresponding to a determined number R of successive subframes of the current frame (R ≤ M-1), is provided for the module 21. In addition, storage of the information FIX', with a storage depth corresponding, for example, to a determined number W of successive subframes of the current frame (W ≤ M-1), may be provided for the module 21.
For each current frame, the information LP'(F[i]) corresponding to the linear prediction parameters of the frame, the information LTP'(SF[1]), ..., LTP'(SF[M]) corresponding to the adaptive excitation parameters of the subframes SF[1] to SF[M] of the frame respectively, and the information FIX'(SF[1]), ..., FIX'(SF[M]) corresponding to the fixed excitation parameters of the subframes SF[1] to SF[M] of the frame respectively, are supplied to the input of a multiplexer 41 which concatenates them to form a frame of the main stream DS1.
The storage operations discussed above make it possible to attenuate the effect of the constraints applied to the bits of the linear prediction parameters, the adaptive excitation parameters and/or the fixed excitation parameters on the fidelity of the main stream DS1 to the source speech signal VS1. Indeed, this storage produces a sliding effect in the computation of the parameters, so that, for a given frame, the constraints applied to first parameters are at least partially compensated, from a perceptual point of view, by the computation of parameters calculated subsequently from a speech synthesis based on said first parameters.
More specifically, the following relations can be written, where f denotes a function expressing the analysis by synthesis:
- 1) LP'(F[i]) = f(LP'(F[i-1]), LP'(F[i-2]), ..., LP'(F[i-P]));
- 2) LTP'(SF[i]) = f(LTP'(SF[i-1]), ..., LTP'(SF[i-R]), FIX'(SF[i-1]), ..., FIX'(SF[i-W]));
- 3) FIX'(SF[i]) = f(FIX'(SF[i-1]), ..., FIX'(SF[i-W])).
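This compensation mechanism can be illustrated with a deliberately toy, self-contained example (small hand-made codebooks, squared-error criterion); it is not the patent's encoder, but it shows how an excitation chosen later by analysis by synthesis absorbs part of the error introduced by constraining an earlier excitation index.

```python
# Toy illustration of the compensation effect (not the patent's encoder).
def sub(u, v): return [a - b for a, b in zip(u, v)]
def add(u, v): return [a + b for a, b in zip(u, v)]
def err(u, v): return sum((a - b) ** 2 for a, b in zip(u, v))

target = [1.0, -0.5, 0.25, 0.0]                    # target signal for one subframe
codebook1 = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0],
             [1.0, -1.0, 0.0, 0.0], [0.5, -0.5, 0.5, 0.0]]
codebook2 = [[0.0, 0.0, 1.0, 0.0], [0.0, 0.0, 0.0, 1.0],
             [0.5, 0.0, 0.0, 0.0], [0.0, -0.5, 0.25, 0.0]]

forced_index1 = 0b01                               # two secondary-stream bits force the first index
v1 = codebook1[forced_index1]
residual = sub(target, v1)                         # error left by the constrained first excitation
index2 = min(range(len(codebook2)),                # second excitation still chosen freely,
             key=lambda i: err(codebook2[i], residual))  # by analysis by synthesis
print(forced_index1, index2, err(add(v1, codebook2[index2]), target))
```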
These compensations, together with the fact that the insertion of the bits of the secondary stream is not random, make it possible in practice to achieve, for certain vocoders, insertion rates of the order of 10% without causing a (perceptual) degradation of the speech signal VS1 greater than that caused by a residual bit error rate (after channel coding) of the order of a few percent.
The implications of the method on the receiver side will now be described.
Note first of all that, for receiving equipment that does not process the secondary stream DS2, the received frames of the stream DS1 are simply decoded according to the standard synthesis algorithm of the vocoder 10 of the transmitting equipment.
For receiving equipment that does process the secondary stream DS2, recovering the information coded by the bits of this secondary stream requires synchronisation of the equipment with the transmitting equipment, means for extracting the secondary stream DS2 from the main stream DS1, and a codec identical to the codec 20 of the transmitting equipment.
Reference is now made to the diagram of figure 4, which shows an example of a vocoder 10a used in receiving equipment according to the invention.
The vocoder 10a, where appropriate after demultiplexing and channel decoding, receives the main stream DS1 as input and delivers a speech signal VS1' as output.
The signal VS1' is less faithful to the source speech signal VS1 (figure 2) than it would be in the absence of insertion of the secondary stream, but the resulting degradation remains limited, as indicated above.
The receiving equipment may also comprise means for reproducing the speech signal VS1', for example a loudspeaker or the like.
As already stated above, known transmission protocols provide for general synchronisation of the receiving equipment with the transmitting equipment. Implementing the invention therefore does not require any particular means in this respect.
For the extraction of the secondary stream, the vocoder 10a comprises a frame mask generator 3a and a bit mask generator 4a, respectively associated with a pseudo-random generator 5a and a pseudo-random generator 6a, which are identical to, and arranged in the same way as, the means 3, 4, 5 and 6 respectively of the vocoder 10 of the transmitting equipment (figure 2).
The bits of the secondary stream DS2 are extracted by synchronous application of the frame masks and bit masks at the input of the decoder 100a (for example by means of logical AND operations), without this affecting the decoding of the main stream DS1 by the latter. To this end, the stream DS1 is supplied to the input of the decoder 100a via a logic unit 7a, which extracts the secondary information stream DS2 from the main information stream DS1 under the control of the frame mask stream FSa and the bit mask stream BSa.
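Receiver-side extraction can be sketched symmetrically: with synchronised masks, the secondary bits are simply read out of the flagged positions while the frame itself is passed unchanged to the standard decoder (the function name is illustrative).

```python
# Sketch of extraction at the receiver: collect the bits at the masked positions.
from typing import List

def extract_secondary_bits(frame_bits: List[int], bit_mask: List[int]) -> List[int]:
    return [bit for bit, flag in zip(frame_bits, bit_mask) if flag]
```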
The receiving equipment may also comprise a secondary codec, identical to the codec 20 of the transmitting equipment, for decoding the secondary stream DS2. When this stream is a speech stream, the secondary codec generates a speech signal which can be reproduced via a loudspeaker or the like.
It will be noted that the fluctuation of the transmission rate of the bits of the secondary stream DS2 does not pose any particular problem on the receiver side, provided that the secondary stream DS2 is supplied to the input of a variable-rate secondary codec, as is the case for all vocoders on the market. Indeed, such a codec comprises an input buffer in which the data of the stream DS2 are stored for decoding. It is merely necessary to ensure that the input buffer is never empty. To this end, a suitable insertion rate is determined, taking into account in particular the bit rates of the encoder 100 and of the secondary vocoder 20 and the objective of preserving the fidelity of the main stream DS1 to the speech signal VS1. Given the high insertion rates obtained in practice (of the order of 10%), this question of feeding the secondary vocoder of the receiving equipment should not pose a problem, with a main vocoder 10 of the AMR type in its 12.2 kbit/s coding mode and a secondary vocoder 20 with a bit rate approximately ten times lower.
Furthermore, in the case where the secondary stream is a speech stream, and in order to supply the second decoder with a regular stream of frames, the received sequences may optionally be buffered and the decoding not started immediately.
In the case where the secondary stream is a transparent data stream, it is proposed to concatenate the data and to process them as if they had been transmitted by means of a short message of maximum length (the SMS service in GSM, for example), and to add an error-correcting convolutional code. Alternatively, the transparent data stream may be sent to an encryption module or to a "text-to-speech" transcoding and synthesis module.
Let us now return to the general description of the ways of implementing the transmission method according to the invention.
The choice of the bits of a given frame of the main stream to which the constraint of the secondary stream is applied is determined according to the particular features of each application. Several possible embodiments in this respect, as well as other features and advantages of the invention, are given below.
In one possible embodiment, constraints are imposed during coding on the value of zero, several or all of the bits of the frame that are associated with an excitation vector of a given type, adaptive or fixed, before performing the iterations used to compute the parameters that depend on said excitation vector by virtue of the storage carried out in the vocoder. These constrained-value bits are then the secondary-stream information carried by the frame, and constitute the channel of the secondary information stream DS2. In other words, the secondary stream is inserted by imposing values on bits forming the parameters of the adaptive or fixed excitation vectors. This can optionally be extended by simultaneously applying constraints to the excitation vectors of the other type, fixed or adaptive respectively.
When the transmission between the transmitter and the receiver provides for partial encryption of the frames of the main stream (that is to say, encryption of only some bits in each frame), the bit mask may advantageously coincide with a set of unencrypted bits of a frame. This allows receiving equipment acting as a gateway to extract the secondary stream inserted in the main stream without having the means to decrypt the main stream.
This is particularly useful, while preserving the confidentiality of the main stream, under the approximate assumption of linearity of the vocoder's speech model, that is to say, considering that the residual or vocal-cord excitation parameters are uncorrelated with the coefficients describing the spectral envelope of the response of the vocal tract.
In other words, this embodiment of the method is characterised in that the secondary information stream is inserted by imposing constraints on unencrypted bits of parameters of the speech model of the main stream.
This embodiment is illustrated by an example concerning an EFR vocoder (see above) used as the main codec. Bits among the unprotected bits of each frame are used as the channel for the secondary stream, by overwriting the values computed by the source coding algorithm of the main stream through the application of a binary mask to the 78 unprotected bits of each frame. These 78 unprotected bits are identified in table 6 (entitled "Ordering of Enhanced Full Rate Speech Parameters for the Channel Encoder" in the specification ETSI EN 300 909 V8.5.1 GSM 05.03 "Channel coding") and concern a subset of the bits describing the fixed excitation vectors. With these 78 class-2 bits per 20 ms frame, a secondary channel with a nominal bit rate of 3900 bit/s is obtained. Preferably, the least sensitive bits of the 12.2 kbit/s coding mode of the AMR codec (see above) may be used, identified in order of sensitivity in table B.8 (entitled "Ordering of the Speech Encoder Bits from the 12,2 kbit/s Mode" in the specification 3GPP TS 26.101 "Adaptive Multi-Rate (AMR) Speech Codec Frame Structure").
It is therefore also possible to introduce, into the 12.2 kbit/s coding mode of the AMR codec, the stream of a secondary codec, for example the MELP 1200/2400 bit/s coder described in NATO STANAG 4591, requiring 81 bits per 67.5 ms at 1200 bit/s (respectively 54 bits per 22.5 ms at 2400 bit/s), wrapped in its own error-correcting coding (rate 2/3 FEC, for example) protecting 100% of the bits at 1200 bit/s (respectively 50% of the bits at 2400 bit/s), and/or wrapped in security interoperability negotiation frames of the FNBDT ("Future Narrow Band Digital Terminal") type defined by NATO, or in a lighter type of security protocol.
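As a quick sanity check of the figures quoted above (frame durations as given in the cited specifications; the 1800 bit/s value is a derived estimate assuming the rate-2/3 FEC is applied to all payload bits):

```python
efr_side_channel = 78 / 0.020            # 78 unprotected class-2 bits every 20 ms -> 3900 bit/s
melp_1200 = 81 / 0.0675                  # 81 bits every 67.5 ms                   -> 1200 bit/s
melp_2400 = 54 / 0.0225                  # 54 bits every 22.5 ms                   -> 2400 bit/s
melp_1200_with_fec = melp_1200 * 3 / 2   # rate-2/3 FEC on 100% of the bits        -> 1800 bit/s
assert melp_1200_with_fec < efr_side_channel   # the protected stream fits in the 3900 bit/s channel
```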
In another embodiment, applicable to vocoders using an algorithm based on the selection of quantised excitations in a codebook, the constraint consists in imposing a given excitation value taken from the codebook. In a variant, the codebook is partitioned into several sub-codebooks, and the constraint consists in imposing one of the sub-codebooks. Another variant combines the two types of constraint above. When decoding the main stream on the receiver side, knowledge of the received excitation makes it possible to identify the sub-codebook and/or the excitation concerned, and to deduce from it the constraint that determines the bits of the secondary stream. Note that, up to a permutation of the excitations, the constraint of imposing the sub-codebook can be equivalent to applying the constraints to the low-order bits of the excitation indices in the codebook.
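The equivalence mentioned at the end of this paragraph can be made concrete with a small sketch (all values illustrative): forcing the s low-order bits of a codebook index to carry s secondary-stream bits is the same as restricting the analysis-by-synthesis search to one sub-codebook.

```python
# Indices of a 2**b-entry codebook whose s low-order bits equal the secondary bits.
def allowed_indices(b: int, s: int, secondary_bits: int):
    return [idx for idx in range(2 ** b) if idx & ((1 << s) - 1) == secondary_bits]

# 16-entry codebook, 2 secondary bits '10': the encoder may still choose among 4 entries,
# and the receiver recovers the 2 bits as (index & 0b11).
print(allowed_indices(4, 2, 0b10))   # [2, 6, 10, 14]
```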
In another embodiment, the secondary stream defines a differential coding of the excitation vector indices, for example of the fixed excitation vectors, over the subseries of successive frames of the main stream.
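A minimal sketch of such differential coding (purely illustrative; the codebook size and the way the first index is chosen depend on the vocoder) carries each secondary symbol as the difference between two successive fixed excitation indices:

```python
# Sketch: carrying secondary symbols as differences between successive fixed
# excitation indices, modulo a hypothetical codebook size.

CODEBOOK_SIZE = 256

def embed_differential(first_index, secondary_symbols):
    """Build a sequence of indices whose successive differences encode the symbols."""
    out, prev = [first_index], first_index
    for sym in secondary_symbols:
        prev = (prev + sym) % CODEBOOK_SIZE
        out.append(prev)
    return out

def extract_differential(received_indices):
    """Receiver side: recover the symbols from the index differences."""
    return [(b - a) % CODEBOOK_SIZE
            for a, b in zip(received_indices, received_indices[1:])]

indices = embed_differential(17, [5, 250, 12])
print(indices, extract_differential(indices))   # differences 5, 250, 12 recovered
```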
In another embodiment, the constrained bits may be the least significant bits of the fixed excitations (that is, the non-adaptive excitations) for each speech frame, and possibly for each subframe defined within the speech frame in the sense of the coding algorithm of the vocoder 10.
In another embodiment, the number and position of the constrained bits are identified for each successive frame by an algorithm that computes a mask from a secret element known to the sender and to the receiver, in order to increase the chances that the existence of the secondary stream will go undetected by a third party.
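One way this could be realized is sketched below, assuming HMAC-SHA256 as the keyed generator (the text does not mandate any particular algorithm, and the parameters are illustrative): both ends derive the number and positions of the constrained bits from the shared secret and a frame counter, so that the masks agree without any side channel.

```python
# Sketch: deriving a per-frame bit mask from a shared secret so that sender and
# receiver compute the same positions. HMAC-SHA256 and all parameters are
# illustrative assumptions, not part of the described method.

import hashlib, hmac

def frame_mask(secret: bytes, frame_counter: int, frame_len: int = 78,
               max_bits: int = 16):
    """Return the sorted bit positions to constrain in this frame."""
    digest = hmac.new(secret, frame_counter.to_bytes(8, "big"),
                      hashlib.sha256).digest()
    n_bits = 1 + digest[0] % max_bits          # how many bits to constrain this frame
    positions = []
    for byte in digest[1:]:                    # draw positions from the keyed digest
        pos = byte % frame_len
        if pos not in positions:
            positions.append(pos)
        if len(positions) == n_bits:
            break
    return sorted(positions)

print(frame_mask(b"shared-secret", frame_counter=42))
```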
Another embodiment, applicable to coding algorithms requiring several fixed excitation vectors per frame or subframe, such as the CELP codec for the speech of an MPEG-4 stream (defined in the specification ISO/IEC 14496-3 Sub-part 3), in which some fixed excitations of a frame are chosen from previous calculations while other fixed excitations of the same frame are computed by analysis-by-synthesis over a codebook (see the specification ISO/IEC 14496-3 §7.9.3.4 "Multi-Pulse Excitation for the bandwidth extension tool"), consists in imposing the constraint on the codebook choice of the first fixed excitation and then using the analysis-by-synthesis iterations on the second fixed excitation to make up for the error imposed by the constraint on the first fixed excitation.
In another embodiment, the subseries of frames of the main stream concerned by the insertion of the secondary stream comprises only those frames that carry sufficient energy and speech in the sense of the vocoder. In a variant applicable, for example, to MELP vocoders (which define several voicing levels) or to HVXC ("Harmonic Vector eXcitation Coding") vocoders, which are parametric vocoders for an MPEG-4 speech stream defined in the specification ISO/IEC 14496-3 Sub-part 2, the subseries concerns only the weakly voiced or totally unvoiced segments of the frames.
When the constraint is applied to the excitation parameters, for example to the fixed excitation indices, the parameters of a subframe of the main stream DS1 remain fully compliant with the speech coding model of the vocoder 10. Nevertheless, the sequence of modified fixed excitations may be statistically atypical of human speech, or possibly atypical for a speaker recognition process, depending on the constraints applied and on the desired fidelity objective. To prevent the presence of the secondary stream in these excitations from being detectable in receiving equipment, a processing of the parameters comprising a smoothing of the fixed excitation gains, combined with a processing of isolated pulses in the excitation vectors and followed by post-filtering after speech synthesis, can be applied at decoding. This processing makes it possible to exclude acoustic sequences, appearing after transmission over a noisy channel, that a human vocal apparatus could not produce in the acoustic ambience picked up by a microphone. These are, for example, certain sequences of clicks, hisses, squeals, whistles or the like in the background noise that the standard vocoder would not have filtered sufficiently during speech synthesis because of the imposed constraints. In this way, undesirable unvoiced sounds that would be correlated with the fixed excitation sequences constrained according to the method of the invention can be made imperceptible.
Nevertheless, when the application of constraints risks leading to the perception of undesirable unvoiced sounds correlated with a fixed excitation sequence that is atypical of human speech and is not filtered by the standard decoder of the vocoder, the subseries of frames to which the constraints are applied can be defined on the basis of prior statistical analyses of the values of consecutive parameters of the vocoder's speech model, for example by exploiting the texture of the speech parameters, defined by an inertia, an entropy or an energy derived from the probability of the sequences of parameter values, for example over eight consecutive frames, representative of the duration of a phoneme.
For each embodiment, the performance of the synthesis of the main stream DS1, that is, its fidelity to the signal VS1, is inversely proportional to the relative bit rate of the secondary stream DS2. Subjective fidelity to the source 1 of the speech signal VS1 can nevertheless be achieved when the proposed method keeps certain subjective attributes of the source 1 invariant (for example certain psychoacoustic criteria). It can be measured by statistical measurements ("Mean Opinion Score", or MOS) on a standardized scale (see ITU-T Recommendation P.862 "Perceptual evaluation of speech quality - PESQ").
In some embodiments, the degradation of the subjective quality of the speech stream DS1 produced by the vocoder 10, which is due to the insertion of the secondary stream DS2, is assumed to be acceptable enough to justify applying the proposed method. This is in particular the case when the secondary stream is also a speech stream and the auditory content of the main stream matters much less to the legitimate listener than the content of the secondary stream. Indeed, the psychoacoustic perception of the possible presence of the secondary stream when listening to the decoded and reproduced main stream does not help to locate the secondary stream within the main stream, and therefore does not provide formal proof of its existence. This is in particular the case for a low bit rate vocoder 10 used in a noisy environment, because the decoding and reproduction of the main stream DS1 yield speech sequences that comply with the model of the vocoder 10. It is also the case, within certain psychoacoustic limits, when a minimum bit rate of the secondary stream must be guaranteed at the expense of the reproduction quality of the main stream.
In order to preserve the intelligibility of the synthesis of the main stream DS1 as far as possible, it is preferable not to apply constraints to the linear prediction (LP) spectral parameters defining the short-term filter, and not to disturb too much the long-term prediction (LTP) parameters adapted to each subframe, so as to retain subjective characteristics deemed essential in the speech signal VS1. In particular, one embodiment consists in preferably applying the constraints to subframes other than those on which the long-term analysis windows of the frame are concentrated, namely, for example, the second and fourth subframes for the 12.2 kbit/s coding mode of the AMR vocoder mentioned above (see the 3GPP specification TS 26.090 V5.0.0, §5.2.1 "Windowing and auto-correlation computation"). In particular, disturbing many voiced segments, which generally carry most of the speaker identification characteristics, is to be avoided.
As a more elaborate example, in the 12.2 kbit/s coding mode of the AMR vocoder, it is possible to impose a constraint on the choice of the adaptive excitation by imposing initial values on the samples u(n), n = 0, ..., 39, in the recursive equation (38) for computing the adaptive vector described in paragraph 5.6.1 (entitled "Adaptive Codebook Search") of the 3GPP specification TS 26.090 mentioned above, substituting 40 values extracted from the secondary stream for the values of the LP residual computed in equation (36). The error between the signal of the main stream and the signal synthesized by the short-term filter with the contribution of the constrained adaptive vector is compensated by the choice of the fixed excitation vector, which attempts to make up for the residual error (for example the residual quadratic error) of the long-term prediction on the same subframe, as do the excitation vectors of the following subframes. The constrained excitation vectors thus encode the secondary stream as an adaptive residual on top of the response of the short-term synthesis filter of the main stream, corrected by the fixed residual.
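The compensation principle can be sketched schematically as follows; this is not the TS 26.090 implementation (the short-term synthesis filtering is omitted and all names are placeholders), only an illustration of how a constrained adaptive contribution can be corrected by the fixed-codebook search.

```python
# Conceptual sketch: the adaptive contribution of a subframe is replaced by
# values driven by the secondary stream, and the fixed-codebook search then
# picks the vector and gain that best cancel the remaining error.

import numpy as np

def compensate_constraint(target, constrained_adaptive, fixed_codebook):
    """Return (index, gain) of the fixed vector minimising the residual error."""
    residual = target - constrained_adaptive            # error left by the constraint
    best = (None, 0.0, np.inf)
    for idx, vec in enumerate(fixed_codebook):
        gain = float(np.dot(residual, vec) / np.dot(vec, vec))  # least-squares gain
        err = float(np.sum((residual - gain * vec) ** 2))       # residual quadratic error
        if err < best[2]:
            best = (idx, gain, err)
    return best[0], best[1]

# Toy usage: 40-sample subframe, 8 candidate fixed excitation vectors.
rng = np.random.default_rng(0)
target = rng.normal(size=40)            # stands in for the subframe coding target
constrained = rng.normal(size=40)       # adaptive part forced by the secondary stream
codebook = rng.normal(size=(8, 40))
print(compensate_constraint(target, constrained, codebook))
```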
In another example, for the speech model of a parametric vocoder of the STC ("Sinusoidal Transform Coding") type or of the MBE ("Multi-Band Excitation") type, for example according to the standard specification ANSI/TIA/EIA 102.BABA ("APCO Project 25 Vocoder Description"), one embodiment focuses on the least significant bits of the harmonic amplitude parameters of the frame segments, or on the amplitude parameters of the samples of the spectral envelope. In an MBE codec, the excitation parameters are the fundamental frequency and the voiced/unvoiced decision for each frequency band.
The embodiments described so far provide for the insertion of the bits of the secondary stream into speech frames of the main stream. Nevertheless, the main stream DS1 is known to also contain silence frames, which are frames coded by the vocoder 10 at a lower bit rate and transmitted less frequently than speech frames, in order to synthesize the periods of silence contained in the speech signal VS1. These silence frames synthesize what is called comfort noise.
An embodiment of the method may therefore provide, as a variant or in addition, for the insertion of the secondary stream through digital constraints on the values of the parameters describing the comfort noise to be generated for the main stream.
This embodiment is illustrated by an example in which an EFR or AMR codec (see above) is used as the main codec. In the GSM and UMTS systems, the frames carrying comfort noise (silence frames) are called SID frames (see, for example, the 3GPP/ETSI specification TS 26.092 "Mandatory Speech Codec Speech Processing Functions; AMR Speech Codec; Comfort Noise Aspects"). More precisely, the frames considered here are the SID-UPDATE frames, which contain 35 bits of comfort noise parameters and a 7-bit error-correcting code.
In a GSM or UMTS system, it is the source, that is, the codec of the sender, that controls the transmission of the silence frames (subject to interactions with the voice activity detection and discontinuous transmission processes, in particular on the downlink from the relay to the mobile terminal). It is therefore possible to insert the secondary stream according to a method similar to that applicable to a frame containing sufficient speech energy (a speech frame).
Alternatively, it is possible to trigger the transmission of a particular silence frame from the digitized analog input of the codec, by generating an analog comfort noise representative of the 35 bits of the secondary stream. In the GSM and UMTS systems, the frequency of the silence frames is controlled by the source or by the relay and corresponds either to one silence frame every 20 ms, or to one silence frame every 160 ms, or again to one silence frame every 480 ms for the EFR codec of the GSM system. This determines the maximum bit rate of the secondary stream in this variant of the method.
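The corresponding upper bounds on the secondary bit rate, for the periodicities just mentioned and the 35 comfort-noise bits available per SID frame, are easy to tabulate (an arithmetic sketch only):

```python
# Maximum secondary bit rate when only silence (SID) frames carry the payload:
# 35 comfort-noise bits per frame, at the periodicities allowed by the system.

BITS_PER_SID_FRAME = 35

for period_ms in (20, 160, 480):
    rate = BITS_PER_SID_FRAME * 1000 / period_ms
    print(f"one SID frame every {period_ms} ms -> {rate:.1f} bit/s")
# -> 1750.0, 218.8 and 72.9 bit/s respectively
```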
In one particular arrangement, it is possible to use the duplex transmission channel to send silence frames when the speaker is the second participant in the communication, or during the silences of a first conversation, that is, between the groups of phonemes transmitted in the main stream.
It should be noted that the 3GPP specification TS 26.090 indicates that the size of the comfort noise coding field of the EFR codec, namely 35 bits per silence frame, is identical to the size of the fixed excitation parameter of that same codec. This means that the same constraints can be applied, and a permanent minimum insertion rate obtained, by using all the frames regardless of whether the main stream carries speech or silence.
Claims (18)
- Method of transmitting a secondary information stream (DS2) between a sender and a receiver, the method including inserting said secondary information stream in a parametric vocoder (100) of the sender generating a main information stream (DS1) that is a voice data stream coding a voice signal and is transmitted from the sender to the receiver, in which method bits of the secondary information stream are inserted:
• into only some of the frames (F[i]) of the main information stream selected by a frame mask known to the sender and to the receiver, and
• into a selected frame of the main information stream, at predefined positions by imposing a constraint on only some of the bits of the frame selected by a bit mask known to the sender and to the receiver;
wherein the frame mask defines a subseries (SF[m]) of groups of consecutive frames in each of which bits of the secondary information stream are inserted; and,
the length in frames of a group (M) of consecutive frames is substantially equal to the depth of storage of the frames in the parametric vocoder.
- Method according to claim 1, wherein the frame mask is variable and is generated in parallel in the sender and in the receiver using a common algorithm.
- Method according to any preceding claim, wherein, the source model of the parametric vocoder providing, for at least some of the frames of the main information stream, different classes of bits as a function of their sensitivity to the quality of voice signal coding, the bit mask is such that bits of the secondary information stream are inserted into these frames, by imposing a constraint as a matter of priority on the bits belonging to the least sensitive bit class.
- Method according to any one of claims 1 to 3, wherein the secondary information stream is a voice data stream from another vocoder (20) having a lower bit rate than the parametric vocoder.
- Method according to any one of claims 1 to 3, wherein the secondary information stream is a transparent data stream.
- Method according to any one of the preceding claims, wherein the secondary information stream is subjected to error corrector coding before inserting it into the main information stream.
- Method according to any one of the preceding claims, wherein bits of the secondary information stream are inserted by imposing values on bits that belong to excitation parameters of a filter of the source model of the parametric vocoder.
- Method according to any one of the preceding claims, wherein bits of the secondary information stream are inserted into silence frames of the main information stream.
- Method according to any one of the preceding claims, wherein bits of the secondary information stream are inserted by imposing constraints on unencrypted bits in relation to end-to-end encryption of the main information stream.
- Method according to any preceding claim, wherein the constraint is a constraint of equality of the bits of the frame of the main information stream and the inserted bits of the secondary information stream.
- Parametric vocoder (100) including, for inserting a secondary information stream (DS2) into a main information stream (DS1) that is generated by the parametric vocoder from a voice signal, insertion means adapted to insert bits of the secondary information stream:
• into only some of the frames (F[i]) of the main information stream selected by a particular frame mask known to the sender and to the receiver, and/or
• into a selected frame of the main information stream, at predefined positions by imposing a constraint on only some of the bits of the frame selected by a particular bit mask known to the sender and to the receiver;
wherein the frame mask defines a subseries of consecutive (SF[m]) frames into each of which bits of the secondary information stream are inserted; and,
wherein the length in frames of the subseries of consecutive frames is substantially equal to the depth of storage of the frames in the parametric voice codec.
- Parametric vocoder according to claim 13, wherein the frame mask is variable and is generated by an algorithm based on a secret key.
- Parametric vocoder according to any one of claims 11 and 12, wherein, the source model of the parametric vocoder providing, in at least some of the frames of the main information stream, different classes of bits as a function of their sensitivity to the quality of voice signal coding, the bit mask is such that bits of the secondary information stream are inserted into these frames, by imposing a constraint as a matter of priority on the bits belonging to the least sensitive bit class.
- Parametric vocoder according to any one of claims 11 to 13, further including means for subjecting the secondary information stream to error corrector coding before inserting it into the main information stream.
- Parametric vocoder according to any one of claims 11 to 14, wherein the insertion means are adapted to insert bits of the secondary information stream by imposing values on bits that belong to excitation parameters of a filter of the source model of the parametric vocoder.
- Parametric vocoder according to any one of claims 11 to 15, wherein the insertion means are adapted to insert bits of the secondary information stream into silence frames of the main information stream.
- Parametric vocoder according to any one of claims 11 to 16, wherein the insertion means are adapted to insert bits of the secondary information stream by imposing constraints on unencrypted bits in relation to end-to-end encryption of the main information stream.
- Terminal equipment of a radio system including a parametric vocoder according to any one of claims 11 to 17.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR0310546A FR2859566B1 (en) | 2003-09-05 | 2003-09-05 | METHOD FOR TRANSMITTING AN INFORMATION FLOW BY INSERTION WITHIN A FLOW OF SPEECH DATA, AND PARAMETRIC CODEC FOR ITS IMPLEMENTATION |
PCT/FR2004/002259 WO2005024786A1 (en) | 2003-09-05 | 2004-09-06 | Information flow transmission method whereby said flow is inserted into a speech data flow, and parametric codec used to implement same |
Publications (2)
Publication Number | Publication Date |
---|---|
EP1665234A1 EP1665234A1 (en) | 2006-06-07 |
EP1665234B1 true EP1665234B1 (en) | 2010-10-13 |
Family
ID=34178831
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP04787314A Expired - Lifetime EP1665234B1 (en) | 2003-09-05 | 2004-09-06 | Information flow transmission method whereby said flow is inserted into a speech data flow, and parametric codec used to implement same |
Country Status (8)
Country | Link |
---|---|
US (1) | US7684980B2 (en) |
EP (1) | EP1665234B1 (en) |
AT (1) | ATE484821T1 (en) |
CA (1) | CA2541805A1 (en) |
DE (1) | DE602004029590D1 (en) |
ES (1) | ES2354024T3 (en) |
FR (1) | FR2859566B1 (en) |
WO (1) | WO2005024786A1 (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR2898229B1 (en) | 2006-03-06 | 2008-05-30 | Eads Secure Networks Soc Par A | INTERLACEE CRYPTOGRAPHIC SYNCHRONIZATION |
US8054969B2 (en) * | 2007-02-15 | 2011-11-08 | Avaya Inc. | Transmission of a digital message interspersed throughout a compressed information signal |
WO2009004227A1 (en) * | 2007-06-15 | 2009-01-08 | France Telecom | Coding of digital audio signals |
US8792473B2 (en) * | 2008-12-18 | 2014-07-29 | Motorola Solutions, Inc. | Synchronization of a plurality of data streams |
BR112012025347B1 (en) * | 2010-04-14 | 2020-06-09 | Voiceage Corp | combined innovation codebook coding device, celp coder, combined innovation codebook, celp decoder, combined innovation codebook coding method and combined innovation codebook coding method |
US8689089B2 (en) * | 2011-01-06 | 2014-04-01 | Broadcom Corporation | Method and system for encoding for 100G-KR networking |
CN103187065B (en) | 2011-12-30 | 2015-12-16 | 华为技术有限公司 | The disposal route of voice data, device and system |
US9165162B2 (en) * | 2012-12-28 | 2015-10-20 | Infineon Technologies Ag | Processor arrangements and a method for transmitting a data bit sequence |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1991003901A1 (en) * | 1989-09-04 | 1991-03-21 | Fujitsu Limited | Time-division multiplex data relay exchange system |
US5319735A (en) * | 1991-12-17 | 1994-06-07 | Bolt Beranek And Newman Inc. | Embedded signalling |
US5937000A (en) * | 1995-09-06 | 1999-08-10 | Solana Technology Development Corporation | Method and apparatus for embedding auxiliary data in a primary data signal |
US5790759A (en) * | 1995-09-19 | 1998-08-04 | Lucent Technologies Inc. | Perceptual noise masking measure based on synthesis filter frequency response |
US5757788A (en) * | 1996-01-11 | 1998-05-26 | Matsushita Electric Ind. | Digital radio communication system with efficient audio and non-audio data transmission |
JP4456185B2 (en) * | 1997-08-29 | 2010-04-28 | 富士通株式会社 | Visible watermarked video recording medium with copy protection function and its creation / detection and recording / playback device |
WO1999041094A1 (en) * | 1998-02-17 | 1999-08-19 | Mi-Jack Products | Railwheel system for supporting loads on a road-traveling gantry crane |
GB2340351B (en) * | 1998-07-29 | 2004-06-09 | British Broadcasting Corp | Data transmission |
WO2000039955A1 (en) * | 1998-12-29 | 2000-07-06 | Kent Ridge Digital Labs | Digital audio watermarking using content-adaptive, multiple echo hopping |
AU6533799A (en) * | 1999-01-11 | 2000-07-13 | Lucent Technologies Inc. | Method for transmitting data in wireless speech channels |
US7130309B2 (en) * | 2002-02-20 | 2006-10-31 | Intel Corporation | Communication device with dynamic delay compensation and method for communicating voice over a packet-switched network |
Also Published As
Publication number | Publication date |
---|---|
ES2354024T3 (en) | 2011-03-09 |
FR2859566B1 (en) | 2010-11-05 |
ATE484821T1 (en) | 2010-10-15 |
CA2541805A1 (en) | 2005-03-17 |
ES2354024T8 (en) | 2011-04-12 |
US7684980B2 (en) | 2010-03-23 |
EP1665234A1 (en) | 2006-06-07 |
WO2005024786A1 (en) | 2005-03-17 |
FR2859566A1 (en) | 2005-03-11 |
DE602004029590D1 (en) | 2010-11-25 |
US20060247926A1 (en) | 2006-11-02 |