EP4333310B1 - Methods and systems for data transfer via a communication channel - Google Patents

Methods and systems for data transfer via a communication channel

Info

Publication number
EP4333310B1
EP4333310B1 (application EP23194604.7A)
Authority
EP
European Patent Office
Prior art keywords
code
neural network
codeword
channel
encoder
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP23194604.7A
Other languages
English (en)
French (fr)
Other versions
EP4333310A1 (de)
Inventor
Onur GÜNLÜ
Rick FRITSCHEK
Rafael Felix SCHAEFER
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Technische Universitaet Dresden
Original Assignee
Technische Universitaet Dresden
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Technische Universitaet Dresden filed Critical Technische Universitaet Dresden
Publication of EP4333310A1
Application granted
Publication of EP4333310B1

Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/29Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes
    • H03M13/2906Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes using block codes
    • H03M13/2927Decoding strategies
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • G06N3/0455Auto-encoder networks; Encoder-decoder networks
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/033Theoretical methods to calculate these checking codes
    • H03M13/036Heuristic code construction methods, i.e. code construction or code search based on using trial-and-error
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/27Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes using interleaving techniques
    • H03M13/2703Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes using interleaving techniques the interleaver involving at least two directions
    • H03M13/2707Simple row-column interleaver, i.e. pure block interleaving
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/29Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes
    • H03M13/2906Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes using block codes
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/65Purpose and implementation aspects
    • H03M13/6597Implementations using analogue techniques for coding or decoding, e.g. analogue Viterbi decoder
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions

Definitions

  • the present disclosure refers to methods and systems for data transfer via a communication channel.
  • US 11 387 848 B1 describes a controller hierarchical decoding architecture, in which multiple decoder hierarchies are implemented along with use of hierarchies of codes with locality (e.g., larger code length of a hierarchy is composed of local codes from a lower hierarchy).
  • methods and systems for data transfer via a communication channel according to the independent claims are provided.
  • a method for data transfer via a communication channel comprises determining, in a first data processing unit, a codeword from a message using a channel code and sending the codeword via the communication channel.
  • the channel code comprises an outer code concatenated with an inner code;
  • the outer code is one of a Reed-Solomon code, a folded Reed-Solomon code, a twisted Reed-Solomon code, and a generalized Reed-Solomon code;
  • the inner code is a neural network code comprising a neural encoder-decoder pair which includes an encoding neural network and a decoding neural network; a nonlinear channel and/or a noisy channel is arranged between the encoding neural network and the decoding neural network;
  • the neural encoder-decoder pair has been adapted such that the decoding neural network provides an estimated outer codeword symbol for an input outer codeword symbol by training the neural encoder-decoder pair using a training data set which comprises a plurality of outer codewords determined from a plurality of input messages by an outer code encoder. Each outer codeword symbol of the plurality of outer codewords is used as an input to the encoding neural network.
  • a method for data transfer via a communication channel comprises receiving, in a second data processing unit, a noisy version of a codeword that has been transmitted via the communication channel and determining an estimated message from the noisy version of the codeword using a channel code.
  • the channel code comprises an outer code concatenated with an inner code;
  • the outer code is one of a Reed-Solomon code, a folded Reed-Solomon code, a twisted Reed-Solomon code, and a generalized Reed-Solomon code;
  • the inner code is a neural network code comprising a neural encoder-decoder pair which includes an encoding neural network and a decoding neural network; a nonlinear channel and/or a noisy channel is arranged between the encoding neural network and the decoding neural network;
  • the neural encoder-decoder pair has been adapted such that the decoding neural network provides an estimated outer codeword symbol for an input outer codeword symbol by training the neural encoder-decoder pair using a training data set which comprises a plurality of outer codewords determined from a plurality of input messages by an outer code encoder. Each outer codeword symbol of the plurality of outer codewords is used as an input to the encoding neural network.
  • a system for data transfer via a communication channel comprises a first data processing unit and is configured to determine, in the first data processing unit, a codeword from a message using a channel code and to send the codeword via the communication channel.
  • the channel code comprises an outer code concatenated with an inner code;
  • the outer code is one of a Reed-Solomon code, a folded Reed-Solomon code, a twisted Reed-Solomon code, and a generalized Reed-Solomon code;
  • the inner code is a neural network code comprising a neural encoder-decoder pair which includes an encoding neural network and a decoding neural network; a nonlinear channel and/or a noisy channel is arranged between the encoding neural network and the decoding neural network;
  • the neural encoder-decoder pair has been adapted such that the decoding neural network provides an estimated outer codeword symbol for an input outer codeword symbol by training the neural encoder-decoder pair using a training data set which comprises a plurality of outer codewords determined from a plurality of input messages by an outer code encoder. Each outer codeword symbol of the plurality of outer codewords is used as an input to the encoding neural network.
  • a system for data transfer via a communication channel comprises a second data processing unit and is configured to receive, in the second data processing unit, a noisy version of a codeword that has been transmitted via the communication channel and to determine an estimated message from the noisy version of the codeword using a channel code.
  • the channel code comprises an outer code concatenated with an inner code;
  • the outer code is one of a Reed-Solomon code, a folded Reed-Solomon code, a twisted Reed-Solomon code, and a generalized Reed-Solomon code;
  • the inner code is a neural network code comprising a neural encoder-decoder pair which includes an encoding neural network and a decoding neural network; a nonlinear channel and/or a noisy channel is arranged between the encoding neural network and the decoding neural network;
  • the neural encoder-decoder pair has been adapted such that the decoding neural network provides an estimated outer codeword symbol for an input outer codeword symbol by training the neural encoder-decoder pair using a training data set which comprises a plurality of outer codewords determined from a plurality of input messages by an outer code encoder. Each outer codeword symbol of the plurality of outer codewords is used as an input to the encoding neural network.
  • using a training data set that comprises the outer codewords determined from input messages by the outer code encoder, as opposed to, e.g., a uniform sampling of input symbols, allows for specific optimization of the inner neural network code.
  • a channel code may be employed for data transfer which comprises a specific classic outer code for a given neural superchannel - comprising a chain of a neural network encoder, a channel, and a neural network decoder - such that the blocklength and code dimension of the inner neural network code can be extended linearly with the parameters of the outer code, while the complexity increases only algebraically.
  • RS codes are maximum distance separable (MDS) codes defined over a large Galois field.
  • while the inner code may provide a high error-correction capability at a small code dimension, remaining errors (and possibly also erasures) may be corrected by a high-rate non-binary outer code with a low-complexity decoder.
  • the employed outer codes, in particular RS codes, may protect against bursty errors caused by, e.g., memory effects in the noisy channel, and are thus especially advantageous for noisy channels with a channel memory that is less than the blocklength of the inner neural network code.
  • a boldface lower-case letter, such as x, may represent a vector, in particular a vector of random variable realizations, with elements x_i.
  • a boldface upper case letter X may represent a matrix of realizations.
  • F q denotes a Galois field with q elements, where q is a prime power.
  • a range indicated by the phrase "between x and y" includes the boundary points x and y.
  • in case a quantity is used as an input to (output from) a neural network, the quantity may be an input to (output from) the neural network either directly or indirectly.
  • the quantity may be a direct input (output) to the neural network, i.e., there may be no further intermediate quantity that is determined from the quantity (from which the quantity is determined) and which is subsequently (previously) used as input to (output from) the neural network.
  • the quantity may also be an indirect input to (output from) the neural network.
  • each outer codeword symbol may be a direct or an indirect input to the encoding neural network.
  • the training may comprise optimizing an encoding parameter vector of the encoding neural network (inner code encoder) and/or optimizing a decoding parameter vector of the decoding neural network (inner code decoder).
  • the encoding parameter vector may comprise a plurality of encoding weight matrices and/or a plurality of encoding bias vectors. Each of the encoding weight matrices and/or encoding bias vectors may be assigned to a layer of the encoding neural network.
  • the decoding parameter vector may comprise a plurality of decoding weight matrices and/or a plurality of decoding bias vectors. Each of the decoding weight matrices and/or decoding bias vectors may be assigned to a layer of the decoding neural network.
  • the encoding parameter vector and/or the decoding parameter vector may be determined by a gradient-descent method.
  • the encoding neural network and the decoding neural network may, e.g., be trained alternatingly.
  • Training may be carried out using a loss function that is configured to minimize an error probability of the concatenated code.
  • the loss function may be a categorical cross-entropy loss or a binary cross-entropy loss.
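  • For illustration, a minimal numpy sketch of a categorical cross-entropy loss between one-hot encoder inputs and softmax decoder outputs is given below; the function name and array shapes are illustrative assumptions, not taken from the patent.

    import numpy as np

    def categorical_cross_entropy(s_true, s_hat, eps=1e-12):
        # s_true: one-hot encoder inputs, shape (batch, 2**k1)
        # s_hat: softmax decoder outputs, same shape
        s_hat = np.clip(s_hat, eps, 1.0)  # guard against log(0)
        return -np.mean(np.sum(s_true * np.log(s_hat), axis=1))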
  • the training may comprise feedback-based neural learning, such as reinforcement learning, and/or mutual-information-estimation-based neural learning and/or generative adversarial network (GAN) based learning.
  • the neural encoder-decoder pair may correspond to an (over-complete) autoencoder, in particular with a noisy channel in between.
  • Reinforcement learning may for example be carried out using a policy gradient method.
  • the policy gradient method may comprise perturbing an encoding neural network output for determining a gradient estimate in order to train the encoding neural network.
  • Employing the outer code may reduce the required amount of channel simulations, especially at high signal-to-noise ratios, as compared to end-to-end neural network code constructions, since the error correction capability of the outer (classic) code may already provide a coarse target symbol error probability for the small inner neural network code, given a (compound) channel model.
  • Determining the training data set may comprise randomly determining a plurality of input messages, preferably with a fixed message length, e.g., by random sampling.
  • the random determining may, e.g., comprise uniform random sampling.
  • Determining the training data set may further comprise determining the plurality of outer codewords by encoding the plurality of input messages with the outer code encoder, yielding, in particular, each of the outer codeword symbols of the plurality of outer codewords. Each outer codeword symbol of each codeword of the plurality of outer codewords may be used as an input to the encoding neural network.
  • the training may comprise determining a plurality of binary vectors from the plurality of outer codeword symbols, preferably by one-hot encoding each of the plurality of outer codeword symbols (provided as an input to the encoding neural network).
  • the plurality of binary vectors may also be determined from the plurality of outer codeword symbols by binary encoding each of the plurality of outer codeword symbols.
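  • A minimal numpy sketch of the one-hot encoding step is shown below; the function name one_hot and the symbol alphabet {0, ..., q-1} are illustrative assumptions.

    import numpy as np

    def one_hot(symbols, q):
        # Map outer codeword symbols from {0, ..., q-1} to one-hot
        # binary vectors of length q (q = 2**k1 possible symbols).
        symbols = np.asarray(symbols)
        vectors = np.zeros((symbols.size, q), dtype=np.uint8)
        vectors[np.arange(symbols.size), symbols] = 1
        return vectors

    # e.g., for a (255, 223) Reed-Solomon code over GF(256): q = 256;
    # one_hot([0, 7, 255], 256) yields three rows of length 256, each with a single 1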
  • the training may further comprise determining a plurality of normalized vectors from a plurality of encoder output vectors by subjecting each of the plurality of encoder output vectors to a power constraint, preferably a block power constraint.
  • the plurality of normalized vectors may be determined by subtracting a mean value from each of the plurality of encoder output vectors and subsequently dividing by the square root of a variance value.
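  • A minimal sketch of this normalization, assuming the mean and variance are taken over one codeword block (whether the statistics are computed per block or per batch is a design choice not fixed here):

    import numpy as np

    def normalize_block(x, eps=1e-12):
        # Block power constraint: zero mean and unit average power.
        return (x - x.mean()) / np.sqrt(x.var() + eps)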
  • the training may comprise determining a plurality of noisy vectors from a plurality of intermediate vectors by subjecting the plurality of intermediate vectors, preferably each of the plurality of intermediate vectors, to noise, wherein the plurality of intermediate vectors may result from the encoding neural network.
  • the plurality of intermediate vectors and/or the plurality of noisy vectors may be used as an input of the decoding neural network.
  • the plurality of intermediate vectors may be subjected to noise via the nonlinear and/or noisy channel.
  • Each intermediate vector may be one of the plurality of encoder output vectors. Each intermediate vector may also be one of the plurality of normalized vectors.
  • the training may comprise determining the plurality of noisy vectors from the plurality of normalized vectors by subjecting the plurality of normalized vectors, preferably each of the plurality of normalized vectors, to noise.
  • the training may also comprise determining the plurality of noisy vectors from the plurality of encoder output vectors, by subjecting the plurality of encoder output vectors, preferably each of the plurality of encoder output vectors, to noise.
  • the noise may be independent and identically distributed (i.i.d.).
  • the noise may have a Gaussian distribution, in particular with zero mean and/or a variance proportional to a noise power per positive frequency.
  • the noise may be additive, in particular additive white Gaussian noise (AWGN).
  • the noise may also have any other distribution associated with a communication model.
  • the noise may comprise Rayleigh channel noise and/or burst-error channel noise.
  • the training may comprise performing a plurality of training trials, for each of which the noise comprises a different one of a plurality of noise levels. For example, in a first training trial, the noise may have a first noise level and in a second training trial, the noise may have a second noise level different from the first noise level. Further, in a third training trial, the noise may have a third noise level different from the first and the second noise level. An entire training set may be processed for each training trial.
  • the plurality of noise levels may comprise energy per bit to noise power spectral density ratios (signal-to-noise ratio per bit values) E_b/N_0 from 1.0 dB to 3.5 dB and/or from 4.0 dB to 7.5 dB and/or from 8.0 dB to 15.0 dB.
  • the plurality of noise levels may comprise signal-to-noise ratio per bit values E_b/N_0 from 2.5 dB to 3.5 dB and/or from 4.5 dB to 5.5 dB and/or from 9.5 dB to 10.5 dB.
  • the plurality of noise levels may comprise signal-to-noise ratio per bit values E_b/N_0 from 2.9 dB to 3.1 dB and/or from 4.9 dB to 5.1 dB and/or from 9.9 dB to 10.1 dB.
  • E_b denotes the average signal energy per information bit and N_0 denotes the noise power per positive frequency.
  • At least one of the plurality of noise levels may be due to one of an AWGN channel, a Rayleigh fast fading channel, and a bursty channel.
  • the plurality of noise levels may for example be selected such that a symbol error rate between 0.001 and 0.1, preferably between 0.005 and 0.05, is achieved.
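  • For AWGN training trials, the listed E_b/N_0 values must be converted to a noise standard deviation. One common convention for a real-valued channel with unit average signal power and overall code rate R is sigma^2 = 1/(2·R·E_b/N_0); the sketch below assumes this convention, with an overall rate of (223/255)·(8/12) as an illustrative example.

    import numpy as np

    def awgn_sigma(ebno_db, code_rate):
        # Noise std for a real AWGN channel with unit signal power,
        # using sigma**2 = 1 / (2 * R * Eb/N0).
        ebno = 10.0 ** (ebno_db / 10.0)
        return np.sqrt(1.0 / (2.0 * code_rate * ebno))

    for ebno_db in (3.0, 5.0, 10.0):  # example training noise levels
        sigma = awgn_sigma(ebno_db, code_rate=(223 / 255) * (8 / 12))
        # noisy = normalized + sigma * np.random.randn(*normalized.shape)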
  • the training may comprise determining a plurality of decoder output vectors from the plurality of noisy vectors by applying the decoding neural network to the plurality of noisy vectors.
  • the estimated message may be determined from the plurality of noisy vectors and/or decoding neural network outputs by using an outer code decoder, preferably a list decoder, which more preferably uses soft information obtained from the decoding neural network.
  • the list decoder may for example be a Guruswami-Sudan list decoder for RS codes.
  • the outer code decoder may be an errors-only decoder that may preferably correct only outer codeword symbol errors.
  • the outer code decoder may also be an errors-and-erasures decoder that may preferably correct both outer codeword symbol errors and symbol erasures.
  • the errors-and-erasures decoder may require a thresholding step for the decoding neural network, comprising setting a neural decoder output vector component to an erasure symbol if the largest confidence value is below a predefined symbol confidence threshold.
  • the decoding neural network output may take any value between 0 and 1.
  • the symbol confidence threshold may, for example, be 0.5 for the highest value of the softmax function output; for a sigmoid function output, an appropriately chosen value may be used. Confidence values may vary in correspondence with the specific signal-to-noise characteristic of the system and may accordingly be optimized.
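  • A minimal sketch of this thresholding step, assuming softmax decoder outputs of shape (batch, 2**k1); the sentinel value for erasures is an illustrative choice:

    import numpy as np

    ERASURE = -1  # illustrative sentinel for an erased symbol

    def decide_symbols(s_hat, threshold=0.5):
        # argmax gives the symbol estimate; if the largest confidence
        # falls below the threshold, flag the symbol as an erasure.
        estimates = np.argmax(s_hat, axis=1)
        confident = np.max(s_hat, axis=1) >= threshold
        return np.where(confident, estimates, ERASURE)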
  • a single encoding neural network may be used serially/sequentially by providing only one encoding neural network (e.g., by hardware implementation), in particular one neural encoder-decoder pair.
  • multiple copies of the (same) encoding neural network, in particular multiple copies of the (same) neural encoder-decoder pair, may be provided such that each of the copies of the encoding neural network (in particular, of the neural encoder-decoder pair) receives one of the outer codeword symbols.
  • Each of the copies of the encoding neural network (or the neural encoder-decoder pair) may have the same encoding neural network parameters (and/or decoding neural network parameters).
  • a more compact and cost-efficient design may be achieved.
  • computations may be parallelized so that the total computation time may be reduced.
  • the number of (implemented) encoding neural networks may be equal to the number of decoding neural networks.
  • the number of encoding neural networks may be different from (e.g., greater or less than) the number of decoding neural networks.
  • the complexity for encoding and decoding may thus be optimized separately since the power constraint may be applied to an entire codeword (of length n_1 · n_2) in between the encoding neural network(s) and decoding neural network(s), which may result in a complete separation of neural encoding and decoding.
  • the outer code may have an outer code dimension between 127 and 4095, preferably 223, and/or the inner code may have an inner code dimension between 3 and 27, preferably between 6 and 12, more preferably 4 or 8.
  • the outer code may have an outer blocklength between 127 and 4096, preferably 255, and/or the inner code may have an inner blocklength between 8 and 24, preferably 12.
  • the inner code dimension may for example be smaller than the outer code dimension.
  • the inner code dimension may be the binary logarithm of the number of possible outer codeword symbols; for example, 256 possible outer codeword symbols yield an inner code dimension of log2(256) = 8.
  • the outer codeword symbols may be non-binary.
  • the number of possible outer codeword symbols (which is the cardinality of the set of possible symbols) may be 8, 9, 16, 27, 32, 64, 81, 128, 256, 512 or 1024.
  • Each possible outer codeword symbol may be represented as a binary or ternary string that may be given as an input sequence to the encoding neural network.
  • the outer codeword symbols may in particular be from a Galois field with a field size being a power of two with an integer exponent greater than one.
  • An input size of the encoding neural network may be equal to said integer exponent.
  • Each of the outer codeword symbols from each of the plurality of outer codewords may be used as an input to the encoding neural network, in particular after representing the symbol as a binary or ternary string.
  • the outer codeword symbols (of different codewords of the plurality of outer codewords) may be interleaved by an interleaver, preferably before being used as an input to the encoding neural network.
  • the outer codeword symbols from the plurality of outer codewords used (when a plurality of transmissions is combined) as an input to the encoding neural network may thus result from different codewords of the plurality of outer codewords.
  • the interleaver may be a block interleaver, preferably a row-column block interleaver.
  • the interleaving may further comprise determining the outer codeword symbols from the plurality of outer codewords used (for one pass) as an input to the inner encoding neural network from matrix columns (or matrix rows) of the interleaving matrix.
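  • A minimal sketch of a row-column block interleaver: outer codewords are written as matrix rows and read out column by column, so each inner-code input mixes symbols from different outer codewords. Function names are illustrative.

    import numpy as np

    def interleave(codewords):
        # rows: outer codewords of length n2; the columns of the
        # resulting matrix are fed to the inner (neural) encoder.
        return np.asarray(codewords).T

    def deinterleave(columns):
        # inverse operation: transpose back to outer codewords.
        return np.asarray(columns).T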
  • the neural encoder-decoder pair may have been adapted and/or trained in a third data processing unit.
  • the third data processing unit may be different from the first data processing unit and/or the second data processing unit.
  • the neural encoder-decoder pair may also have been adapted and/or trained in the first data processing unit and/or the second data processing unit.
  • the neural encoder-decoder pair may have been adapted and/or trained before and/or independently from determining, in the first data processing unit, the codeword to be sent.
  • the neural encoder-decoder pair may have been adapted and/or trained before and/or independently from receiving, in the second data processing unit, the noisy version of the codeword transmitted via the communication channel.
  • the neural encoder-decoder pair may have been adapted and/or trained before and/or independently from determining the estimated message from the noisy version of the codeword by using the channel code.
  • the first data processing unit may be a first data processing device.
  • the second data processing unit may be a second data processing device.
  • the third data processing unit may be a third data processing device.
  • the first data processing unit and the second data processing unit may also be part of a common data processing device, such as a coded data storage device.
  • the third data processing unit may also be a part of the common data processing device.
  • At least one or each of the data processing devices may, e.g., be a computer.
  • At least one of or each of the data processing units may comprise a (volatile and/or non-volatile) memory.
  • the (physical) communication channel may comprise a wired channel and/or a wireless channel.
  • the communication channel may for example comprise at least one of an optical fiber, a Wi-Fi channel, a radio channel or a storage device channel.
  • Determining the codeword from the message using the channel code may comprise encoding the message with the channel code. Determining the codeword may in particular comprise determining a first intermediate (outer) codeword by encoding the message (to the first intermediate codeword) using the outer code. Determining the codeword may further comprise encoding the first intermediate codeword using the inner code (in particular, using the encoding neural network), which preferably has been adapted/trained using the training data set.
  • Sending (transmitting) the codeword via the communication channel may comprise sending/transmitting the codeword from the first data processing unit, in particular by a first transceiver unit.
  • the codeword emitted via/to the communication channel may yield a noisy version, which is received at the second data processing unit.
  • the noisy version of the codeword may for example be yielded due to noise in the communication channel.
  • the codeword and the noisy version of the codeword may be different from each other or, depending on the noise power/level, be the same.
  • the codeword may be stored in the first data processing unit/device.
  • the noisy version of the codeword may be stored in the second data processing unit/device.
  • Receiving the noisy version of the codeword may comprise receiving the noisy version of the codeword by a second transceiver unit of the second data processing unit.
  • Determining the estimated message from the noisy version of the codeword using the channel code may comprise decoding the noisy version of the codeword with the channel code.
  • Determining the estimated message may in particular comprise determining a second intermediate codeword by decoding the noisy version of the codeword using the inner code (in particular, using the decoding neural network), which preferably has been adapted/trained using the training data set. Determining the estimated message may further comprise decoding the second intermediate codeword (to the estimated message) using the outer code.
  • the first intermediate codeword may be interleaved with further first intermediate codewords from further messages, preferably before being encoded using the inner code.
  • the second intermediate codeword may be de-interleaved, preferably before being decoded using the outer code.
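  • A schematic sketch of the encoding and decoding pipeline described above; outer_encode/outer_decode stand in for a Reed-Solomon codec and enc_nn/dec_nn for the trained neural encoder/decoder. All names are illustrative placeholders, not taken from the patent.

    def transmit(message, outer_encode, enc_nn, channel):
        outer_cw = outer_encode(message)                # first intermediate (outer) codeword
        sent = [enc_nn(symbol) for symbol in outer_cw]  # inner (neural) encoding per symbol
        return [channel(x) for x in sent]               # noisy version of the codeword

    def receive(noisy, dec_nn, outer_decode):
        second_cw = [dec_nn(y) for y in noisy]          # second intermediate codeword
        return outer_decode(second_cw)                  # estimated message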
  • the embodiments described above in connection with the methods for data transfer via a communication channel may be provided correspondingly for the systems for data transfer via a communication channel.
  • the decoding described for the second data processing unit may also correspondingly be carried out in the first data processing unit.
  • the encoding described for the first data processing unit may also correspondingly be carried out in the second data processing unit.
  • the system may comprise a first data processing unit 10.
  • the first data processing unit 10 may be a first data processing device with a first processor 10a, a first memory unit 10b, and a first transceiver unit 10c.
  • the first transceiver unit 10c is configured to send/emit and/or receive signals via the communication channel 11.
  • the communication channel 11 may or may not be part of the system.
  • the system may comprise a second data processing unit 12.
  • the second data processing unit 12 may be a second data processing device with a second processor 12a, a second memory unit 12b, and a second transceiver unit 12c.
  • the second transceiver unit 12c is configured to send/emit and/or receive signals via the communication channel 11.
  • the third data processing unit may be a third data processing device with a third processor, a third memory unit and, optionally, a third transceiver unit.
  • a codeword is determined in the first data processing unit 10 from an (initial) message using a channel code.
  • a first intermediate codeword is determined from the message by encoding the message with an outer code.
  • the codeword is determined from the first intermediate codeword by encoding the first intermediate codeword using an inner code.
  • the inner code is a neural network code comprising a neural encoder-decoder pair with a channel between them that has been adapted to provide a reliable estimate of the transmitted message by training the neural encoder-decoder pair.
  • the codeword is sent from the first data processing unit 10 to the second data processing unit 12 via the communication channel 11, resulting in a noisy version of the codeword, which is received at the second data processing unit 12.
  • an estimate of the message is determined in the second data processing unit 12 using the decoder of the channel code.
  • a second intermediate codeword is determined from the noisy version of the codeword by decoding the noisy version of the codeword using the decoder of the (trained) inner code.
  • the estimated message is determined from the second intermediate codeword by decoding the second intermediate codeword using the decoder of the outer code. In case of successful error correction, the (initial) message and the estimated message coincide and one can say that there was no block error.
  • a graphical representation of a neural encoder-decoder pair - the encoder and decoder of an inner neural network code, with a channel between them - is shown together with an outer code input/output.
  • Training the neural encoder-decoder pair may be carried out as follows. Steps of the training may model and/or correspond to the respective steps of message transmission / data transfer.
  • a training data set may be determined by randomly sampling (e.g., according to a uniform distribution for all message symbols) a plurality of input messages with a fixed message length k_2, where k_2 is a positive natural number. Subsequently, a plurality of outer codewords / outer code sequences of length n_2, where n_2 is a positive natural number larger than or equal to k_2, is determined by encoding the plurality of input messages with an outer code encoder.
  • Each symbol of the outer codeword may be represented as a binary vector s ∈ F_2^(2^(k_1)), where k_1 is a positive natural number.
  • Each binary vector s is determined by one-hot encoding one of the plurality of outer codeword symbols of a codeword.
  • the outer code may for example be a Reed-Solomon code.
  • the outer code is not a neural network code (i.e., the outer code is a "classic code").
  • an encoder output vector x_θ ∈ R^(n_1) is determined from each binary vector s by applying a (parametrized) encoding function f_θ, which represents an encoding neural network (inner code encoder) 21 as part of an (n_1, k_1) neural network code.
  • Each affine map F_l represents the l-th hidden layer of the encoding neural network 21 and comprises an (encoding) weight matrix W_l and an (encoding) bias vector b_l.
  • Each of the non-linear activation functions σ_1, ..., σ_(L-1) may for example be the rectified linear unit (ReLU) activation function or any of its variants, including Leaky ReLU, randomized leaky ReLU, parametric leaky ReLU, or newer variants such as the exponential linear unit (ELU) or scaled ELU. Other activation functions, such as, e.g., the softplus function, may also be provided.
  • the number of encoder hidden layers L of the encoding neural network may for example be between 1 and 10, preferably 2.
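  • A minimal numpy sketch of this forward pass f_θ, assuming ReLU hidden activations and a linear output layer; weight shapes and function names are illustrative.

    import numpy as np

    def relu(z):
        return np.maximum(z, 0.0)

    def encode(s, weights, biases):
        # Stack of affine maps F_l(h) = W_l @ h + b_l with ReLU on the
        # hidden layers; the final layer is linear and yields x in R^n1.
        h = s.astype(float)
        for W, b in zip(weights[:-1], biases[:-1]):
            h = relu(W @ h + b)
        return weights[-1] @ h + biases[-1]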
  • the loss function takes as input the input values of the encoding neural network and the output values of the decoding neural network. The loss function therefore calculates how far, in cross-entropy terms, the neural decoder output is from the neural encoder input.
  • the random determining of the plurality of input messages results in a corresponding distribution of encoder output vectors x_θ.
  • a noisy vector y is sampled. This corresponds to the normalized vectors x̄_θ passing through a nonlinear channel and/or noisy channel 22.
  • a (neural) decoder output vector ŝ ∈ [0,1]^(2^(k_1)) is determined from each noisy vector y by applying a decoding function g_θ', which represents a decoding neural network (inner code decoder) 23, and subsequently applying, e.g., a softmax function.
  • the decoder output vector ŝ is restricted to values between 0 and 1 for every element of the one-hot encoded input vector, and all elements sum to 1. Due to these properties, the values of the decoder output vector ŝ can be regarded as confidence values that a certain input element was active (i.e., has a 1), while all other input elements are non-active (zero).
  • the index of the element with the largest confidence value represents the estimated symbol. Moreover, if the largest confidence value of a certain decoder output falls below a certain threshold, the symbol transmission is discarded by flagging the symbol as an erasure.
  • Each decoding affine map G_l represents the l-th (decoder) hidden layer of the decoding neural network 23 and comprises a (decoding) weight matrix W'_l and a (decoding) bias vector b'_l.
  • Each of the decoder non-linear activation functions σ'_1, ..., σ'_(L'-1) may for example be the rectified linear unit activation function or any of its variants, including Leaky ReLU, randomized leaky ReLU, parametric leaky ReLU, or newer variants such as the exponential linear unit (ELU) or scaled ELU. Other activation functions, such as, e.g., the softplus function, may also be provided.
  • the number of decoder hidden layers L' may for example be between 1 and 10, preferably 2.
  • the decoder output vector ŝ can be interpreted as a vector of probabilities in which the j-th element is the estimated probability that the j-th outer codeword symbol was transmitted (j ∈ {1, 2, ..., 2^(k_1)}).
  • the optimizer may for example comprise stochastic gradient descent or Adam with Nesterov momentum (NAdam).
  • the neural encoder-decoder pair is optimized over the outer code sequences.
  • the neural encoder-decoder pair may be adapted/trained such that for input outer code sequences, corresponding estimated outer code sequences are provided and the loss function is minimized by the neural encoder-decoder pair.
  • an estimated message may be provided for each input message.
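  • The steps above can be combined into a short training loop. The PyTorch sketch below is a simplified illustration: the layer widths, batch size, and noise level are arbitrary choices, and the uniformly drawn symbols merely stand in for the outer codeword symbols that the training set described above actually uses.

    import torch
    import torch.nn as nn

    k1, n1 = 8, 12
    q = 2 ** k1  # number of possible outer codeword symbols

    encoder = nn.Sequential(nn.Linear(q, 64), nn.ReLU(),
                            nn.Linear(64, 64), nn.ReLU(),
                            nn.Linear(64, n1))
    decoder = nn.Sequential(nn.Linear(n1, 64), nn.ReLU(),
                            nn.Linear(64, 64), nn.ReLU(),
                            nn.Linear(64, q))  # logits; softmax is applied inside the loss

    opt = torch.optim.NAdam(list(encoder.parameters()) + list(decoder.parameters()))
    loss_fn = nn.CrossEntropyLoss()

    for step in range(1000):
        symbols = torch.randint(0, q, (512,))  # stand-in for outer codeword symbols
        s = nn.functional.one_hot(symbols, q).float()
        x = encoder(s)
        x = (x - x.mean()) / x.std()           # block power constraint
        y = x + 0.5 * torch.randn_like(x)      # AWGN channel, illustrative sigma
        loss = loss_fn(decoder(y), symbols)
        opt.zero_grad()
        loss.backward()
        opt.step()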
  • Fig. 3 shows a graphical representation of a channel code comprising an outer code concatenated with an inner code.
  • each outer codeword symbol is represented by a binary vector s with 256 bits, in particular by one of (0,...,0,1), (0,...,0,1,0), ..., (0,1,0,...,0), and (1,0,...,0).
  • 255 noisy vectors y, each comprising 12 bits, are obtained.
  • the interleaver may for example be a row-column block interleaver.
  • the interleaver outputs one column of the interleaver matrix, comprising 255 outer codeword symbols. This action is reversed by the deinterleaver before the outer code decoder 32.
  • An errors-and-erasures decoder may be provided by applying a thresholding algorithm to the softmax outputs of the decoding neural network 23 and determining a symbol as erased if its confidence is below a confidence threshold, which may yield further BLER (block error rate) gains.
  • while plain polar codes and plain LDPC (low-density parity-check) codes may result in smaller BLER values for the AWGN channel at the same E_b/N_0 level, these codes may not be robust to changes in the channel parameters and channel model.
  • the proposed channel code thus also provides advantages for, e.g., fading and bursty channels.
  • Fig. 3 is only an example.
  • the number of processed symbols may be larger or smaller than 255.
  • the number of symbols each encoding neural network 21 (and decoding neural network 23) may receive, process, and output may be larger than one. In other words, each encoding neural network 21 and each decoding neural network 23 may receive, process, and output one or more symbols.
  • the proposed method allows for employing higher blocklengths, in particular above 256, as well as higher code rates compared to general neural encoder-decoder code designs.
  • the latter are generally more restricted due to the huge training and computation complexity of encoding and decoding, which corresponds to the neural networks becoming very large.
  • high blocklengths and code rates can be achieved together with a small complexity, which may be even lower than the decoding complexity of polar and LDPC codes.

Claims (15)

  1. A method for data transfer via a communication channel (11), comprising:
    determining, in a first data processing unit (10), a codeword from a message using a channel code and sending the codeword via the communication channel (11),
    wherein the channel code comprises an outer code concatenated with an inner code;
    wherein the outer code is one of the following: a Reed-Solomon code, a folded Reed-Solomon code, a twisted Reed-Solomon code, and a generalized Reed-Solomon code;
    wherein the inner code is a neural network code comprising a neural encoder-decoder pair which comprises an encoding neural network (21) and a decoding neural network (23);
    wherein a nonlinear channel and/or a noisy channel (22) is arranged between the encoding neural network (21) and the decoding neural network (23); and
    wherein the neural encoder-decoder pair has been adapted such that the decoding neural network (23) provides an estimated outer codeword symbol for an input outer codeword symbol, by training the neural encoder-decoder pair using a training data set which comprises a plurality of outer codewords determined from a plurality of input messages by an outer code encoder (31),
    wherein each outer codeword symbol of the plurality of outer codewords is used as an input to the encoding neural network (21).
  2. The method according to claim 1,
    wherein the training comprises determining a plurality of binary vectors from the outer codeword symbols, preferably by one-hot encoding each of the outer codeword symbols.
  3. The method according to any one of the preceding claims,
    wherein the training comprises determining a plurality of normalized vectors from a plurality of encoder output vectors by subjecting each of the plurality of encoder output vectors to a power constraint.
  4. The method according to any one of the preceding claims,
    wherein the training comprises determining a plurality of noisy vectors from a plurality of intermediate vectors by subjecting the plurality of intermediate vectors to noise,
    wherein the plurality of intermediate vectors results from the encoding neural network (21) and is used as an input of the decoding neural network (23).
  5. The method according to claim 4,
    wherein the training comprises performing a plurality of training trials, for each of which the noise comprises a different noise level of a plurality of noise levels.
  6. The method according to claim 4 or 5,
    wherein the training comprises determining a plurality of decoder output vectors from the plurality of noisy vectors by applying the decoding neural network (23) to the plurality of noisy vectors.
  7. The method according to any one of the preceding claims, further comprising:
    determining an estimated message from the plurality of noisy vectors using an outer code decoder, preferably a list decoder.
  8. The method according to any one of the preceding claims,
    wherein the outer codeword symbols are input serially or in parallel to the encoding neural network (21).
  9. The method according to any one of the preceding claims,
    wherein the outer code has an outer code dimension between 127 and 1023, preferably 223, and/or the inner code has an inner code dimension between 3 and 27, preferably 4 or 8.
  10. The method according to any one of the preceding claims,
    wherein the outer codeword symbols are non-binary.
  11. The method according to any one of the preceding claims,
    wherein the outer codeword symbols of different codewords of the plurality of outer codewords are interleaved by an interleaver, preferably before being used as an input to the encoding neural network (21).
  12. The method according to claim 11,
    wherein the interleaver is a block interleaver, preferably a row-column block interleaver.
  13. A method for data transfer via a communication channel (11), comprising:
    receiving, in a second data processing unit (12), a noisy version of a codeword that has been transmitted via the communication channel (11), and determining an estimated message from the noisy version of the codeword using a channel code,
    wherein the channel code comprises an outer code concatenated with an inner code;
    wherein the outer code is one of the following: a Reed-Solomon code, a folded Reed-Solomon code, a twisted Reed-Solomon code, and a generalized Reed-Solomon code;
    wherein the inner code is a neural network code comprising a neural encoder-decoder pair which comprises an encoding neural network (21) and a decoding neural network (23);
    wherein a nonlinear channel and/or a noisy channel (22) is arranged between the encoding neural network (21) and the decoding neural network (23); and
    wherein the neural encoder-decoder pair has been adapted such that the decoding neural network (23) provides an estimated outer codeword symbol for an input outer codeword symbol, by training the neural encoder-decoder pair using a training data set which comprises a plurality of outer codewords determined from a plurality of input messages by an outer code encoder (31),
    wherein each outer codeword symbol of the plurality of outer codewords is used as an input to the encoding neural network (21).
  14. A system for data transfer via a communication channel (11), the system comprising a first data processing unit (10) and being configured to:
    determine, in the first data processing unit (10), a codeword from a message using a channel code and send the codeword via the communication channel (11),
    wherein the channel code comprises an outer code concatenated with an inner code;
    wherein the outer code is one of the following: a Reed-Solomon code, a folded Reed-Solomon code, a twisted Reed-Solomon code, and a generalized Reed-Solomon code;
    wherein the inner code is a neural network code comprising a neural encoder-decoder pair which comprises an encoding neural network (21) and a decoding neural network (23);
    wherein a nonlinear channel and/or a noisy channel (22) is arranged between the encoding neural network (21) and the decoding neural network (23); and
    wherein the neural encoder-decoder pair has been adapted such that the decoding neural network (23) provides an estimated outer codeword symbol for an input outer codeword symbol, by training the neural encoder-decoder pair using a training data set which comprises a plurality of outer codewords determined from a plurality of input messages by an outer code encoder (31),
    wherein each outer codeword symbol of the plurality of outer codewords is used as an input to the encoding neural network (21).
  15. A system for data transfer via a communication channel (11), the system comprising a second data processing unit (12) and being configured to:
    receive, in the second data processing unit (12), a noisy version of a codeword that has been transmitted via the communication channel (11), and determine an estimated message from the noisy version of the codeword using a channel code,
    wherein the channel code comprises an outer code concatenated with an inner code;
    wherein the outer code is one of the following: a Reed-Solomon code, a folded Reed-Solomon code, a twisted Reed-Solomon code, and a generalized Reed-Solomon code;
    wherein the inner code is a neural network code comprising a neural encoder-decoder pair which comprises an encoding neural network (21) and a decoding neural network (23);
    wherein a nonlinear channel and/or a noisy channel (22) is arranged between the encoding neural network (21) and the decoding neural network (23); and
    wherein the neural encoder-decoder pair has been adapted such that the decoding neural network (23) provides an estimated outer codeword symbol for an input outer codeword symbol, by training the neural encoder-decoder pair using a training data set which comprises a plurality of outer codewords determined from a plurality of input messages by an outer code encoder (31),
    wherein each outer codeword symbol of the plurality of outer codewords is used as an input to the encoding neural network (21).
EP23194604.7A 2022-08-31 2023-08-31 Methods and systems for data transfer via a communication channel Active EP4333310B1 (de)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
LU502737A LU502737B1 (en) 2022-08-31 2022-08-31 Methods and systems for data transfer via a communication channel

Publications (2)

Publication Number Publication Date
EP4333310A1 EP4333310A1 (de) 2024-03-06
EP4333310B1 EP4333310B1 (de) 2025-10-29

Family

ID=84331792

Family Applications (1)

Application Number Title Priority Date Filing Date
EP23194604.7A Active EP4333310B1 (de) 2022-08-31 2023-08-31 Verfahren und systeme zur datenübertragung über einen kommunikationskanal

Country Status (2)

Country Link
EP (1) EP4333310B1 (de)
LU (1) LU502737B1 (de)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11387848B1 (en) * 2021-03-11 2022-07-12 Samsung Electronics Co., Ltd. Hierarchical error correction code

Also Published As

Publication number Publication date
LU502737B1 (en) 2024-02-29
EP4333310A1 (de) 2024-03-06

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC ME MK MT NL NO PL PT RO RS SE SI SK SM TR

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20240906

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC ME MK MT NL NO PL PT RO RS SE SI SK SM TR

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

RIC1 Information provided on ipc code assigned before grant

Ipc: G06N 3/048 20230101ALN20250506BHEP

Ipc: H03M 13/00 20060101ALI20250506BHEP

Ipc: G06N 3/08 20230101ALI20250506BHEP

Ipc: G06N 3/0455 20230101ALI20250506BHEP

Ipc: H03M 13/03 20060101ALI20250506BHEP

Ipc: H03M 13/27 20060101ALI20250506BHEP

Ipc: H03M 13/29 20060101AFI20250506BHEP

INTG Intention to grant announced

Effective date: 20250520

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

P01 Opt-out of the competence of the unified patent court (upc) registered

Free format text: CASE NUMBER: UPC_APP_5759_4333310/2025

Effective date: 20250903

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC ME MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: CH

Ref legal event code: F10

Free format text: ST27 STATUS EVENT CODE: U-0-0-F10-F00 (AS PROVIDED BY THE NATIONAL OFFICE)

Effective date: 20251029

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602023007973

Country of ref document: DE