CN115720707A - Training in a communication system - Google Patents

Training in a communication system

Info

Publication number
CN115720707A
Authority
CN
China
Prior art keywords
transmitter
receiver
weights
algorithm
transmission system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202080102550.8A
Other languages
Chinese (zh)
Inventor
F. Ait Aoudia
J. Hoydis
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nokia Technologies Oy
Original Assignee
Nokia Technologies Oy
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nokia Technologies Oy filed Critical Nokia Technologies Oy
Publication of CN115720707A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L25/00: Baseband systems
    • H04L25/02: Details; arrangements for supplying electrical power along data transmission lines
    • H04L25/03: Shaping networks in transmitter or receiver, e.g. adaptive shaping networks
    • H04L25/03006: Arrangements for removing intersymbol interference
    • H04L25/03343: Arrangements at the transmitter end
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G06N3/088: Non-supervised learning, e.g. competitive learning
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04B: TRANSMISSION
    • H04B7/00: Radio transmission systems, i.e. using radiation field
    • H04B7/02: Diversity systems; Multi-antenna systems, i.e. transmission or reception using multiple antennas
    • H04B7/04: Diversity systems using two or more spaced independent antennas
    • H04B7/06: Diversity systems using two or more spaced independent antennas at the transmitting station
    • H04B7/0613: Diversity systems using two or more spaced independent antennas at the transmitting station using simultaneous transmission
    • H04B7/0615: Diversity systems using two or more spaced independent antennas at the transmitting station using simultaneous transmission of weighted versions of same signal
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L25/00: Baseband systems
    • H04L25/02: Details; arrangements for supplying electrical power along data transmission lines
    • H04L25/03: Shaping networks in transmitter or receiver, e.g. adaptive shaping networks
    • H04L25/03006: Arrangements for removing intersymbol interference
    • H04L25/03165: Arrangements for removing intersymbol interference using neural networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns

Abstract

An apparatus, method and computer program are described, comprising: receiving at a receiver (46) of a transmission system (40) a transmission signal from each of a plurality of transmitters (42a, 42b, 42c), wherein each transmitter communicates with the receiver over one of a plurality of channels (44a, 44b, 44c) of the transmission system, wherein each transmitter comprises a transmitter algorithm having at least some trainable weights, wherein each transmitter algorithm has the same trainable weights and wherein each transmission signal is based on perturbed channel symbols generated at the respective transmitter, wherein the channel symbols and perturbations are known to the receiver; updating, at the receiver, the weights of the transmitter algorithm based on a loss function; providing the updated weights to each transmitter of the transmission system; and repeating the receiving and updating until a first condition is reached.

Description

Training in a communication system
Technical Field
The present description relates to training in a communications system, such as a communications system having trainable parameters.
Background
End-to-end communication systems comprising a transmitter, a channel and a receiver are known, wherein the transmitter and/or the receiver have trainable parameters. Although a variety of algorithms are known for training such systems, further development is still needed in this field.
Disclosure of Invention
In a first aspect, this specification describes an apparatus comprising means configured to: receiving, at a receiver of the transmission system, a transmission signal from each of a plurality of transmitters, wherein each transmitter communicates with the receiver over one of a plurality of channels of the transmission system, wherein each transmitter comprises a transmitter algorithm having at least some trainable weights, wherein each transmitter algorithm has the same trainable weights, and wherein each of the transmission signals is based on perturbed channel symbols generated at the respective transmitter, wherein the channel symbols and perturbations are known to the receiver; updating, at a receiver, the weights of the transmitter algorithm based on a loss function; providing (e.g., broadcasting) the updated weights to each transmitter of the transmission system; and repeating the receiving and updating until the first condition is reached. The apparatus may be a receiver of a communication system, e.g. a central node (such as a base station) communicating with a number of transmitters (such as user equipments). The receiver may include a receiver algorithm with at least some trainable weights.
At the receiver, the loss function may be determined based on a sum of losses for each of a plurality of symbols transmitted from the plurality of transmitters, using knowledge at the receiver of the respective channel symbols and the respective perturbations.
Some example embodiments further comprise means configured to train the trainable weights of the receiver algorithm (e.g. simultaneously with the weights of the transmitter algorithm).
The first condition may include a defined number of iterations. Other example first conditions are possible in addition to or instead of the defined number of iterations.
The channel symbols and/or the perturbations may be generated pseudo-randomly.
The apparatus may further comprise means configured to perform initializing the transmitter weights (e.g. to a predefined starting point or to a random (or pseudo-random) starting point).
The transmitter algorithm may be implemented using a neural network. The receiver algorithm may also be implemented using a neural network.
In a second aspect, this specification describes an apparatus comprising means configured to: transmitting a signal from one of a plurality of transmitters of a transmission system to a receiver of the transmission system, wherein each of the plurality of transmitters communicates with the receiver over one of a plurality of channels of the transmission system, wherein each transmitter comprises a transmitter algorithm having at least some trainable weights, wherein each transmitter has the same trainable weights and wherein the transmitted signal is based on perturbed channel symbols generated at the respective transmitter, wherein the channel symbols and perturbations are known to the receiver; and receiving updated weights for the transmitter algorithm from the receiver, wherein the weights are updated at the receiver based on a loss function. The apparatus may be one of a plurality of transmitters of a communication system (e.g., a mobile communication system).
The channel symbols and/or the perturbations may be generated pseudo-randomly.
The apparatus may further comprise means configured to perform initializing the weights of the transmitter algorithm, e.g. to a predefined starting point or to a random (or pseudo-random) starting point.
The receiver may be a communication node (e.g. a base station) of a mobile communication system.
The transmitter algorithm may be implemented using a neural network. The receiver algorithm may also be implemented using a neural network.
In a third aspect, the present specification describes a transmission system comprising a plurality of transmitters, a plurality of channels and a receiver, the transmission system comprising means configured to: transmitting a signal from one or more transmitters of a plurality of transmitters to a receiver, wherein each transmitter of the plurality of transmitters communicates with the receiver over one channel of a plurality of channels, wherein each transmitter comprises a transmitter algorithm having at least some trainable weights, wherein each transmitter has the same trainable weights and wherein transmitting the signal is based on perturbed channel symbols generated at the respective transmitter, wherein the channel symbols and perturbations are known to the receiver; receiving a transmitted signal at a receiver of a transmission system; updating, at the receiver, the weights of the transmitter algorithm based on a loss function; providing (e.g., broadcasting) the updated weights to each transmitter of the transmission system; and repeating the receiving and updating until the first condition is reached. The transmission system may further comprise the features of the first and second aspects described above.
In the first, second and third aspects described above, the apparatus may comprise: at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the performance of the apparatus.
In a fourth aspect, the present specification describes a method comprising: receiving, at a receiver of a transmission system, a transmission signal from each of a plurality of transmitters, wherein each transmitter communicates with the receiver over one of a plurality of channels of the transmission system, wherein each transmitter comprises a transmitter algorithm having at least some trainable weights, wherein each transmitter algorithm has the same trainable weights, and wherein each of the transmission signals is based on perturbed channel symbols generated at the respective transmitter, wherein the channel symbols and perturbations are known to the receiver; updating, at the receiver, the weights of the transmitter algorithm based on a loss function; providing (e.g., broadcasting) the updated weights to each transmitter of the transmission system; and repeating the receiving and updating until a first condition (e.g. a defined number of iterations) is reached.
At the receiver, the loss function may be determined based on a sum of losses for each of a plurality of symbols transmitted from the plurality of transmitters, using knowledge at the receiver of the respective channel symbols and the respective perturbations.
The receiver may include a receiver algorithm with at least some trainable weights. The method may further include training trainable weights of the receiver algorithm (e.g., while training weights of the transmitter algorithm).
The channel symbols and/or the perturbations may be pseudo-randomly generated.
The method may further comprise performing an initialization of said weights of said transmitter algorithm, e.g. to a predefined starting point or to a random (or pseudo-random) starting point.
In a fifth aspect, the present specification describes a method comprising: transmitting a signal from a transmitter of a plurality of transmitters of a transmission system to a receiver of the transmission system, wherein each transmitter of the plurality of transmitters communicates with the receiver through a channel of a plurality of channels of the transmission system, wherein each transmitter comprises a transmitter algorithm having at least some trainable weights, wherein each transmitter has the same trainable weights and wherein the transmitted signal is based on perturbed channel symbols generated at the respective transmitter, wherein the channel symbols and perturbations are known to the receiver; and receiving updated weights for the transmitter algorithm from the receiver, wherein the weights are updated at the receiver based on a loss function.
The channel symbols and/or the perturbations may be generated pseudo-randomly.
The method may comprise initializing said weights of said transmitter algorithm (e.g. to a predefined starting point or to a random (or pseudo-random) starting point).
In a sixth aspect, the present specification describes a method comprising: transmitting a signal from one or more transmitters of a plurality of transmitters of a transmission system to a receiver of the transmission system, wherein each transmitter of the plurality of transmitters communicates with the receiver over one of a plurality of channels, wherein each transmitter includes a transmitter algorithm having at least some trainable weights, wherein each transmitter has the same trainable weights and wherein the transmitted signal is based on perturbed channel symbols generated at the respective transmitter, wherein the channel symbols and perturbations are known to the receiver; receiving the transmitted signal at the receiver of the transmission system; updating, at the receiver, the weights of the transmitter algorithm based on a loss function; providing (e.g., broadcasting) the updated weights to each transmitter of the transmission system; and repeating the receiving and updating until a first condition is reached.
In a seventh aspect, the present specification describes an apparatus configured to (at least) perform any of the methods described with reference to the fourth, fifth or sixth aspects.
In an eighth aspect, the specification describes computer readable instructions which, when executed by a computing device, cause the computing device to perform (at least) any of the methods described with reference to the fourth, fifth or sixth aspect.
In a ninth aspect, the specification describes a computer readable medium (such as a non-transitory computer readable medium) comprising program instructions stored thereon for (at least) performing any of the methods described with reference to the fourth, fifth or sixth aspects.
In a tenth aspect, this specification describes an apparatus comprising: at least one processor; at least one memory including computer program code which, when executed by the at least one processor, causes the apparatus to (at least) perform any of the methods described with reference to the fourth, fifth or sixth aspect.
In an eleventh aspect, this specification describes a computer program comprising instructions for causing an apparatus to perform at least the following: receiving, at a receiver of a transmission system, a transmission signal from each of a plurality of transmitters, wherein each transmitter communicates with the receiver over one of a plurality of channels of the transmission system, wherein each transmitter comprises a transmitter algorithm having at least some trainable weights, wherein each transmitter algorithm has the same trainable weights, and wherein each of the transmission signals is based on perturbed channel symbols generated at the respective transmitter, wherein the channel symbols and perturbations are known to the receiver; updating, at a receiver, the weights of the transmitter algorithm based on a loss function; providing (e.g., broadcasting) the updated weights to each transmitter of the transmission system; and repeating the receiving and updating until the first condition is reached. The apparatus may be a receiver of a communication system, e.g. a central node (such as a base station) communicating with a number of transmitters (such as user equipments). The receiver may include a receiver algorithm having at least some trainable weights. The loss function may be determined at the receiver based on a sum of the losses for each of a plurality of symbols transmitted from the plurality of transmitters based on knowledge at the receiver of the respective channel symbols and the respective perturbations.
In a twelfth aspect, the present specification describes a computer program comprising instructions for causing an apparatus to perform at least the following: transmitting a signal from one of a plurality of transmitters of a transmission system to a receiver of the transmission system, wherein each transmitter of the plurality of transmitters communicates with the receiver through one of a plurality of channels of the transmission system, wherein each transmitter comprises a transmitter algorithm having at least some trainable weights, wherein each transmitter has the same trainable weights and wherein the transmitted signal is based on perturbed channel symbols generated at the respective transmitter, wherein the channel symbols and perturbations are known to the receiver; and receiving updated weights of the transmitter algorithm from the receiver, wherein the weights are updated at the receiver based on a loss function. The apparatus may be one of a plurality of transmitters of a communication system, such as a mobile communication system.
In a thirteenth aspect, the present specification describes an apparatus comprising: means (such as an input of a first processor) for receiving a transmitted signal from each of a plurality of transmitters at a receiver of a transmission system, wherein each transmitter communicates with the receiver over one of a plurality of channels of the transmission system, wherein each transmitter comprises a transmitter algorithm having at least some trainable weights, wherein each transmitter algorithm has the same trainable weights, and wherein each of the transmitted signals is based on perturbed channel symbols generated at the respective transmitter, wherein the channel symbols and the perturbations are known to the receiver; means (such as a first processor) for updating the weights of the transmitter algorithm at a receiver based on a loss function; means (such as the output of the first processor) for providing (e.g. broadcasting) the updated weights to each transmitter of the transmission system; and means (such as a control module) for repeating the receiving and updating until the first condition is reached.
In a fourteenth aspect, this specification describes an apparatus comprising: means (such as an output of a first processor) for transmitting a signal from one of a plurality of transmitters of a transmission system to a receiver of the transmission system, wherein each transmitter of the plurality of transmitters is in communication with the receiver through one of a plurality of channels of the transmission system, wherein each transmitter comprises a transmitter algorithm having at least some trainable weights, wherein each transmitter has the same trainable weights and wherein transmitting the signal is based on perturbed channel symbols generated at the respective transmitter, wherein the channel symbols and perturbations are known to the receiver; and means (such as an input to the first processor) for receiving from the receiver updated weights for the transmitter algorithm, wherein the weights are updated at the receiver based on a loss function.
Drawings
Example embodiments will now be described, by way of non-limiting example, with reference to the following schematic drawings:
Figs. 1 to 3 are block diagrams of example end-to-end communication systems;
Fig. 4 is a block diagram of a communication system in accordance with an example embodiment;
Fig. 5 is a flow chart showing an algorithm in accordance with an example embodiment;
Fig. 6 is a block diagram of a transmitter in accordance with an example embodiment;
Fig. 7 is a flow chart showing an algorithm in accordance with an example embodiment;
Fig. 8 is a block diagram of an optimization module in accordance with an example embodiment;
Figs. 9 to 11 are flow charts showing algorithms in accordance with example embodiments;
Fig. 12 shows an example neural network that may be used in one or more example embodiments;
Fig. 13 is a block diagram of components of a system in accordance with an example embodiment; and
Figs. 14A and 14B show tangible media, respectively a removable non-volatile memory unit and a compact disc (CD), storing computer-readable code which, when executed by a computer, performs operations in accordance with example embodiments.
Detailed Description
The scope of protection sought for the various embodiments of the invention is set forth by the independent claims. Embodiments and features, if any, described in this specification which do not fall within the scope of the independent claims are to be construed as examples to facilitate the understanding of the various embodiments of the invention.
In the description and drawings, like reference numerals refer to like elements throughout.
Fig. 1 is a block diagram of an example end-to-end communication system, indicated generally by the reference numeral 10. The system 10 includes a transmitter 12, a channel 14 and a receiver 16. Viewed at a system level, the system 10 converts data received at the input of the transmitter 12 into transmit symbols (x) for transmission over the channel 14, and the receiver 16 generates an estimate $\hat{s}$ of the transmitted data from the symbols (y) received from the channel 14.
The transmitter 12 may include a modulator that converts data symbols into transmission symbols (x) according to a modulation scheme. The transmitted symbols are then transmitted over the channel 14 and received at the receiver 16 as received symbols (y). The receiver may include a demodulator that converts the received symbols (y) into estimates of the originally transmitted data symbols.
Fig. 2 is a block diagram of an exemplary end-to-end communication system, generally indicated by reference numeral 20. The system 20 includes a transmitter 22, a channel model 24, and a receiver 26, and is therefore similar to the system 10 described above.
The system 20 uses a neural network (NN) based autoencoder. In this approach, the transmitter and the receiver are implemented as neural networks and are jointly optimized for a specific performance metric and channel. The channel model 24, which models the behaviour of the actual channel used for communication between the transmitter 22 and the receiver 26, is a differentiable channel model, so that the system 20 can be trained using backpropagation. More specifically, because a differentiable channel model is available, the channel model may be implemented as a non-trainable layer between the transmitter 22 and the receiver 26, and the end-to-end system may be trained by backpropagating gradients from the receiver to the transmitter, as indicated by the dashed arrows 28 and 29 in the system 20.
Training with the channel model 24 may result in suboptimal performance due to a mismatch between the channel model and the behaviour of the actual channel used for transmission. To address this problem, training over the actual channel may be preferred. However, the actual channel is usually a "black box" that can only be sampled, preventing backpropagation of gradients from the receiver to the transmitter.
By way of example, fig. 3 is a block diagram of an exemplary end-to-end communication system, generally referred to by reference numeral 30. The system 30 includes the transmitter 22 and receiver 26 described above, and further includes an actual channel 34 (rather than a channel model) for transmission between the transmitter and receiver. The use of the actual channel 34, as generally indicated by the cross arrow 38, prevents training the system 30 as an auto-encoder in the manner described above, since back propagation through the channel 34 is not possible.
Fig. 4 is a block diagram of a communication system, indicated generally by the reference numeral 40, in accordance with an example embodiment. The system 40 includes a first transmitter 42a, a second transmitter 42b, a third transmitter 42c and a receiver 46. The first transmitter 42a communicates bi-directionally with the receiver 46 via a first channel 44a. Similarly, the second transmitter 42b communicates bi-directionally with the receiver 46 via a second channel 44b, and the third transmitter 42c communicates bi-directionally with the receiver 46 via a third channel 44c.
In system 40, each of transmitters 42a, 42b, and 42c has a set of trainable parameters (e.g., neural network weights) represented by θ that are shared by all transmitters.
As shown in Fig. 4, the first transmitter 42a converts the data ($s_1$) received at the input of the transmitter into first transmit symbols $\tilde{x}_1$ transmitted over the first channel 44a, and the receiver 46 receives first symbols ($y_1$) from the first channel 44a (and may, for example, generate an estimate of the transmitted data from the received symbols). Similarly, the second transmitter 42b converts the data ($s_2$) received at the input of the transmitter into second transmit symbols $\tilde{x}_2$ for transmission over the second channel 44b, and the receiver 46 receives second symbols ($y_2$) from the second channel 44b. The third transmitter 42c converts the data ($s_3$) received at the input of the transmitter into third transmit symbols $\tilde{x}_3$ for transmission over the third channel 44c, and the receiver 46 receives third symbols ($y_3$) from the third channel 44c.
As described further below, the trainable weights $\theta$ shared by all transmitters 42a, 42b and 42c are updated at the receiver 46 based on a loss function. The updated weights are provided (e.g. broadcast) to each of the transmitters 42a, 42b and 42c, as indicated by the dashed lines in Fig. 4.
Fig. 5 is a flow chart illustrating an algorithm, generally indicated by reference numeral 50, according to an example embodiment.
The algorithm 50 begins at operation 52, where each of the transmitters 42a, 42b and 42c generates symbols to be transmitted. Thus, the $i$-th transmitter generates a message $s_i$ and, using a neural network within the respective transmitter, maps the message $s_i$ to channel symbols $x_i$. Each transmitter neural network implements the mapping denoted by $f_\theta$. The transmitter perturbs the signal by adding a small perturbation (denoted by $\epsilon_i$) to the signal $x_i$. The perturbation is added to enable an approximation of the gradient of the channel transfer function. The message $s_i$ and the perturbation $\epsilon_i$ are generated pseudo-randomly, e.g. using a pseudo-random number generator (PRNG), or according to a predefined sequence. The perturbed signal $\tilde{x}_i$ is transmitted over the channels 44a, 44b and 44c, such that the signal $y_i$ is received at the receiver 46.
In operation 54, the receiver 46 uses the received signals $y_i$, together with its knowledge of the transmitted messages $s_i$ and the perturbations $\epsilon_i$, to train the set of weights $\theta$ that is shared by all transmitters. The messages and perturbations may be assumed to be generated either by a PRNG whose seed is known to the receiver, or according to a predefined sequence known at training time.
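By way of illustration only, the following sketch shows how a shared seed allows the receiver to reproduce the transmitted messages and perturbations without any signalling of the samples themselves. It is written in Python/NumPy; the seed value, batch size, message length and symbol count are illustrative assumptions rather than values taken from this description.

```python
import numpy as np

B, n, m = 64, 4, 8    # batch size, complex channel uses, bits per message (assumed)
SEED = 1234           # seed assumed to be shared by transmitter i and the receiver

def draw_batch(seed, B, n, m):
    """Reproduce the pseudo-random messages and perturbations of one transmitter."""
    rng = np.random.default_rng(seed)
    s = rng.integers(0, 2, size=(B, m))                     # B messages of m bits
    # complex Gaussian perturbations with unit average energy per symbol
    eps = (rng.standard_normal((B, n)) + 1j * rng.standard_normal((B, n))) / np.sqrt(2)
    return s, eps

s_tx, eps_tx = draw_batch(SEED, B, n, m)   # transmitter side
s_rx, eps_rx = draw_batch(SEED, B, n, m)   # receiver side: same seed, same samples
assert np.array_equal(s_tx, s_rx) and np.allclose(eps_tx, eps_rx)
```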
In operation 56, the weights $\theta$ generated in operation 54 are broadcast (or otherwise provided) to the transmitters.
The receiver 46 performs training of the transmitter weights (i.e. of the transmitters 42a, 42b and 42c described above) based on samples from each transmitter of the communication system, such that the optimization takes each of the various channels (such as the channels 44a, 44b and 44c) into account. Furthermore, the computation of the updated parameters is performed at (or under the control of) the receiver, thereby simplifying the functionality of the transmitters (e.g. avoiding the requirement for dedicated hardware at a transmitter, which may be a user equipment).
Using this approach, the optimization process, which may require dedicated resources, would typically be carried out at the receiver (e.g. a base station) or in the core network (e.g. in the cloud). Optimization can be performed on a dataset consisting of samples from multiple transmitters, avoiding overfitting to a particular link. When a user wants to connect to a base station, or when its channel conditions change significantly, it can receive a set of weights to be used for transmission from the base station.
Note that the channels described herein may include interference from other cells, making the system described herein compatible with multi-cell systems.
Fig. 6 is a block diagram of a transmitter, generally indicated by reference numeral 60, according to an example embodiment. The transmitter 60 is an example implementation of the first transmitter 42a described above. The second transmitter 42b and the third transmitter 42c may have the same structure.
The transmitter 60 comprises a first module 62, which implements the function $f_\theta$, and a second module 64. The first module 62 may include a neural network.
The first module 62 maps the message $s_1$ to the channel symbols $x_1$. The second module 64 adds a perturbation $\epsilon_1$ to the channel symbols $x_1$, with a signal-to-perturbation ratio (SPR) denoted by $\gamma$, to generate the output signal $\tilde{x}_1$ (thereby carrying out operation 52 of the algorithm 50).
The transmitter 60 receives the weights $\theta$ discussed above. It should be noted that the function $f_\theta$ is typically implemented by a relatively small neural network, so that the broadcast of the weights $\theta$ does not incur too high an overhead. Furthermore, the weights are broadcast to all transmitters participating in the training process, which allows further savings in communication resources.
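A minimal sketch of the kind of mapping the first module 62 might implement is given below: a small dense network maps an m-bit message to n complex channel symbols with unit average energy. The layer sizes, activation and initialization are illustrative assumptions, not details taken from this description.

```python
import numpy as np

m, n, hidden = 8, 4, 32   # message bits, complex channel uses, hidden width (assumed)
rng = np.random.default_rng(0)
theta = {                 # a flat dictionary standing in for the broadcast weights
    "W1": 0.1 * rng.standard_normal((m, hidden)), "b1": np.zeros(hidden),
    "W2": 0.1 * rng.standard_normal((hidden, 2 * n)), "b2": np.zeros(2 * n),
}

def f_theta(s, theta):
    """Map a batch of bit-vector messages to n complex channel symbols each."""
    h = np.maximum(s @ theta["W1"] + theta["b1"], 0.0)   # ReLU hidden layer
    z = h @ theta["W2"] + theta["b2"]                    # 2n real outputs per message
    x = z[:, :n] + 1j * z[:, n:]                         # pack real/imag into complex
    # normalize so that the average symbol energy is 1 (energy constraint)
    return x / np.sqrt(np.mean(np.abs(x) ** 2, axis=1, keepdims=True))
```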
Fig. 7 is a flow chart illustrating an algorithm, generally indicated by reference numeral 70, according to an example embodiment. The algorithm 70 may be implemented by the transmitter 60.
The algorithm 70 begins at operation 72, where the transmitter transmits the perturbed channel symbols to the receiver, where the channel symbols and the perturbations thereto are known to the receiver. As described above, the channel symbols and/or the perturbations may be pseudo-randomly generated.
At operation 74, updated transmitter parameters are received from the receiver.
In operation 76, the transmitter parameters are updated based on the parameters received in operation 74.
The algorithm 70 may then return to operation 72, so that operations 72 to 76 are repeated, for example a set number of times or until some other criterion (e.g. a performance metric) is reached.
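Operations 72 to 76 amount to a simple loop at each transmitter. The sketch below reuses draw_batch and f_theta from the sketches above; GAMMA, send_symbols and recv_weights are assumed placeholders for the SPR value and for whatever transport the system provides.

```python
import numpy as np

GAMMA = 0.05   # signal-to-perturbation ratio gamma, assumed value

def transmitter_loop(theta, num_rounds, seed, send_symbols, recv_weights, B=64, n=4, m=8):
    """One transmitter running operations 72-76 of the algorithm 70."""
    for t in range(num_rounds):
        s, eps = draw_batch(seed + t, B, n, m)    # reproducible at the receiver too
        x = f_theta(s, theta)
        x_pert = np.sqrt(1 - GAMMA) * x + np.sqrt(GAMMA) * eps
        send_symbols(x_pert)                      # operation 72: transmit perturbed symbols
        theta = recv_weights()                    # operation 74: updated weights from receiver
        # operation 76: the updated weights take effect on the next iteration
    return theta
```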
FIG. 8 is a block diagram of an optimization module, generally indicated by reference numeral 80. The optimization module 80 may form part of the receiver 46 of the system 40 described above. The provision of the optimization module 80 at the receiver is not necessary for all example embodiments. For example, the optimization module 80 may be provided at a central node (such as a base station) that communicates with a number of transmitters, or may be provided elsewhere, but accessible by one or more transmitters.
The optimization module 80 receives the first symbols ($y_1$) from the first channel 44a, the second symbols ($y_2$) from the second channel 44b, and the third symbols ($y_3$) from the third channel 44c. The optimization module 80 also has access to the first message $s_1$ and the first perturbation $\epsilon_1$ from which the first symbols are derived, the second message $s_2$ and the second perturbation $\epsilon_2$ from which the second symbols are derived, and the third message $s_3$ and the third perturbation $\epsilon_3$ from which the third symbols are derived. The various messages and perturbations may be generated locally at the optimization module, for example using one or more pseudo-random number generators. As discussed further below, the optimization module 80 generates the weights $\theta$ that are provided to the transmitters.
Fig. 9 is a flow chart illustrating an algorithm, generally indicated by reference numeral 90, according to an example embodiment. The algorithm 90 may be implemented by the optimization module 80.
The algorithm 90 begins at operation 92, where the transmitted symbols are received. The transmitted symbols (e.g. transmitted by the transmitters 42a, 42b and 42c) may be received at the receiver 46 of the transmission system 40. As described above, each transmitter includes a transmitter algorithm having at least some trainable weights, where each transmitter algorithm has the same trainable weights, and where each transmitted signal is based on perturbed channel symbols generated at the respective transmitter, where the channel symbols and perturbations are known to the receiver.
At operation 94, the transmitter parameter θ is updated, for example, by the optimization module 80 described above (which may form part of the receiver 46). The transmitter parameter θ may be updated based on a loss function, as discussed further below.
At operation 96, the transmitter parameters are provided (e.g., broadcast) to each transmitter of the transmission system.
Operations 92 through 96 may be repeated until a first condition (e.g., a defined number of repetitions or a defined performance metric) is reached.
Fig. 10 is a flow chart illustrating an algorithm, generally indicated by reference numeral 100, according to an example embodiment. The algorithm 100 may be implemented using a plurality of transmitters 60, each transmitter 60 communicating with the optimization module 80 via a channel.
In the algorithm 100, $N_T$ denotes the number of transmitters (e.g. $N_T = 3$ in the communication system 40 described above). All transmitters implement the same algorithm with trainable parameters, $f_\theta : \mathbb{S} \to \mathbb{C}^n$, where $\theta$ is the vector of trainable parameters, $\mathbb{S}$ is the space of messages to be sent (e.g. vectors of bits), and $n$ is the number of complex channel uses per message.
The algorithm 100 begins at operation 101, where the parameters $\theta$ are initialized at the optimization module 80. The initialization may, for example, be predefined, or pseudo-random provided that all transmitters use the same initial value of $\theta$.
Initializing the parameter θ may include setting the parameter to a predefined starting point or to a random or pseudo-random starting point. Alternative implementations of operation 101 will be apparent to those of ordinary skill in the art.
In operation 102, each transmitter $i \in \{1, \dots, N_T\}$ generates a batch of $B$ pseudo-random messages, e.g. using a PRNG or a predefined sequence. The batch of $B$ pseudo-random messages is denoted by $S_i = \{s_i^{(1)}, \dots, s_i^{(B)}\}$. Each message $s \in S_i$ is mapped by the trainable transmitter algorithm $f_\theta$ to complex channel symbols $x = f_\theta(s) \in \mathbb{C}^n$, and the resulting vector of stacked channel symbols is denoted by $x_i$.
In operation 103, each transmitter $i \in \{1, \dots, N_T\}$ generates a pseudo-random perturbation vector $\epsilon_i$, e.g. using a PRNG or a predefined sequence.
In operation 104, each transmitter $i \in \{1, \dots, N_T\}$ perturbs its signal using the perturbation $\epsilon_i$ and a signal-to-perturbation ratio (SPR) denoted by $\gamma \in (0, 1)$, generating the signal to be transmitted according to

$$\tilde{x}_i = \sqrt{1-\gamma}\, x_i + \sqrt{\gamma}\, \epsilon_i$$

where the scaling ensures the average energy constraint $\mathbb{E}\{|\tilde{x}_i|^2\} = 1$, assuming that $\mathbb{E}\{|x_i|^2\} = 1$ and $\mathbb{E}\{|\epsilon_i|^2\} = 1$.
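A quick numerical check of the perturbation step of operation 104 is shown below, under the stated assumptions of unit-energy channel symbols and unit-energy perturbations (QPSK symbols are used purely for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)
B, n, gamma = 1000, 4, 0.05

# unit-energy channel symbols (QPSK here) and unit-energy complex Gaussian perturbations
x = (rng.choice([-1.0, 1.0], (B, n)) + 1j * rng.choice([-1.0, 1.0], (B, n))) / np.sqrt(2)
eps = (rng.standard_normal((B, n)) + 1j * rng.standard_normal((B, n))) / np.sqrt(2)

x_pert = np.sqrt(1 - gamma) * x + np.sqrt(gamma) * eps

# average energy remains (1 - gamma) * 1 + gamma * 1 = 1
print(np.mean(np.abs(x_pert) ** 2))   # ~= 1.0
```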
in operation 105, each transmitter i ∈ { 1., N ∈ · T Sending a signal
Figure BDA0004023976370000136
In operation 106, the optimization module (e.g. the receiver) receives the signals transmitted by the transmitters and altered by the channels, denoted by $y_i$, $i \in \{1, \dots, N_T\}$. The optimization module/receiver performs one step of stochastic gradient descent (SGD) on the loss function

$$L(\theta) = \frac{1}{N_T B} \sum_{i=1}^{N_T} \sum_{j=1}^{B} l\left(s_i^{(j)}, y_i^{(j)}\right)$$

where $l(\cdot, \cdot)$ is the per-example loss, which depends on the application, $s_i^{(j)}$ is the $j$-th message sent by the $i$-th user, $\tilde{x}_i^{(j)}$ is the perturbed transmitter output for the $j$-th message of the $i$-th user, i.e. $\tilde{x}_i^{(j)} = \sqrt{1-\gamma}\, f_\theta(s_i^{(j)}) + \sqrt{\gamma}\, \epsilon_i^{(j)}$, where $\epsilon_i^{(j)}$ is a pseudo-random perturbation, and $y_i^{(j)}$ is the received signal corresponding to $\tilde{x}_i^{(j)}$.
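The double sum in the loss above is simply an average of per-example losses over all $N_T$ users and all $B$ messages; a minimal sketch is shown below, in which per_example_loss is a placeholder for the application-dependent loss $l(\cdot, \cdot)$.

```python
def batch_loss(S, Y, per_example_loss):
    """L = (1 / (N_T * B)) * sum over i, j of l(s_i^(j), y_i^(j)).

    S and Y are lists of length N_T holding, per transmitter, the B messages
    and the B corresponding received signals.
    """
    N_T, B = len(S), len(S[0])
    total = sum(per_example_loss(S[i][j], Y[i][j])
                for i in range(N_T) for j in range(B))
    return total / (N_T * B)
```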
At operation 107, it is determined whether a stop criterion has been reached. (The stop criteria may include one or more of: a predetermined number of iterations, a defined performance metric being reached, or the loss function not having decreased for a predetermined number of iterations.) If the stop criterion has been reached, the algorithm 100 proceeds to operation 108, where the trained parameters are broadcast to all transmitters. Otherwise, the algorithm returns to operation 102, such that operations 102 to 107 are repeated.
The per-example loss function $l(\cdot, \cdot)$ mentioned in operation 106 may take a variety of forms. For example, assume that the messages sent are vectors of $m$ bits, i.e. $s \in \{0, 1\}^m$ (i.e. $\mathbb{S} = \{0, 1\}^m$). (Note that the user subscript $i$ is dropped for readability.) The receiver then first computes, from the received signal $y$, probabilities over the transmitted bits, i.e. $p(s_k \mid y)$, where $s_k$, $k \in \{1, \dots, m\}$, is the $k$-th bit of $s$. In this case, the per-example loss function may be the total binary cross-entropy

$$l(s, y) = -\sum_{k=1}^{m} \log p(s_k \mid y)$$
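If the receiver outputs, for each of the $m$ bits, the probability that the bit equals 1, the total binary cross-entropy above can be computed as in the sketch below; the soft-output interface p_bit1 is an assumption for illustration.

```python
import numpy as np

def total_binary_cross_entropy(s, p_bit1, eps=1e-12):
    """l(s, y) = -sum_k log p(s_k | y), where p_bit1[k] = p(s_k = 1 | y)."""
    s = np.asarray(s, dtype=float)
    p = np.clip(np.asarray(p_bit1), eps, 1.0 - eps)     # numerical safety
    p_true = np.where(s == 1.0, p, 1.0 - p)             # probability of the bit actually sent
    return -np.sum(np.log(p_true))

print(total_binary_cross_entropy([1, 0, 1], [0.9, 0.2, 0.8]))   # ~0.551
```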
Note that performing SGD on the loss function in operation 106 does not require any knowledge of the channel model. Furthermore, the loss is computed over all users, which means that the set of transmitter weights $\theta$ is optimized taking all users and channels into account.
Furthermore, the computationally intensive SGD step of operation 106 is performed at the receiver side, eliminating the need for such computations at the transmitters.
Many variations of the algorithm 100 are possible. For example, in some example embodiments, after the gradients are computed at the receiver (in operation 106), the gradients may be broadcast to the transmitters (rather than the parameters themselves). Each transmitter may then update the weights by applying the gradients to them. Further, in some example embodiments, only gradients with values above a predefined threshold may be broadcast, to reduce the amount of communication resources required.
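A sketch of this thresholded-gradient variant is given below: only gradient entries whose magnitude exceeds a threshold are broadcast, as (index, value) pairs, and each transmitter applies them locally. The threshold and learning rate are assumed values, and theta is treated as a flat parameter vector.

```python
import numpy as np

def sparsify_gradient(grad, threshold=1e-3):
    """Select the gradient entries worth broadcasting."""
    idx = np.flatnonzero(np.abs(grad) > threshold)
    return idx, grad[idx]                  # broadcast these instead of the full vector

def apply_sparse_gradient(theta, idx, values, lr=0.01):
    """Transmitter-side update from the broadcast (index, value) pairs."""
    theta = theta.copy()
    theta[idx] -= lr * values              # one SGD step applied locally
    return theta
```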
In some implementations of the algorithm 100, the learning rate, the batch size, and possibly other parameters of the SGD variant used (Adam, RMSProp, etc.) are optimization hyper-parameters.
Assuming that the receiver is implemented by an algorithm (e.g., a neural network) with trainable parameters, the receiver may be optimized together with the transmitter (e.g., by joint optimization). As described below, an algorithm may be provided for training trainable weights of a receiver algorithm having at least some trainable weights.
If the training of the receiver does not require backpropagation of gradients through the channel, no channel model is required to optimize the receiver. Thus, joint training of the transmitter and the receiver may be achieved by alternating between conventional supervised learning (SL) based receiver training and transmitter training using the algorithm described previously.
Fig. 11 is a flow chart illustrating an algorithm, generally indicated by reference numeral 110, according to an example embodiment.
Let $g_\psi$ denote the function implemented by the receiver, where $\psi$ is the vector of its trainable parameters. The joint training of the transmitter $f_\theta$ and the receiver $g_\psi$ may then be implemented as follows.
At operation 112, the trainable parameters $\psi$ of the receiver and $\theta$ of the transmitters are initialized. The initialization may, for example, be predefined or pseudo-random.
At operation 113, the parameters of the transmitters are updated (e.g. by implementing an instance of operations 102 to 107 of the algorithm 100 described above).
At operation 114, parameters of the receiver are updated. An example implementation of operation 114 is provided below.
It is then determined whether a stopping criterion has been reached. Criteria similar to those discussed above with respect to the algorithm 100 may be used. If the stopping criterion has been met, the parameters are provided (e.g. broadcast) and the algorithm 110 terminates. If not, the algorithm returns to operation 113.
Operation 114 may be implemented by performing one or more steps of receiver training as follows:
each sender i e {1,. Cndot., N T Generate a batch of pseudorandom messages of size B, e.g., using a PRNG or a predefined sequence. For B pseudo-random messages
Figure BDA0004023976370000151
And (4) showing. Each message s e s i Trainable algorithm f by a transmitter θ Mapping to complex channel symbols
Figure BDA0004023976370000152
And the resultant vector of the superimposed channel symbols is formed by
Figure BDA0004023976370000153
And (4) showing.
Each sender i e { 1.,. N T } sending a signal x i
The receiver receives the signal transmitted by the transmitter and changed by the channel
Figure BDA0004023976370000154
i∈{1,...,N T Denotes.
The receiver performs one step of the SGD on the loss function:
Figure BDA0004023976370000155
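The alternation between operations 113 and 114 can be summarized as in the sketch below, where transmitter_update, receiver_update and stop are placeholders for one instance of operations 102 to 107, one receiver training step as above, and the stopping criterion, respectively.

```python
def joint_training(theta, psi, transmitter_update, receiver_update, stop, max_rounds=1000):
    """Alternate transmitter training (operation 113) and receiver training (operation 114)."""
    for _ in range(max_rounds):
        theta = transmitter_update(theta, psi)   # operation 113: algorithm 100, ops 102-107
        psi = receiver_update(theta, psi)        # operation 114: receiver SGD step
        if stop(theta, psi):                     # e.g. iteration count or loss plateau
            break
    return theta, psi
```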
of course, many variations of the algorithm 110 trained in conjunction with the transmitter and receiver parameters will be apparent to those of ordinary skill in the art.
A possible use of the concepts described herein is periodic or event-triggered online training of the transmitters (and possibly the receivers). In this case, the training process discussed above may be performed periodically, or triggered by certain events (e.g. when the communication rate has significantly deteriorated). Here, samples (y, s, x) from previous training may be reused to reduce the number of transmissions required for the training process (e.g. operations 102 to 106 of the algorithm 100, or operations 113 and/or 114 of the algorithm 110). This may be accomplished by storing newly observed samples in a replay memory for use by future training instances. For example, the samples used to compute the loss function may be drawn randomly from the replay memory, and may therefore include samples collected in previous training rounds, not just newly received samples. Outdated or irrelevant samples may be discarded from the replay memory according to some policy, e.g. the oldest sample is removed when a new sample is received. In addition to reducing the number of samples each training round requires from the transmitters, this approach may also introduce time diversity into the training process by taking the channel into account over a longer time window. This may avoid over-adaptation to the current state of the channel.
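A minimal replay memory consistent with the "drop the oldest sample" policy mentioned above is sketched below; the capacity and batch size are illustrative assumptions.

```python
import random
from collections import deque

class ReplayMemory:
    """Stores (y, s, x) samples; the oldest sample is evicted when full."""

    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)    # deque drops the oldest automatically

    def add(self, y, s, x):
        self.buffer.append((y, s, x))

    def sample(self, batch_size=64):
        # a random draw mixes newly received and previously collected samples,
        # introducing time diversity into the training batches
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))
```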
Operation 108 of the algorithm 100 described above comprises broadcasting the set of weights $\theta$ to the transmitters. If this set is large, this may be costly. One way to mitigate this is to add a compression constraint to the loss function at operation 106 of the algorithm 100. As an example, an $L_1$ constraint may be added to enforce sparsity of the weights.
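Adding such a compression constraint amounts to a one-line change to the training loss, as sketched below; lam is an assumed regularization weight.

```python
import numpy as np

def loss_with_l1(base_loss, theta_vector, lam=1e-4):
    """Total loss = task loss + lam * ||theta||_1, which encourages sparse weights."""
    return base_loss + lam * np.sum(np.abs(theta_vector))
```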
Neural network techniques may be used in various implementations described herein. Fig. 12 illustrates an example neural network 120 that may be used in one or more example embodiments. Neural network 120 includes a plurality of interconnected nodes arranged in multiple layers. A neural network (such as network 120) may be trained by adjusting the connections between nodes and the relative weights of those connections. As described above, the transmitter and receiver algorithms may be implemented using one of a plurality of neural networks (e.g., a neural network in the form of neural network 120).
For completeness, fig. 13 is a schematic diagram of the components of one or more of the example embodiments previously described, which are collectively referred to hereinafter as processing system 300. The processing system 300 may be, for example, an apparatus as referred to in the appended claims.
The processing system 300 may have a processor 302, a memory 304 closely coupled to the processor and including a RAM 314 and a ROM 312, and optionally a user input 310 and a display 318. The processing system 300 may include one or more network/device interfaces 308 for connecting to a network/device, such as a modem, which may be wired or wireless. The network/device interface 308 may also operate as a connection to other devices, such as devices/devices that are not network-side devices. Thus, a direct connection between devices/apparatuses without network participation is possible.
A processor 302 is connected to each of the other components in order to control its operation.
The memory 304 may include a non-volatile memory, such as a hard disk drive (HDD) or a solid state drive (SSD). The ROM 312 of the memory 304 stores, among other things, an operating system 315 and may store software applications 316. The RAM 314 of the memory 304 is used by the processor 302 for the temporary storage of data. The operating system 315 may contain code that, when executed by the processor, implements aspects of the algorithms 50, 70, 90, 100 and 110 described above. Note that in the case of small devices/apparatus, the memory may be small, i.e. a hard disk drive (HDD) or a solid state drive (SSD) may not always be used.
The processor 302 may take any suitable form. For example, it may be one microcontroller, a plurality of microcontrollers, one processor, or a plurality of processors.
The processing system 300 may be a standalone computer, a server, a console, or a network thereof. The processing system 300 and its required structural components may be entirely internal to a device/apparatus, such as an Internet of Things device/apparatus, i.e. embedded in a very small form factor.
In some example embodiments, the processing system 300 may also be associated with external software applications. These may be applications stored on a remote server device/apparatus and may run partially or exclusively on the remote server device/apparatus. These applications may be referred to as cloud-hosted applications. The processing system 300 may communicate with a remote server device/apparatus to utilize software applications stored therein.
Fig. 14A and 14B illustrate tangible media, i.e., a removable storage unit 365 and a Compact Disc (CD) 368, respectively, storing computer readable code that, when executed by a computer, may perform a method according to the example embodiments described above. The removable storage unit 365 may be a memory stick, such as a USB memory stick, having an internal memory 366 storing computer readable code. The computer system may access the internal memory 366 via the connector 367. CD 368 may be a CD-ROM or DVD or the like. Other forms of tangible storage media may be used. The tangible medium may be any device/apparatus capable of storing data/information that may be exchanged between devices/apparatuses/networks.
Embodiments of the present invention may be implemented in software, hardware, application logic or a combination of software, hardware and application logic. The software, application logic and/or hardware may reside on memory or any computer medium. In an example embodiment, the application logic, software or an instruction set is maintained on any one of various conventional computer-readable media. In the context of this document, a "memory" or "computer-readable medium" can be any non-transitory medium or means that can contain, store, communicate, propagate, or transport the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer.
Where relevant, references to "computer-readable medium", "computer program product", "tangibly embodied computer program", etc., or to a "processor" or "processing circuitry", etc., should be understood to encompass not only computers having different architectures, such as single/multi-processor architectures and sequential/parallel architectures, but also specialized circuits such as field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), signal processing devices/apparatus and other devices/apparatus. References to computer program, instructions, code, etc. should be understood to encompass software for a programmable processor, or firmware such as the programmable content of a hardware device/apparatus, whether instructions for a processor or configuration settings for a fixed-function device/apparatus, gate array, programmable logic device/apparatus, etc.
If desired, the different functions discussed herein may be performed in a different order and/or concurrently with each other. Furthermore, if desired, one or more of the above-described functions may be optional or may be combined. Similarly, it should also be understood that the flowcharts of fig. 5, 7, and 9-11 are merely examples and that various operations depicted therein may be omitted, reordered, and/or combined.
It should be understood that the above-described exemplary embodiments are merely illustrative and do not limit the scope of the invention. Other variations and modifications will be apparent to persons skilled in the art upon reading the present specification.
Furthermore, the disclosure of the present application should be understood to include any novel feature or any novel combination of features, whether explicitly or implicitly disclosed herein or any generalization thereof, and during the prosecution of the present application or of any application derived therefrom, new claims may be formulated to cover any such functionality and/or combination of such functionalities.
Although various aspects of the invention are set out in the independent claims, other aspects of the invention comprise other combinations of features from the described example embodiments and/or the dependent claims with the features of the independent claims, and not solely the combinations explicitly set out in the claims.
It is also noted herein that while various examples are described above, these descriptions should not be viewed as limiting. Rather, various changes and modifications may be made without departing from the scope of the invention as defined in the appended claims.

Claims (15)

1. An apparatus comprising means configured to:
receiving, at a receiver of a transmission system, a transmitted signal from each of a plurality of transmitters, wherein each transmitter communicates with the receiver over one of a plurality of channels of the transmission system, wherein each transmitter comprises a transmitter algorithm having at least some trainable weights, wherein each transmitter algorithm has the same trainable weights, and wherein each of the transmitted signals is based on perturbed channel symbols generated at the respective transmitter, wherein the channel symbols and perturbations are known to the receiver;
updating, at the receiver, the weights of the transmitter algorithm based on a loss function;
providing the updated weights to each transmitter of the transmission system; and
the receiving and updating are repeated until a first condition is reached.
2. The apparatus of claim 1, wherein at the receiver, the loss function is determined based on knowledge of respective channel symbols and respective perturbations at the receiver based on a sum of losses for each of a plurality of symbols transmitted from the plurality of transmitters.
3. The apparatus of any one of the preceding claims, wherein the receiver comprises a receiver algorithm having at least some trainable weights.
4. The apparatus of claim 3, further comprising means configured to: training the trainable weights of the receiver algorithm.
5. The apparatus of any one of the preceding claims, wherein the first condition comprises a defined number of iterations.
6. An apparatus comprising means configured to:
transmitting a signal from one of a plurality of transmitters of a transmission system to a receiver of the transmission system, wherein each of the plurality of transmitters communicates with the receiver over one of a plurality of channels of the transmission system, wherein each transmitter comprises a transmitter algorithm having at least some trainable weights, wherein each transmitter has the same trainable weights and wherein the transmitted signal is based on perturbed channel symbols generated at the respective transmitter, wherein the channel symbols and perturbations are known to the receiver; and
receiving updated weights of the transmitter algorithm from the receiver, wherein the weights are updated at the receiver based on a loss function.
7. The apparatus according to any of the preceding claims, wherein the channel symbols and/or the perturbations are pseudo-randomly generated.
8. The apparatus according to any one of the preceding claims, further comprising means configured to perform: initializing the weights of the transmitter algorithm.
9. The apparatus according to any of the preceding claims, wherein the receiver is a communication node of a mobile communication system.
10. The apparatus of any one of the preceding claims, wherein the transmitter algorithm is implemented using a neural network.
11. A transmission system comprising a plurality of transmitters, a plurality of channels and a receiver, the transmission system comprising means configured to:
transmitting a signal from one or more transmitters of the plurality of transmitters to the receiver, wherein each transmitter of the plurality of transmitters communicates with the receiver over one of the plurality of channels, wherein each transmitter comprises a transmitter algorithm including transmitter parameters having at least some trainable weights, wherein each transmitter has the same trainable weights and wherein the transmitted signal is based on perturbed channel symbols generated at the respective transmitter, wherein the channel symbols and perturbations are known to the receiver;
receiving the transmitted signal at the receiver of the transmission system;
updating, at the receiver, the weights of the transmitter algorithm based on a loss function;
providing the updated weights to each transmitter of the transmission system; and
the receiving and updating are repeated until a first condition is reached.
12. The device or system of any of the preceding claims, wherein the component comprises:
at least one processor; and
at least one memory including computer program code, the at least one memory and the computer program configured to: causing, with the at least one processor, execution of the apparatus.
13. A method, comprising:
receiving, at a receiver of a transmission system, a transmitted signal from each of a plurality of transmitters, wherein each transmitter communicates with the receiver over one of a plurality of channels of the transmission system, wherein each transmitter comprises a transmitter algorithm having at least some trainable weights, wherein each transmitter algorithm has the same trainable weights, and wherein each of the transmitted signals is based on perturbed channel symbols generated at the respective transmitter, wherein the channel symbols and perturbations are known to the receiver;
updating, at the receiver, the weights of the transmitter algorithm based on a loss function;
providing the updated weights to each transmitter of the transmission system; and
repeating the receiving and updating until a first condition is reached.
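The receiver-side step of claim 13 can be factored out as a single function; as before, the shared seed, the distance-based loss, and all names are illustrative assumptions:

```python
import numpy as np

def receiver_update(W, y, seed, sigma_p=0.15, lr=0.1):
    """One receiver-side update in isolation: regenerate what was sent from
    the shared seed, score it with the loss, update the shared transmitter
    weights, and return them for broadcast back to every transmitter."""
    g = np.random.default_rng(seed)
    msgs = g.integers(0, len(W), len(y))            # known channel symbols
    eps = sigma_p * g.standard_normal(y.shape)      # known perturbation
    logits = -((y[:, None, :] - W[None, :, :]) ** 2).sum(-1)
    logits = logits - logits.max(axis=1, keepdims=True)
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    loss = -logp[np.arange(len(y)), msgs]           # per-example loss
    grad = np.zeros_like(W)
    np.add.at(grad, msgs, loss[:, None] * eps / sigma_p ** 2)
    return W - lr * grad / len(y)                   # updated weights to provide
```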
14. A method, comprising:
transmitting a signal from one of a plurality of transmitters of a transmission system to a receiver of the transmission system, wherein each of the plurality of transmitters communicates with the receiver over one of a plurality of channels of the transmission system, wherein each transmitter comprises a transmitter algorithm having at least some trainable weights, wherein each transmitter has the same trainable weights and wherein the transmitted signal is based on perturbed channel symbols generated at the respective transmitter, wherein the channel symbols and perturbations are known to the receiver; and
receiving updated weights of the transmitter algorithm from the receiver, wherein the weights are updated at the receiver based on a loss function.
15. A computer readable medium comprising program instructions stored thereon for performing at least the following:
receiving, at a receiver of a transmission system, a transmission signal from each of a plurality of transmitters, wherein each transmitter communicates with the receiver over one of a plurality of channels of the transmission system, wherein each transmitter comprises a transmitter algorithm having at least some trainable weights, wherein each transmitter algorithm has the same trainable weights, and wherein each of the transmission signals is based on perturbed channel symbols generated at the respective transmitter, wherein the channel symbols and perturbations are known to the receiver;
updating, at the receiver, the weights of the transmitter algorithm based on a loss function;
providing the updated weights to each transmitter of the transmission system; and
repeating the receiving and updating until a first condition is reached.
CN202080102550.8A 2020-06-29 2020-06-29 Training in a communication system Pending CN115720707A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2020/068238 WO2022002347A1 (en) 2020-06-29 2020-06-29 Training in communication systems

Publications (1)

Publication Number Publication Date
CN115720707A (en) 2023-02-28

Family

ID=71401783

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080102550.8A Pending CN115720707A (en) 2020-06-29 2020-06-29 Training in a communication system

Country Status (4)

Country Link
US (1) US20230246887A1 (en)
EP (1) EP4173243A1 (en)
CN (1) CN115720707A (en)
WO (1) WO2022002347A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114726394B * 2022-03-01 2022-09-02 Shenzhen Qianhai Fantian Communication Technology Co., Ltd. Training method of intelligent communication system and intelligent communication system


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019080988A1 (en) * 2017-10-23 2019-05-02 Nokia Technologies Oy End-to-end learning in communication systems
EP3884582A1 (en) * 2018-11-23 2021-09-29 Nokia Technologies Oy End-to-end learning in communication systems
US20220393795A1 (en) * 2019-10-31 2022-12-08 Nokia Technologies Oy Apparatuses and methods for providing feedback
US11431583B2 (en) * 2019-11-22 2022-08-30 Huawei Technologies Co., Ltd. Personalized tailored air interface
US20220303158A1 (en) * 2021-03-19 2022-09-22 NEC Laboratories Europe GmbH End-to-end channel estimation in communication networks

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1399835A * 1999-11-24 2003-02-26 Ericsson Inc. Method, receiver devices and systems for whitening signal disturbance in a communication signal
CN101662442A * 2003-07-24 2010-03-03 Cohda Wireless Pty Ltd Method and system for communication in a multiple access network
CN110753937A * 2017-06-19 2020-02-04 Nokia Technologies Oy Data transmission network configuration
CN110267274A * 2019-05-09 2019-09-20 Guangdong University of Technology A spectrum sharing method for selecting sensing users based on social reputation between users

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Fayçal Ait Aoudia; Jakob Hoydis: "End-to-End Learning of Communications Systems Without a Channel Model", 2018 52nd Asilomar Conference on Signals, Systems, and Computers, 21 February 2019, pages 1-4 *

Also Published As

Publication number Publication date
WO2022002347A1 (en) 2022-01-06
US20230246887A1 (en) 2023-08-03
EP4173243A1 (en) 2023-05-03

Similar Documents

Publication Publication Date Title
Wang et al. Deep reinforcement learning for dynamic multichannel access in wireless networks
CN110753937B (en) Data transmission network configuration
US11556799B2 (en) Channel modelling in a data transmission system
CN112166567B (en) Learning in a communication system
Lee et al. Adaptive transmission scheduling in wireless networks for asynchronous federated learning
Tutuncuoglu et al. The binary energy harvesting channel with a unit-sized battery
CN111629380A (en) Dynamic resource allocation method for high-concurrency multi-service industrial 5G network
CN110167176B (en) Wireless network resource allocation method based on distributed machine learning
EP4228347A1 (en) Communication method and communication apparatus
Li et al. Deep reinforcement learning optimal transmission policy for communication systems with energy harvesting and adaptive MQAM
CN113169748B (en) End-to-end learning in a communication system
Farshbafan et al. Curriculum learning for goal-oriented semantic communications with a common language
CN113507328B (en) Time slot MAC protocol method, system, device and medium for underwater acoustic network
CN110278570B (en) Wireless communication system based on artificial intelligence
Mafuta et al. Decentralized resource allocation-based multiagent deep learning in vehicular network
CN115720707A (en) Training in a communication system
CN113923743A (en) Routing method, device, terminal and storage medium for electric power underground pipe gallery
CN111459780B (en) User identification method and device, readable medium and electronic equipment
Yemini et al. The restless hidden Markov bandit with linear rewards and side information
JP7455240B2 (en) Methods, systems, and computer programs for optimizing communication channel capacity using Dirichlet processes
US20220303158A1 (en) End-to-end channel estimation in communication networks
Shuangshuang et al. Adaptive modulation and feedback strategy for an underwater acoustic link
Zhou et al. DRL-Based Workload Allocation for Distributed Coded Machine Learning
Assaouy et al. Policy iteration vs Q-Sarsa approach optimization for embedded system communications with energy harvesting
Oh et al. Federated Flowchart: Overview of State-of-the-Arts based on Federated Learning Process

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination