CN114584436B - Message aggregation system and method in concurrent communication network of single handshake - Google Patents
Message aggregation system and method in concurrent communication network of single handshake
- Publication number
- CN114584436B (Application CN202210483218.1A)
- Authority
- CN
- China
- Prior art keywords: quantization, module, modulation, codebook, gradient
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L25/00—Baseband systems
- H04L25/02—Details ; arrangements for supplying electrical power along data transmission lines
- H04L25/03—Shaping networks in transmitter or receiver, e.g. adaptive shaping networks
- H04L25/03006—Arrangements for removing intersymbol interference
- H04L25/03343—Arrangements at the transmitter end
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Abstract
The invention discloses a message aggregation system and method in a concurrent communication network with single handshake, belonging to the technical field of data transmission in communication networks. The system comprises a transmitting end and a receiving end: the transmitting end comprises a downlink channel estimation and synchronization module, a local training module, a quantization module, a codebook modulation module and a pre-equalization module; the receiving end comprises a signal detection module, a gradient aggregation module, an averaging module and a model updating module. The method comprises a transmitting-end signal processing process, an uplink transmission process and a receiving-end signal processing process. The invention exploits the fact that federated learning needs only the result of gradient message aggregation, not the individual message of each user: multiple users adopt a common quantization codebook and a common modulation codebook and transmit their respective gradient messages in the uplink simultaneously at the same frequency, thereby achieving efficient aggregation of the multi-user gradient messages and reducing the communication resource overhead of federated learning.
Description
Technical Field
The invention relates to the field of data transmission in a communication network, in particular to a message aggregation system and a message aggregation method in a concurrent communication network with single handshake.
Background
Traditional machine learning centralizes user data at a central node and exploits massive computing resources for centralized learning. However, centralized machine learning risks revealing private user data and faces the high overhead of transmitting massive data. As user terminals become more capable, distributed machine learning becomes feasible and overcomes these drawbacks.
Federated learning is a typical distributed machine learning framework in which a central node trains a neural network together with multiple users through repeated message interactions. In any one interaction, the users first train on their respective local data sets to obtain local gradients and send the local gradient messages to the central node; the central node aggregates the gradient messages of the multiple users into a global gradient, completes the model update, and feeds the updated model parameters back to all users, after which the next round of local training and message interaction begins. Because both the number of users and the dimensionality of the gradient vectors are very large, the message interaction of federated learning places a huge burden on the communication network. How to implement message interaction with low communication overhead is therefore a key issue to be urgently solved.
Disclosure of Invention
In view of this, the present invention provides a message aggregation system and method in a concurrent communication network with single handshake, which can effectively reduce the communication resource overhead of federal learning.
The technical solution for realizing the invention is as follows:
A message aggregation system in a concurrent communication network with single handshake comprises a transmitting end and a receiving end;
the transmitting end comprises a downlink channel estimation and synchronization module, a local training module, a quantization module, a codebook modulation module and a pre-equalization module;
the downlink channel estimation and synchronization module is used for performing downlink channel estimation and time synchronization according to downlink broadcast signals from the central node to multiple users;
the local training module is used for performing neural network training on each user according to local data to obtain respective local gradients of the multiple users;
the quantization module is used for quantizing the local gradient of each user according to the quantization codebook to obtain quantization code words and quantization indexes of the quantization code words in the quantization codebook;
the codebook modulation module is used for carrying out codebook modulation on the quantized code words output by the quantization module according to a modulation codebook to obtain the modulation code word corresponding to each quantized code word;
particularly, all users adopt the same quantization codebook and modulation codebook, and the modulation code words in the modulation codebook correspond to the quantization code words in the quantization codebook one by one;
the pre-equalization module is used for pre-equalizing the modulation code words before each user sends the modulation code words according to the downlink channel estimation values to obtain sending signals;
and the receiving end carries out multi-user signal transmission detection to obtain the number of times each modulation code word was transmitted, namely the number of times each quantization code word appears, then carries out gradient aggregation and averaging to finally obtain a global gradient and complete the model update.
Furthermore, the quantization mode of the quantization module is scalar quantization or vector quantization, when scalar quantization is adopted, the dimension of the local gradient vector is not changed, and when vector quantization is adopted, the dimension of the local gradient vector is compressed.
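The dimension behaviour of the two quantization modes can be illustrated with a minimal NumPy sketch. The codebooks below are toy stand-ins (not the Lloyd or clustering codebooks of the embodiment), and the sizes W, V, Q are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(0)
W, V, Q = 20, 5, 4                 # gradient length, vector block size, codebook size
gradient = rng.standard_normal(W)

# Scalar quantization: one index per gradient element -> W indices,
# so the dimension of the index sequence to transmit is unchanged.
scalar_cb = np.array([-1.5, -0.5, 0.5, 1.5])
scalar_idx = np.abs(gradient[:, None] - scalar_cb[None, :]).argmin(axis=1)

# Vector quantization: every V adjacent elements form one vector and map to
# a single index -> W/V indices, compressing the dimension V-fold.
vector_cb = rng.standard_normal((Q, V))          # toy V-dimensional codebook
vectors = gradient.reshape(W // V, V)
vector_idx = np.linalg.norm(vectors[:, None, :] - vector_cb[None, :, :],
                            axis=2).argmin(axis=1)
```

With W = 20 and V = 5, scalar quantization emits 20 indices while vector quantization emits only 4, which is the compression the description refers to.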
Further, the modulation code words are transmitted on pre-allocated time-frequency resources, all users allocate the same time-frequency resources, each modulation code word is a vector containing a plurality of scalar elements, and each element in the modulation code word is pre-equalized according to a channel corresponding to a subcarrier where the element is located, because channels corresponding to different subcarriers (frequency domain resources) are different.
Further, the system considers the situation of time division duplex, so that the uplink and downlink channels have reciprocity, and the uplink transmission is pre-equalized according to the downlink channel estimation value.
Further, the receiving end comprises a signal detection module, a gradient aggregation module, an averaging module and a model updating module;
the signal detection module is used for carrying out multi-user signal transmission detection according to the received signal and the modulation codebook to obtain the number of times that each modulation code word in the modulation codebook is transmitted;
the gradient aggregation module is used for obtaining, from the output of the signal detection module, the number of times each quantization code word in the quantization codebook appears (since the modulation code words and quantization code words correspond one to one, the number of times each modulation code word was sent equals the number of times the corresponding quantization code word appears), then multiplying each quantization code word by its number of appearances to obtain multiplied quantization code words, and summing all multiplied quantization code words;
the averaging module is used for calculating the number of users participating in the federal learning, and then dividing the summation result output by the gradient aggregation module by the number of the users to obtain a global gradient; specifically, the number of users is equal to the sum of the sending times of all modulation code words obtained by the signal detection module;
and the model updating module is used for updating the parameters of the neural network according to the global gradient output by the averaging module.
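The receiving-end pipeline described above (counts → aggregation → averaging) can be sketched in a few lines of NumPy. The counts and the 4-entry scalar codebook are illustrative values, not taken from the embodiment:

```python
import numpy as np

# Toy receiving-end pipeline: the detector reports how many times each
# modulation code word was sent, which (by the one-to-one mapping) is how
# often each quantization code word occurred among the users.
counts = np.array([3, 0, 5, 2])                  # Q = 4 code words
quant_cb = np.array([-1.5, -0.5, 0.5, 1.5])      # shared quantization codebook

num_users = int(counts.sum())                    # averaging module's user count
aggregated = float((quant_cb * counts).sum())    # gradient aggregation
global_gradient = aggregated / num_users         # averaging -> global gradient
```

Note how the user count itself falls out of the detection result as the sum of all send counts, exactly as the averaging module specifies.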
A message aggregation method in a concurrent communication network of single handshake comprises a transmitting end signal processing process, an uplink transmission process and a receiving end signal processing process;
the transmitting-end signal processing process comprises: each user receives a downlink broadcast signal and performs downlink channel estimation and synchronization; local training is then started, and the local gradient obtained by local training is quantized to obtain quantized code words and quantization indices; codebook modulation is carried out according to the quantization indices to obtain modulation code words, which are then pre-equalized to obtain the transmit signal;
the uplink transmission process comprises that multiple users simultaneously transmit respective sending signals in the same frequency uplink, and the sending signals of the multiple users reach the central node through a channel;
the receiving end signal processing process comprises the steps that the central node carries out signal detection according to a received signal and a modulation codebook to obtain the number of times that each modulation code word in the modulation codebook is sent, namely the number of times that each quantization code word in the quantization codebook appears, then each quantization code word is multiplied by the number of times that each quantization code word appears to obtain multiplied quantization code words, then all multiplied quantization code words are summed to complete gradient aggregation, then the summation result is averaged to obtain a global gradient, and finally the global gradient is used for model updating.
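The three processes can be sketched end to end under idealized assumptions (each user contributes one scalar gradient, pre-equalization perfectly cancels the channel, and noise is omitted); a plain least-squares count recovery stands in for the detector:

```python
import numpy as np

rng = np.random.default_rng(1)
K, Q, L = 8, 4, 16                    # users, codebook size, code word length
quant_cb = np.array([-1.5, -0.5, 0.5, 1.5])    # shared quantization codebook
A = rng.standard_normal((L, Q)) + 1j * rng.standard_normal((L, Q))  # shared modulation codebook

# Transmit side: each user quantizes its (here scalar) gradient and selects
# the modulation code word whose index matches the quantization index.
grads = rng.standard_normal(K)
idx = np.abs(grads[:, None] - quant_cb[None, :]).argmin(axis=1)

# Uplink: same-frequency concurrent transmission; with ideal pre-equalization
# the channels cancel and the code words simply superpose at the central node.
received = A[:, idx].sum(axis=1)

# Receive side: recover the per-code-word send counts, aggregate, average.
counts = np.round(np.linalg.lstsq(A, received, rcond=None)[0].real).astype(int)
global_gradient = float((quant_cb * counts).sum()) / int(counts.sum())
```

The recovered counts equal the histogram of the users' quantization indices, and the global gradient equals the average of the users' quantized gradients, i.e., the whole aggregation needs only this single uplink transmission.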
Further, the method can complete the multi-user gradient aggregation only by one uplink transmission, that is: only a single handshake is required by the multi-user and the central node.
Advantageous effects:
(1) the invention realizes the high-efficiency aggregation of multi-user gradient messages without independently calculating the message of each user, thereby greatly reducing the communication resource overhead of federal learning;
(2) the multiple users need only a single handshake with the central node, namely one uplink transmission, which reduces signaling overhead;
(3) when the quantization module of the transmitting terminal adopts a vector quantization mode, the dimensionality of the gradient vector is compressed, so that the transmission delay is reduced.
Drawings
Fig. 1 is a flow chart of message aggregation in a concurrent communication network with single handshake according to the present invention.
FIG. 2 is a performance comparison graph for an embodiment.
Detailed Description
The invention is described in detail below by way of example with reference to the accompanying drawings.
The invention provides a message aggregation system and a message aggregation method in a concurrent communication network with single handshake, which are used for realizing efficient gradient aggregation of federal learning.
The invention considers federated learning with 1 central node and K users, where all users and the central node have a single antenna and jointly train a neural network. Suppose the training phase of federated learning comprises T rounds in total. Taking the t-th training round (1 ≤ t ≤ T) as an example, the process at the transmitting end is detailed first. As shown in fig. 1, the transmitting end includes a downlink channel estimation and synchronization module, a local training module, a quantization module, a codebook modulation module, and a pre-equalization module; wherein,
a downlink channel estimation and synchronization module: each user estimates the downlink channel from the downlink broadcast signal; the downlink channel estimate of the k-th user is denoted $\hat{h}_k$. Meanwhile, this step completes multi-user time synchronization. Under a time division duplex system with perfect channel estimation, the uplink and downlink channel estimates are the same;
a local training module: each user performs neural network training based on its local data set; the output of the k-th user is the local gradient $g_k$ with $W$ scalar elements;
a quantization module: in a preferred embodiment, the k-th user applies the Lloyd algorithm to the local gradient $g_k$ (for the Lloyd algorithm, see Lloyd, S.P., "Least Squares Quantization in PCM," IEEE Transactions on Information Theory, vol. IT-28, March 1982, pp. 129-137) to perform non-uniform quantization, obtaining a scalar quantization codebook $c = [c_1, \ldots, c_Q]$ and, at the same time, the quantization indices of $g_k$ in the codebook, expressed as a matrix $S_k \in \{0,1\}^{Q \times W}$ whose entries are integers. For the w-th element of the local gradient, the w-th column of $S_k$, denoted $s_{k,w}$, has exactly one entry equal to 1 and all other entries equal to 0;
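The Lloyd (Lloyd-Max) design of the non-uniform scalar codebook can be sketched as a plain alternating iteration; the quantile initialization and iteration count here are implementation choices, not specified by the patent:

```python
import numpy as np

def lloyd_scalar(samples, Q, iters=50):
    """Plain Lloyd iteration for a Q-level non-uniform scalar quantizer."""
    # Initialize levels at spread-out quantiles of the training samples.
    levels = np.quantile(samples, np.linspace(0.02, 0.98, Q))
    for _ in range(iters):
        # Nearest-level assignment, then move each level to its cell centroid.
        assign = np.abs(samples[:, None] - levels[None, :]).argmin(axis=1)
        for q in range(Q):
            if np.any(assign == q):
                levels[q] = samples[assign == q].mean()
    return np.sort(levels)

rng = np.random.default_rng(2)
codebook = lloyd_scalar(rng.standard_normal(4000), Q=16)
```

Each iteration performs the two Lloyd conditions in turn (nearest-neighbour partition, then centroid update), which is why the resulting levels are non-uniformly spaced to match the gradient distribution.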
a codebook modulation module: the modulation codebook is expressed as $A = [a_1, \ldots, a_Q] \in \mathbb{C}^{L \times Q}$, where the q-th column (modulation code word) $a_q$ corresponds one-to-one to the q-th quantization code word $c_q$, and the columns of $A$ are linearly independent. Without loss of generality, the modulation index is set equal to the quantization index; the k-th user then selects its transmit modulation code words according to the modulation (quantization) indices, the selected code words being the columns of $A S_k$;
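The index-to-code-word selection amounts to multiplying the codebook by a one-hot index matrix; a tiny sketch with arbitrary small sizes (Q = 4, W = 6, L = 3) and a stand-in real-valued codebook:

```python
import numpy as np

Q, W, L = 4, 6, 3
idx = np.array([2, 0, 3, 1, 2, 2])     # quantization index of each gradient element

# One-hot index matrix S_k: column w has a single 1 marking the chosen code word.
S = np.zeros((Q, W), dtype=int)
S[idx, np.arange(W)] = 1

A = np.arange(L * Q, dtype=float).reshape(L, Q)   # stand-in modulation codebook
tx = A @ S                              # column w is the code word for element w
```

Because each column of S is one-hot, `A @ S` simply gathers, for every gradient element, the codebook column named by its index.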
a pre-equalization module: the channel experienced by all elements of each transmit modulation code word is assumed to be the same; the k-th user multiplies its selected transmit modulation code words by the reciprocal of the channel estimate to complete pre-equalization, obtaining the transmit signal $X_k = A S_k / \hat{h}_k$;
Because the multiple users transmit simultaneously in the uplink, the signal received by the central node is expressed as

$Y = \sum_{k=1}^{K} h_k^t X_k + N = A S + N, \qquad (1)$

where $h_k^t$ denotes the true uplink channel of the k-th user in the t-th training round, $S = \sum_{k=1}^{K} S_k$ is the multi-user superposed modulation index, and $N$ represents thermal noise.
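The effect of pre-equalization on the superposed uplink can be checked numerically: dividing each user's code word by its channel estimate makes the channels cancel, so the receiver sees the clean sum of the code words. The sketch below assumes perfect channel estimates, flat per-user channels, and no noise:

```python
import numpy as np

rng = np.random.default_rng(3)
K, L = 5, 8
h = rng.standard_normal(K) + 1j * rng.standard_normal(K)   # per-user flat channels
codeword = rng.standard_normal(L) + 1j * rng.standard_normal(L)

# Each user divides its code word by its (reciprocity-based) channel estimate;
# the uplink channel then cancels, so the receiver sees a clean superposition.
tx = codeword[None, :] / h[:, None]
rx = (h[:, None] * tx).sum(axis=0)     # noiseless same-frequency uplink
```

Here all K users happen to send the same code word, so the received signal is exactly K times that code word, which is what lets the detector read off integer send counts.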
the receiving end comprises a signal detection module, a gradient aggregation module, an averaging module and a model updating module, and the flow of the receiving end is detailed below. Wherein the content of the first and second substances,
the signal detection module: based on the received signal $Y$ and the modulation codebook $A$ known to both the transmitting and receiving ends, the multi-user superposed modulation index $S$ in formula (1) is recovered. Note that each column vector of $S$ is sparse and its entries are integers; a Bayesian compressed sensing algorithm is therefore adopted for signal detection, yielding the estimate $\hat{S}$ of the superposed modulation index;
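The unknowns here are small nonnegative integer counts, which a detector can exploit. As a simple stand-in for the Bayesian compressed-sensing detector of the embodiment (not the patented algorithm), ordinary least squares followed by rounding already recovers a sparse count column in the noiseless case:

```python
import numpy as np

rng = np.random.default_rng(5)
L, Q = 16, 8
A = rng.standard_normal((L, Q)) + 1j * rng.standard_normal((L, Q))
s_true = np.array([0, 3, 0, 0, 2, 0, 1, 0])   # sparse integer count column of S

# Stand-in detector: least squares, then rounding to nonnegative integers
# (valid because the unknowns are counts; noiseless here for clarity).
y = A @ s_true
s_hat = np.linalg.lstsq(A, y, rcond=None)[0].real
s_hat = np.clip(np.round(s_hat), 0, None).astype(int)
```

A real detector must additionally cope with noise and large Q, which is where the sparsity- and integer-aware Bayesian prior of the description pays off.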
the gradient aggregation module: the purpose of this module is to obtain the superposition of the multi-user gradients, completing gradient aggregation. Given the quantization codebook $c$ and the modulation index estimate $\hat{S}$ from the previous module, and since the modulation index equals the quantization index, $\hat{S}$ is the superposed multi-user quantization index; the superposed multi-user gradient is therefore $c \hat{S}$;
the averaging module: this module first sums $\hat{S}$ column by column, obtaining a $1 \times W$ vector each of whose entries equals the number of users; averaging the $W$ entries of this vector yields the user number $K$; the gradient aggregation result of the previous module is then divided by $K$ to obtain the global gradient $\bar{g} = c\hat{S}/K$;
the model updating module: with $\theta_{t-1}$ the neural network parameters of the previous training round, $\bar{g}$ the global gradient output by the previous module, and $\beta$ the learning rate, the update

$\theta_t = \theta_{t-1} - \beta \bar{g} \qquad (2)$

completes the model update.
For the quantization module at the transmitting end, vector quantization is described in detail below. Suppose the vector quantization codebook is expressed as $C_v = [c_1, \ldots, c_Q] \in \mathbb{R}^{V \times Q}$, where each code word is a vector of dimension $V$. First, among the $W$ scalar elements of $g_k$, every $V$ adjacent elements are regarded as one vector; with $W/V$ set to an integer, $W/V$ vector elements of dimension $V$ are obtained and serve as the input of the vector quantization algorithm, which produces the vector quantization codebook. Taking a clustering algorithm as an example, clustering these $W/V$ vector elements into $Q$ classes yields $Q$ clusters, and the centroids (vectors) of the clusters form the vector quantization codebook $C_v$. Note that when the quantization module adopts a $V \times Q$ vector quantization codebook, the local gradient at the transmitting end of each user must be rearranged, according to a preset arrangement rule, from dimension $1 \times W$ to $V \times (W/V)$ before vector quantization; correspondingly, after the gradient aggregation module, the receiving end must restore the $V \times (W/V)$ result to $1 \times W$ according to the same arrangement rule. The other modules at both ends are unchanged.
The invention discloses a message aggregation system and method in a concurrent communication network with single handshake, which can reduce the communication overhead of federated learning.
To illustrate the advantages of the method proposed by the present invention, the effect of the present invention will be described with reference to fig. 2.
In the simulation, the parameters related to federated learning are set as follows: the central node and the multiple users jointly train a convolutional neural network whose structure adopts LeNet (for the detailed structure of LeNet, see Y. Lecun, L. Bottou, Y. Bengio and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, Nov. 1998); the data set adopts Fashion-MNIST, whose 60000 training images are independently and uniformly distributed to 100 users so that every user holds the same number of data samples; 30 users are randomly selected to participate in federated learning in each training round; the model training process adopts the adaptive moment estimation (Adam) optimizer; the learning rate is 0.001; each local training runs 10 times with batch size 5;
the communication-related parameters are set as follows: the signal-to-noise ratio is 20 dB; each modulation code word in the modulation codebook has length L = 16; with 4-bit quantization the quantization codebook size is Q = 16, and with 5-bit quantization Q = 32; each element of the modulation codebook $A$ follows an independent and identically distributed complex Gaussian distribution;
scalar quantization applies the Lloyd algorithm in advance to the $W$ scalar elements of the first user's local gradient to perform non-uniform quantization, obtaining a quantization codebook that all users then adopt; the vector quantization codebook is $C_v$ with $V = 20$, obtained by the K-means clustering algorithm; the receiver signal detection module adopts an approximate message passing algorithm (see X. Meng, S. Wu, L. Kuang and J. Lu, "An Expectation Propagation Perspective on Approximate Message Passing," IEEE Signal Processing Letters, vol. 22, no. 8, pp. 1194-1197, Aug. 2015, doi: 10.1109/LSP.2015.2391287), recovering the superposed modulation index $\hat{S}$ column by column, with the prior of each element to be recovered set as an integer no greater than the total number of users;
specifically, fig. 2 shows that when the proposed scheme employs 4-bit scalar quantization, the test-set accuracy of the trained neural network model approaches the reference scheme with perfect gradient aggregation. With vector quantization at $V = 20$, the number of columns of $S$ in formula (1) is reduced by a factor of 20, so the communication overhead of each training round is reduced 20-fold. For the same number of training rounds, 4-bit vector quantization lowers the test-set accuracy of the trained model, which is caused by quantization loss; with 5-bit vector quantization, the test-set accuracy is markedly improved and approaches the reference scheme as the training rounds increase. Hence, increasing the number of quantization bits reduces the performance loss of vector quantization.
In summary, the above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (7)
1. A message aggregation system in a concurrent communication network with single handshake, the system being suitable for federal learning, comprising a transmitting end and a receiving end, characterized in that:
the transmitting terminal comprises a downlink channel estimation and synchronization module, a local training module, a quantization module, a codebook modulation module and a pre-equalization module; wherein the content of the first and second substances,
the downlink channel estimation and synchronization module is used for performing downlink channel estimation and time synchronization according to downlink broadcast signals from the central node to multiple users; the central node is a receiving end;
the local training module is used for performing neural network training on each user according to local data to obtain respective local gradients of the multiple users;
the quantization module is used for quantizing the local gradient of each user according to the quantization codebook to obtain quantization code words and quantization indexes of the quantization code words in the quantization codebook;
the codebook modulation module is used for carrying out codebook modulation on the quantized code words output by the quantization module according to a modulation codebook to obtain modulation code words corresponding to each quantized code word;
the modulation code words of the multiple users are transmitted on the same time-frequency resource, all the users adopt the same quantization codebook and modulation codebook, and the modulation code words in the modulation codebook correspond to the quantization code words in the quantization codebook one by one;
the pre-equalization module is used for pre-equalizing the modulation code words before each user sends the modulation code words according to the downlink channel estimation values to obtain sending signals;
and the receiving end carries out multi-user signal transmission detection to obtain the number of times each modulation code word was transmitted and the number of times each quantization code word appears, then carries out gradient aggregation and averaging to finally obtain a global gradient and complete the model update.
2. The message aggregation system in a single handshake concurrent communication network as claimed in claim 1, wherein the quantization mode of the quantization module is scalar quantization or vector quantization, when scalar quantization is used, the dimension of the local gradient vector is unchanged, and when vector quantization is used, the dimension of the local gradient vector is compressed.
3. The system of claim 1, wherein the modulation code words are transmitted on pre-allocated time-frequency resources, all users allocate the same time-frequency resources, each modulation code word is a vector comprising a plurality of scalar elements, and each element in the modulation code word is pre-equalized according to a channel corresponding to a subcarrier where the element is located, since channels corresponding to different subcarriers are different.
4. The system of claim 1, wherein the system considers time division duplex (TDD) conditions, so that the uplink and downlink channels have reciprocity and uplink transmission is pre-equalized according to the downlink channel estimate.
5. The message aggregation system in a single-handshake concurrent communication network as claimed in claim 1, wherein the receiving end comprises a signal detection module, a gradient aggregation module, an averaging module and a model update module;
the signal detection module is used for performing multi-user transmission detection according to the received signal and the modulation codebook, obtaining the number of times each modulation code word in the modulation codebook was transmitted;
the gradient aggregation module is used for obtaining, from the output of the signal detection module, the occurrence count of each quantization code word in the quantization codebook, multiplying each quantization code word by its occurrence count, and summing all the multiplied quantization code words;
the averaging module is used for calculating the number of users participating in the federated learning and dividing the summation result output by the gradient aggregation module by the number of users to obtain the global gradient, wherein the number of users equals the sum of the transmission counts of all the modulation code words obtained by the signal detection module;
and the model update module is used for updating the parameters of the neural network according to the global gradient output by the averaging module.
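The four receiver modules of claim 5 compose a simple pipeline: detected codeword counts → weighted sum → average → model update. A minimal sketch, assuming the detector's output is already a vector of counts and the model update is a plain gradient-descent step (the function name and learning rate are illustrative):

```python
import numpy as np

def receiver_update(codeword_counts, quant_codebook, params, lr=0.1):
    """Receiver-side pipeline of claim 5: given how many times each
    modulation codeword was detected, recover the aggregated gradient
    without knowing which user sent which codeword."""
    counts = np.asarray(codeword_counts)
    # gradient aggregation: weight each quantization codeword by its count
    summed = counts @ quant_codebook
    # averaging: number of users = total number of detected transmissions
    num_users = counts.sum()
    global_grad = summed / num_users
    # model update: one gradient-descent step on the shared parameters
    return params - lr * global_grad
```

Note that the sum of the counts doubles as the user count, which is why the system needs no separate user enumeration to compute the average.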
6. A message aggregation method in a single-handshake concurrent communication network, characterized by comprising a transmitting-end signal processing process, an uplink transmission process and a receiving-end signal processing process;
the transmitting-end signal processing process comprises: each user receives the downlink broadcast signal, performs downlink channel estimation and synchronization, and then performs local training; the local gradient obtained by the local training is quantized to obtain a quantization code word and a quantization index; codebook modulation is performed according to the quantization index to obtain a modulation code word, the modulation code words in the modulation codebook corresponding one to one to the quantization code words in the quantization codebook; the modulation code word is then pre-equalized to obtain a transmit signal;
the uplink transmission process comprises: multiple users transmit their respective transmit signals simultaneously on the same uplink frequency resources, and the transmit signals of the multiple users reach the central node through the channel, the central node being the receiving end;
the receiving-end signal processing process comprises: the central node performs signal detection according to the received signal and the modulation codebook to obtain the number of times each modulation code word in the modulation codebook was transmitted and hence the occurrence count of each quantization code word in the quantization codebook; each quantization code word is multiplied by its occurrence count and all the multiplied quantization code words are summed to complete the gradient aggregation; the summation result is then averaged to obtain the global gradient, and finally the model is updated using the global gradient.
7. The method as claimed in claim 6, wherein only one uplink transmission is needed to complete the gradient aggregation of the multiple users, and the multiple users and the central node need only a single handshake.
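One round of the method of claims 6 and 7 can be simulated end to end. This sketch deliberately simplifies the patent's detector: it uses an orthogonal (identity) modulation codebook and a noiseless channel so that the codeword counts fall out of a single correlation, whereas the patented system handles general codebooks and channel effects. All names and codebooks here are assumptions for illustration:

```python
import numpy as np

def one_round(local_grads, quant_cb):
    """Single-handshake round: quantize per user, transmit concurrently,
    detect codeword counts from the superimposed signal, aggregate and
    average. Orthogonal codebook + noiseless channel are simplifying
    assumptions, not part of the claims."""
    K, _ = quant_cb.shape
    mod_cb = np.eye(K)  # one orthogonal modulation codeword per quantization word
    tx = []
    for g in local_grads:
        g = np.asarray(g)
        # each user quantizes its local gradient and picks the matching codeword
        i = int(np.argmin(np.linalg.norm(quant_cb - g, axis=1)))
        tx.append(mod_cb[i])
    received = np.sum(tx, axis=0)                       # over-the-air superposition
    counts = np.rint(mod_cb.T @ received).astype(int)   # detection by correlation
    return (counts @ quant_cb) / counts.sum()           # aggregate and average
```

Only the single superimposed uplink signal reaches the central node, yet the global (averaged) gradient is recovered exactly, which is the point of claim 7.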
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210483218.1A CN114584436B (en) | 2022-05-06 | 2022-05-06 | Message aggregation system and method in concurrent communication network of single handshake |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114584436A CN114584436A (en) | 2022-06-03 |
CN114584436B true CN114584436B (en) | 2022-07-01 |
Family
ID=81785060
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210483218.1A Active CN114584436B (en) | 2022-05-06 | 2022-05-06 | Message aggregation system and method in concurrent communication network of single handshake |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114584436B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110297848A (en) * | 2019-07-09 | 2019-10-01 | 深圳前海微众银行股份有限公司 | Recommended models training method, terminal and storage medium based on federation's study |
CN112288097A (en) * | 2020-10-29 | 2021-01-29 | 平安科技(深圳)有限公司 | Federal learning data processing method and device, computer equipment and storage medium |
CN112532251A (en) * | 2019-09-17 | 2021-03-19 | 华为技术有限公司 | Data processing method and device |
WO2021189225A1 (en) * | 2020-03-24 | 2021-09-30 | Oppo广东移动通信有限公司 | Machine learning model training method, electronic device and storage medium |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11017322B1 (en) * | 2021-01-28 | 2021-05-25 | Alipay Labs (singapore) Pte. Ltd. | Method and system for federated learning |
Also Published As
Publication number | Publication date |
---|---|
CN114584436A (en) | 2022-06-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109560841B (en) | Large-scale MIMO system channel estimation method based on improved distributed compressed sensing algorithm | |
Li et al. | Spatio-temporal representation with deep neural recurrent network in MIMO CSI feedback | |
CN108462517B (en) | MIMO link self-adaptive transmission method based on machine learning | |
Elbir et al. | Federated learning for hybrid beamforming in mm-wave massive MIMO | |
CN111279337A (en) | Lattice reduction in orthogonal time-frequency space modulation | |
CN111698182A (en) | Time-frequency blocking sparse channel estimation method based on compressed sensing | |
CN113691288B (en) | Joint pilot frequency, feedback and multi-user hybrid coding method based on deep learning | |
CN109474388B (en) | Low-complexity MIMO-NOMA system signal detection method based on improved gradient projection method | |
CN111555781B (en) | Large-scale MIMO channel state information compression and reconstruction method based on deep learning attention mechanism | |
CN114641941B (en) | Communication system and method using ultra-large Multiple Input Multiple Output (MIMO) antenna system with extremely large class of fast unitary transforms | |
Chen et al. | A novel quantization method for deep learning-based massive MIMO CSI feedback | |
JP7146095B2 (en) | Method, computer program product and device for decryption | |
Tseng et al. | Deep-learning-aided cross-layer resource allocation of OFDMA/NOMA video communication systems | |
Lan et al. | Communication-efficient federated learning for resource-constrained edge devices | |
CN110311876A (en) | The implementation method of underwater sound OFDM receiver based on deep neural network | |
CN106911622A (en) | ACO ofdm system channel estimation methods based on compressed sensing | |
Kong et al. | Knowledge distillation-aided end-to-end learning for linear precoding in multiuser MIMO downlink systems with finite-rate feedback | |
Ouyang et al. | Channel estimation for underwater acoustic OFDM communications: An image super-resolution approach | |
CN114584436B (en) | Message aggregation system and method in concurrent communication network of single handshake | |
Qiao et al. | Unsourced massive access-based digital over-the-air computation for efficient federated edge learning | |
Liu et al. | OFDM-based digital semantic communication with importance awareness | |
Xu et al. | Detect to learn: Structure learning with attention and decision feedback for MIMO-OFDM receive processing | |
CN114826832A (en) | Channel estimation method, neural network training method, device and equipment | |
CN110138423B (en) | Non-orthogonal multiplexing method | |
Qing et al. | Transfer learning-based channel estimation in orthogonal frequency division multiplexing systems using data-nulling superimposed pilots |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 20231214 Address after: Room 1401, 14th Floor, Building 6, Courtyard 8, Kegu 1st Street, Beijing Economic and Technological Development Zone, Daxing District, Beijing, 100176 Patentee after: Beijing Institute of Technology Measurement and Control Technology Co.,Ltd. Address before: 100081 No. 5 South Main Street, Haidian District, Beijing, Zhongguancun Patentee before: BEIJING INSTITUTE OF TECHNOLOGY |