CN113962400A - Wireless federated learning method based on 1-bit compressed sensing - Google Patents

Wireless federated learning method based on 1-bit compressed sensing

Info

Publication number
CN113962400A
CN113962400A (Application CN202111136679.3A)
Authority
CN
China
Prior art keywords
training
user side
base station
learning
1bit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111136679.3A
Other languages
Chinese (zh)
Inventor
章振宇
谭国平
周思源
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hohai University HHU
Original Assignee
Hohai University HHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hohai University HHU filed Critical Hohai University HHU
Priority to CN202111136679.3A priority Critical patent/CN113962400A/en
Publication of CN113962400A publication Critical patent/CN113962400A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The invention discloses a wireless federated learning method based on 1-bit compressed sensing, which comprises the following steps: the base station issues a global model to each user side; each user side trains on its local data; the user side compares the local model with the issued global model; the user side records the update amplitude and update trend of the local model in the current round; the user side dynamically selects the sparsity and threshold for sparsification according to the update amplitude of the local model; the user side sparsifies the update trend of the local model according to the threshold; the user side compresses it by a 1-bit compressed sensing method; the base station reconstructs the observation signal with the BIHT algorithm and the received sparsity; the base station recovers the user side's local model from the received threshold and the reconstructed sparse update trend; the base station updates the global model; and the base station issues the new global model to each user side for a new round of training, until convergence is reached. The invention reduces the transmission energy consumption of the user side.

Description

Wireless federated learning method based on 1-bit compressed sensing
Technical Field
The invention relates to the technical field of wireless communication for mobile devices, in particular to a wireless federated learning method based on 1-bit compressed sensing.
Background
With the rapid development of artificial intelligence, deep learning neural networks have also developed rapidly. Obtaining the best learning effect requires gathering a large amount of data, which raises various privacy-disclosure problems. A distributed learning framework, federated learning, has therefore been proposed to protect user privacy. Unlike traditional centralized learning, federated learning keeps data at the user side: it does not collect the users' raw data, only the local model data obtained after training at each user side. Because the data security of the user side is protected, models can be trained with data from more users without disclosing privacy. However, since the users' data is not collected all at once, the base station exchanges model information with each user side in every training round, so there is continuous data communication between the base station and the user sides, which places high demands on wireless transmission.
In federated learning, the model is often large, and the architecture requires the server side and the user sides to continuously exchange model data, so it is difficult for a user side to upload the whole model over a wireless link in every training round. The reasons are as follows: 1. on the premise of guaranteed communication quality, continuously sending a large amount of model data consumes significant transmission energy; the mobile users who provide data for federated learning are often very numerous, and a considerable proportion use portable devices such as mobile phones, so for the battery endurance of such small devices, continuously sending a huge volume of model data imposes a heavy energy burden; 2. from the perspective of learning efficiency, when the data sets are large, the federated learning model needs enough rounds of updates to converge, so even if the user side can upload the model data accurately, training is still affected unless the communication delay is kept low; 3. since the model is uploaded over a wireless link, communication overhead must be considered, and the smaller the total amount of communicated data, the smaller the overhead.
Disclosure of Invention
The technical problem to be solved by the invention is to overcome the defects of the prior art and provide a wireless federated learning method based on 1-bit compressed sensing, which reliably reduces the amount of local model data uploaded by the user side and reduces the communication overhead of the user side while preserving the effectiveness of model training.
The invention adopts the following technical scheme to solve the above technical problem:
The invention provides a wireless federated learning method based on 1-bit compressed sensing, comprising the following steps:
Step 1: initialize the iteration number t = 1; the base station initializes the global model as G(ω_0) and issues G(ω_0) to each user side;
Step 2: the base station sends the global model G(ω_{t-1}) obtained from the (t-1)-th round of training to each user side, and each user side performs the t-th round of training locally with its local data to obtain an updated local model;
Step 3: during the t-th round of training, each user side expresses its updated local model G_i(ω_t) as a one-dimensional column vector;
Step 4: during the t-th round of training, each participating user side compares G(ω_{t-1}) with G_i(ω_t), obtaining the update amplitude D_i(ω_t) and the update trend S_i(ω_t) of the i-th user side's local model in the t-th round;
Step 5: during the t-th round of training, the i-th user side dynamically selects the sparsity K_{i,t} and the threshold th_{i,t} used to sparsify S_i(ω_t);
Step 6: during the t-th round of training, the i-th user side sparsifies the update trend S_i(ω_t) according to the selected threshold th_{i,t}, obtaining the sparse update trend S̃_i(ω_t);
Step 7: during the t-th round of training, each user side selects a Gaussian random measurement matrix A as the sensing matrix and compresses S̃_i(ω_t) by the 1-bit compressed sensing method, obtaining the observation signal y_i(ω_t);
Step 8: during the t-th round of training, each user side sends the observation signal y_i(ω_t), the sparsity K_{i,t} and the threshold th_{i,t} to the base station;
Step 9: during the t-th round of training, the base station receives the observation signal y_i(ω_t) and, using the sensing matrix A (shared between the base station and the user sides) and the sparsity K_{i,t}, reconstructs y_i(ω_t) with the BIHT algorithm, obtaining the reconstructed sparse update trend Ŝ_i(ω_t);
Step 10: during the t-th round of training, the base station recovers the i-th user side's local model from the reconstructed sparse update trend Ŝ_i(ω_t), G(ω_{t-1}) and the threshold th_{i,t}, obtaining the recovered local model G'_i(ω_t);
Step 11: during the t-th round of training, after the base station obtains the recovered local models G'_i(ω_t) of all user sides, it averages them to obtain the global model G(ω_t) of the t-th round;
Step 12: the base station issues the new global model G(ω_t) to the user sides for a new round of training, sets t = t + 1, and returns to step 2 until the training process converges.
As a further optimization of the wireless federated learning method based on 1-bit compressed sensing, in step 2, G(ω_{t-1}) is sent to each user side by broadcasting.
As a further optimization of the wireless federated learning method based on 1-bit compressed sensing, step 4 is specifically: during the t-th round of training, each participating user side compares G(ω_{t-1}) and G_i(ω_t) parameter by parameter, subtracting G(ω_{t-1}) from G_i(ω_t); the absolute value of the resulting difference is recorded as the update amplitude D_i(ω_t) of the i-th user side's local model in the t-th round, and the sign of the difference is recorded as the update trend S_i(ω_t) of the i-th user side's local model in the t-th round.
As a further optimization of the wireless federated learning method based on 1-bit compressed sensing, the method of step 5 is as follows:
Set a parameter α_t ∈ (0.4, 0.8), and set the sparsification parameters as a first sparsification parameter p_1 ∈ (4%, 6%) and a second sparsification parameter p_2 ∈ (9%, 11%). Let D_i(ω_t) contain N values; the first sparsity in the t-th round of training is K¹_{i,t} = [p_1·N] and the second sparsity is K²_{i,t} = [p_2·N]. Sort the N values of D_i(ω_t) by magnitude and record the K¹_{i,t}-th largest value as th¹_{i,t} and the K²_{i,t}-th largest value as th²_{i,t}. If th²_{i,t} > α_t·th¹_{i,t}, set the threshold th_{i,t} = th²_{i,t} and the sparsity K_{i,t} = K²_{i,t}; otherwise, set the threshold th_{i,t} = th¹_{i,t} and the sparsity K_{i,t} = K¹_{i,t}. Here [·] denotes rounding.
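The selection rule above can be sketched in NumPy. Note that the branch condition (comparing th²_{i,t} against α_t·th¹_{i,t}) and the default parameter values below are assumptions for illustration, as are the function and variable names:

```python
import numpy as np

def select_threshold_and_sparsity(D, p1=0.05, p2=0.10, alpha=0.6):
    """Sketch of the dynamic threshold/sparsity selection of step 5.

    D is the update-amplitude vector D_i(w_t); p1, p2 and alpha are
    drawn from the stated ranges; the if-branch is an assumption.
    """
    N = len(D)
    K1, K2 = round(p1 * N), round(p2 * N)   # candidate sparsities [p*N]
    sorted_D = np.sort(D)[::-1]             # amplitudes, descending
    th1 = sorted_D[K1 - 1]                  # K1-th largest amplitude
    th2 = sorted_D[K2 - 1]                  # K2-th largest amplitude
    if th2 > alpha * th1:                   # amplitudes decay slowly: keep more
        return th2, K2
    return th1, K1

# Illustrative amplitudes (seed arbitrary).
D = np.abs(np.random.default_rng(0).normal(size=100))
th, K = select_threshold_and_sparsity(D)
```

With either branch, exactly K amplitudes lie at or above the returned threshold, so the sparsified trend in step 6 has sparsity K_{i,t}.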
As a further optimization of the wireless federated learning method based on 1-bit compressed sensing, the sparsification method in step 6 is as follows:
Compare each value of D_i(ω_t) with th_{i,t}; if the n-th value of D_i(ω_t) is greater than th_{i,t}, keep the n-th value of S_i(ω_t); otherwise set the n-th value of S_i(ω_t) to 0.
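A minimal sketch of this thresholding; the values of D_i(ω_t), S_i(ω_t) and th_{i,t} below are illustrative:

```python
import numpy as np

# Illustrative amplitude vector, trend vector, and threshold.
D = np.array([0.40, 0.05, 0.30, 0.01])   # D_i(w_t)
S = np.array([1, -1, 1, -1])             # S_i(w_t), entries in {-1, 1}
th = 0.10                                # th_{i,t}

# Keep trend entries whose amplitude exceeds the threshold, zero the rest.
S_sparse = np.where(D > th, S, 0)
```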
As a further optimization of the wireless federated learning method based on 1-bit compressed sensing, the recovery method in step 10 is as follows:
Read each value of Ŝ_i(ω_t). If the n-th value of Ŝ_i(ω_t) is 1, add th_{i,t} to the n-th value of G(ω_{t-1}); if the n-th value of Ŝ_i(ω_t) is -1, subtract th_{i,t} from the n-th value of G(ω_{t-1}); if the n-th value of Ŝ_i(ω_t) is 0, leave the n-th value of G(ω_{t-1}) unchanged. The result is the recovered local model G'_i(ω_t).
As a further optimization of the wireless federated learning method based on 1-bit compressed sensing, in step 4,
D_i(ω_t) = abs(G_i(ω_t) - G(ω_{t-1}))
S_i(ω_t) = sign(G_i(ω_t) - G(ω_{t-1}));
where abs(·) takes the element-wise absolute value and sign(·) takes the element-wise sign.
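The two formulas above map directly onto NumPy; the arrays G_prev and G_local below are illustrative stand-ins for G(ω_{t-1}) and G_i(ω_t):

```python
import numpy as np

G_prev = np.array([0.5, -0.2, 0.1, 0.0])    # global model G(w_{t-1})
G_local = np.array([0.8, -0.5, 0.1, -0.3])  # local model G_i(w_t)

diff = G_local - G_prev
D = np.abs(diff)    # update amplitude D_i(w_t)
S = np.sign(diff)   # update trend S_i(w_t), entries in {-1, 0, 1}
```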
As a further optimization of the wireless federated learning method based on 1-bit compressed sensing, in step 7,
y_i(ω_t) = sign(A · S̃_i(ω_t)).
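A sketch of this compression step under assumed dimensions (the model size N, the number of measurements M, the support size and the seed are all arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 1000, 200                     # model parameters / 1-bit measurements

# Illustrative sparse update trend with 50 nonzero +/-1 entries.
S_sparse = np.zeros(N)
idx = rng.choice(N, size=50, replace=False)
S_sparse[idx] = rng.choice([-1.0, 1.0], size=50)

A = rng.normal(size=(M, N))          # Gaussian random measurement matrix
y = np.sign(A @ S_sparse)            # observation signal: M signs only
```

Only the M sign bits (plus the scalar sparsity and threshold) are transmitted, instead of the N model parameters, which is where the uplink savings come from.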
As a further optimization of the wireless federated learning method based on 1-bit compressed sensing, in step 9,
Ŝ_i(ω_t) = BIHT(y_i(ω_t), A, K_{i,t});
where BIHT(·) denotes signal reconstruction of the observation signal by the BIHT algorithm with K_{i,t} as the input sparsity, and K_{i,t} denotes the sparsity of the sparse update trend S̃_i(ω_t) obtained after the i-th user side's sparsification operation.
As a further optimization of the wireless federated learning method based on 1-bit compressed sensing, in step 11,
G(ω_t) = FL_mean(G'_i(ω_t));
where FL_mean(G'_i(ω_t)) denotes the element-wise average over all G'_i(ω_t).
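FL_mean can be sketched as stacking the recovered models as rows and taking a column mean; the three recovered models below are illustrative:

```python
import numpy as np

# Recovered local models G'_i(w_t) of three user sides, one per row.
recovered = np.array([[0.8, -0.2],
                      [0.4,  0.2],
                      [0.6,  0.0]])

G_new = recovered.mean(axis=0)   # new global model G(w_t)
```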
Compared with the prior art, the invention adopting the above technical scheme has the following technical effects:
By introducing a dynamic threshold for sparsification and the 1-bit compressed sensing method, the amount of model data the user side needs to upload is reduced and the data is converted into a form that is easier to transmit. This reduces the transmission energy consumption of the user side and the cost of model training, while closely approximating the effect of lossless transmission.
Drawings
Fig. 1 is a flow chart of the wireless federated learning method based on 1-bit compressed sensing of the present invention.
Fig. 2 is a schematic diagram of the relationship between the base station and the users in the wireless federated learning method based on 1-bit compressed sensing of the present invention.
Fig. 3 is a simulation diagram of the training effect of the wireless federated learning method based on 1-bit compressed sensing of the present invention.
Fig. 4 is a communication-overhead simulation diagram of the wireless federated learning method based on 1-bit compressed sensing of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in detail with reference to the accompanying drawings and specific embodiments.
The invention adopts the BIHT algorithm for signal reconstruction in 1-bit compressed sensing. Since the signal to be processed consists of sign data, the core steps of the BIHT algorithm for sign data are as follows:
In the following, x_t denotes the value of x at the t-th iteration, A^T denotes the transpose of the matrix A, η_K(v) keeps the K largest-magnitude elements of the vector v unchanged and sets the other elements to 0, and sign(·) denotes the element-wise sign function, i.e. sign(x) = 1 if x ≥ 0 and sign(x) = -1 if x < 0.
Initialization: x_0 = 0_{N×1}.
Inputs: a parameter a controlling the gradient-descent step size, the measurement matrix A ∈ R^{M×N}, the measurement vector y = sign(Ax) ∈ B^M, the signal sparsity K, and the maximum number of iterations nIter.
Repeat the following two steps nIter times:
iterate a_t: a_t = x_{t-1} + a·A^T(y - sign(A·x_{t-1}));
iterate x_t: x_t = η_K(a_t).
Output: the reconstructed signal x̂ = x_{nIter}.
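The steps above can be sketched as a short NumPy routine. The step-size scaling (a/M), iteration count, problem sizes and seed below are arbitrary choices for illustration, not values from the patent:

```python
import numpy as np

def biht(y, A, K, step=1.0, n_iter=100):
    """Binary Iterative Hard Thresholding, following the steps above.

    Reconstructs a K-sparse signal from 1-bit measurements y = sign(Ax).
    The step/M scaling is a common convention, assumed here.
    """
    M, N = A.shape
    x = np.zeros(N)                                    # x_0 = 0
    for _ in range(n_iter):
        a = x + step / M * A.T @ (y - np.sign(A @ x))  # gradient step a_t
        small = np.argsort(np.abs(a))[:-K]             # all but K largest
        a[small] = 0                                   # hard threshold eta_K
        x = a
    return x

# Illustrative use on a synthetic +/-1 sparse signal (dimensions arbitrary).
rng = np.random.default_rng(0)
N, M, K = 200, 400, 10
x_true = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x_true[support] = rng.choice([-1.0, 1.0], size=K)
A = rng.normal(size=(M, N))
x_hat = biht(np.sign(A @ x_true), A, K)
```

Since 1-bit measurements lose the signal's amplitude, x_hat is only expected to match x_true up to scale; in this method that is sufficient, because the base station only needs the signs of the reconstructed trend.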
In view of the above problems, we propose a solution that optimizes source coding at the source, on the premise that the model is transmitted as a digital signal.
As shown in fig. 1, the wireless federated learning method based on 1-bit compressed sensing of the present invention includes the following steps:
(1) the base station initializes the global model, initializes the iteration number t = 1, and issues the global model to each user side;
(2) the base station broadcasts the global model obtained from the (t-1)-th round of training to each user side, and each user side performs the t-th round of training locally with its local data to obtain an updated local model;
(3) during the t-th round of training, each user side expresses the updated local model as a one-dimensional column vector;
(4) during the t-th round of training, each participating user side compares the global model with the local model, subtracting the global model from the local model; the absolute value of the difference is recorded as the update amplitude of the i-th user side's local model in the t-th round, and the sign of the difference as the update trend of the i-th user side's local model in the t-th round;
(5) during the t-th round of training, the i-th user side dynamically selects the sparsity and the threshold for sparsifying the update trend of the local model;
(6) during the t-th round of training, the i-th user side sparsifies the update trend of the local model according to the selected threshold, obtaining the sparse update trend;
(7) during the t-th round of training, each user side selects a suitable Gaussian random measurement matrix as the sensing matrix and compresses the sparse update trend of the local model by the 1-bit compressed sensing method, obtaining the observation signal;
(8) during the t-th round of training, each user side sends the observation signal, the sparsity and the threshold to the base station;
(9) during the t-th round of training, the base station reconstructs the received observation signal with the BIHT algorithm, using the sensing matrix (shared between the base station and the user sides) and the received sparsity, obtaining the reconstructed sparse update trend;
(10) during the t-th round of training, the base station recovers the user side's local model after the t-th round from the reconstructed sparse update trend, the global model of the (t-1)-th round and the threshold, obtaining the recovered local model;
(11) during the t-th round of training, after the base station obtains the recovered local models of all user sides, it averages them to obtain the global model of the t-th round;
(12) the base station issues the new global model to the user sides for a new round of training, sets t = t + 1, and returns to step (2) until the training process converges.
As shown in fig. 2, in the wireless federated learning method based on 1-bit compressed sensing of the present invention, the interaction between the base station and each user side is as follows:
(1) the base station obtains the global model and issues it to each user side;
(2) each user side participating in the current round trains a local model locally and transmits it back to the base station via 1-bit compressed sensing;
(3) the base station decodes and reconstructs the received observation signals to obtain the recovered local models, updates the global model, and finishes the current round of training;
(4) if training has converged, stop; otherwise start a new round of training.
As shown in fig. 3, the wireless federated learning method based on 1-bit compressed sensing of the present invention is evaluated on the handwritten-digit data set MNIST, comparing it with the standard federated learning method and with a wireless federated learning method based on 1-bit compressed sensing that achieves lossless signal reconstruction.
As shown in fig. 4, tested on the MNIST data set and compared with the standard federated learning method, when trained to the same level the wireless federated learning method based on 1-bit compressed sensing of the present invention reduces the total amount of data uploaded by the user side by a factor of about 5.5.
The above description covers only specific embodiments of the present invention, but the scope of the present invention is not limited thereto; any changes or substitutions that can easily be conceived by those skilled in the art within the technical scope of the present invention fall within the scope of the present invention.

Claims (10)

1. A wireless federated learning method based on 1-bit compressed sensing, characterized by comprising the following steps:
step 1: initialize the iteration number t = 1; the base station initializes the global model as G(ω_0) and issues G(ω_0) to each user side;
step 2: the base station sends the global model G(ω_{t-1}) obtained from the (t-1)-th round of training to each user side, and each user side performs the t-th round of training locally with its local data to obtain an updated local model;
step 3: during the t-th round of training, each user side expresses its updated local model G_i(ω_t) as a one-dimensional column vector;
step 4: during the t-th round of training, each participating user side compares G(ω_{t-1}) with G_i(ω_t), obtaining the update amplitude D_i(ω_t) and the update trend S_i(ω_t) of the i-th user side's local model in the t-th round;
step 5: during the t-th round of training, the i-th user side dynamically selects the sparsity K_{i,t} and the threshold th_{i,t} used to sparsify S_i(ω_t);
step 6: during the t-th round of training, the i-th user side sparsifies the update trend S_i(ω_t) according to the selected threshold th_{i,t}, obtaining the sparse update trend S̃_i(ω_t);
step 7: during the t-th round of training, each user side selects a Gaussian random measurement matrix A as the sensing matrix and compresses S̃_i(ω_t) by the 1-bit compressed sensing method, obtaining the observation signal y_i(ω_t);
step 8: during the t-th round of training, each user side sends the observation signal y_i(ω_t), the sparsity K_{i,t} and the threshold th_{i,t} to the base station;
step 9: during the t-th round of training, the base station receives the observation signal y_i(ω_t) and, using the sensing matrix A (shared between the base station and the user sides) and the sparsity K_{i,t}, reconstructs y_i(ω_t) with the BIHT algorithm, obtaining the reconstructed sparse update trend Ŝ_i(ω_t);
step 10: during the t-th round of training, the base station recovers the i-th user side's local model from the reconstructed sparse update trend Ŝ_i(ω_t), G(ω_{t-1}) and the threshold th_{i,t}, obtaining the recovered local model G'_i(ω_t);
step 11: during the t-th round of training, after the base station obtains the recovered local models G'_i(ω_t) of all user sides, it averages them to obtain the global model G(ω_t) of the t-th round;
step 12: the base station issues the new global model G(ω_t) to the user sides for a new round of training, sets t = t + 1, and returns to step 2 until the training process converges.
2. The wireless federated learning method based on 1-bit compressed sensing according to claim 1, wherein in step 2, G(ω_{t-1}) is sent to each user side by broadcasting.
3. The wireless federated learning method based on 1-bit compressed sensing according to claim 1, wherein step 4 is specifically: during the t-th round of training, each participating user side compares G(ω_{t-1}) and G_i(ω_t) parameter by parameter, subtracting G(ω_{t-1}) from G_i(ω_t); the absolute value of the resulting difference is recorded as the update amplitude D_i(ω_t) of the i-th user side's local model in the t-th round, and the sign of the difference is recorded as the update trend S_i(ω_t) of the i-th user side's local model in the t-th round.
4. The wireless federated learning method based on 1-bit compressed sensing according to claim 1, wherein the method of step 5 is as follows:
Set a parameter α_t ∈ (0.4, 0.8), and set the sparsification parameters as a first sparsification parameter p_1 ∈ (4%, 6%) and a second sparsification parameter p_2 ∈ (9%, 11%). Let D_i(ω_t) contain N values; the first sparsity in the t-th round of training is K¹_{i,t} = [p_1·N] and the second sparsity is K²_{i,t} = [p_2·N]. Sort the N values of D_i(ω_t) by magnitude and record the K¹_{i,t}-th largest value as th¹_{i,t} and the K²_{i,t}-th largest value as th²_{i,t}. If th²_{i,t} > α_t·th¹_{i,t}, set the threshold th_{i,t} = th²_{i,t} and the sparsity K_{i,t} = K²_{i,t}; otherwise, set the threshold th_{i,t} = th¹_{i,t} and the sparsity K_{i,t} = K¹_{i,t}. Here [·] denotes rounding.
5. The wireless federated learning method based on 1-bit compressed sensing according to claim 1, wherein the sparsification method in step 6 is as follows:
Compare each value of D_i(ω_t) with th_{i,t}; if the n-th value of D_i(ω_t) is greater than th_{i,t}, keep the n-th value of S_i(ω_t); otherwise set the n-th value of S_i(ω_t) to 0.
6. The wireless federated learning method based on 1-bit compressed sensing according to claim 1, wherein the recovery method in step 10 is as follows:
Read each value of Ŝ_i(ω_t). If the n-th value of Ŝ_i(ω_t) is 1, add th_{i,t} to the n-th value of G(ω_{t-1}); if the n-th value of Ŝ_i(ω_t) is -1, subtract th_{i,t} from the n-th value of G(ω_{t-1}); if the n-th value of Ŝ_i(ω_t) is 0, leave the n-th value of G(ω_{t-1}) unchanged.
7. The wireless federated learning method based on 1-bit compressed sensing according to claim 1, wherein in step 4,
D_i(ω_t) = abs(G_i(ω_t) - G(ω_{t-1}))
S_i(ω_t) = sign(G_i(ω_t) - G(ω_{t-1}));
where abs(·) takes the element-wise absolute value and sign(·) takes the element-wise sign.
8. The wireless federated learning method based on 1-bit compressed sensing according to claim 1, wherein in step 7,
y_i(ω_t) = sign(A · S̃_i(ω_t)).
9. The wireless federated learning method based on 1-bit compressed sensing according to claim 1, wherein in step 9,
Ŝ_i(ω_t) = BIHT(y_i(ω_t), A, K_{i,t});
where BIHT(·) denotes signal reconstruction of the observation signal by the BIHT algorithm with K_{i,t} as the input sparsity, and K_{i,t} denotes the sparsity of the sparse update trend S̃_i(ω_t) obtained after the i-th user side's sparsification operation.
10. The wireless federated learning method based on 1-bit compressed sensing according to claim 1, wherein in step 11,
G(ω_t) = FL_mean(G'_i(ω_t));
where FL_mean(G'_i(ω_t)) denotes the element-wise average over all G'_i(ω_t).
CN202111136679.3A 2021-09-27 2021-09-27 Wireless federated learning method based on 1-bit compressed sensing Pending CN113962400A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111136679.3A CN113962400A (en) 2021-09-27 2021-09-27 Wireless federated learning method based on 1-bit compressed sensing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111136679.3A CN113962400A (en) 2021-09-27 2021-09-27 Wireless federated learning method based on 1-bit compressed sensing

Publications (1)

Publication Number Publication Date
CN113962400A true CN113962400A (en) 2022-01-21

Family

ID=79462922

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111136679.3A Pending CN113962400A (en) 2021-09-27 2021-09-27 Wireless federated learning method based on 1-bit compressed sensing

Country Status (1)

Country Link
CN (1) CN113962400A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114841370A (en) * 2022-04-29 2022-08-02 杭州锘崴信息科技有限公司 Processing method and device of federal learning model, electronic equipment and storage medium
CN114841370B (en) * 2022-04-29 2022-12-09 杭州锘崴信息科技有限公司 Processing method and device of federal learning model, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN111901829B (en) Wireless federal learning method based on compressed sensing and quantitative coding
WO2022105117A1 (en) Method and device for image quality assessment, computer device, and storage medium
Shao et al. Branchy-GNN: A device-edge co-inference framework for efficient point cloud processing
CN112348914A (en) Deep learning image compression sensing algorithm and system based on Internet of vehicles
CN104301728A (en) Compressed video capture and reconstruction system based on structured sparse dictionary learning
CN113962400A (en) Wireless federal learning method based on 1bit compressed sensing
CN115905978A (en) Fault diagnosis method and system based on layered federal learning
CN109672885B (en) Video image coding and decoding method for intelligent monitoring of mine
CN113194493B (en) Wireless network data missing attribute recovery method and device based on graph neural network
CN114301889A (en) Efficient federated learning method and system based on weight compression
CN116306780B (en) Dynamic graph link generation method
CN116776014B (en) Multi-source track data representation method and device
Li et al. Towards communication-efficient digital twin via AI-powered transmission and reconstruction
CN117093830A (en) User load data restoration method considering local and global
CN116189048A (en) Non-contact heart rate detection method and system based on deep neural network and self-attention mechanism
CN116258923A (en) Image recognition model training method, device, computer equipment and storage medium
CN104901704A (en) Body sensing network signal reconstruction method with spatial-temporal correlation characteristics
CN104243986A (en) Compressed video capture and reconstruction system based on data drive tensor subspace
CN111045861B (en) Sensor data recovery method based on deep neural network
CN114630207A (en) Multi-sensing-node perception data collection method based on noise reduction self-encoder
He Exploration of Distributed Image Compression and Transmission Algorithms for Wireless Sensor Networks.
CN113949880A (en) Extremely-low-bit-rate man-machine collaborative image coding training method and coding and decoding method
Van Luong et al. Online decomposition of compressive streaming data using n-l1 cluster-weighted minimization
CN111476408A (en) Power communication equipment state prediction method and system
Wu et al. [Retracted] Application of Image Processing Variation Model Based on Network Control Robot Image Transmission and Processing System in Multimedia Enhancement Technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination