CN110728314A - Method for detecting active users of large-scale scheduling-free system - Google Patents
- Publication number: CN110728314A (application CN201910939500.4A; granted as CN110728314B)
- Authority
- CN
- China
- Prior art keywords
- data
- encoder
- network
- training
- vector
- Prior art date
- Legal status: Granted
Classifications
- G06N 3/02, G06N 3/08: Computing arrangements based on biological models; neural networks; learning methods
- G06F 18/214: Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
- H04L 67/12: Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
- H04W 24/00: Supervisory, monitoring or testing arrangements
- H04W 4/70: Services for machine-to-machine communication [M2M] or machine type communication [MTC]
Abstract
A method for detecting active users of a large-scale scheduling-free system first derives a loss function for the specific communication scenario by a variational method, trains a variational auto-encoder neural network on a large amount of training data, detects test data with the trained network, and measures the detection result with the Fβ metric from machine learning. The invention applies the deep-learning variational auto-encoder to the active-user detection problem of a large-scale scheduling-free system: with the number of active users unknown, observation data corresponding to different numbers of active users are input to a network trained on a large amount of data, and accurate detection results are obtained. Compared with the expensive reconstruction process of compressed sensing, only a simple network structure is used; the neural network trained on large amounts of data directly outputs accurate results for newly input data to be detected, without repeating iterative computation, and shows good robustness.
Description
Technical Field
The invention relates to the technical field of communication, and in particular to a deep-learning-based active user detection method for large-scale scheduling-free transmission scenarios.
Background
The internet of things is structurally divided into a sensing layer, a network layer and an application layer, and the important components of the sensing layer are various sensors. Next-generation mobile communication technology introduces machine-type communication (MTC) scenarios. MTC is widely applied to environment sensing, event monitoring and control, and requires many kinds of monitoring sensor devices. For MTC, the system must support massive connections: the number of devices accessing a base station may reach the order of 10^4 to 10^6, yet only a small fraction of the devices are active at any given time. Detecting which users are active in the network is therefore essential to ensure efficient communication between the base station and the users.
Compared with the total number of registered users in the network, the number of active users is relatively small, so much prior research has applied signal processing methods from compressed sensing to active user detection, such as the BOMP (block orthogonal matching pursuit) algorithm, the AMP (approximate message passing) algorithm and the BP (basis pursuit) algorithm. Compressed sensing has the advantages of flexibility and high data efficiency, but its expensive reconstruction process limits its application.
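As a rough illustration (not part of the patent), the greedy family that BOMP belongs to can be sketched as plain orthogonal matching pursuit: repeatedly pick the pilot column best correlated with the residual, then re-fit by least squares. The function name and parameters are hypothetical.

```python
import numpy as np

def omp_active_users(Phi, y, max_active):
    """Greedy orthogonal matching pursuit: pick the column of Phi most
    correlated with the current residual, re-fit on the chosen support
    by least squares, and repeat max_active times."""
    residual = y.astype(complex)
    support = []
    for _ in range(max_active):
        # Correlate the residual with every column; ignore columns already chosen.
        scores = np.abs(Phi.conj().T @ residual)
        scores[support] = 0.0
        support.append(int(np.argmax(scores)))
        # Least-squares re-fit on the current support, then update the residual.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    return sorted(support)
```

Each iteration costs a full matrix-vector product and a least-squares solve, and the whole procedure must be rerun for every new observation; this is the iterative cost the patent's trained network avoids.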
With the extensive research on deep learning methods, their techniques have also been widely applied in the communication field. The variational auto-encoder is a generative model widely used in unsupervised and semi-supervised deep learning, currently most often for image generation. Its application to active-user detection in large-scale scheduling-free communication systems, however, has not yet been reported.
Disclosure of Invention
In order to overcome the technical limitation of the traditional compressed sensing algorithm and solve the problem of detection of active users of a large-scale scheduling-free system, the invention aims to provide a method for detecting the active users of the large-scale scheduling-free system by using a deep learning variational self-encoder.
The above object of the invention is achieved by the following technical solution.
A method for detecting active users of a large-scale scheduling-free system comprises the following steps:
(1) creating a data set of the received signal y;
(2) dividing the data set obtained in the step (1) into a training set and a testing set;
(3) designing a loss function and a neural network structure by using a variational self-encoder method, and inputting the training set data obtained in the step (2) into a network for training;
(4) inputting the test set data obtained in step (2) into the network trained in step (3) for testing;
(5) in the reconstructed signal output in step (4), a position whose element value is 1 indicates a user in the active state and a position whose element value is 0 indicates a user in the inactive state; the detection result is measured with the Fβ metric from machine learning.
In step (1), the data set of the received signal y is created as follows: first generate a sparse vector a of length N as the active-user indicator, where N is the total number of registered users; a contains only a few 1s and the remaining elements are all 0. The sparse vector a is multiplied element-wise by the channel coefficient vector h of the given scenario, the result is matrix-multiplied by the pilot sequence matrix S allocated by the base station, and white Gaussian noise w is superposed to obtain the received signal y = S(h*a) + w, where * denotes element-wise multiplication. Many data samples are generated in this way, and the data set is created in the TensorFlow deep learning framework to serve as training and test data sets;
step (3) is specifically: construct a variational auto-encoder network structure in the TensorFlow deep learning framework, with the encoder and the decoder each consisting of two fully connected layers, and derive a variational lower bound by variational inference as the loss function L(θ, Φ; y) for training the variational auto-encoder network. The variational lower bound is expressed as:

L(θ, Φ; y) = E_{qΦ(a|y)}[log pθ(y|a)] − D_KL(qΦ(a|y) ∥ pθ(a)),

where pθ(a) is the prior probability density function, pθ(y|a) is a Gaussian likelihood function, and qΦ(a|y) is an approximate posterior probability density function approximating the posterior pθ(a|y); it is referred to as the encoder;

network training uses mini-batch training and minimizes the loss function by gradient descent. Attention focuses on the output of the encoder qΦ(a|y), which is implemented as a two-layer fully connected neural network. Because the processed data are complex-valued, the input layer has two input channels corresponding to the real and imaginary parts of the observed signal y, and the output layer has one channel representing the output probability vector q = [q1, q2, …, qN], where qi is the predicted probability that user i is active. The activation function between the first and second fully connected layers is the Softmax function, defined as Softmax(z)_i = e^{z_i} / Σ_j e^{z_j}; the output-layer activation function is the Sigmoid function, ensuring that each output lies in [0, 1] and represents a valid probability.
Specifically, step (5) defines the combinations of true and predicted values of the active-user indicator vector a = [a1, a2, …, aN] as follows: the number of cases with ai = 1 and predicted value qi = 1 is recorded as TP; ai = 1 and qi = 0 as FN; ai = 0 and qi = 0 as TN; ai = 0 and qi = 1 as FP. Two metrics are defined, the precision P and the recall R:

P = TP / (TP + FP),  R = TP / (TP + FN).

The different importance of P and R can be expressed with the Fβ metric, a weighted harmonic mean of the two; the parameter β of Fβ is taken greater than 1 to give a larger weight to recall. Fβ is defined as:

Fβ = (1 + β²) · P · R / (β² · P + R).
the invention solves the problem of active user detection in a large-scale scheduling-free transmission scene by using a deep learning method based on a variational self-encoder. Aiming at the reconstruction effect of a piece of data, the detection result of the active user is compared with the result of the traditional compressed sensing algorithm, and the deep learning method based on the variational self-encoder can achieve a good effect.
Drawings
FIG. 1 is a flow chart of an embodiment of the method of the present invention.
FIG. 2 is a comparison of the method of the present invention with other algorithms.
Detailed Description
For clarity, the objects, technical solutions and advantages of the present invention are described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
Consider an uplink scheduling-free wireless cellular system with N single-antenna users, all accessing the same base station, which is equipped with one antenna. The N users are uniformly located on a circle centered at the base station; once a user submits registration information to the base station and successfully accesses the network, it becomes an online user under the coverage of that base station. The system must support large-scale user connections, but only a small fraction of the users are active at any given time.
Referring to fig. 1, a method for detecting active users based on the large-scale scheduling-free system includes the following steps:
(1) creating a data set of the received signal y;
First, a sparse vector a of length N is generated as the active-user indicator; N is the total number of registered users, a contains only a few 1s, and the remaining elements are all 0. For example, a vector length of N = 100 represents 100 online users to be detected, among whom the number of active users may be 2, 3, 4, 5, 6, and so on. The sparse vector a is multiplied element-wise by the channel coefficient vector h of the given scenario, the result is matrix-multiplied by the pilot sequence matrix S allocated by the base station, and white Gaussian noise w is superposed to obtain the received signal y = S(h*a) + w, where * denotes element-wise multiplication. Many data samples are generated in this way, and the data set is created in the TensorFlow deep learning framework to serve as training and test data sets;
specifically: the base station allocates a pilot sequence to each online user as its unique identifier. Let an ∈ {0, 1} and let a = [a1, a2, …, aN] be the active-user indicator vector, containing only 0s and a few 1s, where 1 denotes the active state and 0 the inactive state. The pilot sequence allocated to user n is sn ∈ C^(L×1), where L is the pilot sequence length; the entries of sn are assumed i.i.d. complex Gaussian with mean 0 and variance 1/L. A block-fading model is considered, so the channel remains unchanged within each block. The sparse vector a is multiplied element-wise by the channel coefficient vector, the result is matrix-multiplied by the pilot matrix S = [s1, …, sN], and white Gaussian noise w is superposed, so the received signal at the base station is modeled as:

y = Sx + w = S(h̃*a) + w

where h̃n ∈ C is the effective channel coefficient between the base station and user n, and w ∈ C^(L×1) is additive white Gaussian noise. Assume the channel coefficient is h̃ = h + Δh, where h is the channel state information estimated at the transmitting end and Δh is the channel-state-information error; h is assumed to undergo Rayleigh fading, and Δh also obeys a complex Gaussian distribution. The received signal at the base station can thus be written as:

y = S(h*a) + S(Δh*a) + w = Φa + n

where Φ denotes the matrix S with each column scaled by the corresponding entry of h, and n = S(Δh*a) + w still follows a complex Gaussian distribution with mean 0 and variance σ².
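The data generation described above can be sketched as follows (an illustrative numpy sketch, not the patent's implementation; the function names, default parameter values and the noise level are assumptions):

```python
import numpy as np

def generate_sample(rng, N=100, L=20, n_active=3, noise_std=0.1):
    """Draw one (y, a) pair following y = S(h*a) + w, where * is element-wise."""
    # Active-user indicator: n_active ones at random positions, rest zeros.
    a = np.zeros(N)
    a[rng.choice(N, size=n_active, replace=False)] = 1.0
    # Pilot matrix S: i.i.d. complex Gaussian entries, mean 0, variance 1/L.
    S = (rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N))) / np.sqrt(2 * L)
    # Rayleigh-fading channel coefficient for each user.
    h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    # Additive white Gaussian noise.
    w = noise_std * (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
    y = S @ (h * a) + w
    return y, a

def make_dataset(n_samples=1060, seed=0):
    """Samples with 2 to 6 active users, as in the described embodiment."""
    rng = np.random.default_rng(seed)
    pairs = [generate_sample(rng, n_active=int(rng.integers(2, 7)))
             for _ in range(n_samples)]
    ys, avs = zip(*pairs)
    return np.array(ys), np.array(avs)
```

The real and imaginary parts of each y row would then be stacked to form the two real input channels of the encoder network.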
1060 data samples are randomly generated in the TensorFlow deep learning framework, assuming a total number of users N = 100 and a pilot sequence length L = 20, covering active-user counts of 2, 3, 4, 5 and 6 with an equal number of samples per count. 980 samples are selected as the training set and the remaining 80 as the test set.
(2) Dividing the data set obtained in the step (1) into a training set and a testing set;
In the example, 980 samples are selected as the training set and the remaining 80 as the test set. The training set and the test set each contain an equal number of samples for each active-user count, namely 2, 3, 4, 5 and 6.
(3) Designing a loss function and a neural network structure by using a variational self-encoder method, and inputting the training set data obtained in the step (2) into a network for training;
That is, a variational auto-encoder network is built in the TensorFlow deep learning framework, with the encoder and the decoder each consisting of two fully connected layers. A variational lower bound is derived by variational inference and used as the loss function for training the variational auto-encoder network:

L(θ, Φ; y) = E_{qΦ(a|y)}[log pθ(y|a)] − D_KL(qΦ(a|y) ∥ pθ(a)).

Network training uses mini-batch training and minimizes the loss function by gradient descent.
The method specifically comprises the following steps:
The invention aims to recover the active-user indicator signal a from the received signal y; in this scenario the number of users is far larger than the pilot sequence length, i.e., N ≫ L.
For this problem, consider estimating the active-user indicator vector a by maximum likelihood (ML), i.e., seeking the vector a and the noise variance σ² that maximize log pθ(y). Assuming the users are synchronous and each user is active independently with the same probability ε within each coherence block, the prior is

pθ(a) = ∏_{n=1}^{N} ε^{a_n} (1 − ε)^{1−a_n},

and the likelihood is Gaussian, pθ(y|a) = CN(y; Φa, σ²I), with posterior pθ(a|y). However, the evidence pθ(y) = ∫ p(x) pθ(y|x) dx is intractable, and this difficulty is resolved by the variational approach. With the variational auto-encoder (VAE) method, instead of maximizing pθ(y) directly, a lower bound is maximized:

log pθ(y) = D_KL[qΦ(a|y) ∥ pθ(a|y)] + L(θ, Φ; y) ≥ L(θ, Φ; y),

where D_KL[·∥·] denotes the KL divergence between two density functions and qΦ(a|y) is an arbitrary density approximating the posterior pθ(a|y). The problem now turns into maximizing the lower bound L(θ, Φ; y); the decoder pθ(y|a) and the encoder qΦ(a|y) are each implemented by a neural network.

Let the first term on the right of L(θ, Φ; y) = E_{qΦ(a|y)}[log pθ(y|a)] − D_KL[qΦ(a|y) ∥ pθ(a)] be A and the second term be B. Under the Gaussian likelihood, A depends on σ² through the expected reconstruction error E_{qΦ(a|y)}[‖y − Φa‖²]. Differentiating with respect to σ² and setting the derivative to zero gives the optimal value

σ̂² = E_{qΦ(a|y)}[‖y − Φa‖²] / L;

substituting σ̂² back and removing constant terms that should not participate in training yields the specific form of the loss function used for network training in this communication scenario.
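A Monte-Carlo sketch of a loss of this shape is given below. It is illustrative only: the patent's exact loss formula is not reproduced in the text, so this sketch assumes the standard result that profiling out σ² leaves an L·log reconstruction-error term, plus the KL divergence between the encoder's independent-Bernoulli output q and the Bernoulli(ε) prior. All names and defaults are hypothetical.

```python
import numpy as np

def vae_loss(y, Phi, q, eps=0.05, n_mc=64, rng=None):
    """Sketch of the training loss: L * log of the expected reconstruction
    error (sigma^2 profiled out) plus KL(Bernoulli(q) || Bernoulli(eps))."""
    if rng is None:
        rng = np.random.default_rng(0)
    L = y.shape[0]
    # Monte-Carlo samples of activity patterns from the encoder output q.
    a_samples = (rng.random((n_mc, q.shape[0])) < q).astype(float)
    # Expected squared reconstruction error E_q ||y - Phi a||^2 (per entry).
    recon = np.mean(np.abs(y - a_samples @ Phi.T) ** 2)
    recon_term = L * np.log(recon + 1e-12)
    # KL divergence between independent Bernoulli(q_i) and the Bernoulli(eps) prior.
    q = np.clip(q, 1e-6, 1 - 1e-6)
    kl = np.sum(q * np.log(q / eps) + (1 - q) * np.log((1 - q) / (1 - eps)))
    return recon_term + kl
```

In practice the patent trains end to end by gradient descent, so a differentiable relaxation of the Bernoulli sampling would be needed; the sketch only shows the structure of the objective.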
the invention is primarily concerned with the encoder qΦThe output result of (a | y) is realized for the network of the encoder by using two layers of fully-connected neural networks, because the processed data is complex, the input layer has two input channels respectively corresponding to the real part and the imaginary part of the observed signal y, and the output layer has one channel representing the output probability vectorWherein q isiAnd representing the probability predicted value of each user in the activation state. The activation function between the first layer full connection layer and the second layer full connection layer is a Softmax function and is defined asThe output layer activation function is a Sigmoid function, and the output result is ensured to be [0,1 ]]Represents a valid probability value.
(4) Inputting the test set data obtained in step (2) into the network trained in step (3) for testing;
the test set contains an equal number of samples for each active-user count, namely 2, 3, 4, 5 and 6. Test data for the different numbers of active users are input to the trained network to obtain the corresponding output results.
(5) In the reconstructed signal output in step (4), a position whose element value is 1 indicates a user in the active state and a position whose element value is 0 indicates a user in the inactive state; the detection result is measured with the Fβ metric from machine learning.
For the active-user indicator vector a = [a1, a2, …, aN]: the number of cases with ai = 1 and predicted value qi = 1 is recorded as TP; ai = 1 and qi = 0 as FN; ai = 0 and qi = 0 as TN; ai = 0 and qi = 1 as FP. Two metrics are defined, the precision P and the recall R:

P = TP / (TP + FP),  R = TP / (TP + FN).
the difference in the degree of importance of P and R may be represented by FβThe metric is a weighted harmonic average of the two. For example, in the commodity recommendation algorithm, in order to disturb the user as little as possible, it is more desirable that the recommended content is interesting to the user, and the precision ratio P is more important; in a evasive search system, it is more desirable to miss as few evasions as possible, where recall is more important. To highlight the importance of recall ratio, we take FβThe parameter β of (1) represents a higher specific gravity given to recall. FβIs defined as:
FIG. 2 shows the curve of the Fβ metric obtained by the method of the invention as the number of active users varies, together with the Fβ curves of the traditional compressed sensing algorithms BPDN, SAMP and IST for comparison. As can be seen from FIG. 2, the method of the invention clearly outperforms the other algorithms and has ideal detection performance. Moreover, the trained network can detect newly input data without repeating iterative computation, and shows good robustness.
Claims (4)
1. A method for detecting active users of a large-scale scheduling-free system, characterized by comprising the following steps:
(1) creating a data set of the received signal y;
(2) dividing the data set obtained in the step (1) into a training set and a testing set;
(3) designing a loss function and a neural network structure by using a variational self-encoder method, and inputting the training set data obtained in the step (2) into a network for training;
(4) inputting the test set data obtained in step (2) into the network trained in step (3) for testing;
(5) in the reconstructed signal output in step (4), a position whose element value is 1 indicates a user in the active state and a position whose element value is 0 indicates a user in the inactive state; the detection result is measured with the Fβ metric from machine learning.
2. The method for detecting active users of a large-scale scheduling-free system according to claim 1, wherein
in step (1), the data set of the received signal y is created as follows: first generate a sparse vector a of length N as the active-user indicator, where N is the total number of registered users, a contains only a few 1s, and the remaining elements are all 0; multiply the sparse vector a element-wise by the channel coefficient vector h of the given scenario, matrix-multiply the result by the pilot sequence matrix S allocated by the base station, and superpose white Gaussian noise w to obtain the received signal y = S(h*a) + w, where * denotes element-wise multiplication; generate a plurality of data samples in this way and create the data set in the TensorFlow deep learning framework as training and test data sets.
3. The method for detecting active users of a large-scale scheduling-free system according to claim 1, wherein
step (3) is specifically: construct a variational auto-encoder network structure in the TensorFlow deep learning framework, with the encoder and the decoder each consisting of two fully connected layers, and derive a variational lower bound by variational inference as the loss function L(θ, Φ; y) for training the variational auto-encoder network, the variational lower bound being expressed as:

L(θ, Φ; y) = E_{qΦ(a|y)}[log pθ(y|a)] − D_KL(qΦ(a|y) ∥ pθ(a)),

where pθ(a) is the prior probability density function, pθ(y|a) is a Gaussian likelihood function, and qΦ(a|y) is an approximate posterior probability density function approximating the posterior pθ(a|y), referred to as the encoder;
network training uses mini-batch training and minimizes the loss function by gradient descent; the output of the encoder qΦ(a|y) is implemented as a two-layer fully connected neural network; because the processed data are complex-valued, the input layer has two input channels corresponding to the real and imaginary parts of the observed signal y, and the output layer has one channel representing the output probability vector q = [q1, q2, …, qN], where qi is the predicted probability that user i is active; the activation function between the first and second fully connected layers is the Softmax function, defined as Softmax(z)_i = e^{z_i} / Σ_j e^{z_j}, and the output-layer activation function is the Sigmoid function, ensuring that each output lies in [0, 1] and represents a valid probability.
4. The method for detecting active users of a large-scale scheduling-free system according to claim 1, wherein
step (5) specifically defines the combinations of true and predicted values of the active-user indicator vector a = [a1, a2, …, aN] as follows: the number of cases with ai = 1 and predicted value qi = 1 is recorded as TP; ai = 1 and qi = 0 as FN; ai = 0 and qi = 0 as TN; ai = 0 and qi = 1 as FP; and two metrics, the precision P and the recall R, are defined as:

P = TP / (TP + FP),  R = TP / (TP + FN),

with the Fβ metric defined as Fβ = (1 + β²) · P · R / (β² · P + R).
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN201910939500.4A (granted as CN110728314B) | 2019-09-30 | 2019-09-30 | Method for detecting active users of large-scale scheduling-free system |
Publications (2)

| Publication Number | Publication Date |
| --- | --- |
| CN110728314A | 2020-01-24 |
| CN110728314B | 2022-10-25 |
Cited By (3)

| Publication number | Priority date | Publication date | Assignee | Title |
| --- | --- | --- | --- | --- |
| CN111553542A | 2020-05-15 | 2020-08-18 | Wuxi Institute of Technology | User coupon verification and sale rate prediction method |
| CN111563548A | 2020-04-30 | 2020-08-21 | Peng Cheng Laboratory | Data preprocessing method and system based on reinforcement learning and related equipment |
| CN115695105A | 2023-01-03 | 2023-02-03 | Nanchang University | Channel estimation method and device based on deep iteration intelligent super-surface auxiliary communication |
Citations (5)

| Publication number | Priority date | Publication date | Assignee | Title |
| --- | --- | --- | --- | --- |
| US20180218261A1 | 2017-01-31 | 2018-08-02 | Paypal, Inc. | Fraud prediction based on partial usage data |
| US20180322388A1 | 2017-05-03 | 2018-11-08 | Virginia Tech Intellectual Properties, Inc. | Learning and deployment of adaptive wireless communications |
| CN109672464A | 2018-12-13 | 2019-04-23 | Xidian University | Massive MIMO channel state information feedback method based on FCFNN |
| CN109743268A | 2018-12-06 | 2019-05-10 | Southeast University | Millimeter wave channel estimation and compression method based on deep neural network |
| US20190171187A1 | 2016-05-09 | 2019-06-06 | StrongForce IoT Portfolio 2016, LLC | Methods and systems for the industrial internet of things |
Non-Patent Citations (3)

| Title |
| --- |
| DIEDERIK P. KINGMA et al., "Auto-Encoding Variational Bayes", arXiv |
| MEHDI MOHAMMADI et al., "Semisupervised Deep Reinforcement Learning in Support of IoT and Smart City Services", IEEE Internet of Things Journal |
| SHE Bo et al., "Fault diagnosis method based on deep convolutional variational auto-encoding network", Chinese Journal of Scientific Instrument |
Legal Events

| Code | Title |
| --- | --- |
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |