CN113766669B - Large-scale random access method based on deep learning network - Google Patents

Large-scale random access method based on deep learning network

Info

Publication number: CN113766669B
Application number: CN202111323583.8A
Authority: CN (China)
Prior art keywords: neural network, user, matrix, random access, algorithm
Legal status: Active (granted)
Other versions: CN113766669A
Inventors: 黄川, 崔曙光, 黄坚豪
Assignee (original and current): The Chinese University of Hong Kong, Shenzhen; application filed by The Chinese University of Hong Kong, Shenzhen
Classifications

    • H04W74/0833: Non-scheduled or contention-based access (e.g. random access, ALOHA, CSMA) using a random access procedure
    • G06N3/045: Neural networks, combinations of networks
    • G06N3/047: Neural networks, probabilistic or stochastic networks
    • G06N3/048: Neural networks, activation functions
    • G06N3/08: Neural networks, learning methods


Abstract

The invention discloses a large-scale random access method based on a deep learning network, which comprises the following steps: constructing a system model for large-scale random access; constructing a model that uses a deep neural network to detect the user transmit signals and judge user access; carrying out neural network training and parameter updating; and detecting the user transmit signals with the trained and updated neural network, thereby judging whether each user has accessed successfully. The proposed large-scale random access scheme provides a low-complexity decoding algorithm and effectively improves communication performance. Specifically, compared with traditional algorithms, the neural-network-based detection algorithm does not require prior statistical knowledge of the channel, greatly reduces system loss, and is better suited to practical communication systems, where it provides better performance.

Description

Large-scale random access method based on deep learning network
Technical Field
The invention relates to a deep learning network, in particular to a large-scale random access method based on the deep learning network.
Background
With the rapid development of communication technology, base stations are ever more widely deployed in social life and are often required to admit a large number of users and support their uplink transmissions; the method by which users access the base station is therefore very important.
The traditional access strategy and data transmission strategy are independent and divided into two steps: active users are detected first, and channel estimation and data detection are then carried out for the detected active users. This two-step strategy requires users to complete activity detection and channel estimation through pilots before data transmission, which generates a huge time delay and performance overhead. It is therefore difficult for this conventional communication mode to satisfy the demands of high energy efficiency and low communication delay in large-scale scenarios. In addition, conventional access algorithms often need to know the statistical properties of the channel and the user activity characteristics, which is difficult to obtain in practical situations.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a large-scale random access method based on a deep learning network that offers a low-complexity decoding scheme and effectively improves communication performance.
The purpose of the invention is realized by the following technical scheme: a large-scale random access method based on a deep learning network comprises the following steps:
s1, constructing a system model based on large-scale random access;
s2, constructing a model that uses a deep neural network to detect the user transmit signals and judge user access;
s3, carrying out neural network training and parameter updating;
and s4, detecting the user transmit signals with the trained and updated neural network, thereby judging whether each user has accessed successfully.
Further, the step S1 includes the following sub-steps:
S101, consider a communication system containing N single-antenna users and a receiver equipped with M antennas; each user accesses the receiver at random, i.e., transmits information to the receiver with a certain probability in each transmission slot. A random variable a_n describes the activity of user n: in each slot, a_n = 1 when user n is active and a_n = 0 otherwise;
S102, each user adopts a grant-free random access scheme; before transmission, each user is pre-assigned a dedicated pilot sequence of length K, drawn from the set of complex sequences of that length. The elements of each pilot are independent and identically distributed complex Gaussian variables with mean 0 and a given variance, with covariance proportional to the identity matrix of dimension K; the pilot sequences of all users are stored at the receiving end;
S103, in each transmission slot, every active user synchronously transmits its pilot sequence together with a transmit signal x_n to the receiving end. Stacking the contributions of all users gives a matrix expression for the received signal Y, in which W is Gaussian noise whose elements are independent and identically distributed with zero mean and a given variance; h_n denotes the channel vector of length M from user n to the receiving end and is unknown at the receiver; and x_n is the transmit signal of user n. The transmit signal x_n is generated from a codebook containing modulation codewords, whose number is determined by the transmission rate R_n of user n, together with the all-zero word representing an inactive user, i.e., x_n = 0 when user n is inactive.
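The system model of steps S101 to S103 can be sketched end to end as below. The pilot normalization, the QPSK codebook, the frame structure (pilot followed by codeword) and the superposition form of the received signal are all illustrative assumptions, since the patent's own formulas survive only as image placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- S101: N single-antenna users, receiver with M antennas, sparsity g ---
N, M, K, L, g = 40, 8, 30, 6, 0.2          # L = codeword length (assumed)
a = (rng.random(N) < g).astype(int)        # a_n = 1 iff user n is active

# --- S102: dedicated complex Gaussian pilots of length K, stored at receiver
pilots = (rng.normal(size=(N, K)) + 1j * rng.normal(size=(N, K))) / np.sqrt(2)

# --- S103: per-user codebook (random QPSK codewords here, an assumption)
# plus the all-zero word for inactivity; active users pick a random codeword.
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
codebooks = qpsk[rng.integers(0, 4, size=(N, 8, L))]   # 8 codewords per user
x = np.zeros((N, L), dtype=complex)
for n in np.flatnonzero(a):
    x[n] = codebooks[n, rng.integers(0, 8)]

# Received signal: each active user's pilot-plus-codeword frame passes
# through its unknown channel h_n, and Gaussian noise W is added. The exact
# matrix expression in the patent is an image, so this superposition is an
# illustrative stand-in only.
h = (rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))) / np.sqrt(2)
frames = np.concatenate([pilots, x], axis=1)           # (N, K + L)
noise_std = 0.1
W = noise_std * (rng.normal(size=(K + L, M)) + 1j * rng.normal(size=(K + L, M)))
Y = frames.T @ (a[:, None] * h) + W                    # (K + L, M)
```

With N = 40 and g = 0.2 this matches the experimental setting of roughly eight active users per slot.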
Further, the step S2 includes the following sub-steps:
S201, initialization: the received signal Y, the sparsity parameter g of the users, and the rate R_n of each user are input, and the algorithm state is initialized;
S202, the received signal Y is first input into the designed neural-network algorithm for interference cancellation. The algorithm has a multilayer structure: layer t (t an integer greater than zero, up to a fixed maximum number of layers) forms a residual from the received signal, matched-filters it with the conjugate transpose of the pilot matrix, applies a denoiser to the n-th column signal, and uses the first derivative of the denoiser function in the residual update (an Onsager-style correction as in approximate message passing). The denoiser is implemented by a deep neural network with its own parameter set in each layer.
The denoiser is designed as follows: the complex input matrix is first converted into a real matrix by stacking real and imaginary parts, each section matrix of dimension M corresponding to one user n; the real matrix is then input into the combination of two convolutional neural networks, each with a given number of filters, kernel size (1,1) and stride (1,1), and a ReLU activation function added at the end of each; the result is passed through a soft-shrinkage function applied to the n-th slice of the matrix, whose shrinkage parameter is included in the layer's parameter set; finally, the real matrix is converted back into a complex matrix. The output signal of the last layer is taken as the estimate passed to step S203.
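The layer structure of S202 (residual, matched filter with the conjugate transpose, denoiser, derivative-based correction) matches a learned approximate-message-passing iteration. Below is a minimal sketch in which a plain soft-shrinkage function stands in for the trained deep-network denoiser; the exact update rules are image placeholders in the source, so this is an assumed AMP-style form, not the patented formula:

```python
import numpy as np

def soft_shrink(v, lam):
    """Soft-shrinkage denoiser eta(v) applied elementwise to a complex array."""
    mag = np.abs(v)
    scale = np.maximum(mag - lam, 0.0) / np.maximum(mag, 1e-12)
    return scale * v

def lamp_layers(Y, A, thresholds):
    """AMP-style interference cancellation over several layers.

    Y : (K, M) received signal, A : (K, N) pilot matrix. Each layer forms a
    residual, matched-filters it with A^H (the conjugate transpose),
    denoises, and adds an Onsager-style correction built from the fraction
    of entries the denoiser keeps (a crude proxy for its derivative).
    """
    K, M = Y.shape
    N = A.shape[1]
    X = np.zeros((N, M), dtype=complex)
    Z = Y.copy()
    for lam in thresholds:                  # one threshold per layer t
        R = X + A.conj().T @ Z              # pseudo-data fed to the denoiser
        X_new = soft_shrink(R, lam)
        b = np.mean(np.abs(X_new) > 0) * N / K   # correction coefficient
        Z = Y - A @ X_new + b * Z
        X = X_new
    return X
```

In the patent, soft_shrink is replaced by the convolutional denoiser with learned per-layer shrinkage parameters.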
S203, calculating by using a neural network
the posterior probability of the signal estimated in S202. First, each complex vector is converted into a real vector by concatenating the real parts and the imaginary parts of its elements. The resulting vector is then input to a neural network consisting of two fully connected layers; a ReLU function and a Softmax function are added at the end of the first and second layers respectively, and the network has its own parameter set. Finally, the optimal posterior probability used for detection is calculated from the network output, using the one-hot (thermally encoded) codeword of the transmit information: a detected nonzero message sets the corresponding codeword position, and an inactive user corresponds to the zero vector.
S204, after the posterior probabilities are obtained, the user transmit information is detected by maximizing the posterior probability, i.e., the codeword index with the largest posterior probability is taken; the transmit information is then recovered through the one-hot correspondence of step S203.
S205, the detected information is used to judge whether a user has accessed successfully: when the detected transmit information of user n is nonzero, user n has successfully accessed the receiving end.
Further, the step S3 includes the following sub-steps:
S301, initialization: input the number of training iterations, the neural network parameters of steps S202 and S203, and the training samples, where Y^(j) is the received signal of the j-th sample, c_n^(j) is the transmitted codeword of user n in the j-th sample, and B is the total number of samples; a positive real step-size parameter is also given;
S302, each sample Y^(j) is input into the neural network of S202, whose output is the n-th row real signal; this output is then input into the neural network of S203 to obtain the posterior output;
S303, the two outputs of S302 and the one-hot (thermally encoded) codewords are used to update the neural network parameters of S202 and S203. First, a loss function for training the neural network is designed; it comprises three terms, built from the i-th elements of the output vectors and from transmitted codewords obtained by randomly scrambling the training samples, one term being produced by an auxiliary fully connected neural network with a given number of nodes, each layer followed by an ELU activation function. For each training step, the input samples enter the network to obtain the two outputs, the loss function is calculated, and the parameters are updated by back-propagation with the Adam optimizer; after a fixed number of updates, the updated neural network parameters are output;
S304, the updated neural network parameters are used in the algorithm of step S2; the updated neural network obtains the transmit information more accurately, making the random access judgment more accurate.
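The training loop of S301 to S303 can be sketched as standard supervised learning. In the sketch below, a single softmax layer with a cross-entropy loss against the one-hot codewords and plain gradient descent stand in for the full two-stage network, the three-part loss and the Adam optimizer, whose exact definitions are image placeholders in the source:

```python
import numpy as np

def softmax_rows(Z):
    E = np.exp(Z - Z.max(axis=1, keepdims=True))
    return E / E.sum(axis=1, keepdims=True)

def train_detector(V, labels, num_classes, lr=0.5, steps=200, rng=None):
    """Supervised training sketch for the detection network (S301 to S303).

    V : (B, d) real-valued inputs (playing the role of the S202 outputs),
    labels : (B,) codeword indices with 0 = inactive. The loss is the
    cross-entropy between the softmax posterior and the one-hot
    ('thermally encoded') codewords; the gradient step is the exact
    softmax-regression gradient, updated a fixed number of times.
    """
    rng = np.random.default_rng(rng)
    B, d = V.shape
    W = rng.normal(0.0, 0.01, size=(d, num_classes))
    onehot = np.eye(num_classes)[labels]          # one-hot codewords
    for _ in range(steps):                        # fixed number of updates
        P = softmax_rows(V @ W)
        grad = V.T @ (P - onehot) / B             # d(cross-entropy)/dW
        W -= lr * grad
    return W
```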
The invention has the following beneficial effects: the proposed large-scale random access scheme provides a low-complexity decoding algorithm and effectively improves communication performance. Specifically, compared with traditional algorithms, the neural-network-based detection algorithm does not require prior statistical knowledge of the channel, greatly reduces system loss, and is better suited to practical communication systems. In addition, the proposed algorithm is more robust than conventional algorithms: when the prior knowledge of the system is incomplete, it provides better performance, such as lower error rates.
Drawings
Fig. 1 is a schematic diagram of a large scale random access channel;
Fig. 2 is a flow chart of the method of the present invention;
Fig. 3 is a schematic diagram of the neural network algorithm based on a multilayer structure;
Fig. 4 is a schematic diagram of the design principle of the denoiser;
Fig. 5 is a schematic comparison of the algorithms in the embodiment when the numbers of users are (4, 8, 28) and the sparsities are (0.2, 0.1, 0.2);
Fig. 6 is a schematic comparison of the algorithms in the embodiment when the numbers of users are (8, 20, 12) and the sparsities are (0.1, 0.2, 0.3);
Fig. 7 is a schematic comparison of the algorithms in the embodiment when the numbers of users are (4, 8, 28) and the sparsities are (0.1, 0.2, 0.3).
Detailed Description
The technical solutions of the present invention are further described in detail below with reference to the accompanying drawings, but the scope of the present invention is not limited to the following.
As shown in fig. 1, the invention designs a random access algorithm based on a deep learning network for the problem of large-scale random access in 5G communication. In the large-scale random access channel of fig. 1, a base station must simultaneously support uplink transmission for a large number of users; at any transmission moment only a few users are active and transmit information to the base station, while the remaining users are dormant. As shown in fig. 2, the specific method includes the following steps:
Steps S1, S2 and S3 are carried out exactly as set forth above in sub-steps S101 to S103, S201 to S205 and S301 to S304. Step S2 gives the specific steps of the neural network algorithm; its parameters must first be trained and updated as described in step S3.
S4. The trained and updated neural network is used to detect the user transmitted signals and thereby judge whether each user has successfully accessed; detection follows steps S201 to S205.
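The detection and access decision of steps S203 to S205 can be sketched as follows: pick the maximum a posteriori codeword for each user, then declare access successful when the detected codeword is not the inactive (all-zero) one. The codebooks and posterior values below are hypothetical:

```python
import numpy as np

def detect_and_decide(posteriors, codebooks):
    # posteriors[n]: posterior probability over user n's codebook (step S203)
    # codebooks[n][k]: k-th codeword of user n; index 0 is the
    # all-zero "inactive" codeword
    detected, accessed = [], []
    for p, cb in zip(posteriors, codebooks):
        k = int(np.argmax(p))                    # maximize the posterior (S204)
        word = cb[k]
        detected.append(word)
        accessed.append(bool(np.any(word != 0))) # nonzero codeword => access success (S205)
    return detected, accessed

# Two hypothetical users with 4-codeword codebooks (index 0 = inactive)
cb = [np.array([[0, 0], [1, 1], [1, -1], [-1, 1]]) for _ in range(2)]
post = [np.array([0.1, 0.7, 0.1, 0.1]),    # user 0: active, codeword 1 most likely
        np.array([0.8, 0.1, 0.05, 0.05])]  # user 1: inactive most likely
words, ok = detect_and_decide(post, cb)
print(ok)  # [True, False]
```

In the patent the posteriors come from the Softmax output of the S203 network and the codeword is recovered through the one-hot correspondence; here they are supplied directly for illustration.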
In the embodiments of the present application, simulation results are given to verify the feasibility of the proposed random access scheme. The experimental parameters are: number of users N = 40 and sequence length K = 30. Three different transmission rates are considered, with a separate codebook for the users in each of groups 1, 2, and 3. The channels follow a Rician (Rice) distribution, and the channel parameters of each user are randomly generated from a Rician distribution with the corresponding K-factor. We compare the proposed algorithm with the traditional message-passing algorithm, and introduce a parameter that measures the estimation error of the channel profile. The neural network parameters and the number of training samples are also specified for the experiments.
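The experimental setup can be sketched as follows, using the stated N = 40 users and sequence length K = 30, Gaussian pilots, Rician channels, and Bernoulli user activity. The antenna count M, Rician K-factor, sparsity, and noise variance here are illustrative assumptions, and the received signal follows a simplified single-symbol version of the matrix model of step S103:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, M = 40, 30, 16      # users, pilot length, antennas (M is an assumption)
g = 0.2                   # user sparsity (activity probability, assumption)
kappa = 5.0               # Rician K-factor (assumption)
sigma2 = 0.01             # noise variance (assumption)

# i.i.d. complex Gaussian pilots, one length-K column per user (step S102)
S = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2 * K)

# Rician channels: deterministic line-of-sight component plus scattered component
los = np.exp(1j * 2 * np.pi * rng.random((N, M)))
nlos = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
H = np.sqrt(kappa / (kappa + 1)) * los + np.sqrt(1 / (kappa + 1)) * nlos

# Bernoulli(g) activity: inactive users contribute the all-zero signal
active = rng.random(N) < g
X = H * active[:, None]   # effective per-user signal matrix (N x M)

# Received signal, a simplified version of Y = S X + W (step S103)
W = np.sqrt(sigma2 / 2) * (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M)))
Y = S @ X + W
print(Y.shape)  # (30, 16)
```

The actual experiments additionally modulate codewords from the per-group codebooks onto the active users' transmissions; that detail is omitted here.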
In the experiment of fig. 5, we set the numbers of users in user groups 1, 2, and 3 to (4, 8, 28) and the sparsity levels to (0.2, 0.1, 0.2). As shown in fig. 5, the proposed algorithm is more robust than the conventional message-passing algorithm, and performs better when the channel profile estimate contains errors. In fig. 6, the numbers of users are changed to (8, 20, 12) and the sparsity levels to (0.1, 0.2, 0.3); as shown in fig. 6, the proposed algorithm remains more robust than the message-passing algorithm.
In fig. 7, we investigate the effect of the number of antennas on performance, setting the numbers of users in user groups 1, 2, and 3 to (12, 20, 8) and the sparsity levels to (0.1, 0.2, 0.1). As shown in fig. 7, the error rate of the proposed algorithm decreases as the number of antennas increases, and the algorithm remains more robust than the conventional message-passing algorithm.
The foregoing is a preferred embodiment of the present invention. It is to be understood that the invention is not limited to the form disclosed herein and is not to be construed as excluding other embodiments; it is capable of use in various other combinations, modifications, and environments, and of changes within the scope of the inventive concept as expressed herein, commensurate with the above teachings or the skill or knowledge of the relevant art. Modifications and variations effected by those skilled in the art without departing from the spirit and scope of the invention fall within the scope of the appended claims.

Claims (2)

1. A large-scale random access method based on a deep learning network, characterized in that the method comprises the following steps:
s1, constructing a system model based on large-scale random access;
s2, constructing a transmitting signal for a user by utilizing a deep neural network
Figure 81890DEST_PATH_IMAGE001
A model for detection and user access judgment;
s3, carrying out neural network training and parameter updating;
s4, detecting the user emission signal according to the neural network after training update, thereby judging whether the user is successfully accessed;
the step S1 includes the following sub-steps:
s101, for the content containing
Figure 326927DEST_PATH_IMAGE002
Communication between single antenna user and receiving endA system for transmitting information to a receiver with a certain probability in each transmission time slot, wherein each user randomly accesses the receiver, and the receiver is provided with
Figure 450872DEST_PATH_IMAGE003
A root antenna; by random variables
Figure 314923DEST_PATH_IMAGE004
To describe the user
Figure 457191DEST_PATH_IMAGE005
The active nature of the slot, at each time slot,
Figure 748495DEST_PATH_IMAGE004
satisfies the following conditions:
Figure 94157DEST_PATH_IMAGE006
s102, each user adopts a random access scheme based on free access; each user is pre-assigned a dedicated pilot sequence prior to transmission
Figure 886532DEST_PATH_IMAGE007
Wherein
Figure 758673DEST_PATH_IMAGE008
For pilot length, symbols
Figure 96245DEST_PATH_IMAGE009
Representative length of
Figure 178470DEST_PATH_IMAGE008
A set of complex sequences of (a); the elements of each pilot being derived from an independent identically distributed gaussian distribution, i.e.
Figure 384324DEST_PATH_IMAGE010
Wherein the symbol
Figure 977549DEST_PATH_IMAGE011
Represents a mean of 0 and a variance of
Figure 876234DEST_PATH_IMAGE012
The complex gaussian distribution of (a) is,
Figure 445756DEST_PATH_IMAGE013
representative dimension of
Figure 799508DEST_PATH_IMAGE014
The identity matrix of (1); storing the pilot sequences of all users in a receiving end;
s103, each active user synchronously transmits a pilot frequency sequence and a transmission signal in each transmission time slot
Figure DEST_PATH_IMAGE015
To the receiving end, the received signal is represented as
Figure 505296DEST_PATH_IMAGE016
Order to
Figure 919091DEST_PATH_IMAGE017
Obtaining a matrix expression of the received signal,
Figure DEST_PATH_IMAGE018
wherein
Figure 444750DEST_PATH_IMAGE019
Is Gaussian noise, each element satisfies the conditions that the mean value of independent equal distribution is zero and the variance is
Figure 992406DEST_PATH_IMAGE020
Gauss ofDistributing;
Figure 569012DEST_PATH_IMAGE021
representing a usernThe channel parameters to the receiving end are,
Figure 934134DEST_PATH_IMAGE022
to indicate a length of
Figure 353614DEST_PATH_IMAGE023
And is unknown at the receiving end, is set
Figure 577398DEST_PATH_IMAGE024
For the usernThe transmission signal of (1); in which the signal is transmitted
Figure 398723DEST_PATH_IMAGE024
Is generated from the following codebook:
Figure 669168DEST_PATH_IMAGE025
wherein
Figure 575944DEST_PATH_IMAGE026
Is the first
Figure 75189DEST_PATH_IMAGE027
A number of modulation code words is modulated,
Figure 875655DEST_PATH_IMAGE028
is a usernThe rate of transmission of (a) is,
Figure 67733DEST_PATH_IMAGE029
representing the user as inactive, i.e. inactive
Figure 196226DEST_PATH_IMAGE030
The step S2 includes the following sub-steps:
s201, initialization: inputting a received signal
Figure 748430DEST_PATH_IMAGE031
Sparse parameters of usersgRate per user
Figure 154135DEST_PATH_IMAGE032
(ii) a Initialization order
Figure 641748DEST_PATH_IMAGE033
S202, the received signal is first input into a designed neural network algorithm for interference cancellation; the algorithm has a multilayer structure, and the computation of layer t is given by the stated update equations, in which the conjugate transpose of the pilot matrix appears, t is an integer greater than zero, and the maximum number of layers is fixed; a denoiser acts on the n-th column signal, and the first derivative of the denoiser function also enters the update; the denoiser is implemented by a deep neural network with its own parameters;
noise removing device
Figure 405086DEST_PATH_IMAGE044
The design of (2) is as follows: firstly, a complex matrix is formed
Figure 172053DEST_PATH_IMAGE045
Conversion into a real number matrix
Figure 385997DEST_PATH_IMAGE046
Wherein
Figure 695231DEST_PATH_IMAGE047
Representative dimension of
Figure 260205DEST_PATH_IMAGE048
The conversion mode of the real number matrix set is as follows:
Figure 565284DEST_PATH_IMAGE049
wherein
Figure 509100DEST_PATH_IMAGE050
Wherein
Figure 116799DEST_PATH_IMAGE051
Representative dimension of
Figure 293703DEST_PATH_IMAGE052
Is a matrix
Figure 12260DEST_PATH_IMAGE053
To (1) anA section matrix; the matrix is then input into the following neural network:
Figure 341741DEST_PATH_IMAGE054
wherein the content of the first and second substances,
Figure 120341DEST_PATH_IMAGE055
represents a combination of two neural networks;
Figure 784541DEST_PATH_IMAGE056
is a convolutional neural network with a number of filters of
Figure 41210DEST_PATH_IMAGE057
The kernel size is (1,1), and the step size is (1, 1);
in a convolutional network
Figure 225198DEST_PATH_IMAGE058
And
Figure 174699DEST_PATH_IMAGE059
adding Relu function as an activation function at the end of (1); order to
Figure 60616DEST_PATH_IMAGE060
Figure 386555DEST_PATH_IMAGE061
Is a soft shrinkage function:
Figure 439697DEST_PATH_IMAGE062
wherein, the matrix
Figure 419155DEST_PATH_IMAGE063
Is a matrix
Figure 933313DEST_PATH_IMAGE064
To (1) anThe number of the slices is one,
Figure 672730DEST_PATH_IMAGE065
is that the puncturing parameter is included in the parameter set
Figure 690364DEST_PATH_IMAGE066
Performing the following steps; finally, will
Figure 371881DEST_PATH_IMAGE067
Conversion into a complex matrix
Figure 842177DEST_PATH_IMAGE068
(ii) a Output signal
Figure 385285DEST_PATH_IMAGE069
Let us order
Figure 257426DEST_PATH_IMAGE070
S203, a neural network is used to compute the posterior probability of the transmitted signal: first, each complex vector is converted into a real vector by stacking its real and imaginary parts; the resulting vector is then input into a neural network composed of two fully connected layers with the stated numbers of neurons, a ReLU function being appended to the first layer and a Softmax function to the second, the layers having their own parameters; finally, the optimal posterior probability for detection is computed from the network output, in which the one-hot encoded codeword of the transmitted information appears;
if the stated condition holds, the corresponding value is assigned; otherwise the zero vector of the corresponding length is assigned;
s204, after the posterior probability is obtained, the user emission information is detected by a method of maximizing the posterior probability, namely,
Figure 682110DEST_PATH_IMAGE097
to obtain
Figure 175408DEST_PATH_IMAGE098
Then, the transmission information is obtained through the corresponding relationship of the thermal coding in step S203
Figure 210360DEST_PATH_IMAGE099
S205, whether a user has successfully accessed is judged from the detected information: when the detected information of user n is not the inactive (all-zero) codeword, user n has successfully accessed the receiving end.
2. The large-scale random access method based on the deep learning network as claimed in claim 1, wherein: the step S3 includes the following sub-steps:
s301, initializing and inputting
Figure 400667DEST_PATH_IMAGE101
Parameter of
Figure 973731DEST_PATH_IMAGE102
And
Figure 471184DEST_PATH_IMAGE103
training sample
Figure 737080DEST_PATH_IMAGE104
Wherein, in the step (A),
Figure 673812DEST_PATH_IMAGE105
is as followsjThe received signal at the time of one sample,
Figure 191512DEST_PATH_IMAGE106
represents the firstjUnder the samplenThe transmitted code words of the individual users are,Bis the total number of samples, positive real number
Figure 671035DEST_PATH_IMAGE107
S302, each sample is input into the neural network of S202, whose output is the n-th row real-valued signal; this output is then input into the neural network of S203 to obtain the detection output;
S303, the outputs of S202 and S203, together with the one-hot encoded codewords, are used to update the neural network parameters of both networks;
first, a loss function is designed for training the neural network; it comprises three terms, in which the i-th element of the indicated vector appears and in which the transmitted codewords are obtained by randomly scrambling the training samples; the final term is computed by a parameterized auxiliary network;
the auxiliary neural network is designed as a composition of fully connected layers, each followed by an ELU activation function;
in each training iteration, an input sample is passed through the two neural networks to obtain their outputs, the loss function is computed, and the parameters are updated by back-propagation with the Adam optimizer; after a fixed number of updates, the updated neural network parameters are output;
S304, the updated neural network parameters are used in the algorithm of step S2; the updated neural network obtains more accurate estimates of the transmitted information, making random access more accurate.
CN202111323583.8A 2021-11-10 2021-11-10 Large-scale random access method based on deep learning network Active CN113766669B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111323583.8A CN113766669B (en) 2021-11-10 2021-11-10 Large-scale random access method based on deep learning network


Publications (2)

Publication Number Publication Date
CN113766669A CN113766669A (en) 2021-12-07
CN113766669B true CN113766669B (en) 2021-12-31

Family

ID=78784916

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111323583.8A Active CN113766669B (en) 2021-11-10 2021-11-10 Large-scale random access method based on deep learning network

Country Status (1)

Country Link
CN (1) CN113766669B (en)

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107743103A (en) * 2017-10-26 2018-02-27 北京交通大学 The multinode access detection of MMTC systems based on deep learning and channel estimation methods
CN107820321A (en) * 2017-10-31 2018-03-20 北京邮电大学 Large-scale consumer intelligence Access Algorithm in a kind of arrowband Internet of Things based on cellular network
CN108882301A (en) * 2018-07-25 2018-11-23 西安交通大学 The nonopiate accidental access method kept out of the way in extensive M2M network based on optimal power
CN109862567A (en) * 2019-03-28 2019-06-07 电子科技大学 A kind of method of cell mobile communication systems access unlicensed spectrum
CN111182649A (en) * 2020-01-03 2020-05-19 浙江工业大学 Random access method based on large-scale MIMO
CN111224905A (en) * 2019-12-25 2020-06-02 西安交通大学 Multi-user detection method based on convolution residual error network in large-scale Internet of things
CN111343730A (en) * 2020-04-15 2020-06-26 上海交通大学 Large-scale MIMO passive random access method under space correlation channel
CN111641570A (en) * 2020-04-17 2020-09-08 浙江大学 Joint equipment detection and channel estimation method based on deep learning
CN111683023A (en) * 2020-04-17 2020-09-18 浙江大学 Model-driven large-scale equipment detection method based on deep learning
CN112188539A (en) * 2020-10-10 2021-01-05 南京理工大学 Interference cancellation scheduling code design method based on deep reinforcement learning
CN112492686A (en) * 2020-11-13 2021-03-12 辽宁工程技术大学 Cellular network power distribution method based on deep double-Q network
CN112910806A (en) * 2021-01-19 2021-06-04 北京理工大学 Joint channel estimation and user activation detection method based on deep neural network
CN113303022A (en) * 2019-01-10 2021-08-24 苹果公司 2-step RACH fallback procedure
CN113344212A (en) * 2021-05-14 2021-09-03 香港中文大学(深圳) Model training method and device, computer equipment and readable storage medium
CN113438746A (en) * 2021-08-27 2021-09-24 香港中文大学(深圳) Large-scale random access method based on energy modulation
CN113573284A (en) * 2021-06-21 2021-10-29 吉林大学 Random access backoff method for large-scale machine type communication based on machine learning
US11164348B1 (en) * 2020-06-29 2021-11-02 Tsinghua University Systems and methods for general-purpose temporal graph computing

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11188035B2 (en) * 2018-07-19 2021-11-30 International Business Machines Corporation Continuous control of attention for a deep learning network
US11874897B2 (en) * 2020-04-09 2024-01-16 Micron Technology, Inc. Integrated circuit device with deep learning accelerator and random access memory


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
R3-206092 "Initial Analyse on the Interface Impact with AI-based RAN Architecture"; ZTE Corporation et al.; 3GPP tsg_ran\wg3_iu; 2020-10-23; full text *
R3-206403 "Use cases, AI/ML algorithms, and general concepts"; Intel Corporation; 3GPP tsg_ran\wg3_iu; 2020-10-22; full text *
Research progress and development trends of cell-free massive MIMO systems; Zhang Jiayi; Journal of Chongqing University of Posts and Telecommunications (Natural Science Edition); 2019-06-15 (No. 03); full text *
Intelligent and simplified 6G radio access network: architecture, techniques, and outlook; Peng Mugen et al.; Journal of Beijing University of Posts and Telecommunications; 2020-11-20 (No. 03); full text *
Intelligent radio resource management technologies for 5G/B5G communications; Shi Qingjiang et al.; Bulletin of National Natural Science Foundation of China; 2020-04-25 (No. 02); full text *

Also Published As

Publication number Publication date
CN113766669A (en) 2021-12-07


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant