CN117858113A - Distributed graph neural network beam forming method and communication network system - Google Patents

Distributed graph neural network beam forming method and communication network system

Publication number: CN117858113A
Authority: CN (China)
Prior art keywords: communication link, pilot, neural network, beamforming
Legal status: Pending
Application number: CN202311733177.8A
Original language: Chinese (zh)
Inventors: 顾一帆, 全智, 毕宿志
Applicant / Assignee: Shenzhen University
Filing date: 2023-12-15
Publication date: 2024-04-09

Landscapes

  • Mobile Radio Communication Systems (AREA)

Abstract

The invention provides a distributed graph neural network beamforming method and a communication network system for large-scale high-reliability low-delay scenarios. The method constructs a PG4U model for large-scale high-reliability low-delay networks, trains the model in a centralized manner, and deploys it in a distributed manner. By adopting the PG4U model, the invention realizes an efficient distributed beamforming mechanism. The invention can be applied to communication networks with different numbers of wireless links and different network topologies, and therefore has strong scalability. During distributed beamforming inference, each communication link relies only on its local channel state information, and the channel state information of all interference links is extracted effectively through pilot transmission and signal processing, which greatly reduces signaling overhead. The method also fully exploits the temporal correlation of channel state information: the beamforming decision of the current frame is made from the channel state information of the previous frame, which resolves the computation-delay problem of the algorithm.

Description

Distributed graph neural network beam forming method and communication network system
Technical Field
The invention relates to the field of communications, and in particular to a distributed graph neural network beamforming method for large-scale high-reliability low-delay scenarios.
Background
Ultra-reliable and low-latency communication (URLLC) is one of the 5G core application scenarios defined by the 3rd Generation Partnership Project (3GPP). URLLC imposes stringent performance requirements on the end-to-end latency (e.g., 1 ms) and the block error rate (e.g., 10^-5) of each communication link. It mainly uses short-packet communication with a fixed number of bits for machine-type data transmission and plays a decisive role in building the industrial Internet of Things. The next generation of large-scale industrial Internet of Things will require the deployment of massive numbers of intelligent devices, such as industrial robots, unmanned vehicles, unmanned aerial vehicles, and head-mounted devices, to support fully automated and intelligent industrial manufacturing. However, under the constraint of limited spectrum resources, base-station beamforming becomes a prerequisite for supporting communication among massive intelligent devices, and it is also a key technical bottleneck.
Early beamforming algorithms are usually based on theoretical models and adopt a centralized processing architecture: a central controller collects the global state information of the network, such as the channel state information of each communication link and of the interference links, and then solves the problem with an iterative optimization algorithm. In practical deployments, a short frame duration (e.g., 1 ms) is adopted to meet the stringent latency requirement of high-reliability low-delay communication, so both the signaling overhead and the computation delay of the algorithm eat into the data transmission time of the high-reliability low-delay links, which greatly reduces reliability.
To address the computation-delay problem of iterative beamforming algorithms, deep neural networks (DNNs) have in recent years been used to replace the early iterative algorithms. Most DNN-based techniques adopt the classical feedforward neural network (FNN). Although they reduce the computation delay to some extent, they cannot cope with dynamic changes in the network topology, such as changes in the number of wireless links caused by devices joining or leaving the network. The core reason is that the dimension of the FNN input layer is fixed: when the number of wireless links changes, the neural network structure usually has to be adjusted and retrained to preserve the quality of the policy, which causes a large performance loss or even system stagnation.
To improve the scalability of the policy, graph neural networks (GNNs) have recently been used to solve the beamforming problem for large-scale networks. Although the existing GNN algorithm frameworks can effectively cope with changes of the network topology, they still cannot overcome the signaling overhead and computation delay faced by the algorithm in distributed deployment. In terms of signaling overhead, the existing GNN frameworks require a large number of signaling message exchanges during distributed operation, and the number of message exchanges grows quadratically with the number of communication links, which restricts their wide application in large-scale networks. In terms of computation delay, unlike FNN-based algorithms, the existing GNN frameworks must run multiple FNNs during policy inference, resulting in non-negligible computation delay [8]. In summary, although the existing GNN-based beamforming techniques achieve excellent performance in networks of different scales, implementing the graph convolution operation incurs high signaling overhead and computation delay, so they cannot be used in large-scale high-reliability low-delay networks with extremely short frame durations.
Disclosure of Invention
The invention aims to overcome the above shortcomings of the prior art by providing a distributed graph neural network beamforming method and a communication network system for large-scale high-reliability low-delay scenarios, which address the lack of scalability, the huge signaling overhead, and the computation delay of existing beamforming algorithms.
The technical scheme adopted by the invention to solve the above technical problems is as follows: a distributed graph neural network beamforming method for large-scale high-reliability low-delay scenarios is constructed, comprising the following steps:
s1, constructing a PG4U model oriented to a large-scale high-reliability low-delay network, wherein the kernel of the PG4U model comprises:
pilot beamforming vector:
G4U aggregation:
Embedding update:
data transmission beamforming vector:
wherein t denotes the frame index; the left-hand side of formula (4) is the pilot beamforming vector of the i-th communication link L_i; e_i denotes the graph embedding of the i-th communication link L_i; h_{i,i} denotes the channel state information of the i-th communication link L_i; h_{j,i} denotes the interference-link channel state information from the j-th communication link L_j to the i-th communication link L_i; the aggregation in formula (5) runs over the pilot beamforming vectors of the remaining communication links other than the i-th one, and its result is the aggregated information of the i-th communication link L_i; v_i denotes the data-transmission beamforming vector of the i-th communication link L_i; Agg denotes an aggregation function; Φ(·;θ), U(·;φ) and a third feedforward neural network respectively infer the pilot beamforming vector, the embedding update, and the data-transmission beamforming vector, where θ, φ and the third network's parameters are the neural network parameters of the three feedforward neural networks; i and j denote link indices, with i ≠ j;
s2, carrying out centralized training on the PG4U model;
and S3, deploying the centrally trained PG4U model on each base station, and calculating the pilot beamforming vector and the data-transmission beamforming vector of each frame to transmit data.
In the distributed graph neural network beamforming method for large-scale high-reliability low-delay scenarios, step S1 further comprises the following steps:
S11, at the beginning of each frame, each base station adopts the pilot beamforming vector established by formula (4) in the previous frame and all base stations broadcast the pilot once simultaneously, and each communication link extracts the aggregated information in formula (5) and the local channel state information;
S12, each communication link transmits data using the beamforming vector established by formula (7) in the previous frame; at the same time, each communication link executes formula (6) to update its graph embedding, and executes formulas (4) and (7) to determine the pilot beamforming vector and the data-transmission beamforming vector for the pilot transmission and data transmission of the next frame.
In the distributed graph neural network beamforming method for large-scale high-reliability low-delay scenarios, in step S11 the aggregated information and the local channel state information are calculated based on formulas (9) and (10), respectively:
wherein formulas (9) and (10) use the superimposed signal received by the i-th communication link during the pilot broadcast, the pilot sequence matrix adopted by the i-th communication link, and the pilot sequence matrix adopted by the j-th communication link; K denotes the number of communication links, and i and j are less than or equal to K;
wherein formula (10) also uses the conjugate vector of the pilot beamforming vector of the i-th communication link.
In the distributed graph neural network beamforming method for large-scale high-reliability low-delay scenarios, step S2 further comprises collecting data offline and optimizing, through a gradient descent algorithm in an unsupervised learning manner, the neural network parameters of the three feedforward neural networks Φ(·;θ), U(·;φ) and the third network in the PG4U model.
In the distributed graph neural network beamforming method for large-scale high-reliability low-delay scenarios, the gradient descent algorithm comprises the Adam algorithm, and in step S2 the optimization is performed based on formula (11):
wherein η denotes the learning rate, Θ denotes the neural network parameters to be optimized, and the gradient in formula (11) is the derivative of the objective of the beamforming optimization problem with respect to Θ.
In the distributed graph neural network beamforming method for large-scale high-reliability low-delay scenarios, step S3 further comprises:
S31, deploying on each base station the three feedforward neural networks that infer the pilot beamforming vector, the embedding update, and the data-transmission beamforming vector;
S32, in each frame, each base station and its user perform distributed inference to calculate the pilot beamforming vector and the data-transmission beamforming vector of the next frame.
In the distributed graph neural network beamforming method for large-scale high-reliability low-delay scenarios, step S32 further comprises:
S321, each base station adopts the pilot beamforming vector determined by Φ(·;θ) in the previous frame, all base stations transmit the pilot simultaneously, and each user receives the superimposed signal in formula (8);
S322, based on the superimposed signal and formulas (9) and (10), each user computes the aggregated information and estimates the channel state information h_{i,i}(t) of its local communication link, and feeds both back to the base station side through a feedback link;
S323, after receiving the feedback information, each base station transmits data using v_i(t), the data-transmission beamforming vector determined in the previous frame.
Another technical scheme adopted by the invention to solve the above technical problem is to construct a communication network system. The communication network system comprises a plurality of base stations and a plurality of users connected through communication links; the base station side of each communication link is equipped with multiple antennas, and the user side is equipped with a single antenna. Each base station stores a computer program which, when executed by a processor of the base station, implements the above distributed graph neural network beamforming method for large-scale high-reliability low-delay scenarios.
The invention thus addresses the lack of scalability, the huge signaling overhead, and the computation delay of existing beamforming algorithms.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
FIG. 1 illustrates a large-scale high-reliability low-latency communication network including K communication links;
FIG. 2 is a flow chart of a preferred embodiment of the distributed graph neural network beamforming method of the present invention for a large scale high reliability low latency scenario;
fig. 3 shows a PG4U model for a large-scale high-reliability low-latency network according to a preferred embodiment of the present invention;
FIG. 4 illustrates an aggregation process of the PG4U model shown in FIG. 3;
fig. 5 shows the relationship between QoS outage probability and the number of communication links obtained with different algorithms;
fig. 6 shows the relationship between QoS outage probability and frame duration obtained with different algorithms.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The present invention mainly addresses the following three shortcomings of existing beamforming algorithms. Shortcoming one: scalability. Existing FNN-based algorithms cannot cope with dynamic changes of the network topology and lack scalability. When the network topology and the number of links change, the FNN's neural network architecture has to be adjusted and retrained, which easily causes a large performance loss or even system stagnation. Shortcoming two: signaling overhead. Existing beamforming algorithms all require each wireless device to fully observe the channel state information of the network to infer the optimal policy, including the channel state information of both the communication links and the interference links, which causes huge signaling overhead. In addition, existing GNN-based algorithms require frequent exchange of the wireless devices' observations in distributed deployment to complete the graph convolution; in a large-scale high-reliability low-delay network with an extremely short frame duration, this causes signaling overhead that the system cannot bear, greatly shortens the data transmission time, and limits communication reliability. Shortcoming three: computation delay. Iterative algorithms based on theoretical models often require hundreds of iterations to converge to an optimal solution, and GNN-based algorithms require forward computation of multiple FNNs to implement the graph convolution. In a high-reliability low-delay network with an extremely short frame duration, the computation delay introduced by the algorithm also becomes a core factor restricting the data transmission duration and the communication reliability.
The invention provides a distributed graph neural network beamforming method and a communication network system for large-scale high-reliability low-delay scenarios, which realize an efficient distributed beamforming mechanism by adopting a pipeline graph neural network algorithm framework for massive ultra-reliable and low-latency communication (PG4U). Regarding scalability, the PG4U algorithm framework is an extension of existing GNN algorithm frameworks; it can be applied to communication networks with different numbers of wireless links and different network topologies and therefore has strong scalability. Regarding signaling overhead, during distributed beamforming inference the PG4U algorithm framework only needs the local channel state information of each communication link; the channel state information of all interference links is effectively extracted through pilot transmission and signal processing, which avoids acquiring a large amount of interference-link channel state information and frequent signaling message exchange, and greatly reduces the signaling overhead required in deployment. Regarding computation delay, the PG4U algorithm framework fully exploits the temporal correlation of the channel state information in a high-reliability low-delay network: by making the beamforming decision of the current frame based on the channel state information of the previous frame, the forward computation of all FNNs in PG4U can be executed in parallel with the data transmission. Compared with existing FNN and GNN algorithms, which must execute the FNN forward computation before the data transmission of each frame, the FNN forward computation in PG4U is completed in parallel with the data transmission, thereby greatly reducing the computation delay.
The distributed graph neural network beamforming method for large-scale high-reliability low-delay scenarios is described in detail below with reference to the accompanying drawings. The method may be applied to any suitable large-scale high-reliability low-latency communication network. As shown in FIG. 1, such a network comprises K communication links L_1, L_2, ..., L_K sharing the same frequency band. The base station side of each communication link is equipped with N_t antennas, and the user side is equipped with a single antenna. The existing 5G clock synchronization standard is used between base stations to synchronize the transmission of each frame. At the t-th frame, the signal-to-interference-plus-noise ratio (SINR) of the i-th communication link L_i can be expressed as:
where H(t) denotes the channel state information matrix of all communication links, h_{i,i}(t) denotes the channel state information of the i-th communication link L_i, and h_{i,j}(t) denotes the interference-link channel state information from the i-th communication link L_i to the j-th communication link L_j; V(t) is the beamforming matrix of all base stations, v_i(t) denotes the beamforming vector adopted by the i-th communication link L_i, and v_j(t) denotes the beamforming vector adopted by the j-th communication link L_j, where "beamforming vector" is a term of art: each complex entry represents the amplitude and phase of the signal transmitted by one antenna; σ^2 is the average power of the additive white Gaussian noise (AWGN); t denotes the frame index; (·)^H denotes the conjugate transpose; (·)^T denotes the transpose; N_t denotes the number of antennas; and K denotes the number of communication links. Note that, for any parameter, the argument (t) indicates its value at the t-th frame, while omitting (t) refers to the parameter in general. For example, h_{i,i} refers in general to the channel state information of the i-th communication link L_i, while h_{i,i}(t) refers specifically to its value at the t-th frame. The remaining parameters follow the same convention and are not described again here.
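From these definitions, a plausible reconstruction of the SINR expression in equation (1) is given below; the symbol γ_i(t) for the SINR is introduced here only for readability and is an assumption.

```latex
\gamma_i(t)=\frac{\left|\mathbf{h}_{i,i}^{H}(t)\,\mathbf{v}_i(t)\right|^{2}}
{\sum_{j\neq i}\left|\mathbf{h}_{j,i}^{H}(t)\,\mathbf{v}_j(t)\right|^{2}+\sigma^{2}},
\qquad i=1,\dots,K.
```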
Based on the SINR of communication link L_i at the t-th frame in equation (1), the block error rate of the data transmission can be expressed as:
where Q(·) denotes the Gaussian Q-function, T_f denotes the duration of each frame, T_o denotes the total overhead duration required to infer the distributed beamforming policy (including signaling overhead and computation delay), and T_d = T_f - T_o denotes the time remaining in each frame for data transmission. b denotes the number of bits of data transmitted by the base station per frame, B denotes the bandwidth, and ξ_i(H(t), V(t)) denotes the channel dispersion; for brevity, ξ_i(H(t), V(t)) is abbreviated as ξ_i(t).
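Under these definitions, equation (2) plausibly follows the standard finite-blocklength (normal-approximation) form sketched below; the exact constants and arrangement may differ from the original formula.

```latex
\varepsilon_i(t)=Q\!\left(\frac{T_d B\,\log_2\!\left(1+\gamma_i(t)\right)-b}{\sqrt{T_d B\,\xi_i(t)}}\right),
\qquad T_d=T_f-T_o.
```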
Based on the block error rate of each communication link in equation (2), the beamforming optimization problem considered by the present invention can be expressed as:
where P_max denotes the maximum transmit power of each base station. To ensure fairness of the reliability indices across links and to avoid excessively low reliability on some links, equation (3) takes minimizing the block error rate of the worst link as the optimization objective, i.e., ε_max(t) = max(ε_i(t), i = 1, ..., K). Considering that the value of the block error rate in this optimization problem is usually very small (e.g., 10^-5), the utility function U(t) = log_10(ε_max(t) + 10^-β) + β (where β is a positive constant) is used to amplify the gradient and speed up algorithm convergence. The utility function decreases monotonically toward 0 as the block error rate of the worst link of the network decreases to 0; minimizing the expectation of the utility function therefore effectively reduces the block error rate of the system. Finally, the expectation in the optimization problem is taken over the random network topology and the random channel state information.
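For reference, a hedged reconstruction of the optimization problem (3) consistent with the description above is sketched below; the exact formulation may differ.

```latex
\min_{\{\mathbf{v}_i(t)\}}\ \mathbb{E}\left[\,U(t)\,\right],\qquad
U(t)=\log_{10}\!\left(\varepsilon_{\max}(t)+10^{-\beta}\right)+\beta,\qquad
\varepsilon_{\max}(t)=\max_{i=1,\dots,K}\varepsilon_i(t),
\qquad\text{s.t.}\ \left\|\mathbf{v}_i(t)\right\|^{2}\le P_{\max},\ i=1,\dots,K.
```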
Based on the above analysis, we provide a distributed graph neural network beamforming method for large-scale high-reliability low-delay scenarios and construct a new PG4U algorithm framework for large-scale high-reliability low-delay networks to solve the distributed beamforming problem, thereby addressing the lack of scalability, the huge signaling overhead, and the computation delay of existing beamforming algorithms. FIG. 2 is a flow chart of a preferred embodiment of the distributed graph neural network beamforming method of the present invention for large-scale high-reliability low-delay scenarios. As shown in FIG. 2, in step S1 we first construct a PG4U model for large-scale high-reliability low-delay networks. FIG. 3 shows a PG4U model for a large-scale high-reliability low-delay network according to a preferred embodiment of the present invention. As shown in FIG. 3, the PG4U model is built on the existing GNN model, and its kernel is as follows:
pilot beamforming vector:
G4U aggregation:
embedding and updating:
data transmission beamforming vector:wherein t represents a frame index,representing the ith communication link L i Is used only for graph convolution calculation and channel state information estimation, e i Representing the ith communication link L i Is embedded in the diagram, h i,i Representing the ith communication link L i Channel state information of h j,i Representing the jth communication link L j To the ith communication link L i Is/are interference link channel state information->Represents h ji Conjugate transpose of->Representing the jth communication link L j Pilot beamforming vector,/, for (a)>Representing the remaining communication links except the ith communication link,/and/or>Representing the ith communication link L i Aggregate information of v i Representing the ith communication link L i The data transmission beamforming vector of (2), thus pilot beamforming vector +.>Beamforming vector v with data transmission i The meanings are different; agg represents an aggregate function, and can be selected from "sum", "mean", and "max". The pilot wave beam forming vector, embedded updating and the calculation of the data transmission wave beam forming vector in the PG4U frame are respectively carried out by three feedforward neural networks phi ([ theta ]), U ([ phi ]) and +.>Represented by θ, φ, < >>Representing the neural network parameters, t-1 representing the t-1 frame, and t-2 representing the t-2 frame.
When performing beamforming inference, i.e., when computing formulas (4)-(7), the PG4U model only needs the local channel state information h_{i,i} (the channel state information of the i-th communication link L_i) and the local graph embedding e_i (the graph embedding of the i-th communication link L_i) to compute formulas (4), (6) and (7); it does not need to acquire the interference-link channel state information h_{j,i} (the channel state information from the j-th communication link L_j to the i-th communication link L_i). The G4U aggregated information in formula (5) (the aggregated information of the i-th communication link L_i), together with the local channel state information h_{i,i}, can be obtained by having all base stations transmit the pilot once simultaneously and extracting them from the superimposed signal at the receiving end with a suitable signal processing technique; see formulas (9) and (10) below.
As shown in FIG. 3, the computation flow of the PG4U model in each frame is as follows. At the beginning of each frame, each base station uses the pilot beamforming vector established by formula (4) in the previous frame, and all base stations broadcast the pilot once simultaneously; each communication link can then extract the aggregated information in formula (5) and the local channel state information according to formulas (9) and (10) below. Each communication link then uses the beamforming vector established by formula (7) for data transmission. At the same time, each communication link executes formula (6) to update its local graph embedding, and executes formulas (4) and (7) to determine the pilot beamforming vector and the data-transmission beamforming vector for the pilot transmission and data transmission of the next frame.
Taking FIG. 3 as an example, for the i-th communication link, at the beginning of the t-th frame the base station uses the pilot beamforming vector established by formula (4) in the (t-1)-th frame, and all base stations broadcast the pilot once simultaneously; each communication link can then extract the G4U aggregated information in formula (5) and the local channel state information h_{i,i}(t) according to formulas (9) and (10) below. The i-th communication link uses the beamforming vector v_i(t) established by formula (7) in the previous frame for data transmission. At the same time, the i-th communication link executes formula (6) to update the local graph embedding e_i(t), and executes formulas (4) and (7) to determine the pilot beamforming vector and the data-transmission beamforming vector v_i(t+1) of the next frame. Next, we explain how each communication link acquires the G4U aggregated information in formula (5) and the local channel state information h_{i,i}(t) from a single pilot broadcast. FIG. 4 shows the aggregation process of the PG4U model shown in FIG. 3. As shown in FIG. 4, when the i-th communication link adopts its pilot beamforming vector and all links transmit the pilot simultaneously, the superimposed signal received by the communication link during the pilot broadcast can be expressed as:
where the left-hand side of formula (8) is the superimposed signal received by the i-th communication link during the pilot broadcast; the pilot sequence matrices adopted by the i-th and j-th communication links appear on the right-hand side; diag(·) denotes a diagonal matrix; the conjugate transposes of h_{i,i}(t) and h_{j,i}(t) appear in formula (8); K denotes the number of communication links, and i and j are less than or equal to K; h_{i,i}(t) denotes the channel state information of the i-th communication link L_i, and h_{i,j}(t) denotes the interference-link channel state information from the i-th communication link L_i to the j-th communication link L_j.
Based on the superimposed signal of each communication link given by formula (8), when the links adopt mutually orthogonal pilot sequences, formula (5) can be computed in the t-th frame (with t omitted) as follows.
where formula (9) uses the superimposed signal received by the i-th communication link during the pilot broadcast, its conjugate transpose, and the transposes of the pilot sequence matrices adopted by the j-th and the i-th communication links.
Further, in the t-th frame (t omitted), based on the received superimposed signal in the expression (8), the local channel state information of each communication link can be estimated as follows
where formula (10) uses the conjugate vector of the pilot beamforming vector of the i-th communication link, and h_{i,i} denotes the channel state information of the i-th communication link L_i.
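The single-broadcast extraction behind formulas (8)-(10) can be illustrated with the following minimal numerical sketch in Python. The pilot structure is an assumption made for illustration: each base station j is assumed to transmit diag(v_pilot[j]) @ P[j] over L symbols, with mutually orthogonal pilot matrices P[j]; the names and the exact reductions used for (9) and (10) in the original may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
K, Nt, L = 4, 4, 32                       # links, antennas per BS, pilot length (assumed values)

# Mutually orthogonal pilot matrices: each P[j] is Nt x L with orthonormal rows,
# and P[j] @ P[k].conj().T == 0 for j != k (requires K * Nt <= L).
Q, _ = np.linalg.qr(rng.standard_normal((L, L)) + 1j * rng.standard_normal((L, L)))
P = [Q[:, j * Nt:(j + 1) * Nt].conj().T for j in range(K)]

# h[j][i]: channel from link j's base station to link i's user; v_pilot[j]: pilot BF vector.
h = [[rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt) for _ in range(K)] for _ in range(K)]
v_pilot = [rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt) for _ in range(K)]

i = 0
# (8): superimposed pilot observation at user i when every base station transmits at once.
y_i = sum(h[j][i].conj() @ (np.diag(v_pilot[j]) @ P[j]) for j in range(K))  # noise omitted

# (10): local CSI estimate -- correlate with the own pilot, then strip the known precoder.
h_ii_hat = ((y_i @ P[i].conj().T) / v_pilot[i]).conj()
assert np.allclose(h_ii_hat, h[i][i])

# (9): interference aggregate -- correlating with the other links' pilots and summing leaves
# an Nt-dim vector built only from the incoming interference terms conj(h[j][i]) * v_pilot[j].
a_i = sum(y_i @ P[j].conj().T for j in range(K) if j != i)
```

In the noiseless case the own-channel estimate is exact; with noise, the same correlations give least-squares estimates.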
The existing GNN algorithm frameworks need to estimate the channel state information of both the communication link and the interference links at the beginning of each frame and require frequent signaling message exchanges to complete the graph convolution. In the distributed graph neural network beamforming method for large-scale high-reliability low-delay scenarios, the adopted PG4U algorithm framework needs only one simultaneous pilot broadcast to complete both the graph convolution and the channel state information estimation, which greatly reduces signaling overhead. In addition, the existing GNN algorithm frameworks require the forward computation of multiple FNNs to be completed before each frame's data transmission; in contrast, in the PG4U algorithm framework all FNN forward computations are completed in parallel with the data transmission, which greatly reduces computation delay. Finally, the PG4U algorithm framework retains the characteristics of GNNs, can cope with changes of the network topology, and therefore has strong scalability.
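The per-frame kernel (4)-(7) and its pipelined execution can be sketched as follows. The network names Phi, U and Psi, the input/output layouts, the embedding size and the power normalization are assumptions made for illustration; only the overall structure follows the text: each link computes from its local CSI, its local embedding and the aggregated pilot observation, in parallel with the current frame's data transmission.

```python
import torch
import torch.nn as nn

Nt, EMB = 4, 32                            # antennas per BS, embedding size (assumed values)

def c2r(x: torch.Tensor) -> torch.Tensor:
    """Complex vector -> stacked real features."""
    return torch.cat([x.real, x.imag], dim=-1)

def mlp(d_in: int, d_out: int) -> nn.Module:
    return nn.Sequential(nn.Linear(d_in, 128), nn.ReLU(), nn.Linear(128, d_out))

Phi = mlp(EMB + 2 * Nt, 2 * Nt)            # (4): embedding + local CSI -> pilot BF vector
U   = mlp(EMB + 4 * Nt, EMB)               # (6): embedding + local CSI + aggregate -> new embedding
Psi = mlp(EMB + 2 * Nt, 2 * Nt)            # (7): embedding + local CSI -> data BF vector

def with_power(x: torch.Tensor, p_max: float = 1.0) -> torch.Tensor:
    """Map 2*Nt real outputs to a complex vector and enforce ||v||^2 <= P_max."""
    v = torch.complex(x[..., :Nt], x[..., Nt:])
    return v * (p_max ** 0.5) / v.norm().clamp_min(1e-9)

def pg4u_step(e_prev, h_ii, a_i):
    """One per-link PG4U update, run during the current frame's data transmission;
    it yields the pilot and data beamforming vectors used in the next frame."""
    e_new   = U(torch.cat([e_prev, c2r(h_ii), c2r(a_i)], dim=-1))      # (6)
    v_pilot = with_power(Phi(torch.cat([e_new, c2r(h_ii)], dim=-1)))   # (4)
    v_data  = with_power(Psi(torch.cat([e_new, c2r(h_ii)], dim=-1)))   # (7)
    return e_new, v_pilot, v_data
```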
In the distributed graph neural network beamforming method for large-scale high-reliability low-delay scenarios, in step S2 the PG4U model for large-scale high-reliability low-delay networks is trained in a centralized manner.
During training, a central processor is required to gather the global network state information H(t) of each frame, i.e., the channel state information matrix of all communication links, in order to compute the expectation in the optimization objective, namely the beamforming optimization problem considered by the present invention. This expectation can be estimated from a certain number of samples of random network topologies and channel state information, and the training data can be collected offline. It is worth emphasizing that the algorithm does not require the global network state information H(t) for beamforming policy inference during distributed deployment. In what follows, Θ denotes the joint neural network parameters of the three feedforward neural networks Φ(·;θ), U(·;φ) and the third network in the PG4U model.
For the optimization problem (3), the neural network parameters can be optimized with a gradient descent algorithm, such as the Adam algorithm, in an unsupervised learning manner, namely:
where η denotes the learning rate, Θ denotes the neural network parameters to be optimized, and the gradient in formula (11) is the derivative of the objective of the beamforming optimization problem in equation (3) with respect to Θ. After the centralized training is completed, the trained PG4U model can be deployed on each base station and perform distributed inference efficiently.
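A hedged sketch of this unsupervised training step is given below, reusing the Phi, U and Psi networks from the sketch above. The helpers sample_layout_and_channels, rollout_pg4u and block_error_rates are placeholders for the system model of equations (1)-(3) and are not from the original text; β and the learning rate are assumed values.

```python
import torch

params = list(Phi.parameters()) + list(U.parameters()) + list(Psi.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)        # eta in (11) is the learning rate
beta = 10.0                                          # floor of the utility (assumed value)

for iteration in range(10_000):
    H = sample_layout_and_channels()                 # random topology + global CSI (training only)
    V = rollout_pg4u(H, Phi, U, Psi)                 # run the kernel (4)-(7) over the frames
    eps = block_error_rates(H, V)                    # per-link block error rates, eq. (2)
    eps_max = eps.max(dim=-1).values                 # worst-link error rate, eps_max(t)
    loss = (torch.log10(eps_max + 10.0 ** (-beta)) + beta).mean()   # utility U(t) of eq. (3)
    optimizer.zero_grad()
    loss.backward()                                  # gradient of the objective w.r.t. Theta
    optimizer.step()                                 # Adam update, eq. (11)
```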
In the distributed graph neural network beamforming method for large-scale high-reliability low-delay scenarios, in step S3 the PG4U model for large-scale high-reliability low-delay networks is deployed on each base station. Specifically, after the centralized training of the three feedforward neural networks Φ(·;θ), U(·;φ) and the third network of the PG4U model is completed, the distributed inference flow of each frame (the t-th frame) in the actual system is as follows.
Step one: each base station adopts the pilot beamforming vector determined by Φ(·;θ) in the previous frame; all base stations transmit the pilot simultaneously, and each user receives the superimposed signal in formula (8).
Step two: based on the superimposed signal and formulas (9) and (10), each user computes the aggregated information and estimates the channel state information h_{i,i}(t) of its local communication link, and feeds both back to the base station side through a feedback link.
Step three: after receiving the feedback information, each base station transmits data using v_i(t), the data-transmission beamforming vector determined in the previous frame. At the same time, based on the graph embedding e_i(t-1) of the previous frame and the user feedback (the aggregated information and h_{i,i}(t)), each base station executes Φ(·;θ) to decide the pilot beamforming vector of the next frame, and executes the embedding update U(·;φ) and the data-transmission beamforming network to decide the data-transmission beamforming vector v_i(t+1) of the next frame.
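A compact sketch of this per-frame distributed inference at one base station, under the same assumptions as above, might look as follows; the radio-side helpers broadcast_pilot, receive_feedback and transmit_data are placeholders and not part of the original text.

```python
def run_frame(state):
    e_i, v_pilot, v_data = state              # all three were decided during the previous frame
    broadcast_pilot(v_pilot)                   # step one: all base stations send their pilots at once
    a_i, h_ii = receive_feedback()             # step two: the user feeds back the aggregate and h_ii(t)
    transmit_data(v_data)                      # step three: data transmission starts immediately...
    next_state = pg4u_step(e_i, h_ii, a_i)     # ...while the FNNs compute the next frame's vectors
    return next_state
```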
To demonstrate the beneficial effects of the distributed graph neural network beamforming method for large-scale high-reliability low-delay scenarios, the invention compares the method with existing algorithms through computer simulation. The existing algorithms considered are as follows:
1. Equal power allocation (EPA), in which each base station uses the maximum transmit power P_max and divides it equally among its antennas;
2. Weighted minimum mean square error (WMMSE), the state-of-the-art algorithm based on theoretical iterative optimization;
3. Graph neural network (GNN), which employs an existing GNN framework.
20000 network layouts were randomly generated in the simulation as training samples and 50000 network layouts as test samples. Each network topology contains 20 communication links, and the coordinates of the individual base stations and users are randomly generated within a 500m x 500m square area. Other wireless network environment parameters considered are as follows:
parameters (parameters) Numerical value Parameters (parameters) Numerical value
Bandwidth of a communication device 5M Channel correlation coefficient 0.99
Carrier frequency 2.4GHz Maximum transmission power 40dBm
Number of antennas N t 4 Antenna height 2m
Training network layout quantity 2×10 4 Test network layout quantity 5×10 4
Frame number 10 Frame duration 1ms
Number of communication links 20 Noise average power -174dBm/Hz
Number of transmission bits per frame 128 Path loss model UHF model for ITU
The GNN model structure used by the GNN algorithm, the PG4U model structure used by the distributed graph neural network beamforming method of the invention for large-scale high-reliability low-delay scenarios, and their training hyperparameters are as follows:
The invention adopts the QoS outage probability (Quality of Service outage probability) as the performance index of the high-reliability low-delay network to evaluate each algorithm, i.e., the probability that an algorithm cannot meet the high-reliability low-delay performance requirement in a given network layout, defined as follows:
Pr{ε_max(t) > 10^-5}   (12);
where ε_max(t) denotes the block error rate of the worst communication link of the network at the t-th frame, i.e., the maximum of the block error rates of the K communication links. Considering that the block error rate requirement in a high-reliability low-latency network is typically 10^-5 to 10^-9, the evaluation is performed using formula (12).
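In a simulation, the QoS outage probability of (12) can be estimated as the fraction of test layouts whose worst-link block error rate exceeds the 10^-5 target; the sketch below assumes the simulated ε_max values are already available in an array.

```python
import numpy as np

def qos_outage_probability(eps_max_samples, target=1e-5):
    """Fraction of network layouts whose worst-link block error rate exceeds the target."""
    return float(np.mean(np.asarray(eps_max_samples) > target))
```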
Fig. 5 shows the relationship between the QoS outage probability and the number of communication links obtained with the different algorithms. It can be seen from fig. 5 that when the number of communication links exceeds 15, the QoS outage probabilities of WMMSE and EPA are greater than 10^-1. Furthermore, when the number of communication links is small, the QoS outage probability of GNN is low, but as the number of communication links increases the outage probability rises rapidly, reaching 100% when the number of communication links exceeds 20. This is because the signaling overhead of GNN grows rapidly with the number of communication links, making it difficult to apply to large-scale networks in URLLC scenarios where the frame duration is very short. The PG4U model structure adopted by the distributed graph neural network beamforming method of the invention for large-scale high-reliability low-delay scenarios greatly reduces the signaling overhead and computation delay, so its performance scales well across different numbers of communication links: the QoS outage probability is still below 10^-2 when the number of communication links is 35.
Fig. 6 shows the relationship between the QoS outage probability and the frame duration obtained with the different algorithms. As can be seen from fig. 6, the QoS outage probability of WMMSE stays around 0.35 as the frame duration varies. The QoS outage probability of EPA decreases with increasing frame duration, reaching about 10^-1 at a frame duration of 2 ms. For GNN, the QoS outage probability is 100% when the frame duration is less than 1 ms; it then decreases rapidly with increasing frame duration and is about 0.003 at a frame duration of 2 ms. The QoS outage probability of the PG4U model structure adopted by the distributed graph neural network beamforming method for large-scale high-reliability low-delay scenarios is better than that of all existing strategies: even at a frame duration of 0.2 ms, the PG4U model structure still achieves a QoS outage probability around 10^-2. Therefore, the PG4U model structure adopted by the proposed method is suitable for large-scale high-reliability low-delay networks with short frame durations.
The PG4U model structure adopted by the distributed graph neural network beamforming method for large-scale high-reliability low-delay scenarios, i.e., formulas (4)-(7), overcomes the poor scalability, high signaling overhead, and long computation delay of the prior art, so the method can be used in large-scale high-reliability low-delay networks with short frame durations that are sensitive to signaling overhead and computation delay. The simulation results show that the high-reliability low-delay performance of the distributed graph neural network beamforming method based on the PG4U model in large-scale high-reliability low-delay scenarios is far superior to that of the other existing algorithms.
In addition, the distributed graph neural network for large-scale high-reliability low-delay scenarios is built on the existing GNN framework and therefore has strong scalability. The core differences between the PG4U model and the GNN framework are the following two points. First, the GNN framework needs to acquire the channel state information of both the communication link and the interference links at the same time, and massive signaling exchanges between links are needed to complete the distributed graph convolution. The PG4U model of the invention only requires all communication links to transmit the pilot signal once simultaneously; using the superimposed signal at the receiver and a suitable signal processing technique, it completes both the distributed graph convolution and the channel state information estimation, which greatly reduces signaling overhead. Second, at the beginning of each frame the GNN framework must first complete the forward computation of multiple FNNs to infer the beamforming strategy before data transmission can start. The PG4U model of the invention transmits data immediately at the beginning of each frame, and all FNN forward computations are executed in parallel with the data transmission. The design of the PG4U kernel takes the temporal correlation of the channel state information in high-reliability low-delay networks into account and fully exploits the channel state information acquired in the previous frame to infer the strategy of the current frame, so that data transmission and FNN forward computation can proceed simultaneously, which greatly reduces computation delay.
While the invention has been described with reference to specific embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from its scope. Therefore, it is intended that the invention not be limited to the particular embodiment disclosed, but that the invention will include all embodiments falling within the scope of the appended claims.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.

Claims (8)

1. A distributed graph neural network beamforming method for large-scale high-reliability low-delay scenarios, characterized by comprising the following steps:
s1, constructing a PG4U model oriented to a large-scale high-reliability low-delay network, wherein the kernel of the PG4U model comprises:
pilot beamforming vector:
G4U aggregation:
Embedding update:
data transmission beamforming vector:
wherein t denotes the frame index; the left-hand side of formula (4) is the pilot beamforming vector of the i-th communication link L_i; e_i denotes the graph embedding of the i-th communication link L_i; h_{i,i} denotes the channel state information of the i-th communication link L_i; h_{j,i} denotes the interference-link channel state information from the j-th communication link L_j to the i-th communication link L_i; the aggregation in formula (5) runs over the pilot beamforming vectors of the remaining communication links other than the i-th one, and its result is the aggregated information of the i-th communication link L_i; v_i denotes the data-transmission beamforming vector of the i-th communication link L_i; Agg denotes an aggregation function; Φ(·;θ), U(·;φ) and a third feedforward neural network respectively infer the pilot beamforming vector, the embedding update, and the data-transmission beamforming vector, where θ, φ and the third network's parameters are the neural network parameters of the three feedforward neural networks; i and j denote link indices, with i ≠ j;
s2, carrying out centralized training on the PG4U model;
and S3, deploying the centrally trained PG4U model on each base station, and calculating the pilot beamforming vector and the data-transmission beamforming vector of each frame to transmit data.
2. The distributed graph neural network beamforming method for large-scale high-reliability low-delay scenarios according to claim 1, wherein step S1 further comprises:
S11, at the beginning of each frame, each base station adopts the pilot beamforming vector established by formula (4) in the previous frame and all base stations broadcast the pilot once simultaneously, and each communication link extracts the aggregated information in formula (5) and the local channel state information;
S12, each communication link transmits data using the beamforming vector established by formula (7) in the previous frame; at the same time, each communication link executes formula (6) to update its graph embedding, and executes formulas (4) and (7) to determine the pilot beamforming vector and the data-transmission beamforming vector for the pilot transmission and data transmission of the next frame.
3. The distributed graph neural network beamforming method for large-scale high-reliability low-delay scenarios according to claim 2, wherein in step S11 the aggregated information and the local channel state information are calculated based on formulas (9) and (10), respectively:
wherein formulas (9) and (10) use the superimposed signal received by the i-th communication link during the pilot broadcast, the pilot sequence matrix adopted by the i-th communication link, and the pilot sequence matrix adopted by the j-th communication link; K denotes the number of communication links, and i and j are less than or equal to K;
wherein formula (10) also uses the conjugate vector of the pilot beamforming vector of the i-th communication link.
4. The distributed graph neural network beamforming method for large-scale high-reliability low-delay scenarios according to claim 3, wherein step S2 further comprises collecting data offline and optimizing, through a gradient descent algorithm in an unsupervised learning manner, the neural network parameters of the three feedforward neural networks Φ(·;θ), U(·;φ) and the third network in the PG4U model.
5. The distributed graph neural network beamforming method for large-scale high-reliability low-delay scenarios according to claim 4, wherein the gradient descent algorithm comprises the Adam algorithm, and in step S2 the optimization is performed based on formula (11):
wherein η denotes the learning rate, Θ denotes the neural network parameters to be optimized, and the gradient in formula (11) is the derivative of the objective of the beamforming optimization problem with respect to Θ.
6. The distributed graph neural network beamforming method for large-scale high-reliability low-delay scenarios according to claim 3, wherein step S3 further comprises:
S31, deploying on each base station the three feedforward neural networks that infer the pilot beamforming vector, the embedding update, and the data-transmission beamforming vector;
S32, in each frame, each base station and its user perform distributed inference to calculate the pilot beamforming vector and the data-transmission beamforming vector of the next frame.
7. The distributed graph neural network beamforming method for large-scale high-reliability low-delay scenarios according to claim 6, wherein step S32 further comprises:
S321, each base station adopts the pilot beamforming vector determined by Φ(·;θ) in the previous frame, all base stations transmit the pilot simultaneously, and each user receives the superimposed signal in formula (8);
S322, based on the superimposed signal and formulas (9) and (10), each user computes the aggregated information and estimates the channel state information h_{i,i}(t) of its local communication link, and feeds both back to the base station side through a feedback link;
S323, after receiving the feedback information, each base station transmits data using v_i(t), the data-transmission beamforming vector determined in the previous frame.
8. A communication network system, comprising a plurality of base stations and a plurality of users connected through communication links, the base station side of each communication link being provided with a plurality of antennas and the user side being provided with a single antenna, wherein each base station stores a computer program which, when executed by a processor of the base station, implements the distributed graph neural network beamforming method for large-scale high-reliability low-delay scenarios according to any one of claims 1-7.
CN202311733177.8A 2023-12-15 2023-12-15 Distributed graph neural network beam forming method and communication network system Pending CN117858113A (en)

Priority Applications (1)

Application Number: CN202311733177.8A; Priority Date: 2023-12-15; Filing Date: 2023-12-15; Title: Distributed graph neural network beam forming method and communication network system

Publications (1)

Publication Number: CN117858113A; Publication Date: 2024-04-09

Family ID: 90532055

Country Status (1)

CN: CN117858113A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination