WO2021147469A1 - Communication system based on a neural network model and configuration method therefor - Google Patents

Communication system based on a neural network model and configuration method therefor

Info

Publication number
WO2021147469A1
WO2021147469A1 (PCT/CN2020/127846, CN2020127846W)
Authority
WO
WIPO (PCT)
Prior art keywords
neural network
network model
node
sub-node
Prior art date
Application number
PCT/CN2020/127846
Other languages
English (en)
French (fr)
Inventor
郑旭飞
李安新
郭垿宏
姜宇
陈岚
Original Assignee
NTT DOCOMO, INC.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NTT DOCOMO, INC.
Priority to US 17/759,168 (published as US20230045011A1)
Priority to CN 202080093993.5 (published as CN115004649A)
Publication of WO2021147469A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/16 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08 Configuration management of networks or network elements
    • H04L 41/0803 Configuration setting
    • H04L 41/0813 Configuration setting characterised by the conditions triggering a change of settings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/096 Transfer learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/098 Distributed learning, e.g. federated learning
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08 Configuration management of networks or network elements
    • H04L 41/0803 Configuration setting
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/14 Network analysis or design
    • H04L 41/145 Network analysis or design involving simulating, designing, planning or modelling of a network
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks
    • G06N 3/0442 Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/34 Signalling channels for network management communication

Definitions

  • the present disclosure relates to the fields of mobile communication technology and artificial intelligence technology. More specifically, the present disclosure relates to a communication system based on a neural network model and a configuration method thereof.
  • a neural network model is configured in the master node and multiple sub-nodes to perform complex tasks such as network optimization, large-scale input data processing, network recommendation, or network element configuration.
  • the neural network model of a newly added child node needs to be initialized; if only predetermined default settings are used, a targeted, optimized configuration cannot be achieved.
  • if the configured neural network model is not updated for a specific task, it is difficult to achieve the best processing effect.
  • the present disclosure is made in view of the above-mentioned problems.
  • the present disclosure provides a communication system based on a neural network model and a configuration method thereof.
  • a method for configuring a communication system based on a neural network model, the communication system including at least one master node and a plurality of sub-nodes communicatively connected with the master node, each sub-node being configured with a sub-node neural network model. The communication system configuration method includes: acquiring characteristic information of the multiple sub-nodes; and dynamically configuring the sub-node neural network models based on the acquired characteristic information.
  • the communication system configuration method, wherein the acquiring of the characteristic information of the plurality of sub-nodes includes: receiving the characteristic information transmitted from one of the plurality of sub-nodes.
  • the communication system configuration method, wherein the acquiring of the characteristic information of the plurality of sub-nodes includes: receiving initial information transmitted from one of the plurality of sub-nodes; and predicting the characteristic information of the one sub-node based on the initial information.
  • the dynamically configuring of the sub-node neural network model based on the acquired characteristic information includes: selecting one neural network model from a plurality of predetermined neural network models based on the characteristic information; and configuring the sub-node neural network model of the one sub-node using the selected neural network model.
  • the dynamically configuring of the sub-node neural network model based on the acquired characteristic information includes: selecting, based on the characteristic information, a matching sub-node that matches the one sub-node; receiving the sub-node neural network model of the matching sub-node from the matching sub-node; and configuring the sub-node neural network model of the one sub-node using the sub-node neural network model of the matching sub-node.
  • the one child node is a child node that newly joins the communication system.
  • the communication system configuration method wherein the acquiring characteristic information of the plurality of sub-nodes includes: receiving the characteristic information transmitted from each of the plurality of sub-nodes.
  • the dynamically configuring of the sub-node neural network model based on the acquired characteristic information includes: dividing the multiple sub-nodes into multiple categories based on the characteristic information; using the characteristic information to perform training of the sub-node neural network model for each of the multiple categories to obtain updated sub-node neural network models; and using the updated models to update the sub-node neural network model of each of the multiple sub-nodes.
  • the dynamically configuring of the sub-node neural network model based on the acquired characteristic information includes: dividing the multiple sub-nodes into multiple categories based on the characteristic information; notifying the sub-nodes belonging to the same category of the characteristic information of all sub-nodes of that category; and having the sub-nodes of the same category perform training using that characteristic information to update their own sub-node neural network models.
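The category-based update above can be sketched as follows. The clustering method (plain k-means with deterministic farthest-point initialization) and the toy feature vectors are illustrative assumptions; the disclosure does not fix a particular grouping algorithm.

```python
import numpy as np

def assign_categories(features, k, iters=20):
    """Group sub-node feature vectors into k categories (plain k-means,
    farthest-point initialization so the result is deterministic)."""
    centers = [features[0]]
    for _ in range(k - 1):
        dists = np.min([np.linalg.norm(features - c, axis=1) for c in centers], axis=0)
        centers.append(features[dists.argmax()])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        # Assign each sub-node to the nearest category center.
        d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = features[labels == c].mean(axis=0)
    return labels

# Hypothetical feature vectors: [height in m, traffic volume] for six sub-nodes.
feats = np.array([[10., 1.], [11., 1.2], [50., 9.], [52., 8.5], [9., 0.9], [49., 9.1]])
labels = assign_categories(feats, k=2)
```

Sub-nodes with similar height and traffic land in the same category, so their training data can be pooled for one shared model update per category.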
  • the characteristic information includes one or more of the height of the sub-node, antenna configuration, coverage area size, service type, service volume, user distribution, environmental information, and historical configuration information.
  • the configuration of the sub-node neural network model includes one of the following: establishing an index of a plurality of neural network models and using the index to indicate that the sub-node neural network model is one of the plurality of neural network models; using the model weights of a neural network model to indicate the sub-node neural network model; using the model weight changes of a neural network model to indicate the sub-node neural network model; and using the semantic representation of a neural network model to indicate the sub-node neural network model.
  • the characteristic information is a historical optimal beam set of the user equipment corresponding to the sub-node, wherein the historical optimal beam set includes: the sequence of differences between the optimal beams at multiple consecutive time points and the optimal beam at the most recent time point; or the sequence of differences between the optimal beams at each pair of adjacent time points among multiple consecutive time points.
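Both difference-sequence representations can be computed directly from a history of optimal beam indices. The beam indices below are made-up examples; the disclosure does not specify a concrete indexing.

```python
def diff_vs_latest(beams):
    """Differences between each historical optimal beam and the most recent one."""
    latest = beams[-1]
    return [b - latest for b in beams]

def diff_adjacent(beams):
    """Differences between optimal beams at adjacent time points."""
    return [b2 - b1 for b1, b2 in zip(beams, beams[1:])]

history = [12, 13, 13, 15, 16]   # optimal beam indices at consecutive time points
rel_latest = diff_vs_latest(history)    # [-4, -3, -3, -1, 0]
rel_adjacent = diff_adjacent(history)   # [1, 0, 2, 1]
```

Either sequence encodes beam movement rather than absolute indices, which is what the prediction model consumes as input.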
  • using the characteristic information to update the sub-node neural network model includes: using the number of occurrences of each historical optimal beam in the historical optimal beam set to determine the weight of each historical optimal beam; and constructing a weighted loss function from the weights and the historical optimal beam set, and performing training with it to update the sub-node neural network model.
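A minimal sketch of such a weighted loss, assuming occurrence frequency is used directly as the weight (the disclosure leaves the exact weighting rule open):

```python
import numpy as np
from collections import Counter

def beam_weights(history):
    """Weight each beam index by its relative frequency in the historical set."""
    counts = Counter(history)
    return {beam: c / len(history) for beam, c in counts.items()}

def weighted_ce_loss(probs, targets, weights):
    """Cross-entropy where each sample's loss is scaled by the weight of its target beam."""
    w = np.array([weights[t] for t in targets])
    p = probs[np.arange(len(targets)), targets]
    return float(np.mean(-w * np.log(p)))

history = [3, 3, 3, 7]                # beam 3 occurred far more often than beam 7
weights = beam_weights(history)       # {3: 0.75, 7: 0.25}
probs = np.full((2, 8), 0.5 / 7)      # toy model output: distribution over 8 beams
probs[0, 3] = 0.5
probs[1, 7] = 0.5
loss = weighted_ce_loss(probs, [3, 7], weights)
```

Frequent beams thus dominate the training signal, biasing the model toward the beams the user equipment actually visits.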
  • the updating of the sub-node neural network model using the characteristic information includes: configuring an attention layer in the sub-node neural network model, and performing training with the sub-node neural network model that includes the attention layer to update the sub-node neural network model.
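One way to realize such an attention layer is plain dot-product attention over the hidden states of the sequence model; this is a generic sketch, not the specific layer of the disclosure.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(hidden_states, query):
    """Dot-product attention: score each time step against a query vector,
    normalize the scores, and return the weighted sum of the states."""
    scores = hidden_states @ query        # one score per time step, shape (T,)
    att = softmax(scores)                 # attention weights sum to 1
    context = att @ hidden_states         # weighted combination, shape (D,)
    return context, att

# Toy hidden states for 3 time steps; the last step aligns with the query.
H = np.array([[1.0, 0.0], [0.0, 1.0], [8.0, 0.0]])
context, att = attention_pool(H, np.array([1.0, 0.0]))
```

The layer lets the model concentrate on the time steps of the input sequence that carry the most predictive information, which is the stated purpose of introducing attention here.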
  • a communication system based on a neural network model, including: at least one master node; and a plurality of sub-nodes communicating with the master node, each of the plurality of sub-nodes being configured with a sub-node neural network model, wherein the at least one master node acquires characteristic information of the multiple sub-nodes and dynamically configures the sub-node neural network models based on the acquired characteristic information.
  • the communication system according to another aspect of the present disclosure, wherein the at least one master node receives the characteristic information transmitted from one of the plurality of child nodes.
  • the communication system according to another aspect of the present disclosure, wherein the at least one master node receives initial information transmitted from one of the plurality of child nodes, and predicts the characteristic information of the one child node based on the initial information.
  • the at least one master node selects one neural network model from a plurality of predetermined neural network models based on the characteristic information, and configures the child node neural network model of the one child node using the selected neural network model.
  • the at least one master node selects, based on the characteristic information, a matching child node that matches the one child node from the plurality of child nodes; receives the child node neural network model of the matching child node from the matching child node; and configures the child node neural network model of the one child node using the child node neural network model of the matching child node.
  • the communication system according to another aspect of the present disclosure, wherein the one child node is a child node newly added to the communication system.
  • the communication system according to another aspect of the present disclosure, wherein the at least one master node receives the characteristic information transmitted from each of the plurality of child nodes.
  • the communication system, wherein the at least one master node divides the plurality of sub-nodes into a plurality of categories based on the characteristic information; uses the characteristic information to perform training of the sub-node neural network model for each of the categories to obtain updated sub-node neural network models; and uses the updated models to update the sub-node neural network models of the plurality of sub-nodes.
  • the communication system, wherein the at least one master node divides the multiple sub-nodes into multiple categories based on the characteristic information; notifies the sub-nodes belonging to the same category of the characteristic information of all sub-nodes of that category; and the sub-nodes of the same category perform training using that characteristic information to update their own sub-node neural network models.
  • the communication system, wherein the characteristic information includes the height of the sub-node, antenna configuration, coverage area size, service type, service volume, user distribution, environmental information, and historical configuration information.
  • the configuring of the child node neural network model includes one of the following: establishing an index of a plurality of neural network models and using the index to indicate that the child node neural network model is one of the plurality of neural network models; using the model weights of a neural network model to indicate the child node neural network model; using the model weight changes of a neural network model to indicate the child node neural network model; and using the semantic representation of a neural network model to indicate the child node neural network model.
  • the characteristic information is a historical optimal beam set of the user equipment corresponding to the sub-node, wherein the historical optimal beam set includes: the sequence of differences between the optimal beams at multiple consecutive time points and the optimal beam at the most recent time point; or the sequence of differences between the optimal beams at each pair of adjacent time points among multiple consecutive time points.
  • in the communication system, the number of occurrences of each historical optimal beam in the historical optimal beam set is used to determine the weight of each historical optimal beam, and a weighted loss function is constructed from the weights and the historical optimal beam set to perform training to update the sub-node neural network model.
  • the communication system according to another aspect of the present disclosure, wherein the at least one master node or the child node configures an attention layer in the child node neural network model, and performs training using the child node neural network model including the attention layer to update the child node neural network model.
  • in this way, dynamic configuration of the neural network model for newly added sub-nodes in the communication system is realized, and online data is fully utilized during operation.
  • the model update can be realized either centrally at the master node or in a distributed manner at each sub-node.
  • the processes of dynamic configuration and updating take into account the full sharing and utilization of training data and neural network models between identical or similar sub-nodes, which improves training efficiency and the accuracy of the resulting model.
  • the various characteristics of the sub-nodes are fully considered, such as the height of the sub-node, antenna configuration, coverage area size, service type, service volume, user distribution, environmental information, and historical configuration information.
  • the neural network model can be expressed in different ways, such as a neural network model index, the model weights of the neural network model, the model weight changes of the neural network model, or a semantic representation of the neural network model, which further improves training efficiency and the accuracy of the obtained model.
  • a lightweight recurrent neural network (RNN) is used, in which a gated recurrent unit (GRU) module captures the long-term dependence on information in the input sequence; an appropriate training data representation is chosen, the structure of the loss function is improved in a targeted manner, and an attention mechanism is introduced into the neural network model to effectively extract valuable information from the input sequence, thereby greatly improving the accuracy of predicting the optimal beam candidate set, especially in the case of long-term prediction.
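A minimal GRU cell, written out in NumPy to show the gating that captures long-term dependence; the initialization, dimensions, and update convention are illustrative, not the disclosure's exact network.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell: update gate z, reset gate r, candidate state h_cand."""
    def __init__(self, in_dim, hid_dim, seed=0):
        rng = np.random.default_rng(seed)
        s = lambda *shape: rng.normal(0.0, 0.1, shape)
        self.Wz, self.Uz, self.bz = s(hid_dim, in_dim), s(hid_dim, hid_dim), np.zeros(hid_dim)
        self.Wr, self.Ur, self.br = s(hid_dim, in_dim), s(hid_dim, hid_dim), np.zeros(hid_dim)
        self.Wh, self.Uh, self.bh = s(hid_dim, in_dim), s(hid_dim, hid_dim), np.zeros(hid_dim)

    def step(self, x, h):
        z = sigmoid(self.Wz @ x + self.Uz @ h + self.bz)       # how much to overwrite
        r = sigmoid(self.Wr @ x + self.Ur @ h + self.br)       # how much history to read
        h_cand = np.tanh(self.Wh @ x + self.Uh @ (r * h) + self.bh)
        # Convex mix of old state and candidate; z near 0 preserves old state,
        # which is what lets the cell carry information over long sequences.
        return (1 - z) * h + z * h_cand

cell = GRUCell(in_dim=4, hid_dim=8)
h = np.zeros(8)
for x in np.random.default_rng(1).normal(size=(5, 4)):   # a length-5 input sequence
    h = cell.step(x, h)
```

Starting from a zero state, each update is a convex combination of the previous state and a tanh candidate, so the hidden state stays bounded in (-1, 1).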
  • FIG. 1 is a schematic diagram outlining a communication system according to an embodiment of the present disclosure
  • FIG. 2 is a flowchart outlining a communication system configuration method according to an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram illustrating a configuration example of a communication system according to an embodiment of the present disclosure
  • FIG. 4 is a flowchart illustrating an example of a communication system configuration method according to an embodiment of the present disclosure
  • FIG. 5 is a flowchart illustrating an example of a communication system configuration method according to an embodiment of the present disclosure
  • FIG. 6 is a schematic diagram illustrating a configuration example of a communication system according to an embodiment of the present disclosure
  • FIG. 7 is a flowchart illustrating an example of a communication system configuration method according to an embodiment of the present disclosure
  • FIG. 8 is a flowchart illustrating an example of a communication system configuration method according to an embodiment of the present disclosure
  • FIG. 9 is a schematic diagram illustrating a configuration example of a communication system according to an embodiment of the present disclosure
  • FIG. 10 is a flowchart illustrating an example of a communication system configuration method according to an embodiment of the present disclosure
  • FIG. 11 is a flowchart illustrating an example of a communication system configuration method according to an embodiment of the present disclosure
  • FIG. 12 is a schematic diagram illustrating a communication system according to an embodiment of the present disclosure performing an optimal beam scanning task
  • FIG. 13 is a schematic diagram illustrating the training and prediction process of a neural network model configured in a communication system according to an embodiment of the present disclosure
  • FIG. 14 is a schematic diagram illustrating a neural network model configured in a communication system according to an embodiment of the present disclosure
  • FIG. 15 is a block diagram illustrating an example of the hardware configuration of a child node and a user equipment according to an embodiment of the present disclosure.
  • the solution provided by the present disclosure relates to the combination of mobile communication technology and artificial intelligence technology, and is specifically described by the following embodiments.
  • FIG. 1 is a schematic diagram outlining a communication system according to an embodiment of the present disclosure.
  • a communication system 1 includes at least one master node 10 and a plurality of sub-nodes 11, 12, 13, and 14 communicatively connected with the master node.
  • the master node 10 serves as a central control unit to implement the configuration, scheduling and management of multiple sub-nodes 11, 12, 13, and 14 and corresponding resources.
  • Each of the plurality of child nodes 11, 12, 13, and 14 is configured with a child node neural network model 111, 121, 131, and 141, respectively.
  • the master node 10 is, for example, a central unit (CU) of a communication network, and the child nodes 11, 12, 13, and 14 are, for example, distributed units (DU) of the communication network.
  • the master node 10 is, for example, a cloud server, and the child nodes 11, 12, 13, and 14 are, for example, multi-access edge computing (MEC) servers.
  • Fig. 2 is a flowchart outlining a communication system configuration method according to an embodiment of the present disclosure.
  • the communication system 1 shown in FIG. 1 executes the communication system configuration method according to the embodiment of the present disclosure shown in FIG. 2.
  • in step S201, feature information of multiple child nodes is acquired.
  • the characteristic information of multiple sub-nodes may be the height of the sub-nodes, antenna configuration, coverage area size, service type, service volume, user distribution, environmental information, etc.
  • the characteristic information of the multiple sub-nodes may also include historical information obtained by the sub-node neural network of the multiple sub-nodes in the process of performing a specific task.
  • the feature information of multiple sub-nodes may be reported by the sub-nodes to the master node, or the master node may predict the relevant feature information of a sub-node based on acquired initial information of that sub-node.
  • in step S202, the sub-node neural network model is dynamically configured based on the acquired characteristic information.
  • dynamically configuring the child node neural network model may mean initializing the child node neural network model of a new child node when it joins the communication network.
  • dynamically configuring the sub-node neural network model may also mean using online data generated in real time during the operation of the communication network as training data to train and update the sub-node neural network model of each sub-node.
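The online update can be sketched with a sliding-window sample buffer and periodic retraining. The linear least-squares model below is a stand-in for the sub-node neural network model; window size, learning rate, and the data stream are illustrative assumptions.

```python
import numpy as np

class OnlineUpdater:
    """Sliding-window buffer of online samples plus periodic model refresh."""
    def __init__(self, dim, window=200, lr=0.1):
        self.w = np.zeros(dim)          # current model parameters
        self.xs, self.ys = [], []
        self.window, self.lr = window, lr

    def observe(self, x, y):
        # Keep only the most recent `window` online samples.
        self.xs = (self.xs + [x])[-self.window:]
        self.ys = (self.ys + [y])[-self.window:]

    def refresh(self, epochs=200):
        # Gradient descent on the buffered data (least-squares objective).
        X, Y = np.array(self.xs), np.array(self.ys)
        for _ in range(epochs):
            self.w -= self.lr * X.T @ (X @ self.w - Y) / len(Y)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])          # hidden relation the online data follows
upd = OnlineUpdater(dim=2)
for _ in range(200):                    # stream of online observations
    x = rng.normal(size=2)
    upd.observe(x, x @ true_w)
upd.refresh()                            # model converges toward true_w
```

The same buffer-then-refresh pattern applies whether the refresh happens centrally at the master node or locally at each sub-node.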
  • Fig. 3 is a schematic diagram illustrating one configuration example of a communication system according to an embodiment of the present disclosure.
  • a child node 14 is newly added to the communication system 1, and the master node 10 performs initial configuration on the newly added child node 14.
  • Figures 4 and 5 are exemplary flowcharts of the communication system configuration method corresponding to the scenario of Figure 3, and Figures 4 and 5 respectively show two different ways of acquiring feature information.
  • an example of a communication system configuration method includes the following steps.
  • in step S401, the characteristic information transmitted from one of the multiple child nodes is received. That is, referring to FIG. 3, the master node 10 receives the characteristic information P i,4 transmitted from the newly added child node 14.
  • the characteristic information P i,4 may be, for example, the height, antenna configuration, coverage area size, service type, service volume, user distribution, environmental information, etc. of the child node 14.
  • in step S402, one neural network model is selected from a plurality of predetermined neural network models.
  • the master node 10 is preset with multiple neural network models for different sub-node types and task types.
  • the master node 10 selects one neural network model from the plurality of predetermined neural network models according to the characteristic information P i,4 transmitted from the newly added child node 14.
  • in step S403, the selected neural network model is used to configure the child node neural network model of the one child node.
  • the master node 10 sends the selected neural network model to the child node 14 through signaling information P i+1,4, thereby configuring the child node neural network model 141 of the child node 14.
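A hypothetical sketch of the selection step: a library keyed by sub-node type and task type, with the reported characteristic information mapped to a key. The tags, model names, and the 25 m height threshold are invented for illustration and are not part of the disclosure.

```python
# Hypothetical library of predetermined models, keyed by (node type, task type).
MODEL_LIBRARY = {
    ("macro", "beam_prediction"): "model_A",
    ("micro", "beam_prediction"): "model_B",
    ("macro", "traffic_forecast"): "model_C",
}

def select_model(feature_info):
    """Map reported characteristic information to one predetermined model."""
    # Illustrative rule: tall sites are treated as macro cells.
    node_type = "macro" if feature_info["height"] > 25 else "micro"
    return MODEL_LIBRARY[(node_type, feature_info["task"])]

select_model({"height": 30, "task": "beam_prediction"})   # → "model_A"
```

The selected entry (or just its index, as discussed below for signaling) is what the master node sends to the newly added child node.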
  • the master node may represent the neural network model in many different ways. For example, indexes of multiple neural network models can be established, and the index is used to indicate that the child node neural network model is one of the multiple neural network models.
  • the model weight of the neural network model may be used to indicate the sub-node neural network model.
  • the model weight change amount of the neural network model may be used to indicate the sub-node neural network model.
  • the semantic representation of the neural network model may also be used to indicate the sub-node neural network model, for example, the topology structure diagram of the neural network can be used as the semantic representation of the neural network model. It is easy to understand that the above representation of the neural network model is only illustrative, and the representation of the neural network model in the communication system configuration method according to an embodiment of the present disclosure is not limited to this.
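The signaling trade-off between these representations can be illustrated with three toy encoders; the message format is invented for illustration, not the disclosure's actual signaling.

```python
import numpy as np

def encode_as_index(idx):
    """Cheapest signaling: the index of one model in a shared library."""
    return {"repr": "index", "payload": idx}

def encode_as_weights(w):
    """Full weights: largest payload, but needs no shared state at the node."""
    return {"repr": "weights", "payload": w.tolist()}

def encode_as_weight_delta(old_w, new_w, tol=1e-8):
    """Weight changes only: send (position, new value) pairs for weights that
    actually changed relative to a model the node already holds."""
    idx = np.flatnonzero(np.abs(new_w - old_w) > tol)
    return {"repr": "delta", "payload": [(int(i), float(new_w[i])) for i in idx]}

old = np.zeros(10)
new = old.copy()
new[3] = 0.5                               # only one weight changed after an update
delta = encode_as_weight_delta(old, new)   # payload: [(3, 0.5)]
```

When updates touch few weights, the delta encoding shrinks the signaling payload dramatically compared with sending the full weight vector.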
  • an example of a communication system configuration method includes the following steps.
  • in step S501, the initial information transmitted from one of the multiple child nodes is received. That is, referring to FIG. 3, the master node 10 receives the initial information P i,4 transmitted from the newly added child node 14. It should be noted that the initial information transmitted by the child node 14 may be different from the characteristic information described with reference to FIG. 4. In addition, in the embodiment of the present disclosure, step S501 is optional, and the newly added child node 14 need not report initial information.
  • in step S502, the characteristic information of the one child node is predicted based on the initial information. Unlike the example described with reference to FIG. 4, in the flowchart shown in FIG. 5 the master node 10 predicts the characteristic information of the newly added child node 14.
  • in step S503, one neural network model is selected from a plurality of predetermined neural network models.
  • the master node 10 is preset with multiple neural network models for different sub-node types and task types.
  • the master node 10 selects one neural network model from the plurality of predetermined neural network models according to the predicted characteristic information of the newly added child node 14.
  • in step S504, the selected neural network model is used to configure the child node neural network model of the one child node.
  • the master node 10 sends the selected neural network model to the child node 14 through signaling information P i+1,4, thereby configuring the child node neural network model 141 of the child node 14.
  • FIG. 6 is a schematic diagram illustrating one configuration example of a communication system according to an embodiment of the present disclosure. Similar to the example shown in FIG. 3, a child node 14 is newly added to the communication system 1, and the master node 10 performs initial configuration on the newly added child node 14. Unlike the example shown in FIG. 3, in the configuration example shown in FIG. 6 the master node selects the neural network model from an existing child node that is similar to the newly added child node, instead of selecting it from a set of multiple predetermined neural network models.
  • Figures 7 and 8 are exemplary flowcharts of a communication system configuration method corresponding to the scenario of Figure 6, and Figures 7 and 8 respectively show two different ways of acquiring feature information.
  • an example of a communication system configuration method includes the following steps.
  • in step S701, the characteristic information transmitted from one of the multiple child nodes is received. That is, referring to FIG. 6, the master node 10 receives the characteristic information P i,4 transmitted from the newly added child node 14.
  • the characteristic information P i,4 may be, for example, the height, antenna configuration, coverage area size, service type, service volume, user distribution, environmental information, etc. of the child node 14.
  • in step S702, based on the characteristic information, a matching child node that matches the one child node is selected from the multiple child nodes. That is, referring to FIG. 6, the master node 10 selects the matching child node 11 that matches the newly added child node 14 from the plurality of existing child nodes, based on the characteristic information P i,4 transmitted from the newly added child node 14.
  • in step S703, the child node neural network model of the matching child node is received from the matching child node. That is, referring to FIG. 6, the master node 10 receives the child node neural network model 111 transmitted from the matching child node 11 through the signaling P i,1.
  • in step S704, the child node neural network model of the matching child node is used to configure the child node neural network model of the one child node. That is, referring to FIG. 6, the master node 10 forwards the child node neural network model 111 received from the matching child node 11 through the signaling P i,1 to the child node 14 through signaling information P i+1,4, thereby configuring the child node neural network model 141 of the child node 14.
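The matching step (S702/S704) can be sketched as a nearest-neighbor search over feature vectors; Euclidean distance and the feature values are illustrative assumptions, since the disclosure does not fix a similarity measure.

```python
import numpy as np

def find_matching_node(new_feat, existing):
    """Return the id of the existing sub-node whose feature vector is
    closest (Euclidean distance) to the newly added sub-node's features."""
    best_id, best_d = None, float("inf")
    for node_id, feat in existing.items():
        d = float(np.linalg.norm(np.asarray(feat) - np.asarray(new_feat)))
        if d < best_d:
            best_id, best_d = node_id, d
    return best_id

# Hypothetical [height, traffic] features of existing sub-nodes 11, 12, 13.
existing = {11: [10.0, 1.0], 12: [50.0, 9.0], 13: [30.0, 4.0]}
match = find_matching_node([11.0, 1.1], existing)   # → 11
```

The matched node's model then seeds the new node, on the premise that sub-nodes with similar characteristics face similar tasks.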
  • an example of a communication system configuration method includes the following steps.
  • In step S801, the initial information transmitted from one of the multiple child nodes is received. That is, referring to FIG. 6, the master node 10 receives the initial information P i,4 transmitted from the newly added child node 14. It should be noted that the initial information transmitted by the child node 14 may differ from the characteristic information described with reference to FIG. 7. In addition, in the embodiment of the present disclosure, step S801 is optional: the newly added child node 14 does not need to report initial information.
  • In step S802, the characteristic information of the one child node is predicted based on the initial information. Unlike the example described with reference to FIG. 7, in the flowchart shown in FIG. 8 the master node 10 predicts the characteristic information of the newly added child node 14.
  • In step S803, based on the characteristic information, a matching child node that matches the one child node is selected from the multiple child nodes. That is, referring to FIG. 6, the master node 10 selects, based on the predicted characteristic information of the newly added child node 14, the matching child node 11 that matches the newly added child node 14 from the plurality of existing child nodes.
  • In step S804, the child node neural network model of the matching child node is received from the matching child node. That is, referring to FIG. 6, the master node 10 receives the child node neural network model 111 transmitted from the matching child node 11 through the signaling P i,1.
  • In step S805, the child node neural network model of the one child node is configured using the child node neural network model of the matching child node. That is, referring to FIG. 6, the master node 10 sends the child node neural network model 111 received from the matching child node 11 through the signaling P i,1 to the child node 14 through the signaling P i+1,4, thereby configuring the child node neural network model 114 of the one child node 14.
  • FIG. 9 is a schematic diagram illustrating one configuration example of a communication system according to an embodiment of the present disclosure.
  • the master node 10 in the communication system 1 coordinates the training and updating of the respective child nodes 11, 12, 13, and 14.
  • FIG. 10 and FIG. 11 are exemplary flowcharts of the communication system configuration method corresponding to the scenario in FIG. 9, and they respectively show two different update methods.
  • an example of a communication system configuration method includes the following steps.
  • In step S1001, the characteristic information transmitted from each of the multiple child nodes is received. That is, referring to FIG. 9, the master node 10 receives the characteristic information P i,1, P i,2, P i,3, and P i,4 transmitted from the child nodes 11, 12, 13, and 14. More specifically, in a scenario where each child node 11, 12, 13, and 14 is trained and updated, the feature information P i,1, P i,2, P i,3, and P i,4 transmitted from the child nodes 11, 12, 13, and 14 is the online data generated by each child node for a specific task. For example, in the embodiment described further below, the online data is the historical optimal beam set predicted by the child node for the user equipment.
  • In step S1002, based on the feature information, the multiple sub-nodes are divided into multiple categories. That is, referring to FIG. 9, the master node 10 divides the multiple child nodes into two categories based on the characteristic information P i,1, P i,2, P i,3, and P i,4 transmitted from the respective child nodes 11, 12, 13, and 14: child nodes 11 and 14 form one category, and child nodes 12 and 13 form the other.
  • In step S1003, the feature information is used to train the sub-node neural network models for the multiple categories so as to obtain updated sub-node neural network models. That is, referring to FIG. 9, the master node 10 uses the feature information P i,1 and P i,4 of the first-category child nodes 11 and 14 to train the child node neural network models 111 and 141 of the first-category child nodes, and uses the feature information P i,2 and P i,3 of the second-category child nodes 12 and 13 to train the child node neural network models 121 and 131 of the second-category child nodes.
  • That is to say, by training the sub-nodes 11, 12, 13, and 14 according to their categories, the available training data is expanded compared with the training process for a single sub-node, thereby improving the training efficiency and the accuracy of the trained sub-node neural network models.
  • In step S1004, the child node neural network models of the plurality of child nodes are updated using the trained child node neural network models. That is, referring to FIG. 9, the master node 10 sends the plurality of child node neural network models obtained by training according to the classification to the child nodes 11, 12, 13, and 14 through the signaling P i+1,1, P i+1,2, P i+1,3, and P i+1,4, respectively, so as to configure the child node neural network models 111, 121, 131, and 141 of the child nodes 11, 12, 13, and 14.
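A minimal sketch of the centralized update of steps S1001-S1004 follows. The one-dimensional grouping by a single scalar feature and the placeholder "training" are assumptions; the disclosure does not fix a particular clustering or training algorithm:

```python
def categorize(nodes, n_categories=2):
    """Group child nodes by a single scalar feature (e.g. service volume)
    into roughly equal categories; a stand-in for whatever clustering
    the master node actually uses."""
    ordered = sorted(nodes, key=lambda n: n["feature"])
    size = (len(ordered) + n_categories - 1) // n_categories
    return [ordered[i:i + size] for i in range(0, len(ordered), size)]

def centralized_update(nodes):
    """Steps S1001-S1004: the master node pools the online data of each
    category, trains one model per category (placeholder here), and pushes
    the result back to every node of that category."""
    for group in categorize(nodes):
        pooled = [sample for n in group for sample in n["data"]]  # expanded training set
        model = f"model_trained_on_{len(pooled)}_samples"  # placeholder training step
        for n in group:
            n["model"] = model  # corresponds to the signaling P_{i+1,k}
    return nodes

nodes = [
    {"feature": 1, "data": [1, 2, 3]},
    {"feature": 2, "data": [4, 5, 6]},
    {"feature": 10, "data": [7, 8, 9]},
    {"feature": 11, "data": [10, 11, 12]},
]
centralized_update(nodes)
```

Nodes in the same category end up sharing one model trained on their pooled data, which is the data-expansion effect the text describes.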
  • an example of a communication system configuration method includes the following steps.
  • In step S1101, the characteristic information transmitted from each of the multiple child nodes is received. That is, referring to FIG. 9, the master node 10 receives the characteristic information P i,1, P i,2, P i,3, and P i,4 transmitted from the child nodes 11, 12, 13, and 14. More specifically, in a scenario where each child node 11, 12, 13, and 14 is trained and updated, the feature information P i,1, P i,2, P i,3, and P i,4 transmitted from the child nodes 11, 12, 13, and 14 is the online data generated by each child node for a specific task. For example, in the embodiment described further below, the online data is the historical optimal beam set predicted by the child node for the user equipment.
  • In step S1102, the multiple sub-nodes are divided into multiple categories based on the characteristic information. That is, referring to FIG. 9, the master node 10 divides the multiple child nodes into two categories based on the characteristic information P i,1, P i,2, P i,3, and P i,4 transmitted from the respective child nodes 11, 12, 13, and 14: child nodes 11 and 14 form one category, and child nodes 12 and 13 form the other.
  • In step S1103, according to the multiple categories, the feature information of the child nodes belonging to the same category is notified to the child nodes of that category. That is, referring to FIG. 9, the master node 10 divides the child nodes into multiple categories (two categories as shown in FIG. 9: child nodes 11 and 14 are one category, and child nodes 12 and 13 are the other), and notifies the feature information of the child nodes belonging to the same category to the child nodes of that category. Specifically, the master node 10 notifies the first-category feature information (that is, the first-category online data) P i,1 and P i,4 to the first-category child nodes 11 and 14 through the signaling P i+1,1 and P i+1,4, respectively, and notifies the second-category feature information (that is, the second-category online data) P i,2 and P i,3 to the second-category child nodes 12 and 13 through the signaling P i+1,2 and P i+1,3, respectively.
  • In step S1104, the child nodes of the same category perform training using the feature information of that category, and the child node neural network models of the child nodes of the same category are updated. That is, referring to FIG. 9, the child nodes 11 and 14 use the first-category feature information P i,1 and P i,4 to perform training and update their child node neural network models 111 and 141, and the child nodes 12 and 13 use the second-category feature information P i,2 and P i,3 to perform training and update their child node neural network models 121 and 131.
  • Since each sub-node 11, 12, 13, and 14 uses the feature information of its whole category for training, the available training data is expanded compared with a single sub-node training only on its own feature information, thereby improving the training efficiency and the accuracy of the trained sub-node neural network model.
  • FIG. 12 is a schematic diagram illustrating that a communication system according to an embodiment of the present disclosure performs an optimal beam scanning task.
  • the child node 11 is, for example, a base station using NR massive MIMO.
  • the optimal beams of the user equipment 20 at different times T 1 and T 2 will change significantly.
  • The prediction task of the future optimal beam candidate set for the user equipment 20 can be performed through the neural network model 111 configured in the child node 11. It should be understood that, in order to perform this prediction task, the neural network model 111 configured in the child node 11 needs to be trained.
  • For the training, the communication system configuration method according to the embodiments of the present disclosure described above with reference to FIGS. 3 to 11 may be used, and the training may be performed by the master node 10 or by the child node 11.
  • FIG. 13 is a schematic diagram illustrating the training and prediction process of a neural network model configured in a communication system according to an embodiment of the present disclosure.
  • Fig. 13 shows the training phase 130 and the prediction phase 140 of the neural network model.
  • the historical optimal beam set is used as the training data set 1301. More specifically, in the embodiment of the present disclosure, the relative index of the historical optimal beam is used as the training data.
  • For example, the difference sequence between the optimal beams Idx t1, Idx t2, ..., Idx tn-1 at multiple consecutive time points and the optimal beam Idx tn at the most recent time point, namely {Idx t1 - Idx tn}, {Idx t2 - Idx tn}, ..., {Idx tn-1 - Idx tn}, {0}, is adopted as the historical optimal beam set.
  • Alternatively, the difference sequence between the optimal beams at two adjacent time points among the multiple consecutive time points, namely {0}, {Idx t2 - Idx t1}, {Idx t3 - Idx t2}, {Idx t4 - Idx t3}, ..., {Idx tn - Idx tn-1}, is adopted as the historical optimal beam set.
  • the same change trend of the optimal beam will be recognized as the same training data by the neural network model, thereby reducing the redundancy of the training data.
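The two relative-index representations above can be illustrated as follows; the beam index values are hypothetical:

```python
def diff_to_latest(beam_indices):
    """Representation 1: difference of each historical optimal beam index
    to the optimal beam at the most recent time point, ending with 0."""
    latest = beam_indices[-1]
    return [idx - latest for idx in beam_indices[:-1]] + [0]

def diff_adjacent(beam_indices):
    """Representation 2: difference between the optimal beams at two
    adjacent time points, starting with 0."""
    return [0] + [b - a for a, b in zip(beam_indices, beam_indices[1:])]

# two trajectories with the same change trend map to identical training data,
# which is how the representation reduces training-data redundancy
print(diff_adjacent([3, 4, 6, 6]))    # [0, 1, 2, 0]
print(diff_adjacent([10, 11, 13, 13]))  # [0, 1, 2, 0]
```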
  • a weighted binary cross-entropy is used to construct a loss function required for training.
  • The loss function required for training is expressed as: Loss = -Σn wn [yn log(xn) + (1 - yn) log(1 - xn)]
  • where xn is the prediction result of the neural network model during training,
  • yn is the prediction target of the neural network model, and
  • wn is the weight corresponding to the respective beam.
  • each beam is assigned the same initial weight.
  • Each time a beam becomes the optimal beam, its corresponding weight is incremented, while the normalization of all the beam weights is maintained.
  • In this way, a more accurate training result can be achieved by using a loss function constructed in consideration of the frequency with which different beams become the optimal beam.
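A sketch of the weighted binary cross-entropy and the beam-weight update described above. The fixed increment step is an assumption, since the text does not specify the magnitude of the increment:

```python
import math

def update_beam_weights(weights, optimal_beam, step=0.1):
    """Each time a beam becomes the optimal beam its weight is incremented
    (by an assumed fixed step), then all weights are renormalized to sum to 1."""
    weights = list(weights)
    weights[optimal_beam] += step
    total = sum(weights)
    return [w / total for w in weights]

def weighted_bce(x, y, w, eps=1e-12):
    """Weighted binary cross-entropy over the beam set:
    Loss = -sum_n w_n * (y_n*log(x_n) + (1-y_n)*log(1-x_n))."""
    return -sum(
        wn * (yn * math.log(xn + eps) + (1 - yn) * math.log(1 - xn + eps))
        for xn, yn, wn in zip(x, y, w)
    )

w = [0.25, 0.25, 0.25, 0.25]          # same initial weight for each beam
w = update_beam_weights(w, optimal_beam=2)  # beam 2 was the optimal beam
loss = weighted_bce([0.1, 0.2, 0.9, 0.1], [0, 0, 1, 0], w)
```

Frequently optimal beams accumulate larger weights, so mispredicting them is penalized more heavily, which is the effect the text attributes to the weighted loss.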
  • In the prediction phase 140, the trained child node neural network model 111 of the child node 11 outputs the corresponding candidate beam set 1501.
  • FIG. 14 is a schematic diagram illustrating a neural network model configured in a communication system according to an embodiment of the present disclosure.
  • The sub-node neural network model 111 of the sub-node 11 uses cascaded gated recurrent unit (GRU) modules to extract the long-term change trend of the beam.
  • In addition, the attention layer 400 is introduced into the neural network model to more effectively extract valuable information from the input sequence, thereby effectively improving the accuracy of the optimal beam candidate set prediction, especially in long-term prediction situations.
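One common form of attention layer consistent with this description is additive attention pooled over the GRU hidden states. The sketch below is an assumption about the layer's internals (the disclosure does not specify them), with placeholder values standing in for learned parameters:

```python
import math

def attention_pool(hidden_states, w, v):
    """Additive attention over GRU hidden states h_t:
    score_t = v . tanh(W h_t), alpha = softmax(scores),
    context = sum_t alpha_t * h_t.
    W and v are learned parameters in the real model; any values work here."""
    def matvec(m, x):
        return [sum(mi * xi for mi, xi in zip(row, x)) for row in m]
    scores = [sum(vi * math.tanh(hi) for vi, hi in zip(v, matvec(w, h)))
              for h in hidden_states]
    mx = max(scores)
    exps = [math.exp(s - mx) for s in scores]       # numerically stable softmax
    total = sum(exps)
    alpha = [e / total for e in exps]               # attention weights per time step
    dim = len(hidden_states[0])
    context = [sum(a * h[d] for a, h in zip(alpha, hidden_states))
               for d in range(dim)]
    return alpha, context

h = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # three time steps of 2-dim GRU outputs
w = [[1.0, 0.0], [0.0, 1.0]]              # placeholder "learned" parameters
v = [1.0, 1.0]
alpha, context = attention_pool(h, w, v)
```

The attention weights let the predictor emphasize the time steps that carry the most information about the beam trend, rather than relying only on the last hidden state.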
  • Fig. 15 is a block diagram illustrating an example of the hardware configuration of a child node and a user equipment according to an embodiment of the present invention.
  • The aforementioned child nodes 11, 12, 13, 14 and user equipment 20 may be physically constituted as computer devices including a processor 1001, a memory 1002, a storage 1003, a communication device 1004, an input device 1005, an output device 1006, a bus 1007, and the like.
  • In the following description, the word "device" may be replaced with "circuit", "unit", and the like.
  • the hardware structure of the child nodes 11, 12, 13, 14 and the user equipment 20 may include one or more of the devices shown in the figure, or may not include some of the devices.
  • For example, although only one processor 1001 is shown in the figure, there may be multiple processors.
  • The processing may be executed by one processor, or may be executed by more than one processor simultaneously, sequentially, or by other methods.
  • In addition, the processor 1001 may be implemented by more than one chip.
  • The functions of the child nodes 11, 12, 13, 14 and the user equipment 20 are implemented, for example, in the following manner: by reading prescribed software (programs) into hardware such as the processor 1001 and the memory 1002, the processor 1001 performs calculations, controls the communication performed by the communication device 1004, and controls the reading and/or writing of data in the memory 1002 and the storage 1003.
  • the processor 1001 operates, for example, an operating system to control the entire computer.
  • The processor 1001 may be composed of a Central Processing Unit (CPU) including an interface with peripheral devices, a control device, a computing device, registers, and the like.
  • The processor 1001 reads programs (program codes), software modules, data, and the like from the storage 1003 and/or the communication device 1004 into the memory 1002, and executes various processes according to them.
  • As the program, a program that causes a computer to execute at least a part of the operations described in the above embodiments can be adopted.
  • For example, the polarization encoder 300 may be implemented by a control program that is stored in the memory 1002 and runs on the processor 1001, and the other functional blocks may be implemented in the same way.
  • The memory 1002 is a computer-readable recording medium, and may be composed of, for example, at least one of Read Only Memory (ROM), Erasable Programmable ROM (EPROM), Electrically EPROM (EEPROM), Random Access Memory (RAM), and other appropriate storage media.
  • the memory 1002 may also be called a register, a cache, a main memory (main storage device), and the like.
  • the memory 1002 can store executable programs (program codes), software modules, etc., for implementing the wireless communication method according to an embodiment of the present invention.
  • The storage 1003 is a computer-readable recording medium, and may be composed of, for example, at least one of a flexible disk, a floppy (registered trademark) disk, a magneto-optical disk (for example, a Compact Disc ROM (CD-ROM)), a digital versatile disc, a Blu-ray (registered trademark) disc, a removable disk, a hard disk drive, a smart card, a flash memory device (for example, a card, a stick, a key drive), a magnetic stripe, a database, a server, and other appropriate storage media.
  • The storage 1003 may also be referred to as an auxiliary storage device.
  • The communication device 1004 is hardware (a transmitting and receiving device) for communication between computers through a wired and/or wireless network, and is also referred to as, for example, a network device, a network controller, a network card, a communication module, or the like.
  • For example, in order to implement Frequency Division Duplex (FDD) and/or Time Division Duplex (TDD), the communication device 1004 may include a high-frequency switch, a duplexer, a filter, a frequency synthesizer, and the like.
  • the aforementioned transmitter 202 may be implemented by the communication device 1004.
  • the input device 1005 is an input device that accepts input from the outside (for example, a keyboard, a mouse, a microphone, a switch, a button, a sensor, etc.).
  • the output device 1006 is an output device that implements output to the outside (for example, a display, a speaker, a light emitting diode (LED, Light Emitting Diode) lamp, etc.).
  • the input device 1005 and the output device 1006 may also be an integrated structure (for example, a touch panel).
  • The devices such as the processor 1001 and the memory 1002 are connected via the bus 1007 for communicating information.
  • the bus 1007 may be composed of a single bus, or may be composed of different buses between devices.
  • In addition, the child nodes 11, 12, 13, 14 and the user equipment 20 may include hardware such as a microprocessor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Programmable Logic Device (PLD), and a Field Programmable Gate Array (FPGA), and part or all of each functional block may be implemented by such hardware.
  • For example, the processor 1001 may be implemented by at least one of these pieces of hardware.
  • The communication system based on a neural network model and its configuration method according to the present disclosure have been described above with reference to FIGS. 1 to 15. They realize dynamic configuration of the neural network models of newly added sub-nodes in the communication system, and during operation make full use of the master node to implement centralized updating at the master node or distributed updating at each sub-node.
  • The processes of dynamic configuration and updating take into account the full sharing and utilization of training data and neural network models between identical or similar sub-nodes, which improves the efficiency of training and the accuracy of the resulting models.
  • the various characteristics of the sub-nodes are fully considered, such as the height of the sub-node, antenna configuration, coverage area size, service type, service volume, user distribution, environmental information, and historical configuration information.
  • Furthermore, the neural network model is expressed in different ways, such as a neural network model index, the model weights of the neural network model, the model weight changes of the neural network model, and a semantic representation of the neural network model, which further improves the efficiency of training and the accuracy of the obtained model.
  • In addition, for the optimal beam scanning task, a lightweight recurrent neural network (RNN) is used, in which gated recurrent unit (GRU) modules capture the long-term dependencies of the input sequence; an appropriate training data representation is chosen, the structure of the loss function is improved in a targeted manner, and at the same time an attention mechanism is introduced into the neural network model to effectively extract valuable information from the input sequence, thereby greatly improving the accuracy of predicting the optimal beam candidate set, especially in the case of long-term prediction.
  • the channel and/or symbol may also be a signal (signaling).
  • the signal can also be a message.
  • the reference signal may also be referred to as RS (Reference Signal) for short, and may also be referred to as pilot (Pilot), pilot signal, etc., according to the applicable standard.
  • In addition, a component carrier (CC, Component Carrier) may also be referred to as a carrier frequency, a cell, or the like.
  • the information, parameters, etc. described in this specification may be expressed in absolute values, may be expressed in relative values to predetermined values, or may be expressed in corresponding other information.
  • the wireless resource can be indicated by a prescribed index.
  • the formulas etc. using these parameters may also be different from those explicitly disclosed in this specification.
  • the information, signals, etc. described in this specification can be expressed using any of a variety of different technologies.
  • The data, commands, instructions, information, signals, bits, symbols, chips, etc. that may be mentioned throughout the above description can be expressed by voltage, current, electromagnetic waves, magnetic fields or magnetic particles, light fields or photons, or any combination thereof.
  • information, signals, etc. can be output from the upper layer to the lower layer, and/or from the lower layer to the upper layer.
  • Information, signals, etc. can be input or output via multiple network nodes.
  • the input or output information, signals, etc. can be stored in a specific place (such as memory), or can be managed through a management table.
  • the input or output information, signals, etc. can be overwritten, updated or supplemented.
  • the output information, signals, etc. can be deleted.
  • the input information, signals, etc. can be sent to other devices.
  • the notification of information is not limited to the mode/implementation described in this specification, and may be performed by other methods.
  • The notification of information may be performed through physical layer signaling (for example, Downlink Control Information (DCI), Uplink Control Information (UCI)), upper layer signaling (for example, Radio Resource Control (RRC) signaling, broadcast information (Master Information Block (MIB), System Information Block (SIB), etc.), Medium Access Control (MAC) signaling), other signals, or a combination thereof.
  • the physical layer signaling may also be referred to as L1/L2 (Layer 1/Layer 2) control information (L1/L2 control signal), L1 control information (L1 control signal), or the like.
  • the RRC signaling may also be referred to as an RRC message, for example, it may be an RRC Connection Setup (RRC Connection Setup) message, an RRC Connection Reconfiguration (RRC Connection Reconfiguration) message, and so on.
  • the MAC signaling may be notified by, for example, a MAC control element (MAC CE (Control Element)).
  • In addition, the notification of prescribed information is not limited to being performed explicitly, and may also be performed implicitly (for example, by not performing notification of the prescribed information, or by notifying other information).
  • The judgment can be made by a value represented by one bit (0 or 1), by a Boolean value represented by true or false, or by comparison of numerical values (for example, comparison with a predetermined value).
  • software, commands, information, etc. may be transmitted or received via a transmission medium.
  • For example, when software is sent from a website, a server, or another remote source using wired technology (coaxial cable, optical cable, twisted pair, Digital Subscriber Line (DSL), etc.) and/or wireless technology (infrared, microwave, etc.), these wired and/or wireless technologies are included in the definition of the transmission medium.
  • The terms "system" and "network" used in this specification can be used interchangeably.
  • In this specification, terms such as "base station (BS, Base Station)", "wireless base station", "eNB", "gNB", "cell", "sector", "cell group", "carrier", "component carrier", and "femto cell" may be used interchangeably.
  • The base station can accommodate one or more (for example, three) cells (also called sectors). When the base station accommodates multiple cells, the entire coverage area of the base station can be divided into multiple smaller areas, and each smaller area can also provide communication services through a base station subsystem (for example, an indoor small base station (Remote Radio Head (RRH))).
  • the term "cell” or “sector” refers to a part or the whole of the coverage area of a base station and/or a base station subsystem that performs communication services in the coverage.
  • the base station may also be called by terms such as fixed station, NodeB, eNodeB (eNB), access point, transmission point, reception point, femto cell, and small cell.
  • A mobile station is sometimes referred to by those skilled in the art as a subscriber station, mobile unit, subscriber unit, wireless unit, remote unit, mobile device, wireless device, wireless communication device, remote device, mobile subscriber station, access terminal, mobile terminal, wireless terminal, remote terminal, handset, user agent, mobile client, client, or some other appropriate term.
  • the wireless base station in this specification can also be replaced with a user terminal.
  • the various modes/implementations of the present invention can also be applied.
  • the functions possessed by the aforementioned child nodes 11, 12, 13, and 14 can be regarded as functions possessed by the user terminal 20.
  • words such as "up” and “down” can also be replaced with "side”.
  • the upstream channel can also be replaced with a side channel.
  • the user terminal in this specification can also be replaced with a wireless base station.
  • the functions of the user terminal 20 described above can be regarded as the functions of the child nodes 11, 12, 13, and 14.
  • a specific operation performed by a base station may also be performed by its upper node depending on the situation.
  • Obviously, in a network composed of one or more network nodes having a base station, various actions performed for communication with a terminal may be performed by the base station and/or one or more network nodes other than the base station (for example, a Mobility Management Entity (MME), a Serving-Gateway (S-GW), or the like may be considered, but these are not limiting).
  • The various modes/embodiments described in this specification may be applied to systems using Long Term Evolution (LTE), LTE-Advanced (LTE-A), LTE-Beyond (LTE-B), SUPER 3G, IMT-Advanced, the 4th generation mobile communication system (4G), the 5th generation mobile communication system (5G), Future Radio Access (FRA), New Radio Access Technology (New-RAT), New Radio (NR), New Radio Access (NX), next-generation radio access (FX, Future generation radio access), Global System for Mobile communications (GSM (registered trademark)), Code Division Multiple Access 2000 (CDMA2000), Ultra Mobile Broadband (UMB), IEEE 802.11 (Wi-Fi (registered trademark)), IEEE 802.16 (WiMAX (registered trademark)), and/or next-generation systems extended based on them.
  • Any reference to units using the designations "first", "second", etc. used in this specification does not comprehensively limit the number or order of these units. These designations may be used in this specification as a convenient method of distinguishing between two or more units. Therefore, references to a first unit and a second unit do not mean that only two units can be employed, or that the first unit must in some way precede the second unit.
  • The term "judgment (determining)" used in this specification may include a wide variety of actions. For example, calculating, computing, processing, deriving, investigating, looking up (for example, searching in a table, a database, or another data structure), and ascertaining may be regarded as "judging (determining)". In addition, receiving (for example, receiving information), transmitting (for example, transmitting information), input, output, and accessing (for example, accessing data in a memory) may be regarded as "judging (determining)".
  • Furthermore, resolving, selecting, choosing, establishing, comparing, and the like may also be regarded as "judging (determining)".
  • That is, several actions may be regarded as "judging (determining)".
  • The terms "connected" and "coupled" used in this specification, or any variation thereof, refer to any direct or indirect connection or coupling between two or more units, and may include the case where one or more intermediate units exist between two units that are "connected" or "coupled" to each other.
  • The coupling or connection between units may be physical, logical, or a combination of the two. For example, "connect" may also be replaced with "access".
  • As used in this specification, two units may be considered to be "connected" or "coupled" to each other by using one or more wires, cables, and/or printed electrical connections, and, as several non-limiting and non-exhaustive examples, by using electromagnetic energy having wavelengths in the radio frequency region, the microwave region, and/or the light region (both visible and invisible light).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The present invention relates to a communication system based on a neural network model and a configuration method thereof. The communication system includes at least one master node and multiple child nodes communicatively connected to the master node, and each of the multiple child nodes is configured with a child node neural network model. The communication system configuration method includes: acquiring feature information of the multiple child nodes; and dynamically configuring the child node neural network models based on the acquired feature information.

Description

Communication system based on a neural network model and configuration method thereof. Technical field
The present disclosure relates to the technical fields of mobile communication and artificial intelligence, and more specifically, to a communication system based on a neural network model and a configuration method thereof.
Background art
In traditional mobile communication systems, network deployment, operation, and maintenance are mainly completed manually, which consumes a large amount of human resources, increases operating costs, and yields suboptimal network optimization. With the commercial deployment of fifth-generation mobile communication technology, communication systems are developing toward network diversification, broadband, integration, and intelligence, so complex tasks such as network optimization, large input data set processing, network recommendation, and network element configuration are becoming greater challenges. At the same time, due to breakthroughs in big data technology, computing power, and various algorithms and network frameworks in recent years, artificial intelligence technology has also experienced explosive growth. Currently, artificial intelligence technology is increasingly being combined with mobile communication technology: mobile communication technology provides the large data throughput and low-latency transmission required by many intelligent application scenarios of artificial intelligence technology, while artificial intelligence technology provides powerful solutions to various complex problems in mobile communication technology.
In a communication system composed of at least one master node and multiple child nodes communicatively connected to the master node, neural network models are configured in the master node and the multiple child nodes in order to perform complex tasks such as network optimization, large input data set processing, network recommendation, or network element configuration. When a new child node joins the communication system, the neural network model of the newly added child node needs to be initialized; if only a predetermined default setting is adopted, a targeted optimized configuration cannot be achieved. Meanwhile, during the operation of the communication system, if the configured neural network models are not updated for specific tasks, it is difficult to achieve the best processing effect. Further, in the training process for a specific task, if only the local data of a single child node is used for training, the limited training data prevents optimal model optimization and model sharing between identical or similar child nodes. In addition, if only the data of the most recent moment is used for training, historical training data cannot be used to improve the accuracy of neural network model processing.
发明内容
鉴于上述问题而提出了本公开。本公开提供了一种基于神经网络模型的通信系统及其配置方法。
根据本公开的一个方面,提供了一种基于神经网络模型的通信系统配置方法,所述通信系统包括至少一个主节点和与所述主节点通信连接的多个子节点,并且所述多个子节点的每一个中配置有子节点神经网络模型,所述通信系统配置方法包括:获取所述多个子节点的特征信息;以及基于获取的所述特征信息,动态配置所述子节点神经网络模型。
此外,根据本公开一个方面的通信系统配置方法,其中,所述获取所述多个子节点的特征信息包括:接收从所述多个子节点中的一个子节点传输的所述特征信息。
此外,根据本公开一个方面的通信系统配置方法,其中,所述获取所述多个子节点的特征信息包括:接收从所述多个子节点中的一个子节点传输的初始信息;以及基于所述初始信息,预测所述一个子节点的所述特征信息。
此外,根据本公开一个方面的通信系统配置方法,其中,所述基于获取的所述特征信息,动态配置所述子节点神经网络模型包括:基于所述特征信息,从多个预定神经网络模型选择一个神经网络模型;以及利用选择的所述一个神经网络模型配置所述一个子节点的所述子节点神经网络模型。
此外,根据本公开一个方面的通信系统配置方法,其中,所述基于获取的所述特征信息,动态配置所述子节点神经网络模型包括:基于所述特征信息,从所述多个子节点选择与所述一个子节点匹配的匹配子节点;从所述匹配子节点接收所述匹配子节点的子节点神经网络模型;以及利用所述匹配子节点的子节点神经网络模型配置所述一个子节点的所述子节点神经网络模型。
此外,根据本公开一个方面的通信系统配置方法,其中,所述一个子节点是新加入所述通信系统的子节点。
此外,根据本公开一个方面的通信系统配置方法,其中,所述获取所述多个子节点的特征信息包括:接收从所述多个子节点中的每一个子节点传输的所述特征信息。
此外，根据本公开一个方面的通信系统配置方法，其中，所述基于获取的所述特征信息，动态配置所述子节点神经网络模型包括：基于所述特征信息，将所述多个子节点分为多个类别；利用所述特征信息，针对所述多个类别执行所述子节点神经网络模型的训练，以获取更新的子节点神经网络模型；以及利用所述子节点神经网络模型更新所述多个子节点的所述子节点神经网络模型。
此外,根据本公开一个方面的通信系统配置方法,其中,所述基于获取的所述特征信息,动态配置所述子节点神经网络模型包括:基于所述特征信息,将所述多个子节点分为多个类别;按照多个类别,将属于多个类别中同一类别的子节点的所述特征信息通知给所述同一类别的子节点;以及所述同一类别的子节点利用所述同一类别的子节点的所述特征信息执行训练,更新所述同一类别的子节点的所述子节点神经网络模型。
此外,根据本公开一个方面的通信系统配置方法,其中,所述特征信息包括所述子节点的高度、天线配置、覆盖区域大小、业务类型、业务量、用户分布、环境信息、以及历史配置信息。
此外,根据本公开一个方面的通信系统配置方法,其中,所述配置所述子节点神经网络模型包括以下之一:建立多个神经网络模型的索引,利用所述索引指示所述子节点神经网络模型为所述多个神经网络模型之一;利用神经网络模型的模型权重指示所述子节点神经网络模型;利用神经网络模型的模型权重变化量指示所述子节点神经网络模型;以及利用神经网络模型的语义表示指示所述子节点神经网络模型。
此外，根据本公开一个方面的通信系统配置方法，其中，所述特征信息为所述子节点对应用户设备的历史最优波束集，并且其中，所述历史最优波束集包括：连续多个时间点的多个最优波束与最近时间点的最优波束的差别序列；或者连续多个时间点中两个相邻时间点的最优波束之间的差别序列。
此外,根据本公开一个方面的通信系统配置方法,其中,利用所述特征信息更新所述子节点神经网络模型包括:利用所述历史最优波束集中每个历史最优波束的出现次数,确定每个历史最优波束的权重;以及根据每个历史最优波束的权重以及所述历史最优波束集,构造加权损失函数执行训练,以更新所述子节点神经网络模型。
此外,根据本公开一个方面的通信系统配置方法,其中,利用所述特征信息更新所述子节点神经网络模型包括:在所述子节点神经网络模型中配置注意力层,利用包括注意力层的所述子节点神经网络模型执行训练,以更新所述子节点神经网络模型。
根据本公开的另一个方面，提供了一种基于神经网络模型的通信系统，包括：至少一个主节点；多个子节点，与所述主节点通信连接，并且所述多个子节点的每一个中配置有子节点神经网络模型，其中，所述至少一个主节点获取所述多个子节点的特征信息；以及基于获取的所述特征信息，动态配置所述子节点神经网络模型。
此外,根据本公开另一个方面的通信系统,其中,所述至少一个主节点接收从所述多个子节点中的一个子节点传输的所述特征信息。
此外,根据本公开另一个方面的通信系统,其中,所述至少一个主节点接收从所述多个子节点中的一个子节点传输的初始信息;以及基于所述初始信息,预测所述一个子节点的所述特征信息。
此外,根据本公开另一个方面的通信系统,其中,所述至少一个主节点基于所述特征信息,从多个预定神经网络模型选择一个神经网络模型;以及利用选择的所述一个神经网络模型配置所述一个子节点的所述子节点神经网络模型。
此外,根据本公开另一个方面的通信系统,其中,所述至少一个主节点基于所述特征信息,从所述多个子节点选择与所述一个子节点匹配的匹配子节点;从所述匹配子节点接收所述匹配子节点的子节点神经网络模型;以及利用所述匹配子节点的子节点神经网络模型配置所述一个子节点的所述子节点神经网络模型。
此外,根据本公开另一个方面的通信系统,其中,所述一个子节点是新加入所述通信系统的子节点。
此外,根据本公开另一个方面的通信系统,其中,所述至少一个主节点接收从所述多个子节点中的每一个子节点传输的所述特征信息。
此外,根据本公开另一个方面的通信系统,其中,所述至少一个主节点基于所述特征信息,将所述多个子节点分为多个类别;利用所述特征信息,针对所述多个类别执行所述子节点神经网络模型的训练,以获取更新的子节点神经网络模型;以及利用所述子节点神经网络模型更新所述多个子节点的所述子节点神经网络模型。
此外，根据本公开另一个方面的通信系统，其中，所述至少一个主节点基于所述特征信息，将所述多个子节点分为多个类别；按照多个类别，将属于多个类别中同一类别的子节点的所述特征信息通知给所述同一类别的子节点；以及所述同一类别的子节点利用所述同一类别的子节点的所述特征信息执行训练，更新所述同一类别的子节点的所述子节点神经网络模型。
此外，根据本公开另一个方面的通信系统，其中，所述特征信息包括所述子节点的高度、天线配置、覆盖区域大小、业务类型、业务量、用户分布、环境信息、以及历史配置信息。
此外，根据本公开另一个方面的通信系统，其中，所述配置所述子节点神经网络模型包括以下之一：建立多个神经网络模型的索引，利用所述索引指示所述子节点神经网络模型为所述多个神经网络模型之一；利用神经网络模型的模型权重指示所述子节点神经网络模型；利用神经网络模型的模型权重变化量指示所述子节点神经网络模型；以及利用神经网络模型的语义表示指示所述子节点神经网络模型。
此外，根据本公开另一个方面的通信系统，其中，所述特征信息为所述子节点对应用户设备的历史最优波束集，并且其中，所述历史最优波束集包括：连续多个时间点的多个最优波束与最近时间点的最优波束的差别序列；或者连续多个时间点中两个相邻时间点的最优波束之间的差别序列。
此外,根据本公开另一个方面的通信系统,其中,所述至少一个主节点或所述子节点利用所述历史最优波束集中每个历史最优波束的出现次数,确定每个历史最优波束的权重;以及根据每个历史最优波束的权重以及所述历史最优波束集,构造加权损失函数执行训练,以更新所述子节点神经网络模型。
此外,根据本公开另一个方面的通信系统,其中,所述至少一个主节点或所述子节点在所述子节点神经网络模型中配置注意力层,利用包括注意力层的所述子节点神经网络模型执行训练,以更新所述子节点神经网络模型。
如以下将详细描述的，根据本公开的基于神经网络模型的通信系统及其配置方法，实现了对于通信系统中新子节点的神经网络模型的动态配置，并且在运行过程中充分利用在线数据，由主节点实现主节点处的集中式更新或各子节点处的分布式更新。在动态配置和更新的过程中，考虑相同或相似子节点之间的训练数据和神经网络模型的充分共享利用，提升了训练的效率和所得模型的准确率。此外，在神经网络模型的配置过程中，充分考虑子节点的各种特征，诸如子节点的高度、天线配置、覆盖区域大小、业务类型、业务量、用户分布、环境信息、以及历史配置信息，并且通过神经网络模型索引、神经网络模型的模型权重、神经网络模型的模型权重变化量和神经网络模型的语义表示等不同方式对神经网络模型进行表示，进一步提升了训练的效率和所得模型的准确率。进一步地，在诸如针对为用户设备配置最优波束候选集这样的具体任务训练神经网络模型时，通过采用轻量的循环神经网络（RNN），并且利用门控循环单元（GRU）模块捕获输入序列的长时依赖信息，选择适当的训练数据表示方式，并且有针对性地改进损失函数的构造，同时在神经网络模型中引入注意力机制以便有效地从输入序列中提取有价值的信息，从而有效地提升对于最优波束候选集预测的准确性，特别是提升了在长时预测情况下的准确性。
要理解的是,前面的一般描述和下面的详细描述两者都是示例性的,并且意图在于提供要求保护的技术的进一步说明。
附图说明
通过结合附图对本公开实施例进行更详细的描述,本公开的上述以及其它目的、特征和优势将变得更加明显。附图用来提供对本公开实施例的进一步理解,并且构成说明书的一部分,与本公开实施例一起用于解释本公开,并不构成对本公开的限制。在附图中,相同的参考标号通常代表相同部件或步骤。
图1是概述根据本公开实施例的通信系统的示意图;
图2是概述根据本公开实施例的通信系统配置方法的流程图;
图3是图示根据本公开实施例的通信系统的一个配置示例的示意图;
图4是图示根据本公开实施例的通信系统配置方法的一个示例的流程图;
图5是图示根据本公开实施例的通信系统配置方法的一个示例的流程图;
图6是图示根据本公开实施例的通信系统的一个配置示例的示意图;
图7是图示根据本公开实施例的通信系统配置方法的一个示例的流程图;
图8是图示根据本公开实施例的通信系统配置方法的一个示例的流程图;
图9是图示根据本公开实施例的通信系统的一个配置示例的示意图;
图10是图示根据本公开实施例的通信系统配置方法的一个示例的流程图;
图11是图示根据本公开实施例的通信系统配置方法的一个示例的流程图;
图12是图示根据本公开实施例的通信系统执行最优波束扫描任务的示意图;
图13是图示根据本公开实施例的通信系统中配置的神经网络模型的训练和预测过程示意图;
图14是图示根据本公开实施例的通信系统中配置的神经网络模型的示意图;以及
图15是图示根据本发明实施例的子节点及用户设备的硬件构成的示例的框图。
具体实施方式
为了使得本公开的目的、技术方案和优点更为明显,下面将参照附图详细描述根据本公开的示例实施例。显然,所描述的实施例仅仅是本公开的一部分实施例,而不是本公开的全部实施例,应理解,本公开不受这里描述的示例实施例的限制。
本公开提供的方案涉及移动通信技术和人工智能技术的结合,具体通过如下实施例进行说明。
图1是概述根据本公开实施例的通信系统的示意图。
如图1所示,根据本公开实施例的通信系统1包括至少一个主节点10和与所述主节点通信连接的多个子节点11、12、13和14。主节点10作为中心控制单元,实现对于多个子节点11、12、13和14以及相应资源的配置、调度和管理。多个子节点11、12、13和14的每一个中配置有子节点神经网络模型111、121、131和141。
在本公开的一个实施例中,主节点10例如是通信网络的中央单元(CU),子节点11、12、13和14例如是通信网络的分布单元(DU)。在本公开的另一个实施例中,主节点10例如是云服务器,子节点11、12、13和14例如是多接入边缘计算(MEC)服务器。容易理解的是,主节点和子节点的数量和类型、子节点神经网络的数量和类型都是非限制性的。
图2是概述根据本公开实施例的通信系统配置方法的流程图。在如图1所示的通信系统1中，执行如图2所示的根据本公开实施例的通信系统配置方法。
具体地,在步骤S201中,获取多个子节点的特征信息。
如下将参照附图详细描述的,在本公开的实施例中,多个子节点的特征信息可以是子节点的高度、天线配置、覆盖区域大小、业务类型、业务量、用户分布、环境信息等。此外,在本公开的实施例中,多个子节点的特征信息还可以包括多个子节点的子节点神经网络在执行特定任务的过程中获取的历史信息。在本公开的实施例中,多个子节点的特征信息可以是子节点报告给主节点,或者主节点根据获取的子节点的初始信息预测子节点的相关特征信息。
在步骤S202中,基于获取的所述特征信息,动态配置所述子节点神经网络模型。
如下将参照附图详细描述的,在本公开的实施例中,动态配置所述子节点神经网络模型可能是当新子节点加入通信网络时,初始化配置新子节点的子节点神经网络模型。在本公开的实施例中,动态配置所述子节点神经网络模型还可能是在通信网络运行过程中,利用实时生成的在线数据作为训练数据,训练并且更新各子节点的子节点神经网络模型。
以下,将参照图3到图11详细描述根据本公开实施例的通信系统及其配置方法的具体示例。
图3是图示根据本公开实施例的通信系统的一个配置示例的示意图。如图3所示,通信系统1中新加入子节点14,主节点10对新加入的子节点14进行初始化配置。图4和图5是与图3场景相对应的通信系统配置方法的示例流程图,并且图4和图5分别示出两种不同的特征信息获取方式。
如图4所示,根据本公开实施例的通信系统配置方法的一个示例包括如下步骤。
在步骤S401中,接收从所述多个子节点中的一个子节点传输的所述特征信息。也就是说,参照图3所示,主节点10接收从新加入的子节点14传输的特征信息P i,4,特征信息P i,4可以是所述子节点14的高度、天线配置、覆盖区域大小、业务类型、业务量、用户分布、环境信息等。
在步骤S402中,基于所述特征信息,从多个预定神经网络模型选择一个神经网络模型。主节点10预先设置有多个神经网络模型,用于不同的子节点类型和任务类型。主节点10根据从新加入的子节点14传输的特征信息 P i,4,从多个预定神经网络模型选择一个神经网络模型。
在步骤S403中，利用选择的所述一个神经网络模型配置所述一个子节点的所述子节点神经网络模型。主节点10将从多个预定神经网络模型中选择的一个神经网络模型通过信令信息P i+1,4发送给子节点14，从而配置所述一个子节点14的所述子节点神经网络模型114。
在本公开的实施例中,主节点可以通过多种不同的方式表示神经网络模型。例如,可以建立多个神经网络模型的索引,利用所述索引指示所述子节点神经网络模型为所述多个神经网络模型之一。可以利用神经网络模型的模型权重指示所述子节点神经网络模型。可以利用神经网络模型的模型权重变化量指示所述子节点神经网络模型。此外,还可以利用神经网络模型的语义表示指示所述子节点神经网络模型,例如利用神经网络的拓扑架构图作为神经网络模型的语义表示。容易以理解的是,以上神经网络模型的表示方式仅是示意性的,根据本公开实施例的通信系统配置方法中的神经网络模型的表示方式不限于此。
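作为示意，上述四种神经网络模型表示方式可以写成如下的Python草图。其中的消息字段名（type、payload）、函数名以及语义表示的具体内容均为本文档之外的假设，仅用于说明主节点如何以“索引/模型权重/权重变化量/语义表示”之一来指示子节点神经网络模型，并非对实际信令格式的限定。

```python
# 示意性草图：以四种不同方式表示待配置的子节点神经网络模型。
# 消息字段名（type/payload 等）均为假设，并非实际信令定义。

def build_model_config(mode, model_zoo, model=None, base_model=None):
    """按指定方式构造神经网络模型的配置消息。

    mode: "index" | "weights" | "weight_delta" | "semantic"
    model_zoo: 多个预定神经网络模型的列表（索引方式下使用）
    model: 以权重列表示意的目标模型
    base_model: 计算权重变化量时的基准模型
    """
    if mode == "index":
        # 方式一：建立多个预定模型的索引，仅下发索引号
        return {"type": "index", "payload": model_zoo.index(model)}
    if mode == "weights":
        # 方式二：直接下发模型权重
        return {"type": "weights", "payload": list(model)}
    if mode == "weight_delta":
        # 方式三：下发相对于基准模型的权重变化量
        delta = [w - b for w, b in zip(model, base_model)]
        return {"type": "weight_delta", "payload": delta}
    if mode == "semantic":
        # 方式四：下发模型的语义表示（例如拓扑架构描述，此处内容为假设）
        return {"type": "semantic",
                "payload": {"layers": ["GRU", "GRU", "Attention", "Dense"]}}
    raise ValueError(mode)

zoo = [[0.1, 0.2], [0.3, 0.4]]
msg = build_model_config("index", zoo, model=[0.3, 0.4])
# msg == {"type": "index", "payload": 1}
```

在索引方式下仅需传输一个编号，开销最小；在权重变化量方式下，传输量随模型差异的大小而变化，适合增量更新的场景。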
如图5所示,根据本公开实施例的通信系统配置方法的一个示例包括如下步骤。
在步骤S501中,接收从所述多个子节点中的一个子节点传输的初始信息。也就是说,参照图3所示,主节点10接收从新加入的子节点14传输的初始信息P i,4,需要注意的是,子节点14传输的初始信息可能不同于参照图4描述的特征信息。此外,本公开的实施例中,步骤S501是可选的,新加入的子节点14也可不必报告初始信息。
在步骤S502中,基于所述初始信息,预测所述一个子节点的所述特征信息。不同于参照图4描述的示例,在图5所示的流程图中,由主节点10预测新加入的子节点14的特征信息。
在步骤S503中,基于所述特征信息,从多个预定神经网络模型选择一个神经网络模型。主节点10预先设置有多个神经网络模型,用于不同的子节点类型和任务类型。主节点10根据预测的新加入的子节点14的特征信息,从多个预定神经网络模型选择一个神经网络模型。
在步骤S504中，利用选择的所述一个神经网络模型配置所述一个子节点的所述子节点神经网络模型。与参照图4描述的配置步骤相同，主节点10将从多个预定神经网络模型中选择的一个神经网络模型通过信令信息P i+1,4发送给子节点14，从而配置所述一个子节点14的所述子节点神经网络模型114。
图6是图示根据本公开实施例的通信系统的一个配置示例的示意图。类似于图3所示的示例，通信系统1中新加入子节点14，主节点10对新加入的子节点14进行初始化配置。与图3所示的示例不同，在图6所示的配置示例中，主节点将从与新加入子节点相匹配的相似子节点选择神经网络模型，而不是从预先设置的多个神经网络模型中选择。图7和图8是与图6场景相对应的通信系统配置方法的示例流程图，并且图7和图8分别示出两种不同的特征信息获取方式。
如图7所示,根据本公开实施例的通信系统配置方法的一个示例包括如下步骤。
在步骤S701中,接收从所述多个子节点中的一个子节点传输的所述特征信息。也就是说,参照图6所示,主节点10接收从新加入的子节点14传输的特征信息P i,4,特征信息P i,4可以是所述子节点14的高度、天线配置、覆盖区域大小、业务类型、业务量、用户分布、环境信息等。
在步骤S702中,基于所述特征信息,从所述多个子节点选择与所述一个子节点匹配的匹配子节点。也就是说,参照图6所示,主节点10基于从新加入的子节点14传输的特征信息P i,4,从现有的多个子节点选择与新加入的子节点14匹配的匹配子节点11。
在步骤S703中,从所述匹配子节点接收所述匹配子节点的子节点神经网络模型。也就是说,参照图6所示,主节点10从匹配子节点11接收通过信令P i,1传输的子节点神经网络模型111。
在步骤S704中，利用所述匹配子节点的子节点神经网络模型配置所述一个子节点的所述子节点神经网络模型。也就是说，参照图6所示，主节点10将从匹配子节点11接收的、通过信令P i,1传输的子节点神经网络模型111通过信令信息P i+1,4发送给子节点14，从而配置所述一个子节点14的所述子节点神经网络模型114。
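上述基于特征信息选择匹配子节点的过程（步骤S702）可以概括为一次特征相似度比较。下面给出一个假设性的Python草图：将子节点的高度、覆盖区域大小、业务量等特征编码为数值向量，用向量间的欧氏距离衡量相似程度，距离最小者即为匹配子节点。具体的特征编码方式和距离度量均为说明用的假设，实际系统可采用其它相似度准则。

```python
# 示意性草图：主节点按特征向量的欧氏距离选择与新子节点最匹配的现有子节点。
import math

def match_subnode(new_features, candidates):
    """new_features: 新子节点的特征向量（如高度、覆盖半径、业务量的数值表示）
    candidates: {子节点标识: 特征向量} 的字典
    返回距离最小的子节点标识。"""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(candidates, key=lambda node: dist(new_features, candidates[node]))

existing = {
    "node11": [30.0, 500.0, 0.8],   # 假设的特征：高度/覆盖半径/业务量
    "node12": [10.0, 100.0, 0.2],
    "node13": [12.0, 120.0, 0.3],
}
best = match_subnode([28.0, 480.0, 0.7], existing)
# best == "node11"，随后主节点将 node11 的子节点神经网络模型下发给新子节点
```

在实际应用中，不同特征的量纲差异较大时，通常需要先对各维特征做归一化，再计算距离。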
如图8所示,根据本公开实施例的通信系统配置方法的一个示例包括如下步骤。
在步骤S801中，接收从所述多个子节点中的一个子节点传输的初始信息。也就是说，参照图6所示，主节点10接收从新加入的子节点14传输的初始信息P i,4，需要注意的是，子节点14传输的初始信息可能不同于参照图7描述的特征信息。此外，本公开的实施例中，步骤S801是可选的，新加入的子节点14也可不必报告初始信息。
在步骤S802中,基于所述初始信息,预测所述一个子节点的所述特征信息。不同于参照图7描述的示例,在图8所示的流程图中,由主节点10预测新加入的子节点14的特征信息。
在步骤S803中,基于所述特征信息,从所述多个子节点选择与所述一个子节点匹配的匹配子节点。也就是说,参照图6所示,主节点10基于从新加入的子节点14传输的特征信息P i,4,从现有的多个子节点选择与新加入的子节点14匹配的匹配子节点11。
在步骤S804中,从所述匹配子节点接收所述匹配子节点的子节点神经网络模型。也就是说,参照图6所示,主节点10从匹配子节点11接收通过信令P i,1传输的子节点神经网络模型111。
在步骤S805中，利用所述匹配子节点的子节点神经网络模型配置所述一个子节点的所述子节点神经网络模型。也就是说，参照图6所示，主节点10将从匹配子节点11接收的、通过信令P i,1传输的子节点神经网络模型111通过信令信息P i+1,4发送给子节点14，从而配置所述一个子节点14的所述子节点神经网络模型114。
以上,参照图3到图8描述了当通信系统中有新加入的子节点时,不是采用预定的缺省设置对于新加入的子节点进行初始化配置,而是针对新加入的子节点自身的特征,进行针对性的优化配置。
图9是图示根据本公开实施例的通信系统的一个配置示例的示意图。如图9所示,通信系统1中主节点10协调对于各个子节点11、12、13和14进行训练更新。图10和图11是与图9场景相对应的通信系统配置方法的示例流程图,并且图10和图11分别示出两种不同的更新方式。
如图10所示,根据本公开实施例的通信系统配置方法的一个示例包括如下步骤。
在步骤S1001中，接收从所述多个子节点中的每一个子节点传输的所述特征信息。也就是说，参照图9所示，主节点10接收从子节点11、12、13和14传输的特征信息P i,1、P i,2、P i,3和P i,4。更具体地，在对于各个子节点11、12、13和14进行训练更新的场景下，从各子节点11、12、13和14传输的特征信息P i,1、P i,2、P i,3和P i,4是各子节点11、12、13和14针对特定任务生成的在线数据。例如，在如下进一步描述的实施例中，所述在线数据是子节点为用户设备预测的历史最优波束集。
在步骤S1002中,基于所述特征信息,将所述多个子节点分为多个类别。也就是说,参照图9所示,主节点10基于从各子节点11、12、13和14传输的特征信息P i,1、P i,2、P i,3和P i,4,将多个子节点分为两个类别,即子节点11和14为一类,子节点12和13为一类。
在步骤S1003中,利用所述特征信息,针对所述多个类别执行所述子节点神经网络模型的训练,以获取更新的子节点神经网络模型。也就是说,参照图9所示,主节点10利用第一类子节点11和14的特征信息P i,1和P i,4,对第一类子节点的子节点神经网络模型111和141进行训练,并且主节点10利用第二类子节点12和13的特征信息P i,2和P i,3,对第二类子节点的子节点神经网络模型121和131进行训练。也就是说,通过对于各子节点11、12、13和14按照分类进行训练,相对于针对单个子节点的训练过程来说扩大了可用的训练数据,从而提高了训练效率和训练所得子节点神经网络模型的精度。
在步骤S1004中,利用所述子节点神经网络模型更新所述多个子节点的所述子节点神经网络模型。也就是说,参照图9所示,主节点10将分类训练所得的多个子节点的所述子节点神经网络模型分别通过信令信息P i+1,1、P i+1,2、P i+1,3和P i+1,4发送给子节点11、12、13和14,从而配置所述子节点11、12、13和14的所述子节点神经网络模型111、121、131和141。
如图11所示,根据本公开实施例的通信系统配置方法的一个示例包括如下步骤。
在步骤S1101中,接收从所述多个子节点中的每一个子节点传输的所述特征信息。也就是说,参照图9所示,主节点10接收从子节点11、12、13和14传输的特征信息P i,1、P i,2、P i,3和P i,4。更具体地,在对于各个子节点11、12、13和14进行训练更新的场景下,从各子节点11、12、13和14传输的特征信息P i,1、P i,2、P i,3和P i,4是各子节点11、12、13和14针对特定任务生成的在线数据。例如,在如下进一步描述的实施例中,所述在线数据是子节点为用户设备预测的历史最优波束集。
在步骤S1102中，基于所述特征信息，将所述多个子节点分为多个类别。也就是说，参照图9所示，主节点10基于从各子节点11、12、13和14传输的特征信息P i,1、P i,2、P i,3和P i,4，将多个子节点分为两个类别，即子节点11和14为一类，子节点12和13为一类。
在步骤S1103中，按照多个类别，将属于多个类别中同一类别的子节点的所述特征信息通知给所述同一类别的子节点。不同于图10所示的由主节点10执行训练，在图11所示的配置方法中，主节点10按照多个类别（如图9所示的两个类别，子节点11和14为一类，子节点12和13为一类），将属于多个类别中同一类别的子节点的所述特征信息通知给所述同一类别的子节点。例如，主节点10分别通过信令P i+1,1和P i+1,4将第一类的特征信息（即，第一类的在线数据）P i,1和P i,4通知给第一类子节点11和14，并且主节点10分别通过信令P i+1,2和P i+1,3将第二类的特征信息（即，第二类的在线数据）P i,2和P i,3通知给第二类子节点12和13。
在步骤S1104中,所述同一类别的子节点利用所述同一类别的子节点的所述特征信息执行训练,更新所述同一类别的子节点的所述子节点神经网络模型。也就是说,参照图9所示,子节点11和14利用第一类的特征信息P i,1和P i,4执行训练并且更新自己的子节点神经网络模型111和141,并且子节点12和13利用第二类的特征信息P i,2和P i,3执行训练并且更新自己的子节点神经网络模型121和131。也就是说,通过各子节点11、12、13和14利用自身所属类别的特征信息进行训练,相对于单个子节点仅利用自身特征信息的训练过程来说扩大了可用的训练数据,从而提高了训练效率和训练所得子节点神经网络模型的精度。
以上,参照图9到图11描述了当通信系统中进行各子节点的神经网络模型训练更新时,不是采用单个子节点仅利用自身训练数据训练的方式,而是针对子节点自身的特征,将子节点进行分类,从而利用同一类子节点的所有训练数据执行对于该类子节点的神经网络模型训练更新,扩大了可用的训练数据,从而提高了训练效率和训练所得子节点神经网络模型的精度。
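上述“先按特征分类、再按类别汇聚训练数据”的流程可以用如下Python草图示意。这里用一个极简的一维阈值分组代替实际的分类算法（实际系统可采用聚类等方法，此处仅作占位假设），并演示同一类别子节点的在线数据如何汇聚为更大的共享训练集。

```python
# 示意性草图：按特征把子节点分组，并汇聚同组子节点的在线数据作为共享训练集。

def group_subnodes(features, threshold=1.0):
    """features: {子节点标识: 特征数值}；按数值排序后做简单的阈值分组，
    仅作分类过程的占位，实际可替换为任意聚类方法。"""
    groups = []                      # 每组是一个子节点标识列表
    for node, value in sorted(features.items(), key=lambda kv: kv[1]):
        if groups and abs(value - features[groups[-1][-1]]) <= threshold:
            groups[-1].append(node)  # 与上一节点足够接近：归入同一类别
        else:
            groups.append([node])    # 否则另起一个类别
    return groups

def pooled_training_data(groups, online_data):
    """online_data: {子节点标识: 该子节点的在线训练样本列表}
    返回 {类别编号: 汇聚后的训练样本列表}。"""
    return {i: [s for node in g for s in online_data[node]]
            for i, g in enumerate(groups)}

feats = {"node11": 0.9, "node12": 3.0, "node13": 3.4, "node14": 1.1}
groups = group_subnodes(feats)
# groups == [["node11", "node14"], ["node12", "node13"]]，与图9中的分类一致
data = pooled_training_data(groups, {"node11": [1], "node12": [2],
                                     "node13": [3], "node14": [4]})
# data == {0: [1, 4], 1: [2, 3]}
```

汇聚后的训练集既可以由主节点集中使用（图10的方式），也可以分发给同类别的各子节点分布式使用（图11的方式）。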
以下,将进一步参照图12到图14描述针对为用户设备提供最优波束候选集的目的,训练子节点的神经网络模型的具体示例。
图12是图示根据本公开实施例的通信系统执行最优波束扫描任务的示意图。
如图12所示，子节点11例如是采用NR大规模MIMO的基站。当用户设备20处于移动状态时，用户设备20在不同时刻T 1和T 2的最优波束会出现显著的变化。
在根据本公开实施例的通信系统中,可以通过子节点11中配置的神经网络模型111对用户设备20执行未来最优波束候选集的预测任务。需要理解的是,为了执行未来最优波束候选集的预测任务,需要对子节点11中配置的神经网络模型111执行训练。可以采用以上参照图3到11描述的根据本公开实施例的通信系统配置方法,由主节点10或子节点11执行训练。
图13是图示根据本公开实施例的通信系统中配置的神经网络模型的训练和预测过程示意图。
图13分别示出了神经网络模型的训练阶段130和预测阶段140。在训练阶段130,利用历史最优波束集作为训练数据集1301。更具体地,在本公开的实施例中,采用历史最优波束的相对索引作为训练数据。
例如，在一个实施例中，采用连续多个时间点的多个最优波束Idx t1、Idx t2、……、Idx t(n-1)与最近时间点的最优波束Idx tn的差别序列{Idx t1-Idx tn, Idx t2-Idx tn, ……, Idx t(n-1)-Idx tn, 0}作为历史最优波束集。
在另一个实施例中，采用连续多个时间点中两个相邻时间点的最优波束之间的差别序列{0, Idx t2-Idx t1, Idx t3-Idx t2, Idx t4-Idx t3, ……, Idx tn-Idx t(n-1)}作为历史最优波束集。
通过如此配置训练数据的表示,最优波束的相同变化趋势情况将被神经网络模型识别为相同的训练数据,从而降低了训练数据的冗余。
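上述两种相对索引表示可以直接写成如下Python草图（函数名为说明用的假设）：

```python
# 示意性草图：由连续时间点的最优波束索引序列构造两种相对索引表示。

def diff_to_latest(beams):
    """各时间点最优波束与最近时间点最优波束的差别序列，末位恒为0。"""
    latest = beams[-1]
    return [b - latest for b in beams]

def diff_adjacent(beams):
    """相邻时间点最优波束之间的差别序列，首位恒为0。"""
    return [0] + [b2 - b1 for b1, b2 in zip(beams, beams[1:])]

history = [5, 7, 8, 8, 10]          # 假设的历史最优波束索引 Idx t1..Idx t5
print(diff_to_latest(history))      # [-5, -3, -2, -2, 0]
print(diff_adjacent(history))       # [0, 2, 1, 0, 2]
```

可以看到，具有相同变化趋势的两条序列（例如[5, 7, 8, 8, 10]与[6, 8, 9, 9, 11]）会得到完全相同的相对索引表示，这正是该表示方式能够降低训练数据冗余的原因。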
此外,在本公开的实施例中,使用加权的二进制交叉熵构造训练所需的损失函数。在一个示例中,训练所需的损失函数表示为:
L_n = -w_n·[y_n·log(x_n) + (1-y_n)·log(1-x_n)]
其中，x_n是训练期间神经网络模型的预测结果，y_n是神经网络模型的预测目标，w_n是对应波束的相应权重。在初始训练时，每个波束分配有初始的相同权重；随着训练的进行，每当一个波束成为最优波束，就对其相应的权重进行增量操作，同时保持所有波束权重的归一化。如此，利用通过考虑不同波束成为最优波束的频率来构造的损失函数，能够实现更准确的训练结果。
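上述加权二进制交叉熵及波束权重的增量更新可以用如下纯标准库的Python草图示意。其中增量步长step的取值以及数值下限eps均为说明用的假设，并非对实际实现的限定。

```python
# 示意性草图：按“成为最优波束的次数”维护权重，并计算加权二进制交叉熵损失。
import math

def update_beam_weights(weights, best_beam, step=1.0):
    """best_beam 成为最优波束一次：对其权重做增量操作，随后归一化所有权重。"""
    weights = list(weights)
    weights[best_beam] += step
    total = sum(weights)
    return [w / total for w in weights]

def weighted_bce(pred, target, weights, eps=1e-12):
    """逐波束求和的加权二进制交叉熵：
    L = -Σ w_n·[y_n·log(x_n) + (1-y_n)·log(1-x_n)]，eps 用于数值稳定。"""
    return -sum(w * (y * math.log(max(x, eps)) +
                     (1 - y) * math.log(max(1 - x, eps)))
                for x, y, w in zip(pred, target, weights))

w = [0.25, 0.25, 0.25, 0.25]              # 初始相同权重
w = update_beam_weights(w, best_beam=2)   # 波束2成为最优波束一次
# w == [0.125, 0.125, 0.625, 0.125]，且权重之和保持为1
loss = weighted_bce([0.9, 0.1, 0.8, 0.2], [1, 0, 1, 0], w)
```

出现次数越多的波束获得越大的权重，其预测误差在损失中被放大，从而引导模型更准确地预测高频出现的最优波束。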
在预测阶段140,利用历史最优波束集1401作为输入,训练好的子节点11的子节点神经网络模型111将输出相应的候选波束集1501。
图14是图示根据本公开实施例的通信系统中配置的神经网络模型的示意图。
如图14所示,在本公开的一个实施例中,子节点11的子节点神经网络模型111采用级联的门控循环单元(GRU)模块提取波束的长期变化趋势。此外,如图14所示,在神经网络模型中引入注意力层400以便更有效地从输入序列中提取有价值的信息,从而有效地提升对于最优波束候选集预测的准确性,特别是提升了在长时预测情况下的准确性。
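图14中的注意力层可以用“对各时间步的隐状态做softmax加权求和”来示意。下面的Python草图省略了GRU本身（以任意隐状态序列代替其输出）；注意力打分采用与查询向量的点积，这是常见做法之一，在此作为假设给出，并非对图14具体结构的限定。

```python
# 示意性草图：对隐状态序列施加点积注意力，得到加权的上下文向量。
import math

def softmax(xs):
    m = max(xs)                                  # 减去最大值以保证数值稳定
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(hidden_states, query):
    """hidden_states: 各时间步的隐状态向量列表（例如GRU的输出）
    query: 查询向量（实际中可学习，此处为假设的固定值）
    返回 (注意力权重, 上下文向量)。"""
    scores = [sum(h_i * q_i for h_i, q_i in zip(h, query))
              for h in hidden_states]
    alphas = softmax(scores)                     # 注意力权重，和为1
    dim = len(hidden_states[0])
    context = [sum(a * h[d] for a, h in zip(alphas, hidden_states))
               for d in range(dim)]
    return alphas, context

hs = [[0.1, 0.2], [0.4, 0.1], [0.9, 0.8]]        # 假设的三个时间步隐状态
alphas, ctx = attention(hs, query=[1.0, 1.0])
# alphas 之和为1；得分最高的时间步（此处为最后一步）获得最大的注意力权重
```

通过为信息量更大的时间步分配更高的权重，注意力层使模型能够从较长的输入序列中提取有价值的信息，这与长时预测准确性的提升相对应。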
图15是图示根据本发明实施例的子节点及用户设备的硬件构成的示例的框图。上述的子节点11、12、13、14和用户设备20可以作为在物理上包括处理器1001、内存1002、存储器1003、通信装置1004、输入装置1005、输出装置1006、总线1007等的计算机装置来构成。
另外,在以下的说明中,“装置”这样的文字也可替换为电路、设备、单元等。子节点11、12、13、14和用户设备20的硬件结构可以包括一个或多个图中所示的各装置,也可以不包括部分装置。
例如,处理器1001仅图示出一个,但也可以为多个处理器。此外,可以通过一个处理器来执行处理,也可以通过一个以上的处理器同时、依次、或采用其它方法来执行处理。另外,处理器1001可以通过一个以上的芯片来安装。
子节点11、12、13、14和用户设备20中的各功能例如通过如下方式实现:通过将规定的软件(程序)读入到处理器1001、内存1002等硬件上,从而使处理器1001进行运算,对由通信装置1004进行的通信进行控制,并对内存1002和存储器1003中的数据的读出和/或写入进行控制。
处理器1001例如使操作系统进行工作从而对计算机整体进行控制。处理器1001可以由包括与周边装置的接口、控制装置、运算装置、寄存器等的中央处理器（CPU，Central Processing Unit）构成。此外，处理器1001将程序（程序代码）、软件模块、数据等从存储器1003和/或通信装置1004读出到内存1002，并根据它们执行各种处理。作为程序，可以采用使计算机执行在上述实施方式中说明的动作中的至少一部分的程序。例如，上述的各功能块可以通过保存在内存1002中并通过处理器1001来工作的控制程序来实现，对于其它功能块，也可以同样地来实现。内存1002是计算机可读取记录介质，例如可以由只读存储器（ROM，Read Only Memory）、可擦除可编程只读存储器（EPROM，Erasable Programmable ROM）、电可擦除可编程只读存储器（EEPROM，Electrically EPROM）、随机存取存储器（RAM，Random Access Memory）、其它适当的存储介质中的至少一个来构成。内存1002也可以称为寄存器、高速缓存、主存储器（主存储装置）等。内存1002可以保存用于实施本发明的一实施方式所涉及的无线通信方法的可执行程序（程序代码）、软件模块等。
存储器1003是计算机可读取记录介质，例如可以由软磁盘（flexible disk）、软（注册商标）盘（floppy disk）、磁光盘（例如，只读光盘（CD-ROM，Compact Disc ROM）等）、数字通用光盘、蓝光（Blu-ray，注册商标）光盘、可移动磁盘、硬盘驱动器、智能卡、闪存设备（例如，卡、棒（stick）、密钥驱动器（key driver））、磁条、数据库、服务器、其它适当的存储介质中的至少一个来构成。存储器1003也可以称为辅助存储装置。
通信装置1004是用于通过有线和/或无线网络进行计算机间的通信的硬件（发送接收设备），例如也称为网络设备、网络控制器、网卡、通信模块等。通信装置1004为了实现例如频分双工（FDD，Frequency Division Duplex）和/或时分双工（TDD，Time Division Duplex），可以包括高频开关、双工器、滤波器、频率合成器等。例如，上述的主节点与子节点之间的信令收发可以通过通信装置1004来实现。
输入装置1005是接受来自外部的输入的输入设备(例如,键盘、鼠标、麦克风、开关、按钮、传感器等)。输出装置1006是实施向外部的输出的输出设备(例如,显示器、扬声器、发光二极管(LED,LightEmittingDiode)灯等)。另外,输入装置1005和输出装置1006也可以为一体的结构(例如触控面板)。
此外,处理器1001、内存1002等各装置通过用于对信息进行通信的总线1007连接。总线1007可以由单一的总线构成,也可以由装置间不同的总线构成。
此外,子节点11、12、13、14和用户设备20可以包括微处理器、数字信号处理器(DSP,DigitalSignalProcessor)、专用集成电路(ASIC,ApplicationSpecificIntegratedCircuit)、可编程逻辑器件(PLD,ProgrammableLogicDevice)、现场可编程门阵列(FPGA,FieldProgrammableGateArray)等硬件,可以通过该硬件来实现各功能块的部分或全部。例如,处理器1001可以通过这些硬件中的至少一个来安装。
以上，参照图1到图15描述了根据本公开的基于神经网络模型的通信系统及其配置方法，实现了对于通信系统中新子节点的神经网络模型的动态配置，并且在运行过程中充分利用在线数据，由主节点实现主节点处的集中式更新或各子节点处的分布式更新。在动态配置和更新的过程中，考虑相同或相似子节点之间的训练数据和神经网络模型的充分共享利用，提升了训练的效率和所得模型的准确率。此外，在神经网络模型的配置过程中，充分考虑子节点的各种特征，诸如子节点的高度、天线配置、覆盖区域大小、业务类型、业务量、用户分布、环境信息、以及历史配置信息，并且通过神经网络模型索引、神经网络模型的模型权重、神经网络模型的模型权重变化量和神经网络模型的语义表示等不同方式对神经网络模型进行表示，进一步提升了训练的效率和所得模型的准确率。进一步地，在诸如针对为用户设备配置最优波束候选集这样的具体任务训练神经网络模型时，通过采用轻量的循环神经网络（RNN），并且利用门控循环单元（GRU）模块捕获输入序列的长时依赖信息，选择适当的训练数据表示方式，并且有针对性地改进损失函数的构造，同时在神经网络模型中引入注意力机制以便有效地从输入序列中提取有价值的信息，从而有效地提升对于最优波束候选集预测的准确性，特别是提升了在长时预测情况下的准确性。
另外,关于本说明书中说明的用语和/或对本说明书进行理解所需的用语,可以与具有相同或类似含义的用语进行互换。例如,信道和/或符号也可以为信号(信令)。此外,信号也可以为消息。参考信号也可以简称为RS(ReferenceSignal),根据所适用的标准,也可以称为导频(Pilot)、导频信号等。此外,分量载波(CC,ComponentCarrier)也可以称为小区、频率载波、载波频率等。
此外,本说明书中说明的信息、参数等可以用绝对值来表示,也可以用与规定值的相对值来表示,还可以用对应的其它信息来表示。例如,无线资源可以通过规定的索引来指示。进一步地,使用这些参数的公式等也可以与本说明书中明确公开的不同。
在本说明书中用于参数等的名称在任何方面都并非限定性的。例如,各种各样的信道(物理上行链路控制信道(PUCCH,PhysicalUplink ControlChannel)、物理下行链路控制信道(PDCCH,PhysicalDownlink ControlChannel)等)和信息单元可以通过任何适当的名称来识别,因此为这些各种各样的信道和信息单元所分配的各种各样的名称在任何方面都并 非限定性的。
本说明书中说明的信息、信号等可以使用各种各样不同技术中的任意一种来表示。例如,在上述的全部说明中可能提及的数据、命令、指令、信息、信号、比特、符号、芯片等可以通过电压、电流、电磁波、磁场或磁性粒子、光场或光子、或者它们的任意组合来表示。
此外,信息、信号等可以从上层向下层、和/或从下层向上层输出。信息、信号等可以经由多个网络节点进行输入或输出。
输入或输出的信息、信号等可以保存在特定的场所(例如内存),也可以通过管理表进行管理。输入或输出的信息、信号等可以被覆盖、更新或补充。输出的信息、信号等可以被删除。输入的信息、信号等可以被发往其它装置。
信息的通知并不限于本说明书中说明的方式/实施方式,也可以通过其它方法进行。例如,信息的通知可以通过物理层信令(例如,下行链路控制信息(DCI,DownlinkControlInformation)、上行链路控制信息(UCI,UplinkControlInformation))、上层信令(例如,无线资源控制(RRC,RadioResourceControl)信令、广播信息(主信息块(MIB,MasterInformationBlock)、系统信息块(SIB,SystemInformationBlock)等)、媒体存取控制(MAC,MediumAccessControl)信令)、其它信号或者它们的组合来实施。
另外,物理层信令也可以称为L1/L2(第1层/第2层)控制信息(L1/L2控制信号)、L1控制信息(L1控制信号)等。此外,RRC信令也可以称为RRC消息,例如可以为RRC连接建立(RRC Connection Setup)消息、RRC连接重配置(RRC Connection Reconfiguration)消息等。此外,MAC信令例如可以通过MAC控制单元(MAC CE(Control Element))来通知。
此外,规定信息的通知(例如,“ACK”、“NACK”的通知)并不限于显式地进行,也可以隐式地(例如,通过不进行该规定信息的通知,或者通过其它信息的通知)进行。
关于判定,可以通过由1比特表示的值(0或1)来进行,也可以通过由真(true)或假(false)表示的真假值(布尔值)来进行,还可以通过数值的比较(例如与规定值的比较)来进行。
软件无论被称为软件、固件、中间件、微代码、硬件描述语言,还是以 其它名称来称呼,都应宽泛地解释为是指命令、命令集、代码、代码段、程序代码、程序、子程序、软件模块、应用程序、软件应用程序、软件包、例程、子例程、对象、可执行文件、执行线程、步骤、功能等。
此外,软件、命令、信息等可以经由传输介质被发送或接收。例如,当使用有线技术(同轴电缆、光缆、双绞线、数字用户线路(DSL,DigitalSubscriberLine)等)和/或无线技术(红外线、微波等)从网站、服务器、或其它远程资源发送软件时,这些有线技术和/或无线技术包括在传输介质的定义内。
本说明书中使用的“系统”和“网络”这样的用语可以互换使用。
在本说明书中,“基站(BS,BaseStation)”、“无线基站”、“eNB”、“gNB”、“小区”、“扇区”、“小区组”、“载波”以及“分量载波”这样的用语可以互换使用。基站有时也以固定台(fixedstation)、NodeB、eNodeB(eNB)、接入点(accesspoint)、发送点、接收点、毫微微小区、小小区等用语来称呼。
基站可以容纳一个或多个(例如三个)小区(也称为扇区)。当基站容纳多个小区时,基站的整个覆盖区域可以划分为多个更小的区域,每个更小的区域也可以通过基站子系统(例如,室内用小型基站(射频拉远头(RRH,RemoteRadioHead)))来提供通信服务。“小区”或“扇区”这样的用语是指在该覆盖中进行通信服务的基站和/或基站子系统的覆盖区域的一部分或整体。
在本说明书中，“移动台（MS，Mobile Station）”、“用户终端（user terminal）”、“用户装置（UE，User Equipment）”以及“终端”这样的用语可以互换使用。
移动台有时也被本领域技术人员以用户台、移动单元、用户单元、无线单元、远程单元、移动设备、无线设备、无线通信设备、远程设备、移动用户台、接入终端、移动终端、无线终端、远程终端、手持机、用户代理、移动客户端、客户端或者若干其它适当的用语来称呼。
此外，本说明书中的无线基站也可以用用户终端来替换。例如，对于将无线基站和用户终端间的通信替换为多个用户终端间（D2D，Device-to-Device）的通信的结构，也可以应用本发明的各方式/实施方式。此时，可以将上述的子节点11、12、13、14所具有的功能当作用户终端20所具有的功能。此外，“上行”和“下行”等文字也可以替换为“侧”。例如，上行信道也可以替换为侧信道。
同样,本说明书中的用户终端也可以用无线基站来替换。此时,可以将上述的用户终端20所具有的功能当作子节点11、12、13、14所具有的功能。
在本说明书中,设为通过基站进行的特定动作根据情况有时也通过其上级节点(uppernode)来进行。显然,在具有基站的由一个或多个网络节点(networknodes)构成的网络中,为了与终端间的通信而进行的各种各样的动作可以通过基站、除基站之外的一个以上的网络节点(可以考虑例如移动管理实体(MME,MobilityManagementEntity)、服务网关(S-GW,Serving-Gateway)等,但不限于此)、或者它们的组合来进行。
本说明书中说明的各方式/实施方式可以单独使用,也可以组合使用,还可以在执行过程中进行切换来使用。此外,本说明书中说明的各方式/实施方式的处理步骤、序列、流程图等只要没有矛盾,就可以更换顺序。例如,关于本说明书中说明的方法,以示例性的顺序给出了各种各样的步骤单元,而并不限定于给出的特定顺序。
本说明书中说明的各方式/实施方式可以应用于利用长期演进(LTE,LongTermEvolution)、高级长期演进(LTE-A,LTE-Advanced)、超越长期演进(LTE-B,LTE-Beyond)、超级第3代移动通信系统(SUPER 3G)、高级国际移动通信(IMT-Advanced)、第4代移动通信系统(4G,4th generation mobile communication system)、第5代移动通信系统(5G,5th generation mobile communication system)、未来无线接入(FRA,Future Radio Access)、新无线接入技术(New-RAT,Radio Access Technology)、新无线(NR,New Radio)、新无线接入(NX,New radio access)、新一代无线接入(FX,Future generation radio access)、全球移动通信系统(GSM(注册商标),Global System for Mobile communications)、码分多址接入2000(CDMA2000)、超级移动宽带(UMB,Ultra Mobile Broadband)、IEEE 802.11(Wi-Fi(注册商标))、IEEE 802.16(WiMAX(注册商标))、IEEE 802.20、超宽带(UWB,Ultra-WideBand)、蓝牙(Bluetooth(注册商标))、其它适当的无线通信方法的系统和/或基于它们而扩展的下一代系统。
本说明书中使用的“根据”这样的记载,只要未在其它段落中明确记载,则并不意味着“仅根据”。换言之,“根据”这样的记载是指“仅根据”和“至少根据”这两者。
本说明书中使用的对使用“第一”、“第二”等名称的单元的任何参照,均非全面限定这些单元的数量或顺序。这些名称可以作为区别两个以上单元的便利方法而在本说明书中使用。因此,第一单元和第二单元的参照并不意味着仅可采用两个单元或者第一单元必须以若干形式占先于第二单元。
本说明书中使用的“判断(确定)(determining)”这样的用语有时包含多种多样的动作。例如,关于“判断(确定)”,可以将计算(calculating)、推算(computing)、处理(processing)、推导(deriving)、调查(investigating)、搜索(lookingup)(例如表、数据库、或其它数据结构中的搜索)、确认(ascertaining)等视为是进行“判断(确定)”。此外,关于“判断(确定)”,也可以将接收(receiving)(例如接收信息)、发送(transmitting)(例如发送信息)、输入(input)、输出(output)、存取(accessing)(例如存取内存中的数据)等视为是进行“判断(确定)”。此外,关于“判断(确定)”,还可以将解决(resolving)、选择(selecting)、选定(choosing)、建立(establishing)、比较(comparing)等视为是进行“判断(确定)”。也就是说,关于“判断(确定)”,可以将若干动作视为是进行“判断(确定)”。
本说明书中使用的“连接的(connected)”、“结合的(coupled)”这样的用语或者它们的任何变形是指两个或两个以上单元间的直接的或间接的任何连接或结合,可以包括以下情况:在相互“连接”或“结合”的两个单元间,存在一个或一个以上的中间单元。单元间的结合或连接可以是物理上的,也可以是逻辑上的,或者还可以是两者的组合。例如,“连接”也可以替换为“接入”。在本说明书中使用时,可以认为两个单元是通过使用一个或一个以上的电线、线缆、和/或印刷电气连接,以及作为若干非限定性且非穷尽性的示例,通过使用具有射频区域、微波区域、和/或光(可见光及不可见光这两者)区域的波长的电磁能等,被相互“连接”或“结合”。
在本说明书或权利要求书中使用“包括(including)”、“包含(comprising)”、以及它们的变形时,这些用语与用语“具备”同样是开放式的。进一步地,在本说明书或权利要求书中使用的用语“或(or)”并非是异或。
以上对本发明进行了详细说明，但对于本领域技术人员而言，显然，本发明并非限定于本说明书中说明的实施方式。本发明在不脱离由权利要求书的记载所确定的本发明的宗旨和范围的前提下，可以作为修改和变更方式来实施。因此，本说明书的记载是以示例说明为目的，对本发明而言并非具有任何限制性的意义。

Claims (28)

  1. 一种基于神经网络模型的通信系统配置方法,所述通信系统包括至少一个主节点和与所述主节点通信连接的多个子节点,并且所述多个子节点的每一个中配置有子节点神经网络模型,所述通信系统配置方法包括:
    获取所述多个子节点的特征信息;以及
    基于获取的所述特征信息,动态配置所述子节点神经网络模型。
  2. 如权利要求1所述的通信系统配置方法,其中,所述获取所述多个子节点的特征信息包括:
    接收从所述多个子节点中的一个子节点传输的所述特征信息。
  3. 如权利要求1所述的通信系统配置方法,其中,所述获取所述多个子节点的特征信息包括:
    接收从所述多个子节点中的一个子节点传输的初始信息;以及
    基于所述初始信息,预测所述一个子节点的所述特征信息。
  4. 如权利要求2或3所述的通信系统配置方法,其中,所述基于获取的所述特征信息,动态配置所述子节点神经网络模型包括:
    基于所述特征信息,从多个预定神经网络模型选择一个神经网络模型;以及
    利用选择的所述一个神经网络模型配置所述一个子节点的所述子节点神经网络模型。
  5. 如权利要求2或3所述的通信系统配置方法,其中,所述基于获取的所述特征信息,动态配置所述子节点神经网络模型包括:
    基于所述特征信息,从所述多个子节点选择与所述一个子节点匹配的匹配子节点;
    从所述匹配子节点接收所述匹配子节点的子节点神经网络模型;以及
    利用所述匹配子节点的子节点神经网络模型配置所述一个子节点的所述子节点神经网络模型。
  6. 如权利要求1到5的任一项所述的通信系统配置方法,其中,所述一个子节点是新加入所述通信系统的子节点。
  7. 如权利要求1所述的通信系统配置方法,其中,所述获取所述多个子节点的特征信息包括:
    接收从所述多个子节点中的每一个子节点传输的所述特征信息。
  8. 如权利要求7所述的通信系统配置方法,其中,所述基于获取的所述特征信息,动态配置所述子节点神经网络模型包括:
    基于所述特征信息,将所述多个子节点分为多个类别;
    利用所述特征信息,针对所述多个类别执行所述子节点神经网络模型的训练,以获取更新的子节点神经网络模型;以及
    利用所述子节点神经网络模型更新所述多个子节点的所述子节点神经网络模型。
  9. 如权利要求7所述的通信系统配置方法,其中,所述基于获取的所述特征信息,动态配置所述子节点神经网络模型包括:
    基于所述特征信息,将所述多个子节点分为多个类别;
    按照多个类别,将属于多个类别中同一类别的子节点的所述特征信息通知给所述同一类别的子节点;以及
    所述同一类别的子节点利用所述同一类别的子节点的所述特征信息执行训练,更新所述同一类别的子节点的所述子节点神经网络模型。
  10. 如权利要求1到9的任一项所述的通信系统配置方法,其中,
    所述特征信息包括所述子节点的高度、天线配置、覆盖区域大小、业务类型、业务量、用户分布、环境信息、以及历史配置信息。
  11. 如权利要求1到10的任一项所述的通信系统配置方法,其中,所述配置所述子节点神经网络模型包括以下之一:
    建立多个神经网络模型的索引，利用所述索引指示所述子节点神经网络模型为所述多个神经网络模型之一；
    利用神经网络模型的模型权重指示所述子节点神经网络模型;
    利用神经网络模型的模型权重变化量指示所述子节点神经网络模型;以及
    利用神经网络模型的语义表示指示所述子节点神经网络模型。
  12. 如权利要求7到11的任一项所述的通信系统配置方法,其中,所述特征信息为所述子节点对应用户设备的历史最优波束集,并且
    其中,所述历史最优波束集包括:
    连续多个时间点的多个最优波束与最近时间点的最优波束的差别序列;或者
    连续多个时间点中两个相邻时间点的最优波束之间的差别序列。
  13. 如权利要求12所述的通信系统配置方法,其中,利用所述特征信息更新所述子节点神经网络模型包括:
    利用所述历史最优波束集中每个历史最优波束的出现次数,确定每个历史最优波束的权重;以及
    根据每个历史最优波束的权重以及所述历史最优波束集,构造加权损失函数执行训练,以更新所述子节点神经网络模型。
  14. 如权利要求12或13所述的通信系统配置方法,其中,利用所述特征信息更新所述子节点神经网络模型包括:
    在所述子节点神经网络模型中配置注意力层,利用包括注意力层的所述子节点神经网络模型执行训练,以更新所述子节点神经网络模型。
  15. 一种基于神经网络模型的通信系统,包括:
    至少一个主节点;
    多个子节点,与所述主节点通信连接,并且所述多个子节点的每一个中配置有子节点神经网络模型,
    其中,所述至少一个主节点获取所述多个子节点的特征信息;以及
    基于获取的所述特征信息,动态配置所述子节点神经网络模型。
  16. 如权利要求15所述的通信系统,其中,所述至少一个主节点接收从所述多个子节点中的一个子节点传输的所述特征信息。
  17. 如权利要求15所述的通信系统,其中,所述至少一个主节点接收从所述多个子节点中的一个子节点传输的初始信息;以及
    基于所述初始信息,预测所述一个子节点的所述特征信息。
  18. 如权利要求16或17所述的通信系统,其中,所述至少一个主节点基于所述特征信息,从多个预定神经网络模型选择一个神经网络模型;以及
    利用选择的所述一个神经网络模型配置所述一个子节点的所述子节点神经网络模型。
  19. 如权利要求16或17所述的通信系统,其中,所述至少一个主节点基于所述特征信息,从所述多个子节点选择与所述一个子节点匹配的匹配子节点;
    从所述匹配子节点接收所述匹配子节点的子节点神经网络模型;以及
    利用所述匹配子节点的子节点神经网络模型配置所述一个子节点的所述子节点神经网络模型。
  20. 如权利要求15到19的任一项所述的通信系统,其中,所述一个子节点是新加入所述通信系统的子节点。
  21. 如权利要求15所述的通信系统,其中,所述至少一个主节点接收从所述多个子节点中的每一个子节点传输的所述特征信息。
  22. 如权利要求21所述的通信系统,其中,所述至少一个主节点基于所述特征信息,将所述多个子节点分为多个类别;
    利用所述特征信息,针对所述多个类别执行所述子节点神经网络模型的训练,以获取更新的子节点神经网络模型;以及
    利用所述子节点神经网络模型更新所述多个子节点的所述子节点神经网络模型。
  23. 如权利要求21所述的通信系统,其中,所述至少一个主节点基于所述特征信息,将所述多个子节点分为多个类别;
    按照多个类别,将属于多个类别中同一类别的子节点的所述特征信息通知给所述同一类别的子节点;以及
    所述同一类别的子节点利用所述同一类别的子节点的所述特征信息执行训练,更新所述同一类别的子节点的所述子节点神经网络模型。
  24. 如权利要求15到23的任一项所述的通信系统,其中,
    所述特征信息包括所述子节点的高度、天线配置、覆盖区域大小、业务类型、业务量、用户分布、环境信息、以及历史配置信息。
  25. 如权利要求15到24的任一项所述的通信系统,其中,所述配置所述子节点神经网络模型包括以下之一:
    建立多个神经网络模型的索引,利用所述索引指示所述子节点神经网络模型为所述多个神经网络模型之一;
    利用神经网络模型的模型权重指示所述子节点神经网络模型;
    利用神经网络模型的模型权重变化量指示所述子节点神经网络模型;以及
    利用神经网络模型的语义表示指示所述子节点神经网络模型。
  26. 如权利要求21到25的任一项所述的通信系统,其中,所述特征信息为所述子节点对应用户设备的历史最优波束集,并且
    其中,所述历史最优波束集包括:
    连续多个时间点的多个最优波束与最近时间点的最优波束的差别序列;或者
    连续多个时间点中两个相邻时间点的最优波束之间的差别序列。
  27. 如权利要求26所述的通信系统，其中，所述至少一个主节点或所述子节点利用所述历史最优波束集中每个历史最优波束的出现次数，确定每个历史最优波束的权重；以及
    根据每个历史最优波束的权重以及所述历史最优波束集,构造加权损失函数执行训练,以更新所述子节点神经网络模型。
  28. 如权利要求26或27所述的通信系统,其中,所述至少一个主节点或所述子节点在所述子节点神经网络模型中配置注意力层,利用包括注意力层的所述子节点神经网络模型执行训练,以更新所述子节点神经网络模型。
PCT/CN2020/127846 2020-01-21 2020-11-10 基于神经网络模型的通信系统及其配置方法 WO2021147469A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US17/759,168 US20230045011A1 (en) 2020-01-21 2020-11-10 Communication system based on neural network model, and configuration method therefor
CN202080093993.5A CN115004649A (zh) 2020-01-21 2020-11-10 基于神经网络模型的通信系统及其配置方法

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010072490.1 2020-01-21
CN202010072490.1A CN113225197A (zh) 2020-01-21 2020-01-21 基于神经网络模型的通信系统及其配置方法

Publications (1)

Publication Number Publication Date
WO2021147469A1 true WO2021147469A1 (zh) 2021-07-29

Family

ID=76992826

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/127846 WO2021147469A1 (zh) 2020-01-21 2020-11-10 基于神经网络模型的通信系统及其配置方法

Country Status (3)

Country Link
US (1) US20230045011A1 (zh)
CN (2) CN113225197A (zh)
WO (1) WO2021147469A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114153593A (zh) * 2021-10-29 2022-03-08 北京邮电大学 业务处理的方法、装置、电子设备及介质
CN116419267A (zh) * 2021-12-31 2023-07-11 维沃移动通信有限公司 通信模型配置方法、装置和通信设备
CN116963187A (zh) * 2022-04-11 2023-10-27 华为技术有限公司 一种通信方法及相关装置

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108259194A (zh) * 2016-12-28 2018-07-06 普天信息技术有限公司 网络故障预警方法及装置
CN109086789A (zh) * 2018-06-08 2018-12-25 四川斐讯信息技术有限公司 一种图像识别方法及系统
WO2019166989A1 (en) * 2018-02-28 2019-09-06 Sophos Limited Methods and apparatus for identifying the shared importance of multiple nodes within a machine learning model for multiple tasks
CN110676845A (zh) * 2019-10-10 2020-01-10 成都华茂能联科技有限公司 一种负荷调节方法、装置、系统及存储介质

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019086867A1 (en) * 2017-10-31 2019-05-09 Babylon Partners Limited A computer implemented determination method and system
US10505616B1 (en) * 2018-06-01 2019-12-10 Samsung Electronics Co., Ltd. Method and apparatus for machine learning based wide beam optimization in cellular network

Also Published As

Publication number Publication date
US20230045011A1 (en) 2023-02-09
CN115004649A (zh) 2022-09-02
CN113225197A (zh) 2021-08-06

Similar Documents

Publication Publication Date Title
US10952067B2 (en) Terminal and base station
WO2021147469A1 (zh) 基于神经网络模型的通信系统及其配置方法
Sun et al. The SMART handoff policy for millimeter wave heterogeneous cellular networks
US11324018B2 (en) Terminal and a base station
CN116940951A (zh) 通信系统中用于支持机器学习或人工智能技术的方法和装置
JP2017523618A (ja) 無線通信システムにおける電子機器及びモビリティ測定を行う方法
US11595883B2 (en) Wireless communication methods and corresponding base stations and user terminals
WO2019062307A1 (zh) 小区选择或接入方法、用户终端、维护方法和基站
TW201931912A (zh) 用於無線通訊的電子裝置和方法以及電腦可讀儲存媒體
WO2020188829A1 (ja) ユーザ装置及び通信方法
CN114143802A (zh) 数据传输方法和装置
JP2019176428A (ja) 基地局、及びユーザ装置
CN113383572B (zh) 用户装置以及测量方法
JP2020156074A (ja) 基地局によって実行される方法及びその基地局
WO2020029182A1 (zh) 用于传输参考信号的方法及设备
Panitsas et al. Predictive handover strategy in 6g and beyond: A deep and transfer learning approach
WO2019213934A1 (zh) 用于传输信号的方法及相应的用户终端、基站
WO2023131141A1 (zh) 人工智能模型的管理和分发
WO2018201928A1 (zh) 数据检测方法和用户设备
WO2022140915A1 (zh) 终端以及基站
WO2022140914A1 (zh) 波束选择方法以及网络元件
WO2022236634A1 (zh) 客户前置装置
US20240121773A1 (en) User equipment and base station operating based on communication model, and operating method thereof
WO2022236636A1 (zh) 高空平台站-地面通信系统中的电子设备
WO2023093777A1 (zh) 一种通信方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20915122

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20915122

Country of ref document: EP

Kind code of ref document: A1