WO2022184125A1 - Load balancing method, apparatus, and readable storage medium - Google Patents

Load balancing method, apparatus, and readable storage medium

Info

Publication number
WO2022184125A1
Authority
WO
WIPO (PCT)
Prior art keywords
network device
load value
network
mlb
predicted load
Application number
PCT/CN2022/078983
Other languages
English (en)
French (fr)
Inventor
曾宇
耿婷婷
曾清海
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to EP22762587.8A (published as EP4290923A1)
Publication of WO2022184125A1
Priority to US18/459,911 (published as US20230413116A1)

Classifications

    • H04W28/08 — Load balancing or load distribution
    • H04W28/0875 — Load balancing or load distribution to or through Device to Device [D2D] links, e.g. direct-mode links
    • H04W28/0925 — Load balancing or load distribution; management thereof using policies
    • H04W28/0942 — Management thereof using policies based on measured or predicted load of entities or links
    • H04W36/22 — Hand-off or reselection arrangements; performing reselection for handling the traffic
    • H04W8/22 — Network data management; processing or transfer of terminal data, e.g. status or physical capabilities
    • H04L41/0836 — Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration to enhance reliability, e.g. reduce downtime
    • G06N3/02, G06N3/08 — Neural networks; learning methods

Definitions

  • the present application relates to the field of wireless communication technologies, and in particular, to a load balancing method, apparatus, readable storage medium and system.
  • Mobility load balancing (MLB) is an important function of automatic network optimization.
  • The coverage areas of the cells in a network differ, and some cells carry more load than their neighboring cells, which leads to load imbalance between cells and between base stations. In that case, the resources of the lightly loaded cells are wasted and the user experience is also affected, so the cell load needs to be adjusted through MLB.
  • To this end, base stations exchange information about their resource usage and, based on it, adjust their parameter configurations, thereby realizing MLB.
  • However, the performance of existing MLB methods is poor, which causes a loss of network performance, so a better MLB method is urgently needed.
  • Embodiments of the present invention provide a communication method and device that formulate an MLB policy containing one or more items of time information or reliability information. With time information in the MLB policy, the parameters of network devices can be controlled dynamically and the parameter configuration can adapt to complex network load conditions, which improves the performance of the MLB policy; with reliability information, the receiving device can judge whether to apply the MLB policy according to its credibility, which likewise improves the performance of the MLB policy.
  • According to a first aspect, a communication method is provided. The execution body of the method may be a first network device, an operation and management entity that can configure and manage network devices, or a component (a chip, a circuit, or the like) configured in the first network device or in the operation and management entity.
  • The method includes: obtaining first network state information of the first network device; obtaining information indicating a network state of a second network device; obtaining, based on the first network state information and the information indicating the network state of the second network device, a first mobility load balancing (MLB) policy to take effect; and sending the first MLB policy to the second network device.
  • The first MLB policy includes at least one of the following: a configuration parameter used to adjust the load of the first network device and time information corresponding to that configuration parameter, a configuration parameter used to adjust the load of the second network device and time information corresponding to that configuration parameter, or reliability information of the first MLB policy. (A structural sketch of such a policy is given below.)
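As an illustration of what the first MLB policy can carry, the following Python sketch models the policy as a small data structure holding per-device sets of configuration parameters, the time information for each set, and an optional reliability value. All field names (for example `handover_offset_db`) are assumptions made for readability; they are not defined by the application.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TimedConfig:
    """One set of load-adjustment parameters plus the time information it applies to."""
    parameters: dict            # e.g. handover/reselection thresholds (illustrative)
    start_time: float           # time information: when the parameters take effect
    duration: float             # time information: how long they remain valid

@dataclass
class MLBPolicy:
    """Hypothetical container mirroring the content of the first MLB policy."""
    first_device_configs: list[TimedConfig] = field(default_factory=list)
    second_device_configs: list[TimedConfig] = field(default_factory=list)
    reliability: Optional[float] = None   # reliability/credibility information, e.g. in [0, 1]

# Example: two consecutive parameter sets for the second network device, plus a
# reliability value the receiver may use to decide whether to apply the policy.
policy = MLBPolicy(
    second_device_configs=[
        TimedConfig({"handover_offset_db": 2.0}, start_time=0.0, duration=600.0),
        TimedConfig({"handover_offset_db": 4.0}, start_time=600.0, duration=600.0),
    ],
    reliability=0.85,
)
```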
  • The execution body of the above method may be a first network device, such as a base station, or an operation and management entity, such as an operation, administration and maintenance (OAM) network management device.
  • The first network state information of the first network device may also be information indicating the network state of the first network device. The information indicating the network state of the first network device includes at least one of the following: current network state information of the first network device, network state information predicted by the first network device, or a tenth predicted load value of the first network device.
  • In the following, the case in which the first network state information is such information indicating the network state of the first network device is not repeated.
  • the above implementation method may further include sending the first MLB policy to the first network device.
  • the above implementation method may include interaction with multiple second network devices.
  • The execution body of the above method may determine, according to the specific design and implementation, which second network devices are involved in carrying out the method.
  • The configuration parameter used to adjust the load of the first network device and its corresponding time information may comprise multiple sets of configuration parameters, each with its own time information; likewise, the configuration parameter used to adjust the load of the second network device and its corresponding time information may comprise multiple sets. This allows the parameter configuration of the network devices to be adjusted more dynamically and to better adapt to complex network load conditions.
  • In this way, the parameter configuration of a network device can be adjusted dynamically, adapting to complex network load conditions and improving the performance of the MLB policy; alternatively, indicating the reliability of the MLB policy within the policy itself allows the receiver to judge, according to that credibility, whether to apply the policy, which also improves the performance of the MLB policy.
  • In a first possible implementation of the first aspect, obtaining the information indicating the network state of the second network device includes: sending first inquiry information to the second network device, where the first inquiry information is used to request the information indicating the network state; and receiving first confirmation information from the second network device, where the first confirmation information responds to the first inquiry information and carries the information indicating the network state of the second network device.
  • The information indicating the network state of the second network device includes at least one of the following: current network state information of the second network device, network state information predicted by the second network device, or a third predicted load value of the second network device.
  • The first possible implementation of the first aspect may further include: sending third inquiry information to the first network device, where the third inquiry information is used to request the information indicating the network state of the first network device; and receiving third confirmation information from the first network device, where the third confirmation information responds to the third inquiry information and carries the information indicating the network state of the first network device.
  • The information indicating the network state of the first network device includes at least one of the following: current network state information of the first network device, network state information predicted by the first network device, or the tenth predicted load value of the first network device.
  • In this way, the execution body can obtain the input information needed to determine the first MLB policy, which provides the necessary conditions for determining it. (A sketch of such an inquiry/confirmation exchange follows.)
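As an illustration of the inquiry/confirmation exchange described above, the following sketch models the first inquiry information and the first confirmation information as simple message dictionaries. The message names and fields are assumptions made for readability, not names defined by the application.

```python
def build_first_inquiry() -> dict:
    # First inquiry information: asks the second network device for information
    # indicating its network state (field names are illustrative only).
    return {"msg": "first_inquiry",
            "requested": ["current_state", "predicted_state", "predicted_load"]}

def handle_first_inquiry(inquiry: dict, device_state: dict) -> dict:
    # First confirmation information: responds to the inquiry and carries at least
    # one of the current state, the predicted state, or a predicted load value.
    return {
        "msg": "first_confirmation",
        "in_response_to": inquiry["msg"],
        "current_state": device_state.get("current_state"),
        "predicted_state": device_state.get("predicted_state"),
        "predicted_load": device_state.get("predicted_load"),
    }

# The first network device (or OAM) sends the inquiry; the second device answers.
second_device_state = {"current_state": {"prb_usage": 0.72}, "predicted_load": 0.80}
confirmation = handle_first_inquiry(build_first_inquiry(), second_device_state)
print(confirmation)
```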
  • In a second possible implementation of the first aspect, obtaining the first MLB policy to take effect based on the first network state information of the first network device and the information indicating the network state of the second network device includes: obtaining, based on that information, at least one of a first predicted load value of the first network device at a first time under the current MLB policy or a second predicted load value of the second network device at the first time under the current MLB policy; and obtaining the to-be-validated first MLB policy based on at least one of the first predicted load value or the second predicted load value.
  • In a third possible implementation of the first aspect, obtaining, based on the first network state information of the first network device and the information indicating the network state of the second network device, at least one of the first predicted load value of the first network device at the first time or the second predicted load value of the second network device at the first time under the current MLB policy includes performing, through a first neural network, at least one of the following: predicting the load value of the first network device under the current MLB policy to obtain the first predicted load value at the first time; or predicting the load value of the second network device under the current MLB policy to obtain the second predicted load value at the first time.
  • The input of the first neural network includes the first network state information of the first network device, the information indicating the network state of the second network device, and the current MLB policy; the output of the first neural network includes at least one of the first predicted load value or the second predicted load value. (A sketch of such a network is given below.)
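A minimal PyTorch sketch of such a first neural network, under assumed input dimensions: the input concatenates the first device's network state information, the information indicating the second device's network state, and an encoding of the current MLB policy, and the two outputs stand for the first and second predicted load values at the first time. The layer sizes, the policy encoding, and all tensor shapes are illustrative assumptions, not something specified in the application.

```python
import torch
import torch.nn as nn

class FirstNeuralNetwork(nn.Module):
    """Predicts load values under the current MLB policy (illustrative dimensions)."""
    def __init__(self, state_dim: int = 8, policy_dim: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim + policy_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 2),   # [first predicted load value, second predicted load value]
        )

    def forward(self, first_state, second_state, current_policy):
        # Concatenate the two devices' state information with the current MLB policy.
        x = torch.cat([first_state, second_state, current_policy], dim=-1)
        return self.net(x)

# Inputs: first network state information, information indicating the second
# device's network state, and the current MLB policy (random tensors here).
first_nn = FirstNeuralNetwork()
pred = first_nn(torch.rand(1, 8), torch.rand(1, 8), torch.rand(1, 4))
first_predicted_load, second_predicted_load = pred[0]
```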
  • In a fourth possible implementation of the first aspect, the method includes: obtaining, based on the first network state information of the first network device and the information indicating the network state of the second network device, the first predicted load value of the first network device at the first time under the current MLB policy.
  • Obtaining the first MLB policy to take effect based on at least one of the first predicted load value or the second predicted load value includes obtaining the to-be-validated first MLB policy through a second neural network, where the input of the second neural network includes the first predicted load value together with either the information indicating the network state of the second network device or the second predicted load value, and the output of the second neural network includes the to-be-validated first MLB policy.
  • In this way, the load values are predicted and used as the basis for obtaining the MLB policy, so the MLB policy can adapt to a changing network load and its performance can be improved. (A sketch of such a second neural network follows.)
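In the same spirit, a sketch of the second neural network: it maps the first predicted load value, together with the second predicted load value (one of the two admissible inputs named above), to an encoded to-be-validated MLB policy. The policy encoding and the dimensions are again assumptions for illustration only.

```python
import torch
import torch.nn as nn

class SecondNeuralNetwork(nn.Module):
    """Maps predicted load values to an encoded to-be-validated MLB policy (illustrative)."""
    def __init__(self, policy_dim: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, 32),
            nn.ReLU(),
            nn.Linear(32, policy_dim),  # encoding of the first MLB policy to take effect
        )

    def forward(self, first_predicted_load, second_predicted_load):
        # Stack the two predicted load values into one input vector.
        x = torch.stack([first_predicted_load, second_predicted_load], dim=-1)
        return self.net(x)

second_nn = SecondNeuralNetwork()
# Using predicted load values such as those produced by the first-network sketch above.
encoded_policy = second_nn(torch.tensor([0.9]), torch.tensor([0.3]))
```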
  • A fifth possible implementation of the first aspect includes: obtaining, through the first neural network, at least one of a fourth predicted load value of the first network device at the first time under the first MLB policy, or a fifth predicted load value of the second network device at the first time under the first MLB policy; and sending at least one of the fourth predicted load value or the fifth predicted load value to the second network device.
  • The fifth possible implementation of the first aspect may further include: sending at least one of the fourth predicted load value or the fifth predicted load value to the first network device.
  • In this way, the first network device and the second network device can obtain at least one of the fourth predicted load value or the fifth predicted load value, which provides information for their subsequent operation; for example, an update of the MLB policy may be triggered according to the fourth predicted load value and/or the fifth predicted load value.
  • A sixth possible implementation of the first aspect includes: determining that the second network device has successfully modified, based on the first MLB policy, the parameter used to adjust the load of the second network device, and modifying, based on the first MLB policy, the parameter used to adjust the load of the first network device; obtaining a first actual load value of the first network device at the first time under the first MLB policy and obtaining, through the first neural network, the fourth predicted load value of the first network device at the first time under the first MLB policy, and optimizing at least one of the first neural network or the second neural network based on the fourth predicted load value and the first actual load value; and/or obtaining a second actual load value of the second network device at the first time under the first MLB policy and obtaining, through the first neural network, the fifth predicted load value of the second network device at the first time under the first MLB policy, and optimizing at least one of the first neural network or the second neural network based on the fifth predicted load value and the second actual load value.
  • a seventh possible implementation manner of the first aspect includes: receiving, from the second network device, information indicating a second actual load value of the second network device.
  • a seventh possible implementation manner of the first aspect further includes: receiving information from the first network device indicating the first actual load value of the first network device.
  • An eighth possible implementation of the first aspect includes: receiving feedback information from the second network device, where the feedback information indicates the reason why the second network device failed to modify, based on the first MLB policy, the parameter used to adjust the load of the second network device; and optimizing the first neural network and/or the second neural network based on the feedback information.
  • The eighth possible implementation of the first aspect may further include: receiving feedback information from the first network device, where the feedback information indicates the reason why the first network device failed to modify, based on the first MLB policy, the parameter used to adjust the load of the first network device; and optimizing the first neural network and/or the second neural network based on the feedback information.
  • In this way, the first neural network and/or the second neural network can be optimized, improving the accuracy of the load prediction and of the MLB policy. (A training-step sketch is given below.)
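One way to read the optimization step above is as an ordinary supervised update: once an actual load value at the first time is observed, it is compared with the corresponding predicted load value and the error is back-propagated through the first neural network (the second neural network could be treated analogously). The sketch below assumes the `FirstNeuralNetwork` class from the earlier example and a mean-squared-error loss; both choices are illustrative.

```python
import torch
import torch.nn as nn

def optimize_on_feedback(model, inputs, actual_load, lr: float = 1e-3):
    """One gradient step on (predicted - actual)**2 (illustrative optimisation).

    A persistent optimizer would normally be reused across calls; one is created
    here only to keep the sketch self-contained.
    """
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    predicted = model(*inputs)[..., 0]          # predicted load value under the applied policy
    loss = nn.functional.mse_loss(predicted, actual_load)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# actual_load: the first actual load value reported for the first time instant, e.g.
# loss = optimize_on_feedback(first_nn, (first_state, second_state, policy_enc),
#                             torch.tensor([0.65]))
```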
  • A ninth possible implementation of the first aspect includes: determining, based on the accuracy of at least one of a sixth predicted load value or a seventh predicted load value and the accuracy of at least one of an eighth predicted load value or a ninth predicted load value, that at least one of the sixth predicted load value or the seventh predicted load value is better. Here, the sixth predicted load value is the load value of the first network device at a second time under the current MLB policy obtained by the first network device; the seventh predicted load value is the load value of the second network device at the second time under the current MLB policy obtained by the first network device; the eighth predicted load value is the load value of the first network device at the second time under the current MLB policy obtained by the second network device; and the ninth predicted load value is the load value of the second network device at the second time under the current MLB policy obtained by the second network device.
  • A tenth possible implementation of the first aspect includes: obtaining the accuracy of at least one of the sixth predicted load value or the seventh predicted load value and the accuracy of at least one of the eighth predicted load value or the ninth predicted load value. Obtaining the accuracy of the sixth predicted load value includes: obtaining the sixth predicted load value based on the first network state information of the first network device and the information indicating the network state of the second network device; obtaining a third actual load value of the first network device under the current MLB policy, where the third actual load value is the actual load value of the first network device at the second time; and obtaining the accuracy of the sixth predicted load value based on the sixth predicted load value and the third actual load value. Obtaining the accuracy of the eighth predicted load value includes: receiving the eighth predicted load value, which is the load value of the first network device at the second time predicted by the second network device under the current MLB policy; obtaining the third actual load value of the first network device under the current MLB policy; and obtaining the accuracy of the eighth predicted load value based on the eighth predicted load value and the third actual load value.
  • An eleventh possible implementation of the first aspect includes: sending, to the second network device, at least one of the sixth predicted load value or the seventh predicted load value together with the third actual load value; or sending the seventh predicted load value to the second network device.
  • A twelfth possible implementation of the first aspect includes: receiving a request for neural network optimization information from the second network device; and sending the neural network optimization information to the second network device. The neural network optimization information includes at least one of the following: information related to the parameters of the first neural network and/or the second neural network; input information of the first neural network and/or the second neural network; or the result of analyzing the reasons for the difference between the actual load and the predicted load.
  • In this way, the device with the higher prediction accuracy is identified, and the neural networks of the other devices are then optimized by sharing the information of the better-performing neural network, so that more accurate prediction results are obtained. (A sketch of the accuracy comparison follows.)
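The accuracy comparison driving this sharing can be made concrete with a small helper: two devices' predictions of the same load value for the same time are compared against the actual load value, and the device with the smaller error is treated as the more accurate predictor whose neural-network information may then be shared. The absolute-error metric is an assumption; the application does not mandate a particular accuracy measure.

```python
def prediction_error(predicted_load: float, actual_load: float) -> float:
    """Accuracy proxy: absolute prediction error (smaller is better)."""
    return abs(predicted_load - actual_load)

def select_better_predictor(sixth_predicted: float, eighth_predicted: float,
                            third_actual: float) -> str:
    # The sixth and eighth predicted load values are the same quantity (load of the
    # first network device at the second time) predicted by two different devices.
    err_sixth = prediction_error(sixth_predicted, third_actual)
    err_eighth = prediction_error(eighth_predicted, third_actual)
    return "first_device_prediction" if err_sixth <= err_eighth else "second_device_prediction"

# Example: the first device predicted 0.70, the second device predicted 0.55,
# and the measured (third actual) load value turned out to be 0.68.
print(select_better_predictor(0.70, 0.55, 0.68))   # -> first_device_prediction
```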
  • According to a second aspect, a communication method is provided. The execution body of the method may be a second network device or a component (a chip, a circuit, or the like) configured in the second network device. The method includes: the second network device sends information indicating a network state of the second network device; and the second network device receives a first mobility load balancing (MLB) policy, where the first MLB policy is based on the information indicating the network state of the second network device. The first MLB policy includes at least one of the following: a configuration parameter used to adjust the load of the first network device and time information corresponding to that configuration parameter, a configuration parameter used to adjust the load of the second network device and time information corresponding to that configuration parameter, or reliability information of the first MLB policy.
  • In a first possible implementation of the second aspect, the sending, by the second network device, of the information indicating the network state of the second network device includes: the second network device receives first inquiry information; and the second network device sends first confirmation information, where the first confirmation information responds to the first inquiry information and includes the information indicating the network state of the second network device.
  • The information indicating the network state includes at least one of the following: current network state information of the second network device, network state information predicted by the second network device, or a third predicted load value of the second network device.
  • A second possible implementation of the second aspect includes: the second network device receives at least one of a fourth predicted load value or a fifth predicted load value, where the fourth predicted load value is the load value of the first network device at the first time predicted under the first MLB policy, and the fifth predicted load value is the load value of the second network device at the first time predicted under the first MLB policy.
  • A third possible implementation of the second aspect includes: the second network device modifies, based on the first MLB policy, the parameter used to adjust the load of the second network device and sends a second actual load value of the second network device at the first time; or, if the second network device fails to modify the parameter used to adjust the load of the second network device, the second network device sends feedback information indicating the reason for the failure of the parameter configuration modification.
  • A fourth possible implementation of the second aspect includes: the second network device obtains the eighth predicted load value of the first network device at the second time under the current MLB policy and sends it, so that another device can obtain the accuracy of the eighth predicted load value; or, the second network device obtains at least one of the eighth predicted load value of the first network device at the second time or the ninth predicted load value of the second network device at the second time, both predicted under the current MLB policy, together with the fourth actual load value of the second network device at the second time under the current MLB policy, and sends at least one of the eighth predicted load value or the ninth predicted load value and the fourth actual load value, so that another device can obtain the accuracy of at least one of the eighth predicted load value or the ninth predicted load value.
  • A fifth possible implementation of the second aspect includes: the second network device receives a sixth predicted load value and a third actual load value, where the sixth predicted load value is the load value of the first network device at a second time predicted by another device under the current MLB policy and the third actual load value is the actual load value of the first network device at the second time; the second network device calculates the accuracy of the sixth predicted load value and of the eighth predicted load value based on the sixth predicted load value, the eighth predicted load value and the third actual load value, and determines that the sixth predicted load value is better; and/or,
  • the second network device receives a seventh predicted load value, where the seventh predicted load value is the load value of the second network device at the second time predicted by another device under the current MLB policy; the second network device calculates the accuracy of the seventh predicted load value and of the ninth predicted load value based on the seventh predicted load value, the ninth predicted load value and the fourth actual load value, and determines that the seventh predicted load value is better; and/or,
  • the second network device receives a sixth predicted load value and a third actual load value, defined as above; the second network device calculates the accuracy of the sixth predicted load value and of the ninth predicted load value based on the sixth predicted load value, the third actual load value, the ninth predicted load value and the fourth actual load value, and determines that the sixth predicted load value is better; and/or,
  • the second network device receives a seventh predicted load value and a third actual load value, where the seventh predicted load value is the load value of the second network device at the second time predicted by another device under the current MLB policy and the third actual load value is the actual load value of the first network device at the second time; the second network device calculates the accuracy of the seventh predicted load value and of the eighth predicted load value based on the seventh predicted load value, the eighth predicted load value, the third actual load value and the fourth actual load value, and determines that the seventh predicted load value is better.
  • A third aspect of the embodiments of the present application provides a communication apparatus. The apparatus has the function of implementing the behavior of the first network device or the operation and management entity in the above method aspects, and includes components (means) corresponding to the steps or functions described in those method aspects.
  • the steps or functions can be implemented by software, or by hardware, or by a combination of hardware and software.
  • the above-mentioned apparatus includes one or more processors, and further, may include a communication unit.
  • the one or more processors are configured to support the apparatus in performing the corresponding functions of the first network device or the operation and management entity in the above method, for example, obtaining the first MLB policy.
  • the communication unit is used to support the communication between the apparatus and other devices, and realize the function of receiving and/or sending. For example, the first MLB policy is sent to other devices.
  • the apparatus may further include one or more memories, where the memories are coupled to the processor and store necessary program instructions and/or data of the base station.
  • the one or more memories may be integrated with the processor, or may be provided separately from the processor; this is not limited in this application.
  • the device may be a base station, a next-generation base station (next generation NodeB, gNB), a transmission and reception point (TRP), a distributed unit (DU), a centralized unit (CU), an OAM, etc.
  • the communication unit may be a transceiver, or a transceiver circuit.
  • the transceiver may also be an input/output circuit or an interface.
  • the device may also be a chip.
  • the communication unit may be an input/output circuit or an interface of the chip.
  • the above device includes a transceiver, a processor and a memory.
  • the processor is used to control the transceiver to send and receive signals
  • the memory is used to store a computer program
  • the processor is used to run the computer program in the memory, so that the apparatus executes the method performed by the network device or the operation and management entity in the first aspect.
  • the above-mentioned apparatus includes one or more processors, and further, may include a communication unit.
  • the one or more processors are configured to support the apparatus to perform the corresponding function of the second network device in the above method. For example, information indicative of the network status of the second network device is determined.
  • the communication unit is used to support the communication between the apparatus and other devices, and realize the function of receiving and/or sending. For example, the first MLB policy is received.
  • the apparatus may further include one or more memories, which are coupled to the processor and store necessary program instructions and/or data of the apparatus.
  • the one or more memories may be integrated with the processor, or may be provided separately from the processor; this is not limited in this application.
  • the device may be a base station, a next-generation base station (next generation NodeB, gNB), a transmission and reception point (TRP), a distributed unit (DU), a centralized unit (CU), etc.
  • the communication unit may be a transceiver, or a transceiver circuit.
  • the transceiver may also be an input/output circuit or an interface.
  • the device may also be a chip.
  • the communication unit may be an input/output circuit or an interface of the chip.
  • the above device includes a transceiver, a processor and a memory.
  • the processor is used to control the transceiver to send and receive signals
  • the memory is used to store a computer program
  • the processor is used to run the computer program in the memory, so that the apparatus executes the method performed by the second network device in the second aspect.
  • a system comprising one or more of the above-mentioned first network device, second network device or operation and management entity.
  • a readable storage medium or program product for storing a program, the program comprising instructions for performing the method of the first aspect or the second aspect.
  • a readable storage medium or program product for storing a program that, when executed on a processor, causes an apparatus including the processor to execute the method of the first aspect or the second aspect.
  • FIG. 1 is a schematic diagram of a communication system provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a network architecture in which multiple DUs share one CU according to an embodiment of the present application
  • FIG. 3 is a schematic diagram of protocol layer functions of a CU and DU provided by an embodiment of the present application
  • FIG. 4 is a schematic diagram of an RRC state transition provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a resource status report initialization process provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a resource status reporting process provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a mobility parameter change process provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a neuron structure provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a layer relationship of a neural network provided by an embodiment of the present application.
  • FIG. 10 is a flowchart of a possible implementation manner provided by the embodiment of the present application.
  • FIG. 11 is a schematic flowchart of a communication method provided by an embodiment of the present application.
  • FIG. 12a is a schematic flowchart of a communication method provided by an embodiment of the present application.
  • FIG. 12b is a schematic flowchart of a communication method provided by an embodiment of the present application.
  • FIG. 12c is a schematic flowchart of a communication method provided by an embodiment of the present application.
  • FIG. 13 is a schematic flowchart of a communication method provided by an embodiment of the present application.
  • FIG. 14a is a schematic flowchart of a communication method provided by an embodiment of the present application.
  • FIG. 14b is a schematic flowchart of a communication method provided by an embodiment of the present application.
  • FIG. 14c is a schematic flowchart of a communication method provided by an embodiment of the present application.
  • FIG. 15 is a schematic flowchart of a communication method provided by an embodiment of the present application.
  • FIG. 16 is a schematic flowchart of a communication method provided by an embodiment of the present application.
  • FIG. 17 is a schematic structural diagram of an access network device according to an embodiment of the present application.
  • FIG. 18 is a schematic structural diagram of a communication apparatus according to an embodiment of the present application.
  • words such as “first” and “second” are used to distinguish the same or similar items with basically the same function and effect.
  • For example, the first information and the second information are only used to distinguish different information, and do not limit their sequence.
  • the words “first”, “second” and the like do not limit the quantity and execution order, and the words “first”, “second” and the like are not necessarily different.
  • "At least one item (one)" refers to one or more, and "multiple" refers to two or more.
  • "And/or" describes the association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate that A exists alone, that A and B exist at the same time, or that B exists alone, where A and B may be singular or plural.
  • The character "/" generally indicates that the associated objects are in an "or" relationship, but may also indicate an "and/or" relationship, which can be understood with reference to the context.
  • "At least one of the following items" or similar expressions refer to any combination of these items, including any combination of single items or plural items.
  • "At least one of a, b, or c" can represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c may each be singular or plural.
  • The technical solutions of the embodiments of the present application may be applied to various communication systems, for example, long term evolution (LTE) systems, worldwide interoperability for microwave access (WiMAX) systems, fifth generation (5G) systems such as new radio (NR) access technology, and future systems such as 6G.
  • the network architecture and service scenarios described in the embodiments of the present application are for the purpose of illustrating the technical solutions of the embodiments of the present application more clearly, and do not constitute a limitation on the technical solutions provided by the embodiments of the present application.
  • the evolution of the architecture and the emergence of new business scenarios, the technical solutions provided in the embodiments of the present application are also applicable to similar technical problems.
  • different base stations may be base stations with different identities, or may be base stations with the same identity that are deployed in different geographic locations.
  • When a base station is deployed, it may not be known whether the scenarios to which the embodiments of the present application apply will be involved.
  • the base station or the baseband chip can support the methods provided by the embodiments of the present application before deployment. In some scenarios, the methods provided by the embodiments of the present application may also be supported by upgrading or loading after deployment. It can be understood that the foregoing base stations with different identities may correspond to base station identities, or may correspond to cell identities or other identities.
  • FIG. 1 shows a schematic diagram of a communication system applicable to the communication method of the embodiment of the present application.
  • the communication system 100 includes access network equipment 101 (gNB1 and gNB2), user equipment (user equipment, UE) 102, core network equipment (core network, CN) 103 and an operation and management entity 104.
  • the access network device 101 may be configured with multiple antennas, and the UE 102 may also be configured with multiple antennas.
  • Access network equipment and core network equipment may be collectively referred to as network equipment or network-side equipment, and the access network and the core network may be collectively referred to as the network side.
  • access network equipment and terminals may also include various components related to signal transmission and reception (eg, processors, modulators, multiplexers, demodulators or demultiplexers, etc.).
  • the access network device refers to a radio access network (radio access network, RAN) node (or device) that accesses the terminal to the wireless network, and may also be referred to as a base station.
  • the access network device is a device with wireless transceiver function or a chip that can be installed in the device.
  • The device may broadly cover, or be replaced by, various names such as: NodeB, evolved NodeB (eNB), gNB, relay station, access point, TRP, transmission point (TP), master station (MeNB), secondary station (SeNB), multi-standard radio (MSR) node, home base station, network controller, access node, wireless node, access point (AP), transmission node, transceiver node, baseband unit (BBU), remote radio unit (RRU), active antenna unit (AAU), remote radio head (RRH), centralized unit (CU), distributed unit (DU), positioning node, etc.
  • a base station may be a macro base station, a micro base station, a relay node, a donor node, or the like, or a combination thereof.
  • a base station may also refer to a communication module, modem or chip used to be provided in the aforementioned equipment or apparatus.
  • the base station may also be a mobile switching center, a device that assumes the function of a base station in D2D, V2X, and M2M communications, a network-side device in a 6G network, a device that assumes the function of a base station in a future communication system, and the like.
  • Base stations can support networks of the same or different access technologies. The embodiments of the present application do not limit the specific technology and specific device form adopted by the network device.
  • the device can be stationary or mobile.
  • a helicopter or drone can be configured to act as a mobile base station, and one or more cells can move according to the location of the mobile base station.
  • a helicopter or drone may be configured to function as a device that communicates with another base station.
  • access network equipment may include BBUs and RRUs. Some baseband functions, such as beamforming functions, can be implemented in the BBU or, alternatively, in the RRU.
  • the connection interface between the BBU and the RRU may be a common public radio interface (common public radio interface, CPRI), or an enhanced common public radio interface (enhance CPRI, eCPRI).
  • the access network equipment may include CUs and DUs. CU and DU can be understood as the division of the base station from the perspective of logical functions. The CU and DU can be physically separated or deployed together.
  • FIG. 2 is a schematic diagram of a network architecture in which multiple DUs share one CU according to an embodiment of the application.
  • the core network and the RAN communicate with each other, and the base stations in the RAN are separated into CUs and DUs.
  • Multiple DUs share one CU.
  • the network architecture shown in FIG. 2 can be applied to a 5G communication system, and can also share one or more components or resources with an LTE system.
  • In access network equipment that includes a CU node and a DU node, the protocol layers are split: the functions of some protocol layers are centrally controlled by the CU, the functions of the remaining protocol layers (some or all of them) are distributed in the DU, and the CU centrally controls the DU.
  • For example, the CU is deployed with the radio resource control (RRC) layer, the packet data convergence protocol (PDCP) layer, and the service data adaptation protocol (SDAP) layer of the protocol stack, while the DU is deployed with the radio link control (RLC) layer, the medium access control (MAC) layer, and the physical (PHY) layer.
  • Accordingly, the CU has the processing capabilities of the RRC, PDCP and SDAP layers, and the DU has the processing capabilities of the RLC, MAC and PHY layers. It can be understood that this division of functions is only an example and does not constitute a limitation on the CU and the DU.
  • the functions of the CU can be implemented by one entity or by different entities.
  • The functions of the CU can be further divided; for example, the control plane (CP) and the user plane (UP) can be separated, that is, into the CU control plane (CU-CP) and the CU user plane (CU-UP).
  • the CU-CP and the CU-UP may be implemented by different functional entities, and the CU-CP and the CU-UP may be coupled with the DU to jointly complete the functions of the base station.
  • the CU-CP is responsible for the control plane function, which mainly includes the RRC and the PDCP control plane PDCP-C.
  • PDCP-C is mainly responsible for one or more of control plane data encryption and decryption, integrity protection, and data transmission.
  • CU-UP is responsible for user plane functions, mainly including SDAP and PDCP user plane PDCP-U.
  • SDAP is mainly responsible for processing the data of the core network and mapping the data flow to the bearer.
  • PDCP-U is mainly responsible for one or more of data plane encryption and decryption, integrity protection, header compression, serial number maintenance, and data transmission.
  • the CU-CP and CU-UP are connected through the E1 interface.
  • The CU-CP represents the access network device in connecting to the core network through the Ng interface.
  • The CU-CP is connected to the DU through the F1-C (control plane) interface.
  • The CU-UP is connected to the DU through the F1-U (user plane) interface.
  • the access network device may be a CU node, or a DU node, or a device including a CU node and a DU node.
  • The CU may be classified as a device in the radio access network (RAN), or the CU may be classified as a device in the core network (CN), which is not limited herein.
  • a terminal may also be referred to as terminal equipment, user equipment (UE), access terminal, subscriber unit, subscriber station, mobile terminal (MT), mobile station (MS), remote station, remote terminal , mobile device, user terminal, wireless communication device, user agent or user equipment.
  • A terminal is a device that provides voice and/or data connectivity to a user and can be used to connect people, things, and machines.
  • The terminal in the embodiments of the present application may be a mobile phone, a tablet computer (Pad), a computer with a wireless transceiver function, a wearable device, a mobile internet device (MID), a virtual reality (VR) terminal, an augmented reality (AR) terminal, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in remote medicine, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, and so on.
  • the embodiments of the present application do not limit application scenarios.
  • the methods and steps implemented by the terminal in this application may also be implemented by components (eg, chips or circuits) that can be used in the terminal.
  • the aforementioned terminals and components (such as chips or circuits) that can be provided in the aforementioned terminals are collectively referred to as terminals.
  • the terminal can also be used to act as a base station.
  • a terminal may act as a scheduling entity that provides sidelink signals between terminals in V2X or D2D or the like.
  • For example, cell phones and automobiles communicate with each other using sidelink signals, and cell phones communicate with smart home devices without relaying the signals through a base station.
  • the core network device refers to the device in the core network (CN) that provides service support for the terminal.
  • Examples of core network equipment are: an access and mobility management function (AMF) entity, a session management function (SMF) entity, a user plane function (UPF) entity, and so on; they are not listed exhaustively here.
  • the AMF entity may be responsible for terminal access management and mobility management
  • the SMF entity may be responsible for session management, such as user session establishment, etc.
  • the UPF entity may be a user plane functional entity, mainly responsible for connecting to external networks.
  • An AMF entity may also be referred to as an AMF network element or an AMF functional entity, and an SMF entity may also be referred to as an SMF network element or an SMF functional entity, and so on.
  • the network element management device refers to the device responsible for the configuration and management of access network devices.
  • the operation and management entity may be an OAM or a network management system, and the embodiment of the present application does not limit the naming manner of the operation and management entity.
  • both gNB1 and gNB2 can communicate with multiple UEs.
  • the UE communicating with gNB1 and the UE communicating with gNB2 may be the same or different.
  • The UE 102 shown in FIG. 1 can communicate with gNB1 and gNB2 at the same time, but this only shows one possible scenario; in some scenarios, the UE may communicate only with gNB1 or with gNB2, which is not limited in this application.
  • FIG. 1 is only a simplified schematic diagram for easy understanding, and the communication system may also include other access network devices, terminals, or core network devices, which are not shown in FIG. 1 .
  • the radio resource control (RRC) states of the UE include a connected state (RRC_CONNECTED), an idle state (RRC_IDLE), and a deactivated state (RRC_INACTIVE, or the third state).
  • The RRC inactive state is a newly introduced state in which the terminal is connected to the 5G core network through the base station; this state lies between the connected state and the idle state.
  • the RRC_INACTIVE state there is no RRC connection between the terminal and the access network device, but the connection between the access network device and the core network device is maintained, and the terminal saves all or part of the information necessary to establish/restore the connection. Therefore, in the RRC_INACTIVE state, when the terminal needs to establish a connection, it can quickly establish or restore an RRC connection with the network device according to the stored relevant information.
  • When the UE is in the RRC_CONNECTED state, links between the UE, the base station and the core network have been established, and data arriving at the network can be transmitted directly to the UE. When the UE is in the RRC_INACTIVE state, such links were established previously but the link between the UE and the base station has been released; the base station stores the context of the UE and can quickly restore this link when there is data to transmit. When the UE is in the RRC_IDLE state, there is no link between the UE and the base station or the network, and when there is data to transmit, links from the UE to the base station and the core network need to be established.
  • FIG. 4 is a schematic diagram of an RRC state transition provided by an embodiment of the present application.
  • the UE in the RRC_IDLE state, the UE can access the base station, and the UE can communicate with the base station during the access process or after accessing the base station.
  • the RRC establishment process is performed, so that the state of the UE is converted from the RRC_IDLE state to the RRC_CONNECTED state.
  • the UE may initiate an RRC establishment process, and attempt to establish an RRC connection with the base station to enter the RRC_CONNECTED state.
  • The base station can change the state of the UE from the RRC_CONNECTED state to the RRC_IDLE state or the RRC_INACTIVE state through an RRC release procedure, for example by sending an RRC release (RRCRelease) message to the UE.
  • From the RRC_INACTIVE state, the UE may enter the RRC_IDLE state by releasing the RRC connection, or enter the RRC_CONNECTED state by resuming the RRC connection.
  • Mobility load balancing may include: adjusting the handover parameters of cells so that some UEs in the RRC_CONNECTED state are handed over from a cell with a higher load to a cell with a lower load; and/or adjusting the cell reselection parameters so that some UEs in the RRC_IDLE or RRC_INACTIVE state reselect to a cell with a lower load, thereby avoiding the potential load imbalance caused by services initiated by UEs in the RRC_IDLE or RRC_INACTIVE state.
  • In other words, based on the resource usage shared among network devices, the handover parameters and reselection parameters of cells are adjusted according to the cell load, so as to achieve mobility load balancing. (A simple adjustment sketch is given below.)
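As a concrete (and purely illustrative) reading of this mechanism, the sketch below adjusts a per-cell-pair handover offset from the exchanged load information: when the serving cell is noticeably more loaded than a neighbour, the offset is lowered so that connected-mode UEs hand over to the neighbour more easily; reselection priorities could be shifted analogously for idle/inactive UEs. The threshold and step size are assumed values.

```python
def adjust_handover_offset(serving_load: float, neighbour_load: float,
                           current_offset_db: float,
                           imbalance_threshold: float = 0.2,
                           step_db: float = 1.0) -> float:
    """Return a new cell-pair handover offset based on the load difference (sketch)."""
    if serving_load - neighbour_load > imbalance_threshold:
        # Serving cell overloaded: make handover towards the neighbour easier.
        return current_offset_db - step_db
    if neighbour_load - serving_load > imbalance_threshold:
        # Neighbour overloaded: make handover towards it harder.
        return current_offset_db + step_db
    return current_offset_db

print(adjust_handover_offset(serving_load=0.9, neighbour_load=0.4, current_offset_db=3.0))
```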
  • The interfaces between network devices, for example between NR base stations and the core network, between LTE base stations and the core network, between NR base stations, between LTE base stations, between a CU and a DU, and between a CU-CP and a CU-UP, already support the exchange of resource status information.
  • the basic flow of MLB is described below by taking the interface between NR base stations, such as the Xn interface, as an example.
  • The exchange of resource usage between network devices can be controlled through the resource status reporting initialization procedure of MLB, as shown in Figure 5:
  • the first network device sends a resource status request to the second network device, for example, the request is sent through a resource status request message (Resource Status Request).
  • the resource state request indicates that information related to the resource state needs to be fed back by the second network device.
  • The first network device sends a Resource Status Request message to the second network device; the message carries indication information that instructs the second network device to start measuring, stop measuring, or add cells to measure, and its Report Characteristics information element indicates the resource-status-related information that the second network device needs to feed back.
  • the second network device performs a corresponding operation according to the Resource Status Request message sent by the first network device.
  • S502a Reply to the first network device with a response, such as responding by a resource status response (Resource Status Response) message.
  • S502b Reply the failure information to the first network device, for example, reply through a resource status failure (Resource Status Failure) message.
  • The failure information can include the reason for the failure, such as a resource status that cannot be measured. (A schematic handler for this exchange follows.)
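A schematic handler for this initialization exchange is sketched below: on receiving a Resource Status Request, the second network device either starts the requested measurements and answers with a Resource Status Response (S502a), or answers with a Resource Status Failure carrying the cause (S502b). The dictionary fields are simplified stand-ins for the actual information elements.

```python
SUPPORTED_MEASUREMENTS = {"prb_usage", "capacity_value", "hw_load"}   # illustrative set

def handle_resource_status_request(request: dict) -> dict:
    """Second network device's handling of a Resource Status Request (sketch)."""
    requested = set(request.get("report_characteristics", []))
    unsupported = requested - SUPPORTED_MEASUREMENTS
    if unsupported:
        # S502b: reply with Resource Status Failure and the reason (unmeasurable items).
        return {"msg": "ResourceStatusFailure",
                "cause": f"cannot measure {sorted(unsupported)}"}
    # S502a: start/stop/add the requested measurements, then acknowledge.
    return {"msg": "ResourceStatusResponse", "measuring": sorted(requested)}

print(handle_resource_status_request(
    {"msg": "ResourceStatusRequest", "action": "start",
     "report_characteristics": ["prb_usage", "capacity_value"]}))
```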
  • After the second network device completes the measurement of the resources requested by the first network device, it can send the measurement result to the first network device through the Resource Status Reporting procedure, as shown in Figure 6:
  • the second network device sends the information of the resource status to the first network device, for example, by sending the resource status update (Resource Status Update) message.
  • The second network device measures periodically according to the latest resource status request sent by the first network device and, after completing the resource status measurement, sends a Resource Status Update message carrying the resource status information to the first network device. (A reporting-loop sketch is given below.)
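The periodic reporting step can be sketched as a small loop: after each reporting period the second network device collects the measurements named in the latest request and sends them in a Resource Status Update. The measurement source and the set of measured quantities are placeholders.

```python
import random

def measure(items):
    # Placeholder for the actual resource measurements.
    return {item: round(random.random(), 2) for item in items}

def resource_status_updates(requested_items, periods: int):
    """Yield one Resource Status Update per reporting period (sketch)."""
    for period in range(periods):
        yield {"msg": "ResourceStatusUpdate", "period": period,
               "measurements": measure(requested_items)}

for update in resource_status_updates(["prb_usage", "capacity_value"], periods=3):
    print(update)   # sent from the second network device to the first
```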
  • After the first network device receives the Resource Status Update message of the second network device, if it determines that a mobility parameter needs to be changed, it can negotiate the change through the Mobility Settings Change procedure of MLB, as shown in Figure 7:
  • the first network device sends information requesting to change the mobility parameter to the second network device, for example, sending the information through a mobility parameter change request (Mobility Change Request) message.
  • The mobility parameter change request initiated by the first network device may be triggered by various conditions. One of them is that the first network device and the second network device determine that a mobility parameter needs to be adjusted after sharing their resource usage. For example, the first network device finds that the load of the second network device is small and therefore determines that the trigger threshold for handover from the second network device to the first network device needs to be increased, thereby making it more difficult for UEs on the second network device to hand over to the first network device and allowing more UEs to stay on the second network device.
  • the parameters of the mobility parameter change procedure are for two neighboring cells, that is, parameters for one specific cell versus another specific cell.
  • S702a: The second network device replies to the first network device with a parameter change success response, for example, by sending a mobility parameter change success (Mobility Change Acknowledge) message.
  • The second network device replies to the first network device with a parameter change failure response, for example, by sending a mobility parameter change failure (Mobility Change Failure) message.
  • the parameter change failure response may include at least one of a reason for the failure and a range of mobility parameter changes supported by the second network device.
  • The first network device may, according to the failure cause and the range of mobility parameter changes supported by the second network device carried therein, perform S701 again to re-initiate the mobility parameter change procedure.
  • a neural network is a specific implementation form of machine learning.
  • Machine learning has attracted extensive attention from academia and industry in recent years. Due to the huge advantages of machine learning in handling structured information and massive data, many researchers in the communication field have also turned their attention to machine learning.
  • Neural networks can be used to perform classification tasks, prediction tasks, and can also be used to build conditional probability distributions between variables.
  • Common neural networks include deep neural network (DNN), generative neural network (GNN), etc.
  • DNNs can include feedforward neural networks (FNN), convolutional neural networks (CNN), and recurrent neural networks (RNN), among others.
  • GNN includes Generative Adversarial Network (GAN) and Variational Autoencoder (VAE).
  • the neural network is constructed on the basis of neurons.
  • the following uses DNN as an example to introduce the calculation and optimization mechanism of the neural network. It can be understood that the specific implementation of the neural network is not limited in the embodiments of the present invention.
  • each neuron performs a weighted summation operation on its input values, and the weighted summation result is passed through an activation function to generate an output.
  • FIG. 8 is a schematic diagram of a neuron structure. Assume that the inputs of the neuron are x_1, x_2, ..., x_n, that the weights corresponding to the inputs are w_1, w_2, ..., w_n, that the bias of the weighted summation is b, and that the activation function, whose form can be diversified, is denoted f(·). The output of the neuron is then y = f(w_1*x_1 + w_2*x_2 + ... + w_n*x_n + b).
  • DNN generally has a multi-layer structure. Each layer of DNN can contain multiple neurons. The input layer processes the received values by neurons and transmits them to the middle hidden layer. Similarly, the hidden layer passes the calculation results to the final output layer, which produces the final output of the DNN. As shown in FIG. 9 , FIG. 9 is a schematic diagram of the layer relationship of the neural network. DNNs generally have one or more hidden layers, which often directly affect the ability to extract information and fit functions. Increasing the number of hidden layers of DNN or expanding the number of neurons in each layer can improve the function fitting ability of DNN.
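  • The following minimal NumPy sketch illustrates the computation just described: each layer performs a weighted summation plus a bias and passes the result through an activation function. The sigmoid activation and the toy layer sizes are assumptions chosen purely for illustration.

```python
import numpy as np

def sigmoid(z):
    # One possible activation function; the form can be diversified.
    return 1.0 / (1.0 + np.exp(-z))

def dnn_forward(x, weights, biases):
    """Forward pass of a simple fully connected DNN:
    every layer computes f(W @ a + b), mirroring the neuron model above."""
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)
    return a

# Toy network: 3 inputs -> 4 hidden neurons -> 1 output.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 3)), rng.normal(size=(1, 4))]
biases = [rng.normal(size=4), rng.normal(size=1)]
print(dnn_forward(np.array([0.2, 0.5, 0.1]), weights, biases))
```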
  • the parameters of each neuron can include weights, biases and activation functions, and the set of parameters of all neurons in the DNN is called DNN parameters (or neural network parameters).
  • the weights and biases of neurons can be optimized through the training process, so that DNN has the ability to extract data features and express mapping relationships.
  • The parameters of the neural network include information related to the neural network and may, exemplarily, include one or more of the following:
  • Type of the neural network, such as a deep neural network or a generative neural network;
  • Information related to the neural network structure, such as one or more of the number of layers of the neural network, the number of neurons, etc.; or,
  • Parameters of each neuron in the neural network, such as one or more of weights, biases, and activation functions.
  • Network status information is information related to network resource usage and may include at least one of the following: air interface resource status information, transport network layer resource status information, cell resource status information, hardware resource status information of the network device, the capacity and resource usage of each network slice of the network device, the load information of different service types of the network device, or the user path prediction.
  • The resource status information indicates the usage of resources, for example, one of the percentage of the used resources to the total, the percentage of the unused resources to the total, and the total amount of the resources.
  • the resource status information may further include the capacity level of the resource, that is, the gear of the total capacity of the resource, and the range of the capacity value corresponding to the gear may be agreed in the protocol.
  • the air interface resource status information includes at least one of the following:
  • Cell physical downlink control channel control channel element (physical downlink control channel control channel element, PDCCH CCE) resource usage;
  • Cell sounding reference signal (SRS) resource usage;
  • Physical random access channel (PRACH) resource usage;
  • The number of PDCCH CCEs occupied by the uplink and downlink of the cell, or the total number of CCEs together with the uplink and downlink occupancy ratios; or,
  • The code channel occupancy ratio and resource block (RB) occupancy of different formats of the physical uplink control channel (PUCCH) of the cell. For example, the code channel occupancy ratio of PUCCH Format 1 is 50%, occupying 2 RBs.
  • the transport network layer (TNL) resource status information may include:
  • the cell resource status information may include at least one of the following:
  • the load information of different service types of the network device may include at least one of the following:
  • The service types of the cell, where the service types can be divided according to the quality of service (QoS) of the services; for example, a QoS of 1 corresponds to the first service type, and a QoS of 2 corresponds to the second service type; or,
  • Load information of each service type in the cell, for example, the traffic value and/or the number of users corresponding to the service type.
  • The user path prediction may include at least one of the following:
  • the location information of each user in the network device such as the user's global positioning system (GPS) coordinates;
  • Reference signal received power (RSRP) information of each user;
  • Reference signal received quality (RSRQ) information of each user; or,
  • User path information stored in the network device, such as GPS coordinate information of each user.
  • the used ratio and/or the unused ratio of the resource can be a specific value or an index corresponding to the gear position.
  • As shown in Table 1, an index-ratio comparison table may be stipulated in the protocol, which gives the range of the ratio corresponding to each index value. Exemplarily, if the used ratio of the resource is 22%, which falls within the range [0%, 25%), the index value corresponding to the resource is 0.
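  • A minimal lookup sketch for such a table is given below. Only the range [0%, 25%) -> index 0 comes from the text; the remaining equal 25% ranges are an assumption for illustration.

```python
# Hypothetical index-ratio table: only the first range is given in the text;
# the remaining ranges are assumed to continue in equal 25% steps.
INDEX_RANGES = [
    (0.0, 25.0),    # index 0
    (25.0, 50.0),   # index 1 (assumed)
    (50.0, 75.0),   # index 2 (assumed)
    (75.0, 100.0),  # index 3 (assumed), upper bound inclusive
]

def ratio_to_index(used_ratio_percent: float) -> int:
    """Map a used-resource ratio (in percent) to its table index."""
    for index, (low, high) in enumerate(INDEX_RANGES):
        if low <= used_ratio_percent < high or (index == len(INDEX_RANGES) - 1
                                                and used_ratio_percent == high):
            return index
    raise ValueError("ratio must be between 0 and 100")

print(ratio_to_index(22.0))   # -> 0, matching the example above
```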
  • the load value can represent the load condition of the network device.
  • the load value may be a value calculated according to the content in the network state information.
  • The load value may be a value obtained by superimposing each value in the network state information according to its corresponding weight.
  • The initial value of the weights can be provided by OAM, or be a preset value, or be set based on the configuration of the network device; after the actual load value is obtained, the weights can be iteratively optimized based on the predicted value and the actual value.
  • the calculation and optimization process of obtaining the load value may be completed through a neural network.
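  • A minimal sketch of the weighted superposition described above is shown below; the state-information keys and the initial weights are illustrative placeholders (in practice the initial weights could come from OAM, a preset value, or the device configuration).

```python
# Illustrative network state information (values normalized to [0, 1]).
network_state = {
    "pdcch_cce_usage": 0.62,
    "prb_usage": 0.48,
    "tnl_usage": 0.30,
    "hardware_usage": 0.55,
}

# Hypothetical initial weights (e.g. provided by OAM or preset).
weights = {
    "pdcch_cce_usage": 0.3,
    "prb_usage": 0.4,
    "tnl_usage": 0.1,
    "hardware_usage": 0.2,
}

def load_value(state: dict, w: dict) -> float:
    """Load value as the weighted superposition of the state-information values."""
    return sum(w[k] * state[k] for k in state)

print(round(load_value(network_state, weights), 3))
```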
  • The actual value and the predicted value should correspond to a time. For example, for the predicted load value at the first time, the time corresponding to the load value is the first time. The time corresponding to the actual value and the predicted value, that is, the first time, can be a time period or a time point.
  • the actual value includes actual network status information and actual load value.
  • the actual network state information is the actual value of the network state information of the network device at the corresponding time, and can be obtained through statistics and calculation of the network device.
  • the actual load value is the actual value of the load value of the network device at the corresponding time, which can be obtained by calculation according to the actual network state information at the corresponding time. For the calculation method, refer to the above-mentioned method for obtaining the load value.
  • the predicted value includes predicted network state information and predicted load value.
  • The predicted network state information is the prediction result of the network state information of the network device at the corresponding time. In a possible implementation, it can be obtained from the current network state information; for example, the predicted network state information at the first time is obtained as a function of a deviation correction value based on the current network state information and of the historical network state information of at least one time corresponding to the first time, where the function can be a weighted sum.
  • Exemplarily, Kalman filtering is performed on the used ratio of the PDCCH CCE resources of the first network device at the current time and the used ratio values of the PDCCH CCE resources obtained by the first network device within one hour before the current time, so as to obtain the deviation correction value. The actual PDCCH CCE occupation ratio of the first network device is obtained as the historical value, and the deviation correction value and the historical value are superimposed according to the corresponding weights, so as to obtain the prediction result of the used ratio of the PDCCH CCE resources of the first network device from 10:00 to 11:00.
  • the predicted load value is the predicted result of the load value.
  • the predicted network state information can be obtained through the current network state information according to the above method, and then the predicted load value can be obtained by referring to the above calculation method of the load value.
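  • The sketch below illustrates the idea using the PDCCH CCE example above: a deviation-corrected current value and a historical value are combined with weights to produce the predicted used ratio for the coming period. The simple smoothing used here is only a stand-in for the Kalman filtering mentioned in the text, and all weights are assumed values.

```python
def deviation_corrected_value(current: float, recent_samples: list,
                              smoothing: float = 0.5) -> float:
    """Stand-in for the Kalman-filter step: blend the current measurement
    with the mean of the samples observed over the previous hour."""
    recent_mean = sum(recent_samples) / len(recent_samples)
    return smoothing * current + (1.0 - smoothing) * recent_mean

def predict_state(current: float, recent_samples: list, historical: float,
                  w_corrected: float = 0.7, w_historical: float = 0.3) -> float:
    """Predicted used ratio for the coming period (e.g. 10:00-11:00):
    weighted superposition of the deviation-corrected value and the
    historical value, as described above (weights are illustrative)."""
    corrected = deviation_corrected_value(current, recent_samples)
    return w_corrected * corrected + w_historical * historical

# Example: current PDCCH CCE used ratio 0.55, the last hour's samples, and a
# historical value for the same period.
print(round(predict_state(0.55, [0.50, 0.52, 0.58, 0.54], historical=0.60), 3))
```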
  • the MLB policy is information related to the operation policy of the network device formulated to implement the MLB.
  • the network device can adjust the parameters of the network device according to the MLB policy, so as to adjust the load of the network device and achieve load balance.
  • MLB policies can include at least one of the following:
  • the parameter configuration in the MLB policy may be the configuration of one parameter, or may include the configuration of multiple parameters, and the range of the parameters may be determined according to the need to adjust the load of the network device.
  • the parameter may be used to adjust the load of the network device.
  • the parameter may be a handover-related parameter or an access-related parameter.
  • reference may be made to parameters in the prior art or parameters that will appear in the future, which are not limited in this embodiment.
  • the configuration parameters of the network device may include configuration parameters of one or more network devices, which are not limited herein.
  • the parameter configuration of different network devices and the time information corresponding to the parameters may be independent of each other, or may be related to each other.
  • Different network devices may each use the time information corresponding to their own independent parameters, so as to control the network device parameters more flexibly; or, different network devices may share the time corresponding to the parameters, thereby reducing the overhead of exchanging information, which is not limited in this embodiment.
  • The parameter configuration and time information in the MLB policy have the following possible implementations. It should be noted that the first network device is used as an example to illustrate the implementation of the parameter configuration and time information; when multiple network devices are involved, the different network devices are handled in the same way:
  • the MLB policy includes the first parameter of the first network device, and includes the time during which the first parameter is valid, which may be referred to as the first valid time.
  • The first network device performs parameter configuration according to the first parameter, and before or after the first valid time expires, the MLB policy inference service of the first network device can be triggered to obtain new parameters.
  • the MLB policy includes multiple parameter configurations of the first network device and multiple time information corresponding to the parameter configurations, that is, multiple sets of parameter configurations and time information.
  • For example, the MLB policy includes the first parameter of the first network device and the valid time of the first parameter, which may be called the first valid time; it also includes the second parameter of the first network device and the valid time of the second parameter, which may be called the second valid time. Then, during the first valid time, the first network device uses the first parameter, and after the first valid time, the first network device uses the second parameter within the second valid time. Before or after the second valid time expires, the MLB policy inference service of the first network device is triggered to obtain new parameters. In addition, if the MLB policy inference service is a periodic service, the expiry time of the second valid time may also be the expiry time of the current cycle by default.
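  • The sketch below models such a policy as a list of (parameter configuration, valid time) pairs plus optional reliability information, and selects the configuration that is currently in force; the field names and values are illustrative assumptions, not a prescribed encoding.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class TimedParameterConfig:
    params: Dict[str, float]   # e.g. {"handover_trigger_offset_db": 3.0}
    valid_from: float          # start of the validity window (seconds)
    valid_until: float         # end of the validity window (seconds)

@dataclass
class MlbPolicy:
    configs: List[TimedParameterConfig]
    reliability: Optional[float] = None   # e.g. 0.95

    def active_config(self, now: float) -> Optional[TimedParameterConfig]:
        """Return the parameter configuration whose valid time covers 'now';
        None means the valid time has expired and policy inference should be
        (re)triggered to obtain new parameters."""
        for cfg in self.configs:
            if cfg.valid_from <= now < cfg.valid_until:
                return cfg
        return None

# Example: first parameter valid for the first hour, second parameter for the next.
policy = MlbPolicy(
    configs=[
        TimedParameterConfig({"handover_trigger_offset_db": 3.0}, 0, 3600),
        TimedParameterConfig({"handover_trigger_offset_db": 1.5}, 3600, 7200),
    ],
    reliability=0.95,
)
print(policy.active_config(now=4000))
```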
  • the reliability represents the credibility of the MLB strategy.
  • The reliability is indicated by the probability that the deviation between the predicted load value and the actual load value is within an error range of the predicted load value.
  • For example, the reliability is expressed as: the error range of the predicted load value is -1 to 1, and there is a 95% probability that the deviation between the predicted load value and the actual load value is within this error range.
  • The reliability may also include the probability distribution of the predicted load value within certain intervals, that is, the probability that the predicted load value falls in different intervals. As shown in Table 2, this includes a set of correspondences between intervals of the predicted load value and probabilities:
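  • As a small illustration of how a device might use such reliability information to accept or reject a policy's parameter adjustment (the 95% threshold merely echoes the example above):

```python
def accept_policy(reliability: float, required: float = 0.95) -> bool:
    """Accept the MLB policy's parameter adjustment only if the probability
    that the prediction error stays within the stated range meets the
    device's requirement (threshold value is illustrative)."""
    return reliability >= required

print(accept_policy(0.95))  # True: adjustment accepted
print(accept_policy(0.80))  # False: adjustment rejected
```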
  • the mechanism of the MLB is to periodically obtain the current network resource usage, and to determine whether the parameter configuration of the base station needs to be adjusted.
  • Since the actual network situation is very complex and may fluctuate, the MLB strategy formulated by the existing MLB mechanism only adjusts the parameter configuration periodically and cannot change the parameter configuration between two adjustment periods, which may lead to load imbalance, so that network resources cannot be fully utilized.
  • an embodiment of the present application provides a communication method, specifically, a load balancing method.
  • The method may include: obtaining first network state information of a first network device; obtaining information indicating the network state of a second network device; obtaining, based on the first network state information and the information indicating the network state of the second network device, a first mobility load balancing (MLB) policy to take effect; and sending the first MLB policy to the second network device. The first MLB policy includes at least one of the following contents: configuration parameters for adjusting the load of the first network device, time information corresponding to the configuration parameters, or reliability information of the first MLB policy. The first network device and/or the second network device can adjust the parameter configuration according to the time information, which enhances the flexibility of the parameter configuration, thereby improving the performance of the MLB.
  • The first network device and/or the second network device can, according to the reliability information, accept or reject the parameter configuration adjustment of the first MLB policy, so that the performance of the MLB can be improved.
  • a first network device obtains first network state information of the first network device.
  • the first network device starts the MLB process to obtain first network state information of the first network device.
  • the MLB process can be triggered periodically or triggered by an event. For example, when the number of users exceeds a threshold, the MLB process is started.
  • the first network device obtains information indicating a network state of the second network device.
  • the information indicating the network state of the second network device may include: current network state information of the second network device, network state information predicted by the second network device, or a third predicted load value of the second network device.
  • a possible way to obtain the information indicating the network state of the second network device may include:
  • the first network device sends request information for the first network device to request the second network state information of the second network device to the second network device, for example, sending the request information through a first query message.
  • the first network device sends a first query message to the second network device, where the first query message includes request information for the first network device to request the second network state information of the second network device.
  • After receiving the first query message, the second network device collects statistics on its own network state information according to the request information for the second network state information in the first query message.
  • the second network device sends a first confirmation message to the first network device.
  • After the second network device completes the statistics of the network state information, it sends the information indicating the network state to the first network device.
  • the information indicating the network status is the current network status information of the second network device.
  • the information indicating the network state is the predicted network state information of the second network device at a third time under the current MLB policy obtained by the second network device according to the current network state information of the second network device; or,
  • the information indicating the network state is the predicted load value of the second network device at the third time under the current MLB policy obtained by the second network device according to the current network state information of the second network device.
  • the third time is a certain time between the current time and the first time.
  • the first network device obtains, according to the first network state information and the information indicating the network state of the second network device, a first MLB policy to take effect.
  • the first MLB strategy corresponds to the aforementioned first time.
  • the first network device obtains the first MLB policy based on the predicted load value of the first network device and the information indicating the network state of the second network device, including:
  • The first network device obtains the first predicted load value of the first network device at the first time based on the current MLB policy, the first network state information and the information indicating the network state of the second network device; optionally, the first predicted load value of the first network device at the first time can be obtained through the first neural network.
  • the current MLB strategy and the first network state information and the second network state information are used as the input of the first neural network, and the first predicted load value of the first network device at the first time is used as the output of the first neural network.
  • The first network device obtains the second predicted load value of the second network device at the first time based on the current MLB policy, the first network state information and the information indicating the network state of the second network device; optionally, the second predicted load value of the second network device at the first time can be obtained through the first neural network.
  • the current MLB strategy and the first network state information and the second network state information are used as the input of the first neural network, and the second predicted load value of the second network device at the first time is used as the output of the first neural network.
  • the first network device obtains the first MLB strategy based on the first predicted load value and the second predicted load value; optionally, the first MLB strategy may be obtained through the second neural network.
  • the first predicted load value and the second predicted load value are used as the input of the second neural network, and the first MLB strategy is used as the output of the second neural network.
  • the predicted load value or the MLB strategy can also be obtained in other ways, such as through a preset fixed calculation method or condition, to obtain the predicted load value or the MLB strategy, which is not limited herein.
  • the first network device obtains the first MLB policy based on the load prediction value of the first network device and the network state information of the second network device, including:
  • the first network device obtains, through the first neural network, the first prediction of the first network device at the first time based on the current MLB policy, the first network state information and the information indicating the network state of the second network device load value.
  • the first network device obtains the first MLB policy through the second neural network based on the first predicted load value and the information indicating the network state of the second network device.
  • the first network device obtains the first MLB strategy based on the network state information of the first network device and the load prediction value of the second network device, including:
  • the first network device obtains, through the first neural network, the second prediction of the second network device at the first time based on the current MLB policy, the first network state information and the information indicating the network state of the second network device load value.
  • the first network device obtains the first MLB policy through the second neural network based on the first network state information and the second predicted load value.
  • the first neural network and the second neural network may be independent neural networks, or may be the same neural network.
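  • The NumPy sketch below outlines this two-stage inference: a first (load-prediction) network maps the current MLB policy and the two devices' state information to predicted load values, and a second (policy) network maps the predicted load values to an MLB policy output. The layer sizes, random weights, and input encodings are illustrative assumptions, not the actual trained models.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(z):
    return np.maximum(z, 0.0)

class TinyMlp:
    """Minimal two-layer network used as a stand-in for the first and
    second neural networks described above."""
    def __init__(self, n_in, n_hidden, n_out):
        self.W1 = rng.normal(scale=0.5, size=(n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=0.5, size=(n_out, n_hidden))
        self.b2 = np.zeros(n_out)

    def forward(self, x):
        return self.W2 @ relu(self.W1 @ x + self.b1) + self.b2

# First neural network: (current MLB policy, first + second state info) -> predicted loads.
load_predictor = TinyMlp(n_in=6, n_hidden=8, n_out=2)
# Second neural network: predicted loads -> MLB policy output (e.g. a handover offset).
policy_network = TinyMlp(n_in=2, n_hidden=4, n_out=1)

current_policy = np.array([0.5, 0.0])          # illustrative encoding of the current MLB policy
first_state = np.array([0.62, 0.48])           # e.g. PDCCH CCE / PRB usage of the first device
second_state = np.array([0.30, 0.25])          # e.g. the second device's reported usage

predicted_loads = load_predictor.forward(
    np.concatenate([current_policy, first_state, second_state]))
first_mlb_policy = policy_network.forward(predicted_loads)
print(predicted_loads, first_mlb_policy)
```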
  • S1004 The first network device sends the first MLB policy to the second network device.
  • The first network device sends the first MLB policy to the second network device through an interface with the second network device, where the first MLB policy may include at least one of the following: the configuration parameters used to adjust the load of the first network device and the time information corresponding to those configuration parameters; or the configuration parameters used to adjust the load of the second network device and the time information corresponding to those configuration parameters.
  • The second network device can obtain the configuration-related information of the first network device from the first MLB policy and use it as a reference when formulating its operation policy in subsequent operation. For example, when formulating its own MLB policy, the second network device can use the configuration of the first network device as input information to obtain the influence of parameter changes of the second network device on the load.
  • The load situation is predicted through the neural network, the predicted load result is used as input information of the neural network to infer the MLB strategy, and the MLB strategy contains valid time information, so that the parameter configuration in the MLB strategy can be controlled flexibly, which can improve the performance of the MLB policy when the network load changes.
  • In addition, the first network device and/or the second network device may, according to the reliability information, accept or reject the parameter configuration adjustment of the first MLB policy, thereby improving the performance of the MLB.
  • this embodiment can also optimize the neural network according to the actual load value, so as to continuously improve the performance of the MLB strategy.
  • this embodiment may include:
  • the first network device obtains first network state information.
  • the first network device obtains second network state information.
  • the first network device obtains the first MLB policy according to the first network state information and the second network state information.
  • S1304 The first network device sends the first MLB policy to the second network device.
  • S1301-S1304 are the same as S1001-S1004, which will not be repeated here.
  • the first network device modifies the first configuration parameter based on the first MLB policy.
  • the first network device instructs the second network device to modify the second configuration parameter based on the first MLB policy.
  • the first network device may instruct the second network device to modify the second configuration parameter through a parameter modification instruction sent to the second network device.
  • the parameter modification instruction may be agreed in the protocol, and is one of the specific value of the parameter, the adjustment amount of the parameter modification, and the gear position of the parameter modification.
  • the current value of the parameter is 10, and it is expected to be modified to 15.
  • In a possible implementation, the specific value 15 of the parameter is sent as the parameter modification instruction, and the second network device modifies the parameter to 15 after receiving it. In another possible implementation, an adjustment amount of 5 for the parameter modification is sent, and after receiving it, the second network device adds 5 to the existing value of 10 and modifies the parameter to 15. In another possible implementation, the protocol stipulates that the adjustment amount corresponding to each adjustment gear is 5; the adjustment gear 1 is sent, and after receiving it, the second network device determines the adjustment amount to be 5 according to the adjustment gear 1 and modifies the parameter to 15.
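  • The three ways of conveying the modification in the example above (specific value, adjustment amount, adjustment gear) can be sketched as follows; the gear step of 5 mirrors the example and would in practice be stipulated in the protocol.

```python
GEAR_STEP = 5  # adjustment amount per gear, as in the example above

def apply_modification(current: float, kind: str, value: float) -> float:
    """Apply a parameter modification instruction at the second network device."""
    if kind == "absolute":          # instruction carries the specific value
        return value
    if kind == "delta":             # instruction carries the adjustment amount
        return current + value
    if kind == "gear":              # instruction carries the adjustment gear
        return current + value * GEAR_STEP
    raise ValueError(f"unknown instruction kind: {kind}")

# Current value 10, target 15 -- all three instruction styles yield the same result.
print(apply_modification(10, "absolute", 15))  # 15
print(apply_modification(10, "delta", 5))      # 15
print(apply_modification(10, "gear", 1))       # 15
```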
  • the sending method of the parameter modification instruction has the following possible implementations:
  • the first network device sends the parameter modification instruction to the second network device through the first MLB policy sent by the first network device in S1304.
  • the parameter modification indication is parameter configuration and time information related to the second network device in the first MLB policy, and the second network device performs corresponding parameter modification after receiving the first MLB policy.
  • the first network device sends a parameter modification instruction to the second network device through independent instruction information.
  • the indication information for modifying the second configuration parameter may be a Mobility Change Request message in the existing MLB mechanism. After receiving the Mobility Change Request message, the second network device performs corresponding parameter modification.
  • the second network device feeds back a parameter modification success message to the first network device, for example, through the Mobility Change Acknowledge message in the existing MLB mechanism.
  • S1308 is executed.
  • If the second network device fails to modify the second parameter configuration, the second network device feeds back a parameter modification failure message to the first network device, and S1309 is executed.
  • the parameter modification failure message may carry an indication of the reason for the failure of the parameter modification, for example, the parameter modification exceeds the parameter configuration range of the second network device, or the second network device refuses to modify the second parameter configuration, such as refusing modification according to reliability information.
  • the modification failure indication message may be the Mobility Change Failure message in the existing MLB mechanism.
  • the parameter modification failure message may also carry the parameter modification range acceptable to the second network device.
  • the parameter modification range includes the minimum value and/or the maximum value of the parameter;
  • the parameter modification range includes the current parameter value of the second network device, and the currently acceptable value or gear for parameter adjustment.
  • For example, the range of the handover parameter of the second network device is 0 to 20.
  • the current value of the handover parameter is 15, and the currently acceptable parameter adjustment range is -15 to 5.
  • the parameter modification failure message may also carry reliability information of the MLB policy acceptable to the second network device.
  • For example, the second network device may feed back that it accepts only MLB strategies whose reliability probability value is 95% or above.
  • the first network device optimizes the first neural network and/or the second neural network based on the predicted value and the actual value.
  • The predicted value includes predicted network state information and/or a predicted load value, and the actual value includes actual network state information and/or an actual load value, which is not limited here.
  • The first network device can optimize the first neural network and/or the second neural network according to the predicted network state information and the actual network state information, and can also optimize the first neural network and/or the second neural network according to the predicted load value and the actual load value. Taking the optimization of the first neural network and/or the second neural network according to the predicted load value and the actual load value as an example, the following possible implementations are included:
  • In a possible implementation, the first network device optimizes the first neural network and/or the second neural network of the first network device based on the predicted load value and the actual load value of the first network device, as shown in FIG. 14a, which can include:
  • the first network device obtains the fourth predicted load value of the first network device at the first time.
  • the first network device predicts the load value of the first network device at the first time through the first neural network based on the first MLB strategy, to obtain a fourth predicted load value.
  • the first network device sends a fourth predicted load value to the second network device.
  • The second network device can determine the operation strategy of the second network device according to the fourth predicted load value. For example, the second network device can determine, according to the fourth predicted load value, whether to trigger adjustment of the MLB strategy of the second network device.
  • The second network device sends the second actual load value of the second network device at the first time to the first network device.
  • the first network device obtains the first actual load value of the first network device at the first time.
  • the first network device optimizes the first neural network and/or the second neural network based on the fourth predicted load value and the first actual load value.
  • the first network device optimizes the neural network based on the load prediction result of the second network device and the actual load result of the second network device, as shown in FIG. 14b, including:
  • the first network device obtains the fifth predicted load value of the second network device at the first time.
  • the first network device predicts the load of the second network device at the first time through the first neural network based on the first MLB strategy, and obtains a fifth predicted load value.
  • the first network device sends a fifth predicted load value to the second network device.
  • The second network device may determine the operation strategy of the second network device according to the fifth predicted load value. For example, the second network device may determine, according to the fifth predicted load value, whether to trigger adjustment of the MLB strategy of the second network device.
  • the second network device sends the second actual load value of the second network device at the first time to the first network device.
  • the first network device optimizes the first neural network and/or the second neural network based on the fifth predicted load value and the second actual load value.
  • the first network device optimizes the neural network based on the load prediction results of the first network device and the second network device and the actual load results of the first network device and the second network device, as shown in FIG. 14c, including:
  • the first network device obtains the fourth predicted load value of the first network device at the first time, and the fifth predicted load value of the second network device at the first time.
  • The first network device predicts the load of the first network device at the first time to obtain the fourth predicted load value, and predicts the load of the second network device at the first time to obtain the fifth predicted load value.
  • the first network device sends the fourth predicted load value and/or the fifth predicted load value to the second network device.
  • the first network device obtains the first actual load value of the first network device at the first time.
  • the first network device optimizes the first neural network and/or the second neural network based on the fourth predicted load value, the fifth predicted load value, the first actual load value and the second actual load value.
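  • A minimal sketch of such an optimization step is given below: the squared deviation between the predicted and the actual load value is used as the loss, and the weights of a simple linear load predictor are adjusted by gradient descent. This is only a stand-in for the implementation-specific training of the first and/or second neural network.

```python
import numpy as np

def sgd_step(weights, state, actual_load, lr=0.05):
    """One gradient step on the squared error between the predicted load
    (a weighted sum of the state values, cf. the load-value definition)
    and the actual load value reported for the same time."""
    predicted = weights @ state
    error = predicted - actual_load
    grad = 2.0 * error * state          # d(error^2)/d(weights)
    return weights - lr * grad, predicted

weights = np.array([0.3, 0.4, 0.3])                 # illustrative initial weights
state = np.array([0.62, 0.48, 0.55])                # network state at the first time
actual = 0.70                                       # actual load value at the first time

for _ in range(20):
    weights, predicted = sgd_step(weights, state, actual)
print(round(float(weights @ state), 3))             # prediction moves toward 0.70
```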
  • performing neural network optimization may include:
  • the first network device optimizes the first neural network and/or the second neural network based on the reason fed back by the second network device.
  • the first network device adjusts the parameter range in the reasoning process according to the parameter modification range in the feedback failure cause.
  • In this way, the inferred MLB strategy can better match the needs of the actual network, and failures in executing subsequently inferred MLB strategies can be avoided.
  • one of S1502, S1503, and S1504 is performed.
  • S1502: The MLB strategy is re-inferred.
  • For example, if the failure reason is that the parameter modification exceeds the parameter configuration range of the second network device, the first network device performs load prediction again and infers the MLB policy again. When an MLB policy that is within the parameter configuration range of the second network device, and thus enforceable, is obtained, the parameters of the first network device and/or the second network device are re-adjusted, and optimization is performed based on the new MLB policy.
  • If the reason for the failure is that the second network device refuses to modify the parameters according to the reliability information, the first network device performs load prediction again and infers the MLB policy according to the reliability requirements of the second network device, re-adjusts the parameters of the first network device and/or the second network device, and performs optimization based on the predicted load value and the actual load value under the new MLB strategy according to the embodiments shown in FIGS. 14a-14c.
  • S1503 Based on the failure cause in the failure feedback, adjust the parameter configuration and instruct the second network device to modify.
  • the second parameter configuration value can be adjusted according to the parameter modification range in the failure feedback message, and the second network device is instructed to modify.
  • S1504 Determine and execute S1502 or S1503 based on the failure cause in the failure feedback.
  • the MLB policy is judged and re-inferred according to the failure cause fed back by the second network device, or parameter modification is directly performed.
  • The set threshold may be set based on experience of the sensitivity of the load to parameter changes. For example, when it is considered that the configuration value used to trigger the sending of the handover-related measurement report has a great influence on handover, the threshold corresponding to this parameter should be set to a small value to avoid a large impact of the parameter adjustment on the load.
  • the first network device sends a neural network optimization message to the second network device.
  • After the first network device completes the optimization of the neural network, it can feed back a neural network optimization message to the second network device, which can provide information for the optimization of the neural network of the second network device. The optimization message can include at least one of the following contents:
  • The neural network parameters optimized by the first network device, for example, the weight parameters of the neural network, input parameters, and the like.
  • In this way, the neural network can be optimized based on the deviation between the predicted load value and the actual load value, which can further improve the prediction accuracy of the load and the accuracy of the MLB strategy prediction.
  • the neural network can also be optimized according to the inference results of multiple network devices.
  • Embodiments of the present application also provide a method for optimizing a neural network based on the inference results of multiple network devices, as shown in Figure 16, which may include:
  • The first network device obtains the first network state information, and the second network device obtains the information indicating the network state of the second network device.
  • the first network state information and the information indicating the network state of the second network device are the same as those described in S1001, and will not be repeated here.
  • the first network device obtains the second network state information, and the second network device obtains the information indicating the network state of the first network device.
  • The first network device sends a first query message to the second network device and receives the first confirmation message sent by the second network device, and the second network device sends a second query message to the first network device and receives the second confirmation message sent by the first network device.
  • For the content of the first query message, the first confirmation message, the second query message, and the second confirmation message, please refer to the content related to the query message and the confirmation message in S1001 and S1002.
  • The second query message may be carried in the same message as the first confirmation message, or may be a message separate from the first confirmation message, and the sending times of the two messages may be the same or different.
  • The information indicating the network state of the first network device carried in the second confirmation message is one of the current network state information of the first network device, the predicted network state information of the first network device, or the predicted load value of the first network device.
  • the first network device obtains the first MLB policy, and the second network device obtains the second MLB policy.
  • The first network device predicts the sixth predicted load value of the first network device at the second time and the seventh predicted load value of the second network device at the second time; the second network device, based on the current MLB policy, predicts the eighth predicted load value of the first network device at the second time and the ninth predicted load value of the second network device at the second time.
  • The sixth predicted load value, the seventh predicted load value, the eighth predicted load value, and the ninth predicted load value can also be predicted load values at different time points or different time periods, so as to increase the randomness of the predicted load values and avoid the influence of network fluctuations on the subsequent accuracy calculation.
  • S1605 The first network device obtains the third actual load value at the second time, and the second network device obtains the fourth actual load value at the second time.
  • The first network device and the second network device can also obtain the corresponding actual load values at the times of the above-mentioned predicted load values, so that the predicted load values and the actual load values can be compared.
  • S1606 The first network device and the second network device exchange the predicted load value and the actual load value.
  • The first network device and the second network device exchange the predicted load values and the actual load values to provide information for the subsequent accuracy calculation.
  • the first network device performs one of the following:
  • the second network device performs one of the following:
  • An eighth predicted load value is sent to the first network device.
  • the first network device and the second network device obtain the accuracy of the predicted load value, and determine the network device with better predicted performance.
  • the first network device and the second network device perform at least one of the following:
  • the accuracy of the seventh predicted load value is calculated.
  • the first network device and the second network device perform at least one of the following:
  • the accuracy of the ninth predicted load value is calculated.
  • the accuracy is a numerical value used to describe the deviation between the predicted load value and the actual load value, and is a function of the predicted load value and the actual load value.
  • Exemplarily, the accuracy can be calculated as a function of the deviation between the predicted load value and the actual load value, for example as sketched below:
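  • The metric below is one illustrative possibility (an assumption, not the formula of the embodiment): accuracy decreases with the relative deviation between the predicted and the actual load value.

```python
def prediction_accuracy(predicted: float, actual: float, eps: float = 1e-9) -> float:
    """Illustrative accuracy metric: 1 minus the relative deviation between
    the predicted and actual load values, clamped to [0, 1]."""
    relative_deviation = abs(predicted - actual) / max(abs(actual), eps)
    return max(0.0, 1.0 - relative_deviation)

print(prediction_accuracy(0.66, 0.70))  # closer prediction -> higher accuracy
print(prediction_accuracy(0.55, 0.70))  # larger deviation  -> lower accuracy
```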
  • S1608 The first network device and the second network device determine a network device with better prediction performance.
  • The first network device and the second network device obtain at least one of the accuracy of the sixth predicted load value and the accuracy of the seventh predicted load value, and at least one of the accuracy of the eighth predicted load value and the accuracy of the ninth predicted load value, so as to determine the network device with better prediction performance.
  • the prediction performance of the second network device is better.
  • the accuracy of the obtained predicted load value may also be directly exchanged between the first network device and the second network device, instead of exchanging the obtained predicted load value.
  • the first network device may calculate the accuracy of the sixth predicted load value based on the sixth predicted load value and the third actual load value, and send the accuracy of the sixth predicted load value to the second network device;
  • The second network device may calculate the accuracy of the ninth predicted load value based on the ninth predicted load value and the fourth actual load value, and send the accuracy of the ninth predicted load value to the first network device. Therefore, the first network device and the second network device can determine the network device with better prediction performance based on the accuracy of the sixth predicted load value and the accuracy of the ninth predicted load value.
  • the above method can replace the content executed by the first network device and the second network device in S1606-S1607, thereby reducing the computing overhead of the network device and the overhead of information exchange.
  • If the first network device judges that its own accuracy is better, that is, the case where a network device judges that its own prediction performance is better, S1610 is executed; if the second network device judges that the accuracy of the first network device is better, that is, the case where a network device judges that the accuracy of another network device is better, S1611 is executed.
  • S1610 The first network device sends the first MLB policy to the second network device.
  • the first network device determines that its own accuracy is better, and sends the first MLB policy to the second network device.
  • the first network device may refer to the reference embodiment shown in FIG. 13 to perform parameter modification and/or optimization of the neural network.
  • the second network device determines that the accuracy of other network devices is better, and optimizes the neural network, which may specifically include:
  • S1611a The second network device sends request information for neural network optimization information to the first network device.
  • S1611b The first network device feeds back neural network optimization information to the second network device.
  • After receiving the request message for neural network optimization information, the first network device feeds back neural network optimization information, where the neural network optimization information includes at least one of the following:
  • Parameters of the first neural network and/or the second neural network of the first network device;
  • The analysis result of the deviation between the predicted load value and the actual load value recorded by the first network device, exemplarily, the input quantity that causes the predicted load value to be low or high, and the neural network parameter information related to that input quantity;
  • The input information used by the first neural network and/or the second neural network of the first network device to obtain the predicted load value and the first MLB strategy, for example, one or more of the first network state information, the information indicating the network state of the second network device, and the MLB policy.
  • S1611c The second network device optimizes the first neural network and/or the second neural network according to the neural network optimization information.
  • the optimization of the neural network depends on the internal implementation of the product. Exemplarily, there are the following possible implementations:
  • The second network device may update the first neural network and/or the second neural network of the second network device according to the parameters of the first neural network and/or the second neural network of the first network device sent by the first network device.
  • The second network device may, according to the analysis result of the deviation between the predicted load value and the actual load value sent by the first network device, determine the input quantity that has a greater impact on the deviation and adjust the neural network parameters related to that input quantity.
  • the second network device updates the second MLB policy to the first MLB policy.
  • the second network device judges that its own accuracy is not optimal, and can use the MLB strategy of the network device with better accuracy.
  • the second network device may also determine the MLB policy using its own inference based on other conditions, for example, the MLB policy of the network device with better accuracy exceeds the range allowed by the second network device.
  • the second network device sends the second MLB policy to the first network device.
  • In a possible implementation, since the accuracy of the second network device is not optimal, it may be assumed by default that the second network device does not send the second MLB policy to the first network device.
  • Alternatively, the second network device may send the second MLB policy to the first network device, and after receiving the second MLB policy of the second network device, the first network device may, according to the second MLB policy, determine the MLB policy executed by the second network device and the parameters of the second network device, and determine whether to modify the parameters of the first network device.
  • the second network device may refer to the embodiment shown in FIG. 13 to perform parameter modification and optimization, which will not be repeated here.
  • In this way, a network device can optimize its own neural network according to the neural network of the network device with better prediction performance, for example the best prediction performance, so as to improve the performance of the neural network and further improve the performance of the MLB strategy.
  • the third network device obtains the MLB policy and sends it to the first network device.
  • In a possible implementation, the third network device is an operation and management entity, and the method includes: the third network device sends a third query message to the first network device; the first network device feeds back a third confirmation message to the third network device according to the third query message, where the confirmation message includes information indicating the state information of the first network device; the third network device obtains a third MLB policy through a third neural network based on the information indicating the state information of the first network device; the third network device sends the third MLB policy to the first network device; the first network device modifies the first parameter configuration according to the third MLB policy and feeds back the actual load value to the third network device; and the third network device optimizes the third neural network based on the fed-back actual load value.
  • the first network device optimizes the first neural network and/or the second neural network based on the neural network optimization information sent by the third network device.
  • In another possible implementation, the third network device is an operation and management entity, and the method includes: the first network device executes one or more of the methods in the embodiments shown in Figures 10-16; optionally, the first network device sends to the third network device the predicted load value, the actual load value and the time information obtained by the first network device; the third network device optimizes the third neural network of the third network device based on the predicted load value, the actual load value and the time information, and then sends the neural network optimization information of the third neural network to the first network device; optionally, the first network device optimizes the first neural network and/or the second neural network based on the neural network optimization information of the third neural network.
  • For the neural network optimization information of the third neural network, reference may be made to the relevant content of the neural network optimization information in S1611b.
  • it may further include:
  • the first network device sends one or more of the following prediction and/or inference related first information to the second network device:
  • A service type supported by the first network device, such as an MLB service;
  • Period information or time information of the service supported by the first network device, for example, the MLB service period is 30 minutes, that is, the MLB service is performed every 30 minutes, or the first network device performs the MLB service at 10:00, 10:20, and 11:00; or
  • An indication of the output information of the prediction service of the first network device, for example, indicating that the output information of the MLB service includes parameter configuration information of the first network device and/or the second network device.
  • the first information related to the above prediction and/or reasoning may be included in the aforementioned query message.
  • The aforementioned second network device may feed back the following prediction and/or inference-related second information to the first network device, and this second information may be included in the aforementioned confirmation message that responds to the query message.
  • This second information may contain one or more of the following:
  • Type of service supported by the second network device, such as the MLB service.
  • For example, the second network device may instruct the first network device to feed back, to the second network device, the load of the first network device between 10:00 and 11:00; or, the second network device may instruct the first network device to also feed back the configuration information of the handover threshold between the first network device and the second network device for the period from 10:00 to 11:00.
  • network devices can negotiate and exchange information related to prediction and/or inference, so as to avoid errors caused by misalignment of services.
  • The network device serving as the first network device or the second network device may be a gNB, CU, DU, CU-CP or CU-UP. It can be understood that, in the embodiments described in FIGS. 10-16, a network device can be used as the first network device, as the second network device, or as both the first network device and the second network device.
  • multiple second network devices may be included, and the first network device determines the range of second network devices required to implement the method according to specific design and implementation.
  • FIG. 17 is a schematic structural diagram of an access network device provided by an embodiment of the present application, such as a schematic structural diagram of a base station.
  • the base station can be applied to the system shown in FIG. 1 to perform the functions of the first network device or the second network device in the foregoing method embodiment.
  • Base station 1700 may include one or more DUs 1701 and one or more CUs 1702.
  • CU1702 can communicate with NG core (Next Generation Core Network, NC) or EPC.
  • the DU 1701 may include at least one antenna 17011 , at least one radio frequency unit 17012 , at least one processor 17013 and at least one memory 17014 .
  • the CU 1702 may include at least one processor 17022 and at least one memory 17021 .
  • the CU 1702 and the DU 1701 can communicate through an interface, wherein the control plane interface can be Fs-C, such as F1-C, and the user plane interface can be Fs-U, such as F1-U.
  • the DU 1701 and the CU 1702 may be physically set together, or may be physically separated, that is, a distributed base station.
  • the CU 1702 is the control center of the base station, which can also be called a processing unit, and is mainly used to complete the baseband processing function.
  • the DU 1701 part is mainly used for the transmission and reception of radio frequency signals, the conversion of radio frequency signals and baseband signals, and part of baseband processing.
  • both the CU 1702 and the DU 1701 may perform the relevant operations of the network device in the foregoing method embodiments.
  • the baseband processing on the CU and DU can be divided according to the protocol layers of the wireless network.
  • For example, the functions of the PDCP layer and the protocol layers above it are set in the CU, and the functions of the protocol layers below PDCP, such as the RLC layer, the MAC layer and the PHY layer, are set in the DU.
  • the base station 1700 may include one or more radio frequency units (RUs), one or more DUs and one or more CUs.
  • the DU may include at least one processor 17013 and at least one memory 17014
  • the RU may include at least one antenna 17011 and at least one radio frequency unit 17012
  • the CU may include at least one processor 17022 and at least one memory 17021 .
  • the CU 1702 may be composed of one or more boards, and multiple boards may jointly support a radio access network of a single access standard (such as a 5G network), or may respectively support radio access networks of different access standards (such as an LTE network, a 5G network or other networks).
  • The memory 17021 and the processor 17022 may serve one or more boards. That is, a memory and a processor may be provided separately on each board, or multiple boards may share the same memory and processor. In addition, necessary circuits may also be provided on each board.
  • The DU 1701 may be composed of one or more boards, and multiple boards may jointly support a radio access network of a single access standard (such as a 5G network), or may respectively support radio access networks of different access standards (such as an LTE network, a 5G network or other networks).
  • The memory 17014 and the processor 17013 may serve one or more boards. That is, a memory and a processor may be provided separately on each board, or multiple boards may share the same memory and processor. In addition, necessary circuits may also be provided on each board.
  • FIG. 18 is a schematic structural diagram of a communication apparatus 1800 .
  • the communication apparatus 1800 may be used to implement the methods described in the foregoing method embodiments, and reference may be made to the descriptions in the foregoing method embodiments.
  • the communication apparatus 1800 may be a chip, an access network device (eg, a base station) or other network devices.
  • the communication device 1800 includes one or more processors 1801 .
  • the processor 1801 may be a general-purpose processor or a special-purpose processor, or the like. For example, it may be a baseband processor, or a central processing unit.
  • the baseband processor may be used to process communication protocols and communication data
  • the central processor may be used to control devices (eg, base stations, terminals, AMFs, or chips, etc.), execute software programs, and process data of software programs.
  • the apparatus may include a transceiving unit for implementing signal input (reception) and output (transmission).
  • the device may be a chip, and the transceiver unit may be an input and/or output circuit of the chip, or a communication interface.
  • the chip can be used in a terminal or an access network device (such as a base station) or a core network device.
  • the apparatus may be a terminal or an access network device (such as a base station), and the transceiver unit may be a transceiver, a radio frequency chip, or the like.
  • the communication apparatus 1800 includes one or more processors 1801, and the one or more processors 1801 can implement the methods performed by the first network device and/or the second network device, or by the operation and management entity, in the embodiments shown in FIGS. 10-16.
  • In a possible design, the communication apparatus 1800 includes means for receiving network state information, a predicted load value, an actual load value, an MLB strategy and neural network optimization information from the second network device, and means for modifying the configuration parameters of the first network device and optimizing the first neural network and/or the second neural network.
  • The functions of the described means may be implemented by one or more processors. For example, the transmission may be performed by one or more processors through a transceiver, an input/output circuit, or an interface of a chip. Reference may be made to the relevant descriptions in the foregoing method embodiments.
  • In a possible design, the communication apparatus 1800 includes means for sending one or more of network state information, a predicted load value, an actual load value, an MLB strategy, and neural network optimization information to the second network device, means for generating network state information, and means for running the first neural network and/or the second neural network to generate one or more of the predicted load value, the actual load value, the MLB strategy, and the neural network optimization information.
  • For example, the receiving may be performed by one or more processors through a transceiver, an input/output circuit, or an interface of a chip.
  • the processor 1801 may also implement other functions in addition to implementing the methods in the embodiments shown in FIGS. 10-16 .
  • the processor 1801 may also include instructions 1803, and the instructions may be executed on the processor, so that the communication apparatus 1800 executes the methods described in the foregoing method embodiments.
  • the communication apparatus 1800 may also include a circuit, and the circuit may implement the functions of the access network device or terminal in the foregoing method embodiments.
  • the communication device 1800 may include one or more memories 1802, on which instructions 1804 are stored, and the instructions may be executed on the processor, so that the communication device 1800 executes the methods described in the foregoing method embodiments.
  • data may also be stored in the memory.
  • Instructions and/or data may also be stored in the optional processor.
  • the one or more memories 1802 may store the MLB strategy, neural network optimization information described in the above embodiments, or other information involved in the above embodiments, such as predicted load values, actual load values, and the like.
  • the processor and the memory can be provided separately or integrated together.
  • the communication apparatus 1800 may further include a transceiver unit 1805 and an antenna 1806, or include a communication interface.
  • the transceiver unit 1805 may be referred to as a transceiver, a transceiver circuit, or a transceiver, etc., and is used to implement the transceiver function of the device through the antenna 1806 .
  • the communication interface (not shown in the figure) can be used for communication between a core network device and an access network device, or between access network devices.
  • the communication interface may be a wired communication interface, such as an optical fiber communication interface.
  • the processor 1801 may be referred to as a processing unit, and controls an apparatus (such as a terminal, a base station, or an AMF).
  • The present application further provides a communication system, which includes a combination of one or more of the following: one or more of the foregoing access network devices, one or more terminals, and a core network device.
  • The processor in the embodiments of the present application may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the memory in the embodiments of the present application may be volatile memory or non-volatile memory, or may include both volatile and non-volatile memory.
  • the non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM) or a flash memory.
  • Volatile memory may be random access memory (RAM), which acts as an external cache.
  • By way of example and not limitation, many forms of random access memory (RAM) are available, such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), synchlink dynamic random access memory (SLDRAM), and direct rambus random access memory (DR RAM).
  • the above embodiments may be implemented in whole or in part by software, hardware (eg, circuits), firmware, or any other combination.
  • the above-described embodiments may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions or computer programs. When the computer instructions or computer programs are loaded or executed on a computer, all or part of the processes or functions described in the embodiments of the present application are generated.
  • the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be transmitted from a website, computer, server or data center to another website, computer, server or data center in a wired manner, such as optical fiber, or in a wireless manner, such as infrared, radio, or microwave.
  • the computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device such as a server, a data center, or the like that contains one or more sets of available media.
  • the usable media may be magnetic media (eg, floppy disks, hard disks, magnetic tapes), optical media (eg, DVDs), or semiconductor media.
  • the semiconductor medium may be a solid state drive.
  • the disclosed system, communication apparatus and method may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • the division of the units is only a logical function division, and there may be other division manners in actual implementation.
  • For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the functions, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium.
  • Based on such an understanding, the technical solution of the present application essentially, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product.
  • The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present application.
  • The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.


Abstract

A mobility load balancing method and apparatus. The method includes: obtaining first network state information of a first network device; obtaining information indicating a network state of a second network device; obtaining, based on the first network state information and the information indicating the network state of the second network device, a first mobility load balancing (MLB) policy to take effect; and sending the first MLB policy to the second network device. The first MLB policy includes at least one of the following: a configuration parameter for adjusting the load of the first network device and time information corresponding to the configuration parameter, a configuration parameter for adjusting the load of the second network device and time information corresponding to the configuration parameter, or reliability information of the first MLB policy. With the method and apparatus of the present application, the parameters of a network device can be adjusted dynamically and the flexibility of parameter configuration is enhanced, so that complex network load conditions can be accommodated and the performance of the MLB policy is improved.

Description

一种负载均衡方法,装置及可读存储介质 技术领域
本申请涉及无线通信技术领域,尤其涉及一种负载均衡方法,装置,可读存储介质和系统。
背景技术
移动性负载均衡(Mobility Loading Balance,MLB)是网络自动优化的一个重要功能。网络中每个小区的覆盖区域不同,一些小区比它们的邻区负载多,造成了小区间、基站间负载不平衡的现象。这种情况下会造成负载较小的小区资源的浪费,同时也会影响用户体验,因此需要通过MLB对小区负载进行调整。
在现有通信技术中,基站间交互资源的使用情况,并以此为依据,调整基站的参数配置,从而实现MLB。但是,现有的MLB方法性能较差,会导致网络的性能受到损失,亟待一种更优的MLB方法。
发明内容
本发明实施例提供一种通信方法和装置,通过制定含时间信息或可靠度信息中一项或多项的MLB策略,以期通过含时间信息的MLB策略实现动态控制网络设备的参数,增强参数配置的灵活性,从而适应复杂的网络负载情况,提高MLB策略的性能,或是,以期通过含可靠度信息的MLB策略实现依据对MLB策略的可信程度进行MLB策略的应用判断,提高MLB策略的性能。
第一方面,提供了一种通信方法,该方法的执行主体可以为第一网络设备,也可以为可以对网络设备进行配置和管理的运营和管理实体,还可以为配置于第一网络设备或运营和管理实体中的部件(芯片、电路或其它等),包括:获得第一网络设备的第一网络状态信息;获得第二网络设备的指示网络状态的信息;基于所述第一网络设备的第一网络状态信息及所述第二网络设备的指示网络状态的信息,获得待生效的第一移动性负载均衡MLB策略;向所述第二网络设备发送所述第一MLB策略;所述第一MLB策略包括以下内容的至少一项:用于调整所述第一网络设备负载的配置参数及所述用于调整第一网络设备负载的配置参数对应的时间信息,用于调整所述第二网络设备负载的配置参数及所述用于调整第二网络设备负载的配置参数对应的时间信息,或,所述第一MLB策略的可靠度信息。
其中,上述实施方法的执行主体可以为第一网络设备,比如,基站,也可以为运营和管理实体,例如操作管理维护网管设备(operation,administration and maintenance,OAM)。当执行主体为运营和管理实体时,所述第一网络设备的第一网络状态信息,还可以为第一网络设备的指示网络状态的信息,所述第一网络设备的指示网络状态的信息,所述第一网络设备预测的网络状态信息,或,所述第一网络设备的第十预测负载值。在第一方面后续的实施方法中,所述第一网络状态信息均可以为上述第一网络设备的指示网络状态的信息,不予赘述。当执行主体为运营和管理实体时,上述实施方法还可以包括,向所述第一网络 设备发送所述第一MLB策略。
上述实施方法中,可以包括与多个第二网络设备的交互,可选的,可以由上述方法的执行主体,根据具体的设计和实现确定实施所述方法所涉及的第二网络设备的范围。可选的,以上用于调整所述第一网络设备负载的配置参数及所述用于调整第一网络设备负载的配置参数对应的时间信息可以包括多组用于调整所述第一网络设备负载的配置参数及所述用于调整第一网络设备负载的配置参数对应的时间信息,和/或,用于调整所述第二网络设备负载的配置参数及所述用于调整第二网络设备负载的配置参数对应的时间信息也可以包括多组用于调整所述第二网络设备负载的配置参数及所述用于调整第二网络设备负载的配置参数对应的时间信息,以期更动态的调整网络设备的参数配置,更好的适应复杂的网络负载情况。
通过上述方法,在MLB策略中引入时间信息,可以动态调整网络设备的参数配置,从而适应复杂的网络负载情况,提高MLB策略的性能,或是,在MLB策略中指示该MLB策略的可靠度,从而可依据对MLB策略的可信程度进行MLB策略的应用判断,提高MLB策略的性能。
结合第一方面,在第一方面的第一种可能的实施方式中,所述获得所述第二网络设备的指示网络状态的信息包括:向所述第二网络设备发送第一询问信息,所述第一询问信息用于请求所述指示网络状态的信息;接收来自所述第二网络设备的第一确认信息,所述第一确认信息响应于所述第一询问信息,并携带所述第二网络设备的指示网络状态的信息;其中,所述第二网络设备的指示网络状态的信息包括以下中的至少一项:所述第二网络设备当前的网络状态信息,所述第二网络设备预测的网络状态信息,或,所述第二网络设备的第三预测负载值。
若执行主体为运营和管理实体时,第一方面的第一种可能的实施方式还包括:向所述第一网络设备发送第三询问信息,所述第三询问信息用于请求所述第一网络设备的指示网络状态的信息;接收来自所述第一网络设备的第三确认信息,所述第三确认信息响应于所述第三询问信息,并指示所述第一网络设备的指示网络状态的信息;其中,所述第一网络设备的指示网络状态的信息包括以下中的至少一项:所述第一网络设备当前的网络状态信息,所述第一网络设备预测的网络状态信息,或,所述第一网络设备的第十预测负载值。
通过上述方法,执行主体可以获得与确定第一MLB策略相关的输入信息,从而为确定第一MLB策略提供必要条件。
结合第一方面,或第一方面的第一种可能的实施方式,第一方面的第二种可能的实施方式中,所述基于所述第一网络设备的第一网络状态信息及所述第二网络设备的指示网络状态的信息,获得待生效的第一MLB策略,包括:基于所述第一网络设备的第一网络状态信息及所述第二网络设备的指示网络状态的信息,获得当前MLB策略下所述第一网络设备在第一时间的第一预测负载值或第二网络设备在第一时间的第二预测负载值中的至少一项;基于所述第一预测负载值或第二预测负载值中的至少一项获得所述待生效的第一MLB策略。
结合第一方面第二种可能的实施方式,在第一方面第三种可能的实施方式中,所述基于所述第一网络设备的第一网络状态信息及所述第二网络设备的指示网络状态的信息,获得当前MLB策略下所述第一网络设备在第一时间的第一预测负载值或第二网络设备在第一时间的第二预测负载值中的至少一项,包括:通过第一神经网络,执行以下中的至少一项: 对第一网络设备的负载值基于当前MLB策略进行预测,得到第一时间的第一预测负载值,或,对第二网络设备的负载值基于当前MLB策略进行预测,得到第一时间的第二预测负载值。其中,所述第一神经网络的输入包括所述第一网络设备的第一网络状态信息及所述第二网络设备的指示网络状态的信息,以及当前MLB策略,所述第一神经网络的输出包括所述第一预测负载值或所述第二预测负载值中的至少一项。
结合结合第一方面第二种至第三种可能的实施方式,第一方面第四种可能的实施方式中,所述基于所述第一网络设备的第一网络状态信息及所述第二网络设备的指示网络状态的信息,获得当前MLB策略下所述第一网络设备在第一时间的第一预测负载值或第二网络设备在第一时间的第二预测负载值中的至少一项包括:基于所述第一网络设备的第一网络状态信息及所述第二网络设备的指示网络状态的信息,获得当前MLB策略下所述第一网络设备在第一时间的第一预测负载值;基于所述第一预测负载值或第二预测负载值中的至少一项获得所述待生效的第一MLB策略,包括:通过第二神经网络,获得所述待生效的第一MLB策略;其中,所述第二神经网络的输入包括:所述第二网络设备的指示网络状态的信息或所述第二预测负载值,以及所述第一预测负载值;所述第二神经网络的输出包括所述待生效的第一MLB策略。
通过上述方法,对负载值进行预测,将预测负载值作为获得MLB策略的依据,使MLB策略能够适应变化的网络负载情况,可以提高MLB策略的性能。
结合第一方面第二种至第四种可能的实施方式,第一方面第五种可能的实施方式包括,通过所述第一神经网络获得所述第一MLB策略作用下的第一网络设备在第一时间的第四预测负载值,或,所述第一MLB策略作用下的第二网络设备在第一时间的第五预测负载值,中的至少一项;向所述第二网络设备发送所述第四预测负载值,或,所述第五预测负载值中的至少一项。
若执行主体为运营和管理实体,第一方面第五种可能的实施方式还包括,向第一网络设备发送所述第四预测负载值,或,所述第五预测负载值中的至少一项。
通过上述方法,第一网络设备和第二网络设备可以获得第四预测负载值,或,第五预测负载值中的至少一项,为第一网络设备和/或第二网络设备后续的运行提供信息,例如根据所述第四预测负载值和/或所述第五预测负载值,触发MLB策略的更新。
结合第一方面,及第一方面第一至第五种可能的实施方式,第一方面第六种可能的实施方式包括:确定第二网络设备基于所述第一MLB策略修改用于调整所述第二网络设备负载的参数成功,并,基于所述第一MLB策略,修改用于调整所述第一网络设备负载的参数;获得所述第一MLB策略下第一网络设备在第一时间的第一实际负载值并通过所述第一神经网络获得所述第一MLB策略作用下的第一网络设备在第一时间的第四预测负载值;基于所述第四预测负载值和所述第一实际负载值,对所述第一神经网络或第二神经网络中的至少一项进行优化;和/或,获得所述第一MLB策略下第二网络设备在第一时间的第二实际负载值并通过所述第一神经网络获得所述第一MLB策略作用下的第二网络设备在第一时间的第五预测负载值;基于所述第五预测负载值和所述第二实际负载值,对第一神经网络或第二神经网络中的至少一项进行优化。
结合第一方面第六种可能的实施方式,第一方面第七种可能的实施方式包括:从所述第二网络设备接收指示所述第二网络设备的第二实际负载值的信息。
若执行主体为运营和管理实体,第一方面第七种可能的实施方式还包括:从所述第一网络设备接收指示所述第一网络设备的第一实际负载值的信息。
结合第一方面,及第一方面第一至第五种可能的实施方式,第一方面第八种可能的实施方式包括:从所述第二网络设备接收反馈信息,所述反馈信息指示第二网络设备基于所述第一MLB策略修改用于调整所述第二网络设备负载的参数失败的原因;基于所述反馈信息,对所述第一神经网络和或第二神经网络进行优化。
若执行主体为运营和管理实体,第一方面第八种可能的实施方式还包括:从所述第一网络设备接收反馈信息,所述反馈信息指示第一网络设备基于所述第一MLB策略修改用于调整所述第一网络设备负载的参数失败的原因;基于所述反馈信息,对所述第一神经网络和或第二神经网络进行优化。
通过上述方法,可以对第一神经网络和/或第二神经网络进行优化,从而提高负载预测和MLB策略的准确性。
结合第一方面,及第一方面第一至第八种可能的实施方式,第一方面第九种可能的实施方式包括:基于第六预测负载值或第七预测负载值中的至少一项和第八预测负载值或第九预测负载值中的至少一项的准确度,确定第六预测负载值或第七预测负载值中的至少一项较优;其中,第六预测负载值为第一网络设备获得的当前MLB策略下所述第一网络设备在第二时间的预测负载值;第七预测负载值为为第一网络设备获得的当前MLB策略下所述第二网络设备在第二时间的预测负载值;第八预测负载值为第二网络设备获得的当前MLB策略下所述第一网络设备在第二时间的预测负载值;第九预测负载值为第二网络设备获得的当前MLB策略下所述第一网络设备在第二时间的预测负载值。
结合第一方面第九种可能的实施方式,第一方面第十种可能的实施方式包括:获得第六预测负载值或第七预测负载值中的至少一项和第八预测负载值或第九预测负载值中的至少一项的准确度;其中,获得第六预测负载值的准确度包括:基于所述第一网络设备的第一网络状态信息及所述第二网络设备的指示网络状态的信息,获得所述第六预测负载值;获得所述第一网络设备在当前MLB策略下的第三实际负载值,所述第三实际负载值为所述第一网络设备在第二时间的实际的负载值;基于第六预测负载值和所述第三实际负载值,获得第六预测负载值的准确度;其中,获得第八预测负载值的准确度包括:接收来自第二网络设备的第八预测负载值,所述第八预测负载值为所述第二网络设备在当前MLB策略下所预测的所述第一网络设备在第二时间的负载值;获得所述第一网络设备在当前MLB策略下的第三实际负载值,所述第三实际负载值为所述第一网络设备在第二时间的实际的负载值;基于第八预测负载值和所述第三实际负载值,获得第八预测负载值的准确度;其中,获得第七预测负载值的准确度包括:基于所述第一网络设备的第一网络状态信息及所述第二网络设备的指示网络状态的信息,获得当前MLB策略下第二网络设备在第二时间的第七预测负载值;接收来自第二网络设备的所述第二网络设备在当前MLB策略下的第四实际负载值,所述第四实际负载值为所述第二网络设备在第二时间的实际的负载值;基于所述第七预测负载值和所述第四实际负载,获得第七预测负载值的准确度;其中,获得第九预测负载值的准确度包括:接收来自第二网络设备的第九预测负载和所述第二网络设备在当前MLB策略下的第四实际负载值,所述第九预测负载为所述第二网络设备在当前MLB策略下所预测的所述第二网络设备在第二时间的负载值,所述第四实际负载值为所述第二网络 设备在第二时间的实际的负载值,基于所述第九预测负载值和所述第四实际负载,获得第九预测负载值的准确度。
结合第一方面第九种可能的实施方式,第一方面第十一种可能的实施方式包括:向所述第二网络设备发送所述第六预测负载值或所述第七预测负载值中的至少一项和第三实际负载值;或,向所述第二网络设备发送所述第七预测负载值。
结合第一方面第九种可能的实施方式,第一方面第十二种可能的实施方式包括:接收来自所述第二网络设备的神经网络优化信息的请求;向所述第二网络设备发送神经网络优化信息;所述神经网络优化信息包括以下内容的至少一项:所述第一神经网络和/或第二神经网络的参数相关信息;所述第一神经网络和/或第二神经网络的输入信息;或,实际负载与预测负载之间的差异原因分析结果。
通过上述方法,通过确定准确度较高的预测负载值,从而确定预测准确度较高的设备,进而通过共享性能更好的神经网络的相关信息,对其他设备的神经网络进行优化,从而获得更为准确的预测负载值和MLB策略。
第二方面,提供一种通信方法,该方法的执行主体可以为第二网络设备,或者配置于第二网络设备中的部件(芯片、电路或其它等),包括:第二网络设备发送指示所述第二网络设备的网络状态的信息;所述第二网络设备接收第一移动性负载均衡MLB策略,所述第一MLB策略基于所述指示所述第二网络设备的网络状态的信息;所述第一MLB策略包括以下内容的至少一项:用于调整所述第一网络设备负载的配置参数及所述用于调整所述第一网络设备负载的配置参数对应的时间信息,用于调整所述第二网络设备负载的配置参数及所述用于调整第二网络设备负载的配置参数对应的时间信息,或,所述第一MLB策略的可靠度信息。
结合第二方面,第二方面的第一种可能的实施方式中,所述第二网络设备发送指示第二网络设备的网络状态的信息包括:所述第二网络设备接收第一询问信息;所述第二网络设备发送第一确认信息,所述第一确认信息响应于所述第一询问信息,并包括指示所述第二网络设备的指示网络状态的信息;其中,所述第二网络设备的指示网络状态的信息包括以下中的至少一项:所述第二网络设备当前的网络状态信息,所述第二网络设备预测的网络状态信息,或,所述第二网络设备的第三预测负载值。
结合第二方面及第二方面的第一种可能的实施方式,第二方面的第二种可能的实施方式包括:所述第二网络设备接收第四预测负载值,或,第五预测负载值中的至少一项;所述第四预测负载值,为在所述第一MLB策略下所预测的第一网络设备在第一时间的负载值;所述第五预测负载值,为在所述第一MLB策略下所预测的第二网络设备在第一时间的预测值。
结合第二方面及第二方面的第一至第二种可能的实施方式,第二方面的第三种可能的实施方式包括:所述第二网络设备基于所述第一MLB策略修改用于调整第二网络设备负载的参数并发送所述第二网络设备在所述第一时间的第二实际负载值;或,所述第二网络设备修改用于调整第二网络设备负载的参数失败,所述第二网络设备发送反馈信息,所述反馈信息指示第二参数配置修改失败原因。
结合第二方面及第二方面的第一至第三种可能的实施方式,第二方面的第四种可能的实施方式包括:所述第二网络设备获得当前MLB策略下所述第一网络设备在第二时间的第 八预测负载值并发送所述第八预测负载值,以使得其他设备获得所述第八预测负载值的准确度;或,所述第二网络设备获得当前MLB策略下所述第一网络设备在第二时间的第八预测负载值或在当前MLB策略下所预测的所述第二网络设备在第二时间的第九预测负载值中的至少一项和所述第二网络设备在当前MLB策略下的第二时间的第四实际负载值,并发送所述第八预测负载值或第九预测负载值中的至少一项和第四实际负载值,以使得其他设备获得所述第八预测负载值或第九预测负载值中的至少一项的准确度。
结合第二方面及第二方面的第一至第四种可能的实施方式,第二方面的第五种可能的实施方式包括:所述第二网络设备接收第六预测负载值和第三实际负载值,所述第六预测负载值为其他设备在当前MLB策略下预测的第一网络设备在第二时间的负载值,所述第三实际负载值为所述第一网络设备在第二时间的实际负载值;所述第二网络设备基于所述第六预测负载值,所述第八预测负载值和所述第三实际负载值,计算第六预测负载值和第八预测负载值的准确度并确定所述第六预测负载值较优;和/或,
所述第二网络设备接收第七预测负载值,所述第七预测负载值为其他设备在当前MLB策略下预测的第二网络设备在第二时间的负载值,所述第二网络设备基于所述第七预测负载值,所述第九预测负载值和所述第四实际负载值,计算第七预测负载值和第九预测负载值的准确度并确定所述第七预测负载值较优;和/或,
所述第二网络设备接收第六预测负载值和第三实际负载值,所述第六预测负载值为其他设备在当前MLB策略下预测的第一网络设备在第二时间的负载值,所述第三实际负载值为所述第一网络设备在第二时间的实际负载值;所述第二网络设备基于所述第六预测负载值,所述第三实际负载值,所述第九预测负载值和所述第四实际负载值,计算第六预测负载值和第九预测负载值的准确度并确定所述第六预测负载值较优;和/或,
所述第二网络设备接收第七预测负载值和第三实际负载值,所述第七预测负载值为其他设备在当前MLB策略下预测的第二网络设备在第二时间的负载值,所述第三实际负载值为所述第一网络设备在第二时间的实际负载值;所述第二网络设备基于所述第七预测负载值,所述第八预测负载值,所述第三实际负载值和所述第四实际负载值,计算第七预测负载值和第八预测负载值的准确度并确定所述第七预测负载值较优。
本申请实施例第三方面提供了一种通信装置,本申请提供的装置具有实现上述方法方面中第一网络设备或运营和管理实体行为的功能,其包括用于执行上述方法方面所描述的步骤或功能相对应的部件(means)。所述步骤或功能可以通过软件实现,或硬件实现,或者通过硬件和软件结合来实现。
在一种可能的设计中,上述装置包括一个或多个处理器,进一步的,可以包括通信单元。所述一个或多个处理器被配置为支持所述装置执行上述方法中第一网络设备或运营和管理实体相应的功能。例如,获得第一MLB策略。所述通信单元用于支持所述装置与其他设备通信,实现接收和/或发送功能。例如,向其他设备发送第一MLB策略。
可选的,所述装置还可以包括一个或多个存储器,所述存储器用于与处理器耦合,其保存基站必要的程序指令和/或数据。所述一个或多个存储器可以和处理器集成在一起,也可以与处理器分离设置。本申请并不限定。
所述装置可以为基站,下一代基站(Next Generation NodeB,gNB)或传输点(Transmitting and Receiving Point,TRP),分布式单元(distributed unit,DU)或集中式单元(centralized unit, CU)、OAM等,所述通信单元可以是收发器,或收发电路。可选的,所述收发器也可以为输入/输出电路或者接口。
所述装置还可以为芯片。所述通信单元可以为芯片的输入/输出电路或者接口。
另一个可能的设计中,上述装置,包括收发器、处理器和存储器。该处理器用于控制收发器收发信号,该存储器用于存储计算机程序,该处理器用于运行存储器中的计算机程序,使得该装置执行第一方面中网络设备或运营和管理实体完成的方法。
在一种可能的设计中,上述装置包括一个或多个处理器,进一步的,可以包括通信单元。所述一个或多个处理器被配置为支持所述装置执行上述方法中第二网络设备相应的功能。例如,确定指示第二网络设备的网络状态的信息。所述通信单元用于支持所述装置与其他设备通信,实现接收和/或发送功能。例如,接收第一MLB策略。
可选的,所述装置还可以包括一个或多个存储器,所述存储器用于与处理器耦合,其保存装置必要的程序指令和/或数据。所述一个或多个存储器可以和处理器集成在一起,也可以与处理器分离设置。本申请并不限定。
所述装置可以为基站,下一代基站(Next Generation NodeB,gNB)或传输点(Transmitting and Receiving Point,TRP),分布式单元(distributed unit,DU)或集中式单元(centralized unit,CU)等,所述通信单元可以是收发器,或收发电路。可选的,所述收发器也可以为输入/输出电路或者接口。
所述装置还可以为芯片。所述通信单元可以为芯片的输入/输出电路或者接口。
另一个可能的设计中,上述装置,包括收发器、处理器和存储器。该处理器用于控制收发器收发信号,该存储器用于存储计算机程序,该处理器用于运行该存储器中的计算机程序,使得该装置执行第一方面中网络设备完成的方法。
第四方面,提供了一种系统,该系统包括上述第一网络设备,第二网络设备或运营和管理实体中的一项或多项。
第五方面,提供了一种可读存储介质或程序产品,用于存储程序,该程序包括用于执行第一方面或第二方面中的方法的指令。
第六方面,提供了一种可读存储介质或程序产品,用于存储程序,当所述程序在处理器上运行时,使得包括所述处理器的装置执行第一方面或第二方面中的方法的指令。
应当理解的是,本申请的第二方面至第六方面与本申请的第一方面的技术方案相对应,各方面及对应的可行实施方式所取得的有益效果相似,不予赘述。
附图说明
图1为本申请实施例提供的一种通信系统的示意图;
图2为本申请实施例提供的一种多个DU共用一个CU的网络架构示意图;
图3为本申请实施例提供的一种CU和DU的协议层功能的示意图;
图4为本申请实施例提供的一种RRC状态转变示意图;
图5为本申请实施例提供的资源状态报告初始化流程的示意图;
图6为本申请实施例提供的资源状态报告流程的示意图;
图7为本申请实施例提供的移动性参数改变流程的示意图;
图8为本申请实施例提供的神经元结构示意图;
图9为本申请实施例提供的神经网络的层关系示意图;
图10为本申请实施例提供的一种可能的实施方式的流程图;
图11为本申请实施例提供的一种通信方法的流程示意图;
图12a为本申请实施例提供的一种通信方法的流程示意图;
图12b为本申请实施例提供的一种通信方法的流程示意图;
图12c为本申请实施例提供的一种通信方法的流程示意图;
图13为本申请实施例提供的一种通信方法的流程示意图;
图14a为本申请实施例提供的一种通信方法的流程示意图;
图14b为本申请实施例提供的一种通信方法的流程示意图;
图14c为本申请实施例提供的一种通信方法的流程示意图;
图15为本申请实施例提供的一种通信方法的流程示意图;
图16为本申请实施例提供的一种通信方法的流程示意图;
图17为本申请实施例提供的一种接入网设备的结构示意图;
图18为本申请实施例提供的一种通信装置的结构示意图。
具体实施方式
为了便于清楚描述本申请实施例的技术方案,在本申请的实施例中,采用了“第一”、“第二”等字样对功能和作用基本相同的相同项或相似项进行区分。例如,第一信息和第二信息仅仅是为了区分不同的信息,并不对其先后顺序进行限定。本领域技术人员可以理解“第一”、“第二”等字样并不对数量和执行次序进行限定,并且“第一”、“第二”等字样也并不限定一定不同。
需要说明的是,本申请实施例中,“示例性的”或者“例如”等词用于表示作例子、例证或说明。本申请中被描述为“示例性的”或者“例如”的任何实施例或设计方案不应被解释为比其他实施例或设计方案更优选或更具优势。确切而言,使用“示例性的”或者“例如”等词旨在以具体方式呈现相关概念。
本申请实施例中,“至少一项(个)”是指一个或者多个,“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B的情况,其中A,B可以是单数或者复数。字符“/”一般表示前后关联对象是一种“或”的关系,但也可能表示的是一种“和/或”的关系,具体可参考前后文进行理解。“以下至少一项(个)”或其类似表达,是指的这些项中的任意组合,包括单项(个)或复数项(个)的任意组合。例如,a,b,或c中的至少一项(个),可以表示:a,b,c,a-b,a-c,b-c,或a-b-c,其中a,b,c可以是单个,也可以是多个。
本申请实施例的技术方案可以应用于各种通信系统,例如:长期演进(long term evolution,LTE)系统,全球互联微波接入(worldwide interoperability for microwave access,WiMAX)通信系统,第五代(5th Generation,5G)系统,如新一代无线接入技术(new radio access technology,NR),多种系统融合的网络,物联网系统,车联网系统,以及未来的通信系统,如6G系统等。
本申请实施例描述的网络架构以及业务场景是为了更加清楚的说明本申请实施例的技 术方案,并不构成对于本申请实施例提供的技术方案的限定,本领域普通技术人员可知,随着网络架构的演变和新业务场景的出现,本申请实施例提供的技术方案对于类似的技术问题,同样适用。
本申请实施例中不同基站可以为具有不同的标识的基站,也可以为具有相同的标识的被部署在不同地理位置的基站。部分场景中,在基站被部署前,基站不知道其是否会涉及本申请实施例所应用的场景,基站或基带芯片,可以在部署前支持本申请实施例所提供的方法。部分场景中,也可以通过部署后的升级或加载,来支持本申请实施例所提供的方法。可以理解的是,前述具有不同标识的基站可以对应于基站标识,也可以对应于小区标识或者其他标识。
本申请实施例中部分场景以无线通信网络中NR网络的场景为例进行说明,应当指出的是,本申请实施例中的方案还可以应用于其他无线通信网络中,相应的名称也可以用其他无线通信网络中的对应功能的名称进行替代。
为便于理解本申请实施例,首先以图1中示出的通信系统为例详细说明适用于本申请实施例的通信系统。图1示出了适用于本申请实施例的通信方法的通信系统的示意图。如图1所示,该通信系统100包括接入网设备101(gNB1和gNB2),用户设备(user equipment,UE)102,核心网设备(core network,CN)103和运营和管理实体104。接入网设备101可配置有多个天线,UE102也可配置有多个天线。接入网设备和核心网设备可以统称为网络设备,或,网络侧设备,接入网和核心网可以统称为网络侧。
应理解,接入网设备和终端还可包括与信号发送和接收相关的多个部件(例如,处理器、调制器、复用器、解调器或解复用器等)。
其中,接入网设备是指将终端接入到无线网络的无线接入网(radio access network,RAN)节点(或设备),又可以称为基站。接入网设备为具有无线收发功能的设备或可设置于该设备的芯片,该设备可以广义的覆盖如下中的各种名称,或与如下名称进行替换,比如:节点B(nodeB)、演进型基站(evolved nodeB,eNB)、gNB、中继站、接入点、TRP、发射点(transmitting point,TP)、主站MeNB、辅站SeNB、多制式无线(MSR)节点、家庭基站、网络控制器、接入节点、无线节点、接入点(AP)、传输节点、收发节点、基带单元(BBU)、射频拉远单元(RRU)、有源天线单元(AAU)、射频头(RRH)、集中式单元(CU)、分布单元(DU)、定位节点等。基站可以是宏基站、微基站、中继节点、施主节点或类似物,或其组合。基站还可以指用于设置于前述设备或装置内的通信模块、调制解调器或芯片。基站还可以是移动交换中心以及D2D、V2X、M2M通信中承担基站功能的设备、6G网络中的网络侧设备、未来的通信系统中承担基站功能的设备等。基站可以支持相同或不同接入技术的网络。本申请的实施例对网络设备所采用的具体技术和具体设备形态不做限定。
该设备可以是固定的,也可以是移动的。例如,直升机或无人机可以被配置成充当移动基站,一个或多个小区可以根据该移动基站的位置移动。在其他示例中,直升机或无人机可以被配置成用作与另一基站通信的设备。
在一些部署中,接入网设备(例如:gNB)可以包括BBU和RRU。部分基带功能,比如波束赋形功能,可以在BBU中实现,或者,在RRU中实现。BBU和RRU之间的连接接口可以为通用公共无线接口(common public radio interface,CPRI),或者,增强的通用公共无线接口(enhance CPRI,eCPRI)。在另一些部署中,接入网设备可以包括CU和DU。CU和 DU可以理解为是对基站从逻辑功能角度的划分,CU和DU在物理上可以分离,也可以部署在一起。例如,多个DU可以共用一个CU或者一个DU也可以连接多个CU,CU和DU之间可以通过F1接口相连。示例性的,图2为本申请实施例提供的一种多个DU共用一个CU的网络架构示意图,如图2所示,核心网和RAN互连通信,RAN中的基站分离成CU和DU,多个DU共用一个CU。图2所示的网络架构可以应用于5G通信系统,也可以与LTE系统共享一个或多个部件或资源。包括CU节点和DU节点的接入网设备将协议层拆分开,部分协议层的功能放在CU集中控制,剩下部分或全部协议层的功能分布在DU中,由CU集中控制DU。作为一种实现方式,如图3所示,CU部署有协议栈中的无线资源控制(radio resource control,RRC)层,分组数据汇聚层协议(packet data convergence protocol,PDCP)层,以及业务数据适应协议(service data adaptation protocol,SDAP)层;DU部署有协议栈中的无线链路控制(radio link control,RLC)层,媒体接入控制(medium access control,MAC)层,以及物理层(physical layer,PHY)。从而,CU具有RRC、PDCP和SDAP的处理能力。DU具有RLC、MAC和PHY的处理能力。可以理解的是,上述功能的切分仅为一个示例,不构成对CU和DU的限定。
CU的功能可以由一个实体来实现也可以由不同的实体实现。例如,可以对CU的功能进行进一步切分,例如,将控制面(control plane,CP)和用户面(user plane,UP)分离,即CU的控制面(CU-CP)和CU用户面(CU-UP)。例如,CU-CP和CU-UP可以由不同的功能实体来实现,所述CU-CP和CU-UP可以与DU相耦合,共同完成基站的功能。一种可能的方式中,CU-CP负责控制面功能,主要包含RRC和PDCP控制面PDCP-C。PDCP-C主要负责控制面数据的加解密,完整性保护,数据传输等中的一项或多项。CU-UP负责用户面功能,主要包含SDAP和PDCP用户面PDCP-U。其中SDAP主要负责将核心网的数据进行处理并将数据流(flow)映射到承载。PDCP-U主要负责数据面的加解密,完整性保护,头压缩,序列号维护,数据传输等中的一项或多项。其中CU-CP和CU-UP通过E1接口连接。CU-CP代表接入网设备通过Ng接口和核心网连接。CU-CP通过F1-C(控制面)和DU连接。CU-UP通过F1-U(用户面)和DU连接。可以理解的是,接入网设备可以为CU节点、或DU节点、或包括CU节点和DU节点的设备。此外,CU可以划分为无线接入网RAN中的设备,也可以将CU划分为核心网CN中的设备,在此不做限制。
终端也可以称为终端设备、用户设备(user equipment,UE)、接入终端、用户单元、用户站、移动终端(mobile terminal,MT)、移动台(mobile station,MS)、远方站、远程终端、移动设备、用户终端、无线通信设备、用户代理或用户装置。终端是指向用户提供语音和/或数据连通性的设备,可以用于连接人、物和机。本申请的实施例中的终端可以是手机(mobile phone)、平板电脑(Pad)、带无线收发功能的电脑、可穿戴设备、移动互联网设备(mobile internet device,MID)、虚拟现实(virtual reality,VR)终端、增强现实(augmented reality,AR)终端、工业控制(industrial control)中的无线终端、无人驾驶(self-driving)中的无线终端、远程医疗(remote medical)中的无线终端、智能电网(smart grid)中的无线终端、运输安全(transportation safety)中的无线终端、智慧城市(smart city)中的无线终端、智慧家庭(smart home)中的无线终端等等。本申请的实施例对应用场景不做限定。本申请中由终端实现的方法和步骤,也可以由可用于终端的部件(例如芯片或者电路)等实现。本申请中将前述终端及可设置于前述终端的部件(例如芯片或者电路)统 称为终端。可选的,终端也可以用于充当基站。例如,终端可以充当调度实体,其在V2X或D2D等中的终端之间提供侧行链路信号。比如,蜂窝电话和汽车利用侧行链路信号彼此通信。蜂窝电话和智能家居设备之间通信,而无需通过基站中继通信信号。
核心网设备,是指为终端提供业务支持的核心网(core network,CN)中的设备。目前,一些核心网设备的举例为:接入和移动性管理功能(access and mobility management function,AMF)实体、会话管理功能(session management function,SMF)实体、用户面功能(user plane function,UPF)实体等等,此处不一一列举。其中,所述AMF实体可以负责终端的接入管理和移动性管理;所述SMF实体可以负责会话管理,如用户的会话建立等;所述UPF实体可以是用户面的功能实体,主要负责连接外部网络。需要说明的是,本申请中实体也可以称为网元或功能实体,例如,AMF实体也可以称为AMF网元或AMF功能实体,又例如,SMF实体也可以称为SMF网元或SMF功能实体等。
网元管理设备,是指负责接入网设备的配置和管理的设备。这里,运营和管理实体可以为OAM,或者为网络管理系统,本申请实施例对运营和管理实体的命名方式不做限定。
在该通信系统100中,gNB1和gNB2均可以与多个UE通信。但应理解,与gNB1通信的UE和与gNB2通信的UE可以是相同的,也可以是不同的。图1中示出的UE 102可同时与gNB1和gNB2通信,但这仅示出了一种可能的场景,在某些场景中,UE可能仅与gNB1或gNB2通信,本申请对此不做限定。应理解,图1仅为便于理解而示例的简化示意图,该通信系统中还可以包括其他接入网设备,终端,或者核心网设备,图1中未予以画出。在NR和LTE系统中,UE的无线资源控制(radio resource control,RRC)状态包括连接态(RRC_CONNECTED)、空闲态(RRC_IDLE)、去激活态(RRC_INACTIVE,或者称为第三态)。其中,RRC去激活(inactive)状态是终端通过基站连接到5G核心网中新引入的状态,该状态介于连接态和空闲态之间。在RRC_INACTIVE状态下,终端与接入网设备之间没有RRC连接,但保持接入网设备与核心网设备的连接,终端保存有建立/恢复连接所必须的全部或部分信息。因而在RRC_INACTIVE状态下,终端在需要建立连接时,可以根据保存的相关信息,快速地与网络设备建立或恢复RRC连接。
当UE处于RRC_CONNECTED状态时,UE与基站以及核心网都已建立链路,当有数据到达网络时可以直接传送到UE;当UE处于RRC_INACTIVE状态时,表示UE之前和基站以及核心网建立过链路,但是UE到基站这一段链路被释放,但是基站会存储UE的上下文,当有数据需要传输时,基站可以快速恢复这段链路;当UE处于RRC_IDLE状态时,UE与基站和网络之间都没有链路,当有数据需要传输时,需要建立UE到基站及核心网的链路。
示例性的,图4为本申请实施例提供的一种RRC状态转变示意图,如图4所示,在RRC_IDLE态时,UE可以接入基站,接入过程中或接入基站后UE可以和基站进行RRC建立过程,使得UE的状态从RRC_IDLE态转换为RRC_CONNECTED态。在RRC_IDLE态时,UE从基站接收到寻呼消息后或者由UE的高层触发后,UE可以发起RRC建立过程,试图和基站建立RRC连接以进入RRC_CONNECTED态。UE是RRC_IDLE态时,没有UE和基站之间的RRC连接。当UE处于RRC_CONNECTED状态时,基站可以通过释放RRC过程,例如向UE发送RRC释放(RRCRelease)消息,使得UE的状态从RRC_CONNECTED态转变为RRC_IDLE状态或RRC_INACTIVE状态。当UE处于RRC_INACTIVE状态时, UE可以通过释放RRC连接而进入RRC_IDLE状态,或者,UE可以通过恢复RRC连接而进入RRC_CONNECTED状态。
移动性负载均衡可以包括:通过调整小区的切换参数,使部分处于RRC_CONNECTED态的UE从负载较高的小区切换到负载较低的小区;和/或,通过调整小区重选参数,使部分处于RRC_IDLE态或RRC_INACTICE态的UE重选到负载较低的小区,避免由RRC_IDLE态或RRC_INACTICE态的UE发起业务导致潜在的负载不均衡情况。
在现有通信机制中,如LTE和NR中,主要通过网络设备间共享资源的使用情况,基于小区的负载对小区的切换参数和重选参数进行调整,从而达到移动性负载均衡的目的。现有技术中,网络设备间的各接口,例如NR基站和核心网间,LTE基站和核心网间,NR基站间,LTE基站间,CU与DU之间,CU-CP与CU-UP之间,已具有交互资源状态信息的功能,下面以NR基站间接口,如,Xn接口,为例,对MLB的基本流程进行介绍。
网络设备间的资源使用情况交互可以通过MLB的资源状态报告初始化(resource status reporting initiation)流程控制,如图5所示:
S501:第一网络设备向第二网络设备发送资源状态的请求,比如该请求通过资源状态请求消息(Resource Status Request)发送。
可选的,在资源状态的请求中指示需要第二网络设备反馈的资源状态相关的信息。
具体的,第一网络设备向第二网络设备发送Resource Status Request消息,其中携带指示第二网络设备开始测量,停止测量,或增加测量的小区的指示信息,并可以在该消息的报告特征(Report Characteristics)信元中指示需要第二网络设备反馈的资源状态相关的信息。
第二网络设备根据第一网络设备发送的Resource Status Request消息,执行相应操作。
若能够成功执行,例如可以成功测量第一网络设备请求的资源状态信息,则执行S502a;若不能成功执行,例如无法测量第一网络设备请求的资源状态信息,则执行S502b。
S502a:向第一网络设备回复响应,比如通过资源状态响应(Resource Status Response)消息进行响应。
S502b:向第一网络设备回复失败信息,比如通过资源状态失败(Resource Status Failure)消息进行回复。失败信息可以包括失败的原因,如某种资源的状态无法测量。
第二网络设备完成对第一网络设备请求的资源的测量后,可以通过资源状态报告(Resource Status Reporting)流程,向第一网络设备发送测量结果,如图6所示:
S601:第二网络设备向第一网络设备发送资源状态的信息,比如,通过资源状态更新(Resource Status Update)消息进行发送。
第二网络设备周期性地根据最新的第一网络设备发送的资源状态请求进行测量,完成资源状态的测量后,发送Resource Status Update消息,将资源状态的信息通过Resource Status Update消息发送给第一网络设备。
第一网络设备收到第二网络设备的Resource Status Update消息后,若判断需要更改移动性参数,则可以通过MLB的移动性参数改变(Mobility Settings Change)流程进行协商更改,如图7所示:
S701:第一网络设备向第二网络设备发送请求改变移动性参数的信息,比如通过移动性参数改变请求(Mobility Change Request)消息进行发送。
第一网络设备发起移动性参数改变请求可以由多种条件触发,其中一种触发条件为第 一网络设备和第二网络设备在共享了资源使用情况后,判别需要调整移动性参数。例如,第一网络设备发现第二网络设备的负载较小,因此确定需要提高第二网络设备向第一网络设备切换的触发阈值,从而使第二网络设备上的UE更难切换至第一网络设备,让更多UE停留在第二网络设备。
需要注意的是,移动性参数改变流程的参数是针对两个邻区的,即针对一个特定的小区对另一个特定的小区的参数。
若第二网络设备确定可以接受第一网络设备提出的移动性参数改变,执行S702a;若第二网络设备确定不可以接受第一网络设备提出的移动性参数改变,执行S702b。
S702a:第二网络设备向第一网络设备回复参数改变的成功响应,比如通过发送移动性参数改变成功(Mobility Change Acknowledge)进行响应。
S702b:第二网络设备向第一网络设备回复参数改变的失败响应,比如通过发送移动性参数改变失败(Mobility Change Failure)消息进行响应。参数改变的失败响应可以包括失败的原因和第二网络设备支持的移动性参数变化的范围中的至少一项。
可选的,第一网络设备接收Mobility Change Failure消息后,可以根据其中的失败原因和第二网络设备支持的移动性参数变化的范围,执行S701,重新发起移动性参数改变流程。
以下,对本申请实施例中的部分用语进行解释说明,以便于本领域技术人员理解:
一、神经网络
神经网络是机器学习的一种具体实现形式。机器学习(machine learning,ML)在近年来引起了学术界和工业界的广泛关注。由于机器学习在面对结构化信息与海量数据时的巨大优势,诸多通信领域的研究者也将目光投向机器学习。
神经网络可以用来执行分类任务、预测任务,也可以用来建立变量间的条件概率分布。常见的神经网络包括深度神经网络(deep neural network,DNN)、生成型神经网络(generative neural network,GNN)等。根据网络的构建方式,DNN可包括前馈神经网络(feedforward neural network,FNN)、卷积神经网络(convolutional neural networks,CNN)和递归神经网络(recurrent neural network,RNN)等。GNN包括生成对抗网络(Generative Adversarial Network,GAN)和变分自编码器(Variational Autoencoder,VAE)。
神经网络是以神经元为基础而构造的，下面以DNN为例介绍神经网络的计算和优化机制，可以理解的是，本发明实施例中对神经网络的具体实现方式不做限制。DNN网络中，每个神经元都对其输入值做加权求和运算，将加权求和结果通过一个激活函数产生输出。如图8所示，图8为神经元结构示意图。假设神经元的输入为x1, x2, …, xn，与输入对应的权值为w1, w2, …, wn，加权求和的偏置为b，激活函数f的形式可以多样化，则一个神经元的输出为：y = f(w1·x1 + w2·x2 + … + wn·xn + b)。其中，wi·xi表示wi与xi的乘积。DNN一般具有多层结构，DNN的每一层都可包含多个神经元，输入层将接收到的数值经过神经元处理后，传递给中间的隐藏层。类似的，隐藏层再将计算结果传递给最后的输出层，产生DNN的最后输出。如图9所示，图9为神经网络的层关系示意图。DNN一般具有一个或多个隐藏层，隐藏层往往直接影响提取信息和拟合函数的能力。增加DNN的隐藏层数或扩大每一层的神经元的个数都可以提高DNN的函数拟合能力。每个神经元的参数可以包括权值、偏置和激活函数，DNN中所有神经元的参数构成的集合称为DNN参数（或称为神经网络参数）。神经元的权值和偏置可以通过训练过程得到优化，从而使得DNN具备提取数据特征、表达映射关系的能力。
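为便于理解上述神经元的加权求和与激活计算，下面给出一段Python示意代码（其中的权值、偏置、激活函数及网络规模均为随意假设的示例，仅用于说明，并非本申请实施例的限定实现）：
```python
# 示意性示例：单个神经元及简单DNN前向计算
import math

def neuron(inputs, weights, bias, activation):
    # 加权求和后通过激活函数产生输出：y = f(sum(w_i * x_i) + b)
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return activation(z)

def relu(z):
    return max(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# 一个极简的多层结构：输入层 -> 隐藏层(2个神经元) -> 输出层(1个神经元)
def tiny_dnn(x):
    h1 = neuron(x, [0.5, -0.2, 0.1], 0.3, relu)
    h2 = neuron(x, [-0.4, 0.6, 0.2], -0.1, relu)
    return neuron([h1, h2], [0.7, 0.3], 0.05, sigmoid)

print(tiny_dnn([1.0, 2.0, 0.5]))
```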
得益于神经网络在建模和提取信息特征的优势,可以设计基于神经网络的通信方案。为了支持不同的应用场景并获得良好的结果,需要对神经网络的参数进行设置与优化。所述神经网络的参数包括神经网络相关的信息,示例性的,可以包括以下内容的一项或多项:
神经网络的类型,例如深度神经网络,或,生成型神经网络;
神经网络结构相关的信息,例如神经网络的层数,神经元的数量等中的一项或多项;
神经网络中每个神经元的参数,例如权值、偏置和激活函数等中的一项或多项。
二、网络状态信息
网络状态信息,为与网络资源使用情况相关的信息,可以包括以下内容的至少一项:空口资源状态信息、传输网络层资源状态信息、小区资源状态信息、网络设备的硬件资源状态信息、网络设备的各网络切片的容量资源使用情况、网络设备不同业务类型的负载信息,或,用户路径预判。
其中,资源状态信息指示资源的使用情况,例如,已使用的所述资源占总量的百分比,或者,未使用的所述资源占总量的百分比中的一项,及所述资源的总量。资源状态信息中还可以包含所述资源的容量等级,即所述资源总容量的档位,所述档位对应的容量数值的范围可以在协议中进行约定。
其中,空口资源状态信息,包括以下内容的至少一项:
小区的保障比特速率(guarantee bit rate,GBR)及非保障比特速率(Non-GBR)资源使用情况中的至少一项;
同步信号与物理广播信道块(synchronization signal and PBCH block,SSB,简称同步信道块)波束的GBR及Non-GBR资源使用情况中的至少一项;
小区物理下行控制信道控制信道元素(physical downlink control channel control channel element,PDCCH CCE)资源使用情况;
小区探测参考信号(sounding reference signal,SRS)资源使用情况;
小区的物理随机接入信道(physical random access channel,PRACH)资源使用情况;
SSB波束的PRACH资源使用情况;
小区上行和下行占用的PDCCH CCE的数量,或,CCE总个数与上行和下行占用比例的信息;或,
小区物理上行控制信道(physical uplink control channel,PUCCH)不同格式的码道占用比例和资源块(resource block,RB)占用比例,例如,PUCCH Format1格式码道占用比例为50%,占用2RB。
其中,传输网络层(transport network layer,TNL)资源状态信息,可以包括:
小区上行和/或下行TNL资源使用情况。
其中,小区资源状态信息,可以包括以下内容的至少一项:
小区和/或各SSB波束的容量等级;
小区和/或各SSB波束容量资源使用情况;
小区激活的用户数量;或
小区RRC连接数。
其中,网络设备不同业务类型的负载信息,可以包括以下内容的至少一项:
小区的业务类型,其中,业务类型可以根据业务的业务服务质量(quality of service, QoS)进行划分,比如QoS为1的,对应第一业务类型,QoS为2的对应第二业务类型;或,
小区每种业务类型的负载信息,例如,业务类型对应的流量值和/或用户数。
其中,用户路径预判,可以包括以下内容的至少一项:
网络设备中各用户的位置信息,例如用户的全球定位系统(global positioning system,GPS)坐标;
网络设备中各用户的参考信号接收功率(reference signal receiving power,RSRP)及参考信号接收质量(reference signal received quality,RSRQ)测量信息;或,
网络设备中保存的用户路径信息,例如各用户的GPS坐标信息。
上述内容中,资源已使用的比例和/或未使用的比例,可以为具体的数值,也可以为档位对应的索引,例如,如表1所示,在协议中约定索引-比例对照表,其中,比例信息区间值,示例性的,若资源已使用比例为“22%”,在[0%,25%)的区间内,则资源对应的索引值为“0”。
表1 索引值-比例对照表样例
（表1的具体内容在原公开文本中以图像形式给出：Figure PCTCN2022078983-appb-000001，示出索引值与资源使用比例区间的对应关系。）
三、负载值
负载值可以表示网络设备的负载情况。可选的,所述负载值可以为根据网络状态信息中的内容,计算得到的数值。示例性的,所述负载值可以为对网络状态信息中的各项数值,通过对应的权值进行叠加,所得到的数值。
其中,上述方法中,所用到的权值的初始值可以由OAM提供,或,为预设值,或,基于对该网络设备的配置进行设置,并可以在获得实际负载值后,根据预测值和实际值对权值进行迭代优化。
一种可能的实施方法中,根据网络状态信息中的内容,得到负载值的计算和优化过程可以通过神经网络完成。
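作为一个简化的示意，下面的Python代码按上述加权叠加的方式由网络状态信息计算负载值（其中的状态项、权值及数值均为假设的示例，实际的状态项、权值来源及归一化方式依具体设计确定）：
```python
# 示意性示例：将网络状态信息中的各项数值按对应权值叠加得到负载值
def load_value(state_info, weights):
    # state_info 与 weights 为键一致的字典，负载值为各项数值的加权和
    return sum(weights[k] * state_info[k] for k in state_info)

state_info = {
    "air_interface_usage": 0.62,  # 空口资源已使用比例（示例）
    "rrc_connections": 0.45,      # 小区RRC连接数（归一化后，示例）
    "tnl_usage": 0.30,            # 传输网络层资源已使用比例（示例）
}
# 权值初始值可由OAM提供、为预设值，或基于网络设备的配置进行设置
weights = {"air_interface_usage": 0.5, "rrc_connections": 0.3, "tnl_usage": 0.2}

print(load_value(state_info, weights))
```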
四、实际值和预测值
需要注意的是,所述实际值和预测值应与时间相对应,例如第一时间的预测负载值,即所述负载值对应的时间为第一时间,其中实际值和预测值所对应的时间,即第一时间,可以为时间段,或者,为时间点。
实际值包括实际网络状态信息和实际负载值。实际网络状态信息是网络设备在对应时间的网络状态信息的实际值,可以通过网络设备的统计和计算获得。实际负载值是网络设备在对应时间的负载值的实际值,可以根据对应时间的实际网络状态信息通过计算获得,计算的方法可以参考上述负载值的获得方法。
预测值包括预测网络状态信息和预测负载值。预测网络状态信息是网络设备在对应时间的网络状态信息的预测结果,在一种可能的实施方法中,可以根据当前的网络状态信息 获得,比如,基于当前网络状态信息的纠偏值和历史上与第一时间对应的至少一个时间的网络状态信息的函数来获得所预测的第一时间的网络状态信息。其中,该函数可以为加权和。具体以预测第一网络设备10:00~11:00的PDCCH CCE资源状态信息为例,在一种可能的实施方法中,将第一网络设备当前时间的PDCCH CCE资源已使用的比例,与当前时间之前一个小时内第一网络设备获得的数次第一网络设备的PDCCH CCE已使用的比例数值进行卡尔曼滤波,得到纠偏值,并获得,前7天内每日10:00~11:00第一网络设备的实际PDCCH CCE占用比例数值,得到历史值,将纠偏值和历史值按照相应的权值叠加,从而得到第一网络设备的10:00~11:00的PDCCH CCE资源已使用的比例的预测结果。预测负载值是负载值的预测结果,一种可能的实施方法中,首先可以根据上述方法,通过当前的网络状态信息获得预测网络状态信息,然后参考上述负载值的计算方法获得预测负载值。
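下面给出一段与上述预测方法对应的Python示意代码：先对近期观测做一维卡尔曼滤波得到纠偏值，再与前7天同时段的历史值按权值叠加得到预测结果（滤波参数、权值及数据均为假设的简化示例，仅用于说明思路）：
```python
# 示意性示例：结合近期观测的滤波纠偏值与历史同时段数值，预测某一资源的使用比例

def kalman_smooth(samples, process_var=1e-4, meas_var=1e-2):
    # 对一段时间内的观测值做一维卡尔曼滤波，返回滤波后的最新估计（纠偏值）
    x, p = samples[0], 1.0
    for z in samples[1:]:
        p = p + process_var          # 预测步
        k = p / (p + meas_var)       # 卡尔曼增益
        x = x + k * (z - x)          # 更新步
        p = (1.0 - k) * p
    return x

recent = [0.55, 0.58, 0.60, 0.57, 0.62]                # 当前时间前一小时内的PDCCH CCE已使用比例（示例）
history = [0.70, 0.68, 0.72, 0.69, 0.71, 0.73, 0.70]   # 前7天每日10:00~11:00的实际占用比例（示例）

correction = kalman_smooth(recent)        # 纠偏值
historical = sum(history) / len(history)  # 历史值
alpha = 0.4                               # 纠偏值与历史值的叠加权值（示例）
predicted = alpha * correction + (1 - alpha) * historical
print(predicted)
```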
五、MLB策略
MLB策略是为了实现MLB而制定的网络设备运行策略相关的信息,网络设备可以根据MLB策略调整网络设备的参数,从而实现网络设备负载的调整,达到负载均衡。MLB策略可以包括以下内容的至少一项:
用于调整网络设备负载的网络设备的配置参数及参数对应的时间信息;或,
MLB策略的可靠度信息。
MLB策略中的参数配置可以为一个参数的配置,也可以包括多个参数的配置,可以根据调整网络设备负载的需要确定参数的范围。所述参数可以用于调整网络设备的负载,示例性的,所述参数可以为切换相关的参数,或,接入相关的参数。具体的参数可以参考现有技术中的参数或未来出现的参数,本实施例不予限定。
其中,所述网络设备的配置参数可以包括一个或多个网络设备的配置参数,在此不作限定。不同网络设备的参数配置及参数对应的时间信息可以是相互独立的,也可以是相互关联的,示例性的,不同的网络设备可以使用独立的参数对应的时间信息,从而更灵活地控制网络设备的参数,或者,不同的网络设备可以共用参数对应的时间,从而减少交互信息的开销,本方法实施例在此不做限定。
MLB策略中的参数配置及时间信息有如下可能的实施方式,需要注意的是,所述实施方式中,以第一网络设备为例阐述参数配置和时间信息的实施方法,当涉及多个网络设备时,不同的网络设备的处理方法相同:
在一种可能的实施方式中,MLB策略包含第一网络设备的第一参数,并包含第一参数有效的时间,可称为第一有效时间,在第一有效时间内,第一网络设备按照第一参数进行参数配置,并当第一有效时间结束前或结束后,可以触发第一网络设备的MLB策略推理业务,获得新的参数。
在另一种可能的实施方式中,MLB策略中包含第一网络设备的多个参数配置和与参数配置对应的多个时间信息,即,多组参数配置和时间信息,例如,MLB策略中包含第一网络设备的第一参数,及第一参数有效的时间,可称为第一有效时间;还包含第一网络设备的第二参数,及第二参数有效的时间,可称为第二有效时间。则在第一有效时间内,第一网络设备使用第一参数,第一有效时间后,第二有效时间内,第一网络设备使用第二参数。在第二有效时间结束前或结束后,触发第一网络设备的MLB策略推理业务,获得新的参数。此外,若MLB策略推理业务为周期性业务,则第二有效时间的截止时刻也可以默认为本次 周期的截止时刻。通过上述方法,可以实现网络设备参数的动态调整。
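作为示意，下面的Python代码给出一种包含多组参数配置及对应有效时间、以及可靠度信息的MLB策略数据结构，并按当前时间选择生效的参数配置（字段名与数值均为示例假设，并非对消息格式或参数取值的限定）：
```python
# 示意性示例：含多组参数配置及对应有效时间的MLB策略，并按当前时间选择生效配置
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ParamConfig:
    handover_offset: int      # 用于调整负载的配置参数示例（如切换相关参数）
    valid_from: datetime      # 有效时间起点
    valid_until: datetime     # 有效时间终点

@dataclass
class MlbPolicy:
    configs: list             # 多组参数配置及时间信息
    reliability: float        # 可靠度信息，例如偏差落入误差范围内的概率

def active_config(policy: MlbPolicy, now: datetime):
    # 在第一有效时间内使用第一参数，在第二有效时间内使用第二参数，依此类推
    for cfg in policy.configs:
        if cfg.valid_from <= now < cfg.valid_until:
            return cfg
    return None  # 超出所有有效时间，可触发MLB策略推理业务以获得新的参数

policy = MlbPolicy(
    configs=[
        ParamConfig(3, datetime(2022, 3, 1, 10, 0), datetime(2022, 3, 1, 10, 30)),
        ParamConfig(5, datetime(2022, 3, 1, 10, 30), datetime(2022, 3, 1, 11, 0)),
    ],
    reliability=0.95,
)
print(active_config(policy, datetime(2022, 3, 1, 10, 40)))
```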
其中,可靠度表示MLB策略的可信程度。在一种可能的实施方式中,可信程度通过预测负载值和实际负载值在一预测负载值误差范围内的概率进行指示,示例性的,可靠度表示为:预测负载值误差范围为-1至1,预测负载值和实际负载值的偏差在此误差范围内的概率为95%。或者,可靠度包括预测负载值在一定区间内的概率分布,例如,所述概率分布预测负载值在不同区间的概率,如表二所示,包括一组预测负载值的区间和概率的对应关系:
表二 预测负载值-概率分布表
（表二的具体内容在原公开文本中以图像形式给出：Figure PCTCN2022078983-appb-000002，示出预测负载值区间与概率的对应关系。）
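作为示意，下面的Python代码给出上述两种可靠度表示方式的示例数据，以及按可靠度门限判断是否接受MLB策略的简单判断逻辑（数值与门限均为示例假设）：
```python
# 示意性示例：可靠度信息的两种可能表示方式，以及按可靠度门限判断是否接受MLB策略

# 表示方式一：预测负载值与实际负载值的偏差落入误差范围[-1, 1]内的概率为95%
reliability_prob = {"error_range": (-1, 1), "probability": 0.95}

# 表示方式二：预测负载值落入不同区间的概率分布（区间与概率均为示例）
reliability_dist = [((0, 10), 0.05), ((10, 20), 0.80), ((20, 30), 0.15)]

def accept_policy(probability, threshold=0.95):
    # 例如第二网络设备可以只接受可靠度概率值为95%及以上的MLB策略
    return probability >= threshold

print(accept_policy(reliability_prob["probability"]))  # True
```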
传统技术中,MLB的机制主要是周期性的获得当前的网络资源使用情况,判断是否需要调整基站的参数配置。但是,现有的MLB机制中,由于实际的网络情况十分复杂,并且可能存在波动的情况,现有MLB机制所制定的MLB策略仅周期性的调整参数配置,在两次调整周期之间无法改变参数配置,有可能会导致负载不均衡,而无法充分利用网络资源。
为了解决上述问题,本申请实施例提供了一种通信方法,具体的,一种负载均衡的方法。该方法可以包括:获得第一网络设备的第一网络状态信息;获得第二网络设备的指示网络状态的信息;基于所述第一网络状态信息和所述第二网络设备的指示网络状态的信息,获得待生效的第一移动性负载均衡MLB策略;向所述第二网络设备发送所述第一MLB策略;所述第一MLB策略包括以下内容的至少一项:用于调整所述第一网络设备负载的配置参数及所述用于调整第一网络设备负载的配置参数对应的时间信息;用于调整所述第二网络设备负载的配置参数及所述用于调整第二网络设备负载的配置参数对应的时间信息;或,所述第一MLB策略的可靠度信息。所述第一网络设备和/或第二网络设备可以根据时间信息调整参数配置,增强了参数配置的灵活性,从而使MLB的性能得以提升。第一网络设备和/或第二网络设备可以根据可靠度信息接受或拒绝依据所述第一MLB策略调整参数配置,从而可以提升MLB的性能。
如图10所示,提供了一种可能的实施方式的流程图,本实施例的方法包括:
S1001:第一网络设备获得所述第一网络设备的第一网络状态信息。
第一网络设备启动MLB流程,获得第一网络设备的第一网络状态信息。MLB流程可以为周期性触发,也可以通过事件触发,例如,当用户数超过一门限值时,启动MLB流程。
S1002:所述第一网络设备获得所述第二网络设备的指示网络状态的信息。
其中,第二网络设备的指示网络状态的信息可以包括:第二网络设备当前的网络状态信息,第二网络设备预测的网络状态信息,或,第二网络设备的第三预测负载值。示例性的,一种可能的获得第二网络设备的指示网络状态的信息的方式如图11所示,可以包括:
S1101:第一网络设备向第二网络设备发送第一网络设备请求第二网络设备的第二网络状态信息的请求信息,比如通过第一询问消息发送该请求信息。
具体的,第一网络设备向第二网络设备发送第一询问消息,所述第一询问消息包括第 一网络设备请求第二网络设备的第二网络状态信息的请求信息。
第二网络设备在接收所述第一询问消息后,根据第一询问消息中的第二网络状态信息的请求信息,对自身的网络状态信息进行统计。
S1102:第二网络设备向第一网络设备发送第一确认消息。
第二网络设备完成网络状态信息统计后,向第一网络设备发送指示网络状态的信息。
其中,所述指示网络状态的信息是第二网络设备当前的网络状态信息;或,
所述指示网络状态的信息,是第二网络设备根据第二网络设备当前的网络状态信息,所得到的当前MLB策略下第二网络设备在第三时间的预测网络状态信息;或,
所述指示网络状态的信息,是第二网络设备根据第二网络设备当前的网络状态信息,所得到的当前MLB策略下第二网络设备在第三时间的预测负载值。其中,第三时间为当前时间和第一时间中的某个时间。
S1003:所述第一网络设备根据所述第一网络状态信息和所述第二网络设备的指示网络状态的信息,获得待生效的第一MLB策略。其中,第一MLB策略对应于前述第一时间。
具体的,如图12a-12c所示,获得待生效的第一MLB策略可以有如下可能的实施方式:
在一种可能的实施方式中,如图12a所示,第一网络设备基于对第一网络设备的预测负载值和第二网络设备的指示网络状态的信息得到第一MLB策略,包括:
S1201(a):第一网络设备基于当前MLB策略及第一网络状态信息和第二网络设备的指示网络状态的信息,获得第一网络设备在第一时间的第一预测负载值;可选的,可以通过第一神经网络,来获得第一网络设备在第一时间的第一预测负载值。比如,将当前MLB策略及第一网络状态信息和第二网络状态信息作为第一神经网络的输入,将第一网络设备在第一时间的第一预测负载值作为第一神经网络的输出。
S1202(a):第一网络设备基于当前MLB策略及第一网络状态信息和第二网络设备的指示网络状态的信息,获得第二网络设备在第一时间的第二预测负载值;可选的,可以通过第一神经网络,来获得第二网络设备在第一时间的第二预测负载值。比如,将当前MLB策略及第一网络状态信息和第二网络状态信息作为第一神经网络的输入,将第二网络设备在第一时间的第二预测负载值作为第一神经网络的输出。
S1203(a):第一网络设备基于第一预测负载值和第二预测负载值,获得第一MLB策略;可选的,可以通过第二神经网络,来获得第一MLB策略。比如,将第一预测负载值和第二预测负载值作为第二神经网络的输入,将第一MLB策略作为第二神经网络的输出。
本实施例中,后续均以通过神经网络来获得预测负载值或MLB策略为例进行描述。可以理解的是,获得预测负载值或MLB策略也可以通过其他方式,比如通过预设固定的计算方法或条件,获得预测负载值或MLB策略,在此不予限定。
在另一种可能的实施方式中,如图12b所示,第一网络设备基于对第一网络设备的负载预测值和第二网络设备的网络状态信息得到第一MLB策略,包括:
S1201(b):第一网络设备基于当前MLB策略及第一网络状态信息和第二网络设备的指示网络状态的信息,通过第一神经网络,获得第一网络设备在第一时间的第一预测负载值。
S1202(b):第一网络设备基于第一预测负载值和第二网络设备的指示网络状态的信息,通过第二神经网络,获得第一MLB策略。
在另一种可能的实施方式中,如图12c所示,第一网络设备基于对第一网络设备的网络 状态信息和第二网络设备的负载预测值得到第一MLB策略,包括:
S1201(c):第一网络设备基于当前MLB策略及第一网络状态信息和第二网络设备的指示网络状态的信息,通过第一神经网络,获得第二网络设备在第一时间的第二预测负载值。
S1202(c):第一网络设备基于第一网络状态信息和第二预测负载值,通过第二神经网络,获得第一MLB策略。
在图12a-12c所示的实施例中,第一神经网络与第二神经网络可以是相互独立的神经网络,也可以是同一个神经网络。
S1004:所述第一网络设备向所述第二网络设备发送所述第一MLB策略。
所述第一网络设备通过与第二网络设备之间的接口,向第二网络设备发送第一MLB策略,所述第一MLB策略可以包括以下内容的至少一项:用于调整所述第一网络设备负载的配置参数及所述用于调整第一网络设备负载的配置参数对应的时间信息;用于调整所述第二网络设备负载的配置参数及所述用于调整第二网络设备负载的配置参数对应的时间信息;或,所述第一MLB策略的可靠度信息。
第二网络设备从所述第一MLB策略中,可以获得第一网络设备的配置相关信息,并在后续运行过程中作为参考,制定运行策略,例如,所述第二网络设备可以在制定MLB策略时,将第一网络设备的配置情况作为输入信息,获得第二网络设备参数变化对负载的影响。
在上述实施例中,通过神经网络对负载情况进行预测,并将预测的负载结果作为神经网络的输入信息,推理MLB策略,并且所述MLB策略含有效时间信息,从而灵活控制MLB策略中的参数配置,可以提高MLB策略在网络负载变化的情况下的性能。或者,第一网络设备和/或第二网络设备可以根据可靠度信息接受或拒绝依据所述第一MLB策略调整参数配置,从而可以提升MLB的性能。
此外,本申请实施例还可以根据实际的负载值对神经网络进行优化,从而不断提高MLB策略的性能。如图13所示,该实施例可以包括:
S1301:第一网络设备获得第一网络状态信息。
S1302:第一网络设备获得第二网络状态信息。
S1303:第一网络设备根据第一网络状态信息和第二网络状态信息,获得第一MLB策略。
S1304:第一网络设备向第二网络设备发送第一MLB策略。
其中S1301-S1304和S1001-S1004相同,在此不予赘述。
S1305:第一网络设备基于第一MLB策略,修改第一配置参数。
S1306:第一网络设备基于第一MLB策略,指示第二网络设备修改第二配置参数。
第一网络设备可以通过发送给第二网络设备的参数修改指示来指示第二网络设备修改第二配置参数。其中,参数修改指示可以在协议中约定,为参数的具体数值、参数修改的调整量、参数修改的档位中的一种。示例性的,参数当前值为10,期望修改至15,一种可能的实施方式中,发送参数的具体数值15作为参数修改指示,第二网络设备收到后将参数修改至15;一种可能的实施方式中,发送参数修改的调整量5,第二网络设备收到后,在现有值10的基础上调整5,将参数修改至15;一种可能的实施方式中,协议中约定每个调整档位对应的调整量为5,则发送调整档位为1,第二网络设备收到后根据调整档位1确定调整量为5,将参数修改至15。
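作为示意，下面的Python代码按上述三种可能的参数修改指示方式计算修改后的参数值（档位与调整量的对应关系需在协议中约定，此处假设每个档位对应调整量5）：
```python
# 示意性示例：按三种可能的参数修改指示方式计算修改后的参数值
GEAR_STEP = 5  # 假设协议中约定每个调整档位对应的调整量为5

def apply_indication(current_value, kind, value):
    if kind == "absolute":   # 指示为参数的具体数值
        return value
    if kind == "delta":      # 指示为参数修改的调整量
        return current_value + value
    if kind == "gear":       # 指示为参数修改的档位
        return current_value + value * GEAR_STEP
    raise ValueError("unknown indication kind")

# 当前值为10、期望修改至15时，三种等效的指示方式
print(apply_indication(10, "absolute", 15))  # 15
print(apply_indication(10, "delta", 5))      # 15
print(apply_indication(10, "gear", 1))       # 15
```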
参数修改指示的发送方式有如下可能的实施方式:
在一种可能的实施方式中,第一网络设备通过S1304中第一网络设备发送的第一MLB策略向第二网络设备发送参数修改指示。所述参数修改指示为第一MLB策略中的第二网络设备相关的参数配置和时间信息,第二网络设备接收第一MLB策略后进行相应的参数修改。
在一种可能的实施方式中,第一网络设备通过独立的指示信息向第二网络设备发送参数修改指示。示例性的,修改第二配置参数的指示信息可以是现有的MLB机制中的Mobility Change Request消息。第二网络设备接收Mobility Change Request消息后,进行相应的参数修改。
S1307:第二网络设备反馈参数修改的结果。
若参数修改成功,可选的,第二网络设备向第一网络设备反馈参数修改成功消息,例如,通过现有MLB机制中的Mobility Change Acknowledge消息进行反馈。执行S1308。
若第二网络设备修改第二参数配置失败,第二网络设备向第一网络设备反馈参数修改失败消息,执行S1309。
其中,参数修改失败消息中可以携带参数修改失败的原因指示,例如,参数修改超过第二网络设备的参数配置范围,或第二网络设备拒绝修改第二参数配置,如根据可靠度信息判断拒绝修改参数。示例性的,所述修改失败指示消息可以是现有MLB机制中的Mobility Change Failure消息。
可选的,参数修改失败消息中,还可以携带第二网络设备可以接受的参数修改范围,在一种可能的实施方式中,所述参数修改范围包括参数的最小值和/或最大值;在一种可能的实施方式中,参数修改范围包括第二网络设备当前的参数值,及当前可接受的参数调整的数值或档位,例如第二网络设备切换参数的范围为0至20,在反馈消息中,携带切换参数当前值15,及当前可接受的参数调整范围-15至5。
可选的,参数修改失败消息中,还可以携带第二网络设备可以接受的MLB策略的可靠度信息,例如,在一种可能的实施方式中,第二网络设备可以反馈接受可靠度概率值为95%及以上的MLB策略。
S1308:第一网络设备基于预测值和实际值对第一神经网络和/或第二神经网络进行优化。
可以理解的是,所述预测值和实际值包括预测网络状态信息和/或预测负载值,所述实际值包括实际网络状态信息和/或实际负载值,并不加以限定。第一网络设备可以根据预测网络状态信息和实际网络状态信息对第一神经网络和/或第二神经网络进行优化,也可以根据预测负载值和实际负载值对第一神经网络和/或第二神经网络进行优化。以根据预测负载值和实际负载值对第一神经网络和/或第二神经网络进行优化为例,包括如下可能的实施方式:
在一种可能的实施方式中,第一网络设备基于对第一网络设备的预测负载值和实际负载值对第一网络设备的第一神经网络和/或第二神经网络进行优化,如图14a所示,可以包括:
S1401(a):第一网络设备获得第一网络设备在第一时间的第四预测负载值。
第一网络设备基于第一MLB策略,通过第一神经网络,对第一网络设备第一时间的负载值进行预测,得到第四预测负载值。
S1402(a):可选的,第一网络设备向第二网络设备发送第四预测负载值。
第二网络设备接收第四预测负载值后,可以根据第四预测负载值确定第二网路的运行策略,例如第二网络设备可以根据第四预测负载值,判断是否需要触发第二网络设备调整MLB策略。
S1403(a):可选的,第二网络设备向第一网络设备发送第一时间第二网络设备的第二实际负载。
S1404(a):第一网络设备获得第一网络设备在第一时间的第一实际负载值。
S1405(a):第一网络设备基于第四预测负载值和第一实际负载值,对第一神经网络和/或第二神经网络进行优化。
在一种可能的实施方式中,第一网络设备基于对第二网络设备的负载预测结果和第二网络设备的负载实际结果对神经网络进行优化,如图14b所示,包括:
S1401(b):第一网络设备获得第二网络设备在第一时间的第五预测负载值。
第一网络设备基于第一MLB策略,通过第一神经网络,对第二网络设备第一时间的负载进行预测,得到第五预测负载值。
S1402(b):可选的,第一网络设备向第二网络设备发送第五预测负载值。
第二网络设备接收第五预测负载值后,可以根据第五预测负载值确定第二网路的运行策略,例如第二网络设备可以根据第五预测负载值,判断是否需要触发第二网络设备调整MLB策略。
S1403(b):第二网络设备向第一网络设备发送第二网络设备在第一时间的第二实际负载值。
S1404(b):第一网络设备基于第五预测负载值和第二实际负载值,对第一神经网络和/或第二神经网络进行优化。
在一种可能的实施方式中,第一网络设备基于第一网络设备和第二网络设备的负载预测结果,及第一网络设备和第二网络设备的负载实际结果对神经网络进行优化,如图14c所示,包括:
S1401(c):第一网络设备获得第一网络设备在第一时间的第四预测负载值,及第二网络设备在第一时间的第五预测负载值。
第一网络设备基于第一MLB策略,通过第一神经网络,对第一网络设备第一时间的负载进行预测,得到第四预测负载值,对第二网络设备第一时间的负载进行预测,得到第五预测负载值。
S1402(c):可选的,第一网络设备向第二网络设备发送第四预测负载值和/或第五预测负载值。
可以参考S1402(a)及S1402(b)相关内容,此处不予赘述。
S1403(c):第二网络设备向第一网络设备发送第二网络设备在第一时间的第二实际负载值。
S1404(c):第一网络设备获得第一网络设备在第一时间的第一实际负载值。
S1405(c):第一网络设备基于第四预测负载值、第五预测负载值、第一实际负载值和第二实际负载值,对第一神经网络和/或第二神经网络进行优化。
S1309中,若第二网络设备参数修改失败,根据第二网络设备回复的失败原因,进行神经网络优化。具体的,如图15所示,根据第二网络设备回复的失败原因,进行神经网络优 化可以包括:
S1501:第一网络设备基于第二网络设备反馈的原因,对第一神经网络和/或第二神经网络进行优化。
例如,第一网络设备根据反馈的失败原因中参数修改范围,调整推理过程中的参数的范围。这样,通过优化神经网络,可以使推理的MLB策略更符合实际网络的需求,避免后续推理的MLB策略执行失败。
可选的,执行S1502、S1503、S1504中的一项。
S1502:重新执行S1303~S1307。
若参数修改失败,并基于失败原因优化神经网络后,重新推理MLB策略。
在一种可能的实施方式中,失败原因是参数修改超过第二网络设备的参数配置范围,第一网络设备重新进行负载预测并推理MLB策略,若在第二网络设备的参数配置范围内,可以得到可实施的MLB策略,则重新调整第一网络设备和/或第二网络设备的参数,并基于新的MLB策略进行优化。
在一种可能的实施方式中,失败原因是第二网络设备根据可靠度信息判断拒绝修改参数,则第一网络设备按照第二网络设备的可信度要求,重新进行负载预测并推理MLB策略,重新调整第一网络设备和/或第二网络设备的参数,并根据图14a-14c所示实施例,基于新的MLB策略下的预测负载值和实际负载值进行优化。
S1503:基于失败反馈中的失败原因,调整参数配置并指示第二网络设备修改。
若失败的原因为参数修改指示超过了第二网络设备允许的参数修改范围,则可以根据失败反馈消息中的参数修改范围,调整第二参数配置值,并指示第二网络设备进行修改。
S1504:基于失败反馈中的失败原因判别执行S1502或S1503。
根据第二网络设备反馈的失败原因判别重新推理MLB策略,或直接进行参数修改。
在一种可能的实施方法中,当失败原因为参数修改指示超过了第二网络设备参数允许的范围时,并且参数修改指示和第二网络设备允许的参数范围偏差超出所设置的阈值时,执行S1502,即重新进行推理;若参数修改指示和第二网络设备允许的参数范围偏差小于所述阈值时,执行S1503,即,不再重新推理MLB策略,根据反馈的参数范围调整参数修改指示。其中,一种可能的实施方法中,所设置的阈值可以基于负载对参数变化的敏感程度的经验进行设置,例如当认为用于触发发送与切换相关的测量报告的配置值对切换影响较大时,该参数所对应的阈值应设置较小的值,避免参数的调整对负载造成较大影响。
S1310:可选的,第一网络设备向第二网络设备发送神经网络优化消息。
第一网络设备完成神经网络的优化后,可以向第二网络设备反馈神经网络的优化消息,可以为第二网络设备的神经网络的优化提供信息,该优化消息可以包括以下内容的至少一项:
第一网络设备在第一时间的实际负载值;
第一网络设备优化后的神经网络参数,例如,神经网络的权值参数,输入参数等。
上述实施例中,可以通过预测负载值和实际负载值的偏差,对神经网络进行优化,可以进一步提高负载的预测准确性和MLB策略预测的准确性。
当网络中存在多个设备支持负载的预测和MLB策略推理业务时,还可以根据多个网络设备的推理结果对神经网络进行优化。本申请实施例还提供一种基于多个网络设备的推理 结果对神经网络进行优化的方法,如图16所示,可以包括:
S1601:第一网络设备获得第一网络状态信息,第二网络设备获得第二网络设备的指示网络状态的信息。
第一网络状态信息及第二网络设备的指示网络状态的信息和S1001中的描述相同,在此不予赘述。
S1602:第一网络设备获得第二网络状态信息,第二网络设备获得第一网络设备指示网络状态的信息。
第一网络设备向第二网络设备发送第一询问消息,并接收第二网络设备发送的第一确认消息,第二网络设备向第一网络设备发送第二询问消息,并接收第一网络设备发送的第二确认消息。
第一询问消息,第一确认消息,第二询问消息,第二确认消息的内容可参考S1001、S1002中询问消息和确认消息相关的内容。
其中,第二询问消息可以和第一确认消息属于同一条消息,或是和第一确认消息是独立的两条消息,这两条消息的发送时间可以相同,也可以不同。
需要注意的是,在第二确认消息中携带的第一网络设备的的指示网络状态的信息,为第一网络设备当前的网络状态信息、第一网络设备预测网络状态信息,或预测负载值中的一项。
S1603:第一网络设备获得第一MLB策略,第二网络设备获得第二MLB策略。
可参考S1003所述内容,在此不予赘述。
S1604:第一网络设备基于当前MLB策略,预测第一网络设备在第二时间的第六预测负载值及第二网络设备在第二时间的第七预测负载值;第二网络设备基于当前MLB策略,预测第一网络设备在第二时间的第八预测负载值及第二网络设备在第二时间的第九预测负载值。
在一种可能的实施方法中,第六预测负载值、第七预测负载值、第八预测负载值、第九预测负载值也可以为不同时间点或不同时间段的预测负载值,从而提高预测负载值的随机性,避免由于网络的波动影响后续准确度的计算。
S1605:第一网络设备获得在第二时间的第三实际负载值,第二网络设备获得在第二时间的第四实际负载值。
在一种可能的实施方法中,若第六预测负载值、第七预测负载值、第八预测负载值、第九预测负载值对应不同的时间,第一网络设备和第二网络设备也可以根据上述预测负载值的时间获得相应的实际负载值,从而使预测负载值和实际负载值可以进行对比。
S1606:第一网络设备和第二网络设备交互预测负载值和实际负载值。
第一网络设备和第二网络设备进行预测负载值和实际负载值的交互,为后续准确度计算提供信息。
在一种可能的实施方法中,第一网络设备执行以下内容中的一项:
向第二网络设备发送第六预测负载值、第七预测负载值及第三实际负载;
向第二网络设备发送第六预测负载值及第三实际负载;
向第二网络设备发送第七预测负载值及第三实际负载;
向第二网络设备发送第七预测负载值;
第二网络设备执行以下内容中的一项:
向第一网络设备发送第八预测负载值、第九预测负载值及第四实际负载;
向第一网络设备发送第九预测负载值及第四实际负载;
向第一网络设备发送第八预测负载值及第四实际负载;
向第一网络设备发送第八预测负载值。
S1607:第一网络设备和第二网络设备获得预测负载值的准确度,并判断预测性能较优的网络设备。
第一网络设备和第二网络设备执行以下内容的至少一项:
基于第三实际负载值和第六预测负载值,计算第六预测负载值的准确度;
基于第四实际负载值和第七预测负载值,计算第七预测负载值的准确度。
和/或,第一网络设备和第二网络设备执行以下内容的至少一项:
基于第三实际负载值和第八预测负载值,计算第八预测负载值的准确度;
基于第四实际负载值和第九预测负载值,计算第九预测负载值的准确度。
其中,准确度是用于描述预测负载值和实际负载值偏差的数值,是预测负载值和实际负载值的函数。在一种可能的实施方法中,准确度可以通过如下公式进行计算:
准确度=(|预测负载-实际负载|)/实际负载
通过上述公式计算得到的准确度数值越小,预测负载值越接近实际负载值,准确度越优。
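作为示意，下面的Python代码按上述公式计算两个设备所得预测负载值的准确度并比较二者（负载数值均为示例假设，准确度数值越小表示预测越接近实际）：
```python
# 示意性示例：计算预测负载值的准确度，并判断预测性能较优的网络设备

def accuracy(predicted, actual):
    # 准确度 = |预测负载 - 实际负载| / 实际负载
    return abs(predicted - actual) / actual

actual_load_dev1 = 100.0   # 第一网络设备在第二时间的第三实际负载值（示例）
pred_by_dev1 = 96.0        # 第六预测负载值：第一网络设备对自身负载的预测（示例）
pred_by_dev2 = 88.0        # 第八预测负载值：第二网络设备对第一网络设备负载的预测（示例）

acc_dev1 = accuracy(pred_by_dev1, actual_load_dev1)
acc_dev2 = accuracy(pred_by_dev2, actual_load_dev1)
better = "第一网络设备" if acc_dev1 < acc_dev2 else "第二网络设备"
print(acc_dev1, acc_dev2, better)
```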
S1608:第一网络设备和第二网络设备判断预测性能较优的网络设备。
第一网络设备和第二网络设备基于得到的第六预测负载值的准确度和第七预测负载值的准确度中的至少一项,和,第八预测负载值的准确度和第九预测负载值的准确度中的至少一项,判断预测性能较优的网络设备。
若第六预测负载值的准确度或第七预测负载值的准确度较优,则第一网络设备的预测性能较优;若第八预测负载值的准确度或第九预测负载值的准确度较优,则第二网络设备的预测性能较优。
需要说明的是,在另一种可能的实施方法中,第一网络设备和第二网络设备之间也可以直接交互已获得的预测负载值的准确度,而不是交互所获得预测负载值。比如,第一网络设备可以基于第六预测负载值和第三实际负载值计算第六预测负载值的准确度,并将第六预测负载值的准确度发送至第二网络设备;第二网络设备可以基于第九预测负载和第四实际负载值计算第九预测负载值的准确度,并将第九预测负载值的准确度发送至第一网络设备。从而,第一网络设备和第二网络设备可以基于第六预测负载值的准确度和第九预测负载值的准确度判断预测性能较优的网络设备。通过上述方法,可以替代S1606-S1607中,第一网络设备和第二网络设备所执行的内容,从而减少网络设备的运算开销,和信息交互的开销。
S1609:示例性的,第一网络设备和第二网络设备均判断第一网络设备的准确度较优。
第一网络设备判断自身的准确度较优,当网络设备判断自身预测性能较优时,执行S1610;第二网络设备判断第一网络设备的准确度较优,即网络设备判断其他网络设备的准确度较优时,执行S1611。
S1610:第一网络设备向第二网络设备发送第一MLB策略。
第一网络设备判断自身准确度较优,则向第二网络设备发送第一MLB策略。
后续第一网络设备可参考图13所示参考实施例,进行参数修改和/或神经网络的优化。
S1611:第二网络设备判断其他网络设备准确度较优,对神经网络进行优化,具体可以包括:
S1611a:第二网络设备向第一网络设备发送神经网络优化信息的请求信息。
S1611b:第一网络设备向第二网络设备反馈神经网络优化信息。
第一网络设备在接收神经网络优化信息的请求消息后,反馈神经网络优化信息,所述的神经网络优化信息包括以下内容的至少一项:
第一网络设备的第一神经网络和/或第二神经网络的参数;
第一网络设备记录的预测负载值和实际负载值偏差的分析结果,示例性的,如导致预测负载值偏低或偏高的输入量,及与所述输入量相关的神经网络参数信息;
第一网络设备的第一神经网络和/或第二神经网络获得预测负载值和第一MLB策略所使用的输入信息,例如第一网络状态信息,第二网络设备的指示网络状态的信息,MLB策略中的一项或多项。
S1611c:第二网络设备根据神经网络优化信息,对第一神经网络和/或第二神经网络进行优化。
对神经网络的优化依赖于产品的内部实现,示例性的,有如下可能的实施方式:
一种可能的实施方法中,第二网络设备可以根据,第一网络设备发送的第一网络设备的第一神经网络和/或第二神经网络的参数,更新第二网络设备的第一神经网络和/或第二神经网络。
一种可能的实施方式中,第二网络设备可以根据,第一网络设备发送的预测负载值和实际负载之偏差的分析结果,确定对偏差影响较大的输入量,并调整相应输入量的神经网络参数。
S1611d:可选的,第二网络设备将第二MLB策略更新为第一MLB策略。
第二网络设备判断自身的准确度不是最优,可以使用准确度较优的网络设备的MLB策略。在第二网络设备也可基于其他条件,比如,准确度较优的网络设备的MLB策略超出了第二网络设备所允许的范围,确定使用自身推理的MLB策略。
S1611e:可选的,第二网络设备向第一网络设备发送第二MLB策略。
在一种可能的实施方法中,由于第二网络设备的准确度不是最优,可以默认第二网络设备不向第一网络设备发送第二MLB策略。
在另一种可能的实施方式中,第二网络设备可以向第一网络设备发送第二MLB策略,第一网络设备接收第二网络设备的第二MLB策略后,可以根据所述第二MLB策略,确定第二网络设备所执行的MLB策略和第二网络设备的参数,并判断是否修改第一网络设备的参数。
后续第二网络设备可参考图13所示的实施例,进行参数修改和优化,在此不予赘述。
通过上述实施例的方法,多个网络设备可以支持负载值预测和MLB策略推理功能时,网络设备可以根据预测性能较好,比如预测性能最好的,或,预测性能最好的几个中的其中一个,的网络设备的神经网络对自身进行神经网络优化,从而实现神经网络的性能提升, 进一步提升MLB策略的性能。
需要说明的是,在本申请中,还提供了以下实施例,这些实施例可以独立于前述实施例或与前述实施例结合。具体可以包括如下可能的实施方式:
在一种可能的实施方式中,第三网络设备获得MLB策略并发送至第一网络设备,示例性的,所述第三网络设备为运营和管理实体,包括:第三网络设备向第一网络设备发送第三询问消息;第一网络设备根据所述第三询问消息,向第三网络设备反馈第三确认消息,所述确认消息包括第一网络设备的指示状态信息的信息;所述第三网络设备基于所述第一网络设备的指示状态信息的信息,通过第三神经网络获得第三MLB策略;所述第三网络设备将所述第三MLB策略发送至第一网络设备;第一网络设备根据第三MLB策略修改第一参数配置,并向所述第三网络设备反馈实际负载值;所述第三网络设备基于所述反馈的实际负载值对第三神经网络进行优化。
在一种可能的实施方式中,第一网络设备基于第三网络设备发送的神经网络优化信息对第一神经网络和/或第二神经网络进行优化,示例性的,所述第三网络设备为运营和管理实体,包括:第一网络设备执行如图10-16所示的实施例中一种或多种方法;可选的,第一网络设备向第三网络设备发送第一网络设备获得的预测负载值、实际负载值及时间信息,第三网络设备基于所述预测负载值、实际负载值及时间信息,对第三网络设备的第三神经网络进行优化,然后将第三神经网络的神经网络优化信息发送至第一网络设备;可选的,第一网络设备基于所述第三神经网络信息对第一神经网络和/或第二神经网络进行优化。所述第三神经网络的神经网络优化信息可参考S1611b中神经网络的优化信息相关内容。
进一步的,以上图10-图16所描述的实施例中,还可以包括:
第一网络设备向第二网络设备发送以下预测和/或推理相关的第一信息中的一项或多项:
第一网络设备支持的业务类型,例如MLB业务;
第一网络设备支持的业务的周期信息或时间信息,例如MLB业务周期为30分钟,即每30分钟进行一次MLB业务,或,第一网络设备在10:00,10:20,11:00进行MLB业务的信息;或
第一网络设备的预测业务的输出信息的指示,例如指示在MLB业务的输出信息包括第一网络设备和/或第二网络设备的参数配置信息。
可选的,以上预测和/或推理相关的第一信息可以包括在前述的询问消息中。
可选的,响应于预测和/或推理相关信息,前述第二网络设备可以向第一网络设备反馈以下预测和/或推理相关的第二信息,这些第二信息可以包括在前述响应于询问消息的确认消息中。这些第二信息可以包含以下内容的一项或多项:
第二网络设备支持的业务类型,例如MLB业务;或
第二网络设备期望进行交互的信息和时间段,例如在确认消息中,第二网络设备可以指示第一网络设备反馈第一网络设备对第二网络设备10:00至11:00之间的负载的预测结果,或者,第二网络设备可以指示也第一网络设备反馈10:00至11:00之间第一网络设备和第二网络设备切换门限的配置信息。
通过上述预测和/推理相关的第一信息和第二信息的交互,可以使网络设备之间能够协商和交互预测和/或推理相关的信息,避免业务不对齐所导致的错误。
以上实施例中,作为第一网络设备或第二网络设备的网络设备可以为gNB、CU、DU、 CU-CP或CU-DP。可以理解的是,一台网络设备,在图10-16所述的实施例中,既可以作为第一网络设备,也可以作为第二网络设备,或同时作为第一网络设备和第二网络设备。
以上实施例中,可以包括多个第二网络设备,由第一网络设备根据具体的设计和实现确定实施所述方法所需要的第二网络设备的范围。
可以理解的是,以上实施例中各个具体流程中的细节描述可以相互借鉴或结合,在此不予赘述。
以上结合图10-16详细说明了本申请实施例的通信方法。以下结合图17至图18详细说明本申请实施例的通信装置。
图17是本申请实施例提供的一种接入网设备的结构示意图,如可以为基站的结构示意图。如图17所示,该基站可应用于如图1所示的系统中,执行上述方法实施例中第一网络设备或第二网络设备的功能。基站1700可包括一个或多个DU1701和一个或多个CU 1702。CU1702可以与NG core(下一代核心网,NC)或EPC通信。所述DU1701可以包括至少一个天线17011,至少一个射频单元17012,至少一个处理器17013和至少一个存储器17014。CU1702可以包括至少一个处理器17022和至少一个存储器17021。CU 1702和DU1701之间可以通过接口进行通信,其中,控制面(Control plan)接口可以为Fs-C,比如F1-C,用户面(User Plan)接口可以为Fs-U,比如F1-U。所述DU 1701与CU 1702可以是物理上设置在一起,也可以物理上分离设置的,即分布式基站。
所述CU 1702为基站的控制中心,也可以称为处理单元,主要用于完成基带处理功能。所述DU 1701部分主要用于射频信号的收发以及射频信号与基带信号的转换,以及部分基带处理。例如,所述CU 1702和DU 1701均可以执行上述方法实施例中网络设备的相关操作。
具体的,CU和DU上的基带处理可以根据无线网络的协议层划分,例如PDCP层及以上协议层的功能设置在CU,PDCP以下的协议层,例如RLC层,MAC层和PHY层等的功能设置在DU。
此外,可选的,基站1700可以包括一个或多个射频单元(RU),一个或多个DU和一个或多个CU。其中,DU可以包括至少一个处理器17013和至少一个存储器17014,RU可以包括至少一个天线17011和至少一个射频单元17012,CU可以包括至少一个处理器17022和至少一个存储器17021。
在一个实例中,所述CU1702可以由一个或多个单板构成,多个单板可以共同支持单一接入指示的无线接入网(如5G网),也可以分别支持不同接入制式的无线接入网(如LTE网,5G网或其他网)。所述存储器17021和处理器17022可以服务于一个或多个单板。也就是说,可以每个单板上单独设置存储器和处理器。也可以是多个单板共用相同的存储器和处理器。此外每个单板上还可以设置有必要的电路。所述DU1701可以由一个或多个单板构成,多个单板可以共同支持单一接入指示的无线接入网(如5G网),也可以分别支持不同接入制式的无线接入网(如LTE网,5G网或其他网)。所述存储器17014和处理器17013可以服务于一个或多个单板。也就是说,可以每个单板上单独设置存储器和处理器。也可以是多个单板共用相同的存储器和处理器。此外每个单板上还可以设置有必要的电路。
图18给出了一种通信装置1800的结构示意图。通信装置1800可用于实现上述方法实施例中描述的方法,可以参见上述方法实施例中的说明。所述通信装置1800可以是芯片, 接入网设备(如基站)或者其他网络设备等。
所述通信装置1800包括一个或多个处理器1801。所述处理器1801可以是通用处理器或者专用处理器等。例如可以是基带处理器、或中央处理器。基带处理器可以用于对通信协议以及通信数据进行处理,中央处理器可以用于对装置(如,基站、终端、AMF、或芯片等)进行控制,执行软件程序,处理软件程序的数据。所述装置可以包括收发单元,用以实现信号的输入(接收)和输出(发送)。例如,装置可以为芯片,所述收发单元可以是芯片的输入和/或输出电路,或者通信接口。所述芯片可以用于终端或接入网设备(比如基站)或核心网设备。又如,装置可以为终端或接入网设备(比如基站),所述收发单元可以为收发器,射频芯片等。
所述通信装置1800包括一个或多个所述处理器1801,所述一个或多个处理器1801可实现图10-16所示的实施例中第一网络设备和/或第二网络设备,或,运营和管理实体所执行的方法。
在一种可能的设计中,所述通信装置1800包括用于接收来自第二网络设备的网络状态信息、预测负载值、实际负载值、MLB策略和神经网络优化信息的部件(means),以及用于修改第一网络设备配置参数,对第一神经网络和/或第二神经网络进行优化的部件(means)。可以通过一个或多个处理器来实现所述的部件的功能。例如可以通过一个或多个处理器,通过收发器、或输入/输出电路、或芯片的接口发送。可以参见上述方法实施例中的相关描述。
在一种可能的设计中,所述通信装置1800包括用于向第二网络设备发送网络状态信息、预测负载值、实际负载值、MLB策略和神经网络优化信息中的一项或多项的部件(means),以及用于生成网络状态信息,和用于运行第一神经网络和/或第二神经网络生成预测负载值、实际负载值、MLB策略和神经网络优化信息中的一项或多项的部件(means)。所述可以参见上述方法实施例中的相关描述。例如可以通过收发器、或输入/输出电路、或芯片的接口接收,通过一个或多个处理器。
可选的,处理器1801除了实现图10-16所示的实施例的方法,还可以实现其他功能。
可选的,一种设计中,处理器1801也可以包括指令1803,所述指令可以在所述处理器上被运行,使得所述通信装置1800执行上述方法实施例中描述的方法。
在又一种可能的设计中,通信装置1800也可以包括电路,所述电路可以实现前述方法实施例中接入网设备或终端的功能。
在又一种可能的设计中所述通信装置1800中可以包括一个或多个存储器1802,其上存有指令1804,所述指令可在所述处理器上被运行,使得所述通信装置1800执行上述方法实施例中描述的方法。可选的,所述存储器中还可以存储有数据。可选的处理器中也可以存储指令和/或数据。例如,所述一个或多个存储器1802可以存储上述实施例中所描述的MLB策略、神经网络优化信息,或者上述实施例中所涉及的其他信息,比如预测负载值、实际负载值等。所述处理器和存储器可以单独设置,也可以集成在一起。
在又一种可能的设计中,所述通信装置1800还可以包括收发单元1805以及天线1806,或者,包括通信接口。所述收发单元1805可以称为收发机、收发电路、或者收发器等,用于通过天线1806实现装置的收发功能。所述通信接口(图中未示出),可以用于核心网设备和接入网设备,或是,接入网设备和接入网设备之间的通信。可选的,该通信接口可以 为有线通信的接口,比如光纤通信的接口。
所述处理器1801可以称为处理单元,对装置(比如终端或者基站或者AMF)进行控制。
本申请还提供一种通信系统,其包括前述的一个或多个接入网设备,和,一个或多个终端,和,核心网设备中的一项或多项的组合。
应理解,在本申请实施例中的处理器可以是中央处理单元(central processing unit,CPU),该处理器还可以是其他通用处理器、数字信号处理器(digital signal processor,DSP)、专用集成电路(application specific integrated circuit,ASIC)、现成可编程门阵列(field programmable gate array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
还应理解,本申请实施例中的存储器可以是易失性存储器或非易失性存储器,或可包括易失性和非易失性存储器两者。其中,非易失性存储器可以是只读存储器(read-only memory,ROM)、可编程只读存储器(programmable ROM,PROM)、可擦除可编程只读存储器(erasable PROM,EPROM)、电可擦除可编程只读存储器(electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(random access memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的随机存取存储器(random access memory,RAM)可用,例如静态随机存取存储器(static RAM,SRAM)、动态随机存取存储器(DRAM)、同步动态随机存取存储器(synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(double data rate SDRAM,DDR SDRAM)、增强型同步动态随机存取存储器(enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(direct rambus RAM,DR RAM)。
上述实施例,可以全部或部分地通过软件、硬件(如电路)、固件或其他任意组合来实现。当使用软件实现时,上述实施例可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令或计算机程序。在计算机上加载或执行所述计算机指令或计算机程序时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以为通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线,例如光纤,或是无线,例如红外、无线、微波等,方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集合的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质。半导体介质可以是固态硬盘。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不予赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、通信装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (25)

  1. 一种通信方法,其特征在于,所述方法包括:
    获得第一网络设备的第一网络状态信息;
    获得第二网络设备的指示网络状态的信息;
    基于所述第一网络设备的第一网络状态信息及所述第二网络设备的指示网络状态的信息,获得待生效的第一移动性负载均衡MLB策略;
    向所述第二网络设备发送所述第一MLB策略;
    所述第一MLB策略包括以下内容的至少一项:
    用于调整所述第一网络设备负载的配置参数及所述用于调整第一网络设备负载的配置参数对应的时间信息;
    用于调整所述第二网络设备负载的配置参数及所述用于调整第二网络设备负载的配置参数对应的时间信息;或,
    所述第一MLB策略的可靠度信息。
  2. 如权利要求1所述的方法,其特征在于,所述获得所述第二网络设备的指示网络状态的信息包括:
    向所述第二网络设备发送第一询问信息,所述第一询问信息用于请求所述指示网络状态的信息;
    接收来自所述第二网络设备的第一确认信息,所述第一确认信息响应于所述第一询问信息,并携带所述第二网络设备的指示网络状态的信息;
    其中,所述第二网络设备的指示网络状态的信息包括以下中的至少一项:所述第二网络设备当前的网络状态信息,所述第二网络设备预测的网络状态信息,或,所述第二网络设备的第三预测负载值。
  3. 如权利要求1-2任一项所述的方法,其特征在于,所述基于所述第一网络设备的第一网络状态信息及所述第二网络设备的指示网络状态的信息,获得待生效的第一MLB策略,包括:
    基于所述第一网络设备的第一网络状态信息及所述第二网络设备的指示网络状态的信息,获得当前MLB策略下所述第一网络设备在第一时间的第一预测负载值或第二网络设备在第一时间的第二预测负载值中的至少一项;
    基于所述第一预测负载值或第二预测负载值中的至少一项获得所述待生效的第一MLB策略。
  4. 如权利要求3所述的方法,其特征在于,所述基于所述第一网络设备的第一网络状态信息及所述第二网络设备的指示网络状态的信息,获得当前MLB策略下所述第一网络设备在第一时间的第一预测负载值或第二网络设备在第一时间的第二预测负载值中的至少一项,包括:
    通过第一神经网络,执行以下中的至少一项:
    对第一网络设备的负载值基于当前MLB策略进行预测,得到第一时间的第一预测负载值,或,
    对第二网络设备的负载值基于当前MLB策略进行预测,得到第一时间的第二预测负载值;
    其中,所述第一神经网络的输入包括所述第一网络设备的第一网络状态信息及所述第二网络设备的指示网络状态的信息,以及当前MLB策略,所述第一神经网络的输出包括所述第一预测负载值或所述第二预测负载值中的至少一项。
  5. The method according to any one of claims 3 to 4, wherein the obtaining, based on the first network state information of the first network device and the information indicating the network state of the second network device, at least one of the first predicted load value of the first network device at the first time under the current MLB policy or the second predicted load value of the second network device at the first time comprises:
    obtaining, based on the first network state information of the first network device and the information indicating the network state of the second network device, the first predicted load value of the first network device at the first time under the current MLB policy; and
    the obtaining the first MLB policy to take effect based on at least one of the first predicted load value or the second predicted load value comprises:
    obtaining the first MLB policy to take effect by using a second neural network;
    wherein an input of the second neural network comprises the information indicating the network state of the second network device or the second predicted load value, and the first predicted load value; and an output of the second neural network comprises the first MLB policy to take effect.
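The following sketch illustrates, under stated assumptions, the role of the second neural network in claim 5: it maps the first predicted load value together with the second device's predicted load to a policy output. Reducing the MLB policy to a single cell-individual-offset adjustment, and the chosen weights and output range, are assumptions of this sketch.

```python
# A sketch of a "second neural network": predicted loads in, one MLB policy parameter out.
import numpy as np

def second_nn(first_pred_load, second_pred_load, w, b):
    """Return a cell-individual-offset adjustment (dB) as the policy output."""
    x = np.array([first_pred_load, second_pred_load])
    # Squash to (-6 dB, +6 dB); the range is purely illustrative.
    return 6.0 * np.tanh(w @ x + b)

w = np.array([0.8, -0.8])   # push load towards the less loaded device (assumed weights)
b = 0.0
offset_db = second_nn(first_pred_load=0.9, second_pred_load=0.3, w=w, b=b)
print(f"proposed offset adjustment: {offset_db:+.2f} dB")
```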
  6. The method according to any one of claims 3 to 5, further comprising:
    obtaining, by using the first neural network, at least one of a fourth predicted load value of the first network device at the first time under the first MLB policy, or a fifth predicted load value of the second network device at the first time under the first MLB policy; and
    sending at least one of the fourth predicted load value or the fifth predicted load value to the second network device.
  7. The method according to any one of claims 1 to 6, further comprising:
    determining that the second network device has successfully modified, based on the first MLB policy, a parameter used to adjust the load of the second network device, and modifying, based on the first MLB policy, a parameter used to adjust the load of the first network device;
    obtaining a first actual load value of the first network device at the first time under the first MLB policy, obtaining, by using the first neural network, a fourth predicted load value of the first network device at the first time under the first MLB policy, and optimizing at least one of the first neural network or a second neural network based on the fourth predicted load value and the first actual load value; and/or
    obtaining a second actual load value of the second network device at the first time under the first MLB policy, obtaining, by using the first neural network, a fifth predicted load value of the second network device at the first time under the first MLB policy, and optimizing at least one of the first neural network or the second neural network based on the fifth predicted load value and the second actual load value.
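The optimization in claim 7 can be pictured as reducing the gap between a predicted load value and the corresponding actual load value. The sketch below shows one squared-error gradient step on a linear stand-in model; the actual training procedure, learning rate, and features are left open by the claim and are assumptions here.

```python
# Illustrative optimisation step: update prediction weights from (predicted - actual).
import numpy as np

def sgd_step(theta, features, predicted, actual, lr=0.05):
    """One gradient-descent update minimising (theta·x - actual)^2."""
    error = predicted - actual            # e.g. fourth predicted vs first actual load value
    grad = 2.0 * error * features         # gradient of the squared error w.r.t. theta
    return theta - lr * grad

theta = np.array([0.5, 0.5, 0.5])
features = np.array([0.7, 0.2, 0.4])      # assumed network-state features
predicted = float(theta @ features)
actual = 0.55                             # first actual load value at the first time (assumed)
theta = sgd_step(theta, features, predicted, actual)
print(theta)
```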
  8. The method according to claim 7, further comprising:
    receiving, from the second network device, information indicating the second actual load value of the second network device.
  9. The method according to any one of claims 1 to 6, wherein the method further comprises:
    receiving feedback information from the second network device, wherein the feedback information indicates a cause of a failure of the second network device to modify, based on the first MLB policy, a parameter used to adjust the load of the second network device; and
    optimizing the first neural network and/or the second neural network based on the feedback information.
  10. The method according to any one of claims 1 to 9, wherein before the sending the first MLB policy to the second network device, the method further comprises:
    determining, based on accuracies of at least one of a sixth predicted load value or a seventh predicted load value and of at least one of an eighth predicted load value or a ninth predicted load value, that the at least one of the sixth predicted load value or the seventh predicted load value is better;
    wherein the sixth predicted load value is a predicted load value, obtained by the first network device, of the first network device at a second time under the current MLB policy;
    the seventh predicted load value is a predicted load value, obtained by the first network device, of the second network device at the second time under the current MLB policy;
    the eighth predicted load value is a predicted load value, obtained by the second network device, of the first network device at the second time under the current MLB policy; and
    the ninth predicted load value is a predicted load value, obtained by the second network device, of the second network device at the second time under the current MLB policy.
  11. The method according to claim 10, further comprising:
    obtaining the accuracies of the at least one of the sixth predicted load value or the seventh predicted load value and of the at least one of the eighth predicted load value or the ninth predicted load value;
    wherein obtaining the accuracy of the sixth predicted load value comprises: obtaining the sixth predicted load value based on the first network state information of the first network device and the information indicating the network state of the second network device;
    obtaining a third actual load value of the first network device under the current MLB policy, wherein the third actual load value is an actual load value of the first network device at the second time; and
    obtaining the accuracy of the sixth predicted load value based on the sixth predicted load value and the third actual load value;
    wherein obtaining the accuracy of the eighth predicted load value comprises:
    receiving the eighth predicted load value from the second network device, wherein the eighth predicted load value is a load value, predicted by the second network device under the current MLB policy, of the first network device at the second time;
    obtaining the third actual load value of the first network device under the current MLB policy, wherein the third actual load value is the actual load value of the first network device at the second time; and
    obtaining the accuracy of the eighth predicted load value based on the eighth predicted load value and the third actual load value;
    wherein obtaining the accuracy of the seventh predicted load value comprises:
    obtaining, based on the first network state information of the first network device and the information indicating the network state of the second network device, the seventh predicted load value of the second network device at the second time under the current MLB policy;
    receiving, from the second network device, a fourth actual load value of the second network device under the current MLB policy, wherein the fourth actual load value is an actual load value of the second network device at the second time; and
    obtaining the accuracy of the seventh predicted load value based on the seventh predicted load value and the fourth actual load value;
    wherein obtaining the accuracy of the ninth predicted load value comprises:
    receiving, from the second network device, the ninth predicted load value and the fourth actual load value of the second network device under the current MLB policy, wherein the ninth predicted load value is a load value, predicted by the second network device under the current MLB policy, of the second network device at the second time, and the fourth actual load value is the actual load value of the second network device at the second time; and obtaining the accuracy of the ninth predicted load value based on the ninth predicted load value and the fourth actual load value.
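A small sketch of the accuracy comparison underlying claims 10 and 11: each predicted load value is scored against the corresponding actual load value, and the more accurate one is kept. Using 1/(1+|error|) as the accuracy measure, and the sample values, are assumptions; the claims do not prescribe a particular metric.

```python
# Illustrative accuracy comparison between two predictions of the same actual load value.
def accuracy(predicted, actual):
    """Higher is better: 1 / (1 + |prediction error|)."""
    return 1.0 / (1.0 + abs(predicted - actual))

sixth_pred, eighth_pred = 0.62, 0.48   # first device's vs second device's prediction (assumed)
third_actual = 0.60                    # first device's actual load at the second time (assumed)

acc_sixth = accuracy(sixth_pred, third_actual)
acc_eighth = accuracy(eighth_pred, third_actual)
better = "sixth" if acc_sixth >= acc_eighth else "eighth"
print(f"sixth: {acc_sixth:.3f}, eighth: {acc_eighth:.3f} -> the {better} prediction is better")
```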
  12. The method according to claim 10, wherein the method further comprises:
    sending, to the second network device, at least one of the sixth predicted load value or the seventh predicted load value, and a third actual load value; or
    sending the seventh predicted load value to the second network device.
  13. The method according to claim 10, wherein the method further comprises:
    receiving a request for neural network optimization information from the second network device; and
    sending neural network optimization information to the second network device;
    wherein the neural network optimization information comprises at least one of the following:
    parameter-related information of the first neural network and/or the second neural network;
    input information of the first neural network and/or the second neural network; or
    an analysis result of a cause of a difference between an actual load and a predicted load.
  14. A communication method, wherein the method comprises:
    sending, by a second network device, information indicating a network state of the second network device; and
    receiving, by the second network device, a first mobility load balancing (MLB) policy, wherein the first MLB policy is based on the information indicating the network state of the second network device;
    wherein the first MLB policy comprises at least one of the following:
    a configuration parameter used to adjust a load of a first network device and time information corresponding to the configuration parameter used to adjust the load of the first network device;
    a configuration parameter used to adjust a load of the second network device and time information corresponding to the configuration parameter used to adjust the load of the second network device; or
    reliability information of the first MLB policy.
  15. The method according to claim 14, wherein the sending, by the second network device, the information indicating the network state of the second network device comprises:
    receiving, by the second network device, first query information; and
    sending, by the second network device, first acknowledgement information, wherein the first acknowledgement information is a response to the first query information and comprises the information indicating the network state of the second network device;
    wherein the information indicating the network state of the second network device comprises at least one of the following: current network state information of the second network device, network state information predicted by the second network device, or a third predicted load value of the second network device.
  16. The method according to any one of claims 14 to 15, wherein the method further comprises:
    receiving, by the second network device, at least one of a fourth predicted load value or a fifth predicted load value;
    wherein the fourth predicted load value is a load value, predicted under the first MLB policy, of the first network device at a first time; and
    the fifth predicted load value is a load value, predicted under the first MLB policy, of the second network device at the first time.
  17. The method according to any one of claims 14 to 16, wherein the method further comprises:
    modifying, by the second network device based on the first MLB policy, a parameter used to adjust the load of the second network device, and sending a second actual load value of the second network device at the first time; or
    when the second network device fails to modify the parameter used to adjust the load of the second network device, sending, by the second network device, feedback information, wherein the feedback information indicates a cause of the failure of the parameter configuration modification.
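The second network device's two outcomes in claim 17 (report the actual load after a successful modification, or report why the modification failed) can be sketched as follows. The message fields, the exception-based failure signalling, and the example failure cause are hypothetical.

```python
# Illustrative handling of the two outcomes: report an actual load, or report a failure cause.
def apply_policy_and_report(apply_fn, measure_load_fn):
    """Return the message the second network device would send back."""
    try:
        apply_fn()                                   # modify the load-adjustment parameter
        return {"type": "ACTUAL_LOAD", "second_actual_load": measure_load_fn()}
    except RuntimeError as cause:                    # modification failed
        return {"type": "FEEDBACK", "failure_cause": str(cause)}

def failing_apply():
    raise RuntimeError("parameter out of range")     # assumed failure cause

# Successful modification followed by a load report, then a failure report.
print(apply_policy_and_report(apply_fn=lambda: None, measure_load_fn=lambda: 0.42))
print(apply_policy_and_report(apply_fn=failing_apply, measure_load_fn=lambda: 0.0))
```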
  18. The method according to any one of claims 14 to 17, wherein the method further comprises:
    obtaining, by the second network device, an eighth predicted load value of the first network device at a second time under a current MLB policy, and sending the eighth predicted load value, so that another device obtains an accuracy of the eighth predicted load value; or
    obtaining, by the second network device, at least one of an eighth predicted load value of the first network device at a second time under a current MLB policy or a ninth predicted load value, predicted under the current MLB policy, of the second network device at the second time, and a fourth actual load value of the second network device at the second time under the current MLB policy, and sending the at least one of the eighth predicted load value or the ninth predicted load value and the fourth actual load value, so that another device obtains an accuracy of the at least one of the eighth predicted load value or the ninth predicted load value.
  19. The method according to claim 18, wherein the method further comprises:
    receiving, by the second network device, a sixth predicted load value and a third actual load value, wherein the sixth predicted load value is a load value, predicted by another device under the current MLB policy, of the first network device at the second time, and the third actual load value is an actual load value of the first network device at the second time; and calculating, by the second network device based on the sixth predicted load value, the eighth predicted load value, and the third actual load value, accuracies of the sixth predicted load value and the eighth predicted load value, and determining that the sixth predicted load value is better; and/or
    receiving, by the second network device, a seventh predicted load value, wherein the seventh predicted load value is a load value, predicted by another device under the current MLB policy, of the second network device at the second time; and calculating, by the second network device based on the seventh predicted load value, the ninth predicted load value, and the fourth actual load value, accuracies of the seventh predicted load value and the ninth predicted load value, and determining that the seventh predicted load value is better; and/or
    receiving, by the second network device, a sixth predicted load value and a third actual load value, wherein the sixth predicted load value is a load value, predicted by another device under the current MLB policy, of the first network device at the second time, and the third actual load value is an actual load value of the first network device at the second time; and calculating, by the second network device based on the sixth predicted load value, the third actual load value, the ninth predicted load value, and the fourth actual load value, accuracies of the sixth predicted load value and the ninth predicted load value, and determining that the sixth predicted load value is better; and/or
    receiving, by the second network device, a seventh predicted load value and a third actual load value, wherein the seventh predicted load value is a load value, predicted by another device under the current MLB policy, of the second network device at the second time, and the third actual load value is an actual load value of the first network device at the second time; and calculating, by the second network device based on the seventh predicted load value, the eighth predicted load value, the third actual load value, and the fourth actual load value, accuracies of the seventh predicted load value and the eighth predicted load value, and determining that the seventh predicted load value is better.
  20. An apparatus, configured to implement the method according to any one of claims 1 to 13.
  21. An apparatus, comprising a processor, wherein the processor is coupled to a memory, and the processor is configured to execute a program stored in the memory, so that the apparatus performs the method according to any one of claims 1 to 13.
  22. An apparatus, configured to implement the method according to any one of claims 14 to 19.
  23. An apparatus, comprising a processor, wherein the processor is coupled to a memory, and the processor is configured to execute a program stored in the memory, so that the apparatus performs the method according to any one of claims 14 to 19.
  24. A communication system, comprising the apparatus according to claim 20 or 21 and the apparatus according to claim 22 or 23.
  25. A readable storage medium or program product, comprising a program that, when run by a processor, causes an apparatus comprising the processor to perform the method according to any one of claims 1 to 13, or the method according to any one of claims 14 to 19.
PCT/CN2022/078983 2021-03-05 2022-03-03 Load balancing method and apparatus, and readable storage medium WO2022184125A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP22762587.8A EP4290923A1 (en) 2021-03-05 2022-03-03 Loading balance method and apparatus, and readable storage medium
US18/459,911 US20230413116A1 (en) 2021-03-05 2023-09-01 Load balancing method and apparatus, and readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110247261.3 2021-03-05
CN202110247261.3A CN115038122A (zh) 2021-03-05 2021-03-05 Load balancing method and apparatus, and readable storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/459,911 Continuation US20230413116A1 (en) 2021-03-05 2023-09-01 Load balancing method and apparatus, and readable storage medium

Publications (1)

Publication Number Publication Date
WO2022184125A1 true WO2022184125A1 (zh) 2022-09-09

Family

ID=83118197

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/078983 WO2022184125A1 (zh) 2021-03-05 2022-03-03 Load balancing method and apparatus, and readable storage medium

Country Status (4)

Country Link
US (1) US20230413116A1 (zh)
EP (1) EP4290923A1 (zh)
CN (1) CN115038122A (zh)
WO (1) WO2022184125A1 (zh)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150031360A1 (en) * 2013-07-26 2015-01-29 Samsung Electronics Co., Ltd. Method and device for load balancing in wireless communication system
US20180192324A1 (en) * 2015-06-30 2018-07-05 Telecom Italia S.P.A. Automatic method for mobility load balancing in mobile telecommunications networks
CN106714239A (zh) * 2016-12-14 2017-05-24 北京拓明科技有限公司 一种lte网络负载自动均衡的方法和系统
CN108541022A (zh) * 2017-03-02 2018-09-14 中国移动通信有限公司研究院 一种实现网络负载均衡的方法及装置
CN110868737A (zh) * 2018-08-28 2020-03-06 大唐移动通信设备有限公司 一种负载均衡方法及装置
CN111050330A (zh) * 2018-10-12 2020-04-21 中兴通讯股份有限公司 移动网络自优化方法、系统、终端及计算机可读存储介质
CN111935777A (zh) * 2020-06-03 2020-11-13 东南大学 基于深度强化学习的5g移动负载均衡方法

Also Published As

Publication number Publication date
CN115038122A (zh) 2022-09-09
EP4290923A1 (en) 2023-12-13
US20230413116A1 (en) 2023-12-21

Similar Documents

Publication Publication Date Title
US20230179490A1 (en) Artificial intelligence-based communication method and communication apparatus
US11671952B2 (en) Frequency or radio access technology (RAT) selection based on slice availability
JP6890687B2 (ja) マスタ基地局始動のセカンダリ基地局解放とセカンダリ基地局始動のセカンダリ基地局変更手続きとの間の競合状態の回避
WO2022016401A1 (zh) 通信方法和通信装置
TWI745851B (zh) 藉由聯合網路及雲端資源管理之服務傳遞
RU2749018C1 (ru) Вспомогательная информация для выбора специальной ячейки (spcell)
CN116326174A (zh) 用于主节点发起的条件主辅助小区添加的系统和方法
US20230337065A1 (en) Network resource selection method, and terminal device and network device
CN113966592B (zh) 更新后台数据传输策略的方法和装置
US20210377916A1 (en) Wireless Communications Method and Apparatus
KR20230010716A (ko) 핸드오버 시 셀 그룹 추가/변경 구성의 보존
WO2021204238A1 (zh) 通信方法和通信装置
TW202110223A (zh) 無線通訊網路中之條件組態
US20230086410A1 (en) Communication method and communication apparatus
WO2022040873A1 (zh) 一种通信方法、设备和装置
US20230345331A1 (en) Communication method, apparatus, and system
WO2022184125A1 (zh) Load balancing method and apparatus, and readable storage medium
WO2016070701A1 (zh) 无线资源分配方法、通信节点和存储介质
CN106470389A (zh) 一种通信方法及基站
US20230319597A1 (en) Network node and a method performed in a wireless communication network for handling configuration of radio network nodes using reinforcement learning
WO2024007221A1 (zh) 通信方法及装置
WO2024007156A1 (zh) 一种通信方法和装置
WO2023160560A1 (zh) 一种切换方法及相关装置
CN114788356B (zh) 用于通过srb3的按需系统信息块请求的方法、无线设备和网络节点

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22762587

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022762587

Country of ref document: EP

Effective date: 20230906

NENP Non-entry into the national phase

Ref country code: DE