WO2022236831A1 - Model learning method, model learning apparatus, and storage medium - Google Patents
Model learning method, model learning apparatus, and storage medium
- Publication number: WO2022236831A1 (PCT/CN2021/093927)
- Authority: WIPO (PCT)
- Prior art keywords: model, base station, terminal, micro base station, training
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/16—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/10—Geometric CAD
- G06F30/18—Network design, e.g. design based on topological or interconnect aspects of utility systems, piping, heating ventilation air conditioning [HVAC] or cabling
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W16/00—Network planning, e.g. coverage or traffic planning tools; Network deployment, e.g. resource partitioning or cells structures
- H04W16/24—Cell structures
- H04W16/32—Hierarchical cell structures
Definitions
- the present disclosure relates to the technical field of wireless communication, and in particular, to a model learning method, a model learning device, and a storage medium.
- In order to improve the peak rate and spectrum utilization in communication technology, heterogeneous network technology is further introduced.
- Heterogeneous network technology refers to deploying many micro base stations within the coverage area of a macro base station, forming a heterogeneous system in which nodes of different types share the same coverage. Since the geographical distance between the access point and the served user equipment is reduced, the system throughput and the overall efficiency of the network can be effectively improved.
- Federated learning refers to the method of uniting different participants (such as terminals) for machine learning. Different participants cooperate to learn, which can effectively guarantee information security during big data exchange and protect terminal data and personal data privacy.
- Applying federated learning to multi-source heterogeneous networks can realize machine learning modeling of multi-source heterogeneous networks.
- However, because the performance of each network node in a multi-source heterogeneous network differs, the federated learning process suffers from complex processing and low efficiency.
- the present disclosure provides a model learning method, a model learning device and a storage medium.
- a model learning method is provided, which is applied to a macro base station, including:
- the model training request is used to trigger the micro base station to report capability information; after sending the model training request to the first number of micro base stations, the method further includes:
- the model structure indicates the model structure that the micro base station trains based on the model training request, and the model parameter value is the initial parameter value of the model structure.
- the capability information includes data type characteristics of the micro base station; the method further includes:
- first model alignment is performed on the first number of first model training results with the goal of optimizing the first model loss function; global model learning is performed based on the first model alignment result to determine the global model.
- the global model learning based on the result of the first model alignment, and determining the global model include:
- determining the first model loss function includes:
- the global model learning based on the first model alignment result, and determining the global model include:
- the method further includes:
- the terminal switching information includes information about the terminal that exits model training and the target micro base station that the terminal re-accesses; the terminal switching information is used by the macro base station to re-determine the terminal that performs the model training task.
- a model learning method is provided, which is applied to a micro base station, including:
- in response to receiving the model training request sent by the macro base station, sending the model training request to the terminal; wherein the number of micro base stations receiving the model training request is a first number, and the communication coverage of the first number of micro base stations is within the communication coverage of the macro base station.
- the model training request is used to trigger the terminal to report the communication conditions and data characteristics of the terminal, and after sending the model training request to the terminal, the model learning method further includes:
- receiving the communication conditions and data type characteristics sent by the terminal; processing the communication conditions and data characteristics of the terminal together with the communication conditions and data characteristics of the micro base station to obtain capability information, and sending the capability information to the macro base station; wherein the capability information is used by the macro base station to determine the model structure and model parameter values.
- the method further includes:
- the model structure indicates the model structure that the micro base station trains based on the model training request, and the model parameter value is an initial parameter value of the model structure; based on the communication conditions and data type characteristics of the terminal, as well as the model structure and model parameter values, determining a second number of terminals to perform model training; sending scheduling information to the second number of terminals; the scheduling information includes the model structure and model parameter values, as well as instruction information instructing the terminals to perform model training.
- the method further includes:
- the federated aggregation is performed based on the results of the alignment of the second model to obtain the training results of the first model, including:
- determining the second model loss function includes:
- the method further includes:
- the stop training information instructs the micro base station to stop the terminal from executing the model training task; based on the stop model training information, instruct the terminal to stop performing the model training task.
- the method further includes:
- the terminal switching information includes information about the terminal that exited model training and the target micro base station that the terminal re-accesses; the terminal switching information is used by the macro base station to re-determine the terminal that performs the model training task; in response to receiving the terminal information sent by the macro base station, re-determining the terminal performing the model training task, and sending the model training task to the terminal.
- the sending the model training task to the terminal includes:
- in response to the terminal information including the terminal that performed the model training task last time, determining the target micro base station after the terminal is handed over, and sending the model training task to the terminal through the target micro base station; and/or
- in response to the terminal information not including the terminal that performed the model training task last time, determining that the terminal will no longer perform the model training task, determining a new terminal to perform the model training task, and sending the model training task to the new terminal.
- a model learning device which is applied to a macro base station, including:
- the sending module is configured to send the model training request to a first number of micro base stations in response to receiving the model training request sent by the operation and maintenance management (OAM) entity; wherein the communication coverage of the first number of micro base stations is within the communication coverage of the macro base station.
- the model training request is used to trigger the micro base station to report capability information; the device further includes: a determining module;
- the determining module is configured to, in response to receiving capability information sent by the micro base station, determine a model structure and model parameter values based on the capability information, and send the model structure and model parameter values to the micro base station; the model structure indicates the model structure that the micro base station trains based on the model training request, and the model parameter value is an initial parameter value of the model structure.
- the capability information includes data type characteristics of the micro base station; the device further includes: a receiving module;
- the receiving module is configured to: receive a first number of first model training results sent by the first number of micro base stations; determine the data type characteristics of different micro base stations in the first number of micro base stations, and determine a first model loss function; after unifying the data type characteristics based on the data type characteristics of the different micro base stations, perform first model alignment on the first number of first model training results with the goal of optimizing the first model loss function; and perform global model learning based on the first model alignment result to determine the global model.
- the determining module is configured to:
- the determining module is configured to:
- the determining module is configured to:
- the determining module is further configured to:
- the terminal switching information includes information about the terminal that exits model training and the target micro base station that the terminal re-accesses; the terminal switching information is used by the macro base station to re-determine the terminal that performs the model training task.
- a model learning device is provided, which is applied to a micro base station, including:
- the receiving module is configured to receive the model training request sent by the macro base station; the sending module is configured to send the model training request to the terminal; wherein the number of micro base stations receiving the model training request is a first number, and the communication coverage of the first number of micro base stations is within the communication coverage of the macro base station.
- the model training request is used to trigger the terminal to report the communication conditions and data characteristics of the terminal, and the receiving module is also used to:
- receive the communication conditions and data type characteristics sent by the terminal; process the communication conditions and data characteristics of the terminal together with the communication conditions and data characteristics of the micro base station to obtain capability information, and send the capability information to the macro base station; wherein the capability information is used by the macro base station to determine the model structure and model parameter values.
- the receiving module is further configured to: receive a model structure and model parameter values; the model structure indicates the model structure that the micro base station trains based on the model training request, and the model parameter value is the initial parameter value of the model structure; based on the communication conditions and data type characteristics of the terminal, as well as the model structure and model parameter values, determine a second number of terminals to perform model training; send scheduling information to the second number of terminals; the scheduling information includes the model structure and model parameter values, as well as instruction information instructing the terminal to perform model training.
- the device further includes: a determining module
- the receiving module is configured to receive a second number of second model training results sent by the second number of terminals; the determining module is configured to: determine the data type characteristics of different terminals in the second number of terminals, and determine a second model loss function; after unifying the data type characteristics based on the data type characteristics of the different terminals, perform second model alignment on the second number of second model training results with the goal of optimizing the second model loss function; and perform federated aggregation based on the second model alignment result to obtain the first model training result.
- the determining module is configured to:
- the determining module is configured to:
- the receiving module is further configured to: receive stop model training information sent by the macro base station; the stop training information instructs the micro base station to stop the terminal from executing the model training task; and instruct the terminal to stop executing the model training task based on the stop model training information.
- the sending module is further configured to: send terminal switching information; the terminal switching information includes information about the terminal that quits model training and the target micro base station that the terminal re-accesses; the terminal switching information is used by the macro base station to re-determine the terminal performing the model training task; in response to receiving the terminal information sent by the macro base station, re-determine the terminal performing the model training task, and send the model training task to the terminal.
- the sending module is further configured to:
- in response to the terminal information including the terminal that performed the model training task last time, determine the target micro base station after the terminal is handed over, and send the model training task to the terminal through the target micro base station; and/or
- in response to the terminal information not including the terminal that performed the model training task last time, determine that the terminal will no longer perform the model training task, determine a new terminal to perform the model training task, and send the model training task to the new terminal.
- a model learning device including:
- a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to execute the model learning method described in the first aspect or any one of the implementations of the first aspect, or execute the model learning method described in the second aspect or any one of the implementations of the second aspect.
- a non-transitory computer-readable storage medium; when the instructions in the storage medium are executed by the processor of a mobile terminal, the mobile terminal is enabled to execute the model learning method described in the first aspect or any one of the implementations of the first aspect, or to execute the model learning method described in the second aspect or any one of the implementations of the second aspect.
- the technical solution provided by the embodiments of the present disclosure may include the following beneficial effects: the macro base station sends a model training request to the micro base station, realizing interaction between the macro base station and the micro base station to allocate model training tasks, which improves the utilization efficiency of wireless access network equipment, provides high channel quality, and yields high model reliability, that is, high accuracy.
- Fig. 1 is a schematic diagram of a heterogeneous network scenario architecture of a model learning method according to an exemplary embodiment.
- Fig. 2 is a flow chart showing a model learning method according to an exemplary embodiment.
- Fig. 3 is a flow chart showing another model learning method according to an exemplary embodiment.
- Fig. 4 is a flow chart showing another model learning method according to an exemplary embodiment.
- Fig. 5 is a flow chart showing another model learning method according to an exemplary embodiment.
- Fig. 6 is a flow chart showing another model learning method according to an exemplary embodiment.
- Fig. 7 is a flow chart showing another model learning method according to an exemplary embodiment.
- Fig. 8 is a flow chart showing another model learning method according to an exemplary embodiment.
- Fig. 9 is a flow chart showing another model learning method according to an exemplary embodiment.
- Fig. 10 is a flow chart showing another model learning method according to an exemplary embodiment.
- Fig. 11 is a flow chart showing another model learning method according to an exemplary embodiment.
- Fig. 12 is a flow chart showing another model learning method according to an exemplary embodiment.
- Fig. 13 is a flow chart showing another model learning method according to an exemplary embodiment.
- Fig. 14 is a flow chart showing another model learning method according to an exemplary embodiment.
- Fig. 15 is a flow chart showing another model learning method according to an exemplary embodiment.
- Fig. 16 is a flow chart showing another model learning method according to an exemplary embodiment.
- Fig. 17 is a flow chart showing another model learning method according to an exemplary embodiment.
- Fig. 18 is a flow chart showing another model learning method according to an exemplary embodiment.
- Fig. 19 is a main flowchart of a model reasoning method according to an exemplary embodiment.
- Fig. 20 is a flowchart of federated learning of model reasoning in a model learning method according to an exemplary embodiment.
- Fig. 21 is a flow chart of terminal switching processing in a model learning method according to an exemplary embodiment.
- Fig. 22 is a flow chart of model inference of a model learning method according to an exemplary embodiment.
- Fig. 23 is a schematic diagram of a protocol and an interface for signaling and data transmission between a micro base station and a macro base station in a model learning method according to an exemplary embodiment.
- Fig. 24 is a schematic diagram of a protocol and an interface for signaling and data transmission between a micro base station and a terminal in a model learning method according to an exemplary embodiment.
- Fig. 25 is a schematic diagram of a protocol and interface for terminal switching in a model learning method according to an exemplary embodiment.
- Fig. 26 is a block diagram of a model learning device according to an exemplary embodiment.
- Fig. 27 is a block diagram of another model learning device according to an exemplary embodiment.
- Fig. 28 is a block diagram of an apparatus for model learning according to an exemplary embodiment.
- Fig. 29 is a block diagram showing another apparatus for model learning according to an exemplary embodiment.
- In order to improve the peak rate and spectrum utilization in communication technology, heterogeneous network technology is further introduced.
- Heterogeneous network technology refers to deploying many micro base stations within the coverage area of a macro base station, forming a heterogeneous system in which nodes of different types share the same coverage. Since the geographical distance between the access point and the served terminal is reduced, the system throughput and the overall efficiency of the network can be effectively improved.
- Federated learning refers to the method of uniting different participants (such as terminals) for machine learning. Different participants cooperate to learn, which can effectively guarantee information security during big data exchange and protect terminal data and personal data privacy.
- Applying federated learning to multi-source heterogeneous networks can realize machine learning modeling of multi-source heterogeneous networks, and its implementation can refer to the following examples.
- the macro base station forwards the specific subscription requirements of the Operation Administration and Maintenance (OAM) entity to the terminal, where the OAM subscription requirements may also be referred to as model training requests.
- the terminal reports the communication conditions and local data type characteristics to the macro base station.
- the macro base station assigns tasks according to the information reported by the terminal, and sends the model structure and hyperparameter information to the terminal.
- the terminal performs local model training according to tasks assigned by the macro base station, and after the training is completed, the terminal sends the local learning model parameters to the macro base station.
- the macro base station performs federated averaging according to the local learning results of the terminal to obtain a global model.
- the macro base station checks whether the global learning model meets the subscription requirements of the OAM, and if so, the macro base station sends the obtained model to the OAM. If not, the terminal updates the local model according to the global learning result, and then iteratively trains with the macro base station until the obtained global model meets the OAM subscription requirements.
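- As a concrete illustration of the federated averaging step above, the following is a minimal sketch assuming each terminal reports its local model parameters as a dictionary of numpy arrays; the function and variable names (fed_avg, local_updates) are illustrative and are not defined by the present disclosure.

```python
# Minimal federated-averaging sketch (illustrative only): the macro base station
# averages the local model parameters reported by the terminals to obtain a
# global model. The parameter layout and weighting scheme are assumptions.
import numpy as np

def fed_avg(local_updates, weights=None):
    """Weighted average of per-terminal parameter dicts -> global parameters."""
    n = len(local_updates)
    weights = weights if weights is not None else [1.0 / n] * n
    return {key: sum(w * update[key] for w, update in zip(weights, local_updates))
            for key in local_updates[0]}

# Example: three terminals, each reporting one weight matrix and one bias vector.
rng = np.random.default_rng(0)
local_updates = [{"w": rng.normal(size=(4, 2)), "b": rng.normal(size=2)} for _ in range(3)]
global_model = fed_avg(local_updates)
print({k: v.shape for k, v in global_model.items()})   # {'w': (4, 2), 'b': (2,)}
```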
- the terminal is directly connected to the macro base station for data and signaling transmission.
- the geographical distance between the terminal and the macro base station is large, the channel quality is poor, and the data transmission rate is slow, which affects the overall efficiency of the communication network and results in low efficiency of the federated learning process.
- the macro base station directly performs federated averaging on the local training results of all terminals.
- the data structure of the local training sets of different terminals may differ, so the feasibility of directly performing federated averaging is low, which leads to poor generalization ability of the model; the reliability and accuracy of the model cannot be guaranteed.
- the data interaction between the macro base station and the terminal needs to be carried out through the core network or data center.
- the terminal needs to upload the training result data to the core network or data center first, and the macro base station then requests the data.
- Direct transmission of federated learning data between the base station and the terminal is not supported, which reduces the efficiency of federated learning and the utilization of wireless network resources.
- When the terminal exits the macro base station connection, it directly exits the federated learning process, and the processing flow for new terminals joining the connection is not considered, resulting in less and less available training data in the federated learning process, which is not conducive to overall model training or to improving model accuracy.
- a macro base station includes multiple micro base stations within its coverage area, and terminals connect to the micro base stations for data and signaling interaction. Because the coverage area of a micro base station is small, handover is easily triggered when the terminal moves. However, in the related art, the handover of the terminal is not considered, so it is impossible to determine whether the terminal continues to support training after the handover occurs. Moreover, in federated learning, since different nodes may use training data of different data types and characteristics, the dimensions of the training results of different nodes may differ. In the related art, a processing method for model learning based on heterogeneous networks has not been considered.
- the present disclosure provides a model learning method, which performs model alignment processing on the model learning results of heterogeneous network nodes, determines the training model required by the OAM, and proposes a processing method for terminal handover.
- the terminal can continue to participate in the training task of the source micro base station or join the training task of the target micro base station. This effectively solves the problem of the continuous reduction of available training data in terminal mobility scenarios. In addition, after the data used to train the model at different nodes is aligned, training can be performed, which supports using different types of data to train the same model.
- the wireless access network device may be: a base station, an evolved base station (evolved NodeB, eNB), a home base station, an access point (AP) in a wireless fidelity (WiFi) system, a wireless relay node, a wireless backhaul node, a transmission point (TP) or a transmission and reception point (TRP), etc.; it may also be a gNB in an NR system, or a component or part of the equipment that constitutes a base station, and so on.
- the network device may also be a vehicle-mounted device, for example, in a vehicle-to-everything (V2X) system. It should be understood that in the embodiments of the present disclosure, no limitation is imposed on the specific technology and specific device form adopted by the network device.
- terminals involved in this disclosure can also be referred to as terminal equipment, user equipment (User Equipment, UE), mobile station (Mobile Station, MS), mobile terminal (Mobile Terminal, MT), etc.
- a terminal is a device providing voice and/or data connectivity; for example, a terminal may be a handheld device with a wireless connection function, a vehicle-mounted device, and the like.
- examples of some terminals are: smart phones (Mobile Phone), pocket computers (Pocket Personal Computer, PPC), handheld computers, personal digital assistants (Personal Digital Assistant, PDA), notebook computers, tablet computers, wearable devices, or vehicle-mounted equipment, etc.
- the terminal device may also be a vehicle-mounted device, for example, in a vehicle-to-everything (V2X) system. It should be understood that the embodiments of the present disclosure do not limit the specific technology and specific device form adopted by the terminal.
- Fig. 1 is a schematic diagram of a heterogeneous network scenario architecture of a model learning method according to an exemplary embodiment.
- the system includes a macro base station, M micro base stations and N terminals.
- the disclosed terminal device is mainly responsible for local data collection and local model training.
- the micro base station device is mainly responsible for terminal scheduling and task allocation, coordinating the terminal device for model training and terminal mobility management.
- the macro base station device is mainly responsible for coordinating the micro base station devices in global model training, so as to obtain a global model that meets the OAM subscription requirements.
- the coverage of the micro base station is within the coverage of the macro base station.
- the signaling/data is exchanged between the macro base station and the micro base station, it may be a wired connection, such as through an optical fiber, a coaxial cable, a network cable, etc.; it may also be a wireless connection, such as through a millimeter wave.
- the connection between the macro base station and the micro base station can be realized through the X2 interface, or through other interfaces such as X3, and the embodiments of the present disclosure do not limit the specific implementation form of the connection.
- a wireless connection can be established between the micro base station and the terminal through a wireless air interface.
- the wireless air interface is a wireless air interface based on the fourth-generation mobile communication network technology (4G) standard; or, the wireless air interface is a wireless air interface based on the fifth-generation mobile communication network technology (5G) standard, such as a new radio air interface; alternatively, the wireless air interface may also be a wireless air interface based on a technical standard of a next-generation mobile communication network beyond 5G.
- the embodiments of the present disclosure do not limit the specific implementation form of the connection between the micro base station and the terminals within its coverage. Based on this system, the model learning method of the present disclosure is proposed.
- Fig. 2 is a flow chart showing a model learning method according to an exemplary embodiment. As shown in Figure 2, the model learning method is used in the macro base station, including the following steps.
- step S11 in response to receiving the model training request sent by the OAM entity, send the model training request to the first number of micro base stations.
- the OAM initiates a model training request to the macro base station, and the model training request includes the OAM's requirements for the type of training task and model accuracy of the subscribed model.
- the macro base station forwards the model training request to the micro base station through the X2 interface shown in FIG. 1 .
- the number of forwarded model training requests is determined based on the number of micro base stations covered by the macro base station. In this disclosure, the number of micro base stations covered by one macro base station is called the first number for convenience.
- the model training request may at least include: an analysis ID, a notification target address, and analysis report information.
- the analysis ID is used to identify the requested analysis type; the notification destination address is used to associate the notification received by the requestee with this subscription; the analysis report information includes parameters such as the preferred analysis accuracy level and analysis time interval.
- the model training request may also include analysis filter information, which is used to indicate conditions to be met for reporting analysis information.
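- For illustration only, a model training request carrying the fields listed above could be represented as follows; the field names, types, and default values are assumptions, since the present disclosure does not define a concrete encoding.

```python
# Hypothetical representation of a model training request (illustrative only).
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AnalyticsReportingInfo:
    preferred_accuracy_level: str = "high"     # preferred analysis accuracy level
    analysis_time_interval_s: int = 600        # analysis time interval, in seconds

@dataclass
class ModelTrainingRequest:
    analysis_id: str                                    # identifies the requested analysis type
    notification_target_address: str                    # associates notifications with this subscription
    reporting_info: AnalyticsReportingInfo = field(default_factory=AnalyticsReportingInfo)
    analysis_filter_info: Optional[dict] = None         # optional conditions for reporting analysis information

request = ModelTrainingRequest(
    analysis_id="load-prediction",
    notification_target_address="oam://subscription/42",
)
print(request)
```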
- the macro base station sends the received model training request to the micro base station, which can increase the data rate and further improve the overall efficiency of the communication network.
- the macro base station sends a model training request to the micro base station, so that the micro base station reports capability information.
- the capability information reported by the micro base station includes communication conditions and local data type characteristics of terminals accessing the micro base station, as well as communication conditions and local data type characteristics of the micro base station.
- Fig. 3 is a flowchart showing a model learning method according to an exemplary embodiment. As shown in Figure 3, the model learning method is used in the macro base station, including the following steps.
- step S21 in response to receiving the capability information sent by the micro base station, determine the model structure and model parameter values based on the capability information, and send the model structure and model parameter values to the micro base station.
- the model structure is a model structure that instructs the micro base station to train based on the model training request, and the model parameter value is an initial parameter value of the model structure.
- the macro base station allocates model training tasks based on the received capability information of the micro base stations, and determines the model structure and model parameter values corresponding to each micro base station in the first number of micro base stations. Among them, the allocation of model training tasks assigns each micro base station its specific federated learning task. The corresponding model structure and model parameter values are sent to each micro base station.
- Fig. 4 is a flowchart showing a model learning method according to an exemplary embodiment. As shown in Figure 4, the model learning method is used in the macro base station, including the following steps.
- step S31 a first number of first model training results sent by a first number of micro base stations are received.
- the macro base station receives the first model training result sent by each micro base station in the first number of micro base stations, and obtains the first number of first model training results.
- step S32 the data type characteristics of different micro base stations in the first number of micro base stations are determined, and a first model loss function is determined.
- different micro base stations may have different data type characteristics; for example, one micro base station has a data type characteristic of image data, while another micro base station has a data type characteristic of numerical data, and so on.
- this is merely an illustration, not a specific limitation to the present disclosure.
- step S33 after unifying the data type characteristics based on the data type characteristics of different micro base stations in the first number of micro base stations, first model alignment is performed on the first number of first model training results with the goal of optimizing the first model loss function.
- the macro base station first unifies the dimensions of the data and of the first model training results obtained after federated learning at the micro base stations.
- the macro base station performs one-dimensional convolution on the data type features of all (that is, the first number of) micro base stations performing federated learning under the coverage of the macro base station, and maps the data type features of all micro base stations to the same dimension d′. The specific formula is as follows:
- where r_1, r_2, ..., r_q represent the q micro base stations connected under the macro base station, the corresponding kernel sizes are the convolution kernel sizes of the micro base stations {r_1, r_2, ..., r_q}, and d′ is the common dimension. After the one-dimensional convolution, the features of all micro base stations are mapped to the same dimension d′.
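- The dimension-unification step can be sketched as follows, assuming a PyTorch one-dimensional convolution per micro base station whose kernel size is chosen so that every node's feature vector is mapped to the common dimension d′; the variable names and the kernel-size rule are assumptions and do not reproduce the exact formula of the present disclosure.

```python
# Illustrative sketch: map per-node features of different lengths to a common
# dimension d' with one-dimensional convolutions (assumption: stride 1,
# no padding, so kernel_size = input_length - d' + 1).
import torch
import torch.nn as nn

def make_aligner(in_len: int, d_prime: int) -> nn.Conv1d:
    kernel_size = in_len - d_prime + 1
    assert kernel_size >= 1, "d' must not exceed the input feature length"
    return nn.Conv1d(in_channels=1, out_channels=1, kernel_size=kernel_size)

d_prime = 8
node_features = {                      # q = 3 micro base stations r1..r3 with
    "r1": torch.randn(1, 1, 12),       # feature vectors of different lengths
    "r2": torch.randn(1, 1, 10),
    "r3": torch.randn(1, 1, 16),
}
aligned = {name: make_aligner(x.shape[-1], d_prime)(x) for name, x in node_features.items()}
print({name: tuple(x.shape) for name, x in aligned.items()})   # every node -> (1, 1, 8)
```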
- the macro base station aims to optimize the loss function of the first model, and performs the first model alignment on the first number of first model training results based on the data type characteristics of different micro base stations.
- step S34 global model learning is performed based on the result of the first model alignment, and the global model is determined.
- the macro base station performs global model learning based on the first model alignment result to obtain a model learning result.
- the model learning result is compared with the model training task type requirements and model accuracy included in the model training request, and then the global model requested by the OAM is determined.
- Fig. 5 is a flowchart showing a model learning method according to an exemplary embodiment. As shown in Fig. 5, the model learning method is used in the macro base station, including the following steps.
- step S41 in response to the model learning result of global model learning not satisfying the model training request of the OAM, the model learning result is sent to the micro base station, and the first number of first model training results re-determined by the micro base station based on the model learning result are received.
- the model learning result of this round of global model learning is sent to the micro base station for the micro base station to re-determine a first model training result.
- step S42 the first model loss function is re-determined based on the model learning result of the global model learning, and first model alignment is performed on the received first number of first model training results with the goal of optimizing the re-determined first model loss function.
- the first model loss function is re-determined based on the model learning result of the global model learning that does not meet the OAM model training request this time, and first model alignment is again performed on the first number of first model training results with the goal of optimizing the re-determined first model loss function.
- step S43 based on the re-determined first model alignment result, the next round of global model learning is performed, and the model learning result is re-determined until the model learning result meets the model training request.
- the model corresponding to that model learning result is identified as the global model.
- based on the re-determined first model alignment result, that is, the result of re-optimizing the first model loss function, the macro base station performs global model learning again and obtains a new model learning result. The new model learning result is compared with the model training request to determine whether the requirements for the model in the model training request are met. If not, the first model loss function is re-determined, until the model learning result of the global model learning meets the model training request, and the model corresponding to that model learning result is determined as the global model.
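- The iterative procedure described above can be summarized by the following toy sketch; the stand-in functions (align_and_learn, evaluate) and the numeric placeholder model only mark where the alignment, global learning, and accuracy-check steps would go and are not APIs defined by the present disclosure.

```python
# Toy sketch of the macro base station's loop: collect first model training
# results, align and learn a global model, and repeat until the model learning
# result satisfies the accuracy requirement in the OAM training request.
import random

class ToyMicroBaseStation:
    def first_model_training_result(self, global_model):
        base = 0.9 if global_model is None else global_model
        return min(1.0, base + random.uniform(0.0, 0.05))   # stand-in "training result"

def align_and_learn(results):            # stand-in for first model alignment + global learning
    return sum(results) / len(results)

def evaluate(global_model):              # stand-in for checking model accuracy
    return global_model

def train_global_model(micro_bss, required_accuracy, max_rounds=100):
    global_model = None
    for _ in range(max_rounds):
        results = [bs.first_model_training_result(global_model) for bs in micro_bss]
        global_model = align_and_learn(results)
        if evaluate(global_model) >= required_accuracy:      # meets the model training request
            break                                            # this model becomes the global model
    return global_model

print(train_global_model([ToyMicroBaseStation() for _ in range(3)], required_accuracy=0.95))
```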
- Fig. 6 is a flowchart of a model learning method according to an exemplary embodiment. As shown in Fig. 6, the model learning method is used in the macro base station, including the following steps.
- step S51 a first loss function between the first number of first model training results of the micro base station and a model learning result obtained from the last global model learning of the macro base station, and a first model alignment loss function are determined.
- the first model loss function includes two parts: one part is the first loss function between the first number of first model training results of the micro base stations and the model learning result obtained from the previous global model learning of the macro base station; the other part is the first model alignment loss function.
- the macro base station performs first model alignment on the first number of first model training results with the goal of optimizing the first model loss function. In other words, the macro base station performs first model alignment on the first number of first model training results with the goal of optimizing the overall loss function composed of the first model alignment loss function and the first loss function.
- step S52 a first model loss function is determined based on the first loss function and the first model alignment loss function.
- the absolute value error function and squared error loss function for regression problems, and the cross-entropy loss function for classification problems, may be used; the first model loss function is determined from the first loss function and the first model alignment loss function.
- the first model loss function may refer to the following formula.
- l(·, ·) represents the loss function of the model, that is, the absolute value error function and square error loss function for regression problems, the cross-entropy loss function for classification problems, etc.
- l_M is the first model alignment loss function
- λ represents a weight factor
- ω represents all parameters to be learned, such as weights and bias items
- q represents the total number of micro base stations participating in federated learning
- the functional expression of the first model alignment loss function l_M can be expressed as follows:
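- Assembled only from the variable definitions above, and without reproducing the exact expressions in the original figures, a plausible form of the composite first model loss function and of the alignment loss is:

```latex
% Hedged reconstruction; a_{r_i} denotes the first model training result of micro
% base station r_i, a the previous global model learning result, and h_{r_i} the
% dimension-unified features of r_i. The exact expressions in the patent figures may differ.
L_1(\omega) = \sum_{i=1}^{q} l\bigl(a_{r_i},\, a\bigr) + \lambda\, l_M,
\qquad
l_M = \sum_{1 \le i < j \le q} \bigl\lVert h_{r_i} - h_{r_j} \bigr\rVert^2
```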
- Fig. 7 is a flow chart showing a model learning method according to an exemplary embodiment. As shown in Figure 7, the model learning method is used in the macro base station, including the following steps.
- step S61 in response to the model learning result of the global model learning meeting the model training request of the OAM, information about stopping model training is sent to the micro base station.
- the stop training information instructs the micro base station to stop the terminal from executing the model training task.
- the macro base station determines that the model learning result of the current global model learning satisfies the model training request of the OAM.
- the subscription requirements in the model training request sent by OAM include specific requirements for the model accuracy required by the subscribed business.
- if the model learning result of the global model learning meets the OAM subscription requirements, it means that the current global learning model has achieved sufficient accuracy, so it is determined to end the training task and obtain a usable global model.
- stop model training information is sent to the micro base station, instructing the micro base station to stop the terminal from executing the model training task.
- step S62 the model corresponding to the model learning result is determined as the global model, and the global model is sent to the OAM.
- the model learning result of the t-th round of global model learning is represented by a_t, and then a_t is sent to the OAM.
- Fig. 8 is a flow chart showing a model learning method according to an exemplary embodiment. As shown in Fig. 8, the model learning method is used in the macro base station, including the following steps.
- step S71 in response to receiving terminal switching information sent by the micro base station during model training, re-determine a terminal for performing model training based on the terminal switching information, and send terminal information to the micro base station.
- in response to the macro base station receiving the terminal switching information sent by the micro base station, it is determined that a terminal performing the model training task has exited, or that a new terminal has accessed the micro base station.
- the macro base station re-determines the terminal performing the model training task based on the received terminal switching information, and sends the re-determined terminal information of the terminal performing the model training task to the micro base station.
- the terminal switching information includes information about the terminal that quit model training and the target micro base station that the terminal re-accesses; the terminal switching information is used by the macro base station to re-determine the terminal that performs the model training task.
- the macro base station judges whether the terminal exiting the connection or newly joining the connection participates in performing the model training task according to the switching situation of the terminal. According to the training task type in the OAM subscription requirement, the macro base station judges whether the terminal that exits the connection or newly joins the connection continues to participate in the training task of the source micro base station.
- the types of training tasks can be divided into tasks related to upper-layer applications and tasks related to lower-layer network channels. If the task is related to an upper-layer application, the terminal can continue to participate in the federated learning task of the source micro base station; if the task is related to the underlying network channel, the trained model is only applicable to the source micro base station (that is, the micro base station accessed before the terminal handover), so the terminal cannot continue to participate in the federated learning task of the source micro base station.
- the macro base station can decide whether the terminal continues to participate in the training of the source micro base station according to the training task type and the specific handover information in the OAM subscription requirement.
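- A minimal sketch of this decision, assuming the training task type can be read from the OAM subscription as a simple label (the labels and the function name are illustrative):

```python
# Illustrative handover decision: whether a handed-over terminal keeps training
# for the source micro base station depends on the training task type in the
# OAM subscription requirement (the labels below are assumptions).
def continue_with_source_cell(task_type: str) -> bool:
    if task_type == "upper-layer-application":
        return True    # model not tied to a specific cell's channel: keep training for the source
    if task_type == "lower-layer-channel":
        return False   # model only valid for the source cell: do not continue with the source
    raise ValueError(f"unknown training task type: {task_type}")

print(continue_with_source_cell("upper-layer-application"))   # True
print(continue_with_source_cell("lower-layer-channel"))       # False
```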
- if the macro base station decides that the terminal continues to participate in the model training task of the source micro base station, the target micro base station (that is, the micro base station accessed by the terminal after handover) will be responsible for forwarding the first model training results between the terminal and the source micro base station; the source micro base station continues to keep the terminal in the training task list and reassigns the model training task to it.
- the target micro base station sends the task arrangement result of the terminal to the terminal, and the terminal retains the training information in the source micro base station, and continues to participate in the federated learning of the source micro base station.
- the macro base station decides that the terminal continues to participate in the training of the source micro base station, and the target micro base station will be responsible for forwarding the first model training result between the terminal and the source micro base station.
- after the terminal completes a round of local model training, the terminal sends the local training results to the target micro base station, and the target micro base station forwards the results to the source micro base station;
- the source micro base station sends the global learning result and an indication of whether the terminal continues training to the target micro base station, and the target micro base station forwards the data and signaling to the terminal.
- the embodiment of the present disclosure also provides a model learning method.
- Fig. 9 is a flowchart of a model learning method according to an exemplary embodiment. As shown in Fig. 9, the model learning method is used in the micro base station, including the following steps.
- step S81 a model training request sent by the macro base station is received.
- step S82 a model training request is sent to the terminal.
- the number of micro base stations receiving the model training request is the first number; the communication coverage of the first number of micro base stations is within the communication coverage of the macro base station.
- after receiving the model training request sent by the macro base station, the micro base station forwards the model training request to the terminal.
- the micro base station sends a model training request to the terminal, and the model training request can be used to trigger the terminal to send its own communication conditions and data characteristics.
- Fig. 10 is a flowchart showing a model learning method according to an exemplary embodiment. As shown in Figure 10, the model learning method is used in the micro base station, including the following steps.
- step S91 the communication conditions and data type characteristics sent by the terminal are received.
- step S92 the communication conditions and data characteristics of the terminal and the communication conditions and data characteristics of the micro base station are processed to obtain capability information, and the capability information is sent to the macro base station.
- after receiving the model training request sent by the micro base station, the terminal determines its own communication conditions and data characteristics and reports them.
- the micro base station and the terminal perform data and signaling interaction through a wireless channel.
- the communication condition reported by the terminal refers to the communication capability or communication channel status of the terminal.
- the communication condition reported by the terminal to the micro base station may include channel quality indicator (CQI) information detected by the terminal.
- the characteristics of the local data reported by the terminal may include the category of collected data and the like.
- the micro base station sends the communication conditions and data characteristics reported by the terminal and the communication conditions and data characteristics of the micro base station to the macro base station through the X2 interface.
- the present disclosure refers to the communication conditions and data characteristics of the terminal and the communication conditions and data characteristics of the micro base station as capability information, wherein the capability information is used for the macro base station to determine the model structure and model parameter values.
- Fig. 11 is a flow chart showing a model learning method according to an exemplary embodiment. As shown in Figure 11, the model learning method is used in the micro base station, including the following steps.
- step S101 a model structure and model parameter values are received.
- the model structure is a model structure that instructs the micro base station to train based on the model training request, and the model parameter value is an initial parameter value of the model structure.
- step S102 a second number of terminals to perform model training is determined based on the communication conditions and data type characteristics of the terminals, as well as the model structure and model parameter values.
- the micro base station determines the second number of terminals to perform the model training task based on the received model structure and model parameter values, as well as on the communication conditions and data type characteristics of the accessed terminals.
- step S103 the scheduling information is sent to the second number of terminals.
- after determining the second number of terminals, the micro base station sends scheduling information to the second number of terminals.
- the scheduling information includes model structure and model parameter values, and instruction information instructing the terminal to perform model training.
- if the micro base station determines that the terminals performing the model training task include one terminal (that is, the second number is one), the micro base station determines that the learning mode of the terminal is a single-terminal training mode.
- the micro base station directly forwards the training task assigned by the macro base station to the terminal, and the terminal can perform local model training according to the assigned task.
- if the micro base station determines that the terminals performing the model training task include multiple terminals (that is, the second number is more than one), the micro base station determines that the learning mode of the terminals is a multi-terminal cooperative training mode.
- the micro base station allocates the training tasks assigned by the macro base station according to the communication conditions and local data characteristics of the different terminals, and assists the multiple terminals in cooperating to complete local model training.
- after receiving the scheduling information sent by the micro base station, the terminal initializes the local model parameters, performs local model training according to the model training task requirements assigned by the micro base station, and transmits the training results to the micro base station through a wireless channel.
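- One round of local model training at a terminal could look like the following sketch, assuming a simple least-squares model trained by gradient descent on the terminal's local training set; the model choice and all names are assumptions made for illustration.

```python
# Illustrative local training round at terminal m: start from the scheduled
# initial parameter values, train on the local training set T_m, and return the
# result that would be reported as the second model training result.
import numpy as np

def local_training_round(initial_params, X, y, lr=0.01, epochs=50):
    w = initial_params.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)   # gradient of the mean squared error
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))                      # local training set T_m (features)
y = X @ np.array([0.5, -1.0, 2.0, 0.0]) + 0.1 * rng.normal(size=64)
initial = np.zeros(4)                             # initial parameter values from the scheduling information
second_model_training_result = local_training_round(initial, X, y)
print(second_model_training_result)
```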
- Fig. 12 is a flow chart showing a model learning method according to an exemplary embodiment. As shown in Fig. 12, the model learning method is used in the micro base station, including the following steps.
- step S111 a second number of second model training results sent by a second number of terminals are received.
- the micro base station receives the second number of second model training results sent by the second number of terminals.
- Terminal m Take terminal m in the second number of terminals as an example.
- Terminal m randomly initializes a set of model parameters as the initialization parameters of the local learning model, and the result of the initialized local learning model is recorded as
- the terminal m generates a local data set D m by sensing and collecting data, and randomly extracts a data set with a data volume of N from the local data set to generate a local training set T m .
- the terminal utilizes the local The training set performs local model training, and transmits the terminal training result (i.e., the second model training result) to the micro base station through the wireless channel.
- the local learning model training update result transmitted by terminal m in the t-th round of federated learning may be expressed, for example, as $\omega_m^{(t)}$.
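- A minimal sketch of the local training step performed by terminal m is given below; the linear model, the squared-error loss and the plain gradient-descent update are assumptions chosen only to keep the example concrete and runnable, and stand in for whatever local model the assigned task actually uses.

```python
import numpy as np

def local_training(w_init: np.ndarray, D_m: np.ndarray, y_m: np.ndarray,
                   N: int = 32, epochs: int = 5, lr: float = 0.01) -> np.ndarray:
    """Terminal-side training for one round of federated learning.

    A training set T_m of size N is drawn at random from the local data
    set D_m, and a linear model is updated by gradient descent on a
    squared-error loss. The returned parameters correspond to w_m^(t).
    """
    idx = np.random.choice(len(D_m), size=min(N, len(D_m)), replace=False)
    X, y = D_m[idx], y_m[idx]          # local training set T_m
    w = w_init.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(X)
        w -= lr * grad
    return w                            # second model training result, sent to the micro base station

w0 = np.zeros(3)                                        # randomly initialized in practice
D_m, y_m = np.random.randn(200, 3), np.random.randn(200)
w_m_t = local_training(w0, D_m, y_m, N=64)
```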
- step S112 the data type characteristics of different terminals in the second quantity of terminals are determined, and a second model loss function is determined.
- a data type characteristic of each terminal in the second quantity of terminals is determined, wherein the different data types include, for example, image data, numerical data, and the like.
- step S113 after unifying the data type features based on the data type features of different terminals in the second number of terminals, the second model alignment is performed on the training results of the second number of second models with the goal of optimizing the second model loss function.
- the feature dimensions of the local models obtained through training may also be different, so the feature dimensions of different terminals are unified to facilitate model alignment and federated aggregation.
- the dimension unification can be performed, for example, by a one-dimensional convolution, where $m_1, m_2, \dots, m_n$ represent the n terminals connected under the micro base station i, each terminal $m_j$ uses a convolution kernel whose size depends on that terminal's own feature dimension, and d is the common dimension. After the one-dimensional convolution, the features of all terminals are mapped to the same dimension d.
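- A minimal sketch of this dimension unification, assuming a valid-mode one-dimensional convolution whose kernel size is chosen per terminal so that every feature vector is mapped to the common dimension d (the random kernel stands in for a kernel that would be learned in practice):

```python
import numpy as np

def unify_dimension(features: np.ndarray, d: int) -> np.ndarray:
    """Map a terminal's feature vector to the common dimension d
    via a 1-D convolution (valid mode).

    The kernel size is chosen as len(features) - d + 1 so that the
    output of the valid convolution has exactly d elements.
    """
    k = len(features) - d + 1
    assert k >= 1, "feature dimension must be >= common dimension d"
    kernel = np.random.randn(k) / np.sqrt(k)   # learnable in practice
    return np.convolve(features, kernel, mode="valid")

# Example: three terminals with different local feature dimensions
d = 8
terminal_features = [np.random.randn(dim) for dim in (16, 12, 20)]
unified = [unify_dimension(f, d) for f in terminal_features]
assert all(u.shape == (d,) for u in unified)
```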
- based on the dimension unification results of all terminals, the micro base station performs the second model alignment on the second number of second model training results according to the data type characteristics of the different terminals, with the goal of optimizing the second model loss function.
- step S114 federated aggregation is performed based on the alignment result of the second model to obtain the training result of the first model.
- the micro base station performs federated aggregation based on the alignment result of the second model to obtain the first model training result, and then sends the first model training result to the macro base station.
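- One common way to realize this federated aggregation is (weighted) federated averaging of the aligned terminal updates, sketched below; weighting by local sample counts is an assumption, and plain averaging is equally possible.

```python
import numpy as np

def federated_aggregate(aligned_updates, sample_counts=None):
    """Federated aggregation at the micro base station: a (weighted)
    average of the aligned terminal updates, producing the first model
    training result that is reported to the macro base station."""
    updates = np.stack(aligned_updates)
    if sample_counts is None:
        return updates.mean(axis=0)
    weights = np.asarray(sample_counts, dtype=float)
    weights /= weights.sum()
    return np.tensordot(weights, updates, axes=1)

# Example: three aligned terminal updates with different local data volumes
updates = [np.array([0.2, 0.4]), np.array([0.1, 0.5]), np.array([0.3, 0.3])]
w_i_t = federated_aggregate(updates, sample_counts=[120, 80, 200])
```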
- Fig. 13 is a flowchart showing a model learning method according to an exemplary embodiment. As shown in Figure 13, the model learning method is used in the micro base station, including the following steps.
- step S121 in response to receiving the continued-training request sent by the macro base station, the model learning result sent by the macro base station is received.
- the model learning result sent by the macro base station is further received.
- step S122 the model structure and model parameter values of the terminal are updated based on the model learning results, and training continuation scheduling information is sent to the terminal.
- the micro base station sends the model learning result from the macro base station to the terminal, and the terminal updates the model structure and model parameter values based on the model learning result.
- the micro base station sends continuation training scheduling information to the terminal, instructing the terminal to continue to execute the model training task based on the updated model structure and model parameter values, and resends the obtained second model training result to the micro base station.
- step S123 in response to re-receiving the second number of second model training results, the second model loss function is re-determined based on the first model training results, and, with the goal of optimizing the re-determined second model loss function, the second model alignment is performed on the second number of second model training results.
- after receiving the second number of second model training results re-sent by the terminals, the micro base station re-determines the second model loss function based on the first model training result, and performs the second model alignment on the second number of second model training results with the goal of optimizing the re-determined second model loss function.
- step S124 based on the re-determined alignment result of the second model, the next federated aggregation is performed to re-determine the training result of the first model.
- based on the re-determined second model alignment result, the micro base station i performs the next federated aggregation to re-determine the first model training result, and reports the federated aggregation result to the macro base station through the X2 interface.
- the federated aggregation result transmitted by the micro base station i in the t-th round may be expressed, for example, as $\omega_i^{(t)}$. The macro base station, the micro base stations and the terminals thus form a cyclic federated learning interaction, until the macro base station finally determines a global model that meets the OAM requirements.
- Fig. 14 is a flow chart showing a model learning method according to an exemplary embodiment. As shown in Figure 14, the model learning method is used in the micro base station, including the following steps.
- step S131 a second loss function between the second model training results of the second number of terminals and the first model training result obtained by the previous federated aggregation of the micro base station, and a second model alignment loss function, are determined.
- step S132 a second model loss function is determined based on the second loss function and the second model alignment loss function.
- model alignment is performed based on the feature alignment results of different terminals.
- the goal is to optimize the second model loss function.
- the second model loss function can be determined in two parts. The first part is obtained by calculating a loss function between the model training results of all terminals in the t-th round of federated learning and the federated aggregation update result of the micro base station from the (t-1)-th round; the second part is obtained by calculating a loss function between the model before and after model alignment.
- the goal of model alignment is to optimize the loss function of the two-part ensemble.
- for example, an absolute error function or a squared error loss function is used for regression problems, and a cross-entropy loss function is used for classification problems; the second loss function and the second model alignment loss function together determine the second model loss function.
- the second model loss function may refer to the following formula.
- the loss function of the micro base station i in the t-th federated training process may, for example, be expressed as:
$$\mathcal{L}_i^{(t)} = \frac{1}{n}\sum_{k=1}^{n} l\!\left(\omega_k^{(t)},\, \omega_i^{(t-1)}\right) + \lambda\, l_M$$
- $l(\cdot,\cdot)$ represents the loss function of the model, for example an absolute error function or a squared error loss function for regression problems, a cross-entropy loss function for classification problems, etc.
- $l_M$ is the model alignment loss function
- $\lambda$ represents a weight factor
- $\theta$ represents all parameters to be learned, such as weights and bias terms
- $n$ represents the total number of terminals participating in federated learning under micro base station i; $\omega_k^{(t)}$ denotes the update result of terminal k's local learning model training in the t-th round of federated learning; $\omega_i^{(t-1)}$ denotes the federated aggregation parameter update result of micro base station i from the (t-1)-th round of federated learning.
- the model alignment loss function $l_M$ can be expressed, for example, as a loss computed between each terminal's model before and after the model alignment.
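- The two-part loss described above can be sketched in code as follows; using a squared-error loss for both parts and comparing each terminal's model before and after alignment for $l_M$ are assumptions made only to keep the example self-contained.

```python
import numpy as np

def l2_loss(a: np.ndarray, b: np.ndarray) -> float:
    """Squared-error loss l(a, b) for a regression-style comparison."""
    return float(np.mean((a - b) ** 2))

def micro_bs_loss(terminal_updates, aligned_updates, prev_aggregate, lam=0.1):
    """Two-part loss for micro base station i in round t.

    terminal_updates : list of per-terminal model parameters w_k^(t)
    aligned_updates  : the same parameters after model alignment
    prev_aggregate   : the micro base station aggregate w_i^(t-1)
    lam              : weight factor lambda
    """
    n = len(terminal_updates)
    part1 = sum(l2_loss(w, prev_aggregate) for w in aligned_updates) / n
    part2 = sum(l2_loss(a, w) for a, w in zip(aligned_updates, terminal_updates)) / n
    return part1 + lam * part2

# Toy usage
updates = [np.array([0.2, 0.4]), np.array([0.1, 0.5])]
aligned = [u + 0.01 for u in updates]            # stands in for the model-alignment step
loss = micro_bs_loss(updates, aligned, prev_aggregate=np.array([0.15, 0.45]))
```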
- Fig. 15 is a flow chart showing a model learning method according to an exemplary embodiment. As shown in Figure 15, the model learning method is used in the micro base station, including the following steps.
- step S141 receive stop model training information sent by the macro base station.
- the stop training information is used to instruct the micro base station to stop the terminal from executing the model training task.
- step S142 the terminal is instructed to stop executing the model training task based on the stop model training information.
- when the micro base station receives the stop-model-training information, it determines that the model will no longer be trained, and sends the stop-model-training information to the terminal, instructing the terminal to stop executing the model training task.
- Fig. 16 is a flow chart showing a model learning method according to an exemplary embodiment. As shown in Fig. 16, the model learning method is used in the micro base station, including the following steps.
- step S151 terminal switching information is sent.
- the terminal switching information includes information about the terminal that quits the model training and the target micro base station that the terminal re-enters; the terminal switching information is used for the macro base station to re-determine the terminal that performs the model training task.
- the source micro base station represents the micro base station to which the terminal is connected before handover
- the target micro base station represents the micro base station to which the terminal is connected after handover.
- the source micro base station will regularly send measurement control signals to the terminal, and the terminal measures the received power of the reference signal and the quality of the received reference signal according to the measurement control signal, and reports the measurement report to the source micro base station.
- when the source micro base station detects that another base station can provide a higher quality of service for the terminal, the source micro base station makes a terminal handover decision, notifies the terminal to prepare for handover and initiates a handover request to the target micro base station, and at the same time reports the information of the switching terminal and the target micro base station to the connected macro base station.
- the source micro base station sends a reconfiguration RRC connection request message to the terminal and at the same time sends terminal status information to the target micro base station; the terminal and the target micro base station perform a series of parameter configurations, the terminal successfully accesses the target micro base station, and the target micro base station sends a handover success message to the source micro base station.
- step S152 in response to receiving the terminal information sent by the macro base station, re-determine the terminal performing the model training task, and send the model training task to the terminal.
- after receiving the terminal information sent by the macro base station, the micro base station reassigns the model training task of each terminal based on the re-determined terminals performing the model training task, and sends the corresponding model training task to each terminal.
- Fig. 17 is a flowchart showing a model learning method according to an exemplary embodiment. As shown in Figure 17, the model learning method is used in the micro base station, including the following steps.
- step S161 in response to the fact that the terminal information includes the terminal that performed the model training task last time, the target micro base station after the terminal is switched is determined, and the target micro base station sends the model training task to the terminal.
- after the terminal switches its accessed micro base station, the micro base station re-determines the terminal performing the model training task based on the terminal information sent by the macro base station. If the terminal information includes the terminal that performed the model training task last time, and the terminal has switched micro base stations, the target micro base station (that is, the micro base station the terminal accesses after the handover) is responsible for forwarding the second model training result between the terminal and the source micro base station (that is, the micro base station the terminal accessed before the handover). The source micro base station continues to keep the terminal in the training task list and reassigns a training task to it. The target micro base station sends the terminal's model training task to the terminal, and the terminal retains its training information in the source micro base station and continues to participate in the federated learning of the source micro base station.
- Fig. 18 is a flow chart showing a model learning method according to an exemplary embodiment. As shown in Figure 18, the model learning method is used in the micro base station, including the following steps.
- step S171 in response to the fact that the terminal information does not include the terminal that performed the model training task last time, it is determined that the terminal will no longer perform the model training task, a newly added terminal that performs the model training task is determined, and the model training task is sent to the newly added terminal that performs the model training task.
- if the training task type of the source micro base station does not support the terminal continuing to participate in the training, the source micro base station completely removes the terminal from the training. A newly added terminal reports its communication conditions and local data characteristics to the target micro base station, and the target micro base station determines whether the new terminal participates in the training of the target micro base station according to the type of the training task and the information reported by the terminal. The target micro base station then sends the terminal's task arrangement result to the terminal.
- the micro base station re-determines the terminal performing the model training task based on the terminal information sent by the macro base station. If the terminal information does not include the terminal that performed the model training task last time, and the terminal has switched micro base stations, the target micro base station is responsible for forwarding the second model training result between the terminal and the source micro base station.
- the terminal sends the local training result to the target micro base station, and the target micro base station forwards the result to the source micro base station.
- the source micro base station sends the model learning result and the signaling indicating whether the terminal continues training to the target micro base station, and the target micro base station forwards the data and signaling to the terminal.
- the micro base station removes the terminals that no longer perform the model training task from the training task list, and determines the new terminals to participate in the model training task; a new terminal reports its communication conditions and local data characteristics to the target micro base station through the wireless channel.
- the target micro base station determines whether the new terminal participates in performing the model training task according to the type of the model training task and the information reported by the new terminal, and, regardless of whether the terminal participates in the training of the source micro base station, the target micro base station sends the terminal's task arrangement result to the terminal through the wireless channel.
- the interaction process between the macro base station, the micro base station and the terminal is described.
- OAM initiates a model training request to the macro base station.
- the macro base station forwards the model training request to the micro base station, and the micro base station forwards the request to the terminal.
- the terminal reports the communication conditions and local data characteristics to the micro base station, and the micro base station then reports the terminal information to the macro base station.
- the macro base station assigns tasks to the micro base station according to the terminal information reported by the micro base station, and delivers the model structure and hyperparameter information (ie, the model parameter values involved in the embodiments of the present disclosure) to the micro base station.
- the micro base station selects the terminals participating in the training and the learning mode of the terminals, and assigns tasks to the terminals participating in the model training task.
- the terminal, micro base station, and macro base station iteratively perform federated learning until the model meets the OAM subscription requirements (such as model accuracy requirements), and the macro base station reports the model training results (ie, the model learning results of global model learning) to OAM.
- the OAM subscription requirement includes: an analysis ID, used to identify the requested analysis type; a notification target address, used to associate the notification received by the requesting party with this subscription; analysis report information, including parameters such as the preferred analysis accuracy level and the analysis time interval; and analysis filter information (optional), indicating the conditions to be met by the reported analysis information.
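- Purely as an illustration, such a subscription could be represented by a structure like the following; the field names and default values are assumptions for this sketch and do not correspond to any standardized message format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OAMSubscription:
    """Illustrative container for an OAM subscription request.

    The fields mirror the items listed above: analysis ID, notification
    target address, analysis report information, and optional analysis
    filter information.
    """
    analysis_id: str                      # identifies the requested analysis type
    notification_target: str              # associates notifications with this subscription
    accuracy_level: str = "high"          # preferred analysis accuracy level
    time_interval_s: int = 60             # analysis time interval in seconds
    filter_info: Optional[dict] = None    # optional conditions on reported analysis information

# Example subscription that a macro base station could receive from OAM
sub = OAMSubscription(
    analysis_id="model_accuracy",
    notification_target="oam://notify/42",
    filter_info={"min_accuracy": 0.95},
)
```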
- specifically, the manner in which the terminal, the micro base station and the macro base station iteratively perform federated learning includes:
- in the process of federated learning, the terminal first initializes the local model parameters, then performs local model training according to the task requirements assigned by the micro base station, and transmits the training result (i.e., the second model training result) to the micro base station through a wireless channel.
- after the micro base station summarizes the local training results of all terminals participating in the training, it first performs model alignment, then performs federated aggregation, and reports the federated aggregation result (that is, the first model training result) to the macro base station through the X2 interface.
- after the macro base station aggregates the federated aggregation results of all the micro base stations participating in the training, it first performs model alignment, then performs global model learning, and sends the global learning result to the micro base stations through the X2 interface.
- the micro base station forwards the global model training result to the terminal through the wireless channel, and the terminal updates the local learning model according to the global model training result.
- the macro base station judges whether the global training model meets the requirements according to the OAM subscription requirements.
- if the requirements are met, the macro base station reports the model training result to the OAM and notifies the micro base station to stop training.
- if the requirements are not met, the macro base station arranges the training tasks of the terminals according to the terminal handover information, the micro base station re-assigns the tasks according to the terminal handover situation, and the terminal performs local model learning again and reports the result to the micro base station; this is iterated repeatedly until the model performance meets the OAM subscription requirements.
- the source micro base station represents the micro base station to which the terminal is connected before the handover occurs
- the target micro base station represents the micro base station to which the terminal is connected after the handover occurs.
- the macro base station arranges the terminal to perform model training tasks according to the terminal switching information, including:
- when the source micro base station makes a terminal handover decision in a cycle of federated learning, the source micro base station notifies the terminal to prepare for the handover, and reports the information about the exiting terminal and the target micro base station to the macro base station. After receiving the command from the source micro base station, the terminal executes the handover and completes the connection on the target micro base station.
- the macro base station determines whether the terminal continues to participate in the training of the source micro base station according to the training task type of the source micro base station and the handover information of the terminal.
- if the training task type of the source micro base station supports the terminal continuing to participate in the training, the target micro base station is responsible for forwarding the training data between the terminal and the source micro base station, the terminal continues to participate in the training task of the source micro base station, and the target micro base station sends the terminal's task scheduling result to the terminal.
- if the training task type of the source micro base station does not support the terminal continuing to participate in the training, the source micro base station completely removes the terminal from the training; a newly added terminal reports its communication conditions and local data characteristics to the target micro base station, and the target micro base station decides whether the new terminal participates in the training of the target micro base station according to the type of the training task and the information reported by the terminal. The target micro base station then sends the terminal's task arrangement result to the terminal.
- after the macro base station, the micro base station and the terminal complete the model training task of the OAM and send the global model to the OAM, inference can also be performed using the trained model.
- the task cell for model reasoning is determined by the OAM, and the implementation of task reasoning in the task cell includes:
- when performing task inference, the task cell initiates an inference request to the OAM through the macro base station where it is located and reports the inference task type and specific requirements, and the OAM searches for one or more suitable models according to the inference task type and specific requirements. After finding a suitable model, the OAM sends the model selection result to the macro base station, and the selected macro base station reports specific model parameter information. The OAM forwards the model parameter information reported by the selected macro base station to the macro base station where the task cell is located, and the macro base station where the task cell is located performs inference on the task according to the model parameter information.
- Fig. 19 is a main flowchart of a model reasoning method according to an exemplary embodiment. As shown in Figure 19, the following steps are included:
- step 1 the OAM initiates a model training request to the macro base station, and the macro base station forwards the model training request to the micro base station.
- Step 2 the micro base station forwards the model training request to the terminal, the terminal reports the communication conditions and local data type characteristics to the micro base station, and the micro base station reports the terminal data to the macro base station.
- step 3 the macro base station allocates tasks according to the information reported by the micro base station, and delivers the model structure and model parameter values to the micro base station.
- Step 4 the micro base station selects the terminals participating in the model training task and the learning modes of the terminals, and assigns tasks to the terminals participating in the training.
- Step 5 The terminal, the micro base station and the macro base station iteratively perform federated learning until the model meets the OAM subscription requirements, and the macro base station reports the model training result to the OAM.
- Fig. 20 is a flowchart of federated learning of a model reasoning method according to an exemplary embodiment. As shown in Figure 20, it includes: the terminal initializes the local model parameters; the terminal performs local model training according to the task requirements, and transmits the second model training result to the micro base station through the wireless channel; the micro base station summarizes the second model training results of all terminals, first performs model alignment, then performs federated aggregation, and reports the result to the macro base station through the X2 interface; the macro base station summarizes the results of all micro base stations, performs model alignment and global model learning, and the model learning result is sent to the micro base station through the X2 interface; the micro base station sends the model learning result to the terminal through the wireless channel, and the terminal updates the local learning model according to the model learning result; the macro base station determines whether the global model corresponding to the model training result meets the OAM subscription requirements; if the OAM subscription requirements are met, the federated learning ends, and the macro base station reports the model learning result to the OAM; if the OAM subscription requirements are not met, the macro base station judges, according to the terminal handover information, whether a terminal exiting the training continues to participate, and the next round of federated learning is performed.
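- The cyclic interaction shown in Fig. 20 can be condensed into the toy sketch below; the single parameter vector, the perturbation-based local update and the fixed number of rounds are assumptions that stand in for real local training and for the OAM-based stopping criterion.

```python
import numpy as np

def local_train(w_global: np.ndarray) -> np.ndarray:
    """Stand-in for a terminal's local update (real training would use its local data set)."""
    return w_global + 0.01 * np.random.randn(*w_global.shape)

def fed_avg(updates):
    """Federated averaging, used here for both micro- and macro-level aggregation."""
    return np.mean(np.stack(updates), axis=0)

def federated_round(w_global, terminals_per_micro_bs):
    """One round of the cycle: terminals train locally, each micro base station
    aggregates its terminals' results, and the macro base station aggregates
    the micro base stations' results into the new global model."""
    micro_results = []
    for n_terminals in terminals_per_micro_bs:
        updates = [local_train(w_global) for _ in range(n_terminals)]
        micro_results.append(fed_avg(updates))          # first model training result
    return fed_avg(micro_results)                        # simplified global model learning

w = np.zeros(4)
for _ in range(5):                                       # stands in for the OAM stopping criterion
    w = federated_round(w, terminals_per_micro_bs=[3, 2])
```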
- Fig. 21 is a flow chart of terminal switching processing in a model reasoning method according to an exemplary embodiment. As shown in Figure 21, it includes: the source micro base station notifies the terminal to prepare for handover, and reports the information of the exiting terminal and the target micro base station to the macro base station; the terminal executes the handover and completes the connection on the target micro base station; the macro base station determines, according to the training task type and the handover information, whether the terminal continues to participate in the model training task of the source micro base station; if the terminal continues to participate in the model training task of the source micro base station, the target micro base station is responsible for forwarding the training data between the terminal and the source micro base station, and the terminal continues to participate in the training task of the source micro base station; the target micro base station sends the terminal's task arrangement result to the terminal.
- if the terminal does not continue to participate in the model training task of the source micro base station, the source micro base station removes the terminal from the training; a newly added terminal reports its communication conditions and local data characteristics to the target micro base station; the target micro base station determines whether the new terminal participates in the training; and the target micro base station sends the terminal's task arrangement result to the terminal.
- Fig. 22 is a flow chart of model inference of a model learning method according to an exemplary embodiment. As shown in Figure 22, it includes the following steps:
- Step 1 The task cell initiates an inference request to the OAM through the macro base station and reports the inference task type and specific requirements.
- Step 2 OAM searches for one or more suitable models according to the type of reasoning task and specific requirements.
- the types of reasoning tasks are classified into types related to upper-layer applications or types related to bottom-layer network channels.
- a macro base station model whose training task type is similar to the inference task type is preferred.
- multiple trained models may be selected, and the models are fused to perform inference.
- step 3 the OAM sends the model selection result to the macro base station, and the selected macro base station reports specific model parameter information.
- Step 4 the OAM forwards the model parameter information to the macro base station where the task cell is located, and the macro base station where the task cell is located performs inference on the task according to the model parameter information.
- if the OAM selects multiple trained macro base station models, the macro base station where the task cell is located performs model fusion on the multiple models and then performs inference on the task.
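- One simple way such model fusion could be performed is to average the outputs of the selected models, as in the sketch below; the averaging rule and the toy linear models are assumptions used only for illustration.

```python
import numpy as np

def fuse_and_infer(models, x):
    """Simple model fusion by averaging the outputs of several trained
    models on the same input; other fusion rules (weighted averaging,
    voting) could be used instead."""
    outputs = [m(x) for m in models]
    return np.mean(outputs, axis=0)

# Example with two toy "models" represented as linear functions
models = [lambda x: x @ np.array([0.5, 1.0]), lambda x: x @ np.array([0.4, 1.2])]
prediction = fuse_and_infer(models, np.array([[1.0, 2.0]]))
```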
- Fig. 23 is a schematic diagram of a protocol and an interface for signaling and data transmission between a micro base station and a macro base station in a model learning method according to an exemplary embodiment. As shown in Figure 23, it mainly involves the interaction between the micro base station and the macro base station, as follows:
- the micro base station sends a connection establishment request signaling (X2 Setup Request) to the macro base station, and the content of the signaling indicates that it requests to establish a connection with the target base station.
- the macro base station performs resource allocation according to the connection establishment request signaling sent by the micro base station.
- the macro base station sends a successful connection establishment signaling (X2 Setup Response) to the micro base station, and the content of the signaling indication is to notify the other party that the connection has been successfully established.
- the micro base station packages the training result of the first model.
- the micro base station sends the signaling for sending the training result data packet to the macro base station, and the content of the signaling indicates sending the training data packet to the receiver.
- the macro base station uses the AI service module for global model training.
- the macro base station sends the signaling for sending the packaged global model training result data packet to the micro base station, and the content of the signaling indicates that the global model training result is packaged and the data packet is sent to the receiver.
- the macro base station sends a signaling notifying whether to continue training to the micro base station, and the content of the signaling indicates notifying the other party whether to continue training.
- the macro base station and the micro base station confirm that the transmission is completed.
- the macro base station sends a resource release signaling (Release Resource) to the micro base station, and the content of the signaling indicates: perform resource release.
- Fig. 24 is a schematic diagram of a protocol and an interface for signaling and data transmission between a micro base station and a terminal in a model learning method according to an exemplary embodiment. As shown in Figure 24, it mainly involves the interaction between the micro base station and the terminal, as follows:
- the terminal sends an RRC connection establishment request signaling (RRC Connection Request) to the micro base station, and the content of the signaling indication is to request to establish an RRC connection with the target base station.
- the micro base station sends the RRC Connection Setup signaling (RRC Connection Setup) to the terminal, and the content of the signaling indicates notifying the receiver that it agrees to establish the RRC connection.
- the terminal performs radio resource configuration according to the signaling sent by the micro base station.
- the terminal sends an RRC Connection Setup Complete signaling (RRC Connection Setup Complete) to the micro base station, and the content of the signaling indication is to notify the receiver that the RRC connection setup is complete.
- the terminal packs the local training result (that is, the second model training result).
- the terminal sends the signaling for sending the local training result data packet to the micro base station, and the content of the signaling indicates sending the local training result data packet to the receiver.
- the micro base station and the macro base station cooperate to use the AI service module for model training.
- the micro base station sends the global model training result signaling to the terminal, and the content of the signaling indicates that the global model training result is sent to the receiver.
- the micro base station sends the signaling notifying the terminal whether to continue training, and the content of the signaling indicates informing the other party whether to continue the training.
- the micro base station sends the RRC Connection Release request signaling (RRC Connection Release) to the terminal, and the content of the signaling indicates a request to release the RRC connection.
- the terminal sends the successful RRC connection release signaling (RRC Connection Release Complete) to the micro base station, and the content of the signaling indication is to notify the other party that the RRC connection has been successfully released.
- Fig. 25 is a schematic diagram of a protocol and interface for terminal switching in a model learning method according to an exemplary embodiment. As shown in Figure 25, it mainly involves the interaction between the macro base station, the source micro base station, the target micro base station and the terminal, as follows:
- the source micro base station sends the measurement control signal signaling (Measurement Control) to the terminal, and the signaling indicates content: notify the other party to perform signal strength measurement.
- the terminal sends measurement report signaling (Measurement Reports) to the source Femtocell, and the signaling indicates that the measurement report is to be sent to the receiver.
- the source micro base station makes a terminal handover decision (HO decision).
- the source micro base station sends handover request signaling (Handover Request) to the target micro base station, and the content of the signaling indicates that a handover request is sent to the receiver.
- the target micro base station sends handover request acknowledgment signaling (Handover Request Ack) to the source micro base station, and the content of the signaling indicates that a handover request acknowledgment is sent to the receiver.
- the source micro base station sends the reconfiguration RRC connection request signaling (RRC Connection Reconfiguration) including the mobility control information (Mobility control information) to the terminal, and the signaling indicates that the content is to send a reconfiguration RRC connection request to the receiver.
- the source micro base station sends the terminal status information signaling (Early Status Transfer) to the target micro base station, and the content of the signaling indicates that the terminal status information is sent to the receiver.
- the terminal accesses the target micro base station.
- the terminal sends RRC connection reconfiguration complete message signaling (RRC Connection reconfiguration complete) to the target micro base station, and the signaling indicates that the content of the signaling is to send an RRC reconnection configuration complete message to the receiver.
- the target micro base station sends a handover success message signaling (Handover success) to the source micro base station, and the signaling indicates that the handover success message is sent to the receiver.
- the source micro base station sends the signaling for sending the information of the handover terminal and the target micro base station to the macro base station, and the content of the signaling instruction is to send the information of the handover terminal and the target micro base station to the macro base station.
- the macro base station determines whether the terminal continues to participate in the training task of the source micro base station according to the source micro base station's training task type and the handover information.
- the macro base station sends the decision result signaling to the target micro base station, and the content of the signaling indicates that the decision result is sent to the receiver.
- the macro base station sends the decision result signaling to the source micro base station, and the content of the signaling indicates that the decision result is sent to the receiver.
- the target micro base station decides whether the handover terminal will participate in its own federated learning training task.
- the target micro base station sends the decision result signaling to the terminal, and the content of the signaling indicates that the decision result is sent to the receiver.
- the embodiment of the present disclosure also provides a model learning device.
- the model learning apparatus provided by the embodiments of the present disclosure includes hardware structures and/or software modules corresponding to each function.
- the embodiments of the present disclosure can be implemented in the form of hardware or a combination of hardware and computer software. Whether a certain function is executed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functions for each specific application, but such implementation should not be regarded as exceeding the scope of the technical solutions of the embodiments of the present disclosure.
- as an example for illustration, the model learning apparatus includes one macro base station apparatus, M micro base station apparatuses, and N user apparatuses.
- the user device is a terminal connected to the micro base station, responsible for local data collection and local model training, and can update the local model according to the learning result of the global model.
- the micro base station device is responsible for selecting the terminals and learning modes participating in the model training task, assigning training tasks to the terminals participating in the model training task, summarizing the local training results of the terminals, and using the AI service module to perform model alignment and federated averaging, and is also responsible for terminal switching management and forwarding the signaling issued by the macro base station to the terminal.
- the macro base station device is responsible for interacting with the OAM, assigning tasks to the micro base station devices participating in the training, summarizing the training results of the micro base station devices, and using the AI service module to perform model alignment and global model learning, while also deciding whether the terminal continues to participate in training.
- Fig. 26 is a block diagram of a model learning device according to an exemplary embodiment.
- the model learning apparatus 100 is applied to a macro base station and includes a sending module 101 .
- the sending module is configured to send the model training request to the first number of micro base stations in response to receiving the model training request sent by the OAM entity.
- the communication coverage of the first number of micro base stations is within the communication coverage of the macro base station.
- the model training request is used to trigger the micro base station to report capability information.
- the device further includes: a determining module 102 .
- the determining module 102 is configured to, in response to receiving capability information sent by the micro base station, determine a model structure and model parameter values based on the capability information, and send the model structure and model parameter values to the micro base station.
- the model structure is a model structure that instructs the micro base station to train based on the model training request, and the model parameter value is an initial parameter value of the model structure.
- the capability information includes the data type characteristics of the micro base station.
- the device also includes: a receiving module 103 .
- the receiving module 103 is configured to receive the first number of first model training results sent by the first number of micro base stations. Determine data type characteristics of different micro base stations in the first quantity of micro base stations, and determine a first model loss function. After unifying the data type characteristics based on the data type characteristics of different micro base stations in the first number of micro base stations, with the goal of optimizing the first model loss function, the first model alignment is performed on the training results of the first number of first models. The global model learning is performed based on the result of the first model alignment to determine the global model.
- the determining module 102 is configured to: in response to the model learning result of the global model learning not satisfying the OAM model training request, send the model learning result to the micro base stations, and receive the first number of first model training results re-determined by the micro base stations based on the model learning result; re-determine the first model loss function based on the model learning result of the global model learning, and, with the goal of optimizing the re-determined first model loss function, re-perform the first model alignment on the received first number of first model training results; and, based on the re-determined first model alignment result, perform the next global model learning and re-determine the model learning result, until the model learning result meets the model training request, and determine the model corresponding to the model learning result that meets the model training request as the global model.
- the determination module 102 is configured to determine the first loss function between the first model training results of the first number of micro base stations and the model learning results obtained from the previous global model learning of the macro base station, and the first model Alignment loss function. Based on the first loss function and the first model alignment loss function, a first model loss function is determined.
- the determining module 102 is configured to send stop model training information to the micro base station in response to the model learning result of the global model learning meeting the model training request of the OAM.
- the stop training information instructs the micro base station to stop the terminal from executing the model training task.
- the model corresponding to the model learning result is determined as the global model, and the global model is sent to the OAM.
- the determining module 102 is further configured to, in response to receiving terminal switching information sent by the micro base station during the model training process, re-determine the terminal performing the model training task based on the terminal switching information, and send the terminal's information to the micro base station.
- the terminal switching information includes information about the terminal that exits the model training and the target micro base station that the terminal re-enters.
- the terminal switching information is used by the macro base station to re-determine the terminal performing the model training task.
- Fig. 27 is a block diagram of a model learning device according to an exemplary embodiment.
- the model learning apparatus 200 is applied to a micro base station and includes a receiving module 201 and a sending module 202 .
- the receiving module 201 is configured to receive the model training request sent by the macro base station.
- the sending module 202 is configured to send a model training request to the terminal.
- the number of micro base stations receiving the model training request is the first number.
- the communication coverage of the first number of micro base stations is within the communication coverage of the macro base station.
- the model training request is used to trigger the terminal to report the communication conditions and data characteristics of the terminal
- the receiving module 201 is also used to receive the communication conditions and data type characteristics sent by the terminal.
- the communication conditions and data characteristics of the terminal and the communication conditions and data characteristics of the micro base station are processed to obtain capability information, and the capability information is sent to the macro base station.
- the capability information is used by the macro base station to determine the model structure and model parameter values.
- the receiving module 201 is further configured to: receive the model structure and model parameter values.
- the model structure is the model structure that the micro base station is instructed to train based on the model training request
- the model parameter value is the initial parameter value of the model structure.
- a second number of terminals performing model training is determined.
- the scheduling information includes model structure and model parameter values as well as instruction information instructing the terminal to perform model training.
- the device further includes: a determination module 203 .
- the receiving module 201 is configured to receive a second number of second model training results sent by a second number of terminals.
- a determination module 203 configured to determine the data type characteristics of different terminals in the second quantity of terminals, and determine a second model loss function. After the data type features are unified based on the data type features of different terminals in the second number of terminals, the second model alignment is performed on the training results of the second number of second models with the goal of optimizing the second model loss function. The federated aggregation is performed based on the aligned results of the second model to obtain the training results of the first model.
- the determining module 203 is configured to respond to receiving the continuation training request sent by the macro base station, and receive the model learning result sent by the macro base station. Based on the model learning results, the model structure and model parameter values of the terminal are updated, and the scheduling information for continuing training is sent to the terminal. In response to re-receiving the second number of second model training results, re-determining the second model loss function based on the first model training result, and aiming at optimizing the re-determined second model loss function, training the second number of second models The results are subjected to a second model alignment. Based on the re-determined alignment result of the second model, the next federated aggregation is performed to re-determine the training result of the first model.
- the determination module 203 is configured to determine the second loss function between the second model training result of the second number of terminals and the first model training result obtained by the last federation aggregation of the micro base station, and the second model alignment loss function. Based on the second loss function and the second model alignment loss function, a second model loss function is determined.
- the receiving module 201 is further configured to: receive stop model training information sent by the macro base station.
- the stop training information instructs the micro base station to stop the terminal from executing the model training task; the terminal is instructed to stop executing the model training task based on the stop model training information.
- the sending module 202 is further configured to: send terminal switching information.
- the terminal switching information includes information about the terminal that exits the model training and the target micro base station that the terminal re-enters.
- the terminal switching information is used by the macro base station to re-determine the terminal performing the model training task.
- the sending module 202 is further configured to: in response to receiving the terminal information sent by the macro base station, re-determine the terminal performing the model training task, and send the model training task to the terminal.
- the sending module 202 is configured to: in response to the terminal information including the terminal that performed the model training task last time, determine the target micro base station after the terminal is handed over, and send the model training task to the terminal; and/or, in response to the terminal information not including the terminal that performed the model training task last time, determine that the terminal will no longer perform the model training task, determine a newly added terminal that performs the model training task, and send the model training task to the newly added terminal that performs the model training task.
- the specific manner in which each module executes operations has been described in detail in the embodiments related to the method, and will not be described in detail here.
- Fig. 28 is a block diagram of an apparatus 300 for model learning according to an exemplary embodiment.
- the apparatus 300 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and the like.
- device 300 may include one or more of the following components: processing component 302, memory 304, power component 306, multimedia component 308, audio component 310, input/output (I/O) interface 312, sensor component 314, and communication component 316 .
- the processing component 302 generally controls the overall operations of the device 300, such as those associated with display, telephone calls, data communications, camera operations, and recording operations.
- the processing component 302 may include one or more processors 320 to execute instructions to complete all or part of the steps of the above method. Additionally, processing component 302 may include one or more modules that facilitate interaction between processing component 302 and other components. For example, processing component 302 may include a multimedia module to facilitate interaction between multimedia component 308 and processing component 302 .
- the memory 304 is configured to store various types of data to support operations at the device 300 . Examples of such data include instructions for any application or method operating on device 300, contact data, phonebook data, messages, pictures, videos, and the like.
- the memory 304 can be implemented by any type of volatile or non-volatile storage device or their combination, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable Programmable Read Only Memory (EPROM), Programmable Read Only Memory (PROM), Read Only Memory (ROM), Magnetic Memory, Flash Memory, Magnetic or Optical Disk.
- Power component 306 provides power to various components of device 300 .
- Power components 306 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for device 300 .
- the multimedia component 308 includes a screen that provides an output interface between the device 300 and the user.
- the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user.
- the touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense a boundary of a touch or swipe action, but also detect duration and pressure associated with the touch or swipe action.
- the multimedia component 308 includes a front camera and/or a rear camera. When the device 300 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capability.
- the audio component 310 is configured to output and/or input audio signals.
- the audio component 310 includes a microphone (MIC), which is configured to receive external audio signals when the device 300 is in operation modes, such as call mode, recording mode, and voice recognition mode. Received audio signals may be further stored in memory 304 or sent via communication component 316 .
- the audio component 310 also includes a speaker for outputting audio signals.
- the I/O interface 312 provides an interface between the processing component 302 and a peripheral interface module, which may be a keyboard, a click wheel, a button, and the like. These buttons may include, but are not limited to: a home button, volume buttons, start button, and lock button.
- Sensor assembly 314 includes one or more sensors for providing various aspects of status assessment for device 300 .
- the sensor component 314 can detect the open/closed state of the device 300, the relative positioning of components, such as the display and keypad of the device 300, and the sensor component 314 can also detect a change in the position of the device 300 or a component of the device 300 , the presence or absence of user contact with the device 300 , the device 300 orientation or acceleration/deceleration and the temperature change of the device 300 .
- the sensor assembly 314 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact.
- Sensor assembly 314 may also include an optical sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
- the sensor component 314 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
- the communication component 316 is configured to facilitate wired or wireless communication between the apparatus 300 and other devices.
- the device 300 can access wireless networks based on communication standards, such as WiFi, 2G or 3G, or a combination thereof.
- the communication component 316 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
- the communication component 316 also includes a near field communication (NFC) module to facilitate short-range communication.
- the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wide Band (UWB) technology, Bluetooth (BT) technology and other technologies.
- the apparatus 300 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors or other electronic components, for performing the methods described above.
- non-transitory computer-readable storage medium including instructions, such as the memory 304 including instructions, which can be executed by the processor 320 of the device 300 to implement the above method.
- the non-transitory computer readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, and the like.
- Fig. 29 is a block diagram of an apparatus 400 for model learning according to an exemplary embodiment.
- the apparatus 400 may be provided as a server.
- apparatus 400 includes processing component 422 , which further includes one or more processors, and a memory resource represented by memory 432 for storing instructions executable by processing component 422 , such as application programs.
- the application program stored in memory 432 may include one or more modules each corresponding to a set of instructions.
- the processing component 422 is configured to execute instructions to perform the above method.
- Device 400 may also include a power component 426 configured to perform power management of device 400 , a wired or wireless network interface 450 configured to connect device 400 to a network, and an input-output (I/O) interface 458 .
- the device 400 can operate based on an operating system stored in the memory 432, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM or the like.
- “plurality” in the present disclosure refers to two or more, and other quantifiers are similar thereto.
- “And/or” describes the association relationship of associated objects, indicating that there may be three types of relationships, for example, A and/or B may indicate: A exists alone, A and B exist simultaneously, and B exists independently.
- the character “/” generally indicates that the contextual objects are an “or” relationship.
- the singular forms “a”, “said” and “the” are also intended to include the plural unless the context clearly dictates otherwise.
- first, second, etc. are used to describe various information, but the information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another, and do not imply a specific order or degree of importance. In fact, expressions such as “first” and “second” can be used interchangeably.
- first information may also be called second information, and similarly, second information may also be called first information.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- Databases & Information Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Artificial Intelligence (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Geometry (AREA)
- Computational Mathematics (AREA)
- Mathematical Optimization (AREA)
- Pure & Applied Mathematics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Analysis (AREA)
- Mobile Radio Communication Systems (AREA)
Abstract
一种模型学习方法、模型学习装置及存储介质。其中,模型学习方法,应用于宏基站,包括:响应于接收到操作维护管理OAM实体发送的模型训练请求,向第一数量的微基站发送所述模型训练请求(S11);其中,所述第一数量的微基站通信覆盖范围在所述宏基站通信覆盖范围内。通过本方法实现宏基站与微基站之间的交互,实现训练模型的任务分配,信号质量好,数据传输速率较快,提高无线接入网络的利用率。
Description
本公开涉及无线通信技术领域,尤其涉及一种模型学习方法、模型学习装置及存储介质。
在通信技术中为提高峰值速率和频谱利用率,进一步引入异构网络技术。其中,异构网络技术是指许多微基站被布放在宏基站覆盖区域内,形成同覆盖的不同节点类型相异的异构系统。由于接入点与被服务的用户设备之间的地理距离被缩小了,能够有效提升系统吞吐量和网络整体效率。
另一方面随着人工智能技术的发展,机器学习被应用到越来越多的领域,机器学习中的联邦学习是其中一种学习方法。联邦学习是指通过联合不同的参与方(例如终端)进行机器学习的方法,不同参与方协同进行学习,可以有效保障大数据交换时的信息安全、保护终端数据和个人数据隐私。将联邦学习应用到多源异构网络中,可以实现多源异构网络的机器学习建模。但是由于多源异构网络各个网络节点性能不同,存在联邦学习过程处理复杂且效率低的问题。
发明内容
为克服相关技术中存在的问题,本公开提供一种模型学习方法、模型学习装置及存储介质。
根据本公开实施例的第一方面,提供一种模型学习方法,应用于宏基站,包括:
响应于接收到操作维护管理OAM实体发送的模型训练请求,向第一数量的微基站发送所述模型训练请求;其中,所述第一数量的微基站通信覆盖范围在所述宏基站通信覆盖范围内。
一种实施方式中,所述模型训练请求用于触发微基站上报能力信息;所述向第一数量的微基站发送所述模型训练请求之后,所述方法还包括:
响应于接收到微基站发送的能力信息,基于所述能力信息确定模型结构和模型参数值,并向微基站发送所述模型结构和模型参数值;所述模型结构为指示微基站基于所述模型训练请求训练的模型结构,所述模型参数值为所述模型结构的初始参数值。
一种实施方式中,所述能力信息包括微基站的数据类型特征;所述方法还包括:
接收第一数量微基站发送的第一数量第一模型训练结果;确定所述第一数量微基站中不同微基站具有的所述数据类型特征,并确定第一模型损失函数;基于所述第一数量微基 站中不同微基站具有的数据类型特征进行数据类型特征统一后,以优化所述第一模型损失函数为目标,对所述第一数量第一模型训练结果进行第一模型对齐;基于第一模型对齐的结果进行全局模型学习,确定全局模型。
一种实施方式中,所述基于第一模型对齐的结果进行全局模型学习,确定全局模型,包括:
响应于所述全局模型学习的模型学习结果不满足OAM的模型训练请求,将所述模型学习结果发送至微基站,接收微基站基于所述模型学习结果重新确定的第一数量第一模型训练结果;并基于所述全局模型学习的模型学习结果重新确定所述第一模型损失函数,并以优化重新确定的第一模型损失函数为目标,重新对接收的所述第一数量第一模型训练结果进行第一模型对齐;基于重新确定的第一模型对齐的结果,进行下一次全局模型学习,重新确定模型学习结果,直到所述模型学习结果满足所述模型训练请求,将与满足所述模型训练请求的模型学习结果对应的模型确定为全局模型。
一种实施方式中,确定第一模型损失函数,包括:
确定微基站第一数量的第一模型训练结果与所述宏基站上一次全局模型学习得到的模型学习结果之间的第一损失函数,以及第一模型对齐损失函数;基于所述第一损失函数和第一模型对齐损失函数,确定第一模型损失函数。
一种实施方式中,所述基于第一模型对齐结果进行全局模型学习,确定全局模型,包括:
响应于所述全局模型学习的模型学习结果满足OAM的模型训练请求,向微基站发送停止模型训练信息;所述停止训练信息指示微基站停止终端执行模型训练任务;将所述模型学习结果对应的模型确定为全局模型,并向所述OAM发送所述全局模型。
一种实施方式中,所述方法还包括:
响应于在训练模型过程中接收到微基站发送的终端切换信息,基于所述终端切换信息重新确定执行模型训练任务的终端,并向微基站发送所述终端的信息;所述终端切换信息包括退出模型训练的终端和所述终端重新接入的目标微基站的信息;所述终端切换信息用于宏基站重新确定执行模型训练任务的终端。
根据本公开实施例的第二方面,提供一种模型学习方法,应用于微基站,包括:
接收宏基站发送的模型训练请求;向终端发送所述模型训练请求;其中,所述接收模型训练请求的微基站的数量为第一数量;所述第一数量的微基站通信覆盖范围在所述宏基站通信覆盖范围内。
一种实施方式中,所述模型训练请求用于触发终端上报终端的通信条件和数据特征,所述向终端发送所述模型训练请求之后,所述模型学习方法还包括:
接收终端发送的通信条件和数据类型特征;对所述终端的通信条件和数据特性,以及所述微基站的通信条件和数据特性进行处理,得到能力信息,并将所述能力信息发送至宏基站;其中,所述能力信息用于宏基站确定模型结构和模型参数值。
一种实施方式中,所述方法还包括:
接收模型结构和模型参数值;所述模型结构为指示微基站基于所述模型训练请求训练的模型结构,所述模型参数值为所述模型结构的初始参数值;基于所述终端的通信条件和数据类型特征以及所述模型结构和模型参数值,确定执行模型训练的第二数量终端;向所述第二数量终端发送调度信息;所述调度信息包括模型结构和模型参数值以及指示终端进行模型训练的指示信息。
一种实施方式中,所述方法还包括:
接收第二数量终端发送的第二数量第二模型训练结果;确定所述第二数量终端中不同终端具有的数据类型特征,并确定第二模型损失函数;基于所述第二数量终端中不同终端具有的数据类型特征进行数据类型特征统一后,以优化所述第二模型损失函数为目标,对所述第二数量第二模型训练结果进行第二模型对齐;基于第二模型对齐的结果进行联邦聚合,得到第一模型训练结果。
一种实施方式中,所述基于第二模型对齐的结果进行联邦聚合,得到第一模型训练结果,包括:
响应于接收到宏基站发送的继续训练请求,并接收到宏基站发送的模型学习结果;基于所述模型学习结果更新终端的模型结构和模型参数值,并向终端发送继续训练调度信息;响应于重新接收到第二数量第二模型训练结果,基于所述第一模型训练结果重新确定第二模型损失函数,并以优化所述重新确定的第二模型损失函数为目标,对所述第二数量第二模型训练结果进行第二模型对齐;基于重新确定的第二模型对齐的结果,进行下一次联邦聚合,重新确定第一模型训练结果。
一种实施方式中,确定第二模型损失函数,包括:
确定终端第二数量第二模型训练结果与所述微基站上一次联邦聚合得到的第一模型训练结果之间的第二损失函数,以及第二模型对齐损失函数;基于所述第二损失函数和第二模型对齐损失函数,确定第二模型损失函数。
一种实施方式中,所述方法还包括:
接收宏基站发送的停止模型训练信息;所述停止训练信息指示微基站停止终端执行模型训练任务;基于所述停止模型训练信息指示终端停止执行模型训练任务。
一种实施方式中,所述方法还包括:
发送终端切换信息;所述终端切换信息包括退出模型训练的终端和终端重新接入的目标微基站的信息;所述终端切换信息用于宏基站重新确定执行模型训练任务的终端;响应于接收到宏基站发送的终端信息,重新确定执行模型训练任务的终端,并向终端发送模型训练任务。
一种实施方式中,所述向终端发送模型训练任务,包括:
响应于所述终端信息中包括上一次执行模型训练任务的终端,确定所述终端切换后的目标微基站,由所述目标微基站向终端发送所述模型训练任务;和/或
响应于所述终端信息中未包括上一次执行模型训练任务的终端,确定将所述终端不再执行所述模型训练任务,并确定新增执行模型训练任务的终端,向新增执行模型训练任务的终端发送模型训练任务。
根据本公开实施例的第三方面,提供一种模型学习装置,应用于宏基站,包括:
发送模块,用于响应于接收到操作维护管理OAM实体发送的模型训练请求,向第一数量的微基站发送所述模型训练请求;其中,所述第一数量的微基站通信覆盖范围在所述宏基站通信覆盖范围内。
一种实施方式中,所述模型训练请求用于触发微基站上报能力信息;所述装置还包括:确定模块;
所述确定模块,用于响应于接收到微基站发送的能力信息,基于所述能力信息确定模型结构和模型参数值,并向微基站发送所述模型结构和模型参数值;所述模型结构为指示微基站基于所述模型训练请求训练的模型结构,所述模型参数值为所述模型结构的初始参数值。
一种实施方式中,所述能力信息包括微基站的数据类型特征;所述装置还包括:接收模块;
所述接收模块,用于接收第一数量微基站发送的第一数量第一模型训练结果;确定所述第一数量微基站中不同微基站具有的所述数据类型特征,并确定第一模型损失函数;基于所述第一数量微基站中不同微基站具有的数据类型特征进行数据类型特征统一后,以优化所述第一模型损失函数为目标,对所述第一数量第一模型训练结果进行第一模型对齐;基于第一模型对齐的结果进行全局模型学习,确定全局模型。
一种实施方式中,所述确定模块,用于:
响应于所述全局模型学习的模型学习结果不满足OAM的模型训练请求,将所述模型 学习结果发送至微基站,接收微基站基于所述模型学习结果重新确定的第一数量第一模型训练结果;并基于所述全局模型学习的模型学习结果重新确定所述第一模型损失函数,并以优化重新确定的第一模型损失函数为目标,重新对接收的所述第一数量第一模型训练结果进行第一模型对齐;基于重新确定的第一模型对齐的结果,进行下一次全局模型学习,重新确定模型学习结果,直到所述模型学习结果满足所述模型训练请求,将与满足所述模型训练请求的模型学习结果对应的模型确定为全局模型。
一种实施方式中,所述确定模块,用于:
确定微基站第一数量的第一模型训练结果与所述宏基站上一次全局模型学习得到的模型学习结果之间的第一损失函数,以及第一模型对齐损失函数;基于所述第一损失函数和第一模型对齐损失函数,确定第一模型损失函数。
一种实施方式中,所述确定模块,用于:
响应于所述全局模型学习的模型学习结果满足OAM的模型训练请求,向微基站发送停止模型训练信息;所述停止训练信息指示微基站停止终端执行模型训练任务;将所述模型学习结果对应的模型确定为全局模型,并向所述OAM发送所述全局模型。
一种实施方式中,所述确定模块还用于:
响应于在训练模型过程中接收到微基站发送的终端切换信息,基于所述终端切换信息重新确定执行模型训练任务的终端,并向微基站发送所述终端的信息;所述终端切换信息包括退出模型训练的终端和所述终端重新接入的目标微基站的信息;所述终端切换信息用于宏基站重新确定执行模型训练任务的终端。
根据本公开实施例的第四方面,提供一种模型学习装置,应用于微基站,包括:
接收模块,用于接收宏基站发送的模型训练请求;发送模块向终端发送所述模型训练请求;其中,所述接收模型训练请求的微基站的数量为第一数量;所述第一数量的微基站通信覆盖范围在所述宏基站通信覆盖范围内。
一种实施方式中,所述模型训练请求用于触发终端上报终端的通信条件和数据特征,所述接收模块还用于:
接收终端发送的通信条件和数据类型特征;对所述终端的通信条件和数据特性,以及所述微基站的通信条件和数据特性进行处理,得到能力信息,并将所述能力信息发送至宏基站;其中,所述能力信息用于宏基站确定模型结构和模型参数值。
一种实施方式中,所述接收模块还用于:接收模型结构和模型参数值;所述模型结构为指示微基站基于所述模型训练请求训练的模型结构,所述模型参数值为所述模型结构的初始参数值;基于所述终端的通信条件和数据类型特征以及所述模型结构和模型参数值, 确定执行模型训练的第二数量终端;向所述第二数量终端发送调度信息;所述调度信息包括模型结构和模型参数值以及指示终端进行模型训练的指示信息。
一种实施方式中,所述装置还包括:确定模块;
所述接收模块,用于接收第二数量终端发送的第二数量第二模型训练结果;所述确定模块,用于确定所述第二数量终端中不同终端具有的数据类型特征,并确定第二模型损失函数;基于所述第二数量终端中不同终端具有的数据类型特征进行数据类型特征统一后,以优化所述第二模型损失函数为目标,对所述第二数量第二模型训练结果进行第二模型对齐;基于第二模型对齐的结果进行联邦聚合,得到第一模型训练结果。
一种实施方式中,所述确定模块,用于:
响应于接收到宏基站发送的继续训练请求,并接收到宏基站发送的模型学习结果;基于所述模型学习结果更新终端的模型结构和模型参数值,并向终端发送继续训练调度信息;响应于重新接收到第二数量第二模型训练结果,基于所述第一模型训练结果重新确定第二模型损失函数,并以优化所述重新确定的第二模型损失函数为目标,对所述第二数量第二模型训练结果进行第二模型对齐;基于重新确定的第二模型对齐的结果,进行下一次联邦聚合,重新确定第一模型训练结果。
一种实施方式中,所述确定模块,用于:
确定终端第二数量第二模型训练结果与所述微基站上一次联邦聚合得到的第一模型训练结果之间的第二损失函数,以及第二模型对齐损失函数;基于所述第二损失函数和第二模型对齐损失函数,确定第二模型损失函数。
一种实施方式中,所述接收模块还用于:接收宏基站发送的停止模型训练信息;所述停止训练信息指示微基站停止终端执行模型训练任务;基于所述停止模型训练信息指示终端停止执行模型训练任务。
一种实施方式中,所述发送模块还用于:发送终端切换信息;所述终端切换信息包括退出模型训练的终端和终端重新接入的目标微基站的信息;所述终端切换信息用于宏基站重新确定执行模型训练任务的终端;响应于接收到宏基站发送的终端信息,重新确定执行模型训练任务的终端,并向终端发送模型训练任务。
一种实施方式中,所述发送模块:
响应于所述终端信息中包括上一次执行模型训练任务的终端,确定所述终端切换后的目标微基站,由所述目标微基站向终端发送所述模型训练任务;和/或
响应于所述终端信息中未包括上一次执行模型训练任务的终端,确定将所述终端不再执行所述模型训练任务,并确定新增执行模型训练任务的终端,向新增执行模型训练任务的终端发送模型训练任务。
根据本公开实施例的第五方面,提供一种模型学习装置,包括:
处理器;用于存储处理器可执行指令的存储器;其中,所述处理器被配置为:执行第一方面或第一方面中任意一种实施方式所述的模型学习方法,或执行第二方面或第二方面中任意一种实施方式所述的模型学习方法。
根据本公开实施例的第六方面,提供一种非临时性计算机可读存储介质,当所述存储介质中的指令由移动终端的处理器执行时,使得移动终端能够执行第一方面或第一方面中任意一种实施方式所述的模型学习方法,或使得移动终端能够执行第二方面或第二方面中任意一种实施方式所述的模型学习方法。
本公开的实施例提供的技术方案可以包括以下有益效果:通过宏基站向微基站发送模型训练请求,实现宏基站与微基站的交互进行模型训练任务的分配,提高了无线接入网设备的利用效率,信道质量较高,模型可靠性及精度高。
应当理解的是,以上的一般描述和后文的细节描述仅是示例性和解释性的,并不能限制本公开。
此处的附图被并入说明书中并构成本说明书的一部分,示出了符合本公开的实施例,并与说明书一起用于解释本公开的原理。
图1是根据一示例性实施例示出的一种模型学习方法的异构网络场景架构示意图。
图2是根据一示例性实施例示出的一种模型学习方法的流程图。
图3是根据一示例性实施例示出的又一种模型学习方法的流程图。
图4是根据一示例性实施例示出的又一种模型学习方法的流程图。
图5是根据一示例性实施例示出的又一种模型学习方法的流程图。
图6是根据一示例性实施例示出的又一种模型学习方法的流程图。
图7是根据一示例性实施例示出的又一种模型学习方法的流程图。
图8是根据一示例性实施例示出的又一种模型学习方法的流程图。
图9是根据一示例性实施例示出的又一种模型学习方法的流程图。
图10是根据一示例性实施例示出的又一种模型学习方法的流程图。
图11是根据一示例性实施例示出的又一种模型学习方法的流程图。
图12是根据一示例性实施例示出的又一种模型学习方法的流程图。
图13是根据一示例性实施例示出的又一种模型学习方法的流程图。
图14是根据一示例性实施例示出的又一种模型学习方法的流程图。
图15是根据一示例性实施例示出的又一种模型学习方法的流程图。
图16是根据一示例性实施例示出的又一种模型学习方法的流程图。
图17是根据一示例性实施例示出的又一种模型学习方法的流程图。
图18是根据一示例性实施例示出的又一种模型学习方法的流程图。
图19是根据一示例性实施例示出的一种模型推理方法的主流程图。
图20是根据一示例性实施例示出的一种模型学习方法中模型推理的联邦学习流程图。
图21是根据一示例性实施例示出的一种模型学习方法中终端切换处理流程图。
图22是根据一示例性实施例示出的一种模型学习方法的模型推理流程图。
图23是根据一示例性实施例示出的一种模型学习方法中微基站与宏基站进行信令与数据传输的协议和接口原理图。
图24是根据一示例性实施例示出的一种模型学习方法中微基站与终端进行信令与数据传输的协议和接口原理图。
图25是根据一示例性实施例示出的一种模型学习方法中进行终端切换的协议和接口原理图。
图26是根据一示例性实施例示出的一种模型学习装置框图。
图27是根据一示例性实施例示出的又一种模型学习装置框图。
图28是根据一示例性实施例示出的一种用于模型学习的装置的框图。
图29是根据一示例性实施例示出的又一种用于模型学习的装置的框图。
这里将详细地对示例性实施例进行说明,其示例表示在附图中。下面的描述涉及附图时,除非另有表示,不同附图中的相同数字表示相同或相似的要素。以下示例性实施例中所描述的实施方式并不代表与本公开相一致的所有实施方式。相反,它们仅是与如所附权利要求书中所详述的、本公开的一些方面相一致的装置和方法的例子。
在通信技术中为提高峰值速率和频谱利用率,进一步引入异构网络技术。其中,异构网络技术是指许多微基站被布放在宏基站覆盖区域内,形成同覆盖的不同节点类型相异的异构系统。由于接入点与被服务的终端之间的地理距离被缩小了,能够有效提升系统吞吐量和网络整体效率。
另一方面随着人工智能技术的发展,机器学习被应用到越来越多的领域,机器学习中的联邦学习是其中一种学习方法。联邦学习是指通过联合不同的参与方(例如终端)进行机器学习的方法,不同参与方协同进行学习,可以有效保障大数据交换时的信息安全、保护终端数据和个人数据隐私。将联邦学习应用到多源异构网络中,可以实现多源异构网络 的机器学习建模,其实施方式可以参考下述实施例。
宏基站将操作维护管理(Operation Administration and Maintenance,OAM)实体的具体订阅需求转发给终端,其中OAM的订阅需求也可以称为模型训练请求。终端将通信条件及本地数据类型特性上报给宏基站。宏基站根据终端上报信息进行任务分配,并将模型结构和超参数信息下发给终端。终端依据宏基站分配任务进行本地模型训练,训练完成后,终端将本地学习模型参数发送给宏基站。宏基站根据终端本地学习结果进行联邦平均,得到全局模型。宏基站检验全局学习模型是否满足OAM的订阅需求,若满足,则宏基站将所得模型发送给OAM。若不满足,则终端根据全局学习结果更新本地模型,再与宏基站重复迭代进行训练,直至所得全局模型满足OAM订阅需求。
通过上述实施方式可知,相关技术中存在以下不足:
1)终端直接与宏基站相连进行数据与信令的传输,对于宏基站覆盖范围边缘的终端而言,终端与宏基站之间的地理距离较大,信道质量较差,数据传输速率较慢,影响了通信网络的整体效率,导致联邦学习过程效率较低。
2)宏基站直接对所有终端的本地训练结果进行联邦平均,在实际应用中,不同终端本地训练集的数据结构可能有所不同,直接进行联邦平均可行性较低,会导致模型泛化能力较差,无法保证模型可靠性及精度。
3)宏基站与终端之间的数据交互需通过核心网或数据中心进行,终端需要先将训练结果数据上传至核心网或数据中心,宏基站再请求数据,不支持基站和终端之间直接传输数据进行联邦学习,降低了联邦学习的效率及无线网络资源的利用率。
4)终端退出宏基站连接则直接退出联邦学习过程,且未考虑新终端加入连接的处理流程,导致在联邦学习过程中的可用训练数据越来越少,不利于模型的整体训练及模型精度的提高。
基于上述实施方式中的不足,相关技术中考虑将模型学习与异构网络相结合。在异构网络中,一个宏基站覆盖范围内包括多个微基站,终端与微基站相连进行数据和信令的交互。由于微基站覆盖范围较小,终端发生移动时,很容易触发切换。而在相关技术中,并未考虑终端发生切换的问题,因此无法确定终端在发生切换后是否继续支持训练。并且,在进行联邦学习中,由于不同节点采用训练数据的数据类型特征可能不同,导致不同节点训练结果的维度可能不同,而在相关技术中,也并未考虑基于异构网络进行模型学习的处理方法。
基于此,本公开提供一种模型学习方法,将模型学习与异构网络的学习结果进行模型对齐处理,确定OAM需要的训练模型。并且提出终端发生切换后的处理方法,针对不同 的模型训练任务类型,终端可继续参与源微基站的训练任务或是加入目标微基站的训练任务。有效解决了在终端移动场景下可用训练数据不断减少的问题。并且在不同节点处对训练模型使用的数据进行对齐后再进行训练,可以支持使用不同类型的数据训练同一个模型。
进一步的,本公开中涉及宏基站和微基站属于网络设备,也可以称为无线接入网设备。该无线接入网设备可以是:基站、演进型基站(evolved node B,基站)、家庭基站、无线保真(wireless fidelity,WIFI)系统中的接入点(access point,AP)、无线中继节点、无线回传节点、传输点(transmission point,TP)或者发送接收点(transmission and reception point,TRP)等,还可以为NR系统中的gNB,或者,还可以是构成基站的组件或一部分设备等。当为车联网(V2X)通信系统时,网络设备还可以是车载设备。应理解,本公开的实施例中,对网络设备所采用的具体技术和具体设备形态不做限定。
进一步的,本公开中涉及的终端,也可以称为终端设备、用户设备(User Equipment,UE)、移动台(Mobile Station,MS)、移动终端(Mobile Terminal,MT)等,是一种向用户提供语音和/或数据连通性的设备,例如,终端可以是具有无线连接功能的手持式设备、车载设备等。目前,一些终端的举例为:智能手机(Mobile Phone)、口袋计算机(Pocket Personal Computer,PPC)、掌上电脑、个人数字助理(Personal Digital Assistant,PDA)、笔记本电脑、平板电脑、可穿戴设备、或者车载设备等。此外,当为车联网(V2X)通信系统时,终端设备还可以是车载设备。应理解,本公开实施例对终端所采用的具体技术和具体设备形态不做限定。
图1是根据一示例性实施例示出的一种模型学习方法的异构网络场景架构示意图。如图1所示,该系统包括一个宏基站、M个微基站及N个终端。本公开终端装置主要负责本地数据采集与本地模型训练,微基站装置主要负责终端调度与任务分配、协调终端装置进行模型训练及终端的移动性管理,宏基站装置主要负责协调微基站装置进行全局模型训练,以得到满足OAM订阅需求的全局模型。
其中,微基站的覆盖范围都在宏基站的覆盖范围内。宏基站与微基站之间进行信令/数据的交换时,可以是有线连接,例如通过光纤、同轴电缆、网线等实现;也可以是无线连接,例如通过毫米波等实现。宏基站与微基站之间的连接可以通过X2接口来实现,也可通过X3等其他接口实现,本发明实施例对连接的具体实现形式不作限制。
微基站与终端之间可以通过无线空口建立无线连接。在不同的实施方式中,该无线空口是基于第四代移动通信网络技术(4G)标准的无线空口;或者,该无线空口是基于第五代移动通信网络技术(5G)标准的无线空口,比如该无线空口是新空口;或者,该无线空 口也可以是基于5G的更下一代移动通信网络技术标准的无线空口。本公开实施例对微基站范围内的终端与微基站之间连接的具体实现形式不做要求。基于该系统,提出本公开的模型学习方法。
图2是根据一示例性实施例示出的一种模型学习方法的流程图。如图2所示,模型方法用于宏基站中,包括以下步骤。
在步骤S11中,响应于接收到操作维护管理OAM实体发送的模型训练请求,向第一数量的微基站发送模型训练请求。
在本公开实施例中,OAM向宏基站发起模型训练请求,模型训练请求中包括OAM对订阅模型的训练任务类型要求和模型精度。宏基站基于接收的模型训练请求,通过图1所示的X2接口将该模型训练请求转发至微基站。其中,转发模型训练请求的数量基于宏基站下覆盖的微基站的数量确定,本公开为便于区分将一个宏基站下覆盖的微基站的数量称为第一数量。
其中,模型训练请求至少可以包括:分析ID,通知目标地址,分析报告信息。其中分析ID用于标识请求的分析类型;通知目标地址用于将被请求方接收到的通知与此订阅关联;分析报告信息包含首选分析精度级别、分析时间间隔等参数。模型训练请求还可以包括分析筛选器信息,分析筛选器信息用于指示报告分析信息要满足的条件。
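作为示意,下述Python数据结构草图给出了模型训练请求可能携带的字段(类名、字段名及取值均为本示例的假设,并非本公开限定的消息格式):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelTrainingRequest:
    """OAM 下发的模型训练请求(示意性结构,字段名为假设)。"""
    analytics_id: str                        # 分析ID:标识请求的分析类型
    notification_target: str                 # 通知目标地址:将通知与此订阅关联
    preferred_accuracy: str = "high"         # 分析报告信息:首选分析精度级别
    analytics_interval_s: int = 60           # 分析报告信息:分析时间间隔(秒)
    analytics_filter: Optional[dict] = None  # 可选的分析筛选器信息

# 宏基站收到请求后,可将其转发给覆盖范围内第一数量的微基站
request = ModelTrainingRequest(
    analytics_id="traffic_prediction",
    notification_target="oam://subscription/123",
)
```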
通过本公开实施例提供的模型学习方法,宏基站基于接收的模型训练请求发送至微基站,可以提高数据速率,进一步提高通信网络的整体效率。
在本公开实施例中,宏基站向微基站发送模型训练请求,以使微基站上报能力信息。其中微基站上报的能力信息包括接入该微基站的终端的通信条件和本地数据类型特征,以及该微基站的通信条件和本地数据类型特征。
图3是根据一示例性实施例示出的一种模型学习方法的流程图。如图3所示,模型学习方法用于宏基站中,包括以下步骤。
在步骤S21中,响应于接收到微基站发送的能力信息,基于能力信息确定模型结构和模型参数值,并向微基站发送模型结构和模型参数值。
在本公开实施例中,模型结构为指示微基站基于模型训练请求训练的模型结构,模型参数值为模型结构的初始参数值。
宏基站基于接收的微基站发送能力信息,进行模型训练任务分配,确定第一数量的微基站中每个微基站对应的模型结构和模型参数值。其中,模型训练任务分配为分配每个微基站联邦学习的具体任务。向每个微基站发送对应的模型结构和模型参数值。
图4是根据一示例性实施例示出的一种模型学习方法的流程图。如图4所示,模型学习方法用于宏基站中,包括以下步骤。
在步骤S31中,接收第一数量微基站发送的第一数量第一模型训练结果。
在本公开实施例中,宏基站接收第一数量微基站中每个微基站发送的第一模型训练结果,得到第一数量的第一模型训练结果。
在步骤S32中,确定第一数量微基站中不同微基站具有的数据类型特征,并确定第一模型损失函数。
在本公开实施例中,不同微基站具有的数据类型特征不同,例如,其中一个微基站具有的数据类型特征为图像数据,另一个微基站具有的数据类型为数字数据等。当然这仅仅是举例说明,并不是对本公开的具体限定。
在步骤S33中,基于第一数量微基站中不同微基站具有的数据类型特征进行数据类型特征统一后,以优化第一模型损失函数为目标,对第一数量第一模型训练结果进行第一模型对齐。
在本公开实施例中,宏基站首先对微基站联邦学习后的第一数量第一模型训练结果进行维度统一。
在本公开一些实施例中,宏基站对宏基站覆盖范围下所有(即,第一数量)微基站联邦学习之后的数据类型特征分别做一维卷积,将所有微基站的数据类型特征映射到同一维度d′。该一维卷积可示意性地表示为:对第k个微基站的特征$g_k$,有$\tilde{g}_k=\mathrm{Conv1D}(g_k;\kappa_k)\in\mathbb{R}^{d'}$,其中$\kappa_k$表示对应的卷积核大小(此处符号记法仅为示意)。
其次,宏基站基于所有微基站的维度统一结果,以优化第一模型损失函数为目标,基于不同微基站的数据类型特征对第一数量第一模型训练结果进行第一模型对齐。
在步骤S34中,基于第一模型对齐的结果进行全局模型学习,确定全局模型。
在本公开实施例中,宏基站基于第一模型对齐的结果进行全局模型学习,得到模型学习结果。将该模型学习结果与模型训练请求中包括的模型训练任务类型要求和模型精度进行比较,进而确定OAM请求的全局模型。
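作为对上述维度统一、第一模型对齐与全局模型学习的一个示意性说明,下面给出宏基站侧处理的Python代码草图(基于PyTorch;其中卷积核未经训练、损失采用均方误差、全局学习以简单平均示意,这些均为示例性假设,并非本公开限定的实现):

```python
import torch
import torch.nn as nn

def unify_dimension(feature_vec, target_dim):
    """用一维卷积把某个微基站的数据类型特征映射到公共维度 d'(示意,卷积核未经训练,
    并假设原始特征长度不小于 target_dim)。"""
    conv = nn.Conv1d(1, 1, kernel_size=feature_vec.numel() - target_dim + 1)
    return conv(feature_vec.view(1, 1, -1)).view(-1)        # -> (target_dim,)

def macro_align_and_learn(first_results, prev_global, eta=0.1, steps=100, lr=0.01):
    """宏基站侧:以优化第一模型损失函数为目标进行第一模型对齐,再做全局模型学习(示意)。"""
    aligned = [r.clone().detach().requires_grad_(True) for r in first_results]
    optimizer = torch.optim.SGD(aligned, lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        # 第一损失函数:对齐结果与上一次全局模型学习结果 a_{t-1} 之间的损失
        loss_fit = sum(nn.functional.mse_loss(a, prev_global) for a in aligned)
        # 第一模型对齐损失函数:对齐前后第一模型训练结果之间的损失
        loss_align = sum(nn.functional.mse_loss(a, r) for a, r in zip(aligned, first_results))
        (loss_fit + eta * loss_align).backward()
        optimizer.step()
    # 基于第一模型对齐的结果进行全局模型学习,此处以简单平均示意
    return torch.stack([a.detach() for a in aligned]).mean(dim=0)
```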
图5是根据一示例性实施例示出的一种模型学习方法的流程图。如图5所示,模型学习方法用于宏基站中,包括以下步骤。
在步骤S41中,响应于全局模型学习的模型学习结果不满足OAM的模型训练请求,将模型学习结果发送至微基站,接收微基站基于模型学习结果重新确定的第一数量第一模型训练结果。
在本公开实施例中,响应于宏基站确定本次全局模型学习的模型学习结果不满足OAM的模型训练请求,则将本次全局模型学习的模型学习结果发送至微基站,用于微基站重新确定第一模型训练结果。
在步骤S42中,基于全局模型学习的模型学习结果重新确定第一模型损失函数,并以优化重新确定的第一模型损失函数为目标,重新对接收的第一数量第一模型训练结果进行第一模型对齐。
在本公开实施例中,基于本次不满足OAM模型训练请求的全局模型学习的模型学习结果重新确定第一模型损失函数,再一次以优化重新确定的第一模型损失函数为目标,对接收的第一数量第一模型训练结果进行第一模型对齐。
在步骤S43中,基于重新确定的第一模型对齐的结果,进行下一次全局模型学习,重新确定模型学习结果,直到模型学习结果满足模型训练请求,将与满足模型训练请求的模型学习结果对应的模型确定为全局模型。
在本公开实施例中,宏基站基于重新确定的第一模型对齐结果,即重新优化第一模型损失函数的结果,再一次进行全局模型学习,再一次得到模型学习结果。将重新得到的模型学习结果与模型训练请求进行对比,确定是否满足模型训练请求中对于模型的要求。若不满足则重新确定第一模型损失函数,直到全局模型学习的模型学习结果满足模型训练请求,将与满足模型训练请求的模型学习结果对应的模型确定为全局模型。
图6是根据一示例性实施例示出的一种模型学习方法的流程图。如图6所示,模型学习方法用于宏基站中,包括以下步骤。
在步骤S51中,确定微基站第一数量的第一模型训练结果与宏基站上一次全局模型学习得到的模型学习结果之间的第一损失函数,以及第一模型对齐损失函数。
在本公开实施例中,第一模型损失函数包括两部分,一部分是微基站第一数量的第一模型训练结果与宏基站上一次全局模型学习得到的模型学习结果之间的第一损失函数;另一部分是第一模型对齐损失函数。宏基站以优化第一模型损失函数为目标,对第一数量第一模型训练结果进行第一模型对齐,换言之,宏基站以优化第一模型对齐损失函数和第一损失函数整体损失函数为目标,进行第一数量第一模型训练结果的第一模型对齐。
在步骤S52中,基于第一损失函数和第一模型对齐损失函数,确定第一模型损失函数。
在本公开实施例中,采用用于回归问题的绝对值误差函数及平方误差损失函数、用于分类问题的交叉熵损失函数,将第一损失函数和第一模型对齐损失函数确定为第一模型损失函数。
在本公开一些实施例中,第一模型损失函数可以参考下述公式,其一种示意性形式(符号记法仅为示意)为:

$L_t(\Theta)=\sum_{k=1}^{q} l\left(v_k^t,\ a_{t-1}\right)+\eta\, l_M$

其中,$l(\cdot,\cdot)$表示模型的损失函数,即,用于回归问题的绝对值误差函数及平方误差损失函数、用于分类问题的交叉熵损失函数等;$l_M$为第一模型对齐损失函数,$\eta$表示一个权重因子;$\Theta$表示所有待学习的参数,比如权重和偏置项等;$q$表示参与联邦学习的微基站总数;$v_k^t$(记法为示意)表示微基站k在第t次联邦学习过程中联邦聚合参数的第一模型训练结果,$a_{t-1}$表示宏基站在第t-1次全局学习过程中全局模型学习的模型学习结果。
其中,第一模型对齐损失函数$l_M$可表示为模型对齐前后第一模型训练结果之间的损失,例如(示意)$l_M=\sum_{k=1}^{q} l\left(\tilde{v}_k^t,\ v_k^t\right)$,其中$\tilde{v}_k^t$表示对齐后的第一模型训练结果。
图7是根据一示例性实施例示出的一种模型学习方法的流程图。如图7所示,模型学习方法用于宏基站中,包括以下步骤。
在步骤S61中,响应于全局模型学习的模型学习结果满足OAM的模型训练请求,向微基站发送停止模型训练信息。
在本公开实施例中,停止训练信息指示微基站停止终端执行模型训练任务。宏基站确定当前全局模型学习的模型学习结果满足OAM的模型训练请求。换言之,OAM发送的模型训练请求中的订阅需求包含对订阅业务所需模型精度的具体要求,当全局模型学习的模型学习结果满足该OAM订阅需求时,说明当前的全局学习模型已经达到了足够的精度,确定结束训练任务,得到可供使用的全局模型,并向微基站发送停止模型训练信息。
在步骤S62中,将模型学习结果对应的模型确定为全局模型,并向OAM发送全局模型。
在本公开实施例中,以当前为进行第t次全局模型学习为例,将第t次全局模型学习的模型学习结果用$a_t$表示,则将$a_t$发送至OAM。
图8是根据一示例性实施例示出的一种模型学习方法的流程图。如图8所示,模型学习方法用于宏基站中,包括以下步骤。
在步骤S71中,响应于在训练模型过程中接收到微基站发送的终端切换信息,基于终端切换信息重新确定执行模型训练的终端,并向微基站发送终端的信息。
在本公开实施例中,响应于宏基站接收到微基站发送的终端切换信息,确定存在执行模型训练任务的终端发生退出,或微基站存在新接入的终端。宏基站基于接收的终端切换信息重新确定执行模型训练任务的终端,并将重新确定的执行模型训练任务的终端的终端信息发送至微基站。其中,终端切换信息包括退出模型训练的终端和终端重新接入的目标微基站的信息;终端切换信息用于宏基站重新确定执行模型训练任务的终端。宏基站根据终端发生切换的情况判断退出连接或新加入连接终端是否参与执行模型训练任务。宏基站根据OAM订阅需求中的训练任务类型判断退出连接或新加入连接终端是否继续参与源微基站的训练任务。
在本公开一些实施例中,训练任务类型可分为与上层应用相关任务及与底层网络通道相关任务。如果任务与上层应用相关,则终端可继续参与源微基站的联邦学习任务;如果任务与底层网络通道相关,则所训练的模型只适用于源微基站(即,终端切换之前接入的微基站),终端无法继续参与源微基站的联邦学习任务。宏基站可根据OAM订阅需求中的训练任务类型及具体的切换信息决定终端是否继续参与源微基站的训练。
一种实施例中,宏基站决定终端继续参与源微基站的模型训练任务,则目标微基站(即终端切换后接入的微基站)将负责转发终端与源微基站之间的第一模型训练结果,源微基站将该终端继续保留在训练任务列表中并为其重新分配模型训练任务。目标微基站将终端的任务安排结果发送给终端,终端保留在源微基站的训练信息,继续参与源微基站的联邦学习。
一种实施例中,宏基站决定终端继续参与源微基站的训练,则目标微基站将负责转发终端与源微基站之间的第一模型训练结果。当终端完成一轮本地模型训练时,终端将本地训练结果发送给目标微基站,目标微基站将结果转发给源微基站;当宏基站完成一轮全局模型学习时,源微基站将全局学习结果及终端是否继续进行训练的信令发送给目标微基站,目标微基站将数据及信令转发给终端。
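对于上述根据训练任务类型决定切换终端是否继续参与源微基站训练的处理逻辑,下面给出一个示意性的Python代码草图(任务类型的划分沿用上文描述,函数名与数据结构均为示例性假设):

```python
from enum import Enum

class TaskType(Enum):
    UPPER_LAYER_APP = "upper_layer_app"          # 与上层应用相关的训练任务
    LOWER_LAYER_CHANNEL = "lower_layer_channel"  # 与底层网络通道相关的训练任务

def decide_after_handover(task_type, terminal_id, source_training_list, relay_table, source_bs_id):
    """宏基站根据训练任务类型及切换信息决定终端的后续训练安排(示意)。"""
    if task_type is TaskType.UPPER_LAYER_APP:
        # 任务与上层应用相关:终端继续保留在源微基站的训练任务列表中,
        # 并由目标微基站负责转发终端与源微基站之间的训练结果
        source_training_list.add(terminal_id)
        relay_table[terminal_id] = source_bs_id
        return "continue_with_source"
    # 任务与底层网络通道相关:所训练模型只适用于源微基站,终端被移出源微基站的训练,
    # 是否加入目标微基站的训练由目标微基站另行决定
    source_training_list.discard(terminal_id)
    relay_table.pop(terminal_id, None)
    return "removed_from_source"

# 用法示例(数据均为假设)
training_list, relays = {"UE-1", "UE-2"}, {}
decide_after_handover(TaskType.UPPER_LAYER_APP, "UE-2", training_list, relays, "micro-BS-3")
```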
基于相同/相似的构思,本公开实施例还提供一种模型学习方法。
图9是根据一示例性实施例示出的一种模型学习方法的流程图。如图9所示,模型学习方法用于微基站中,包括以下步骤。
在步骤S81中,接收宏基站发送的模型训练请求。
在步骤S82中,向终端发送模型训练请求。
在本公开实施例中,接收模型训练请求的微基站的数量为第一数量;第一数量的微基站通信覆盖范围在宏基站通信覆盖范围内。微基站接收到宏基站发送的模型训练请求后,将该模型训练请求转发至终端。
在本公开实施例中,微基站向终端发送模型训练请求,模型训练请求可以用于触发终端发送自身的通信条件和数据特征。
图10是根据一示例性实施例示出的一种模型学习方法的流程图。如图10所示,模型学习方法用于微基站中,包括以下步骤。
在步骤S91中,接收终端发送的通信条件和数据类型特征。
在步骤S92中,对终端的通信条件和数据特性,以及微基站的通信条件和数据特性进行处理,得到能力信息,并将能力信息发送至宏基站。
在本公开实施例中,终端接收到微基站发送的模型训练请求后,确定自身的通信条件和数据特征并进行上报。微基站与终端通过无线信道进行数据和信令的交互,一种实施方式中,终端上报的通信条件是指终端的通信能力或通信信道状况。一种实施方式中,终端向微基站上报的通信条件可以包含终端检测得到的信道质量指示CQI信息。终端上报的本地数据特性可以包含收集数据的类别等。微基站通过X2接口将终端上报的通信条件和数据特征以及微基站的通信条件和数据特征发送至宏基站。本公开为便于描述将终端的通信条件和数据特征以及微基站的通信条件和数据特征称为能力信息,其中,能力信息用于宏基站确定模型结构和模型参数值。
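下面用一个简单的Python数据结构示意终端上报的通信条件与数据类型特征,以及微基站汇总得到的能力信息(字段名均为示例性假设):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TerminalReport:
    terminal_id: str
    cqi: int                   # 通信条件:终端检测得到的信道质量指示 CQI
    data_types: List[str]      # 本地数据类型特征,如图像数据、数字数据等

@dataclass
class CapabilityInfo:
    micro_bs_id: str
    micro_bs_cqi: int
    micro_bs_data_types: List[str]
    terminal_reports: List[TerminalReport] = field(default_factory=list)

# 微基站对终端上报信息与自身信息进行处理后,通过X2接口发送给宏基站
info = CapabilityInfo("micro-BS-1", 12, ["数字数据"],
                      [TerminalReport("UE-1", 10, ["图像数据"])])
```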
图11是根据一示例性实施例示出的一种模型学习方法的流程图。如图11所示,模型学习方法用于微基站中,包括以下步骤。
在步骤S101中,接收模型结构和模型参数值。
在本公开实施例中,模型结构为指示微基站基于模型训练请求训练的模型结构,模型参数值为模型结构的初始参数值。
在步骤S102中,基于终端的通信条件和数据类型特征以及模型结构和模型参数值,确定执行模型训练的第二数量终端。
在本公开实施例中,微基站基于接收的模型结构和模型参数值,以及接入的终端的通信条件和数据类型特征,确定执行模型训练任务的第二数量终端。
在步骤S103中,向第二数量终端发送调度信息。
在本公开实施例中,微基站确定第二数量终端后,向第二数量终端发送调度信息。其中,调度信息包括模型结构和模型参数值以及指示终端进行模型训练的指示信息。
一种方式中,微基站确定执行模型训练任务的终端包括一个终端(即,第二数量为一个),则微基站确定终端的学习模式为单一终端训练模式。微基站直接将宏基站分配的训练任务转发给终端,终端可根据分配任务进行本地模型训练。
另一方式中,微基站确定执行模型训练任务的终端包括多个终端(即,第二数量为多个),微基站确定终端的学习模式为多终端协作训练模式。微基站将宏基站分配的训练任务根据不同终端的通信条件及本地数据特性进行分配,辅助多终端协作完成模型训练,各终 端接收到微基站分配的任务后可根据微基站分配的模型训练任务进行本地模型训练。
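对于微基站选择参与训练的终端及其学习模式,下面给出一个示意性的选择逻辑草图(沿用上文示例中的TerminalReport结构;CQI阈值等判据均为示例性假设):

```python
def select_training_terminals(reports, required_data_types, cqi_threshold=7):
    """微基站基于终端的通信条件和数据类型特征,确定执行模型训练的第二数量终端(示意)。"""
    selected = [r for r in reports
                if r.cqi >= cqi_threshold                          # 通信条件满足要求
                and set(r.data_types) & set(required_data_types)]  # 具备任务所需的数据类型
    # 第二数量为一个 -> 单一终端训练模式;为多个 -> 多终端协作训练模式
    mode = "single" if len(selected) == 1 else "multi"
    return selected, mode
```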
在本公开一些实施例中,终端接收到微基站发送的调度信息后对本地模型参数进行初始化,再根据微基站分配的模型训练任务要求进行本地模型训练,并将训练结果通过无线信道传输给微基站。
图12是根据一示例性实施例示出的一种模型学习方法的流程图。如图12所示,模型学习方法用于微基站中,包括以下步骤。
在步骤S111中,接收第二数量终端发送的第二数量第二模型训练结果。
在本公开实施例中,微基站接收第二数量终端发送的第二数量第二模型训练结果。以第二数量终端中的终端m为例,终端m随机初始化一组模型参数作为本地学习模型的初始化参数,初始化的本地学习模型结果记为$u_m^0$(符号记法仅为示意)。终端m通过对数据进行感知与收集生成本地数据集$D_m$,并对本地数据集随机抽取数据量为N的数据集,生成本地训练集$T_m$。在对本地模型参数进行初始化后,终端利用本地训练集进行本地模型训练,并将终端的训练结果(即第二模型训练结果)通过无线信道传输给微基站。以第t次联邦学习过程为例,终端m传输的本地学习模型训练更新结果可表示为$u_m^t$(符号记法仅为示意)。
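终端本地模型训练的过程可用如下Python代码草图示意(基于PyTorch;模型结构、损失函数与采样方式均为示例性假设):

```python
import random
import torch
import torch.nn as nn

def local_training(local_dataset, model, sample_size_n, epochs=1, lr=0.01):
    """终端m的本地模型训练(示意):从本地数据集 D_m 随机抽取 N 条样本构成训练集 T_m,
    训练后将模型参数(第二模型训练结果)通过无线信道上报给微基站。"""
    train_set = random.sample(local_dataset, sample_size_n)     # 生成本地训练集 T_m
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()                                      # 回归任务示例;分类任务可换交叉熵
    for _ in range(epochs):
        for x, y in train_set:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
    return [p.detach().clone() for p in model.parameters()]     # 待上报的本地训练结果
```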
在步骤S112中,确定第二数量终端中不同终端具有的数据类型特征,并确定第二模型损失函数。
在本公开实施例中,确定第二数量终端中每个终端具有的数据类型特征,其中,不同的数据类型特征例如图像数据、数字数据等。
在步骤S113中,基于第二数量终端中不同终端具有的数据类型特征进行数据类型特征统一后,以优化第二模型损失函数为目标,对第二数量第二模型训练结果进行第二模型对齐。
在本公开实施例中,由于终端的本地数据集的数据类型特征可能不同,训练得到的本地模型特征维度也可能不同,因此对不同终端特征维度进行统一以便于进行模型对齐及联邦聚合。对微基站i下所有终端训练完后的特征分别做一维卷积,将所有终端的特征映射到同一维度d,该一维卷积可示意性地表示为:对终端$m_j$的特征$h_{m_j}$,有$\tilde{h}_{m_j}=\mathrm{Conv1D}(h_{m_j};\kappa_{m_j})\in\mathbb{R}^{d}$(符号记法仅为示意)。
其中,$m_1,m_2,\dots,m_n$表示微基站i下连接的n个终端,$\kappa_{m_1},\kappa_{m_2},\dots,\kappa_{m_n}$是终端$\{m_1,m_2,\dots,m_n\}$卷积核的大小,d是公共维度,经过一维卷积后,所有终端的特征都映射到同一维度d上。微基站基于所有终端的维度统一结果,以优化第二模型损失函数为目标,基于不同终端的数据类型特征对第二数量第二模型训练结果进行第二模型对齐。
在步骤S114中,基于第二模型对齐的结果进行联邦聚合,得到第一模型训练结果。
在本公开实施例中,微基站基于第二模型对齐的结果进行联邦聚合,得到第一模型训练结果。之后将第一模型训练结果发送至宏基站。
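联邦聚合的一种常见做法是按各终端本地训练集的数据量进行加权平均,下面给出一个示意性草图(加权方式为示例性假设,并非本公开限定的聚合方式):

```python
def federated_aggregate(aligned_results, data_sizes):
    """微基站基于第二模型对齐的结果进行联邦聚合,得到第一模型训练结果(示意)。
    aligned_results: 各终端对齐后的第二模型训练结果(每个元素为参数张量列表)
    data_sizes:      各终端本地训练集的数据量,用作加权系数
    """
    total = float(sum(data_sizes))
    num_params = len(aligned_results[0])
    aggregated = []
    for p in range(num_params):
        weighted = sum((n / total) * result[p] for result, n in zip(aligned_results, data_sizes))
        aggregated.append(weighted)
    return aggregated      # 通过X2接口上报给宏基站的第一模型训练结果
```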
图13是根据一示例性实施例示出的一种模型学习方法的流程图。如图13所示,模型学习方法用于微基站中,包括以下步骤。
在步骤S121中,响应于接收到宏基站发送的继续训练请求,并接收到宏基站发送的模型学习结果。
在本公开实施例中,若接收到宏基站发送的继续训练请求,则进一步接收宏基站发送的模型学习结果。
在步骤S122中,基于模型学习结果更新终端的模型结构和模型参数值,并向终端发送继续训练调度信息。
在本公开实施例中,微基站将宏基站发送的模型学习结果发送至终端,终端基于模型学习结果更新模型结构和模型参数值。微基站向终端发送继续训练调度信息,指示终端基于更新后的模型结构和模型参数值继续执行模型训练任务,将重新得到的第二模型训练结果重新发送至微基站。
在步骤S123中,响应于重新接收到第二数量第二模型训练结果,基于第一模型训练结果重新确定第二模型损失函数,并以优化重新确定的第二模型损失函数为目标,对第二数量第二模型训练结果进行第二模型对齐。
在本公开实施例中,微基站重新接收到终端发送的第二数量第二模型训练结果后,基于上一次联邦聚合得到的第一模型训练结果,重新确定第二模型损失函数,再一次以优化第二模型损失函数为目标,对第二数量第二模型训练结果进行第二模型对齐。
在步骤S124中,基于重新确定的第二模型对齐的结果,进行下一次联邦聚合,重新确定第一模型训练结果。
在本公开实施例中,微基站基于重新确定的第二模型对齐结果进行联邦聚合,重新确定第一模型训练结果。以微基站i为例,微基站i在模型对齐的基础上进行联邦聚合;联邦聚合完成后,微基站通过X2接口将联邦聚合结果上报给宏基站,以第t次联邦学习过程为例,微基站i传输的联邦聚合结果可表示为$v_i^t$(符号记法仅为示意)。如此形成宏基站、微基站与终端联邦学习循环交互,直到最终宏基站确定满足OAM要求的全局模型。
图14是根据一示例性实施例示出的一种模型学习方法的流程图。如图14所示,模型学习方法用于微基站中,包括以下步骤。
在步骤S131中,确定终端第二数量第二模型训练结果与所述微基站上一次联邦聚合得到的第一模型训练结果之间的第二损失函数,以及第二模型对齐损失函数。
在步骤S132中,基于第二损失函数和第二模型对齐损失函数,确定第二模型损失函数。
在本公开实施例中,对所有终端的特征维度进行统一后,基于不同终端的特征对齐结果进行模型对齐,在模型对齐的过程中,以优化第二模型损失函数为目标,第二模型损失函数可以分为两部分确定,第一部分是由所有终端第t次联邦学习的模型训练结果与微基站第t-1次联邦学习的更新结果计算损失函数而得;第二部分是在模型对齐前后计算损失函数而得。以优化两部分整体的损失函数为模型对齐的目标。
其中,采用用于回归问题的绝对值误差函数及平方误差损失函数、用于分类问题的交叉熵损失函数,将第二损失函数和第二模型对齐损失函数确定为第二模型损失函数。
在本公开一些实施例中,第二模型损失函数可以参考下述公式。
微基站i在第t次联邦训练过程中的损失函数可示意性地表示为(符号记法仅为示意):

$L_t^{(i)}(\Theta)=\sum_{k=1}^{n} l\left(u_k^t,\ v_i^{t-1}\right)+\eta\, l_M$

其中,$l(\cdot,\cdot)$表示模型的损失函数,即,用于回归问题的绝对值误差函数及平方误差损失函数、用于分类问题的交叉熵损失函数等;$l_M$为模型对齐损失函数,$\eta$表示一个权重因子;$\Theta$表示所有待学习的参数,比如权重和偏置项等;$n$表示在微基站i下参与联邦学习的终端总数;$u_k^t$(记法为示意)表示终端k在第t次联邦学习过程中本地学习模型训练更新结果;$v_i^{t-1}$(记法为示意)表示微基站i在第t-1次联邦学习过程中联邦聚合参数的训练更新结果。
其中,模型对齐损失函数可表示为模型对齐前后训练结果之间的损失,例如(示意)$l_M=\sum_{k=1}^{n} l\left(\tilde{u}_k^t,\ u_k^t\right)$,其中$\tilde{u}_k^t$表示对齐后的第二模型训练结果。
图15是根据一示例性实施例示出的一种模型学习方法的流程图。如图15所示,模型学习方法用于微基站中,包括以下步骤。
在步骤S141中,接收宏基站发送的停止模型训练信息。
在本公开实施例中,停止训练信息用于指示微基站停止终端执行模型训练任务。
在步骤S142中,基于停止模型训练信息指示终端停止执行模型训练任务。
在本公开实施例中,若微基站接收到停止模型训练信息,则确定不再对该模型进行训练。并向终端发送该停止模型训练信息,指示终端停止执行模型训练任务。
图16是根据一示例性实施例示出的一种模型学习方法的流程图。如图16所示,模型学习方法用于微基站中,包括以下步骤。
在步骤S151中,发送终端切换信息。
在本公开实施例中,终端切换信息包括退出模型训练的终端和终端重新接入的目标微基站的信息;终端切换信息用于宏基站重新确定执行模型训练任务的终端。
在本公开一些实施例中,源微基站表示终端切换前所连接的微基站,目标微基站表示终端切换后所连接的微基站。源微基站会定时给终端发送测量控制信号,终端根据测量控制信号对参考信号接收功率及参考信号接收质量等进行测量,并将测量报告上报给源微基站。当源微基站检测到其他基站能为该终端提供更高的服务质量时,源微基站做出终端切换的决策,通知终端准备执行切换并向目标微基站发起切换请求,同时将切换终端及目标微基站的信息上报给所连接的宏基站。源微基站向终端发送重配置RRC连接请求消息,同时向目标微基站发送终端状态信息,终端与目标微基站进行一系列参数配置,终端成功接入目标微基站,目标微基站发送切换成功消息给源微基站。
在步骤S152中,响应于接收到宏基站发送的终端信息,重新确定执行模型训练任务的终端,并向终端发送模型训练任务。
在本公开实施例中,微基站在接收到宏基站发送的终端信息后,基于重新确定的执行模型训练任务的终端,重新分配每个终端的模型训练任务,并向终端发送与之对应的模型训练任务。
图17是根据一示例性实施例示出的一种模型学习方法的流程图。如图17所示,模型学习方法用于微基站中,包括以下步骤。
在步骤S161中,响应于终端信息中包括上一次执行模型训练任务的终端,确定终端切换后的目标微基站,由目标微基站向终端发送模型训练任务。
在本公开实施例中,终端发生切换接入的微基站后,微基站基于宏基站发送的终端信息,重新确定执行模型训练任务的终端。若在终端信息中包括上一次执行模型训练任务的终端,且该终端已经切换了微基站,则由目标微基站(即终端切换后接入的微基站)将负责转发终端与源微基站(即终端切换前接入的微基站)之间的第二模型训练结果,源微基站将该终端继续保留在训练任务列表中并为其重新分配训练任务。目标微基站将终端的模型训练任务发送给终端,终端保留在源微基站的训练信息,继续参与源微基站的联邦学习。
图18是根据一示例性实施例示出的一种模型学习方法的流程图。如图18所示,模型学习方法用于微基站中,包括以下步骤。
在步骤S171中,响应于终端信息中未包括上一次执行模型训练任务的终端,确定将终端不再执行模型训练任务,并确定新增执行模型训练任务的终端,向新增执行模型训练任务的终端发送模型训练任务。
在本公开实施例中,源微基站的训练任务类型不支持终端继续参与训练,则源微基站将终端从训练中彻底移除,新终端将通信条件及本地数据特性上报给目标微基站,目标微基站根据训练任务类型及终端上报信息决定新终端是否参与目标微基站的训练。目标微基站再将终端的任务安排结果发送给终端。
进一步地,终端发生微基站切换后,微基站基于宏基站发送的终端信息,重新确定执行模型训练任务的终端。若终端信息中不包括上一次执行模型训练任务的终端,且该终端已经切换了微基站,则由目标微基站负责转发终端与源微基站之间的第二模型训练结果。当终端完成一轮本地模型训练时,终端将本地训练结果发送给目标微基站,目标微基站将结果转发给源微基站。当宏基站完成一轮全局模型学习时,源微基站将模型学习结果及终端是否继续进行训练的信令发送给目标微基站,目标微基站将数据及信令转发给终端。微基站将不再执行模型训练任务的终端移出训练任务列表。确定新增参与执行模型训练任务的终端,新增终端将通信条件及本地数据特性通过无线信道上报给目标微基站。目标微基站根据模型训练任务的类型及新增终端上报信息决定新增终端是否参与执行模型训练任务。无论终端是否参与源微基站的训练,目标微基站都将终端的任务安排结果通过无线信道发送给终端。
在本公开一些实施例中,对宏基站、微基站和终端之间的交互过程进行说明。
OAM向宏基站发起模型训练请求,宏基站接收到请求后,将模型训练请求转发给微基站,微基站再将请求转发给终端,终端将通信条件及本地数据特性上报给微基站,微基站再将终端信息上报给宏基站。宏基站根据微基站上报的终端信息对微基站进行任务分配,并将模型结构和超参数信息(即本公开实施例中涉及的模型参数值)下发给微基站。微基站收到宏基站下发信息后选择参与训练的终端以及终端的学习模式,并对参与模型训练任务的终端进行任务分配。终端、微基站与宏基站不断迭代进行联邦学习,直至模型满足OAM订阅需求(例如模型的精度需求),宏基站将模型训练结果(即,全局模型学习的模型学习结果)上报给OAM。
所述OAM订阅需求包含:分析ID,用于标识请求的分析类型;通知目标地址,用于将被请求方接收到的通知与此订阅关联;分析报告信息,包含首选分析精度级别、分析时间间隔等参数;分析筛选器信息(可选):指示报告分析信息要满足的条件。
在一些实施例中,所述具体终端、微基站与宏基站迭代进行联邦学习的方法包括:
在联邦学习的过程中,终端首先对本地模型参数进行初始化,再根据微基站分配的任务要求进行本地模型训练,并将训练结果(即第二模型训练结果)通过无线信道传输给微基站。微基站将所有参与训练终端的本地训练结果进行汇总后,先进行模型对齐,再进行联邦聚合,并将联邦聚合结果(即,第一模型训练结果)通过X2接口上报给宏基站。宏基站在汇总所有参与训练的微基站的联邦聚合结果后,先进行模型对齐,再进行全局模型学习,并将全局学习结果通过X2接口发送给微基站。微基站将全局模型训练结果通过无线信道转发给终端,终端根据全局模型训练结果对本地学习模型进行更新。宏基站根据OAM的订阅需求判断全局训练模型是否满足要求。
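上述终端、微基站与宏基站迭代进行联邦学习的整体流程,可用如下示意性的Python伪代码概括(其中各对象及其方法均为示例性假设,用于串联前文各步骤):

```python
def federated_learning_loop(terminals, micro_bss, macro_bs, oam_requirement, max_rounds=100):
    """终端—微基站—宏基站三级联邦学习的迭代主循环(示意性伪代码)。"""
    global_result = macro_bs.init_global_model()
    for t in range(max_rounds):
        first_results = []
        for bs in micro_bss:
            # 终端本地训练,得到第二模型训练结果
            second_results = [ue.local_train(global_result) for ue in terminals[bs]]
            # 微基站:先模型对齐,再联邦聚合,得到第一模型训练结果
            first_results.append(bs.align_and_aggregate(second_results))
        # 宏基站:先模型对齐,再全局模型学习
        global_result = macro_bs.align_and_learn(first_results)
        if macro_bs.meets_requirement(global_result, oam_requirement):
            macro_bs.report_to_oam(global_result)   # 满足OAM订阅需求,上报并通知停止训练
            break
    return global_result
```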
在一些实施例中,全局模型性能满足OAM订阅需求,则宏基站将模型训练结果上报给OAM,并通知微基站停止训练。
在一些实施例中,全局模型性能不满足OAM订阅需求,则宏基站需根据终端切换信息安排终端的训练任务,微基站再根据终端切换情况重新进行任务分配,终端再次进行本地模型学习并把结果上报给微基站,如此反复迭代直至模型性能满足OAM订阅需求。
在一些实施例中,在终端切换过程中,源微基站代表终端发生切换之前所连接的微基站,目标微基站代表终端发生切换之后连接的微基站。宏基站根据终端切换信息安排终端执行模型训练任务,包括:
当在联邦学习某次循环中,源微基站做出终端切换的决定时,源微基站通知终端准备执行切换,并将退出连接终端及目标微基站信息上报给宏基站。终端在收到源微基站命令后执行切换,并在目标微基站上完成连接。宏基站根据源微基站的训练任务类型及终端的切换信息决定终端是否继续参与源微基站的训练。
在一些实施例中,源微基站的训练任务类型支持终端继续参与训练,则目标微基站将负责转发终端与源微基站之间的训练数据,终端继续参与源微基站的训练任务,目标微基站将终端的任务安排结果发送给终端。
在一些实施例中,源微基站的训练任务类型不支持终端继续参与训练,则源微基站将终端从训练中彻底移除,新终端将通信条件及本地数据特性上报给目标微基站,目标微基站根据训练任务类型及终端上报信息决定新终端是否参与目标微基站的训练。目标微基站再将终端的任务安排结果发送给终端。
在一些实施例中,宏基站、微基站和终端完成OAM的模型训练任务,将全局模型发送至OAM之后,还可以对训练得到的模型进行推理。由OAM确定进行模型推理的任务小区,其任务小区进行任务推理的实施方式包括:
当进行任务推理时,任务小区通过所在宏基站向OAM发起推理请求并上报推理任务类型及具体需求,OAM根据推理任务类型及具体需求寻找合适的一个或多个模型。寻找到合适的模型后,OAM将模型选择结果下发给宏基站,被选择的宏基站上报具体的模型参数信息。OAM将被选择宏基站上报的模型参数信息转发给任务小区所在宏基站,任务 小区所在宏基站根据模型参数信息对任务进行推理。
下面实施例将结合附图对宏基站、微基站和终端交互过程进行说明。图19是根据一示例性实施例示出的一种模型推理方法的主流程图。如图19所示,包括以下步骤:
步骤1,OAM向宏基站发起模型训练请求,宏基站将模型训练请求转发给微基站。
步骤2,微基站将模型训练请求转发给终端,终端将通信条件及本地数据类型特征上报给微基站,微基站将终端数据上报给宏基站。
步骤3,宏基站根据微基站上报信息进行任务分配,并将模型结构和模型参数值下发给微基站。
步骤4,微基站选择参与执行模型训练任务的终端以及终端的学习模式,并对参与训练的终端进行任务分配。
步骤5,终端、微基站与宏基站不断迭代进行联邦学习,直至模型满足OAM订阅需求,宏基站将模型训练结果上报给OAM。
图20是根据一示例性实施例示出的一种模型推理方法的联邦学习流程图。如图20所示,包括:终端对本地模型参数进行初始化;终端根据任务要求进行本地模型训练,并将第二模型训练结果通过无线信道传输给微基站;微基站汇总所有终端的第二模型训练结果,先进行模型对齐,再进行联邦聚合,并将结果通过X2接口上报给宏基站;宏基站汇总所有微基站联邦聚合结果,先进行模型对齐,再进行全局模型学习,并将全局模型的模型学习结果通过X2接口发送给微基站;微基站将模型学习结果通过无线信道发送给终端,终端根据模型学习结果更新本地学习模型;宏基站确定与模型训练结果对应的全局模型是否满足OAM订阅需求;若满足OAM订阅需求,则联邦学习结束,宏基站将模型学习结果上报给OAM。若不满足OAM订阅需求,则宏基站根据切换信息判断退出连接或新加入连接终端是否参与训练,微基站根据终端切换情况重新进行模型训练任务分配。
图21是根据一示例性实施例示出的一种模型推理方法的终端切换处理流程图。如图21所示,包括:源微基站通知终端准备执行切换,并将退出连接终端及目标微基站信息上报给宏基站;终端执行切换,并在目标微基站上完成连接;宏基站根据训练任务类型及切换信息决定终端是否继续参与源微基站的模型训练任务;若继续参与执行源微基站模型训练任务,则目标微基站负责转发终端与源微基站之间的训练数据,终端继续参与源微基站的训练任务;目标微基站将终端的任务安排结果发送给终端。若不继续参与执行源微基站模型训练任务,源微基站将终端移除训练;新增终端将通信条件及本地数据特性上报给目标微基站;目标微基站根据训练任务类型及终端上报信息决定新终端是否参与训练;目标微基站将终端的任务安排结果发送给终端。
在本公开一些实施例中,确定全局模型之后,还包括推理全局模型。图22是根据一示例性实施例示出的一种模型学习方法的模型推理流程图。如图22所示,包括如下步骤:
步骤1,任务小区通过宏基站向OAM发起推理请求并上报推理任务类型及具体需求。
步骤2,OAM根据推理任务类型及具体需求寻找合适的一个或多个模型。
一种实施例中,将推理任务类型分为与上层应用相关类型或与底层网络通道相关类型。在选择模型时,优先选择训练任务类型与推理任务类型相近的宏基站模型。
一种实施例中,可选择多个训练好的模型,将模型进行融合后进行推理。
步骤3,OAM将模型选择结果下发给宏基站,被选择的宏基站上报具体模型参数信息。
步骤4,OAM将模型参数信息转发给任务小区所在宏基站,任务小区所在宏基站根据模型参数信息对任务进行推理。
一种实施例中,OAM选择了多个训练好的宏基站模型,则任务小区所在宏基站将多个模型进行模型融合,然后再对任务进行推理。
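对于模型推理阶段的模型选择与模型融合,下面给出一个示意性的Python代码草图(匹配规则与融合方式均为示例性假设):

```python
import torch

def select_models(trained_models, inference_task_type):
    """OAM 根据推理任务类型选择一个或多个合适的模型(示意)。
    trained_models: [(训练任务类型, 模型参数张量), ...]
    """
    matched = [params for task_type, params in trained_models
               if task_type == inference_task_type]          # 优先选择任务类型相近的模型
    return matched or [params for _, params in trained_models[:1]]

def fuse_models(model_params_list):
    """任务小区所在宏基站将多个被选模型进行融合,此处以参数平均示意。"""
    return torch.stack(model_params_list).mean(dim=0)
```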
图23是根据一示例性实施例示出的一种模型学习方法中微基站与宏基站进行信令与数据传输的协议和接口原理图。如图23所示,主要涉及微基站与宏基站之间的交互,具体如下:
1a.微基站将发送连接建立请求信令(X2 Setup Request)发送给宏基站,信令指示内容为,请求与目标基站建立连接。1b.宏基站根据微基站发送的连接建立请求信令进行资源分配。1c.宏基站将发送成功建立连接信令(X2 Setup Response)发送给微基站,信令指示内容为,通知对方已成功建立连接。2a.微基站将第一模型训练结果进行打包。2b.微基站将发送训练结果数据包信令发送给宏基站,信令指示内容为,发送训练数据包给接收方。3.宏基站利用AI服务模块进行全局模型训练。4.宏基站将发送打包并发送全局模型训练结果数据包信令发送给微基站,信令指示内容为对全局模型训练结果进行打包并将数据包发送给接收方。5.宏基站将通知是否继续进行训练信令发送给微基站,信令指示内容为通知对方是否继续进行训练。6.宏基站与微基站确认传输完毕。7.宏基站将资源释放信令(Release Resource)发送给微基站,信令指示内容:进行资源释放。
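为便于理解,上述微基站与宏基站之间的X2信令与数据传输流程也可整理为如下消息序列(Python表示,内容与上文各步骤一一对应,结构本身为示例性表示):

```python
# (步骤, 发送方, 接收方, 信令/动作)
X2_TRAINING_FLOW = [
    ("1a", "微基站", "宏基站", "X2 Setup Request"),
    ("1b", "宏基站", "宏基站", "根据连接建立请求进行资源分配"),
    ("1c", "宏基站", "微基站", "X2 Setup Response"),
    ("2a", "微基站", "微基站", "打包第一模型训练结果"),
    ("2b", "微基站", "宏基站", "发送训练结果数据包"),
    ("3",  "宏基站", "宏基站", "利用AI服务模块进行全局模型训练"),
    ("4",  "宏基站", "微基站", "打包并发送全局模型训练结果数据包"),
    ("5",  "宏基站", "微基站", "通知是否继续进行训练"),
    ("6",  "宏基站", "微基站", "确认传输完毕"),
    ("7",  "宏基站", "微基站", "Release Resource"),
]

for step, src, dst, msg in X2_TRAINING_FLOW:
    print(f"步骤{step}: {src} -> {dst}: {msg}")
```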
图24是根据一示例性实施例示出的一种模型学习方法中微基站与终端进行信令与数据传输的协议和接口原理图。如图24所示,主要涉及微基站与终端之间的交互,具体如下:
1a.终端将发送建立RRC连接请求信令(RRC Connection Request)发送给微基站,信令指示内容为请求与目标基站建立RRC连接。1b.微基站将发送确认建立RRC连接信令(RRC Connection Setup)发送给终端,信令指示内容:通知接收方同意建立RRC连接。 1c.终端根据微基站发送信令进行无线资源配置。1d.终端将发送完成建立RRC连接信令(RRC Connection Setup Complete)发送给微基站,信令指示内容为通知接收方RRC连接建立完成。2a.终端将本地训练结果(即第二模型训练结果)进行打包。2b.终端将发送本地训练结果数据包信令发送给微基站,信令指示内容为发送本地训练结果数据包给接收方。3.微基站与宏基站协同利用AI服务模块进行模型训练。4.微基站将发送全局模型训练结果信令发送给终端,信令指示内容为发送全局模型训练结果给接收方。5.微基站将通知是否继续训练信令给终端,信令指示内容:通知对方是否继续进行训练。6a.微基站将RRC连接释放请求信令(RRC Connection Release)发送给终端,信令指示内容为请求释放RRC连接。6b.终端将成功释放RRC连接信令(RRC Connection Release Complete)发送给微基站,信令指示内容为通知对方已经成功释放RRC连接。
图25是根据一示例性实施例示出的一种模型学习方法中进行终端切换的协议和接口原理图。如图25所示,主要涉及宏基站、源微基站、目标微基站与终端之间的交互,具体如下:
1.源微基站将发送测量控制信号信令(Measurement Control)发送给终端,信令指示内容:通知对方进行信号强度测量。2.终端将发送测量报告信令(Measurement Reports)发送给源微基站,信令指示内容为发送测量报告给接收方。3.源微基站做出终端切换决策(HO decision)。4a.源微基站将发送切换请求信令(Handover Request)发送给目标微基站,信令指示内容为发送切换请求给接收方。4b.目标微基站将发送切换请求应答信令(Handover Request ack)发送给源微基站,信令指示内容为发送切换请求应答给接收方。5.源微基站将发送包含移动控制信息(Mobility control information)的重配置RRC连接请求信令(RRC Connection Reconfiguration)发送给终端,信令指示内容为发送重配置RRC连接请求给接收方。6.源微基站将发送终端状态信息信令(Early status transfer)发送给目标微基站,信令指示内容为发送终端状态信息给接收方。7.终端接入目标微基站。8.终端将发送RRC重连接配置完成消息信令(RRC Connection reconfiguration complete)发送给目标微基站,信令指示内容为发送RRC重连接配置完成消息给接收方。9.目标微基站将发送切换成功消息信令(Handover success)发送给源微基站,信令指示内容为发送切换成功消息给接收方。10.源微基站将发送切换终端及目标微基站信息信令发送给宏基站,信令指示内容为发送切换终端及目标微基站信息给宏基站。11.宏基站根据源微基站训练任务类型及切换信息决定终端是否继续参与源微基站的训练任务。12.宏基站将发送决定结果信令发送给目标微基站,信令指示内容为发送决定结果给接收方。13.宏基站将发送决定结果信令发送给源微基站,信令指示内容为发送决定结果给接收方。14.目标微基站决定切换终端 是否参与自己的联邦学习训练任务。15.目标微基站将发送决定结果信令发送给终端,信令指示内容为发送决定结果给接收方。
基于相同的构思,本公开实施例还提供一种模型学习装置。
可以理解的是,本公开实施例提供的模型学习装置为了实现上述功能,其包含了执行各个功能相应的硬件结构和/或软件模块。结合本公开实施例中所公开的各示例的单元及算法步骤,本公开实施例能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。本领域技术人员可以对每个特定的应用来使用不同的方法来实现所描述的功能,但是这种实现不应认为超出本公开实施例的技术方案的范围。
在本公开一些实施例中,在模型学习装置中,以包括一个宏基站装置、M个微基站装置和N个用户装置,为例进行说明。
其中,用户装置为接入微基站的终端,负责本地数据收集与本地模型训练,并可根据全局模型学习结果对本地模型进行更新。微基站装置负责选择参与模型训练任务的终端及学习模式、对参与模型训练任务的终端进行训练任务分配、汇总终端的本地训练结果并利用AI服务模块进行模型对齐与联邦平均,同时负责终端切换管理及转发宏基站下发的信令给终端。宏基站装置负责与OAM进行交互、对参与训练的微基站装置进行任务分配、汇总微基站装置的训练结果并利用AI服务模块进行模型对齐与全局模型学习,同时在终端发生切换时决定终端是否继续参与训练。
图26是根据一示例性实施例示出的一种模型学习装置框图。参照图26,该模型学习装置100,应用于宏基站,包括发送模块101。
发送模块,用于响应于接收到操作维护管理OAM实体发送的模型训练请求,向第一数量的微基站发送模型训练请求。其中,第一数量的微基站通信覆盖范围在宏基站通信覆盖范围内。
在本公开实施例中,模型训练请求用于触发微基站上报能力信息。装置还包括:确定模块102。
确定模块102,用于响应于接收到微基站发送的能力信息,基于能力信息确定模型结构和模型参数值,并向微基站发送模型结构和模型参数值。模型结构为指示微基站基于模型训练请求训练的模型结构,模型参数值为模型结构的初始参数值。
在本公开实施例中,能力信息包括微基站的数据类型特征。装置还包括:接收模块103。
接收模块103,用于接收第一数量微基站发送的第一数量第一模型训练结果。确定第一数量微基站中不同微基站具有的数据类型特征,并确定第一模型损失函数。基于第一数 量微基站中不同微基站具有的数据类型特征进行数据类型特征统一后,以优化第一模型损失函数为目标,对第一数量第一模型训练结果进行第一模型对齐。基于第一模型对齐的结果进行全局模型学习,确定全局模型。
在本公开实施例中,确定模块102,用于响应于全局模型学习的模型学习结果不满足OAM的模型训练请求,将模型学习结果发送至微基站,接收微基站基于模型学习结果重新确定的第一数量第一模型训练结果。并基于全局模型学习的模型学习结果重新确定第一模型损失函数,并以优化重新确定的第一模型损失函数为目标,重新对接收的第一数量第一模型训练结果进行第一模型对齐。基于重新确定的第一模型对齐的结果,进行下一次全局模型学习,重新确定模型学习结果,直到模型学习结果满足模型训练请求,将与满足模型训练请求的模型学习结果对应的模型确定为全局模型。
在本公开实施例中,确定模块102,用于确定微基站第一数量的第一模型训练结果与宏基站上一次全局模型学习得到的模型学习结果之间的第一损失函数,以及第一模型对齐损失函数。基于第一损失函数和第一模型对齐损失函数,确定第一模型损失函数。
在本公开实施例中,确定模块102,用于响应于全局模型学习的模型学习结果满足OAM的模型训练请求,向微基站发送停止模型训练信息。停止训练信息指示微基站停止终端执行模型训练任务。将模型学习结果对应的模型确定为全局模型,并向OAM发送全局模型。
在本公开实施例中,确定模块102还用于响应于在训练模型过程中接收到微基站发送的终端切换信息,基于终端切换信息重新确定执行模型训练任务的终端,并向微基站发送终端的信息。终端切换信息包括退出模型训练的终端和终端重新接入的目标微基站的信息。终端切换信息用于宏基站重新确定执行模型训练任务的终端。
图27是根据一示例性实施例示出的一种模型学习装置框图。参照图27,该模型学习装置200,应用于微基站,包括接收模块201和发送模块202。
接收模块201,用于接收宏基站发送的模型训练请求。发送模块202,用于向终端发送模型训练请求。其中,接收模型训练请求的微基站的数量为第一数量。第一数量的微基站通信覆盖范围在宏基站通信覆盖范围内。
在本公开实施例中,模型训练请求用于触发终端上报终端的通信条件和数据特征,接收模块201还用于接收终端发送的通信条件和数据类型特征。对终端的通信条件和数据特性,以及微基站的通信条件和数据特性进行处理,得到能力信息,并将能力信息发送至宏基站。其中,能力信息用于宏基站确定模型结构和模型参数值。
在本公开实施例中,接收模块201还用于:接收模型结构和模型参数值。模型结构为 指示微基站基于模型训练请求训练的模型结构,模型参数值为模型结构的初始参数值。基于终端的通信条件和数据类型特征以及模型结构和模型参数值,确定执行模型训练的第二数量终端。向第二数量终端发送调度信息。调度信息包括模型结构和模型参数值以及指示终端进行模型训练的指示信息。
在本公开实施例中,装置还包括:确定模块203。
接收模块201,用于接收第二数量终端发送的第二数量第二模型训练结果。确定模块203,用于确定第二数量终端中不同终端具有的数据类型特征,并确定第二模型损失函数。基于第二数量终端中不同终端具有的数据类型特征进行数据类型特征统一后,以优化第二模型损失函数为目标,对第二数量第二模型训练结果进行第二模型对齐。基于第二模型对齐的结果进行联邦聚合,得到第一模型训练结果。
在本公开实施例中,确定模块203,用于响应于接收到宏基站发送的继续训练请求,并接收到宏基站发送的模型学习结果。基于模型学习结果更新终端的模型结构和模型参数值,并向终端发送继续训练调度信息。响应于重新接收到第二数量第二模型训练结果,基于第一模型训练结果重新确定第二模型损失函数,并以优化重新确定的第二模型损失函数为目标,对第二数量第二模型训练结果进行第二模型对齐。基于重新确定的第二模型对齐的结果,进行下一次联邦聚合,重新确定第一模型训练结果。
在本公开实施例中,确定模块203,用于确定终端第二数量第二模型训练结果与微基站上一次联邦聚合得到的第一模型训练结果之间的第二损失函数,以及第二模型对齐损失函数。基于第二损失函数和第二模型对齐损失函数,确定第二模型损失函数。
在本公开实施例中,接收模块201还用于:接收宏基站发送的停止模型训练信息。停止训练信息指示微基站停止终端执行模型训练任务。基于停止模型训练信息指示终端停止执行模型训练任务。
在本公开实施例中,发送模块202还用于:发送终端切换信息。终端切换信息包括退出模型训练的终端和终端重新接入的目标微基站的信息。终端切换信息用于宏基站重新确定执行模型训练任务的终端。响应于接收到宏基站发送的终端信息,重新确定执行模型训练任务的终端,并向终端发送模型训练任务。
在本公开实施例中,发送模块202,用于响应于终端信息中包括上一次执行模型训练任务的终端,确定终端切换后的目标微基站,由目标微基站向终端发送模型训练任务。和/或
响应于终端信息中未包括上一次执行模型训练任务的终端,确定将终端不再执行模型训练任务,并确定新增执行模型训练任务的终端,向新增执行模型训练任务的终端发送模 型训练任务。关于上述实施例中的装置,其中各个模块执行操作的具体方式已经在有关该方法的实施例中进行了详细描述,此处将不做详细阐述说明。
图28是根据一示例性实施例示出的一种用于模型学习的装置300的框图。例如,装置300可以是移动电话,计算机,数字广播终端,消息收发设备,游戏控制台,平板设备,医疗设备,健身设备,个人数字助理等。
参照图28,装置300可以包括以下一个或多个组件:处理组件302,存储器304,电力组件306,多媒体组件308,音频组件310,输入/输出(I/O)接口312,传感器组件314,以及通信组件316。
处理组件302通常控制装置300的整体操作,诸如与显示,电话呼叫,数据通信,相机操作和记录操作相关联的操作。处理组件302可以包括一个或多个处理器320来执行指令,以完成上述的方法的全部或部分步骤。此外,处理组件302可以包括一个或多个模块,便于处理组件302和其他组件之间的交互。例如,处理组件302可以包括多媒体模块,以方便多媒体组件308和处理组件302之间的交互。
存储器304被配置为存储各种类型的数据以支持在装置300的操作。这些数据的示例包括用于在装置300上操作的任何应用程序或方法的指令,联系人数据,电话簿数据,消息,图片,视频等。存储器304可以由任何类型的易失性或非易失性存储设备或者它们的组合实现,如静态随机存取存储器(SRAM),电可擦除可编程只读存储器(EEPROM),可擦除可编程只读存储器(EPROM),可编程只读存储器(PROM),只读存储器(ROM),磁存储器,快闪存储器,磁盘或光盘。
电力组件306为装置300的各种组件提供电力。电力组件306可以包括电源管理系统,一个或多个电源,及其他与为装置300生成、管理和分配电力相关联的组件。
多媒体组件308包括在所述装置300和用户之间的提供一个输出接口的屏幕。在一些实施例中,屏幕可以包括液晶显示器(LCD)和触摸面板(TP)。如果屏幕包括触摸面板,屏幕可以被实现为触摸屏,以接收来自用户的输入信号。触摸面板包括一个或多个触摸传感器以感测触摸、滑动和触摸面板上的手势。所述触摸传感器可以不仅感测触摸或滑动动作的边界,而且还检测与所述触摸或滑动操作相关的持续时间和压力。在一些实施例中,多媒体组件308包括一个前置摄像头和/或后置摄像头。当装置300处于操作模式,如拍摄模式或视频模式时,前置摄像头和/或后置摄像头可以接收外部的多媒体数据。每个前置摄像头和后置摄像头可以是一个固定的光学透镜系统或具有焦距和光学变焦能力。
音频组件310被配置为输出和/或输入音频信号。例如,音频组件310包括一个麦克风(MIC),当装置300处于操作模式,如呼叫模式、记录模式和语音识别模式时,麦克风被 配置为接收外部音频信号。所接收的音频信号可以被进一步存储在存储器304或经由通信组件316发送。在一些实施例中,音频组件310还包括一个扬声器,用于输出音频信号。
I/O接口312为处理组件302和外围接口模块之间提供接口,上述外围接口模块可以是键盘,点击轮,按钮等。这些按钮可包括但不限于:主页按钮、音量按钮、启动按钮和锁定按钮。
传感器组件314包括一个或多个传感器,用于为装置300提供各个方面的状态评估。例如,传感器组件314可以检测到装置300的打开/关闭状态,组件的相对定位,例如所述组件为装置300的显示器和小键盘,传感器组件314还可以检测装置300或装置300一个组件的位置改变,用户与装置300接触的存在或不存在,装置300方位或加速/减速和装置300的温度变化。传感器组件314可以包括接近传感器,被配置用来在没有任何的物理接触时检测附近物体的存在。传感器组件314还可以包括光传感器,如CMOS或CCD图像传感器,用于在成像应用中使用。在一些实施例中,该传感器组件314还可以包括加速度传感器,陀螺仪传感器,磁传感器,压力传感器或温度传感器。
通信组件316被配置为便于装置300和其他设备之间有线或无线方式的通信。装置300可以接入基于通信标准的无线网络,如WiFi,2G或3G,或它们的组合。在一个示例性实施例中,通信组件316经由广播信道接收来自外部广播管理系统的广播信号或广播相关信息。在一个示例性实施例中,所述通信组件316还包括近场通信(NFC)模块,以促进短程通信。例如,在NFC模块可基于射频识别(RFID)技术,红外数据协会(IrDA)技术,超宽带(UWB)技术,蓝牙(BT)技术和其他技术来实现。
在示例性实施例中,装置300可以被一个或多个应用专用集成电路(ASIC)、数字信号处理器(DSP)、数字信号处理设备(DSPD)、可编程逻辑器件(PLD)、现场可编程门阵列(FPGA)、控制器、微控制器、微处理器或其他电子元件实现,用于执行上述方法。
在示例性实施例中,还提供了一种包括指令的非临时性计算机可读存储介质,例如包括指令的存储器304,上述指令可由装置300的处理器320执行以完成上述方法。例如,所述非临时性计算机可读存储介质可以是ROM、随机存取存储器(RAM)、CD-ROM、磁带、软盘和光数据存储设备等。
图29是根据一示例性实施例示出的一种用于模型学习的装置400的框图。例如,装置400可以被提供为一服务器。参照图29,装置400包括处理组件422,其进一步包括一个或多个处理器,以及由存储器432所代表的存储器资源,用于存储可由处理组件422的执行的指令,例如应用程序。存储器432中存储的应用程序可以包括一个或一个以上的每一个对应于一组指令的模块。此外,处理组件422被配置为执行指令,以执行上述方法。
装置400还可以包括一个电源组件426被配置为执行装置400的电源管理,一个有线或无线网络接口450被配置为将装置400连接到网络,和一个输入输出(I/O)接口458。装置400可以操作基于存储在存储器432的操作系统,例如Windows ServerTM,Mac OS XTM,UnixTM,LinuxTM,FreeBSDTM或类似。
进一步可以理解的是,本公开中“多个”是指两个或两个以上,其它量词与之类似。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。字符“/”一般表示前后关联对象是一种“或”的关系。单数形式的“一种”、“所述”和“该”也旨在包括多数形式,除非上下文清楚地表示其他含义。
进一步可以理解的是,术语“第一”、“第二”等用于描述各种信息,但这些信息不应限于这些术语。这些术语仅用来将同一类型的信息彼此区分开,并不表示特定的顺序或者重要程度。实际上,“第一”、“第二”等表述完全可以互换使用。例如,在不脱离本公开范围的情况下,第一信息也可以被称为第二信息,类似地,第二信息也可以被称为第一信息。
进一步可以理解的是,本公开实施例中尽管在附图中以特定的顺序描述操作,但是不应将其理解为要求按照所示的特定顺序或是串行顺序来执行这些操作,或是要求执行全部所示的操作以得到期望的结果。在特定环境中,多任务和并行处理可能是有利的。
本领域技术人员在考虑说明书及实践这里公开的发明后,将容易想到本公开的其它实施方案。本申请旨在涵盖本公开的任何变型、用途或者适应性变化,这些变型、用途或者适应性变化遵循本公开的一般性原理并包括本公开未公开的本技术领域中的公知常识或惯用技术手段。说明书和实施例仅被视为示例性的,本公开的真正范围和精神由下面的权利要求指出。
应当理解的是,本公开并不局限于上面已经描述并在附图中示出的精确结构,并且可以在不脱离其范围进行各种修改和改变。本公开的范围仅由所附的权利要求来限制。
Claims (20)
- 一种模型学习方法,其特征在于,应用于宏基站,包括:响应于接收到操作维护管理OAM实体发送的模型训练请求,向第一数量的微基站发送所述模型训练请求;其中,所述第一数量的微基站通信覆盖范围在所述宏基站通信覆盖范围内。
- 根据权利要求1所述的模型学习方法,其特征在于,所述模型训练请求用于触发微基站上报能力信息;所述向第一数量的微基站发送所述模型训练请求之后,所述方法还包括:响应于接收到微基站发送的能力信息,基于所述能力信息确定模型结构和模型参数值,并向微基站发送所述模型结构和模型参数值;所述模型结构为指示微基站基于所述模型训练请求训练的模型结构,所述模型参数值为所述模型结构的初始参数值。
- 根据权利要求2所述的模型学习方法,其特征在于,所述能力信息包括微基站的数据类型特征;所述方法还包括:接收第一数量微基站发送的第一数量第一模型训练结果;确定所述第一数量微基站中不同微基站具有的所述数据类型特征,并确定第一模型损失函数;基于所述第一数量微基站中不同微基站具有的数据类型特征进行数据类型特征统一后,以优化所述第一模型损失函数为目标,对所述第一数量第一模型训练结果进行第一模型对齐;基于第一模型对齐的结果进行全局模型学习,确定全局模型。
- 根据权利要求3所述的模型学习方法,其特征在于,所述基于第一模型对齐的结果进行全局模型学习,确定全局模型,包括:响应于所述全局模型学习的模型学习结果不满足OAM的模型训练请求,将所述模型学习结果发送至微基站,接收微基站基于所述模型学习结果重新确定的第一数量第一模型训练结果;并基于所述全局模型学习的模型学习结果重新确定所述第一模型损失函数,并以优化重新确定的第一模型损失函数为目标,重新对接收的所述第一数量第一模型训练结果进行第一模型对齐;基于重新确定的第一模型对齐的结果,进行下一次全局模型学习,重新确定模型学习结果,直到所述模型学习结果满足所述模型训练请求,将与满足所述模型训练请求的模型学习结果对应的模型确定为全局模型。
- 根据权利要求4所述的模型学习方法,其特征在于,确定第一模型损失函数,包括:确定微基站第一数量的第一模型训练结果与所述宏基站上一次全局模型学习得到的模型学习结果之间的第一损失函数,以及第一模型对齐损失函数;基于所述第一损失函数和第一模型对齐损失函数,确定第一模型损失函数。
- 根据权利要求3所述的模型学习方法,其特征在于,所述基于第一模型对齐结果进行全局模型学习,确定全局模型,包括:响应于所述全局模型学习的模型学习结果满足OAM的模型训练请求,向微基站发送停止模型训练信息;所述停止训练信息指示微基站停止终端执行模型训练任务;将所述模型学习结果对应的模型确定为全局模型,并向所述OAM发送所述全局模型。
- 根据权利要求1所述的模型学习方法,其特征在于,所述方法还包括:响应于在训练模型过程中接收到微基站发送的终端切换信息,基于所述终端切换信息重新确定执行模型训练任务的终端,并向微基站发送所述终端的信息;所述终端切换信息包括退出模型训练的终端和所述终端重新接入的目标微基站的信息;所述终端切换信息用于宏基站重新确定执行模型训练任务的终端。
- 一种模型学习方法,其特征在于,应用于微基站,包括:接收宏基站发送的模型训练请求;向终端发送所述模型训练请求;其中,所述接收模型训练请求的微基站的数量为第一数量;所述第一数量的微基站通信覆盖范围在所述宏基站通信覆盖范围内。
- 根据权利要求8所述的模型学习方法,其特征在于,所述模型训练请求用于触发终端上报终端的通信条件和数据特征,所述向终端发送所述模型训练请求之后,所述模型学习方法还包括:接收终端发送的通信条件和数据类型特征;对所述终端的通信条件和数据特性,以及所述微基站的通信条件和数据特性进行处理,得到能力信息,并将所述能力信息发送至宏基站;其中,所述能力信息用于宏基站确定模型结构和模型参数值。
- 根据权利要求9所述的模型学习方法,其特征在于,所述方法还包括:接收模型结构和模型参数值;所述模型结构为指示微基站基于所述模型训练请求训练的模型结构,所述模型参数值为所述模型结构的初始参数值;基于所述终端的通信条件和数据类型特征以及所述模型结构和模型参数值,确定执行模型训练的第二数量终端;向所述第二数量终端发送调度信息;所述调度信息包括模型结构和模型参数值以及指示终端进行模型训练的指示信息。
- 根据权利要求10所述的模型学习方法,其特征在于,所述方法还包括:接收第二数量终端发送的第二数量第二模型训练结果;确定所述第二数量终端中不同终端具有的数据类型特征,并确定第二模型损失函数;基于所述第二数量终端中不同终端具有的数据类型特征进行数据类型特征统一后,以优化所述第二模型损失函数为目标,对所述第二数量第二模型训练结果进行第二模型对齐;基于第二模型对齐的结果进行联邦聚合,得到第一模型训练结果。
- 根据权利要求11所述的模型学习方法,其特征在于,所述基于第二模型对齐的结果进行联邦聚合,得到第一模型训练结果,包括:响应于接收到宏基站发送的继续训练请求,并接收到宏基站发送的模型学习结果;基于所述模型学习结果更新终端的模型结构和模型参数值,并向终端发送继续训练调度信息;响应于重新接收到第二数量第二模型训练结果,基于所述第一模型训练结果重新确定第二模型损失函数,并以优化所述重新确定的第二模型损失函数为目标,对所述第二数量第二模型训练结果进行第二模型对齐;基于重新确定的第二模型对齐的结果,进行下一次联邦聚合,重新确定第一模型训练结果。
- 根据权利要求12所述的模型学习方法,其特征在于,确定第二模型损失函数,包括:确定终端第二数量第二模型训练结果与所述微基站上一次联邦聚合得到的第一模型训练结果之间的第二损失函数,以及第二模型对齐损失函数;基于所述第二损失函数和第二模型对齐损失函数,确定第二模型损失函数。
- 根据权利要求12所述的模型学习方法,其特征在于,所述方法还包括:接收宏基站发送的停止模型训练信息;所述停止训练信息指示微基站停止终端执行模型训练任务;基于所述停止模型训练信息指示终端停止执行模型训练任务。
- 根据权利要求8所述的模型学习方法,其特征在于,所述方法还包括:发送终端切换信息;所述终端切换信息包括退出模型训练的终端和终端重新接入的目标微基站的信息;所述终端切换信息用于宏基站重新确定执行模型训练任务的终端;响应于接收到宏基站发送的终端信息,重新确定执行模型训练任务的终端,并向终端发送模型训练任务。
- 根据权利要求15所述的模型学习方法,其特征在于,所述向终端发送模型训练任务,包括:响应于所述终端信息中包括上一次执行模型训练任务的终端,确定所述终端切换后的目标微基站,由所述目标微基站向终端发送所述模型训练任务;和/或响应于所述终端信息中未包括上一次执行模型训练任务的终端,确定将所述终端不再执行所述模型训练任务,并确定新增执行模型训练任务的终端,向新增执行模型训练任务的终端发送模型训练任务。
- 一种模型学习装置,其特征在于,应用于宏基站,包括:发送模块,用于响应于接收到操作维护管理OAM实体发送的模型训练请求,向第一数量的微基站发送所述模型训练请求;其中,所述第一数量的微基站通信覆盖范围在所述宏基站通信覆盖范围内。
- 一种模型学习装置,其特征在于,应用于微基站,包括:接收模块,用于接收宏基站发送的模型训练请求;发送模块向终端发送所述模型训练请求;其中,所述接收模型训练请求的微基站的数量为第一数量;所述第一数量的微基站通信覆盖范围在所述宏基站通信覆盖范围内。
- 一种模型学习装置,其特征在于,包括:处理器;用于存储处理器可执行指令的存储器;其中,所述处理器被配置为:执行权利要求1-7中任意一项所述的模型学习方法,或执行权利要求8-16中任意一项所述的模型学习方法。
- 一种非临时性计算机可读存储介质,当所述存储介质中的指令由移动终端的处理器执行时,使得移动终端能够执行权利要求1-7中任意一项所述的模型学习方法,或使得移动终端能够执行权利要求8-16中任意一项所述的模型学习方法。
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202180001548.6A CN115769211A (zh) | 2021-05-14 | 2021-05-14 | 一种模型学习方法、模型学习装置及存储介质 |
PCT/CN2021/093927 WO2022236831A1 (zh) | 2021-05-14 | 2021-05-14 | 一种模型学习方法、模型学习装置及存储介质 |
US18/560,372 US20240235954A1 (en) | 2021-05-14 | 2021-05-14 | Model learning method, model learning apparatus, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2021/093927 WO2022236831A1 (zh) | 2021-05-14 | 2021-05-14 | 一种模型学习方法、模型学习装置及存储介质 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022236831A1 true WO2022236831A1 (zh) | 2022-11-17 |
Family
ID=84027946
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/093927 WO2022236831A1 (zh) | 2021-05-14 | 2021-05-14 | 一种模型学习方法、模型学习装置及存储介质 |
Country Status (3)
Country | Link |
---|---|
US (1) | US20240235954A1 (zh) |
CN (1) | CN115769211A (zh) |
WO (1) | WO2022236831A1 (zh) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116761182A (zh) * | 2023-07-12 | 2023-09-15 | 中国电信股份有限公司技术创新中心 | 通信方法、通信装置、电子设备以及存储介质 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111190487A (zh) * | 2019-12-30 | 2020-05-22 | 中国科学院计算技术研究所 | 一种建立数据分析模型的方法 |
CN111369042A (zh) * | 2020-02-27 | 2020-07-03 | 山东大学 | 一种基于加权联邦学习的无线业务流量预测方法 |
US20210073639A1 (en) * | 2018-12-04 | 2021-03-11 | Google Llc | Federated Learning with Adaptive Optimization |
CN112668128A (zh) * | 2020-12-21 | 2021-04-16 | 国网辽宁省电力有限公司物资分公司 | 联邦学习系统中终端设备节点的选择方法及装置 |
- 2021-05-14 US US18/560,372 patent/US20240235954A1/en active Pending
- 2021-05-14 WO PCT/CN2021/093927 patent/WO2022236831A1/zh active Application Filing
- 2021-05-14 CN CN202180001548.6A patent/CN115769211A/zh active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20210073639A1 (en) * | 2018-12-04 | 2021-03-11 | Google Llc | Federated Learning with Adaptive Optimization |
CN111190487A (zh) * | 2019-12-30 | 2020-05-22 | 中国科学院计算技术研究所 | 一种建立数据分析模型的方法 |
CN111369042A (zh) * | 2020-02-27 | 2020-07-03 | 山东大学 | 一种基于加权联邦学习的无线业务流量预测方法 |
CN112668128A (zh) * | 2020-12-21 | 2021-04-16 | 国网辽宁省电力有限公司物资分公司 | 联邦学习系统中终端设备节点的选择方法及装置 |
Also Published As
Publication number | Publication date |
---|---|
US20240235954A1 (en) | 2024-07-11 |
CN115769211A (zh) | 2023-03-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021248371A1 (zh) | 一种接入方法、接入装置及存储介质 | |
CN112823544B (zh) | 条件切换的方法、装置、通信设备及存储介质 | |
WO2021258370A1 (zh) | 一种通信处理方法、通信处理装置及存储介质 | |
WO2021146847A1 (zh) | 定位的处理方法、装置、基站、终端设备及存储介质 | |
WO2021196214A1 (zh) | 传输方法、装置及计算机存储介质 | |
WO2022141405A1 (zh) | 资源集合配置方法、装置及存储介质 | |
WO2015180591A1 (zh) | 终端切换方法、接入设备、终端及系统 | |
WO2024207243A1 (zh) | 一种通信方法、装置及存储介质 | |
WO2024207245A1 (zh) | 一种通信方法、装置及存储介质 | |
WO2022236831A1 (zh) | 一种模型学习方法、模型学习装置及存储介质 | |
WO2023123123A1 (zh) | 一种频域资源确定方法、装置及存储介质 | |
US20240172077A1 (en) | Cell reselection method, cell reselection apparatus, and storage medium | |
US20230327740A1 (en) | Information transmission method and apparatus, communication device, and storage medium | |
WO2022006786A1 (zh) | 网络数据收集方法及装置、网络设备、用户设备及存储介质 | |
WO2022236638A1 (zh) | 一种模型推理方法、模型推理装置及存储介质 | |
WO2023087190A1 (zh) | 一种基于人工智能的网络任务处理方法、装置及存储介质 | |
CN116888937A (zh) | 一种人工智能通信方法、装置及存储介质 | |
WO2022261906A1 (zh) | 一种指示方法、指示装置及存储介质 | |
WO2021227081A1 (zh) | 转移业务的方法、装置、通信设备及存储介质 | |
WO2020164515A1 (zh) | 信号传输方法、设备及系统 | |
WO2022257085A1 (zh) | 一种模型数据管理方法、模型数据管理装置及存储介质 | |
WO2024031276A1 (zh) | 一种收集服务体验质量信息的方法、装置及存储介质 | |
WO2024138375A1 (zh) | 一种通信方法、装置、设备及存储介质 | |
WO2024087258A1 (zh) | 波束失败检测参考信号资源的确定方法、装置及存储介质 | |
US20230198710A1 (en) | Information transmission method and communication device, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21941400 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 18560372 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21941400 Country of ref document: EP Kind code of ref document: A1 |