WO2022222152A1 - Federated learning method, federated learning system, first device and third device - Google Patents

Federated learning method, federated learning system, first device and third device

Info

Publication number
WO2022222152A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
encrypted
model
inference
key
Prior art date
Application number
PCT/CN2021/089428
Other languages
English (en)
French (fr)
Inventor
陈景然
许阳
Original Assignee
Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Priority date
Filing date
Publication date
Application filed by Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Priority to CN202180097144.1A (publication CN117157651A)
Priority to PCT/CN2021/089428 (publication WO2022222152A1)
Priority to EP21937380.0A (publication EP4328815A4)
Publication of WO2022222152A1

Classifications

    • G06F21/6218 Protecting access to data via a platform, e.g. using keys or access control rules, to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245 Protecting personal data, e.g. for financial or medical purposes
    • G06N20/00 Machine learning
    • G06N20/20 Ensemble learning
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods

Definitions

  • the present application relates to the field of communications, and more particularly, to a federated learning method, a federated learning system, a first device, a third device, a chip, a computer-readable storage medium, a computer program product, and a computer program.
  • Feature data is often distributed across various nodes, such as mobile terminals, edge servers, network devices, and third-party application servers that operate across core network nodes (Over the Top, OTT).
  • the embodiments of the present application provide a federated learning method, a federated learning system, a first device, a third device, a chip, a computer-readable storage medium, a computer program product, and a computer program, which can be used to improve data privacy security.
  • the embodiments of the present application provide a federated learning method, including:
  • the first device sends the first key to the second device; wherein the first key is used to encrypt the inference information of the second model in the second device to obtain the first encrypted inference information;
  • when receiving the second encrypted inference information corresponding to the first encrypted inference information, the first device obtains target information based on the inference information of the first model in the first device and the second encrypted inference information.
  • the embodiment of the present application also provides a federated learning method, including:
  • the third device receives the first encrypted inference information from the i-th electronic device among N electronic devices; wherein the first encrypted inference information is obtained by the i-th electronic device encrypting the inference information of the second model in the i-th electronic device based on the first key sent by the first device; N is an integer greater than or equal to 2, and i is an integer greater than or equal to 1 and less than or equal to N;
  • the third device determines the second encrypted inference information corresponding to the first encrypted inference information based on the first encrypted inference information, and sends the second encrypted inference information to the first device; wherein the second encrypted inference information is used to instruct the first device to obtain target information based on the inference information of the first model in the first device and the second encrypted inference information.
  • the embodiment of the present application also provides a federated learning system, including:
  • a first device configured to send the first key
  • a second device configured to receive the first key, and use the first key to encrypt the inference information of the second model in the second device to obtain the first encrypted inference information
  • the first device is further configured to obtain target information based on the inference information of the first model in the first device and the second encrypted inference information when the second encrypted inference information corresponding to the first encrypted inference information is received.
  • the embodiment of the present application also provides a first device, including:
  • a first communication module configured to send a first key to the second device; wherein the first key is used to encrypt the inference information of the second model in the second device to obtain the first encrypted inference information;
  • the first processing module is configured to obtain target information based on the inference information of the first model in the first device and the second encrypted inference information when the second encrypted inference information corresponding to the first encrypted inference information is received.
  • the present application also provides a third device, comprising:
  • the second communication module is configured to receive the first encrypted inference information from the i-th electronic device among N electronic devices; wherein the first encrypted inference information is obtained by the i-th electronic device encrypting the inference information of the second model in the i-th electronic device based on the first key sent by the first device; N is an integer greater than or equal to 2, and i is an integer greater than or equal to 1 and less than or equal to N;
  • the second processing module is configured to determine the second encrypted inference information corresponding to the first encrypted inference information based on the first encrypted inference information, and to send the second encrypted inference information to the first device; wherein the second encrypted inference information is used to instruct the first device to obtain target information based on the inference information of the first model in the first device and the second encrypted inference information.
  • An embodiment of the present application further provides a first device, including: a processor and a memory, where the memory is used to store a computer program, and the processor invokes and runs the computer program stored in the memory to execute the above-mentioned federated learning method.
  • Embodiments of the present application further provide a third device, including: a processor and a memory, where the memory is used to store a computer program, and the processor invokes and runs the computer program stored in the memory to execute the above-mentioned federated learning method.
  • An embodiment of the present application further provides a chip, including: a processor, configured to call and run a computer program from a memory, so that a device installed with the chip executes the above-mentioned federated learning method.
  • Embodiments of the present application further provide a computer-readable storage medium for storing a computer program, wherein the computer program causes a computer to execute the above-mentioned federated learning method.
  • Embodiments of the present application further provide a computer program product, including computer program instructions, wherein the computer program instructions cause a computer to execute the above-mentioned federated learning method.
  • the embodiment of the present application further provides a computer program, the computer program enables a computer to execute the above-mentioned federated learning method.
  • based on the above solutions, the second device encrypts the inference information of the second model therein to obtain the first encrypted inference information, and the first device obtains the target information based on the second encrypted inference information corresponding to the first encrypted inference information and the inference information of the first model in the first device. Therefore, the first device and the second device participate in federated learning based on their respective models and obtain the target information through inference.
  • moreover, the first device sends the key and processes the second encrypted inference information obtained through encryption, so the key is managed by a participant in the inference process, which prevents other nodes from decrypting the related data and improves data privacy and security.
  • FIG. 1 is a schematic diagram of a model training process of vertical federated learning in an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a model reasoning process of vertical federated learning in an embodiment of the present application.
  • FIG. 3 is a system architecture diagram of a terminal device accessing a mobile network in an embodiment of the present application.
  • FIG. 4A is a first schematic diagram of an interface between the NWDAF and other network elements.
  • FIG. 4B is a second schematic diagram of the interface between the NWDAF and other network elements.
  • FIG. 5 is a flowchart of a federated learning method according to an embodiment of the present application.
  • FIG. 6 is an interactive flowchart of a federated learning method according to an embodiment of the present application.
  • FIG. 7 is an interactive flowchart of a federated learning method according to another embodiment of the present application.
  • FIG. 8 is an interaction flowchart of a federated learning method according to another embodiment of the present application.
  • FIG. 9 is an interactive flowchart of a federated learning method according to another embodiment of the present application.
  • FIG. 10 is a scene diagram of the federated learning training process in Application Example 1 of the present application.
  • FIG. 11 is an interactive flowchart of the federated learning training process in Application Example 1 of the present application.
  • FIG. 12 is a scene diagram of the federated learning inference process in Application Example 1 of the present application.
  • FIG. 13 is an interactive flowchart of the federated learning inference process in Application Example 1 of the present application.
  • FIG. 14 is a scene diagram of the federated learning training process in the second application example of the present application.
  • FIG. 15 is an interactive flowchart of the federated learning training process in the second application example of the present application.
  • FIG. 16 is a scene diagram of the federated learning inference process in the second application example of the present application.
  • FIG. 17 is an interactive flowchart of the federated learning inference process in the second application example of the present application.
  • FIG. 18 is a scene diagram of the federated learning training process in Application Example 3 of the present application.
  • FIG. 19 is an interactive flowchart of the federated learning training process in the third application example of the present application.
  • FIG. 20 is a schematic block diagram of a federated learning system according to an embodiment of the present application.
  • FIG. 21 is a schematic block diagram of a federated learning system according to another embodiment of the present application.
  • FIG. 22 is a schematic structural block diagram of a first device according to an embodiment of the present application.
  • FIG. 23 is a schematic structural block diagram of a third device according to an embodiment of the present application.
  • FIG. 24 is a schematic block diagram of a communication device according to an embodiment of the present application.
  • FIG. 25 is a schematic block diagram of a chip according to an embodiment of the present application.
  • the "instruction" mentioned in the embodiments of the present application may be a direct instruction, an indirect instruction, or an associated relationship.
  • a indicates B it can indicate that A directly indicates B, for example, B can be obtained through A; it can also indicate that A indicates B indirectly, such as A indicates C, and B can be obtained through C; it can also indicate that there is an association between A and B relation.
  • corresponding may indicate that there is a direct or indirect corresponding relationship between the two, or may indicate that there is an associated relationship between the two, or indicate and be instructed, configure and be instructed configuration, etc.
  • the communication system includes, for example: a Global System of Mobile communication (GSM) system, a Code Division Multiple Access (CDMA) system, a Wideband Code Division Multiple Access (WCDMA) system, a General Packet Radio Service (GPRS) system, a Long Term Evolution (LTE) system, an Advanced Long Term Evolution (LTE-A) system, a New Radio (NR) system, an evolution system of the NR system, an LTE-based access to unlicensed spectrum (LTE-U) system, an NR-based access to unlicensed spectrum (NR-U) system, or a Non-Terrestrial Network (NTN) system.
  • the communication system may also support Device to Device (D2D) communication, Machine to Machine (M2M) communication, Machine Type Communication (MTC), Vehicle to Vehicle (V2V) communication, and Vehicle to everything (V2X) communication.
  • a communication system may include multiple nodes, such as terminal equipment, network equipment, functional network elements in the core network, OTT servers, and the like.
  • the terminal equipment may also be referred to as user equipment (User Equipment, UE), an access terminal, a subscriber unit, a subscriber station, a mobile station, a remote station, a remote terminal, a mobile device, a user terminal, a terminal, a wireless communication device, a user agent, or a user device, etc.
  • the terminal device can be a station (STATION, ST) in a WLAN, a cellular phone, a cordless phone, a Session Initiation Protocol (SIP) phone, a Wireless Local Loop (WLL) station, a Personal Digital Assistant (PDA) device, a handheld device with a wireless communication function, a computing device or another processing device connected to a wireless modem, an in-vehicle device, a wearable device, a terminal device in a next-generation communication system such as an NR network, or a terminal device in a future evolved Public Land Mobile Network (PLMN), etc.
  • the terminal device can be deployed on land, including indoor or outdoor, handheld, wearable, or vehicle-mounted; it can also be deployed on water (such as on ships); and it can also be deployed in the air (such as on airplanes, balloons, and satellites).
  • the terminal device may be a mobile phone, a tablet computer (Pad), a computer with a wireless transceiver function, a Virtual Reality (VR) terminal device, an Augmented Reality (AR) terminal device, a wireless terminal device in industrial control, a wireless terminal device in self-driving, a wireless terminal device in remote medical, a wireless terminal device in a smart grid, a wireless terminal device in transportation safety, a wireless terminal device in a smart city, or a wireless terminal device in a smart home, etc.
  • the terminal device may also be a wearable device.
  • wearable devices may also be called wearable smart devices, which is a general term for devices that apply wearable technology to the intelligent design of daily wear, such as glasses, gloves, watches, clothing, and shoes.
  • a wearable device is a portable device that is worn directly on the body or integrated into the user's clothing or accessories. A wearable device is not only a hardware device, but also realizes powerful functions through software support, data interaction, and cloud interaction.
  • generalized wearable smart devices include devices that are full-featured, large-sized, and can realize complete or partial functions without relying on a smart phone, such as smart watches or smart glasses, as well as devices that focus on only a certain type of application function and need to be used in cooperation with other devices such as smart phones.
  • the network device may be a device for communicating with a mobile device, and may be an Access Point (AP) in a WLAN, a Base Transceiver Station (BTS) in GSM or CDMA, a NodeB (NB) in WCDMA, an Evolutional Node B (eNB or eNodeB) in LTE, a relay station or an access point, an in-vehicle device, a wearable device, or a network device in an NR network, etc.
  • the network device may have a mobile feature, for example, the network device may be a mobile device.
  • the network device may be a satellite or a balloon station.
  • the satellite may be a Low Earth Orbit (LEO) satellite, a Medium Earth Orbit (MEO) satellite, a Geostationary Earth Orbit (GEO) satellite, a High Elliptical Orbit (HEO) satellite, etc.
  • the network device may also be a base station set in a location such as land or water.
  • the embodiments of the present application are used for vertical federated learning, and vertical federated learning includes the processes of model training and model inference.
  • the model may refer to an AI model, such as a deep neural network model.
  • the model training process of vertical federated learning includes:
  • Encrypted sample alignment: vertical federated learning is suitable for the situation where multiple participants have training samples corresponding to the same identifier (Identifier, ID) but with different feature dimensions, that is, the IDs in the training samples provided by the multiple participants overlap more, while the data feature types overlap less. For example, a UE in a certain area generates different feature data at different nodes in the communication system, and vertical federated learning can be performed by combining the feature data of the UE in the different nodes. Therefore, it is necessary to align the training samples of the participants, increasing the feature dimension of the samples without increasing the sample IDs.
  • Encrypted model training based on the aligned samples, which includes:
  • S1: The third-party coordinator C sends the key to participant A and participant B, for encrypting the data to be transmitted.
  • the encryption method can be, for example, homomorphic encryption.
  • for example, the homomorphic encryption result of the sum of two samples m1 and m2 is equal to the homomorphic encryption result of m1 plus the homomorphic encryption result of m2, i.e., E(m1 + m2) = E(m1) + E(m2).
  • similarly, the homomorphic encryption result of a sample m multiplied by a constant is equal to the homomorphic encryption result of the sample multiplied by that constant, i.e., E(m·k) = E(m)·k.
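  • as a concrete illustration of these two properties, the following minimal sketch uses the open-source python-paillier (`phe`) package, which implements the additively homomorphic Paillier cryptosystem; the choice of library and the sample values are assumptions of this illustration, not part of the embodiments.

```python
# Minimal sketch of additive homomorphic encryption (assumes the
# open-source "phe" / python-paillier package, not mandated here).
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

m1, m2 = 13, 29

# Property 1: E(m1) + E(m2) decrypts to m1 + m2.
c1, c2 = public_key.encrypt(m1), public_key.encrypt(m2)
assert private_key.decrypt(c1 + c2) == m1 + m2

# Property 2: E(m1) * k decrypts to m1 * k for a plaintext constant k.
k = 7
assert private_key.decrypt(c1 * k) == m1 * k
```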
  • S2: Exchange of intermediate results.
  • the participant with the sample label is the active party, such as participant B in the figure.
  • the rest of the participants are data providers, which can be called passive parties, and do not have the labels of the samples.
  • participants A and B respectively calculate intermediate results related to their own local data based on model A and model B, and exchange them in encrypted form.
  • the model inference process of vertical federated learning includes:
  • T1: Coordinator C sends inference requests to participants A and B, respectively.
  • the inference request is used to indicate to A and B the model ID to be used and the input information required for inference.
  • T2: Participants A and B perform calculations based on their own data and locally stored models to obtain inference information, and encrypt it.
  • T3: Participants A and B send the encrypted inference information to coordinator C.
  • T4: Coordinator C aggregates the inference information of A and B to obtain the encrypted inference result, and decrypts it.
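  • the four steps T1-T4 can be sketched as follows. This is a toy illustration assuming the same `phe` package, with each participant's model reduced to a hypothetical local score; it also makes the privacy limitation discussed later visible, since coordinator C holds the private key and alone recovers the result.

```python
# Toy sketch of the inference steps T1-T4 (assumes the "phe" package;
# each participant's model is reduced to a single local score).
from phe import paillier

# Coordinator C holds the key pair and distributes the public key.
pub, priv = paillier.generate_paillier_keypair()

def local_inference(pub_key, local_score):
    # T2: each participant computes its inference information locally
    # and encrypts it with the coordinator's public key.
    return pub_key.encrypt(local_score)

# T1: C sends inference requests; T2/T3: A and B reply with ciphertexts.
enc_a = local_inference(pub, 0.37)    # participant A
enc_b = local_inference(pub, -0.12)   # participant B

# T4: C aggregates homomorphically and decrypts the final result.
print(priv.decrypt(enc_a + enc_b))    # ≈ 0.25
```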
  • the training process and inference process of the above-mentioned vertical federated learning can be implemented based on the 5G network architecture.
  • the network elements of the core network can act as service providers, provide specific services, and be called by other network elements through defined Application Programming Interfaces (APIs).
  • the network elements of the core network may also be referred to as nodes of the core network.
  • FIG. 3 shows a system architecture diagram of a terminal device UE accessing a mobile network.
  • the system architecture includes at least one of the following:
  • a Network Slice Selection Function (NSSF) network element;
  • a Network Exposure Function (NEF) network element;
  • a Network Repository Function (NRF) network element;
  • a Policy Control Function (PCF) network element;
  • a Unified Data Management (UDM) network element;
  • an Application Function (AF) network element;
  • a Network Slice Specific Authentication and Authorization Function (NSSAAF) network element;
  • an Authentication Server Function (AUSF) network element;
  • an Access and Mobility Management Function (AMF) network element;
  • a Session Management Function (SMF) network element;
  • a Service Communication Proxy (SCP) network element;
  • a User Plane Function (UPF) network element;
  • a Data Network (DN).
  • the N1 interface is the reference point between the terminal equipment and the AMF entity.
  • the N2 interface is the reference point between the (R)AN and the AMF network element, and is used for sending Non-Access Stratum (NAS) messages, etc.
  • the N3 interface is the reference point between the (R)AN and UPF network elements, and is used to transmit data on the user plane.
  • the N4 interface is a reference point between the SMF network element and the UPF network element, and is used to transmit, for example, the tunnel identification information of the N3 connection, the data buffer indication information, and the downlink data notification message.
  • the N6 interface is the reference point between the UPF entity and the DN, and is used to transmit data on the user plane.
  • the UE connects with the network device through the Access Stratum (AS) and exchanges access stratum messages and wireless data.
  • the UE establishes a NAS connection with the AMF and exchanges NAS messages.
  • the AMF is responsible for the management of the mobility of the UE, and the SMF is responsible for the session management of the UE.
  • the AMF is also responsible for forwarding messages related to session management between the UE and the SMF.
  • the PCF is responsible for formulating policies related to UE mobility management, session management, and charging.
  • UPF is connected with network equipment and external data network for data transmission.
  • the 5G network also adds a Network Data Analytics Function (NWDAF) network element to the core network, which can collect data from various network elements and network management systems of the core network, and perform big data statistics, analysis, or intelligent data analysis to obtain analysis or prediction data on the network side, so as to assist each network element in controlling UE access more effectively according to the data analysis results.
  • the NWDAF network element may collect data of other network elements (Network Function, NF) to perform big data analysis.
  • interfaces between the NWDAF and other network elements are defined, including, for example, the interface Nnf through which other network elements request a certain analysis result from the NWDAF, as shown in FIG. 4A, and through which the NWDAF sends a certain analysis result to other network elements, as shown in FIG. 4B.
  • in the above vertical federated learning, the key is managed by the coordinator, and this node, because of its decryption capability, can directly obtain the inference result.
  • if the active party is a third-party application server such as an OTT server while the coordinator is a node inside the core network, the inference information of artificial intelligence (AI) applications initiated by the third-party application server will be known by the core network.
  • AI artificial intelligence
  • in addition, the coordinator continuously collects and decrypts the intermediate model data of participants A and B, so it can infer the model information of participants A and B to a certain extent, and there is a risk of private data leakage.
  • moreover, since the NWDAF can only interact with other network elements and obtain the required data, it cannot use its data aggregation capability to improve the data privacy and security of federated learning.
  • FIG. 5 is a schematic flowchart of a federated learning method according to an embodiment of the present application. The method includes:
  • S51: the first device sends a first key to the second device; wherein the first key is used to encrypt the inference information of the second model in the second device to obtain the first encrypted inference information;
  • S52: when receiving the second encrypted inference information corresponding to the first encrypted inference information, the first device obtains the target information based on the inference information of the first model in the first device and the second encrypted inference information.
  • the first device includes at least one of the following:
  • the second device includes at least one of the following:
  • At least one network element in the second core network and/or the third core network may include, for example, at least one of the various network elements shown in FIG. 3 .
  • the second core network and the third core network may be the same core network, or may be different core networks.
  • the above-mentioned inference information may include output information, obtained by using the model, for the inference request.
  • the first device is the active party of federated learning
  • the second device is the passive party of federated learning.
  • the first device initiates an inference task and sends the inference request to the second device.
  • the inference request may include input information of the inference task, model ID, and the like.
  • the model ID is used to instruct the second device to determine a second model for performing the inference task from at least one model in the second device.
  • the second device may input the input information of the inference task into the second model, and the output information of the second model is the inference information of the second model.
  • correspondingly, the second device receives the first key.
  • the second device may encrypt the inference information of the second model in the second device by using the first key to obtain the first encrypted inference information.
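  • a minimal sketch of this behavior of the second device is shown below, assuming the first key is a Paillier public key (`phe` package) and representing the second model as a simple callable; the model table, the model ID, and the handler name are illustrative assumptions only.

```python
# Sketch of the second device handling an inference request (assumes
# the "phe" package; the model table and all names are illustrative).
from phe import paillier

models = {"model-42": lambda x: 2.0 * x + 1.0}  # stand-in second model

def handle_inference_request(first_key, model_id, input_info):
    second_model = models[model_id]            # select model by model ID
    inference_info = second_model(input_info)  # inference information
    # Encrypt with the first key to get the first encrypted inference info.
    return first_key.encrypt(inference_info)

pub, _priv = paillier.generate_paillier_keypair()  # first key pair
first_encrypted = handle_inference_request(pub, "model-42", 3.0)
```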
  • the second encrypted inference information may be a communication message carrying the first encrypted inference information.
  • for example, the second device encapsulates the first encrypted inference information into the second encrypted inference information according to a preset communication message format, as shown in FIG. 6, and sends the second encrypted inference information to the first device.
  • the second encrypted inference information may also be information obtained by performing other processing on the first encrypted inference information. For example, after obtaining the first encrypted inference information, the second device sends the first encrypted inference information to another device, and the other device processes it to obtain the second encrypted inference information and sends the second encrypted inference information to the first device.
  • the above-mentioned first key may be a public key corresponding to the first private key held by the first device. Since the first device holds the first private key, the first device is a key manager and has decryption capability.
  • the first device can decrypt the received second encrypted inference information by using the first private key to obtain corresponding decrypted information, where the decrypted information can represent the inference information of the second model in the second device.
  • the first device obtains target information based on the decryption information and inference information of the first model in the first device.
  • the inference information of the first model may be the output information of the first model after the input information of the inference task is input into the first model.
  • the target information is the final result of federated learning, that is, the inference result for the inference request.
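  • the corresponding step at the first device can be sketched as follows, again assuming Paillier encryption; the additive combination rule is an assumption for illustration, since the actual way the two models' outputs are combined depends on the federated model.

```python
# Sketch of the first device's final step (assumes the "phe" package;
# the additive combination rule is illustrative only).
from phe import paillier

pub, priv = paillier.generate_paillier_keypair()   # first key pair
second_encrypted = pub.encrypt(1.25)  # stands in for the received info

def derive_target_info(first_private_key, second_encrypted_info,
                       first_model_info):
    # Decrypt the second encrypted inference information with the first
    # private key and combine it with the first model's own output.
    decrypted = first_private_key.decrypt(second_encrypted_info)
    return first_model_info + decrypted  # target information

print(derive_target_info(priv, second_encrypted, first_model_info=0.5))
```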
  • based on the above method, the second device encrypts the inference information of the second model therein to obtain the first encrypted inference information, and the first device obtains the target information based on the second encrypted inference information corresponding to the first encrypted inference information and the inference information of the first model in the first device. Therefore, the first device and the second device participate in federated learning based on their respective models and obtain the target information through inference.
  • moreover, the first device sends the key and processes the second encrypted inference information obtained through encryption, so the key is managed by a participant in the inference process, which prevents other nodes from decrypting the related data and improves data privacy and security.
  • the second encrypted inference information may be information obtained by processing the first encrypted inference information.
  • the second encrypted inference information may be information obtained by aggregating the first encrypted inference information of multiple devices.
  • the second device includes N electronic devices, and the above method further includes:
  • the i-th electronic device among the N electronic devices receives the first key, encrypts the inference information of the second model in the i-th electronic device by using the first key to obtain the first encrypted inference information, and sends the first encrypted inference information to the third device;
  • the third device determines the second encrypted inference information based on the received first encrypted inference information
  • the third device sends the second encrypted inference information to the first device;
  • N is an integer greater than or equal to 2
  • i is an integer greater than or equal to 1 and less than or equal to N.
  • in other words, the first key is used to instruct the i-th electronic device among the N electronic devices to encrypt the inference information of the second model in the i-th electronic device to obtain the first encrypted inference information, and to send the first encrypted inference information to the third device.
  • the first encrypted inference information is used to instruct the third device to determine the second encrypted inference information.
  • the above method includes:
  • the third device receives the first encrypted inference information from the i-th electronic device among N electronic devices; wherein the first encrypted inference information is obtained by the i-th electronic device encrypting the inference information of the second model in the i-th electronic device based on the first key sent by the first device; N is an integer greater than or equal to 2, and i is an integer greater than or equal to 1 and less than or equal to N;
  • the third device determines the second encrypted inference information corresponding to the first encrypted inference information based on the first encrypted inference information, and sends the second encrypted inference information to the first device; wherein the second encrypted inference information is used to instruct the first device to obtain target information based on the inference information of the first model in the first device and the second encrypted inference information.
  • each of the N electronic devices may have its corresponding second model, and the parameters of the second model of each electronic device may be different.
  • Each electronic device obtains inference information based on its own second model, and performs encryption to obtain first encrypted inference information.
  • the N electronic devices send their respective first encrypted inference information to the third device.
  • the third device aggregates N pieces of first encrypted inference information to obtain second encrypted inference information, and sends the second encrypted inference information to the first device.
  • the first device obtains the target information by decrypting the second encrypted inference information and combining it with the inference information of the first model.
  • the third device determining the second encrypted inference information based on the received first encrypted inference information may include: the third device adds the received first encrypted inference information to obtain the second encrypted inference information.
  • since the second encrypted inference information is obtained by aggregating multiple pieces of first encrypted inference information, even if the first device has the decryption capability and can decrypt the second encrypted inference information, the decrypted information is the aggregation of the inference information of multiple second models, and the inference information of each individual second model cannot be obtained. It can be seen that, based on the above method, data privacy security can be further improved.
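  • a sketch of the third device's aggregation is shown below, assuming Paillier ciphertexts (`phe` package): the aggregator holds no private key, so it can only add the N ciphertexts homomorphically and never sees any individual device's inference information; the sample values are illustrative.

```python
# Sketch of the third device aggregating N ciphertexts without a key
# (assumes the "phe" package; sample values are illustrative).
from functools import reduce
from phe import paillier

pub, priv = paillier.generate_paillier_keypair()  # priv stays at the first device

def aggregate(first_encrypted_infos):
    # Homomorphic sum: the third device never decrypts anything.
    return reduce(lambda a, b: a + b, first_encrypted_infos)

per_device = [pub.encrypt(v) for v in (0.2, -0.7, 1.1)]  # N = 3 devices
second_encrypted = aggregate(per_device)
print(priv.decrypt(second_encrypted))  # ≈ 0.6: only the aggregate is revealed
```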
  • the second device may include the third device. That is to say, the third device can also act as a participant of federated learning, receive the first key, and encrypt the inference information of the second model in the third device to obtain the corresponding first encrypted inference information. After receiving the first encrypted inference information sent by other devices, the first encrypted inference information of itself and other devices may be aggregated to obtain second encrypted inference information.
  • the third device includes the first NWDAF network element. That is to say, the data aggregation function of NWDAF network elements in the core network can be used to improve the data privacy and security of federated learning.
  • the federated learning method provided in this application may also include the training process of federated learning.
  • if the first device is the active party of federated learning, the first device holds the labels of the training data and can calculate the loss function. Therefore, the federated learning method may further include: during the federated training of the first model and the second model, the first device determines the loss function based on the label information.
  • the key management party in the federated learning inference process is different from the key management party in the federated learning training process.
  • as described above, the key manager in the federated learning inference process is the first device.
  • in the federated learning training process, the key manager may be a device other than the first device and the second device.
  • the federated learning method can also include:
  • the first device receives the second key from the fourth device
  • the first device encrypts the training information of the first model by using the second key to obtain the first encrypted training information
  • the first device sends the first encrypted training information, wherein the first encrypted training information is used to enable the fourth device to obtain model update information based on the second encrypted training information corresponding to the first encrypted training information, and the model update information is used to update the first model.
  • correspondingly, the federated learning method can include:
  • the fourth device sends the second key
  • the first device receives the second key, uses the second key to encrypt the training information of the first model, obtains the first encrypted training information, and sends the first encrypted training information;
  • the fourth device obtains model update information based on the second encrypted training information corresponding to the first encrypted training information, and sends the model update information;
  • the first device updates the first model based on the model update information.
  • the above steps may be implemented before steps S51 and S52, and may be iterated repeatedly until the first model and/or the second model meet a preset convergence condition.
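  • the iterative structure described above may be organized as in the following sketch; every name here (the device objects and their methods) is a hypothetical placeholder for the interactions listed above, not an interface defined by the present application.

```python
# Skeleton of the iterative training flow (all names are hypothetical
# placeholders for the interactions described above).
def training_round(first_device, second_device, fourth_device):
    second_key = fourth_device.issue_second_key()       # fourth device sends the key
    first_enc = first_device.encrypt_training_info(second_key)
    third_enc = second_device.encrypt_training_info(second_key)
    update = fourth_device.derive_model_update(first_enc, third_enc)
    first_device.update_first_model(update)             # update the first model

def train(first_device, second_device, fourth_device, max_rounds=100):
    for _ in range(max_rounds):
        training_round(first_device, second_device, fourth_device)
        if first_device.converged():  # preset convergence condition
            return
```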
  • the above-mentioned second key may be a public key corresponding to the second private key held by the fourth device. Since the fourth device holds the second private key, the fourth device is a key manager and has decryption capability.
  • the second device also receives the second key, encrypts the training information of the second model with the second key, obtains third encrypted training information, and sends the third encrypted training information.
  • the fourth device combines the received first encrypted training information and the third encrypted training information to obtain second encrypted training information, which can be used to determine model update information.
  • the fourth device aggregates, e.g., adds, the first encrypted training information and the third encrypted training information to obtain the second encrypted training information, and then decrypts the second encrypted training information to obtain the model update information.
  • the above training information may be various information calculated by each device based on the model during the federated learning training process, such as loss function, gradient, and the like.
  • the above-mentioned model update information may include the gradient of the federated model obtained by the gradient aggregation of each device, or may include information such as gradient and mask of the model corresponding to each device.
  • the fourth device sends the second key to the first device and the second device respectively.
  • in an example, the second device may include at least one electronic device.
  • the first device interacts with the second device, wherein the first device provides label information of the training data and calculates the loss function.
  • the first device and the at least one electronic device included in the second device respectively calculate the gradients of the first model and the at least one second model as the above training information.
  • the first device adds a mask to the gradient, encrypts the gradient and the mask based on the second key, obtains the first encrypted training information, and sends the first encrypted training information.
  • the at least one electronic device in the second device adds a mask to its respective gradient, encrypts the gradient and the mask based on the second key to obtain its respective third encrypted training information, and sends the third encrypted training information.
  • the fourth device obtains the second encrypted training information, and decrypts it with the second private key to obtain the model update information of the first model and of each second model.
  • the model update information of the first model and of each second model is then sent to the respective participants, that is, the first device and each of the electronic devices in the second device, and each participant updates its own model.
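  • the mask-then-encrypt exchange in this example can be sketched as follows, assuming the `phe` package and scalar gradients standing in for gradient vectors; the participant names and the mask range are illustrative assumptions.

```python
# Sketch of the masked-gradient round (assumes the "phe" package;
# scalar gradients and participant names are illustrative).
import random
from phe import paillier

pub2, priv2 = paillier.generate_paillier_keypair()  # second key pair (fourth device)

gradients = {"first_device": 0.08, "electronic_device_1": -0.03}
masks = {name: random.uniform(-1.0, 1.0) for name in gradients}

# Each participant encrypts (gradient + mask) with the second key.
encrypted_training_info = {
    name: pub2.encrypt(g + masks[name]) for name, g in gradients.items()
}

# The fourth device decrypts each masked gradient and returns it;
# only the owning participant can remove its own mask.
for name, ciphertext in encrypted_training_info.items():
    masked_gradient = priv2.decrypt(ciphertext)
    gradient = masked_gradient - masks[name]  # mask removal at the participant
    assert abs(gradient - gradients[name]) < 1e-9
```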
  • the fourth device includes at least one of the following:
  • in some embodiments, the second device further includes the fourth device, that is, the fourth device can also participate in the training and inference of federated learning.
  • for example, in the training process, the fourth device receives the loss function calculated by the first device based on the label information, calculates the gradient of its own second model based on the loss function, uses the gradient as the model update information of its own second model, and updates its own second model.
  • for another example, the fourth device receives the encrypted training information of the first device and the encrypted training information of the electronic devices in the second device other than the fourth device, decrypts them using the private key to obtain the corresponding model update information, and sends each piece of model update information to the corresponding device.
  • in the inference process, the fourth device receives the first key sent by the first device, and uses the first key to encrypt the inference information of the second model in the fourth device to obtain the first encrypted inference information, so that the first device can perform inference in combination with the information of the second model in the fourth device to obtain the target information.
  • in some embodiments, a fifth device that does not have the decryption capability can also be used to implement the above process of aggregating the first encrypted training information and the third encrypted training information.
  • the first device sends the first encrypted training information, including:
  • the first device sends the first encrypted training information to the fifth device;
  • the first encrypted training information is used to instruct the fifth device to obtain the second encrypted training information based on the first encrypted training information and the third encrypted training information from the second device, and send the second encrypted training information to the fourth device;
  • the third encrypted training information is obtained by encrypting the training information of the second model with the second key; the second encrypted training information is used to instruct the fourth device to determine the model update information.
  • correspondingly, the above federated learning method further includes:
  • the second device receives the second key, uses the second key to encrypt the training information of the second model, obtains third encrypted training information, and sends the third encrypted training information;
  • the fifth device receives the first encrypted training information and the third encrypted training information, obtains the second encrypted training information based on the first encrypted training information and the third encrypted training information, and sends the second encrypted training information;
  • the fourth device receives the second encrypted training information, and determines model update information based on the second encrypted training information.
  • the fifth device includes a second NWDAF network element.
  • the fourth device sends the second key to the first device and the second device respectively.
  • the first device interacts with the second device, wherein the first device provides label information of the training data and calculates the loss function.
  • the first device and the second device respectively calculate and obtain the gradients of the first model and the second model as the above training information.
  • the first device adds a mask to the gradient, encrypts the gradient and the mask based on the second key, obtains the first encrypted training information, and sends the first encrypted training information.
  • the second device adds a mask to the gradient, encrypts the gradient and the mask based on the second key, obtains third encrypted training information, and sends the third encrypted training information.
  • the fifth device receives the first encrypted training information and the third encrypted training information, aggregates them to obtain the second encrypted training information, and sends the second encrypted training information to the fourth device.
  • the fourth device decrypts the second encrypted training information by using the second private key to obtain model update information.
  • the fourth device sends the model update information to each participant, namely the first device and the second device. Each participant updates its own model separately.
  • the second device may further include a fifth device. That is, the fifth device can also participate in the training and inference of federated learning.
  • for example, in the training process, the fifth device receives the second key sent by the fourth device, encrypts its own training information to obtain encrypted training information, aggregates the received encrypted training information together with its own encrypted training information to obtain the second encrypted training information, and sends the second encrypted training information to the fourth device to obtain the model update information and update the model.
  • in the inference process, the fifth device receives the first key sent by the first device, and encrypts the inference information of its own second model to obtain the first encrypted inference information. Further, the fifth device may be the same as or different from the third device. If the fifth device is the same as the third device, the fifth device can combine the first encrypted inference information sent by the other electronic devices in the second device to obtain the second encrypted inference information, and send the second encrypted inference information to the first device to determine the target information. If the fifth device is different from the third device, the fifth device can send its first encrypted inference information to the third device, and the third device aggregates the first encrypted inference information of each device to obtain the second encrypted inference information and sends it to the first device to determine the target information.
  • the above-mentioned first key and/or second key may be sent to the corresponding node during the process of establishing or modifying a Packet Data Unit (PDU) session between the terminal device and the network device, or may be sent to the corresponding node during a registration request process. Further, the key may also be sent to the corresponding node during an authentication process or an authorization process that occurs in the related flows.
  • the authentication process is, for example, a secondary authentication process performed by the core network between the terminal device and the application server triggered by the SMF network element during the establishment of the PDU session.
  • in the above processes, the key is sent from one node to other nodes, for example, from a network element of the core network to a terminal device and/or a server, or from a terminal device to a network element of the core network and/or a server.
  • the first device receives a second key from the fourth device, comprising:
  • the first device receives the second key from the fourth device in the first process
  • the first process includes at least one of the following: a first PDU session establishment process; a first PDU session modification process; a first registration request process; a first authentication process; and a first authorization process.
  • the fourth device sends the second key in the first process.
  • the first device sends the first key to the second device, including:
  • the first device sends the first key to the second device in the second process
  • the second process includes at least one of the following: a second PDU session establishment process; a second PDU session modification process; a second registration request process; a second authentication process; and a second authorization process.
  • the second device receives the first key in the second process.
  • in the embodiments of the present application, the participant that manages the key is the active party of the federated learning. Based on this, only the active party obtains the target information of the federated learning inference, which effectively prevents the application results from being known by the other participants.
  • in addition, aggregation nodes (sink nodes) such as the third device and the fifth device may be set up in federated learning.
  • the aggregation node does not have the decryption capability; by aggregating the encrypted information of each participant and then sending it to the key manager, it prevents the key manager from decrypting and obtaining the model information of individual participants, further improving data privacy and security.
  • furthermore, the key managers in the training process and the inference process of federated learning are different; in the inference process, the key is replaced and the active party sends a new key, thereby further improving data privacy and security.
  • in application example 1, the first device includes at least one network element on the network side (core network), so the network side is the active party.
  • the Network Data Analytics Function network element NWDAF on the network side acts as the aggregation node, that is, the above-mentioned third device and fifth device.
  • the NWDAF has the labels of the data samples and is responsible for the data collection of each node.
  • the second device includes the terminal device UE and an OTT server, that is, the UE and the OTT server are the passive parties, and the OTT server provides the feature data required by the samples through the AS.
  • in the training process, the OTT server also acts as the fourth device, i.e., the key manager, which generates the second key and sends the second key to the UE and the network side.
  • the training process of federated learning includes:
  • the OTT server sends the second key to the UE and at least one network element NFs on the network side, for encrypting the data to be transmitted.
  • the second key may be generated and sent by the key management module AS-KEY of the OTT server.
  • the UE, the AS, and at least one network element NFs on the network side respectively obtain the model calculation result according to the local data, encrypt the model calculation result, and send it to the NWDAF.
  • the NWDAF aggregates the data of each node, and calculates the encrypted loss function according to the labels.
  • the NWDAF sends the loss function to the UE, AS and at least one network element NFs.
  • the UE, the AS, and the at least one network element NFs calculate the encrypted gradients, add masks, and send them to the NWDAF.
  • the NWDAF aggregates the masked gradients sent by each node, and transparently transmits the aggregated encrypted result to the AS.
  • each node removes its mask and updates the local model weights according to the decrypted gradient.
  • in the inference process, since the network side is the active party and initiates the analysis of certain requirements of the 5G communication system, the results are not expected to be known by the third-party server. Therefore, the key management party is replaced by the network side.
  • that is, in the inference process, the network side generates a new key and sends it to each node.
  • the key management module NF-KEY in the at least one network element is responsible for generating and issuing keys and for decryption, and the other network elements in the at least one network element are responsible for participating in the calculation.
  • the NWDAF network element acts as the aggregation node (the third device) and is responsible for collecting the inference information of each node.
  • the reasoning process of federated learning includes:
  • the network side acts as the active party, and its key management module NF-KEY sends the first key and the configuration information of the model to be analyzed, including the model ID and input information, to each node.
  • the UE, the at least one network element NFs on the network side, and the AS perform calculations according to local data and the corresponding models, obtain the calculation results of the inference process, and send them to the NWDAF for aggregation.
  • the key management module NF-KEY on the network side decrypts the aggregation result of the NWDAF to obtain the final analysis result.
  • based on application example 1, the joint analysis of multi-domain data of the UE, the network side, and third-party applications can be realized while protecting the data privacy of each node, so that the network side can obtain more comprehensive analysis results.
  • in application example 2, the OTT server is the active party (the first device), the AS of the OTT server has the labels of the data samples, and at least one network element on the network side (core network) and the terminal device UE are the passive parties (the second device), which provide feature data related to OTT applications.
  • the network element NWDAF on the network side acts as the aggregation node (the third device and the fifth device) and is responsible for the data collection of each node.
  • in the training process, the key manager is the key management module NF-KEY (the fourth device) on the network side, which generates the second key and sends the second key to the UE and the AS.
  • the training process of federated learning includes:
  • the key management module NF-KEY on the network side sends a second key to the UE, at least one network element NFs and AS on the network side, for encrypting the data to be transmitted.
  • the UE and at least one network element NFs on the network side respectively determine model calculation results according to local data, encrypt them, and send them to the NWDAF.
  • the NWDAF aggregates the encrypted model calculation results of each node to obtain the model aggregation result, and sends it to the AS.
  • AS sends loss function to UE and NFs.
  • Each node calculates the encrypted gradient, adds a mask and sends it to NWDAF.
  • NWDAF aggregates the gradient and mask and sends it to the key management module NF-KEY.
  • the key management module NF-KEY decrypts the loss function and gradient according to the private key.
  • the key management module NF-KEY transmits the gradients belonging to each node back to each node.
  • Each node removes the mask and updates the local model weights according to the decrypted gradient.
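The additive homomorphism is sufficient for the whole round sketched above: ciphertexts from different nodes can be summed, and a ciphertext can be multiplied by a locally known plaintext, which is exactly what forming the encrypted loss term and the encrypted gradients requires. The sketch below illustrates this for a squared-error objective on a single sample, assuming the `phe` package; the feature values, weights and label are hypothetical.

```python
from phe import paillier

pub, priv = paillier.generate_paillier_keypair(n_length=1024)  # held by NF-KEY

# Hypothetical vertically split linear model: the UE and an NF hold features.
x_ue, x_nf = 1.0, 2.0
ue_partial = 0.4 * x_ue    # w_ue * x_ue, computed locally by the UE
nf_partial = -0.1 * x_nf   # w_nf * x_nf, computed locally by the NF
as_partial = 0.3           # the AS's own model calculation result
label = 0.1                # held by the AS in this application example

# The UE and NF encrypt their partial results; the NWDAF aggregates them.
aggregate_ct = pub.encrypt(ue_partial) + pub.encrypt(nf_partial)

# The AS adds its own contribution and forms the encrypted residual
# (prediction minus label), the core of the encrypted loss function.
residual_ct = aggregate_ct + as_partial - label

# Each node derives its encrypted gradient by multiplying the encrypted
# residual with its plaintext local feature (ciphertext * scalar is allowed).
ue_grad_ct = residual_ct * x_ue
nf_grad_ct = residual_ct * x_nf

# Only NF-KEY can decrypt; here we just check the arithmetic end to end.
expected_residual = ue_partial + nf_partial + as_partial - label  # 0.4
assert abs(priv.decrypt(ue_grad_ct) - expected_residual * x_ue) < 1e-6
```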
  • As in application example 1, a new key manager is needed for the inference process; that is, during inference the key management module AS-KEY in the active party's AS generates the first key and sends it to each node. As shown in Figure 16, the NWDAF aggregates the encrypted inference results of the nodes and sends the encrypted inference results to AS-KEY.
  • The inference process includes:
  • The AS sends the first key and the configuration information of the model to be analyzed to each node.
  • The UE and the at least one network element (NFs) perform calculations according to their local data and the corresponding models, obtain the model calculation results, and send them to the NWDAF for aggregation.
  • The NWDAF sends the encrypted aggregation result to the AS.
  • The AS combines it with its own calculation result and decrypts with its private key to obtain the final analysis result, as in the sketch after this list.
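A short sketch of the final combination step, assuming the keypair is held by AS-KEY and the values are hypothetical: the AS decrypts only the aggregate of the other parties' encrypted results, then adds its own plaintext partial result to obtain the final analysis result.

```python
from phe import paillier

pub, priv = paillier.generate_paillier_keypair(n_length=1024)  # held by AS-KEY

# Hypothetical encrypted partial inference results from the UE and the NFs,
# already aggregated by the NWDAF before being forwarded to the AS.
encrypted_aggregate = pub.encrypt(0.7) + pub.encrypt(-0.3)

# The AS combines the decrypted aggregate with its own calculation result.
own_partial = 0.9
final_analysis = priv.decrypt(encrypted_aggregate) + own_partial
print(final_analysis)  # 1.3 for these hypothetical values
```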
  • In this application example, the network side is the active party (the first device), and its NWDAF holds the labels of the data samples.
  • The access stratum (AS) of the OTT server and the UE are the passive parties (the second device).
  • The NWDAF is the sink node (the third device and the fifth device) and is also responsible for collecting the data of each node.
  • The key manager (the fourth device) for the training process is the UE; that is, the UE generates the second key and sends it to the network side and the AS.
  • The key management module UE-KEY of the terminal device sends the second key to the at least one network element (NFs) and the AS on the network side, for encrypting the data to be transmitted.
  • The UE, the at least one network element (NFs) and the AS each obtain a model calculation result from their local data, encrypt it and send it to the NWDAF.
  • The NWDAF aggregates the data of the nodes and computes the encrypted loss function according to the labels.
  • The NWDAF sends the loss function to the UE, the NFs and the AS.
  • The UE, the NFs and the AS compute the encrypted gradients; the NFs and the AS add masks to their gradients and send them to the NWDAF (the UE, as the key holder, needs no mask here).
  • The NWDAF aggregates the received gradients and masks, obtains the aggregated result, and sends the aggregated result to the UE.
  • The UE decrypts the loss function and the gradients with its private key.
  • The UE passes the gradients back to the respective nodes.
  • Each node removes its mask and updates its local model weights according to the decrypted gradient.
  • In the inference process, as in application example 1, a new key manager is needed: the first key is generated by the network side, which is the active party, and sent to each node.
  • The subsequent steps are similar to those of application example 1 and are not described again here.
  • Each device in the embodiments of the present application may include at least one of a terminal device, a core network element and a server in the network system.
  • How to assign the first device (the active party), the second device (the participants), the fourth device (the key manager of the training process), and the third and fifth devices (the sink nodes) can be determined according to actual requirements; one hypothetical way to express the assignments of the three application examples is sketched below.
  • The specific device configuration is not limited to the manner of the above application examples, and the implementation flow is similar to the above application examples.
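Since the role assignment is a deployment choice, it can be captured as a small configuration mapping. The sketch below expresses the three application examples this way; the enum and field names are illustrative only and do not come from this application.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Node(Enum):
    UE = auto()       # terminal device
    NFS = auto()      # at least one core network element (network side)
    NWDAF = auto()    # network data analytics function network element
    OTT_AS = auto()   # access stratum / application server of the OTT service

@dataclass(frozen=True)
class Roles:
    active: Node                 # first device (key manager for inference)
    passive: tuple[Node, ...]    # second device(s)
    train_key_manager: Node      # fourth device (key manager for training)
    sink: Node                   # third and fifth devices (aggregation node)

APPLICATION_EXAMPLE_1 = Roles(Node.NFS, (Node.UE, Node.OTT_AS), Node.OTT_AS, Node.NWDAF)
APPLICATION_EXAMPLE_2 = Roles(Node.OTT_AS, (Node.UE, Node.NFS), Node.NFS, Node.NWDAF)
APPLICATION_EXAMPLE_3 = Roles(Node.NFS, (Node.UE, Node.OTT_AS), Node.UE, Node.NWDAF)
```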
  • an embodiment of the present application further provides a federated learning system 1000, referring to FIG. 20, which includes:
  • a first device 100 configured to send a first key
  • the second device 200 is configured to receive the first key, and use the first key to encrypt the inference information of the second model in the second device 200 to obtain the first encrypted inference information;
  • the first device 100 is further configured to, when the second encrypted inference information corresponding to the first encrypted inference information is received, obtain the target information based on the inference information of the first model in the first device 100 and the second encrypted inference information.
  • the second device 200 includes N electronic devices; the i-th electronic device among the N electronic devices is configured to encrypt the inference information of the second model in the i-th electronic device with the first key to obtain the first encrypted inference information, and to send the first encrypted inference information;
  • system 1000 further includes:
  • a third device 300 configured to receive the first encrypted inference information, and determine the second encrypted inference information based on the first encrypted inference information;
  • N is an integer greater than or equal to 2
  • i is an integer greater than or equal to 1 and less than or equal to N.
  • the system 1000 further includes:
  • a fourth device 400 configured to send the second key
  • the first device 100 is further configured to receive the second key, encrypt the training information of the first model with the second key, obtain the first encrypted training information, and send the first encrypted training information;
  • the fourth device 400 is further configured to obtain model update information based on the second encrypted training information corresponding to the first encrypted training information, where the model update information is used to update the first model.
  • the second device 200 is further configured to receive the second key, encrypt the training information of the second model with the second key, obtain third encrypted training information, and send the third encrypted training information;
  • the system 1000 further includes: a fifth device 500, configured to receive the first encrypted training information and the third encrypted training information, obtain the second encrypted training information based on the first encrypted training information and the third encrypted training information, and send the second encrypted training information;
  • the fourth device 400 is further configured to receive second encrypted training information, and determine model update information based on the second encrypted training information.
  • Each device in the federated learning system 1000 of the embodiments of the present application can implement the corresponding functions of the corresponding device in the foregoing method embodiments; for the corresponding flows, functions, implementations and beneficial effects, reference may be made to the corresponding descriptions in the method embodiments, which are not repeated here. A compact walk-through of one training round in this system is sketched below.
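To make the division of labor among devices 100 to 500 concrete, here is a compact, hypothetical walk-through of one training round using the same additively homomorphic scheme as in the earlier sketches (the `phe` package); the function name and the scalar "training information" are illustrative only.

```python
from phe import paillier

def one_training_round():
    # Fourth device 400: generates and distributes the second key.
    pub, priv = paillier.generate_paillier_keypair(n_length=1024)

    # First device 100 and second device 200: encrypt their training
    # information (hypothetical scalar gradients) with the second key.
    first_encrypted_training = pub.encrypt(0.25)    # from device 100
    third_encrypted_training = pub.encrypt(-0.10)   # from device 200

    # Fifth device 500: combines them into the second encrypted training
    # information without being able to decrypt either input.
    second_encrypted_training = first_encrypted_training + third_encrypted_training

    # Fourth device 400: decrypts to obtain the model update information,
    # which is then used to update the first model.
    return priv.decrypt(second_encrypted_training)

print(one_training_round())  # 0.15 for these hypothetical values
```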
  • an embodiment of the present application further provides a first device 100, referring to FIG. 22, which includes:
  • the first communication module 110 is configured to send the first key to the second device; wherein, the first key is used to encrypt the inference information of the second model in the second device to obtain the first encrypted inference information;
  • the first processing module 120 is configured to, when the second encrypted inference information corresponding to the first encrypted inference information is received, obtain the target information based on the inference information of the first model in the first device and the second encrypted inference information.
  • the second device includes N electronic devices;
  • the first key is used to instruct the i-th electronic device in the N electronic devices to encrypt the inference information of the second model in the i-th electronic device, obtain the first encrypted inference information, and send the first encrypted inference information to a third device;
  • the first encrypted inference information is used to instruct the third device to determine the second encrypted inference information
  • N is an integer greater than or equal to 2
  • i is an integer greater than or equal to 1 and less than or equal to N.
  • the third device includes the first NWDAF network element.
  • the first communication module 110 is further configured to: receive the second key from the fourth device;
  • the first processing module 120 is further configured to: encrypt the training information of the first model with the second key to obtain the first encrypted training information;
  • the first communication module 110 is further configured to: send the first encrypted training information, wherein the first encrypted training information is used to enable the fourth device to obtain model update information based on the second encrypted training information corresponding to the first encrypted training information, and the model update information is used to update the first model.
  • the first communication module 110 is specifically configured to: send the first encrypted training information to the fifth device;
  • wherein the first encrypted training information is used to instruct the fifth device to obtain the second encrypted training information based on the first encrypted training information and the third encrypted training information from the second device, and to send the second encrypted training information to the fourth device;
  • and the second encrypted training information is used to instruct the fourth device to determine the model update information.
  • the fifth device includes a second NWDAF network element.
  • the fourth device includes at least one of the following: a first terminal device; at least one network element in a first core network; a first server.
  • the first communication module 110 is configured to: receive the second key from the fourth device in a first process;
  • the first process includes at least one of the following:
  • the establishment process of a first packet data unit (PDU) session;
  • the modification process of the first PDU session;
  • a first registration request process;
  • a first authentication process;
  • a first authorization process.
  • the first communication module 110 is further configured to: send the first key to the second device in a second process;
  • the second process includes at least one of the following:
  • the establishment process of a second PDU session;
  • the modification process of the second PDU session;
  • a second registration request process;
  • a second authentication process;
  • a second authorization process.
  • the first processing module 120 is further configured to:
  • determine the loss function based on the label information during the federated learning training of the first model and the second model.
  • the first device includes at least one of the following: a second terminal device; at least one network element in a second core network; a second server.
  • the second device includes at least one of the following: a third terminal device; at least one network element in a third core network; a third server.
  • The first device 100 of this embodiment of the present application can implement the corresponding functions of the first device in the foregoing method embodiments; for the corresponding flows, functions, implementations and beneficial effects of each module (sub-module, unit or component, etc.) in the first device 100, reference may be made to the corresponding descriptions in the method embodiments, which are not repeated here.
  • The functions described for the modules (sub-modules, units or components, etc.) in the first device 100 may be implemented by different modules (sub-modules, units or components, etc.) or by the same module; for example, a first sending module and a second sending module may be different modules or the same module, and either arrangement can implement the corresponding functions in the embodiments of the present application.
  • the communication module in the embodiment of the present application may be implemented by a transceiver of the device, and some or all of the other modules may be implemented by a processor of the device.
  • FIG. 23 is a schematic block diagram of a third device 300 according to an embodiment of the present application.
  • the third device 300 may include:
  • the second communication module 310 is configured to receive the first encrypted inference information from the i-th electronic device among N electronic devices; wherein the first encrypted inference information is obtained by the i-th electronic device encrypting the inference information of the second model in the i-th electronic device based on the first key sent by the first device; N is an integer greater than or equal to 2, and i is an integer greater than or equal to 1 and less than or equal to N;
  • a second processing module 320 configured to determine second encrypted inference information corresponding to the first encrypted inference information based on the first encrypted inference information
  • the second communication module 310 is further configured to: send the second encrypted inference information to the first device; wherein the second encrypted inference information is used to instruct the first device to obtain the target information based on the inference information of the first model in the first device and the second encrypted inference information. A sketch of such an aggregation module follows.
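A minimal sketch of what the two modules of the third device amount to, assuming ciphertexts from an additively homomorphic scheme such as the `phe` package used above; the class and method names are illustrative only.

```python
class ThirdDevice:
    """Sink node: turns first encrypted inference information from N
    electronic devices into second encrypted inference information."""

    def __init__(self):
        self._received = []  # ciphertexts from the i-th electronic devices

    def receive_first_encrypted_inference(self, ciphertext):
        # Second communication module: collect one node's ciphertext.
        self._received.append(ciphertext)

    def determine_second_encrypted_inference(self):
        # Second processing module: homomorphic sum only; the third device
        # holds no private key and therefore cannot decrypt anything.
        assert len(self._received) >= 2, "N must be an integer >= 2"
        aggregate = self._received[0]
        for ct in self._received[1:]:
            aggregate = aggregate + ct
        return aggregate  # to be sent to the first device
```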
  • The third device 300 of this embodiment of the present application can implement the corresponding functions of the network device in the foregoing method embodiments; for the corresponding flows, functions, implementations and beneficial effects of each module (sub-module, unit or component, etc.) in the third device 300, reference may be made to the corresponding descriptions in the method embodiments, which are not repeated here.
  • The functions described for the modules (sub-modules, units or components, etc.) in the third device 300 may be implemented by different modules (sub-modules, units or components, etc.) or by the same module; for example, a first sending module and a second sending module may be different modules or the same module, and either arrangement can implement the corresponding functions in the embodiments of the present application.
  • the communication module in the embodiment of the present application may be implemented by a transceiver of the device, and some or all of the other modules may be implemented by a processor of the device.
  • FIG. 24 is a schematic structural diagram of a communication device 600 according to an embodiment of the present application, wherein the communication device 600 includes a processor 610, and the processor 610 can call and run a computer program from a memory to implement the method in the embodiment of the present application.
  • the communication device 600 may also include a memory 620.
  • the processor 610 may call and run a computer program from the memory 620 to implement the methods in the embodiments of the present application.
  • the memory 620 may be a separate device independent of the processor 610, or may be integrated in the processor 610.
  • the communication device 600 may further include a transceiver 630, and the processor 610 may control the transceiver 630 to communicate with other devices; specifically, it may send information or data to other devices, or receive information or data sent by other devices.
  • the transceiver 630 may include a transmitter and a receiver.
  • the transceiver 630 may further include antennas, and the number of the antennas may be one or more.
  • the communication device 600 may be the first device of the embodiments of the present application, and the communication device 600 may implement the corresponding processes implemented by the first device in the methods of the embodiments of the present application, which are not repeated here for brevity.
  • the communication device 600 may alternatively be the third device of the embodiments of the present application, and the communication device 600 may implement the corresponding processes implemented by the third device in the methods of the embodiments of the present application, which are not repeated here for brevity.
  • FIG. 25 is a schematic structural diagram of a chip 700 according to an embodiment of the present application, wherein the chip 700 includes a processor 710, and the processor 710 can call and run a computer program from a memory to implement the method in the embodiments of the present application.
  • the chip 700 may further include a memory 720.
  • the processor 710 may call and run a computer program from the memory 720 to implement the methods in the embodiments of the present application.
  • the memory 720 may be a separate device independent of the processor 710, or may be integrated in the processor 710.
  • the chip 700 may further include an input interface 730 .
  • the processor 710 may control the input interface 730 to communicate with other devices or chips, and specifically, may acquire information or data sent by other devices or chips.
  • the chip 700 may further include an output interface 740 .
  • the processor 710 can control the output interface 740 to communicate with other devices or chips, and specifically, can output information or data to other devices or chips.
  • the chip can be applied to the first device in the embodiment of the present application, and the chip can implement the corresponding processes implemented by the first device in each method of the embodiment of the present application, which is not repeated here for brevity.
  • the chip can be applied to the third device in the embodiment of the present application, and the chip can implement the corresponding processes implemented by the third device in each method of the embodiment of the present application, which is not repeated here for brevity.
  • the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip, or the like.
  • the processor mentioned above may be a general-purpose processor, a digital signal processor (DSP), a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), or other programmable logic devices, transistor logic devices, discrete hardware components, etc.
  • the general-purpose processor mentioned above may be a microprocessor or any conventional processor or the like.
  • the memory mentioned above may be either volatile memory or non-volatile memory, or may include both volatile and non-volatile memory.
  • the non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (electrically EPROM, EEPROM) or a flash memory.
  • Volatile memory may be random access memory (RAM).
  • the memory in the embodiments of the present application may also be a static random access memory (static RAM, SRAM), a dynamic random access memory (dynamic RAM, DRAM), a synchronous dynamic random access memory (synchronous DRAM, SDRAM), a double data rate synchronous dynamic random access memory (double data rate SDRAM, DDR SDRAM), an enhanced synchronous dynamic random access memory (enhanced SDRAM, ESDRAM), a synchlink dynamic random access memory (synch link DRAM, SLDRAM), a direct Rambus random access memory (Direct Rambus RAM, DR RAM), and so on. That is, the memory in the embodiments of the present application is intended to include, but not be limited to, these and any other suitable types of memory.
  • The above-mentioned embodiments may be implemented in whole or in part by software, hardware, firmware or any combination thereof.
  • When implemented using software, they can be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, all or part of the processes or functions described in the embodiments of the present application are generated.
  • the computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center in a wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) manner.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or a data center, that integrates one or more available media.
  • the available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVDs), or semiconductor media (e.g., solid state disks (SSDs)), among others.
  • the size of the sequence numbers of the above-mentioned processes does not imply an order of execution; the execution order of each process should be determined by its functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioethics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computer Security & Cryptography (AREA)
  • Databases & Information Systems (AREA)
  • Computer Hardware Design (AREA)
  • Storage Device Security (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The present application relates to a federated learning method, a federated learning system, a first device, a third device, a chip, a computer-readable storage medium, a computer program product and a computer program. The method includes: a first device sends a first key to a second device, wherein the first key is used to encrypt inference information of a second model in the second device to obtain first encrypted inference information; and, when receiving second encrypted inference information corresponding to the first encrypted inference information, the first device obtains target information based on inference information of a first model in the first device and the second encrypted inference information. The embodiments of the present application can improve data privacy security.

Description

联邦学习方法、联邦学习系统、第一设备和第三设备 技术领域
本申请涉及通信领域,并且更具体地,涉及一种联邦学习方法、联邦学习系统、第一设备、第三设备、芯片、计算机可读存储介质、计算机程序产品和计算机程序。
背景技术
为了训练泛化能力更强的人工智能模型,需要使用多维度的特征数据。特征数据往往分布在移动终端、边缘服务器、网络设备和跨核心网节点(Over the Top,OTT)的第三方应用服务器等各个节点。通过实现跨域的多节点数据共享,联合多个节点的不同维度的特征数据进行模型训练,可以提升模型的能力,对模型的训练有着重要的意义。
但是,多节点的多域数据共享,将会对数据隐私带来极大的挑战。为了在满足数据隐私、安全和监管需求的前提下,高效、准确的使用多节点的数据,相关技术中提出了联邦学习方案。如何在联邦学习的多方交互过程中提升数据隐私安全,是相关领域中的热点问题。
发明内容
有鉴于此,本申请实施例提供一种联邦学习方法、联邦学习系统、第一设备、第三设备、芯片、计算机可读存储介质、计算机程序产品和计算机程序,可用于提高数据隐私安全。
本申请实施例提供一种联邦学习方法,包括:
第一设备向第二设备发送第一密钥;其中,第一密钥用于对第二设备中的第二模型的推理信息进行加密,得到第一加密推理信息;
第一设备在接收到与第一加密推理信息对应的第二加密推理信息的情况下,基于第一设备中的第一模型的推理信息以及第二加密推理信息,得到目标信息。
本申请实施例还提供一种联邦学习方法,包括:
第三设备接收来自N个电子设备中的第i个电子设备的第一加密推理信息;其中,第一加密推理信息是第i个电子设备基于第一设备发送的第一密钥对第i个电子设备中的第二模型的推理信息进行加密得到的;N为大于等于2的整数,i为大于等于1且小于等于N的整数;
第三设备基于第一加密推理信息确定与第一加密推理信息对应的第二加密推理信息,并向第一设备发送第二加密推理信息;其中,第二加密推理信息用于指示第一设备基于第一设备中的第一模型的推理信息以及第二加密推理信息,得到目标信息。
本申请实施例还提供一种联邦学习系统,包括:
第一设备,用于发送第一密钥;
第二设备,用于接收第一密钥,利用第一密钥对第二设备中的第二模型的推理信息进行加密,得到第一加密推理信息;
第一设备,还用于在接收到与第一加密推理信息对应的第二加密推理信息的情况下,基于第一设备中的第一模型的推理信息以及第二加密推理信息,得到目标信息。
本申请实施例还提供一种第一设备,包括:
第一通信模块,用于向第二设备发送第一密钥;其中,第一密钥用于对第二设备中的第二模型的推理信息进行加密,得到第一加密推理信息;
第一处理模块,用于在接收到与第一加密推理信息对应的第二加密推理信息的情况下,基于第一设备中的第一模型的推理信息以及第二加密推理信息,得到目标信息。
本申请还提供一种第三设备,包括:
第二通信模块,用于接收来自N个电子设备中的第i个电子设备的第一加密推理信息;其中,第一加密推理信息是第i个电子设备基于第一设备发送的第一密钥对第i个电子设备中的第二模型的推理信息进行加密得到的;N为大于等于2的整数,i为大于等于1且小于等于N的整数;
第二处理模块,用于第三设备基于第一加密推理信息确定与第一加密推理信息对应的第二加密推理信息,并向第一设备发送第二加密推理信息;其中,第二加密推理信息用于指示第一设备基于第一设备中的第一模型的推理信息以及第二加密推理信息,得到目标信息。
本申请实施例还提供一种第一设备,包括:处理器和存储器,存储器用于存储计算机程序,处理器调用并运行存储器中存储的计算机程序,执行上述的联邦学习方法。
本申请实施例还提供一种第三设备,包括:处理器和存储器,存储器用于存储计算机程序,处理器 调用并运行存储器中存储的计算机程序,执行上述的联邦学习方法。
本申请实施例还提供一种芯片,包括:处理器,用于从存储器中调用并运行计算机程序,使得安装有芯片的设备执行上述的联邦学习方法。
本申请实施例还提供一种计算机可读存储介质,用于存储计算机程序,其中,计算机程序使得计算机执行上述的联邦学习方法。
本申请实施例还提供一种计算机程序产品,包括计算机程序指令,其中,计算机程序指令使得计算机执行上述的联邦学习方法。
本申请实施例还提供一种计算机程序,计算机程序使得计算机执行上述的联邦学习方法。
根据本申请实施例,第二设备对其中的第二模型的推理信息进行加密得到第一加密推理信息,第一设备基于与第一加密推理信息对应的第二加密推理信息以及第一设备中的第一模型的推理信息得到目标信息。因此,第一设备和第二设备基于各自的模型参与联邦学习,推理得到目标信息。在推理得到目标信息的过程中,由第一设备发送密钥并处理经加密得到的第二加密推理信息,因此,实现了由推理过程的参与方管理密钥,避免由其他节点解密相关数据,提升数据隐私安全。
附图说明
图1是本申请实施例中纵向联邦学习的模型训练过程的示意图。
图2是本申请实施例中纵向联邦学习的模型推理过程的示意图。
图3是本申请实施例中终端设备接入移动网络的系统架构图。
图4A是NWDAF与其他网元之间的接口的示意图一。
图4B是NWDAF与其他网元之间的接口的示意图二。
图5是本申请一个实施例的联邦学习方法的流程框图。
图6是本申请一个实施例的联邦学习方法的交互流程图。
图7是本申请另一个实施例的联邦学习方法的交互流程图。
图8是本申请另一个实施例的联邦学习方法的交互流程图。
图9是本申请另一个实施例的联邦学习方法的交互流程图。
图10是本申请应用示例一中的联邦学习训练过程的场景图。
图11是本申请应用示例一中的联邦学习训练过程的交互流程图。
图12是本申请应用示例一中的联邦学习推理过程的场景图。
图13是本申请应用示例一中的联邦学习推理过程的交互流程图。
图14是本申请应用示例二中的联邦学习训练过程的场景图。
图15是本申请应用示例二中的联邦学习训练过程的交互流程图。
图16是本申请应用示例二中的联邦学习推理过程的场景图。
图17是本申请应用示例二中的联邦学习推理过程的交互流程图。
图18是本申请应用示例三中的联邦学习训练过程的场景图。
图19是本申请应用示例三中的联邦学习训练过程的交互流程图。
图20是本申请一个实施例的联邦学习系统的示意性框图。
图21是本申请另一个实施例的联邦学习系统的示意性框图。
图22是本申请一个实施例的第一设备的示意性结构框图。
图23是本申请一个实施例的第三设备的示意性结构框图。
图24是本申请实施例的通信设备示意性框图。
图25是本申请实施例的芯片的示意性框图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行描述。
应理解,本文中术语“系统”和“网络”在本文中常可互换使用。本文中术语“和/或”用来描述关联对象的关联关系,例如表示前后关联对象可存在三种关系,举例说明,A和/或B,可以表示:单独存在A、同时存在A和B、单独存在B这三种情况。本文中字符“/”一般表示前后关联对象是“或”的关系。
应理解,在本申请的实施例中提到的“指示”可以是直接指示,也可以是间接指示,还可以是表示具有关联关系。举例说明,A指示B,可以表示A直接指示B,例如B可以通过A获取;也可以表示A间接指示B,例如A指示C,B可以通过C获取;还可以表示A和B之间具有关联关系。
在本申请实施例的描述中,术语“第一”、“第二”、“第三”等仅用于区别相同或相类似的技术特征 的描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量,也不用于描述次序或时间顺序。在合适的情况下术语是可以互换的。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括至少一个该特征。
在本申请实施例的描述中,术语“对应”可表示两者之间具有直接对应或间接对应的关系,也可以表示两者之间具有关联关系,也可以是指示与被指示、配置与被配置等关系。
本申请实施例的技术方案可以应用于纵向联邦学习。纵向联邦学习的过程可以基于各种通信系统中的各节点交互实现。其中,通信系统包括例如:全球移动通讯(Global System of Mobile communication,GSM)系统、码分多址(Code Division Multiple Access,CDMA)系统、宽带码分多址(Wideband Code Division Multiple Access,WCDMA)系统、通用分组无线业务(General Packet Radio Service,GPRS)、长期演进(Long Term Evolution,LTE)系统、先进的长期演进(Advanced long term evolution,LTE-A)系统、新无线(New Radio,NR)系统、NR系统的演进系统、免授权频谱上的LTE(LTE-based access to unlicensed spectrum,LTE-U)系统、免授权频谱上的NR(NR-based access to unlicensed spectrum,NR-U)系统、非地面通信网络(Non-Terrestrial Networks,NTN)系统、通用移动通信系统(Universal Mobile Telecommunication System,UMTS)、无线局域网(Wireless Local Area Networks,WLAN)、无线保真(Wireless Fidelity,WiFi)、第五代通信(5th-Generation,5G)系统或其他通信系统等。
通常来说,传统的通信系统支持的连接数有限,也易于实现,然而,随着通信技术的发展,移动通信系统将不仅支持传统的通信,还将支持例如,设备到设备(Device to Device,D2D)通信,机器到机器(Machine to Machine,M2M)通信,机器类型通信(Machine Type Communication,MTC),车辆间(Vehicle to Vehicle,V2V)通信,或车联网(Vehicle to everything,V2X)通信等,本申请实施例也可以应用于这些通信系统。
通信系统中可包括多个节点,例如终端设备、网络设备、核心网中各功能性网元、OTT服务器等。其中,终端设备也可以称为用户设备(User Equipment,UE)、接入终端、用户单元、用户站、移动站、移动台、远方站、远程终端、移动设备、用户终端、终端、无线通信设备、用户代理或用户装置等。
终端设备可以是WLAN中的站点(STAION,ST),可以是蜂窝电话、无绳电话、会话启动协议(Session Initiation Protocol,SIP)电话、无线本地环路(Wireless Local Loop,WLL)站、个人数字处理(Personal Digital Assistant,PDA)设备、具有无线通信功能的手持设备、计算设备或连接到无线调制解调器的其它处理设备、车载设备、可穿戴设备、下一代通信系统例如NR网络中的终端设备,或者未来演进的公共陆地移动网络(Public Land Mobile Network,PLMN)网络中的终端设备等。
在本申请实施例中,终端设备可以部署在陆地上,包括室内或室外、手持、穿戴或车载;也可以部署在水面上(如轮船等);还可以部署在空中(例如飞机、气球和卫星上等)。
在本申请实施例中,终端设备可以是手机(Mobile Phone)、平板电脑(Pad)、带无线收发功能的电脑、虚拟现实(Virtual Reality,VR)终端设备、增强现实(Augmented Reality,AR)终端设备、工业控制(industrial control)中的无线终端设备、无人驾驶(self driving)中的无线终端设备、远程医疗(remote medical)中的无线终端设备、智能电网(smart grid)中的无线终端设备、运输安全(transportation safety)中的无线终端设备、智慧城市(smart city)中的无线终端设备或智慧家庭(smart home)中的无线终端设备等。
作为示例而非限定,在本申请实施例中,该终端设备还可以是可穿戴设备。可穿戴设备也可以称为穿戴式智能设备,是应用穿戴式技术对日常穿戴进行智能化设计、开发出可以穿戴的设备的总称,如眼镜、手套、手表、服饰及鞋等。可穿戴设备即直接穿在身上,或是整合到用户的衣服或配件的一种便携式设备。可穿戴设备不仅仅是一种硬件设备,更是通过软件支持以及数据交互、云端交互来实现强大的功能。广义穿戴式智能设备包括功能全、尺寸大、可不依赖智能手机实现完整或者部分的功能,例如:智能手表或智能眼镜等,以及只专注于某一类应用功能,需要和其它设备如智能手机配合使用,如各类进行体征监测的智能手环、智能首饰等。
在本申请实施例中,网络设备可以是用于与移动设备通信的设备,网络设备可以是WLAN中的接入点(Access Point,AP),GSM或CDMA中的基站(Base Transceiver Station,BTS),也可以是WCDMA中的基站(NodeB,NB),还可以是LTE中的演进型基站(Evolutional Node B,eNB或eNodeB),或者中继站或接入点,或者车载设备、可穿戴设备以及NR网络中的网络设备(gNB)或者未来演进的PLMN网络中的网络设备等。
作为示例而非限定,在本申请实施例中,网络设备可以具有移动特性,例如网络设备可以为移动的设备。可选地,网络设备可以为卫星、气球站。例如,卫星可以为低地球轨道(low earth orbit,LEO)卫星、中地球轨道(medium earth orbit,MEO)卫星、地球同步轨道(geostationary earth orbit,GEO)卫星、高椭圆轨道(High Elliptical Orbit,HEO)卫星等。可选地,网络设备还可以为设置在陆地、水域 等位置的基站。
为便于理解本申请实施例的技术方案,以下对本申请实施例的相关技术进行说明,以下相关技术作为可选方案与本申请实施例的技术方案可以进行任意结合,其均属于本申请实施例的保护范围。
本申请实施例用于纵向联邦学习,纵向联邦学习包括模型训练和模型推理的过程。其中,模型可以指AI模型,例如深度神经网络模型。通常,如图1所示,纵向联邦学习的模型训练过程包括:
一、加密样本对齐:纵向联邦学习适用于多个参与方具有对应于多个相同的标识(Identifier,ID)但特征维度不同的训练样本的情况,即多个参与方提供的训练样本中ID重叠较多,但数据特征类型重叠较少的情况。例如,某个区域的UE在通信系统的不同节点产生不同特征数据,联合不同节点的该UE的特征数据可以进行纵向联邦学习。因此,需要对各参与方的训练样本进行对齐,在不增加样本ID的情况下,增加样本的特征维度。
二、模型加密训练:基于对齐的样本进行模型加密训练。包括:
S1:发送密钥。由第三方协调者C向参与方A和参与方B发送密钥,用于加密需要传输的数据。加密方式可以是例如同态加密。对两个样本m1、m2进行同态加密的结果,等于m1的同态加密结果加上m2的同态加密结果。并且,样本m和一个常数相乘后的同态加密结果,等于这个样本的同态加密结果乘以常数。
S2:交互中间结果。纵向联邦学习中,拥有样本标签的参与方为主动方,如图中的参与方B。其余参与方为数据提供方,可以称为被动方,不具有样本的标签。参与方A和B分别基于模型A和模型B计算和自己本地数据相关的中间结果,并加密交互。
S3:计算损失函数和梯度。通过被动方A和主动方B交互中间结果,基于主动方中的样本标签可以计算得到联邦模型即模型A和模型B整体的损失函数。根据损失函数,被动方A和主动方B分别基于模型A和模型B计算加密后的梯度并添加掩码发送给协调者C,同时主动方B确定加密后的损失函数发送给C。
S4:更新模型。协调者C解密损失和梯度后分别回传给参与方A和B,A和B去除掩码后更新各自的模型。
如图2所示,纵向联邦学习的模型推理过程包括:
T1:协调者C分别向参与方A和B发送推理请求。推理请求用于向A和B指示需要采用的模型ID以及推理所需的输入信息。
T2:参与方A和B根据自身数据和本地存储的模型,进行计算,得到推理信息,并加密。
T3:参与方A和B向协调者C发送加密后的推理信息。
T4:协调者C聚合A和B的推理信息,得到加密的推理结果,对其进行解密。
上述纵向联邦学习的训练过程和推理过程,可以基于5G网络架构实现。
5G网络架构中最重要的一个特征就是服务化架构。服务化架构中,核心网的网元可以作为服务提供者,可以提供特定的服务,并通过定义好的应用程序编程接口(Application Programming Interface,API)供其他网元调用。其中,核心网的网元也可以称为核心网的节点。
图3示出了终端设备UE接入移动网络的系统架构图。系统架构中包括以下至少之一:
网络切片选择功能(Network Slice Selection Function,NSSF)网元;
网络开放功能(Network Exposure Function,NEF)网元;
网络存储功能(Network Repository Function,NRF)网元;
策略控制功能(Policy Control Function,PCF)网元;
统一数据管理(Unified Data Management,UDM)网元;
应用功能(Application Function,AF)网元;
网络切片专用认证授权功能(Network Slice Specific Authentication and Authorization Function,NSSAAF)网元;
鉴权服务器功能(Authentication Server Function,AUSF)网元;
接入和移动性管理功能(Access and Mobility Management Function,AMF)网元;
会话管理功能(Session Management Function,SMF)网元;
业务通信代理(Service Communication Proxy,SCP)网元;
终端设备;
(无线)接入网((Radio)Access Network,(R)AN);
用户面功能(User Plane Function,UPF)网元;
数据网络(Data Network,DN)。
在该系统架构中,N1接口为终端设备与AMF实体之间的参考点。N2接口为AN和AMF网元的 参考点,用于非接入层(Non-Access Stratum,NAS)消息的发送等。N3接口为(R)AN和UPF网元之间的参考点,用于传输用户面的数据等。N4接口为SMF网元和UPF网元之间的参考点,用于传输例如N3连接的隧道标识信息、数据缓存指示信息、下行数据通知消息等。N6接口为UPF实体和DN之间的参考点,用于传输用户面的数据等。
其中,UE与网络设备进行接入层(Access Stratum,AS)连接,交互接入层消息及无线数据。UE与AMF进行NAS连接,交互NAS消息。AMF负责对UE移动性的管理,SMF负责对UE的会话管理,AMF在对移动终端进行移动性管理之外,还负责将从会话管理相关消息在UE和SMF之间的转发。PCF负责制定对UE的移动性管理、会话管理、计费等相关的策略。UPF与网络设备及外部数据网络相连进行数据传输。
此外,5G网络还在核心网中增加了网络数据分析功能(Network Data Analytics Function,NWDAF)网元,可以从核心网各个网元、网管系统等收集数据进行大数据统计、分析或者智能化的数据分析,得出网络侧的分析或者预测数据,从而辅助各个网元根据数据分析结果对UE接入进行更有效的控制。
具体的,NWDAF网元可以收集其他网元(Network Function,NF)的数据,用以进行大数据分析。为此,定义了NWDAF与其他网元之间的接口,包括例如图4A所示的其他网元向NWDAF请求某个分析结果的接口N nf,以及图4B所示的NWDAF向其他网元发送某个分析结果的接口N nwdaf
经本申请发明人深入研究发现,在上述联邦学习的过程中,虽然在保证数据隐私的情况下,可以进行多节点的数据收集,进行联合训练和推断,但是在推理过程中,作为协调者的节点,因为具备解密能力,可以直接得到推断的结果。例如主动方是第三方的应用服务器例如OTT服务器,而协调者是核心网内部的节点,则第三方的应用服务器发起的人工智能(Artificial Intelligence,AI)应用的推理信息会被核心网得知,依旧存在隐私风险。此外,协调者不断收集参与方A和B的模型中间数据,并解密,可以在一定程度上推理出参与方A和B的模型信息,存在隐私数据泄露风险。进一步地,基于目前的5G系统架构实现联邦学习,由于NWDAF只能与其他网元交互并获得需要的数据,因此,也未能利用其数据汇聚功能提升联邦学习的数据隐私安全。
本申请实施例提供的方案,主要用于解决上述问题中的至少一个。
为了能够更加详尽地了解本发明实施例的特点与技术内容,下面结合附图对本发明实施例的实现进行详细阐述,所附附图仅供参考说明之用,并非用来限定本发明实施例。
图5是根据本申请一实施例的联邦学习方法的示意性流程图。该方法包括:
S51:第一设备向第二设备发送第一密钥;其中,第一密钥用于对第二设备中的第二模型的推理信息进行加密,得到第一加密推理信息;
S52:第一设备在接收到与第一加密推理信息对应的第二加密推理信息的情况下,基于第一设备中的第一模型的推理信息以及第二加密推理信息,得到目标信息。
可选地,第一设备包括以下至少之一:
第二终端设备;
第二核心网中的至少一个网元;
第二服务器。
可选地,第二设备包括以下至少之一:
第三终端设备;
第三核心网中的至少一个网元;
第三服务器。
其中,第二核心网和/或第三核心网中的至少一个网元可以包括例如图3所示的多种网元中的至少一种。第二核心网和第三核心网可以是相同的核心网,也可以是不同的核心网。
可选地,上述推理信息可以包括利用模型得到的针对推理请求的输出信息。例如,第一设备为联邦学习的主动方,第二设备为联邦学习的被动方。第一设备发起推理任务,将推理请求下发至第二设备。推理请求可以包括推理任务的输入信息、模型ID等。模型ID用于指示第二设备从第二设备中的至少一个模型中确定出用于执行推理任务的第二模型。第二设备可以将推理任务的输入信息输入第二模型,第二模型的输出信息即为第二模型的推理信息。
与上述方法相对应地,第二设备会接收到第一密钥。第二设备可以利用第一密钥对第二设备中的第二模型的推理信息进行加密,得到第一加密推理信息。
一种示例中,第二加密推理信息可以是第一加密推理信息的通信报文。例如,第二设备得到第一加密推理信息后,根据预设的通信报文格式将第一加密推理信息封装为第二加密推理信息,如图6所示,将第二加密推理信息发送至第一设备。
另一种示例中,第二加密推理信息还可以是对第一加密推理信息进行其他处理后得到的信息。例 如,第二设备得到第一加密推理信息后,将第一加密推理信息发送到其他设备中,其他设备处理得到第二加密推理信息,并将第二加密推理信息发送至第一设备。
示例性地,上述第一密钥可以是与第一设备持有的第一私钥对应的公钥。由于第一设备持有第一私钥,因此,第一设备为密钥管理方,具有解密能力。
一种示例中,第一设备可以利用第一私钥对接收到的第二加密推理信息进行解密,得到对应的解密信息,该解密信息可以表征第二设备中的第二模型的推理信息。第一设备基于解密信息和第一设备中的第一模型的推理信息,得到目标信息。
其中,第一模型的推理信息可以是将推理任务的输入信息输入第一模型后,第一模型的输出信息。目标信息即联邦学习的最终结果,即针对推理请求的推理结果。
根据上述联邦学习方法,第二设备对其中的第二模型的推理信息进行加密得到第一加密推理信息,第一设备基于与第一加密推理信息对应的第二加密推理信息以及第一设备中的第一模型的推理信息得到目标信息。因此,第一设备和第二设备基于各自的模型参与联邦学习,推理得到目标信息。在推理得到目标信息的过程中,由第一设备发送密钥并处理经加密得到的第二加密推理信息,因此,实现了由推理过程的参与方管理密钥,避免由其他节点解密相关数据,提升数据隐私安全。
前述已经说明,第二加密推理信息可以是对第一加密推理信息进行处理后得到的信息。可选地,第二加密推理信息可以是汇聚多个设备的第一加密推理信息后得到的信息。
示例性地,第二设备包括N个电子设备,上述方法还包括:
N个电子设备中的第i个电子设备接收第一密钥,利用第一密钥对第i个电子设备中的第二模型的推理信息进行加密,得到第一加密推理信息,并将第一加密推理发送至第三设备;
第三设备基于接收到的第一加密推理信息,确定第二加密推理信息;
第三设备向第一设备发送第二加密推理信息;
其中,N为大于等于2的整数,i为大于等于1且小于等于N的整数。
也就是说,第一密钥用于指示N个电子设备中的第i个电子设备对第i个电子设备中的第二模型的推理信息进行加密,得到第一加密推理信息,并将第一加密推理信息发送至第三设备。第一加密推理信息用于指示第三设备确定第二加密推理信息。
从第三设备的角度而言,上述方法包括:
第三设备接收来自N个电子设备中的第i个电子设备的第一加密推理信息;其中,第一加密推理信息是第i个电子设备基于第一设备发送的第一密钥对第i个电子设备中的第二模型的推理信息进行加密得到的;N为大于等于2的整数,i为大于等于1且小于等于N的整数;
第三设备基于第一加密推理信息确定与第一加密推理信息对应的第二加密推理信息,并向第一设备发送第二加密推理信息;其中,第二加密推理信息用于指示第一设备基于第一设备中的第一模型的推理信息以及第二加密推理信息,得到目标信息。
在一些示例性的应用场景中,N个电子设备中的每个电子设备中都可以有其对应的第二模型,且各电子设备的第二模型的参数可以不同。每个电子设备基于自身的第二模型得到推理信息,并进行加密得到第一加密推理信息。如图7所示,N个电子设备将第一加密推理信息发送至第三设备。第三设备汇聚N个第一加密推理信息,得到第二加密推理信息,并将第二加密推理信息发送至第一设备。第一设备通过解密以及结合第一模型的推理信息,得到目标信息。
示例性地,第三设备基于接收到的第一加密推理信息,确定第二加密推理信息,可以包括:第三设备对接收到的第一加密推理信息进行加和,得到第二加密推理信息。
由于第二加密推理信息是多个第一加密推理信息汇聚得到的,因此,即使第一设备具备解密能力,能够对第二加密推理信息进行解密,解密得到的信息也是多个第二模型的推理信息的汇聚结果,无法得到每个第二模型的推理信息。可见,基于上述方式,可以进一步提高数据隐私安全。
可选地,第二设备可以包括第三设备。也就是说,第三设备也可以作为联邦学习的参与方,接收第一密钥,对第三设备中的第二模型的推理信息进行加密,得到对应的第一加密推理信息。在接收到其他设备发送的第一加密推理信息后,可以对自身及其他设备的第一加密推理信息进行汇聚,得到第二加密推理信息。
可选地,第三设备包括第一NWDAF网元。也就是说,可以利用核心网中的NWDAF网元的数据汇聚功能提升联邦学习的数据隐私安全。
本申请提供的联邦学习方法中,还可以包括联邦学习的训练过程。示例性地,第一设备为联邦学习的主动方,则第一设备持有训练数据的标签,可以计算损失函数,因此,联邦学习方法还可以包括:在第一模型和第二模型的联邦学习训练过程中,第一设备基于标签信息确定损失函数。
本申请的一些示例性实施例中,在联邦学习推理过程中的密钥管理方与在联邦学习训练过程中的 密钥管理方不同。根据前述说明,在联邦学习推理过程中,密钥管理方为第一设备。示例性地,在联邦学习训练过程中,密钥管理方可以是除第一设备和第二设备以外的其他设备。
具体地,联邦学习方法还可以包括:
第一设备接收来自第四设备的第二密钥;
第一设备利用第二密钥对第一模型的训练信息进行加密,得到第一加密训练信息;
第一设备发送第一加密训练信息,其中,第一加密训练信息用于使第四设备能够基于与第一加密训练信息对应的第二加密训练信息得到模型更新信息,模型更新信息用于更新第一模型。
从系统角度而言,联邦学习方法可以包括:
第四设备发送第二密钥;
第一设备接收第二密钥,利用第二密钥对第一模型的训练信息进行加密,得到第一加密训练信息,并发送第一加密训练信息;
第四设备基于与第一加密训练信息对应的第二加密训练信息得到模型更新信息,并发送模型更新信息;
第一设备基于模型更新信息更新第一模型。
示例性地,上述步骤可以在步骤S51和S52之前实现,并且,可以迭代重复多次,直至第一模型和/或第二模型符合预设的收敛条件。
示例性地,上述第二密钥可以是与第四设备持有的第二私钥对应的公钥。由于第四设备持有第二私钥,因此,第四设备为密钥管理方,具有解密能力。
可选地,第二设备也接收第二密钥,利用第二密钥对第二模型的训练信息进行加密,得到第三加密训练信息,并发送第三加密训练信息。第四设备结合接收到的第一加密训练信息和第三加密训练信息得到第二加密训练信息,该信息可以用于确定模型更新信息。举例而言,第四设备对第一加密训练信息和第三加密训练信息进行汇聚例如加和,得到第二加密训练信息,然后对第二加密训练信息进行解密,得到模型更新信息。
示例性地,上述训练信息可以是各设备在联邦学习训练过程中基于模型计算得到的各种信息,例如损失函数、梯度等。上述模型更新信息可以包括由各设备的梯度汇聚得到的联邦模型的梯度,也可以包括与各个设备分别对应的模型的梯度、掩码等信息。
举例而言,如图8所示,根据上述方法,第四设备向第一设备和第二设备分别发送第二密钥。这里,第二模型可以包括至少一个电子设备。第一设备和第二设备进行交互,其中,第一设备提供训练数据的标签信息,计算损失函数。第一设备和第二设备所包括的至少一个电子设备分别计算得到第一模型和至少一个第二模型的梯度,作为上述训练信息。第一设备在梯度上加上掩码,并基于第二密钥对梯度和掩码进行加密,得到上述第一加密训练信息,并发送第一加密训练信息。第二设备中的至少一个电子设备分别在各自的梯度上加上掩码,并基于第二密钥对梯度和掩码进行加密,得到各自的第三加密训练信息,并分别发送第三加密训练信息。第四设备汇聚第一加密训练信息和第三加密训练信息后,得到第二加密训练信息,并利用第二私钥解密得到第一模型和各个第二模型的模型更新信息。将第一模型和各个第二模型的模型更新信息分别发送至各参与方,即第一设备和第二设备中的各电子设备。各参与方分别更新自身的模型。
可选地,第四设备包括以下至少之一:
第一终端设备;
第一核心网中的至少一个网元;
第一服务器。
可选地,第二设备还包括第四设备,也就是说,第四设备也可以参与联邦学习的训练和推理。
示例性地,在训练过程中,第四设备接收第一设备基于标签信息计算得到的损失函数,基于损失函数计算自身的第二模型的梯度,将该梯度作为自身的第二模型的模型更新信息,更新自身的第二模型。此外,第四设备接收第一设备的加密训练信息以及第二设备中除第四设备以外的其他电子设备的加密训练信息,利用私钥解密得到相应的模型更新信息,并将各模型更新信息分别发送至对应的设备。
在推理过程中,第四设备接收第一设备发送的第一密钥,并利用第一密钥对第四设备中的第二模型的推理信息进行加密,得到第一加密推理信息,以使第一设备能够结合第四设备中的第二模型的信息进行推理,得到目标信息。
可选地,为了避免第四设备分别解密得到第一设备的第一加密训练信息和第二设备的第三加密训练信息,本申请实施例中,还可以应用不具备解密能力的第五设备实现上述汇聚第一加密训练信息和第三加密训练信息的过程。具体地,
第一设备发送第一加密训练信息,包括:
第一设备向第五设备发送第一加密训练信息;
其中,第一加密训练信息用于指示第五设备基于第一加密训练信息以及来自第二设备的第三加密训练信息得到第二加密训练信息,并将第二加密训练信息发送至第四设备;
其中,第三加密训练信息是利用第二密钥对第二模型的训练信息进行加密得到的;第二加密训练信息用于指示第四设备确定模型更新信息。
从系统角度而言,上述联邦学习算法还包括:
第二设备接收第二密钥,利用第二密钥对第二模型的训练信息进行加密,得到第三加密训练信息,并发送第三加密训练信息;
第五设备接收第一加密训练信息和第三加密训练信息,并基于第一加密训练信息和第三加密训练信息得到第二加密训练信息,发送第二加密训练信息;
第四设备接收第二加密训练信息,基于第二加密训练信息确定模型更新信息。
可选地,第五设备包括第二NWDAF网元。
举例而言,如图9所示,根据上述方法,第四设备向第一设备和第二设备分别发送第二密钥。第一设备和第二设备进行交互,其中,第一设备提供训练数据的标签信息,计算损失函数。第一设备和第二设备分别计算得到第一模型和第二模型的梯度,作为上述训练信息。第一设备在梯度上加上掩码,并基于第二密钥对梯度和掩码进行加密,得到上述第一加密训练信息,并发送第一加密训练信息。第二设备中在梯度上加上掩码,并基于第二密钥对梯度和掩码进行加密,得到第三加密训练信息,并发送第三加密训练信息。第五设备接收第一加密训练信息和第三加密训练信息,汇聚第一加密训练信息和第三加密训练信息,得到第二加密训练信息。并将第二加密训练信息发送至第四设备。第四设备利用第二私钥对第二加密训练信息进行解密,得到模型更新信息。第四设备将模型更新信息发送至各参与方,即第一设备和第二设备。各参与方分别更新自身的模型。
可选地,第二设备还可以包括第五设备。也就是说,第五设备也可以参与联邦学习的训练和推理。
示例性地,在训练过程中,第五设备接收第四设备发送的第二密钥,对自身的训练信息进行加密得到加密训练信息,并将接收到的加密训练信息结合自身的加密训练信息汇聚得到第二加密训练信息,并发送至第四设备,以得到模型更新信息,更新模型。
在推理过程中,第五设备接收第一设备发送的第一密钥,对自身的第二模型的推理信息进行加密得到第一加密推理信息。进一步地,第五设备可以与第三设备相同,也可以与第三设备不同。若第五设备与第三设备相同,则第五设备可以结合第二设备中其他电子设备发送的第一加密推理信息,得到第二加密推理信息,并将第二加密推理信息发送至第一设备,以确定目标信息。若第五设备与第三设备不同,则第五设备可以发送其第一加密推理信息至第三设备,第三设备汇聚各设备的第一加密推理信息,得到第二加密推理信息后发送至第一设备,以确定目标信息。
可选地,上述第一密钥和/或第二密钥,可以在终端设备与网络设备进行分组数据单元(Packet Data Unit,PDU)会话建立或修改的过程中发送到相应节点,也可以在注册请求过程中发送到相应节点。进一步地,还可以在相关流程中发生的鉴权过程或授权过程中发送到相应节点。其中,鉴权过程例如是在PDU会话建立过程中由SMF网元触发的终端设备和应用服务器之间通过核心网进行的二次鉴权过程。将密钥从一个节点发送到其他节点,例如是从核心网的网元发送到终端设备和/或服务器,或者从终端设备发送到核心网的网元和/或服务器。
示例性地,第一设备接收来自第四设备的第二密钥,包括:
第一设备在第一过程中接收来自第四设备的第二密钥;
其中,第一过程包括以下至少之一:第一PDU会话的建立过程;第一PDU会话的修改过程;第一注册请求过程;第一鉴权过程;第一授权过程。
相应的,第四设备在第一过程中发送第二密钥。
可选地,第一设备向第二设备发送第一密钥,包括:
第一设备在第二过程中向第二设备发送第一密钥;
其中,第二过程包括以下至少之一:第二PDU会话的建立过程;第二PDU会话的修改过程;第二注册请求过程;第二鉴权过程;第二授权过程。
相应的,第二设备在第二过程中接收第一密钥。
以上从不同角度描述了本申请实施例的具体设置和实现方式。利用上述至少一个实施例,实现了由推理过程的参与方管理密钥,避免由其他节点解密相关数据,提升数据隐私安全。
在一些示例中,管理密钥的参与方为联邦学习的主动方,基于此,仅有主动方获得联邦学习推理的目标信息,有效防止应用结果被其他参与方得知。
在一些示例中,可以在联邦学习中设置汇聚节点例如第三设备和第五设备。汇聚节点不具备解密能 力,通过汇聚各参与方的加密信息,再发送到密钥管理方,可以避免密钥管理方解密得到各参与方的模型信息,进一步提升数据隐私安全。
在一些示例中,联邦学习的训练过程和推理过程中,密钥管理方不同。在推理过程中,会更换密钥,由主动方发送新的密钥,从而进一步提升数据隐私安全。
下面将提供多个应用示例,以进一步说明本申请实施例的上述技术效果。
应用示例一
在本应用示例中,第一设备包括网络侧(核心网)的至少一个网元,因此,网络侧为主动方。如图10所示,网络侧中的网络数据分析功能网元NWDAF作为汇聚节点,即上述第三设备和第五设备。NWDAF拥有数据样本的标签,负责各节点的数据收集。第二设备包括终端设备UE和OTT服务器,即UE和OTT服务器为被动方,OTT服务器通过接入层AS提供样本所需特征数据。在训练过程中,OTT服务器还用作第四设备,即密钥管理者,生成第二密钥,并将第二密钥发送至UE和网络侧。
如图11所示,联邦学习的训练过程包括:
1.OTT服务器向UE和网络侧的至少一个网元NFs发送第一密钥,用于加密需要传输的数据。其中,第一密钥可以由OTT服务器的密钥管理模块AS-KEY生成并发送。
2.UE、AS和网络侧的至少一个网元NFs分别根据本地数据得到模型计算结果,并加密模型计算结果,发送至NWDAF。
3.NWDAF汇聚各节点数据,根据标签,加密计算损失函数。
4.NWDAF发送损失函数至UE、AS和至少一个网元NFs。
5.UE、AS和至少一个网元NFs计算加密后的梯度并添加掩码发送至NWDAF。
6.NWDAF聚合各节点发送的添加掩码的梯度，将聚合后的加密结果透传至AS。
7.AS根据私钥,解密损失函数和梯度。
8.AS将属于各节点的梯度传递回各节点。
9.各节点根据解密后的梯度,去除掩码,更新本地模型权重。
在推理过程中,由于网络侧是主动方,网络侧发起的针对于5G通信系统的某些需求分析,其结果不希望被第三方服务器得知,因此,将密钥管理方更换为网络侧的至少一个网元,即在推理过程中,网络侧生成新的密钥,并发送给各节点。其中,至少一个网元中的密钥管理模块NF-KEY负责生成和发放密钥以及解密,至少一个网元中的其他网元负责参与计算。如图12所示,NWDAF网元作为汇聚节点(第三设备)负责各节点的推理信息的收集。
如图13所示,联邦学习的推理过程包括:
1.网络侧作为主动方,其密钥管理模块NF-KEY向各节点发送第一密钥,以及需要进行分析的模型配置信息,包括模型ID和输入信息。
2.UE、网络侧的至少一个网元NFs和AS根据本地数据和相应模型进行计算,得出推理过程的计算结果,发送至NWDAF进行聚合。
3.网络侧的密钥管理模块NF-KEY基于NWDAF的聚合结果进行解密,得到最后的分析结果。
经过这种密钥管理方式,可以在保护各节点数据隐私的情况下,实现对UE、网络侧和第三方应用多域数据的联合分析,让网络侧得到更加全面的分析结果。
应用示例二
在本应用示例中,OTT服务器为主动方(第一设备),OTT服务器的AS拥有数据样本的标签,网络侧(核心网)的至少一个网元和终端设备UE为被动方(第二设备),提供与OTT应用相关的特征数据。如图14所示,网络侧的网元NWDAF为汇聚节点(第三设备和第五设备)负责各节点的数据收集。在训练过程中,密钥管理者为网络侧,即网络侧的密钥管理模块NF-KEY(第四设备)生成第二密钥,并将第二密钥发送给UE和AS。
如图15所示,联邦学习的训练过程包括:
1.网络侧的密钥管理模块NF-KEY向UE、网络侧的至少一个网元NFs和AS发送第二密钥,用于加密需要传输的数据。
2.UE和网络侧的至少一个网元NFs分别根据本地数据确定模型计算结果,并加密发送给NWDAF。
3.NWDAF聚合各节点的加密模型计算结果,得到模型汇聚结果并发送给AS。
4.AS根据接收到的模型汇聚结果、自身的模型计算结果和标签,加密计算损失函数。
5.AS发送损失函数至UE和NFs。
6.各节点计算加密后的梯度并添加掩码发送给NWDAF,NWDAF汇聚梯度和掩码,发送至密钥管理模块NF-KEY。
7.密钥管理模块NF-KEY根据私钥,解密损失函数和梯度。
8.密钥管理模块NF-KEY将属于各节点的梯度传递回各节点。
9.各节点根据解密后的梯度,去除掩码,更新本地模型权重。
与应用示例一同理,在推理过程需要更换新的密钥管理方,即在推断过程中,主动方AS中的密钥管理模块AS-KEY生成第一密钥,并发送给各节点,如图16所示,NWDAF汇聚各节点的加密推理结果,并将加密后的推理结果发送至AS-KEY。
如图17所示,推理过程包括:
1.AS作为主动方,向各节点发送第一密钥以及需要进行分析的模型配置信息。
2.UE、网络侧的至少一个网元NFs根据本地数据和相应模型进行计算,得出模型计算结果,发送给NWDAF进行聚合。
3.NWDAF将加密聚合结果发送给AS。
4.AS结合自身计算结果,并根据私钥进行解密,得到最后的分析结果。
应用示例三
在本实施例中,网络侧为主动方(第一设备),其中的NWDAF拥有数据样本的标签,OTT服务器的接入层AS和UE为被动方(第二设备)。如图18所示,NWDAF为汇聚节点(第三设备和第五设备),还负责各节点的数据收集。在训练过程中,密钥管理者(第四设备)为UE,即UE生成第二密钥,并将第二密钥发送给网络侧和AS。
如图19所示,联邦学习的训练过程如下:
1.终端设备的密钥管理模块UE-KEY向网络侧的至少一个网元NFs和AS发送第二密钥,用于加密需要传输的数据。
2.UE、网络侧的至少一个网元NFs和AS分别根据本地数据得到模型计算结果,并加密发送给NWDAF。
3.NWDAF聚合各节点数据,根据标签,加密计算损失函数。
4.NWDAF将损失函数发送至UE、NFs以及AS。
5.UE、NFs以及AS计算加密后的梯度,其中,NFs和AS在梯度上添加掩码发送给NWDAF。
6.NWDAF聚合接收到的梯度和掩码,得到聚合结果,将聚合结果发送给UE。
7.UE根据私钥解密损失函数和梯度。
8.UE将梯度传递回各节点。
9.各节点根据解密后的梯度,去除掩码,更新本地模型权重。
在推理过程中,与实施例一同理,需要更换新的密钥管理方。本应用示例中,推理过程由主动方网络侧生成第一密钥,并发送给各节点。后续步骤与应用示例一类似,在此不做阐述。
需要说明的是,本申请实施例中各设备均可以包括网络系统中的终端设备、核心网网元、服务器等至少一种。实际应用中,对于如何设置第一设备(主动方)、第二设备(参与方)、第四设备(训练过程的密钥管理方)、第三设备和第五设备(汇聚节点)可根据实际需求确定。具体的设备设置方式不限于上述应用示例的方式,实施流程与上述应用示例类似。
与上述至少一个实施例的处理方法相对应地,本申请实施例还提供一种联邦学习系统1000,参考图20,其包括:
第一设备100,用于发送第一密钥;
第二设备200,用于接收第一密钥,利用第一密钥对第二设备200中的第二模型的推理信息进行加密,得到第一加密推理信息;
第一设备100,还用于在接收到与第一加密推理信息对应的第二加密推理信息的情况下,基于第一设备100中的第一模型的推理信息以及第二加密推理信息,得到目标信息。
可选地,第二设备200包括N个电子设备;N个电子设备中的第i个电子设备用于利用第一密钥对第i个电子设备中的第二模型的推理信息进行加密,得到第一加密推理信息,并发送第一加密推理信息;
如图21所示,系统1000还包括:
第三设备300,用于接收第一加密推理信息,并基于第一加密推理信息,确定第二加密推理信息;
其中,N为大于等于2的整数,i为大于等于1且小于等于N的整数。
可选地,如图21所示,系统1000还包括:
第四设备400,用于发送第二密钥;
第一设备100还用于接收第二密钥,利用第二密钥对第一模型的训练信息进行加密,得到第一加密训练信息,并发送第一加密训练信息;
第四设备400还用于基于与第一加密训练信息对应的第二加密训练信息得到模型更新信息,模型更新信息用于更新第一模型。
可选地,如图21所示,其中,
第二设备200还用于接收第二密钥,利用第二密钥对第二模型的训练信息进行加密,得到第三加密训练信息,并发送第三加密训练信息;
系统1000还包括：第五设备500，用于接收第一加密训练信息和第三加密训练信息，并基于第一加密训练信息和第三加密训练信息得到第二加密训练信息，发送第二加密训练信息；
第四设备400还用于接收第二加密训练信息,基于第二加密训练信息确定模型更新信息。
本申请实施例的联邦学习系统1000中的各设备能够实现前述的方法实施例中的对应设备的对应功能。该联邦学习系统1000中的各设备对应的流程、功能、实现方式以及有益效果,可参见上述方法实施例中的对应描述,在此不再赘述。
与上述至少一个实施例的处理方法相对应地,本申请实施例还提供一种第一设备100,参考图22,其包括:
第一通信模块110,用于向第二设备发送第一密钥;其中,第一密钥用于对第二设备中的第二模型的推理信息进行加密,得到第一加密推理信息;
第一处理模块120,用于在接收到与第一加密推理信息对应的第二加密推理信息的情况下,基于第一设备中的第一模型的推理信息以及第二加密推理信息,得到目标信息。
可选地,第二设备包括N个电子设备;
第一密钥用于指示N个电子设备中的第i个电子设备对第i个电子设备中的第二模型的推理信息进行加密,得到第一加密推理信息,并将第一加密推理信息发送至第三设备;
第一加密推理信息用于指示第三设备确定第二加密推理信息;
其中,N为大于等于2的整数,i为大于等于1且小于等于N的整数。
可选地,第三设备包括第一NWDAF网元。
可选地,第一通信模块110还用于:接收来自第四设备的第二密钥;
第一处理模块120还用于:利用第二密钥对第一模型的训练信息进行加密,得到第一加密训练信息;
第一通信模块110还用于:发送第一加密训练信息,其中,第一加密训练信息用于使第四设备能够基于与第一加密训练信息对应的第二加密训练信息得到模型更新信息,模型更新信息用于更新第一模型。
可选地,第一通信模块110具体用于:
向第五设备发送第一加密训练信息;
其中,第一加密训练信息用于指示第五设备基于第一加密训练信息以及来自第二设备的第三加密训练信息得到第二加密训练信息,并将第二加密训练信息发送至第四设备;
其中,第二加密训练信息用于指示第四设备确定模型更新信息。
可选地,第五设备包括第二NWDAF网元。
可选地,第四设备包括以下至少之一:
第一终端设备;
第一核心网中的至少一个网元;
第一服务器。
可选地,第一通信模块110用于:
在第一过程中接收来自第四设备的第二密钥;
其中,第一过程包括以下至少之一:
第一分组数据单元PDU会话的建立过程;
第一PDU会话的修改过程;
第一注册请求过程;
第一鉴权过程;
第一授权过程。
可选地,第一通信模块110用于:
在第二过程中向第二设备发送第一密钥;
其中,第二过程包括以下至少之一:
第二PDU会话的建立过程;
第二PDU会话的修改过程;
第二注册请求过程;
第二鉴权过程;
第二授权过程。
可选地,第一处理模块120还用于:
在第一模型和第二模型的联邦学习训练过程中,基于标签信息确定损失函数。
可选地,第一设备包括以下至少之一:
第二终端设备;
第二核心网中的至少一个网元;
第二服务器。
可选地,第二设备包括以下至少之一:
第三终端设备;
第三核心网中的至少一个网元;
第三服务器。
本申请实施例的第一设备100能够实现前述的方法实施例中的第一设备的对应功能,该第一设备100中的各个模块(子模块、单元或组件等)对应的流程、功能、实现方式以及有益效果,可参见上述方法实施例中的对应描述,此处不进行赘述。需要说明,关于本申请实施例的第一设备100中的各个模块(子模块、单元或组件等)所描述的功能,可以由不同的模块(子模块、单元或组件等)实现,也可以由同一个模块(子模块、单元或组件等)实现,举例来说,第一发送模块与第二发送模块可以是不同的模块,也可以是同一个模块,均能够实现其在本申请实施例中的相应功能。此外,本申请实施例中的通信模块,可通过设备的收发机实现,其余各模块中的部分或全部可通过设备的处理器实现。
图23是根据本申请一实施例的第三设备300的示意性框图。该第三设备300可以包括:
第二通信模块310,用于接收来自N个电子设备中的第i个电子设备的第一加密推理信息;其中,第一加密推理信息是第i个电子设备基于第一设备发送的第一密钥对第i个电子设备中的第二模型的推理信息进行加密得到的;N为大于等于2的整数,i为大于等于1且小于等于N的整数;
第二处理模块320,用于基于第一加密推理信息确定与第一加密推理信息对应的第二加密推理信息;
第二通信模块310还用于:向第一设备发送第二加密推理信息;其中,第二加密推理信息用于指示第一设备基于第一设备中的第一模型的推理信息以及第二加密推理信息,得到目标信息。
本申请实施例的第三设备300能够实现前述的方法实施例中的网络设备的对应功能。该第三设备300中的各个模块(子模块、单元或组件等)对应的流程、功能、实现方式以及有益效果,可参见上述方法实施例中的对应描述,在此不再赘述。需要说明,关于申请实施例的第三设备300中的各个模块(子模块、单元或组件等)所描述的功能,可以由不同的模块(子模块、单元或组件等)实现,也可以由同一个模块(子模块、单元或组件等)实现,举例来说,第一发送模块与第二发送模块可以是不同的模块,也可以是同一个模块,均能够实现其在本申请实施例中的相应功能。此外,本申请实施例中的通信模块,可通过设备的收发机实现,其余各模块中的部分或全部可通过设备的处理器实现。
图24是根据本申请实施例的通信设备600示意性结构图,其中通信设备600包括处理器610,处理器610可以从存储器中调用并运行计算机程序,以实现本申请实施例中的方法。
可选地,通信设备600还可以包括存储器620。其中,处理器610可以从存储器620中调用并运行计算机程序,以实现本申请实施例中的方法。
其中,存储器620可以是独立于处理器610的一个单独的器件,也可以集成在处理器610中。
可选地,通信设备600还可以包括收发器630,处理器610可以控制该收发器630与其他设备进行通信,具体地,可以向其他设备发送信息或数据,或接收其他设备发送的信息或数据。
其中,收发器630可以包括发射机和接收机。收发器630还可以进一步包括天线,天线的数量可以为一个或多个。
可选地,该通信设备600可为本申请实施例的第一设备,并且该通信设备600可以实现本申请实施例的各个方法中由第一设备实现的相应流程,为了简洁,在此不再赘述。
可选地,该通信设备600可为本申请实施例的第三设备,并且该通信设备600可以实现本申请实施例的各个方法中由第三设备实现的相应流程,为了简洁,在此不再赘述。
图25是根据本申请实施例的芯片700的示意性结构图,其中芯片700包括处理器710,处理器710可以从存储器中调用并运行计算机程序,以实现本申请实施例中的方法。
可选地,芯片700还可以包括存储器720。其中,处理器710可以从存储器720中调用并运行计算机程序,以实现本申请实施例中的方法。
其中,存储器720可以是独立于处理器710的一个单独的器件,也可以集成在处理器710中。
可选地,该芯片700还可以包括输入接口730。其中,处理器710可以控制该输入接口730与其他设备或芯片进行通信,具体地,可以获取其他设备或芯片发送的信息或数据。
可选地,该芯片700还可以包括输出接口740。其中,处理器710可以控制该输出接口740与其他设备或芯片进行通信,具体地,可以向其他设备或芯片输出信息或数据。
可选地,该芯片可应用于本申请实施例中的第一设备,并且该芯片可以实现本申请实施例的各个方法中由第一设备实现的相应流程,为了简洁,在此不再赘述。
可选地,该芯片可应用于本申请实施例中的第三设备,并且该芯片可以实现本申请实施例的各个方法中由第三设备实现的相应流程,为了简洁,在此不再赘述。
应理解,本申请实施例提到的芯片还可以称为系统级芯片,系统芯片,芯片系统或片上系统芯片等。
上述提及的处理器可以是通用处理器、数字信号处理器(digital signal processor,DSP)、现成可编程门阵列(field programmable gate array,FPGA)、专用集成电路(application specific integrated circuit,ASIC)或者其他可编程逻辑器件、晶体管逻辑器件、分立硬件组件等。其中,上述提到的通用处理器可以是微处理器或者也可以是任何常规的处理器等。
上述提及的存储器可以是易失性存储器或非易失性存储器,或可包括易失性和非易失性存储器两者。其中,非易失性存储器可以是只读存储器(read-only memory,ROM)、可编程只读存储器(programmable ROM,PROM)、可擦除可编程只读存储器(erasable PROM,EPROM)、电可擦除可编程只读存储器(electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(random access memory,RAM)。
应理解,上述存储器为示例性但不是限制性说明,例如,本申请实施例中的存储器还可以是静态随机存取存储器(static RAM,SRAM)、动态随机存取存储器(dynamic RAM,DRAM)、同步动态随机存取存储器(synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(double data rate SDRAM,DDR SDRAM)、增强型同步动态随机存取存储器(enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(synch link DRAM,SLDRAM)以及直接内存总线随机存取存储器(Direct Rambus RAM,DR RAM)等等。也就是说,本申请实施例中的存储器旨在包括但不限于这些和任意其它适合类型的存储器。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。该计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行该计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。该计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。该计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,该计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(Digital Subscriber Line,DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。该计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。该可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘Solid State Disk(SSD))等。
应理解,在本申请的各种实施例中,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。
所属技术领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
以上所述仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以该权利要求的保护范围为准。

Claims (38)

  1. 一种联邦学习方法,包括:
    第一设备向第二设备发送第一密钥;其中,所述第一密钥用于对所述第二设备中的第二模型的推理信息进行加密,得到第一加密推理信息;
    所述第一设备在接收到与所述第一加密推理信息对应的第二加密推理信息的情况下,基于所述第一设备中的第一模型的推理信息以及所述第二加密推理信息,得到目标信息。
  2. 根据权利要求1所述的方法,其中,所述第二设备包括N个电子设备;
    所述第一密钥用于指示所述N个电子设备中的第i个电子设备对所述第i个电子设备中的第二模型的推理信息进行加密,得到所述第一加密推理信息,并将所述第一加密推理信息发送至第三设备;
    所述第一加密推理信息用于指示所述第三设备确定所述第二加密推理信息;
    其中,N为大于等于2的整数,i为大于等于1且小于等于N的整数。
  3. 根据权利要求2所述的方法,其中,所述第三设备包括第一网络数据分析功能NWDAF网元。
  4. 根据权利要求1-3中任一项所述的方法,其中,所述方法还包括:
    所述第一设备接收来自第四设备的第二密钥;
    所述第一设备利用所述第二密钥对所述第一模型的训练信息进行加密,得到第一加密训练信息;
    所述第一设备发送所述第一加密训练信息,其中,所述第一加密训练信息用于使所述第四设备能够基于与所述第一加密训练信息对应的第二加密训练信息得到模型更新信息,所述模型更新信息用于更新所述第一模型。
  5. 根据权利要求4所述的方法,其中,所述第一设备发送所述第一加密训练信息,包括:
    所述第一设备向第五设备发送所述第一加密训练信息;
    其中,所述第一加密训练信息用于指示所述第五设备基于所述第一加密训练信息以及来自所述第二设备的第三加密训练信息得到所述第二加密训练信息,并将所述第二加密训练信息发送至所述第四设备;
    其中,所述第二加密训练信息用于指示所述第四设备确定所述模型更新信息。
  6. 根据权利要求5所述的方法,其中,所述第五设备包括第二NWDAF网元。
  7. 根据权利要求4-6中任一项所述的方法,其中,所述第四设备包括以下至少之一:
    第一终端设备;
    第一核心网中的至少一个网元;
    第一服务器。
  8. 根据权利要求4-7中任一项所述的方法,其中,所述第一设备接收来自第四设备的第二密钥,包括:
    所述第一设备在第一过程中接收来自第四设备的第二密钥;
    其中,所述第一过程包括以下至少之一:
    第一分组数据单元PDU会话的建立过程;
    第一PDU会话的修改过程;
    第一注册请求过程;
    第一鉴权过程;
    第一授权过程。
  9. 根据权利要求1-8中任一项所述的方法,其中,所述第一设备向第二设备发送第一密钥,包括:
    所述第一设备在第二过程中向所述第二设备发送所述第一密钥;
    其中,所述第二过程包括以下至少之一:
    第二PDU会话的建立过程;
    第二PDU会话的修改过程;
    第二注册请求过程;
    第二鉴权过程;
    第二授权过程。
  10. 根据权利要求1-9中任一项所述的方法,其中,所述方法还包括:
    在所述第一模型和所述第二模型的联邦学习训练过程中,所述第一设备基于标签信息确定损失函数。
  11. 根据权利要求1-10中任一项所述的方法,其中,所述第一设备包括以下至少之一:
    第二终端设备;
    第二核心网中的至少一个网元;
    第二服务器。
  12. 根据权利要求1-11中任一项所述的方法,其中,所述第二设备包括以下至少之一:
    第三终端设备;
    第三核心网中的至少一个网元;
    第三服务器。
  13. 一种联邦学习方法,包括:
    第三设备接收来自N个电子设备中的第i个电子设备的第一加密推理信息;其中,所述第一加密推理信息是所述第i个电子设备基于第一设备发送的第一密钥对所述第i个电子设备中的第二模型的推理信息进行加密得到的;N为大于等于2的整数,i为大于等于1且小于等于N的整数;
    所述第三设备基于所述第一加密推理信息确定与所述第一加密推理信息对应的第二加密推理信息,并向所述第一设备发送所述第二加密推理信息;其中,所述第二加密推理信息用于指示所述第一设备基于所述第一设备中的第一模型的推理信息以及所述第二加密推理信息,得到目标信息。
  14. 根据权利要求13所述的方法,其中,所述第三设备包括第一NWDAF网元。
  15. 一种联邦学习系统,包括:
    第一设备,用于发送第一密钥;
    第二设备,用于接收所述第一密钥,利用所述第一密钥对所述第二设备中的第二模型的推理信息进行加密,得到第一加密推理信息;
    所述第一设备,还用于在接收到与所述第一加密推理信息对应的第二加密推理信息的情况下,基于所述第一设备中的第一模型的推理信息以及所述第二加密推理信息,得到目标信息。
  16. 根据权利要求15所述的系统,其中,所述第二设备包括N个电子设备;所述N个电子设备中的第i个电子设备用于利用所述第一密钥对所述第i个电子设备中的第二模型的推理信息进行加密,得到所述第一加密推理信息,并发送所述第一加密推理信息;
    所述系统还包括:
    第三设备,用于接收所述第一加密推理信息,并基于所述第一加密推理信息,确定所述第二加密推理信息;
    其中,N为大于等于2的整数,i为大于等于1且小于等于N的整数。
  17. 根据权利要求15或16所述的系统,其中,所述系统还包括:
    第四设备,用于发送第二密钥;
    所述第一设备还用于接收所述第二密钥,利用所述第二密钥对所述第一模型的训练信息进行加密,得到第一加密训练信息,并发送所述第一加密训练信息;
    第四设备还用于基于与所述第一加密训练信息对应的第二加密训练信息得到模型更新信息,所述模型更新信息用于更新所述第一模型。
  18. 根据权利要求17所述的系统,其中,
    所述第二设备还用于接收所述第二密钥,利用所述第二密钥对所述第二模型的训练信息进行加密,得到第三加密训练信息,并发送所述第三加密训练信息;
    所述系统还包括：第五设备，用于接收所述第一加密训练信息和所述第三加密训练信息，并基于所述第一加密训练信息和所述第三加密训练信息得到所述第二加密训练信息，发送所述第二加密训练信息；
    所述第四设备还用于接收所述第二加密训练信息,基于所述第二加密训练信息确定所述模型更新信息。
  19. 一种第一设备,包括:
    第一通信模块,用于向第二设备发送第一密钥;其中,所述第一密钥用于对所述第二设备中的第二模型的推理信息进行加密,得到第一加密推理信息;
    第一处理模块,用于在接收到与所述第一加密推理信息对应的第二加密推理信息的情况下,基于所述第一设备中的第一模型的推理信息以及所述第二加密推理信息,得到目标信息。
  20. 根据权利要求19所述的第一设备,其中,所述第二设备包括N个电子设备;
    所述第一密钥用于指示所述N个电子设备中的第i个电子设备对所述第i个电子设备中的第二模型的推理信息进行加密,得到所述第一加密推理信息,并将所述第一加密推理信息发送至第三设备;
    所述第一加密推理信息用于指示所述第三设备确定所述第二加密推理信息;
    其中,N为大于等于2的整数,i为大于等于1且小于等于N的整数。
  21. 根据权利要求20所述的第一设备,其中,所述第三设备包括第一NWDAF网元。
  22. 根据权利要求19-21中任一项所述的第一设备,其中,所述第一通信模块还用于:接收来自第四设备的第二密钥;
    所述第一处理模块还用于:利用所述第二密钥对所述第一模型的训练信息进行加密,得到第一加密训练信息;
    所述第一通信模块还用于:发送所述第一加密训练信息,其中,所述第一加密训练信息用于使所述第四设备能够基于与所述第一加密训练信息对应的第二加密训练信息得到模型更新信息,所述模型更新信息用于更新所述第一模型。
  23. 根据权利要求22所述的第一设备,其中,所述第一通信模块具体用于:
    向第五设备发送所述第一加密训练信息;
    其中,所述第一加密训练信息用于指示所述第五设备基于所述第一加密训练信息以及来自所述第二设备的第三加密训练信息得到所述第二加密训练信息,并将所述第二加密训练信息发送至所述第四设备;
    其中,所述第二加密训练信息用于指示所述第四设备确定所述模型更新信息。
  24. 根据权利要求23所述的第一设备,其中,所述第五设备包括第二NWDAF网元。
  25. 根据权利要求22-24中任一项所述的第一设备,其中,所述第四设备包括以下至少之一:
    第一终端设备;
    第一核心网中的至少一个网元;
    第一服务器。
  26. 根据权利要求22-25中任一项所述的第一设备,其中,所述第一通信模块用于:
    在第一过程中接收来自第四设备的第二密钥;
    其中,所述第一过程包括以下至少之一:
    第一分组数据单元PDU会话的建立过程;
    第一PDU会话的修改过程;
    第一注册请求过程;
    第一鉴权过程;
    第一授权过程。
  27. 根据权利要求19-26中任一项所述的第一设备,其中,所述第一通信模块用于:
    在第二过程中向所述第二设备发送所述第一密钥;
    其中,所述第二过程包括以下至少之一:
    第二PDU会话的建立过程;
    第二PDU会话的修改过程;
    第二注册请求过程;
    第二鉴权过程;
    第二授权过程。
  28. 根据权利要求19-27中任一项所述的第一设备,其中,所述第一处理模块还用于:
    在所述第一模型和所述第二模型的联邦学习训练过程中,基于标签信息确定损失函数。
  29. 根据权利要求19-28中任一项所述的第一设备,其中,所述第一设备包括以下至少之一:
    第二终端设备;
    第二核心网中的至少一个网元;
    第二服务器。
  30. 根据权利要求19-29中任一项所述的第一设备,其中,所述第二设备包括以下至少之一:
    第三终端设备;
    第三核心网中的至少一个网元;
    第三服务器。
  31. 一种第三设备,包括:
    第二通信模块,用于接收来自N个电子设备中的第i个电子设备的第一加密推理信息;其中,所述第一加密推理信息是所述第i个电子设备基于第一设备发送的第一密钥对所述第i个电子设备中的第二模型的推理信息进行加密得到的;N为大于等于2的整数,i为大于等于1且小于等于N的整数;
    第二处理模块,用于基于所述第一加密推理信息确定与所述第一加密推理信息对应的第二加密推 理信息;
    所述第二通信模块还用于:向所述第一设备发送所述第二加密推理信息;其中,所述第二加密推理信息用于指示所述第一设备基于所述第一设备中的第一模型的推理信息以及所述第二加密推理信息,得到目标信息。
  32. 根据权利要求31所述的第三设备,其中,所述第三设备包括第一NWDAF网元。
  33. 一种第一设备,包括:处理器和存储器,所述存储器用于存储计算机程序,所述处理器调用并运行所述存储器中存储的计算机程序,执行如权利要求1至12中任一项所述的方法的步骤。
  34. 一种第三设备,包括:处理器和存储器,所述存储器用于存储计算机程序,所述处理器调用并运行所述存储器中存储的计算机程序,执行如权利要求13或14所述的方法的步骤。
  35. 一种芯片,包括:
    处理器,用于从存储器中调用并运行计算机程序,使得安装有所述芯片的设备执行如权利要求1至14中任一项所述的方法的步骤。
  36. 一种计算机可读存储介质,用于存储计算机程序,其中,
    所述计算机程序使得计算机执行如权利要求1至14中任一项所述的方法的步骤。
  37. 一种计算机程序产品,包括计算机程序指令,其中,
    所述计算机程序指令使得计算机执行如权利要求1至14中任一项所述的方法的步骤。
  38. 一种计算机程序,所述计算机程序使得计算机执行如权利要求1至14中任一项所述的方法的步骤。
PCT/CN2021/089428 2021-04-23 2021-04-23 联邦学习方法、联邦学习系统、第一设备和第三设备 WO2022222152A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202180097144.1A CN117157651A (zh) 2021-04-23 2021-04-23 联邦学习方法、联邦学习系统、第一设备和第三设备
PCT/CN2021/089428 WO2022222152A1 (zh) 2021-04-23 2021-04-23 联邦学习方法、联邦学习系统、第一设备和第三设备
EP21937380.0A EP4328815A4 (en) 2021-04-23 2021-04-23 FEDERATE LEARNING PROCESS, FEDERATE LEARNING SYSTEM, FIRST DEVICE AND THIRD DEVICE

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/089428 WO2022222152A1 (zh) 2021-04-23 2021-04-23 联邦学习方法、联邦学习系统、第一设备和第三设备

Publications (1)

Publication Number Publication Date
WO2022222152A1 true WO2022222152A1 (zh) 2022-10-27

Family

ID=83723410

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/089428 WO2022222152A1 (zh) 2021-04-23 2021-04-23 联邦学习方法、联邦学习系统、第一设备和第三设备

Country Status (3)

Country Link
EP (1) EP4328815A4 (zh)
CN (1) CN117157651A (zh)
WO (1) WO2022222152A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023125879A1 (zh) * 2021-12-30 2023-07-06 维沃移动通信有限公司 数据处理的方法、装置及通信设备
CN116502732A (zh) * 2023-06-29 2023-07-28 杭州金智塔科技有限公司 基于可信执行环境的联邦学习方法以及系统

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111212110A (zh) * 2019-12-13 2020-05-29 清华大学深圳国际研究生院 一种基于区块链的联邦学习系统及方法
CN111600707A (zh) * 2020-05-15 2020-08-28 华南师范大学 一种在隐私保护下的去中心化联邦机器学习方法
US20200285980A1 (en) * 2019-03-08 2020-09-10 NEC Laboratories Europe GmbH System for secure federated learning
CN111985000A (zh) * 2020-08-21 2020-11-24 深圳前海微众银行股份有限公司 模型服务输出方法、装置、设备及存储介质
CN112132293A (zh) * 2020-09-30 2020-12-25 腾讯科技(深圳)有限公司 纵向联邦学习中的计算方法、装置、设备及介质
CN112162959A (zh) * 2020-10-15 2021-01-01 深圳技术大学 一种医疗数据共享方法及装置

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112182595B (zh) * 2019-07-03 2024-03-26 北京百度网讯科技有限公司 基于联邦学习的模型训练方法及装置
WO2021032495A1 (en) * 2019-08-16 2021-02-25 Telefonaktiebolaget Lm Ericsson (Publ) Methods, apparatus and machine-readable media relating to machine-learning in a communication network

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200285980A1 (en) * 2019-03-08 2020-09-10 NEC Laboratories Europe GmbH System for secure federated learning
CN111212110A (zh) * 2019-12-13 2020-05-29 清华大学深圳国际研究生院 一种基于区块链的联邦学习系统及方法
CN111600707A (zh) * 2020-05-15 2020-08-28 华南师范大学 一种在隐私保护下的去中心化联邦机器学习方法
CN111985000A (zh) * 2020-08-21 2020-11-24 深圳前海微众银行股份有限公司 模型服务输出方法、装置、设备及存储介质
CN112132293A (zh) * 2020-09-30 2020-12-25 腾讯科技(深圳)有限公司 纵向联邦学习中的计算方法、装置、设备及介质
CN112162959A (zh) * 2020-10-15 2021-01-01 深圳技术大学 一种医疗数据共享方法及装置

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023125879A1 (zh) * 2021-12-30 2023-07-06 维沃移动通信有限公司 数据处理的方法、装置及通信设备
CN116502732A (zh) * 2023-06-29 2023-07-28 杭州金智塔科技有限公司 基于可信执行环境的联邦学习方法以及系统
CN116502732B (zh) * 2023-06-29 2023-10-20 杭州金智塔科技有限公司 基于可信执行环境的联邦学习方法以及系统

Also Published As

Publication number Publication date
EP4328815A1 (en) 2024-02-28
EP4328815A4 (en) 2024-06-05
CN117157651A (zh) 2023-12-01

Similar Documents

Publication Publication Date Title
US20210289351A1 (en) Methods and systems for privacy protection of 5g slice identifier
US11570617B2 (en) Communication method and communications apparatus
US20200228977A1 (en) Parameter Protection Method And Device, And System
JP7127689B2 (ja) コアネットワーク装置、通信端末、及び通信方法
WO2019096075A1 (zh) 一种消息保护的方法及装置
WO2017133021A1 (zh) 一种安全处理方法及相关设备
WO2022222152A1 (zh) 联邦学习方法、联邦学习系统、第一设备和第三设备
US20220174761A1 (en) Communications method and apparatus
US20220210859A1 (en) Data transmission method and apparatus
WO2022016434A1 (zh) 设备注销的方法、设备注册的方法、通信设备和云平台
CN115362692A (zh) 一种通信方法、装置及系统
WO2022027476A1 (zh) 密钥管理方法及通信装置
WO2022160314A1 (zh) 一种安全参数的获取方法、装置及系统
WO2022095047A1 (zh) 无线通信的方法、终端设备和网络设备
CN114584969B (zh) 基于关联加密的信息处理方法及装置
WO2022222745A1 (zh) 一种通信方法及装置
US20220225463A1 (en) Communications method, apparatus, and system
WO2021073382A1 (zh) 注册方法及装置
US20230308864A1 (en) Wireless communication method, apparatus, and system
WO2024060149A1 (zh) 密钥验证方法、密钥获取方法及设备
WO2023279304A1 (zh) 应用于通信系统的方法及通信装置
US20230413059A1 (en) Method and system for designing security protocol for 6g network architecture
WO2024050846A1 (zh) 近邻通信方法和装置
WO2022082667A1 (zh) 一种数据安全传输的方法及装置
CN114208240B (zh) 数据传输方法、装置及系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21937380

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2021937380

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2021937380

Country of ref document: EP

Effective date: 20231122