CN117376905A - Data processing method, device, electronic equipment and storage medium


Info

Publication number
CN117376905A
Authority
CN
China
Prior art keywords
global
nwdaf
local
learning model
encrypted
Legal status
Pending
Application number
CN202311167438.4A
Other languages
Chinese (zh)
Inventor
任梦璇
薛淼
王泽林
任杰
林琳
Current Assignee
China United Network Communications Group Co Ltd
Original Assignee
China United Network Communications Group Co Ltd
Application filed by China United Network Communications Group Co Ltd
Priority to CN202311167438.4A
Publication of CN117376905A


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 12/00 Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W 12/03 Protecting confidentiality, e.g. by encryption
    • H04W 12/033 Protecting confidentiality, e.g. by encryption of the user plane, e.g. user's traffic
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/098 Distributed learning, e.g. federated learning
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 12/00 Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W 12/009 Security arrangements; Authentication; Protecting privacy or anonymity specially adapted for networks, e.g. wireless sensor networks, ad-hoc networks, RFID networks or cloud networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 12/00 Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W 12/12 Detection or prevention of fraud
    • H04W 12/121 Wireless intrusion detection systems [WIDS]; Wireless intrusion prevention systems [WIPS]
    • H04W 12/122 Counter-measures against attacks; Protection against rogue devices

Abstract

The application provides a data processing method, a data processing device, an electronic device and a storage medium, relates to the field of communications technologies, and is used for solving the problem in the prior art that service data is prone to leakage during transmission. The method comprises the following steps: the global NWDAF receives the encrypted local learning model sent by each local NWDAF, the local learning model being obtained by the local NWDAF through training based on local data and an initial model indicated by the global NWDAF; the global NWDAF determines a global learning model based on each encrypted local learning model; and the global NWDAF encrypts the global learning model to obtain an encrypted global learning model, and sends the encrypted global learning model to each local NWDAF.

Description

Data processing method, device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of communications technologies, and in particular, to a data processing method, a data processing device, an electronic device, and a storage medium.
Background
With the development of 5G networks, different operator networks hold different user and network data. For the analysis of some specific services, more comprehensive service data needs to be obtained and computed cooperatively in order to obtain more accurate service analysis results.
In the prior art, network data of different operators are comprehensively analyzed through a federal learning technology, so that more accurate service analysis results can be obtained. In this process, however, traffic data of different operators is at risk of leakage during transmission. Therefore, there is a need to design a scheme to ensure the data security of the target communication.
Disclosure of Invention
The application provides a data processing method, a data processing device, an electronic device and a storage medium, which are used for solving the problem in the prior art that service data is prone to leakage during transmission.
In order to achieve the above purpose, the present application adopts the following technical scheme:
In a first aspect, a data processing method is provided, applied to a target communication system, where a global network data analysis function NWDAF and at least one local NWDAF are deployed; the global NWDAF is connected with each local NWDAF; the network corresponding to each local NWDAF is different. The method comprises the following steps: the global NWDAF receives the encrypted local learning model sent by each local NWDAF, the local learning model being obtained by the local NWDAF through training based on local data and an initial model indicated by the global NWDAF; the global NWDAF determines a global learning model based on each encrypted local learning model; and the global NWDAF encrypts the global learning model to obtain an encrypted global learning model, and sends the encrypted global learning model to each local NWDAF.
Optionally, the global NWDAF includes a global trusted execution environment TEE node and a global federal learning node; the global NWDAF receives the encrypted local learning model sent by each local NWDAF, and the method comprises the following steps: the global NWDAF receives each encrypted local learning model through a global federal learning node; the global NWDAF determines a global learning model based on each encrypted local learning model, comprising: the global NWDAF sends each encrypted local learning model to a global TEE node through a global federal learning node; and the global NWDAF processes each encrypted local learning model through the global TEE node to obtain a global learning model.
Optionally, the global NWDAF processes each encrypted local learning model through a global TEE node to obtain a global learning model, including: the global NWDAF queries the keys of the local NWDAFs through the global TEE node; the global NWDAF decrypts the encrypted local learning model sent by the first local NWDAF through the global TEE node based on the key of the first local NWDAF to obtain a decryption model; the first local NWDAF is any one of the at least one local NWDAF; and the global NWDAF performs aggregation calculation on each decryption model through the global TEE to obtain a global learning model.
In a second aspect, a data processing method is provided and applied to a target communication system, where the target communication system includes a global network data analysis function NWDAF and a target local NWDAF; the target local NWDAF is located in a first network. The method comprises the following steps: the target local NWDAF receives an encrypted global learning model sent by the global NWDAF, the encrypted global learning model being obtained by the global NWDAF through federal learning based on each local learning model; and the target local NWDAF acquires a key of the global NWDAF, and decrypts the encrypted global learning model according to the key to obtain a decrypted global learning model.
Optionally, the method further comprises: the target local NWDAF performs identity verification on the application function AF network element in the first network, and sends the decrypted global learning model to the AF network element if the verification passes.
In a third aspect, a data processing apparatus is provided, applied to a global network data analysis function NWDAF in a target communication system, the target communication system further being deployed with at least one local NWDAF; the global NWDAF is connected with each local NWDAF; the network corresponding to each local NWDAF is different. The apparatus comprises a receiving unit, a processing unit and a sending unit: the receiving unit is configured to receive the encrypted local learning model sent by each local NWDAF, the local learning model being obtained by the local NWDAF through training based on local data and an initial model indicated by the global NWDAF; the processing unit is configured to determine a global learning model based on each encrypted local learning model, and to encrypt the global learning model to obtain an encrypted global learning model; and the sending unit is configured to send the encrypted global learning model to each local NWDAF.
Optionally, the global NWDAF includes a global trusted execution environment TEE node and a global federal learning node; the receiving unit is specifically configured to: receiving each encrypted local learning model through a global federal learning node; the processing unit is specifically used for: sending each encrypted local learning model to a global TEE node through a global federal learning node; and processing each encrypted local learning model through the global TEE node to obtain a global learning model.
Optionally, the processing unit is specifically configured to: inquiring the key of each local NWDAF through the global TEE node; decrypting, by the global TEE node, the encrypted local learning model sent by the first local NWDAF based on the key of the first local NWDAF to obtain a decryption model; the first local NWDAF is any one of the at least one local NWDAF; and carrying out aggregation calculation on each decryption model through the global TEE to obtain a global learning model.
In a fourth aspect, there is provided an electronic device comprising: a processor, a memory for storing instructions executable by the processor; wherein the processor is configured to execute instructions to implement the data processing method of the first aspect or the data processing method of the second aspect described above.
In a fifth aspect, there is provided a computer readable storage medium having instructions stored thereon, which when executed by a processor of an electronic device, enable the electronic device to perform the data processing method of the first aspect or the data processing method of the second aspect.
The technical scheme provided by the embodiments of the present application brings at least the following beneficial effects. The global NWDAF receives the local learning models sent by the local NWDAFs located in different networks. Because the local learning models are all in an encrypted state, the risk of exposing confidential data during the transmission of the local learning models can be reduced. Each local learning model is obtained by the local NWDAF through training based on local data and an initial model indicated by the global NWDAF, which is equivalent to modeling the local data within the local NWDAF and can provide rich model learning results for the global NWDAF. The global NWDAF determines a global learning model based on each encrypted local learning model, further encrypts the global learning model to obtain an encrypted global learning model, and sends the encrypted global learning model to each local NWDAF, so that the global learning model is protected from malicious tampering during transmission. Compared with the related art, in which network data of different operators is analyzed directly, the present application performs encryption before models are sent and received, in combination with the federal learning technology, so that the NWDAF modules are protected from malicious attacks during processing and transmission that would cause model data to be lost or leaked, and the data security of the target communication is guaranteed.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a centralized architecture according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a decentralized architecture according to an embodiment of the present application;
FIG. 3 is a first schematic diagram of a target communication system according to an embodiment of the present application;
FIG. 4 is a second schematic diagram of a target communication system according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a 5G network architecture according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of an NWDAF based on the 5G network architecture according to an embodiment of the present application;
FIG. 7 is a schematic diagram of the deployment structure of logic modules in an NWDAF according to an embodiment of the present application;
FIG. 8 is a flow chart of a data processing method according to an embodiment of the present application;
FIG. 9 is a federal learning schematic diagram of a global NWDAF according to an embodiment of the present application;
FIG. 10 is a federal learning schematic diagram of a local NWDAF according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a data transmission process between a local NWDAF and a global NWDAF according to an embodiment of the present application;
FIG. 12 is a first schematic diagram of a local federal learning process according to an embodiment of the present application;
FIG. 13 is a second schematic diagram of a local federal learning process according to an embodiment of the present application;
FIG. 14 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application;
FIG. 15 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
It should be noted that, in the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
It should be noted that, in the embodiments of the present application, the words "of", "relevant" and "corresponding" may sometimes be used interchangeably; when the distinction between them is not emphasized, the meanings they express are consistent.
In order to clearly describe the technical solutions of the embodiments of the present application, in the embodiments of the present application, the terms "first", "second", and the like are used to distinguish the same item or similar items having substantially the same function and effect, and those skilled in the art will understand that the terms "first", "second", and the like are not limited in number and execution order.
Before explaining the embodiments of the present application in detail, some related technical terms and related technologies related to the embodiments of the present application are described.
Federal learning (Federated Learning) is a basic artificial intelligence technology aimed at enabling discrete parties to collaborate in training a machine learning model without revealing their private data to the other parties.
Federal learning may employ a centralized architecture or a decentralized architecture. Fig. 1 shows a centralized architecture. The centralized architecture comprises a central node and a plurality of participating nodes, and a round of federal learning under the centralized architecture is divided into the following steps: the central node establishes a basic model and issues the basic structure and parameters of the model to each participating node; each participating node performs model training using its local data and returns the result to the central node; the central node gathers the models of all the participating nodes and builds a more accurate global model (a minimal sketch of one such round follows this paragraph). Fig. 2 shows a decentralized architecture. The decentralized architecture comprises only a plurality of peer nodes without a central coordinator, and the nodes exchange data and models with each other.
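For illustration, the following is a minimal sketch of one round of centralized federated averaging as just described. It is not taken from the patent; the function names, the toy quadratic loss, and the plain averaging rule are all assumptions used only to make the three steps concrete.

```python
from typing import List
import numpy as np

def train_locally(global_weights: np.ndarray, local_data: np.ndarray,
                  lr: float = 0.01) -> np.ndarray:
    """A participating node refines the issued model on its own data.
    One gradient step on a toy quadratic loss stands in for real training."""
    grad = 2 * (global_weights - local_data.mean(axis=0))
    return global_weights - lr * grad

def aggregate(local_models: List[np.ndarray]) -> np.ndarray:
    """The central node gathers the returned models into a global model."""
    return np.mean(local_models, axis=0)

# One federated round: issue the model, train locally, aggregate centrally.
central_model = np.zeros(4)
node_datasets = [np.random.rand(10, 4) for _ in range(3)]
returned = [train_locally(central_model, d) for d in node_datasets]
central_model = aggregate(returned)
```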
Different users and network data exist in different networks. Under certain service requirements, analyzing the network data of a single network cannot capture global information, and data from multiple networks are needed to participate in collaborative computation. By utilizing the federal learning technology, different networks can participate in the computation together to obtain a global learning model. However, in this process, the exposure of data information in each network easily leads to leakage of confidential data.
In view of the foregoing, the present application provides a data processing method built around a data encryption technique. In the federal learning process of the centralized architecture, a secure isolation area is provided in the coordinating node and in the participating nodes, and data processing and modeling are performed within the secure isolation area, so that each node is protected from attack while the learning model is built. Before the learning model is transmitted, the secure isolation area encrypts the learning model, which prevents the learning model from being tampered with during transmission, protects confidential data, and yields a more accurate global model.
The following describes in detail a data processing method provided in an embodiment of the present application with reference to the accompanying drawings.
The data processing method provided by the embodiments of the present application is applicable to a target communication system. As shown in fig. 3, the target communication system is deployed with a global network data analytics function (Network Data Analytics Function, NWDAF) and at least one local NWDAF. The global NWDAF is connected to each local NWDAF, and the local NWDAFs are located in different networks.
Illustratively, as shown in fig. 4, the local NWDAF1 is located in a first network, the local NWDAF2 is located in a second network, and the local NWDAF3 is located in a third network. The global NWDAF may be located in any network. The global NWDAF is connected to the three local NWDAFs and communicates with each of them.
The target communication system may be a fifth generation mobile communication technology (5th Generation Mobile Communication Technology, 5G) system. Fig. 5 shows a 5G network architecture. The architecture is a service-based architecture comprising a plurality of architecture elements, where each architecture element is a Network Function (NF) network element deployed at a suitable location. NFs at different locations have different functions. In the top-down arrangement of fig. 5, the NF network elements include: a Network Slice Selection Function (NSSF), a Network Exposure Function (NEF), an NF Repository Function (NRF), a Policy Control Function (PCF), a Unified Data Management (UDM), an Application Function (AF), an Authentication Server Function (AUSF), an Access and Mobility Management Function (AMF), a Session Management Function (SMF), a User Equipment (UE), a (Radio) Access Network ((R)AN), a User Plane Function (UPF), and a Data Network (DN).
The NWDAF may provide a network analysis service according to the request data of the network service, and may provide functions such as slice-level network data analysis for NF. The global NWDAF and the local NWDAF in the present application may be NWDAFs in the above 5G network architecture.
The NEF is the network exposure function in the 5G network architecture. NFs in the 5G network expose their functions and events to other NFs through the NEF, and the capabilities and events exposed by NFs can be securely opened to external network elements such as third parties, AFs, and edge computing.
AF is an application function in a 5G network architecture that interacts with the core network to provide services, support application impact on traffic routing, access NEF, interact with policy architecture, etc. Based on the deployment of operators, the AF trusted by the operators can interact directly with the relevant NF, while the untrusted AF needs to interact with the relevant NF through the NEF using an external exposure framework. The third generation partnership project (3rd Generation Partnership Project,3GPP) only standardizes the capabilities and purposes of the AF to interact with the 3GPP core network, and does not relate to the specific services provided by the AF.
The UDM provides unified data management in the 5G network architecture: it is responsible for user identification and for generating authentication credentials for 3GPP authentication and key agreement (Authentication and Key Agreement, AKA for short), and it is responsible for subscription management and performs access authorization based on subscription data. The UDM also performs registration management of the serving NF for the user terminal.
The NRF is the network repository function in the 5G network architecture, mainly responsible for NF discovery and maintenance. NF service discovery refers to receiving NF discovery requests from NF instances and providing the information of the discovered NF instances to the requesting NF instance. NF maintenance refers to maintaining the NF profiles of available NF instances and the services they support. The description information of an NF instance maintained in the NRF mainly includes the NF instance ID, NF type, NF capability information, NF-specific service authorization information, and the like.
The functional network elements can be communicatively connected through service interfaces, for example, the NSSF provided service interface (Nnssf), the NEF provided service interface (Nnef), the NRF provided service interface (Nnrf), the PCF provided service interface (Npcf), the UDM provided service interface (Nudm), the AF provided service interface (Naf), the AUSF provided service interface (Nausf), the AMF provided service interface (Namf), and the SMF provided service interface (Nsmf). The AMF and the UE can be communicatively connected through the interface of reference point N1, the AMF and the RAN through the interface of reference point N2, the RAN and the UPF through the interface of reference point N3, and the SMF and the UPF through the interface of reference point N4; the UPF can communicate with other functional network elements through the interface of reference point N9, and the UPF and the DN can be communicatively connected through the interface of reference point N6.
Optionally, as shown in fig. 6, based on the 5G network architecture described above, the global NWDAF may further be deployed with a global trusted execution environment (Trusted Execution Environment, TEE) node and a global federal learning node. Similarly, a local NWDAF may be deployed with a local TEE node and a local federal learning node. The TEE nodes (the global TEE node and the local TEE nodes) are used for executing TEE applications and providing a trusted computing execution environment; they are secure isolation environments for the storage and computation of raw data and model data. The federal learning nodes (the global federal learning node and the local federal learning nodes) are used for participating in federal learning.
It should be noted that the TEE technology provides a secure area on the device by means of hardware, that is, it creates a secure isolation area inside the processor that is independent of the operating system, ensures that private and sensitive data are stored and computed in this isolated and trusted area, and protects the security of the data and code by means of hardware encryption technology.
It can be understood that multiple computing flows in the TEE can be mutually independent and cannot access one another without authorization; information in the secure area can be accessed only through the authorized interface, so that program code or private data cannot be illegally obtained or tampered with by the operating system or other applications. The trusted execution environment is securely isolated from the ordinary execution environment in the system and has independent internal data channels and computing and storage space. Each party participating in the computation can verify the trustworthiness of the environment by means of remote attestation, and when the computation is finished, the raw data and process data can be destroyed within the TEE environment as required to avoid the risk of leakage. A minimal sketch of this boundary principle is given below.
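The following sketch illustrates, in ordinary Python rather than real enclave hardware, the boundary principle described above: plaintext exists only inside the isolated region, and everything crossing the boundary is sealed. The class name, the method names, and the choice of AES-GCM are assumptions made for illustration.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class IsolatedRegion:
    """An illustrative stand-in for a TEE: the key never leaves the region."""
    def __init__(self) -> None:
        self._key = AESGCM.generate_key(bit_length=256)

    def seal(self, plaintext: bytes) -> bytes:
        """Encrypt data before it crosses the region boundary."""
        nonce = os.urandom(12)
        return nonce + AESGCM(self._key).encrypt(nonce, plaintext, None)

    def unseal(self, blob: bytes) -> bytes:
        """Decrypt data that re-enters the region; tampering raises an error."""
        nonce, ciphertext = blob[:12], blob[12:]
        return AESGCM(self._key).decrypt(nonce, ciphertext, None)

tee = IsolatedRegion()
sealed = tee.seal(b"model parameters")   # safe to store or transmit
assert tee.unseal(sealed) == b"model parameters"
```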
In addition, as shown in fig. 7, the NWDAF may further include an interface module, a data management module, and a network element storage module. The interface module is a functional component for the NWDAF network element to communicate with other network element systems, and is used for receiving and sending message data and collecting original data. The data management module is used for acquiring data from other network element systems and storing the data into the network element storage module so that the federal learning node can acquire the data through the data management module. The network element storage module is used for storing the service related data acquired by the NWDAF network element.
FIG. 8 is a flow diagram illustrating a method of data processing according to some example embodiments. In some embodiments, the data processing method described below may be applied to the target communication system as shown in fig. 3, and may also be applied to other similar scenarios.
As shown in fig. 8, the data processing method provided in the embodiment of the present application includes the following S201 to S203.
S201, the global NWDAF receives the encrypted local learning model sent by each local NWDAF.
The local learning model is obtained by training the local NWDAF based on the local data and an initial model indicated by the global NWDAF.
It should be noted that, each local NWDAF may perform machine learning based on the local network data and a preset learning task, so as to obtain respective local learning models.
As one possible implementation, the global NWDAF instructs each local NWDAF to perform federal learning. After each local NWDAF completes its own local learning task, the encrypted local learning model is sent to the global NWDAF. Correspondingly, the global NWDAF receives the encrypted local learning model sent by each local NWDAF.
Illustratively, the local NWDAF1 in network 1 corresponds to local learning model 1, the local NWDAF2 in network 2 corresponds to local learning model 2, and the local NWDAF3 in network 3 corresponds to local learning model 3. The global NWDAF receives the encrypted local learning model 1 sent by the local NWDAF1, the encrypted local learning model 2 sent by the local NWDAF2, and the encrypted local learning model 3 sent by the local NWDAF3.
In some embodiments, the global NWDAF is deployed with a global federal learning node. The global NWDAF may receive each encrypted local learning model through a global federal learning node.
S202, the global NWDAF determines a global learning model based on each encrypted local learning model.
As one possible implementation, the global NWDAF is deployed with a global TEE node and a global federal learning node. And the global NWDAF transmits each encrypted local learning model to the global TEE node through the global federal learning node. Further, the global NWDAF processes each encrypted local learning model through the global TEE node to obtain a global learning model.
Specifically, the global NWDAF may query the keys of the local NWDAFs through the global TEE node. The global NWDAF decrypts the encrypted local learning model sent by the first local NWDAF based on the key of the first local NWDAF through the global TEE node, and obtains a decryption model. Wherein the first local NWDAF is any one of the at least one local NWDAF. And the global NWDAF performs aggregation calculation on each decryption model through the global TEE to obtain a global learning model.
Illustratively, the global NWDAF queries, through the global TEE node, the public keys K1', K2', K3' of network 1, network 2 and network 3, decrypts the encrypted local learning models with these keys to obtain the decryption models m1', m2', m3' of the respective networks, and performs aggregation calculation on the three models to obtain the first-round global learning model GM1.
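As a hedged illustration of this step (the patent does not fix a concrete cipher or serialization, so the symmetric Fernet keys, the pickle serialization, and the plain averaging below are assumptions), the aggregation inside the global TEE node could look like:

```python
import pickle
from typing import Dict
import numpy as np
from cryptography.fernet import Fernet

def aggregate_in_global_tee(encrypted_models: Dict[str, bytes],
                            keys: Dict[str, bytes]) -> np.ndarray:
    """encrypted_models maps an NWDAF id to its encrypted, serialized model;
    keys maps the same id to the key looked up by the global TEE node."""
    decrypted = []
    for nwdaf_id, blob in encrypted_models.items():
        plaintext = Fernet(keys[nwdaf_id]).decrypt(blob)  # per-NWDAF decryption
        decrypted.append(pickle.loads(plaintext))         # recovers m1', m2', m3'
    return np.mean(decrypted, axis=0)                     # aggregation -> GM1
```

Here np.mean merely stands in for whatever rule the aggregation calculation actually applies.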
S203, the global NWDAF encrypts the global learning model to obtain an encrypted global learning model, and sends the encrypted global learning model to each local NWDAF.
As a possible implementation manner, the global NWDAF encrypts the trained global learning model through the TEE node to obtain an encrypted global learning model. Further, the global NWDAF sends the encrypted global learning model to each local NWDAF separately.
Illustratively, the global NWDAF encrypts the global learning model in the TEE node using the key Kg to obtain Kg(GM1), and transmits Kg(GM1) to the global federal learning node. The global federal learning node issues the data to the relevant local NWDAFs of network 1 and network 2 through the NWDAF interface modules.
As shown in fig. 9, the global NWDAF receives, through the interface module, the encrypted local learning models sent by the local NWDAFs, namely K1(m1'), K2(m2') and K3(m3'), and stores them in the data processing unit of the global federal learning node. Here, m1' is local learning model 1, and K1(m1') denotes the encrypted local learning model 1 obtained by encrypting m1' with the key K1; K2(m2') and K3(m3') are defined analogously. Further, the global NWDAF obtains K1(m1'), K2(m2') and K3(m3') from the global federal learning node through the global TEE node. In the trusted execution environment of the TEE node of the global NWDAF, the received encrypted models are decrypted by the TEE node using K1', K2', K3' to recover m1', m2', m3', and the global learning model GM is obtained from the decrypted models. Further, the global NWDAF encrypts the global learning model GM in the TEE node using its own key Kg to obtain Kg(GM). The global NWDAF sends Kg(GM) to the interface module through the global federal learning node, and to each local NWDAF through the interface module.
Accordingly, for any one local NWDAF (denoted as the target local NWDAF), the target local NWDAF receives the encrypted global learning model sent by the global NWDAF. Further, the target local NWDAF obtains the key of the global NWDAF, and decrypts the encrypted global learning model according to the key to obtain a decrypted global learning model.
As shown in fig. 10, the local NWDAF receives, through the interface module, the encrypted global learning model Kg(GM(F)) sent by the global NWDAF, and stores Kg(GM(F)) in the data processing unit of the local federal learning node. Here, GM(F) is the global learning model, and Kg(GM(F)) denotes the encrypted global learning model obtained by encrypting the global learning model with the key Kg. Further, the local NWDAF obtains Kg(GM(F)) from the local federal learning node through the local TEE node. In the trusted execution environment of the TEE node of the local NWDAF, the TEE node decrypts the received Kg(GM(F)) using the key of the global NWDAF, and the global learning model GM is obtained from the decryption result. Further, the local NWDAF sends the decrypted model to the interface module through the local federal learning node, and to the authorized application AF through the interface module.
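The local-side counterpart is correspondingly small. As before, this is a sketch under the same assumptions (Fernet standing in for the unspecified cipher, pickle for the unspecified serialization):

```python
import pickle
from cryptography.fernet import Fernet

def decrypt_global_model(encrypted_gm: bytes, kg: bytes):
    """Runs inside the local TEE node: recovers the plaintext global model GM
    from Kg(GM) using the global NWDAF's key Kg."""
    return pickle.loads(Fernet(kg).decrypt(encrypted_gm))
```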
In one design, in order to ensure that the global learning model is not abused, the target local NWDAF may perform identity verification on the application function AF network element in the first network, and send the decrypted global learning model to the AF network element if the verification passes. In this way, only authorized users can use the decrypted global learning model.
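A sketch of this authorization gate might look as follows; the allow-list check and the delivery helper are purely illustrative, since the patent does not specify how the AF's identity is verified:

```python
AUTHORIZED_AF_IDS = {"af-001", "af-002"}   # assumed provisioning data

def deliver_to_af(af_id: str, model: bytes) -> None:
    """Hypothetical stand-in for delivery via the interface module."""
    print(f"decrypted global learning model delivered to {af_id}")

def release_model_to_af(af_id: str, decrypted_gm: bytes) -> bool:
    """Release the decrypted global model only to a verified AF network element."""
    if af_id not in AUTHORIZED_AF_IDS:      # identity verification failed
        return False
    deliver_to_af(af_id, decrypted_gm)
    return True
```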
The technical scheme provided by the embodiments of the present application brings at least the following beneficial effects. The global NWDAF receives the local learning models sent by the local NWDAFs located in different networks. Because the local learning models are all in an encrypted state, the risk of exposing confidential data during the transmission of the local learning models can be reduced. Each local learning model is obtained by the local NWDAF through training based on local data and an initial model indicated by the global NWDAF, which is equivalent to modeling the local data within the local NWDAF and can provide rich model learning results for the global NWDAF. The global NWDAF determines a global learning model based on each encrypted local learning model, further encrypts the global learning model to obtain an encrypted global learning model, and sends the encrypted global learning model to each local NWDAF, so that the global learning model is protected from malicious tampering during transmission. Compared with the related art, in which network data of different operators is analyzed directly, the present application performs encryption before models are sent and received, in combination with the federal learning technology, so that the NWDAF modules are protected from malicious attacks during processing and transmission that would cause model data to be lost or leaked, and the data security of the target communication is guaranteed.
In one design, in order to enable the global NWDAF and each local NWDAF to have safe federal learning environments, the data processing method provided by the embodiment of the present application may deploy TEE nodes and federal learning nodes in advance for the global NWDAF and each local NWDAF. The specific deployment flow is as follows:
Step 1: the local NWDAFs participating in the federal learning task in different networks are communicatively connected to the global NWDAF; the specific communication mode is not limited in this application and may, for example, be a private line or a proxy.
Step 2: the NWDAF network elements participating in the federal learning task in the different network domains deploy federal learning nodes.
Step 3: the global federal learning node and the local federal learning nodes are defined.
Step 4: the local federal learning nodes and the global federal learning node authenticate one another's identities.
Step 5: a TEE node instance is built in each NWDAF, and each TEE instance has its own key and certificate for authentication and encrypted communication.
Step 6: each local TEE node initiates an initialization authentication procedure towards the global TEE node, and the two exchange public keys. For example, the local TEE node obtains the global TEE node public key Kg', and the global TEE node obtains the local TEE node public keys K1'/K2'/K3'. A sketch of this exchange is given after the list of steps.
Step 7: the global federal learning node deploys a federal learning algorithm model list and stores the corresponding federal learning service IDs, model types, model IDs and other contents.
Step 8: the federal learning node and the TEE node within each NWDAF exchange keys to ensure the secure transmission of data.
For easy understanding, after completing the above deployment procedure, as shown in fig. 11, a data transmission process of the local NWDAF and the global NWDAF is shown, including:
s301, for a network where the local NWDAF is located, the third party application AF in the network sends a network data analysis service request to the local NWDAF to request the local NWDAF to start a network data analysis service function.
For example, the AF may send the network data analysis service request to the local NWDAF through the network exposure network element NEF.
S302, the local NWDAF queries the UDM for the user's service subscription information to confirm whether the AF user has subscribed to the data analysis service.
For example, in response to the network data analysis service request, the local NWDAF queries the UDM for the user's service subscription information.
S303, the UDM feeds back a query result to the local NWDAF.
In one implementation, if the user has not subscribed to the data analysis service, the UDM feeds back the user's unsubscribed status. If the user has subscribed, the UDM feeds back the user's subscription information to the local NWDAF.
It should be noted that the user may subscribe to the data analysis service in advance. The specific subscription flow is as follows: the third-party application AF subscribes to the corresponding data analysis service from the local NWDAF through the NEF, where the data analysis service needs to use relevant data in each network for collaborative analysis. The local NWDAF queries the UDM for the service subscription data and checks whether the AF has the service use permission; the queried subscription data includes the service ID, the user ID, the service time period T and other data. Further, if the UDM feeds back that the AF has the service use permission, the local NWDAF sends the service subscription information to the local federal learning node. The local federal learning node initiates a service opening application to the global federal learning node; the application data include the user ID, service ID, analysis data type, analysis data parameter set, analysis data network ID set and the like. Correspondingly, the global federal learning node judges whether the request is legitimate, and if so, selects an analysis model or an analysis model list according to the service ID. Meanwhile, an AF in the network may also initiate cancellation of the subscription to the corresponding data analysis service with the NWDAF, in which case the NWDAF module notifies the UDM to delete the user's subscription data, completing the service cancellation. An illustrative shape for such a subscription record is sketched below.
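For concreteness, the subscription data described above could be represented as follows; the field names are assumptions chosen for readability, not fields mandated by the patent or by 3GPP.

```python
from dataclasses import dataclass

@dataclass
class SubscriptionRecord:
    user_id: str        # AF user identity
    service_id: str     # data analysis service ID
    period_t: str       # subscribed service time period T
    subscribed: bool    # whether the data analysis service has been opened

record = SubscriptionRecord(user_id="user-42", service_id="analytics-7",
                            period_t="2024-01/2024-12", subscribed=True)
```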
S304, the local NWDAF acquires local service data and downloads an initial model according to the subscription information.
In one implementation, in the case that the user's subscription information indicates that the service has been subscribed to, the local NWDAF sends a service initiation request to the local federal learning node to request that federal learning be started.
Illustratively, the data processing module of the local NWDAF sends the service request to the local federal learning node. The local federal learning node forwards the request to the global federal learning node; the global federal learning node judges the legitimacy of the request and feeds information back to the local federal learning node, where the feedback information includes legitimacy verification information and a data acquisition instruction, and the data acquisition instruction includes the service ID, the data analysis type, the analysis data parameter set, a timestamp and other contents. Meanwhile, the global federal learning node notifies the local federal learning node to download the initial model m0 and provides the relevant information of the model, including the service ID, model type, model ID and the like. Taking the local NWDAF1 as an example, after receiving the instruction, the local federal learning node sends a data acquisition request to the data processing module according to the service requirement. The data processing module of the local NWDAF1 acquires the relevant data D for data analysis from the other network elements in the required network through the interface module according to the request, and stores the data D in the network element storage module. The local federal learning node extracts the corresponding data D from the network element storage module according to the service-related parameters and stores it in the data management module.
S305, the local NWDAF performs federal learning through the local TEE node.
S306, encrypting the local learning model by the local NWDAF, and transmitting the encrypted local learning model to the global NWDAF.
As shown in fig. 12, the local federal learning process is described here using the local NWDAF1 as an example. The data processing module of the local NWDAF1 acquires the relevant data D for data analysis from the other network elements in the required network through the interface module according to the request, and stores the data D in the network element storage module. The local federal learning node extracts the corresponding data D from the network element storage module according to the service-related parameters and stores it in the data management module. The local NWDAF1 transmits the data D from the data management module to the local federal learning node through the interface module, and the local federal learning node downloads the initial model m0 from the global federal learning node according to the service ID, model type, model ID and other contents. The local federal learning node transmits the raw data D and the initial model m0 to the trusted execution environment in the local TEE node, where modeling and encryption are performed and a new local learning model m1' is computed. The TEE node in the local NWDAF1 encrypts the modeling result m1' of the local NWDAF1 using the local key K1 to obtain K1(m1'), stores K1(m1'), and sends the encrypted learning model K1(m1') to the local federal learning node through a TLS tunnel. The local federal learning node transmits the data K1(m1') to the interface module of the local NWDAF1 through the encrypted communication interface according to the stored information. The interface module of the local NWDAF1 transmits the model K1(m1') to the global federal learning node in the global NWDAF for model aggregation.
Similarly, the local NWDAF2 and the local NWDAF3 generate K2(m2') and K3(m3') following the steps of the local NWDAF1. The interface modules of the local NWDAF2 and the local NWDAF3 pass the models K2(m2') and K3(m3'), respectively, to the global NWDAF. At this point, the global federal learning node holds the three local learning models K1(m1'), K2(m2') and K3(m3').
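A hedged sketch of this local round follows: train m0 on the local data D inside the local TEE and hand back the encrypted result K1(m1'). The one-step toy training and the Fernet/pickle choices are assumptions, as in the earlier sketches.

```python
import pickle
import numpy as np
from cryptography.fernet import Fernet

def local_round(m0: np.ndarray, local_data: np.ndarray, k1: bytes) -> bytes:
    """Runs inside the local TEE node: model on D, then encrypt with K1."""
    grad = 2 * (m0 - local_data.mean(axis=0))   # stand-in for real training on D
    m1_prime = m0 - 0.01 * grad                 # new local learning model m1'
    return Fernet(k1).encrypt(pickle.dumps(m1_prime))  # K1(m1'), sent onward

k1 = Fernet.generate_key()                      # local key K1
encrypted_update = local_round(np.zeros(4), np.random.rand(10, 4), k1)
```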
S307, the global NWDAF determines a global learning model according to the received local learning model.
S308, the global NWDAF encrypts the global learning model and sends the encrypted global learning model to each local NWDAF.
Illustratively, the interface module of each local NWDAF passes the models K1(m1'), K2(m2') and K3(m3') to the global federal learning node in the global NWDAF for model aggregation. At this point, the global federal learning node holds the three local learning models K1(m1'), K2(m2') and K3(m3'). After receiving the local learning models, the global federal learning node passes them uniformly to the global TEE node for the aggregation operation. The TEE node queries the public keys K1'/K2'/K3' of the local NWDAF1, the local NWDAF2 and the local NWDAF3, decrypts the encrypted local learning models, and obtains the learning models m1', m2', m3' of each local NWDAF. The global TEE node performs aggregation calculation on the three learning models within the trusted execution environment to obtain the global learning model GM1. The global NWDAF encrypts the global learning model GM1 in the TEE node using its own key Kg to obtain Kg(GM1) and transmits it to the global federal learning node. The global NWDAF transmits the first-round global encryption model Kg(GM1) to each local NWDAF through its interface module.
It should be noted that the global model GM1 obtained by the federal learning may be a first-round global model.
Correspondingly, the local NWDAF decrypts the encrypted global learning model Kg(GM1) to obtain GM1, continues local learning on GM1, and sends the resulting local learning model to the global NWDAF so that the global NWDAF can update the first-round global model.
As shown in fig. 13, the global NWDAF sends Kg(GM1) through the interface module of the local NWDAF1 to the federal learning node, and through the federal learning node into the local TEE node. The global encryption model Kg(GM1) is decrypted in the TEE node using the global TEE node public key Kg' to obtain GM1. GM1 is then aggregated with the local data D and other new local data within the trusted execution environment to obtain the local learning model m1". The TEE node in the local NWDAF1 encrypts the modeling result m1" of the local NWDAF1 using the local key K1 to obtain K1(m1"), stores K1(m1"), and sends the encrypted learning model K1(m1") to the local federal learning node through the TLS tunnel. The local federal learning node sends the encrypted local learning model K1(m1") to the global NWDAF through the local interface module.
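Tying the rounds together, the following self-contained sketch mirrors fig. 13: decrypt Kg(GM1), refine GM1 on local data into m1", and return K1(m1") for the next aggregation. Symmetric Fernet keys again stand in for the Kg/Kg' and K1/K1' pairs of the description.

```python
import pickle
import numpy as np
from cryptography.fernet import Fernet

kg, k1 = Fernet.generate_key(), Fernet.generate_key()
enc_gm1 = Fernet(kg).encrypt(pickle.dumps(np.zeros(4)))    # Kg(GM1) as received

gm1 = pickle.loads(Fernet(kg).decrypt(enc_gm1))            # decrypt in local TEE
local_d = np.random.rand(10, 4)                            # local data D plus new data
m1_dd = gm1 - 0.01 * 2 * (gm1 - local_d.mean(axis=0))      # refined model m1"
enc_update = Fernet(k1).encrypt(pickle.dumps(m1_dd))       # K1(m1"), returned to global NWDAF
```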
Optionally, after the global model is transmitted to the AF, the intermediate model parameter data of all the other networks' models may be destroyed in the TEE modules of the local NWDAF1, the local NWDAF2 and the local NWDAF3, so as to prevent data leakage.
The foregoing embodiments mainly describe the solutions provided in the embodiments of the present application from the perspective of the apparatus (device). It will be appreciated that, in order to implement the above method, the apparatus or device includes hardware structures and/or software modules corresponding to each method flow, and these hardware structures and/or software modules may constitute a data processing apparatus. Those of skill in the art will readily appreciate that the algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented as hardware or a combination of hardware and computer software. Whether a function is implemented as hardware or computer-software-driven hardware depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The embodiment of the application may divide the functional modules of the apparatus or the device according to the above method example, for example, the apparatus or the device may divide each functional module corresponding to each function, or may integrate two or more functions into one processing module. The integrated modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present application, the division of the modules is schematic, which is merely a logic function division, and other division manners may be implemented in actual implementation.
Fig. 14 is a schematic diagram showing the structure of a data processing apparatus according to an exemplary embodiment. Referring to fig. 14, a data processing apparatus 40 provided in the embodiment of the present application is applied to a global network data analysis function NWDAF in a target communication system, where at least one local NWDAF is further deployed; the global NWDAF is connected with each local NWDAF; the network corresponding to each local NWDAF is different; the data processing apparatus 40 includes a receiving unit 401, a processing unit 402, and a transmitting unit 403.
A receiving unit 401, configured to receive the encrypted local learning model sent by each local NWDAF, the local learning model being obtained by the local NWDAF through training based on local data and an initial model indicated by the global NWDAF; a processing unit 402, configured to determine a global learning model based on each encrypted local learning model, and to encrypt the global learning model to obtain an encrypted global learning model; and a sending unit 403, configured to send the encrypted global learning model to each local NWDAF.
Optionally, the global NWDAF includes a global trusted execution environment TEE node and a global federal learning node; the receiving unit 401 is specifically configured to: receiving each encrypted local learning model through a global federal learning node; the processing unit 402 is specifically configured to: sending each encrypted local learning model to a global TEE node through a global federal learning node; and processing each encrypted local learning model through the global TEE node to obtain a global learning model.
Optionally, the processing unit 402 is specifically configured to: inquiring the key of each local NWDAF through the global TEE node; decrypting, by the global TEE node, the encrypted local learning model sent by the first local NWDAF based on the key of the first local NWDAF to obtain a decryption model; the first local NWDAF is any one of the at least one local NWDAF; and carrying out aggregation calculation on each decryption model through the global TEE to obtain a global learning model.
Fig. 15 is a schematic structural diagram of an electronic device provided in the present application. As shown in fig. 15, the electronic device 50 may include at least one processor 501 and a memory 502 for storing processor executable instructions, wherein the processor 501 is configured to execute the instructions in the memory 502 to implement the data processing method in the above embodiment.
In addition, the electronic device 50 may also include a communication bus 503 and at least one communication interface 504.
The processor 501 may be a central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of the present application.
Communication bus 503 may include a path to transfer information between the above components.
Communication interface 504, using any transceiver-like device for communicating with other devices or communication networks, such as ethernet, radio access network (radio access network, RAN), wireless local area network (wireless local area networks, WLAN), etc.
The memory 502 may be a read-only memory (ROM) or another type of static storage device capable of storing static information and instructions, a random access memory (RAM) or another type of dynamic storage device capable of storing information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, and the like), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory may exist independently and be coupled to the processor 501 via a bus, or the memory may be integrated with the processor 501.
The memory 502 is used for storing instructions for executing the present application, and is controlled by the processor 501 to execute the present application. The processor 501 is configured to execute instructions stored in the memory 502 to implement the functions of the methods of the present application.
As an example, in connection with fig. 14, the functions implemented by the receiving unit 401, the processing unit 402, and the transmitting unit 403 in the data processing apparatus 40 are the same as those of the processor 501 in fig. 15.
In a particular implementation, as one embodiment, processor 501 may include one or more CPUs, such as CPU0 and CPU1 in FIG. 15.
In a particular implementation, as one embodiment, electronic device 50 may include multiple processors, such as processor 501 and processor 507 in FIG. 15. Each of these processors may be a single-core (single-CPU) processor or may be a multi-core (multi-CPU) processor. A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
In a particular implementation, as one embodiment, the electronic device 50 may also include an output device 505 and an input device 506. The output device 505 communicates with the processor 501 and may display information in a variety of ways. For example, the output device 505 may be a liquid crystal display (LCD), a light emitting diode (LED) display device, a cathode ray tube (CRT) display device, or a projector. The input device 506 communicates with the processor 501 and may receive user input in a variety of ways. For example, the input device 506 may be a mouse, a keyboard, a touch screen device, a sensing device, or the like.
Those skilled in the art will appreciate that the structure shown in fig. 15 is not limiting of the electronic device 50 and may include more or fewer components than shown, or may combine certain components, or may employ a different arrangement of components.
In addition, the present application also provides a computer-readable storage medium, instructions in which, when executed by a processor of an electronic device, enable the electronic device to perform the data processing method provided in the above embodiments.
In addition, the application also provides a computer program product comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the data processing method as provided in the above embodiments.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.

Claims (10)

1. A data processing method, characterized by being applied to a target communication system, the target communication system being deployed with a global network data analytics function NWDAF and at least one local NWDAF; the global NWDAF is connected with each local NWDAF; each local NWDAF corresponds to a different network; the method comprises the following steps:
the global NWDAF receives the encrypted local learning model sent by each local NWDAF; the local learning model is obtained by the local NWDAF through training based on local data and an initial model indicated by the global NWDAF;
the global NWDAF determines a global learning model based on each encrypted local learning model;
and the global NWDAF encrypts the global learning model to obtain an encrypted global learning model, and sends the encrypted global learning model to each local NWDAF.
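
(Purely as an illustrative aid, and not as part of the claimed subject matter: the round recited in claim 1 could be sketched as below. The `determine_global_model` callable stands in for the aggregation step, which claims 2 and 3 refine into a TEE-based procedure sketched after claim 3; the Fernet symmetric cipher, the JSON model serialization, and the `send` callback are assumptions introduced here for concreteness, not features of the application.)

```python
# Illustrative sketch of the claim-1 round at the global NWDAF; the cipher,
# serialization, and helper callables are assumptions, not claimed features.
import json
from cryptography.fernet import Fernet

def global_round(encrypted_local_models, determine_global_model, global_key, send):
    # Step 1: the encrypted local learning models received from the local
    # NWDAFs are passed in as {local_nwdaf_id: ciphertext}.
    # Step 2: determine the global learning model based on them (see the
    # claim-3 sketch below for one possible realization of this step).
    global_model = determine_global_model(encrypted_local_models)
    # Step 3: encrypt the global learning model and send it to each local NWDAF.
    token = Fernet(global_key).encrypt(json.dumps(global_model).encode())
    for local_nwdaf_id in encrypted_local_models:
        send(local_nwdaf_id, token)
```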
2. The method of claim 1, wherein the global NWDAF comprises a global trusted execution environment TEE node and a global federated learning node; the global NWDAF receives the encrypted local learning model sent by each local NWDAF, including:
the global NWDAF receives each encrypted local learning model through the global federated learning node;
the global NWDAF determines a global learning model based on each encrypted local learning model, including:
the global NWDAF sends each encrypted local learning model to the global TEE node through the global federated learning node;
and the global NWDAF processes each encrypted local learning model through the global TEE node to obtain the global learning model.
3. The method of claim 2, wherein the global NWDAF processes each encrypted local learning model through the global TEE node to obtain the global learning model, comprising:
the global NWDAF queries the keys of the local NWDAFs through the global TEE node;
the global NWDAF decrypts, through the global TEE node, the encrypted local learning model sent by the first local NWDAF based on the key of the first local NWDAF to obtain a decryption model; the first local NWDAF is any one of the at least one local NWDAF;
and the global NWDAF performs aggregation calculation on each decryption model through the global TEE node to obtain the global learning model.
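
(Again purely as an illustrative aid: the claim-3 steps inside the global TEE node could look like the sketch below. The `key_store` mapping, the Fernet cipher, and the element-wise (FedAvg-style) averaging are assumptions; the application does not prescribe a particular key store, cipher, or aggregation rule.)

```python
# Illustrative sketch of the claim-3 procedure inside the global TEE node.
# Assumption: key_store maps each local-NWDAF id to its Fernet key, and each
# model is a JSON-serialized list of float parameters.
import json
from cryptography.fernet import Fernet

def tee_aggregate(encrypted_models, key_store):
    decrypted = []
    for nwdaf_id, ciphertext in encrypted_models.items():
        key = key_store[nwdaf_id]  # query the key of this local NWDAF
        # Decrypt the encrypted local learning model into a decryption model.
        decrypted.append(json.loads(Fernet(key).decrypt(ciphertext)))
    # Aggregation calculation over the decryption models: element-wise mean.
    return [sum(params) / len(decrypted) for params in zip(*decrypted)]
```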
4. A data processing method, characterized by being applied to a target communication system, the target communication system comprising a global network data analytics function NWDAF and a target local NWDAF; the target local NWDAF is located in a first network; the method comprises the following steps:
the target local NWDAF receives an encrypted global learning model sent by the global NWDAF; the encrypted global learning model is obtained by the global NWDAF through federated learning based on the local learning models;
and the target local NWDAF acquires a key of the global NWDAF, and decrypts the encrypted global learning model according to the key to obtain a decrypted global learning model.
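
(On the local side, the two claim-4 steps might be sketched as below; the `acquire_global_key` helper and the Fernet cipher are assumptions, not features recited in the application.)

```python
# Illustrative sketch of the claim-4 steps at the target local NWDAF.
import json
from cryptography.fernet import Fernet

def receive_global_model(encrypted_global_model, acquire_global_key):
    # Acquire the key of the global NWDAF (acquire_global_key is hypothetical).
    key = acquire_global_key()
    # Decrypt the encrypted global learning model to obtain the decrypted model.
    return json.loads(Fernet(key).decrypt(encrypted_global_model))
```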
5. The method according to claim 4, wherein the method further comprises:
and the target local NWDAF performs identity verification on the application function AF network element in the first network, and sends the decrypted global learning model to the AF network element if the verification passes.
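
(Claim 5 gates delivery of the decrypted global learning model on identity verification of the AF network element; since the application does not fix a verification mechanism, the HMAC token check below is purely an assumption used for illustration.)

```python
# Illustrative sketch of the claim-5 gate. Assumption: the AF proves its
# identity with an HMAC over its id under a secret shared with the local NWDAF.
import hashlib
import hmac

def send_if_verified(af_id, af_token, shared_secret, decrypted_model, send):
    expected = hmac.new(shared_secret, af_id.encode(), hashlib.sha256).hexdigest()
    if hmac.compare_digest(expected, af_token):  # identity verification passed
        send(af_id, decrypted_model)             # deliver the decrypted model
    else:
        raise PermissionError("AF network element failed identity verification")
```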
6. A data processing device, characterized by being applied to a global network data analytics function NWDAF in a target communication system, the target communication system being further deployed with at least one local NWDAF; the global NWDAF is connected with each local NWDAF; each local NWDAF corresponds to a different network; the device comprises a receiving unit, a processing unit and a sending unit;
the receiving unit is used for receiving the encrypted local learning model sent by each local NWDAF; the local learning model is obtained by the local NWDAF through training based on local data and an initial model indicated by the global NWDAF;
the processing unit is used for determining a global learning model based on each encrypted local learning model;
the processing unit is further used for encrypting the global learning model to obtain an encrypted global learning model;
and the sending unit is used for sending the encrypted global learning model to each local NWDAF.
7. The apparatus of claim 6, wherein the global NWDAF comprises a global trusted execution environment TEE node and a global federated learning node; the receiving unit is specifically configured to:
receive each encrypted local learning model through the global federated learning node;
the processing unit is specifically configured to:
send each encrypted local learning model to the global TEE node through the global federated learning node;
and process each encrypted local learning model through the global TEE node to obtain the global learning model.
8. The apparatus according to claim 7, wherein the processing unit is specifically configured to:
query the key of each local NWDAF through the global TEE node;
decrypt, through the global TEE node, the encrypted local learning model sent by the first local NWDAF based on the key of the first local NWDAF to obtain a decryption model; the first local NWDAF is any one of the at least one local NWDAF;
and perform aggregation calculation on each decryption model through the global TEE node to obtain the global learning model.
9. An electronic device, comprising: a processor, a memory for storing instructions executable by the processor; wherein the processor is configured to execute instructions to implement the data processing method of any of claims 1-3 or the data processing method of any of claims 4-5.
10. A computer readable storage medium having instructions stored thereon, which, when executed by a processor of an electronic device, enable the electronic device to perform the data processing method of any one of claims 1-3 or the data processing method of any one of claims 4-5.
CN202311167438.4A 2023-09-11 2023-09-11 Data processing method, device, electronic equipment and storage medium Pending CN117376905A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311167438.4A CN117376905A (en) 2023-09-11 2023-09-11 Data processing method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311167438.4A CN117376905A (en) 2023-09-11 2023-09-11 Data processing method, device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117376905A true CN117376905A (en) 2024-01-09

Family

ID=89395345

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311167438.4A Pending CN117376905A (en) 2023-09-11 2023-09-11 Data processing method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117376905A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination