WO2023125879A1 - Data processing method, apparatus and communication device - Google Patents

Data processing method, apparatus and communication device

Info

Publication number
WO2023125879A1
Authority
WO
WIPO (PCT)
Prior art keywords
inference
information
reasoning
network element
model
Application number
PCT/CN2022/143669
Other languages
English (en)
French (fr)
Inventor
崇卫微
Original Assignee
维沃移动通信有限公司
Application filed by 维沃移动通信有限公司
Publication of WO2023125879A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W12/00Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W12/02Protecting privacy or anonymity, e.g. protecting personally identifiable information [PII]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/20Ensemble learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/04Inference or reasoning models

Definitions

  • the present application belongs to the technical field of communication, and in particular relates to a data processing method, device and communication equipment.
  • Embodiments of the present application provide a data processing method, device, and communication device, which can implement data reasoning by combining data located in different domains and on different network elements without data sharing.
  • According to a first aspect, a data processing method is provided, including: when a first network element performs a federated inference process corresponding to a first inference task, the first network element sends a federated inference request message to at least one second network element, where the federated inference request message includes at least relevant information of the first inference task and the second network element is a network element participating in the federated inference process; the first network element receives first information sent by the at least one second network element, where the first information includes at least a first inference result; and the first network element determines a second inference result corresponding to the first inference task according to at least one of the first inference results.
  • According to a second aspect, a data processing method is provided, including: a second network element receives a federated inference request message sent by a first network element, where the federated inference request message includes at least relevant information of a first inference task; the second network element performs inference according to the federated inference request message to obtain a first inference result; and the second network element sends first information to the first network element, where the first information includes at least the first inference result.
  • According to a third aspect, a data processing device is provided, including: a first sending module, configured to send a federated inference request message to at least one second network element in the case of performing a federated inference process corresponding to a first inference task, where the federated inference request message includes at least relevant information of the first inference task and the second network element is a network element participating in the federated inference process; a first receiving module, configured to receive first information sent by the at least one second network element, where the first information includes at least a first inference result; and a first inference module, configured to determine a second inference result corresponding to the first inference task according to at least one of the first inference results.
  • According to a fourth aspect, a data processing device is provided, including: a second receiving module, configured to receive a federated inference request message sent by a first network element, where the federated inference request message includes at least relevant information of a first inference task; a second inference module, configured to perform inference according to the federated inference request message to obtain a first inference result; and a second sending module, configured to send first information to the first network element, where the first information includes at least the first inference result.
  • According to a fifth aspect, a communication device is provided, including a processor and a memory, where the memory stores a program or instructions that can run on the processor, and the program or instructions, when executed by the processor, implement the steps of the method described in the first aspect or the second aspect.
  • According to a sixth aspect, a communication device is provided, including a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to run a program or instructions to implement the steps of the method described in the first aspect or the steps of the method described in the second aspect.
  • According to a seventh aspect, a federated data processing system is provided, including a first network element and a second network element, where the first network element can be used to execute the steps of the data processing method described in the first aspect, and the second network element can be used to execute the steps of the data processing method described in the second aspect.
  • According to an eighth aspect, a readable storage medium is provided, where a program or instructions are stored on the readable storage medium, and the program or instructions, when executed by a processor, implement the steps of the method described in the first aspect or the steps of the method described in the second aspect.
  • According to a ninth aspect, a chip is provided, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or instructions to implement the steps of the method described in the first aspect or the steps of the method described in the second aspect.
  • According to a tenth aspect, a computer program/program product is provided, where the computer program/program product is stored in a storage medium, and the computer program/program product is executed by at least one processor to implement the steps of the method described in the first aspect or the steps of the method described in the second aspect.
  • In the embodiments of the present application, the first network element cooperates with the second network elements to realize local distributed data reasoning by means of federated reasoning. On the premise of not sharing the inference data, this can not only ensure the data privacy and data security in each second network element, but also ensure the reliability of the inference results.
  • Fig. 1 is a schematic structural diagram of a wireless communication system provided by an exemplary embodiment of the present application.
  • Fig. 2 is a schematic structural diagram of a federated data processing system provided by an exemplary embodiment of the present application.
  • Fig. 3 is a first schematic flowchart of the data processing method provided by an embodiment of the present application.
  • Fig. 4 is a second schematic flowchart of the data processing method provided by an embodiment of the present application.
  • Fig. 5 is a schematic diagram of an interaction flow of a data processing method provided by an embodiment of the present application.
  • Fig. 6 is a third schematic flowchart of the data processing method provided by an embodiment of the present application.
  • Fig. 7 is a first schematic structural diagram of the data processing device provided by an embodiment of the present application.
  • Fig. 8 is a second schematic structural diagram of the data processing device provided by an embodiment of the present application.
  • Fig. 9 is a schematic structural diagram of a communication device provided by an exemplary embodiment of the present application.
  • Fig. 10 is a schematic structural diagram of a terminal provided by an exemplary embodiment of the present application.
  • Fig. 11 is a first schematic structural diagram of the network side device provided by an embodiment of the present application.
  • Fig. 12 is a second schematic structural diagram of a network side device provided by an embodiment of the present application.
  • The terms "first", "second" and the like in the specification and claims of the present application are used to distinguish similar objects, and are not used to describe a specific order or sequence. It should be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be implemented in orders other than those illustrated or described herein. Furthermore, objects distinguished by "first" and "second" are usually of one category, and the number of objects is not limited; for example, there may be one or more first objects.
  • “and/or” in the description and claims means at least one of the connected objects, and the character “/” generally means that the related objects are an "or” relationship.
  • LTE Long Term Evolution
  • LTE-A Long Term Evolution-Advanced
  • CDMA Code Division Multiple Access
  • TDMA Time Division Multiple Access
  • FDMA Frequency Division Multiple Access
  • OFDMA Orthogonal Frequency Division Multiple Access
  • SC-FDMA Single-carrier Frequency-Division Multiple Access
  • "System" and "network" in the embodiments of the present application are often used interchangeably, and the described technology can be used for the above-mentioned systems and radio technologies as well as for other systems and radio technologies.
  • NR New Radio
  • The following description describes the New Radio (NR) system for illustrative purposes and uses NR terminology in most of the following descriptions, but these techniques can also be applied to systems other than the NR system, such as the 6th Generation (6G) communication system.
  • Fig. 1 shows a block diagram of a wireless communication system to which the embodiment of the present application is applicable.
  • the wireless communication system includes a terminal 11 and a network side device 12 .
  • The terminal 11 can be a mobile phone, a tablet computer (Tablet Personal Computer), a laptop computer (Laptop Computer) or notebook computer, a personal digital assistant (Personal Digital Assistant, PDA), a palmtop computer, a netbook, an ultra-mobile personal computer (UMPC), a mobile Internet device (Mobile Internet Device, MID), an augmented reality (AR) / virtual reality (VR) device, a robot, a wearable device (Wearable Device), vehicle user equipment (VUE), pedestrian user equipment (PUE), a smart home device (home equipment with wireless communication functions, such as a refrigerator, TV, washing machine or furniture), a game console, a personal computer (PC), a teller machine or a self-service machine, or other terminal-side devices; wearable devices include smart watches, smart bracelets, smart headphones, smart glasses, smart jewelry, and the like.
  • The network side device 12 may include an access network device or a core network device, where the access network device 12 may also be called a radio access network device, a Radio Access Network (RAN), a radio access network function, or a radio access network unit.
  • The access network device 12 may include a base station, a WLAN access point, or a WiFi node, etc. The base station may be called a Node B, an evolved Node B (eNB), an access point, a Base Transceiver Station (BTS), a radio base station, a radio transceiver, a Basic Service Set (BSS), an Extended Service Set (ESS), a Home Node B, a Home Evolved Node B, a Transmitting Receiving Point (TRP), or some other suitable term in the field; as long as the same technical effect is achieved, the base station is not limited to specific technical vocabulary. It should be noted that in the embodiments of this application, only the base station in the NR system is introduced as an example, and the specific type of the base station is not limited.
  • As shown in Fig. 2, an embodiment of the present application also provides a schematic structural diagram of a federated data processing system; the federated data processing system includes a first network element, a second network element, a third network element, and a consumer device.
  • The first network element, as a joint reasoning network element in the federated data processing system, has the function of acquiring the local reasoning results reported by the local reasoning entities and aggregating them to generate a global reasoning result.
  • The first network element may be a network element or device in the communication network that can provide machine learning reasoning functions, such as the Network Data Analytics Function (NWDAF), the Management Data Analytics Service (MDAS), the Management Data Analytics Function (MDAF), or other network elements or devices dedicated to providing network intelligence services; or the first network element may be a network element or device that provides other communication-related services (such as Mobility Management (MM) services or Session Management (SM) services) and at the same time has intelligent functions, such as the Access and Mobility Management Function (AMF), the Session Management Function (SMF), or the Application Function (AF).
  • The AF may be an AF deployed by a communication operator, or an AF deployed by a third party.
  • the second network element is used as a local reasoning entity in the federated data processing system, which may be a network element or device with local machine learning reasoning capabilities in the communication network, such as a radio access network (Radio Access Network, RAN) domain Artificial Intelligence (AI) functional network elements, core network (Core Network, CN) domain AI functional network elements, third-party AI applications, AI agent (client) in UE, local communication service equipment, etc.
  • the third network element may serve as a model provider (also referred to as a coordinator) in the federated data processing system to train and provide the first network element with an AI model for data reasoning, etc.
  • The third network element may be a network element or device in the communication network that can provide machine learning functions, such as the NWDAF, MDAS, MDAF, or other network elements or devices dedicated to providing network intelligence services; or the third network element may be a network element or device that provides other communication-related services (such as MM services and SM services) and also has intelligent functions, such as the AMF and SMF.
  • the model training manner adopted by the third network element may be but not limited to a federated model training manner (that is, a federated learning training manner) and the like.
  • It should be noted that, in addition to obtaining the AI model used for data reasoning from the third network element, the first network element can also obtain the AI model used for data reasoning through model training (for example, a federated model training method, that is, a federated learning training method), which is not limited in this embodiment.
  • Consumer devices can be network elements or devices that need to perform data processing in wireless communication systems, such as third-party AI applications, UEs, and policy control function entities (Policy Control Function, PCF) , AMF, AF, etc.
  • the consumer device may send an inference task request message to the first network element due to a specific data analysis task (for example, identified by an analytics ID), so as to trigger the federated inference process.
  • the federated data processing system may include more or less network elements or devices than those shown in FIG. 2 .
  • For example, the federated data processing system may include the third network element as shown in Figure 2 (the inference target model required by the inference task comes from the third network element), or may not include the third network element (the inference target model required by the inference task comes from the first network element, where the first network element can generate the target model through a federated learning training process); for another example, the federated data processing system may also include network elements or devices other than those shown in Figure 2.
  • As shown in Fig. 3, Fig. 3 is a schematic flowchart of a data processing method 300 provided by an exemplary embodiment of the present application. The method 300 may be executed by, but is not limited to, a first network element (such as a terminal or a network side device); specifically, it may be executed by hardware and/or software installed in the first network element.
  • the method 300 may at least include the following steps.
  • the first network element sends a federated inference request message to at least one second network element when performing a federated inference process corresponding to the first inference task.
  • the second network element is an inference entity participating in the federated inference process.
  • The second network element may be determined by the first network element according to the first inference task, or the second network element may be determined by the first network element according to the relevant information of the target model corresponding to the federated reasoning process, or the second network element may be a network element, obtained from a Network Repository Function (NRF) or a Unified Data Management (UDM) entity according to the model training task of the first network element, that is capable of supporting the federated reasoning process.
  • For example, the determination process of the second network element may include: the first network element determines the second network element according to the network element information (that is, information of the fourth network element) that participates in the training of the target model and that is included in the relevant information of the target model. For example, the first network element determines the training network elements (such as AMF instance(s), RAN instance(s), AF instance(s), etc.) that participated in the training of the target model, and then determines these training network elements as the second network elements.
  • the federated inference request message includes at least information about the first inference task, so as to indicate that each of the second network elements needs to jointly perform the first inference task.
  • the first reasoning task may be a data analysis task, a data statistics task, and the like.
  • the federated inference process may be triggered by the first network element according to its own inference task, or may be triggered by an inference task of a consumer device.
  • The implementation process may include: when the consumer device determines that a reasoning task such as data analysis needs to be performed, the consumer device sends a reasoning task request message to the first network element, where the reasoning task request message includes relevant information of the first reasoning task (such as a data analysis task); then, after the first network element receives the reasoning task request message sent by the consumer device, the federated reasoning process can be triggered.
  • the first network element receives first information sent by the at least one second network element.
  • the first information includes at least a first reasoning result.
  • the first inference result is determined by the second network element according to local inference data and a local inference model.
  • The local reasoning model may come from a network element other than the second network element; for example, the first network element indicates the local reasoning model to the second network element, or the fourth network element provides the local reasoning model to the second network element, where the fourth network element is a network element that participates in the target model training process and is responsible for local model training. Alternatively, the local reasoning model may be obtained by the second network element through local model training, which is not limited here.
  • the first network element determines a second inference result corresponding to the first inference task according to at least one of the first inference results.
  • The first network element may determine the second inference result in multiple ways according to the first inference results from different second network elements; for example, it may process the received first inference results based on the AI model corresponding to the first inference task, which is not limited in this embodiment.
  • After obtaining the second inference result, the first network element may send the second inference result to the consumer device.
  • In this way, the first network element combines the second network elements to realize local distributed data reasoning by means of federated reasoning, so that, on the premise of not sharing the inference data on each second network element in the communication network, data privacy and data security in each second network element can be ensured, and the reliability of the reasoning results can also be ensured. A simplified sketch of this flow is given below.
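  • As a purely illustrative, non-normative sketch of the S310-S330 flow described above, the following Python code models a first network element that fans a federated inference request out to second network elements, collects their first inference results, and aggregates them into a second inference result. All class names, fields, and the element-wise averaging used in place of the target model are assumptions made for this sketch and are not defined by the application.

```python
# A minimal, hypothetical sketch of the S310-S330 flow; names such as
# FederatedInferenceRequest or FirstNetworkElement are illustrative only.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class FederatedInferenceRequest:
    """Federated inference request message sent in S310."""
    model_instance_id: str                      # identifies the target model of the process
    analytics_id: str                           # identification info of the first inference task
    vertical: bool = True                       # first indication info: vertical federated inference
    filter_info: Dict[str, str] = field(default_factory=dict)  # first-filter info (object/time/area)


@dataclass
class FirstInformation:
    """First information reported by a second network element in S320."""
    model_instance_id: str
    sample_id: str                              # inference sample info, e.g. a UE identifier
    first_inference_result: List[float]


class SecondNetworkElementStub:
    """Stand-in for a local inference entity; its real behaviour is sketched later."""
    def __init__(self, name: str, local_result: List[float]):
        self.name, self.local_result = name, local_result

    def handle_request(self, req: FederatedInferenceRequest) -> FirstInformation:
        # Local inference against the local model associated with req.model_instance_id.
        return FirstInformation(req.model_instance_id, "UE-1", self.local_result)


class FirstNetworkElement:
    def run_federated_inference(self, req: FederatedInferenceRequest,
                                participants: List[SecondNetworkElementStub]) -> List[float]:
        # S310: fan the request out; S320: collect the first information.
        replies = [p.handle_request(req) for p in participants]
        # S330: combine the first inference results into the second inference result.
        # Here the "target model" is replaced by a simple element-wise average.
        results = [r.first_inference_result for r in replies]
        return [sum(col) / len(col) for col in zip(*results)]


if __name__ == "__main__":
    req = FederatedInferenceRequest("model-42", "analytics-7", filter_info={"area": "cell-1"})
    nes = [SecondNetworkElementStub("RAN-AI", [0.2, 0.8]), SecondNetworkElementStub("CN-AI", [0.4, 0.6])]
    print(FirstNetworkElement().run_federated_inference(req, nes))   # approximately [0.3, 0.7]
```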
  • As shown in Fig. 4, Fig. 4 is a schematic flowchart of a data processing method 400 provided by an exemplary embodiment of the present application. The method 400 may be executed by, but is not limited to, a first network element (such as a terminal or a network side device); specifically, it may be executed by hardware and/or software installed in the first network element.
  • the method 400 may at least include the following steps.
  • the first network element sends a federated inference request message to at least one second network element when performing a federated inference process corresponding to the first inference task.
  • the federated inference request message includes at least relevant information of the first inference task, and the second network element is a network element participating in the federated inference process.
  • The federated inference request message may include at least one of the following items (11)-(15).
  • Model instance identification information (model instance ID)
  • The model instance identification information is used to identify the target model required for the federated reasoning process and to uniquely specify to the second network element a local inference model used for reasoning; for example, the second network element may associate a local inference model corresponding to the target model according to the model instance identification information.
  • the local inference model may belong to a sub-model or a part of the target model.
  • The local inference model mentioned in this application may be obtained by the second network element through local training, or may be acquired from another network element (such as the fourth network element, where the fourth network element is a network element that participates in the target model training process and is responsible for local model training).
  • The target model may be obtained by the first network element through model training, for example local distributed model training through a vertical federated model training process, or it may be obtained by the first network element from the third network element before or after performing the federated inference process.
  • the following takes the target model obtained by the first network element from the third network element as an example to describe the process of obtaining the target model, wherein the process of obtaining the target model may include the following ( 111)-(113), the contents are as follows.
  • the first network element sends a model request message to the third network element.
  • The model request message may at least carry relevant information of the first model training task (which can also be understood as model training task information determined by the first reasoning task information), so as to request the third network element to train and/or feed back a target model corresponding to the first reasoning task.
  • the model request message may include at least one of the following (11101)-(11104).
  • the model training task corresponds to the first reasoning task, so as to instruct the third network element to train a target model corresponding to the first reasoning task, such as a data analysis AI model used for a data analysis task.
  • The type information of the model training task may include an analytics identification (analytics ID), a model identification (model ID), etc., to indicate for which machine learning task (that is, model training task) the third network element performs model training.
  • The type information may also be represented by a number or other encoding, which is not limited here.
  • The identification information of the model training task may also include an analytics ID, a model ID, etc., to indicate for which machine learning task (that is, model training task) the third network element performs model training.
  • the model request message may include any one of identification information of the model training task and type information of the model training task.
  • Relevant information of the second filter: this can also be understood as filter information of model training, used to limit at least one of the target object (such as target UE(s)), target time (such as a target time period), and target area (such as a target area of interest, target AOI) corresponding to the model training task, so that the third network element may perform model training according to the relevant information of the second filter.
  • Model feedback related information the model feedback related information may include at least one of model feedback format and feedback condition.
  • The model feedback format can be understood as model framework information, such as an expression based on the TensorFlow or PyTorch model framework, or as a cross-framework model expression (such as ONNX).
  • the feedback condition may include an event trigger and/or a periodic trigger, wherein the event trigger includes at least one of (a)-(c).
  • the third network element queries or trains the target model according to the model request message.
  • the third network element may select the target model from already trained models.
  • When the third network element trains the target model, it can perform the training based on a vertical federated model training process; that is, the third network element can send a federated model training request message to each training entity to request each training entity to perform local model training and feed back model training intermediate data, and the third network element then performs model training based on the received model training intermediate data fed back by each training entity to obtain the target model. In this way, the training entities are combined to perform local distributed model training to obtain the target model, which can not only protect the security and privacy of the data on each training entity, but also ensure the reliability of the target model.
  • the third network element may send related information of the target model to the first network element, so as to indicate the target model to the first network element.
  • the relevant information of the target model may at least include the target model information.
  • the target model information includes at least one of the following (1121)-(1126).
  • Model structure information: used to indicate the specific model structure of the target model (such as a neural network, a deep neural network (DNN), a convolutional neural network (CNN), a recurrent neural network (RNN), a linear structure, etc.).
  • model structure information may further specify the layers of the neural network, the neuron nodes, the relationship between the input and output of each layer, etc., and there is no limitation here.
  • Model parameter information is an internal configuration variable of the target model for defining functions of the target model, which can be obtained through data estimation or model training.
  • the model parameters may include weights in artificial neural networks, support vectors in support vector machines, coefficients in linear regression or logistic regression, and the like.
  • Model algorithm information may include, but is not limited to, decision trees, Bayesian classifiers, K-nearest neighbors, support vector machines, and the like.
  • Model hyperparameter information is a configuration variable outside the target model, which is generally used in the training process of model parameters and can usually be directly specified by practitioners.
  • For example, the model hyperparameter information may include the learning rate for training a neural network, the C and sigma hyperparameters of a support vector machine, the parameter k in the K-nearest-neighbor algorithm, the loss function corresponding to the target model, the predetermined value of the loss function when the target model converges, etc.
  • Type information of model input data: that is, when the target model is used for data reasoning, the type of the model input data (which can also be understood as the inference data) of the target model.
  • Type information of model output data: that is, when the target model is used for data reasoning, the type of the model output data (which can also be understood as the first inference result and the second inference result) of the target model.
  • The related information of the target model may also include at least one of model instance identification information, second indication information, related information of the fourth network element, and model training configuration information.
  • the model instance identification information may be the same as or different from the model instance identification information described in (11) above.
  • When the target model is trained by the third network element, the model instance identification information may be allocated by the third network element and used to uniquely identify the target model. For example, when the first network element determines to perform the federated model training process corresponding to the model training task, it assigns model instance identification information to the model training task, which is used to indicate the model entity trained for the model training task, that is, the target model.
  • model instance identification information may also be used by the second network element to associate the local model with the target model.
  • the second indication information is used to indicate that the target model is a vertical federated learning model, that is, the target model is obtained through a vertical federated model training process.
  • the fourth network element is a network element participating in the training of the target model, that is, the fourth network element is a training entity corresponding to the target model.
  • the relevant information of the fourth network element may include identification information, address information, fully qualified domain name (Fully Qualified Domain Name, FQDN) information, name information, etc. of the fourth network element.
  • The first network element may determine the information of the at least one second network element according to the relevant information of the target model. For example, the first network element determines the information of the at least one second network element according to the relevant information of the fourth network element included in the relevant information of the target model, where the fourth network element is a network element participating in the training of the target model.
  • It can be understood that the second network element may also be a training entity participating in the vertical federated model training process; that is, the second network element may be the same as the fourth network element.
  • the first network element receives the relevant information of the target model sent by the third network element.
  • the first network element may save the relevant information of the target model for subsequent federated reasoning process.
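  • Purely as an illustration of the (111)-(113) exchange, the Python sketch below shows one possible shape for the model request message and the relevant information of the target model returned by the third network element. The field names, the ThirdNetworkElementStub class, and the example values are assumptions made for this sketch, not structures defined by the application.

```python
# Hypothetical sketch of the (111)-(113) exchange: the first network element asks a third
# network element for the target model; field names mirror the text but are illustrative.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ModelRequest:
    """Model request message of step (111)."""
    analytics_id: str                            # type / identification info of the model training task
    filter_info: Dict[str, str] = field(default_factory=dict)   # second-filter info: target UE/time/area
    feedback_format: str = "ONNX"                # model feedback related info (framework / cross-framework)
    feedback_condition: str = "on_completion"    # event-triggered and/or periodic feedback


@dataclass
class TargetModelInfo:
    """Relevant information of the target model returned in step (113)."""
    model_instance_id: str                       # uniquely identifies the trained target model
    structure: str                               # model structure info, e.g. "DNN"
    parameters: List[float]                      # model parameter info (weights, coefficients, ...)
    algorithm: str                               # model algorithm info
    hyperparameters: Dict[str, float]            # model hyperparameter info
    input_types: List[str]                       # type info of model input data
    output_types: List[str]                      # type info of model output data
    is_vertical_federated: bool = True           # second indication info
    training_participants: List[str] = field(default_factory=list)  # fourth-network-element info


class ThirdNetworkElementStub:
    def handle_model_request(self, req: ModelRequest) -> TargetModelInfo:
        # Step (112): query an already trained model or trigger (vertical federated) training.
        return TargetModelInfo(
            model_instance_id="model-42", structure="DNN", parameters=[0.1, 0.9],
            algorithm="logistic_regression", hyperparameters={"learning_rate": 0.01},
            input_types=["RAN_location", "CN_mobility"], output_types=["mobility_prediction"],
            training_participants=["RAN-AI", "CN-AI"])


# Usage: the first network element stores the returned info for the later federated inference.
target_model = ThirdNetworkElementStub().handle_model_request(ModelRequest(analytics_id="analytics-7"))
```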
  • the identification information of the first reasoning task is used to indicate the type or purpose of the reasoning task to the second network element.
  • the identification information of the first reasoning task may be an analysis identification (analytics ID) or the like.
  • the first indication information is used to indicate that the federated reasoning process is a vertical federated reasoning process.
  • The vertical federated reasoning process can be understood as follows: the inference data participating in the federated reasoning process have the same samples but different sample features across different second network elements.
  • The essence of the vertical federated reasoning process is the combination of sample features, which is suitable for scenarios where users (i.e., samples) overlap more and features (i.e., sample features) overlap less, such as the CN domain and the RAN domain in a communication network serving the same user.
  • The present application jointly infers over the different sample features of the common samples of the participating parties (that is, the second network elements), so that the feature dimension of the reasoning data is increased and a better reasoning result can be obtained.
  • the relevant information of the first filter is used to define the inference sample information corresponding to the first inference task, and the inference sample information includes: at least one of inference object information, inference time information, and inference area information.
  • The relevant information of the first filter may be analytics filter information, etc., so that each second network element collects, according to the relevant information of the first filter, reasoning data corresponding to the reasoning object information, reasoning time information, and reasoning area information, so as to perform local data reasoning.
  • The reporting information corresponding to the first reasoning result includes at least one of the following (151)-(152).
  • the reporting format of the first reasoning result is used to indicate which data format the second network element uses to report the first reasoning result after obtaining the first reasoning result.
  • reporting condition may include event trigger and/or periodic trigger.
  • event trigger may include any one of (1521)-(1522).
  • For example, after completing the number of inference rounds specified in the reporting information, the second network element reports a first inference result.
  • The number of inference rounds specified by the first network element in the reporting information may be different for different second network elements, which allows the first network element to align the pace at which each inference entity (that is, each second network element) reports its first inference result in the federated inference process, thereby ensuring that the reporting of each inference entity proceeds at a consistent pace and preventing the straggler problem.
  • the second network element reports the first reasoning result.
  • the periodic triggering means that the second network element can periodically report the first reasoning result, such as reporting the first reasoning result every 5 minutes.
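  • As a non-normative illustration, the snippet below shows one possible JSON-style encoding of the fields (11)-(15) that a federated inference request message might carry. The key names and example values are assumptions made for this sketch only.

```python
# Purely illustrative JSON-style encoding of the fields (11)-(15) a federated inference
# request message might carry; key names are assumptions, not defined by the application.
import json

federated_inference_request = {
    "model_instance_id": "model-42",                 # (11) identifies the target model
    "analytics_id": "analytics-7",                   # (12) identification info of the first inference task
    "vertical_federated_inference": True,            # (13) first indication information
    "filter_info": {                                 # (14) relevant info of the first filter
        "inference_objects": ["UE-1", "UE-2"],
        "inference_time": "Mon 07:00-09:00",
        "inference_area": "cell-1",
    },
    "reporting_info": {                              # (15) reporting info for the first inference result
        "format": "probability_vector",              # (151) reporting format
        "condition": {                               # (152) reporting condition
            "event": {"rounds": 10},                 # e.g. report after a given number of inference rounds
            "periodic": {"interval_s": 300},         # e.g. report every 5 minutes
        },
    },
}

print(json.dumps(federated_inference_request, indent=2))
```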
  • In this application, in addition to being triggered by the first network element according to its own reasoning task or by the consumer device as described above, the federated reasoning process may also be determined as follows: the first network element determines whether a first condition holds, and if the first condition holds, the first network element determines to perform the federated model reasoning process.
  • the first condition may include at least one of the following (21)-(23).
  • The first network element does not store or cannot acquire all or part of the inference data corresponding to the inference process. For example, due to data security or data privacy issues, the first network element lacks part or all of the inference data of each inference entity (that is, each second network element), so that the first network element needs to use the federated model inference process to perform local distributed model reasoning jointly with each second network element.
  • the at least one second network element can provide all or part of the inference data corresponding to the inference process.
  • the inference data samples required by the inference task between the second network elements are the same, but the characteristics of the samples are different.
  • the reasoning data used for federated model reasoning is the MM-related data generated in the CN by the same or a group of UEs, or the location data generated in the RAN, or the service experience data generated in a third-party service.
  • Which of the foregoing (21)-(23) the first condition includes may be agreed upon by protocol, configured by a higher layer, or configured by the network side, which is not limited here; a simple illustrative check is sketched below.
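  • The check sketched below is one hypothetical way a first network element could evaluate conditions (21)-(23); the data model (feature and sample sets per network element) and the helper name are assumptions of this sketch.

```python
# Illustrative check of the first condition (21)-(23) that could trigger federated inference;
# the data model and helper names are assumptions for the sketch only.
from typing import Dict, Set


def should_run_federated_inference(local_data_features: Set[str],
                                   required_features: Set[str],
                                   remote_features: Dict[str, Set[str]],
                                   remote_samples: Dict[str, Set[str]]) -> bool:
    # (21) the first network element lacks some or all of the inference data it needs.
    missing = required_features - local_data_features
    # (22) at least one second network element can provide some or all of the missing data.
    covered = any(missing & feats for feats in remote_features.values())
    # (23) the second network elements share the same inference samples but hold different features
    #      (the vertical federated setting described above).
    sample_sets = list(remote_samples.values())
    same_samples = all(s == sample_sets[0] for s in sample_sets) if sample_sets else False
    feature_sets = list(remote_features.values())
    disjoint_features = all(feature_sets[i].isdisjoint(feature_sets[j])
                            for i in range(len(feature_sets)) for j in range(i + 1, len(feature_sets)))
    return bool(missing) and covered and same_samples and disjoint_features


# Example: RAN holds location features, CN holds mobility features, for the same UEs.
print(should_run_federated_inference(
    local_data_features=set(),
    required_features={"location", "mobility"},
    remote_features={"RAN-AI": {"location"}, "CN-AI": {"mobility"}},
    remote_samples={"RAN-AI": {"UE-1", "UE-2"}, "CN-AI": {"UE-1", "UE-2"}}))   # -> True
```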
  • the first network element receives first information sent by the at least one second network element.
  • The first information includes at least a first inference result, where the first inference result is obtained by the second network element through inference according to a local inference model, and the local inference model may be determined by the second network element according to the model instance identification information included in the federated inference request message; the local inference model corresponds to the target model that needs to be used in the federated inference process. The process by which the second network element determines the local inference model according to the model instance identification information included in the federated inference request message and obtains the first inference result by inference according to the local inference model may include (31)-(33), the contents of which are as follows.
  • the second network element determines the local inference model and the type information of inference input data according to the model instance identification information in the federated inference request message.
  • the second network element acquires inference input data according to the type information of the inference input data and the related information of the first filter in the federated inference request message.
  • the second network element may, according to the type information of the inference input data and the relevant information of the first filter, determine to collect inferences of all UEs in cell 1 during the period of 07:00-09:00 every Monday Data, that is, inference input data.
  • the second network element performs inference based on the inference input data and the local inference model to obtain the first inference result.
  • the second network element inputs the collected inference input data into the local inference model to obtain the first inference result.
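  • The following sketch illustrates steps (31)-(33) on a second network element under assumed data structures: a local registry keyed by model instance identification information, a toy local data store, and a trivial callable standing in for the local inference model. None of these names are defined by the application.

```python
# Hypothetical sketch of steps (31)-(33) on a second network element: resolve the local
# inference model from the model instance ID, collect inference input data matching the
# first filter, then run local inference. All names and data are illustrative.
from typing import Callable, Dict, List, Tuple

# (31) local registry mapping a model instance ID to (local inference model, input data types);
#      here the "model" is just a callable over a feature vector.
LOCAL_MODELS: Dict[str, Tuple[Callable[[List[float]], List[float]], List[str]]] = {
    "model-42": (lambda x: [sum(x) / len(x)], ["RAN_location"]),
}

# Local data store keyed by (UE, feature type); values are illustrative measurements.
LOCAL_DATA: Dict[Tuple[str, str], List[float]] = {
    ("UE-1", "RAN_location"): [0.2, 0.4],
    ("UE-2", "RAN_location"): [0.6, 0.8],
}


def local_inference(model_instance_id: str, filter_info: Dict[str, List[str]]) -> Dict[str, List[float]]:
    # (31) determine the local inference model and the type of inference input data.
    model, input_types = LOCAL_MODELS[model_instance_id]
    # (32) collect inference input data according to the input types and the first-filter info.
    results: Dict[str, List[float]] = {}
    for ue in filter_info.get("inference_objects", []):
        features = [v for t in input_types for v in LOCAL_DATA.get((ue, t), [])]
        # (33) run local inference to obtain the first inference result for this sample.
        if features:
            results[ue] = model(features)
    return results


print(local_inference("model-42", {"inference_objects": ["UE-1", "UE-2"]}))
# approximately {'UE-1': [0.3], 'UE-2': [0.7]}
```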
  • the first information may further include at least one of the following (41)-(43).
  • Model instance identification information: used to identify the target model required for the federated reasoning process; in other words, the model instance identification information is used to indicate to the first network element the identification of the target model corresponding to the first inference result.
  • Inference sample information corresponding to the first inference result where the inference sample information includes: at least one of inference object information (such as UE ID(s) or any UE, etc.), inference time information, and inference area information .
  • For the model instance identification information and the identification information of the first inference task, reference can be made to the relevant description of the aforementioned federated inference request message; the inference sample information corresponding to the first inference result is determined by the second network element according to the inference sample information collected at the time of inference, which is not limited here.
  • the first network element determines a second inference result corresponding to the first inference task according to at least one of the first inference results.
  • the implementation process of S430 may include S431, the content of which is as follows.
  • the first network element calculates the second inference result according to the target model and at least one of the first inference results.
  • the first network element may associate and align the first inference result sent by the at least one second network element and targeting the same inference sample according to the inference sample information corresponding to the first inference result ;
  • the first network element inputs the first inference result for the same inference sample into the target model, and obtains the second inference result for the same inference sample.
  • When the first network element performs association and alignment on the first inference results sent by the second network elements, this may be implemented according to UE granularity, or according to time information, etc., which is not limited here.
  • For example, the first network element may calculate the final second inference result according to UE granularity; that is, the first network element calculates the final federated inference result corresponding to a UE, namely the second inference result, based on the target model corresponding to the model instance identification information and the first inference results corresponding to that UE received from each of the second network elements.
  • the first network element may also associate and align the inference data at the same time point according to the time information. For example, the first network element performs an associated operation on the first reasoning result of the same UE in the RAN domain and the CN domain and both at 8:00, to obtain the final reasoning result corresponding to 8:00, that is, the second reasoning result.
  • UE IDs from different second network elements may be different.
  • the UE ID from the RAN may be the radio access network NG application protocol ID (RAN NG Application Protocol ID, RAN NGAP ID), while the UE ID of the CN is the AMF NGAP ID or SUPI (Subscription Permanent Identifier, SUPI), etc.
  • Therefore, the first network element associates the data and inference results of the same UE according to the mapping relationship between different UE identifications (such as the mapping relationship between the RAN NGAP ID and the AMF NGAP ID).
  • When the first network element inputs the first inference results for the same inference sample into the target model, it may also take its own inference data for the same inference sample as model input data and input it into the target model together with the first inference results, to obtain the second inference result.
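  • The sketch below illustrates the association and alignment step under assumed identifiers: first inference results from the RAN and CN domains are joined per UE via a hypothetical RAN NGAP ID to AMF NGAP ID mapping and then fed into a stand-in target model (a fixed weighted combination) to obtain the second inference result.

```python
# Illustrative sketch of the association/alignment step: first inference results from the
# RAN and CN domains are joined per UE via an assumed RAN NGAP ID <-> AMF NGAP ID mapping,
# then fed into a stand-in "target model" to produce the second inference result.
from typing import Dict, List

# Hypothetical mapping maintained by the first network element between UE identifiers.
RAN_TO_AMF_ID: Dict[str, str] = {"ran-ngap-1001": "amf-ngap-77"}

# First inference results keyed by the domain-local UE identifier.
ran_results: Dict[str, List[float]] = {"ran-ngap-1001": [0.3]}   # e.g. location-based score
cn_results: Dict[str, List[float]] = {"amf-ngap-77": [0.9]}      # e.g. mobility-based score


def target_model(aligned_features: List[float]) -> float:
    # Stand-in for the target model: a fixed weighted combination of the aligned inputs.
    weights = [0.4, 0.6]
    return sum(w * f for w, f in zip(weights, aligned_features))


def aggregate_per_ue() -> Dict[str, float]:
    second_results: Dict[str, float] = {}
    for ran_id, amf_id in RAN_TO_AMF_ID.items():
        if ran_id in ran_results and amf_id in cn_results:
            # Associate and align the first inference results that target the same UE,
            # then compute the second inference result for that UE.
            aligned = ran_results[ran_id] + cn_results[amf_id]
            second_results[amf_id] = target_model(aligned)
    return second_results


print(aggregate_per_ue())   # -> {'amf-ngap-77': 0.66}
```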
  • the federated reasoning process is exemplarily described below by taking the federated reasoning process triggered by the consumer device as an example.
  • the data reasoning process may include one or more second network elements.
  • For example, the data reasoning process shown in Fig. 5 includes two second network elements, that is, inference entity 1 and inference entity 2 shown in Fig. 5.
  • the consumer device sends an inference task request message to the first network element.
  • The first network element determines, according to the inference task request message, whether to perform a federated inference process corresponding to the first inference task.
  • the first network element sends a model request message to the third network element when performing the federated inference process corresponding to the first inference task.
  • the first network element receives the target model related information sent by the third network element, where the target model related information includes at least the target model information.
  • the first network element determines information of a second network element capable of participating in the federated inference process, such as inference entity 1 and inference entity 2, according to the relevant information of the target model.
  • the first network element sends a federated inference request message to the inference entity 1 and the inference entity 2.
  • When receiving the federated inference request message, inference entity 1 determines the local inference model according to the model instance identification information included in the federated inference request message, and obtains a first inference result by inference according to the local inference model.
  • When receiving the federated inference request message, inference entity 2 determines the local inference model according to the model instance identification information included in the federated inference request message, and obtains a first inference result by inference according to the local inference model.
  • the inference entity 1 and the inference entity 2 respectively send the first inference result to the first network element.
  • the first network element sends the second reasoning result to the consumer device.
  • It should be noted that the federated reasoning process may include, but is not limited to, the aforementioned S501-S510; for example, it may include more or fewer steps than S501-S510, which is not limited here.
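  • For orientation only, the compact simulation below walks through a flow similar to S501-S510 with stubbed network elements; the class names, messages, and the averaging used as the target model are assumptions of this sketch, and the mapping of comments to step numbers is approximate.

```python
# A compact, hypothetical end-to-end walk through a flow similar to S501-S510 with stubbed
# network elements; none of these classes or messages are defined by the application itself.
class Consumer:
    def request(self, first_ne):
        # Roughly S501: the consumer device sends an inference task request message.
        return first_ne.on_task_request("analytics-7")


class ThirdNE:
    def model(self, analytics_id):
        # Roughly S503/S504: model request and the returned target model related information.
        return {"model_instance_id": "model-42", "participants": ["entity-1", "entity-2"]}


class InferenceEntity:
    def __init__(self, value):
        self.value = value

    def infer(self, req):
        # Roughly S507-S509: local inference and reporting of the first inference result.
        return self.value


class FirstNE:
    def __init__(self, third_ne, entities):
        self.third_ne, self.entities = third_ne, entities

    def on_task_request(self, analytics_id):
        # Roughly S502-S506: decide to run federated inference, fetch the target model,
        # determine the participating inference entities, and fan out the request.
        model = self.third_ne.model(analytics_id)
        req = {"model_instance_id": model["model_instance_id"], "analytics_id": analytics_id}
        results = [e.infer(req) for e in self.entities]
        # Aggregate into the second inference result returned to the consumer (roughly S510).
        return sum(results) / len(results)


print(Consumer().request(FirstNE(ThirdNE(), [InferenceEntity(0.2), InferenceEntity(0.8)])))  # -> 0.5
```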
  • As shown in Fig. 6, Fig. 6 is a schematic flowchart of a data processing method 600 provided by an exemplary embodiment of the present application. The method 600 may be executed by, but is not limited to, a second network element (such as a terminal or a network-side device); specifically, it may be executed by hardware and/or software installed in the second network element.
  • the method 600 may at least include the following steps.
  • the second network element receives a federated inference request message sent by the first network element, where the federated inference request message includes at least information about the first inference task.
  • the second network element performs inference according to the federated inference request message to obtain a first inference result.
  • the second network element sends first information to the first network element, where the first information includes at least the first inference result.
  • The federated inference request message includes at least one of the following: model instance identification information, where the model instance identification information is used to identify the target model required for the federated inference process; identification information of the first inference task; first indication information, used to indicate that the federated inference process is a vertical federated inference process; relevant information of the first filter, used to limit the inference sample information corresponding to the first inference task, where the inference sample information includes at least one of inference object information, inference time information, and inference area information; and reporting information corresponding to the first inference result.
  • the reporting information of the first reasoning result includes at least one of the following: a reporting format of the first reasoning result; a reporting condition of the first reasoning result.
  • The first information further includes at least one of the following: model instance identification information, where the model instance identification information is used to identify the target model required for the federated reasoning process; identification information of the first reasoning task; and inference sample information corresponding to the first inference result, where the inference sample information includes at least one of inference object information, inference time information, and inference area information.
  • the step of the second network element performing inference according to the federated inference request message to obtain the first inference result includes: the second network element determines according to the model instance identification information in the federated inference request message a local reasoning model, and type information of reasoning input data; the second network element acquires reasoning input data according to the type information of the reasoning input data and the relevant information of the first filter in the federated reasoning request message; The second network element performs inference based on the inference input data and the local inference model to obtain the first inference result.
  • the data processing method provided in the embodiment of the present application may be executed by a data processing device.
  • the data processing device provided in the embodiment of the present application is described by taking the data processing device performing the data processing method as an example.
  • The device 700 may include: a first sending module 710, configured to send a federated inference request message to at least one second network element in the case of performing a federated inference process corresponding to a first inference task, where the federated inference request message includes at least relevant information of the first inference task and the second network element is a network element participating in the federated inference process; a first receiving module 720, configured to receive first information sent by the at least one second network element, where the first information includes at least a first reasoning result; and a first reasoning module 730, configured to determine a second reasoning result corresponding to the first reasoning task according to at least one of the first reasoning results.
  • The first inference module is further configured to determine that a first condition is satisfied, where the first condition includes at least one of the following: the first network element does not store or cannot obtain all or part of the inference data corresponding to the inference process; the at least one second network element can provide all or part of the inference data corresponding to the inference process; the samples of the inference data required by the inference task are the same between the second network elements, but the features of the samples are different.
  • The federated inference request message includes at least one of the following: model instance identification information, where the model instance identification information is used to identify the target model required for the federated inference process; identification information of the first inference task; first indication information, used to indicate that the federated inference process is a vertical federated inference process; relevant information of the first filter, used to limit the inference sample information corresponding to the first inference task, where the inference sample information includes at least one of inference object information, inference time information, and inference area information; and reporting information corresponding to the first inference result.
  • the reporting information corresponding to the first reasoning result includes at least one of the following: a reporting format of the first reasoning result; a reporting condition of the first reasoning result.
  • The first inference result is inferred by the second network element according to a local inference model, and the local inference model is determined by the second network element according to the model instance identification information included in the federated inference request message.
  • The first information further includes at least one of the following: model instance identification information, used to identify the target model required for the federated reasoning process; identification information of the first reasoning task; and inference sample information corresponding to the first inference result, where the inference sample information includes at least one of inference object information, inference time information, and inference area information.
  • The first reasoning module 730 being configured to determine a second reasoning result corresponding to the first reasoning task according to at least one of the first reasoning results includes: calculating the second reasoning result according to the target model and at least one of the first reasoning results.
  • The step of the first reasoning module 730 calculating the second reasoning result according to the target model and at least one of the first reasoning results includes: according to the inference sample information corresponding to the first inference results, associating and aligning the first inference results that are sent by the at least one second network element and that target the same inference sample; and inputting the first inference results for the same inference sample into the target model to obtain the second inference result for the same inference sample.
  • The first sending module 710 is further configured to send a model request message to a third network element, where the model request message is used to request the third network element to train and/or feed back the target model; the first receiving module 720 is further configured to receive the related information of the target model sent by the third network element, where the related information of the target model includes at least the target model information.
  • the model request message includes at least one of the following: type information of the model training task; identification information of the model training task; related information of the second filter, used to limit the target corresponding to the model training task At least one of object, target time and target area; model feedback related information, the model feedback related information includes at least one of model description method and model feedback time.
  • the related information of the target model further includes at least one of the following: model instance identification information; second indication information, used to indicate that the target model is a vertical federated learning model; related information of the fourth network element, the The fourth network element is a network element participating in the training of the target model.
  • the target model information includes at least one of the following: model structure information; model parameter information; model algorithm information; model hyperparameter information; model input data type information; model output data type information.
  • the first reasoning module 730 is further configured to determine the information of the at least one second network element according to the relevant information of the target model.
  • the step of the first inference module 730 determining the information of the at least one second network element according to the related information of the target model includes: determining the information of the at least one second network element according to the related information of the fourth network element included in the related information of the target model, where the fourth network element is a network element participating in the training of the target model.
  • the first receiving module 720 is further configured to receive an inference task request message sent by a consumer device, where the inference task request message includes related information of the first inference task; the first sending module 710 is further configured to send the second inference result to the consumer device.
  • the apparatus 800 may include: a second receiving module 810, configured to receive a federated inference request message sent by the first network element, where the federated inference request message includes at least related information of the first inference task; a second inference module 820, configured to perform inference according to the federated inference request message to obtain a first inference result; and a second sending module 830, configured to send first information to the first network element, where the first information includes at least the first inference result.
  • the federated inference request message includes at least one of the following: model instance identification information, used to identify the target model required for the federated inference process; identification information of the first inference task; first indication information, used to indicate that the federated inference process is a vertical federated inference process; related information of the first filter, used to limit the inference sample information corresponding to the first inference task, where the inference sample information includes at least one of inference object information, inference time information, and inference area information; and reporting information corresponding to the first inference result.
  • the reporting information of the first inference result includes at least one of the following: a reporting format of the first inference result; a reporting condition of the first inference result.
  • the first information further includes at least one of the following: model instance identification information, used to identify the target model required for the federated inference process; identification information of the first inference task; inference sample information corresponding to the first inference result, where the inference sample information includes at least one of inference object information, inference time information, and inference area information.
  • the step of the second inference module 820 performing inference according to the federated inference request message to obtain the first inference result includes: determining a local inference model and type information of the inference input data according to the model instance identification information in the federated inference request message; acquiring the inference input data according to the type information of the inference input data and the related information of the first filter in the federated inference request message; and performing inference based on the inference input data and the local inference model to obtain the first inference result.
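A minimal sketch of that second-network-element flow is shown below, assuming a hypothetical local model registry keyed by model instance identifier and a placeholder data-collection function; neither is defined by this application.

```python
from typing import Callable, Dict, List, Tuple

# Hypothetical registry: model instance id -> (local model, expected input data types)
LOCAL_MODELS: Dict[str, Tuple[Callable[[List[float]], List[float]], List[str]]] = {
    "model-instance-001": (lambda x: [sum(x)], ["rsrp", "throughput"]),
}

def collect_input_data(data_types: List[str], filter_info: dict) -> List[float]:
    """Placeholder for gathering local inference input data matching the first filter."""
    return [1.0 for _ in data_types]      # stand-in values for illustration only

def handle_federated_inference_request(request: dict) -> dict:
    """Sketch of a second network element producing its first inference result."""
    # 1. Determine the local inference model and the input data types from the model instance id.
    local_model, input_types = LOCAL_MODELS[request["model_instance_id"]]
    # 2. Acquire inference input data according to the input types and the first filter.
    inference_input = collect_input_data(input_types, request.get("filter_info", {}))
    # 3. Run local inference to obtain the first inference result.
    first_result = local_model(inference_input)
    # 4. Assemble the first information to be sent back to the first network element.
    return {
        "model_instance_id": request["model_instance_id"],
        "analytics_id": request.get("analytics_id"),
        "inference_sample_info": request.get("filter_info", {}),
        "first_inference_result": first_result,
    }

print(handle_federated_inference_request(
    {"model_instance_id": "model-instance-001", "analytics_id": "UE mobility"}))
```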
  • the data processing apparatus 700-800 in the embodiment of the present application may be a communication device, such as a communication device with an operating system, or a component in the communication device, such as an integrated circuit or a chip.
  • the communication device may be a terminal, or a network-side device other than a terminal.
  • the terminal may include, but is not limited to, the types of the terminal 11 listed above.
  • the network-side device may include, but is not limited to, the types of the network-side device 12 listed above, which is not specifically limited in this embodiment of the present application.
  • the data processing apparatuses 700-800 provided in the embodiments of the present application can implement the processes implemented by the method embodiments in FIG. 3 to FIG. 6 and achieve the same technical effect; to avoid repetition, details are not repeated here.
  • optionally, as shown in FIG. 9, this embodiment of the present application further provides a communication device 900, including a processor 901 and a memory 902, where the memory 902 stores a program or instructions executable on the processor 901.
  • when the communication device 900 is a terminal and the program or instructions are executed by the processor 901, each step of the above data processing method embodiments can be implemented with the same technical effect.
  • when the communication device 900 is a network-side device and the program or instructions are executed by the processor 901, each step of the above data processing method embodiments can likewise be implemented with the same technical effect; to avoid repetition, details are not repeated here.
  • when the communication device is a terminal, as shown in FIG. 10, which is a schematic structural diagram of a terminal provided in an embodiment of the present application, the terminal includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or instructions to implement the steps of the methods described in the method embodiments 300-600.
  • This terminal embodiment corresponds to the above-mentioned terminal-side method embodiment, and each implementation process and implementation mode of the above-mentioned method embodiment can be applied to this terminal embodiment, and can achieve the same technical effect.
  • FIG. 10 is a schematic diagram of a hardware structure of a terminal implementing an embodiment of the present application.
  • the terminal 1000 includes, but is not limited to, at least some of the following components: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010.
  • the terminal 1000 may further include a power supply (such as a battery) for supplying power to the components; the power supply may be logically connected to the processor 1010 through a power management system, so that functions such as charging management, discharging management, and power consumption management are implemented through the power management system.
  • the terminal structure shown in FIG. 10 does not constitute a limitation on the terminal, and the terminal may include more or fewer components than shown in the figure, or combine certain components, or arrange different components, which will not be repeated here.
  • the input unit 1004 may include a graphics processing unit (Graphics Processing Unit, GPU) 10041 and a microphone 10042, where the graphics processor 10041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode.
  • the display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like.
  • the user input unit 1007 includes at least one of a touch panel 10071 and other input devices 10072 .
  • the touch panel 10071 is also called a touch screen.
  • the touch panel 10071 may include two parts, a touch detection device and a touch controller.
  • Other input devices 10072 may include, but are not limited to, physical keyboards, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, and joysticks, which will not be repeated here.
  • after receiving downlink data from the network-side device, the radio frequency unit 1001 may transmit the downlink data to the processor 1010 for processing; in addition, the radio frequency unit 1001 may send uplink data to the network-side device.
  • the radio frequency unit 1001 includes, but is not limited to, an antenna, an amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
  • the memory 1009 can be used to store software programs or instructions as well as various data.
  • the memory 1009 may mainly include a first storage area for storing programs or instructions and a second storage area for storing data, wherein the first storage area may store an operating system, an application program or instructions required by at least one function (such as a sound playing function, image playback function, etc.), etc.
  • memory 1009 may include volatile memory or nonvolatile memory, or, memory 1009 may include both volatile and nonvolatile memory.
  • the non-volatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), or a flash memory.
  • the volatile memory may be a random access memory (Random Access Memory, RAM), a static random access memory (Static RAM, SRAM), a dynamic random access memory (Dynamic RAM, DRAM), a synchronous dynamic random access memory (Synchronous DRAM, SDRAM), a double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDRSDRAM), an enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), a synchlink dynamic random access memory (Synchlink DRAM, SLDRAM), or a direct rambus random access memory (Direct Rambus RAM, DRRAM).
  • the processor 1010 may include one or more processing units; optionally, the processor 1010 integrates an application processor and a modem processor, where the application processor mainly handles operations related to the operating system, the user interface, and application programs, while the modem processor, such as a baseband processor, mainly handles wireless communication signals. It can be understood that the modem processor may alternatively not be integrated into the processor 1010.
  • the radio frequency unit 1001 is configured to send a federated inference request message to at least one second network element when a federated inference process corresponding to the first inference task is performed, where the federated inference request message includes at least related information of the first inference task, and the second network element is a network element participating in the federated inference process; the radio frequency unit 1001 is configured to receive first information sent by the at least one second network element, where the first information includes at least a first inference result; and the processor 1010 is configured to determine a second inference result corresponding to the first inference task according to at least one first inference result.
  • the first inference module is further configured to determine that a first condition is satisfied, where the first condition includes at least one of the following: the first network element does not store or cannot obtain all or part of the inference data corresponding to the inference process; the at least one second network element can provide all or part of the inference data corresponding to the inference process; and the samples of the inference data required for the inference task are the same across the second network elements, but the sample features are different.
  • the federated inference request message includes at least one of the following: model instance identification information, used to identify the target model required for the federated inference process; identification information of the first inference task; first indication information, used to indicate that the federated inference process is a vertical federated inference process; related information of the first filter, used to limit the inference sample information corresponding to the first inference task, where the inference sample information includes at least one of inference object information, inference time information, and inference area information; and reporting information corresponding to the first inference result.
  • the reporting information corresponding to the first inference result includes at least one of the following: a reporting format of the first inference result; a reporting condition of the first inference result.
  • the first inference result is obtained by the second network element through inference with a local inference model, and the local inference model is determined by the second network element according to the model instance identification information included in the federated inference request message.
  • the first information further includes at least one of the following: model instance identification information, used to identify the target model required for the federated inference process; identification information of the first inference task; inference sample information corresponding to the first inference result, where the inference sample information includes at least one of inference object information, inference time information, and inference area information.
  • the step of the processor 1010 determining a second inference result corresponding to the first inference task according to at least one first inference result includes: calculating the second inference result according to the target model and the at least one first inference result.
  • the step of the processor 1010 calculating the second inference result according to the target model and the at least one first inference result includes: associating and aligning, according to the inference sample information corresponding to each first inference result, the first inference results that are sent by the at least one second network element and target the same inference sample; and inputting the first inference results for the same inference sample into the target model to obtain the second inference result for that inference sample.
  • the radio frequency unit 1001 is further configured to send a model request message to a third network element, where the model request message is used to request the third network element to train and/or feed back the target model; the radio frequency unit 1001 is further configured to receive the related information of the target model sent by the third network element, where the related information of the target model includes at least the target model information.
  • the model request message includes at least one of the following: type information of the model training task; identification information of the model training task; related information of the second filter, used to limit at least one of the target object, target time, and target area corresponding to the model training task; and model feedback related information, where the model feedback related information includes at least one of a model description manner and a model feedback time.
  • the related information of the target model further includes at least one of the following: model instance identification information; second indication information, used to indicate that the target model is a vertical federated learning model; and related information of a fourth network element, where the fourth network element is a network element participating in the training of the target model.
  • the target model information includes at least one of the following: model structure information; model parameter information; model algorithm information; model hyperparameter information; model input data type information; model output data type information.
  • the processor 1010 is further configured to determine the information of the at least one second network element according to the relevant information of the target model.
  • the step of the processor 1010 determining the information of the at least one second network element according to the related information of the target model includes: determining the information of the at least one second network element according to the related information of the fourth network element included in the related information of the target model, where the fourth network element is a network element participating in the training of the target model.
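For illustration, the selection of participating second network elements from the target model's related information could be as simple as the sketch below; the field names are assumptions made for this example only.

```python
def select_second_network_elements(target_model_info: dict) -> list:
    """Sketch: reuse the fourth network elements (training participants) as inference participants."""
    fourth_nes = target_model_info.get("fourth_network_elements", [])
    return [ne["id"] for ne in fourth_nes]

model_info = {
    "model_instance_id": "model-instance-001",
    "fourth_network_elements": [{"id": "AMF-instance-1"}, {"id": "RAN-instance-7"}],
}
print(select_second_network_elements(model_info))   # ['AMF-instance-1', 'RAN-instance-7']
```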
  • the radio frequency unit 1001 is further configured to receive an inference task request message sent by a consumer device, where the inference task request message includes related information of the first inference task; the radio frequency unit 1001 is further configured to send the second inference result to the consumer device.
  • the radio frequency unit 1001 is configured to receive a federated inference request message sent by the first network element, where the federated inference request message includes at least related information of the first inference task; the second inference module 820 is configured to perform inference according to the federated inference request message to obtain a first inference result; and the radio frequency unit 1001 is configured to send first information to the first network element, where the first information includes at least the first inference result.
  • the federated inference request message includes at least one of the following: model instance identification information, used to identify the target model required for the federated inference process; identification information of the first inference task; first indication information, used to indicate that the federated inference process is a vertical federated inference process; related information of the first filter, used to limit the inference sample information corresponding to the first inference task, where the inference sample information includes at least one of inference object information, inference time information, and inference area information; and reporting information corresponding to the first inference result.
  • the reporting information of the first inference result includes at least one of the following: a reporting format of the first inference result; a reporting condition of the first inference result.
  • the first information further includes at least one of the following: model instance identification information, used to identify the target model required for the federated inference process; identification information of the first inference task; inference sample information corresponding to the first inference result, where the inference sample information includes at least one of inference object information, inference time information, and inference area information.
  • the step of the second inference module 820 performing inference according to the federated inference request message to obtain the first inference result includes: determining a local inference model and type information of the inference input data according to the model instance identification information in the federated inference request message; acquiring the inference input data according to the type information of the inference input data and the related information of the first filter in the federated inference request message; and performing inference based on the inference input data and the local inference model to obtain the first inference result.
  • when the communication device 900 is a network-side device, as shown in FIG. 11, which is a schematic structural diagram of a network-side device provided in an embodiment of the present application, the network-side device includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or instructions to implement the steps of the methods described in the embodiments 300-600.
  • the network-side device embodiment corresponds to the above-mentioned network-side device method embodiment, and each implementation process and implementation mode of the above-mentioned method embodiment can be applied to this network-side device embodiment, and can achieve the same technical effect.
  • FIG. 11 is a schematic structural diagram of a network-side device 1100 provided in an embodiment of the present application.
  • the network side device 1100 includes: an antenna 1101 , a radio frequency device 1102 , a baseband device 1103 , a processor 1104 and a memory 1105 .
  • the antenna 1101 is connected to the radio frequency device 1102 .
  • in the uplink direction, the radio frequency device 1102 receives information through the antenna 1101 and sends the received information to the baseband device 1103 for processing.
  • in the downlink direction, the baseband device 1103 processes the information to be sent and sends it to the radio frequency device 1102.
  • the radio frequency device 1102 processes the received information and then sends it out through the antenna 1101.
  • the method performed by the network side device in the above embodiments may be implemented in the baseband device 1103, where the baseband device 1103 includes a baseband processor.
  • the baseband device 1103 may include, for example, at least one baseband board on which a plurality of chips are arranged; as shown in FIG. 11, one of the chips is, for example, a baseband processor, which is connected to the memory 1105 through a bus interface so as to call the program in the memory 1105 and perform the network device operations shown in the above method embodiments.
  • the network side device may also include a network interface 1106, such as a common public radio interface (common public radio interface, CPRI).
  • the network-side device 1100 in this embodiment of the present invention further includes instructions or a program stored in the memory 1105 and executable on the processor 1104; the processor 1104 calls the instructions or program in the memory 1105 to execute the methods executed by the modules shown in FIG. 7 or FIG. 8 and achieve the same technical effect; to avoid repetition, details are not repeated here.
  • FIG. 12 is a schematic structural diagram of another network-side device 1200 provided in an embodiment of the present application.
  • the network side device 1200 includes: a processor 1201 , a network interface 1202 and a memory 1203 .
  • the network interface 1202 is, for example, a common public radio interface (common public radio interface, CPRI).
  • the network-side device 1200 in this embodiment of the present invention further includes instructions or a program stored in the memory 1203 and executable on the processor 1201; the processor 1201 calls the instructions or program in the memory 1203 to execute the methods executed by the modules shown in FIG. 7 or FIG. 8 and achieve the same technical effect; to avoid repetition, details are not repeated here.
  • the embodiment of the present application further provides a readable storage medium, where the readable storage medium stores a program or instructions; when the program or instructions are executed by a processor, each process of the above data processing method embodiments is implemented and the same technical effect can be achieved; to avoid repetition, details are not repeated here.
  • the processor is the processor in the terminal described in the foregoing embodiments.
  • the readable storage medium includes a computer-readable storage medium, such as a computer read-only memory ROM, a random access memory RAM, a magnetic disk or an optical disk, and the like.
  • the embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a network-side device program or instructions to implement each process of the above data processing method embodiments and achieve the same technical effect; to avoid repetition, details are not repeated here.
  • the chip mentioned in the embodiment of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip.
  • the embodiment of the present application further provides a computer program product, where the computer program product includes a processor, a memory, and a program or instructions stored in the memory and executable on the processor; when the program or instructions are executed by the processor, each process of the above data processing method embodiments can be implemented with the same technical effect; to avoid repetition, details are not repeated here.
  • the embodiment of the present application further provides a federated data processing system, including at least a first network element and a second network element, where the first network element may be used to perform the steps in the above method embodiments 300-400, and the second network element may be used to perform the steps in the above method embodiment 500.
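Putting the pieces together, a toy end-to-end run of such a federated data processing system might look like the sketch below; the two simulated second network elements and the averaging "target model" are illustrative assumptions, not part of the claimed system.

```python
# Two simulated second network elements, each holding different features of the same UE sample.
def ran_second_ne(request):
    return {"sample": ("ue-1", "08:00"), "result": [0.2, 0.4]}   # RAN-domain partial result

def cn_second_ne(request):
    return {"sample": ("ue-1", "08:00"), "result": [0.9]}        # CN-domain partial result

def first_network_element(second_nes, target_model, request):
    """First network element: send the federated inference request, collect and combine results."""
    first_infos = [ne(request) for ne in second_nes]              # federated inference request/response
    merged = []
    for info in first_infos:
        merged.extend(info["result"])                             # align on the common sample
    return {"sample": first_infos[0]["sample"], "second_result": target_model(merged)}

request = {"model_instance_id": "model-instance-001", "analytics_id": "UE mobility"}
average_model = lambda feats: sum(feats) / len(feats)
print(first_network_element([ran_second_ne, cn_second_ne], average_model, request))
# {'sample': ('ue-1', '08:00'), 'second_result': 0.5}
```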
  • the terms "comprise", "include", or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not preclude the presence of additional identical elements in the process, method, article, or apparatus that includes that element.
  • the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing functions in the order shown or discussed, and may also include performing functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
  • the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation.
  • the technical solution of the present application can be embodied in the form of a computer software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions to cause a terminal (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to execute the methods described in the various embodiments of the present application.

Abstract

This application discloses a data processing method, apparatus, and communication device, and belongs to the technical field of communication. The data processing method of the embodiments of this application includes: when a first network element performs a federated inference process corresponding to a first inference task, the first network element sends a federated inference request message to at least one second network element, where the federated inference request message includes at least related information of the first inference task, and the second network element is a network element participating in the federated inference process; the first network element receives first information sent by the at least one second network element, where the first information includes at least a first inference result; and the first network element determines, according to at least one first inference result, a second inference result corresponding to the first inference task.

Description

数据处理的方法、装置及通信设备
相关申请的交叉引用
本申请主张在2021年12月30日在中国提交的中国专利申请No.202111669990.4的优先权,其全部内容通过引用包含于此。
技术领域
本申请属于通信技术领域,具体涉及一种数据处理的方法、装置及通信设备。
背景技术
在通信领域中,对于某种通信服务,可能需要基于来自不同域(如核心网域、接入网域、管理网域、第三服务网域等)或不同网元上生产的数据进行数据处理(如数据分析等),以提高通信服务的高效性和可靠性。
但是,随着数据安全问题和隐私问题被越来越重视,通信网络中的不同域、不同网元/设备之间可能存在数据隔离的问题,那么,如何基于不同域或不同网元中的数据实现数据推理成为当前急需解决的技术问题。
发明内容
本申请实施例提供一种数据处理的方法、装置及通信设备,能够在不进行数据共享的情况下,联合位于不同域、不同网元上的数据实现数据推理。
第一方面,提供了一种数据处理的方法,包括:第一网元在进行与第一推理任务对应的联邦推理过程的情况下,向至少一个第二网元发送联邦推理请求消息,所述联邦推理请求消息至少包括所述第一推理任务的相关信息,所述第二网元为参与所述联邦推理过程的网元;所述第一网元接收所述至少一个第二网元发送的第一信息,所述第一信息至少包括第一推理结果;所述第一网元根据至少一个所述第一推理结果,确定所述第一推理任务对应的第二推理结果。
第二方面,提供了一种数据处理方法,所述方法包括:第二网元接收第一网元发送的联邦推理请求消息,所述联邦推理请求消息至少包括第一推理任务的相关信息;所述第二网元根据所述联邦推理请求消息进行推理,得到第一推理结果;所述第二网元发送第一信息给所述第一网元,所述第一信息中至少包括所述第一推理结果。
第三方面,提供了一种数据处理的装置,包括:第一发送模块,用于在进行与第一推理任务对应的联邦推理过程的情况下,向至少一个第二网元发送联邦推理请求消息,所述联邦推理请求消息至少包括所述第一推理任务的相关信息,所述第二网元为参与所述联邦 推理过程的网元;第一接收模块,用于接收所述至少一个第二网元发送的第一信息,所述第一信息至少包括第一推理结果;第一推理模块,用于根据至少一个所述第一推理结果,确定所述第一推理任务对应的第二推理结果。
第四方面,提供了一种数据处理装置,所述装置包括:第二接收模块,用于接收第一网元发送的联邦推理请求消息,所述联邦推理请求消息至少包括第一推理任务的相关信息;第二推理模块,用于根据所述联邦推理请求消息进行推理,得到第一推理结果;第二发送模块,用于发送第一信息给所述第一网元,所述第一信息中至少包括所述第一推理结果。
第五方面,提供了一种通信设备,该通信设备包括处理器和存储器,所述存储器存储可在所述处理器上运行的程序或指令,所述程序或指令被所述处理器执行时实现如第一方面或第二方面所述的方法的步骤。
第六方面,提供了一种通信设备,包括处理器及通信接口,其中,所述通信接口和所述处理器耦合,所述处理器用于运行程序或指令,实现如第一方面所述的方法的步骤,或实现如第二方面所述的方法的步骤。
第七方面,提供了一种联邦数据处理系统,包括:第一网元及第二网元,所述第一网元可用于执行如第一方面所述的数据处理的方法的步骤,所述第二网元可用于执行如第二方面所述的数据处理的方法的步骤。
第八方面,提供了一种可读存储介质,所述可读存储介质上存储程序或指令,所述程序或指令被处理器执行时实现如第一方面所述的方法的步骤,或者实现如第二方面所述的方法的步骤。
第九方面,提供了一种芯片,所述芯片包括处理器和通信接口,所述通信接口和所述处理器耦合,所述处理器用于运行程序或指令,实现如第一方面所述的方法的步骤,或实现如第二方面所述的方法的步骤。
第十方面,提供了一种计算机程序产品/程序产品,所述计算机程序/程序产品被存储在存储介质中,所述计算机程序/程序产品被至少一个处理器执行以实现如第一方面所述的方法的步骤,或实现如第二方面所述的方法的步骤。
在本申请实施例中,所述第一网元通过利用联邦推理的方式,联合各第二网元实现本地分布式的数据推理,由此,可在不共享通信网络中各第二网元上的推理数据的前提下,既能确保各第二网元中的数据隐私性和数据安全性,还能确保推理结果的可靠性。
附图说明
图1是本申请一示例性实施例提供的无线通信系统的结构示意图。
图2是本申请一示例性实施例提供的联邦数据处理系统的结构示意图。
图3是本申请实施例提供的数据处理的方法的流程示意图之一。
图4是本申请实施例提供的数据处理的方法的流程示意图之二。
图5是本申请实施例提供的数据处理的方法的交互流程示意图。
图6是本申请实施例提供的数据处理的方法的流程示意图之三。
图7是本申请实施例提供的数据处理的装置的结构示意图之一。
图8是本申请实施例提供的数据处理的装置的结构示意图之二。
图9是本申请一示例性实施例提供的通信设备的结构示意图。
图10是本申请一示例性实施例提供的终端的结构示意图。
图11是本申请实施例提供的网络侧设备的结构示意图之一。
图12是本申请实施例提供的网络侧设备的结构示意图之二。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚地描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员所获得的所有其他实施例,都属于本申请保护的范围。
本申请的说明书和权利要求书中的术语“第一”、“第二”等是用于区别类似的对象,而不用于描述特定的顺序或先后次序。应该理解这样使用的术语在适当情况下可以互换,以便本申请的实施例能够以除了在这里图示或描述的那些以外的顺序实施,且“第一”、“第二”所区别的对象通常为一类,并不限定对象的个数,例如第一对象可以是一个,也可以是多个。此外,说明书以及权利要求中“和/或”表示所连接对象的至少其中之一,字符“/”一般表示前后关联对象是一种“或”的关系。
值得指出的是,本申请实施例所描述的技术不限于长期演进型(Long Term Evolution,LTE)/LTE的演进(LTE-Advanced,LTE-A)系统,还可用于其他无线通信系统,诸如码分多址(Code Division Multiple Access,CDMA)、时分多址(Time Division Multiple Access,TDMA)、频分多址(Frequency Division Multiple Access,FDMA)、正交频分多址(Orthogonal Frequency Division Multiple Access,OFDMA)、单载波频分多址(Single-carrier Frequency-Division Multiple Access,SC-FDMA)和其他系统。本申请实施例中的术语“系统”和“网络”常被可互换地使用,所描述的技术既可用于以上提及的系统和无线电技术,也可用于其他系统和无线电技术。以下描述出于示例目的描述了新空口(New Radio,NR)系统,并且在以下大部分描述中使用NR术语,但是这些技术也可应用于NR系统应用以外的应用,如第6代(6 th Generation,6G)通信系统。
图1示出本申请实施例可应用的一种无线通信系统的框图。无线通信系统包括终端11和网络侧设备12。其中,终端11可以是手机、平板电脑(Tablet Personal Computer)、膝上型电脑(Laptop Computer)或称为笔记本电脑、个人数字助理(Personal Digital Assistant,PDA)、掌上电脑、上网本、超级移动个人计算机(ultra-mobile personal computer,UMPC)、移动上网装置(Mobile Internet Device,MID)、增强现实(augmented reality,AR)/虚拟现实(virtual reality,VR)设备、机器人、可穿戴式设备(Wearable Device)、车载设备(VUE)、行人终端(PUE)、智能家居(具有无线通信功能的家居设备,如冰箱、电视、洗衣机或者 家具等)、游戏机、个人计算机(personal computer,PC)、柜员机或者自助机等终端侧设备,可穿戴式设备包括:智能手表、智能手环、智能耳机、智能眼镜、智能首饰(智能手镯、智能手链、智能戒指、智能项链、智能脚镯、智能脚链等)、智能腕带、智能服装等。需要说明的是,在本申请实施例并不限定终端11的具体类型。网络侧设备12可以包括接入网设备或核心网设备,其中,接入网设备12也可以称为无线接入网设备、无线接入网(Radio Access Network,RAN)、无线接入网功能或无线接入网单元。接入网设备12可以包括基站、WLAN接入点或WiFi节点等,基站可被称为节点B、演进节点B(eNB)、接入点、基收发机站(Base Transceiver Station,BTS)、无线电基站、无线电收发机、基本服务集(Basic Service Set,BSS)、扩展服务集(Extended Service Set,ESS)、家用B节点、家用演进型B节点、发送接收点(Transmitting Receiving Point,TRP)或所述领域中其他某个合适的术语,只要达到相同的技术效果,所述基站不限于特定技术词汇,需要说明的是,在本申请实施例中仅以NR系统中的基站为例进行介绍,并不限定基站的具体类型。
在前述无线通信系统的基础上,如图2所示,本申请实施例还提供一种联邦数据处理系统的结构示意图,该联邦数据处理系统包括第一网元、第二网元、第三网元、消费者设备。
所述第一网元作为所述联邦数据处理系统中的联合推理网元,具备获取其他本地推理实体上报的局部推理结果并聚合生成全局推理结果的功能。基于此,所述第一网元可以是通信网络中某个能够提供机器学习推理功能的网元或设备,如网络数据分析功能(Network Data Analytics Function,NWDAF)、管理数据分析的服务(management data analytics service,MDAS)、管理数据分析的功能(management data analytic function,MDAF)等专用于提供网络智能化服务的网元或设备;或者,所述第一网元也可以是提供其他通信相关服务(如移动管理(Mobility Management,MM)服务、会话管理(Session Management,SM)服务)、且同时具备智能化功能的网元或设备,如接入和移动管理功能(Access and Mobility Management Function,AMF)、会话管理功能(Session Management Function,SMF)、应用功能(application function,AF)等,该AF可以是通信运营商部署的AF,也可以是第三方AF等。
所述第二网元作为所述联邦数据处理系统中的本地推理实体,其可以是通信网络中具备本地机器学习推理能力的网元或设备,如无线接入网(Radio Access Network,RAN)域的人工智能(Artificial Intelligence,AI)功能网元、核心网(Core Network,CN)域的AI功能网元、第三方AI应用、UE中AI代理(client)、本地通信服务设备等等。应注意,为便于理解,图2中仅示意出了两个第二网元,实际中,所述第二网元可以为一个或多个。
所述第三网元可以作为联邦数据处理系统中的模型提供者(也可称作coordinator),以训练并向所述第一网元提供用于数据推理的AI模型等,基于此,所述第三网元可以是通信网络中某个能够提供机器学习功能的网元或设备,如NWDAF、MDAS、MDAF等专用于提供网络智能化服务的网元或设备;或者,所述第三网元也可以是提供其他通信相关 服务(如MM服务、SM服务)、且同时具备智能化功能的网元或设备,如AMF、SMF)等。可以理解,所述第三网元采用模型训练方式可以是但不限于联邦模型训练方式(也即联邦学习训练方式)等。
当然需要注意的是,在本申请中,除了通过所述第三网元向所述第一网元提供用于数据推理的AI模型之外,所述第一网元也可以通过模型训练的方式(如联邦模型训练方式或联邦学习训练方式)获取用于数据推理的AI模型,本实施例在此不做限制。
消费者设备作为无线通信系统中的推理任务的消费者,可以是无线通信系统中需要进行数据处理的网元或设备,如第三方AI应用、UE、策略控制功能实体(Policy Control Function,PCF)、AMF、AF等。本申请中,消费者设备可以因某具体的数据分析任务(例如以analytics ID进行标识)等向所述第一网元发送推理任务请求消息,以触发联邦推理过程。
可以理解,所述联邦数据处理系统可以包括比前述图2更多或更少的网元或设备。例如,所述联邦数据处理系统可以如图2所示包括所述第三网元(所述推理任务所需的推理目标模型来自第三网元),也可以不包括第三网元(所述推理任务所需的推理目标模型来自于第一网元,其中,所述第一网元可以经过联邦学习训练过程产生所述目标模型)等;又例如,所述联邦数据处理系统还可以包括图2未示出的第四网元,其中,所述第四网元可通过本地训练,向所述第二网元提供推理任务所需的本地推理模型等,在此不做限制。
下面结合附图,通过一些实施例及其应用场景对本申请实施例提供的技术方案进行详细地说明。
如图3所示,为本申请一示例性实施例提供的数据处理的方法300的流程示意图,该方法300可以但不限于由第一网元(如终端或网络侧设备)执行,具体可由安装于第一网元中的硬件和/或软件执行。本实施例中,所述方法300至少可以包括如下步骤。
S310,第一网元在进行与第一推理任务对应的联邦推理过程的情况下,向至少一个第二网元发送联邦推理请求消息。
其中,所述第二网元为参与所述联邦推理过程的推理实体。本实施例中,所述第二网元可以是所述第一网元根据第一推理任务确定,或者,所述第二网元是所述第一网元根据所述联邦推理过程对应的目标模型的相关信息确定,或者,所述第二网元是所述第一网元根据所述模型训练任务从网络存储功能(Network Repository Function,NRF)或统一数据管理实体(Unified Data Management,UDM)中获取的、且能支持所述联邦推理过程的网元。
例如,假设所述第二网元是所述第一网元根据所述联邦推理过程对应的目标模型的相关信息确定,那么,所述第二网元的确定过程可以包括:所述第一网元根据目标模型的相关信息中所包括的参与所述目标模型训练的网元信息(即第四网元信息)确定所述第二网元。如,所述第一网元确定出参与所述目标模型训练的训练网元(如AMF实例(instance(s))、RAN instance(s)、AF instance(s)等),再将这些训练网元确定为第二网元。
所述联邦推理请求消息至少包括所述第一推理任务的相关信息,以用于指示各所述第二网元需要联合进行所述第一推理任务。可选的,根据业务需求的不同,所述第一推理任务可以是数据分析任务、数据统计任务等。
此外,所述联邦推理过程可以由所述第一网元根据自身的推理任务进行触发,也可以由消费者设备的推理任务触发。例如,请参阅图2,在所述联邦推理过程由所述消费者设备触发的情况下,其实现过程可以包括:所述消费者设备在确定需要进行数据分析等推理任务时,可以向所述第一网元发送推理任务请求消息,所述推理任务请求消息中包括所述第一推理任务(如数据分析任务)的相关信息,那么,所述第一网元在接收到消费者设备发送的推理任务请求消息后,可触发所述联邦推理过程。
S320,所述第一网元接收所述至少一个第二网元发送的第一信息。
其中,所述第一信息至少包括第一推理结果。可以理解,对于每个所述第二网元,所述第一推理结果是所述第二网元根据本地的推理数据以及本地推理模型确定。其中,所述本地推理模型可以来自除所述第二网元之外的其他网元,如由所述第一网元向所述第二网元指示所述本地推理模型,又如由第四网元向所述第二网元提供所述本地推理模型,所述第四网元是参与目标模型训练过程、且负责本地模型训练的网元;或者所述本地推理模型还可以由所述第二网元进行本地模型训练得到,在此不做限制。
S330,所述第一网元根据至少一个所述第一推理结果,确定所述第一推理任务对应的第二推理结果。
其中,所述第一网元在根据来自不同的第二网元的第一推理结果确定所述第二推理结果的方式可以有多种,如可以根据与所述第一推理任务对应的AI模型等对接收到的第一推理结果进行处理,本实施例在此不做限制。
另外,与前述联合推理过程的触发对应,在所述联邦推理过程由所述消费者设备触发的情况下,所述第一网元在获取到所述第二推理结果后,可向所述消费者设备发送所述第二推理结果。
本实施例中,所述第一网元通过利用联邦推理的方式,联合各第二网元实现本地分布式的数据推理,由此,可在不共享通信网络中各第二网元上的推理数据的前提下,既能确保各第二网元中的数据隐私性和数据安全性,还能确保推理结果的可靠性。
如图4所示,为本申请一示例性实施例提供的数据处理的方法400的流程示意图,该方法400可以但不限于由第一网元(如终端或网络侧设备)执行,具体可由安装于第一网元中的硬件和/或软件执行。本实施例中,所述方法400至少可以包括如下步骤。
S410,第一网元在进行与第一推理任务对应的联邦推理过程的情况下,向至少一个第二网元发送联邦推理请求消息。
其中,所述联邦推理请求消息至少包括所述第一推理任务的相关信息,所述第二网元为参与所述联邦推理过程的网元。
可以理解,S410的实现过程除了可参照方法实施例300中的相关描述之外,作为一种可能的实现方式,所述联邦推理请求消息至少可以包括以下(11)-(15)中的至少一项。
(11)模型实例标识信息(model instance ID),所述模型实例标识信息用于标识所述联邦推理过程所需使用的目标模型,以及向所述第二网元唯一地指定一个用于推理的本地推理模型,如所述第二网元可以根据所述模型实例标识信息关联出与所述目标模型对应的本地推理模型。
一种实现方式中,所述本地推理模型可以属于目标模型的子模型或一部分,当然需要注意的是,本申请中提及的本地推理模型可以是第二网元通过本地训练得到,也可以从其他网元(如第四网元,所述第四网元是参与目标模型训练过程、且负责本地模型训练的网元)获取。
此外,所述目标模型可以是所述第一网元通过模型训练的方式获取,如通过纵向联邦模型训练过程进行本地分布式模型训练获取,也可以是所述第一网元在确定需要进行联邦推理过程前/后,从第三网元处获取。
基于此,下面以所述目标模型是所述第一网元从第三网元处获取为例,对所述目标模型的获取过程进行说明,其中,所述目标模型的获取过程可以包括以下(111)-(113),内容如下。
(111)所述第一网元向所述第三网元发送模型请求消息。其中,所述模型请求消息中至少可携带有所述第一模型训练任务的相关信息(也可理解为由第一推理任务信息确定的模型训练任务信息),以用于请求所述第三网元训练和/或反馈与所述第一推理任务对应的目标模型。
一种实现方式中,所述模型请求消息可以包括以下(11101)-(11104)中的至少一项。
(11101)模型训练任务的类型信息。其中,所述模型训练任务与所述第一推理任务对应,以指示所述第三网元训练与所述第一推理任务对应的目标模型,如用于数据分析任务的数据分析AI模型等。
本实施例中,所述模型训练任务的类型信息可以包括分析标识(analytics ID)、模型标识(model ID)等,以用于指示所述第三网元是针对什么机器学习任务(即模型训练任务)而进行模型训练。
例如,可利用字符串“analytics ID/model ID=UE mobility”来指示需要对UE的移动轨迹进行模型训练,以获取UE移动轨迹AI模型。应注意,所述类型信息除了前述字符串类型的表示方式之外,也可以利用数字表示方式或者其他编码表示方式等进行表示,此处不限。
(11102)所述模型训练任务的标识信息。
其中,与所述模型训练任务的类型信息类似,所述模型训练任务的标识信息也可以包 括analytics ID、model ID等,以用于指示所述第三网元是针对什么机器学习任务(即模型训练任务)而进行模型训练。
可以理解,为避免信息冗余,所述模型请求消息中可以包括所述模型训练任务的标识信息和所述模型训练任务的类型信息中的任一种。
(11103)第二过滤器(filter)的相关信息,其也可以理解为模型训练的过滤信息(filter information),以用于限定所述模型训练任务对应的目标对象(如目标UE(target UE(s)))、目标时间(如目标时间周期(target time period))、目标区域(如目标感兴趣区域(target area of interest,target AOI))中的至少一项,使得所述第三网元可根据所述第二过滤器的相关信息进行模型训练。
(11104)模型反馈相关信息,所述模型反馈相关信息可以包括模型反馈格式、反馈条件中的至少一项。
其中,所述模型反馈格式可以理解为模型框架信息,如基于TensorFlow、Pytorch模型框架的表达方式,或者还可以理解为基于跨框架的模型表达方式(如ONNX)等。
所述反馈条件可以包括事件触发和/或周期触发,其中,所述事件触发包括(a)-(c)中的至少一项。
(a)在所述第三网元训练目标模型的训练轮数(或次数)达到预定值时,所述第三网元反馈目标模型。
(b)在训练时间到达最长等待时间之前,所述第三网元反馈目标模型。
(c)在目标模型收敛(目标模型对应的损失函数到达预设值以及对应的预设值)时,所述第三网元反馈目标模型等。
所述周期触发是指所述第三网元可以周期性的反馈目标模型给第一网元,如每5分钟反馈一次目标模型等。
(112)所述第三网元根据所述模型请求消息进行目标模型的查询或训练。
其中,所述第三网元在进行目标模型查询时,所述第三网元可从已经训练好的模型中选取所述目标模型。
或者,所述第三网元在进行目标模型训练时,可以基于纵向联邦模型训练过程进行所述目标模型的训练,即所述第三网元可向训练实体发送联邦模型训练请求消息,以请求各训练实体进行本地模型训练并反馈模型训练中间数据,所述第三网元再基于接收到的各训练实体反馈的模型训练中间数据进行模型训练得到所述目标模型,由此,可在不进行数据共享的情况下,联合各训练实体进行本地分布式模型训练,以获取所述目标模型,既能保护各训练实体上的数据的安全性和隐私性,还可以确保目标模型的可靠性。
当然,所述第三网元在查询或训练到所述目标模型后,可发送所述目标模型的相关信息给所述第一网元,以向所述第一网元指示所述目标模型。
基于此,所述目标模型的相关信息至少可以包括所述目标模型信息。本实施例中,所述目标模型信息包括以下(1121)-(1126)中的至少一项。
(1121)模型结构信息。其中,所述模型结构信息用于指示所述目标模型的模型结构具体是哪种(如神经网络、深度神经网络(DNN)、卷积神经网络(CNN)、循环神经网络(RNN)、线性结构等)。
此外,针对神经网络模型结构,所述模型结构信息中还可以细化指明神经网络的层、神经元节点、每层输入输出之间的关系等,在此不做限制。
(1122)模型参数信息。其中,所述模型参数信息是所述目标模型内部的配置变量,以用于定义所述目标模型的功能,其可通过数据估计或模型训练的方式得到。例如,所述模型参数可以包括人造神经网络中的权重、支持向量机中的支持向量、线性回归或逻辑回归中的系数等。
(1123)模型算法信息。例如,所述模型算法信息可以但不限于包括决策树、贝叶斯分类器、K近邻、支持向量机等。
(1124)模型超参数信息。其中,所述模型超参数信息是所述目标模型外部的配置变量,一般用于模型参数的训练过程,通常可由实践者直接指定。例如,所述模型超参数信息可以包括训练神经网络的学习速率、支持向量机的C和sigama超参数、K邻域中的参数k、目标模型对应的损失函数、目标模型收敛时的损失函数预定值等。
(1125)模型输入数据的类型信息,即在利用所述目标模型进行数据推理时,所述目标模式的模型输入数据(也可以理解为推理数据)的类型是什么。
(1126)模型输出数据的类型信息。即在利用所述目标模型进行数据推理时,所述目标模式的模型输出数据(也可以理解为第一推理结果、第二推理结果)的类型是什么。
此外,在一种实现方式中,除了前述的目标模型信息之外,所述目标模型的相关信息还可以包括模型实例标识信息、第二指示信息、第四网元的相关信息、模型训练配置信息中的至少一项。
其中,所述模型实例标识信息与前述(11)中所述的模型实例标识信息可以相同或不同。本实施例中在所述目标模型由所述第三网元训练得到时,所述模型实例标识信息可以由所述第三网元分配、且用于唯一的标识所述目标模型。例如,所述第一网元在确定进行与模型训练任务对应的联邦模型训练过程时,为所述模型训练任务对应分配一个模型实例标识信息,用于指示通过所述联邦模型训练过程训练得到的模型实体,即所述目标模型。
或者,所述模型实例标识信息还可以用于第二网元关联本地模型和所述目标模型。
所述第二指示信息,用于指示所述目标模型是纵向联邦学习模型,即所述目标模型是经过纵向联邦模型训练过程得到。
所述第四网元的相关信息,所述第四网元为参与所述目标模型训练的网元,即所述第四网元为目标模型对应的训练实体。所述第四网元的相关信息可以包括第四网元的标识信息、地址信息、全限定域名(Fully Qualified Domain Name,FQDN)信息、名称信息等。
需要注意,在一种实现方式中,所述第一网元可以根据所述目标模型的相关信息确定所述至少一个第二网元的信息。例如,所述第一网元根据所述目标模型的相关信息中包括 的第四网元的相关信息确定所述至少一个第二网元的信息,所述第四网元为参与所述目标模型训练的网元。
也就是,在所述目标模型是通过纵向联邦模型训练过程得到的情况下,所述第二网元也可以是参与所述纵向联邦模型训练过程的训练实体,即所述第二网元可以与所述第四网元相同。
(113)所述第一网元接收所述第三网元发送的所述目标模型的相关信息。
其中,所述第一网元在接收到所述目标模型的相关信息之后,可对所述目标模型的相关信息进行保存,以用于后续的联邦推理过程。
(12)所述第一推理任务的标识信息,用于向第二网元指明推理任务的类型或目的。可选的,所述第一推理任务的标识信息可以是分析标识(analytics ID)等。
(13)第一指示信息,用于指示所述联邦推理过程为纵向联邦推理过程。其中,所述纵向联邦推理过程可以理解为:参与所述联邦推理过程的训练数据在不同第二网元间对应的样本相同但样本特征不同。即所述纵向联邦推理过程的本质是样本特征的联合,适用于用户(即样本)重叠多、特征(即样本特征)重叠少的场景,如通信网络内的CN域和RAN域服务相同用户(如UE,即样本相同)的不同服务(如MM业务、SM业务,即样本特征不同),基于此,本申请通过联合推理参与方(即第二网元)的共同样本的不同样本特征,使推理数据的特征维度增多,以推理得到一个更好的推理结果。
(14)第一过滤器的相关信息,用于限定所述第一推理任务对应的推理样本信息,所述推理样本信息包括:推理对象信息、推理时间信息、推理区域信息中的至少一项。例如,在所述第一推理任务为数据分析任务时,所述第一过滤器的相关信息可以是analytics filter information等,以用于各所述第二网元根据所述第一过滤器的相关信息采集与所述推理对象信息、推理时间信息、推理区域信息对应的推理数据,以进行本地数据推理。
(15)所述第一推理结果对应的上报信息。可选的,所述第一推理结果对应的上报信息包括以下(151)-(152)中的至少一项。
(151)所述第一推理结果的上报格式,用于指示所述第二网元在得到第一推理结果后,采用哪种数据格式进行所述第一推理结果的上报。
(152)所述第一推理结果的上报条件。其中,所述上报条件可以包括事件触发和/或周期触发。其中,所述事件触发可以包括(1521)-(1522)中任一项。
(1521)在所述第二网元进行推理的推理轮数(或次数)达到预定值时,所述第二网元上报第一推理结果。其中,对于不同的第二网元,第一网元在上报信息指定的推理轮数可以不同,这可使得所述第一网元对齐联邦推理过程中各个推理实体(即第二网元)上报的第一推理结果的步调,进而确保各推理实体上报步调一致,防止出现掉队者的问题。
(1522)在上报时间到达最长等待时间(或反馈截止时间)之前,所述第二网元上报第一推理结果。
所述周期触发是指所述第二网元可以周期性的上报第一推理结果,如每5分钟上报一 次第一推理结果等。
应注意,在本实施例中,所述联邦推理请求消息中具体包括哪个信息可以由协议约定、高层配置等实现,在此不做限制。
进一步,作为一种可能的实现方式,本申请在确定联合推理过程时,除了前述的需要由所述第一网元根据自身的推理任务触发,或由所述消费者设备触发之外,还可确定第一条件是否成立,并在第一条件成立的情况下,所述第一网元确定进行联模型推理过程。
可选的,所述第一条件可以包括以下(21)-(23)中的至少之一。
(21)所述第一网元没有存储或无法获取所述推理过程对应的全部或部分推理数据。例如,可能因为数据安全问题或数据隐私问题导致所述第一网元中缺乏各推理实体(即第二网元)的部分或全部推理数据,使得所述第一网元需要利用联邦模型推理过程以联合各第二网元进行本地分布式模型推理。
(22)所述至少一个第二网元能够提供所述推理过程对应的全部或部分推理数据。
(23)所述推理任务所需的各所述第二网元间推理数据的样本相同、但样本特征不同。例如,用于联邦模型推理的推理数据是同一个或一组UE在CN中产生的MM相关数据、或在RAN中产生的位置数据、或在第三方服务中产生的业务体验数据。
可以理解,所述第一条件包括前述(21)-(23)中的哪个可以由协议约定、高层配置或网络侧配置,在此不做限制。
S420,所述第一网元接收所述至少一个第二网元发送的第一信息。
其中,所述第一信息至少包括第一推理结果,所述第一推理结果是所述第二网元根据本地推理模型推理得到,所述本地推理模型可以是所述第二网元根据所述联邦推理请求消息中包括的模型实例标识信息确定、且所述本地推理模型对应于联邦推理过程所需使用的目标模型。
当然在一种实现方式中,对于各所述第二网元,所述第二网元根据所述联邦推理请求消息中包括的模型实例标识信息确定所述本地推理模型,并根据所述推理模型推理得到所述第一推理结果的过程可以包括(31)-(33),内容如下。
(31)所述第二网元根据所述联邦推理请求消息中的模型实例标识信息确定本地推理模型,以及推理输入数据的类型信息。
(32)所述第二网元根据所述推理输入数据的类型信息以及所述联邦推理请求消息中的第一过滤器的相关信息,获取推理输入数据。
例如,所述第二网元可可根据所述推理输入数据的类型信息以及所述第一过滤器的相关信息,确定采集小区1中在每周一07:00-09:00时段内所有UE的推理数据,即推理输入数据。
(33)所述第二网元基于所述推理输入数据和所述本地推理模型进行推理,得到所述第一推理结果。
例如,所述第二网元将采集到的推理输入数据输入所述本地推理模型,得到所述第一 推理结果。
基于此,作为一种实现方式,所述第一信息除了包括所述第一推理结果之外,所述第一信息还可包括以下(41)-(43)至少一项。
(41)模型实例标识信息,用于标识所述联邦推理过程所需使用的目标模型,换言之,所述模型实例标识信息用于向所述第一网元指示所述第一推理结果对应的目标模型的标识。
(42)所述第一推理任务的标识信息。
(43)所述第一推理结果对应的推理样本信息,所述推理样本信息包括:推理对象信息(如UE ID(s)或any UE等)、推理时间信息、推理区域信息中的至少一项。
其中,所述模型实例标识信息、所述第一推理任务的标识信息可以参照前述联邦推理请求消息中的相关描述,所述第一推理结果对应的推理样本信息是所述第二网元根据推理时所采集的推理样本信息确定,在此不做限制。
S430,所述第一网元根据至少一个所述第一推理结果,确定所述第一推理任务对应的第二推理结果。
可以理解,S430的实现过程除了可参照方法实施例300中的相关描述之外,作为一种可能的实现方式,请再次参阅图4,S430的实现过程可以包括S431,内容如下。
S431,所述第一网元根据目标模型以及至少一个所述第一推理结果,计算所述第二推理结果。
一种实现方式中,所述第一网元可根据所述第一推理结果对应的推理样本信息,关联对齐所述至少一个第二网元发送的、且针对同一个推理样本的第一推理结果;所述第一网元将所述针对同一个推理样本的第一推理结果输入所述目标模型,获取针对所述同一个推理样本的所述第二推理结果。
其中,所述第一网元在对所述第二网元发送的第一推理结果进行关联对齐时,可以按照UE粒度实现,也可以按照时间信息等实现,在此不做限制。示例性的,若所述联邦推理过程是UE粒度的,则第一网元可按照UE粒度计算最终的第二推理结果。也即,第一网元基于模型实例标识信息对应的目标模型,以及从各所述第二网元接收到的该UE对应的第一推理结果,计算获取该UE对应的联邦学习的最终推理结果,即第二推理结果。
或者,所述第一网元也可以根据时间信息对同一个时间点的推理数据进行关联对齐。例如,第一网元将同一个UE在RAN域和CN域、且同是8:00的第一推理结果进行关联运算,获得8:00对应的最终推理结果,即第二推理结果。
另外,来自不同第二网元的UE的标识可能不同,例如来自RAN的UE标识可能是无线接入网NG应用协议标识(RAN NG Application Protocol ID,RAN NGAP ID),而CN的UE标识是AMF NGAP ID或用户永久标识(SUbscription Permanent Identifier,SUPI)等,则所述第一网元根据UE不同标识的映射关系(如RAN NGAP ID、AMF NGAP ID两者的映射关系)而关联同一个UE的数据和推理结果。
进一步,在前述联邦推理过程中,所述第一网元在将所述针对同一个推理样本的第一推理结果输入所述目标模型时,还可以将自身的且针对同一个推理样本的推理数据作为模型输入数据,其与所述第一推理结果一并输入所述目标模型,得到所述第二推理结果。
进一步,基于前述方法实施例300和400的描述,下面以所述联邦推理过程为所述消费者设备触发为例,对所述联邦推理过程进行示例性说明。其中,所述数据推理过程可以包括一个或多个第二网元,在本实施例中,为便于理解,所述数据推理过程包括两个第二网元,即图5所示的推理实体1和推理实体2。
S501,消费者设备发送推理任务请求消息给所述第一网元。
S502,所述第一网元根据所述推理任务请求消息确定进行或不进行第一推理任务对应的联邦推理过程。
S503,第一网元在进行与第一推理任务对应的联邦推理过程的情况下,向第三网元发送模型请求消息。
S504,所述第一网元接收所述第三网元发送的目标模型的相关信息,所述目标模型的相关信息至少包括所述目标模型信息。
S505,所述第一网元根据所述目标模型的相关信息确定能够参与所述联邦推理过程的第二网元的信息,如推理实体1和推理实体2。
S506,所述第一网元向推理实体1和推理实体2发送联邦推理请求消息。
S507,推理实体1在接收到所述联邦推理请求消息的情况下,根据所述联邦推理请求消息中包括的模型实例标识信息确定本地推理模型,以及根据本地推理模型推理得到所述第一推理结果。
相似的,推理实体2在接收到所述联邦推理请求消息的情况下,根据所述联邦推理请求消息中包括的模型实例标识信息确定本地推理模型,以及根据本地推理模型推理得到所述第一推理结果。
S508,推理实体1和推理实体2分别向所述第一网元发送所述第一推理结果。
S509,所述第一网元在接收到所述推理实体1和推理实体2发送的第一推理结果的情况下,根据推理实体1和推理实体2发送的第一推理结果,确定所述第一推理任务对应的第二推理结果。
S510,所述第一网元向所述消费者设备发送所述第二推理结果。
需要注意,前述S501-S510的实现过程可参照前述方法实施例300-400中的相关描述,并达到相同或相应的技术效果,为避免重复,在此不再赘述。
此外,在本实施例中,所述联邦模型训练过程可以包括但不限于前述S501-S510,例如,可以包括比前述S501-S510更多或更少的步骤,在此不做限制。
如图6所示,为本申请一示例性实施例提供的数据处理的方法600的流程示意图,该方法600可以但不限于由第一网元(如终端或网络侧设备)执行,具体可由安装于第一网 元中的硬件和/或软件执行。本实施例中,所述方法600至少可以包括如下步骤。
S610,第二网元接收第一网元发送的联邦推理请求消息,所述联邦推理请求消息至少包括第一推理任务的相关信息。
S620,所述第二网元根据所述联邦推理请求消息进行推理,得到第一推理结果。
S630,所述第二网元发送第一信息给所述第一网元,所述第一信息中至少包括所述第一推理结果。
可选的,所述联邦推理请求消息包括以下至少一项:模型实例标识信息,所述模型实例标识信息用于标识所述联邦推理过程所需使用的目标模型;所述第一推理任务的标识信息;第一指示信息,用于指示所述联邦推理过程为纵向联邦推理过程;第一过滤器的相关信息,用于限定所述第一推理任务对应的推理样本信息,所述推理样本信息包括:推理对象信息、推理时间信息、推理区域信息中的至少一项;所述第一推理结果对应的上报信息。
可选的,所述第一推理结果的上报信息包括以下至少一项:所述第一推理结果的上报格式;所述第一推理结果的上报条件。
可选的,所述第一信息还包括以下至少一项:模型实例标识信息,所述模型实例标识信息用于标识所述联邦推理过程所需使用的目标模型;所述第一推理任务的标识信息;所述第一推理结果对应的推理样本信息,所述推理样本信息包括:推理对象信息、推理时间信息、推理区域信息中的至少一项。
可选的,所述第二网元根据所述联邦推理请求消息进行推理,得到第一推理结果的步骤,包括:所述第二网元根据所述联邦推理请求消息中的模型实例标识信息确定本地推理模型,以及推理输入数据的类型信息;所述第二网元根据所述推理输入数据的类型信息以及所述联邦推理请求消息中的第一过滤器的相关信息,获取推理输入数据;所述第二网元基于所述推理输入数据和所述本地推理模型进行推理,得到所述第一推理结果。
可以理解,方法实施例600中的各实现方式的实现过程可参照前述方法实施例300-500中的相关描述,并达到相同或相应的技术效果,为避免重复,在此不再赘述。
本申请实施例提供的数据处理的方法,执行主体可以为数据处理的装置。本申请实施例中以数据处理的装置执行数据处理的方法为例,说明本申请实施例提供的数据处理的装置。
如图7所示,为本申请一示例性实施例提供的数据处理的装置700的结构示意图,该装置700可以包括:第一发送模块710,用于在进行与第一推理任务对应的联邦推理过程的情况下,向至少一个第二网元发送联邦推理请求消息,所述联邦推理请求消息至少包括所述第一推理任务的相关信息,所述第二网元为参与所述联邦推理过程的网元;第一接收模块720,用于接收所述至少一个第二网元发送的第一信息,所述第一信息至少包括第一推理结果;第一推理模块730,用于根据至少一个所述第一推理结果,确定所述第一推理任务对应的第二推理结果。
可选的,所述第一推理模块还用于确定第一条件成立,所述第一条件包括以下至少之 一:所述第一网元没有存储或无法获取所述推理过程对应的全部或部分推理数据;所述至少一个第二网元能够提供所述推理过程对应的全部或部分推理数据;所述推理任务所需的各所述第二网元间推理数据的样本相同、但样本特征不同。
可选的,所述联邦推理请求消息包括以下至少一项:模型实例标识信息,所述模型实例标识信息用于标识所述联邦推理过程所需使用的目标模型;所述第一推理任务的标识信息;第一指示信息,用于指示所述联邦推理过程为纵向联邦推理过程;第一过滤器的相关信息,用于限定所述第一推理任务对应的推理样本信息,所述推理样本信息包括:推理对象信息、推理时间信息、推理区域信息中的至少一项;所述第一推理结果对应的上报信息。
可选的,所述第一推理结果对应的上报信息包括以下至少一项:所述第一推理结果的上报格式;所述第一推理结果的上报条件。
可选的,所述第一推理结果是所述第二网元根据本地推理模型推理得到,所述本地推理模型是所述第二网元根据所述联邦推理请求消息中包括的模型实例标识信息确定。
可选的,所述第一信息还包括以下至少一项:模型实例标识信息,用于标识所述联邦推理过程所需使用的目标模型;所述第一推理任务的标识信息;所述第一推理结果对应的推理样本信息,所述推理样本信息包括:推理对象信息、推理时间信息、推理区域信息中的至少一项。
可选的,所述第一推理模块730用于根据至少一个所述第一推理结果,确定所述第一推理任务对应的第二推理结果的步骤,包括:根据目标模型以及至少一个所述第一推理结果,计算所述第二推理结果。
可选的,所述第一推理模块730根据目标模型以及至少一个所述第一推理结果,计算所述第二推理结果的步骤,包括:根据所述第一推理结果对应的推理样本信息,关联对齐所述至少一个第二网元发送的、且针对同一个推理样本的第一推理结果;将所述针对同一个推理样本的第一推理结果输入所述目标模型,获取针对所述同一个推理样本的所述第二推理结果。
可选的,所述第一发送模块710还用于向第三网元发送模型请求消息,所述模型请求消息用于请求所述第三网元训练和/或反馈所述目标模型;所述第一接收模块720,还用于接收所述第三网元发送的所述目标模型的相关信息,所述目标模型的相关信息至少包括所述目标模型信息。
可选的,所述模型请求消息包括以下至少一项:模型训练任务的类型信息;所述模型训练任务的标识信息;第二过滤器的相关信息,用于限定所述模型训练任务对应的目标对象、目标时间、目标区域中的至少一项;模型反馈相关信息,所述模型反馈相关信息包括模型描述方式、模型反馈时间中的至少一项。
可选的,所述目标模型的相关信息还包括以下至少一项:模型实例标识信息;第二指示信息,用于指示所述目标模型是纵向联邦学习模型;第四网元的相关信息,所述第四网元为参与所述目标模型训练的网元。
可选的,所述目标模型信息包括以下至少一项:模型结构信息;模型参数信息;模型算法信息;模型超参数信息;模型输入数据的类型信息;模型输出数据的类型信息。
可选的,所述第一推理模块730还用于根据所述目标模型的相关信息确定所述至少一个第二网元的信息。
可选的,所述第一推理模块730根据所述目标模型的相关信息确定所述至少一个第二网元的信息的步骤,包括:根据所述目标模型的相关信息中包括的第四网元的相关信息确定所述至少一个第二网元的信息,所述第四网元为参与所述目标模型训练的网元。
可选的,所述第一接收模块720还用于接收消费者设备发送的推理任务请求消息,所述推理任务请求消息中包括所述第一推理任务的相关信息;所述第一发送模块710还用于向所述消费者设备发送所述第二推理结果。
如图8所示,为本申请一示例性实施例提供的数据处理的装置800的结构示意图,该装置800可以包括:第二接收模块810,用于接收第一网元发送的联邦推理请求消息,所述联邦推理请求消息至少包括第一推理任务的相关信息;第二推理模块820,用于根据所述联邦推理请求消息进行推理,得到第一推理结果;第二发送模块830,用于发送第一信息给所述第一网元,所述第一信息中至少包括所述第一推理结果。
可选的,所述联邦推理请求消息包括以下至少一项:模型实例标识信息,所述模型实例标识信息用于标识所述联邦推理过程所需使用的目标模型;所述第一推理任务的标识信息;第一指示信息,用于指示所述联邦推理过程为纵向联邦推理过程;第一过滤器的相关信息,用于限定所述第一推理任务对应的推理样本信息,所述推理样本信息包括:推理对象信息、推理时间信息、推理区域信息中的至少一项;所述第一推理结果对应的上报信息。
可选的,所述第一推理结果的上报信息包括以下至少一项:所述第一推理结果的上报格式;所述第一推理结果的上报条件。
可选的,所述第一信息还包括以下至少一项:模型实例标识信息,所述模型实例标识信息用于标识所述联邦推理过程所需使用的目标模型;所述第一推理任务的标识信息;所述第一推理结果对应的推理样本信息,所述推理样本信息包括:推理对象信息、推理时间信息、推理区域信息中的至少一项。
可选的,所述第二推理模块820根据所述联邦推理请求消息进行推理,得到第一推理结果的步骤,包括:根据所述联邦推理请求消息中的模型实例标识信息确定本地推理模型,以及推理输入数据的类型信息;根据所述推理输入数据的类型信息以及所述联邦推理请求消息中的第一过滤器的相关信息,获取推理输入数据;基于所述推理输入数据和所述本地推理模型进行推理,得到所述第一推理结果。
本申请实施例中的数据处理的装置700-800可以是通信设备,例如具有操作系统的通信设备,也可以是通信设备中的部件,例如集成电路或芯片。该通信设备可以是终端,也可以为除终端之外的网络侧设备。示例性的,终端可以包括但不限于上述所列举的终端11的类型,网络侧设备可以包括但不限于上述所列举的网络侧设备12的类型,本申请实 施例不作具体限定。
本申请实施例提供的数据处理的装置700-800能够实现图3至图6的方法实施例实现的各个过程,并达到相同的技术效果,为避免重复,这里不再赘述。
可选的,如图9所示,本申请实施例还提供一种通信设备900,包括处理器901和存储器902,存储器902存储有可在所述处理器901上运行的程序或指令,例如,该通信设备900为终端时,该程序或指令被处理器901执行时实现上述数据处理的方法实施例的各个步骤,且能达到相同的技术效果。该通信设备900为网络侧设备时,该程序或指令被处理器901执行时实现上述数据处理的方法实施例的各个步骤,且能达到相同的技术效果,为避免重复,这里不再赘述。
一种实现方式中,所述通信设备可以是终端,如图10所示,为本申请实施例提供的一种终端的结构示意图,其包括处理器和通信接口,所述通信接口和所述处理器耦合,所述处理器用于运行程序或指令,实现如方法实施例300-600中所述的方法的步骤。该终端实施例是与上述终端侧方法实施例对应的,上述方法实施例的各个实施过程和实现方式均可适用于该终端实施例中,且能达到相同的技术效果。具体地,图10为实现本申请实施例的一种终端的硬件结构示意图。
该终端1000包括但不限于:射频单元1001、网络模块1002、音频输出单元1003、输入单元1004、传感器1005、显示单元1006、用户输入单元1007、接口单元1008、存储器1009、以及处理器1010等中的至少部分部件。
本领域技术人员可以理解,终端1000还可以包括给各个部件供电的电源(比如电池),电源可以通过电源管理系统与处理器1010逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。图10中示出的终端结构并不构成对终端的限定,终端可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置,在此不再赘述。
应理解的是,本申请实施例中,输入单元1004可以包括图形处理单元(Graphics Processing Unit,GPU)1041和麦克风10042,图形处理器10041对在视频捕获模式或图像捕获模式中由图像捕获装置(如摄像头)获得的静态图片或视频的图像数据进行处理。显示单元1006可包括显示面板10061,可以采用液晶显示器、有机发光二极管等形式来配置显示面板10061。用户输入单元1007包括触控面板10071以及其他输入设备10072中的至少一种。触控面板10071,也称为触摸屏。触控面板10071可包括触摸检测装置和触摸控制器两个部分。其他输入设备10072可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆,在此不再赘述。
本申请实施例中,射频单元1001接收来自网络侧设备的下行数据后,可以传输给处理器1010进行处理;另外,射频单元1001可以向网络侧设备发送上行数据。通常,射频单元1001包括但不限于天线、放大器、收发信机、耦合器、低噪声放大器、双工器等。
存储器1009可用于存储软件程序或指令以及各种数据。存储器1009可主要包括存储 程序或指令的第一存储区和存储数据的第二存储区,其中,第一存储区可存储操作系统、至少一个功能所需的应用程序或指令(比如声音播放功能、图像播放功能等)等。此外,存储器1009可以包括易失性存储器或非易失性存储器,或者,存储器1009可以包括易失性和非易失性存储器两者。其中,非易失性存储器可以是只读存储器(Read-Only Memory,ROM)、可编程只读存储器(Programmable ROM,PROM)、可擦除可编程只读存储器(Erasable PROM,EPROM)、电可擦除可编程只读存储器(Electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(Random Access Memory,RAM),静态随机存取存储器(Static RAM,SRAM)、动态随机存取存储器(Dynamic RAM,DRAM)、同步动态随机存取存储器(Synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(Double Data Rate SDRAM,DDRSDRAM)、增强型同步动态随机存取存储器(Enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(Synch link DRAM,SLDRAM)和直接内存总线随机存取存储器(Direct Rambus RAM,DRRAM)。本申请实施例中的存储器1009包括但不限于这些和任意其它适合类型的存储器。
处理器1010可包括一个或多个处理单元;可选的,处理器1010集成应用处理器和调制解调处理器,其中,应用处理器主要处理涉及操作系统、用户界面和应用程序等的操作,调制解调处理器主要处理无线通信信号,如基带处理器。可以理解的是,上述调制解调处理器也可以不集成到处理器1010中。
其中,在一种实现方式中,射频单元1001,用于在进行与第一推理任务对应的联邦推理过程的情况下,向至少一个第二网元发送联邦推理请求消息,所述联邦推理请求消息至少包括所述第一推理任务的相关信息,所述第二网元为参与所述联邦推理过程的网元;射频单元1001,用于接收所述至少一个第二网元发送的第一信息,所述第一信息至少包括第一推理结果;处理器1010,用于根据至少一个所述第一推理结果,确定所述第一推理任务对应的第二推理结果。
可选的,所述第一推理模块还用于确定第一条件成立,所述第一条件包括以下至少之一:所述第一网元没有存储或无法获取所述推理过程对应的全部或部分推理数据;所述至少一个第二网元能够提供所述推理过程对应的全部或部分推理数据;所述推理任务所需的各所述第二网元间推理数据的样本相同、但样本特征不同。
可选的,所述联邦推理请求消息包括以下至少一项:模型实例标识信息,所述模型实例标识信息用于标识所述联邦推理过程所需使用的目标模型;所述第一推理任务的标识信息;第一指示信息,用于指示所述联邦推理过程为纵向联邦推理过程;第一过滤器的相关信息,用于限定所述第一推理任务对应的推理样本信息,所述推理样本信息包括:推理对象信息、推理时间信息、推理区域信息中的至少一项;所述第一推理结果对应的上报信息。
可选的,所述第一推理结果对应的上报信息包括以下至少一项:所述第一推理结果的上报格式;所述第一推理结果的上报条件。
可选的,所述第一推理结果是所述第二网元根据本地推理模型推理得到,所述本地推 理模型是所述第二网元根据所述联邦推理请求消息中包括的模型实例标识信息确定。
可选的,所述第一信息还包括以下至少一项:模型实例标识信息,用于标识所述联邦推理过程所需使用的目标模型;所述第一推理任务的标识信息;所述第一推理结果对应的推理样本信息,所述推理样本信息包括:推理对象信息、推理时间信息、推理区域信息中的至少一项。
可选的,所述处理器1010用于根据至少一个所述第一推理结果,确定所述第一推理任务对应的第二推理结果的步骤,包括:根据目标模型以及至少一个所述第一推理结果,计算所述第二推理结果。
可选的,所述处理器1010根据目标模型以及至少一个所述第一推理结果,计算所述第二推理结果的步骤,包括:根据所述第一推理结果对应的推理样本信息,关联对齐所述至少一个第二网元发送的、且针对同一个推理样本的第一推理结果;将所述针对同一个推理样本的第一推理结果输入所述目标模型,获取针对所述同一个推理样本的所述第二推理结果。
可选的,所述射频单元1001还用于向第三网元发送模型请求消息,所述模型请求消息用于请求所述第三网元训练和/或反馈所述目标模型;所述射频单元1001,还用于接收所述第三网元发送的所述目标模型的相关信息,所述目标模型的相关信息至少包括所述目标模型信息。
可选的,所述模型请求消息包括以下至少一项:模型训练任务的类型信息;所述模型训练任务的标识信息;第二过滤器的相关信息,用于限定所述模型训练任务对应的目标对象、目标时间、目标区域中的至少一项;模型反馈相关信息,所述模型反馈相关信息包括模型描述方式、模型反馈时间中的至少一项。
可选的,所述目标模型的相关信息还包括以下至少一项:模型实例标识信息;第二指示信息,用于指示所述目标模型是纵向联邦学习模型;第四网元的相关信息,所述第四网元为参与所述目标模型训练的网元。
可选的,所述目标模型信息包括以下至少一项:模型结构信息;模型参数信息;模型算法信息;模型超参数信息;模型输入数据的类型信息;模型输出数据的类型信息。
可选的,所述处理器1010还用于根据所述目标模型的相关信息确定所述至少一个第二网元的信息。
可选的,所述处理器1010根据所述目标模型的相关信息确定所述至少一个第二网元的信息的步骤,包括:根据所述目标模型的相关信息中包括的第四网元的相关信息确定所述至少一个第二网元的信息,所述第四网元为参与所述目标模型训练的网元。
可选的,所述射频单元1001还用于接收消费者设备发送的推理任务请求消息,所述推理任务请求消息中包括所述第一推理任务的相关信息;所述射频单元1001还用于向所述消费者设备发送所述第二推理结果。
在另一种实现方式中,射频单元1001,用于接收第一网元发送的联邦推理请求消息, 所述联邦推理请求消息至少包括第一推理任务的相关信息;第二推理模块820,用于根据所述联邦推理请求消息进行推理,得到第一推理结果;射频单元1001,用于发送第一信息给所述第一网元,所述第一信息中至少包括所述第一推理结果。
可选的,所述联邦推理请求消息包括以下至少一项:模型实例标识信息,所述模型实例标识信息用于标识所述联邦推理过程所需使用的目标模型;所述第一推理任务的标识信息;第一指示信息,用于指示所述联邦推理过程为纵向联邦推理过程;第一过滤器的相关信息,用于限定所述第一推理任务对应的推理样本信息,所述推理样本信息包括:推理对象信息、推理时间信息、推理区域信息中的至少一项;所述第一推理结果对应的上报信息。
可选的,所述第一推理结果的上报信息包括以下至少一项:所述第一推理结果的上报格式;所述第一推理结果的上报条件。
可选的,所述第一信息还包括以下至少一项:模型实例标识信息,所述模型实例标识信息用于标识所述联邦推理过程所需使用的目标模型;所述第一推理任务的标识信息;所述第一推理结果对应的推理样本信息,所述推理样本信息包括:推理对象信息、推理时间信息、推理区域信息中的至少一项。
可选的,所述第二推理模块820根据所述联邦推理请求消息进行推理,得到第一推理结果的步骤,包括:根据所述联邦推理请求消息中的模型实例标识信息确定本地推理模型,以及推理输入数据的类型信息;根据所述推理输入数据的类型信息以及所述联邦推理请求消息中的第一过滤器的相关信息,获取推理输入数据;基于所述推理输入数据和所述本地推理模型进行推理,得到所述第一推理结果。
在所述通信设备900为网络侧设备时,如图11所示,为本申请实施例提供的一种网络侧设备的结构示意图,其包括处理器和通信接口,所述通信接口和所述处理器耦合,所述处理器用于运行程序或指令,实现如实施例300-600中所述的方法的步骤。该网络侧设备实施例是与上述网络侧设备方法实施例对应的,上述方法实施例的各个实施过程和实现方式均可适用于该网络侧设备实施例中,且能达到相同的技术效果。
例如,如图11所示,为本申请实施例提供的一种网络侧设备1100的结构示意图。该网络侧设备1100包括:天线1101、射频装置1102、基带装置1103、处理器1104和存储器1105。天线1101与射频装置1102连接。在上行方向上,射频装置1102通过天线1101接收信息,将接收的信息发送给基带装置1103进行处理。在下行方向上,基带装置1103对要发送的信息进行处理,并发送给射频装置1102,射频装置1102对收到的信息进行处理后经过天线1101发送出去。
以上实施例中网络侧设备执行的方法可以在基带装置1103中实现,该基带装置1103包基带处理器。
基带装置1103例如可以包括至少一个基带板,该基带板上设置有多个芯片,如图11所示,其中一个芯片例如为基带处理器,通过总线接口与存储器1105连接,以调用存储器1105中的程序,执行以上方法实施例中所示的网络设备操作。
该网络侧设备还可以包括网络接口1106,该接口例如为通用公共无线接口(common public radio interface,CPRI)。
具体地,本发明实施例的网络侧设备1100还包括:存储在存储器1105上并可在处理器1104上运行的指令或程序,处理器1104调用存储器1105中的指令或程序执行图7或图8所示各模块执行的方法,并达到相同的技术效果,为避免重复,故不在此赘述。
例如,如图12所示,为本申请实施例提供的另一种网络侧设备1200的结构示意图。该网络侧设备1200包括:处理器1201、网络接口1202和存储器1203。其中,网络接口1202例如为通用公共无线接口(common public radio interface,CPRI)。
具体地,本发明实施例的网络侧设备1200还包括:存储在存储器1203上并可在处理器1201上运行的指令或程序,处理器1201调用存储器1203中的指令或程序执行图7或图8所示各模块执行的方法,并达到相同的技术效果,为避免重复,故不在此赘述。
本申请实施例还提供一种可读存储介质,所述可读存储介质上存储有程序或指令,该程序或指令被处理器执行时实现上述数据处理的方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
其中,所述处理器为上述实施例中所述的终端中的处理器。所述可读存储介质,包括计算机可读存储介质,如计算机只读存储器ROM、随机存取存储器RAM、磁碟或者光盘等。
本申请实施例另提供了一种芯片,所述芯片包括处理器和通信接口,所述通信接口和所述处理器耦合,所述处理器用于运行网络侧设备程序或指令,实现上述数据处理的方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
应理解,本申请实施例提到的芯片还可以称为系统级芯片,系统芯片,芯片系统或片上系统芯片等。
本申请实施例还提供了一种计算机程序产品,该计算机程序产品包括处理器、存储器及存储在所述存储器上并可在所述处理器上运行的程序或指令,所述程序或指令被所述处理器执行时,实现上述数据处理的方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
本申请实施例还提供了一种联邦数据处理系统,至少包括:第一网元、第二网元,所述第一网元可用于执行如上所述的方法实施例300-400中的步骤,所述第二网元可用于执行如上所述的方法实施例500中的步骤。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。此外,需要指出的是,本申请实施方式中的方法和装置的范围不限按示出或讨论的顺序来执行功能,还可 包括根据所涉及的功能按基本同时的方式或按相反的顺序来执行功能,例如,可以按不同于所描述的次序来执行所描述的方法,并且还可以添加、省去、或组合各种步骤。另外,参照某些示例所描述的特征可在其他示例中被组合。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以计算机软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端(可以是手机,计算机,服务器,空调器,或者网络设备等)执行本申请各个实施例所述的方法。
上面结合附图对本申请的实施例进行了描述,但是本申请并不局限于上述的具体实施方式,上述的具体实施方式仅仅是示意性的,而不是限制性的,本领域的普通技术人员在本申请的启示下,在不脱离本申请宗旨和权利要求所保护的范围情况下,还可做出很多形式,均属于本申请的保护之内。

Claims (42)

  1. A data processing method, comprising:
    sending, by a first network element in a case of performing a federated inference process corresponding to a first inference task, a federated inference request message to at least one second network element, wherein the federated inference request message includes at least information about the first inference task, and the second network element is a network element participating in the federated inference process;
    receiving, by the first network element, first information sent by the at least one second network element, wherein the first information includes at least a first inference result; and
    determining, by the first network element according to at least one of the first inference results, a second inference result corresponding to the first inference task.
  2. The method according to claim 1, wherein before the sending a federated inference request message to at least one second network element, the method further comprises:
    determining, by the first network element, that a first condition holds, the first condition including at least one of the following:
    the first network element does not store, or cannot obtain, all or part of the inference data corresponding to the inference process;
    the at least one second network element is capable of providing all or part of the inference data corresponding to the inference process; and
    the inference data required by the inference task has the same samples but different sample features across the second network elements.
  3. The method according to claim 1, wherein the federated inference request message includes at least one of the following:
    model instance identification information, used to identify a target model to be used in the federated inference process;
    identification information of the first inference task;
    first indication information, used to indicate that the federated inference process is a vertical federated inference process;
    information about a first filter, used to define inference sample information corresponding to the first inference task, the inference sample information including at least one of inference object information, inference time information, or inference area information; and
    reporting information corresponding to the first inference result.
  4. The method according to claim 3, wherein the reporting information corresponding to the first inference result includes at least one of the following:
    a reporting format of the first inference result; and
    a reporting condition of the first inference result.
  5. The method according to claim 1, wherein the first inference result is obtained by the second network element through inference based on a local inference model, and the local inference model is determined by the second network element according to model instance identification information included in the federated inference request message.
  6. The method according to claim 1, wherein the first information further includes at least one of the following:
    model instance identification information, used to identify a target model to be used in the federated inference process;
    identification information of the first inference task; and
    inference sample information corresponding to the first inference result, the inference sample information including at least one of inference object information, inference time information, or inference area information.
  7. The method according to any one of claims 1 to 6, wherein the step of determining, by the first network element according to at least one of the first inference results, the second inference result corresponding to the first inference task includes:
    calculating, by the first network element, the second inference result according to a target model and at least one of the first inference results.
  8. The method according to claim 7, wherein the step of calculating, by the first network element, the second inference result according to the target model and at least one of the first inference results includes:
    associating and aligning, by the first network element according to the inference sample information corresponding to the first inference results, the first inference results that are sent by the at least one second network element and that are for a same inference sample; and
    inputting, by the first network element, the first inference results for the same inference sample into the target model to obtain the second inference result for the same inference sample.
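Purely as an illustrative aid, the association and alignment recited in claims 7 and 8 could be sketched as follows, keying each first inference result by its inference sample information before the aligned values are fed into the target model; the record layout and the name align_and_aggregate are assumptions made for this example only.

```python
# Illustrative alignment of first inference results by inference sample
# information (object id, time), followed by aggregation with a target model.
from collections import defaultdict
from typing import Callable, Dict, List, Tuple


def align_and_aggregate(
    first_infos: List[Dict],
    target_model: Callable[[List[float]], float],
) -> Dict[Tuple[str, str], float]:
    # Group the first inference results reported by different second network
    # elements so that results for the same inference sample sit together.
    aligned: Dict[Tuple[str, str], List[float]] = defaultdict(list)
    for info in first_infos:
        for record in info["results"]:
            sample_key = (record["object"], record["time"])   # inference sample information
            aligned[sample_key].append(record["value"])       # first inference result

    # Feed the aligned results for each sample into the target model to get
    # the second inference result for that sample.
    return {key: target_model(values) for key, values in aligned.items()}


if __name__ == "__main__":
    infos = [
        {"results": [{"object": "ue-1", "time": "t0", "value": 0.3},
                     {"object": "ue-2", "time": "t0", "value": 0.7}]},
        {"results": [{"object": "ue-2", "time": "t0", "value": 0.4},
                     {"object": "ue-1", "time": "t0", "value": 0.9}]},
    ]
    print(align_and_aggregate(infos, target_model=max))
```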
  9. The method according to any one of claims 1 to 8, wherein the method further comprises:
    sending, by the first network element, a model request message to a third network element, wherein the model request message is used to request the third network element to train and/or feed back the target model; and
    receiving, by the first network element, information about the target model sent by the third network element, wherein the information about the target model includes at least the target model information.
  10. The method according to claim 9, wherein the model request message includes at least one of the following:
    type information of a model training task;
    identification information of the model training task;
    information about a second filter, used to define at least one of a target object, a target time, or a target area corresponding to the model training task; and
    model feedback related information, the model feedback related information including at least one of a model description manner or a model feedback time.
  11. The method according to claim 9, wherein the information about the target model further includes at least one of the following:
    model instance identification information;
    second indication information, used to indicate that the target model is a vertical federated learning model; and
    information about a fourth network element, the fourth network element being a network element participating in training of the target model.
  12. The method according to claim 9, wherein the target model information includes at least one of the following:
    model structure information;
    model parameter information;
    model algorithm information;
    model hyperparameter information;
    type information of model input data; and
    type information of model output data.
  13. The method according to claim 9, wherein before the step of sending a federated inference request message to at least one second network element, the method further comprises:
    determining, by the first network element, information about the at least one second network element according to the information about the target model.
  14. The method according to claim 13, wherein the step of determining, by the first network element, the information about the at least one second network element according to the information about the target model includes:
    determining, by the first network element, the information about the at least one second network element according to information about a fourth network element included in the information about the target model, the fourth network element being a network element participating in training of the target model.
  15. The method according to any one of claims 1 to 14, wherein the method further comprises:
    receiving, by the first network element, an inference task request message sent by a consumer device, wherein the inference task request message includes the information about the first inference task; and
    sending, by the first network element, the second inference result to the consumer device.
  16. A data processing method, comprising:
    receiving, by a second network element, a federated inference request message sent by a first network element, wherein the federated inference request message includes at least information about a first inference task;
    performing, by the second network element, inference according to the federated inference request message to obtain a first inference result; and
    sending, by the second network element, first information to the first network element, wherein the first information includes at least the first inference result.
  17. The method according to claim 16, wherein the federated inference request message includes at least one of the following:
    model instance identification information, used to identify a target model to be used in the federated inference process;
    identification information of the first inference task;
    first indication information, used to indicate that the federated inference process is a vertical federated inference process;
    information about a first filter, used to define inference sample information corresponding to the first inference task, the inference sample information including at least one of inference object information, inference time information, or inference area information; and
    reporting information corresponding to the first inference result.
  18. The method according to claim 17, wherein the reporting information of the first inference result includes at least one of the following:
    a reporting format of the first inference result; and
    a reporting condition of the first inference result.
  19. The method according to claim 16, wherein the first information further includes at least one of the following:
    model instance identification information, used to identify a target model to be used in the federated inference process;
    identification information of the first inference task; and
    inference sample information corresponding to the first inference result, the inference sample information including at least one of inference object information, inference time information, or inference area information.
  20. The method according to any one of claims 16 to 19, wherein the step of performing, by the second network element, inference according to the federated inference request message to obtain the first inference result includes:
    determining, by the second network element, a local inference model and type information of inference input data according to model instance identification information in the federated inference request message;
    obtaining, by the second network element, the inference input data according to the type information of the inference input data and information about a first filter in the federated inference request message; and
    performing, by the second network element, inference based on the inference input data and the local inference model to obtain the first inference result.
  21. A data processing apparatus, comprising:
    a first sending module, configured to send, in a case of performing a federated inference process corresponding to a first inference task, a federated inference request message to at least one second network element, wherein the federated inference request message includes at least information about the first inference task, and the second network element is a network element participating in the federated inference process;
    a first receiving module, configured to receive first information sent by the at least one second network element, wherein the first information includes at least a first inference result; and
    a first inference module, configured to determine, according to at least one of the first inference results, a second inference result corresponding to the first inference task.
  22. The apparatus according to claim 21, wherein the first inference module is further configured to determine that a first condition holds, the first condition including at least one of the following:
    the first network element does not store, or cannot obtain, all or part of the inference data corresponding to the inference process;
    the at least one second network element is capable of providing all or part of the inference data corresponding to the inference process; and
    the inference data required by the inference task has the same samples but different sample features across the second network elements.
  23. The apparatus according to claim 21, wherein the federated inference request message includes at least one of the following:
    model instance identification information, used to identify a target model to be used in the federated inference process;
    identification information of the first inference task;
    first indication information, used to indicate that the federated inference process is a vertical federated inference process;
    information about a first filter, used to define inference sample information corresponding to the first inference task, the inference sample information including at least one of inference object information, inference time information, or inference area information; and
    reporting information corresponding to the first inference result.
  24. The apparatus according to claim 23, wherein the reporting information corresponding to the first inference result includes at least one of the following:
    a reporting format of the first inference result; and
    a reporting condition of the first inference result.
  25. The apparatus according to claim 21, wherein the first inference result is obtained by the second network element through inference based on a local inference model, and the local inference model is determined by the second network element according to model instance identification information included in the federated inference request message.
  26. The apparatus according to claim 21, wherein the first information further includes at least one of the following:
    model instance identification information, used to identify a target model to be used in the federated inference process;
    identification information of the first inference task; and
    inference sample information corresponding to the first inference result, the inference sample information including at least one of inference object information, inference time information, or inference area information.
  27. The apparatus according to any one of claims 21 to 26, wherein the step in which the first inference module determines, according to at least one of the first inference results, the second inference result corresponding to the first inference task includes:
    calculating the second inference result according to a target model and at least one of the first inference results.
  28. The apparatus according to claim 27, wherein the step in which the first inference module calculates the second inference result according to the target model and at least one of the first inference results includes:
    associating and aligning, according to the inference sample information corresponding to the first inference results, the first inference results that are sent by the at least one second network element and that are for a same inference sample; and
    inputting the first inference results for the same inference sample into the target model to obtain the second inference result for the same inference sample.
  29. The apparatus according to any one of claims 21 to 28, wherein the first sending module is further configured to send a model request message to a third network element, the model request message being used to request the third network element to train and/or feed back the target model; and
    the first receiving module is further configured to receive information about the target model sent by the third network element, the information about the target model including at least the target model information.
  30. The apparatus according to claim 29, wherein the model request message includes at least one of the following:
    type information of a model training task;
    identification information of the model training task;
    information about a second filter, used to define at least one of a target object, a target time, or a target area corresponding to the model training task; and
    model feedback related information, the model feedback related information including at least one of a model description manner or a model feedback time.
  31. The apparatus according to claim 29, wherein the information about the target model further includes at least one of the following:
    model instance identification information;
    second indication information, used to indicate that the target model is a vertical federated learning model; and
    information about a fourth network element, the fourth network element being a network element participating in training of the target model.
  32. The apparatus according to claim 29, wherein the target model information includes at least one of the following:
    model structure information;
    model parameter information;
    model algorithm information;
    model hyperparameter information;
    type information of model input data; and
    type information of model output data.
  33. The apparatus according to claim 29, wherein the first inference module is further configured to determine information about the at least one second network element according to the information about the target model.
  34. The apparatus according to claim 33, wherein the step in which the first inference module determines the information about the at least one second network element according to the information about the target model includes:
    determining the information about the at least one second network element according to information about a fourth network element included in the information about the target model, the fourth network element being a network element participating in training of the target model.
  35. The apparatus according to any one of claims 21 to 34, wherein the first receiving module is further configured to receive an inference task request message sent by a consumer device, the inference task request message including the information about the first inference task; and
    the first sending module is further configured to send the second inference result to the consumer device.
  36. A data processing apparatus, comprising:
    a second receiving module, configured to receive a federated inference request message sent by a first network element, wherein the federated inference request message includes at least information about a first inference task;
    a second inference module, configured to perform inference according to the federated inference request message to obtain a first inference result; and
    a second sending module, configured to send first information to the first network element, wherein the first information includes at least the first inference result.
  37. The apparatus according to claim 36, wherein the federated inference request message includes at least one of the following:
    model instance identification information, used to identify a target model to be used in the federated inference process;
    identification information of the first inference task;
    first indication information, used to indicate that the federated inference process is a vertical federated inference process;
    information about a first filter, used to define inference sample information corresponding to the first inference task, the inference sample information including at least one of inference object information, inference time information, or inference area information; and
    reporting information corresponding to the first inference result.
  38. The apparatus according to claim 37, wherein the reporting information of the first inference result includes at least one of the following:
    a reporting format of the first inference result; and
    a reporting condition of the first inference result.
  39. The apparatus according to claim 36, wherein the first information further includes at least one of the following:
    model instance identification information, used to identify a target model to be used in the federated inference process;
    identification information of the first inference task; and
    inference sample information corresponding to the first inference result, the inference sample information including at least one of inference object information, inference time information, or inference area information.
  40. The apparatus according to any one of claims 36 to 39, wherein the step in which the second inference module performs inference according to the federated inference request message to obtain the first inference result includes:
    determining a local inference model and type information of inference input data according to model instance identification information in the federated inference request message;
    obtaining the inference input data according to the type information of the inference input data and information about a first filter in the federated inference request message; and
    performing inference based on the inference input data and the local inference model to obtain the first inference result.
  41. A communication device, comprising a processor and a memory, wherein the memory stores a program or instructions executable on the processor, and when the program or instructions are executed by the processor, the steps of the data processing method according to any one of claims 1 to 15 are implemented, or the steps of the data processing method according to any one of claims 16 to 20 are implemented.
  42. A readable storage medium, wherein the readable storage medium stores a program or instructions, and when the program or instructions are executed by a processor, the steps of the data processing method according to any one of claims 1 to 15 are implemented, or the steps of the data processing method according to any one of claims 16 to 20 are implemented.
PCT/CN2022/143669 2021-12-30 2022-12-29 数据处理的方法、装置及通信设备 WO2023125879A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111669990.4 2021-12-30
CN202111669990.4A CN116419209A (zh) 2021-12-30 2021-12-30 数据处理的方法、装置及通信设备

Publications (1)

Publication Number Publication Date
WO2023125879A1 (zh)

Family

ID=86998056

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/143669 WO2023125879A1 (zh) 2021-12-30 2022-12-29 数据处理的方法、装置及通信设备

Country Status (2)

Country Link
CN (1) CN116419209A (zh)
WO (1) WO2023125879A1 (zh)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210019652A1 (en) * 2019-07-18 2021-01-21 Qualcomm Incorporated Concurrent optimization of machine learning model performance
WO2021172810A1 (ko) * 2020-02-26 2021-09-02 삼성전자 주식회사 무선 통신 시스템에서 서비스를 선택하는 방법 및 장치
CN113839797A (zh) * 2020-06-23 2021-12-24 华为技术有限公司 数据处理方法和装置
CN111985000A (zh) * 2020-08-21 2020-11-24 深圳前海微众银行股份有限公司 模型服务输出方法、装置、设备及存储介质
CN112423382A (zh) * 2020-11-09 2021-02-26 江苏第二师范学院(江苏省教育科学研究院) 基于5g网络的模型管理方法及使用nrf的注册和更新方法
WO2022222152A1 (zh) * 2021-04-23 2022-10-27 Oppo广东移动通信有限公司 联邦学习方法、联邦学习系统、第一设备和第三设备
CN113537633A (zh) * 2021-08-09 2021-10-22 中国电信股份有限公司 基于纵向联邦学习的预测方法、装置、设备、介质和系统
CN113570071A (zh) * 2021-08-09 2021-10-29 山东产业技术研究院智能计算研究院 一种联邦学习模型服务发布方法及系统

Also Published As

Publication number Publication date
CN116419209A (zh) 2023-07-11


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22915128

Country of ref document: EP

Kind code of ref document: A1