CN115767514A - Communication method, communication device and communication system


Info

Publication number
CN115767514A
Authority
CN
China
Prior art keywords
network element
training
model
encrypted
type
Prior art date
Legal status
Pending
Application number
CN202111030657.9A
Other languages
Chinese (zh)
Inventor
封召
辛阳
王远
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202111030657.9A
Priority to PCT/CN2022/114043 (published as WO2023030077A1)
Publication of CN115767514A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 - Machine learning
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W 12/00 - Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W 12/03 - Protecting confidentiality, e.g. by encryption
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W 24/00 - Supervisory, monitoring or testing arrangements
    • H04W 24/02 - Arrangements for optimising operational condition

Abstract

The embodiments of this application provide a communication method, a communication device and a communication system. The method includes the following steps: an inference network element sends a first request message to a training network element, where the first request message includes identification information of an analysis type, the manufacturer type of the training network element is different from that of the inference network element, and the two network elements use the same type of model deployment platform; the inference network element receives a first response message from the training network element, where the first response message includes an encrypted model or address information of the encrypted model; the inference network element obtains an encrypted analysis result according to the encrypted model; and the inference network element obtains a decrypted analysis result according to the encrypted analysis result. With this solution, the inference network element and the training network element can be deployed by different manufacturers, which removes the limitation in existing solutions that a model can only be shared within a single manufacturer.

Description

Communication method, communication device and communication system
Technical Field
The present application relates to the field of communications technologies, and in particular, to a communication method, a communication apparatus, and a communication system.
Background
The training network element can train the model and provide the trained model to the reasoning network element, and the reasoning network element inputs the data to be analyzed into the model for reasoning to obtain an analysis result.
At present, the address information of one or more training network elements and the identification information of the analysis types supported by each training network element are generally configured locally on the inference network element, and the inference network element may select, from the one or more training network elements, a training network element capable of providing a model according to the analysis type corresponding to the data to be analyzed. The inference network element and each training network element are from the same manufacturer and use the same model deployment platform.
However, the inference network element and the training network element are limited to the same manufacturer, and the model cannot be shared across manufacturers.
Disclosure of Invention
The embodiment of the application provides a communication method, a communication device and a communication system, which are used for realizing cross-manufacturer sharing of a model.
In a first aspect, the present embodiments provide a communication method, which may be performed by an inference network element or a module (e.g., a chip) applied in the inference network element. Taking the inference network element to execute the communication method as an example, the method includes: the method comprises the steps that a reasoning network element sends a first request message to a training network element, wherein the first request message comprises identification information of an analysis type, the first request message is used for requesting a model supporting the analysis type, the manufacturer type of the training network element is different from that of the reasoning network element, and the model deployment platforms of the reasoning network element and the training network element are the same in type; the inference network element receiving a first response message from the training network element, the first response message including an encrypted model or address information of the encrypted model, the encrypted model supporting the analysis type; the reasoning network element obtains an encrypted analysis result according to the encrypted model; and the reasoning network element acquires a decrypted analysis result according to the encrypted analysis result.
According to this scheme, the reasoning network element and the training network element are deployed by different manufacturers but use the same type of model deployment platform, which removes the limitation in existing solutions that a model can only be shared within the same manufacturer. The scheme provides a procedure for encrypting and distributing a model across manufacturers, enhances the training network element's capability of encrypting and distributing the model, and avoids the risk that the manufacturer deploying the reasoning network element steals information such as the architecture and parameters of the model.
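For readability, the following is a minimal sketch of the inference-side flow of the first aspect, written in Python. All object, method, and field names (for example request_model, decrypt_result, association_id) are illustrative assumptions and do not correspond to actual service operations defined by this application or by 3GPP.

```python
# Minimal sketch of the first-aspect flow from the inference network element's
# point of view. The training_ne object and its methods are assumed stand-ins
# for the signalling described in the text, not real APIs.

def request_and_run_encrypted_model(training_ne, analytics_id, own_vendor,
                                    own_platform, input_data, fetch_model=None):
    # 1. First request message: identification of the analysis type, plus the
    #    optional vendor type and model deployment platform type fields.
    first_response = training_ne.request_model({
        "analysis_type_id": analytics_id,
        "vendor_type": own_vendor,
        "platform_type": own_platform,
    })

    # 2. First response message: the encrypted model itself, or only its
    #    address information (in which case it is fetched separately).
    encrypted_model = first_response.get("encrypted_model")
    if encrypted_model is None and fetch_model is not None:
        encrypted_model = fetch_model(first_response["encrypted_model_address"])

    # 3. Local inference on the encrypted model yields an encrypted analysis
    #    result; the model's architecture and parameters stay hidden.
    encrypted_result = encrypted_model.infer(input_data)

    # 4. The decrypted analysis result is obtained from the decrypting network
    #    element (here the training NE, as signalled by the first indication).
    return training_ne.decrypt_result(
        encrypted_result, association_id=first_response.get("association_id"))
```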
In a possible implementation method, the inference network element sends the encrypted analysis result to the training network element; the reasoning network element receives the decrypted analysis result from the training network element.
According to this scheme, because the training network element is the network element that encrypted the model, having the training network element decrypt the encrypted analysis result ensures that the encrypted analysis result is decrypted accurately.
In a possible implementation method, the first response message further includes first indication information, where the first indication information indicates that the training network element decrypts the encrypted analysis result.
According to this scheme, the reasoning network element can accurately learn from the first indication information that the network element that decrypts the encrypted analysis result is the training network element.
In a possible implementation method, the inference network element sends the encrypted analysis result and an association identifier to the training network element, where the association identifier is used by the training network element to determine an encryption algorithm corresponding to the encrypted model.
According to this scheme, the training network element can accurately obtain, through the association identifier, the encryption algorithm corresponding to the encrypted model, and therefore can accurately determine the decryption algorithm used for decrypting the encrypted analysis result, which improves decryption efficiency.
In a possible implementation method, the first response message further includes address information of the first network element; the reasoning network element sends the encrypted analysis result to the first network element according to the address information of the first network element; the inference network element receives the decrypted analysis result from the first network element.
According to the scheme, when the training network element cannot decrypt the encrypted analysis result, the first network element can decrypt the encrypted analysis result, so that the reasoning network element can be ensured to obtain the decrypted analysis result.
In a possible implementation method, the inference network element sends the encrypted analysis result and an association identifier to the first network element according to the address information of the first network element, where the association identifier is used by the first network element to determine an encryption algorithm corresponding to the encrypted model.
According to this scheme, the first network element can accurately obtain, through the association identifier, the encryption algorithm corresponding to the encrypted model, and therefore can accurately determine the decryption algorithm used for decrypting the encrypted analysis result, which improves decryption efficiency.
In a possible implementation method, the first response message further includes second indication information, where the second indication information is used to indicate a data type of the input data corresponding to the encrypted model.
According to the scheme, the reasoning network element carries out corresponding preprocessing on the input data through the second indication information to obtain the data to be analyzed which meets the requirements, and the data reasoning efficiency can be improved.
In a possible implementation method, the first request message further includes a vendor type of the inference network element and a model deployment platform type of the inference network element.
According to this scheme, the first request message carries the manufacturer type of the reasoning network element and the type of its model deployment platform, so that the training network element can determine whether the two network elements have the same manufacturer type and whether they have the same type of model deployment platform. This makes it easier for the training network element to select a suitable method for providing the data inference function to the reasoning network element, which improves data inference efficiency.
In a possible implementation method, before sending the first request message to the training network element, the inference network element sends a second request message to the data management network element, where the second request message includes the identification information of the analysis type, and the second request message is used to request a network element that supports the analysis type; the inference network element receives a second response message from the data management network element, the second response message including address information of the training network element.
According to the scheme, the reasoning network element can request to discover the training network element from the data management network element, and the training network element capable of providing the model can be accurately discovered.
In a second aspect, embodiments of the present application provide a communication method, which may be performed by a training network element or a module (e.g., a chip) applied in the training network element. Taking the training network element to execute the communication method as an example, the method includes: a training network element receives a first request message from an inference network element, wherein the first request message comprises identification information of an analysis type, the first request message is used for requesting a model supporting the analysis type, the manufacturer type of the training network element is different from that of the inference network element, and the model deployment platforms of the inference network element and the training network element are the same in type; the training network element sends a first response message to the reasoning network element, wherein the first response message comprises an encrypted model or address information of the encrypted model; the training network element receiving an encrypted analysis result from the reasoning network element, the encrypted analysis result being obtained according to the encrypted model; the training network element decrypts the encrypted analysis result to obtain a decrypted analysis result; the training network element sends the decrypted analysis result to the inference network element.
According to this scheme, the reasoning network element and the training network element are deployed by different manufacturers but use the same type of model deployment platform, which removes the limitation in existing solutions that a model can only be shared within the same manufacturer. The scheme provides a procedure for encrypting and distributing a model across manufacturers, enhances the training network element's capability of encrypting and distributing the model, and avoids the risk that the manufacturer deploying the reasoning network element steals information such as the architecture and parameters of the model.
In a possible implementation method, the first request message further includes a vendor type of the inference network element and a type of a model deployment platform of the inference network element; before the training network element sends the first response message to the reasoning network element, determining that the manufacturer types of the training network element and the reasoning network element are different, and the types of the model deployment platforms of the reasoning network element and the training network element are the same.
According to this scheme, the first request message carries the manufacturer type of the reasoning network element and the type of its model deployment platform, so that the training network element can determine whether the two network elements have the same manufacturer type and whether they have the same type of model deployment platform. This makes it easier for the training network element to select a suitable method for providing the data inference function to the reasoning network element, which improves data inference efficiency.
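As an illustration of the branching just described, the sketch below shows one way a training network element might compare the vendor and platform types carried in the first request message; the helper object model_store and all field names are assumptions, and the branch outcomes merely mirror the first/second aspects (same platform) and the third/fourth aspects (different platform).

```python
# Assumed decision logic inside a training network element; model_store is a
# hypothetical helper that can load or encrypt a model for an analysis type.

def handle_model_request(request, own_vendor, own_platform,
                         model_store, second_ne_address):
    same_vendor = request["vendor_type"] == own_vendor
    same_platform = request["platform_type"] == own_platform

    if same_vendor:
        # Same manufacturer: the model can be provided without encryption.
        return {"model": model_store.load(request["analysis_type_id"])}

    if same_platform:
        # Different manufacturer, same model deployment platform type:
        # provide an encrypted model plus an association identifier, and
        # indicate (first indication information) who decrypts the result.
        encrypted_model, association_id = model_store.encrypt(
            request["analysis_type_id"])
        return {"encrypted_model": encrypted_model,
                "association_id": association_id,
                "decrypting_ne": "training_ne"}

    # Different manufacturer and different platform type: reject the request
    # and point the inference NE to the second network element instead.
    return {"rejected": True,
            "cause": "different vendor and different model deployment platform",
            "second_ne_address": second_ne_address}
```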
In a possible implementation method, the first response message further includes first indication information, where the first indication information indicates that the training network element decrypts the encrypted analysis result.
According to this scheme, the reasoning network element can accurately learn from the first indication information that the network element that decrypts the encrypted analysis result is the training network element.
In a possible implementation method, the first response message further includes second indication information, where the second indication information is used to indicate a data type of the input data corresponding to the encrypted model.
According to the scheme, the reasoning network element carries out corresponding preprocessing on the input data through the second indication information to obtain the data to be analyzed which meets the requirements, and the data reasoning efficiency can be improved.
In a possible implementation method, before receiving a first request message from an inference network element, the training network element sends a registration request message to a data management network element, where the registration request message includes identification information of the analysis type and model information of the training network element, and the model information includes a vendor type of the training network element and a type of a model deployment platform of the training network element.
In a possible implementation method, the model information in the registration request message further includes the second indication information.
In a possible implementation method, the model information in the registration request message further includes identification information of the second network element.
In a possible implementation method, the model information in the registration request message further includes identification information of the first network element.
In a possible implementation method, the training network element receives the encrypted analysis result and the associated identifier from the reasoning network element; the training network element determines an encryption algorithm corresponding to the encrypted model according to the association identifier; the training network element determines a decryption algorithm according to the encryption algorithm; and the training network element decrypts the encrypted analysis result according to the decryption algorithm to obtain the decrypted analysis result.
According to this scheme, the training network element can accurately obtain, through the association identifier, the encryption algorithm corresponding to the encrypted model, and therefore can accurately determine the decryption algorithm used for decrypting the encrypted analysis result, which improves decryption efficiency.
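A minimal sketch of the bookkeeping implied above, assuming the training network element keeps a table from association identifier to encryption algorithm and from encryption algorithm to decryption routine; the algorithm names and placeholder decryptors are illustrative only.

```python
# Hypothetical lookup tables kept by the training network element.
ENCRYPTION_ALG_BY_ASSOCIATION = {
    "assoc-001": "fhe-v1",       # association identifier -> encryption algorithm
    "assoc-002": "masking-v1",
}
DECRYPTOR_BY_ENCRYPTION_ALG = {
    "fhe-v1": lambda ciphertext: ciphertext,      # placeholder: real FHE decryption
    "masking-v1": lambda ciphertext: ciphertext,  # placeholder: remove random mask
}

def decrypt_analysis_result(encrypted_result, association_id):
    # Association identifier -> encryption algorithm -> decryption algorithm.
    encryption_alg = ENCRYPTION_ALG_BY_ASSOCIATION[association_id]
    decrypt = DECRYPTOR_BY_ENCRYPTION_ALG[encryption_alg]
    return decrypt(encrypted_result)
```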
In a third aspect, the present application provides a communication method, which may be performed by an inference network element or a module (e.g., a chip) applied in the inference network element. Taking the inference network element to execute the communication method as an example, the method includes: the inference network element sends a request message to a training network element, wherein the request message comprises identification information of an analysis type, the request message is used for requesting a model supporting the analysis type, the manufacturer type of the training network element is different from that of the inference network element, and the model deployment platforms of the inference network element and the training network element are different; the inference network element receives a response message from the training network element, wherein the response message comprises first indication information and address information of a second network element, the first indication information indicates that the request for supporting the model of the analysis type is rejected, and the type of the model deployment platform supported by the second network element comprises the type of the model deployment platform of the training network element; the reasoning network element sends data to be analyzed to the second network element according to the address information of the second network element, wherein the data to be analyzed is used for the second network element to generate an encrypted analysis result according to the encrypted model corresponding to the analysis type; the inference network element receives a decrypted analysis result from the training network element or the first network element, the decrypted analysis result being obtained by the training network element or the first network element according to the encrypted analysis result.
According to this scheme, the reasoning network element and the training network element are deployed by different manufacturers and use different types of model deployment platforms, which removes the limitation in existing solutions that a model can only be shared within the same manufacturer. The scheme provides a procedure for encrypting and distributing a model across manufacturers, enhances the training network element's capability of encrypting and distributing the model, and avoids the risk that the manufacturer deploying the reasoning network element steals information such as the architecture and parameters of the model.
In a possible implementation method, the response message further includes a rejection reason value, where the rejection reason value indicates that the manufacturer type of the training network element is different from that of the inference network element, and the model deployment platforms of the inference network element and the training network element are different from each other.
According to this scheme, the rejection reason value informs the reasoning network element why it was rejected, so that the reasoning network element no longer sends requests for a model supporting the analysis type to this training network element, which reduces inference overhead.
In a possible implementation method, the request message further includes a vendor type of the inference network element and a model deployment platform type of the inference network element.
According to this scheme, the request message carries the manufacturer type of the reasoning network element and the type of its model deployment platform, so that the training network element can determine whether the two network elements have the same manufacturer type and whether they have the same type of model deployment platform. This makes it easier for the training network element to select a suitable method for providing the data inference function to the reasoning network element, which improves data inference efficiency.
In a possible implementation method, the response message further includes second indication information, where the second indication information is used to indicate a data type of the input data corresponding to the encrypted model.
According to the scheme, the reasoning network element carries out corresponding preprocessing on the input data through the second indication information to obtain the data to be analyzed which meets the requirements, and the data reasoning efficiency can be improved.
In a possible implementation method, the inference network element sends, to the second network element, data to be analyzed and an association identifier according to the address information of the second network element, where the association identifier is used by the first network element or the training network element to determine an encryption algorithm corresponding to the encrypted model.
According to this scheme, the training network element or the first network element can accurately obtain, through the association identifier, the encryption algorithm corresponding to the encrypted model, and therefore can accurately determine the decryption algorithm used for decrypting the encrypted analysis result, which improves decryption efficiency.
In a fourth aspect, embodiments of the present application provide a communication method, which may be performed by a training network element or a module (e.g., a chip) applied in the training network element. Taking the training network element executing the communication method as an example, the method includes: the training network element receives a request message from a reasoning network element, where the request message includes identification information of an analysis type, the request message is used to request a model supporting the analysis type, the manufacturer type of the training network element is different from that of the reasoning network element, and the types of the model deployment platforms of the reasoning network element and the training network element are different; the training network element sends a response message to the reasoning network element, where the response message includes first indication information and address information of a second network element, the first indication information indicates that the request for a model supporting the analysis type is rejected, and the types of model deployment platforms supported by the second network element include the type of the model deployment platform of the training network element; the training network element receives an encrypted analysis result from the second network element, where the encrypted analysis result is obtained by the second network element according to the data to be analyzed of the reasoning network element and the encrypted model corresponding to the analysis type; the training network element decrypts the encrypted analysis result to obtain a decrypted analysis result; and the training network element sends the decrypted analysis result to the reasoning network element.
According to this scheme, the reasoning network element and the training network element are deployed by different manufacturers and use different types of model deployment platforms, which removes the limitation in existing solutions that a model can only be shared within the same manufacturer. The scheme provides a procedure for encrypting and distributing a model across manufacturers, enhances the training network element's capability of encrypting and distributing the model, and avoids the risk that the manufacturer deploying the reasoning network element steals information such as the architecture and parameters of the model.
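The sketch below strings the third- and fourth-aspect messages together in order; every object and method name is an assumption used only to show who talks to whom, not an actual interface.

```python
# Assumed end-to-end orchestration for the different-platform case.

def cross_platform_analytics(inference_ne, training_ne, second_ne,
                             analytics_id, data_to_analyze):
    # 1. The inference NE requests a model and is rejected; the response
    #    carries the first indication information and the second NE's address.
    response = training_ne.request_model(inference_ne.build_request(analytics_id))
    assert response["rejected"] and "second_ne_address" in response

    # 2. The inference NE sends the data to be analysed (and the association
    #    identifier, if any) to the second NE, which hosts the encrypted model.
    encrypted_result = second_ne.infer_encrypted(
        analytics_id, data_to_analyze,
        association_id=response.get("association_id"))

    # 3. The training NE (or the first NE) decrypts the encrypted result and
    #    returns the decrypted analysis result to the inference NE.
    decrypted_result = training_ne.decrypt_result(
        encrypted_result, association_id=response.get("association_id"))
    inference_ne.deliver(decrypted_result)
```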
In a possible implementation method, the request message further includes a vendor type of the inference network element and a model deployment platform type of the inference network element; before the training network element sends a response message to the reasoning network element, determining that the manufacturer types of the training network element and the reasoning network element are different, and the types of the model deployment platforms of the reasoning network element and the training network element are different.
According to this scheme, the request message carries the manufacturer type of the reasoning network element and the type of its model deployment platform, so that the training network element can determine whether the two network elements have the same manufacturer type and whether they have the same type of model deployment platform. This makes it easier for the training network element to select a suitable method for providing the data inference function to the reasoning network element, which improves data inference efficiency.
In a possible implementation method, the response message further includes a rejection reason value, where the rejection reason value indicates that the manufacturer type of the training network element is different from that of the inference network element, and the model deployment platforms of the inference network element and the training network element are different from each other.
According to this scheme, the rejection reason value informs the reasoning network element why it was rejected, so that the reasoning network element no longer sends requests for a model supporting the analysis type to this training network element, which reduces inference overhead.
In a possible implementation method, before the training network element receives the request message from the inference network element, the training network element sends the identification information of the analysis type and the encrypted model corresponding to the analysis type to the second network element.
In a possible implementation method, the response message further includes second indication information, where the second indication information is used to indicate a data type of the input data corresponding to the encrypted model.
According to the scheme, the reasoning network element carries out corresponding preprocessing on the input data through the second indication information to obtain the data to be analyzed which meets the requirements, and the data reasoning efficiency can be improved.
In a possible implementation method, the training network element receives the encrypted analysis result and the associated identifier from the second network element; the training network element determines an encryption algorithm corresponding to the encrypted model according to the association identifier; the training network element determines a decryption algorithm according to the encryption algorithm; and the training network element decrypts the encrypted analysis result according to the decryption algorithm to obtain the decrypted analysis result.
According to this scheme, the training network element can accurately obtain, through the association identifier, the encryption algorithm corresponding to the encrypted model, and therefore can accurately determine the decryption algorithm used for decrypting the encrypted analysis result, which improves decryption efficiency.
In a fifth aspect, the present application provides a communication method, which may be performed by a first network element or a module (e.g., a chip) applied in the first network element. Taking the first network element as an example to execute the communication method, the method includes: the first network element receives the encrypted analysis result; the first network element decrypts the encrypted analysis result to obtain a decrypted analysis result; the first network element sends the decrypted analysis result to an inference network element.
In a possible implementation method, the first network element receives the encrypted analysis result from the reasoning network element.
In one possible implementation method, the first network element receives the encrypted analysis result and the address information of the inference network element from the second network element; and the first network element sends the decrypted analysis result to the reasoning network element according to the address information of the reasoning network element.
In a possible implementation method, before the first network element receives the encrypted analysis result, the first network element receives an association identifier from a training network element and an identifier of a decryption algorithm corresponding to the association identifier; the first network element receives the encrypted analysis result and the association identifier; the first network element determines the decryption algorithm according to the association identifier; and the first network element decrypts the encrypted analysis result according to the decryption algorithm to obtain the decrypted analysis result.
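A sketch of the fifth-aspect behaviour, assuming the training network element registers, per association identifier, the decryption routine the first network element should later apply; the class and method names are invented for illustration.

```python
# Hypothetical first network element (analysis-result decryption NE).

class FirstNetworkElement:
    def __init__(self):
        self.decryptors = {}  # association identifier -> decryption function

    def register_decryptor(self, association_id, decryptor):
        # Called by the training NE before any encrypted result arrives.
        self.decryptors[association_id] = decryptor

    def handle_encrypted_result(self, encrypted_result, association_id,
                                inference_ne):
        # Decrypt according to the registered algorithm and send the
        # decrypted analysis result back to the inference network element.
        decrypted_result = self.decryptors[association_id](encrypted_result)
        inference_ne.deliver(decrypted_result)
```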
In a sixth aspect, the present application provides a communication method, which may be performed by a second network element or a module (e.g., a chip) applied in the second network element. Taking the second network element as an example to execute the communication method, the method includes: the second network element receives the identification information of the analysis type from the training network element and the encrypted model supporting the analysis type, wherein the type of the model deployment platform supported by the second network element comprises the type of the model deployment platform of the training network element; the second network element receives the data to be analyzed from the reasoning network element; the second network element obtains an encrypted analysis result according to the encrypted model and the data to be analyzed; the second network element sends the encrypted analysis result and the address information of the inference network element for receiving the decrypted analysis result to the training network element or the first network element, and the decrypted analysis result is obtained by the training network element or the first network element according to the encrypted analysis result.
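Similarly, a sketch of the sixth-aspect behaviour of the second network element (model deployment and inference network element); the decrypting_ne collaborator stands in for either the training network element or the first network element, and all names are assumptions.

```python
# Hypothetical second network element: hosts encrypted models and forwards
# only encrypted results, never plaintext analysis results.

class SecondNetworkElement:
    def __init__(self):
        self.encrypted_models = {}  # analysis type id -> encrypted model

    def install_model(self, analysis_type_id, encrypted_model):
        # Received from the training NE together with the analysis type id.
        self.encrypted_models[analysis_type_id] = encrypted_model

    def analyse(self, analysis_type_id, data_to_analyze,
                inference_ne_address, decrypting_ne):
        encrypted_result = self.encrypted_models[analysis_type_id].infer(
            data_to_analyze)
        # Forward the encrypted result plus the inference NE's address to the
        # training NE or the first NE, which decrypts and replies directly.
        decrypting_ne.decrypt_and_forward(encrypted_result,
                                          inference_ne_address)
```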
In a seventh aspect, the present application provides a communication method, which may be executed by an inference network element or a module (e.g., a chip) applied in the inference network element. Taking the inference network element executing the communication method as an example, the method includes: the inference network element sends a request message to the data management network element, where the request message includes identification information of an analysis type and is used to request a network element supporting the analysis type; the inference network element receives a response message from the data management network element, where the response message includes at least one group of information, each group of information includes address information of a candidate training network element and model information of the candidate training network element, the candidate training network element supports the analysis type, and the model information of the candidate training network element includes the manufacturer type of the candidate training network element and the type of the model deployment platform of the candidate training network element; and when, among the candidate training network elements corresponding to the at least one group of information, there are one or more candidate training network elements whose manufacturer type is different from that of the inference network element and whose model deployment platform type is the same, the inference network element selects one of these one or more candidate training network elements as the training network element.
According to this scheme, the function of the data management network element is enhanced: the training network element first registers or updates, with the data management network element, the identification information of the analysis types it supports and the corresponding model information, and the reasoning network element then discovers an available training network element or third-party network element from the data management network element. The reasoning network element and the training network element are deployed by different manufacturers, and the types of model deployment platforms they use may be the same or different.
In a possible implementation method, when, among the candidate training network elements corresponding to the at least one set of information, there is no candidate training network element whose manufacturer type is different from that of the inference network element and whose model deployment platform type is the same, the inference network element determines address information of a second network element according to the at least one set of information.
In a possible implementation method, the model information of the candidate training network element includes address information of the second network element; and the reasoning network element acquires the address information of the second network element from the model information of the candidate training network element.
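As a concrete reading of the seventh-aspect selection and the fallback described above, the sketch below filters the candidate training network elements returned by the data management network element; the dictionary layout of each candidate is an assumption.

```python
# Assumed candidate layout:
# {"address": ..., "model_info": {"vendor_type": ..., "platform_type": ...,
#                                 "second_ne_address": ... (optional)}}

def select_training_ne(candidates, own_vendor, own_platform):
    # Prefer a candidate from a different manufacturer whose model deployment
    # platform type matches our own.
    for candidate in candidates:
        info = candidate["model_info"]
        if (info["vendor_type"] != own_vendor
                and info["platform_type"] == own_platform):
            return {"training_ne_address": candidate["address"]}

    # No platform-compatible candidate: fall back to a second network element
    # whose address is carried in a candidate's model information.
    for candidate in candidates:
        second_ne = candidate["model_info"].get("second_ne_address")
        if second_ne is not None:
            return {"second_ne_address": second_ne}
    return None
```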
In a possible implementation, the encrypted model in any of the above implementations is encrypted using one or more of a fully homomorphic encryption algorithm, a random secure averaging algorithm, or a differential privacy algorithm.
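The following is a conceptual interface only, not a cryptographic implementation: it simply illustrates that, whichever of the listed algorithms is used, the inference side evaluates the model on protected parameters and only the key holder can recover the plaintext result.

```python
# Conceptual interface; `scheme` stands in for whichever protection mechanism
# (e.g. fully homomorphic encryption) the training network element chose.

class EncryptedModel:
    def __init__(self, protected_parameters, scheme):
        self.protected_parameters = protected_parameters  # produced by the training NE
        self.scheme = scheme

    def infer(self, data_to_analyze):
        # Evaluation over protected parameters: the output is an encrypted
        # analysis result that only the training NE (or first NE) can decrypt.
        return self.scheme.evaluate(self.protected_parameters, data_to_analyze)
```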
In a possible implementation method, the inference network element in any implementation method may be an independent core network element or a functional module in the core network element.
In a possible implementation method, the training network element in any implementation method may be an independent core network element or a functional module in the core network element.
In a possible implementation method, the first network element in any of the above implementation methods may be an analysis result decryption network element, and may be configured to decrypt the encrypted analysis result.
In a possible implementation method, the second network element in any of the above implementation methods may be a model deployment and inference network element, and may be configured to perform inference on data to be analyzed according to a model to obtain an analysis result. If the model used is an encrypted model, it may perform inference on the data to be analyzed according to the encrypted model to obtain an encrypted analysis result.
In an eighth aspect, an embodiment of the present application provides a communication apparatus, which may be an inference network element or a module (e.g., a chip) applied in the inference network element. The apparatus has functionality to implement any of the implementation methods of the first aspect, any of the implementation methods of the second aspect, or any of the implementation methods of the seventh aspect described above. The function can be realized by hardware, and can also be realized by executing corresponding software by hardware. The hardware or software includes one or more modules corresponding to the functions described above.
In a ninth aspect, an embodiment of the present application provides a communication apparatus, which may be a training network element or a module (e.g., a chip) applied in the training network element. The apparatus has the functionality to implement any of the implementation methods of the second aspect or any of the implementation methods of the fourth aspect described above. The function can be realized by hardware, and can also be realized by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above.
In a tenth aspect, an embodiment of the present application provides a communication apparatus, which may be a first network element or a module (e.g., a chip) applied in the first network element. The apparatus has a function of implementing any implementation method of the fifth aspect described above. The function can be realized by hardware, and can also be realized by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above.
In an eleventh aspect, an embodiment of the present application provides a communication apparatus, which may be a second network element or a module (e.g., a chip) applied in the second network element. The apparatus has a function of implementing any implementation method of the sixth aspect. The function can be realized by hardware, and can also be realized by executing corresponding software by hardware. The hardware or software includes one or more modules corresponding to the functions described above.
In a twelfth aspect, an embodiment of the present application provides a communication apparatus, including a processor and a memory; the memory is configured to store computer instructions, and when the apparatus runs, the processor executes the computer instructions stored by the memory, so as to cause the apparatus to perform any implementation method of the first aspect to the seventh aspect.
In a thirteenth aspect, an embodiment of the present application provides a communication apparatus, which includes means for performing each step of any implementation method in the first to seventh aspects.
In a fourteenth aspect, an embodiment of the present application provides a communication device, which includes a processor and an interface circuit, where the processor is configured to communicate with other devices through the interface circuit, and perform any implementation method in the first to seventh aspects. The processor includes one or more.
In a fifteenth aspect, an embodiment of the present application provides a communication device, including a processor coupled with a memory, and configured to invoke a program stored in the memory to perform any implementation method in the first to seventh aspects. The memory may be located within the device or external to the device. And the processor may be one or more.
In a sixteenth aspect, the present application further provides a computer-readable storage medium, which stores instructions that, when executed on a communication device, cause any implementation method in the first to seventh aspects to be performed.
In a seventeenth aspect, the present application further provides a computer program product, where the computer program product includes a computer program or instructions, and when the computer program or instructions are executed by a communication device, the method in any of the first to seventh aspects is executed.
In an eighteenth aspect, an embodiment of the present application further provides a chip system, including: a processor configured to perform any of the implementation methods of the first to third aspects.
In a nineteenth aspect, an embodiment of the present application further provides a communication system, including an inference network element for implementing any implementation method of the first aspect and a training network element for implementing any implementation method of the second aspect.
In a twentieth aspect, an embodiment of the present application further provides a communication system, including an inference network element for implementing any implementation method of the third aspect and a training network element for implementing any implementation method of the fourth aspect.
Drawings
FIG. 1 is a schematic diagram of a 5G network architecture based on a service-based architecture;
FIG. 2 is a schematic diagram of a 5G network architecture based on a point-to-point interface;
fig. 3 is a flowchart illustrating a communication method according to an embodiment of the present application;
fig. 4 is a flowchart illustrating a communication method according to an embodiment of the present application;
fig. 5 is a flowchart illustrating a communication method according to an embodiment of the present application;
fig. 6 is a schematic diagram of a communication device according to an embodiment of the present application;
fig. 7 is a schematic diagram of a communication device according to an embodiment of the present application.
Detailed Description
FIG. 1 is a schematic diagram of a 5th generation (5G) network architecture based on a service-based architecture. The 5G network architecture shown in fig. 1 may include a terminal device, an access network device, and a core network device. The terminal device is connected to a Data Network (DN) through the access network and the core network. The core network equipment includes some or all of the following network elements: a Unified Data Management (UDM) network element, a Unified Data Repository (UDR) network element, a Network Exposure Function (NEF) network element (not shown), an Application Function (AF) network element, a Policy Control Function (PCF) network element, an Access and Mobility Management Function (AMF) network element, a Session Management Function (SMF) network element, a User Plane Function (UPF) network element, a Network Data Analytics Function (NWDAF) network element, and a Network Repository Function (NRF) network element (not shown).
The access network device may be a Radio Access Network (RAN) device, for example: a base station, an evolved NodeB (eNodeB), a Transmission Reception Point (TRP), a next generation NodeB (gNB) in a 5G mobile communication system, a next generation base station in a sixth generation (6G) mobile communication system, a base station in a future mobile communication system, or an access node in a wireless fidelity (WiFi) system, etc. It may also be a module or unit that performs part of the functions of a base station, for example, a Centralized Unit (CU) or a Distributed Unit (DU). The radio access network device may be a macro base station, a micro base station or an indoor station, a relay node or a donor node, and the like. The embodiments of the present application do not limit the specific technology and specific device form adopted by the radio access network device.
The terminal device may be a User Equipment (UE), a mobile station, a mobile terminal, or the like. Terminal devices can be widely applied in various scenarios, for example, device-to-device (D2D) communication, vehicle-to-everything (V2X) communication, machine-type communication (MTC), the Internet of things (IoT), virtual reality, augmented reality, industrial control, autonomous driving, telemedicine, smart grid, smart home, smart office, smart wearables, intelligent transportation, smart city, and the like. The terminal device may be a mobile phone, a tablet computer, a computer with a wireless transceiver function, a wearable device, a vehicle, an urban air vehicle (such as an unmanned aerial vehicle or a helicopter), a ship, a robot, a robotic arm, a smart home device, and the like.
The access network device and the terminal device may be fixed or mobile. They can be deployed on land, including indoors or outdoors, handheld or vehicle-mounted; they can also be deployed on the water surface; and they can be deployed on aircraft, balloons, and satellites in the air. The embodiments of the present application do not limit the application scenarios of the access network device and the terminal device.
The AMF network element comprises functions of executing mobility management, access authentication/authorization and the like. In addition, it is also responsible for transferring user policy between the terminal equipment and the PCF.
The SMF network element includes functions such as session management, enforcement of control policies issued by the PCF, UPF selection, and Internet Protocol (IP) address allocation for the terminal device.
The UPF network element is used as an interface with a data network and comprises functions of completing user plane data forwarding, accounting statistics based on session/stream level, bandwidth limitation and the like.
And the UDM network element comprises functions of executing and managing subscription data, user access authorization and the like.
The UDR network element includes functions for accessing types of data such as subscription data, policy data, and application data.
And the NEF network element is used for supporting the opening of the capability and the event.
And the AF network element is used for transmitting the requirements of the application side on the network side, such as QoS requirements or user state event subscription and the like. The AF may be a third party functional entity or an application server deployed by an operator.
The PCF network element includes policy control functions for the session and service flow levels, such as charging, QoS bandwidth guarantee, mobility management, and terminal device policy decision.
The NRF network element can be used for providing a network element discovery function and providing network element information corresponding to the network element type based on the request of other network elements. NRF also provides network element management services such as network element registration, update, de-registration, and network element status subscription and push.
The NWDAF network element is mainly used for collecting data (including one or more of terminal device data, access network device data, core network element data, and third-party application data), providing data analytics services, and outputting data analysis results for use in policy decisions by the network, network management, and applications. The NWDAF may use a machine learning model for data analysis. The 3rd Generation Partnership Project (3GPP) Release 17 splits the training and inference functions of the NWDAF; an NWDAF may support only the model training function, only the data inference function, or both. An NWDAF supporting the model training function may be referred to as a training NWDAF, or an NWDAF supporting the Model Training Logical Function (MTLF) (abbreviated as NWDAF (MTLF)). The training NWDAF may perform model training according to the obtained data to obtain a trained model. An NWDAF that supports the data inference function may be referred to as an inference NWDAF, or an NWDAF supporting the Analytics Logical Function (AnLF) (abbreviated as NWDAF (AnLF)). The inference NWDAF may input data into the trained model to obtain analysis results or inference data. In the embodiments of the present application, a training NWDAF refers to an NWDAF that supports at least the model training function. As a possible implementation, the training NWDAF may also support the data inference function. An inference NWDAF refers to an NWDAF that supports at least the data inference function. As a possible implementation, the inference NWDAF may also support the model training function. If one NWDAF supports both the model training function and the data inference function, that NWDAF may be referred to as a training NWDAF, an inference NWDAF, a training and inference NWDAF, or simply an NWDAF. In the embodiments of the present application, an NWDAF may be a separate network element, or may be co-located with other network elements, for example, the NWDAF is set in a PCF network element or an AMF network element.
The DN is a network outside the operator network. The operator network can access multiple DNs, and services can be deployed on a DN to provide data and/or voice services for terminal devices. For example, the DN may be the private network of a smart factory: a sensor installed in a workshop of the smart factory can be a terminal device, and a control server for the sensor is deployed in the DN and can provide services for the sensor. The sensor can communicate with the control server, obtain instructions from the control server, and transmit the collected sensor data to the control server according to those instructions. For another example, the DN may be the internal office network of a company: the mobile phone or computer of an employee of the company can be a terminal device, and the employee's mobile phone or computer can access information, data resources, and the like on the company's internal office network.
In fig. 1, npcf, nudr, nudm, naf, namf, nsmf, and NWDAF are service interfaces respectively provided by the PCF, UDR, UDM, AF, AMF, SMF, and NWDAF, and are used to invoke corresponding service operations. N1, N2, N3, N4, and N6 are interface serial numbers, and the meaning of these interface serial numbers can be referred to the description in fig. 2.
Fig. 2 is a schematic diagram of a 5G network architecture based on point-to-point interfaces. For the functions of the network elements, reference may be made to the descriptions of the corresponding network elements in fig. 1, and details are not repeated here. The main difference between fig. 2 and fig. 1 is that the interfaces between the control plane network elements in fig. 1 are service-based interfaces, whereas the interfaces between the control plane network elements in fig. 2 are point-to-point interfaces.
In the architecture shown in fig. 2, the interface names and functions between the network elements are as follows:
1) N1: the interface between the AMF and the terminal device, which may be used to deliver NAS signaling (for example, QoS rules from the AMF) to the terminal device.
2) N2: the interface between the AMF and the RAN, which may be used to transfer radio bearer control information and the like from the core network side to the RAN.
3) N3: the interface between the RAN and the UPF, mainly used for transmitting uplink and downlink user plane data between the RAN and the UPF.
4) N4: the interface between the SMF and the UPF, which may be used to transfer information between the control plane and the user plane, including delivery of forwarding rules, QoS control rules, and traffic statistics rules for the user plane, as well as reporting of user plane information.
5) N5: the interface between the AF and the PCF, which may be used for issuing application service requests and reporting network events.
6) N6: the interface between the UPF and the DN, used for transmitting uplink and downlink user data flows between the UPF and the DN.
7) N7: the interface between the PCF and the SMF, which may be used to deliver control policies at Protocol Data Unit (PDU) session granularity and service data flow granularity.
8) N8: the interface between the AMF and the UDM, which may be used by the AMF to obtain subscription data and authentication data related to access and mobility management from the UDM, and by the AMF to register the current mobility-management-related information of the terminal device with the UDM.
9) N9: the interface between UPFs, used for transmitting uplink and downlink user data flows between the UPFs.
10) N10: the interface between the SMF and the UDM, which may be used by the SMF to obtain subscription data related to session management from the UDM, and by the SMF to register the current session-related information of the terminal device with the UDM.
11) N11: the interface between the SMF and the AMF, which may be used to transfer PDU session tunnel information between the RAN and the UPF, to transfer control messages sent to the terminal device, to transfer radio resource control information sent to the RAN, and so on.
12) N15: the interface between the PCF and the AMF, which may be used to deliver terminal device policies and access-control-related policies.
13) N23: the interface between the PCF and the NWDAF, through which the NWDAF may collect data from the PCF. It should be noted that the NWDAF may also have interfaces with other devices (e.g., the AMF, the UPF, access network devices, and terminal devices), which are not fully shown in the figure.
14) N35: the interface between the UDM and the UDR, which may be used by the UDM to obtain user subscription data from the UDR.
15) N36: the interface between the PCF and the UDR, which may be used by the PCF to obtain policy-related subscription data and application-data-related information from the UDR.
It is to be understood that the above network elements or functions may be network elements in a hardware device, or may be software functions running on dedicated hardware, or virtualization functions instantiated on a platform (e.g., a cloud platform). As a possible implementation method, the network element or the function may be implemented by one device, or may be implemented by multiple devices together, or may be a functional module in one device, which is not specifically limited in this embodiment of the present application.
As an implementation method, the data management network element in this embodiment may be the above NRF, UDM, or UDR, or may be a network element having the functions of the above NRF, UDM, or UDR in future communications such as a 6G network. The inference network element may be the above inference NWDAF, or a network element having the functions of the above inference NWDAF in future communications such as a 6G network. The training network element may be the above training NWDAF, or a network element having the functions of the above training NWDAF in future communications such as a 6G network.
As an implementation method, the data management network element in the embodiments of the present application may be a network-management-side management device, a network-management-side management network element, or a network-management-side management service. The inference network element may be an access-network-device-side inference device. The training network element may be a network-management-side training device, a network-management-side training network element, or a network-management-side training service.
As an implementation method, the data management network element in the embodiment of the present application may be an access network device-side model management device. The inference network element may be an access network device side inference device. The training network element may be an access network device side training device.
In order to realize cross-vendor sharing of models, an embodiment of the present application provides a communication method. In the method, the manufacturer type of the training network element is different from the manufacturer type of the inference network element, and the type of the model deployment platform of the training network element is the same as the type of the model deployment platform of the inference network element. The model deployment platform is the framework on which the model runs; different model deployment platforms may differ in characteristics such as dynamic computation graphs, static computation graphs, debugging modes, visualization, or parallelism, and the type of the model deployment platform is used to distinguish different model deployment platforms. Illustratively, the vendor type may be represented by a Vendor ID, such as Vendor ID = 1 for vendor A and Vendor ID = 2 for vendor B. Illustratively, the type of the model deployment platform may be represented by an AI Platform ID (or Platform ID), such as AI Platform ID = 1 for model deployment platform A and AI Platform ID = 2 for model deployment platform B. AI is the abbreviation of artificial intelligence. The manufacturer type and the model deployment platform type are explained here in a unified manner and are not described in detail later.
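For illustration only, the two identifiers discussed above could be represented as simple numeric fields carried alongside the analysis type; the structure and field names below are a hypothetical sketch, not a normative definition of this embodiment.

```python
from dataclasses import dataclass

# Illustrative mapping only, following the example above (Vendor ID = 1 for vendor A, etc.).
VENDOR_A, VENDOR_B = 1, 2            # Vendor ID values
AI_PLATFORM_A, AI_PLATFORM_B = 1, 2  # AI Platform ID values

@dataclass
class ModelEndpointProfile:
    """Hypothetical profile of a network element taking part in model sharing."""
    vendor_id: int         # manufacturer type of the network element
    ai_platform_id: int    # type of the model deployment platform
    platform_version: str  # e.g. "V1.0" or "V2.1"
```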
Referring to fig. 3, the method includes the steps of:
step 301, the inference network element sends a request message to the training network element. Accordingly, the training network element receives the request message.
The request message includes identification information (analysis ID) of the analysis type, and the request message is used for requesting a model supporting the analysis type indicated by the identification information of the analysis type. The identification information of the analysis type is used to indicate the analysis type, and may be, for example, service experience or network element load information (NF load information).
As a possible implementation method, the request message further includes a vendor type and a type of the model deployment platform. The vendor type and the model deployment platform type in the request message refer to the vendor type and the model deployment platform type of the inference network element. The vendor type may be, for example, Huawei, Ericsson, or Nokia. The type of the model deployment platform may be, for example, MindSpore, TensorFlow, or PyTorch.
As a possible implementation method, the request message may further include a version of a model deployment platform of the inference network element. The version of the model deployment platform may be, for example, V1.0 or V2.1, etc.
As a possible implementation method, the request message may further include an association identifier.
As an implementation method, the inference network element may request model information from the training network element by invoking an Nnwdaf_MLModelProvision_Subscribe service operation. That is, the request message in this step 301 may be an Nnwdaf_MLModelProvision_Subscribe service operation.
In the embodiment of the present application, the request message in step 301 is also referred to as a first request message.
Step 302, the training network element determines that the manufacturer types of the training network element and the inference network element are different and the types of the model deployment platforms are the same.
As an implementation method, the request message in step 301 carries the manufacturer type of the inference network element and the type of the model deployment platform of the inference network element, and the training network element determines whether the manufacturer type of the training network element is the same as the manufacturer type of the inference network element, and whether the type of the model deployment platform of the training network element is the same as the type of the model deployment platform of the inference network element. If the manufacturer types of the training network element and the inference network element are different and the types of the model deployment platforms of the training network element and the inference network element are the same, the following step 303 and subsequent steps are executed; otherwise, the process ends.
As another implementation method, the inference network element may know the vendor type and the model deployment platform type of each training network element (for example, this information is configured on the inference network element). In this case, the request message in step 301 may not carry the vendor type and the model deployment platform type of the inference network element, but instead carries indication information indicating whether the vendor type of the training network element is the same as the vendor type of the inference network element and whether the model deployment platform type of the training network element is the same as the model deployment platform type of the inference network element. The training network element may then determine, according to the indication information, whether its vendor type is the same as the vendor type of the inference network element and whether its model deployment platform type is the same as the model deployment platform type of the inference network element. If the manufacturer types of the training network element and the inference network element are different and the types of the model deployment platforms of the training network element and the inference network element are the same, the following step 303 and subsequent steps are executed; otherwise, the process ends.
As another implementation method, the manufacturer type and the model deployment platform type of each inference network element may be configured in advance on the training network element, and then the request message in step 301 may not need to carry the manufacturer type and the model deployment platform type of the inference network element, nor the indication information, and the training network element may determine, according to the local configuration information, whether the manufacturer type of the training network element is the same as the manufacturer type of the inference network element, and whether the model deployment platform type of the training network element is the same as the model deployment platform type of the inference network element. If the manufacturer types of the training network element and the inference network element are different and the types of the model deployment platforms of the training network element and the inference network element are the same, executing the following step 303 and subsequent steps, otherwise, ending the process.
It should be noted that the ending of the flow mentioned above means that the flow of the embodiment of fig. 3 ends; other operations may still be executed after the flow ends. For example, if the manufacturer types of the training network element and the inference network element are the same and the types of their model deployment platforms are the same, the training network element may provide the inference network element with an unencrypted model or address information of an unencrypted model, and the inference network element then obtains an unencrypted analysis result according to the unencrypted model. For another example, if the vendor types of the training network element and the inference network element are different and the types of their model deployment platforms are also different, the scheme of the embodiment of fig. 4 below may be adopted, so that the inference network element may obtain the analysis result. For another example, if the manufacturer types of the training network element and the inference network element are the same but the types of their model deployment platforms are different, the inference network element may provide data to be analyzed to a third-party network element (e.g., a second network element), the second network element obtains an encrypted analysis result by using the encrypted model and the data to be analyzed, then the first network element or the training network element decrypts the encrypted analysis result to obtain a decrypted analysis result, and the decrypted analysis result is sent to the inference network element.
Of course, the functions of the training network elements may also be preconfigured; for example, training network elements 1 to 10 may be preconfigured to provide models only for inference network elements whose vendor type is different from their own and whose model deployment platform type is the same. Taking training network element 1 as an example, if training network element 1 receives the request message of step 301 from the inference network element, training network element 1 assumes by default that its manufacturer type is different from the manufacturer type of the inference network element and that its model deployment platform type is the same as the model deployment platform type of the inference network element. Under this implementation, step 302 need not be performed.
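The following is a minimal sketch of the comparison performed in step 302, assuming the first implementation above in which the request message carries the inference network element's vendor type and model deployment platform type; the function and field names are illustrative and not part of the signalling.

```python
def should_provide_encrypted_model(training_profile, request):
    """Return True for the case handled by Fig. 3: the vendor types differ
    while the model deployment platform types are the same."""
    same_vendor = training_profile.vendor_id == request["vendor_id"]
    same_platform = training_profile.ai_platform_id == request["ai_platform_id"]
    return (not same_vendor) and same_platform
```

If the check returns False, the flow of fig. 3 ends and one of the alternative handlings described above applies.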
Step 303, the training network element sends a response message to the inference network element. Accordingly, the inference network element receives the response message.
The response message contains the encrypted model or address information of the encrypted model, where the address information of the encrypted model may be, for example, a Uniform Resource Locator (URL) or a Fully Qualified Domain Name (FQDN).
Taking the example where the model is a neural network model, the model includes model architecture information and model parameters. The model architecture information includes information such as the number of layers of the neural network in the model, connection relations among the layers, and activation functions used by each layer. The model parameters include parameter values for each layer of the neural network.
As one implementation, the encrypted model in the response message includes unencrypted model architecture information and encrypted model parameters. As another implementation, the encrypted model in the response message includes encrypted model architecture information and encrypted model parameters.
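As an illustrative sketch of the first implementation above (architecture information in the clear, parameters encrypted), the content of the first response message might be organized as follows; all field names are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EncryptedModelPayload:
    """Hypothetical content of the first response message of step 303."""
    analytics_id: str                 # identification information of the analysis type
    architecture_info: bytes          # model architecture information (clear-text in this variant)
    architecture_encrypted: bool      # True for the second implementation, False for the first
    encrypted_parameters: bytes       # per-layer parameter values, always encrypted
    model_url: Optional[str] = None   # alternatively, only the URL/FQDN of the encrypted model is carried
```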
As a possible implementation method, the response message further includes address information of a first network element or indication information (in this embodiment, the indication information may also be referred to as first indication information) for indicating that the training network element decrypts the encrypted analysis result, where the first network element is a third-party network element with an analysis result decryption function, such as an NWDAF network element, and the first network element may also be referred to as an analysis result decryption network element.
As a possible implementation method, the response message further includes indication information for indicating a data type of the input data corresponding to the encrypted model (in this embodiment, the indication information may also be referred to as second indication information). For example, the indication information may be an event identification (event ID). Illustratively, the data type may be one or more of a UE location or a QoS Flow parameter.
As a possible implementation method, the response message further includes a data format and/or processing parameters corresponding to each data type, and the inference network element performs corresponding preprocessing on the input data of that data type according to the data format and/or processing parameters, so as to obtain the data to be analyzed. Illustratively, the data format includes one or more of a time window for data reporting (i.e., when the data is reported) and a size of the data buffer (i.e., how much data is buffered before reporting), and the processing parameters include one or more of a maximum value, a minimum value, an average value, or a variance value. The explanation of the data type, data format, and processing parameters here also applies to the other embodiments that follow and is not repeated later.
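A minimal sketch of the preprocessing described here, assuming the processing parameter reduces the samples buffered within one reporting time window; the helper below is purely illustrative and not part of the protocol.

```python
def preprocess(samples, processing="average"):
    """Reduce the samples buffered within one reporting time window to a single
    value according to the indicated processing parameter."""
    if processing == "average":
        return sum(samples) / len(samples)
    if processing == "maximum":
        return max(samples)
    if processing == "minimum":
        return min(samples)
    raise ValueError(f"unsupported processing parameter: {processing}")

# Example: average load samples collected within one reporting time window.
data_to_analyze = preprocess([0.2, 0.4, 0.3], processing="average")
```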
As a possible implementation method, the response message may further include the association identifier.
As an implementation method, the training network element may send the above information to the inference network element by calling an Nnwdaf_MLModelProvision_Notify service operation. That is, the response message in step 303 may be an Nnwdaf_MLModelProvision_Notify service operation.
In this embodiment, the response message in step 303 is also referred to as a first response message.
And step 304, the reasoning network element obtains an encrypted analysis result according to the encrypted model.
If the response message in step 303 carries the address information of the encrypted model, the inference network element further needs to obtain the encrypted model according to the address information of the encrypted model. For example, the inference network element may download the encrypted model from an address indicated by the address information of the encrypted model according to a File Transfer Protocol (FTP).
And the reasoning network element obtains an encrypted analysis result according to the data to be analyzed and the encrypted model, namely, the data to be analyzed is input into the encrypted model to obtain the encrypted analysis result. The data to be analyzed is input data corresponding to the encrypted model, and the data to be analyzed is collected by the inference network element from other network elements (such as one or more of UE, SMF, AMF, access network equipment, PCF, UPF, or AF).
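A sketch of step 304, assuming the response carried only the address information of the encrypted model and that the deserialized model exposes a predict-style interface; the download helper, the loader, and the model interface are assumptions made for illustration.

```python
import urllib.request

def run_encrypted_inference(model_url, data_to_analyze, load_model):
    """Fetch the encrypted model from the indicated address and feed the collected
    input data into it; the output remains an encrypted analysis result."""
    with urllib.request.urlopen(model_url) as resp:  # FTP/HTTP retrieval per the address information
        model_bytes = resp.read()
    encrypted_model = load_model(model_bytes)        # platform-specific deserialization (assumed)
    return encrypted_model.predict(data_to_analyze)  # encrypted analysis result
```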
After step 304, the inference network element obtains the decrypted analysis result according to the encrypted analysis result, and two different implementation methods for the inference network element to obtain the decrypted analysis result are described below.
As a first implementation method, if the response message in step 303 carries indication information (i.e. first indication information) for indicating that the training network element decrypts the encrypted analysis result, the following steps 305 to 307 are performed after step 304.
As a second implementation method, if the response message in step 303 carries the address information of the first network element, step 308 to step 310 are executed after step 304.
Step 305, the inference network element sends a request message to the training network element. Accordingly, the training network element receives the request message.
The request message includes the identification information of the analysis type and the encrypted analysis result, and the request message is used to request the decrypted analysis result, and the identification information of the analysis type is the same as the identification information of the analysis type in step 301.
As a possible implementation method, the request message may further include the association identifier.
As an implementation method, the inference network element may send the identification information of the analysis type and the encrypted analysis result to the training network element by calling an Nnwdaf_AnalyticsDecryption_Request service operation. That is, the request message in this step 305 may be an Nnwdaf_AnalyticsDecryption_Request service operation.
Step 306, the training network element decrypts the encrypted analysis result to obtain a decrypted analysis result.
The model may be encrypted by using one or more of a fully homomorphic encryption algorithm, a random security average algorithm, or a differential privacy algorithm, and the training network element decrypts the encrypted analysis result by using a decryption algorithm corresponding to the encryption algorithm used for the encrypted model, so as to obtain the decrypted analysis result.
If the request message of step 301 and the request message of step 305 both carry the association identifier, before or after step 303, the training network element binds the encryption algorithm used by the encrypted model with the association identifier, and further in step 306, the training network element may determine the encryption algorithm corresponding to the encrypted model according to the association identifier in the request message of step 305, and then determine the decryption algorithm according to the encryption algorithm, so as to decrypt the encrypted analysis result according to the decryption algorithm to obtain the decrypted analysis result.
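A sketch of the binding and lookup described here; the registry and the decrypt call are illustrative placeholders, with the actual decryption routine matching whichever encryption scheme was applied to the model in step 303.

```python
class DecryptionRegistry:
    """Hypothetical registry kept by the training network element (or the first network element)."""

    def __init__(self):
        self._by_association_id = {}

    def bind(self, association_id, scheme, secret_key):
        # Performed before or after step 303, when the encrypted model is issued.
        self._by_association_id[association_id] = (scheme, secret_key)

    def decrypt(self, association_id, encrypted_analysis_result):
        # Step 306: look up the scheme bound to this association identifier and decrypt.
        scheme, key = self._by_association_id[association_id]
        return scheme.decrypt(encrypted_analysis_result, key)  # decrypted analysis result
```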
Step 307, the training network element sends a response message to the reasoning network element. Accordingly, the inference network element receives the response message.
The response message contains the decrypted analysis result.
As an implementation method, the training network element may send the decrypted analysis result to the inference network element by calling an Nnwdaf_AnalyticsDecryption_Request Response service operation. That is, the response message in this step 307 may be an Nnwdaf_AnalyticsDecryption_Request Response service operation.
Step 308, the inference network element sends a request message to the first network element. Accordingly, the first network element receives the request message.
The request message includes the identification information of the analysis type and the encrypted analysis result, and the request message is used to request the decrypted analysis result, and the identification information of the analysis type is the same as the identification information of the analysis type in step 301.
As a possible implementation method, the request message may further include the association identifier.
As an implementation method, the inference network element may send the identification information of the analysis type and the encrypted analysis result to the first network element by calling an Nnf_AnalyticsDecryption_Request service operation. That is, the request message in this step 308 may be an Nnf_AnalyticsDecryption_Request service operation.
Step 309, the first network element decrypts the encrypted analysis result to obtain a decrypted analysis result.
The first network element decrypts the encrypted analysis result by using a decryption algorithm corresponding to the encryption algorithm used for the encrypted model, so as to obtain the decrypted analysis result.
As a possible implementation method, if the response message in the step 303 includes address information of the first network element, before or after the step 303, the training network element further sends the association identifier and a decryption algorithm corresponding to the encrypted model to the first network element, and the request message in the step 308 also carries the association identifier, so that in the step 309, the first network element may determine the decryption algorithm corresponding to the encrypted model according to the association identifier in the request message in the step 308, and decrypt the encrypted analysis result according to the decryption algorithm to obtain the decrypted analysis result.
In step 310, the first network element sends a response message to the inference network element. Accordingly, the inference network element receives the response message.
The response message contains the decrypted analysis result.
As an implementation method, the first network element may send the decrypted analysis result to the inference network element by calling an Nnf_AnalyticsDecryption_Response service operation. That is, the response message in this step 310 may be an Nnf_AnalyticsDecryption_Response service operation.
In this scheme, the inference network element and the training network element are deployed by different manufacturers, but the model deployment platforms used by them are of the same type. The scheme provides a procedure for encrypting and distributing models across manufacturers, enhances the capability of the training network element to encrypt and distribute models, avoids the risk that the manufacturer deploying the inference network element steals information such as the architecture and parameters of the model, ensures the security of the model information, and breaks the limitation in existing solutions that a model can only be shared within the same manufacturer.
As an implementation method, each training network element may further register its own model information with the data management network element, so that when the inference network element is not locally configured with address information of a training network element, it may request the data management network element to find a suitable training network element. For example, the training network element may send a registration request message to the data management network element, where the registration request message includes identification information of the analysis type that the training network element can provide and model information of the training network element, and the model information includes the vendor type of the training network element and the type of the model deployment platform of the training network element. As a possible implementation method, the model information further includes indication information for indicating the data type of the input data corresponding to the encrypted model. As a possible implementation method, the model information further includes a data format and/or processing parameters corresponding to each data type. As a possible implementation method, the model information further includes address information of the first network element or indication information for indicating that the training network element decrypts the encrypted analysis result, where the meaning of the first network element may refer to the foregoing description. As a possible implementation method, the model information further includes address information of the second network element. The second network element may be a trusted third-party network element, and in particular may be a model deployment and inference network element. The second network element can perform inference on the data to be analyzed according to the model to obtain an analysis result. If the used model is an encrypted model, the second network element may perform inference on the data to be analyzed according to the encrypted model to obtain an encrypted analysis result.
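For illustration, the model information registered with the data management network element could be grouped as below; the structure and field names are assumptions, not a defined message format.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class RegisteredModelInfo:
    """Hypothetical model information carried in the registration request message."""
    analytics_id: str                        # analysis type the training network element can provide
    vendor_id: int                           # manufacturer type of the training network element
    ai_platform_id: int                      # type of its model deployment platform
    input_event_ids: List[str] = field(default_factory=list)  # data types of the model input data
    first_ne_address: Optional[str] = None   # analysis result decryption network element, if used
    second_ne_address: Optional[str] = None  # trusted model deployment and inference network element
    decrypt_by_training_ne: bool = False     # set instead of first_ne_address when the training NE decrypts
```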
It should be noted that, when different training network elements register model information with a data management network element, a first network element in the model information of different training network elements may be the same network element or different network elements. Similarly, the second network elements in the model information of different training network elements may be the same network element or different network elements.
As an implementation method, before step 301, the inference network element may send a request message (also referred to as a second request message in this embodiment) to the data management network element, where the request message includes the identification information of the analysis type in step 301 and is used to request a network element supporting the analysis type; the data management network element then sends a response message (also referred to as a second response message in this embodiment) to the inference network element, where the response message includes the address information of the training network element described in step 301. If the data management network element determines that a plurality of training network elements support the analysis type, the data management network element may provide the address information and the model information of the plurality of training network elements to the inference network element, and the inference network element selects one training network element from them.
The embodiment of the application provides a communication method. In the method, the manufacturer type of the training network element is different from the manufacturer type of the reasoning network element, and the type of the model deployment platform of the training network element is different from the type of the model deployment platform of the reasoning network element.
Referring to fig. 4, the method includes the steps of:
step 401, the training network element encrypts the local existing model, and sends the encrypted model and the identification information of the analysis type corresponding to the encrypted model to the second network element.
The second network element supports a relatively rich set of model deployment platform types; in this embodiment of the present application, the types of the model deployment platforms supported by the second network element at least include the type of the model deployment platform of the training network element. The meaning of the second network element may refer to the preceding description.
As a possible implementation method, the training network element further sends address information of the first network element to the second network element, and the first network element has a function of decrypting the analysis result.
It can be understood that the model locally existing in the training network element may be a model obtained by training the training network element, or may be a model obtained by the training network element from other training network elements.
This step 401 is an optional step. When this step 401 is not performed, the second network element may be preconfigured with the above information, such as one or more of the encrypted model, the identification information of the analysis type corresponding to the encrypted model, and the address information of the first network element, by another network element or an operator.
Step 402, the inference network element sends a request message to the training network element. Accordingly, the training network element receives the request message.
This step 402 is the same as step 301 described above, and reference may be made to the foregoing description.
Step 403, the training network element sends the encrypted updated model and the identification information of the analysis type corresponding to the encrypted updated model to the second network element.
This step is an optional step. After receiving the request message from the inference network element, if the training network element determines that the local model needs further training, it triggers other network elements to perform data collection and the subsequent model training process, encrypts the updated model obtained by training, and then sends the encrypted updated model to the second network element.
Step 404, the training network element determines that the manufacturer types of the training network element and the inference network element are different and the model deployment platform types are different.
This step 404 is an optional step. The implementation of this step 404, as well as a number of different alternative implementations, is similar to that described above with reference to step 302, and reference may be made to the foregoing description.
Step 405, the training network element sends a response message to the reasoning network element. Accordingly, the inference network element receives the response message.
The response message includes address information of a second network element and indication information indicating that the request for a model supporting the analysis type is rejected (in this embodiment, the indication information is also referred to as first indication information).
As a possible implementation method, the response message further includes indication information for indicating a data type of the input data corresponding to the encrypted model (in this embodiment, the indication information may also be referred to as second indication information). For example, the indication information may be an event identification (event ID). As a possible implementation method, the response message further includes a data format and/or a processing parameter corresponding to each data type.
As a possible implementation method, the response message further includes a rejection reason value, where the rejection reason value indicates that the manufacturer type of the training network element is different from that of the inference network element, and the model deployment platform type of the inference network element is different from that of the training network element.
As a possible implementation method, if the request message in step 402 includes the association identifier, the response message includes the association identifier.
As an implementation method, the training network element may send the above information to the inference network element by calling an Nnwdaf_MLModelProvision_Notify service operation. That is, the response message in step 405 may be an Nnwdaf_MLModelProvision_Notify service operation.
Step 406, the inference network element sends a request message to the second network element according to the address information of the second network element. Accordingly, the second network element receives the request message.
The request message includes the data to be analyzed and the identification information of the analysis type, and the request message is used for requesting the analysis of the data to be analyzed. The identification information of the analysis type is the same as that of the analysis type of the above-described step 402.
The data to be analyzed is input data corresponding to the encrypted model, and the data to be analyzed is collected by the inference network element from other network elements (such as one or more of UE, SMF, AMF, access network equipment, PCF, UPF, or AF).
As a possible implementation method, the request message may further include the association identifier.
As an implementation method, the inference network element may send the above information to the second network element by calling an Nnf_AnalyticsInfo_Request service operation. That is, the request message in this step 406 may be an Nnf_AnalyticsInfo_Request service operation.
And step 407, the second network element obtains an encrypted analysis result according to the encrypted model.
Specifically, the second network element calculates an encrypted analysis result using the locally deployed encrypted model and the data to be analyzed received from the inference network element. The encrypted model locally deployed on the second network element comes from the training network element, from other network elements, or from operator configuration.
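A sketch of step 407, assuming the second network element keeps the encrypted models received in step 401 (or preconfigured) indexed by analysis type, and that each model exposes a predict-style interface; both assumptions are illustrative.

```python
def handle_analytics_request(deployed_models, analytics_id, data_to_analyze):
    """Second network element: run the locally deployed encrypted model for the requested
    analysis type over the data received from the inference network element."""
    encrypted_model = deployed_models[analytics_id]  # deployed in step 401 or preconfigured
    return encrypted_model.predict(data_to_analyze)  # encrypted analysis result
```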
After the second network element has obtained the encrypted analysis result, the following steps 408 to 410, or the following steps 411 to 413, may be performed.
Step 408, the second network element sends a request message to the training network element. Accordingly, the training network element receives the request message.
The request message includes identification information of an analysis type, an encrypted analysis result, and address information of the inference network element, and is used to request the decrypted analysis result and send the decrypted analysis result to the inference network element, where the identification information of the analysis type is the same as the identification information of the analysis type in step 402.
As a possible implementation method, the request message may further include the association identifier.
As an implementation method, the second network element may send the identification information of the analysis type, the encrypted analysis result, and the address information of the inference network element to the training network element by calling an Nnwdaf_AnalyticsDecryption_Request service operation. That is, the request message in this step 408 may be an Nnwdaf_AnalyticsDecryption_Request service operation.
And step 409, the training network element decrypts the encrypted analysis result to obtain a decrypted analysis result.
This step 409 is the same as step 306 described above, and reference may be made to the description above.
Step 410, the training network element sends the decrypted analysis result to the inference network element. Correspondingly, the reasoning network element receives the decrypted analysis result.
As an implementation method, the training network element may send the decrypted analysis result to the inference network element by calling an Nnwdaf_AnalyticsDecryption_Request Response service operation.
In step 411, the second network element sends a request message to the first network element. Accordingly, the first network element receives the request message.
The request message includes identification information of an analysis type, an encrypted analysis result, and address information of the inference network element, and is used to request the decrypted analysis result and send the decrypted analysis result to the inference network element, where the identification information of the analysis type is the same as the identification information of the analysis type in step 402.
The second network element may obtain the address information of the first network element through step 401.
As a possible implementation method, the request message may further include the association identifier.
In step 412, the first network element decrypts the encrypted analysis result to obtain a decrypted analysis result.
Step 412 is similar to step 309 described above, and reference is made to the above description.
In step 413 the first network element sends the decrypted analysis result to the reasoning network element. Correspondingly, the reasoning network element receives the decrypted analysis result.
As an implementation method, the first network element may send the decrypted analysis result to the inference network element by calling an Nnwdaf_AnalyticsDecryption_Request Response service operation.
In this scheme, the inference network element and the training network element are deployed by different manufacturers, and the model deployment platforms used by them are of different types. The scheme provides a procedure for encrypting and distributing models across manufacturers, enhances the capability of the training network element to encrypt and distribute models, avoids the risk that the manufacturer deploying the inference network element steals information such as the architecture and parameters of the model, ensures the security of the model information, and breaks the limitation in existing solutions that a model can only be shared within the same manufacturer.
Referring to fig. 5, a communication method according to an embodiment of the present application is provided. The method comprises the following steps:
step 501, the training network element sends a registration request message to the data management network element. Accordingly, the data management network element receives the registration request message.
The registration request message includes identification information of an analysis type and model information, where the model information includes a vendor type, a type of a model deployment platform, address information of a second network element, and also includes address information of a first network element or indication information for indicating that the training network element decrypts an encrypted analysis result.
As a possible implementation method, the registration request message may further include a version of the model deployment platform.
As a possible implementation method, the registration request message further includes indication information indicating a data type of the input data corresponding to the encrypted model. For example, the indication information may be an event identification (event ID). As a possible implementation method, the registration request message further includes a data format and/or a processing parameter corresponding to each data type.
For the meaning of the identification information about the analysis type, the vendor type, the type of the model deployment platform, the version of the model deployment platform, the first network element, and the second network element, reference may be made to the foregoing description, which is not described again.
As an implementation method, the training network element may request registration from the data management network element by invoking an Nnrf_NFManagement_NFRegister Request service operation. That is, the registration request message in this step 501 may be an Nnrf_NFManagement_NFRegister Request service operation.
Step 502, the data management network element sends a registration response message to the training network element. Accordingly, the training network element receives the registration response message.
As an implementation method, the data management network element may return a response to the registration request message to the training network element by calling an Nnrf_NFManagement_NFRegister Response service operation. That is, the registration response message in this step 502 may be an Nnrf_NFManagement_NFRegister Response service operation.
Step 503, the training network element sends an update request message to the data management network element. Accordingly, the data management network element receives the update request message.
If the model information of the training network element is updated, for example, the version of the model deployment platform is updated, the training network element may send an update request message to the data management network element, so as to re-register the updated model information to the data management network element.
The information carried in the update request message is similar to the information carried in the registration request message in step 501, and reference may be made to the foregoing description.
As an implementation method, the training network element may request a registration update from the data management network element by calling an Nnrf_NFManagement_NFUpdate Request service operation.
In step 504, the data management network element sends an update response message to the training network element. Accordingly, the training network element receives the update response message.
As an implementation method, the data management network element may return a response to the update request message to the training network element by calling an Nnrf_NFManagement_NFUpdate Response service operation.
The steps 503 to 504 are optional steps.
Step 505, the inference network element sends a request message to the data management network element. Accordingly, the data management network element receives the request message.
The request message includes identification information of the analysis type. As a possible implementation method, the request message further includes a vendor type of the inference network element and a type of the model deployment platform.
The request message is used to request to acquire a network element supporting the analysis type, and specifically, is used to request to acquire a training network element or a third-party network element supporting the analysis type.
As an implementation method, the inference network element may request the data management network element to discover an available training network element or third-party network element by invoking an Nnrf_NFDiscovery_Request service operation. That is, the request message in step 505 may be an Nnrf_NFDiscovery_Request service operation.
Step 506, the data management network element sends a response message to the inference network element. Accordingly, the inference network element receives the response message.
The response message includes at least one set of information, each set of information includes address information of at least one candidate training network element and model information of the candidate training network element, the model information corresponds to the identification information of the analysis type in the request message of step 505, and the content included in the model information may refer to the description of step 501.
It should be noted that the address information of the first network element in the model information of different candidate training network elements may be the same or different, and the address information of the second network element in the model information of different candidate training network elements may be the same or different.
As an implementation method, the data management network element may respond to the network element discovery request of the inference network element by calling an Nnrf_NFDiscovery_Request Response service operation. That is, the response message in this step 506 may be an Nnrf_NFDiscovery_Request Response service operation.
Step 507, the inference network element selects the training network element or the second network element.
If the response message of the above step 506 contains multiple sets of information, the inference network element selects the training network element or the second network element according to the following sequence.
If, among the at least one candidate training network element corresponding to the multiple sets of information, there are one or more candidate training network elements whose vendor type and model deployment platform type are both the same as those of the inference network element, the inference network element selects one of them as the training network element, for example, randomly or according to a predetermined rule.
If there is no candidate training network element whose vendor type and model deployment platform type are both the same as those of the inference network element, but there are one or more candidate training network elements whose vendor type is different from that of the inference network element and whose model deployment platform type is the same, the inference network element selects one of them as the training network element, for example, randomly or according to a predetermined rule.
If there is neither a candidate training network element whose vendor type and model deployment platform type are both the same as those of the inference network element, nor a candidate training network element whose vendor type is different from that of the inference network element and whose model deployment platform type is the same, the inference network element selects a second network element according to the model information of the at least one candidate training network element corresponding to the multiple sets of information. For example, if the addresses of the second network elements in the model information of the at least one candidate training network element are all the same, the address of that second network element is selected. For another example, if the addresses of the second network elements are not all the same, one of them may be selected randomly or according to a predetermined rule.
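A sketch of the selection order of step 507, assuming each candidate entry exposes the identifiers registered in step 501; random choice stands in for the "randomly or according to a predetermined rule" selection mentioned above.

```python
import random

def select_target(candidates, my_vendor_id, my_platform_id):
    """Return ('training', candidate) or ('second', address) following the priority of step 507."""
    same_vendor_same_platform = [c for c in candidates
                                 if c.vendor_id == my_vendor_id and c.ai_platform_id == my_platform_id]
    if same_vendor_same_platform:
        return "training", random.choice(same_vendor_same_platform)

    diff_vendor_same_platform = [c for c in candidates
                                 if c.vendor_id != my_vendor_id and c.ai_platform_id == my_platform_id]
    if diff_vendor_same_platform:
        return "training", random.choice(diff_vendor_same_platform)

    # No platform match at all: fall back to a second network element address from the model information.
    second_addresses = [c.second_ne_address for c in candidates if c.second_ne_address]
    return "second", random.choice(second_addresses)
```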
If the inference network element selects a training network element, then step 507 may be followed by performing steps 301 to 307, or performing steps 301 to 304 and steps 308 to 310.
If the inference network element selects a second network element, step 507 may be followed by steps 406 to 410, or steps 406 to 407 and 411 to 413.
In the above scheme, the function of the data management network element is enhanced: the training network element first registers/updates the identification information of the supported analysis types and the corresponding model information with the data management network element, and the inference network element then discovers an available training network element or third-party network element from the data management network element. The inference network element and the training network element are deployed by different manufacturers, and the types of the model deployment platforms used by them may be the same or different.
It should be understood that the data management network element in the embodiment of the present invention is only an example, and as one possible implementation method, the functions performed by the data management network element in the embodiment of the present invention may be performed by other network elements (e.g., a model management network element).
It is to be understood that, in order to implement the functions in the foregoing embodiments, the inference network element, the training network element, the first network element and the second network element include corresponding hardware structures and/or software modules for performing the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and method steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is performed in hardware or computer software driven hardware depends on the specific application scenario and design constraints of the solution.
Fig. 6 and fig. 7 are schematic structural diagrams of a possible communication device provided in an embodiment of the present application. These communication apparatuses can be used to implement the functions of the inference network element, the training network element, the first network element, or the second network element in the above method embodiments, so that the beneficial effects of the above method embodiments can also be achieved. In an embodiment of the present application, the communication device may be an inference network element, a training network element, a first network element, or a second network element, or may be a module (e.g., a chip) applied to the inference network element, the training network element, the first network element, or the second network element.
As shown in fig. 6, the communication device 600 includes a processing unit 610 and a transceiving unit 620. The communication device 600 is configured to implement the functions of the inference network element, the training network element, the first network element, or the second network element in the foregoing method embodiments.
In the first embodiment, when the communication device is an inference network element or a module (e.g., a chip) for an inference network element, the transceiver unit 620 is configured to send a first request message to a training network element, where the first request message includes identification information of an analysis type, the first request message is used to request a model supporting the analysis type, the training network element is of a different vendor type from the inference network element, and the model deployment platforms of the inference network element and the training network element are of the same type; receiving a first response message from the training network element, the first response message including an encrypted model or address information of the encrypted model, the encrypted model supporting the analysis type; a processing unit 610, configured to obtain an encrypted analysis result according to the encrypted model; and acquiring a decrypted analysis result according to the encrypted analysis result.
In a possible implementation method, the transceiver 620 is configured to send the encrypted analysis result to the training network element; receiving the decrypted analysis result from the training network element.
In a possible implementation method, the first response message further includes first indication information, where the first indication information indicates that the encrypted analysis result is decrypted by the training network element.
In a possible implementation method, the transceiver 620 is configured to send the encrypted analysis result and an associated identifier to the training network element, where the associated identifier is used by the training network element to determine an encryption algorithm corresponding to the encrypted model.
In a possible implementation method, the first response message further includes address information of the first network element; a processing unit 610, configured to send the encrypted analysis result to the first network element through a transceiving unit 620 according to the address information of the first network element; receiving the decrypted analysis result from the first network element.
In a possible implementation method, the processing unit 610 is configured to send, according to the address information of the first network element, the encrypted analysis result and an association identifier, which is used by the first network element to determine an encryption algorithm corresponding to the encrypted model, to the first network element through the transceiver unit 620.
In a possible implementation method, the first response message further includes second indication information, where the second indication information is used to indicate a data type of the input data corresponding to the encrypted model.
In a possible implementation method, the first request message further includes a vendor type of the inference network element and a model deployment platform type of the inference network element.
In a possible implementation method, the transceiver 620 is configured to send a second request message to the data management network element before sending the first request message to the training network element, where the second request message includes the identification information of the analysis type, and the second request message is used to request a network element that supports the analysis type; receiving a second response message from the data management network element, the second response message including address information of the training network element.
In the second embodiment, when the communication device is a training network element or a module (e.g., a chip) for a training network element, the transceiver unit 620 is configured to receive a first request message from an inference network element, where the first request message includes identification information of an analysis type, the first request message is used to request a model supporting the analysis type, the manufacturer type of the training network element is different from that of the inference network element, and the model deployment platforms of the inference network element and the training network element are of the same type; sending a first response message to the inference network element, the first response message comprising an encrypted model or address information of the encrypted model; receiving an encrypted analysis result from the inference network element, the encrypted analysis result being obtained according to the encrypted model; a processing unit 610, configured to decrypt the encrypted analysis result to obtain a decrypted analysis result; a transceiving unit 620, configured to send the decrypted analysis result to the inference network element.
In a possible implementation method, the first request message further includes a vendor type of the inference network element and a type of a model deployment platform of the inference network element; the processing unit 610 is configured to determine that the manufacturer types of the training network element and the inference network element are different, and the types of the model deployment platforms of the inference network element and the training network element are the same, before the transceiver unit 620 sends the first response message to the inference network element.
In a possible implementation method, the first response message further includes first indication information, where the first indication information indicates that the training network element decrypts the encrypted analysis result.
In a possible implementation method, the first response message further includes second indication information, where the second indication information is used to indicate a data type of the input data corresponding to the encrypted model.
In a possible implementation method, the transceiver 620 is configured to send a registration request message to the data management network element before receiving the first request message from the inference network element, where the registration request message includes identification information of the analysis type and model information of the training network element, and the model information includes a vendor type of the training network element and a type of a model deployment platform of the training network element.
In a possible implementation method, the transceiving unit 620 is configured to receive the encrypted analysis result and the associated identifier from the inference network element; the processing unit 610 is configured to determine, according to the association identifier, an encryption algorithm corresponding to the encrypted model; determining a decryption algorithm according to the encryption algorithm; and decrypting the encrypted analysis result according to the decryption algorithm to obtain the decrypted analysis result.
In the third embodiment, when the communication device is an inference network element or a module (e.g., a chip) for an inference network element, the transceiver unit 620 is configured to send a request message to a training network element, where the request message includes identification information of an analysis type, the request message is used to request a model supporting the analysis type, the manufacturer type of the training network element is different from that of the inference network element, and the model deployment platforms of the inference network element and the training network element are different from each other; receiving a response message from the training network element, wherein the response message comprises first indication information and address information of a second network element, the first indication information indicates that the request for a model supporting the analysis type is rejected, and the type of the model deployment platform supported by the second network element comprises the type of the model deployment platform of the training network element; a processing unit 610, configured to send, according to the address information of the second network element, to-be-analyzed data to the second network element through a transceiving unit 620, where the to-be-analyzed data is used for the second network element to generate an encrypted analysis result according to the encrypted model corresponding to the analysis type; a transceiving unit 620, configured to receive a decrypted analysis result from the training network element or the first network element, where the decrypted analysis result is obtained by the training network element or the first network element according to the encrypted analysis result.
In a possible implementation method, the response message further includes a rejection cause value, where the rejection cause value indicates that the manufacturer type of the training network element is different from that of the inference network element and the types of the model deployment platforms of the inference network element and the training network element are different.
In a possible implementation method, the request message further includes a vendor type of the inference network element and a type of a model deployment platform of the inference network element.
In a possible implementation method, the response message further includes second indication information, where the second indication information is used to indicate a data type of the input data corresponding to the encrypted model.
In a possible implementation method, the processing unit 610 is configured to send, through the transceiving unit 620 and according to the address information of the second network element, the data to be analyzed and an association identifier to the second network element, where the association identifier is used for the first network element or the training network element to determine an encryption algorithm corresponding to the encrypted model.
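The inference-side handling of the reject-and-redirect response in the third embodiment can be sketched as follows. The message field names (`rejected`, `second_ne_address`) and the `send` callback are hypothetical; the sketch only mirrors the control flow: on rejection, forward the data to be analyzed together with the association identifier to the second network element and wait for the decrypted analysis result.

```python
def handle_model_response(response: dict, data_to_analyze: bytes,
                          association_id: str, send):
    """Inference network element side: if the training network element rejects
    the model request and supplies a second network element's address, forward
    the data to be analyzed (plus the association identifier) to that second
    network element.  `send(address, payload)` is a hypothetical transport hook."""
    if response.get("rejected"):
        target = response["second_ne_address"]
        send(target, {"data": data_to_analyze, "association_id": association_id})
        return "awaiting-decrypted-result"
    # Otherwise the response carried the encrypted model (first-embodiment path).
    return "model-received"

if __name__ == "__main__":
    sent = []
    state = handle_model_response(
        {"rejected": True, "cause": "platform-mismatch", "second_ne_address": "10.0.0.7"},
        b"kpi-samples", "assoc-001",
        lambda addr, payload: sent.append((addr, payload)))
    print(state, sent[0][0])   # awaiting-decrypted-result 10.0.0.7
```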
In the fourth embodiment, when the communication device is a training network element or a module (e.g., a chip) used in a training network element, the transceiving unit 620 is configured to receive a request message from an inference network element, where the request message includes identification information of an analysis type, the request message is used to request a model supporting the analysis type, the manufacturer type of the training network element is different from that of the inference network element, and the types of the model deployment platforms of the inference network element and the training network element are different; to send a response message to the inference network element, where the response message includes first indication information and address information of a second network element, the first indication information indicates that the request for the model supporting the analysis type is rejected, and the types of the model deployment platforms supported by the second network element include the type of the model deployment platform of the training network element; and to receive an encrypted analysis result from the second network element, where the encrypted analysis result is obtained by the second network element according to the data to be analyzed of the inference network element and the encrypted model corresponding to the analysis type. The processing unit 610 is configured to decrypt the encrypted analysis result to obtain a decrypted analysis result. The transceiving unit 620 is further configured to send the decrypted analysis result to the inference network element.
In a possible implementation method, the request message further includes a vendor type of the inference network element and a type of a model deployment platform of the inference network element; the processing unit 610 is configured to: before the transceiving unit 620 sends the response message to the inference network element, determine that the manufacturer types of the training network element and the inference network element are different and the types of the model deployment platforms of the inference network element and the training network element are different.
In a possible implementation method, the response message further includes a rejection cause value, where the rejection cause value indicates that the manufacturer type of the training network element is different from that of the inference network element and the types of the model deployment platforms of the inference network element and the training network element are different.
In a possible implementation method, the transceiving unit 620 is configured to send, to the second network element, the identification information of the analysis type and the encrypted model corresponding to the analysis type before receiving the request message from the inference network element.
In a possible implementation method, the response message further includes second indication information, where the second indication information is used to indicate a data type of the input data corresponding to the encrypted model.
In a possible implementation method, the transceiving unit 620 is configured to receive the encrypted analysis result and the association identifier from the second network element; the processing unit 610 is configured to determine, according to the association identifier, an encryption algorithm corresponding to the encrypted model, determine a decryption algorithm according to the encryption algorithm, and decrypt the encrypted analysis result according to the decryption algorithm to obtain the decrypted analysis result.
In the fifth embodiment, when the communication apparatus is the first network element or a module (e.g., a chip) used in the first network element, the transceiving unit 620 is configured to receive the encrypted analysis result; the processing unit 610 is configured to decrypt the encrypted analysis result to obtain a decrypted analysis result; and the transceiving unit 620 is configured to send the decrypted analysis result to the inference network element.
In a possible implementation method, the transceiving unit 620 is configured to receive the encrypted analysis result from the inference network element.
In a possible implementation method, the transceiving unit 620 is configured to receive the encrypted analysis result and the address information of the inference network element from the second network element; a processing unit 610, configured to send the decrypted analysis result to the inference network element through a transceiving unit 620 according to the address information of the inference network element.
In a possible implementation method, the transceiving unit 620 is configured to: before receiving the encrypted analysis result, receive, from the training network element, the association identifier and an identifier of a decryption algorithm corresponding to the association identifier; and receive the encrypted analysis result and the association identifier. The processing unit 610 is configured to determine the decryption algorithm according to the association identifier, and decrypt the encrypted analysis result according to the decryption algorithm to obtain the decrypted analysis result.
In the sixth embodiment, when the communication apparatus is a second network element or a module (e.g., a chip) used in the second network element, the transceiving unit 620 is configured to receive, from a training network element, identification information of an analysis type and an encrypted model supporting the analysis type, where the types of the model deployment platforms supported by the second network element include the type of the model deployment platform of the training network element; and to receive data to be analyzed from an inference network element. The processing unit 610 is configured to obtain an encrypted analysis result according to the encrypted model and the data to be analyzed. The transceiving unit 620 is further configured to send the encrypted analysis result and address information of the inference network element to a training network element or a first network element, where the address information is used by the training network element or the first network element to send a decrypted analysis result to the inference network element, and the decrypted analysis result is obtained by the training network element or the first network element according to the encrypted analysis result.
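To make the division of labour in the sixth embodiment concrete, the following sketch uses a toy multiplicative masking scheme in place of a real encryption algorithm (claim 20 mentions, for example, fully homomorphic encryption). The second network element evaluates an encrypted linear model on plaintext data to be analyzed and never sees the model coefficients or the analysis result in the clear; only the key holder (the training or first network element) can decrypt. The scheme, the key values, and the model are illustrative and insecure, chosen only so the example runs.

```python
# Toy linearly homomorphic masking: Enc(m) = m*K mod P, Dec(c) = c*K_inv mod P.
# It is NOT secure; it only mirrors the data flow in which the second network
# element evaluates an encrypted (here: linear) model on plaintext data without
# ever seeing the model coefficients in the clear.
P = 2_147_483_647          # prime modulus (illustrative)
K = 918_273_645            # secret key held by the training network element
K_INV = pow(K, -1, P)

def encrypt(m: int) -> int:        # done by the training network element
    return (m * K) % P

def decrypt(c: int) -> int:        # done by the training (or first) network element
    return (c * K_INV) % P

def evaluate_encrypted_model(encrypted_weights, features):
    """Second network element: compute the encrypted analysis result as a linear
    combination of plaintext features with encrypted model weights."""
    return sum(c * x for c, x in zip(encrypted_weights, features)) % P

if __name__ == "__main__":
    weights = [3, 5, 2]                      # plaintext model, known only to the trainer
    enc_weights = [encrypt(w) for w in weights]
    features = [10, 4, 7]                    # data to be analyzed, held by the inference side
    enc_result = evaluate_encrypted_model(enc_weights, features)
    print(decrypt(enc_result))               # 3*10 + 5*4 + 2*7 = 64
```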
In the seventh embodiment, when the communication device is an inference network element or a module (e.g., a chip) used in an inference network element, the transceiving unit 620 is configured to send a request message to a data management network element, where the request message includes identification information of an analysis type and is used to request a network element supporting the analysis type; and to receive a response message from the data management network element, where the response message includes at least one set of information, each set of information includes address information of a candidate training network element and model information of the candidate training network element, the candidate training network element supports the analysis type, and the model information of the candidate training network element includes a manufacturer type of the candidate training network element and a type of a model deployment platform of the candidate training network element. The processing unit 610 is configured to: when, among the at least one candidate training network element corresponding to the at least one set of information, there are one or more candidate training network elements whose manufacturer type is different from that of the inference network element and whose type of model deployment platform is the same as that of the inference network element, select one of the one or more candidate training network elements as the training network element.
In a possible implementation method, the processing unit 610 is configured to: when, among the at least one candidate training network element corresponding to the at least one set of information, there is no candidate training network element whose manufacturer type is different from that of the inference network element and whose type of model deployment platform is the same as that of the inference network element, determine address information of a second network element according to the at least one set of information.
In a possible implementation method, the model information of the candidate training network element includes address information of the second network element; a processing unit 610, configured to obtain address information of the second network element from the model information of the candidate training network element.
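The selection rule of the seventh embodiment, together with the fallback to a second network element described in the two preceding implementation methods, can be summarized in a short sketch; all field names, addresses, and vendor/platform labels are hypothetical.

```python
def select_training_ne(candidates, my_vendor, my_platform):
    """Inference-side selection: pick a candidate training network element whose
    vendor type differs from ours and whose model deployment platform type matches
    ours; otherwise fall back to a second network element address carried in a
    candidate's model information."""
    compatible = [c for c in candidates
                  if c["model_info"]["vendor_type"] != my_vendor
                  and c["model_info"]["platform_type"] == my_platform]
    if compatible:
        return compatible[0]["address"], None          # use it as the training NE
    for c in candidates:                               # fall back to a second NE
        second = c["model_info"].get("second_ne_address")
        if second:
            return None, second
    return None, None

if __name__ == "__main__":
    candidates = [
        {"address": "nwdaf-a",
         "model_info": {"vendor_type": "vendor-A", "platform_type": "platform-X"}},
        {"address": "nwdaf-b",
         "model_info": {"vendor_type": "vendor-B", "platform_type": "platform-Y",
                        "second_ne_address": "10.0.0.7"}},
    ]
    # No candidate has both a different vendor and a matching platform, so the
    # fallback path is taken and the second network element address is returned.
    print(select_training_ne(candidates, my_vendor="vendor-A", my_platform="platform-X"))
```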
More detailed descriptions about the processing unit 610 and the transceiver unit 620 can be directly obtained by referring to the related descriptions in the foregoing method embodiments, and are not repeated herein.
As shown in fig. 7, the communication device 700 includes a processor 710 and, as a possible implementation, further includes an interface circuit 720. The processor 710 and the interface circuit 720 are coupled to each other. It can be understood that the interface circuit 720 may be a transceiver or an input/output interface. As a possible implementation method, the communication device 700 may further include a memory 730, configured to store instructions executed by the processor 710, input data required by the processor 710 to execute the instructions, or data generated after the processor 710 executes the instructions.
When the communication device 700 is used to implement the above method embodiments, the processor 710 is used to implement the functions of the processing unit 610, and the interface circuit 720 is used to implement the functions of the transceiving unit 620.
It is understood that the processor in the embodiments of the present application may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof. The general-purpose processor may be a microprocessor, or it may be any conventional processor.
The method steps in the embodiments of the present application may be implemented by hardware, or may be implemented by a processor executing software instructions. The software instructions may consist of corresponding software modules, and the software modules may be stored in a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an erasable programmable read-only memory, an electrically erasable programmable read-only memory, a register, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be an integral part of the processor. The processor and the storage medium may reside in an ASIC. In addition, the ASIC may reside in an access network device or a terminal device. Of course, the processor and the storage medium may also reside as discrete components in an access network device or a terminal device.
In the above embodiments, the implementation may be wholly or partly realized by software, hardware, firmware, or any combination thereof. When software is used for implementation, the implementation may be wholly or partly in the form of a computer program product. The computer program product includes one or more computer programs or instructions. When the computer program or instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are performed in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, an access network device, a terminal device, or another programmable apparatus. The computer program or instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer program or instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired or wireless manner. The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device, such as a server or a data center, that integrates one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a digital video disc), or a semiconductor medium (for example, a solid-state drive). The computer-readable storage medium may be a volatile or nonvolatile storage medium, or may include both volatile and nonvolatile types of storage media.
In various embodiments of the present application, unless otherwise specified or conflicting, terms and/or descriptions between different embodiments have consistency and may be mutually referenced, and technical features in different embodiments may be combined to form a new embodiment according to their inherent logical relationships.
In this application, "at least one" means one or more, "a plurality" means two or more. "and/or" describes the association relationship of the associated object, indicating that there may be three relationships, for example, a and/or B, which may indicate: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. In the description of the text of the present application, the character "/" generally indicates that the former and latter associated objects are in an "or" relationship; in the formula of the present application, the character "/" indicates that the preceding and following associated objects are in a "division" relationship.
It is to be understood that the various numerical references referred to in the embodiments of the present application are merely for descriptive convenience and are not intended to limit the scope of the embodiments of the present application. The sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of the processes should be determined by their functions and inherent logic.

Claims (33)

1. A method of communication, comprising:
the method comprises the steps that a reasoning network element sends a first request message to a training network element, wherein the first request message comprises identification information of an analysis type, the first request message is used for requesting a model supporting the analysis type, the manufacturer type of the training network element is different from that of the reasoning network element, and the model deployment platforms of the reasoning network element and the training network element are the same in type;
the reasoning network element receives a first response message from the training network element, wherein the first response message comprises an encrypted model or address information of the encrypted model, and the encrypted model supports the analysis type;
the reasoning network element obtains an encrypted analysis result according to the encrypted model;
and the reasoning network element acquires a decrypted analysis result according to the encrypted analysis result.
2. The method of claim 1, wherein the reasoning network element obtaining the decrypted analysis result from the encrypted analysis result comprises:
the reasoning network element sends the encrypted analysis result to the training network element;
the inference network element receives the decrypted analysis result from the training network element.
3. The method of claim 2, wherein the first response message further includes first indication information, and the first indication information indicates that the encrypted analysis result is decrypted by the training network element.
4. The method of claim 2 or 3, wherein the reasoning network element sending the encrypted analysis result to the training network element comprises:
and the reasoning network element sends the encrypted analysis result and the associated identifier to the training network element, wherein the associated identifier is used for the training network element to determine an encryption algorithm corresponding to the encrypted model.
5. The method of claim 1, wherein the first response message further includes address information of the first network element;
and the reasoning network element obtaining the decrypted analysis result according to the encrypted analysis result comprises:
the reasoning network element sends the encrypted analysis result to the first network element according to the address information of the first network element;
the inference network element receives the decrypted analysis result from the first network element.
6. The method of claim 5, wherein the reasoning network element sending the encrypted analysis result to the first network element according to the address information of the first network element comprises:
and the reasoning network element sends the encrypted analysis result and the association identifier to the first network element according to the address information of the first network element, wherein the association identifier is used for the first network element to determine an encryption algorithm corresponding to the encrypted model.
7. The method according to any one of claims 1 to 6, wherein the first response message further includes second indication information indicating a data type of the input data corresponding to the encrypted model.
8. The method according to any of claims 1 to 7, wherein the first request message further contains a vendor type of the inference network element and a type of model deployment platform of the inference network element.
9. The method of any of claims 1 to 8, wherein prior to the inference network element sending the first request message to the training network element, further comprising:
the inference network element sends a second request message to a data management network element, where the second request message includes identification information of the analysis type, and the second request message is used to request a network element supporting the analysis type;
and the reasoning network element receives a second response message from the data management network element, wherein the second response message comprises the address information of the training network element.
10. A method of communication, comprising:
a training network element receives a first request message from a reasoning network element, wherein the first request message comprises identification information of an analysis type, the first request message is used for requesting a model supporting the analysis type, the manufacturer type of the training network element is different from that of the reasoning network element, and the model deployment platforms of the reasoning network element and the training network element are the same in type;
the training network element sends a first response message to the reasoning network element, wherein the first response message comprises an encrypted model or address information of the encrypted model;
the training network element receives an encrypted analysis result from the reasoning network element, wherein the encrypted analysis result is obtained according to the encrypted model;
the training network element decrypts the encrypted analysis result to obtain a decrypted analysis result;
and the training network element sends the decrypted analysis result to the reasoning network element.
11. The method of claim 10, wherein the first request message further includes a vendor type of the reasoning network element and a type of model deployment platform of the reasoning network element;
before the training network element sends the first response message to the inference network element, the method further includes:
and the training network element determines that the manufacturer types of the training network element and the reasoning network element are different, and the types of the model deployment platforms of the reasoning network element and the training network element are the same.
12. The method according to claim 10 or 11, wherein the first response message further comprises first indication information indicating that the encrypted analysis result is decrypted by the training network element.
13. The method according to any one of claims 10 to 12, wherein the first response message further includes second indication information indicating a data type of the input data corresponding to the encrypted model.
14. The method of any of claims 10 to 13, wherein prior to the training network element receiving the first request message from the reasoning network element, further comprising:
the training network element sends a registration request message to a data management network element, wherein the registration request message includes identification information of the analysis type and model information of the training network element, and the model information includes a manufacturer type of the training network element and a type of a model deployment platform of the training network element.
15. The method of any of claims 10 to 14, wherein the training network element receiving the encrypted analysis result from the reasoning network element comprises:
the training network element receives the encrypted analysis result and an association identifier from the reasoning network element;
and the training network element decrypting the encrypted analysis result to obtain the decrypted analysis result comprises:
the training network element determines an encryption algorithm corresponding to the encrypted model according to the association identifier;
the training network element determines a decryption algorithm according to the encryption algorithm;
and the training network element decrypts the encrypted analysis result according to the decryption algorithm to obtain the decrypted analysis result.
16. A method of communication, comprising:
the method comprises the steps that a reasoning network element sends a request message to a training network element, wherein the request message comprises identification information of an analysis type, the request message is used for requesting a model supporting the analysis type, the manufacturer type of the training network element is different from that of the reasoning network element, and the model deployment platforms of the reasoning network element and the training network element are different from each other;
the inference network element receives a response message from the training network element, wherein the response message comprises first indication information and address information of a second network element, the first indication information indicates that the request for the model supporting the analysis type is rejected, and the type of the model deployment platform supported by the second network element comprises the type of the model deployment platform of the training network element;
the reasoning network element sends data to be analyzed to the second network element according to the address information of the second network element, wherein the data to be analyzed is used for the second network element to generate an encrypted analysis result according to the encrypted model corresponding to the analysis type;
the inference network element receives a decrypted analysis result from the training network element or the first network element, where the decrypted analysis result is obtained by the training network element or the first network element according to the encrypted analysis result.
17. The method of claim 16, wherein the response message further includes a rejection cause value, and the rejection cause value indicates that the manufacturer type of the training network element is different from that of the reasoning network element and the types of the model deployment platforms of the reasoning network element and the training network element are different.
18. The method according to claim 16 or 17, wherein the request message further contains a vendor type of the inference network element and a type of model deployment platform of the inference network element.
19. The method according to any one of claims 16 to 18, wherein the response message further includes second indication information indicating a data type of the input data corresponding to the encrypted model.
20. The method of any of claims 16 to 19, wherein the encrypted model is encrypted using one or more of a fully homomorphic encryption algorithm, a random secure averaging algorithm, or a differential privacy algorithm.
21. The method according to any of claims 16 to 20, wherein the sending, by the inference network element, the data to be analyzed to the second network element based on the address information of the second network element comprises:
and the reasoning network element sends data to be analyzed and an associated identifier to the second network element according to the address information of the second network element, wherein the associated identifier is used for the first network element or the training network element to determine an encryption algorithm corresponding to the encrypted model.
22. A method of communication, comprising:
a training network element receives a request message from a reasoning network element, wherein the request message comprises identification information of an analysis type, the request message is used for requesting a model supporting the analysis type, the manufacturer type of the training network element is different from that of the reasoning network element, and the model deployment platforms of the reasoning network element and the training network element are different;
the training network element sends a response message to the reasoning network element, wherein the response message comprises first indication information and address information of a second network element, the first indication information indicates that the request for the model supporting the analysis type is rejected, and the type of the model deployment platform supported by the second network element comprises the type of the model deployment platform of the training network element;
the training network element receives an encrypted analysis result from the second network element, wherein the encrypted analysis result is obtained by the second network element according to the data to be analyzed of the reasoning network element and the encrypted model corresponding to the analysis type;
the training network element decrypts the encrypted analysis result to obtain a decrypted analysis result;
and the training network element sends the decrypted analysis result to the reasoning network element.
23. The method of claim 22, wherein the request message further includes a vendor type of the inference network element and a type of a model deployment platform of the inference network element;
before the training network element sends the response message to the inference network element, the method further includes:
and the training network element determines that the manufacturer types of the training network element and the reasoning network element are different, and the types of the model deployment platforms of the reasoning network element and the training network element are different.
24. The method of claim 22 or 23, wherein the response message further comprises a rejection cause value, and the rejection cause value indicates that the manufacturer type of the training network element is different from that of the reasoning network element and the types of the model deployment platforms of the reasoning network element and the training network element are different.
25. The method of any of claims 22 to 24, wherein prior to the training network element receiving the request message from the reasoning network element, further comprising:
and the training network element sends the identification information of the analysis type and the encrypted model corresponding to the analysis type to the second network element.
26. The method according to any one of claims 22 to 25, wherein the response message further includes second indication information indicating a data type of the input data corresponding to the encrypted model.
27. The method according to any of claims 22 to 26, wherein the training network element receiving the encrypted analysis result from the second network element comprises:
the training network element receives the encrypted analysis result and an association identifier from the second network element;
and the training network element decrypting the encrypted analysis result to obtain the decrypted analysis result comprises:
the training network element determines an encryption algorithm corresponding to the encrypted model according to the association identifier;
the training network element determines a decryption algorithm according to the encryption algorithm;
and the training network element decrypts the encrypted analysis result according to the decryption algorithm to obtain the decrypted analysis result.
28. A communication device, comprising a processor and a memory, wherein the memory is configured to store computer instructions that, when executed by the device, cause the device to perform the method of any one of claims 1 to 9 or 16 to 21, or the method of any one of claims 10 to 15 or 22 to 27.
29. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program or instructions that, when executed by a communication apparatus, cause the method of any one of claims 1 to 27 to be performed.
30. A communication system, comprising:
an inference network element for performing the method of any of claims 1 to 9; and
a training network element for sending the encrypted model or the address information of the encrypted model to the inference network element.
31. A communication system, comprising:
the system comprises an inference network element and a training network element, wherein the inference network element is used for sending a first request message to the training network element, the first request message comprises identification information of an analysis type, and the first request message is used for requesting a model supporting the analysis type; and
the training network element for performing the method of any one of claims 10 to 15.
32. A communication system, comprising:
an inference network element for performing the method of any of claims 16 to 21; and
a training network element for sending the address information of the second network element to the inference network element, wherein the type of the model deployment platform supported by the second network element comprises the type of the model deployment platform of the training network element.
33. A communication system, comprising:
the system comprises an inference network element and a training network element, wherein the inference network element is used for sending a request message to the training network element, the request message comprises identification information of an analysis type, and the request message is used for requesting a model supporting the analysis type; and
the training network element for performing the method of any one of claims 22 to 27.
CN202111030657.9A 2021-09-03 2021-09-03 Communication method, communication device and communication system Pending CN115767514A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111030657.9A CN115767514A (en) 2021-09-03 2021-09-03 Communication method, communication device and communication system
PCT/CN2022/114043 WO2023030077A1 (en) 2021-09-03 2022-08-22 Communication method, communication apparatus, and communication system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111030657.9A CN115767514A (en) 2021-09-03 2021-09-03 Communication method, communication device and communication system

Publications (1)

Publication Number Publication Date
CN115767514A true CN115767514A (en) 2023-03-07

Family

ID=85332899

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111030657.9A Pending CN115767514A (en) 2021-09-03 2021-09-03 Communication method, communication device and communication system

Country Status (2)

Country Link
CN (1) CN115767514A (en)
WO (1) WO2023030077A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111083722A (en) * 2019-04-15 2020-04-28 中兴通讯股份有限公司 Model pushing method, model requesting method, model pushing device, model requesting device and storage medium
CN112311564B (en) * 2019-07-23 2022-04-22 华为技术有限公司 Training method, device and system applying MOS model
CN110569288A (en) * 2019-09-11 2019-12-13 中兴通讯股份有限公司 Data analysis method, device, equipment and storage medium
CN112784992A (en) * 2019-11-08 2021-05-11 中国移动通信有限公司研究院 Network data analysis method, functional entity and electronic equipment
EP4087193A4 (en) * 2020-02-07 2023-01-18 Huawei Technologies Co., Ltd. Data analysis method, apparatus and system

Also Published As

Publication number Publication date
WO2023030077A1 (en) 2023-03-09

Similar Documents

Publication Publication Date Title
CN111901135B (en) Data analysis method and device
US11825319B2 (en) Systems and methods for monitoring performance in distributed edge computing networks
US20220030117A1 (en) Systems and methods to enable programmable xhaul transport
US10645565B2 (en) Systems and methods for external group identification in wireless networks
CN111200798B (en) V2X message transmission method, device and system
US20220053584A1 (en) Method for establishing communication bearer, device, and system
US11758377B2 (en) Vehicle terminal for controlling V2X message transmission between vehicle terminals through V2X service in wireless communication system and communication control method thereof
CN114830818A (en) QoS management method, relay terminal, PCF network element, SMF network element and remote terminal
CN114342332A (en) Communication method, device and system
CN113973399A (en) Message forwarding method, device and system
CN114902703A (en) D2D communication method, device and system
CN112954768B (en) Communication method, device and system
CN115767514A (en) Communication method, communication device and communication system
CN113543216B (en) Method, device and system for transmitting media message
CN112449377B (en) Network data reporting method and device
CN115529637A (en) Communication method, communication device and communication system
CN115244991A (en) Communication method, device and system
WO2023213177A1 (en) Communication method and apparatus
WO2023016298A1 (en) Service awareness method, communication apparatus, and communication system
CN112584326B (en) Communication method, device and system
WO2023056784A1 (en) Data collection method, communication apparatus and communication system
EP4228344A1 (en) Method and apparatus for requesting prs configuration, and communication device and storage medium
US11140550B2 (en) Gateway, a CMS, a system and methods therein, for assisting a server with collecting data from a capillary device
CN116866965A (en) Backup method, communication device and communication system
CN115884134A (en) Communication method and communication device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination