WO2023160508A1 - Model management method and communication apparatus


Info

Publication number: WO2023160508A1
Application number: PCT/CN2023/077287 (WIPO, PCT)
Prior art keywords: model, entity, inference, provider, information
Other languages: French (fr), Chinese (zh)
Inventors: 黄谢田, 曹龙雨, 于益俊
Applicant: 华为技术有限公司 (Huawei Technologies Co., Ltd.)


Classifications

    • G06N5/04: Inference or reasoning models (G06N5/00: Computing arrangements using knowledge-based models)
    • G06Q30/00: Commerce (G06Q: Information and communication technology specially adapted for administrative, commercial, financial, managerial or supervisory purposes)
    • G06Q30/018: Certifying business or products

Definitions

  • the present application relates to the field of communication technologies, and more specifically, relates to a model management method and a communication device.
  • Inference models (also referred to as reasoning models), such as artificial intelligence (AI) models and machine learning (ML) models, are typically provided by model providers such as manufacturers or operators.
  • The model management entity can obtain an inference model from the model marketplace and deploy the inference model in the equipment of a device provider, and the device provider can then run the deployed inference model.
  • the present application provides a model management method and a communication device, so as to improve the reliability of reasoning model operation.
  • In a first aspect, a method for model management is provided. The method may be executed by a model authentication entity or a chip in the model authentication entity, and includes: the model authentication entity receives first provider information from a model management entity, where the first provider information indicates that an inference model is provided by a first model provider; the model authentication entity compares the first model provider with the device provider of a model inference entity to generate a comparison result, where the model inference entity is the entity that will run the inference model; and the model authentication entity sends an authentication result of the inference model to the model management entity based on the comparison result, where the authentication result indicates whether the inference model can run in the model inference entity.
  • In the above technical solution, the model management entity can send the model authentication entity the first provider information indicating the first model provider that provides the inference model; the model authentication entity can compare the first model provider with the device provider, generate an authentication result according to the comparison result, and notify the model management entity; the model management entity can then apply different processing based on different authentication results, which improves the reliability of inference model operation.
  • If the comparison result is that the first model provider and the device provider are inconsistent and the accuracy of the inference model is less than a first threshold, the authentication result indicates that the inference model cannot run in the model inference entity; or, if the comparison result is that the first model provider and the device provider are inconsistent and the accuracy of the inference model is greater than or equal to the first threshold, the authentication result indicates that the inference model can run in the model inference entity.
  • In the above technical solution, if the model authentication entity judges that the first model provider of the inference model is inconsistent with the device provider, it can further determine the accuracy of the inference model and decide, according to that accuracy, whether the inference model passes authentication, which improves the flexibility of the system.
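  • As a purely illustrative sketch of this decision logic (the function and parameter names below are hypothetical and not defined by the application), the provider comparison and the accuracy threshold could be combined as follows:

```python
def authenticate_model(first_model_provider: str,
                       device_provider: str,
                       model_accuracy: float,
                       first_threshold: float) -> bool:
    """Return True if the inference model may run in the model inference entity.

    Decision rule sketched above:
      providers consistent                          -> pass
      providers inconsistent, accuracy >= threshold -> pass
      providers inconsistent, accuracy <  threshold -> fail
    """
    if first_model_provider == device_provider:
        return True
    return model_accuracy >= first_threshold

# Example: different providers, but the accuracy meets the threshold -> passes.
assert authenticate_model("ProviderA", "ProviderB", 0.92, 0.90) is True
```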
  • the method before the model authentication entity compares the first model provider and the equipment provider, the method further includes: the model authentication entity receives the reasoning model from the model management entity; the model authentication The entity determines that the first model provider is the provider of the inference model.
  • In the above technical solution, the model authentication entity can first judge whether the inference model is indeed provided by the first model provider, that is, judge the authenticity of the first provider information, which can improve the reliability of model operation.
  • The model certification entity determining that the first model provider is the provider of the inference model includes: the model certification entity determines that a first format is consistent with a second format; or, the model certification entity determines that a second model provider is consistent with the first model provider; or, the model certification entity determines that the first format is consistent with the second format and that the second model provider is consistent with the first model provider; where the first format is the format of the inference model, the second format is determined according to the first provider information, and the second model provider is determined according to the inference model.
  • the method further includes: the model authentication entity sends cause information to the model management entity, where the cause information indicates the reason for obtaining the authentication result.
  • the model certification entity can inform the model management entity of the reason why the inference model certification fails, and then the model management entity can perform different processing based on different reasons, which can improve the reliability of model operation.
  • the comparison result is that the first model provider and the device provider are consistent, and the authentication result indicates that the reasoning model can run in the model reasoning entity.
  • In the above technical solution, the model authentication entity can judge whether the first model provider of the inference model is consistent with the device provider of the model inference entity; if they are consistent, it can determine that the inference model passes authentication, and the model management entity can then deploy the authenticated inference model in the model inference entity, which improves the reliability of model operation.
  • the method before the model certification entity compares the first model provider and the equipment provider, the method further includes: the model certification entity receives the second provider information from the model management entity , the second provider information indicates the device provider.
  • In a second aspect, a method for model management is provided.
  • The method may be executed by a model management entity or a chip in the model management entity.
  • The method includes: the model management entity sends first provider information to the model authentication entity, where the first provider information indicates that a first inference model is provided by a first model provider; and the model management entity receives an authentication result of the first inference model from the model authentication entity, where the authentication result indicates whether the first inference model can run in a model inference entity.
  • In the above technical solution, the model management entity can send the model authentication entity the first provider information indicating the first model provider that provides the first inference model; the model authentication entity can compare the first model provider with the device provider, generate the authentication result according to the comparison result, and notify the model management entity; the model management entity can then apply different processing based on different authentication results, which improves the reliability of inference model operation.
  • the first model provider and the device provider of the model inference entity are inconsistent, the accuracy of the first inference model is less than the first threshold, and the authentication result indicates that the first inference model Cannot run in the model inference entity; or, the first model provider and the device provider are inconsistent, the accuracy of the first inference model is greater than or equal to the first threshold, and the authentication result indicates that the first inference model can run in the model inference entity .
  • the method further includes: the model management entity sends the first reasoning model to the model certification entity.
  • The first model provider being the provider of the inference model includes: the first format is consistent with the second format; or, the second model provider is consistent with the first model provider; or, the first format is consistent with the second format and the second model provider is consistent with the first model provider; where the first format is the format of the inference model, the second format is determined according to the first provider information, and the second model provider is determined according to the inference model.
  • the method further includes: the model management entity receives cause information from the model certification entity, where the cause information indicates a reason for obtaining the certification result.
  • When the authentication result indicates that the first inference model cannot run in the model inference entity, the method further includes: the model management entity sends at least one of the following information to the model training entity: the authentication result, cause information, and first adjustment information, where the cause information indicates the reason for obtaining the authentication result and the first adjustment information instructs the model training entity to adjust the first inference model; and the model management entity receives a second inference model from the model training entity, where the second inference model is obtained by adjusting the first inference model.
  • the model training entity can improve the flexibility of model management by reacquiring or retraining the inference model.
  • When the authentication result indicates that the first inference model cannot run in the model inference entity, the method further includes: the model management entity sends the first inference model and second adjustment information to the model inference entity, where the second adjustment information is used to instruct the model inference entity to adjust the first inference model.
  • the model inference entity can retrain the inference model before deploying the inference model, which improves the flexibility of model management.
  • the first model provider is the same as the device provider of the model inference entity, and the authentication result indicates that the first inference model can run in the model inference entity.
  • Before receiving the authentication result of the first inference model from the model authentication entity, the method further includes: the model management entity sends second provider information to the model authentication entity, where the second provider information indicates the device provider.
  • In a third aspect, a communication apparatus for model management is provided, including a transceiver module and a processing module. The transceiver module is configured to receive first provider information from a model management entity, where the first provider information indicates that an inference model is provided by a first model provider; the processing module is configured to compare the first model provider with the device provider of a model inference entity to generate a comparison result, where the model inference entity is the entity that will run the inference model; and the transceiver module is further configured to send an authentication result of the inference model to the model management entity based on the comparison result, where the authentication result indicates whether the inference model can run in the model inference entity.
  • In the above technical solution, the model management entity can send the model authentication entity the first provider information indicating the first model provider that provides the inference model; the model authentication entity can compare the first model provider with the device provider, generate the authentication result according to the comparison result, and notify the model management entity; the model management entity can then apply different processing based on different authentication results, which improves the reliability of inference model operation.
  • If the comparison result is that the first model provider and the device provider are inconsistent and the accuracy of the inference model is less than the first threshold, the authentication result indicates that the inference model cannot run in the model inference entity; or, if the comparison result is that the first model provider and the device provider are inconsistent and the accuracy of the inference model is greater than or equal to the first threshold, the authentication result indicates that the inference model can run in the model inference entity.
  • The transceiver module is further configured to receive the inference model from the model management entity, and the processing module is further configured to determine that the first model provider is the provider of the inference model.
  • the processing module is specifically configured to: determine that the first format is consistent with the second format; or determine that the second model provider is consistent with the first model provider; or, Determine that the first format is consistent with the second format, and that the second model provider is consistent with the first model provider; wherein, the first format is the format of the reasoning model, the second format is determined according to the information of the first provider, and the second The model provider is determined based on the inference model.
  • the transceiver module is further configured to: send cause information to the model management entity, where the cause information indicates the reason for obtaining the authentication result.
  • the comparison result indicates that the first model provider is consistent with the device provider
  • the authentication result indicates that the inference model can run in the model inference entity
  • the transceiving module is further configured to: receive second provider information from the model management entity, where the second provider information is used to indicate the device provider.
  • In a fourth aspect, a communication apparatus for model management is provided, including a transceiver module and a processing module. The processing module is configured to generate first provider information, where the first provider information indicates that a first inference model is provided by a first model provider; the transceiver module is configured to send the first provider information to the model authentication entity; and the transceiver module is further configured to receive an authentication result of the first inference model from the model authentication entity, where the authentication result indicates whether the first inference model can run in the model inference entity.
  • In the above technical solution, the model management entity can send the model authentication entity the first provider information indicating the first model provider that provides the first inference model; the model authentication entity can compare the first model provider with the device provider, generate the authentication result according to the comparison result, and notify the model management entity; the model management entity can then apply different processing based on different authentication results, which improves the reliability of inference model operation.
  • If the first model provider and the device provider are inconsistent and the accuracy of the first inference model is less than the first threshold, the authentication result indicates that the first inference model cannot run in the model inference entity; or, if the first model provider and the device provider are inconsistent and the accuracy of the first inference model is greater than or equal to the first threshold, the authentication result indicates that the first inference model can run in the model inference entity.
  • the transceiver module is further configured to: send the first reasoning model to the model verification entity.
  • The first model provider being the provider of the first inference model includes: the first format is consistent with the second format; or, the second model provider is consistent with the first model provider; or, the first format is consistent with the second format and the second model provider is consistent with the first model provider; where the first format is the format of the first inference model, the second format is determined according to the first provider information, and the second model provider is determined according to the first inference model.
  • the transceiver module is further configured to: receive cause information from the model authentication entity, where the cause information indicates the reason for obtaining the authentication result.
  • When the authentication result indicates that the first inference model cannot run in the model inference entity, the transceiver module is further configured to: send at least one of the following information to the model training entity: the authentication result, cause information, and first adjustment information, where the cause information indicates the reason for obtaining the authentication result and the first adjustment information instructs the model training entity to adjust the first inference model; and receive a second inference model from the model training entity, where the second inference model is obtained by adjusting the first inference model.
  • the authentication result indicates that the reasoning model cannot run in the model reasoning entity
  • the sending and receiving module is also used to: send the first reasoning model and the second adjustment information to the model reasoning entity , the second adjustment information instructs the model inference entity to adjust the first inference model.
  • the first model provider is consistent with the device provider, and the authentication result indicates that the first inference model can run in the model inference entity.
  • the transceiving module is further configured to: send second provider information to the model verification entity, where the second provider information indicates the device provider of the model reasoning entity.
  • In a fifth aspect, a communication apparatus is provided, which may include a processing module, a sending unit, and a receiving unit.
  • the sending unit and the receiving unit may also be transceiver modules.
  • the processing module can be a processor, and the sending unit and the receiving unit can be transceivers; the device can also include a storage unit, which can be a memory; the storage unit is used to store instructions , the processing module executes the instructions stored in the storage unit, so that the model certification entity executes any method in the first aspect.
  • the processing module may be a processor, and the sending unit and receiving unit may be input/output interfaces, pins or circuits, etc.; the processing module executes the instructions stored in the storage unit, to make the chip execute any one of the methods in the first aspect.
  • the storage unit is used to store instructions, and the storage unit may be a storage unit in the chip (for example, a register, a cache, etc.), or a storage unit outside the chip in the model certification entity (for example, a read-only memory , random access memory, etc.).
  • the processing module may be a processor, and the sending unit and the receiving unit may be transceivers; the device may also include a storage unit, which may be a memory; the storage unit is used to store instructions , the processing module executes the instruction stored in the storage unit, so that the model management entity executes any method in the second aspect.
  • the processing module may be a processor, and the sending unit and the receiving unit may be input/output interfaces, pins or circuits, etc.; the processing module executes the instructions stored in the storage unit, to make the chip execute any one of the methods in the second aspect.
  • the storage unit is used to store instructions, and the storage unit may be a storage unit in the chip (for example, a register, a cache, etc.), or a storage unit outside the chip in the model management entity (for example, a read-only memory , random access memory, etc.).
  • In a sixth aspect, a communication apparatus is provided, including a processor and an interface circuit, where the interface circuit is configured to receive signals from communication apparatuses other than the communication apparatus and transmit the signals to the processor, or to send signals from the processor to communication apparatuses other than the communication apparatus.
  • the processor is used to implement any method in the aforementioned first aspect or second aspect through a logic circuit or executing code instructions.
  • In a seventh aspect, a computer-readable storage medium is provided, where a computer program or instructions are stored in the computer-readable storage medium.
  • When the computer program or instructions are executed, any method in the aforementioned first aspect or second aspect is implemented.
  • a computer program product containing instructions is provided, and when the instructions are executed, any one of the methods in the aforementioned first aspect or second aspect is implemented.
  • In a ninth aspect, a computer program is provided, which includes code or instructions; when the code or instructions are executed, any method in the aforementioned first aspect or second aspect is implemented.
  • In a tenth aspect, a chip system is provided, which includes a processor and may further include a memory, and is configured to implement any method in the aforementioned first aspect or second aspect.
  • the system-on-a-chip may consist of chips, or may include chips and other discrete devices.
  • In an eleventh aspect, a communication system is provided, which includes the apparatus described in any one of the third aspect and the fourth aspect.
  • FIG. 1 is a schematic diagram of a system for model management provided by an embodiment of the present application.
  • Fig. 2 is a schematic flowchart of an example of a model management method provided by an embodiment of the present application.
  • Fig. 3 is a schematic flowchart of another example of the model management method provided by an embodiment of the present application.
  • Fig. 4 is a schematic flowchart of another example of the model management method provided by the embodiment of the present application.
  • FIG. 5 and FIG. 6 are schematic structural diagrams of a possible communication device provided by an embodiment of the present application.
  • The method in the embodiments of the present application can be applied to a long term evolution (LTE) system, a long term evolution-advanced (LTE-A) system, an enhanced long term evolution (eLTE) system, or the new radio (NR) system of the fifth generation (5G) mobile communication system, and can also be extended to similar wireless communication systems, such as wireless fidelity (WiFi), worldwide interoperability for microwave access (WiMAX), and cellular systems related to the 3rd generation partnership project (3GPP).
  • Inference model (may also be referred to simply as model): A function learned from data that can implement a specific function/map.
  • the model can be obtained based on artificial intelligence (AI) or machine learning (ML) technology, so it can also be called artificial intelligence/AI model, machine learning/ML model, etc.
  • Commonly used algorithms for generating AI/ML models include supervised learning, unsupervised learning, and reinforcement learning.
  • The corresponding models can be called supervised learning models, unsupervised learning models, and reinforcement learning models.
  • the supervised learning model may be a classification model, a prediction model, a regression model, etc.
  • the unsupervised learning model may be a clustering model.
  • the model can also be obtained based on neural network (NN) technology, and this model can also be called a neural network model, a deep learning model, and the like.
  • Model training: use training data to train an available model.
  • Model inference: perform inference or prediction based on the model and generate an inference result. A model inference entity can be used to perform model inference.
  • Model deployment: deploy the model in the model inference entity.
  • Model activation: activate the model deployed in the model inference entity so that it starts running.
  • Model evaluation: evaluate whether the performance of the model running in the model inference entity meets the requirements.
  • Model certification: determine whether the entity that trained the model is consistent with the entity where the model is deployed, and, when they are inconsistent, determine whether the operating performance of the model after deployment can meet expectations.
  • Model management: manage models during their life cycle, for example, manage model deployment, model activation, model evaluation, model training, and so on.
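  • Purely for illustration (these names are hypothetical and not part of the application), the life-cycle operations listed above could be represented as an enumeration:

```python
from enum import Enum, auto

class ModelLifecycleOperation(Enum):
    """Hypothetical enumeration of the model life-cycle operations described above."""
    TRAINING = auto()       # train an available model from training data
    CERTIFICATION = auto()  # check provider consistency / expected performance
    DEPLOYMENT = auto()     # deploy the model in the model inference entity
    ACTIVATION = auto()     # start running the deployed model
    INFERENCE = auto()      # generate inference results based on the model
    EVALUATION = auto()     # check whether running performance meets requirements
```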
  • To facilitate understanding of the embodiments of the present application, an application scenario of the embodiments is first described in detail with reference to FIG. 1.
  • Fig. 1 is a schematic structural diagram of a communication system to which the embodiments of the present application are applicable. First, the devices that may be involved in the communication system are described.
  • Model management entity 110 used to manage the model within the life cycle.
  • the model management entity 110 may be a network management system (network management system, NMS).
  • the model management entity 110 may be deployed in the operator's equipment.
  • Model training entity 120 used to obtain available models through training.
  • the model training entity may be an operator's platform or a manufacturer's training platform, or other entities deploying a model training function.
  • the model training entity 120 can publish the trained model to the model market, and the model management entity 110 can obtain the model from the model market, and deploy the model to the model reasoning entity 130 .
  • the model market can be deployed in the model management entity 110 , can also be deployed in the model training entity 120 , or can be deployed independently, which is not particularly limited in this application.
  • the provider of the model training entity 120 may be referred to as a model provider.
  • Model inference entity 130: used to perform inference or computation based on the model and generate inference results.
  • For example, the model inference entity 130 may be an element management system (EMS), a management data analytics function (MDAF), a radio access network (RAN), or a network data analytics function (NWDAF) in a 5G system.
  • The model inference entity 130 may be deployed in a manufacturer's equipment, and the provider of the model inference entity 130 may be referred to as a device provider.
  • the communication system may also include a model certification entity 140:
  • Model certification entity 140: used to certify the model, for example, to determine whether the operating performance of the model can meet expectations.
  • the solution in this application can be applied to other systems including corresponding entities, which is not limited in this application.
  • the above entity or function may be a network element in a hardware device, a software function running on dedicated hardware, or a virtualization function instantiated on a platform (eg, a cloud platform).
  • the above entity or function may be implemented by one device, or jointly implemented by multiple devices, or may be a functional module in one device, which is not specifically limited in this embodiment of the present application.
  • The model management entity 110 and the model training entity 120 may be different functional modules in one device.
  • the model inference entity 130 and the model certification entity 140 may be different functional modules in one device.
  • the embodiment of the present application provides a model management method and communication device, which can improve the reliability of the model operation.
  • The model management method provided in the embodiments of the present application is described below.
  • the method embodiments shown in FIG. 2 to FIG. 4 can be combined with each other, and the steps in the method embodiments shown in FIG. 2 to FIG. 4 can be referred to each other.
  • the method embodiments shown in FIG. 3 and FIG. 4 may respectively be an implementation manner for realizing the functions of the method embodiment shown in FIG. 2 .
  • FIG. 2 is a schematic flowchart of a method 200 for model management provided by an embodiment of the present application.
  • the model management entity sends the first provider information to the model certification entity, and correspondingly, the model certification entity receives the first provider information from the model management entity.
  • the first provider information indicates that the reasoning model (ie, the first reasoning model) is provided by the first model provider.
  • the first provider information indicates that the first inference model is trained and generated by the first model provider.
  • the first model provider may be a first manufacturer or a first supplier.
  • That the first inference model is provided by the first model provider may mean that the device providing the first inference model belongs to the first model provider, or in other words, the device that trained the first inference model belongs to the first model provider, or the device that generated the first inference model belongs to the first model provider.
  • the first model provider may train the first reasoning model based on the data of the first model provider, or obtain and train the first reasoning model based on data from other providers, This application does not specifically limit it.
  • the first provider information may be the name or identification information of the first model provider, for example, manufacturers such as Huawei, ZTE, and Ericsson, or operators such as China Mobile and China Telecom.
  • the model verification entity compares the first model provider with the device provider of the model inference entity to generate a comparison result, wherein the model inference entity is an entity that will run the first inference model.
  • the model authentication entity determines whether the first model provider is consistent with the equipment provider for providing the model inference entity.
  • the equipment provider may be a second manufacturer or a second operator.
  • That the model inference entity is an entity that will run the first inference model may mean that the device that is to receive the first inference model (that is, the model inference entity) is produced or provided by the device provider, or that the device that is to run the first inference model is produced or provided by the device provider.
  • For example, if the first model provider and the device provider are the same manufacturer or the same operator, the model certification entity may determine that the first model provider and the device provider are consistent; if the first model provider and the device provider are different manufacturers, or different operators, or are a manufacturer and an operator respectively, it determines that the first model provider and the device provider are inconsistent.
  • In this way, before the first inference model is run, the model certification entity can first determine whether the provider that trained the first inference model is consistent with the provider that will run the model, so that different handling can be applied for different results, improving the reliability of running the model.
  • In a possible implementation, the information of the device provider may be recorded in the model authentication entity.
  • Alternatively, the method 200 may acquire the information of the device provider by performing step S203.
  • the model management entity sends the second provider information to the model certification entity, and correspondingly, the model certification entity receives the second provider information from the model management entity.
  • the second provider information indicates the device provider.
  • the second provider information may be the name or identification information of the device provider and/or the name or identification information of the model reasoning entity.
  • the model management entity may determine that the first inference model needs to be deployed in the device provider's model inference entity, and then send the second provider information to the model authentication entity.
  • the model authentication entity sends an authentication result to the model management entity based on the comparison result, and correspondingly, the model management entity receives the authentication result from the model authentication entity.
  • the authentication result indicates whether the first inference model can run in the model inference entity.
  • The model authentication entity determines, according to the comparison result, whether the first inference model can run in the model inference entity, so as to generate the authentication result. If the model authentication entity determines that the first inference model can run in the model inference entity, the first inference model passes authentication. Conversely, if the model authentication entity determines that the first inference model cannot run in the model inference entity, the authentication of the first inference model fails.
  • the authentication result indicates that the first reasoning model can run in the model reasoning entity.
  • the authentication result indicates that the first model provider and the device provider are identical.
  • the authentication result instructs the model management entity to deploy the first reasoning model.
  • the authentication result indicates that the authentication status of the first reasoning model is passed.
  • the model authentication entity may also send the authentication result to the model inference entity, and correspondingly, the model inference entity receives the authentication result from the model authentication entity.
  • the authentication result may also indicate an authenticated ID or an authenticated password, and the authenticated ID or the authenticated password may be used by the model inference entity to determine that the first inference model is authenticated.
  • the authentication result includes status information indicating that the authentication is passed, and an authentication-passed identifier or a pass-authentication password.
  • the model authentication entity may generate an authentication pass ID or an authentication pass password according to the version of the first inference model or the identifier of the first inference model, and send the authentication results to the model management entity and the model inference entity respectively.
  • For example, the model management entity carries the authentication result in the model deployment message used to deploy the first inference model, and the model inference entity determines whether the identifier or password carried in the model deployment message is consistent with the identifier or password indicated by the authentication result received from the model authentication entity, so as to determine whether the first inference model passes authentication.
  • model deployment message may also carry at least one of the following information: identification information of the first inference model, version information of the first inference model, first inference model, usage information of the first inference model.
  • the usage information of the first reasoning model may be used to indicate the operating conditions of the first reasoning model, for example, the usage information may include at least one of the following information: the usage format of the first reasoning model, the usage time of the first reasoning model and information such as the use area of the first inference model.
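  • The following Python fragment is a minimal, hypothetical sketch of this check on the model inference entity side; the message fields and names are illustrative only and are not defined by the application:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelDeploymentMessage:
    """Hypothetical fields of the model deployment message described above."""
    model_id: str
    model_version: str
    auth_pass_token: str               # authentication-pass identifier or password
    usage_info: Optional[dict] = None  # usage format, usage time, usage area, ...

def is_model_authenticated(deploy_msg: ModelDeploymentMessage,
                           token_from_auth_entity: str) -> bool:
    """The model inference entity compares the token carried in the deployment
    message with the token it received directly from the model authentication
    entity; the model is treated as authenticated only if they match."""
    return deploy_msg.auth_pass_token == token_from_auth_entity
```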
  • the method 200 may also perform step S205.
  • the model verification entity determines the verification result according to the comparison result and performance information.
  • the model certification entity determines whether the first reasoning model is certified according to the performance information, where the performance information is used to indicate the accuracy of the first reasoning model.
  • For example, if the comparison result is that the first model provider and the device provider are inconsistent and the accuracy of the first inference model is less than the first threshold, the authentication result indicates that the first inference model fails authentication; or, if the comparison result is that the first model provider and the device provider are inconsistent and the accuracy of the first inference model is greater than or equal to the first threshold, the authentication result indicates that the first inference model passes authentication.
  • the model certification entity may acquire the performance information, for example, the model certification entity may obtain the performance information by testing the first inference model.
  • For example, suppose the first inference model is used to predict, from first data in a first time period, second data in a second time period, where the first data and the second data are historical data saved by the model inference entity, that is, real data in two time periods.
  • The model authentication entity takes the first data as input data and runs the first inference model to obtain predicted data for the second time period.
  • The model authentication entity may determine the accuracy of the first inference model by determining the degree of match between the second data and the predicted data, so as to generate the performance information.
  • the model authentication entity requests the model inference entity to acquire the first data and the second data according to the usage information of the first inference model.
  • The first threshold may be preset, or may be indicated by model performance information sent by the model management entity to the model authentication entity, where the model performance information indicates the performance that the first inference model is expected to achieve.
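  • A minimal sketch, assuming a simple element-wise match ratio as the accuracy metric (the application does not prescribe a particular metric, and the names below are hypothetical), of how the model authentication entity might derive the performance information from the historical data:

```python
from typing import Callable, Sequence

def evaluate_accuracy(model: Callable[[Sequence[float]], Sequence[float]],
                      first_data: Sequence[float],
                      second_data: Sequence[float],
                      tolerance: float = 0.05) -> float:
    """Run the first inference model on the first data (first time period) and
    compare its predictions with the real second data (second time period).
    The match ratio computed here is a hypothetical accuracy metric."""
    predicted = model(first_data)
    matches = sum(
        1 for p, actual in zip(predicted, second_data)
        if abs(p - actual) <= tolerance * max(abs(actual), 1e-9)
    )
    return matches / max(len(second_data), 1)

# The resulting accuracy is then compared with the first threshold to decide
# the authentication result (see the earlier decision-logic sketch).
```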
  • the model management entity can deploy the first inference model in the model inference entity.
  • the authentication result indicates that the first reasoning model cannot run in the model reasoning entity (that is, when the first model provider is inconsistent with the device provider, and the accuracy of the first reasoning model is less than the first threshold).
  • the authentication result indicates that the authentication of the first reasoning model fails.
  • the authentication result indicates that the first reasoning model deployed in the model reasoning entity cannot run normally.
  • the authentication result indicates that the first reasoning model deployed on the model reasoning entity cannot achieve expected performance.
  • the model certification entity sends reason information to the model management entity, and correspondingly, the model management entity receives the reason information from the model certification entity.
  • the cause information indicates the reason for the authentication result.
  • the model authentication entity may send cause information to the model management entity only when the authentication result indicates that the authentication of the first reasoning model fails.
  • the reason information indicates the reason why the first reasoning model fails the authentication.
  • the cause information may indicate that the performance information of the first reasoning model cannot meet expectations, and may also indicate that the first model provider and the device provider are inconsistent.
  • the cause information may include usage information, accuracy of the first reasoning model, and information indicating that the first model provider and the device provider are inconsistent.
  • the usage information and the accuracy may refer to: the accuracy that the first inference model is expected to achieve by running the test in a manner indicated by the usage information.
  • After the model management entity receives an authentication result indicating that the first inference model cannot run in the model inference entity, it may proceed in either of two ways, which are introduced below.
  • In the first way, the method 200 may execute steps S207 to S208.
  • the model management entity sends at least one of the following information to the model training entity: authentication result, cause information, and first adjustment information.
  • The first adjustment information is used to indicate that the first inference model is to be adjusted to generate an adjusted inference model (in the embodiments of the present application, the adjusted first inference model is referred to as the second inference model).
  • The authentication result and the cause information may correspond to the identification information of the first inference model; the model training entity may then regenerate a second inference model according to the authentication result and/or the cause information, or, when it receives the first adjustment information, adjust the first inference model to generate the second inference model.
  • adjusting the first reasoning model may refer to: retraining the first reasoning model.
  • The cause information may implicitly indicate that the first inference model fails authentication, and may also implicitly indicate that the first inference model is to be adjusted.
  • the model management entity may also send information indicating the device provider, model identification information, model performance information or model version information to the model training entity to assist the model training entity in determining the second inference model.
  • the model training entity determines a second inference model.
  • the model training entity may determine that the first inference model fails the authentication according to the authentication result, the cause information, or the first adjustment information, and then generate a second inference model, or adjust the first inference model to generate a second inference model.
  • the model training entity may acquire the second inference model trained and generated by the device provider.
  • the model training entity may extract the data set of the device provider and retrain the first inference model to generate the second inference model.
  • The model training entity sends the second inference model to the model management entity, and correspondingly, the model management entity receives the second inference model from the model training entity.
  • The model management entity may perform an authentication process similar to steps S201 to S206 on the second inference model.
  • the model training entity can generate the second inference model by reacquiring or retraining the first inference model.
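  • As a hedged illustration of this first handling path (the interfaces and field names below are hypothetical and not defined by the application), the model management entity could notify the model training entity and wait for the adjusted model roughly as follows:

```python
def handle_failed_authentication(training_entity, model_id: str,
                                 auth_result: dict, cause_info: str):
    """Hypothetical sketch: forward the authentication result, cause information
    and first adjustment information to the model training entity, then receive
    the adjusted (second) inference model."""
    training_entity.send({
        "model_id": model_id,
        "authentication_result": auth_result,    # indicates authentication failed
        "cause_info": cause_info,                 # e.g. provider mismatch, low accuracy
        "first_adjustment_info": "adjust_model",  # ask for retraining/adjustment
    })
    # The second inference model is adjusted based on the first inference model
    # and is subsequently authenticated again, similarly to steps S201 to S206.
    return training_entity.receive_model(model_id)
```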
  • In the second way, the model management entity sends the first inference model and the second adjustment information to the model inference entity, and correspondingly, the model inference entity receives the first inference model and the second adjustment information from the model management entity.
  • the second adjustment information is used to indicate to adjust the first reasoning model.
  • For example, the model management entity can send a model deployment request message to the model inference entity, where the model deployment request message carries the first inference model and the second adjustment information. The model inference entity can then learn from the second adjustment information that the authentication of the first inference model failed, or that running the first inference model cannot achieve the expected performance, and can therefore retrain the first inference model before running it.
  • the model management entity can also send the identification information, version information, model performance information or usage information of the first inference model to the model inference entity , used to assist the model reasoning entity to retrain the first reasoning model, so that the retrained first reasoning model can achieve the performance indicated by the model performance information.
  • When the model authentication entity and the model inference entity are deployed in one device, they receive the first provider information carried in the model deployment request message; that is, the model authentication entity has already acquired the first inference model before determining whether the first inference model passes authentication.
  • If the model authentication entity determines that the first inference model is not certified, it can notify the model inference entity through an internal interface that the authentication of the first inference model fails, and the model inference entity may adjust the first inference model before running it. In this case, the model management entity may not need to send the adjustment information to the model inference entity.
  • the model inference entity may retrain the first inference model before deploying the first inference model.
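  • Purely for illustration of this second handling path (the message structure and callables below are hypothetical), the behaviour on the model inference entity side could be sketched as:

```python
def on_model_deployment_request(request: dict, retrain, run):
    """Hypothetical sketch: if the deployment request carries second adjustment
    information, the model inference entity retrains the first inference model
    before running it; otherwise it runs the model as received."""
    model = request["first_inference_model"]
    if request.get("second_adjustment_info") is not None:
        # Authentication failed or the expected performance cannot be reached:
        # retrain, e.g. with the device provider's own data, before running.
        model = retrain(model,
                        target_performance=request.get("model_performance_info"))
    run(model)
```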
  • In other words, the model certification entity may first determine whether the first model provider indicated by the first provider information as providing the first inference model is consistent with the device provider in whose equipment the first inference model is to be deployed; when they are consistent, the model certification entity may consider that the first inference model passes certification; when they are inconsistent, the model certification entity may further determine whether the first inference model passes certification based on the performance information of the first inference model.
  • The first provider information sent by the model management entity may not be authentic. Therefore, before step S202, the method 200 may also execute steps S209 to S210 to determine whether the first provider information is authentic, as introduced below.
  • the model management entity sends the first inference model to the model certification entity, and correspondingly, the model certification entity receives the first inference model from the model management entity.
  • the first reasoning model may be carried in the form of a model file or a model file address.
  • the model file refers to information describing the first reasoning model, which is recorded in a file format
  • the model file address refers to address information for indexing to the model file.
  • the model file may consist of multiple sub-files.
  • the information describing the first inference model may include at least one of the following information: a name of the first inference model, information of a second model provider providing the first inference model, and an identifier of the first inference model.
  • In step S210, the model certification entity determines, according to the first inference model, that the first model provider is the provider of the first inference model.
  • the model certification entity determines whether the first inference model is provided or trained by the first model provider according to the first inference model and the first provider information.
  • The first provider information is recorded outside the model file. It differs from the second model provider information in the first inference model in that the second model provider information is the real information originally recorded in the model file, whereas the first provider information outside the model file is specified by the model management entity and may be erroneous.
  • the model certification entity may use three methods to determine whether the first reasoning model is trained by the first model provider, and the three methods will be described below.
  • In the first method, if the second model provider is consistent with the first model provider, the model certification entity determines that the first inference model is provided by the first model provider.
  • If the second model provider is inconsistent with the first model provider, the model certification entity determines that the first inference model is not provided by the first model provider.
  • the second model provider is the provider indicated by the first inference model that provides the first inference model, or in other words, the second model provider is the provider recorded in the model file of the first inference model.
  • the model certification entity can determine whether the first provider information is authentic according to whether the first provider information is consistent with the second model provider recorded in the first reasoning model.
  • In the second method, if the first format is consistent with the second format, the model certification entity determines that the first inference model is provided by the first model provider.
  • If the first format is inconsistent with the second format, the model certification entity determines that the first inference model is not provided by the first model provider.
  • Here, the first format is the format of the first inference model, and the second format is determined according to the first provider information.
  • the format of the first reasoning model may refer to: the format of the model file of the first reasoning model, such as file format, syntax, and the like.
  • the fact that the second format is determined according to the first provider information may mean that the model certification entity may determine the second format of the model trained by the first model provider according to the first model provider indicated by the first provider information.
  • For example, the model certification entity may record the model formats used by multiple providers for training, and after receiving the first provider information it may determine the second format corresponding to the first model provider.
  • In other words, the model certification entity can determine whether the first provider information is authentic according to whether the format corresponding to the first model provider (the second format) is consistent with the format of the first inference model (the first format).
  • In the third method, if the second model provider is consistent with the first model provider and the first format is consistent with the second format, the model certification entity determines that the first inference model is provided by the first model provider.
  • Otherwise, the model certification entity determines that the first inference model is not provided by the first model provider.
  • Here, the second model provider is the provider indicated by the first inference model as providing the first inference model, the first format is the format of the first inference model, and the second format is determined according to the first provider information.
  • That is, the model certification entity can simultaneously determine whether the first provider information is consistent with the second model provider recorded in the first inference model and whether the format corresponding to the first model provider is consistent with the format of the first inference model; only when both conditions hold is the first provider information determined to be authentic.
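  • The three methods can be summarised in the following hypothetical sketch (the function, the parameter names, and the expected_formats bookkeeping are illustrative assumptions, not part of the application):

```python
def provider_info_is_authentic(claimed_provider: str,
                               provider_in_model_file: str,
                               model_file_format: str,
                               expected_formats: dict,
                               method: int = 3) -> bool:
    """claimed_provider:       first model provider named in the first provider information
    provider_in_model_file: second model provider recorded in the model file
    model_file_format:      first format (actual format of the model file)
    expected_formats:       assumed mapping of provider -> second format known
                            to the model certification entity
    """
    provider_ok = provider_in_model_file == claimed_provider
    format_ok = expected_formats.get(claimed_provider) == model_file_format
    if method == 1:
        return provider_ok            # method 1: provider recorded in the model file
    if method == 2:
        return format_ok              # method 2: expected format of the provider
    return provider_ok and format_ok  # method 3: both conditions must hold
```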
  • If the first provider information is determined to be authentic, the method 200 may execute steps S202 to S209 to determine whether the first model provider and the device provider are consistent.
  • Otherwise, the model certification entity may consider that the certification of the first inference model is unsuccessful, and the method may perform the actions of steps S206 to S209 that follow an unsuccessful certification of the first inference model.
  • the cause information in step S207 may also be used to indicate that the first inference model is not trained by the first model provider, or indicate that the information of the first provider is wrong.
  • In the above technical solution, the model management entity can send the model certification entity the first provider information indicating the first model provider that provides the first inference model; the model certification entity can compare the first model provider with the device provider, generate an authentication result according to the comparison result, and notify the model management entity; the model management entity can then apply different processing based on different authentication results, which improves the reliability of the operation of the first inference model.
  • the model management entity may request the model authentication entity for authentication, and determine whether the first inference model does not belong to the model inference entity according to the authentication result.
  • FIG. 3 is a schematic flowchart of a method 300 for model management provided by an embodiment of the present application.
  • a model management entity acquires a first inference model.
  • the model management entity may send a model query request message to the model marketplace according to network status or operation requirements, where the model query request message is used to request to acquire the first reasoning model.
  • the model marketplace may send the first inference model to the model management entity in response to the query request message, or send multiple models, and the model management entity determines the first inference model from the multiple models.
  • the model market has models trained by multiple suppliers, and the model market can be deployed in the model training entity shown in FIG. 1 .
  • the model management entity may send a model training request message to the model training entity according to network status or operation requirements, where the model training request message is used to request the model training entity to perform training to generate the first inference model.
  • the model training entity performs model training according to the model training request message to generate a first reasoning model.
  • the model training entity sends the first inference model to the model management entity in response to the model training request message.
  • The model management entity and the model training entity can be deployed on the same operator platform.
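  • Purely as an illustration of the two acquisition paths just described (the interfaces below are hypothetical), the model management entity might obtain the first inference model either from the model marketplace or by requesting training:

```python
from typing import Optional

def acquire_first_inference_model(model_marketplace=None, training_entity=None,
                                  requirements: Optional[dict] = None):
    """Hypothetical sketch: query the model marketplace if one is available,
    otherwise send a model training request to the model training entity."""
    if model_marketplace is not None:
        candidates = model_marketplace.query(requirements)  # model query request
        return candidates[0] if candidates else None        # pick a matching model
    if training_entity is not None:
        return training_entity.train(requirements)          # model training request
    return None
```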
  • the model management entity sends the first provider information to the model certification entity, and correspondingly, the model certification entity receives the first provider information from the model management entity.
  • the first provider information indicates that the first inference model is provided by the first model provider.
  • For a description of the first provider information, refer to the description of the first provider information in step S201 of the method 200; for brevity, details are not repeated here.
  • the model management entity sends a model certification request message to the model certification entity, where the model certification request message is used to request certification of the first reasoning model, and the model certification request message carries the first provider information.
  • the model management entity sends the second provider information and the first reasoning model to the model certification entity, and correspondingly, the model certification entity receives the second provider information and the first reasoning model from the model management entity.
  • the second provider information is used to indicate the equipment provider on which the first reasoning model is to be deployed, and the equipment provider is a second manufacturer or a second operator.
  • the model management entity sends a model certification request message to the model certification entity, where the model certification request message carries the second provider information and the first reasoning model.
  • the model authentication request message may also carry at least one of the following information: identification information of the first inference model, version information of the first inference model, performance information of the first inference model, and usage information of the first inference model.
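A possible, purely illustrative encoding of such a model certification request message is sketched below; the field names are assumptions chosen to mirror the mandatory and optional content listed above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelCertificationRequest:
    # Content described above as carried in the request.
    first_provider_info: str                 # indicates the first model provider
    second_provider_info: str                # indicates the equipment provider of the model inference entity
    inference_model: bytes                   # serialized first inference model
    # Optional content the request may also carry.
    model_id: Optional[str] = None           # identification information of the first inference model
    model_version: Optional[str] = None      # version information
    performance_info: Optional[dict] = None  # e.g. {"accuracy": 0.93}
    usage_info: Optional[dict] = None        # running conditions of the first inference model
```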
  • the model authentication entity determines, according to the first inference model and the first provider information, that the provider of the first inference model is the first model provider.
  • For a description of how the model certification entity determines whether the first inference model is provided by the first model provider, refer to the description of step S210 in the method 200; for brevity, details are not repeated here.
  • if the model certification entity determines that the first reasoning model is provided by the first model provider, the method 300 may perform steps S305 to S313.
  • the model certification entity compares the first model provider and the equipment provider to generate a comparison result.
  • the model certification entity may determine whether the first model provider and the device provider are consistent.
  • For the manner in which the model certification entity performs the comparison to generate a comparison result, refer to the description of step S202 in the method 200; details are not repeated here for brevity.
  • after the comparison result is generated, the method 300 may perform step S306.
  • the model certification entity determines the authentication result according to the comparison result and the performance information.
  • the model certification entity determines whether the first reasoning model is certified according to the performance information, where the performance information is used to indicate the accuracy of the first reasoning model.
  • the model certification entity may send an evaluation data request message to the model reasoning entity, where the evaluation data request message is used to request the first data and the second data, and the evaluation data request message may include type information and condition information.
  • the model reasoning entity sends an evaluation data response message to the model certification entity, where the evaluation data response message carries the first data and the second data.
  • the type information may refer to the data types of the first data and the second data.
  • the condition information refers to the conditions that the first data and the second data need to meet.
  • the condition information may include standard conditions, time conditions, and area conditions.
  • The model reasoning entity sends the first data and the second data satisfying the conditions to the model certification entity.
  • For the manner in which the model certification entity determines the performance information according to the first data and the second data, and determines whether the first inference model passes certification according to the performance information, refer to the description of step S205 in the method 200; for brevity, details are not repeated here.
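One way to picture this evaluation step is the sketch below: the model certification entity derives an accuracy figure from the first data (inference inputs) and the second data (the corresponding ground truth), and combines it with the provider comparison. The threshold value, the `predict` method, and the returned dictionary are assumptions for illustration only.

```python
def authenticate(model, first_data, second_data, providers_consistent, accuracy_threshold=0.9):
    """Sketch of the decision logic: derive performance information, then
    decide whether the first inference model can run in the model inference entity.

    first_data:  evaluation inputs collected by the model inference entity
    second_data: the corresponding ground-truth values (assumed non-empty)
    """
    predictions = [model.predict(x) for x in first_data]
    correct = sum(1 for p, y in zip(predictions, second_data) if p == y)
    accuracy = correct / len(second_data)          # performance information

    if providers_consistent:
        # Same provider: the model is considered able to run in the inference entity.
        return {"passed": True, "accuracy": accuracy}
    # Different providers: fall back to the accuracy check against the first threshold.
    return {"passed": accuracy >= accuracy_threshold, "accuracy": accuracy}
```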
  • the model authentication entity sends the authentication result to the model management entity, and correspondingly, the model management entity receives the authentication result from the model authentication entity.
  • the authentication result indicates whether the first inference model can run in the model inference entity.
  • in response to the model certification request message, the model certification entity sends a model certification response message to the model management entity, where the model certification response message carries the certification result.
  • For a description of the authentication result carried in the model certification response message, refer to the description of step S204 in the method 200; for simplicity, details are not repeated here.
  • the model authentication entity sends the authentication result to the model inference entity, and correspondingly, the model inference entity receives the authentication result from the model authentication entity.
  • the model authentication entity sends a model authentication pass notification message to the model reasoning entity, and the model authentication pass notification message carries the authentication result.
  • the model certification entity sends reason information to the model management entity, and correspondingly, the model management entity receives the reason information from the model certification entity.
  • the cause information is used to indicate the cause of the authentication result.
  • For a description of the cause information, refer to the description of step S206 in the method 200; for the sake of brevity, details are not repeated here.
  • the model authentication response message carries the cause information.
  • the method 300 may execute step S310.
  • the model management entity sends a model deployment message to the model inference entity, and correspondingly, the model inference entity receives the model deployment message from the model management entity.
  • the model deployment message is used to deploy the first inference model.
  • the model deployment message may carry at least one of the following information: identification information of the first reasoning model, version information of the first reasoning model, the first reasoning model, usage information of the first reasoning model, and the authentication result.
  • the model inference entity may determine that the authentication of the first inference model is successful when the authentication result in the model deployment message is consistent with the authentication result in the model authentication pass notification message. For example, according to the model authentication pass notification message, the model inference entity stores the correspondence between the model identifier and/or model version and the authentication result; if the model inference entity determines that the stored authentication result corresponding to the model identifier and/or version carried in the model deployment message is consistent with the authentication result in the deployment message, it determines that the authentication of the first reasoning model is successful. Furthermore, the model reasoning entity can deploy the first reasoning model.
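The consistency check just described might be sketched as follows; the storage layout keyed by model identifier and version, and the method names, are assumptions used only to make the example concrete.

```python
class ModelInferenceEntity:
    def __init__(self):
        # Populated from model authentication pass notification messages:
        # (model_id, version) -> authentication result
        self._auth_results = {}

    def on_auth_pass_notification(self, model_id, version, auth_result):
        self._auth_results[(model_id, version)] = auth_result

    def on_model_deployment(self, model_id, version, auth_result, model_blob):
        stored = self._auth_results.get((model_id, version))
        if stored is not None and stored == auth_result:
            # Authentication results are consistent: deploy the first inference model.
            self.deploy(model_id, version, model_blob)
            return "deployed"
        return "rejected"

    def deploy(self, model_id, version, model_blob):
        ...  # load the model and make it available for inference
```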
  • the model inference entity sends a model deployment response to the model management entity in response to the model deployment message, where the model deployment response is used to indicate the deployment status of the first inference model.
  • the model management entity can use two methods to handle the situation that the authentication result indicates that the first reasoning model fails to pass the authentication, and the two methods will be described below.
  • the model management entity sends at least one of the following information to the model training entity: authentication result, cause information, and first adjustment information.
  • For related descriptions, refer to the description of step S207 in the method 200; for the sake of brevity, details are not repeated here.
  • the model management entity sends a model optimization request message to the model training entity, and the model optimization request message carries at least one of the following information: authentication result, cause information, and first adjustment information.
  • the model optimization request message also carries at least one of the following information: information of the equipment provider, identification information of the model, information of model performance or version information of the model.
  • the model training entity determines a second reasoning model.
  • For the manner in which the model training entity determines the second inference model, refer to the description of step S208 in the method 200; details are not repeated here for brevity.
  • the model training entity sends the second inference model to the model management entity, and correspondingly, the model management entity receives the second inference model from the model training entity.
  • the model training entity sends a model optimization response message to the model management entity in response to the model optimization request message, where the model optimization response message carries the second inference model.
  • the model optimization response message also carries model performance information of the second inference model and usage information of the second inference model, where the usage information of the second inference model can be used to indicate the running conditions of the second inference model, and the model performance information is used to describe the performance that the second inference model can achieve when running under the running conditions indicated by the usage information.
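As a rough illustration of method a, the exchange with the model training entity might look like the sketch below; the message and field names (`optimize`, `first_adjustment_info`, and so on) are assumptions made for this example and are not messages defined by this application.

```python
def handle_failed_authentication_method_a(training_entity, model, auth):
    """Sketch of method a: ask the model training entity to optimize the model."""
    optimization_request = {
        "authentication_result": auth["result"],       # indicates the failed authentication
        "cause_information": auth.get("cause"),        # why the authentication failed
        "first_adjustment_info": {"retrain_with": "device-provider data"},
        # Optional fields mentioned above:
        "equipment_provider": auth.get("equipment_provider"),
        "model_id": model.model_id,
        "model_version": model.version,
    }
    response = training_entity.optimize(optimization_request)
    # The model optimization response carries the second inference model plus its
    # model performance information and usage information.
    return response["second_inference_model"]
```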
  • In method b, step S313 is included.
  • the model management entity sends the first inference model and the second adjustment information to the model inference entity.
  • the model inference entity receives the first inference model and the second adjustment information from the model management entity.
  • the second adjustment information indicates that the first reasoning model should be adjusted by the model reasoning entity.
  • the model management entity may send a model deployment request message to the model inference entity, where the model deployment request message carries the first inference model and the second adjustment information.
  • the model deployment request message further carries at least one of the following information: identification information, version information, model performance information, or usage information of the first inference model.
  • the adjustment or retraining of the first inference model by the model inference entity may refer to the adjustment or retraining of the inference model by the model training function module in the model inference entity.
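Method b, in which the adjustment is delegated to the model training function module inside the model inference entity, might be sketched as follows; the names used here (`deploy_with_adjustment`, `second_adjustment_info`) are illustrative assumptions.

```python
def handle_failed_authentication_method_b(inference_entity, model, auth):
    """Sketch of method b: deploy the first inference model together with
    second adjustment information so the inference entity retrains it locally."""
    deployment_request = {
        "first_inference_model": model.blob,
        "second_adjustment_info": {"action": "retrain", "reason": auth.get("cause")},
        # Optional fields mentioned above:
        "model_id": model.model_id,
        "model_version": model.version,
    }
    # The model training function module inside the model inference entity adjusts
    # or retrains the first inference model before it is run.
    return inference_entity.deploy_with_adjustment(deployment_request)
```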
  • in this way, the model management entity can send, to the model certification entity, the first provider information indicating the first model provider that provides the reasoning model; the model certification entity can compare the first model provider with the device provider, generate an authentication result according to the comparison result, and notify the model management entity; the model management entity can then take different processing based on different authentication results, which can improve the reliability of running the reasoning model.
  • the model management entity sends a model deployment message to the model certification entity, and after the model certification entity determines the authentication result, it determines according to the authentication result whether the inference model can be run.
  • FIG. 4 is a schematic flowchart of a method 400 for model management provided by an embodiment of the present application.
  • the model management entity acquires an inference model.
  • the model management entity sends the first provider information to the model certification entity, and correspondingly, the model certification entity receives the first provider information from the model management entity.
  • For a description of the first provider information, refer to the description of the first provider information in step S201 of the method 200; for the sake of brevity, details are not repeated here.
  • the model management entity sends a model deployment message to the model inference entity, where the model deployment message is used to deploy an inference model, and the model deployment message carries the first provider information.
  • the model management entity sends the reasoning model to the model certification entity, and correspondingly, the model certification entity receives the reasoning model from the model management entity.
  • the model deployment message carries the reasoning model.
  • the model deployment message may also carry at least one of the following information: identification information of the inference model, version information of the inference model, model performance information of the inference model, and use information of the inference model.
  • For a description of the above information, refer to the description in the method 200; for simplicity, details are not repeated here.
  • the model certification entity determines, according to the inference model and the first provider information, that the first model provider is the provider of the inference model.
  • For a description of how the model certification entity determines whether the inference model is provided by the first model provider, refer to the description of step S210 in the method 200; for the sake of brevity, details are not repeated here.
  • if the model certification entity determines that the inference model is provided by the first model provider, the method 400 may perform steps S405 to S408.
  • the model certification entity compares the first model provider and the equipment provider to generate a comparison result.
  • the model certification entity may determine whether the first model provider and the device provider are consistent.
  • For the manner in which the model certification entity performs the comparison to generate a comparison result, refer to the description of step S202 in the method 200; details are not repeated here for brevity.
  • after the comparison result is generated, the method 400 may perform step S406.
  • the model certification entity determines the authentication result according to the comparison result and the performance information.
  • the model certification entity determines whether the reasoning model is certified according to the performance information, and the performance information is used to indicate the accuracy of the reasoning model.
  • For the manner in which the model certification entity judges whether the inference model has passed the certification based on the performance information, refer to the description of step S205 in the method 200; for simplicity, details are not repeated here.
  • the model authentication entity sends the authentication result to the model management entity, and correspondingly, the model management entity receives the authentication result from the model authentication entity.
  • the authentication result indicates whether the inference model can run in the model inference entity.
  • in response to the model deployment request message, the model authentication entity sends a model deployment response message to the model management entity, where the model deployment response message carries the authentication result.
  • For a description of the authentication result carried in the model deployment response message, refer to the description of step S204 in the method 200; for simplicity, details are not repeated here.
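In method 400 the deployment request and the authentication are combined; a compact sketch of the exchange, under the same naming assumptions as the earlier examples, is shown below. The method name `deploy_and_authenticate` is hypothetical.

```python
def method_400_flow(certification_entity, model, first_provider_info):
    """Sketch of method 400: the model deployment message carries the first
    provider information, and the certification entity answers with the
    authentication result in the model deployment response message."""
    deployment_message = {
        "inference_model": model.blob,
        "first_provider_info": first_provider_info,
        "model_id": model.model_id,
        "model_version": model.version,
    }
    response = certification_entity.deploy_and_authenticate(deployment_message)
    if response["authentication_result"] == "passed":
        # The certification entity can notify the inference entity over an internal
        # interface to run the inference model (as described further below).
        return "running"
    return response.get("cause_information")
```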
  • the model certification entity sends reason information to the model management entity, and correspondingly, the model management entity receives the reason information from the model certification entity.
  • the cause information is used to indicate the cause of the authentication result.
  • For a description of the cause information, refer to the description of step S206 in the method 200; for the sake of brevity, details are not repeated here.
  • the model deployment response message carries the cause information.
  • The model authentication entity and the model management entity can process the inference model according to the authentication result:
  • the model authentication entity can notify the model inference entity to run the inference model through an internal interface.
  • the model management entity can use the two methods described in the method 300, method a and method b, to handle the situation in which the inference model fails to pass the authentication; for the sake of simplicity, this is not repeated here.
  • in this way, the model management entity can send, to the model certification entity, the first provider information indicating the first model provider that provides the reasoning model; the model certification entity can compare the first model provider with the device provider, generate an authentication result according to the comparison result, and notify the model management entity; the model management entity can then take different processing based on different authentication results, which can improve the reliability of running the reasoning model.
  • FIG. 5 and FIG. 6 are schematic structural diagrams of a possible communication device provided by an embodiment of the present application. These communication devices can be used to implement the functions of the model authentication entity and the model management entity in the above method embodiments, and therefore can also realize the beneficial effects of the above method embodiments.
  • the communication device may be a model authentication entity or a model management entity, or a module (such as a chip) applied to the model authentication entity or the model management entity.
  • a communication device 500 includes a processing module 510 and a transceiver module 520 .
  • the communication device 500 is configured to realize the functions of the model authentication entity and the model management entity in the method embodiment shown in FIG. 2 above.
  • the communication device 500 may include a module for realizing any function or operation of the model authentication entity and the model management entity in the method embodiment shown in FIG. 2, or any combination thereof.
  • when the communication device 500 implements the functions of the model authentication entity: the transceiver module 520 is used to receive the first provider information from the model management entity, where the first provider information indicates that the inference model is provided by the first model provider; the processing module 510 is used to compare the first model provider and the device provider of the model inference entity to generate a comparison result, wherein the model inference entity is an entity that will run the inference model; and the transceiver module 520 is further used to send an authentication result of the inference model to the model management entity based on the comparison result, where the authentication result indicates whether the inference model can run in the model inference entity.
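A very rough software analogue of the module split described above could look like the following; it is only a sketch, and the class and method names are assumptions rather than an implementation defined by this application.

```python
class TransceiverModule:
    def receive_first_provider_info(self):
        ...  # receive the first provider information from the model management entity

    def send_authentication_result(self, result):
        ...  # send the authentication result to the model management entity


class ProcessingModule:
    def compare_providers(self, first_model_provider, device_provider):
        # Generate the comparison result used to derive the authentication result.
        return first_model_provider == device_provider


class CommunicationDevice500:
    def __init__(self):
        self.transceiver_module_520 = TransceiverModule()
        self.processing_module_510 = ProcessingModule()
```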
  • in this way, the model management entity can send, to the model certification entity, the first provider information indicating the first model provider that provides the reasoning model; the model certification entity can compare the first model provider with the device provider, generate an authentication result according to the comparison result, and notify the model management entity; the model management entity can then take different processing based on different authentication results, which can improve the reliability of running the reasoning model.
  • More detailed descriptions of the processing module 510 and the transceiver module 520 can be obtained directly by referring to the relevant descriptions in the method embodiment shown in FIG. 2, and are not repeated here.
  • when the communication device 500 implements the functions of the model management entity: the transceiver module 520 is used to send the first provider information to the model authentication entity, where the first provider information indicates that the first inference model is provided by the first model provider; the transceiver module 520 is also used to receive the authentication result of the first inference model from the model authentication entity, where the authentication result indicates whether the first inference model can run in the model inference entity.
  • in this way, the model management entity can send, to the model certification entity, the first provider information indicating the first model provider that provides the reasoning model; the model certification entity can compare the first model provider with the device provider, generate an authentication result according to the comparison result, and notify the model management entity; the model management entity can then take different processing based on different authentication results, which can improve the reliability of running the reasoning model.
  • More detailed descriptions of the processing module 510 and the transceiver module 520 can be obtained directly by referring to the relevant descriptions in the method embodiment shown in FIG. 2, and are not repeated here.
  • the communication device 600 includes a processor 610 and an interface circuit 620 .
  • the processor 610 and the interface circuit 620 are coupled to each other.
  • the interface circuit 620 may be a transceiver or an input-output interface.
  • the communication device 600 may further include a memory 630 for storing instructions executed by the processor 610, or for storing input data required by the processor 610 to execute the instructions, or for storing data generated after the processor 610 executes the instructions.
  • the processor 610 is used to implement the functions of the above-mentioned processing module 510
  • the interface circuit 620 is used to implement the functions of the above-mentioned transceiver module 520 .
  • When the communication device 600 is used to implement the method shown in FIG. 2 or FIG. 4, the communication device 600 includes a processor 610 and an interface circuit 620.
  • the processor 610 and the interface circuit 620 are coupled to each other. It can be understood that the interface circuit 620 may be a transceiver or an input-output interface.
  • the communication device 600 may further include a memory 630 for storing instructions executed by the processor 610 or storing input data required by the processor 610 to execute the instructions or storing data generated after the processor 610 executes the instructions.
  • the processor 610 is used to implement the functions of the above-mentioned processing module 510
  • the interface circuit 620 is used to implement the functions of the above-mentioned transceiver module 520 .
  • the model certification entity chip implements the function of the model certification entity in the above method embodiment.
  • the model certification entity chip receives information from other modules (such as radio frequency modules or antennas) in the model certification entity, and the information is sent to the model certification entity by the model management entity; or, the model certification entity chip sends information to other modules in the model certification entity Modules (such as radio frequency modules or antennas) send information that is sent by the model authentication entity to the model management entity.
  • the model management entity chip implements the function of the model management entity in the above method embodiment.
  • the model management entity chip receives information from other modules in the model management entity (such as radio frequency modules or antennas), where the information is sent to the model management entity by the model authentication entity; or, the model management entity chip sends information to other modules in the model management entity (such as a radio frequency module or an antenna), where the information is sent by the model management entity to the model certification entity.
  • the processor in the embodiments of the present application can be a central processing unit (Central Processing Unit, CPU), and can also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof.
  • a general-purpose processor can be a microprocessor, or any conventional processor.
  • memory can be random access memory (Random Access Memory, RAM), flash memory, read-only memory (Read-Only Memory, ROM), programmable read-only memory (Programmable ROM, PROM), erasable Programmable read-only memory (Erasable PROM, EPROM), electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), registers, hard disk, mobile hard disk, CD-ROM or any other form of storage medium known in the art .
  • An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium.
  • the storage medium may also be a component of the processor.
  • the processor and storage medium can be located in the ASIC.
  • the ASIC can be located in a network device or a terminal device. Certainly, the processor and the storage medium may also exist in the network device or the terminal device as discrete components.
  • the above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • When implemented using software, they may be implemented in whole or in part in the form of a computer program product.
  • the computer program product comprises one or more computer programs or instructions.
  • when the computer program or instructions are loaded and executed, the processes or functions described in the embodiments of the present application are executed in whole or in part.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, a network device, a terminal device, or other programmable devices. The computer program or instructions may be stored in or transmitted via a computer-readable storage medium.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server integrating one or more available media.
  • the available medium may be a magnetic medium, such as a floppy disk, a hard disk, or a magnetic tape; it may also be an optical medium, such as a DVD; and it may also be a semiconductor medium, such as a solid state disk (solid state disk, SSD).
  • "A corresponds to B" means that B is associated with A, and that B can be determined according to A.
  • determining B according to A does not mean determining B only according to A, and B may also be determined according to A and/or other information.
  • the above takes the three elements A, B, and C as an example to illustrate the optional items of an item.
  • similarly, for the expression "including at least one of the following: A, B, ..., and X", the applicable entries for the item can also be obtained according to the aforementioned rules.

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Development Economics (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Finance (AREA)
  • Economics (AREA)
  • Accounting & Taxation (AREA)
  • Computational Linguistics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The present application provides a model management method and a communication apparatus. The method comprises: a model authentication entity receives first provider information from a model management entity, the first provider information being used for indicating that an inference model is provided by a first model provider; the model authentication entity compares the first model provider with a device provider of a model inference entity, so as to generate a comparison result, wherein the model inference entity is an entity for operating the inference model; and on the basis of the comparison result, the model authentication entity sends to the model management entity an authentication result of the inference model, the authentication result indicating whether the inference model can operate in the model inference entity. Thus, the operation reliability of the inference model can be improved.

Description

Method and communication device for model management
This application claims the priority of the Chinese patent application with application number 202210178362.4 and application title "A Method and Communication Device for Model Management" filed with the State Intellectual Property Office of China on February 25, 2022, the entire contents of which are incorporated by reference in this application.
Technical Field
The present application relates to the field of communication technologies, and more specifically, relates to a model management method and a communication device.
Background
In order to improve the intelligence and automation level of the network, reasoning models, such as artificial intelligence (AI) models and machine learning (ML) models, are used in more and more technical fields. Currently, inference models provided by model providers (such as manufacturers or operators) can be released to the model market. The model management entity can obtain the reasoning model from the model market and deploy the reasoning model in the device provider, and then the device provider can run the deployed reasoning model.
However, due to differences in private data and expert experience of different providers, when a certain equipment provider runs an inference model from another provider, it may not work properly.
Summary of the Invention
The present application provides a model management method and a communication device, so as to improve the reliability of reasoning model operation.
In a first aspect, a method for model management is provided. The method may be executed by a model certification entity or a chip in the model certification entity, and the method includes: the model certification entity receives first provider information from a model management entity, where the first provider information indicates that the inference model is provided by a first model provider; the model authentication entity compares the first model provider with the device provider of the model inference entity to generate a comparison result, wherein the model inference entity is an entity that will run the inference model; the model authentication entity sends an authentication result of the reasoning model to the model management entity based on the comparison result, and the authentication result indicates whether the reasoning model can run in the model reasoning entity.
Therefore, in this application, the model management entity can send the first provider information indicating the first model provider that provides the reasoning model to the model certification entity; the model certification entity can compare the first model provider with the device provider and generate an authentication result according to the comparison result to notify the model management entity; the model management entity can then adopt different processing based on different authentication results, which can improve the reliability of the reasoning model operation.
In combination with the first aspect, in some implementations of the first aspect, the comparison result is that the first model provider and the device provider are inconsistent, the accuracy of the reasoning model is less than the first threshold, and the authentication result indicates that the reasoning model cannot run in the model reasoning entity; or, the comparison result is that the first model provider and the device provider are inconsistent, the accuracy of the reasoning model is greater than or equal to the first threshold, and the authentication result indicates that the reasoning model can run in the model reasoning entity.
Therefore, in this application, if the model authentication network element judges that the first model provider of the reasoning model is inconsistent with the equipment provider, the accuracy of the reasoning model can be further determined, and whether the reasoning model passes the certification can be decided according to the accuracy of the reasoning model, which can improve the flexibility of the system.
With reference to the first aspect, in some implementations of the first aspect, before the model authentication entity compares the first model provider and the equipment provider, the method further includes: the model authentication entity receives the reasoning model from the model management entity; the model authentication entity determines that the first model provider is the provider of the inference model.
Therefore, in this application, before judging whether the first model provider and the equipment provider are consistent, the model authentication network element can first judge whether the reasoning model is provided by the first model provider, that is, judge the authenticity of the first provider information, which can improve the reliability of model operation.
With reference to the first aspect, in some implementation manners of the first aspect, the model certification entity determining that the first model provider is the provider of the inference model includes: the model certification entity determining that the first format is consistent with the second format; or, the model certification entity determining that the second model provider is consistent with the first model provider; or, the model certification entity determining that the first format is consistent with the second format and that the second model provider is consistent with the first model provider; wherein the first format is the format of the reasoning model, the second format is determined according to the first provider information, and the second model provider is determined according to the reasoning model.
With reference to the first aspect, in some implementation manners of the first aspect, the method further includes: the model authentication entity sends cause information to the model management entity, where the cause information indicates the reason for obtaining the authentication result.
Therefore, in this application, the model certification entity can inform the model management entity of the reason why the inference model certification fails, and then the model management entity can perform different processing based on different reasons, which can improve the reliability of model operation.
With reference to the first aspect, in some implementation manners of the first aspect, the comparison result is that the first model provider and the device provider are consistent, and the authentication result indicates that the reasoning model can run in the model reasoning entity.
Therefore, in this application, the model authentication network element can judge whether the first model provider of the inference model is consistent with the equipment provider of the model inference entity, and if they are consistent, determine that the inference model passes the authentication; the model management entity can then deploy the certified inference model in the model inference entity, which can improve the reliability of the model operation.
With reference to the first aspect, in some implementations of the first aspect, before the model certification entity compares the first model provider and the equipment provider, the method further includes: the model certification entity receives second provider information from the model management entity, where the second provider information indicates the device provider.
In a second aspect, a method for model management is provided. The method may be executed by a model management entity or a chip in the model management entity. The method includes: the model management entity sends first provider information to the model certification entity, where the first provider information indicates that the first inference model is provided by the first model provider; the model management entity receives the authentication result of the first inference model from the model authentication entity, where the authentication result indicates whether the first inference model can run in the model inference entity.
Therefore, in this application, the model management entity can send the first provider information indicating the first model provider that provides the reasoning model to the model certification entity; the model certification entity can compare the first model provider with the device provider and generate an authentication result according to the comparison result to notify the model management entity; the model management entity can then adopt different processing based on different authentication results, which can improve the reliability of the reasoning model operation.
In conjunction with the second aspect, in some implementations of the second aspect, the first model provider and the device provider of the model inference entity are inconsistent, the accuracy of the first inference model is less than the first threshold, and the authentication result indicates that the first inference model cannot run in the model inference entity; or, the first model provider and the device provider are inconsistent, the accuracy of the first inference model is greater than or equal to the first threshold, and the authentication result indicates that the first inference model can run in the model inference entity.
With reference to the second aspect, in some implementation manners of the second aspect, the method further includes: the model management entity sends the first reasoning model to the model certification entity.
With reference to the second aspect, in some implementations of the second aspect, the first model provider being the provider of the inference model includes: the first format is consistent with the second format; or, the second model provider is consistent with the first model provider; or, the first format is consistent with the second format and the second model provider is consistent with the first model provider; wherein the first format is the format of the inference model, the second format is determined according to the first provider information, and the second model provider is determined according to the inference model.
With reference to the second aspect, in some implementations of the second aspect, the method further includes: the model management entity receives cause information from the model certification entity, where the cause information indicates the reason for obtaining the certification result.
With reference to the second aspect, in some implementations of the second aspect, the authentication result indicates that the first inference model cannot run in the model inference entity, and the method further includes: the model management entity sends at least one of the following information to the model training entity: the authentication result, cause information, and first adjustment information, wherein the cause information indicates the reason for obtaining the authentication result, and the first adjustment information indicates that the model training entity adjusts the first reasoning model; the model management entity receives a second reasoning model from the model training entity, where the second reasoning model is obtained by adjusting the first reasoning model.
Therefore, in this application, when the inference model authentication fails, the inference model can be reacquired or retrained by the model training entity, which improves the flexibility of model management.
With reference to the second aspect, in some implementations of the second aspect, the authentication result indicates that the first inference model cannot run in the model inference entity, and the method further includes: the model management entity sends the first inference model and second adjustment information to the model inference entity, where the second adjustment information is used to instruct the model training entity to adjust the first inference model.
Therefore, in this application, when the inference model authentication fails, the model inference entity can retrain the inference model before deploying the inference model, which improves the flexibility of model management.
With reference to the second aspect, in some implementation manners of the second aspect, the first model provider is the same as the device provider of the model inference entity, and the authentication result indicates that the first inference model can run in the model inference entity.
With reference to the second aspect, in some implementations of the second aspect, before receiving the authentication result of the first reasoning model from the model authentication entity, the method further includes: the model management entity sends second provider information to the model authentication entity, where the second provider information indicates the device provider.
In a third aspect, a communication device for model management is provided. The device includes a transceiver module and a processing module. The transceiver module is used to receive first provider information from a model management entity, where the first provider information indicates that the reasoning model is provided by the first model provider; the processing module is used to compare the first model provider and the equipment provider of the model inference entity to generate a comparison result, wherein the model inference entity is an entity that will run the inference model; the transceiver module is also used to send the authentication result of the inference model to the model management entity based on the comparison result, where the authentication result indicates whether the inference model can run in the model inference entity.
Therefore, in this application, the model management entity can send the first provider information indicating the first model provider that provides the reasoning model to the model certification entity; the model certification entity can compare the first model provider with the device provider and generate an authentication result according to the comparison result to notify the model management entity; the model management entity can then adopt different processing based on different authentication results, which can improve the reliability of the reasoning model operation.
In combination with the third aspect, in some implementations of the third aspect, the comparison result is that the first model provider and the device provider are inconsistent, the accuracy of the inference model is less than the first threshold, and the authentication result indicates that the inference model cannot run in the model inference entity; or, the comparison result is that the first model provider and the device provider are inconsistent, the accuracy of the reasoning model is greater than or equal to the first threshold, and the authentication result indicates that the reasoning model can run in the model reasoning entity.
In combination with the third aspect, in some implementations of the third aspect, the transceiver module is also used to: receive the inference model from the model management entity; the processing module is also used to: determine that the first model provider is the provider of the inference model.
With reference to the third aspect, in some implementation manners of the third aspect, the processing module is specifically configured to: determine that the first format is consistent with the second format; or determine that the second model provider is consistent with the first model provider; or determine that the first format is consistent with the second format and that the second model provider is consistent with the first model provider; wherein the first format is the format of the reasoning model, the second format is determined according to the first provider information, and the second model provider is determined according to the inference model.
With reference to the third aspect, in some implementation manners of the third aspect, the transceiver module is further configured to: send cause information to the model management entity, where the cause information indicates the reason for obtaining the authentication result.
With reference to the third aspect, in some implementation manners of the third aspect, the comparison result indicates that the first model provider is consistent with the device provider, and the authentication result indicates that the inference model can run in the model inference entity.
With reference to the third aspect, in some implementation manners of the third aspect, the transceiving module is further configured to: receive second provider information from the model management entity, where the second provider information is used to indicate the device provider.
In a fourth aspect, a communication device for model management is provided. The device includes a transceiver module and a processing module. The processing module is used to generate first provider information, where the first provider information is used to indicate that the first reasoning model is provided by the first model provider; the transceiver module is used to send the first provider information to the model authentication entity; the transceiver module is also used to receive the authentication result of the first inference model from the model authentication entity, where the authentication result indicates whether the first inference model can run in the model inference entity.
Therefore, in this application, the model management entity can send the first provider information indicating the first model provider that provides the reasoning model to the model certification entity; the model certification entity can compare the first model provider with the device provider and generate an authentication result according to the comparison result to notify the model management entity; the model management entity can then take different processing based on different authentication results, which can improve the reliability of the reasoning model operation.
With reference to the fourth aspect, in some implementations of the fourth aspect, the first model provider and the device provider are inconsistent, the accuracy of the first inference model is less than the first threshold, and the authentication result indicates that the first inference model cannot run in the model inference entity; or, the first model provider is inconsistent with the device provider, the accuracy of the first reasoning model is greater than or equal to the first threshold, and the authentication result indicates that the first reasoning model can run in the model reasoning entity.
With reference to the fourth aspect, in some implementation manners of the fourth aspect, the transceiver module is further configured to: send the first reasoning model to the model verification entity.
With reference to the fourth aspect, in some implementations of the fourth aspect, the first model provider being the provider of the first reasoning model includes: the first format is consistent with the second format; or, the second model provider is consistent with the first model provider; or, the first format is consistent with the second format and the second model provider is consistent with the first model provider; wherein the first format is the format of the inference model, the second format is determined according to the first provider information, and the second model provider is determined according to the reasoning model.
With reference to the fourth aspect, in some implementations of the fourth aspect, the transceiver module is further configured to: receive cause information from the model authentication entity, where the cause information indicates the reason for obtaining the authentication result.
With reference to the fourth aspect, in some implementations of the fourth aspect, the authentication result indicates that the first inference model cannot run in the model inference entity, and the transceiver module is further used to: send at least one of the following information to the model training entity: the authentication result, cause information, and first adjustment information, wherein the cause information indicates the reason for obtaining the authentication result, and the first adjustment information indicates that the model training entity adjusts the first inference model; and receive a second inference model from the model training entity, where the second inference model is obtained by adjusting the first inference model.
With reference to the fourth aspect, in some implementations of the fourth aspect, the authentication result indicates that the reasoning model cannot run in the model reasoning entity, and the transceiver module is also used to: send the first reasoning model and second adjustment information to the model reasoning entity, where the second adjustment information instructs the model inference entity to adjust the first inference model.
With reference to the fourth aspect, in some implementation manners of the fourth aspect, the first model provider is consistent with the device provider, and the authentication result indicates that the first inference model can run in the model inference entity.
With reference to the fourth aspect, in some implementation manners of the fourth aspect, the transceiving module is further configured to: send second provider information to the model verification entity, where the second provider information indicates the device provider of the model reasoning entity.
In a fifth aspect, a communication apparatus is provided. The apparatus may include a processing module, a sending unit, and a receiving unit. Optionally, the sending unit and the receiving unit may also be a transceiver module.
When the apparatus is the model authentication entity, the processing module may be a processor, and the sending unit and the receiving unit may be a transceiver. The apparatus may further include a storage unit, which may be a memory. The storage unit is configured to store instructions, and the processing module executes the instructions stored in the storage unit, so that the model authentication entity performs any method in the first aspect. When the apparatus is a chip in the model authentication entity, the processing module may be a processor, and the sending unit and the receiving unit may be an input/output interface, a pin, a circuit, or the like. The processing module executes the instructions stored in the storage unit, so that the chip performs any method in the first aspect. The storage unit is configured to store the instructions, and the storage unit may be a storage unit inside the chip (for example, a register or a cache) or a storage unit in the model authentication entity that is located outside the chip (for example, a read-only memory or a random access memory).
When the apparatus is the model management entity, the processing module may be a processor, and the sending unit and the receiving unit may be a transceiver. The apparatus may further include a storage unit, which may be a memory. The storage unit is configured to store instructions, and the processing module executes the instructions stored in the storage unit, so that the model management entity performs any method in the second aspect. When the apparatus is a chip in the model management entity, the processing module may be a processor, and the sending unit and the receiving unit may be an input/output interface, a pin, a circuit, or the like. The processing module executes the instructions stored in the storage unit, so that the chip performs any method in the second aspect. The storage unit is configured to store the instructions, and the storage unit may be a storage unit inside the chip (for example, a register or a cache) or a storage unit in the model management entity that is located outside the chip (for example, a read-only memory or a random access memory).
In a sixth aspect, a communication apparatus is provided, including a processor and an interface circuit. The interface circuit is configured to receive signals from communication apparatuses other than this communication apparatus and transmit them to the processor, or to send signals from the processor to communication apparatuses other than this communication apparatus. The processor is configured to implement any method in the first aspect or the second aspect by using a logic circuit or by executing code instructions.
In a seventh aspect, a computer-readable storage medium is provided. The computer-readable storage medium stores a computer program or instructions, and when the computer program or instructions are executed, any method in the first aspect or the second aspect is implemented.
In an eighth aspect, a computer program product containing instructions is provided. When the instructions are run, any method in the first aspect or the second aspect is implemented.
In a ninth aspect, a computer program is provided. The computer program includes code or instructions, and when the code or instructions are run, any method in the first aspect or the second aspect is implemented.
In a tenth aspect, a chip system is provided. The chip system includes a processor and may further include a memory, configured to implement any method in the first aspect or the second aspect. The chip system may consist of a chip, or may include a chip and other discrete components.
In an eleventh aspect, a communication system is provided. The system includes the apparatus described in any one of the third aspect and the fourth aspect.
Description of Drawings
FIG. 1 is a schematic diagram of a system to which the model management provided in the embodiments of this application is applicable.
FIG. 2 is a schematic flowchart of an example of a model management method provided in an embodiment of this application.
FIG. 3 is a schematic flowchart of another example of a model management method provided in an embodiment of this application.
FIG. 4 is a schematic flowchart of still another example of a model management method provided in an embodiment of this application.
FIG. 5 and FIG. 6 are schematic structural diagrams of possible communication apparatuses provided in embodiments of this application.
Detailed Description of Embodiments
The technical solutions in this application are described below with reference to the accompanying drawings.
The methods in the embodiments of this application may be applied to a long term evolution (LTE) system, a long term evolution-advanced (LTE-A) system, an enhanced long term evolution (eLTE) system, or a fifth generation (5G) mobile communication new radio (NR) system, and may also be extended to similar wireless communication systems, such as wireless fidelity (WiFi), worldwide interoperability for microwave access (WiMAX), and cellular systems related to the 3rd generation partnership project (3GPP).
For clarity, some terms used in the embodiments of this application are explained below.
Inference model (also referred to simply as a model): a function learned from data that implements a specific function or mapping. A model may be obtained based on artificial intelligence (AI) or machine learning (ML) technologies, and may therefore also be called an AI model, an ML model, and so on. Commonly used algorithms for generating AI/ML models include supervised learning, unsupervised learning, and reinforcement learning; the corresponding models may be called supervised learning models, unsupervised learning models, and reinforcement learning models. For example, a supervised learning model may be a classification model, a prediction model, or a regression model, and an unsupervised learning model may be a clustering model. In addition, a model may also be obtained based on neural network (NN) technologies, and such a model may also be called a neural network model, a deep learning model, and so on.
Model training: training with training data to obtain a usable model.
Model inference: performing inference or prediction based on a model to generate an inference result. A model inference entity may be used for model inference.
Model deployment: deploying a model in a model inference entity.
Model activation: activating a model deployed in a model inference entity so that it starts running.
Model evaluation: evaluating whether the performance of a model running in a model inference entity meets the requirements.
Model authentication: determining whether the entity that trained a model is consistent with the entity in which the model is deployed, and, when they are inconsistent, determining whether the running performance of the deployed model can meet expectations.
Model management: managing a model over its life cycle, for example managing model deployment, model activation, model evaluation, and model training.
To facilitate understanding of the embodiments of this application, an application scenario of the embodiments of this application is first described in detail with reference to FIG. 1.
FIG. 1 is a schematic structural diagram of a communication system to which the embodiments of this application are applicable. The apparatuses that may be involved in the communication system are described first.
1. Model management entity 110: used to manage models over their life cycle. For example, the model management entity 110 may be a network management system (NMS).
In the embodiments of this application, the model management entity 110 may be deployed in operator equipment.
2. Model training entity 120: used to obtain usable models through training. For example, the model training entity may be an operator platform or a manufacturer training platform, or another entity in which a model training function is deployed.
In the embodiments of this application, the model training entity 120 may publish trained models to a model market, and the model management entity 110 may obtain a model from the model market and deploy it in the model inference entity 130. The model market may be deployed in the model management entity 110, in the model training entity 120, or independently, which is not specifically limited in this application.
In addition, in the embodiments of this application, the provider of the model training entity 120 may be referred to as a model provider.
3. Model inference entity 130: used to perform inference or computation based on a model and generate an inference result. For example, the model inference entity 130 may be an element management system (EMS), a management data analytics function (MDAF), a radio access network (RAN), or a network element in a 5G system (for example, a network data analytics function (NWDAF) network element).
In the embodiments of this application, the model inference entity 130 may be deployed in manufacturer equipment, and the provider of the model inference entity 130 may be referred to as a device provider.
In the embodiments of this application, the communication system may further include a model authentication entity 140.
4. Model authentication entity 140: the model authentication entity 140 may be used to authenticate a model, for example, to determine whether the running performance of the model can meet expectations.
It should be noted that the solutions of this application can be applied to other systems containing the corresponding entities, which is not limited in this application. It can be understood that each of the foregoing entities or functions may be a network element in a hardware device, a software function running on dedicated hardware, or a virtualized function instantiated on a platform (for example, a cloud platform). Optionally, each of the foregoing entities or functions may be implemented by one device, jointly implemented by multiple devices, or be a functional module within one device, which is not specifically limited in the embodiments of this application. For example, in the embodiments of this application, the model management entity 110 and the model training entity 120 may be different functional modules within one device, and the model inference entity 130 and the model authentication entity 140 may be different functional modules within one device.
When a model inference entity runs a model provided by a different provider, it may not be able to run the model normally. Therefore, the embodiments of this application provide a model management method and a communication apparatus that can improve the reliability of model running. The model management method is described first below. The method embodiments shown in FIG. 2 to FIG. 4 may be combined with one another, and the steps in the method embodiments shown in FIG. 2 to FIG. 4 may refer to one another. For example, in the embodiments of this application, the method embodiments shown in FIG. 3 and FIG. 4 may each be an implementation of the functions of the method embodiment shown in FIG. 2.
FIG. 2 is a schematic flowchart of a model management method 200 provided in an embodiment of this application.
S201: The model management entity sends first provider information to the model authentication entity; correspondingly, the model authentication entity receives the first provider information from the model management entity.
The first provider information indicates that an inference model (that is, a first inference model) is provided by a first model provider. In other words, the first provider information indicates that the first inference model is generated through training by the first model provider.
The first model provider may be a first manufacturer or a first supplier. That the first inference model is provided by the first model provider may mean that the device that provides the first inference model belongs to the first model provider, or that the device that trains the first inference model belongs to the first model provider, or that the device that generates the first inference model belongs to the first model provider.
It should be understood that, in the embodiments of this application, the first model provider may train the first inference model based on its own data, or may train the first inference model based on data obtained from other providers, which is not specifically limited in this application.
The first provider information may be the name or identification information of the first model provider, for example, a manufacturer such as Huawei, ZTE, or Ericsson, or an operator such as China Mobile or China Telecom.
S202: The model authentication entity compares the first model provider with the device provider of the model inference entity to generate a comparison result, where the model inference entity is the entity that will run the first inference model.
For example, the model authentication entity determines whether the first model provider is consistent with the device provider that provides the model inference entity.
The device provider may be a second manufacturer or a second operator.
That the model inference entity is the entity that will run the first inference model may mean that the device that is to receive the first inference model (that is, the model inference entity) is produced or provided by the device provider, or that the device that is to run the first inference model is produced or provided by the device provider.
For example, when the first model provider and the device provider are the same manufacturer, or the first model provider and the device provider are the same operator, the model authentication entity may determine that the first model provider and the device provider are consistent; when the first model provider and the device provider are different manufacturers, or different operators, or a manufacturer and an operator respectively, the model authentication entity determines that the first model provider and the device provider are inconsistent.
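As an illustrative, non-normative sketch of the comparison in S202 (the names ProviderInfo, provider_id, provider_type, and compare_providers are assumptions introduced here for illustration, not message or interface definitions from this application), the consistency check could look like:

```python
from dataclasses import dataclass

@dataclass
class ProviderInfo:
    # Hypothetical container for provider information (name or identifier).
    provider_id: str        # e.g. a manufacturer/operator name such as "provider-A"
    provider_type: str      # e.g. "manufacturer" or "operator"

def compare_providers(first_model_provider: ProviderInfo,
                      device_provider: ProviderInfo) -> bool:
    # Consistent only when both sides identify the same manufacturer or the
    # same operator; any other combination counts as inconsistent.
    return (first_model_provider.provider_type == device_provider.provider_type
            and first_model_provider.provider_id == device_provider.provider_id)
```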
Therefore, in the embodiments of this application, before the first inference model is run, the model authentication entity can first determine whether the provider that trained the first inference model is consistent with the provider of the equipment that will run it, so that different handling can be applied to the different results and the reliability of running the model is improved.
Optionally, when the model authentication entity and the model inference entity are deployed in the same device, or in other words when the model authentication entity and the model inference entity are provided by the same provider, the information of the device provider may be recorded in the model authentication entity.
Alternatively, and optionally, when the model authentication entity is not provided by the device provider, the method 200 may obtain the information of the device provider by performing step S203.
Optionally, S203: The model management entity sends second provider information to the model authentication entity; correspondingly, the model authentication entity receives the second provider information from the model management entity.
The second provider information indicates the device provider.
The second provider information may be the name or identification information of the device provider and/or the name or identification information of the model inference entity. Before sending the second provider information, the model management entity may determine that the first inference model needs to be deployed in the model inference entity of the device provider, and then send the second provider information to the model authentication entity.
S204: The model authentication entity sends an authentication result to the model management entity based on the comparison result; correspondingly, the model management entity receives the authentication result from the model authentication entity.
The authentication result indicates whether the first inference model can run in the model inference entity.
For example, the model authentication entity determines, according to the comparison result, whether the first inference model can run in the model inference entity, so as to generate the authentication result. If the model authentication entity determines that the first inference model can run in the model inference entity, the first inference model passes authentication. Conversely, if the model authentication entity determines that the first inference model cannot run in the model inference entity, the first inference model fails authentication.
Optionally, when the comparison result is that the first model provider and the device provider are consistent, the authentication result indicates that the first inference model can run in the model inference entity. Alternatively, the authentication result indicates that the first model provider and the device provider are consistent. Alternatively, the authentication result instructs the model management entity to deploy the first inference model. Alternatively, the authentication result indicates that the authentication status of the first inference model is passed.
Optionally, the model authentication entity may also send the authentication result to the model inference entity; correspondingly, the model inference entity receives the authentication result from the model authentication entity.
The authentication result may further indicate an authentication-pass identifier or an authentication-pass password, which can be used by the model inference entity to determine that the first inference model has passed authentication. For example, the authentication result includes status information indicating a pass together with the authentication-pass identifier or the authentication-pass password.
For example, the model authentication entity may generate the authentication-pass identifier or the authentication-pass password according to the version or the identifier of the first inference model, and send the authentication result to the model management entity and the model inference entity respectively. The model management entity carries the authentication result in the model deployment message used to deploy the first inference model, and the model inference entity learns whether the first inference model has passed authentication by determining whether the identifier or password carried in the model deployment message is consistent with the identifier or password indicated by the authentication result received from the model authentication entity.
In addition, the model deployment message may further carry at least one of the following information: identification information of the first inference model, version information of the first inference model, the first inference model itself, and usage information of the first inference model.
The usage information of the first inference model may be used to indicate the operating conditions of the first inference model. For example, the usage information may include at least one of the following: the standard under which the first inference model is used, the time during which the first inference model is used, and the area in which the first inference model is used.
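A minimal sketch of how the deployment message and the authentication-pass identifier described above could be represented follows. All names (make_pass_token, UsageInfo, ModelDeploymentMessage, inference_entity_accepts) and the hash-based token derivation are assumptions made for illustration; this application does not mandate any particular encoding.

```python
import hashlib
from dataclasses import dataclass
from typing import Optional

def make_pass_token(model_id: str, model_version: str) -> str:
    # One possible way to derive an authentication-pass identifier from the
    # model identifier and version; the derivation itself is not specified here.
    return hashlib.sha256(f"{model_id}:{model_version}".encode()).hexdigest()[:16]

@dataclass
class UsageInfo:
    # Operating conditions of the inference model (usage information).
    standard: Optional[str] = None   # standard under which the model is used
    time: Optional[str] = None       # usage time
    area: Optional[str] = None       # usage area

@dataclass
class ModelDeploymentMessage:
    # Fields the model deployment message may carry.
    model_id: str
    model_version: str
    model_file: bytes
    usage: UsageInfo
    pass_token: str                  # authentication-pass identifier or password

def inference_entity_accepts(msg: ModelDeploymentMessage, expected_token: str) -> bool:
    # The model inference entity compares the token carried in the deployment
    # message with the one received from the model authentication entity.
    return msg.pass_token == expected_token
```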
Optionally, when the comparison result is that the first model provider and the device provider are inconsistent, the method 200 may further perform step S205.
Optionally, S205: The model authentication entity determines the authentication result according to the comparison result and performance information.
For example, the model authentication entity determines, according to the performance information, whether the first inference model passes authentication, where the performance information is used to indicate the accuracy of the first inference model.
If the comparison result is that the first model provider and the device provider are inconsistent and the accuracy of the first inference model is less than a first threshold, the authentication result indicates that the first inference model fails authentication; or, if the comparison result is that the first model provider and the device provider are inconsistent and the accuracy of the first inference model is greater than or equal to the first threshold, the authentication result indicates that the first inference model passes authentication.
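The decision rule described above can be summarised in the following sketch; the function name decide_authentication and its argument types are assumptions, and the rule shown simply restates S204/S205:

```python
def decide_authentication(providers_consistent: bool,
                          accuracy: float,
                          first_threshold: float) -> bool:
    # True: the first inference model passes authentication and can run in the
    # model inference entity. False: authentication fails.
    if providers_consistent:
        return True                      # consistent providers: pass directly
    return accuracy >= first_threshold   # inconsistent providers: decide on accuracy
```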
Determining, according to the performance information, that the first inference model passes authentication may mean that the first inference model, once deployed in the model inference entity, can achieve the expected performance; determining, according to the performance information, that the first inference model fails authentication may mean that the first inference model, once deployed in the model inference entity, cannot achieve the expected performance.
Before the model authentication entity determines whether the first inference model passes authentication, the model authentication entity may obtain the performance information, for example by testing the first inference model.
As an example and not a limitation, the first inference model is used to predict second data in a second time period from first data in a first time period, where the first data and the second data are historical data stored by the model inference entity; in other words, the first data and the second data are real data from two time periods. The model authentication entity takes the first data as input data and runs the first inference model to obtain predicted data for the second time period. The model authentication entity can then determine the accuracy of the first inference model by determining the degree of match between the second data and the predicted data, so as to generate the performance information.
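One way to realise this evaluation is sketched below, assuming the inference model exposes a predict call and that a simple relative-error match is used as the degree of match; neither assumption is specified by this application.

```python
from typing import Callable, Sequence

def evaluate_accuracy(predict: Callable[[Sequence[float]], Sequence[float]],
                      first_data: Sequence[float],
                      second_data: Sequence[float]) -> float:
    # Run the inference model on the historical first data and measure how
    # closely the prediction matches the real second data; the value in [0, 1]
    # is used as the accuracy in the performance information.
    predicted = predict(first_data)
    errors = [abs(p - r) / max(abs(r), 1e-9) for p, r in zip(predicted, second_data)]
    if not errors:
        return 0.0
    return max(0.0, 1.0 - sum(errors) / len(errors))
```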
Optionally, the model authentication entity requests the first data and the second data from the model inference entity according to the usage information of the first inference model.
The first threshold may be preset, or may be indicated by model performance information sent by the model management entity to the model authentication entity, where the model performance information describes the performance that the first inference model can achieve when running under the operating conditions indicated by the usage information.
Therefore, even when the first model provider and the device provider are inconsistent, if the performance of the first inference model can meet expectations, it can still be determined that the first inference model passes authentication, and the model management entity can deploy the first inference model in the model inference entity.
Optionally, the authentication result indicates that the first inference model cannot run in the model inference entity (that is, when the first model provider and the device provider are inconsistent and the accuracy of the first inference model is less than the first threshold). Alternatively, the authentication result indicates that the first inference model fails authentication. Alternatively, the authentication result indicates that the first inference model, if deployed in the model inference entity, cannot run normally. Alternatively, the authentication result indicates that the first inference model, if deployed in the model inference entity, cannot achieve the expected performance.
Optionally, S206: The model authentication entity sends cause information to the model management entity; correspondingly, the model management entity receives the cause information from the model authentication entity.
The cause information indicates the reason for the authentication result.
Optionally, the model authentication entity may send the cause information to the model management entity only when the authentication result indicates that the first inference model fails authentication.
For example, the cause information indicates the reason why the first inference model fails authentication. For example, the cause information may indicate that the performance of the first inference model cannot meet expectations, and may also indicate that the first model provider and the device provider are inconsistent.
The cause information may include the usage information and accuracy of the first inference model, and information indicating that the first model provider and the device provider are inconsistent. Here, the usage information and the accuracy may refer to the accuracy that the first inference model is expected to achieve when tested in the manner indicated by the usage information.
After receiving an authentication result indicating that the first inference model cannot run in the model inference entity, the model management entity can handle it in two ways, which are described separately below.
Way 1:
In Way 1, the method 200 may perform steps S207 and S208.
Optionally, S207: The model management entity sends at least one of the following information to the model training entity: the authentication result, the cause information, and first adjustment information.
The first adjustment information is used to instruct adjustment of the first inference model, so as to generate an adjusted inference model (in the embodiments of this application, the adjusted first inference model is referred to as the second inference model).
The authentication result and the cause information may correspond to the identification information of the first inference model, so that the model training entity can regenerate a second inference model according to the authentication result and/or the cause information, or, when the adjustment information is received, adjust the first inference model to generate the second inference model.
Adjusting the first inference model may mean retraining the first inference model.
It should be understood that, when the model management entity sends only the cause information to the model training entity, the cause information may implicitly indicate that the first inference model fails authentication, and may also implicitly indicate that the first inference model should be adjusted.
The model management entity may also send information indicating the device provider, the identification information of the model, the model performance information, or the version information of the model to the model training entity, to assist the model training entity in determining the second inference model.
Optionally, S208: The model training entity determines the second inference model.
The model training entity may determine, according to the authentication result, the cause information, or the first adjustment information, that the first inference model fails authentication, and may then generate the second inference model, or adjust the first inference model to generate the second inference model.
For example, the model training entity may obtain a second inference model trained and generated by the device provider.
Or, for another example, the model training entity may obtain a dataset of the device provider and retrain the first inference model to generate the second inference model.
Optionally, the model training entity sends the second inference model to the model management entity; correspondingly, the model management entity receives the second inference model from the model training entity.
The model management entity can then perform an authentication procedure similar to steps S201 to S206 for the second inference model.
Therefore, in Way 1, when the first inference model fails authentication, the model training entity can generate a second inference model by re-obtaining a model or by retraining the first inference model, as sketched below.
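A hedged sketch of the Way 1 handling loop at the model management entity; send_to_training_entity and request_authentication are placeholders for the signalling described above, not interfaces defined by this application.

```python
def handle_failed_authentication_way1(auth_result: dict,
                                      cause_info: dict,
                                      send_to_training_entity,
                                      request_authentication):
    # Forward the authentication result, the cause information and the first
    # adjustment information to the model training entity, receive the adjusted
    # (second) inference model, and submit it for authentication again (S201-S206).
    first_adjustment_info = {"adjust": True}
    second_model = send_to_training_entity(auth_result, cause_info, first_adjustment_info)
    return request_authentication(second_model)
```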
Way 2:
Optionally, the model management entity sends the first inference model and second adjustment information to the model inference entity; correspondingly, the model inference entity receives the first inference model and the second adjustment information from the model management entity.
The second adjustment information is used to instruct adjustment of the first inference model.
For example, the model management entity may send a model deployment request message to the model inference entity, where the model deployment request message carries the first inference model and the second adjustment information. From the second adjustment information, the model inference entity can learn that the first inference model fails authentication, or that running the first inference model cannot achieve the expected performance, and the model inference entity can retrain the first inference model before running it.
It should be noted that, when the model authentication entity is deployed separately from the model inference entity, the model management entity may also send the identification information, the version information, the model performance information, or the usage information of the first inference model to the model inference entity, to assist the model inference entity in retraining the first inference model so that the retrained first inference model achieves the performance indicated by the model performance information.
It should be understood that, in a possible implementation in which the model authentication entity and the model inference entity are deployed in one device, the model authentication entity and the model inference entity receive the first provider information carried in the model deployment request message; that is, the model authentication entity has already obtained the first inference model before determining whether it passes authentication. When the model authentication entity determines that the first inference model fails authentication, it can notify the model inference entity through an internal interface that the first inference model fails authentication, and the model inference entity can adjust the first inference model before running it. In this case, the model management entity does not need to send the adjustment information to the model inference entity.
Therefore, in Way 2, when the first inference model fails authentication, the model inference entity can retrain the first inference model before deploying it.
In the above description, before the first inference model is run, the model authentication entity may first determine whether the first model provider that provides the first inference model, as indicated by the first provider information, is consistent with the device provider in whose equipment the first inference model is to be deployed. When they are consistent, the model authentication entity may consider that the first inference model passes authentication; when they are inconsistent, the model authentication entity may further determine, according to the performance information of the first inference model, whether the first inference model passes authentication. However, the first provider information sent by the model management entity may be untrue. Therefore, before step S202, the method 200 may further perform steps S209 and S210 to determine whether the first provider information is genuine, as described below.
Optionally, S209: The model management entity sends the first inference model to the model authentication entity; correspondingly, the model authentication entity receives the first inference model from the model management entity.
The first inference model may be carried in the form of a model file or a model file address. The model file refers to information describing the first inference model, recorded in a file format, and the model file address refers to address information used to index the model file. In addition, it should be understood that the model file may consist of multiple sub-files.
The information describing the first inference model may include at least one of the following: the name of the first inference model, information about a second model provider that provides the first inference model, and the identifier of the first inference model.
Optionally, S210: The model authentication entity determines, based on the first inference model, that the first model provider is the provider of the first inference model.
For example, the model authentication entity determines, according to the first inference model and the first provider information, whether the first inference model is provided or trained by the first model provider.
It should be understood that the first provider information is recorded outside the model file. It differs from the second model provider information in the first inference model in that the second model provider information in the first inference model is the genuine information originally recorded in the model file, whereas the first provider information outside the model file is specified by the model management entity and may be erroneous.
The model authentication entity may determine whether the first inference model is trained by the first model provider in three ways, which are described separately below.
Way A:
When the second model provider and the first model provider are consistent, the model authentication entity determines that the first inference model is provided by the first model provider.
When the second model provider and the first model provider are inconsistent, the model authentication entity determines that the first inference model is not provided by the first model provider.
Here, the second model provider is the provider of the first inference model as indicated by the first inference model itself, or in other words, the provider recorded in the model file of the first inference model.
Therefore, in Way A, the model authentication entity can determine whether the first provider information is genuine according to whether it is consistent with the second model provider recorded in the first inference model.
Way B:
When the first format and the second format are consistent, the model authentication entity determines that the first inference model is provided by the first model provider.
When the first format and the second format are inconsistent, the model authentication entity determines that the first inference model is not provided by the first model provider.
Here, the first format is the format of the first inference model, and the second format is determined according to the first provider information.
The format of the first inference model may refer to the format of the model file of the first inference model, for example, its file format or syntax.
That the second format is determined according to the first provider information may mean that the model authentication entity can determine, from the first model provider indicated by the first provider information, the format used for models trained by that first model provider. For example, the model authentication entity may record the model formats used by multiple providers, and upon receiving the first provider information it can determine the format corresponding to the first model provider.
Therefore, in Way B, the model authentication entity can determine whether the first provider information is genuine according to whether the format used by the first model provider is consistent with the format of the first inference model.
Way C:
When the second model provider and the first model provider are consistent, and the first format and the second format are consistent, the model authentication entity determines that the first inference model is provided by the first model provider.
When the second model provider and the first model provider are inconsistent, and/or the first format and the second format are inconsistent, the model authentication entity determines that the first inference model is not provided by the first model provider.
Here, the second model provider is the provider of the first inference model as indicated by the first inference model, the first format is the format of the first inference model, and the second format is determined according to the first provider information.
Therefore, in Way C, the model authentication entity checks both whether the first provider information is consistent with the second model provider recorded in the first inference model and whether the format used by the first model provider is consistent with the format of the first inference model, and determines that the first provider information is genuine only when both conditions are met.
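The three checks could be sketched as follows. ModelFileInfo, its fields, and the KNOWN_FORMATS registry (mapping each provider to the model-file format it is known to use) are assumptions introduced for illustration only.

```python
from dataclasses import dataclass

@dataclass
class ModelFileInfo:
    # Information originally recorded in the model file.
    name: str
    recorded_provider: str   # the second model provider
    file_format: str         # the first format, e.g. an assumed "format-A"

# Assumed registry of the model-file format each provider is known to use.
KNOWN_FORMATS = {"provider-A": "format-A", "provider-B": "format-B"}

def provider_is_genuine(model: ModelFileInfo, first_provider: str, way: str = "C") -> bool:
    # Way A compares the providers, Way B compares the formats, Way C requires both.
    provider_ok = (model.recorded_provider == first_provider)
    format_ok = (KNOWN_FORMATS.get(first_provider) == model.file_format)
    if way == "A":
        return provider_ok
    if way == "B":
        return format_ok
    return provider_ok and format_ok
```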
In the above three ways, when the model authentication entity determines that the first inference model is provided by the first model provider, the method 200 may perform steps S202 to S209 to determine whether the first model provider and the device provider are consistent. When the model authentication entity determines that the first inference model is not trained by the first model provider, the model authentication entity may consider that authentication of the first inference model is unsuccessful, and the method may perform the actions of steps S206 to S209 that follow an unsuccessful authentication of the first inference model. In this case, the cause information in step S207 may also be used to indicate that the first inference model is not trained by the first model provider, or to indicate that the first provider information is erroneous.
Therefore, in this application, the model management entity can send the first provider information, which indicates the first model provider that provides the first inference model, to the model authentication entity; the model authentication entity can compare the first model provider with the device provider, generate an authentication result according to the comparison result, and report it to the model management entity; the model management entity can then take different actions based on different authentication results, which can improve the reliability of running the first inference model.
When the model authentication entity and the model inference entity are deployed on different devices, the model management entity can request authentication from the model authentication entity and determine, according to the authentication result, whether to deploy the first inference model in the model inference entity.
FIG. 3 is a schematic flowchart of a model management method 300 provided in an embodiment of this application.
Optionally, S301: The model management entity obtains the first inference model.
For example, the model management entity may send a model query request message to the model market according to the network state or operational requirements, where the model query request message is used to request the first inference model. In response to the query request message, the model market may send the first inference model to the model management entity, or send multiple models, from which the model management entity determines the first inference model. The model market holds models trained by multiple providers and may be deployed in the model training entity shown in FIG. 1.
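A minimal sketch of the model acquisition in S301, assuming a simple in-memory stand-in for the model market; the class and method names (ModelMarket, publish, query) and the dictionary-based model description are illustrative only.

```python
class ModelMarket:
    # Toy stand-in for the model market that holds models trained by several providers.
    def __init__(self):
        self._models = {}                      # model_id -> model description (dict)

    def publish(self, model_id: str, model: dict) -> None:
        self._models[model_id] = model

    def query(self, purpose: str) -> list:
        # Return the models whose declared purpose matches the request.
        return [m for m in self._models.values() if m.get("purpose") == purpose]

# Example: the model management entity queries the market and picks the first match.
market = ModelMarket()
market.publish("m1", {"purpose": "load_prediction", "provider": "provider-A"})
candidates = market.query("load_prediction")
first_inference_model = candidates[0] if candidates else None
```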
Or, for another example, the model management entity may send a model training request message to the model training entity according to the network state or operational requirements, where the model training request message is used to request the model training entity to perform training to generate the first inference model. The model training entity performs model training according to the model training request message to generate the first inference model, and, in response to the model training request message, sends the first inference model to the model management entity.
In addition, the model management entity and the model training entity may be deployed on the same operator platform.
S302: The model management entity sends the first provider information to the model authentication entity; correspondingly, the model authentication entity receives the first provider information from the model management entity.
The first provider information indicates that the first inference model is provided by the first model provider. For a description of the first provider information, refer to the description of the first provider information in step S201 of the method 200; for brevity, details are not repeated here.
For example, the model management entity sends a model authentication request message to the model authentication entity, where the model authentication request message is used to request authentication of the first inference model and carries the first provider information.
Optionally, S303: The model management entity sends the second provider information and the first inference model to the model authentication entity; correspondingly, the model authentication entity receives the second provider information and the first inference model from the model management entity.
The second provider information is used to indicate the device provider in whose equipment the first inference model is to be deployed; the device provider is a second manufacturer or a second operator. For a description of the device provider and the first inference model, refer to the description of the device provider in step S202 of the method 200; for brevity, details are not repeated here.
For example, the model management entity sends the model authentication request message to the model authentication entity, where the model authentication request message carries the second provider information and the first inference model.
In a possible implementation, the model authentication request message may further carry at least one of the following: the identification information of the first inference model, the version information of the first inference model, the performance information of the first inference model, and the usage information of the first inference model. For a description of this information, refer to the description in the method 200; for brevity, details are not repeated here.
可选地,S304,模型认证实体根据第一推理模型和第一提供商信息确定第一推理模型的是该第一推理模型的提供商。Optionally, in S304, the model authentication entity determines, according to the first inference model and the first provider information, that the provider of the first inference model is the first inference model.
有关模型认证实体确定第一推理模型是否由该第一模型提供商提供的描述可参见方法200中步骤S210中的描述,为了简便,在此不再赘述。For the description about whether the model certification entity determines whether the first inference model is provided by the first model provider, refer to the description in step S210 in the method 200, and for the sake of brevity, details are not repeated here.
可选地,模型认证实体确定第一推理模型是由该第一模型提供商提供,方法300可以执行步骤S305至S313。Optionally, the model certification entity determines that the first reasoning model is provided by the first model provider, and the method 300 may perform steps S305 to S313.
S305,模型认证实体对第一模型提供商和设备提供商进行对比,以生成对比结果。S305. The model certification entity compares the first model provider and the equipment provider to generate a comparison result.
例如,模型认证实体可以判定第一模型提供商和设备提供商是否一致。For example, the model certification entity may determine whether the first model provider and the device provider are consistent.
有关模型认证实体进行对比以生成对比结果的方式可参见方法200中步骤S202的描述,为了简便,在此不再赘述。For the manner in which the model verification entity performs comparison to generate a comparison result, reference may be made to the description of step S202 in the method 200 , and details are not repeated here for brevity.
可选地,当对比结果为第一模型提供商和设备提供商不一致时,方法300可以执行步骤S306。Optionally, when the comparison result shows that the first model provider and the device provider are inconsistent, the method 300 may perform step S306.
可选地,S306,模型认证实体根据对比结果和性能信息确定认证结果。Optionally, at S306, the model verification entity determines the verification result according to the comparison result and performance information.
例如,模型认证实体根据性能信息判定第一推理模型是否认证通过,该性能信息用于指示第一推理模型的准确度。For example, the model certification entity determines whether the first reasoning model is certified according to the performance information, where the performance information is used to indicate the accuracy of the first reasoning model.
For another example, the model certification entity may send an evaluation data request message to the model inference entity, where the evaluation data request message is used to request first data and second data, and the evaluation data request message may include type information and condition information. In response to the evaluation data request message, the model inference entity sends an evaluation data response message to the model certification entity, where the evaluation data response message carries the first data and the second data. The type information may refer to the data types of the first data and the second data, and the condition information refers to the conditions that the first data and the second data need to satisfy; for example, the condition information may include a standard condition, a time condition, and an area condition, and the model inference entity sends the first data and the second data that satisfy the conditions to the model certification entity.
For the manner in which the model certification entity determines the performance information according to the first data and the second data, and determines according to the performance information whether the first inference model passes certification, refer to the description of step S205 in the method 200; for brevity, details are not repeated here.
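One way to picture this exchange is that the evaluation samples are first filtered by the condition information and the model is then scored on the returned pairs. The sketch below assumes, only for illustration, that each first data item is a model input, each second data item is the corresponding expected output, and the model exposes a predict callable; none of these names or structures are defined by the embodiment.

```python
from typing import Callable, Iterable, List, Tuple

Sample = Tuple[object, object]  # (first data item, second data item)

def select_evaluation_data(samples: Iterable[dict], standard: str,
                           time_range: Tuple[float, float], area: str) -> List[Sample]:
    """Keep only the samples that satisfy the standard, time and area conditions."""
    lo, hi = time_range
    return [(s["input"], s["expected"]) for s in samples
            if s["standard"] == standard and lo <= s["timestamp"] <= hi and s["area"] == area]

def measure_accuracy(predict: Callable[[object], object], pairs: List[Sample]) -> float:
    """Fraction of evaluation pairs for which the model's output matches the expected output."""
    if not pairs:
        return 0.0
    correct = sum(1 for x, y in pairs if predict(x) == y)
    return correct / len(pairs)
```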
S307: The model certification entity sends the certification result to the model management entity; correspondingly, the model management entity receives the certification result from the model certification entity.
The certification result indicates whether the first inference model can run in the model inference entity.
For example, in response to the model certification request message, the model certification entity sends a model certification response message to the model management entity, where the model certification response message carries the certification result. For a description of the certification result, refer to the description of step S204 in the method 200; for brevity, details are not repeated here.
Optionally, S308: The model certification entity sends the certification result to the model inference entity; correspondingly, the model inference entity receives the certification result from the model certification entity.
For example, the model certification entity sends a model certification pass notification message to the model inference entity, where the model certification pass notification message carries the certification result.
Optionally, S309: The model certification entity sends cause information to the model management entity; correspondingly, the model management entity receives the cause information from the model certification entity.
The cause information indicates the reason for the certification result. For a description of the cause information, refer to the description of step S206 in the method 200; for brevity, details are not repeated here.
For example, the model certification response message carries the cause information.
When the certification result indicates that the first inference model passes certification, the method 300 may perform step S310.
Optionally, S310: The model management entity sends a model deployment message to the model inference entity; correspondingly, the model inference entity receives the model deployment message from the model management entity.
The model deployment message is used to deploy the first inference model. The model deployment message may carry at least one of the following: identification information of the first inference model, version information of the first inference model, the first inference model, usage information of the first inference model, and the certification result.
The model inference entity may determine that the first inference model is successfully certified when the certification result in the model deployment message is consistent with the certification result in the model certification pass notification message. For example, the model inference entity stores, according to the model certification pass notification message, a correspondence between the model identifier and/or model version and the certification result; when the model inference entity determines, according to the model identifier and/or version in the model deployment message, that the stored corresponding certification result is consistent with the certification result in the model deployment message, it determines that the first inference model is successfully certified. The model inference entity may then deploy the first inference model.
Optionally, in response to the model deployment message, the model inference entity sends a model deployment response to the model management entity, where the model deployment response indicates the deployment status of the first inference model.
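A minimal sketch of the consistency check just described, assuming for illustration that the certification result can be reduced to a boolean and that the pair (model identifier, model version) is a sufficient lookup key; both simplifications are made here only to keep the example short.

```python
from typing import Dict, Tuple

class ModelInferenceEntity:
    def __init__(self) -> None:
        # Populated from the model certification pass notification message (S308).
        self._certified: Dict[Tuple[str, str], bool] = {}

    def on_certification_notification(self, model_id: str, version: str, result: bool) -> None:
        """Store the correspondence between model identifier/version and certification result."""
        self._certified[(model_id, version)] = result

    def on_deployment_message(self, model_id: str, version: str, result: bool, model: bytes) -> str:
        """Deploy only when the stored and delivered certification results agree and indicate success."""
        stored = self._certified.get((model_id, version))
        if stored is not None and stored == result and result:
            # ... load `model` into the inference runtime here ...
            return "deployed"
        return "rejected"
```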
The model management entity may handle the case in which the certification result indicates that the first inference model fails certification in either of two manners, which are described below.
Manner a:
This manner includes steps S311 to S312.
Optionally, S311: The model management entity sends at least one of the following to the model training entity: the certification result, cause information, and first adjustment information.
For related descriptions, refer to the description of step S207 in the method 200; for brevity, details are not repeated here.
For example, the model management entity sends a model optimization request message to the model training entity, where the model optimization request message carries at least one of the following: the certification result, the cause information, and the first adjustment information.
In a possible implementation, the model optimization request message further carries at least one of the following: information about the device provider, identification information of the model, model performance information, or version information of the model.
Optionally, S312: The model training entity determines a second inference model.
For the manner in which the model training entity determines the second inference model, refer to the description of step S208 in the method 200; for brevity, details are not repeated here.
Optionally, the model training entity sends the second inference model to the model management entity; correspondingly, the model management entity receives the second inference model from the model training entity.
For example, in response to the model optimization request message, the model training entity sends a model optimization response message to the model management entity, where the model optimization response message carries the second inference model.
In a possible implementation, the model optimization response message further carries model performance information of the second inference model and usage information of the second inference model, where the usage information of the second inference model may indicate the running conditions of the second inference model, and the model performance information describes the performance that the second inference model can achieve when running under the running conditions indicated by the usage information.
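Manner a therefore reduces to a request/response exchange in which the model management entity hands the context of the failed model to the model training entity and receives an adjusted second inference model in return. The sketch below is a schematic rendering under that reading; every field name and the retrain hook are invented here for illustration and are not defined by the embodiment.

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

@dataclass
class ModelOptimizationRequest:
    certification_result: bool
    cause_info: Optional[str] = None              # why certification failed
    first_adjustment_info: Optional[dict] = None  # e.g. a target accuracy or data constraints
    device_provider: Optional[str] = None
    model_id: Optional[str] = None
    model_version: Optional[str] = None

@dataclass
class ModelOptimizationResponse:
    second_inference_model: bytes
    performance_info: Optional[dict] = None  # performance under the stated running conditions
    usage_info: Optional[dict] = None        # running conditions of the second inference model

def handle_optimization_request(
    req: ModelOptimizationRequest,
    retrain: Callable[[Optional[str], Optional[str], Optional[dict]], Tuple[bytes, dict, dict]],
) -> ModelOptimizationResponse:
    """Model training entity side: adjust or retrain the model and report the result."""
    second_model, perf, usage = retrain(req.model_id, req.model_version, req.first_adjustment_info)
    return ModelOptimizationResponse(second_model, perf, usage)
```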
Manner b:
This manner includes step S313.
S313: The model management entity sends the first inference model and second adjustment information to the model inference entity; correspondingly, the model inference entity receives the first inference model and the second adjustment information from the model management entity.
The second adjustment information instructs the model inference entity to adjust the first inference model.
For example, the model management entity may send a model deployment request message to the model inference entity, where the model deployment request message carries the first inference model and the second adjustment information.
In a possible implementation, the model deployment request message further carries at least one of the following: identification information, version information, model performance information, or usage information of the first inference model.
It should be understood that adjustment or retraining of the first inference model by the model inference entity may mean that a model training function module in the model inference entity adjusts or retrains the inference model.
Thus, in this application, the model management entity can send, to the model certification entity, the first provider information indicating the first model provider that provides the inference model; the model certification entity can compare the first model provider with the device provider, generate a certification result according to the comparison result, and notify the model management entity of the certification result; the model management entity can then take different actions based on different certification results, which can improve the reliability of inference model operation.
When the model certification entity and the model inference entity are different functional modules on the same device, in other words, when the model certification entity integrates the model certification and model running functions, the model management network element sends the model deployment message to the model certification entity, and the model certification entity can, after determining the certification result, determine according to the certification result whether the inference model can be run.
FIG. 4 is a schematic flowchart of a method 400 for model management according to an embodiment of this application.
Optionally, S401: The model management entity obtains an inference model.
For the manner in which the model management entity obtains the inference model, refer to the description in step S301 of the method 300; for brevity, details are not repeated here.
S402: The model management entity sends first provider information to the model certification entity; correspondingly, the model certification entity receives the first provider information from the model management entity.
For a description of the first provider information, refer to the description of the first provider information in step S201 of the method 200; for brevity, details are not repeated here.
For example, the model management entity sends a model deployment message to the model certification entity, where the model deployment message is used to deploy the inference model and carries the first provider information.
Optionally, S403: The model management entity sends the inference model to the model certification entity; correspondingly, the model certification entity receives the inference model from the model management entity.
For example, the model deployment message carries the inference model.
In a possible implementation, the model deployment message may further carry at least one of the following: identification information of the inference model, version information of the inference model, model performance information of the inference model, and usage information of the inference model. For descriptions of the foregoing information, refer to the description in the method 200; for brevity, details are not repeated here.
Optionally, S404: The model certification entity determines, according to the inference model and the first provider information, that the first model provider is the provider of the inference model.
For the manner in which the model certification entity determines whether the inference model is provided by the first model provider, refer to the description of step S210 in the method 200; for brevity, details are not repeated here.
Optionally, if the model certification entity determines that the inference model is provided by the first model provider, the method 400 may perform steps S405 to S408.
S405: The model certification entity compares the first model provider with the device provider to generate a comparison result.
For example, the model certification entity may determine whether the first model provider and the device provider are the same.
For the manner in which the model certification entity performs the comparison to generate the comparison result, refer to the description of step S202 in the method 200; for brevity, details are not repeated here.
Optionally, when the comparison result is that the first model provider and the device provider are not the same, the method 400 may perform step S406.
Optionally, S406: The model certification entity determines the certification result according to the comparison result and performance information.
For example, the model certification entity determines, according to the performance information, whether the inference model passes certification, where the performance information indicates the accuracy of the inference model.
For the manner in which the model certification entity determines, according to the performance information, whether the inference model passes certification, refer to the description of step S205 in the method 200; for brevity, details are not repeated here.
S407: The model certification entity sends the certification result to the model management entity; correspondingly, the model management entity receives the certification result from the model certification entity.
The certification result indicates whether the inference model can run in the model inference entity.
For example, in response to the model deployment message, the model certification entity sends a model deployment response message to the model management entity, where the model deployment response message carries the certification result. For a description of the certification result, refer to the description of step S204 in the method 200; for brevity, details are not repeated here.
Optionally, S408: The model certification entity sends cause information to the model management entity; correspondingly, the model management entity receives the cause information from the model certification entity.
The cause information indicates the reason for the certification result. For a description of the cause information, refer to the description of step S206 in the method 200; for brevity, details are not repeated here.
For example, the model deployment response message carries the cause information.
Further, the model certification entity and the model management entity can process the inference model according to the certification result:
When the certification result indicates that the inference model can run in the model inference entity, the model certification entity can notify the model inference entity, through an internal interface, to run the inference model.
When the certification result indicates that the inference model cannot run in the model inference entity, the model management entity can handle the case in which the inference model fails certification in either manner a or manner b described in the method 300; for brevity, details are not repeated here.
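In the method 400 the certification and running functions sit in one device, so the handling after the certification result is determined reduces to a local dispatch. The sketch below illustrates that dispatch under the assumptions of this description; the run_model, request_retraining, and adjust_locally hooks are placeholders invented here, not interfaces defined by the embodiment.

```python
from typing import Callable

def handle_certification_result(passed: bool, manner: str,
                                run_model: Callable[[], None],
                                request_retraining: Callable[[], None],
                                adjust_locally: Callable[[], None]) -> None:
    """Dispatch in an integrated certification + inference entity after S406/S407."""
    if passed:
        # Internal interface: tell the collocated model inference module to run the model.
        run_model()
    elif manner == "a":
        # Manner a: ask the model training entity for an adjusted second inference model.
        request_retraining()
    else:
        # Manner b: adjust or retrain locally in the inference entity's model training module.
        adjust_locally()
```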
Thus, in this application, the model management entity can send, to the model certification entity, the first provider information indicating the first model provider that provides the inference model; the model certification entity can compare the first model provider with the device provider, generate a certification result according to the comparison result, and notify the model management entity of the certification result; the model management entity can then take different actions based on different certification results, which can improve the reliability of inference model operation.
FIG. 5 and FIG. 6 are schematic structural diagrams of possible communication apparatuses according to embodiments of this application. These communication apparatuses can be used to implement the functions of the model certification entity and the model management entity in the foregoing method embodiments, and therefore can also achieve the beneficial effects of the foregoing method embodiments. In the embodiments of this application, the communication apparatus may be a model certification entity or a model management entity, or may be a module (for example, a chip) applied to a model certification entity or a model management entity.
As shown in FIG. 5, a communication apparatus 500 includes a processing module 510 and a transceiver module 520. The communication apparatus 500 is configured to implement the functions of the model certification entity or the model management entity in the method embodiment shown in FIG. 2. Alternatively, the communication apparatus 500 may include a module configured to implement any function or operation of the model certification entity or the model management entity in the method embodiment shown in FIG. 2, and the module may be implemented entirely or partially by software, hardware, firmware, or any combination thereof.
When the communication apparatus 500 is configured to implement the functions of the model certification entity in the method embodiment shown in FIG. 2, the transceiver module 520 is configured to receive first provider information from the model management entity, where the first provider information indicates that the inference model is provided by the first model provider; the processing module 510 is configured to compare the first model provider with the device provider of the model inference entity to generate a comparison result, where the model inference entity is the entity that will run the inference model; and the transceiver module 520 is further configured to send a certification result of the inference model to the model management entity based on the comparison result, where the certification result indicates whether the inference model can run in the model inference entity.
Thus, in this application, the model management entity can send, to the model certification entity, the first provider information indicating the first model provider that provides the inference model; the model certification entity can compare the first model provider with the device provider, generate a certification result according to the comparison result, and notify the model management entity of the certification result; the model management entity can then take different actions based on different certification results, which can improve the reliability of inference model operation.
For more detailed descriptions of the foregoing processing module 510 and transceiver module 520, refer directly to the related descriptions in the method embodiment shown in FIG. 2; details are not repeated here.
When the communication apparatus 500 is configured to implement the functions of the model management entity in the method embodiment shown in FIG. 2, the transceiver module 520 is configured to send first provider information to the model certification entity, where the first provider information indicates that the first inference model is provided by the first model provider; the transceiver module 520 is further configured to receive a certification result of the first inference model from the model certification entity, where the certification result indicates whether the first inference model can run in the model inference entity.
Thus, in this application, the model management entity can send, to the model certification entity, the first provider information indicating the first model provider that provides the inference model; the model certification entity can compare the first model provider with the device provider, generate a certification result according to the comparison result, and notify the model management entity of the certification result; the model management entity can then take different actions based on different certification results, which can improve the reliability of inference model operation.
For more detailed descriptions of the foregoing processing module 510 and transceiver module 520, refer directly to the related descriptions in the method embodiment shown in FIG. 2; details are not repeated here.
As shown in FIG. 6, a communication apparatus 600 includes a processor 610 and an interface circuit 620. The processor 610 and the interface circuit 620 are coupled to each other. It can be understood that the interface circuit 620 may be a transceiver or an input/output interface. Optionally, the communication apparatus 600 may further include a memory 630, configured to store instructions executed by the processor 610, or to store input data required by the processor 610 to run the instructions, or to store data generated after the processor 610 runs the instructions.
When the communication apparatus 600 is configured to implement the method shown in FIG. 2 or FIG. 4, the processor 610 is configured to implement the functions of the foregoing processing module 510, and the interface circuit 620 is configured to implement the functions of the foregoing transceiver module 520.
When the foregoing communication apparatus is a chip applied to the model certification entity, the model certification entity chip implements the functions of the model certification entity in the foregoing method embodiments. The model certification entity chip receives information from another module (for example, a radio frequency module or an antenna) in the model certification entity, where the information is sent by the model management entity to the model certification entity; or the model certification entity chip sends information to another module (for example, a radio frequency module or an antenna) in the model certification entity, where the information is sent by the model certification entity to the model management entity.
When the foregoing communication apparatus is a chip applied to the model management entity, the model management entity chip implements the functions of the model management entity in the foregoing method embodiments. The model management entity chip receives information from another module (for example, a radio frequency module or an antenna) in the model management entity, where the information is sent by the model certification entity to the model management entity; or the model management entity chip sends information to another module (for example, a radio frequency module or an antenna) in the model management entity, where the information is sent by the model management entity to the model certification entity.
It can be understood that the processor in the embodiments of this application may be a central processing unit (Central Processing Unit, CPU), or may be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof. The general-purpose processor may be a microprocessor or any conventional processor.
The memory in the embodiments of this application may be a random access memory (Random Access Memory, RAM), a flash memory, a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), a register, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium known in the art. An example storage medium is coupled to the processor, so that the processor can read information from the storage medium and write information to the storage medium. Certainly, the storage medium may alternatively be a component of the processor. The processor and the storage medium may be located in an ASIC. In addition, the ASIC may be located in a network device or a terminal device. Certainly, the processor and the storage medium may alternatively exist in the network device or the terminal device as discrete components.
All or some of the foregoing embodiments may be implemented by software, hardware, firmware, or any combination thereof. When software is used for implementation, all or some of the embodiments may be implemented in the form of a computer program product. The computer program product includes one or more computer programs or instructions. When the computer programs or instructions are loaded and executed on a computer, the procedures or functions described in the embodiments of this application are performed in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, a network device, a terminal device, or another programmable apparatus. The computer programs or instructions may be stored in a computer-readable storage medium or transmitted through the computer-readable storage medium. The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device, such as a server, that integrates one or more usable media. The usable medium may be a magnetic medium, for example, a floppy disk, a hard disk, or a magnetic tape; or an optical medium, for example, a DVD; or a semiconductor medium, for example, a solid state disk (solid state disk, SSD).
In the embodiments of this application, unless otherwise specified or in the case of a logical conflict, the terms and/or descriptions in different embodiments are consistent and may be mutually referenced, and the technical features in different embodiments may be combined based on their internal logical relationships to form new embodiments.
It should be understood that, in the embodiments of this application, the ordinal numbers "first", "second", and so on are merely intended to distinguish between different objects, for example, between different network devices, and do not constitute any limitation on the scope of the embodiments of this application; the embodiments of this application are not limited thereto.
It should also be understood that, in this application, "when" and "if" both mean that an entity performs corresponding processing in a given objective circumstance; they do not limit the time, do not require the entity to perform a determining action during implementation, and do not imply any other limitation.
It should also be understood that, in the embodiments of this application, "B corresponding to A" means that B is associated with A, and B may be determined according to A. However, it should also be understood that determining B according to A does not mean that B is determined only according to A; B may alternatively be determined according to A and/or other information.
It should also be understood that the term "and/or" in this specification describes only an association relationship between associated objects and indicates that three relationships may exist. For example, A and/or B may represent the following three cases: only A exists, both A and B exist, and only B exists. In addition, the character "/" in this specification generally indicates an "or" relationship between the associated objects.
Unless otherwise specified, an expression in this application similar to "an item includes one or more of the following: A, B, and C" usually means that the item may be any one of the following: A; B; C; A and B; A and C; B and C; A, B, and C; A and A; A, A, and A; A, A, and B; A, A, and C; A, B, and B; A, C, and C; B and B; B, B, and B; B, B, and C; C and C; C, C, and C; and other combinations of A, B, and C. The foregoing uses the three elements A, B, and C as an example to describe the optional cases of the item. When the expression is "the item includes at least one of the following: A, B, ..., and X", that is, when more elements are included in the expression, the cases to which the item is applicable may also be obtained according to the foregoing rules.
It can be understood that the various numbers used in the embodiments of this application are merely for ease of description and are not intended to limit the scope of the embodiments of this application. The sequence numbers of the foregoing processes do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic.

Claims (43)

1. A method for model management, characterized in that the method comprises:
receiving, by a model certification entity, first provider information from a model management entity, wherein the first provider information indicates that an inference model is provided by a first model provider;
comparing, by the model certification entity, the first model provider with a device provider of a model inference entity to generate a comparison result, wherein the model inference entity is an entity that will run the inference model; and
sending, by the model certification entity, a certification result of the inference model to the model management entity based on the comparison result, wherein the certification result indicates whether the inference model can run in the model inference entity.
2. The method according to claim 1, characterized in that:
the comparison result is that the first model provider and the device provider are not the same, an accuracy of the inference model is less than a first threshold, and the certification result indicates that the inference model cannot run in the model inference entity; or
the comparison result is that the first model provider and the device provider are not the same, the accuracy of the inference model is greater than or equal to the first threshold, and the certification result indicates that the inference model can run in the model inference entity.
3. The method according to claim 1 or 2, characterized in that, before the model certification entity compares the first model provider with the device provider, the method further comprises:
receiving, by the model certification entity, the inference model from the model management entity; and
determining, by the model certification entity, that the first model provider is the provider of the inference model.
4. The method according to claim 3, characterized in that the determining, by the model certification entity, that the first model provider is the provider of the inference model comprises:
determining, by the model certification entity, that a first format and a second format are consistent; or
determining, by the model certification entity, that a second model provider and the first model provider are the same; or
determining, by the model certification entity, that the first format and the second format are consistent and that the second model provider and the first model provider are the same,
wherein the first format is a format of the inference model, the second format is determined according to the first provider information, and the second model provider is determined according to the inference model.
5. The method according to any one of claims 1 to 4, characterized in that the method further comprises:
sending, by the model certification entity, cause information to the model management entity, wherein the cause information indicates a reason for the certification result.
6. The method according to claim 1, characterized in that the comparison result is that the first model provider and the device provider are the same, and the certification result indicates that the inference model can run in the model inference entity.
7. The method according to any one of claims 1 to 6, characterized in that, before the model certification entity compares the first model provider with the device provider, the method further comprises:
receiving, by the model certification entity, second provider information from the model management entity, wherein the second provider information indicates the device provider.
8. A method for model management, characterized in that the method comprises:
sending, by a model management entity, first provider information to a model certification entity, wherein the first provider information indicates that a first inference model is provided by a first model provider; and
receiving, by the model management entity, a certification result of the first inference model from the model certification entity, wherein the certification result indicates whether the first inference model can run in a model inference entity.
9. The method according to claim 8, characterized in that:
the first model provider and a device provider of the model inference entity are not the same, an accuracy of the first inference model is less than a first threshold, and the certification result indicates that the first inference model cannot run in the model inference entity; or
the first model provider and the device provider are not the same, the accuracy of the first inference model is greater than or equal to the first threshold, and the certification result indicates that the first inference model can run in the model inference entity.
10. The method according to claim 8 or 9, characterized in that the method further comprises:
sending, by the model management entity, the first inference model to the model certification entity.
11. The method according to any one of claims 8 to 10, characterized in that the method further comprises:
receiving, by the model management entity, cause information from the model certification entity, wherein the cause information indicates a reason for the certification result.
12. The method according to any one of claims 8 to 11, characterized in that the certification result indicates that the first inference model cannot run in the model inference entity, and the method further comprises:
sending, by the model management entity, at least one of the following to a model training entity: the certification result, cause information, and first adjustment information, wherein the cause information indicates a reason for the certification result, and the first adjustment information instructs the model training entity to adjust the first inference model; and
receiving, by the model management entity, a second inference model from the model training entity, wherein the second inference model is obtained by adjusting the first inference model.
13. The method according to any one of claims 8 to 11, characterized in that the certification result indicates that the first inference model cannot run in the model inference entity, and the method further comprises:
sending, by the model management entity, the first inference model and second adjustment information to the model inference entity, wherein the second adjustment information instructs the model inference entity to adjust the first inference model.
14. The method according to claim 8, characterized in that the first model provider and a device provider of the model inference entity are the same, and the certification result indicates that the first inference model can run in the model inference entity.
15. The method according to any one of claims 8 to 14, characterized in that, before the certification result of the first inference model is received from the model certification entity, the method further comprises:
sending, by the model management entity, second provider information to the model certification entity, wherein the second provider information indicates the device provider of the model inference entity.
16. A method for model management, characterized in that the method comprises:
sending, by a model management entity, first provider information to a model certification entity, and receiving, by the model certification entity, the first provider information from the model management entity, wherein the first provider information indicates that a first inference model is provided by a first model provider;
comparing, by the model certification entity, the first model provider with a device provider of a model inference entity to generate a comparison result, wherein the model inference entity is an entity that will run the first inference model; and
sending, by the model certification entity, a certification result of the first inference model to the model management entity based on the comparison result, and receiving, by the model management entity, the certification result from the model certification entity, wherein the certification result indicates whether the first inference model can run in the model inference entity.
17. The method according to claim 16, characterized in that:
the comparison result is that the first model provider and the device provider are not the same, an accuracy of the first inference model is less than a first threshold, and the certification result indicates that the first inference model cannot run in the model inference entity; or
the comparison result is that the first model provider and the device provider are not the same, the accuracy of the first inference model is greater than or equal to the first threshold, and the certification result indicates that the first inference model can run in the model inference entity.
18. The method according to claim 16 or 17, characterized in that, before the model certification entity compares the first model provider with the device provider, the method further comprises:
sending, by the model management entity, the first inference model to the model certification entity, and receiving, by the model certification entity, the first inference model from the model management entity; and
determining, by the model certification entity, that the first model provider is the provider of the first inference model.
19. The method according to claim 18, wherein the determining, by the model certification entity, that the first model provider is the provider of the first inference model comprises:
determining, by the model certification entity, that a first format and a second format are consistent; or
determining, by the model certification entity, that a second model provider and the first model provider are the same; or
determining, by the model certification entity, that the first format and the second format are consistent and that the second model provider and the first model provider are the same,
wherein the first format is a format of the first inference model, the second format is determined according to the first provider information, and the second model provider is determined according to the first inference model.
20. The method according to any one of claims 16 to 19, characterized in that the method further comprises:
sending, by the model certification entity, cause information to the model management entity, and receiving, by the model management entity, the cause information from the model certification entity, wherein the cause information indicates a reason for the certification result.
21. The method according to any one of claims 16 to 20, characterized in that the certification result indicates that the first inference model cannot run in the model inference entity, and the method further comprises:
sending, by the model management entity, at least one of the following to a model training entity, and receiving, by the model training entity, the at least one of the following from the model management entity: the certification result, cause information, and first adjustment information, wherein the cause information indicates a reason for the certification result, and the first adjustment information instructs the model training entity to adjust the first inference model; and
sending, by the model training entity, a second inference model to the model management entity, and receiving, by the model management entity, the second inference model from the model training entity, wherein the second inference model is obtained by adjusting the first inference model.
22. The method according to any one of claims 16 to 20, characterized in that the certification result indicates that the first inference model cannot run in the model inference entity, and the method further comprises:
sending, by the model management entity, the first inference model and second adjustment information to the model inference entity, and receiving, by the model inference entity, the first inference model and the second adjustment information from the model management entity, wherein the second adjustment information instructs the model inference entity to adjust the first inference model.
23. The method according to claim 16, characterized in that the comparison result is that the first model provider and the device provider are the same, and the certification result indicates that the first inference model can run in the model inference entity.
24. The method according to any one of claims 16 to 23, characterized in that, before the model certification entity sends the certification result of the first inference model to the model management entity based on the comparison result, the method further comprises:
sending, by the model management entity, second provider information to the model certification entity, and receiving, by the model certification entity, the second provider information from the model management entity, wherein the second provider information indicates the device provider of the model inference entity.
25. A communication apparatus for model management, characterized by comprising:
a transceiver module, configured to receive first provider information from a model management entity, wherein the first provider information indicates that an inference model is provided by a first model provider; and
a processing module, configured to compare the first model provider with a device provider of a model inference entity to generate a comparison result, wherein the model inference entity is an entity that will run the inference model,
wherein the transceiver module is further configured to send a certification result of the inference model to the model management entity based on the comparison result, and the certification result indicates whether the inference model can run in the model inference entity.
26. The communication apparatus according to claim 25, characterized in that:
the comparison result is that the first model provider and the device provider are not the same, an accuracy of the inference model is less than a first threshold, and the certification result indicates that the inference model cannot run in the model inference entity; or
the comparison result is that the first model provider and the device provider are not the same, the accuracy of the inference model is greater than or equal to the first threshold, and the certification result indicates that the inference model can run in the model inference entity.
27. The communication apparatus according to claim 25 or 26, characterized in that the transceiver module is further configured to:
receive the inference model from the model management entity; and
the processing module is further configured to:
determine that the first model provider is the provider of the inference model.
28. The communication apparatus according to claim 27, characterized in that the processing module is specifically configured to:
determine that a first format and a second format are consistent; or
determine that a second model provider and the first model provider are the same; or
determine that the first format and the second format are consistent and that the second model provider and the first model provider are the same,
wherein the first format is a format of the inference model, the second format is determined according to the first provider information, and the second model provider is determined according to the inference model.
29. The communication apparatus according to any one of claims 25 to 28, characterized in that the transceiver module is further configured to:
send cause information to the model management entity, wherein the cause information indicates a reason for the certification result.
30. The communication apparatus according to claim 25, characterized in that the comparison result is that the first model provider and the device provider are the same, and the certification result indicates that the inference model can run in the model inference entity.
31. The communication apparatus according to any one of claims 25 to 30, characterized in that the transceiver module is further configured to:
receive second provider information from the model management entity, wherein the second provider information indicates the device provider.
  32. A communication apparatus for model management, comprising:
    a processing module, configured to generate first provider information, wherein the first provider information indicates that a first inference model is provided by a first model provider;
    a transceiver module, configured to send the first provider information to a model authentication entity;
    the transceiver module is further configured to receive an authentication result of the first inference model from the model authentication entity, wherein the authentication result indicates whether the first inference model is able to run in a model inference entity.
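The model-management-entity side of claim 32 amounts to a simple request/response exchange. The sketch below assumes a plain callable standing in for the transport to the model authentication entity; the field names and the stub are illustrative assumptions only.

```python
from typing import Any, Callable, Mapping


def request_authentication(send_to_auth_entity: Callable[[Mapping[str, Any]], Mapping[str, Any]],
                           model_id: str,
                           first_model_provider: str) -> Mapping[str, Any]:
    """Generate first provider information, send it to the model authentication
    entity, and return the authentication result received in reply (claim 32)."""
    first_provider_info = {
        "model_id": model_id,
        # Indicates that the first inference model is provided by this provider.
        "provider": first_model_provider,
    }
    # The reply is expected to state whether the first inference model can run in
    # the model inference entity, optionally with cause information (claim 35).
    return send_to_auth_entity(first_provider_info)


# Example with a stub standing in for the model authentication entity:
result = request_authentication(lambda info: {"can_run": True, "reason": "providers match"},
                                model_id="model-1",
                                first_model_provider="VendorA")
print(result["can_run"])  # True
```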
  33. The communication apparatus according to claim 32, wherein the first model provider and a device provider of the model inference entity are inconsistent, an accuracy of the first inference model is less than a first threshold, and the authentication result indicates that the first inference model is not able to run in the model inference entity; or
    the first model provider and the device provider are inconsistent, the accuracy of the first inference model is greater than or equal to the first threshold, and the authentication result indicates that the first inference model is able to run in the model inference entity.
  34. The communication apparatus according to claim 32 or 33, wherein the transceiver module is further configured to:
    send the first inference model to the model authentication entity.
  35. The communication apparatus according to any one of claims 32 to 34, wherein the transceiver module is further configured to:
    receive cause information from the model authentication entity, wherein the cause information indicates a reason for the authentication result.
  36. The communication apparatus according to any one of claims 32 to 35, wherein the authentication result indicates that the first inference model is not able to run in the model inference entity, and the transceiver module is further configured to:
    send at least one of the following to a model training entity: the authentication result, cause information, and first adjustment information, wherein the cause information indicates a reason for the authentication result, and the first adjustment information instructs the model training entity to adjust the first inference model;
    receive a second inference model from the model training entity, wherein the second inference model is obtained by adjusting the first inference model.
  37. The communication apparatus according to any one of claims 32 to 36, wherein the authentication result indicates that the first inference model is not able to run in the model inference entity, and the transceiver module is further configured to:
    send the first inference model and second adjustment information to the model inference entity, wherein the second adjustment information instructs the model inference entity to adjust the first inference model.
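Claims 36 and 37 describe two fallback paths when the authentication result is negative: obtain an adjusted (second) model from the model training entity, or hand the first model plus adjustment instructions to the model inference entity. The sketch below assumes hypothetical duck-typed entity interfaces (`notify`, `receive_adjusted_model`, `deploy`) purely for illustration.

```python
def handle_failed_authentication(auth_result: dict,
                                 first_model: bytes,
                                 training_entity=None,
                                 inference_entity=None):
    """Fallback handling for a negative authentication result (claims 36 and 37),
    using hypothetical entity interfaces."""
    if auth_result.get("can_run", False):
        return first_model  # nothing to do: the model was authenticated to run

    if training_entity is not None:
        # Claim 36: forward the result, cause information and first adjustment
        # information, then receive a second inference model adjusted from the first.
        training_entity.notify(result=auth_result,
                               reason=auth_result.get("reason"),
                               adjustment="first adjustment information")
        return training_entity.receive_adjusted_model()

    if inference_entity is not None:
        # Claim 37: send the first inference model together with second adjustment
        # information, so the model inference entity adjusts the model itself.
        inference_entity.deploy(model=first_model,
                                adjustment="second adjustment information")
    return first_model
```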
  38. The communication apparatus according to claim 32, wherein the first model provider and a device provider of the model inference entity are consistent, and the authentication result indicates that the first inference model is able to run in the model inference entity.
  39. The communication apparatus according to any one of claims 32 to 38, wherein the transceiver module is further configured to:
    send second provider information to the model authentication entity, wherein the second provider information indicates the device provider of the model inference entity.
  40. A communication apparatus, comprising at least one processor, wherein the at least one processor is configured to execute a computer program stored in a memory, so that the apparatus implements the method according to any one of claims 1 to 7, the method according to any one of claims 8 to 15, or the method according to any one of claims 16 to 24.
  41. A computer-readable storage medium, comprising a computer program which, when run on a computer, causes the computer to perform the method according to any one of claims 1 to 7, the method according to any one of claims 8 to 15, or the method according to any one of claims 16 to 24.
  42. A computer program product which, when read and executed by a computer, causes the computer to perform the method according to any one of claims 1 to 7, the method according to any one of claims 8 to 15, or the method according to any one of claims 16 to 24.
  43. A communication system for model management, comprising the communication apparatus according to any one of claims 25 to 31 and the communication apparatus according to any one of claims 32 to 39.

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210178362.4A CN116703422A (en) 2022-02-25 2022-02-25 Model management method and communication device
CN202210178362.4 2022-02-25

Publications (1)

Publication Number Publication Date
WO2023160508A1 (en) 2023-08-31

Family

ID=87764738

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/077287 WO2023160508A1 (en) 2022-02-25 2023-02-21 Model management method and communication apparatus

Country Status (2)

Country Link
CN (1) CN116703422A (en)
WO (1) WO2023160508A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050066325A1 (en) * 2003-09-18 2005-03-24 Brother Kogyo Kabushiki Kaisha Software install program product, installation method, and software install system
EP2541397A1 (en) * 2011-06-30 2013-01-02 Siemens Aktiengesellschaft Method for compatibility checking when installing a software component
US20140025537A1 (en) * 2012-07-23 2014-01-23 Cellco Partnership D/B/A Verizon Wireless Verifying accessory compatibility with a mobile device
CN106528415A (en) * 2016-10-27 2017-03-22 广东浪潮大数据研究有限公司 Software compatibility test method, business platform and system
CN112561044A (en) * 2019-09-26 2021-03-26 西安闻泰电子科技有限公司 Neural network model acceleration method and device, server and storage medium

Also Published As

Publication number Publication date
CN116703422A (en) 2023-09-05

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application
    Ref document number: 23759133
    Country of ref document: EP
    Kind code of ref document: A1