CN116703422A - Model management method and communication device - Google Patents

Model management method and communication device

Info

Publication number
CN116703422A
CN116703422A (application CN202210178362.4A)
Authority
CN
China
Prior art keywords
model
entity
inference
provider
authentication
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210178362.4A
Other languages
Chinese (zh)
Inventor
黄谢田
曹龙雨
于益俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202210178362.4A priority Critical patent/CN116703422A/en
Priority to PCT/CN2023/077287 priority patent/WO2023160508A1/en
Publication of CN116703422A publication Critical patent/CN116703422A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/04 Inference or reasoning models
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/018 Certifying business or products

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Development Economics (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Finance (AREA)
  • Economics (AREA)
  • Accounting & Taxation (AREA)
  • Computational Linguistics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The application provides a model management method and a communication apparatus. In the method, a model authentication entity receives first provider information from a model management entity, the first provider information indicating that an inference model is provided by a first model provider. The model authentication entity compares the first model provider with the device provider of a model inference entity, i.e., the entity that is to run the inference model, to generate a comparison result. Based on the comparison result, the model authentication entity sends an authentication result of the inference model to the model management entity, the authentication result indicating whether the inference model can run in the model inference entity, thereby improving the reliability of running the inference model.

Description

Model management method and communication device
Technical Field
The present application relates to the field of communication technology, and more particularly, to a method of model management and a communication apparatus.
Background
To improve the intelligence and automation of networks, inference models, such as artificial intelligence (artificial intelligence, AI) models and machine learning (machine learning, ML) models, are used in more and more technical fields. Inference models provided by model providers (e.g., vendors or operators) can be published to a model marketplace. A model management entity may obtain an inference model from the model marketplace and deploy it in a model inference entity, i.e., a device supplied by a device provider, which then runs the deployed inference model.
However, because different providers differ in private data, expert experience, and the like, a device from one provider may not function properly when running an inference model from another provider.
Disclosure of Invention
The application provides a model management method and a communication apparatus, which are used to improve the reliability of running an inference model.
In a first aspect, a method of model management is provided. The method may be performed by a model authentication entity or by a chip in the model authentication entity, and includes: the model authentication entity receives first provider information from a model management entity, the first provider information indicating that an inference model is provided by a first model provider; the model authentication entity compares the first model provider with the device provider of a model inference entity to generate a comparison result, where the model inference entity is the entity that is to run the inference model; and the model authentication entity sends an authentication result of the inference model to the model management entity based on the comparison result, the authentication result indicating whether the inference model can run in the model inference entity.
Therefore, in the application, the model management entity can send the model authentication entity first provider information indicating the first model provider that provides the inference model; the model authentication entity can compare the first model provider with the device provider and notify the model management entity of an authentication result generated from the comparison; and the model management entity can then follow different procedures for different authentication results, which improves the reliability of running the inference model.
With reference to the first aspect, in some implementations of the first aspect, the comparison result is that the first model provider and the device provider are inconsistent and the accuracy of the inference model is less than a first threshold, and the authentication result indicates that the inference model cannot run in the model inference entity; or the comparison result is that the first model provider and the device provider are inconsistent and the accuracy of the inference model is greater than or equal to the first threshold, and the authentication result indicates that the inference model can run in the model inference entity.
Therefore, in the application, if the model authentication entity determines that the first model provider of the inference model and the device provider are inconsistent, it can further determine the accuracy of the inference model and decide, according to that accuracy, whether the inference model passes authentication, which improves the flexibility of the system.
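The decision described in these implementations can be sketched as follows. This is a minimal illustration, not the claimed method itself: the function name, the default threshold value, and the string representation of providers are all assumptions for the sketch.

```python
def authenticate(model_provider: str, device_provider: str,
                 model_accuracy: float, accuracy_threshold: float = 0.9) -> bool:
    """Sketch of the model authentication decision.

    Returns True if the inference model may run in the model inference
    entity, False otherwise.
    """
    if model_provider == device_provider:
        # Providers are consistent: the model passes authentication.
        return True
    # Providers are inconsistent: fall back to the accuracy comparison
    # against the first threshold.
    return model_accuracy >= accuracy_threshold
```

For example, a model from `vendor_a` targeted at a `vendor_b` device would pass only if its measured accuracy reaches the threshold.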
With reference to the first aspect, in certain implementations of the first aspect, before the model authentication entity compares the first model provider and the device provider, the method further includes: the model authentication entity receives the inference model from the model management entity; and the model authentication entity determines that the first model provider is the provider of the inference model.
Therefore, in the application, before determining whether the first model provider and the device provider are consistent, the model authentication entity can verify that the first model provider actually provided the inference model, that is, verify the authenticity of the first provider information, which improves the reliability of model operation.
With reference to the first aspect, in certain implementations of the first aspect, the model authentication entity determines that the first model provider is a provider of the inference model, including: the model authentication entity determines that the first format is consistent with the second format; alternatively, the model certification entity determines that the second model provider is consistent with the first model provider; alternatively, the model authentication entity determines that the first format and the second format are consistent, and that the second model provider and the first model provider are consistent; wherein the first format is a format of an inference model, the second format is determined based on first provider information, and the second model provider is determined based on the inference model.
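The consistency checks just listed might be sketched as below. The record fields, and the idea that a provider identifier can be read out of the model itself, are assumptions for illustration only; per the implementations above, either check alone or both together may be used.

```python
from dataclasses import dataclass


@dataclass
class InferenceModel:
    file_format: str        # "first format": the actual format of the model
    embedded_provider: str  # "second model provider": determined from the model


def is_claimed_provider(model: InferenceModel,
                        claimed_provider: str,
                        claimed_format: str) -> bool:
    """Verify that the first model provider named in the first provider
    information really provided the inference model.

    claimed_format plays the role of the "second format", i.e. the format
    determined based on the first provider information.
    """
    format_ok = model.file_format == claimed_format            # first vs. second format
    provider_ok = model.embedded_provider == claimed_provider  # second vs. first provider
    return format_ok and provider_ok
```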
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: the model authentication entity sends cause information to the model management entity, the cause information indicating a cause for deriving the authentication result.
Therefore, in the application, the model authentication entity can inform the model management entity of the reason why the model failed authentication, and the model management entity can follow different procedures for different reasons, which improves the reliability of model operation.
With reference to the first aspect, in certain implementations of the first aspect, the comparison result is that the first model provider and the device provider agree, and the authentication result indicates that the inference model is capable of running in the model inference entity.
Therefore, in the application, the model authentication entity can determine whether the first model provider of the inference model is consistent with the device provider of the model inference entity; if they are consistent, the inference model passes authentication, and the model management entity can then deploy the authenticated inference model in the model inference entity, which improves the reliability of model operation.
With reference to the first aspect, in certain implementations of the first aspect, before the model authentication entity compares the first model provider and the device provider, the method further includes: the model authentication entity receives second provider information from the model management entity, the second provider information indicating the device provider.
In a second aspect, there is provided a method of model management, the method being executable by a model management entity or a chip in a model management entity, the method comprising: the model management entity sends first provider information to the model authentication entity, the first provider information indicating that the first inference model is provided by the first model provider; the model management entity receives an authentication result of the first inference model from the model authentication entity, the authentication result indicating whether the first inference model is capable of operating in the model inference entity.
Therefore, in the application, the model management entity can send the model authentication entity first provider information indicating the first model provider that provides the inference model; the model authentication entity can compare the first model provider with the device provider and notify the model management entity of an authentication result generated from the comparison; and the model management entity can then follow different procedures for different authentication results, which improves the reliability of running the inference model.
With reference to the second aspect, in certain implementations of the second aspect, the first model provider is inconsistent with a device provider of the model inference entity, an accuracy of the first inference model is less than a first threshold, and the authentication result indicates that the first inference model is not capable of operating in the model inference entity; alternatively, the first model provider and the device provider are inconsistent, the accuracy of the first inference model is greater than or equal to a first threshold, and the authentication result indicates that the first inference model is capable of operating in the model inference entity.
With reference to the second aspect, in certain implementations of the second aspect, the method further includes: the model management entity sends the first inference model to the model authentication entity.
With reference to the second aspect, in certain implementations of the second aspect, the first model provider being the provider of the inference model includes: the first format is consistent with the second format; or the second model provider is consistent with the first model provider; or the first format is consistent with the second format and the second model provider is consistent with the first model provider; where the first format is the format of the inference model, the second format is determined based on the first provider information, and the second model provider is determined based on the inference model.
With reference to the second aspect, in certain implementations of the second aspect, the method further includes: the model management entity receives cause information from the model authentication entity, the cause information indicating a cause for deriving the authentication result.
With reference to the second aspect, in certain implementations of the second aspect, the authentication result indicates that the first inference model cannot run in the model inference entity, and the method further includes: the model management entity sends at least one of the following to a model training entity: the authentication result, cause information, and first adjustment information, where the cause information indicates the reason for the authentication result and the first adjustment information instructs the model training entity to adjust the first inference model; and the model management entity receives a second inference model from the model training entity, the second inference model being obtained by adjusting the first inference model.
Therefore, in the application, when the inference model fails authentication, the inference model can be reacquired from, or retrained by, the model training entity, which improves the flexibility of model management.
With reference to the second aspect, in certain implementations of the second aspect, the authentication result indicates that the first inference model cannot run in the model inference entity, and the method further includes: the model management entity sends the first inference model and second adjustment information to the model inference entity, the second adjustment information instructing the model inference entity to adjust the first inference model.
Therefore, in the application, when the inference model fails authentication, the model inference entity can adjust (e.g., retrain) the inference model before running it, which improves the flexibility of model management.
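A minimal sketch of how the model management entity might branch on the authentication result, following the retraining path described above. The dictionary-based messages and their field names are illustrative assumptions, not message formats defined by the application; the alternative path of sending the model plus adjustment information to the model inference entity would be built analogously.

```python
def handle_authentication_result(passed: bool, cause: str, model: dict) -> dict:
    """Return the message the model management entity would send next.

    On success, deploy the authenticated model; on failure, forward the
    authentication result, cause information, and an adjustment request
    to the model training entity.
    """
    if passed:
        return {"action": "deploy", "model": model}
    return {
        "action": "adjust",
        "authentication_result": "failed",
        "cause_information": cause,
        "first_adjustment_information": {"model_id": model.get("id")},
    }
```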
With reference to the second aspect, in certain implementations of the second aspect, the first model provider is consistent with a device provider of the model inference entity, and the authentication result indicates that the first inference model is capable of operating in the model inference entity.
With reference to the second aspect, in certain implementations of the second aspect, before receiving an authentication result of the first inference model from the model authentication entity, the method further includes: the model management entity sends second provider information to the model authentication entity, the second provider information indicating the device provider.
In a third aspect, a communication apparatus for model management is provided. The apparatus includes a transceiver module and a processing module. The transceiver module is configured to receive first provider information from a model management entity, the first provider information indicating that an inference model is provided by a first model provider; the processing module is configured to compare the first model provider with the device provider of a model inference entity to generate a comparison result, where the model inference entity is the entity that is to run the inference model; and the transceiver module is further configured to send an authentication result of the inference model to the model management entity based on the comparison result, the authentication result indicating whether the inference model can run in the model inference entity.
Therefore, in the application, the model management entity can send the model authentication entity first provider information indicating the first model provider that provides the inference model; the model authentication entity can compare the first model provider with the device provider and notify the model management entity of an authentication result generated from the comparison; and the model management entity can then follow different procedures for different authentication results, which improves the reliability of running the inference model.
With reference to the third aspect, in some implementations of the third aspect, the comparison result is that the first model provider and the device provider are inconsistent and the accuracy of the inference model is less than a first threshold, and the authentication result indicates that the inference model cannot run in the model inference entity; or the comparison result is that the first model provider and the device provider are inconsistent and the accuracy of the inference model is greater than or equal to the first threshold, and the authentication result indicates that the inference model can run in the model inference entity.
With reference to the third aspect, in certain implementations of the third aspect, the transceiver module is further configured to: receiving an inference model from a model management entity; the processing module is also used for: it is determined that the first model provider is the provider of the inference model.
With reference to the third aspect, in some implementations of the third aspect, the processing module is specifically configured to: determining that the first format is consistent with the second format; or determining that the second model provider is consistent with the first model provider; or determining that the first format is consistent with the second format and that the second model provider is consistent with the first model provider; wherein the first format is a format of an inference model, the second format is determined based on first provider information, and the second model provider is determined based on the inference model.
With reference to the third aspect, in certain implementations of the third aspect, the transceiver module is further configured to: send cause information to the model management entity, the cause information indicating the reason for the authentication result.
With reference to the third aspect, in some implementations of the third aspect, the comparison result indicates that the first model provider and the device provider agree, and the authentication result indicates that the inference model is capable of running in a model inference entity.
With reference to the third aspect, in certain implementations of the third aspect, the transceiver module is further configured to: second provider information is received from the model management entity, the second provider information being indicative of the device provider.
In a fourth aspect, a communication apparatus for model management is provided. The apparatus includes a transceiver module and a processing module. The processing module is configured to generate first provider information, the first provider information indicating that a first inference model is provided by a first model provider; the transceiver module is configured to send the first provider information to a model authentication entity; and the transceiver module is further configured to receive an authentication result of the first inference model from the model authentication entity, the authentication result indicating whether the first inference model can run in the model inference entity.
Therefore, in the application, the apparatus can send the model authentication entity first provider information indicating the first model provider that provides the inference model; the model authentication entity can compare the first model provider with the device provider and notify the apparatus of an authentication result generated from the comparison; and the apparatus can then follow different procedures for different authentication results, which improves the reliability of running the inference model.
With reference to the fourth aspect, in some implementations of the fourth aspect, the first model provider and the device provider are inconsistent, an accuracy of the first inference model is less than a first threshold, and the authentication result indicates that the first inference model is not capable of operating in the model inference entity; alternatively, the first model provider and the device provider are inconsistent, the accuracy of the first inference model is greater than or equal to a first threshold, and the authentication result indicates that the first inference model is capable of operating in the model inference entity.
With reference to the fourth aspect, in some implementations of the fourth aspect, the transceiver module is further configured to: the first inference model is sent to a model authentication entity.
With reference to the fourth aspect, in some implementations of the fourth aspect, the first model provider being the provider of the first inference model includes: the first format is consistent with the second format; or the second model provider is consistent with the first model provider; or the first format is consistent with the second format and the second model provider is consistent with the first model provider; where the first format is the format of the inference model, the second format is determined based on the first provider information, and the second model provider is determined based on the inference model.
With reference to the fourth aspect, in some implementations of the fourth aspect, the transceiver module is further configured to: cause information is received from the model authentication entity, the cause information indicating a cause for deriving the authentication result.
With reference to the fourth aspect, in certain implementations of the fourth aspect, the authentication result indicates that the first inference model cannot run in the model inference entity, and the transceiver module is further configured to: send at least one of the following to a model training entity: the authentication result, cause information, and first adjustment information, where the cause information indicates the reason for the authentication result and the first adjustment information instructs the model training entity to adjust the first inference model; and receive a second inference model from the model training entity, the second inference model being obtained by adjusting the first inference model.
With reference to the fourth aspect, in certain implementations of the fourth aspect, the authentication result indicates that the inference model cannot run in the model inference entity, and the transceiver module is further configured to: send the first inference model and second adjustment information to the model inference entity, the second adjustment information instructing the model inference entity to adjust the first inference model.
With reference to the fourth aspect, in some implementations of the fourth aspect, the first model provider and the device provider agree, and the authentication result indicates that the first inference model is capable of running in the model inference entity.
With reference to the fourth aspect, in some implementations of the fourth aspect, the transceiver module is further configured to: second provider information is sent to the model authentication entity, the second provider information indicating a device provider of the model reasoning entity.
In a fifth aspect, a communication device is provided, which may include a processing module, a transmitting unit, and a receiving unit. Alternatively, the transmitting unit and the receiving unit may also be transceiver modules.
When the apparatus is a model authentication entity, the processing module may be a processor, and the transmitting unit and the receiving unit may be transceivers; the apparatus may further include a storage unit, which may be a memory; the storage unit is configured to store instructions, and the processing module executes the instructions stored in the storage unit, so that the model authentication entity performs any one of the methods of the first aspect. When the apparatus is a chip in a model authentication entity, the processing module may be a processor, and the transmitting unit and the receiving unit may be input/output interfaces, pins, circuits, or the like; the processing module executes instructions stored in a storage unit to cause the chip to perform any of the methods of the first aspect. The storage unit may be a storage unit in the chip (for example, a register or a cache), or may be a storage unit in the model authentication entity that is located outside the chip (for example, a read-only memory or a random access memory).
When the apparatus is a model management entity, the processing module may be a processor, and the transmitting unit and the receiving unit may be transceivers; the apparatus may further include a storage unit, which may be a memory; the storage unit is configured to store instructions, and the processing module executes the instructions stored in the storage unit, so that the model management entity performs any one of the methods in the second aspect. When the device is a chip within a model management entity, the processing module may be a processor and the transmitting unit and receiving unit may be input/output interfaces, pins or circuits, etc.; the processing module executes instructions stored by the memory unit to cause the chip to perform any of the methods of the second aspect. The storage unit is used for storing instructions, and the storage unit may be a storage unit (for example, a register, a cache, etc.) in the chip, or may be a storage unit (for example, a read-only memory, a random access memory, etc.) located outside the chip in the model management entity.
In a sixth aspect, there is provided a communication device comprising a processor and interface circuitry for receiving signals from or transmitting signals to the processor from or to other communication devices than the communication device, the processor being operable to implement any of the methods of the first or second aspects by logic circuitry or executing code instructions.
In a seventh aspect, there is provided a computer readable storage medium having stored therein a computer program or instructions which, when executed, implement any of the methods of the first or second aspects described above.
In an eighth aspect, there is provided a computer program product comprising instructions which, when executed, implement any of the methods of the first or second aspects described above.
In a ninth aspect, there is provided a computer program comprising code or instructions which, when executed, implement any of the methods of the first or second aspects described above.
In a tenth aspect, a chip system is provided. The chip system includes a processor, may further include a memory, and is configured to implement any of the methods of the first or second aspects. The chip system may consist of a chip, or may include a chip and other discrete devices.
An eleventh aspect provides a communication system comprising the apparatus of any of the third and fourth aspects.
Drawings
FIG. 1 is a schematic diagram of a system for model management as provided by an embodiment of the present application.
Fig. 2 is a schematic flowchart of an exemplary model management method according to an embodiment of the present application.
Fig. 3 is a schematic flow chart of another exemplary model management method provided by an embodiment of the present application.
Fig. 4 is a schematic flowchart of another embodiment of a model management method according to an embodiment of the present application.
Fig. 5 and fig. 6 are schematic structural diagrams of possible communication devices according to an embodiment of the present application.
Detailed Description
The technical scheme of the application will be described below with reference to the accompanying drawings.
The method of the embodiments of the present application can be applied to a long term evolution (long term evolution, LTE) system, a long term evolution-advanced (long term evolution-advanced, LTE-A) system, an enhanced long term evolution (enhanced long term evolution, eLTE) system, or a fifth generation (5th generation, 5G) mobile communication new radio (new radio, NR) system, and can be extended to similar wireless communication systems, such as wireless fidelity (wireless fidelity, WiFi), worldwide interoperability for microwave access (worldwide interoperability for microwave access, WiMAX), and other third generation partnership project (3rd generation partnership project, 3GPP) related systems.
For clarity, some terms in the embodiments of the present application are explained below.
Inference model (also simply referred to as a model): a function, derived from data, that implements a particular capability/mapping. The model may be obtained based on artificial intelligence (artificial intelligence, AI) or machine learning (machine learning, ML) techniques, and may therefore also be referred to as an AI model, an ML model, or the like. Common approaches for generating AI/ML models include supervised learning, unsupervised learning, and reinforcement learning; the corresponding models may be referred to as supervised learning models, unsupervised learning models, and reinforcement learning models. By way of example, a supervised learning model may be a classification model, a prediction model, a regression model, etc., and an unsupervised learning model may be a clustering model. In addition, the model may be obtained based on neural network (neural network, NN) technology, and such a model may also be referred to as a neural network model, a deep learning model, or the like.
Model training: and training by using training data to obtain a usable model.
Model reasoning: and (5) carrying out reasoning or prediction based on the model to generate a reasoning result. In addition, model reasoning entities may be used for model reasoning.
Model deployment: the model is deployed in a model reasoning entity.
Model activation: the model deployed in the model inference entity is activated to begin operation.
Model evaluation: and evaluating whether the performance of the model running in the model reasoning entity meets the requirements.
Model authentication: and judging whether the entity for model training is consistent with the entity deployed by the model, and judging whether the running performance of the model after deployment can reach the expected performance when the entity for model training is inconsistent with the entity deployed by the model.
Model management: the model is managed during the life cycle. For example, model deployment, model activation, model evaluation, model training, and the like.
For ease of understanding, an application scenario of the embodiments of the present application is first described in detail with reference to fig. 1.
Fig. 1 is a schematic block diagram of a communication system to which an embodiment of the present application is applied. The devices that may be involved in the communication system are described first.
1. Model management entity 110: used for managing the model during its life cycle. For example, the model management entity 110 may be a network management system (network management system, NMS).
In an embodiment of the present application, the model management entity 110 may be deployed in an operator device.
2. Model training entity 120: used for obtaining a usable model through training. For example, the model training entity may be an operator platform, a vendor training platform, or another entity that deploys the model training function.
In an embodiment of the present application, the model training entity 120 may publish the trained model to a model market, and the model management entity 110 may obtain the model from the model market and deploy the model to the model inference entity 130. The model market may be deployed in the model management entity 110, the model training entity 120, or independently, which is not particularly limited in the present application.
Additionally, in embodiments of the present application, the provider of model training entity 120 may be referred to as a model provider.
3. Model inference entity 130: used for performing inference or computation based on the model and generating an inference result. For example, the model inference entity 130 may be an element management system (element management system, EMS), a management data analytics function (management data analytics function, MDAF), a radio access network (radio access network, RAN), or a network element in a 5G system (e.g., a network data analytics function (network data analytics function, NWDAF) network element).
In an embodiment of the present application, the model inference entity 130 may be deployed in a vendor device, and the provider of the model inference entity 130 may be referred to as a device provider.
In an embodiment of the present application, the communication system may further comprise a model authentication entity 140:
4. Model authentication entity 140: used for authenticating the model, for example, determining whether the performance of the model meets expectations.
It should be noted that the solution of the present application may be applied to other systems including corresponding entities, and the present application is not limited thereto. It will be appreciated that the entities or functions described above may be network elements in a hardware device, software functions running on dedicated hardware, or virtualized functions instantiated on a platform (e.g., a cloud platform). Alternatively, each of the above entities or functions may be implemented by one device, implemented by a plurality of devices together, or be a functional module within one device, which is not specifically limited in the embodiments of the present application. For example, in an embodiment of the present application, the model management entity 110 and the model training entity 120 may be different functional modules within one device, and the model inference entity 130 and the model authentication entity 140 may be different functional modules within one device.
When a model inference entity runs a model provided by a different provider, the model may fail to run normally. Therefore, embodiments of the present application provide a model management method and a communication device, which can improve the reliability of model operation. The method embodiments shown in fig. 2 to fig. 4 may be combined with each other, and steps in the method embodiments shown in fig. 2 to fig. 4 may reference each other. For example, in an embodiment of the present application, the method embodiments shown in fig. 3 and fig. 4 may each be one implementation of the functionality of the method embodiment shown in fig. 2.
Fig. 2 is a schematic flow chart of a method 200 for model management provided by an embodiment of the present application.
S201, the model management entity transmits the first provider information to the model authentication entity, and correspondingly, the model authentication entity receives the first provider information from the model management entity.
The first provider information indicates that the inference model (i.e., the first inference model) is provided by the first model provider. Alternatively, the first provider information indicates that the first inference model was generated through training by the first model provider.
The first model provider may be a first vendor or a first operator. That the first inference model is provided by the first model provider may mean that the device providing the first inference model belongs to the first model provider, the device training the first inference model belongs to the first model provider, or the device generating the first inference model belongs to the first model provider.
It should be understood that, in the embodiments of the present application, the first model provider may train the first inference model based on its own data, or may acquire data of other providers to train the first inference model, which is not particularly limited in the present application.
The first provider information may be the name or identification information of the first model provider, for example, a vendor such as Huawei, ZTE, or Ericsson, or an operator such as China Mobile or China Telecom.
S202, the model authentication entity compares the first model provider with the device provider of the model inference entity to generate a comparison result, where the model inference entity is the entity that is to run the first inference model.
For example, the model authentication entity determines whether the first model provider is consistent with the device provider providing the model inference entity.
The device provider may be a second vendor or a second operator.
That the model inference entity is the entity that will run the first inference model may mean that the device that is to receive the first inference model (i.e., the model inference entity) is produced or provided by the device provider, or that the device that is to run the first inference model is produced or provided by the device provider.
For example, when the first model provider and the device provider are the same vendor or the same operator, the model authentication entity may determine that the first model provider and the device provider are consistent; when the first model provider and the device provider are different vendors, different operators, or a vendor and an operator, respectively, the model authentication entity determines that the first model provider and the device provider are inconsistent.
Therefore, in the embodiments of the present application, before the first inference model is run, the model authentication entity can first determine whether the provider that trained the first inference model is consistent with the provider of the device that will run the model, so that different results can be handled differently, improving the reliability of running the model.
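The comparison in S202 can be sketched in a few lines of Python. This is an illustrative assumption rather than part of the specification: the specification does not define how provider identities are encoded, so a case-insensitive string match on provider names stands in for the real check.

```python
def compare_providers(first_model_provider: str, device_provider: str) -> bool:
    """Return True when the model provider and the device provider are
    consistent, i.e. the same vendor or the same operator (S202)."""
    return first_model_provider.strip().lower() == device_provider.strip().lower()

# Same vendor -> consistent; different vendors -> inconsistent.
print(compare_providers("Huawei", "Huawei"))  # True
print(compare_providers("Huawei", "ZTE"))     # False
```

In practice the comparison could equally be made on identification information (for example a vendor or operator ID carried in the first and second provider information) rather than on names.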
Alternatively, when the model authentication entity and the model inference entity are deployed in the same device, or when the model authentication entity and the model inference entity are provided by the same provider, information of the device provider may be recorded in the model authentication entity.
Alternatively, when the model authentication entity is not provided by the device provider, the method 200 may obtain the information of the device provider by performing step S203.
Optionally, S203, the model management entity sends second provider information to the model authentication entity, and correspondingly, the model authentication entity receives the second provider information from the model management entity.
The second provider information indicates the device provider.
The second provider information may be the name or identification information of the device provider and/or the name or identification information of the model inference entity. Before sending the second provider information, the model management entity may determine that the first inference model needs to be deployed in the model inference entity of the device provider, and then send the second provider information to the model authentication entity.
S204, the model authentication entity sends an authentication result to the model management entity based on the comparison result, and correspondingly, the model management entity receives the authentication result from the model authentication entity.
The authentication result indicates whether the first inference model can be run in the model inference entity.
For example, the model authentication entity determines, according to the comparison result, whether the first inference model can run in the model inference entity, so as to generate the authentication result. If the model authentication entity determines that the first inference model can run in the model inference entity, the authentication of the first inference model passes. Conversely, if the model authentication entity determines that the first inference model cannot run in the model inference entity, the authentication of the first inference model does not pass.
Optionally, when the comparison result is that the first model provider and the device provider agree, the authentication result indicates that the first inference model is capable of running in the model inference entity. Alternatively, the authentication result indicates that the first model provider and the device provider are consistent. Alternatively, the authentication result instructs the model management entity to deploy the first inference model. Alternatively, the authentication result indicates that the authentication state of the first inference model is passed.
Alternatively, the model authentication entity may also send the authentication result to the model inference entity, which correspondingly receives the authentication result from the model authentication entity.
The authentication result may also indicate an authentication-pass identifier or an authentication-pass password, which may be used by the model inference entity to determine that the authentication of the first inference model passes. For example, the authentication result includes status information indicating that authentication passes, together with the authentication-pass identifier or authentication-pass password.
For example, the model authentication entity may generate the authentication-pass identifier or authentication-pass password according to the version of the first inference model or the identifier of the first inference model, and send the authentication result to the model management entity and the model inference entity, respectively. The model management entity carries the authentication result in a model deployment message for deploying the first inference model, and the model inference entity learns whether the first inference model passes authentication by determining whether the identifier or password carried in the model deployment message is consistent with the identifier or password indicated by the authentication result received from the model authentication entity.
In addition, the model deployment message may also carry at least one of the following information: identification information of the first inference model, version information of the first inference model, usage information of the first inference model.
The usage information of the first inference model may be used to indicate the operating conditions of the first inference model. For example, the usage information may include at least one of the following: the usage pattern of the first inference model, the usage time of the first inference model, and the usage area of the first inference model.
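The authentication-pass identifier and its check by the model inference entity can be sketched as follows. The SHA-256 derivation from the model identifier and version, and the field names of the model deployment message, are assumptions for illustration; the specification fixes neither the derivation nor the message format.

```python
import hashlib

def auth_pass_token(model_id: str, model_version: str) -> str:
    """Derive an authentication-pass identifier from the model identity."""
    return hashlib.sha256(f"{model_id}:{model_version}".encode()).hexdigest()[:16]

def build_deployment_message(model_id, model_version, usage_info, token):
    """Model deployment message carrying the authentication result."""
    return {
        "model_id": model_id,
        "model_version": model_version,
        "usage_info": usage_info,  # e.g. usage pattern, usage time, usage area
        "auth_token": token,
    }

def verify_deployment(message, token_from_auth_entity) -> bool:
    """The model inference entity checks that the identifier in the deployment
    message matches the one received directly from the authentication entity."""
    return message["auth_token"] == token_from_auth_entity

token = auth_pass_token("model-001", "v1.2")
msg = build_deployment_message("model-001", "v1.2", {"area": "cell-7"}, token)
print(verify_deployment(msg, token))  # True
```

A mismatched token (for example, one derived for a different model version) would fail the check, so the inference entity can detect a deployment message that does not correspond to the authenticated model.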
Optionally, the method 200 may further perform step S205 when the comparison result is that the first model provider and the device provider are not identical.
Optionally, S205, the model authentication entity determines an authentication result according to the comparison result and the performance information.
For example, the model authentication entity determines whether the first inference model is authenticated based on performance information indicating the accuracy of the first inference model.
When the comparison result is that the first model provider and the device provider are inconsistent and the accuracy of the first inference model is less than a first threshold, the authentication result indicates that the authentication of the first inference model does not pass; when the comparison result is that the first model provider and the device provider are inconsistent and the accuracy of the first inference model is greater than or equal to the first threshold, the authentication result indicates that the first inference model passes authentication.
Determining, according to the performance information, that the first inference model passes authentication may mean that the first inference model can achieve the expected performance when deployed in the model inference entity; determining, according to the performance information, that the first inference model does not pass authentication may mean that the first inference model cannot achieve the expected performance when deployed in the model inference entity.
The model authentication entity may obtain the performance information before determining whether the first inference model passes authentication; for example, the model authentication entity may obtain the performance information by testing the first inference model.
By way of example and not limitation, the first inference model is used to predict second data in a second time period using first data in a first time period, where the first data and the second data are historical data maintained by the model inference entity; in other words, the first data and the second data are real data in the two time periods. The model authentication entity uses the first data as input data and runs the first inference model to obtain predicted data for the second time period. Further, the model authentication entity may determine the accuracy of the first inference model by determining the degree of matching between the second data and the predicted data, so as to generate the performance information.
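The test described above, together with the threshold decision of S205, can be sketched as follows. The model callable and the exact-match accuracy metric are simplifying assumptions; a real test would use the matching criterion appropriate to the model's task.

```python
def evaluate_accuracy(model, first_data, second_data):
    """Run the model on first-period data and compare its predictions with the
    real second-period data to obtain an accuracy (the performance information)."""
    predictions = [model(x) for x in first_data]
    matches = sum(1 for p, real in zip(predictions, second_data) if p == real)
    return matches / len(second_data)

def authenticate_by_performance(accuracy: float, first_threshold: float) -> bool:
    """Providers are inconsistent: authentication passes only when the
    accuracy reaches the first threshold."""
    return accuracy >= first_threshold

acc = evaluate_accuracy(lambda x: x, [1, 2, 3, 4], [1, 2, 0, 4])
print(acc)                                    # 0.75
print(authenticate_by_performance(acc, 0.8))  # False: authentication not passed
```

The first threshold here plays the role described below: preset, or indicated by the model performance information sent by the model management entity.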
Optionally, the model authentication entity requests the model inference entity to acquire the first data and the second data according to the usage information of the first inference model.
The first threshold may be preset, or may be indicated by model performance information sent by the model management entity to the model authentication entity, where the model performance information is used to describe the performance that the first inference model is expected to achieve under the operating conditions indicated by the usage information.
Thus, even when the first model provider and the device provider are inconsistent, if the performance of the first inference model can reach expectations, it may be determined that the first inference model is authenticated, and the model management entity may deploy the first inference model to the model inference entity.
Optionally, when the first model provider and the device provider are inconsistent and the accuracy of the first inference model is less than the first threshold, the authentication result indicates that the first inference model cannot run in the model inference entity. Alternatively, the authentication result indicates that the authentication of the first inference model does not pass. Alternatively, the authentication result indicates that the first inference model cannot run normally when deployed in the model inference entity. Alternatively, the authentication result indicates that the first inference model cannot achieve the expected performance when deployed to the model inference entity.
Optionally, S206, the model authentication entity sends the cause information to the model management entity, and correspondingly, the model management entity receives the cause information from the model authentication entity.
The cause information indicates a cause of deriving the authentication result.
Alternatively, the model authentication entity may send the cause information to the model management entity only if the authentication result indicates that the first inference model authentication is not passed.
For example, the cause information indicates the reason why it was determined that the first inference model does not pass authentication, e.g., that the performance information of the first inference model does not meet expectations, or that the first model provider and the device provider are inconsistent.
The cause information may include the usage information of the first inference model, the accuracy, and information indicating that the first model provider and the device provider are inconsistent. Here, the usage information and the accuracy may refer to the accuracy achieved when the first inference model is run and tested in the manner indicated by the usage information.
After receiving the authentication result indicating that the first inference model cannot be run in the model inference entity, the model management entity can process in two ways, and the two ways are described below respectively.
Mode 1:
In mode 1, the method 200 may perform steps S207 to S208.
Optionally, S207, the model management entity sends at least one of the following information to the model training entity: authentication result, cause information and first adjustment information.
The first adjustment information is used to indicate adjustment of the first inference model, so as to generate an adjusted inference model (in the embodiments of the present application, the adjusted first inference model is referred to as the second inference model).
The authentication result and the cause information may correspond to the identification information of the first inference model. The model training entity may regenerate the second inference model based on the authentication result and/or the cause information, or, when the adjustment information is received, may adjust the first inference model to generate the second inference model.
Adjusting the first inference model may mean retraining the first inference model.
It should be appreciated that when the model management entity sends only the cause information to the model training entity, the cause information may implicitly indicate that the authentication of the first inference model does not pass, and may also implicitly indicate that the first inference model is to be adjusted.
The model management entity may also send information indicating the device provider, identification information of the model, model performance information or version information of the model to the model training entity for assisting the model training entity in determining the second inference model.
Optionally, S208, the model training entity determines a second inference model.
The model training entity may determine, based on the authentication result, the cause information, or the first adjustment information, that the authentication of the first inference model does not pass, and may then generate a second inference model, or adjust the first inference model to generate the second inference model.
For example, the model training entity may obtain a second inference model generated by the device provider training.
Alternatively, for another example, the model training entity may extract a data set of the device provider to retrain the first inference model to generate the second inference model.
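The two options for determining the second inference model in S208 can be sketched together: reuse a model already trained by the device provider if one exists, otherwise retrain the first inference model on the device provider's data set. The function names and the placeholder retraining step are illustrative assumptions.

```python
def retrain(model, dataset):
    """Placeholder for retraining: tag the model with the data it was trained on."""
    return {"base": model, "trained_on": dataset}

def determine_second_model(first_model, device_provider, provider_models, provider_data):
    # Option 1: obtain a second inference model generated by the device provider.
    if device_provider in provider_models:
        return provider_models[device_provider]
    # Option 2: retrain the first inference model on the device provider's data set.
    return retrain(first_model, provider_data[device_provider])

second = determine_second_model("model-001", "ZTE", {}, {"ZTE": ["sample-1", "sample-2"]})
print(second["trained_on"])  # ['sample-1', 'sample-2']
```

Either path yields a second inference model that then goes through the same authentication procedure as the first one.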
Optionally, the model training entity sends the second inference model to the model management entity, and correspondingly, the model management entity receives the second inference model from the model training entity.
The model management entity may then perform a similar authentication procedure to steps S201 to S206 on the second inference model.
Thus, in mode 1, when the first inference model fails to authenticate, the model training entity may generate a second inference model by reacquiring or retraining the first inference model.
Mode 2:
Optionally, the model management entity sends the first inference model and the second adjustment information to the model inference entity, and correspondingly, the model inference entity receives the first inference model and the second adjustment information from the model management entity.
The second adjustment information is used to indicate adjustments to the first inference model.
For example, the model management entity may send a model deployment request message to the model inference entity, where the model deployment request message carries the first inference model and the second adjustment information. According to the second adjustment information, the model inference entity may learn that the authentication of the first inference model did not pass, or may learn that the first inference model cannot reach the expected performance if run as-is; the model inference entity may therefore retrain the first inference model before running it.
When the model authentication entity is deployed separately from the model inference entity, the model management entity may further send the identification information, version information, model performance information, or usage information of the first inference model to the model inference entity, so as to assist the model inference entity in retraining the first inference model, such that the retrained first inference model achieves the performance indicated by the model performance information.
It should be appreciated that, in one possible implementation, when the model authentication entity and the model inference entity are deployed in one device, the first provider information received by the model authentication entity is carried in a model deployment request message; that is, the model authentication entity has already acquired the first inference model before determining whether the first inference model passes authentication. When the model authentication entity determines that the first inference model does not pass authentication, the model inference entity may be notified through an internal interface that the first inference model does not pass authentication, and the model inference entity may adjust the first inference model and then run it. In this case, the model management entity may not need to send the adjustment information to the model inference entity.
Thus, in mode 2, when the first inference model fails to authenticate, the model inference entity may retrain the first inference model prior to deploying the first inference model.
As described above, before the first inference model runs, the model authentication entity may determine whether the first model provider that provides the first inference model, as indicated by the first provider information, is consistent with the device provider in whose device the first inference model is to be deployed. When the two are consistent, the model authentication entity may consider that the first inference model passes authentication; when they are inconsistent, the model authentication entity may further determine, according to the performance information of the first inference model, whether the first inference model passes authentication. However, the first provider information sent by the model management entity may not be trustworthy. Therefore, before step S202, the method 200 may further perform steps S209 to S210 to determine whether the first provider information is trustworthy, as described below.
Optionally, S209, the model management entity sends a first inference model to the model authentication entity, and correspondingly, the model authentication entity receives the first inference model from the model management entity.
The first inference model may be carried in the form of a model file or model file address. The model file is information describing the first inference model, and is recorded in a file format, and the model file address is address information for indexing to the model file. In addition, it should be understood that the model file may be composed of a plurality of subfiles.
The information describing the first inference model may include at least one of the following information: the name of the first inference model, the second model provider information providing the first inference model, the identity of the first inference model.
Optionally, S210, the model authentication entity determines, according to the first inference model, whether the first model provider is the provider of the first inference model.
For example, the model authentication entity determines whether the first inference model is provided or trained by the first model provider based on the first inference model and the first provider information.
It should be appreciated that the first provider information is recorded outside the model file, and differs from the second model provider information in the first inference model in that: the second model provider information in the first inference model is real information originally recorded in the model file, whereas the first provider information outside the model file is information specified by the model management entity, which may be wrong.
The model authentication entity can determine whether the first inference model was trained by the first model provider in three ways, each of which is described below.
Mode a:
when the second model provider and the first model provider agree, the model certification entity determines that the first inference model is provided by the first model provider training.
When the second model provider and the first model provider are inconsistent, the model certification entity determines that the first inference model is not provided by the first model provider.
The second model provider is a provider indicated by the first reasoning model and providing the first reasoning model, or a provider recorded in a model file of the first reasoning model when the second model provider is used.
Thus, in mode A, the model authentication entity can determine whether the first provider information is trustworthy based on whether the first provider information matches the second model provider described in the first inference model.
Mode B:
When the first format and the second format are consistent, the model authentication entity determines that the first inference model is provided by the first model provider;
when the first format and the second format are inconsistent, the model authentication entity determines that the first inference model is not provided by the first model provider;
wherein the first format is the format of the first inference model, and the second format is determined based on the first provider information.
The format of the first inference model may refer to: the format of the model file of the first inference model, e.g., file format, grammar, etc.
The second format determined from the first provider information may mean that the model authentication entity can determine, based on the first model provider indicated by the first provider information, the second format of models trained by that provider. For example, the model authentication entity may record the formats of models trained by a plurality of providers, and, upon receiving the first provider information, determine the second format corresponding to the first model provider.
Thus, in mode B, the model authentication entity can determine whether the first provider information is trustworthy based on whether the format used by the first model provider is consistent with the format of the first inference model.
Mode C:
When the second model provider is consistent with the first model provider and the first format is consistent with the second format, the model authentication entity determines that the first inference model is provided by the first model provider;
when the second model provider is inconsistent with the first model provider and/or the first format is inconsistent with the second format, the model authentication entity determines that the first inference model is not provided by the first model provider;
wherein the second model provider is the provider, indicated by the first inference model, that provides the first inference model, the first format is the format of the first inference model, and the second format is determined based on the first provider information.
Thus, in mode C, the model authentication entity determines both whether the first provider information is consistent with the second model provider described in the first inference model and whether the format used by the first model provider is consistent with the format of the first inference model, and can determine that the first provider information is trustworthy only when both conditions are met.
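The three checks can be put side by side in a short sketch. The model-file fields and the per-provider format registry are illustrative assumptions; the specification only requires that the provider recorded in the model file and the model's format be available for comparison.

```python
# Hypothetical registry mapping each provider to the model format it uses.
FORMAT_REGISTRY = {"Huawei": "format-x", "ZTE": "format-y"}

def mode_a(model_file: dict, first_provider: str) -> bool:
    """Mode A: compare the second model provider recorded in the model file
    with the first model provider indicated by the first provider information."""
    return model_file["provider"] == first_provider

def mode_b(model_file: dict, first_provider: str) -> bool:
    """Mode B: compare the first format (the model file's format) with the
    second format expected for the first model provider."""
    return model_file["format"] == FORMAT_REGISTRY.get(first_provider)

def mode_c(model_file: dict, first_provider: str) -> bool:
    """Mode C: trustworthy only when both the provider and format checks pass."""
    return mode_a(model_file, first_provider) and mode_b(model_file, first_provider)

model_file = {"provider": "Huawei", "format": "format-x"}
print(mode_c(model_file, "Huawei"))  # True
print(mode_c(model_file, "ZTE"))     # False
```

Mode C is simply the conjunction of modes A and B, which matches the "both conditions" reading above.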
In the above three ways, when the model authentication entity determines that the first inference model is provided by the first model provider, the method 200 may perform steps S202 to S209 to determine whether the first model provider and the device provider are consistent. When the model authentication entity determines that the first inference model was not trained by the first model provider, the model authentication entity may consider that the authentication of the first inference model does not pass, and the method may perform the actions following authentication failure in steps S206 to S209. In this case, the cause information in step S207 may also be used to indicate that the first inference model was not trained by the first model provider, or to indicate that the first provider information is wrong.
Therefore, in the present application, the model management entity can send, to the model authentication entity, the first provider information indicating the first model provider that provides the first inference model; the model authentication entity can compare the first model provider with the device provider, generate an authentication result according to the comparison result, and notify the model management entity; and the model management entity can then adopt different processing based on different authentication results, so that the reliability of running the first inference model can be improved.
When the model authentication entity and the model inference entity are deployed on different devices, the model management entity may request authentication from the model authentication entity and determine, according to the authentication result, whether to deploy the first inference model to the model inference entity.
Fig. 3 is a schematic flow chart of a method 300 for model management provided by an embodiment of the present application.
Optionally, S301, the model management entity obtains a first inference model.
For example, the model management entity may send a model query request message to the model marketplace, based on the network state or operational requirements, to request acquisition of the first inference model. In response to the query request message, the model marketplace may send the first inference model to the model management entity, or send a plurality of models from which the model management entity determines the first inference model. The model marketplace stores models trained by multiple vendors and may be deployed in the model training entity shown in fig. 1.
As another example, the model management entity may send a model training request message to the model training entity, based on network status or operational requirements, requesting the model training entity to train and generate the first inference model. The model training entity performs model training according to the model training request message to generate the first inference model, and sends the first inference model to the model management entity in response to the model training request message.
In addition, the model management entity and the model training entity can be deployed on the same operator platform.
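The two acquisition paths of step S301 (querying the model marketplace, or requesting training) can be sketched as below. The function names and the selection rule (taking the first candidate) are illustrative assumptions and not part of the claimed method.

```python
def obtain_inference_model(requirements, marketplace_query, train_model, from_marketplace=True):
    """Sketch of step S301: obtain the first inference model either from the
    model marketplace or by requesting the model training entity."""
    if from_marketplace:
        # Model query request/response: the marketplace may return several
        # candidate models; here the management entity simply picks the first.
        candidates = marketplace_query(requirements)
        return candidates[0]
    # Model training request/response: the training entity trains and returns
    # the first inference model.
    return train_model(requirements)
```

Either path leaves the model management entity holding the first inference model before the authentication steps begin.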
S302, the model management entity sends first provider information to the model authentication entity, and correspondingly, the model authentication entity receives the first provider information from the model management entity.
The first provider information indicates that a first inference model is provided by a first model provider. For the description of the first provider information, reference may be made to step S201 in the method 200, and for brevity, the description of the first provider information is not repeated here.
For example, the model management entity sends a model authentication request message to the model authentication entity, the model authentication request message being used for requesting authentication of the first inference model, the model authentication request message carrying the first provider information.
Optionally, S303, the model managing entity sends the second provider information and the first inference model to the model authenticating entity, and correspondingly, the model authenticating entity receives the second provider information and the first inference model from the model managing entity.
The second provider information indicates the device provider; the first inference model is to be deployed on a device of the device provider, and the device provider is a second manufacturer or a second operator. For a description of the device provider and the first inference model, reference may be made to the description of the device provider in step S202 in the method 200, and for brevity, it is not repeated here.
For example, the model management entity sends a model authentication request message to the model authentication entity, the model authentication request message carrying the second provider information and the first inference model.
In one possible implementation, the model authentication request message may also carry at least one of the following information: identification information of the first inference model, version information of the first inference model, performance information of the first inference model, and usage information of the first inference model. For the description of the above information, reference may be made to the description in the method 200, and for brevity, the description is omitted here.
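Putting the mandatory and optional fields of the model authentication request message together, one possible (purely illustrative) structure is the following; the concrete field names and types are assumptions, since the message encoding is not specified in the text.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelAuthenticationRequest:
    # Fields from steps S302-S303.
    first_provider_info: str         # indicates the first model provider
    second_provider_info: str        # indicates the device provider
    inference_model: bytes           # the first inference model itself
    # Optional fields from the possible implementation above.
    model_id: Optional[str] = None           # identification information
    version: Optional[str] = None            # version information
    performance_info: Optional[dict] = None  # performance information
    usage_info: Optional[dict] = None        # usage information
```

A dataclass is used here only to make the field grouping explicit; any serializable message format would serve the same purpose.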
Optionally, S304, the model authentication entity determines, according to the first inference model and the first provider information, whether the first inference model is provided by the first model provider.
The description of the model authentication entity determining whether the first inference model is provided by the first model provider may be referred to the description in step S210 in the method 200, and will not be repeated here for the sake of brevity.
Optionally, when the model authentication entity determines that the first inference model is provided by the first model provider, the method 300 may perform steps S305 to S313.
S305, the model authentication entity compares the first model provider with the device provider to generate a comparison result.
For example, the model authentication entity may determine whether the first model provider and the device provider are consistent.
For the way in which the model authentication entity performs the comparison to generate the comparison result, refer to the description of step S202 in the method 200, and for brevity, it is not repeated here.
Optionally, when the comparison result is that the first model provider and the device provider are not consistent, the method 300 may perform step S306.
Optionally, S306, the model authentication entity determines an authentication result according to the comparison result and the performance information.
For example, the model authentication entity determines whether the first inference model is authenticated based on performance information indicating the accuracy of the first inference model.
For another example, the model authentication entity may send an evaluation data request message to the model inference entity to request the first data and the second data; the evaluation data request message may include type information and condition information. In response to the evaluation data request message, the model inference entity sends an evaluation data response message to the model authentication entity, the evaluation data response message carrying the first data and the second data. The type information indicates the data type of the first data and the second data, and the condition information indicates conditions that the first data and the second data need to satisfy; for example, the condition information may include a standard condition, a time condition, and an area condition, and the model inference entity sends first data and second data satisfying the conditions to the model authentication entity.
The method for determining, by the model authentication entity, the performance information according to the first data and the second data, and determining, by the model authentication entity, whether the first inference model passes the authentication according to the performance information may be referred to the description of step S205 in the method 200, which is omitted for brevity.
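One way to combine the comparison result of S305 with the performance information of S306 is sketched below. The accuracy threshold and the exact pass/fail policy are assumptions made for illustration; the text only states that the authentication result is determined from the comparison result and the performance information.

```python
def determine_authentication_result(providers_consistent, accuracy, threshold=0.9):
    """Sketch of step S306: decide whether the first inference model passes
    authentication. The 0.9 threshold is an illustrative assumption."""
    if providers_consistent:
        return {"passed": True, "cause": None}
    if accuracy >= threshold:
        # Providers differ, but the model still meets the accuracy requirement.
        return {"passed": True, "cause": "provider mismatch, accuracy acceptable"}
    return {"passed": False, "cause": "provider mismatch, accuracy below threshold"}
```

The returned cause string corresponds to the cause information that may accompany the authentication result in step S309.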
S307, the model authentication entity transmits the authentication result to the model management entity, and correspondingly, the model management entity receives the authentication result from the model authentication entity.
The authentication result indicates whether the first inference model can be run in the model inference entity.
For example, the model authentication entity sends a model authentication response message to the model management entity in response to the model authentication request message, the model authentication response message carrying the authentication result. For the description of the authentication result, refer to the description of step S204 in the method 200, and for brevity, the description is omitted here.
Optionally, S308, the model authentication entity sends an authentication result to the model inference entity, which correspondingly receives the authentication result from the model authentication entity.
For example, the model authentication entity sends a model authentication pass notification message to the model inference entity, the model authentication pass notification message carrying the authentication result.
Optionally, S309, the model authentication entity sends the cause information to the model management entity, and correspondingly, the model management entity receives the cause information from the model authentication entity.
The cause information is used to indicate the cause of the authentication result. For the description of the reason information, refer to the description of step S206 in the method 200, and for brevity, the description is omitted here.
For example, the model authentication response message carries the cause information.
When the authentication result indicates that the first inference model is authenticated, the method 300 may perform step S310.
Optionally, S310, the model management entity sends a model deployment message to the model inference entity, and correspondingly, the model inference entity receives the model deployment message from the model management entity.
The model deployment message is used to deploy the first inference model. The model deployment message may carry at least one of the following information: identification information of the first inference model, version information of the first inference model, usage information of the first inference model, the authentication result.
The model inference entity may determine that the authentication of the first inference model was successful when the authentication result in the model deployment message is consistent with the authentication result in the model authentication pass notification message. For example, the model inference entity stores, according to the model authentication pass notification message, the correspondence between the model identification and/or model version and the authentication result; when the model inference entity determines, according to the model identification and/or version in the model deployment message, that the stored authentication result is consistent with the authentication result in the model deployment message, it determines that the authentication of the first inference model was successful. The model inference entity can then deploy the first inference model.
Optionally, the model inference entity sends a model deployment response to the model management entity in response to the model deployment message, the model deployment response being used to indicate the deployment status of the first inference model.
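The consistency check of steps S308 to S310 (store the authentication result from the notification message, then verify it against the model deployment message) can be sketched as follows; the class and method names are illustrative assumptions.

```python
class ModelInferenceEntitySketch:
    """Stores authentication results keyed by model identification and version,
    and checks them when a model deployment message arrives."""

    def __init__(self):
        self._auth_results = {}  # (model_id, version) -> authentication result

    def on_authentication_notification(self, model_id, version, result):
        # Step S308: record the result carried by the model authentication
        # pass notification message.
        self._auth_results[(model_id, version)] = result

    def on_model_deployment(self, model_id, version, result):
        # Step S310: deploy only if the stored result exists and matches the
        # one carried in the model deployment message.
        stored = self._auth_results.get((model_id, version))
        return "deployed" if stored is not None and stored == result else "rejected"
```

The return value here stands in for the deployment status that the model deployment response of S310 would indicate to the model management entity.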
The model management entity can process the case that the authentication result indicates that the first inference model does not pass the authentication in two ways, which are respectively described below.
Mode a:
including steps S311 to S312.
Optionally, S311, the model management entity sends at least one of the following information to the model training entity: authentication result, cause information, and first adjustment information.
For the description of this information, refer to the description of step S207 in the method 200; for brevity, it is not repeated here.
For example, the model management entity sends a model optimization request message to the model training entity, the model optimization request message carrying at least one of the following information: authentication result, cause information, and first adjustment information.
In one possible implementation, the model optimization request message also carries at least one of the following information: information of a device provider, identification information of a model, model performance information, or version information of a model.
Optionally, S312, the model training entity determines a second inference model.
The manner in which the model training entity determines the second inference model can be referred to as the description of step S208 in the method 200, and for brevity, will not be described in detail herein.
Optionally, the model training entity sends the second inference model to the model management entity, and correspondingly, the model management entity receives the second inference model from the model training entity.
For example, the model training entity sends a model optimization response message to the model management entity in response to the model optimization request message, the model optimization response message carrying the second inference model.
In a possible implementation manner, the model optimization response message further carries model performance information of the second inference model and usage information of the second inference model, where the usage information of the second inference model may be used to indicate an operation condition of the second inference model, and the model performance information is used to describe performance that the second inference model may achieve when the operation condition indicated by the usage information is operated.
Mode b:
including step S313.
S313, the model management entity sends the first reasoning model and the second adjustment information to the model reasoning entity, and correspondingly, the model reasoning entity receives the first reasoning model and the second adjustment information from the model management entity.
The second adjustment information indicates an adjustment to the inference model.
For example, the model management entity may send a model deployment request message to the model inference entity, the model deployment request message carrying the first inference model and the second adjustment information.
In one possible implementation, the model deployment request message also carries at least one of the following information: identification information, version information, model performance information, or usage information of the first inference model.
It should be appreciated that the model inference entity adjusting or retraining the first inference model may refer to a model training function module in the model inference entity adjusting or retraining the inference model.
Therefore, in the present application, the model management entity can send, to the model authentication entity, first provider information indicating the first model provider that provides the inference model; the model authentication entity can compare the first model provider with the device provider, generate an authentication result according to the comparison result, and notify the model management entity; and the model management entity can adopt different procedures based on different authentication results, thereby improving the reliability of running the inference model.
When the model authentication entity and the model inference entity are different functional modules on the same device, or the model authentication entity integrates both the model authentication and model running functions, the model management network element sends a model deployment message to the model authentication entity, and the model authentication entity, after determining the authentication result, can determine according to the authentication result whether the inference model can be run.
Fig. 4 is a schematic flow chart of a method 400 for model management provided by an embodiment of the present application.
Optionally, S401, the model management entity obtains an inference model.
The mode of the model management entity obtaining the inference model can be referred to the description in step S301 in the method 300, and for brevity, will not be described herein.
S402, the model management entity transmits the first provider information to the model authentication entity, and correspondingly, the model authentication entity receives the first provider information from the model management entity.
For the description of the first provider information, reference may be made to step S201 in the method 200, and for brevity, the description of the first provider information is not repeated here.
For example, the model management entity sends a model deployment message to the model authentication entity, the model deployment message being used to deploy the inference model and carrying the first provider information.
Optionally, S403, the model management entity sends the inference model to the model authentication entity, which correspondingly receives the inference model from the model management entity.
For example, the model deployment message carries the inference model.
In one possible implementation, the model deployment message may also carry at least one of the following information: identification information of the inference model, version information of the inference model, model performance information of the inference model, and use information of the inference model. For the description of the above information, reference may be made to the description in the method 200, and for brevity, the description is omitted here.
Optionally, S404, the model authentication entity determines, according to the inference model and the first provider information, whether the inference model is provided by the first model provider.
The description of the model authentication entity in determining whether the inference model is provided by the first model provider can be referred to the description in step S210 in the method 200, and is not repeated herein for brevity.
Optionally, when the model authentication entity determines that the inference model is provided by the first model provider, the method 400 may perform steps S405 to S408.
S405, the model authentication entity compares the first model provider and the device provider to generate a comparison result.
For example, the model authentication entity may determine whether the first model provider and the device provider are consistent.
For the way in which the model authentication entity performs the comparison to generate the comparison result, refer to the description of step S202 in the method 200, and for brevity, it is not repeated here.
Optionally, when the comparison result is that the first model provider and the device provider are not consistent, the method 400 may perform step S406.
Optionally, S406, the model authentication entity determines an authentication result according to the comparison result and the performance information.
For example, the model authentication entity determines whether the inference model is authenticated based on performance information indicating the accuracy of the inference model.
The mode of determining whether the inference model passes the authentication according to the performance information by the model authentication entity can be referred to the description of step S205 in the method 200, and is not described herein for brevity.
S407, the model authentication entity transmits an authentication result to the model management entity, and correspondingly, the model management entity receives the authentication result from the model authentication entity.
The authentication result indicates whether the inference model can be run in the model inference entity.
For example, the model authentication entity sends a model deployment response message to the model management entity in response to the model deployment message, the model deployment response message carrying the authentication result. For the description of the authentication result, refer to the description of step S204 in the method 200, and for brevity, it is not repeated here.
Optionally, S408, the model authentication entity sends the cause information to the model management entity, and correspondingly, the model management entity receives the cause information from the model authentication entity.
The cause information is used to indicate the cause of the authentication result. For the description of the reason information, refer to the description of step S206 in the method 200, and for brevity, the description is omitted here.
For example, the model deployment response message carries the cause information.
Further, the model authentication entity and the model management entity may process the inference model according to the authentication result:
when the authentication result indicates that the inference model can be run in the model inference entity, the model authentication entity can inform the model inference entity to run the inference model through an internal interface.
When the authentication result indicates that the inference model cannot operate in the model inference entity, the model management entity can process the situation that the authentication of the inference model is not passed in the two modes of a and b described in the method 300, which is not described herein for simplicity.
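The overall dispatch on the authentication result — run the model, or fall back to mode a (retraining at the model training entity) or mode b (adjustment at the model inference entity) — can be sketched as below; the callback-based shape and the parameter names are illustrative assumptions.

```python
def handle_authentication_result(passed, strategy, notify_run, request_retrain, request_local_adjust):
    """Sketch of the post-authentication handling in methods 300/400."""
    if passed:
        return notify_run()            # model can run in the model inference entity
    if strategy == "a":
        return request_retrain()       # mode a: model optimization request to the trainer
    return request_local_adjust()      # mode b: deploy with adjustment information
```

In practice each callback would correspond to one of the messages described above (the internal run notification, the model optimization request message, or the model deployment request message with adjustment information).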
Therefore, in the present application, the model management entity can send, to the model authentication entity, first provider information indicating the first model provider that provides the inference model; the model authentication entity can compare the first model provider with the device provider, generate an authentication result according to the comparison result, and notify the model management entity; and the model management entity can adopt different procedures based on different authentication results, thereby improving the reliability of running the inference model.
Fig. 5 and fig. 6 are schematic structural diagrams of possible communication devices according to embodiments of the present application. These communication devices may be used to implement the functions of the model authentication entity and the model management entity in the above method embodiments, so the beneficial effects of those method embodiments can also be achieved. In the embodiments of the present application, the communication device may be the model authentication entity or the model management entity, or may be a module (such as a chip) applied to the model authentication entity or the model management entity.
As shown in fig. 5, the communication device 500 includes a processing module 510 and a transceiver module 520. The communication device 500 is configured to implement the functions of the model authentication entity and the model management entity in the embodiment of the method shown in fig. 2. Alternatively, the communication apparatus 500 may include a module for implementing any of the functions or operations of the model authentication entity, the model management entity in the embodiment of the method shown in fig. 2 described above, which may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
When the communication device 500 is used to implement the functions of the model authentication entity in the method embodiment shown in fig. 2, the transceiver module 520 is used to receive first provider information from the model management entity, the first provider information indicating that the inference model is provided by the first model provider; the processing module 510 is used to compare the first model provider with the device provider of the model inference entity to generate a comparison result, where the model inference entity is the entity that will run the inference model; and the transceiver module 520 is further used to send an authentication result of the inference model to the model management entity based on the comparison result, the authentication result indicating whether the inference model can be run in the model inference entity.
Therefore, in the present application, the model management entity can send, to the model authentication entity, first provider information indicating the first model provider that provides the inference model; the model authentication entity can compare the first model provider with the device provider, generate an authentication result according to the comparison result, and notify the model management entity; and the model management entity can adopt different procedures based on different authentication results, thereby improving the reliability of running the inference model.
The above-mentioned more detailed descriptions of the processing module 510 and the transceiver module 520 may be directly obtained by referring to the related descriptions in the method embodiment shown in fig. 2, which are not repeated herein.
When the communication device 500 is used to implement the functionality of the model management entity in the embodiment of the method shown in fig. 2, the transceiver module 520 is used to send first provider information to the model authentication entity, the first provider information indicating that the first inference model is provided by the first model provider; the transceiver module 520 is further configured to receive an authentication result of the first inference model from the model authentication entity, the authentication result indicating whether the first inference model is capable of operating in the model inference entity.
Therefore, in the present application, the model management entity can send, to the model authentication entity, first provider information indicating the first model provider that provides the inference model; the model authentication entity can compare the first model provider with the device provider, generate an authentication result according to the comparison result, and notify the model management entity; and the model management entity can adopt different procedures based on different authentication results, thereby improving the reliability of running the inference model.
The above-mentioned more detailed descriptions of the processing module 510 and the transceiver module 520 may be directly obtained by referring to the related descriptions in the method embodiment shown in fig. 2, which are not repeated herein.
As shown in fig. 6, the communication device 600 includes a processor 610 and an interface circuit 620. The processor 610 and the interface circuit 620 are coupled to each other. It is understood that the interface circuit 620 may be a transceiver or an input-output interface. Optionally, the communication device 600 may further comprise a memory 630 for storing instructions executed by the processor 610 or for storing input data required by the processor 610 to execute instructions or for storing data generated after the processor 610 executes instructions.
When the communication device 600 is used to implement the method shown in fig. 2 or fig. 4, the processor 610 is used to implement the functions of the processing module 510, and the interface circuit 620 is used to implement the functions of the transceiver module 520.
When the communication device is a chip applied to the model authentication entity, the model authentication entity chip realizes the function of the model authentication entity in the embodiment of the method. The model authentication entity chip receives information from other modules (such as a radio frequency module or an antenna) in the model authentication entity, and the information is sent to the model authentication entity by the model management entity; alternatively, the model authentication entity chip sends information to other modules (such as radio frequency modules or antennas) in the model authentication entity, which the model authentication entity sends to the model management entity.
When the communication device is a chip applied to the model management entity, the model management entity chip realizes the functions of the model management entity in the foregoing method embodiments. The model management entity chip receives information from other modules (such as a radio frequency module or an antenna) in the model management entity, the information being sent to the model management entity by the model authentication entity; alternatively, the model management entity chip sends information to other modules (such as a radio frequency module or an antenna) in the model management entity, the information being sent by the model management entity to the model authentication entity.
It is to be appreciated that the processor in embodiments of the present application may be a central processing unit (Central Processing Unit, CPU), or may be another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. The general purpose processor may be a microprocessor or any conventional processor.
The memory in embodiments of the present application may be random access memory (Random Access Memory, RAM), flash memory, read-only memory (Read-Only Memory, ROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Alternatively, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC, and the ASIC may reside in a network device or a terminal device. The processor and the storage medium may also reside as discrete components in a network device or a terminal device.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer programs or instructions. When the computer program or instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are performed in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, a network device, a terminal device, or another programmable apparatus. The computer program or instructions may be stored in, or transmitted via, a computer-readable storage medium. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server that integrates one or more available media. The available medium may be a magnetic medium, e.g., a floppy disk, hard disk, or magnetic tape; an optical medium, such as a DVD; or a semiconductor medium, such as a solid state disk (Solid State Disk, SSD).
In various embodiments of the application, where no special description or logic conflict exists, terms and/or descriptions between the various embodiments are consistent and may reference each other, and features of the various embodiments may be combined to form new embodiments based on their inherent logic.
It should be understood that, in the embodiments of this application, ordinal terms such as "first" and "second" are merely used to distinguish different objects, for example, different network devices, and do not limit the scope of the embodiments of this application.
It should also be understood that, in this application, terms such as "when," "in the case of," and "if" all mean that an entity performs corresponding processing under some objective condition. They impose no limitation in time, do not require the entity to perform a judgment action in its implementation, and do not imply any other limitation.
It should also be understood that, in embodiments of this application, "B corresponding to A" means that B is associated with A, and B may be determined from A. It should further be understood that determining B from A does not mean determining B from A alone; B may also be determined from A and/or other information.
It should also be understood that the term "and/or" describes only an association relationship between associated objects and indicates that three relationships may exist. For example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
Expressions in this application similar to "the item includes one or more of the following: A, B, and C" generally mean, unless otherwise specified, that the item may be any one of the following: A; B; C; A and B; A and C; B and C; A, B, and C; A and A; A, A, and A; A, A, and B; A, A, and C; A, B, and B; A, C, and C; B and B; B and C; C and C; C, C, and C; or any other combination of A, B, and C. The foregoing describes the optional entries of the item using the three elements A, B, and C as an example. When the expression is "the item includes at least one of the following: A, B, …, and X," that is, when more elements appear in the expression, the entries applicable to the item can likewise be obtained according to the foregoing rules.
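The enumeration above amounts to listing all multisets drawn from the named elements. As an illustrative aside (not part of the application), the entries for the three-element example can be generated mechanically; the variable names below are hypothetical:

```python
# Illustrative sketch only: enumerate the entries of "includes one or more of
# the following: A, B, and C" as multisets of sizes 1 to 3 with repetition,
# matching the combinations listed in the text (A; B; A and B; A, A and A; ...).
from itertools import combinations_with_replacement

elements = ["A", "B", "C"]
entries = [combo
           for size in range(1, len(elements) + 1)
           for combo in combinations_with_replacement(elements, size)]
# entries contains ('A',), ('B',), ('A', 'B'), ('A', 'A', 'C'), ('A', 'B', 'C'), ...
```

For sizes 1 to 3 over three elements this yields 3 + 6 + 10 = 19 entries, consistent with the listing in the text once repeated combinations are included.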
It should be understood that the various sequence numbers in the embodiments of this application are merely for ease of description and do not limit the scope of the embodiments of this application. The sequence number of each process does not imply an execution order; the execution order of the processes should be determined by their functions and internal logic.

Claims (33)

1. A method of model management, the method comprising:
a model authentication entity receives first provider information from a model management entity, the first provider information indicating that an inference model is provided by a first model provider;
the model authentication entity compares the first model provider with a device provider of a model inference entity to generate a comparison result, wherein the model inference entity is an entity that is to run the inference model; and
the model authentication entity sends an authentication result of the inference model to the model management entity based on the comparison result, the authentication result indicating whether the inference model can run in the model inference entity.
2. The method of claim 1, wherein:
the comparison result is that the first model provider and the device provider are inconsistent, the accuracy of the inference model is less than a first threshold, and the authentication result indicates that the inference model cannot run in the model inference entity; or
the comparison result is that the first model provider and the device provider are inconsistent, the accuracy of the inference model is greater than or equal to the first threshold, and the authentication result indicates that the inference model can run in the model inference entity.
3. The method of claim 1 or 2, wherein prior to the model authentication entity comparing the first model provider and the device provider, the method further comprises:
the model authentication entity receives the inference model from the model management entity; and
the model authentication entity determines that the first model provider is a provider of the inference model.
4. The method of claim 3, wherein the model authentication entity determining that the first model provider is a provider of the inference model comprises:
the model authentication entity determines that a first format is consistent with a second format; or
the model authentication entity determines that a second model provider is consistent with the first model provider; or
the model authentication entity determines that the first format is consistent with the second format and that the second model provider is consistent with the first model provider;
wherein the first format is a format of the inference model, the second format is determined from the first provider information, and the second model provider is determined from the inference model.
5. The method of any one of claims 1 to 4, wherein the method further comprises:
the model authentication entity sends cause information to the model management entity, the cause information indicating a cause for obtaining the authentication result.
6. The method of claim 1, wherein the comparison result is that the first model provider and the device provider are consistent, and the authentication result indicates that the inference model can run in the model inference entity.
7. The method of any of claims 1 to 6, wherein prior to the model authentication entity comparing the first model provider and the device provider, the method further comprises:
the model authentication entity receives second provider information from the model management entity, the second provider information indicating the device provider.
8. A method of model management, the method comprising:
a model management entity sends first provider information to a model authentication entity, the first provider information indicating that a first inference model is provided by a first model provider; and
the model management entity receives an authentication result of the first inference model from the model authentication entity, the authentication result indicating whether the first inference model can run in a model inference entity.
9. The method of claim 8, wherein:
the first model provider and a device provider of the model inference entity are inconsistent, the accuracy of the first inference model is less than a first threshold, and the authentication result indicates that the first inference model cannot run in the model inference entity; or
the first model provider and the device provider are inconsistent, the accuracy of the first inference model is greater than or equal to the first threshold, and the authentication result indicates that the first inference model can run in the model inference entity.
10. The method of claim 8 or 9, wherein the method further comprises:
the model management entity sends the first inference model to the model authentication entity.
11. The method of any one of claims 8 to 10, wherein the method further comprises:
the model management entity receives cause information from the model authentication entity, the cause information indicating a cause for obtaining the authentication result.
12. The method of any of claims 8 to 11, wherein the authentication result indicates that the first inference model cannot run in the model inference entity, and the method further comprises:
the model management entity sends at least one of the following to a model training entity: the authentication result, cause information, and first adjustment information, wherein the cause information indicates a cause for obtaining the authentication result, and the first adjustment information instructs the model training entity to adjust the first inference model; and
the model management entity receives a second inference model from the model training entity, the second inference model being obtained by adjusting the first inference model.
13. The method of any of claims 8 to 11, wherein the authentication result indicates that the first inference model cannot run in the model inference entity, and the method further comprises:
the model management entity sends the first inference model and second adjustment information to the model inference entity, the second adjustment information instructing the model inference entity to adjust the first inference model.
14. The method of claim 8, wherein the first model provider is consistent with a device provider of the model inference entity, and the authentication result indicates that the first inference model can run in the model inference entity.
15. The method of any of claims 8 to 14, wherein before the model management entity receives the authentication result of the first inference model from the model authentication entity, the method further comprises:
the model management entity sends second provider information to the model authentication entity, the second provider information indicating a device provider of the model inference entity.
16. A model managed communication device, comprising:
a transceiver module for receiving first provider information from a model management entity, the first provider information indicating that an inference model is provided by a first model provider;
a processing module for comparing the first model provider with a device provider of a model inference entity to generate a comparison result, wherein the model inference entity is an entity that is to run the inference model;
the transceiver module is further configured to send an authentication result of the inference model to the model management entity based on the comparison result, where the authentication result indicates whether the inference model can be run in the model inference entity.
17. The communication device of claim 16, wherein:
the comparison result is that the first model provider and the device provider are inconsistent, the accuracy of the inference model is less than a first threshold, and the authentication result indicates that the inference model cannot run in the model inference entity; or
the comparison result is that the first model provider and the device provider are inconsistent, the accuracy of the inference model is greater than or equal to the first threshold, and the authentication result indicates that the inference model can run in the model inference entity.
18. The communication device according to claim 16 or 17, wherein the transceiver module is further configured to:
receive the inference model from the model management entity; and
the processing module is further configured to:
determine that the first model provider is a provider of the inference model.
19. The communication device of claim 18, wherein the processing module is specifically configured to:
determining that a first format is consistent with a second format; or
determining that a second model provider is consistent with the first model provider; or
determining that the first format is consistent with the second format and that the second model provider is consistent with the first model provider;
wherein the first format is a format of the inference model, the second format is determined from the first provider information, and the second model provider is determined from the inference model.
20. The communication device of any of claims 16 to 19, wherein the transceiver module is further configured to:
send cause information to the model management entity, the cause information indicating a cause for obtaining the authentication result.
21. The communication device of claim 16, wherein the comparison result is that the first model provider and the device provider are consistent, and the authentication result indicates that the inference model can run in the model inference entity.
22. The communication device of any of claims 16 to 21, wherein the transceiver module is further configured to:
receive second provider information from the model management entity, the second provider information indicating the device provider.
23. A model managed communication device, comprising:
a processing module for generating first provider information, the first provider information indicating that a first inference model is provided by a first model provider; and
a transceiver module for sending the first provider information to a model authentication entity;
wherein the transceiver module is further configured to receive an authentication result of the first inference model from the model authentication entity, the authentication result indicating whether the first inference model can run in a model inference entity.
24. The communication device of claim 23, wherein the first model provider and a device provider of the model inference entity are inconsistent, the accuracy of the first inference model is less than a first threshold, and the authentication result indicates that the first inference model cannot run in the model inference entity; or
the first model provider and the device provider are inconsistent, the accuracy of the first inference model is greater than or equal to the first threshold, and the authentication result indicates that the first inference model can run in the model inference entity.
25. The communication device according to claim 23 or 24, wherein the transceiver module is further configured to:
send the first inference model to the model authentication entity.
26. The communication device of any of claims 23 to 25, wherein the transceiver module is further configured to:
receive cause information from the model authentication entity, the cause information indicating a cause for obtaining the authentication result.
27. The communication device of any of claims 23 to 26, wherein the authentication result indicates that the first inference model cannot run in the model inference entity, and the transceiver module is further configured to:
send at least one of the following to a model training entity: the authentication result, cause information, and first adjustment information, wherein the cause information indicates a cause for obtaining the authentication result, and the first adjustment information instructs the model training entity to adjust the first inference model; and
receive a second inference model from the model training entity, the second inference model being obtained by adjusting the first inference model.
28. The communication device of any of claims 23 to 27, wherein the authentication result indicates that the first inference model cannot run in the model inference entity, and the transceiver module is further configured to:
send the first inference model and second adjustment information to the model inference entity, the second adjustment information instructing the model inference entity to adjust the first inference model.
29. The communication device of claim 23, wherein the first model provider is consistent with a device provider of the model inference entity, and the authentication result indicates that the first inference model can run in the model inference entity.
30. The communication device of any of claims 23 to 29, wherein the transceiver module is further configured to:
send second provider information to the model authentication entity, the second provider information indicating a device provider of the model inference entity.
31. A communication device, comprising at least one processor, wherein the processor is configured to execute a computer program stored in a memory, so that the device implements the method of any one of claims 1 to 7 or 8 to 15.
32. A computer readable storage medium comprising a computer program which, when run on a computer, causes the computer to perform the method of any one of claims 1 to 7 or 8 to 15.
33. A model managed communication system comprising a communication device according to any of claims 16 to 22 and a communication device according to any of claims 23 to 30.
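As an illustrative aside (not part of the claims), the authentication decision recited in claims 1, 2, 4, and 6 can be sketched as follows: the model authentication entity first verifies that the received model matches the first provider information (claim 4), then compares the first model provider with the device provider of the model inference entity, falling back to an accuracy threshold when the providers differ (claims 2 and 6). All function and variable names in this Python sketch are hypothetical and do not appear in the claims:

```python
# Hypothetical sketch of the authentication logic of claims 1, 2, 4, and 6.
# Identifiers are illustrative only.

def verify_provider(first_format: str, second_format: str,
                    first_provider: str, second_provider: str) -> bool:
    """Claim 4: confirm that the received inference model is consistent with
    the first provider information (format check and provider check)."""
    return first_format == second_format and second_provider == first_provider

def authenticate(model_provider: str, device_provider: str,
                 model_accuracy: float, first_threshold: float) -> bool:
    """Claims 2 and 6: return True if the authentication result would indicate
    that the inference model can run in the model inference entity."""
    if model_provider == device_provider:
        # Consistent providers: the model can run (claim 6).
        return True
    # Inconsistent providers: decide by the accuracy threshold (claim 2).
    return model_accuracy >= first_threshold
```

For example, under this reading, a model from a third-party provider whose accuracy meets the first threshold would be authenticated to run on another vendor's model inference entity, while one below the threshold would be rejected and could then be returned for adjustment as in claims 12 and 13.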
CN202210178362.4A 2022-02-25 2022-02-25 Model management method and communication device Pending CN116703422A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210178362.4A CN116703422A (en) 2022-02-25 2022-02-25 Model management method and communication device
PCT/CN2023/077287 WO2023160508A1 (en) 2022-02-25 2023-02-21 Model management method and communication apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210178362.4A CN116703422A (en) 2022-02-25 2022-02-25 Model management method and communication device

Publications (1)

Publication Number Publication Date
CN116703422A true CN116703422A (en) 2023-09-05

Family

ID=87764738

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210178362.4A Pending CN116703422A (en) 2022-02-25 2022-02-25 Model management method and communication device

Country Status (2)

Country Link
CN (1) CN116703422A (en)
WO (1) WO2023160508A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4168338B2 (en) * 2003-09-18 2008-10-22 ブラザー工業株式会社 Installation program, computer-readable recording medium, and installation method
EP2541397A1 (en) * 2011-06-30 2013-01-02 Siemens Aktiengesellschaft Method for compatibility checking when installing a software component
US20140025537A1 (en) * 2012-07-23 2014-01-23 Cellco Partnership D/B/A Verizon Wireless Verifying accessory compatibility with a mobile device
CN106528415A (en) * 2016-10-27 2017-03-22 广东浪潮大数据研究有限公司 Software compatibility test method, business platform and system
CN112561044B (en) * 2019-09-26 2023-07-14 西安闻泰电子科技有限公司 Neural network model acceleration method and device, server and storage medium

Also Published As

Publication number Publication date
WO2023160508A1 (en) 2023-08-31

Similar Documents

Publication Publication Date Title
US11269608B2 (en) Internet-of-things device blank
US11144301B2 (en) Over-the-air (OTA) update for firmware of a vehicle component
KR102259679B1 (en) Network slice management method, unit and system
US11146449B2 (en) Network architecture for internet-of-things device
US11206534B2 (en) Method and apparatus for managing bundles of smart secure platform
US12028443B2 (en) Security profiles for internet of things devices and trusted platforms
US20150305008A1 (en) Method and apparatus for updating information regarding specific resource in wireless communication system
US10372923B2 (en) System and method for controlling the power states of a mobile computing device
KR20210101373A (en) Apparatus and method for generating network slice in a wireless communication system
EP4155752A1 (en) Connected device region identification
US20220053029A1 (en) Apparatus and method for managing concurrent activation of bundle installed in smart security platform
US20230198841A1 (en) Pico-Base Station Configuration Method and Apparatus, Storage Medium and Electronic Apparatus
CN116703422A (en) Model management method and communication device
Liu et al. GA-AdaBoostSVM classifier empowered wireless network diagnosis
WO2023014985A1 (en) Artificial intelligence regulatory mechanisms
US11800356B2 (en) Method and device for remote management and verification of remote management authority
US20230118418A1 (en) Network access based on ai filtering
CN113615140B (en) Access method, device and equipment of collection resource and storage medium
US20230116207A1 (en) Systems and methods for authentication based on dynamic radio frequency response information
US20240214903A1 (en) Intelligent client steering in mesh networks
CN112189322B (en) Configuration method and device of network equipment and storage medium
US20220247577A1 (en) Provisioning system and method
CN117278419A (en) Data transmission method, communication device and communication system
CN115733629A (en) Safety testing method, device and medium for narrow-band Internet of things equipment
CN117459392A (en) Network intrusion type node upgrading method, device, terminal equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination