WO2023138234A1 - Model management method and apparatus, networking architecture, electronic device and storage medium - Google Patents


Info

Publication number: WO2023138234A1
Authority: WO (WIPO, PCT)
Prior art keywords: model, node, end node, new version, library
Application number: PCT/CN2022/136416
Other languages: French (fr), Chinese (zh)
Inventors: 许晓东, 原英婷, 董辰, 韩书君, 王碧舳
Original assignee: 北京邮电大学 (Beijing University of Posts and Telecommunications)
Application filed by 北京邮电大学
Publication of WO2023138234A1

Classifications

    • H04L41/082 Configuration setting characterised by the conditions triggering a change of settings, the condition being updates or upgrades of network functionality
    • H04L41/0823 Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability
    • H04L41/0836 Configuration setting to enhance reliability, e.g. reduce downtime
    • H04L41/145 Network analysis or design involving simulating, designing, planning or modelling of a network
    • G06F8/65 Software deployment: Updates

Definitions

  • the present disclosure relates to the technical field of communication, and in particular to a model management method, apparatus, networking architecture, electronic device, and storage medium.
  • network nodes are becoming increasingly intelligent.
  • the intelligentization of network nodes has led to rapid expansion of the information space, even to the curse of dimensionality, which makes the information-carrying space harder to represent and makes it difficult for traditional network service capabilities to match the high-dimensional information space.
  • the amount of data transmitted through communication is too large, and the information service system can no longer meet people's needs for complex, diverse, and intelligent information transmission.
  • Using artificial intelligence models to encode, disseminate, and decode business information can significantly reduce the amount of data transmission in communication services and greatly improve the efficiency of information transmission.
  • These models are relatively stable, reusable, and disseminable. The dissemination and reuse of models helps enhance network intelligence while reducing overhead and resource waste, forming an intelligent simplified network with highly intelligent nodes and a minimal network.
  • the present disclosure provides a method, device, networking architecture, electronic device, and storage medium for managing models in an intelligent simplified network.
  • a model management method including:
  • the model management method further includes: after the new version model is formed, controlling the receiving end node to transmit the new version model to its cluster member nodes, and updating the model information corresponding to the new version model in the model library.
  • the model management method further includes: after updating the model information of the new version model in the model library, querying whether there is still a node to be updated that stores the old version model, and transmitting the new version model to the node to be updated, so as to update the old version model in that node into the new version model.
  • the model library updates the model information corresponding to the old version model and the model information corresponding to the new version model respectively.
  • the model information includes model generation location and/or model storage location and/or model encoding and/or model function information and/or model version information.
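The model information fields listed above can be sketched as a simple registry record in the model library; the class, field, and function names below are illustrative assumptions, not an interface defined by the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class ModelInfo:
    code: str                  # model encoding
    version: str               # model version information
    function: str              # model function information
    generated_at: str          # model generation location (training node)
    stored_at: set = field(default_factory=set)  # model storage locations

# the model library's table, keyed by (model code, version)
library = {}

def register(info):
    library[(info.code, info.version)] = info

register(ModelInfo("3", "3.1", "image-codec", "nodeA", {"nodeA"}))
```

Keying by (code, version) lets the library hold records for an old and a new version of the same model side by side, which the update flow below relies on.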
  • the step of establishing at least one transmission path specifically includes:
  • the sending end node transmits part of the target model or all of the target model to the receiving end node.
  • the model library includes one or more memories.
  • a model management device including:
  • the model query module is configured to acquire the model propagation request initiated by the receiving end node, and to query, according to the model propagation request, the storage location of the target model to serve as the sending end node;
  • a model transmission module configured to establish at least one transmission path for transmitting the target model between the sending end node and the receiving end node, so that the sending end node transmits the target model to the receiving end node;
  • the model training module is configured to store model information of the new version model in a model library in response to the receiving end node using the target model to train a pre-stored old version model into a new version model.
  • the model management device further includes: a first update module configured to control the receiving end node to actively transmit the new version model to its cluster member nodes after the new version model is formed, and update the model information corresponding to the new version model in the model library.
  • the model management device further includes: a second update module configured to query whether there is still a node to be updated that stores the old version model after updating the model information of the new version model in the model library, and transmit the new version model to the node to be updated that stores the old version model, so as to update the old version model in the node to be updated to generate the new version model, and update the model information corresponding to the new version model and the model information corresponding to the old version model in the model library.
  • the present disclosure also provides a network architecture of an intelligent simplified network, which is applied to execute the above model management method, including:
  • the first type of nodes, the second type of nodes, the third type of nodes and the model library establish communication connections with each other; wherein, the first type of nodes and the second type of nodes are configured to train, run and store models, and the third type of nodes are configured to run and store the models; the model library is configured to store and update model information corresponding to the models.
  • the present disclosure also provides an electronic device, comprising:
  • the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can execute the model management method described in any one of the above technical solutions.
  • the present disclosure also provides a non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to make the computer execute the model management method according to any one of the above embodiments.
  • the present disclosure also provides a computer program product, including a computer program; when the computer program is executed by a processor, the model management method according to any one of the above embodiments is implemented.
  • the disclosure improves model management capability, enhances model sharing in the intelligent simplified network, and realizes a more efficient and stable communication system.
  • FIG. 1 is a step diagram of a model management method in an embodiment of the present disclosure
  • FIG. 2 is a schematic flowchart of a model management method in an embodiment of the present disclosure
  • FIG. 3 is a functional block diagram of a first model management device in an embodiment of the present disclosure.
  • FIG. 4 is a functional block diagram of a second model management device in an embodiment of the present disclosure.
  • FIG. 5 is a functional block diagram of a third model management device in an embodiment of the present disclosure.
  • FIG. 6 is a diagram of a network architecture that can implement the model management method in an embodiment of the present disclosure.
  • the sending-end device extracts the first service information by using a pre-configured first model to obtain the second service information to be transmitted; the sending-end device transmits the second service information to the receiving-end device.
  • the receiving end device receives the second service information, and uses the pre-configured second model to restore the second service information to obtain the third service information; the third service information restored by the second model may have a slight difference in quality compared with the original first service information, but the two are consistent in content, and the user experience is almost the same.
  • after the transmitting end device transmits the second service information to the receiving end device, the method further includes: an update module judges whether the receiving end device needs to update the second model, and transmits a preconfigured third model to the receiving end device when an update is judged to be required; the receiving end device then uses the third model to update the second model. Processing business information through pre-trained artificial intelligence models can significantly reduce the amount of data transmitted in communication services and greatly improve the efficiency of information transmission.
  • Model propagation and reuse will help enhance network intelligence while reducing overhead and resource waste.
  • the model can be divided into several model slices according to different segmentation rules; these model slices can be transmitted between different network nodes and reassembled into the model.
  • Model slices can be distributed and stored on multiple network nodes. When a network node finds that it lacks or needs to update a certain model or a certain model slice, it can make a request to the surrounding nodes that may have the slice.
  • Both the transmission of the business information and the transmission of the model take place in the communication network, and the communication transmission is performed based on a network protocol.
  • the network nodes passed on the path for transmitting the service information and the model include an intelligent simplified router.
  • the functions of the intelligent simplified router include, but are not limited to, business information transmission, model transmission, model self-update, and security protection.
  • the transmission function of the intelligent simplified router involves transmitting business information or models from a source node to a sink node, and there may be multiple paths between the source node and the sink node.
  • the model transmission function of the intelligent simplified router can transmit model slices. By rationally arranging model slices over multiple paths, slices can be transmitted in parallel to improve the model transmission rate.
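The multipath slice transmission described above can be sketched as follows; the fixed-size byte slicing rule and round-robin path assignment are illustrative assumptions, not the patent's segmentation rules.

```python
def make_slices(model_bytes, slice_size):
    # divide the serialized model into fixed-size slices
    return [model_bytes[i:i + slice_size]
            for i in range(0, len(model_bytes), slice_size)]

def assign_to_paths(slices, paths):
    # arrange slices over multiple paths round-robin so they travel in parallel
    plan = {p: [] for p in paths}
    for i, s in enumerate(slices):
        plan[paths[i % len(paths)]].append((i, s))  # keep index for reassembly
    return plan

def reassemble(plan):
    # the receiving node reorders slices by index and rebuilds the model
    indexed = [pair for chunks in plan.values() for pair in chunks]
    return b"".join(s for _, s in sorted(indexed))

model = bytes(range(32))
plan = assign_to_paths(make_slices(model, 5), ["path1", "path2", "path3"])
assert reassemble(plan) == model
```

Splitting one model into slices that travel over several paths is what lets the router raise the effective model transmission rate beyond a single path's capacity.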
  • the present disclosure provides a model management method, as shown in Figure 1, including:
  • Step S101: obtain the model propagation request initiated by the receiving end node, and query, according to the model propagation request, the storage location of the target model to serve as the sending end node;
  • Step S102: establish at least one transmission path for transmitting the target model between the sending end node and the receiving end node, so that the sending end node transmits the target model to the receiving end node;
  • Step S103: in response to the receiving end node using the target model to train the pre-stored old version model into a new version model, store the model information of the new version model in the model library.
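Steps S101 to S103 can be sketched end to end as below. The toy node and library classes, and all method names, are assumptions standing in for real network elements; real transmission and training are reduced to string operations for illustration.

```python
class ModelLibrary:
    def __init__(self):
        self.storers = {}   # model code -> list of nodes storing it
        self.info = {}      # model code -> latest registered (version, node)

    def query_storers(self, code):               # used in S101
        return self.storers[code]

    def store_info(self, code, version, node):   # S103: register new version
        self.info[code] = (version, node)
        self.storers.setdefault(code, []).append(node)

class Node:
    def __init__(self, name, models=None):
        self.name, self.models = name, dict(models or {})

    def send(self, code):                        # S102: transmit target model
        return self.models[code]

    def train(self, code, received):             # S103: old model -> new model
        self.models[code] = self.models.get(code, "") + "+" + received
        return self.models[code]

def manage_model(library, receiver, target):
    sender = library.query_storers(target)[0]    # S101: locate sending node
    received = sender.send(target)               # S102: transmission
    new_model = receiver.train(target, received) # S103: training
    library.store_info(target, "new", receiver.name)
    return new_model

lib = ModelLibrary()
lib.storers["m1"] = [Node("B", {"m1": "weightsB"})]
node_a = Node("A", {"m1": "old"})
assert manage_model(lib, node_a, "m1") == "old+weightsB"
```

The point of the sketch is the division of labor: the library only answers queries and records information, while transmission and training stay on the nodes.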
  • this embodiment provides a model management method, including processes such as model query, transmission, training, and information storage.
  • when node A needs to use the models of other nodes to improve its stored model 3.1, it can initiate a model propagation request to the model library.
  • the model library queries, according to the model propagation request of the receiving end node, the model storers in its storage table, that is, the storage nodes of the target model, and selects the node closest to the receiving end node among these storers as the sending end node.
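Selecting the storer closest to the receiving end node can be sketched with a hop-count table as the distance metric; the hop counts and node names below are illustrative assumptions.

```python
def pick_sender(storers, hop_counts, receiver):
    # choose the storage node with the fewest hops to the receiving end node
    return min(storers, key=lambda node: hop_counts[(node, receiver)])

# assumed hop counts from candidate storers to receiving end node A
hops = {("B", "A"): 2, ("C", "A"): 1, ("F", "A"): 4}
assert pick_sender(["B", "C", "F"], hops, "A") == "C"
```

Any other proximity measure (latency, link cost) slots into the same `min` selection.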
  • node B (a large server) and node C (ordinary node 1) are selected to transmit model 1 and model 2 to node A, respectively.
  • the transmission path between the sending end node and the receiving end node is calculated, and there may be one transmission path or multiple transmission paths.
  • the network establishes transmission paths between the sending end nodes B and C and the receiving end node A to transmit the models.
  • the transmission path may be a direct connection, or may be connected via one or more relay nodes, and the connection may include one link or multiple links.
  • this step needs to determine the routing nodes between the sending end node and the receiving end node; in addition, it needs to determine whether the sending end node transmits part of the target model or all of the target model to the receiving end node.
  • if the existing model of a node has parts in common with the target model, only the differing parts need to be transmitted to complete the model update; if a node does not have the target model at all, the whole model must be transmitted. In this way, occupied network resources are reduced, repeated transmission is avoided, and communication efficiency is improved.
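The partial-versus-full transmission decision above can be sketched as a parameter-level diff; treating a model as a dict of named parameters is an assumption made for illustration.

```python
def make_update(sender_model, receiver_model):
    if receiver_model is None:
        return dict(sender_model)              # no local model: send everything
    return {k: v for k, v in sender_model.items()
            if receiver_model.get(k) != v}     # send only the differing parts

def apply_update(receiver_model, update):
    # the receiver merges the received parts with its existing model
    merged = dict(receiver_model or {})
    merged.update(update)
    return merged

new = {"w1": 0.5, "w2": 0.9, "w3": 0.1}
old = {"w1": 0.5, "w2": 0.2}
diff = make_update(new, old)       # only w2 and w3 travel over the network
assert apply_update(old, diff) == new
```

The shared parameter `w1` never leaves the sender, which is exactly the saving in network resources the passage describes.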
  • the receiving end node A sends signaling to the model library, and the model library records the receiving end node A as a storer of model 1 and model 2.
  • the receiving end node A uses model synthesis technology to combine model 1 and model 2 with its own old version model 3.1 and trains them to generate a new version, model 3.2; it assigns model 3.2 a number and sends it to the model library for new-model registration.
  • the model library stores the model information of the new version of model 3.2.
  • the model information includes but is not limited to the model generation location (the node that trains the model) and/or the storage location (i.e. the storage node) and/or the model code and/or the function information of the model and/or the version information of the model.
  • the model management method further includes: after the new version model is formed, controlling the receiving end node to transmit the new version model to its cluster member nodes, and updating the model information corresponding to the new version model in the model library. For example, after receiving end node A forms the new version model 3.2, it needs to transmit model 3.2 to its bound cluster members. When receiving end node A is the cluster head, node D and node E are its cluster members, and node A transmits model 3.2 to them. After receiving model 3.2, node D and node E send signaling to the model library, and the model library registers node D and node E as storers of model 3.2.
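The cluster propagation in the example above (node A pushing model 3.2 to members D and E, who then register with the library) can be sketched as below; the dict-based node and storer-table shapes are assumptions.

```python
def propagate_to_cluster(storers, head_models, members, code):
    # the cluster head transmits the model to every bound cluster member,
    # and each member is then registered in the model library as a storer
    model = head_models[code]
    for member_name, member_models in members.items():
        member_models[code] = model                 # head -> member transfer
        storers.setdefault(code, set()).add(member_name)

storers = {"3.2": {"A"}}            # node A already stores model 3.2
node_a = {"3.2": b"model-3.2"}
members = {"D": {}, "E": {}}
propagate_to_cluster(storers, node_a, members, "3.2")
assert storers["3.2"] == {"A", "D", "E"}
```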
  • the model management method further includes: after updating the model information of the new version model in the model library, query whether there is still a node to be updated that stores the old version model, and transmit the new version model to the node to be updated that stores the old version model, so as to update the old version model in the node to be updated to generate a new version model.
  • the model library queries whether a lower version of the same model, that is, model 3.1, still exists, and notifies its storage nodes. For example, node F stores model 3.1 but is not a cluster member of node A, so it has not been updated in time. The model library then lists node F as a node to be updated and actively sends it a model update notification. After receiving the notification, node F can initiate a model propagation request to the model library to update its local model 3.1.
  • the model library can select node D to transmit part of model 3.2 to node F, for example the portion of model 3.2 that differs from model 3.1, so as to reduce the amount of model data transmitted and improve transmission efficiency.
  • Node F combines the received partial model 3.2 with the local model 3.1 to obtain model 3.2, deletes the locally stored model 3.1, and sends information to the model library. The model library registers node F as a storer of model 3.2 and deletes its storer record for model 3.1, that is, it updates the model information of model 3.1 and model 3.2 respectively.
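Node F's upgrade bookkeeping (registering as a storer of model 3.2 and dropping its model 3.1 record) can be sketched over a storer table keyed by (model code, version); the table shape is an assumption for illustration.

```python
def record_upgrade(storers, node, code, old_ver, new_ver):
    # delete the node's storer record for the old version...
    storers.setdefault((code, old_ver), set()).discard(node)
    # ...and register it as a storer of the new version
    storers.setdefault((code, new_ver), set()).add(node)

storers = {("3", "3.1"): {"F"}, ("3", "3.2"): {"A", "D", "E"}}
record_upgrade(storers, "F", "3", "3.1", "3.2")
assert storers[("3", "3.1")] == set()
assert storers[("3", "3.2")] == {"A", "D", "E", "F"}
```

Updating both version records in one step is what keeps the library's storage table consistent with what nodes actually hold.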
  • the model library in any of the above embodiments may include a centralized memory or be composed of multiple distributed memories.
  • This disclosure aims to manage the specific processes of model training, transmission, and storage with the assistance of the model library, so that each node can share models and update its own models in time, and can thus process the content transmitted by other nodes to obtain the required business information.
  • the communication system can quickly query the model-related information, thereby improving the speed of model transmission, training, storage and other processes, and improving the overall communication efficiency.
  • the present disclosure also provides a model management device, as shown in FIG. 3 , including:
  • the model query module 301 is configured to obtain the model propagation request initiated by the receiving end node, and to query, according to the model propagation request, the storage location of the target model to serve as the sending end node;
  • the model transmission module 302 is configured to establish at least one transmission path for transmitting the target model between the sending end node and the receiving end node, so that the sending end node transmits the target model to the receiving end node;
  • the model training module 303 is configured to store the model information of the new version model in the model library 304 in response to the receiving end node using the target model to train the pre-stored old version model into a new version model.
  • this embodiment provides a model management device, including processes such as model query, transmission, training, and information storage.
  • when node A needs to use the models of other nodes to improve its stored model 3.1, it can initiate a model propagation request.
  • the model query module 301 queries the model storers in its storage table according to the model propagation request of receiving end node A.
  • the model storers are the storage nodes of the target model, and the node closest to the receiving end node among them is selected as the sending end node.
  • node B (a large server) and node C (ordinary node 1) are selected to transmit model 1 and model 2 to node A, respectively.
  • the model transmission module 302 establishes transmission paths between the sending end nodes B and C and the receiving end node A to transmit the models.
  • the transmission path may be formed by a direct connection between the sending end node and the receiving end node, or may be connected via one or more relay nodes, and the connection may include one link or multiple links.
  • this step needs to determine the routing nodes between the sending end node and the receiving end node; in addition, it needs to determine whether the sending end node transmits part of the target model or all of the target model to the receiving end node.
  • if the existing model of a node has parts in common with the target model, only the differing parts need to be transmitted to complete the model update; if a node does not have the target model at all, the whole model must be transmitted. In this way, occupied network resources are reduced, repeated transmission is avoided, and communication efficiency is improved.
  • the receiving end node A sends signaling to the model library, and the model library records the receiving end node A as a storer of model 1 and model 2.
  • the model training module 303 uses model synthesis technology to combine model 1 and model 2 with the old version model 3.1 of the receiving end node A and trains them to generate a new version, model 3.2; it assigns model 3.2 a number and sends it to the model library for new-model registration.
  • the model library stores the model information of the new version model 3.2.
  • the model information includes but is not limited to the model generation location (i.e. the node for training the model) and/or the storage location (i.e. the storage node) and/or the model code and/or the function information of the model and/or the version information of the model.
  • the model management device further includes: a first update module 305, which controls the receiving end node to transmit the new version model to its cluster member nodes after the new version model is formed, and updates the model information corresponding to the new version model in the model library.
  • the model management device further includes: a second update module 306 configured to, after the model information of the new version model is updated in the model library, query whether there is still a node to be updated that stores the old version model, and transmit the new version model to that node, so as to update the old version model in the node to be updated into the new version model.
  • the model library queries whether a lower version of the same model, that is, model 3.1, still exists, and notifies its storage nodes. For example, node F stores model 3.1 but is not a cluster member of node A, so it has not been updated in time. The model library then lists node F as a node to be updated and actively sends it a model update notification. After receiving the notification, node F can initiate a model propagation request to the model library to update its local model 3.1.
  • the model library can select node D to transmit part of model 3.2 to node F, for example the portion of model 3.2 that differs from model 3.1, so as to reduce the amount of model data transmitted and improve transmission efficiency.
  • Node F combines the received partial model 3.2 with the local model 3.1 to obtain model 3.2, deletes the locally stored model 3.1, and sends information to the model library. The model library registers node F as a storer of model 3.2 and deletes its storer record for model 3.1, that is, it updates the model information of model 3.1 and model 3.2 respectively.
  • the model library in any of the above embodiments may include a centralized memory or multiple distributed memories, which may be deployed in the core network or in the access network; the storage range of the model library may be determined according to factors such as the specific application scenario and storage capability.
  • This disclosure aims to manage the specific processes of model training, transmission, and storage with the assistance of the model library, so that each node can share the model and update its own model in time, so that the content transmitted by other nodes can be processed to obtain the required business information.
  • the communication system can quickly query the model-related information, thereby improving the speed of model transmission, training, storage and other processes, and improving the overall communication efficiency.
  • the present disclosure also provides a network architecture of an intelligent simplified network, which is applied to implement the model management method described in any one of the above embodiments, including:
  • the first type of nodes, the second type of nodes, the third type of nodes and the model library establish communication connections with each other; among them, the first type of nodes and the second type of nodes are configured to train, run and store models, and the third type of nodes are configured to run and store models; the model library is configured to store and update model information corresponding to the model.
  • the intelligent simplified network can include large-scale servers (i.e., the first type of nodes), ordinary nodes (i.e., the second type of nodes), and deployment nodes (i.e., the third type of nodes).
  • Large-scale servers can generate large models and are devices with strong computing capability in the network, such as cloud computing servers and cloudlet slices. Ordinary nodes can be used for transfer learning and for training small and medium models; they include, but are not limited to, dedicated computing servers and satellites with computing capability deployed on the access network, as well as capable devices such as personal computers, smart phones, smart vehicles, smart ships, and drones. Deployment nodes only have the ability to perceive and collect data, not the ability to train models; they include, but are not limited to, mobile phones, cameras, smart bracelets, and smart TVs.
  • this embodiment divides nodes into three types according to their model training capability: large-scale server 601, ordinary node 602, and deployment node 603.
  • These three types of nodes form an intelligent simplified network architecture, and all three are capable of running, storing, and transmitting models.
  • Large-scale servers communicate with other large-scale servers and some ordinary nodes using wired connections.
  • Ordinary nodes can communicate with other ordinary nodes using wired or wireless connections.
  • Deployment nodes can communicate with other deployment nodes or ordinary nodes using wireless connections.
  • large-scale servers and ordinary nodes can use locally stored data to directly train and generate models; they can also use locally stored models, combined with locally stored data, to generate new models through model processing technology.
  • Model processing technologies include but are not limited to model synthesis technology and model compression technology, such as knowledge distillation, transfer learning, stacking methods, etc.
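Of the model processing technologies named above, knowledge distillation can be sketched as training a student model to match the teacher's temperature-softened output distribution. The toy logits and temperature below are illustrative assumptions, not values from the disclosure.

```python
import math

def softmax(logits, temperature=1.0):
    # temperature > 1 softens the distribution, exposing the teacher's
    # relative confidence across classes ("dark knowledge")
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # cross-entropy between the softened teacher and student distributions;
    # minimizing it pulls the student's outputs toward the teacher's
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

# the loss shrinks as the student's logits approach the teacher's
far = distillation_loss([3.0, 1.0, 0.2], [0.1, 2.0, 1.0])
near = distillation_loss([3.0, 1.0, 0.2], [2.9, 1.1, 0.2])
assert near < far
```

In the network of this disclosure, such a loss is what would let an ordinary node compress a large server's model into a small or medium model fit for weaker nodes.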
  • the node that generates the model will generate the model information of the model, which includes the model number, function information and version information.
  • the node sends the model number to the model library for storage, and the model library records the node as a model contributor and a model storer.
  • the model library can include a centralized storage memory or multiple distributed memories, which can be deployed in the core network or in the access network. The storage range of the model library can be determined according to specific application scenarios and storage capabilities.
  • the model propagation process initiated by the model demander includes: the model library receives the model information of the new version of the model, and can query whether there is an old version of the model in the node. If it exists, it will be listed as a node to be updated, and send a model update notification to the node to be updated.
  • model Actively query the existing models in the model library, and initiate a propagation request for one or more required models; when a node receives a model expiration notification, it can initiate a model propagation request to request a new version of the model.
  • The model update notification need not be sent to every node to be updated; it only needs to be sent to the node acting as cluster head in a bound cluster, after which the cluster head transmits the model to the other cluster member nodes.
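The cluster-head shortcut described above can be sketched as follows; the cluster layout and node names are illustrative assumptions:

```python
# Bound clusters: cluster head -> cluster member nodes (hypothetical topology).
clusters = {
    "headA": ["memberD", "memberE"],
    "headF": ["memberG"],
}
# Nodes the model library found still holding the old version.
old_version_holders = {"headA", "memberD", "memberE", "headF", "standaloneX"}

def plan_notifications(clusters, holders):
    """Return the minimal set of nodes the model library must notify directly."""
    notified, covered = set(), set()
    for head, members in clusters.items():
        if head in holders:
            notified.add(head)                  # notify the cluster head only
            covered.update({head}, members)     # the head relays to its members
    notified |= holders - covered               # nodes outside any bound cluster
    return notified

targets = plan_notifications(clusters, old_version_holders)
```

Here only headA, headF, and the unclustered standaloneX would receive a direct notification; memberD and memberE get the new model from their cluster head.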
  • In the model propagation process initiated by a model generator, after a large-scale server or an ordinary node generates a new model, it can push the model directly to specific nodes, which are required to accept the new model so that the models across the nodes remain consistent.
  • This mode requires prior agreement between the nodes, for example forming a bound cluster in which the uppermost node serves as cluster head; when the model at the cluster head is updated, the new model is actively transmitted to the cluster members.
  • The connection between the sending end node and the receiving end node may be direct or pass through one or more relay nodes; the transmission path may comprise one link or multiple links; and either part of the target model or the entire target model may be transmitted to the receiving end node.
  • After the target model is transmitted to the receiving end node, that node stores it and must send a message to the model library, which records the node as a model storer. Since a node may store only part of a model, the model information can indicate which part of the model the node stores.
  • Model propagation is subject to a time limit. For each model version, once the time limit expires, the model library sends an expiration notification to the nodes storing the older version, suggesting that they delete the old version model. This frees the memory resources occupied by old version models, keeps each node's models up to date, and improves each node's information processing capability, thereby improving communication efficiency.
  • Consider a node that uses a new version model to extract and encode first business information into second business information. If a node receiving that second business information still uses the old version model, it may be unable to restore the second business information into third business information, or the restored quality may be too low to meet users' needs. Updating each node's models in time therefore also improves user experience. The time limit is configurable and can be determined according to the actual application scenario. After a node deletes a model, it must send a corresponding message to the model library, which then deletes the model storage record.
  • Model contributor information, however, is never deleted: when updating model information, the model library retains the model's generation location and updates only its storage location.
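The expiry bookkeeping above can be sketched as follows; the record fields and timing are illustrative assumptions. Note how the storage records are cleared while the contributor (generation location) is retained:

```python
import time

# Hypothetical registry entry for an old model version.
record = {
    "model_id": "m3",
    "version": "3.1",
    "contributor": "nodeA",           # generation location: never deleted
    "storers": {"nodeA", "nodeD"},    # storage locations: updated over time
    "expires_at": time.time() - 1.0,  # this version's time limit has passed
}

def expire(record, now=None):
    """Send expiration notices to storers of an out-of-date version and clear
    the storage records, keeping the contributor information."""
    now = time.time() if now is None else now
    notices = []
    if now >= record["expires_at"]:
        for node in sorted(record["storers"]):
            notices.append(
                f"expiration notice to {node}: "
                f"delete {record['model_id']} v{record['version']}")
        record["storers"].clear()     # storage info removed from the library
    return notices                    # record["contributor"] is left intact

notices = expire(record)
```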
  • With the assistance of the model library, the generation location, storage location, function, and version of every model in the entire system are known.
  • The model library can therefore quickly locate the target model and transmit it to the model demander over the fastest transmission path.
  • the present disclosure also provides an electronic device, a readable storage medium, and a computer program product.
  • electronic device is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers.
  • Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices.
  • the components shown herein, their connections and relationships, and their functions, are by way of example only, and are not intended to limit implementations of the disclosure described and/or claimed herein.
  • The device includes a computing unit that can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM, Read-Only Memory) or loaded from a storage unit into a random access memory (RAM, Random Access Memory). The RAM also stores the various programs and data necessary for device operation.
  • the computing unit, ROM, and RAM are connected to each other through a bus.
  • An input/output (I/O, Input/Output) interface is also connected to the bus.
  • Multiple components of the device are connected to the I/O interface, including: input units, such as keyboards and mice; output units, such as various types of displays and speakers; storage units, such as magnetic disks and optical discs; and communication units, such as network cards, modems, and wireless communication transceivers.
  • the communication unit allows the device to exchange information/data with other devices over a computer network such as the Internet and/or various telecommunication networks.
  • Computing units may be various general and/or special purpose processing components having processing and computing capabilities. Some examples of computing units include, but are not limited to, central processing units (CPU, Central Processing Unit), graphics processing units (GPU, Graphics Processing Unit), various dedicated artificial intelligence (AI, Artificial Intelligence) computing chips, various computing units that run machine learning model algorithms, digital signal processors (DSP, Digital Signal Processing), and any appropriate processors, controllers, microcontrollers, etc.
  • The computing unit executes the various methods and processes described above, such as the model management method in the above embodiments.
  • the model management method can be implemented as a computer software program tangibly embodied on a machine-readable medium, such as a storage unit.
  • part or all of the computer program may be loaded and/or installed on the device via a ROM and/or a communication unit.
  • When the computer program is loaded into RAM and executed by the computing unit, one or more steps of the model management method described above may be performed.
  • the computing unit may be configured to execute the model management method in any other suitable manner (eg, by means of firmware).
  • These implementations may be realized in a programmable system comprising at least one programmable processor, which may be a special-purpose or general-purpose programmable processor capable of receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
  • Program codes for implementing the model management method of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to processors or controllers of general-purpose computers, special purpose computers, or other programmable data processing devices, so that the program codes cause the functions/operations specified in the flowcharts and/or block diagrams to be implemented when executed by the processors or controllers.
  • The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatus, or devices, or any suitable combination of the foregoing.
  • Machine-readable storage media would include one or more wire-based electrical connections, portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, compact disc read-only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
  • The systems and techniques described herein can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user; and a keyboard and pointing device (e.g., a mouse or trackball) through which the user can provide input to the computer.
  • Other types of devices may also be used to provide interaction with the user; for example, the feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, voice input, or tactile input.
  • the systems and techniques described herein can be implemented in a computing system that includes back-end components (e.g., as a data server), or a computing system that includes middleware components (e.g., an application server), or a computing system that includes front-end components (e.g., a user computer having a graphical user interface or web browser through which a user can interact with implementations of the systems and techniques described herein), or any combination of such back-end, middleware, or front-end components.
  • The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include: Local Area Networks (LAN), Wide Area Networks (WAN), and the Internet.
  • a computer system may include clients and servers.
  • Clients and servers are generally remote from each other and typically interact through a communication network.
  • the relationship of client and server arises by computer programs running on the respective computers and having a client-server relationship to each other.
  • the server can be a cloud server, a server of a distributed system, or a server combined with a blockchain.
  • steps may be reordered, added or deleted using the various forms of flow shown above.
  • Each step described in the present disclosure may be executed in parallel, sequentially, or in a different order, provided the desired result of the technical solution disclosed herein can be achieved; no limitation is imposed in this regard.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

Provided are a model management method and apparatus, a networking architecture, an electronic device and a storage medium. The specific technical solution comprises: acquiring a model propagation request initiated by a receiving end node, and querying a storage position of a target model as a sending end node according to the model propagation request (S101); establishing at least one transmission path for transmitting the target model between the sending end node and the receiving end node (S102); and in response to the receiving end node training a pre-stored old version model into a new version model by utilizing the target model, storing model information of the new version model in a model library (S103).

Description

Model management method, device, networking architecture, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of communication technology, and in particular to a model management method, device, networking architecture, electronic equipment, and storage medium.
Background Art
In the future intelligent network of all things, network nodes tend to become intelligent. This intelligentization leads to rapid expansion of the information space, and even to the curse of dimensionality, making it harder to represent the information-bearing space. Traditional network service capabilities therefore struggle to match the high-dimensional information space: the volume of data transmitted in communication is too large, and information service systems can no longer satisfy people's needs for complex, diverse, and intelligent information transmission. Using artificial intelligence models to encode, propagate, and decode business information can significantly reduce the amount of data transmitted in communication services and greatly improve information transmission efficiency. These models are relatively stable, reusable, and propagable. The propagation and reuse of models help enhance network intelligence while reducing overhead and resource waste, forming an intent-driven network with extremely intelligent nodes and an extremely simple network.
Unlike traditional communication, the core of intent-driven network communication lies in the model. Management processes such as model training, propagation, and storage are therefore crucial to an intent-driven network communication system. There is an urgent need for a management method, and a networking architecture, that can improve model training, operation, propagation, and storage in the intent-driven network, strengthen the communication system's ability to manage models, and thereby improve the overall communication efficiency of the communication system.
Summary of the Invention
The present disclosure provides a method, device, networking architecture, electronic equipment, and storage medium for managing models in an intent-driven network.
According to an aspect of the present disclosure, a model management method is provided, including:
acquiring a model propagation request initiated by a receiving end node, and querying, according to the model propagation request, the storage location of a target model to serve as the sending end node;
establishing at least one transmission path for transmitting the target model between the sending end node and the receiving end node, for the sending end node to transmit the target model to the receiving end node;
in response to the receiving end node using the target model to train a pre-stored old version model into a new version model, storing model information of the new version model in a model library.
Optionally, the model management method further includes: after the new version model is formed, controlling the receiving end node to transmit the new version model to its cluster member nodes, and updating the model information corresponding to the new version model in the model library.
Optionally, the model management method further includes: after updating the model information of the new version model in the model library, querying whether there remain nodes to be updated that store the old version model, and transmitting the new version model to the nodes to be updated that store the old version model, so that the old version model in each node to be updated is updated to generate the new version model.
Optionally, after the nodes to be updated generate the new version model, the model library updates the model information corresponding to the old version model and the model information corresponding to the new version model, respectively.
Optionally, the model information includes the model generation location and/or model storage location and/or model number and/or model function information and/or model version information.
Optionally, the step of the model library establishing at least one transmission path specifically includes:
determining the routing nodes between the sending end node and the receiving end node; and
determining whether the sending end node transmits part of the target model or the entire target model to the receiving end node.
Optionally, the model library includes one or more memories.
According to another aspect of the present disclosure, a model management device is provided, including:
a model query module configured to acquire a model propagation request initiated by a receiving end node, and to query, according to the model propagation request, the storage location of a target model to serve as the sending end node;
a model transmission module configured to establish at least one transmission path for transmitting the target model between the sending end node and the receiving end node, for the sending end node to transmit the target model to the receiving end node;
a model training module configured to, in response to the receiving end node using the target model to train a pre-stored old version model into a new version model, store model information of the new version model in a model library.
Optionally, the model management device further includes: a first update module configured to, after the new version model is formed, control the receiving end node to actively transmit the new version model to its cluster member nodes, and to update the model information corresponding to the new version model in the model library.
Optionally, the model management device further includes: a second update module configured to, after the model information of the new version model is updated in the model library, query whether there remain nodes to be updated that store the old version model, transmit the new version model to those nodes so that the old version model in each node to be updated is updated to generate the new version model, and update, in the model library, the model information corresponding to the new version model and the model information corresponding to the old version model.
The present disclosure also provides a networking architecture for an intent-driven network, applied to execute the above model management method, including:
first-type nodes, second-type nodes, third-type nodes, and a model library that establish communication connections with one another; wherein the first-type nodes and the second-type nodes are configured to train, run, and store models, the third-type nodes are configured to run and store the models, and the model library is configured to store and update the model information corresponding to the models.
The present disclosure also provides an electronic device, including:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the model management method described in any one of the above technical solutions.
The present disclosure also provides a non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause a computer to execute the model management method according to any one of the above embodiments.
The present disclosure also provides a computer program product, including a computer program that, when executed by a processor, implements the model management method according to any one of the above embodiments.
Based on the model management method, device, networking architecture, electronic equipment, and storage medium in the above technical solutions, the present disclosure improves the ability to manage models and the shareability of models in the intent-driven network, realizing a more efficient and more stable communication system.
It should be understood that the content described in this section is not intended to identify key or important features of the embodiments of the present disclosure, nor is it intended to limit the scope of the present disclosure. Other features of the present disclosure will become readily understood through the following description.
Brief Description of the Drawings
The accompanying drawings are provided for a better understanding of the present solution and do not constitute a limitation on the present disclosure. In the drawings:
FIG. 1 is a step diagram of the model management method in an embodiment of the present disclosure;
FIG. 2 is a schematic flowchart of the model management method in an embodiment of the present disclosure;
FIG. 3 is a functional block diagram of a first model management device in an embodiment of the present disclosure;
FIG. 4 is a functional block diagram of a second model management device in an embodiment of the present disclosure;
FIG. 5 is a functional block diagram of a third model management device in an embodiment of the present disclosure;
FIG. 6 is a diagram of a networking architecture that can implement the model management method in an embodiment of the present disclosure.
Detailed Description of Embodiments
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, including various details of the embodiments to facilitate understanding; they should be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications may be made to the embodiments described herein without departing from the scope and spirit of the present disclosure. Likewise, descriptions of well-known functions and structures are omitted from the following description for clarity and conciseness.
In an intent-driven network, business information is propagated mainly through artificial intelligence models. Using an artificial intelligence model to compress the first business information to be propagated into second business information related to that model greatly reduces data traffic in the network, with compression efficiency far exceeding traditional compression algorithms. The sending end device extracts the first business information using a pre-configured first model to obtain the second business information to be transmitted, and transmits the second business information to the receiving end device. The receiving end device receives the second business information and restores it using a pre-configured second model to obtain third business information. The third business information restored by the second model may differ slightly in quality from the original first business information, but the two are consistent in content, and the user experience is almost indistinguishable. Before the sending end device transmits the second business information to the receiving end device, an update module judges whether the receiving end device needs to update the second model and, if so, transmits a pre-configured third model to the receiving end device, which uses the third model to update the second model. Processing business information with pre-trained artificial intelligence models can significantly reduce the amount of data transmitted in communication services and greatly improve information transmission efficiency. These models are relatively stable, reusable, and propagable; their propagation and reuse help enhance network intelligence while reducing overhead and resource waste. A model can be divided into several model slices according to different segmentation rules; these slices can be transmitted between different network nodes and assembled back into the model. Model slices can be stored in a distributed manner across multiple network nodes. When a network node finds that it lacks, or needs to update, a model or a model slice, it can request it from surrounding nodes that may hold that slice.
The transmission of the business information and of the model both take place in the communication network, based on network protocols. The network nodes along the transmission paths for business information and models include intent-driven routers. The functions of an intent-driven router include, but are not limited to, business information transmission, model transmission, model absorption and self-update, and security protection. The router's transmission function involves carrying business information or models from a source node to a sink node, between which multiple paths may exist. Its model transmission function can transmit model slices; by reasonably arranging model slices over multiple paths, multipath transmission of model slices improves the model transmission rate.
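Slice-based multipath transmission can be sketched as follows; the slicing rule (roughly equal byte chunks) is only an illustrative assumption, since the disclosure allows arbitrary segmentation rules:

```python
def slice_model(model_bytes, n_slices):
    """Split a serialized model into n roughly equal slices (illustrative rule)."""
    k, r = divmod(len(model_bytes), n_slices)
    slices, start = [], 0
    for i in range(n_slices):
        end = start + k + (1 if i < r else 0)
        slices.append(model_bytes[start:end])
        start = end
    return slices

def assemble(slices):
    """The sink node reassembles the slices into the original model."""
    return b"".join(slices)

model = bytes(range(10))
slices = slice_model(model, 3)   # each slice can travel over a different path
restored = assemble(slices)
```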
The present disclosure provides a model management method, as shown in FIG. 1, including:
Step S101: acquiring a model propagation request initiated by a receiving end node, and querying, according to the model propagation request, the storage location of a target model to serve as the sending end node;
Step S102: establishing at least one transmission path for transmitting the target model between the sending end node and the receiving end node, for the sending end node to transmit the target model to the receiving end node;
Step S103: in response to the receiving end node using the target model to train a pre-stored old version model into a new version model, storing model information of the new version model in a model library.
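Steps S101 to S103 can be sketched end to end as follows; the node names, hop-count metric, and registration format are illustrative assumptions:

```python
# Hypothetical library state: model id -> storing nodes, plus hop counts to the
# receiving end node.
storers = {"target_model": ["nodeB", "nodeC", "nodeF"]}
hops_to_receiver = {"nodeB": 2, "nodeC": 1, "nodeF": 5}

def handle_propagation_request(model_id, receiver):
    # S101: query the storage locations and pick the nearest as sending end node.
    sender = min(storers[model_id], key=hops_to_receiver.__getitem__)
    # S102: establish a transmission path (direct here; relays are possible).
    path = [sender, receiver]
    # S103: once the receiver trains old -> new version, register the new info.
    model_info = {"model": model_id, "version": "new", "storer": receiver}
    return sender, path, model_info

sender, path, info = handle_propagation_request("target_model", "nodeA")
```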
Specifically, this embodiment provides a model management method that includes model query, transmission, training, and information storage. For example, suppose node A needs models from other nodes to improve its stored model 3.1. It can initiate a model propagation request to the model library; the model library queries its storage table, according to the receiving end node's request, for the model storers, i.e. the storage nodes of the target model, and selects from them the node closest to the receiving end node as the sending end node. As shown in FIG. 2, based on considerations such as channel conditions, privacy protection, energy consumption, and delay constraints, node B (a large-scale server) and node C (ordinary node 1) are selected to transmit model 1 and model 2, respectively, to node A.
Further, after the sending end nodes B and C are determined, the transmission paths between the sending end nodes and the receiving end node are computed; there may be one path or several. The network establishes transmission paths between sending end nodes B, C and receiving end node A for transmitting the models. A transmission path may be a direct connection or pass through one or more relay nodes, and the connection may comprise one link or multiple links. As an optional implementation, this step determines, on the one hand, the routing nodes between the sending end node and the receiving end node, and on the other hand, whether the sending end node transmits part of the target model or the entire target model to the receiving end node. For example, if a node's existing model shares parts with the target model, only the differing parts need to be transmitted to complete the update; if a node has no part of the target model, the entire model must be transmitted. This reduces occupied network resources, avoids redundant transmission, and improves communication efficiency.
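The partial-versus-full transmission decision can be sketched as a diff over model parts; the layer-level granularity and checksum comparison are illustrative assumptions:

```python
def plan_transfer(target_parts, receiver_parts):
    """Return only the parts of the target model the receiver lacks or holds
    in a different form (parts are hypothetical {name: checksum} maps)."""
    return {name: h for name, h in target_parts.items()
            if receiver_parts.get(name) != h}

target = {"encoder": "abc", "decoder": "def", "head": "123"}
node_with_overlap = {"encoder": "abc", "decoder": "old"}  # shares the encoder
node_without_model = {}                                   # holds nothing

partial = plan_transfer(target, node_with_overlap)   # only decoder and head
full = plan_transfer(target, node_without_model)     # the entire model
```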
After the transmission of models 1 and 2 is completed, the receiving-end node A sends signaling to the model library, and the library records node A as a storer of models 1 and 2. Node A then uses model synthesis techniques to train models 1 and 2 together with its own old-version model 3.1 to generate a new-version model 3.2, generates a number for model 3.2, and sends it to the model library for new-model registration. The library stores the model information of the new-version model 3.2, which includes, but is not limited to, the generation location (the node that trained the model) and/or the storage location (i.e., the storage nodes) and/or the model code and/or the model's function information and/or the model's version information.
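The model information record enumerated above might be represented as follows. The field names are assumptions, since the disclosure only lists the kinds of information involved without prescribing a schema.

```python
from dataclasses import dataclass


@dataclass
class ModelInfo:
    """Illustrative record of the information the model library keeps per model."""
    model_code: str      # the model's number/code
    function: str        # the model's function information
    version: str         # the model's version information
    generated_at: str    # generation location: the node that trained the model
    stored_at: tuple     # storage location(s): the storage node(s)


# After node A registers the new-version model 3.2:
info = ModelInfo(model_code="3.2", function="illustrative-function",
                 version="3.2", generated_at="A", stored_at=("A",))
```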
In an optional implementation, the model management method further includes: after the new-version model is formed, controlling the receiving-end node to transmit the new-version model to its cluster member nodes, and updating the model information corresponding to the new-version model in the model library. For example, after the receiving-end node A forms the new-version model 3.2, it transmits model 3.2 to the members of its bound cluster. When node A is the cluster head and nodes D and E are its cluster members, node A transmits model 3.2 to nodes D and E; after receiving model 3.2, nodes D and E send signaling to the model library, which registers them as storers of model 3.2.
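A minimal sketch of the cluster-propagation-and-registration step above, assuming the library's storer table is a plain dictionary of node sets; the actual transmission and signaling are abstracted away.

```python
def propagate_to_cluster(storer_table, head, members, model_id):
    """The cluster head pushes the new model to every member; the library
    then records each member as a storer of that model."""
    for member in members:
        # transmit(model_id, src=head, dst=member) would happen here
        storer_table.setdefault(model_id, set()).add(member)
    return storer_table


# Node A (cluster head) already stores model 3.2; D and E are its members.
lib = {"3.2": {"A"}}
propagate_to_cluster(lib, head="A", members=["D", "E"], model_id="3.2")
```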
In an optional implementation, the model management method further includes: after the model information of the new-version model has been updated in the model library, querying whether any to-be-updated nodes still store the old-version model, and transmitting the new-version model to those to-be-updated nodes, so that the old-version models they store are updated to the new version.
Exemplarily, after receiving the model information of the new-version model 3.2, the model library finds that a lower version of the same model, namely model 3.1, still exists, and notifies its storage nodes. For instance, node F stores model 3.1 but is not a cluster member of node A and therefore has not been updated in time; the library then lists node F as a to-be-updated node and actively sends it a model update notification. On receiving the notification, node F may initiate a model propagation request to the library to update its local model 3.1. After receiving the request, the library may select node D to transmit to node F only part of model 3.2, for example the part in which model 3.2 differs from model 3.1, reducing the transmission volume and improving transmission efficiency. Node F combines the received partial model 3.2 with its local model 3.1 to obtain model 3.2, deletes the locally stored model 3.1, and sends information to the model library; the library registers node F as a storer of model 3.2 and deletes node F's storer record for model 3.1, i.e., the model information of models 3.1 and 3.2 is updated respectively.
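The node-F flow above, i.e., moving any remaining storers of the old version over to the new version and dropping the old-version records, can be sketched with a hypothetical helper; transmission and signaling are again omitted.

```python
def update_stale_storers(storer_table, old_id, new_id):
    """Find nodes still holding only the old version, record them as storers
    of the new version, and delete the old version's storer records."""
    stale = storer_table.get(old_id, set()) - storer_table.get(new_id, set())
    for node in stale:
        storer_table.setdefault(new_id, set()).add(node)
    storer_table.pop(old_id, None)  # old-version storer records are removed
    return stale


# Node F still stores model 3.1; A, D, E already store model 3.2.
storers = {"3.1": {"F"}, "3.2": {"A", "D", "E"}}
updated = update_stale_storers(storers, old_id="3.1", new_id="3.2")
```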
It should be noted that the model library in any of the above embodiments may consist of a single centralized memory or of multiple distributed memories. The present disclosure aims to manage the specific processes of model training, transmission, and storage with the assistance of the model library, so that each node can share models and update its own models in time, enabling it to subsequently process content transmitted by other nodes to obtain the required service information. With the library's assistance, the communication system can quickly look up model-related information, thereby speeding up model transmission, training, storage, and other processes and improving overall communication efficiency.
The present disclosure further provides a model management apparatus, as shown in Fig. 3, including:
a model query module 301, configured to obtain a model propagation request initiated by a receiving-end node and, according to the model propagation request, query the storage location of the target model as the sending-end node;
a model transmission module 302, configured to establish at least one transmission path between the sending-end node and the receiving-end node for transmitting the target model, so that the sending-end node transmits the target model to the receiving-end node; and
a model training module 303, configured to store, in response to the receiving-end node using the target model to train a pre-stored old-version model into a new-version model, the model information of the new-version model in a model library 304.
Specifically, this embodiment provides a model management apparatus covering model query, transmission, training, and information storage. Exemplarily, when node A needs the models of other nodes to improve its stored model 3.1, it may initiate a model propagation request. The model query module 301 looks up the model storers, i.e., the storage nodes of the target models, in its storage table according to the request from receiving-end node A, and selects from these storers the node closest to the receiving-end node as the sending-end node. As shown in Fig. 2, in consideration of channel conditions, privacy protection, energy consumption, latency constraints, and other factors, suppose node B (a large server) and node C (ordinary node 1) are selected to transmit model 1 and model 2, respectively, to node A.
Further, after the sending-end nodes B and C are determined, one or more transmission paths between the sending-end nodes and the receiving-end node are computed, and the model transmission module 302 establishes these paths between nodes B, C and node A to carry the models. A transmission path may be a direct connection between the sending-end node and the receiving-end node, or may pass through one or more relay nodes, and may consist of a single link or multiple links. In an optional implementation, this step involves, on the one hand, determining the routing nodes between the sending-end and receiving-end nodes, and on the other hand, deciding whether the sending-end node transmits part of the target model or the whole target model to the receiving-end node. For example, if a node already holds a model that shares parts with the target model, only the differing parts need to be transmitted to complete the update; if a node holds no part of the target model, the whole model must be transmitted. In this way the network resources occupied are reduced, redundant transmission is avoided, and communication efficiency is improved.
After the transmission of models 1 and 2 is completed, the receiving-end node A sends signaling to the model library, and the library records node A as a storer of models 1 and 2. The model training module 303 uses model synthesis techniques to train models 1 and 2 together with node A's old-version model 3.1 to generate a new-version model 3.2, generates a number for model 3.2, and sends it to the model library for new-model registration. The library stores the model information of the new-version model 3.2, which includes, but is not limited to, the generation location (i.e., the node that trained the model) and/or the storage location (i.e., the storage nodes) and/or the model code and/or the model's function information and/or the model's version information.
In an optional implementation, as shown in Fig. 4, the model management apparatus further includes a first update module 305 which, after the new-version model is formed, controls the receiving-end node to transmit the new-version model to its cluster member nodes and updates the model information corresponding to the new-version model in the model library. For example, after the receiving-end node A forms the new-version model 3.2, it transmits model 3.2 to the members of its bound cluster. When node A is the cluster head and nodes D and E are its cluster members, node A transmits model 3.2 to nodes D and E; after receiving model 3.2, nodes D and E send signaling to the model library, which registers them as storers of model 3.2.
In an optional implementation, as shown in Fig. 5, the model management apparatus further includes a second update module 306 which, after the model information of the new-version model has been updated in the model library, queries whether any to-be-updated nodes still store the old-version model and transmits the new-version model to those nodes, so that the old-version models they store are updated to the new version.
Exemplarily, after receiving the model information of the new-version model 3.2, the model library finds that a lower version of the same model, namely model 3.1, still exists, and notifies its storage nodes. For instance, node F stores model 3.1 but is not a cluster member of node A and therefore has not been updated in time; the library then lists node F as a to-be-updated node and actively sends it a model update notification. On receiving the notification, node F may initiate a model propagation request to the library to update its local model 3.1. After receiving the request, the library may select node D to transmit to node F only part of model 3.2, for example the part in which model 3.2 differs from model 3.1, reducing the transmission volume and improving transmission efficiency. Node F combines the received partial model 3.2 with its local model 3.1 to obtain model 3.2, deletes the locally stored model 3.1, and sends information to the model library; the library registers node F as a storer of model 3.2 and deletes node F's storer record for model 3.1, i.e., the model information of models 3.1 and 3.2 is updated respectively.
It should be noted that the model library in any of the above embodiments may consist of a single centralized memory or of multiple distributed memories, and may be deployed in the core network or in the access network; the storage scope of the model library may be determined according to the specific application scenario, storage capacity, and other factors. The present disclosure aims to manage the specific processes of model training, transmission, and storage with the assistance of the model library, so that each node can share models and update its own models in time, enabling it to subsequently process content transmitted by other nodes to obtain the required service information. With the library's assistance, the communication system can quickly look up model-related information, thereby speeding up model transmission, training, storage, and other processes and improving overall communication efficiency.
The present disclosure further provides a networking architecture of an intelligent simplified network, applied to execute the model management method of any of the above embodiments, including:
first-class nodes, second-class nodes, third-class nodes, and a model library that establish communication connections with one another, wherein the first-class nodes and second-class nodes are configured to train, run, and store models, the third-class nodes are configured to run and store models, and the model library is configured to store and update the model information corresponding to the models.
Specifically, the nodes in the network are divided by function. The intelligent simplified network may include large servers (i.e., first-class nodes), ordinary nodes (i.e., second-class nodes), and deployment nodes (i.e., third-class nodes). Large servers, on which large models can be produced, may be devices with strong computing power in the network, such as cloud computing servers and cloudlets. Ordinary nodes can perform transfer learning and train small and medium-sized models; they include, but are not limited to, dedicated computing servers deployed in the access network, satellites, and other devices in the network with relatively strong computing power, such as personal computers, smartphones, intelligent vehicles, intelligent ships, and drones. Deployment nodes have only the ability to sense and collect data, not to train models; they include, but are not limited to, mobile phones, cameras, smart bracelets, smart televisions, and similar devices.
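The capability split among the three node classes can be summarized in a small sketch; the class keys and flag names are assumptions introduced for illustration.

```python
# Capability table for the three node classes of the intelligent simplified
# network: first-class (large servers), second-class (ordinary nodes),
# third-class (deployment nodes). All three run and store models; only the
# first two can train.
CAPABILITIES = {
    "large_server":    {"train": True,  "run": True, "store": True},
    "ordinary_node":   {"train": True,  "run": True, "store": True},
    "deployment_node": {"train": False, "run": True, "store": True},
}


def can_train(node_class):
    """Whether a node of the given class may be chosen to train or retrain a model."""
    return CAPABILITIES[node_class]["train"]
```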
As shown in Fig. 6, this embodiment divides the nodes into three classes according to their model-training capability, namely large servers 601, ordinary nodes 602, and deployment nodes 603, which together form the intelligent simplified network architecture; all three classes of nodes are capable of running, storing, and transmitting models. Large servers communicate with other large servers and with some ordinary nodes over wired connections, and otherwise have no direct connection to other nodes; ordinary nodes may communicate with other ordinary nodes over wired or wireless connections; and deployment nodes may communicate with other deployment nodes or with ordinary nodes over wireless connections.
In the network architecture of this embodiment, large servers and ordinary nodes may directly train and generate models from locally stored data; they may also use locally stored models, combined with locally stored data, to train and generate new models via model processing techniques, which include, but are not limited to, model synthesis and model compression techniques such as knowledge distillation, transfer learning, and stacking.
After a model is generated, the node that generated it produces the model's model information, which contains the model's number, function information, and version information. The node sends the model number to the model library for storage, and the library records the node as both the model contributor and a model storer. The model library may consist of a single centralized memory or of multiple distributed memories, and may be deployed in the core network or in the access network; its storage scope may be determined according to the specific application scenario, storage capacity, and other factors.
Exemplarily, a model propagation process initiated by a model demander (i.e., a receiving-end node) includes the following. When the model library receives the model information of a new-version model, it may query whether any node holds an old version of that model; if so, the node is listed as a to-be-updated node and is sent a model update notification, after which the to-be-updated node may initiate a model propagation request to the library according to its needs. Each node may also actively query the library's existing models, either periodically (e.g., to discover new models in time and achieve its own model evolution) or aperiodically (e.g., when a large server or ordinary node needs other models to generate a new model), and initiate propagation requests for one or more required models. In addition, when a node receives a model validity-expiry notification, it may initiate a model propagation request to request the new-version model.
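The demander-initiated triggers enumerated above can be collected into a small enumeration; the names are assumptions introduced purely for illustration.

```python
from enum import Enum, auto


class PropagationTrigger(Enum):
    """Events that may lead a node to initiate a model propagation request."""
    UPDATE_NOTIFICATION = auto()  # the library found an old version at the node
    PERIODIC_QUERY = auto()       # the node polls the library for new models
    ON_DEMAND_QUERY = auto()      # the node needs other models to generate a new one
    EXPIRY_NOTIFICATION = auto()  # the node's stored version outlived its validity period
```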
In an optional implementation of the above technical solution, the model update notification need not be sent to every to-be-updated node; it only needs to be sent to the to-be-updated node serving as the cluster head of a bound cluster, and the cluster head then transmits the model to the other cluster member nodes.
In another embodiment, a model propagation process initiated by a model generator (i.e., a sending-end node) includes: after a large server or an ordinary node generates a new model, it may send the model directly to specific nodes and require those nodes to receive it, so that the models across nodes remain consistent. The premise of this mode is that the nodes agree in advance, for example by forming a bound cluster in which the cluster head is the uppermost node; when the model at the cluster head is updated, the head actively transmits the new model to the cluster members.
After the sending-end node and the receiving-end node of the target model are determined, a transmission path between them needs to be selected. The connection between the sending-end node and the receiving-end node may be direct or via one or more relay nodes; the transmission path may consist of a single link or multiple links; and either part of the target model or the whole target model may be transmitted to the receiving-end node.
After the target model is transmitted to the receiving-end node, that node stores the target model and must send information to the model library, which records the node as a model storer. Since a node may store only part of a model, the model information may indicate which part of the model the node stores. Model propagation is subject to a validity period. For the different versions of a model, once the period expires, the model library sends an expiry notification to the nodes storing the lower version, suggesting that they delete the old-version model; this frees the memory occupied by old-version models, keeps each node's models up to date, and improves each node's information-processing capability, thereby improving communication efficiency. For example, suppose a node uses a new-version model to extract and encode first service information into second service information; if a node receiving that second service information still uses the old-version model, it may be unable to restore the second service information into third service information, or the restored quality may be too low to meet user needs. Updating each node's models in time therefore also helps improve user experience.
It should be noted that the validity period is settable and may be determined according to the actual application scenario. After deleting a model, a node must send a corresponding message to the model library, which then deletes the model-storer record. For security reasons, however, the model-contributor information is not deleted along with it; that is, when updating model information, the library never deletes a model's generation location and only updates its storage location. With the library's assistance, the generation locations, storage locations, functions, versions, and so on of all models in the entire system can be known; whenever any node needs a model, the library can quickly find the target model and deliver it to the model demander over the fastest transmission path. This improves the capability to manage models and the shareability of models in the intelligent simplified network, achieving a more efficient and more stable communication system.
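The validity-period check can be sketched with a hypothetical `expired_storers` helper, assuming the library tracks when each storer recorded its copy of a given version; the notification itself is abstracted away.

```python
def expired_storers(stored_at, now, validity_period):
    """Return the nodes whose stored copy has outlived the settable validity
    period; the library would send each of them an expiry notification
    suggesting deletion of the old-version model.

    stored_at: node -> timestamp at which the node recorded its copy.
    """
    return [node for node, stored in stored_at.items()
            if now - stored > validity_period]


# Node F stored its copy long ago; node G stored its copy recently.
notices = expired_storers({"F": 100, "G": 900}, now=1000, validity_period=500)
```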
According to embodiments of the present disclosure, the present disclosure further provides an electronic device, a readable storage medium, and a computer program product.
Specifically, the electronic device is intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. The electronic device may also represent various forms of mobile apparatuses, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing apparatuses. The components shown herein, their connections and relationships, and their functions are merely examples and are not intended to limit the implementations of the present disclosure described and/or claimed herein.
The device includes a computing unit, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) or loaded from a storage unit into a random access memory (RAM). The RAM also stores the various programs and data required for the operation of the device. The computing unit, the ROM, and the RAM are connected to one another via a bus, to which an input/output (I/O) interface is also connected.
Multiple components of the device are connected to the I/O interface, including: an input unit, such as a keyboard or mouse; an output unit, such as various types of displays or speakers; a storage unit, such as a magnetic disk or optical disc; and a communication unit, such as a network card, modem, or wireless communication transceiver. The communication unit allows the device to exchange information/data with other devices over a computer network such as the Internet and/or various telecommunication networks.
The computing unit may be any of various general-purpose and/or special-purpose processing components with processing and computing capability. Some examples of the computing unit include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units that run machine learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller, or microcontroller. The computing unit executes the methods and processes described above, for example the model management method of the above embodiments. For instance, in some embodiments the model management method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device via the ROM and/or the communication unit. When the computer program is loaded into the RAM and executed by the computing unit, one or more steps of the model management method described above may be performed. Alternatively, in other embodiments, the computing unit may be configured in any other suitable manner (e.g., by means of firmware) to execute the model management method.
Various implementations of the systems and techniques described herein may be realized in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include implementation in one or more computer programs executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor and can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input apparatus, and at least one output apparatus.
Program code for implementing the model management method of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus, such that, when executed by the processor or controller, the program code causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on a machine, partly on a machine, partly on a machine and partly on a remote machine as a stand-alone software package, or entirely on a remote machine or server.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by, or in connection with, an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium, and may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described herein can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user; and a keyboard and a pointing device (e.g., a mouse or trackball) through which the user can provide input to the computer. Other kinds of devices can also be used to provide interaction with the user; for example, the feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user can be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described herein can be implemented in a computing system that includes a back-end component (e.g., as a data server), a computing system that includes a middleware component (e.g., an application server), a computing system that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described herein), or a computing system that includes any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), and the Internet.
A computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that steps may be reordered, added, or deleted using the various forms of flow shown above. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in a different order, as long as the desired result of the technical solution disclosed herein can be achieved; no limitation is imposed herein.
The specific implementations described above do not limit the protection scope of the present disclosure. It should be apparent to those skilled in the art that various modifications, combinations, sub-combinations, and substitutions may be made depending on design requirements and other factors. Any modifications, equivalent replacements, and improvements made within the spirit and principles of the present disclosure shall fall within the protection scope of the present disclosure.

Claims (14)

  1. A model management method, characterized by comprising:
    obtaining a model propagation request initiated by a receiving-end node, and querying, according to the model propagation request, the storage location of a target model to serve as a sending-end node (S101);
    establishing at least one transmission path for transmitting the target model between the sending-end node and the receiving-end node, for the sending-end node to transmit the target model to the receiving-end node (S102); and
    in response to the receiving-end node using the target model to train a pre-stored old-version model into a new-version model, storing model information of the new-version model in a model library (S103).
  2. The model management method according to claim 1, characterized in that the method further comprises: after the new-version model is formed, controlling the receiving-end node to transmit the new-version model to its cluster member nodes, and updating the model information corresponding to the new-version model in the model library.
  3. The model management method according to claim 1, characterized in that the method further comprises: after updating the model information of the new-version model in the model library, querying whether any to-be-updated node storing the old-version model still exists, and transmitting the new-version model to the to-be-updated node storing the old-version model, so as to update the old-version model in the to-be-updated node to generate the new-version model.
  4. The model management method according to claim 3, characterized in that, after the to-be-updated node is updated to generate the new-version model, the model library respectively updates the model information corresponding to the old-version model and the model information corresponding to the new-version model.
  5. The model management method according to claim 1, characterized in that the model information comprises a model generation location and/or a model storage location and/or a model code and/or model function information and/or model version information.
  6. The model management method according to claim 1, characterized in that the step of the model library establishing at least one transmission path specifically comprises:
    determining routing nodes between the sending-end node and the receiving-end node; and
    determining that the sending-end node transmits part of the target model or all of the target model to the receiving-end node.
  7. The model management method according to any one of claims 1-6, characterized in that the model library comprises one or more memories.
  8. A model management apparatus, characterized by comprising:
    a model query module (301), configured to obtain a model propagation request initiated by a receiving-end node, and query, according to the model propagation request, the storage location of a target model to serve as a sending-end node;
    a model transmission module (302), configured to establish at least one transmission path for transmitting the target model between the sending-end node and the receiving-end node, for the sending-end node to transmit the target model to the receiving-end node; and
    a model training module (303), configured to, in response to the receiving-end node using the target model to train a pre-stored old-version model into a new-version model, store model information of the new-version model in a model library (304).
  9. The model management apparatus according to claim 8, characterized in that the apparatus further comprises: a first update module (305), configured to, after the new-version model is formed, control the receiving-end node to actively transmit the new-version model to its cluster member nodes, and update the model information corresponding to the new-version model in the model library.
  10. The model management apparatus according to claim 8, characterized in that the apparatus further comprises: a second update module (306), configured to, after the model information of the new-version model is updated in the model library, query whether any to-be-updated node storing the old-version model still exists, transmit the new-version model to the to-be-updated node storing the old-version model so as to update the old-version model in the to-be-updated node to generate the new-version model, and update, in the model library, the model information corresponding to the new-version model and the model information corresponding to the old-version model.
  11. A networking architecture of an intelligent network, characterized in that it is applied to executing the model management method according to any one of claims 1-7, and comprises:
    first-type nodes (601), second-type nodes (602), third-type nodes (603), and a model library that establish communication connections with one another; wherein the first-type nodes (601) and the second-type nodes (602) are configured to train, run, and store models; the third-type nodes (603) are configured to run and store the models; and the model library is configured to store and update model information corresponding to the models.
  12. An electronic device, comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein
    the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the model management method according to any one of claims 1-7.
  13. A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause a computer to execute the model management method according to any one of claims 1-7.
  14. A computer program product, comprising a computer program, wherein the computer program, when executed by a processor, implements the model management method according to any one of claims 1-7.
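For illustration only (this sketch is not part of the claims or the disclosed embodiments), the flow of claim 1 — resolving the sending-end node (S101), establishing a transmission path (S102), and recording the new-version model's information in the model library (S103) — could be outlined in Python as follows. All names here (`ModelLibrary`, `handle_propagation_request`, the dictionary fields) are hypothetical and introduced purely for this sketch; real routing-node selection (claim 6) and model training are omitted.

```python
# Hypothetical sketch of the model management method of claim 1 (S101-S103).
from dataclasses import dataclass, field

@dataclass
class ModelLibrary:
    """Stores model information per claim 5 (storage location, version, ...)."""
    records: dict = field(default_factory=dict)  # model_id -> info dict

    def query_storage_location(self, model_id: str) -> str:
        # S101: resolve which node currently stores the target model;
        # that node serves as the sending-end node.
        return self.records[model_id]["storage_node"]

    def store_model_info(self, model_id: str, info: dict) -> None:
        # S103: record the model information of the new-version model.
        self.records[model_id] = info

def handle_propagation_request(library: ModelLibrary, request: dict):
    # S101: the receiving-end node initiates the request; the library
    # looks up the sending-end node.
    sender = library.query_storage_location(request["model_id"])
    receiver = request["receiver"]
    # S102: establish at least one transmission path (a direct path here;
    # the routing nodes of claim 6 are omitted for brevity).
    path = [sender, receiver]
    # S103: the receiver trains its pre-stored old-version model into a
    # new-version model using the received target model (training itself
    # is elided), and the library stores the new version's information.
    new_version = request["old_version"] + 1
    library.store_model_info(
        request["model_id"],
        {"storage_node": receiver, "version": new_version},
    )
    return path, new_version
```

Under these assumptions, a receiving-end node that holds version 1 of model "m1" would trigger a lookup of the node storing "m1", receive it over the direct path, and the library record would then point at the receiver with the incremented version.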
PCT/CN2022/136416 2022-01-20 2022-12-04 Model management method and apparatus, networking architecture, electronic device and storage medium WO2023138234A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210065485.7A CN116527497A (en) 2022-01-20 2022-01-20 Model management method, device, networking architecture, electronic equipment and storage medium
CN202210065485.7 2022-01-20

Publications (1)

Publication Number Publication Date
WO2023138234A1 true WO2023138234A1 (en) 2023-07-27

Family

ID=87347755

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/136416 WO2023138234A1 (en) 2022-01-20 2022-12-04 Model management method and apparatus, networking architecture, electronic device and storage medium

Country Status (2)

Country Link
CN (1) CN116527497A (en)
WO (1) WO2023138234A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140241346A1 (en) * 2013-02-25 2014-08-28 Google Inc. Translating network forwarding plane models into target implementation using network primitives
CN111552462A (en) * 2019-12-31 2020-08-18 远景智能国际私人投资有限公司 Equipment model construction method and device of Internet of things equipment and storage medium
CN111797289A (en) * 2019-04-09 2020-10-20 Oppo广东移动通信有限公司 Model processing method and device, storage medium and electronic equipment
CN112738061A (en) * 2020-12-24 2021-04-30 四川虹微技术有限公司 Information processing method, device, management platform, electronic equipment and storage medium


Also Published As

Publication number Publication date
CN116527497A (en) 2023-08-01

Similar Documents

Publication Publication Date Title
US10581932B2 (en) Network-based dynamic data management
EP3852419A1 (en) Latency-sensitive network communication method and apparatus thereof
KR20120123262A (en) System and method for providing quality of service in wide-area messaging fabric
WO2024104284A1 (en) Nwdaf-based management and decision-making method for computing resources
US12033044B2 (en) Interactive and dynamic mapping engine (IDME)
CN115426327B (en) Calculation force scheduling method and device, electronic equipment and storage medium
Gadasin et al. Organization of Interaction between the Concept of Fog Computing and Segment Routing for the Provision of IoT Services in Smart Grid Networks
WO2023138234A1 (en) Model management method and apparatus, networking architecture, electronic device and storage medium
Duran et al. Age of Twin (AoT): A New Digital Twin Qualifier for 6G Ecosystem
WO2024001266A9 (en) Video stream transmission control method and apparatus, device, and medium
CN112714146B (en) Resource scheduling method, device, equipment and computer readable storage medium
EP2335392B1 (en) Method, apparatus and computer program product for providing composite capability information for devices in distributed networks
WO2023138231A1 (en) Residual propagation method and apparatus for network model
CN110063050B (en) Service scheduling method and system
WO2023138233A1 (en) Model transmission method and apparatus, electronic device and readable storage medium
CN102647424A (en) Data transmission method and data transmission device
WO2023138232A1 (en) Model training method and apparatus, electronic device, and storage medium
WO2023138238A1 (en) Information transmitting method and apparatus based on intent-driven network, electronic device, and medium
CN110830295A (en) Equipment management method and system
CN113791896B (en) Connection path determination method, device and readable storage medium
WO2023198212A1 (en) Model selection method and apparatus based on environmental perception
CN103905249A (en) Mobile Internet network monitoring management method based on JXME
CN116684939B (en) Message processing method, device, computer equipment and computer readable storage medium
US20240320049A1 (en) Artificial Intelligence-Based Data Processing Method, Electronic Device and Computer-Readable Storage Medium
CN116389349B (en) Node autonomous hybrid cloud data transmission method and system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22921658

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE