WO2023138231A1 - Residual propagation method and apparatus for network model - Google Patents


Info

Publication number
WO2023138231A1
Authority
WO
WIPO (PCT)
Prior art keywords
node
sink node
model
sink
slices
Prior art date
Application number
PCT/CN2022/136413
Other languages
French (fr)
Chinese (zh)
Inventor
许晓东
任水迪
韩书君
董辰
王碧舳
Original Assignee
北京邮电大学 (Beijing University of Posts and Telecommunications)
Priority date
Filing date
Publication date
Application filed by 北京邮电大学 (Beijing University of Posts and Telecommunications)
Publication of WO2023138231A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/02 Topology update or discovery
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08 Configuration management of networks or network elements
    • H04L 41/0803 Configuration setting
    • H04L 41/0823 Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability

Definitions

  • the present disclosure relates to the field of communication technologies, and in particular, to a residual propagation method and a residual propagation device of a network model.
  • network nodes are becoming increasingly intelligent.
  • this intelligentization of network nodes has caused the information space to expand rapidly, even to the point of the curse of dimensionality, making the information-carrying space harder to represent and leaving traditional network service capabilities mismatched with the high-dimensional information space.
  • the volume of data transmitted over communication links has grown too large, and existing information service systems can no longer meet users' needs for complex, diverse, and intelligent information transmission.
  • Using artificial intelligence models to encode, disseminate, and decode business information can significantly reduce the amount of data transmission in communication services and greatly improve the efficiency of information transmission.
  • these models are relatively stable, reusable, and disseminable. Their dissemination and reuse help enhance network intelligence while reducing overhead and resource waste, forming an intelligent network of highly intelligent nodes over a minimal network.
  • the network has a storage function: models are stored in the network, either on the end-user side or in the cloud.
  • Each node can absorb many models on the network to realize self-evolution, which is similar to knowledge distillation.
  • in essence, model propagation is a form of federated learning, which requires support and control from corresponding protocols. The technical problem to be solved is therefore how to transmit models during the communication process.
  • the disclosure provides a residual propagation method and a residual propagation device of a network model.
  • a method for residual propagation of a network model, applied to a network that includes at least one routing path, each routing path including a source node, a sink node, and an intermediate node arranged between the source node and the sink node;
  • the source node stores all model slices in the preset demand of the sink node, where the sink-node demand is the sink node's demand for model slices;
  • the residual propagation method includes:
  • traversing along the routing path from the sink node to the source node, where the intermediate node and/or the source node sends the model slices required by the sink node to the sink node.
  • a residual propagation device for a network model, applied to a network that includes at least one routing path, each routing path including a source node, a sink node, and an intermediate node arranged between the source node and the sink node;
  • the source node stores all model slices in the preset sink-node requirements, where the sink-node requirements are the sink node's demand for model slices;
  • the residual propagation device includes:
  • a path obtaining unit configured to obtain a routing path
  • the processing unit is configured to traverse along the routing path from the sink node to the source node, and the intermediate node and/or the source node sends the model slice required by the sink node to the sink node.
  • an electronic device including:
  • the memory stores instructions executable by at least one processor; when executed by the at least one processor, the instructions enable the at least one processor to perform the above method.
  • a non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause a computer to execute the above method.
  • the solution provided by the embodiments of the present disclosure traverses the routing path in reverse, from the sink node to the source node; for the model slices that need to be transmitted, the traversed intermediate nodes and/or source node transmit only the necessary parts to the sink node according to the sink node's needs, forming model residual propagation. This not only reduces the delay for the sink node to obtain the required model slices, but also reduces data redundancy in the end-to-end communication process and improves the resource utilization of the overall network.
  • FIG. 1 is a schematic flowchart of a method for residual propagation of a network model according to Embodiment 1 of the present disclosure
  • FIG. 2 is a schematic flowchart of step S102 in the method for residual propagation of a network model according to Embodiment 1 of the present disclosure
  • FIG. 3 is a routing path diagram without model residual propagation according to Embodiment 1 of the present disclosure.
  • FIG. 9 is a routing path diagram after model residual propagation is completed according to Embodiment 1 of the present disclosure.
  • FIG. 10 is a schematic structural diagram of a residual propagation device provided according to Embodiment 2 of the present disclosure.
  • FIG. 11 is a block diagram of an electronic device of an embodiment of the present disclosure.
  • the sending-end device uses the preconfigured first model to extract the first service information, obtaining the second service information to be transmitted; the sending-end device then transmits the second service information to the receiving-end device.
  • the receiving-end device receives the second service information and uses the preconfigured second model to recover it, obtaining the third service information; the third service information restored by the second model differs slightly in quality from the original first service information, but the two are consistent in content, and the user experience is almost the same.
  • after the sending-end device transmits the second service information to the receiving-end device, the method further includes: an update module judges whether the receiving-end device needs to update the second model and, when an update is required, transmits a preconfigured third model to the receiving-end device, which uses the third model to update the second model. Processing business information through pretrained artificial intelligence models can significantly reduce the amount of data transmitted in communication services and greatly improve the efficiency of information transmission.
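The model-based transmission described above can be illustrated with a deliberately tiny sketch. The codebook "models" and the function names below are illustrative assumptions, not the patent's concrete models; the point is only that a shared preconfigured model lets the sender transmit a compact code instead of the full payload.

```python
# Toy first/second model pair: a shared codebook and its inverse. In the
# patent these would be pre-trained AI models; a dict stands in for them here.
first_model = {"temperature reading": 0, "humidity reading": 1, "battery low": 2}
second_model = {v: k for k, v in first_model.items()}  # receiving-end inverse

def sender_extract(first_service_info: str) -> int:
    """First model: extract the compact second service information."""
    return first_model[first_service_info]

def receiver_recover(second_service_info: int) -> str:
    """Second model: recover the third service information at the receiver."""
    return second_model[second_service_info]

code = sender_extract("battery low")   # only this small code is transmitted
restored = receiver_recover(code)
```

With a real model the recovered information may differ slightly in quality, as the text notes; with this lossless toy codebook it is identical.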
  • Model propagation and reuse will help enhance network intelligence while reducing overhead and resource waste.
  • the model can be divided into several model slices according to different segmentation rules.
  • the above model slices can also be transmitted between different network nodes, and the model slices can be assembled into models.
  • Model slices can be distributed and stored on multiple network nodes. When a network node finds that it lacks or needs to update a certain model or a certain model slice, it can make a request to the surrounding nodes that may have the slice.
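As a sketch of slicing and reassembly under stated assumptions (the patent leaves the segmentation rule open, and the parameter names below are invented for illustration), a model's parameters can be split into named slices and later merged back:

```python
def slice_model(params: dict, groups: dict) -> dict:
    """Split a parameter dict into named model slices per a segmentation rule."""
    return {name: {k: params[k] for k in keys} for name, keys in groups.items()}

def assemble_model(slices: dict) -> dict:
    """Assemble the model slices back into a full parameter dict."""
    model = {}
    for s in slices.values():
        model.update(s)
    return model

# Hypothetical two-slice segmentation of a small encoder/decoder model.
params = {"enc.w": [1, 2], "enc.b": [3], "dec.w": [4, 5], "dec.b": [6]}
groups = {"A1": ["enc.w", "enc.b"], "A2": ["dec.w", "dec.b"]}
slices = slice_model(params, groups)
restored_model = assemble_model(slices)
```

Each named slice can then be stored on, and requested from, a different network node.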
  • Both the transmission of business information and the transmission model occur at the network layer of the communication network, and the communication transmission is performed based on the network layer protocol.
  • the network nodes along the paths for transmitting service information and models include the Intelligent-Driven Router (IDR).
  • the functions of the IDR include, but are not limited to, business information transmission, model transmission, model absorption and self-update, and security protection.
  • the transmission function of the IDR covers the transmission of business information or models from the source node to the sink node; there are multiple paths between the source node and the sink node.
  • the model transmission function of the IDR can transmit model slices; by rationally arranging model slices across multiple paths, slices can be transmitted over several paths in parallel to improve the model transmission rate.
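One simple arrangement policy consistent with the bullet above, offered only as an assumption (the patent does not fix how slices are mapped to paths), is round-robin assignment so that slices travel over the available paths in parallel:

```python
def assign_slices_to_paths(slices: list, paths: list) -> dict:
    """Round-robin plan: slice i is carried by path i mod len(paths)."""
    plan = {p: [] for p in paths}
    for i, s in enumerate(slices):
        plan[paths[i % len(paths)]].append(s)
    return plan

plan = assign_slices_to_paths(["A1", "A2", "A3", "A4", "A5"], ["path1", "path2"])
```

Here path1 would carry A1, A3, A5 while path2 carries A2, A4, roughly halving the transfer time when the paths have similar rates.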
  • Fig. 1 shows a residual propagation method for a network model provided by an embodiment of the present disclosure, applied to a network that includes at least one routing path, each routing path including a source node, a sink node, and an intermediate node arranged between the source node and the sink node;
  • the source node stores all model slices in the preset sink-node requirements, where the sink-node requirements are the sink node's demand for model slices;
  • Step S101: obtaining a routing path;
  • Step S102: traversing along the routing path from the sink node to the source node, where the intermediate node and/or the source node sends the model slices required by the sink node to the sink node.
  • the present disclosure traverses the routing path in reverse, from the sink node to the source node; for the model slices that need to be transmitted, the traversed intermediate nodes and/or source node transmit only the necessary parts to the sink node according to the sink node's needs, forming model residual propagation. This not only reduces the delay for the sink node to obtain the required model slices, but also reduces data redundancy in the end-to-end communication process and improves the resource utilization of the overall network.
  • the source node, intermediate node, and sink node on the routing path are also determined at this time.
  • the transmission nodes can be traversed in reverse order from the sink node to the source node along the routing path.
  • the transmission node includes the intermediate node and the source node, and only the model slice required by the sink node needs to be transmitted, thus forming the model residual transmission.
  • the model slice required by the sink node includes the first model slice and the second model slice.
  • since the first intermediate node below the sink node has the second model slice and the third model slice, it only needs to transmit the second model slice to the sink node.
  • only the second model slice exists in the second intermediate node; since the sink node already has the second model slice and the second intermediate node is the last intermediate node, the second intermediate node is skipped and the traversal moves on to the source node.
  • the source node has all the model slices required by the sink node; at this time, the source node sends the first model slice to the sink node.
  • the intermediate node stores model slices, but the model slices in the intermediate nodes may include the existing model slices of the sink node, and the model slices in the intermediate nodes may also include model slices required by the sink node.
  • the sink node already has the third model slice and the fourth model slice, and at this time the sink node needs to store the first model slice and the second model slice;
  • the first intermediate node in the routing path has the second model slice and the third model slice, that is, the first intermediate node at this time has the model slice that the sink node already has - the third model slice, and the model slice that the sink node needs - the second model slice;
  • the second intermediate node in the routing path only has the third model slice, that is, the second intermediate node at this time only has the existing model slice of the sink node—the third model slice.
  • determining the requirements of the sink node includes the following steps:
  • according to the type and quantity of the model slices in the source node and the type and quantity of the model slices in the sink node, determine the type and quantity of the model slices that the sink node still lacks.
  • the model slices that the sink node currently lacks can be found in the source node, and these missing model slices are the sink node's demand for model slices.
  • the model slice is an AI model produced by training each intelligent node in the Intent-Driven Network, which can include various types according to the actual situation and node requirements.
  • for example, model slices can be editing models, animation-generation models, and the like, including but not limited to classification models, segmentation models, and graph neural network models.
  • the embodiment of the present disclosure provides a possible implementation manner, wherein the source node also stores all model slices existing in the sink node.
  • the source node at this time not only stores all the model slices required by the sink node, but also stores all the model slices in the sink node.
  • the source node at this time includes: 2 classification model slices, 2 segmentation model slices, and 3 graph neural network model slices;
  • the sink node includes: 1 classification model slice and 1 segmentation model slice;
  • the sink node's demand for model slices at this time is: 1 classification model slice, 1 segmentation model slice, and 3 graph neural network model slices.
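The type-and-quantity demand in this example can be computed as a multiset difference; `collections.Counter` subtraction (which discards non-positive counts) reproduces the figures above:

```python
from collections import Counter

# Slice inventories from the example: counts per slice type.
source = Counter({"classification": 2, "segmentation": 2, "graph_nn": 3})
sink = Counter({"classification": 1, "segmentation": 1})

demand = source - sink  # what the sink node still lacks, per type
```

`demand` comes out as 1 classification, 1 segmentation, and 3 graph neural network slices, matching the stated sink-node demand.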
  • determining the requirements of the sink node includes the following steps:
  • set a unique number for each model slice, and determine the model slices that the sink node still lacks according to the numbers of the model slices in the source node and the numbers of the model slices in the sink node.
  • for example, assuming there are ten types of model slices, label them A1-A10;
  • the source node not only stores all the model slices required by the sink node, but also stores all the model slices already in the sink node;
  • the source nodes at this time include: model slice A1, model slice A2, model slice A3, model slice A4, model slice A5, model slice A6, model slice A7, model slice A8, model slice A9 and model slice A10;
  • the sink nodes include: model slice A1, model slice A2, model slice A3, model slice A4, model slice A5, model slice A6 and model slice A7;
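With unique slice numbers, the sink node's demand reduces to a plain set difference over the numbers, as this sketch of the A1-A10 example shows:

```python
# Inventories from the example, keyed by unique slice number.
source_slices = {f"A{i}" for i in range(1, 11)}  # source node: A1..A10
sink_slices = {f"A{i}" for i in range(1, 8)}     # sink node: A1..A7

demand = source_slices - sink_slices  # slices the sink node still lacks
```

The resulting demand is {A8, A9, A10}, exactly the slices named as the sink node's requirements in the example.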
  • step S102 specifically includes the following steps:
  • Step S1021: traversing along the routing path from the sink node to the source node;
  • Step S1022: judging whether the intermediate node has at least one model slice required by the sink node;
  • if it does, the intermediate node sends the demanded model slices it holds to the sink node, and then step S1023 is executed;
  • if it does not, the traversal continues along the routing path toward the source node and returns to step S1022, until the requirements of the sink node are met or all intermediate nodes have been traversed without satisfying them.
  • Step S1023: judging whether the requirements of the sink node are met;
  • if they are not met, the requirements of the sink node are updated, the traversal continues along the routing path toward the source node, and the process returns to step S1022, until the requirements are met or all intermediate nodes have been traversed without satisfying them.
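Steps S1021-S1023 can be sketched as a short loop; the node representation (dicts holding a name and a set of slice numbers) is an assumption made for illustration:

```python
def residual_propagation(path: list, demand: set):
    """path: [sink, intermediate..., source], each {'name': str, 'slices': set}.
    Walks the path backwards from the sink; every traversed node sends the
    demanded slices it holds, and the demand shrinks until met (or the
    source node is reached). Returns (slices sent per node, unmet demand)."""
    sink = path[0]
    remaining = set(demand)
    sent = {}
    for node in path[1:]:                 # reverse traversal, hop by hop
        hit = remaining & node["slices"]  # step S1022: demanded slices here?
        if hit:
            sink["slices"] |= hit         # node sends them to the sink
            sent[node["name"]] = hit
            remaining -= hit              # update the sink-node requirement
        if not remaining:                 # step S1023: requirement met, stop
            break
    return sent, remaining

path = [
    {"name": "C", "slices": {"s3", "s4"}},              # sink
    {"name": "D1", "slices": {"s2", "s3"}},             # intermediate
    {"name": "B", "slices": {"s1", "s2", "s3", "s4"}},  # source
]
sent, unmet = residual_propagation(path, {"s1", "s2"})
```

In this toy run D1 supplies s2, the source B supplies only the remaining s1, and no slice is transmitted twice.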
  • the source node B and the sink node C form a certain routing path, which also includes at least one intermediate node arranged between the source node B and the sink node C.
  • this path is defined as the routing path, and each node along it holds model slices of different numbers and types.
  • the transmission node includes the intermediate node and the source node B;
  • when the sink node C wants to obtain some model slices that it lacks, it does not directly request transmission from the source node B; instead, in reverse order, starting from the previous-hop transmission node of the sink node C, it queries each transmission node in the routing path one by one.
  • the specific steps are as follows:
  • if the intermediate node does not have any model slice required by the sink node, it is skipped, and the search traces back to the previous-hop transmission node;
  • if it does, the transmission node sends the model slices included in the sink-node requirement to the sink node C, and it is judged whether the model slices in the intermediate node meet the sink-node requirement:
  • if the intermediate node does not meet the requirements of the sink node, the requirements are updated, and the search traces back to the previous-hop transmission node to look for the model slices in the updated requirements;
  • if the intermediate node meets the requirements of the sink node, that is, it contains all the model slices the sink node still requires, it sends all of them to the sink node C, and the backtracking search terminates;
  • otherwise, upon reaching the source node B, the source node B sends the model slices in the current sink-node demand to the sink node C according to that demand.
  • the sink node can initiate a request containing its requirements to the previous-hop intermediate node according to its preset sink-node requirements, so that the intermediate node can search for the model slices in the request:
  • any found model slices are sent to the sink node, the sink node's requirements are updated, and a request containing the updated requirements is forwarded to the previous-hop transmission node;
  • if none are found, the request is forwarded directly to the previous-hop transmission node;
  • the last intermediate node forwards the request containing the latest sink-node requirements to the source node, and the source node sends the model slices in those requirements to the sink node, ending this round of model residual propagation.
  • if an intermediate node has all the model slices required in the received request, then after sending the corresponding model slices back to the sink node, it stops updating and forwarding the request, terminates the backtracking search, and ends this round of model residual propagation.
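The hop-by-hop request of the preceding bullets might be represented as follows; the message fields and handler are assumptions, since the patent does not fix a wire format:

```python
from dataclasses import dataclass

@dataclass
class SliceRequest:
    sink_id: str
    demand: set  # slice numbers still missing at the sink node

def handle_request(node_slices: set, req: SliceRequest):
    """Serve whatever demanded slices this node holds, rewrite the demand,
    and report whether the request must still be forwarded upstream."""
    served = req.demand & node_slices
    req.demand -= served              # updated sink-node requirement
    forward = bool(req.demand)        # forward only while demand is unmet
    return served, forward

req = SliceRequest("C", {"A8", "A9", "A10"})
served, forward = handle_request({"A2", "A7", "A8"}, req)
```

Here the node serves A8, the demand shrinks to {A9, A10}, and the request is forwarded to the previous-hop node.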
  • the source node B and the sink node C form a certain routing path;
  • the routing path also includes at least one intermediate node arranged between the source node B and the sink node C; the intermediate nodes are denoted D1, D2, and D3, and this path is defined as the routing path.
  • when the sink node C wants to obtain some model slices that it lacks, it does not directly request transmission from the source node B; instead, in reverse order, starting from the previous-hop transmission node of the sink node C, it queries each transmission node in the routing path one by one. As shown in Figure 3, the routing path includes, in order: sink node C, intermediate node D1, intermediate node D2, intermediate node D3, and source node B;
  • the source node B not only includes all the model slices required by the sink node C, but also includes all the existing model slices in the sink node C;
  • the source node B at this time includes: model slice A1, model slice A2, model slice A3, model slice A4, model slice A5, model slice A6, model slice A7, model slice A8, model slice A9 and model slice A10;
  • the sink node C includes: model slice A1, model slice A2, model slice A3, model slice A4, model slice A5, model slice A6 and model slice A7;
  • intermediate node D1 at this time includes: model slice A2, model slice A7 and model slice A8;
  • the intermediate node D2 includes: model slice A1, model slice A2 and model slice A9;
  • the intermediate node D3 at this time includes: model slice A1, model slice A5 and model slice A6;
  • the requirements of the sink node are: model slice A8, model slice A9, and model slice A10.
  • the requirement of the sink node at this time is defined as the first requirement.
  • the sink node C initiates a model-slice request according to its demand for model slices; the traversal first reaches the intermediate node D1 below the sink node C, and it is determined that D1 holds the model slice A8 from the first requirement, which the sink node requires.
  • after the sink node's requirement is updated, the updated requirement is defined as the second requirement;
  • the request is forwarded directly, according to the second requirement, to the previous-hop transmission node, the intermediate node D2; it is determined that D2 holds the model slice A9 from the second requirement, which the sink node requires, as shown in FIG.
  • the sink node requirement is updated and defined as the third requirement;
  • the request is forwarded directly, according to the third requirement, to the previous-hop transmission node, the intermediate node D3, and it is determined that D3 does not hold the model slice A10 from the third requirement;
  • the request is then forwarded directly, according to the third requirement, to the previous-hop transmission node, the source node B; as shown in Figure 8, according to the current third requirement, the source node B sends the remaining model slice A10 to the sink node C, and this round of model residual propagation ends;
  • the final sink node C includes model slice A1, model slice A2, model slice A3, model slice A4, model slice A5, model slice A6, model slice A7, model slice A8, model slice A9 and model slice A10.
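The whole example can be replayed end to end with a self-contained sketch (node inventories copied from the bullets above; the traversal logic follows steps S1021-S1023):

```python
# Routing path in traversal order: sink C, then D1, D2, D3, source B.
path = [
    ("C", {"A1", "A2", "A3", "A4", "A5", "A6", "A7"}),
    ("D1", {"A2", "A7", "A8"}),
    ("D2", {"A1", "A2", "A9"}),
    ("D3", {"A1", "A5", "A6"}),
    ("B", {f"A{i}" for i in range(1, 11)}),
]
sink_name, sink_slices = path[0]
demand = {"A8", "A9", "A10"}  # the first requirement

for name, slices in path[1:]:
    hit = demand & slices   # demanded slices held by this node
    if hit:
        sink_slices |= hit  # the node sends them to sink node C
        demand -= hit       # first -> second -> third requirement
    if not demand:
        break
```

D1 supplies A8, D2 supplies A9, D3 holds nothing demanded and is passed over, and the source B supplies only the residual A10, after which C holds A1 through A10.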
  • the embodiment of the present disclosure provides a possible implementation manner, wherein the model slice can be run and stored in all nodes in the network, and can be transmitted between all nodes.
  • the source nodes, sink nodes, and intermediate nodes are all intelligent nodes in the communication system.
  • the intelligent nodes include but are not limited to smartphones, tablet computers, laptops, and edge servers.
  • the above-mentioned intelligent nodes all have strong computing capabilities: they can learn and train to generate AI models, can perform hierarchical semantic intelligent source coding and run classification models, and can absorb many models on the network to achieve self-evolution.
  • An embodiment of the present disclosure provides a possible implementation manner, wherein the network is a residual network.
  • the residual network is a convolutional neural network proposed by four researchers from Microsoft Research; it won the image classification and object detection tasks in the 2015 ImageNet Large Scale Visual Recognition Challenge (ILSVRC).
  • the characteristic of the residual network is that it is easy to optimize and can improve the accuracy by adding considerable depth.
  • its internal residual blocks use skip connections, which alleviate the vanishing-gradient problem caused by increasing depth in deep neural networks.
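The skip connection can be shown in miniature, y = x + F(x); the toy elementwise F below is an illustrative stand-in for a real convolutional block:

```python
def residual_block(x: list, weights: list) -> list:
    """y = x + F(x), with F a toy elementwise linear map standing in for
    the convolutional layers of a real residual block."""
    fx = [w * v for w, v in zip(weights, x)]  # F(x)
    return [v + f for v, f in zip(x, fx)]     # skip connection adds x back

y = residual_block([1.0, 2.0, 3.0], [0.0, 0.0, 0.0])
```

When F collapses to zero the block reduces to the identity, which is part of why residual networks remain easy to optimize as depth grows.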
  • Fig. 10 shows a residual propagation device for a network model provided by an embodiment of the present disclosure, applied to a network that includes at least one routing path, each routing path including a source node, a sink node, and intermediate nodes arranged between the source node and the sink node;
  • the source node stores all model slices in the preset sink node requirements, and the sink node requirements are the sink node's demand for model slices;
  • the residual propagation device includes:
  • a path obtaining unit 201 configured to obtain a routing path
  • the processing unit 202 is configured to traverse along the routing path from the sink node to the source node, and the intermediate node and/or the source node sends the model slices required by the sink node to the sink node.
  • the present disclosure traverses the routing path in reverse, from the sink node to the source node; for the model slices that need to be transmitted, the traversed intermediate nodes and/or source node transmit only the necessary parts to the sink node according to the sink node's needs, forming model residual propagation. This not only reduces the delay for the sink node to obtain the required model slices, but also reduces data redundancy in the end-to-end communication process and improves the resource utilization of the overall network.
  • processing unit 202 includes:
  • the demand determination module is configured to determine the type and quantity of model slices that the sink node still lacks according to the type and quantity of model slices in the source node and the type and quantity of model slices in the sink node.
  • An embodiment of the present disclosure provides a possible implementation manner, wherein the source node further includes all model slices existing in the sink node.
  • processing unit 202 includes:
  • the demand determination module is configured to set a unique number for each model slice, and determine the model slices that the sink node still lacks according to the number of the model slice in the source node and the number of the model slice in the sink node.
  • processing unit 202 includes:
  • a traversal module configured to traverse along the routing path from the sink node to the source node
  • a first judging module configured to judge whether the intermediate node has at least one model slice required by the sink node;
  • if it does, the intermediate node sends the demanded model slices it holds to the sink node;
  • a second judging module configured to judge whether the requirements of the sink node are met.
  • the acquisition, storage and application of the user's personal information involved are in compliance with relevant laws and regulations, and do not violate public order and good customs.
  • the present disclosure also provides an electronic device and a readable storage medium.
  • the electronic device includes:
  • a memory storing instructions executable by at least one processor; when executed by the at least one processor, the instructions enable the at least one processor to perform the above method.
  • the electronic device traverses the routing path in reverse, from the sink node to the source node; for the model slices that need to be transmitted, the traversed intermediate nodes and/or source node transmit the necessary parts they hold to the sink node according to the sink node's needs, forming model residual propagation. This not only reduces the delay for the sink node to obtain the required model slices, but also reduces data redundancy in the end-to-end communication process and improves the resource utilization of the overall network.
  • the non-transitory computer-readable storage medium stores computer instructions, and the computer instructions are used to make a computer execute the method provided by the embodiments of the present disclosure.
  • the readable storage medium traverses the routing path in reverse, from the sink node to the source node; for the model slices that need to be transmitted, the traversed intermediate nodes and/or source node transmit the necessary parts they hold to the sink node according to the sink node's needs, forming model residual propagation. This not only reduces the delay for the sink node to obtain the required model slices, but also reduces data redundancy in the end-to-end communication process and improves the resource utilization of the overall network.
  • FIG. 11 shows a schematic block diagram of an example electronic device 300 that may be used to implement embodiments of the present disclosure.
  • Electronic device is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers.
  • Electronic devices may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smart phones, wearable devices, and other similar computing devices.
  • the components shown herein, their connections and relationships, and their functions, are by way of example only, and are not intended to limit implementations of the disclosure described and/or claimed herein.
  • the device 300 includes a computing unit 301, which can execute various appropriate actions and processes according to a computer program stored in a read-only memory (ROM, Read-Only Memory) 302 or a computer program loaded from a storage unit 308 into a random access memory (RAM, Random Access Memory) 303.
  • ROM Read-only memory
  • RAM Random Access Memory
  • in the RAM 303, various programs and data necessary for the operation of the device 300 can also be stored.
  • the computing unit 301, ROM 302, and RAM 303 are connected to each other through a bus 304.
  • An input/output (I/O) interface 305 is also connected to the bus 304.
  • Multiple components in the device 300 are connected to the I/O interface 305, including: an input unit 306, such as a keyboard, a mouse, etc.; an output unit 307, such as various types of displays, speakers, etc.; a storage unit 308, such as a magnetic disk, an optical disk, etc.; and a communication unit 309, such as a network card, a modem, a wireless communication transceiver, etc.
  • the communication unit 309 allows the device 300 to exchange information/data with other devices over a computer network such as the Internet and/or various telecommunication networks.
  • the computing unit 301 may be various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 301 include, but are not limited to, a central processing unit (CPU, Central Processing Unit), a graphics processing unit (GPU, Graphics Processing Unit), various dedicated artificial intelligence (AI, Artificial Intelligence) computing chips, various computing units that run machine learning model algorithms, digital signal processors (DSP, Digital Signal Processing), and any appropriate processors, controllers, microcontrollers, etc.
  • the computing unit 301 executes the various methods and processes described above, such as the residual propagation method of the network model.
  • the residual propagation method for network models may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 308.
  • part or all of the computer program may be loaded and/or installed on the device 300 via the ROM 302 and/or the communication unit 309.
  • the computing unit 301 may be configured in any other appropriate way (for example, by means of firmware) to execute the residual propagation method of the network model.
  • The systems and techniques described above may be implemented in one or more computer programs executable and/or interpretable on a programmable system comprising at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, capable of receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
  • Program codes for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to processors or controllers of general-purpose computers, special purpose computers, or other programmable data processing devices, so that the program codes cause the functions/operations specified in the flowcharts and/or block diagrams to be implemented when executed by the processors or controllers.
  • the program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatus, or devices, or any suitable combination of the foregoing.
  • machine-readable storage media would include one or more wire-based electrical connections, portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, compact disc read-only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
  • the systems and techniques described herein can be implemented on a computer having: a display device (e.g., a CRT (Cathode Ray Tube, cathode ray tube) or LCD (Liquid Crystal Display, liquid crystal display) monitor) for displaying information to the user; and a keyboard and pointing device (e.g., a mouse or trackball) through which the user can provide input to the computer.
  • Other types of devices may also be used to provide interaction with the user; for example, the feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, voice input, or tactile input.
  • the systems and techniques described herein can be implemented in a computing system that includes back-end components (e.g., as a data server), or a computing system that includes middleware components (e.g., an application server), or a computing system that includes front-end components (e.g., a user computer having a graphical user interface or web browser through which a user can interact with implementations of the systems and techniques described herein), or any combination of such back-end, middleware, or front-end components.
  • the components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include: Local Area Network (LAN), Wide Area Network (WAN), and the Internet.
  • a computer system may include clients and servers.
  • Clients and servers are generally remote from each other and typically interact through a communication network.
  • the relationship of client and server arises by computer programs running on the respective computers and having a client-server relationship to each other.
  • the server can be a cloud server, a server of a distributed system, or a server combined with a blockchain.
  • steps may be reordered, added, or deleted using the various forms of flow shown above.
  • each step described in the present disclosure may be executed in parallel, sequentially, or in a different order, as long as the desired result of the technical solution disclosed in the present disclosure can be achieved; no limitation is imposed herein.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Provided are a residual propagation method and apparatus for a network model, which relate to the technical field of communications. In a specific implementation, the method is applied to a network that comprises a routing path, the routing path comprising a source node, an intermediate node and a sink node; the source node stores all model slices in a preset sink node requirement, the sink node requirement being the sink node's demand for model slices. The residual propagation method comprises: acquiring a routing path (S101); and traversing from the sink node to the source node along the routing path, with the intermediate node and/or the source node sending the model slices in the sink node requirement to the sink node (S102).

Description

A residual propagation method and residual propagation device for a network model

Technical Field

The present disclosure relates to the field of communication technologies, and in particular, to a residual propagation method and a residual propagation device of a network model.

Background
In the future intelligent network of all things, network nodes tend to become intelligent. This intelligentization has led to rapid expansion of the information space, and even the curse of dimensionality, which exacerbates the difficulty of representing the information-carrying space, so that traditional network service capabilities can hardly match the high-dimensional information space. The amount of data transmitted in communication is too large, and information service systems cannot continuously meet people's needs for complex, diverse, and intelligent information transmission. Using artificial intelligence models to encode, propagate, and decode service information can significantly reduce the amount of data transmitted in communication services and greatly improve information transmission efficiency. These models are relatively stable, reusable, and propagatable. The propagation and reuse of models will help enhance network intelligence while reducing overhead and resource waste, forming an intent-driven network (IDN) with extremely intelligent nodes and a minimal network.

Against the background of the IDN, with its extremely intelligent nodes, minimal network, combination of the virtual and the real, and digital twins, what is propagated in the IDN is no longer just traditional content data, but relatively stable models generated by computation. The network has a storage function; models are stored in the network, possibly on the end-user side or in the cloud. Each node can absorb many models in the network to realize self-evolution, a method similar to knowledge distillation. The essence of model propagation is federated learning, which requires support and control from corresponding protocols. Therefore, the technical problem to be solved at present is how to transmit models during the communication process.
Summary of the Invention

The present disclosure provides a residual propagation method and a residual propagation device of a network model.

According to a first aspect of the present disclosure, there is provided a residual propagation method of a network model, applied to a network, where the network includes at least one routing path, and each routing path includes a source node, a sink node, and an intermediate node arranged between the source node and the sink node;

wherein the source node stores all model slices in a preset sink node requirement, and the sink node requirement is the sink node's demand for model slices;

the residual propagation method includes:

obtaining a routing path; and

traversing along the routing path from the sink node toward the source node, where the intermediate node and/or the source node sends the model slices in the sink node requirement to the sink node.
According to a second aspect of the present disclosure, there is provided a residual propagation device of a network model, applied to a network, where the network includes at least one routing path, and each routing path includes a source node, a sink node, and an intermediate node arranged between the source node and the sink node;

wherein the source node stores all model slices in a preset sink node requirement, and the sink node requirement is the sink node's demand for model slices;

the residual propagation device includes:

a path obtaining unit, configured to obtain a routing path; and

a processing unit, configured to traverse along the routing path from the sink node toward the source node, where the intermediate node and/or the source node sends the model slices in the sink node requirement to the sink node.
According to a third aspect of the present disclosure, there is provided an electronic device, including:

at least one processor; and

a memory communicatively connected to the at least one processor; wherein,

the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the above method.

According to a fourth aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause a computer to execute the above method.
The beneficial effects brought by the technical solutions provided by the present disclosure are as follows:

In the solutions provided by the embodiments of the present disclosure, a reverse traversal is performed along the routing path from the sink node toward the source node, and for the model slices that need to be transmitted, the traversed intermediate nodes and/or source node transmit the needed parts they hold to the sink node according to the sink node requirement, thereby forming model residual propagation. This not only reduces the delay for the sink node to obtain the required model slices, but also reduces data redundancy in the end-to-end communication process and improves the resource utilization of the overall network.

It should be understood that what is described in this section is not intended to identify key or important features of the embodiments of the present disclosure, nor is it intended to limit the scope of the present disclosure. Other features of the present disclosure will be easily understood from the following description.
Brief Description of the Drawings

The accompanying drawings are used for a better understanding of the present solution and do not constitute a limitation on the present disclosure. In the drawings:
Fig. 1 is a schematic flowchart of a residual propagation method of a network model according to Embodiment One of the present disclosure;

Fig. 2 is a schematic flowchart of step S102 in the residual propagation method of a network model according to Embodiment One of the present disclosure;

Fig. 3 is a routing path diagram without model residual propagation according to Embodiment One of the present disclosure;

Figs. 4-8 are flowcharts of model residual propagation according to Embodiment One of the present disclosure;

Fig. 9 is a routing path diagram after model residual propagation is completed according to Embodiment One of the present disclosure;

Fig. 10 is a schematic structural diagram of a residual propagation device according to Embodiment Two of the present disclosure;

Fig. 11 is a block diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description

Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, including various details of the embodiments of the present disclosure to facilitate understanding, which should be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present disclosure. Likewise, descriptions of well-known functions and structures are omitted from the following description for clarity and conciseness.
In the IDN, service information is mainly propagated through artificial intelligence models. By using an artificial intelligence model to compress the first service information to be propagated into second service information related to the model, the data traffic in the network is greatly reduced, and the compression efficiency far exceeds that of traditional compression algorithms. The sending-end device uses a preconfigured first model to extract the first service information and obtain the second service information to be transmitted, and then transmits the second service information to the receiving-end device. The receiving-end device receives the second service information and uses a preconfigured second model to restore it into third service information; the third service information restored by the second model has a slight difference in quality from the original first service information, but the two are consistent in content, and the user experience is almost indistinguishable.

Before the sending-end device transmits the second service information to the receiving-end device, the method further includes: an update module judging whether the receiving-end device needs to update the second model, and, when an update is judged to be needed, transmitting a preconfigured third model to the receiving-end device, which uses the third model to update the second model. Processing service information through pre-trained artificial intelligence models can significantly reduce the amount of data transmitted in communication services and greatly improve information transmission efficiency. These models are relatively stable, reusable, and propagatable. Model propagation and reuse will help enhance network intelligence while reducing overhead and resource waste. A model can be divided into several model slices according to different segmentation rules; these model slices can be transmitted between different network nodes, and model slices can be assembled into a model. Model slices can be stored in a distributed manner on multiple network nodes. When a network node finds that it lacks, or needs to update, a certain model or model slice, it can send a request to surrounding nodes that may hold that slice.

Both service information transmission and model transmission occur at the network layer of the communication network, and communication is performed based on network layer protocols. The network nodes on the paths for transmitting service information and models include intent-driven routers (IDRs). The functions of an IDR include, but are not limited to, service information transmission, model transmission, self-update by absorbing models, and security protection. The transmission function of an IDR involves transmitting service information or models from a source node to a sink node, with multiple paths existing between the two. The model transmission function of an IDR can transmit model slices; by reasonably arranging model slices over multiple paths and transmitting them in parallel, the model transmission rate can be improved.
Embodiment One
Fig. 1 shows a residual propagation method of a network model provided by an embodiment of the present disclosure, applied to a network, where the network includes at least one routing path, and each routing path includes a source node, a sink node, and an intermediate node arranged between the source node and the sink node;

wherein the source node stores all model slices in a preset sink node requirement, and the sink node requirement is the sink node's demand for model slices;

As shown in Fig. 1, the method includes:

Step S101, obtaining a routing path;

Step S102, traversing along the routing path from the sink node toward the source node, where the intermediate node and/or the source node sends the model slices in the sink node requirement to the sink node.
In the present disclosure, a reverse traversal is performed along the routing path from the sink node toward the source node, and for the model slices that need to be transmitted, the traversed intermediate nodes and/or source node transmit the needed parts they hold to the sink node according to the sink node requirement, thereby forming model residual propagation. This not only reduces the delay for the sink node to obtain the required model slices, but also reduces data redundancy in the end-to-end communication process and improves the resource utilization of the overall network.

Specifically, once the routing path is determined, the source node, intermediate nodes, and sink node on the routing path are also determined. The transmission nodes, which include the intermediate nodes and the source node, can then be traversed in reverse order along the routing path from the sink node toward the source node, and only the model slices required by the sink node need to be transmitted, thus forming model residual transmission.

Exemplarily, suppose the model slices required by the sink node include a first model slice and a second model slice. If the first intermediate node below the sink node holds the second and third model slices, it only needs to transmit the second model slice to the sink node. The traversal then continues to the second intermediate node, which holds only the second model slice; since the sink node already has the second model slice, and the second intermediate node is the last intermediate node, it is skipped and the traversal reaches the source node. The source node holds all model slices in the sink node requirement, and at this point it sends the first model slice to the sink node.

Specifically, the intermediate nodes store model slices, but the model slices in an intermediate node may include model slices the sink node already has, as well as model slices the sink node requires.

For example, suppose the sink node already has the third and fourth model slices, and needs the first and second model slices;

in this case, the first intermediate node in the routing path holds the second and third model slices, that is, it holds a model slice the sink node already has (the third model slice) and a model slice the sink node requires (the second model slice);

and the second intermediate node in the routing path holds only the third model slice, that is, only a model slice the sink node already has.
An embodiment of the present disclosure provides a possible implementation, wherein determining the sink node requirement includes the following step:

determining the types and quantities of model slices that the sink node still lacks according to the types and quantities of model slices in the source node and in the sink node.

Specifically, since all model slices required by the sink node are stored in the source node, the model slices that the sink node currently lacks can be looked up in the source node; these missing model slices constitute the sink node's demand for model slices.

Specifically, model slices are AI models produced by training at the intelligent nodes of the IDN, and may be of various types according to the actual situation and node requirements.

For example, model slices may be editing models, animation generation models, and the like, including but not limited to classification models, segmentation models, and graph neural network models.
An embodiment of the present disclosure provides a possible implementation, wherein the source node also stores all model slices already present in the sink node.

Specifically, in this case the source node stores not only all model slices in the sink node requirement, but also all model slices already present in the sink node.

Exemplarily, the source node in this case includes: 2 classification model slices, 2 segmentation model slices, and 3 graph neural network model slices;

while the sink node includes: 1 classification model slice and 1 segmentation model slice;

therefore, the sink node's demand for model slices is: 1 classification model slice, 1 segmentation model slice, and 3 graph neural network model slices.
An embodiment of the present disclosure provides a possible implementation, wherein determining the sink node requirement includes the following steps:

assigning a unique number to each model slice; and

determining the model slices that the sink node still lacks according to the numbers of the model slices in the source node and in the sink node.

Exemplarily, a unique number is assigned to each model slice; suppose there are ten types of model slices in total, numbered A1-A10.

When the source node stores not only all model slices in the sink node requirement but also all model slices already present in the sink node:

the source node includes model slices A1 through A10;

while the sink node includes model slices A1 through A7;

therefore, the sink node's demand for model slices is: model slice A8, model slice A9, and model slice A10.
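With unique slice numbers, computing the sink node requirement reduces to a set difference between the source node's and the sink node's slice inventories. A minimal sketch of this step, using the A1-A10 example above (the function and variable names are illustrative, not part of the disclosure):

```python
def sink_node_demand(source_slices, sink_slices):
    """Slices the sink node still lacks: those stored at the source node
    (which holds every slice the sink needs) but absent from the sink."""
    missing = set(source_slices) - set(sink_slices)
    # Sort numerically on the identifier suffix so A8 precedes A10.
    return sorted(missing, key=lambda s: int(s[1:]))

# Example from the disclosure: the source holds A1-A10, the sink holds A1-A7.
source = [f"A{i}" for i in range(1, 11)]
sink = [f"A{i}" for i in range(1, 8)]
print(sink_node_demand(source, sink))  # ['A8', 'A9', 'A10']
```

Because slice numbers are unique, this difference is exact; the type-and-quantity variant described earlier would instead compare per-type counts.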
An embodiment of the present disclosure provides a possible implementation. As shown in Fig. 2, step S102 specifically includes the following steps:

Step S1021, traversing along the routing path from the sink node toward the source node;

Step S1022, judging whether the intermediate node holds at least one model slice in the sink node requirement;

if so, the intermediate node sends the model slices in the sink node requirement that it holds to the sink node, and step S1023 is then executed;

if not, the traversal continues along the routing path toward the source node and returns to step S1022, until the sink node requirement is satisfied or all intermediate nodes have been traversed without satisfying it; when all intermediate nodes have been traversed and the sink node's demand for model slices is still not satisfied, the source node sends the model slices in the sink node requirement to the sink node;

Step S1023, judging whether the sink node requirement is satisfied;

if so, ending the traversal;

if not, updating the sink node requirement, continuing the traversal along the routing path toward the source node, and returning to step S1022, until the sink node requirement is satisfied or all intermediate nodes have been traversed without satisfying it; when all intermediate nodes have been traversed and the sink node's demand for model slices is still not satisfied, the source node sends the model slices in the sink node requirement to the sink node.
具体地,假设某一个节点B为信源节点,另一个节点C为信宿节点,两节点形成一条确定的路由通路,该路由通路还包括设置在信源节点B和信源节点C中至少一个的中间节点,将该路由通路定义为路由路径,该路由路径中的经过的各个传输节点处有数量、种类不尽相同的模型切片,该传输节点包括中间节点和信源节点B;Specifically, assuming that a certain node B is a source node, and another node C is a sink node, the two nodes form a certain routing path, and the routing path also includes an intermediate node arranged at least one of the source node B and the source node C. The routing path is defined as a routing path, and there are model slices of different numbers and types at each passing node in the routing path. The transmission node includes the intermediate node and the source node B;
当信宿节点C想要获取自己所缺少的一些模型切片时,并非直接向信源节点B发起请求进行传输,而是按照倒序,从信宿节点C的上一跳传输节点开始,逐一询问路由路径中的各个传输节点,具体步骤如下所示:When the sink node C wants to obtain some model slices that it lacks, it does not directly initiate a request to the source node B for transmission, but in reverse order, starting from the previous hop transmission node of the sink node C, and inquires each transmission node in the routing path one by one. The specific steps are as follows:
Determine whether the intermediate node holds at least one of the model slices in the sink node requirement:
If the intermediate node holds none of the model slices in the sink node requirement, skip it and trace back to the previous-hop transfer node to continue the search.
If the intermediate node holds model slices in the sink node requirement, that transfer node sends the slices it holds from the requirement to sink node C, and it is then determined whether the slices at the intermediate node satisfy the sink node requirement:
If the intermediate node does not satisfy the sink node requirement, the requirement is updated, and the search traces back to the previous-hop transfer node for the model slices in the updated requirement.
If the intermediate node satisfies the sink node requirement, that is, the intermediate node holds all the model slices in the requirement, it sends all the slices the sink needs to sink node C, and the backtracking search terminates.
If all the intermediate nodes have been searched and some model slices required by sink node C are still missing, source node B sends the model slices in the current sink node requirement to sink node C.
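The reverse-order lookup in the steps above can be sketched as a short simulation. This is an illustrative sketch only: the function name `residual_propagation`, the in-memory `(name, slices)` pairs and the returned transfer log are assumptions of this illustration, not part of the disclosed protocol, in which the nodes would exchange request and slice messages over the routing path.

```python
def residual_propagation(sink_have, requirement, path_nodes):
    """Simulate the backtracking slice lookup.

    path_nodes: (name, slices) pairs ordered from the sink node's
    previous-hop transfer node back toward the source node; the source
    node is assumed to hold every slice in the requirement.
    """
    received = {}                        # node name -> slices sent to the sink
    remaining = set(requirement) - set(sink_have)
    for name, have in path_nodes:
        found = remaining & set(have)
        if not found:
            continue                     # node holds none of the needed slices: skip it
        received[name] = found           # the node sends the slices it does hold
        remaining -= found               # update the sink node requirement
        if not remaining:
            return received, remaining   # requirement met: terminate the backtracking
    if remaining:
        # all intermediate nodes searched: the source node sends the rest
        received["source"] = set(remaining)
        remaining = set()
    return received, remaining
```

For example, a sink holding A1 that needs A1 to A3 over a path D1 = {A2}, D2 = {A4} receives A2 from D1 and A3 from the source node.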
Specifically, the sink node may, according to its preset sink node requirement, initiate a request containing that requirement to the previous-hop intermediate node, so that the intermediate node searches for the model slices in the requirement according to the received request:
If model slices in the sink node requirement are found, the found slices are sent to the sink node, the sink node requirement is updated, and a request containing the updated requirement is forwarded to the previous-hop transfer node.
If no model slice in the sink node requirement is found, the request is forwarded directly to the previous-hop transfer node.
If the sink node requirement is still not satisfied when the search reaches the last intermediate node, the last intermediate node forwards the request containing the latest requirement to the source node, and the source node sends the model slices in that requirement to the sink node, ending this round of model residual propagation.
If an intermediate node holds all the model slices required in the received request, then after it sends the corresponding slices back to the sink node, it stops updating and forwarding the request, the backtracking search terminates, and this round of model residual propagation ends.
Exemplarily, assume that a node B is a source node and another node C is a sink node, and that the two nodes form a determined routing channel. The channel further includes at least one intermediate node arranged between source node B and sink node C; here the intermediate nodes are set as D1, D2 and D3, and the channel is defined as the routing path. The transfer nodes along the routing path hold model slices that differ in number and kind, where the transfer nodes include the intermediate nodes and source node B.
When sink node C wants to obtain model slices that it lacks, it does not request transmission directly from source node B. Instead, in reverse order, starting from the transfer node one hop before sink node C, it queries each transfer node along the routing path one by one. As shown in Figure 3, the routing path comprises, in order: sink node C, intermediate node D1, intermediate node D2, intermediate node D3, and source node B.
Each model slice is given a unique number; assume there are ten kinds of model slices in total, labeled A1 to A10.
Consider the case where source node B includes not only all the model slices required by sink node C but also all the model slices already present at sink node C.
Source node B then holds: model slices A1, A2, A3, A4, A5, A6, A7, A8, A9 and A10;
sink node C holds: model slices A1, A2, A3, A4, A5, A6 and A7;
intermediate node D1 holds: model slices A2, A7 and A8;
intermediate node D2 holds: model slices A1, A2 and A9;
intermediate node D3 holds: model slices A1, A5 and A6.
From the model slices at source node B and sink node C it follows that
the sink node requirement is: model slices A8, A9 and A10. For ease of description, this requirement is defined as the first requirement.
The specific steps are as follows:
As shown in Figure 4, sink node C initiates a model-slice request according to its requirement for model slices. The traversal first reaches intermediate node D1, the previous hop of sink node C, and determines that D1 holds model slice A8 from the first requirement, which the sink node needs. As shown in Figure 5, D1 then sends model slice A8 to sink node C, and since the sink node requirement is not yet satisfied, the requirement is updated to: model slices A9 and A10. For ease of description, this updated requirement is defined as the second requirement.
As shown in Figure 5, the request is then forwarded according to the second requirement to the previous-hop transfer node, intermediate node D2, and it is determined that D2 holds model slice A9 from the second requirement, which the sink node needs. As shown in Figure 6, D2 sends model slice A9 to sink node C via intermediate node D1, and since the sink node requirement is still not satisfied, the requirement is updated to: model slice A10. For ease of description, this updated requirement is defined as the third requirement.
As shown in Figure 6, the request is then forwarded according to the third requirement to the previous-hop transfer node, intermediate node D3, and it is determined that D3 does not hold model slice A10 from the third requirement.
At this point all intermediate nodes have been searched, and a model slice required by sink node C has still not been found. As shown in Figure 7, the request is forwarded according to the third requirement to the previous-hop transfer node, source node B; as shown in Figure 8, source node B sends the remaining model slice A10 to sink node C according to the current third requirement, and this round of model residual propagation ends.
As shown in Figure 9, sink node C finally holds model slices A1, A2, A3, A4, A5, A6, A7, A8, A9 and A10.
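The walkthrough above can be reproduced as a runnable sketch. Node names B, C and D1 to D3 and slice labels A1 to A10 follow the example; the slice transport is simulated in memory rather than carried over a real routing path.

```python
# Holdings from the example: B has A1-A10, C has A1-A7,
# D1 = {A2, A7, A8}, D2 = {A1, A2, A9}, D3 = {A1, A5, A6}.
source_B = {f"A{i}" for i in range(1, 11)}
sink_C = {f"A{i}" for i in range(1, 8)}
path = [("D1", {"A2", "A7", "A8"}),
        ("D2", {"A1", "A2", "A9"}),
        ("D3", {"A1", "A5", "A6"})]      # ordered from C's previous hop toward B

requirement = source_B - sink_C          # first requirement: {A8, A9, A10}
transfers = []
for name, have in path:                  # reverse-order traversal of the path
    hit = requirement & have
    if hit:
        transfers.append((name, hit))    # the node sends the slices it holds
        requirement -= hit               # second, then third requirement
    if not requirement:
        break                            # early termination once satisfied
if requirement:                          # intermediate nodes exhausted:
    transfers.append(("B", requirement)) # source B sends the remainder
    requirement = set()

for name, hit in transfers:
    sink_C |= hit                        # C finally holds A1-A10
```

Running the sketch records A8 arriving from D1, A9 from D2 and A10 from source node B, matching Figures 4 to 9.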
An embodiment of the present disclosure provides a possible implementation in which the model slices can run and be stored on all nodes in the network, and can be transmitted between all of the nodes.
Specifically, the source node, the sink node and the intermediate nodes are all intelligent nodes in a communication system. Intelligent nodes include, but are not limited to, smartphones, tablet computers, laptop computers and edge servers. These intelligent nodes all have strong computing capability, can learn and train to generate AI models, can perform hierarchical semantic source coding and classification modeling, and are able to absorb many models on the network to achieve self-evolution.
An embodiment of the present disclosure provides a possible implementation in which the network is a residual network.
It should be noted that the residual network is a convolutional neural network proposed by four researchers from Microsoft Research; it won the image-classification and object-recognition tasks of the 2015 ImageNet Large Scale Visual Recognition Challenge (ILSVRC). A residual network is easy to optimize and can improve accuracy by adding considerable depth. Its internal residual blocks use skip connections, which alleviate the vanishing-gradient problem caused by increasing depth in deep neural networks.
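The skip connection mentioned above can be shown in a minimal numeric sketch. The two-layer transform, the ReLU activation and the list-based matrix arithmetic are simplifications assumed for illustration; they are not the exact residual-network architecture.

```python
def relu(v):
    return [max(x, 0.0) for x in v]

def matvec(w, v):
    return [sum(wij * vj for wij, vj in zip(row, v)) for row in w]

def residual_block(x, w1, w2):
    """y = ReLU(F(x) + x), with F(x) = W2 * ReLU(W1 * x).
    The '+ x' term is the skip connection: even if F(x) collapses
    toward zero, the identity path still carries the signal."""
    fx = matvec(w2, relu(matvec(w1, x)))
    return relu([f + xi for f, xi in zip(fx, x)])
```

With both weight matrices at zero, F(x) vanishes and the block reduces to the identity on non-negative inputs, which is why very deep stacks of such blocks remain easy to optimize.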
Embodiment 2
Figure 10 shows a residual propagation apparatus for a network model provided by an embodiment of the present disclosure, applied to a network. The network includes at least one routing path, each routing path including a source node and a sink node, and intermediate nodes arranged between the source node and the sink node.
The source node stores all the model slices in a preset sink node requirement, the sink node requirement being the sink node's requirement for model slices.
As shown in Figure 10, the residual propagation apparatus includes:
a path acquisition unit 201, configured to acquire the routing path; and
a processing unit 202, configured to traverse along the routing path from the sink node toward the source node, such that the intermediate nodes and/or the source node send the model slices in the sink node requirement to the sink node.
The present disclosure traverses the routing path in reverse, from the sink node toward the source node, and for the model slices to be transmitted, each traversed intermediate node and/or the source node transmits the needed slices it holds to the sink node according to the sink node requirement, thereby forming model residual propagation. This not only reduces the delay for the sink node to obtain the required model slices, but also reduces data redundancy in end-to-end communication and improves the resource utilization of the overall network.
An embodiment of the present disclosure provides a possible implementation in which the processing unit 202 includes:
a requirement determination module, configured to determine the kinds and quantities of model slices that the sink node still lacks according to the kinds and quantities of the model slices at the source node and at the sink node.
An embodiment of the present disclosure provides a possible implementation in which the source node further includes all the model slices already present at the sink node.
An embodiment of the present disclosure provides a possible implementation in which the processing unit 202 includes:
a requirement determination module, configured to assign a unique number to each model slice and determine the model slices that the sink node still lacks according to the numbers of the model slices at the source node and at the sink node.
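The numbering-based determination just described reduces to a set difference over slice numbers. A minimal sketch (the function name is an assumption of this illustration):

```python
def missing_slices(source_numbers, sink_numbers):
    """Numbers of the slices the sink node still lacks: present at the
    source node but absent at the sink node, in ascending order."""
    return sorted(set(source_numbers) - set(sink_numbers))
```

For the example of Embodiment 1, where the source holds slices numbered 1 to 10 and the sink holds 1 to 7, this yields [8, 9, 10].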
An embodiment of the present disclosure provides a possible implementation in which the processing unit 202 includes:
a traversal module, configured to traverse along the routing path from the sink node toward the source node;
a first judgment module, configured to judge whether an intermediate node holds at least one model slice in the sink node requirement, where:
if so, the intermediate node sends the model slices it holds from the sink node requirement to the sink node;
if not, the traversal continues along the routing path toward the source node, so that a subsequent intermediate node sends the model slices it holds from the sink node requirement to the sink node; and
a second judgment module, configured to judge whether the sink node requirement is satisfied, where:
if so, the traversal ends;
if not, the sink node requirement is updated and the traversal continues along the routing path toward the source node, so that the intermediate nodes send the model slices they hold from the sink node requirement to the sink node; and when all the intermediate nodes have been traversed without satisfying the sink node requirement, the source node sends the model slices in the sink node requirement to the sink node.
For this apparatus embodiment, the beneficial effects achieved are the same as those of the above embodiment of the residual propagation method for a network model, and are not repeated here.
In the technical solution of the present disclosure, the acquisition, storage and use of any user personal information involved comply with the relevant laws and regulations and do not violate public order and good morals.
According to embodiments of the present disclosure, the present disclosure further provides an electronic device and a readable storage medium.
The electronic device includes:
at least one processor; and
a memory communicatively connected to the at least one processor, wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the above method.
The electronic device traverses the routing path in reverse, from the sink node toward the source node, and for the model slices to be transmitted, each traversed intermediate node and/or the source node transmits the needed slices it holds to the sink node according to the sink node requirement, thereby forming model residual propagation. This not only reduces the delay for the sink node to obtain the required model slices, but also reduces data redundancy in end-to-end communication and improves the resource utilization of the overall network.
The non-transitory computer-readable storage medium stores computer instructions, and the computer instructions are used to cause a computer to perform the method provided by the embodiments of the present disclosure.
Executing the instructions on the readable storage medium likewise performs the reverse traversal along the routing path from the sink node toward the source node, with the intermediate nodes and/or the source node transmitting the needed slices they hold to the sink node, thereby forming model residual propagation, reducing the delay for the sink node to obtain the required model slices, reducing data redundancy in end-to-end communication, and improving the resource utilization of the overall network.
Figure 11 shows a schematic block diagram of an example electronic device 300 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframes and other suitable computers. Electronic devices may also represent various forms of mobile apparatuses, such as personal digital assistants, cellular telephones, smartphones, wearable devices and other similar computing apparatuses. The components shown herein, their connections and relationships, and their functions are merely examples, and are not intended to limit the implementations of the present disclosure described and/or claimed herein.
As shown in Figure 11, the device 300 includes a computing unit 301, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 302 or a computer program loaded from a storage unit 308 into a random-access memory (RAM) 303. The RAM 303 can also store various programs and data required for the operation of the device 300. The computing unit 301, the ROM 302 and the RAM 303 are connected to one another via a bus 304. An input/output (I/O) interface 305 is also connected to the bus 304.
A plurality of components in the device 300 are connected to the I/O interface 305, including: an input unit 306 such as a keyboard or a mouse; an output unit 307 such as various types of displays and speakers; a storage unit 308 such as a magnetic disk or an optical disc; and a communication unit 309 such as a network card, a modem or a wireless communication transceiver. The communication unit 309 allows the device 300 to exchange information/data with other devices over a computer network such as the Internet and/or various telecommunication networks.
The computing unit 301 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 301 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units that run machine-learning model algorithms, a digital signal processor (DSP), and any appropriate processor, controller, microcontroller, or the like. The computing unit 301 performs the methods and processes described above, such as the residual propagation method for a network model. For example, in some embodiments, the residual propagation method for a network model may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 308. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 300 via the ROM 302 and/or the communication unit 309. When the computer program is loaded into the RAM 303 and executed by the computing unit 301, one or more steps of the residual propagation method described above can be performed.
Alternatively, in other embodiments, the computing unit 301 may be configured in any other appropriate way (for example, by means of firmware) to perform the residual propagation method for a network model.
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuitry, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from a storage system, at least one input apparatus and at least one output apparatus, and transmit data and instructions to the storage system, the at least one input apparatus and the at least one output apparatus.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer or another programmable data-processing apparatus, so that when the program code is executed by the processor or controller, the functions/operations specified in the flowcharts and/or block diagrams are implemented. The program code may execute entirely on a machine, partly on a machine, partly on a machine and partly on a remote machine as a stand-alone software package, or entirely on a remote machine or server.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by, or in connection with, an instruction execution system, apparatus or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random-access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide interaction with a user, the systems and techniques described herein may be implemented on a computer having: a display apparatus for displaying information to the user (for example, a CRT (cathode-ray tube) or LCD (liquid-crystal display) monitor); and a keyboard and a pointing apparatus (for example, a mouse or a trackball) through which the user can provide input to the computer. Other kinds of apparatuses may also be used to provide interaction with the user; for example, the feedback provided to the user may be any form of sensory feedback (for example, visual feedback, auditory feedback or haptic feedback), and input from the user may be received in any form (including acoustic input, speech input or haptic input).
The systems and techniques described herein may be implemented in a computing system that includes a back-end component (for example, as a data server), or a computing system that includes a middleware component (for example, an application server), or a computing system that includes a front-end component (for example, a user computer with a graphical user interface or a web browser through which the user can interact with implementations of the systems and techniques described herein), or a computing system that includes any combination of such back-end, middleware or front-end components. The components of the system may be interconnected by digital data communication in any form or medium (for example, a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN) and the Internet.
A computer system may include a client and a server. The client and the server are generally remote from each other and usually interact through a communication network. The client-server relationship arises from computer programs that run on the respective computers and have a client-server relationship with each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that steps may be reordered, added or deleted using the various forms of flow shown above. For example, the steps described in the present disclosure may be performed in parallel, sequentially or in a different order, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved; no limitation is imposed herein.
The above specific implementations do not limit the protection scope of the present disclosure. Those skilled in the art should understand that various modifications, combinations, sub-combinations and substitutions may be made according to design requirements and other factors. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present disclosure shall be included within the protection scope of the present disclosure.

Claims (12)

  1. A residual propagation method for a network model, characterized in that it is applied to a network, the network including at least one routing path, each routing path including a source node, a sink node, and intermediate nodes arranged between the source node and the sink node;
    wherein the source node stores all model slices in a preset sink node requirement, the sink node requirement being the sink node's requirement for model slices;
    the residual propagation method comprising:
    acquiring the routing path (S101); and
    traversing along the routing path from the sink node toward the source node, the intermediate nodes and/or the source node sending the model slices in the sink node requirement to the sink node (S102).
  2. The residual propagation method according to claim 1, wherein determining the sink node requirement comprises:
    determining the kinds and quantities of model slices that the sink node still lacks according to the kinds and quantities of the model slices at the source node and the kinds and quantities of the model slices at the sink node.
  3. The residual propagation method according to claim 1, wherein the source node further stores all model slices already present at the sink node.
  4. The residual propagation method according to claim 1, wherein determining the sink node requirement comprises:
    assigning a unique number to each model slice; and
    determining the model slices that the sink node still lacks according to the numbers of the model slices at the source node and the numbers of the model slices at the sink node.
  5. The residual propagation method according to any one of claims 1-4, wherein the traversing along the routing path from the sink node toward the source node, with the intermediate node and/or the source node sending the model slices in the sink node demand to the sink node, comprises:
    traversing along the routing path from the sink node toward the source node;
    judging whether the intermediate node holds at least one model slice in the sink node demand;
    if so, the intermediate node sends the model slices in the sink node demand that it holds to the sink node;
    if not, continuing to traverse along the routing path toward the source node, so that a subsequent intermediate node sends the model slices in the sink node demand that it holds to the sink node;
    judging whether the sink node demand is satisfied;
    if satisfied, ending the traversal;
    if not satisfied, updating the sink node demand and continuing to traverse along the routing path toward the source node, so that the intermediate node sends the model slices in the sink node demand that it holds to the sink node; and when the sink node demand is still not satisfied after all the intermediate nodes have been traversed, the source node sends the model slices in the sink node demand to the sink node.
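The traversal of claim 5 can be sketched as follows, modeling each node on the path as a mapping from slice number to slice payload (all names are hypothetical; this is an illustration of the control flow, not the patented implementation):

```python
def propagate_residual(path, demand):
    """Traverse the routing path from the sink toward the source.

    `path` is ordered from the intermediate node adjacent to the sink up to
    the source node (last element); each node is a dict mapping
    slice number -> slice payload. Returns the slices delivered to the
    sink node, keyed by slice number.
    """
    remaining = set(demand)           # the (updated) sink node demand
    delivered = {}
    for node in path:                 # intermediate nodes first, source last
        available = remaining & node.keys()
        for slice_id in available:    # the node sends what it holds to the sink
            delivered[slice_id] = node[slice_id]
        remaining -= available        # update the sink node demand
        if not remaining:             # demand satisfied: end the traversal
            break
    return delivered
```

Because the source node stores all slices in the preset demand (claim 1's premise as restated in claim 7), the loop is guaranteed to satisfy the demand by the time it reaches the last element of `path`.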
  6. The residual propagation method according to claim 1, wherein the model slices can be run and stored on all nodes in the network, and can be transmitted between all of the nodes.
  7. A residual propagation apparatus for a network model, characterized in that it is applied in a network, wherein the network comprises at least one routing path, and each routing path comprises a source node, a sink node, and intermediate nodes arranged between the source node and the sink node;
    wherein the source node stores all model slices in a preset sink node demand, the sink node demand being the sink node's demand for model slices;
    the residual propagation apparatus comprises:
    a path obtaining unit (201), configured to obtain the routing path; and
    a processing unit (202), configured to traverse along the routing path from the sink node toward the source node, wherein the intermediate node and/or the source node sends the model slices in the sink node demand to the sink node.
  8. The residual propagation apparatus according to claim 7, wherein the processing unit (202) comprises:
    a demand determining module, configured to determine the types and numbers of model slices that the sink node still lacks according to the types and numbers of the model slices in the source node and the types and numbers of the model slices in the sink node.
  9. The residual propagation apparatus according to claim 7, wherein the processing unit (202) comprises:
    a demand determining module, configured to assign a unique number to each model slice, and to determine the model slices that the sink node still lacks according to the numbers of the model slices in the source node and the numbers of the model slices in the sink node.
  10. The residual propagation apparatus according to any one of claims 7-9, wherein the processing unit (202) comprises:
    a traversal module, configured to traverse along the routing path from the sink node toward the source node;
    a first judging module, configured to judge whether the intermediate node holds at least one model slice in the sink node demand; if so, the intermediate node sends the model slices in the sink node demand that it holds to the sink node; if not, the traversal continues along the routing path toward the source node, so that a subsequent intermediate node sends the model slices in the sink node demand that it holds to the sink node; and
    a second judging module, configured to judge whether the sink node demand is satisfied; if satisfied, the traversal ends; if not satisfied, the sink node demand is updated and the traversal continues along the routing path toward the source node, so that the intermediate node sends the model slices in the sink node demand that it holds to the sink node; and when the sink node demand is still not satisfied after all the intermediate nodes have been traversed, the source node sends the model slices in the sink node demand to the sink node.
  11. An electronic device, comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein
    the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method according to any one of claims 1-6.
  12. A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause a computer to perform the method according to any one of claims 1-6.
PCT/CN2022/136413 2022-01-20 2022-12-04 Residual propagation method and apparatus for network model WO2023138231A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210067751.X 2022-01-20
CN202210067751.XA CN116527561A (en) 2022-01-20 2022-01-20 Residual error propagation method and residual error propagation device of network model

Publications (1)

Publication Number Publication Date
WO2023138231A1 true WO2023138231A1 (en) 2023-07-27

Family

ID=87347772

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/136413 WO2023138231A1 (en) 2022-01-20 2022-12-04 Residual propagation method and apparatus for network model

Country Status (2)

Country Link
CN (1) CN116527561A (en)
WO (1) WO2023138231A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101146035A (en) * 2007-06-13 2008-03-19 中兴通讯股份有限公司 Label switching path routing search method and system
WO2015105322A1 (en) * 2014-01-13 2015-07-16 이화여자대학교 산학협력단 Ad-hoc network system using selective data compression algorithm and method for transmitting data in ad-hoc network
CN108665067A (en) * 2018-05-29 2018-10-16 北京大学 Compression method and system for deep neural network frequent transmission
CN112651510A (en) * 2019-10-12 2021-04-13 华为技术有限公司 Model updating method, working node and model updating system


Also Published As

Publication number Publication date
CN116527561A (en) 2023-08-01

Similar Documents

Publication Publication Date Title
CN112800247B (en) Semantic encoding/decoding method, equipment and communication system based on knowledge graph sharing
TWI812623B (en) Node device, computer-implemented method, and related non-transitory processor-readable medium
US9569285B2 (en) Method and system for message handling
Lee et al. Performance analysis of local exit for distributed deep neural networks over cloud and edge computing
US20210073235A1 (en) Incremental data retrieval based on structural metadata
EP4163805A1 (en) Graph-based labeling of heterogenous digital content items
CN108153803A (en) A kind of data capture method, device and electronic equipment
WO2023142399A1 (en) Information search methods and apparatuses, and electronic device
WO2022143987A1 (en) Tree model training method, apparatus and system
WO2023179800A1 (en) Communication receiving method and apparatus thereof
US10965771B2 (en) Dynamically switchable transmission data formats in a computer system
WO2023138231A1 (en) Residual propagation method and apparatus for network model
WO2024001266A1 (en) Video stream transmission control method and apparatus, device, and medium
US11568014B2 (en) Information centric network distributed search with approximate cache
CN114679283A (en) Block chain data request processing method and device, server and storage medium
EP4120117A1 (en) Disfluency removal using machine learning
WO2023138234A1 (en) Model management method and apparatus, networking architecture, electronic device and storage medium
US10572500B2 (en) Feeding networks of message brokers with compound data elaborated by dynamic sources
CN115865334A (en) Quantum key distribution method and device and electronic equipment
KR20230029502A (en) Edge computing network, data transmission method, device, electronic equipment and storage medium
CN114900489B (en) Message processing method and device, electronic equipment and storage medium
WO2023138233A1 (en) Model transmission method and apparatus, electronic device and readable storage medium
WO2023138238A1 (en) Information transmitting method and apparatus based on intent-driven network, electronic device, and medium
WO2023198212A1 (en) Model selection method and apparatus based on environmental perception
CN108429683B (en) Network data routing method, system and device

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 22921655

Country of ref document: EP

Kind code of ref document: A1