WO2022171082A1 - Information processing method, apparatus, system, electronic device and storage medium - Google Patents

Information processing method, apparatus, system, electronic device and storage medium

Info

Publication number
WO2022171082A1
Authority
WO
WIPO (PCT)
Prior art keywords
task
graph
resource
processed
functional component
Prior art date
Application number
PCT/CN2022/075516
Other languages
English (en)
French (fr)
Inventor
曲薇
Original Assignee
中国移动通信有限公司研究院
中国移动通信集团有限公司
Priority date
Filing date
Publication date
Application filed by 中国移动通信有限公司研究院, 中国移动通信集团有限公司 filed Critical 中国移动通信有限公司研究院
Priority to JP2023548276A priority Critical patent/JP2024507133A/ja
Priority to EP22752243.0A priority patent/EP4293965A1/en
Publication of WO2022171082A1 publication Critical patent/WO2022171082A1/zh

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/098 - Distributed learning, e.g. federated learning
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/01 - Protocols
    • H04L67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 - Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004 - Server selection for load balancing
    • H04L67/1008 - Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/0464 - Convolutional networks [CNN, ConvNet]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/0895 - Weakly supervised learning, e.g. semi-supervised or self-supervised learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/09 - Supervised learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 - Computing arrangements using knowledge-based models
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 - Data switching networks
    • H04L12/02 - Details
    • H04L12/12 - Arrangements for remote connection or disconnection of substations or of equipment thereof
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14 - Network analysis or design
    • H04L41/147 - Network analysis or design for predicting network behaviour
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/50 - Network service management, e.g. ensuring proper service fulfilment according to agreements
    • H04L41/5041 - Network service management, e.g. ensuring proper service fulfilment according to agreements characterised by the time relationship between creation and deployment of a service
    • H04L41/5054 - Automatic deployment of services triggered by the service manager, e.g. service implementation by automatic configuration of network components
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/01 - Protocols
    • H04L67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 - Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004 - Server selection for load balancing
    • H04L67/101 - Server selection for load balancing based on network conditions
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/01 - Protocols
    • H04L67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 - Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004 - Server selection for load balancing
    • H04L67/1012 - Server selection for load balancing based on compliance of requirements or conditions with available server resources
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/01 - Protocols
    • H04L67/12 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/044 - Recurrent networks, e.g. Hopfield networks
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08 - Configuration management of networks or network elements
    • H04L41/085 - Retrieval of network configuration; Tracking network configuration history
    • H04L41/0853 - Retrieval of network configuration; Tracking network configuration history by actively collecting configuration information or by backing up configuration information
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/12 - Discovery or management of network topologies

Definitions

  • the present application relates to the field of Internet of Things (IoT, Internet of Things), and in particular, to an information processing method, device, system, electronic device and storage medium.
  • Edge computing is a computing method that offloads (i.e., allocates) computing tasks to the network edge side closer to IoT devices. Compared with cloud computing, edge computing does not need to upload a large amount of original user data to the cloud data center. Therefore, edge computing can effectively alleviate issues such as delay, reliability, energy consumption, communication bandwidth consumption, user privacy and security; it has particular value and broad application prospects in application scenarios with high requirements for data processing delay, user privacy and reliability, such as autonomous driving, virtual reality (VR, Virtual Reality) and augmented reality (AR, Augmented Reality).
  • Performing computationally intensive computing tasks, such as Artificial Intelligence (AI) tasks, that require high computing power and/or storage space brings great challenges to the resource-constrained (i.e., limited in computing power and/or storage space) and highly heterogeneous edge side; that is, how to make full use of resource-constrained and highly heterogeneous IoT devices to perform computing tasks becomes an urgent problem to be solved.
  • embodiments of the present application provide an information processing method, apparatus, system, electronic device, and storage medium.
  • the embodiment of the present application provides an information processing method, including:
  • the first functional component generates a resource graph by abstracting the capabilities of the IoT devices; the resource graph is used to manage and/or orchestrate the available capabilities on the heterogeneous IoT devices;
  • the second functional component acquires the task to be processed, and generates a computation graph corresponding to the task to be processed;
  • the third functional component performs task assignment based on the resource graph and the computation graph.
  • the generating a computation graph corresponding to the task to be processed includes:
  • the second functional component decomposes the task to be processed into at least one operator; and determines the relationship between the operators;
  • a calculation graph corresponding to the task to be processed is generated.
  • the decomposing the to-be-processed task into at least one operator includes:
  • the second functional component uses the first strategy to decompose the to-be-processed task to obtain at least one operator.
  • generating a computation graph corresponding to the task to be processed based on the at least one operator and the relationship between the operators includes:
  • the second functional component abstracts each operator in the at least one operator into a corresponding node; and determines the relationship between the nodes based on the relationship between the operators;
  • a computation graph corresponding to the task to be processed is generated.
  • a node of the computation graph represents an operator of the task to be processed; an edge of the computation graph represents a relationship between two adjacent nodes.
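As an illustration only (not part of the patent), the operator-to-node abstraction described above can be sketched with plain Python dictionaries; the operator names and dependencies below are hypothetical:

```python
# Sketch: operators become computation-graph nodes, dependencies become edges.
def build_computation_graph(operators, dependencies):
    """operators: list of operator names; dependencies: (producer, consumer) pairs."""
    graph = {"nodes": list(operators), "edges": []}
    for producer, consumer in dependencies:
        # Keep only edges whose endpoints are known operators.
        if producer in graph["nodes"] and consumer in graph["nodes"]:
            graph["edges"].append((producer, consumer))
    return graph

# Hypothetical decomposition of a small inference task into operators.
ops = ["conv1", "relu1", "pool1", "fc1"]
deps = [("conv1", "relu1"), ("relu1", "pool1"), ("pool1", "fc1")]
cg = build_computation_graph(ops, deps)
```

Here each edge encodes the producer-to-consumer relationship between two adjacent operator nodes, mirroring the claim's node/edge semantics.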
  • the tasks to be processed include at least one of the following:
  • the method further includes:
  • the second functional component optimizes the generated computational graph
  • the third functional component performs task assignment based on the resource graph and the optimized computation graph.
  • the optimization of the generated calculation graph includes at least one of the following:
  • the generation of a resource graph by abstracting the capabilities of IoT devices includes:
  • the first functional component discovers IoT devices in the network; detects the capabilities of the IoT devices; for each IoT device, abstracts the IoT devices into corresponding nodes based on the capabilities of the corresponding IoT devices;
  • a resource graph is generated.
  • the nodes of the resource graph represent at least part of the capabilities of an IoT device; the edges of the resource graph represent the relationship between two adjacent nodes.
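The resource-graph construction described above can be sketched as follows; the device identifiers and capability fields (`cpu_cores`, `mem_mb`, `rate_mbps`) are illustrative assumptions, not terms from the patent:

```python
# Sketch: discovered IoT devices become resource-graph nodes carrying their
# detected capabilities; links between devices become edges annotated with
# information that characterizes communication strength (here a rate).
def build_resource_graph(devices, links):
    """devices: {device_id: capability dict}; links: (id_a, id_b, rate_mbps) tuples."""
    nodes = {dev_id: {"capabilities": caps} for dev_id, caps in devices.items()}
    edges = [
        {"between": (a, b), "rate_mbps": rate}
        for a, b, rate in links
        if a in nodes and b in nodes  # only connect discovered devices
    ]
    return {"nodes": nodes, "edges": edges}

devices = {
    "cam-01": {"cpu_cores": 2, "mem_mb": 512},
    "gw-01": {"cpu_cores": 8, "mem_mb": 4096},
}
rg = build_resource_graph(devices, [("cam-01", "gw-01", 100)])
```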
  • the method further includes:
  • When the first functional component detects that an IoT device changes, it updates the resource graph based on the detected change of the IoT device.
  • the task assignment based on the resource graph and the computation graph includes:
  • the third functional component uses the second strategy to generate at least one task allocation strategy based on the resource graph and the calculation graph; determines the task allocation strategy with the best performance from the at least one task allocation strategy; and performs task allocation based on the task allocation strategy with the best performance; the task allocation strategy is used to allocate the to-be-processed task to at least one IoT device.
  • the use of the second strategy to generate at least one task allocation strategy includes:
  • the third functional component adopts the second strategy to generate at least one resource subgraph based on the computation graph and the resource graph; each resource subgraph includes a task allocation strategy; the nodes of the resource subgraph represent at least part of the capabilities of an IoT device; the edges of the resource subgraph represent the relationship between two adjacent nodes.
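A minimal sketch of generating candidate task allocation strategies: the patent does not specify what the "second strategy" is, so plain exhaustive enumeration over operator-to-device mappings (with a hypothetical scalar compute demand per operator) is used here purely for illustration:

```python
from itertools import product

# Sketch: each feasible mapping of operators onto devices is one candidate
# task allocation strategy (one "resource subgraph" worth of assignment).
def candidate_allocations(operator_demands, device_capacity):
    """operator_demands: {op: demand}; device_capacity: {device: free capacity}."""
    ops = list(operator_demands)
    devs = list(device_capacity)
    strategies = []
    for assignment in product(devs, repeat=len(ops)):
        load = {d: 0 for d in devs}
        for op, dev in zip(ops, assignment):
            load[dev] += operator_demands[op]
        # Keep only assignments that fit within every device's free capacity.
        if all(load[d] <= device_capacity[d] for d in devs):
            strategies.append(dict(zip(ops, assignment)))
    return strategies

strategies = candidate_allocations(
    {"conv1": 3, "fc1": 1},     # hypothetical compute demand per operator
    {"cam-01": 2, "gw-01": 8},  # hypothetical free capacity per device
)
```

Exhaustive enumeration is exponential in the number of operators; a real system would use a search or learned policy, which is exactly why the patent leaves the strategy abstract.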
  • the determining the task allocation strategy with the best performance from the at least one task allocation strategy includes:
  • the third functional component predicts the performance of each task allocation strategy; and determines the task allocation strategy with the best performance based on the predicted performance of each task allocation strategy.
  • the predicting the performance of each task allocation strategy includes:
  • the third functional component extracts the features of the computational graph to obtain a first feature set; and extracts the features of each resource subgraph to obtain a plurality of second feature sets; each resource subgraph contains a task allocation strategy;
  • the performance of the corresponding task allocation strategy is predicted based on the first feature set and the corresponding second feature set.
  • the feature of the computation graph is extracted to obtain a first feature set; and the feature of each resource subgraph is extracted to obtain a plurality of second feature sets, including:
  • the third functional component extracts features of the computational graph through a feature extraction network to obtain a first feature set; and extracts features of each resource sub-graph through the feature extraction network to obtain a plurality of second feature sets.
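The patent's feature extraction network is a trained model; as a hedged stand-in, the sketch below derives a fixed structural feature set from a graph so the downstream prediction step has concrete input. The chosen statistics are assumptions for illustration only:

```python
# Stand-in for the learned feature extraction network: summarize a graph
# (computation graph or resource subgraph) as a small numeric feature set.
def graph_features(nodes, edges):
    degree = {n: 0 for n in nodes}
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return [
        float(len(nodes)),                    # graph size
        float(len(edges)),                    # connectivity
        float(max(degree.values(), default=0)),  # peak node degree
    ]

# "First feature set" from a hypothetical three-operator computation graph.
first_feature_set = graph_features(
    ["conv1", "relu1", "fc1"],
    [("conv1", "relu1"), ("relu1", "fc1")],
)
```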
  • the predicting the performance of the corresponding task allocation strategy based on the first feature set and the corresponding second feature set includes:
  • the third functional component obtains the prediction data corresponding to the corresponding task allocation strategy through the prediction network based on the first feature set and the corresponding second feature set; and determines the predicted performance of the corresponding task allocation strategy based on the prediction data corresponding to that strategy.
  • the prediction data includes at least one of the following:
  • the determining the prediction performance of the corresponding task allocation strategy based on the prediction data corresponding to the corresponding task allocation strategy includes:
  • the third functional component performs weighting processing on the prediction data corresponding to the corresponding task allocation strategy according to the preset weight, so as to determine the prediction performance of the corresponding task allocation strategy.
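The weighting step described above can be sketched as follows; the metric names (`latency_ms`, `energy_mj`) and the preset weights are illustrative assumptions, and here a lower weighted score means better performance:

```python
# Sketch: combine a strategy's prediction data into one score via preset weights.
def predicted_performance(prediction_data, weights):
    """Both dicts share metric keys; lower score = better performance."""
    return sum(weights[k] * prediction_data[k] for k in weights)

def best_strategy(predictions, weights):
    """predictions: {strategy_name: prediction data dict}; returns the best name."""
    return min(predictions, key=lambda s: predicted_performance(predictions[s], weights))

preds = {
    "strategy_a": {"latency_ms": 120.0, "energy_mj": 30.0},
    "strategy_b": {"latency_ms": 90.0, "energy_mj": 55.0},
}
weights = {"latency_ms": 0.7, "energy_mj": 0.3}
choice = best_strategy(preds, weights)
```

With these weights, strategy_a scores 0.7*120 + 0.3*30 = 93.0 and strategy_b scores 79.5, so strategy_b is selected despite its higher energy figure.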
  • the feature extraction network is obtained by training based on a training data set; the training process can generate optimized network parameters; the optimized network parameters are used to extract features that are conducive to improving the accuracy of performance prediction.
  • the prediction network is obtained by training based on a training data set; the training process can generate optimized network parameters; the optimized network parameters are used to improve the accuracy of performance prediction.
  • the training data set can be continuously updated by means of historical data accumulation and/or random walk to generate new data, so that the training process has the capability of continuous learning.
  • the method further includes:
  • the third functional component obtains the actual performance of the task to be processed when the task allocation strategy with the best performance is executed; and stores the task allocation strategy with the best performance together with the obtained actual performance into the training data set.
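A minimal sketch of accumulating executed strategies and their measured performance into the training data set; the record fields (`strategy`, `predicted`, `actual`) and metric names are illustrative assumptions:

```python
# Sketch: each executed strategy and its observed outcome becomes one training
# sample, so the prediction network can keep learning from real deployments.
class TrainingDataset:
    def __init__(self):
        self.samples = []

    def add(self, strategy, predicted, actual):
        # Pair the allocation strategy with predicted vs. measured performance.
        self.samples.append(
            {"strategy": strategy, "predicted": predicted, "actual": actual}
        )

ds = TrainingDataset()
ds.add(
    {"conv1": "gw-01"},                 # hypothetical executed strategy
    predicted={"latency_ms": 90.0},
    actual={"latency_ms": 104.2},       # measured after execution
)
```

Keeping both the predicted and the actual figures in each record is what lets the training process correct systematic prediction error over time.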
  • the embodiment of the present application also provides an information processing method, including:
  • the task to be processed includes a calculation task; a node of the calculation graph represents an operator of the task to be processed; an edge of the calculation graph represents the relationship between two adjacent nodes;
  • the generated calculation graph is optimized to obtain an optimized calculation graph; the optimized calculation graph is used for task allocation in combination with the resource graph; the resource graph is generated by abstracting the capabilities of the Internet of Things devices; the Resource graphs are used to manage and/or orchestrate available capabilities on heterogeneous IoT devices.
  • the generating a computation graph corresponding to the task to be processed includes:
  • a calculation graph corresponding to the task to be processed is generated.
  • the decomposing the to-be-processed task into at least one operator includes:
  • the task to be processed is decomposed using the first strategy to obtain at least one operator.
  • generating a computation graph corresponding to the task to be processed based on the at least one operator and the relationship between the operators includes:
  • each operator in the at least one operator is abstracted into a corresponding node; and the relationship between the nodes is determined based on the relationship between the operators;
  • a computation graph corresponding to the task to be processed is generated.
  • the optimization of the generated calculation graph includes at least one of the following:
  • the embodiment of the present application also provides an information processing device, including:
  • a first functional component configured to generate a resource graph by abstracting capabilities of IoT devices; the resource graph is used to manage and/or orchestrate available capabilities on heterogeneous IoT devices;
  • the second functional component is configured to obtain the task to be processed and generate a calculation graph corresponding to the task to be processed;
  • the third functional component is configured to perform task assignment based on the resource graph and the computation graph.
  • the embodiment of the present application also provides an information processing device, including:
  • a first processing unit configured to obtain a task to be processed; and generate a calculation graph corresponding to the task to be processed;
  • the task to be processed includes a calculation task;
  • a node of the calculation graph represents an operator of the task to be processed;
  • the edge of the computational graph represents the relationship between two adjacent nodes;
  • the second processing unit is configured to optimize the generated calculation graph to obtain an optimized calculation graph; the optimized calculation graph is used for task allocation in combination with the resource graph; the resource graph is generated by abstracting the capabilities of IoT devices; the resource graph is used to manage and/or orchestrate the capabilities available on heterogeneous IoT devices.
  • the embodiment of the present application also provides an information processing system, including:
  • a first functional component configured to generate a resource graph by abstracting capabilities of IoT devices; the resource graph is used to manage and/or orchestrate available capabilities on heterogeneous IoT devices;
  • the second functional component is configured to obtain the task to be processed and generate a calculation graph corresponding to the task to be processed;
  • the third functional component is configured to perform task assignment based on the resource graph and the computation graph;
  • the first functional component, the second functional component and the third functional component are provided on at least two electronic devices.
  • Embodiments of the present application also provide an electronic device, including: a processor and a memory configured to store a computer program that can be executed on the processor,
  • the processor is configured to execute the steps of any of the above methods when running the computer program.
  • Embodiments of the present application further provide a storage medium on which a computer program is stored, and when the computer program is executed by a processor, implements the steps of any of the foregoing methods.
  • the first functional component generates a resource graph by abstracting the capabilities of IoT devices; the resource graph is used for managing and/or orchestrating available capabilities on heterogeneous IoT devices;
  • the second functional component obtains the task to be processed and generates a calculation graph corresponding to the to-be-processed task; the third functional component performs task allocation based on the resource graph and the calculation graph.
  • the solution of the embodiments of the present application generates a resource graph for managing and/or orchestrating available capabilities on heterogeneous IoT devices by abstracting the capabilities of IoT devices, and performs task allocation based on the computation graph corresponding to the task to be processed and the resource graph; in this way, resource-constrained and highly heterogeneous IoT devices can be efficiently managed and flexibly scheduled, that is, they can be fully utilized to perform pending tasks (e.g. computationally intensive deep learning tasks).
  • FIG. 1 is a schematic flowchart of an information processing method according to an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of another information processing method according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a scenario of an application embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of an Intelligent Distributed Edge Computing (IDEC, Intelligent Distributed Edge Computing) system according to an application embodiment of the present application;
  • FIG. 5 is a schematic diagram of an application scenario of a service capability abstraction module of an application embodiment of the present application.
  • FIG. 6 is a schematic diagram of a resource knowledge graph building module in an application embodiment of the present application.
  • FIG. 7 is a schematic diagram of calculation diagram optimization in an application embodiment of the present application.
  • FIG. 8 relates to Intelligent Computing Task Allocation (ICTA) in an application embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of an IDEC-based intelligent IoT edge computing platform according to an application embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of an information processing apparatus according to an embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of another information processing apparatus according to an embodiment of the present application.
  • FIG. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • FIG. 13 is a schematic structural diagram of an information processing system according to an embodiment of the present application.
  • To alleviate the problem of limited resources in the edge environment, edge computing resources can be considered for distributed inference and/or training of machine learning models, and edge computing methods that collaborate across devices can be considered.
  • the distributed training and inference in the edge deep learning system mainly adopts the method of coarse-grained hierarchical model segmentation and layer scheduling, and the divided sub-models are deployed on the device side, the edge side and the cloud respectively.
  • the underlying implementation of this coarse-grained hierarchical model segmentation completely relies on third-party programming frameworks (also known as software platforms or operator libraries), such as TensorFlow, Caffe, Torch, etc., which cannot make full use of resource-constrained and highly heterogeneous IoT devices to perform computationally intensive deep learning tasks, thus limiting the overall system performance improvement.
  • a resource graph for managing and/or orchestrating available capabilities on heterogeneous IoT devices is generated, and task allocation is performed based on the computation graph corresponding to the task to be processed and the resource graph; in this way, the resource-constrained and highly heterogeneous IoT devices can be efficiently managed and flexibly scheduled, that is, they can be fully utilized to execute pending tasks (such as computationally intensive deep learning tasks).
  • An embodiment of the present application provides an information processing method, as shown in FIG. 1 , the method includes:
  • Step 101 The first functional component generates a resource graph (also referred to as a resource knowledge graph) by abstracting the capabilities of the IoT device;
  • the resource graph is used to manage (Management in English) and/or orchestrate (Orchestration in English) available capabilities on heterogeneous IoT devices;
  • Step 102 the second functional component acquires the task to be processed, and generates a computation graph (also referred to as a computation flow graph or a data flow graph) corresponding to the to-be-processed task;
  • Step 103 The third functional component performs task assignment based on the resource graph and the computation graph.
  • the nodes of the resource graph represent at least part of the capabilities of an IoT device; the edges of the resource graph represent the relationship between two adjacent nodes (also called an association relationship, expressed as Association Relationship in English); the relationship can include a communication relationship and an affiliation relationship, and the communication relationship can be embodied as information that characterizes communication strength between two adjacent nodes, such as information transmission rate and transmission delay.
  • the tasks to be processed include computing tasks, and the computing tasks may include general computing tasks and computation-intensive computing tasks such as the training and/or inference of machine learning models (also referred to as deep models, deep learning models, or deep neural networks); computation-intensive computing tasks have higher requirements on computing capabilities and/or storage capabilities, and are more suitable for task allocation based on the resource graph and the computation graph.
  • With the information processing method provided by the embodiments of the present application, it is possible to fully utilize resource-constrained and highly heterogeneous IoT devices to perform computationally intensive computing tasks.
  • heterogeneous IoT devices refer to: in a network including multiple IoT devices and servers, the hardware of one IoT device is different from the hardware of another IoT device, and/or the server of one IoT device is different from the server of another IoT device.
  • that the hardware of one IoT device is different from the hardware of another IoT device means: the model of processing hardware such as the central processing unit (CPU, Central Processing Unit), graphics processing unit (GPU, Graphics Processing Unit), bus interface chip (BIC, Bus Interface Chip) or digital signal processor (DSP, Digital Signal Processor), or of storage hardware such as the random access memory (RAM, Random Access Memory) or read-only memory (ROM, Read Only Memory), of one IoT device is different from that of another IoT device;
  • that the server of one IoT device is different from the server of another IoT device means: the back-end program or operating system corresponding to one IoT device is different from the back-end program or operating system corresponding to another IoT device; in other words, there are differences in software between the two IoT devices.
  • the IoT device may include a mobile phone, a personal computer (PC, Personal Computer), a wearable smart device, an intelligent gateway, a computing box, etc.;
  • the PC may include a desktop computer, a notebook computer, a tablet computer, etc.;
  • the wearable smart devices may include smart watches, smart glasses, and the like.
  • the information processing method provided in the embodiment of the present application is applied to an information processing system, and the system may include the first functional component, the second functional component, and the third functional component; the first functional component, the second functional component, and the third functional component may each be implemented by an electronic device, such as a server; of course, the first functional component, the second functional component, and the third functional component may also be provided on the same electronic device, or any two functional components among them may be provided on the same electronic device.
  • the generation of a resource graph by abstracting the capabilities of IoT devices may include:
  • the first functional component discovers IoT devices in the network; detects the capabilities of the IoT devices; for each IoT device, abstracts the IoT devices into corresponding nodes based on the capabilities of the corresponding IoT devices;
  • a resource graph is generated.
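The flow above (discover devices, probe their capabilities, abstract each into a node, link the nodes) can be sketched as follows. This is a minimal illustration only: discovery and capability probing are stubbed with static data, and all class, field, and device names are assumptions rather than anything defined in the specification.

```python
from dataclasses import dataclass, field

@dataclass
class ResourceNode:
    device_id: str
    node_type: str            # "device", "compute", or "storage"
    capabilities: dict        # e.g. {"cpu_cores": 4, "ram_mb": 2048}

@dataclass
class ResourceGraph:
    nodes: dict = field(default_factory=dict)
    edges: dict = field(default_factory=dict)   # (id_a, id_b) -> link info

    def add_node(self, node: ResourceNode):
        self.nodes[node.device_id] = node

    def add_edge(self, a: str, b: str, bandwidth_mbps: float):
        # Edges capture communication capability between two nodes.
        self.edges[(a, b)] = {"bandwidth_mbps": bandwidth_mbps}

def discover_devices():
    # Stand-in for DHCP/ZEROCONF-based discovery on the edge network.
    return ["camera-1", "gateway-1"]

def probe_capabilities(device_id):
    # Stand-in for the capability request/reply message exchange.
    static = {
        "camera-1": {"cpu_cores": 2, "ram_mb": 512},
        "gateway-1": {"cpu_cores": 4, "ram_mb": 2048},
    }
    return static[device_id]

def build_resource_graph():
    graph = ResourceGraph()
    for dev in discover_devices():
        graph.add_node(ResourceNode(dev, "device", probe_capabilities(dev)))
    graph.add_edge("camera-1", "gateway-1", bandwidth_mbps=100.0)
    return graph
```

In a real deployment the two stub functions would be backed by actual network discovery and a capability message exchange, as described above.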
  • the discovery of IoT devices may also be referred to as perceiving IoT devices in the edge network; the edge network refers to the edge of the telecommunications network (the edge network includes the convergence-layer network and part or all of the access-layer network, i.e., the last segment of the network that reaches the user).
  • the discovery or perception can also be understood as detection.
  • the first functional component can perform the detection based on the Dynamic Host Configuration Protocol (DHCP, Dynamic Host Configuration Protocol) or using zero-configuration networking (ZEROCONF, ZERO CONFiguration networking) technology.
  • the first functional component may detect the capability of the corresponding IoT device by exchanging information with it; exemplarily, the first functional component may send a capability request message to the corresponding IoT device, and determine the capability of that IoT device according to the message it replies to the capability request message.
  • the capabilities of the IoT device can include at least one of the following:
  • the capability of an IoT device refers to the service capability of the IoT device, which can be understood as the resources of the corresponding IoT device; correspondingly, at least part of the capability of an IoT device can be understood as at least part of its resources,
  • the available capability of an IoT device can be understood as the available resources on that device, that is, its idle resources (also known as idle capability); specifically, computing capability refers to the computing resources available on the corresponding IoT device, and storage capability refers to the storage resources (i.e., storage space) available on the corresponding IoT device.
  • the capabilities available on heterogeneous IoT devices can include at least one of the following:
  • the communication capability may also be called a communication resource, which can be specifically understood as the communication strength between two nodes, for example, the bandwidth resources provided by the edge network for communication between IoT devices, the information transmission speed (i.e., transmission rate), the transmission delay, etc.; or, for another example, the transmission rate, transmission delay, etc. between one part of the capability of an IoT device and another part of its capability.
  • the first functional component may use a software-defined technology to abstract the physical IoT device into a virtualized node, and the node may contain capability information of the corresponding IoT device.
  • different nodes can be abstracted; for an IoT device, the abstracted nodes can include at least one of the following:
  • a device node can represent both the computing power and the storage capacity of the corresponding IoT device;
  • a computing node can represent the computing power of the corresponding IoT device;
  • a storage node can represent the storage capacity of the corresponding IoT device.
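As an illustration of the three node types just listed, the sketch below splits one physical device into a device node, a computing node, and a storage node. The dictionary keys and the `abstract_device` name are assumptions for illustration, not part of the specification.

```python
def abstract_device(device_id, cpu_cores, ram_mb, disk_mb):
    """Abstract one physical IoT device into virtualized nodes."""
    nodes = []
    # Device node: carries both computing power and storage capacity.
    nodes.append({"id": f"{device_id}/dev", "type": "device",
                  "cpu_cores": cpu_cores, "ram_mb": ram_mb, "disk_mb": disk_mb})
    # Computing node: computing power only.
    nodes.append({"id": f"{device_id}/cpu", "type": "compute",
                  "cpu_cores": cpu_cores})
    # Storage node: storage capacity only.
    nodes.append({"id": f"{device_id}/disk", "type": "storage",
                  "disk_mb": disk_mb})
    return nodes
```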
  • generating a resource graph based on the abstracted nodes may include:
  • the first functional component determines the characteristics of each node, and determines the relationship between multiple nodes; the characteristics are at least used to describe the Internet of Things device information corresponding to the corresponding node and at least part of the capability information of the Internet of Things devices;
  • a resource graph is generated.
  • a node will generally have multiple features; therefore, the features may also be called a feature vector, feature set or feature vector set; since the features contain multiple pieces of description information (that is, the IoT device information corresponding to the node and at least part of the capability information of that IoT device), the features may also be called information or an information set.
  • the features of nodes can be used for the representation of an ontology description model (expressed as Ontology Description Model in English), and the ontology description model can also be called an entity description model.
  • the method may further include:
  • the first functional component monitors IoT devices
  • the resource map is updated based on the monitored change of the Internet of Things device.
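The monitor-and-update step above can be sketched as a small event handler: as monitored devices join, leave, or change capability, the node table of the resource graph is updated accordingly. The event encoding is an illustrative assumption only.

```python
def update_resource_graph(nodes, event):
    """nodes: dict device_id -> capability dict; event: (kind, device_id, caps)."""
    kind, device_id, caps = event
    if kind == "joined":
        # New device discovered: add its abstracted node.
        nodes[device_id] = caps
    elif kind == "left":
        # Device went offline: remove its node.
        nodes.pop(device_id, None)
    elif kind == "changed":
        # Capability change (e.g. resources became busy): update in place.
        nodes[device_id].update(caps)
    return nodes
```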
  • the generating a computation graph corresponding to the task to be processed may include:
  • the second functional component decomposes the task to be processed into at least one operator; and determines the relationship between the operators;
  • a calculation graph corresponding to the task to be processed is generated.
  • the decomposing the to-be-processed task into at least one operator may include:
  • the second functional component uses the first strategy to decompose the to-be-processed task to obtain at least one operator.
  • the second functional component adopts the first strategy to decompose the to-be-processed task, which may include:
  • model design (also called task design or program design) is carried out according to the functions that need to be implemented in applications and services.
  • the applications and services may be general applications and services (eg, map positioning, online banking, online shopping, etc.), or intelligent applications and services (eg, intelligent control, automatic driving, etc.).
  • the functions can be general functions (such as playing videos, accessing browsers, opening web pages, editing files, etc.) or AI-related functions (such as face recognition, behavior recognition, speech recognition, natural language processing, etc.).
  • the model design includes designing an algorithm model (that is, designing the task to be processed; the task to be processed may also be called a task model or a program model, and includes a computing task) to achieve the corresponding functions, such as designing a neural network structure to realize functions such as behavior recognition.
  • the computational graph is composed of nodes and edges; a node of the computational graph represents a type of operation that the algorithm model needs to perform when the program is implemented (that is, an operation unit, which may also be called an operator, expressed in English as Operation Node or Operator Node), i.e., an operator of the task to be processed;
  • the operator can be a general mathematical or array operator (for example, an addition operator, a multiplication operator, etc.), or a neural network operator (that is, a basic operation unit of a neural network, such as a convolution operator, a pooling operator, etc.);
  • the node contains the following characteristics or information: the consumption of, or demand for, resources such as computing power and storage when the operator represented by the node is operated (or executed), that is, the hardware execution cost of the operator, which can also be understood as the hardware occupancy data of the operator (for example: CPU occupancy, GPU occupancy, DSP occupancy, FPGA occupancy, memory occupancy, etc.); the occupancy rate may also be referred to as occupancy, occupancy ratio, usage, usage rate, utilization or utilization rate;
  • an edge in the computational graph represents the relationship between two adjacent nodes, that is, between two adjacent operators, including the computational dependency or data dependency between the two adjacent operators and the direction of data flow.
  • the generating a computation graph corresponding to the task to be processed based on the at least one operator and the relationship between the operators may include:
  • the second functional component abstracts each operator in the at least one operator into a corresponding node; and determines the relationship between the nodes based on the relationship between the operators;
  • a computation graph corresponding to the task to be processed is generated.
  • the second functional component can determine the relationship between nodes according to the computational dependency between the at least one operator, the operation sequence of the at least one operator, or the data flow direction between the at least one operator, and, based on the determined nodes and the relationships between them, generate a computation graph corresponding to the task to be processed.
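The decomposition just described (operators become nodes, dependencies become directed edges, and the operation sequence follows from the edges) can be sketched as below. The toy task `conv -> pool -> fc` and all function names are illustrative assumptions.

```python
def build_computation_graph(operators, dependencies):
    """operators: list of op names; dependencies: list of (producer, consumer)."""
    graph = {op: [] for op in operators}
    for producer, consumer in dependencies:
        graph[producer].append(consumer)   # directed edge = data dependency
    return graph

def topological_order(graph):
    """One valid execution order; ops whose dependencies are met could
    also run in parallel on different devices."""
    indegree = {op: 0 for op in graph}
    for succs in graph.values():
        for s in succs:
            indegree[s] += 1
    ready = [op for op, d in indegree.items() if d == 0]
    order = []
    while ready:
        op = ready.pop()
        order.append(op)
        for s in graph[op]:
            indegree[s] -= 1
            if indegree[s] == 0:
                ready.append(s)
    return order
```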
  • the second functional component may optimize the computation graph.
  • the method may further include:
  • the second functional component optimizes the generated computational graph
  • the third functional component performs task assignment based on the resource graph and the optimized computation graph.
  • the optimization of the generated calculation graph may include at least one of the following:
  • Operator fusion (expressed in English as Operator Fusion): used to combine multiple adjacent small operators into one operator, so that the intermediate results of those operators need not be stored in global memory during execution of the task to be processed; this reduces memory accesses and thereby reduces the execution time of the task to be processed;
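The idea of operator fusion can be shown with two toy elementwise operators: composing them into one callable means the intermediate result never needs to be materialized between kernels. The operators here are illustrative stand-ins, not real deep learning kernels.

```python
def fuse(f, g):
    """Return a single fused operator computing g(f(x))."""
    return lambda x: g(f(x))

add_one = lambda x: x + 1   # first small operator
double = lambda x: x * 2    # adjacent second operator

# One fused kernel instead of two, with no stored intermediate.
fused = fuse(add_one, double)
```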
  • Constant merging (or constant folding, expressed in English as Constant Folding): used to traverse the nodes in the computational graph to find nodes that can be fully computed statically, that is, nodes whose computation depends entirely on constant inputs, compute these nodes on the CPU and replace them with the results, i.e., merge the computations of constants in the computational graph; this constant-merging reduces unnecessary repeated computation and improves computing performance;
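A hedged sketch of constant folding over a tiny expression graph: any node whose inputs are all constants is evaluated once and replaced by a constant node, and the pass repeats until nothing changes. The graph encoding is an assumption made for illustration.

```python
import operator

OPS = {"add": operator.add, "mul": operator.mul}

def fold_constants(graph):
    """graph: id -> ("const", value) or (op_name, input_id_a, input_id_b)."""
    changed = True
    while changed:
        changed = False
        for nid, node in list(graph.items()):
            if node[0] == "const":
                continue
            op, a, b = node
            if graph[a][0] == "const" and graph[b][0] == "const":
                # Fully static node: precompute and replace with a constant.
                graph[nid] = ("const", OPS[op](graph[a][1], graph[b][1]))
                changed = True
    return graph
```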
  • Static memory planning pass (or called static memory plan pass, English can be expressed as Static Memory Planning Pass); used to pre-allocate memory to all intermediate result tensors (English can be expressed as Tensor, intermediate results exist in the form of Tensor ); Graph optimization is performed by preallocating all intermediate result Tensors, which can save runtime costs (for example, enabling a constant folding Pass to be statically executed in the computational graph pre-computation phase);
  • Data layout transformation (expressed in English as Data Layout Transformation): used when there is a data layout mismatch between the data producer (expressed in English as Producer) and the data consumer (expressed in English as Consumer).
  • the Tensor operation is the basic operator of the computational graph, and the operations involving Tensors have different data layout requirements depending on the operator;
  • for example, a deep learning accelerator may use 4x4 tensor operations, so the data needs to be cut into 4x4 blocks for storage to optimize local memory access efficiency;
  • therefore, a customized data layout needs to be provided for each operator.
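The 4x4 tiling mentioned above can be illustrated in a few lines: a matrix is re-laid-out as a list of 4x4 blocks so that an accelerator consuming 4x4 tensors reads contiguous tiles. This is a pure-Python, toy-sized sketch; real layout passes operate on tensor metadata, not nested lists.

```python
def tile_4x4(matrix):
    """matrix: list of rows; dimensions assumed divisible by 4."""
    n_rows, n_cols = len(matrix), len(matrix[0])
    tiles = []
    for r in range(0, n_rows, 4):
        for c in range(0, n_cols, 4):
            # Each tile is a 4x4 block cut from the original layout.
            tiles.append([row[c:c + 4] for row in matrix[r:r + 4]])
    return tiles
```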
  • system performance may include at least one of the following:
  • the reliability of executing the to-be-processed task may be reflected in the success rate of executing the to-be-processed task.
  • the task assignment based on the resource graph and the computation graph may include:
  • the third functional component uses the second strategy to generate at least one task allocation strategy based on the resource graph and the computation graph; determines the task allocation strategy with the best performance from the at least one task allocation strategy; and performs task allocation based on the task allocation strategy with the best performance; the task allocation strategy is used to map (or allocate) the task to be processed onto at least one IoT device.
  • the task allocation strategy refers to a strategy for allocating the task to be processed to at least one IoT device for execution, or for allocating at least one node of the resource graph to each node of the computation graph; the at least one IoT device executes the task to be processed as directed by the task allocation strategy.
  • the task allocation strategy may also be referred to as a task allocation method, task allocation scheme, task scheduling strategy, task scheduling method, task scheduling scheme, and the like.
  • performing task allocation based on the task allocation strategy with the best performance refers to: mapping (i.e., assigning) the task to be processed onto at least one IoT device according to that strategy, so that the at least one IoT device utilizes at least part of its own capabilities to execute the task in a parallel and cooperative manner, for example, to implement training and/or inference of a machine learning model.
  • the mapping of the task to be processed onto at least one IoT device can also be understood as: assigning at least part of the capabilities of at least one IoT device to each operator of the task; in other words, each node of the computation graph is allocated at least one node of the resource graph. It can be seen that, through task allocation, matching between the task to be processed and IoT devices is actually achieved, or in other words, matching between the task to be processed and resources (that is, the available resources on the IoT devices) is achieved.
  • the at least one node of the resource graph allocated to different nodes of the computation graph may be the same or different; that is, one IoT device may utilize at least part of its own capabilities to implement the computing units corresponding to one or more operators, and multiple IoT devices may implement the computing unit corresponding to one operator in a cooperative manner;
  • nodes without computational dependencies in the computational graph (i.e., operators without computational dependencies) can be executed (i.e., operated or computed) in parallel on the same or different IoT devices.
  • the task allocation strategy can indicate at least one node of the resource graph allocated to each node of the computation graph, and the at least one task allocation strategy can be determined based on the resource graph. Therefore, the task allocation strategy can be embodied as a resource subgraph obtained by dividing the resource graph, and the resource subgraph includes the correspondence between each node in the computation graph and at least one node in the resource graph relation.
  • the second strategy can be implemented by means of graph search, graph optimization, sub-graph matching, a heuristic method, a random walk method, etc.
  • the generating at least one task allocation strategy using the second strategy may include:
  • the third functional component adopts the second strategy to generate at least one resource subgraph based on the computation graph and the resource graph; each resource subgraph contains a task allocation strategy; a node of the resource subgraph represents at least part of the capabilities of an IoT device; an edge of the resource subgraph represents the relationship between two adjacent nodes.
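A hedged sketch of the second strategy as a random-walk-style search: each candidate strategy maps every computation-graph node to one resource-graph node, and the candidate with the lowest cost under a toy cost model is kept. The cost model (op cost divided by device speed) and all names are assumptions for illustration; the specification leaves the search method open (graph search, sub-graph matching, heuristics, etc.).

```python
import random

def random_strategy(op_nodes, resource_nodes, rng):
    # One candidate mapping: computation-graph node -> resource-graph node.
    return {op: rng.choice(resource_nodes) for op in op_nodes}

def strategy_cost(strategy, op_cost, device_speed):
    # Toy model: execution time of an op = its cost / speed of its device.
    return sum(op_cost[op] / device_speed[dev] for op, dev in strategy.items())

def search_best_strategy(op_nodes, resource_nodes, op_cost, device_speed,
                         n_candidates=50, seed=0):
    rng = random.Random(seed)
    candidates = [random_strategy(op_nodes, resource_nodes, rng)
                  for _ in range(n_candidates)]
    return min(candidates,
               key=lambda s: strategy_cost(s, op_cost, device_speed))
```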
  • the determining the task allocation strategy with the best performance from the at least one task allocation strategy may include:
  • the third functional component predicts the performance of each task allocation strategy; and determines the task allocation strategy with the best performance based on the predicted performance of each task allocation strategy.
  • the predicting the performance of each task allocation strategy may include:
  • the third functional component extracts the features of the computational graph to obtain a first feature set; and extracts the features of each resource subgraph to obtain a plurality of second feature sets; each resource subgraph contains a task allocation strategy;
  • the performance of the corresponding task allocation strategy is predicted based on the first feature set and the corresponding second feature set.
  • the feature sets may also be referred to as features for short, or as feature vectors or feature vector sets.
  • the third functional component can extract the first feature set and the second feature set through a feature extraction network.
  • the feature of the computational graph is extracted to obtain a first feature set; and the feature of each resource sub-graph is extracted to obtain a plurality of second feature sets, which may include:
  • the third functional component extracts features of the computational graph through a feature extraction network to obtain a first feature set; and extracts features of each resource sub-graph through the feature extraction network to obtain a plurality of second feature sets.
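To make the graph-based feature extraction concrete, here is a deliberately simplified, weight-free sketch of one graph-convolution step: each node's new feature is the mean of its own and its neighbours' features. This is an illustrative aggregation only, not the trained GCN-based network described in the specification.

```python
def gcn_layer(features, adjacency):
    """features: id -> list of floats; adjacency: id -> list of neighbour ids."""
    new_features = {}
    for node, feat in features.items():
        # Aggregate the node's own feature with its neighbours' features.
        neighbourhood = [feat] + [features[n] for n in adjacency[node]]
        dim = len(feat)
        new_features[node] = [
            sum(f[i] for f in neighbourhood) / len(neighbourhood)
            for i in range(dim)
        ]
    return new_features
```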
  • the features of the calculation graph may include at least one of the following:
  • the feature extraction network can be constructed based on a graph convolutional network (GCN, Graph Convolutional Network), and can be trained based on a training data set; the training process generates optimized network parameters, which can be used to extract features conducive to improving the accuracy of performance prediction.
  • the third functional component can predict the performance of the corresponding task allocation strategy through the prediction network.
  • the predicting the performance of the corresponding task allocation strategy based on the first feature set and the corresponding second feature set may include:
  • the third functional component obtains the prediction data corresponding to the corresponding task allocation strategy through the prediction network based on the first feature set and the corresponding second feature set; and determines the prediction of the corresponding task allocation strategy based on the prediction data corresponding to the corresponding task allocation strategy performance.
  • the prediction data may include at least one of the following:
  • the predicted reliability of executing the to-be-processed task may be reflected in the predicted success rate of executing the to-be-processed task.
  • different tasks to be processed have different requirements on the performance of the task allocation strategy; for example, one task may need to be executed within the shortest possible time, while another may need to consume as little energy as possible.
  • the determining the prediction performance of the corresponding task allocation strategy based on the prediction data corresponding to the corresponding task allocation strategy may include:
  • the third functional component performs weighting processing on the prediction data corresponding to the corresponding task allocation strategy according to the preset weight, so as to determine the prediction performance of the corresponding task allocation strategy.
  • the preset weight can be set according to requirements.
  • the preset weights may be applied according to the following formula:
  • P = Q(Δt, Δe, Δr)  (1)
  • where P represents the prediction performance of the corresponding task allocation strategy; Q(·) represents a function that includes the weighting information for each component (that is, each type of prediction data, which can be understood as a performance index); Δt represents the predicted duration; Δe represents the predicted energy consumption; and Δr represents the predicted reliability.
  • the specific form of Q(·) in expression (1), i.e., the specific values of the preset weights, depends on the different requirements of different scenarios regarding delay, energy consumption, reliability, etc., or on the degree of importance or attention attached to them; that is, a specific function is used to weight the different performance indicators to achieve a trade-off between them, and the weighted value of each key performance indicator is calculated according to the set formula to obtain the overall system performance; in other words, the predicted performance obtained through expression (1) reflects the overall system performance related to Quality of Service (QoS).
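As a hedged sketch of expression (1), the function below combines predicted delay, energy, and reliability into one scalar. The linear form and the specific weight values are illustrative assumptions; the specification only requires some weighting function Q reflecting the scenario's priorities.

```python
def predicted_performance(delta_t, delta_e, delta_r,
                          w_t=0.5, w_e=0.2, w_r=0.3):
    """One possible Q: lower delay/energy is better, higher reliability
    is better, so delay and energy enter with a negative sign."""
    return -w_t * delta_t - w_e * delta_e + w_r * delta_r
```

A latency-critical scenario would raise `w_t`; an energy-constrained one would raise `w_e`.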
  • the prediction network can be constructed based on a deep neural network (DNN, Deep Neural Networks), and can be trained based on a training data set; the training process generates optimized network parameters, which can be used to improve the accuracy of performance prediction.
  • the training data set may be continuously updated by means of historical data accumulation and/or random walk to generate new data, so that the training process has the ability to continuously learn.
  • the training data may be referred to as samples or training samples, and may include task allocation strategies and their corresponding actual performances.
  • the method may further include:
  • the third functional component obtains the actual performance of the task to be processed when executed under the task allocation strategy with the best performance, and stores that task allocation strategy together with the obtained actual performance to the training data set.
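The continuous-learning loop described above can be sketched as follows: after the best strategy has been executed, the observed actual performance is appended to the training set from which the prediction network learns. The class structure and names are assumptions for illustration.

```python
class TrainingDataset:
    def __init__(self):
        # Each sample: (task_allocation_strategy, actual_performance).
        self.samples = []

    def add(self, strategy, actual_performance):
        self.samples.append((strategy, actual_performance))

def after_execution(dataset, best_strategy, actual_performance):
    # Accumulate history so periodic retraining keeps improving predictions.
    dataset.add(best_strategy, actual_performance)
    return len(dataset.samples)
```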
  • the feature extraction network and the prediction network can be implemented inside the third functional component, that is, the third functional component performs the training and/or inference; they can also be implemented outside the third functional component, i.e., the training and/or inference is performed by other functional components.
  • an embodiment of the present application also provides an information processing method, which is applied to the second functional component. As shown in FIG. 2 , the method includes:
  • Step 201 Obtain the task to be processed; and generate a calculation graph corresponding to the task to be processed;
  • the task to be processed includes a computing task;
  • the node of the computing graph represents an operator of the task to be processed;
  • the edge of the computing graph represents the relationship between two adjacent nodes;
  • Step 202 Optimizing the generated computational graph to obtain an optimized computational graph
  • the optimized computation graph is used for task allocation in combination with the resource graph; the resource graph is generated by abstracting the capabilities of IoT devices and is used to manage and/or orchestrate the available capabilities on heterogeneous IoT devices.
  • the generating a computation graph corresponding to the task to be processed may include:
  • a calculation graph corresponding to the task to be processed is generated.
  • the decomposing the to-be-processed task into at least one operator may include:
  • the task to be processed is decomposed using the first strategy to obtain at least one operator.
  • the generating a computation graph corresponding to the task to be processed based on the at least one operator and the relationship between the operators may include:
  • a computation graph corresponding to the task to be processed is generated.
  • the optimization of the generated calculation graph may include at least one of the following:
  • the first functional component generates a resource graph by abstracting the capabilities of the IoT devices; the resource graph is used to manage and/or arrange the available capabilities on the heterogeneous IoT devices;
  • the second functional component acquires the task to be processed, and generates a computation graph corresponding to the to-be-processed task;
  • the third functional component performs task allocation based on the resource graph and the computation graph.
  • the solution of the embodiments of the present application generates a resource graph for managing and/or orchestrating available capabilities on heterogeneous IoT devices by abstracting the capabilities of IoT devices, and based on the computation graph corresponding to the task to be processed and the Resource graph for task allocation; in this way, resource-constrained and highly heterogeneous IoT devices can be efficiently managed and flexibly scheduled, that is, resource-constrained and highly heterogeneous IoT devices can be fully utilized to perform pending tasks (e.g. computationally intensive deep learning tasks).
  • the purpose of the embodiments of this application is to provide an intelligent distributed edge computing (IDEC, Intelligent Distributed Edge Computing) system that supports efficient deep learning across heterogeneous IoT devices; the IDEC system may also be called a collaborative decentralized machine learning (CDML, Collaborative Decentralized Machine Learning) system, a collaborative distributed machine learning system, a decentralized machine learning system based on device collaboration, or a distributed machine learning system based on device collaboration.
  • through extensive connection and intelligent perception of edge-side IoT devices, unified resource management and computing-power sharing, efficient device collaboration and intelligent scheduling, operator-level computing task decomposition, and graph-convolution-based task allocation and optimization, a full-stack optimized system design is achieved that supports collaborative distributed training and/or inference of deep models across heterogeneous IoT devices; this further enables AI models to sink from the cloud center to the network edge closer to IoT devices, supports the efficient deployment and execution of edge intelligence services and applications, and addresses the problems of delay, reliability, energy consumption, communication bandwidth consumption, and user privacy and security in the data processing of IoT application scenarios.
  • the IDEC system mainly includes three modules: an edge resource management module (also called an IoT device resource management module, i.e., the above-mentioned first functional component), a computing task decomposition module (also called a machine learning computing task decomposition module, i.e., the above-mentioned second functional component), and an intelligent computing task assignment (ICTA, Intelligent Computing Task Assignment) module (i.e., the above-mentioned third functional component).
  • in the southbound direction, the IDEC system connects to the widely distributed IoT edge infrastructure (i.e., edge devices, also known as IoT devices), and generates, through the edge resource management module, a resource graph that supports dynamic construction and update.
  • in the northbound direction, the IDEC system generates, through the computing task decomposition module, a computation graph for deep learning tasks from intelligent applications and services in actual scenarios, realizing fine-grained operator-level computing task decomposition, providing conditions for parallel computing and distributed processing, and facilitating graph-level optimization of deep learning task execution performance.
  • the middle layer (i.e., the core module) of the IDEC system is the ICTA module, which realizes cross-device distribution of the underlying deep learning operators on the basis of the generated resource graph and computation graph.
  • the ICTA module uses deep learning algorithms such as the graph convolutional network (GCN) and the deep neural network (DNN) to learn the inherent statistical laws of the complex and changeable task scheduling problem across the different operating systems on heterogeneous IoT devices, and thereby makes intelligent decisions on the task allocation strategy corresponding to the best system performance, maximizing the use of the decentralized and heterogeneous resources on the edge side of the Internet of Things and improving overall system performance; at the same time, by introducing a continuous learning mechanism, the ICTA module makes the IDEC system intelligent and adaptive, realizing "the more it is used, the smarter it becomes".
  • the IoT infrastructure linked to the south of the IDEC system mainly includes two categories: terminal devices (i.e., intelligent IoT devices with computing capabilities, such as smart cameras, smart gateways and computing boxes) and edge servers (i.e., intelligent IoT devices with slightly stronger computing, storage and management capabilities, responsible for hosting and running the IDEC system as well as some large-scale deep learning models).
  • in the northbound direction, the IDEC system docks with a variety of intelligent edge applications and services in the Internet of Things field, including: smart elderly care, smart home, Internet of Vehicles, smart communities, smart cities, the industrial Internet, etc.
  • the functions of the edge resource management module are described with reference to FIG. 5 and FIG. 6.
  • the edge resource management module adopts technologies such as virtualization, software definition and knowledge graphs; through the functions of the edge device service capability abstraction module (also called the IoT device service capability abstraction module, as shown in FIG. 5) and the resource knowledge graph building module (as shown in FIG. 6), it realizes unified management and orchestration of the heterogeneous resources on the distributed IoT edge infrastructure, as well as intelligent perception of and collaboration among edge devices.
  • the edge device service capability abstraction module is mainly used to solve the problem of heterogeneity, and its fundamental goal is to break the boundaries between heterogeneous hardware and enable a variety of IoT devices to perform deep learning tasks in a collaborative manner.
  • it can include three layers, as shown in Figure 5: the edge infrastructure layer realizes the identification and connection of various heterogeneous devices; the resource pooling layer realizes the pooling of computing resources (for example: CPU, GPU, FPGA, ARM) on the edge devices; and the capability abstraction layer uses virtualization and software-defined technologies to convert computing and storage resources into virtual computing nodes and storage nodes, facilitating unified management and orchestration.
  • the edge device service capability abstraction module facilitates resource scheduling across heterogeneous edge devices and contributes to the discovery and matching of suitable resources to meet specific computing needs. In this way, widely distributed edge resources and processing capabilities can be perceived, reused and shared, which improves resource utilization and thereby the overall service capability of the edge side.
  • the resource knowledge graph building module can use semantics and knowledge engine technology to describe and model interconnected IoT devices.
  • nodes represent different edge devices or fine-grained computing and/or storage capabilities abstracted from edge devices.
  • virtualized nodes can include device nodes, computing nodes, and storage nodes; wherein the ontology description model of a device node may include the following information: IoT device information (including device ID, location, type, status value, function, owner, interface, IP information, etc.) and capability information (available CPU, GPU, FPGA, DSP, memory resources, etc.).
  • Edges in the resource knowledge graph represent associations between adjacent nodes.
  • the association relationship represents the interconnection between heterogeneous edge device resources, and further reflects the internal cooperation and sharing mechanism of edge devices.
  • an automatic update mechanism is introduced into the resource knowledge graph building module to keep the graph consistent with the resource status and connection status of the physical edge devices.
  • the use of scheduling strategies and shared collaboration mechanisms further improves resource utilization and overall computing power.
  • the IDEC system can achieve efficient management and flexible scheduling of limited available resources on heterogeneous distributed edge devices to meet the resource requirements of computing tasks.
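As an illustrative sketch of the resource-graph ideas above (not part of this disclosure; all class and field names are assumptions), the following models device, computing and storage nodes with ontology-style attributes, records edges between adjacent nodes, and includes a simple automatic-update hook:

```python
# Hypothetical sketch of a resource knowledge graph: nodes are devices or
# fine-grained compute/storage capabilities abstracted from devices; edges
# record the association between adjacent nodes. Names are illustrative.

class ResourceGraph:
    def __init__(self):
        self.nodes = {}     # node_id -> attribute dict (ontology description)
        self.edges = set()  # associations between adjacent nodes

    def add_device(self, dev_id, dev_type, status, capabilities):
        """Abstract an IoT device into a device node plus capability nodes."""
        self.nodes[dev_id] = {"kind": "device", "type": dev_type,
                              "status": status}
        for cap_name, amount in capabilities.items():
            cap_id = f"{dev_id}/{cap_name}"
            kind = "storage" if cap_name == "memory" else "compute"
            self.nodes[cap_id] = {"kind": kind, "resource": cap_name,
                                  "available": amount}
            self.edges.add((dev_id, cap_id))  # device "provides" capability

    def update_device(self, dev_id, status=None, capabilities=None):
        """Automatic-update mechanism: keep the graph consistent with the
        physical device's resource and connection status."""
        if status is not None:
            self.nodes[dev_id]["status"] = status
        for cap_name, amount in (capabilities or {}).items():
            self.nodes[f"{dev_id}/{cap_name}"]["available"] = amount

rg = ResourceGraph()
rg.add_device("cam-1", "smart_camera", "online",
              {"cpu": 2.0, "gpu": 0.5, "memory": 1024})
rg.update_device("cam-1", capabilities={"cpu": 1.5})  # device load changed
```

A real implementation would hold richer ontology fields (owner, interface, IP, etc.) as described above; this sketch only shows the node/edge structure and the update path.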
  • the computational task decomposition module has the functions of computational graph construction and computational graph optimization.
  • the computation graph construction refers to generating a computation graph corresponding to a deep learning computation task.
  • deep learning computing tasks are usually some multi-layer deep neural network models, and the basic units of which are deep learning operators, such as convolution operators, pooling operators, and so on.
  • an abstract node is used to represent an operator, and an edge is used to represent the data flow, data dependency or calculation dependency, forming a graph structure that represents the operator-level implementation of the deep learning model; this is called a computational graph, computational flow graph, or data flow graph. As shown in Figure 7, a computational graph expresses a deep learning computing task in the form of a graph.
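A minimal sketch of such a computation graph, assuming a toy conv → relu → pool fragment (the operator names and the small topological-sort helper are illustrative, not taken from this disclosure):

```python
# Minimal computation-graph sketch: each node is a deep learning operator,
# each directed edge a data dependency between producer and consumer.

class CompGraph:
    def __init__(self):
        self.ops = {}    # op_id -> operator type
        self.edges = []  # (producer, consumer) data-flow edges

    def add_op(self, op_id, op_type, inputs=()):
        self.ops[op_id] = op_type
        for src in inputs:
            self.edges.append((src, op_id))

    def topo_order(self):
        """Order operators so every producer precedes its consumers."""
        indeg = {op: 0 for op in self.ops}
        for _, dst in self.edges:
            indeg[dst] += 1
        ready = [op for op, d in indeg.items() if d == 0]
        order = []
        while ready:
            op = ready.pop()
            order.append(op)
            for src, dst in self.edges:
                if src == op:
                    indeg[dst] -= 1
                    if indeg[dst] == 0:
                        ready.append(dst)
        return order

# A tiny CNN fragment: conv -> relu -> pool
cg = CompGraph()
cg.add_op("conv1", "conv2d")
cg.add_op("relu1", "relu", inputs=["conv1"])
cg.add_op("pool1", "maxpool", inputs=["relu1"])
```

The topological order is one valid execution schedule; the allocation step described later decides *which device* runs each operator.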
  • Computational graph optimization is to perform some operations on operators in the computational graph before they are actually allocated and executed, in order to obtain better system performance, such as reducing task execution time.
  • the methods of computational graph optimization mainly include: operator fusion, constant merging, static memory planning passes, and data layout transformation.
  • operator fusion refers to combining multiple adjacent small operators into one operator without saving intermediate results in global memory, so as to reduce execution time by reducing memory access.
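Operator fusion can be illustrated with two element-wise operators (a scale and a ReLU; the operators chosen here are an assumption for illustration): the fused version computes both in one pass, never materializing the intermediate result:

```python
# Illustrative operator fusion: combine two adjacent small operators into
# one so the intermediate result never needs to be written to, and read
# back from, global memory.

def scale(xs, k):
    return [k * x for x in xs]

def relu(xs):
    return [x if x > 0 else 0 for x in xs]

def unfused(xs, k):
    tmp = scale(xs, k)  # intermediate buffer written to memory
    return relu(tmp)    # then read back

def fused_scale_relu(xs, k):
    # one fused operator: single pass, no intermediate buffer
    return [k * x if k * x > 0 else 0 for x in xs]
```

Both versions produce identical results; the fused one simply trades two memory traversals for one, which is the execution-time saving the text describes.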
  • computation tasks can thus be decomposed at fine, operator-level granularity, which makes parallel processing and distributed execution of operators possible; at the same time, it facilitates graph-level optimizations such as operator fusion and constant merging, and provides the prerequisites for the subsequent computing task allocation and optimization.
  • the computational graph constructed by the computational task decomposition module provides a global view of the operators, but does not specify the specific IoT devices that implement each operator to obtain the best system performance, that is, the computing task allocation strategy has not yet been determined.
  • the resource graph describes the resources available on IoT devices capable of carrying deep learning workloads. Therefore, based on the computation graph and the resource graph, in order to make full use of the decentralized resources on IoT devices to perform computing tasks efficiently and collaboratively, the ICTA module allocates the deep learning operators in the computation graph, in an optimal manner, to the IoT devices with idle resources in the resource graph. This achieves the best match between computing tasks and device resources and realizes intelligent decision-making on the task allocation strategy corresponding to the best system performance.
  • the ICTA module may specifically include: a resource subgraph construction module, a feature extraction module, and a performance prediction module.
  • the resource subgraph building module is configured to construct resource subgraphs by means of graph search, graph optimization, subgraph matching, heuristic method or random walk method, and each resource subgraph carries a specific task allocation strategy.
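As one loose illustration of the random-walk option named above (the adjacency layout and device names are hypothetical): starting from a seed device node, a bounded random walk over the resource graph collects a connected set of candidate devices, and each such set can serve as one candidate resource subgraph carrying one task allocation option:

```python
# Sketch of random-walk resource-subgraph construction. The walk length,
# seeding and adjacency structure are assumptions for illustration only.
import random

def random_walk_subgraph(adj, seed, steps, rng):
    """Collect the nodes visited by a bounded random walk from `seed`."""
    visited = {seed}
    node = seed
    for _ in range(steps):
        neighbors = adj.get(node, [])
        if not neighbors:
            break
        node = rng.choice(neighbors)
        visited.add(node)
    return visited

adj = {"dev1": ["dev2", "dev3"], "dev2": ["dev1"],
       "dev3": ["dev1", "dev4"], "dev4": ["dev3"]}
sub = random_walk_subgraph(adj, "dev1", steps=5, rng=random.Random(0))
```

Running several walks with different seeds or lengths yields several candidate subgraphs, which the performance prediction module described below can then compare.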
  • the feature extraction module is configured to use the GCN algorithm to extract the graph topology features of the resource graph and the computation graph respectively.
  • the extracted features cover computing power, storage, communication and other dimensions that play a decisive role in the efficient execution of deep learning computing tasks.
  • the performance prediction module is configured to use the DNN algorithm to predict the system performance for a given task allocation strategy (that is, the task allocation strategy carried by each resource sub-graph or corresponding to the task) before the task is actually executed.
  • the system performance indicators of interest can include: execution time (i.e., duration), energy consumption, and reliability (e.g., success rate).
  • the performance prediction module can trade off these three indicators according to the actual needs of different application scenarios (for example, multiplying the more important indicator by a larger weight), finally obtaining a comprehensive index representing the overall system performance.
  • the performance prediction module selects the task allocation strategy that can obtain the best system performance for actual task allocation according to the obtained comprehensive indicators of each task allocation strategy.
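The weighting and selection described in the two items above might be sketched as follows (the score form, weights and sign convention are illustrative assumptions; a lower composite index is taken as better, so reliability enters with a minus sign):

```python
# Hedged sketch of the indicator trade-off: combine predicted execution
# time, energy consumption and reliability into one composite index with
# scenario-specific weights, then pick the best-scoring strategy.

def composite_index(pred, weights):
    # reliability is "higher is better", so it reduces the index
    return (weights["time"] * pred["time_s"]
            + weights["energy"] * pred["energy_j"]
            - weights["reliability"] * pred["success_rate"])

def pick_best(strategies, weights):
    return min(strategies, key=lambda s: composite_index(s["pred"], weights))

strategies = [
    {"name": "A", "pred": {"time_s": 1.2, "energy_j": 5.0, "success_rate": 0.99}},
    {"name": "B", "pred": {"time_s": 0.8, "energy_j": 9.0, "success_rate": 0.95}},
]
# a latency-sensitive scenario puts a large weight on execution time
best = pick_best(strategies, {"time": 10.0, "energy": 0.1, "reliability": 1.0})
```

With these weights the faster strategy B wins despite higher energy use; an energy-sensitive scenario would shift the weights the other way.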
  • end-to-end training can be performed on the GCN model (i.e., the above-mentioned feature extraction network) and the DNN model (i.e., the above-mentioned prediction network) to learn the potential correspondence between different task allocation strategies and system performance across a variety of heterogeneous devices; in this way, the accuracy of system performance prediction can be improved.
  • the ICTA module can solve the problem of optimal matching between computing tasks and device resources, thereby improving resource utilization and overall system performance.
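As a rough, non-authoritative numerical sketch of the GCN-plus-DNN pipeline described above (all shapes, weights, pooling and feature choices are assumptions): one graph-convolution layer with symmetric normalization extracts topology-aware node features, mean pooling yields graph embeddings for the computation graph and a resource subgraph, and a small fully connected network maps the concatenated embeddings to a predicted performance score. In practice both networks would be trained end to end on recorded (strategy, performance) pairs.

```python
# Untrained-weights sketch of GCN feature extraction + DNN performance
# prediction. Shapes and random weights are illustrative only.
import numpy as np

def gcn_layer(A, X, W):
    # symmetric normalization: D^-1/2 (A + I) D^-1/2 X W, then ReLU
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

def graph_embedding(A, X, W):
    return gcn_layer(A, X, W).mean(axis=0)  # mean pooling over nodes

def predict_performance(emb_task, emb_resource, W1, W2):
    h = np.maximum(np.concatenate([emb_task, emb_resource]) @ W1, 0.0)
    return float(h @ W2)  # scalar performance score

rng = np.random.default_rng(0)
A_task = np.array([[0., 1.], [1., 0.]])  # 2-operator computation graph
X_task = rng.normal(size=(2, 3))         # node features (e.g. FLOPs, size)
A_res = np.array([[0., 1.], [1., 0.]])   # 2-device resource subgraph
X_res = rng.normal(size=(2, 3))          # node features (e.g. free CPU)
Wg = rng.normal(size=(3, 4))             # shared GCN weights
W1 = rng.normal(size=(8, 5))             # DNN hidden layer
W2 = rng.normal(size=(5,))               # DNN output layer
score = predict_performance(graph_embedding(A_task, X_task, Wg),
                            graph_embedding(A_res, X_res, Wg), W1, W2)
```

Each candidate resource subgraph would be scored this way, and the strategy with the best composite score selected for actual allocation.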
  • the ICTA module reasonably allocates the computing units (i.e. operators) of the deep learning model to a variety of heterogeneous IoT devices according to the task allocation strategy with the best system performance. In this way, the cross-device heterogeneous resources in the IDEC system can be fully utilized.
  • Distributed (or decentralized) execution of computationally intensive deep learning tasks in the manner of multi-device collaboration helps distributed edge computing systems improve the deployment and execution efficiency of edge-side intelligent applications.
  • the ICTA module can achieve "the more it is used, the smarter it gets", which takes the entire IDEC system a step further toward integrated adaptive and self-learning intelligence.
  • this application example also provides an intelligent IoT edge computing platform.
  • the platform connects with intelligent applications in multiple vertical industries through the "demand downlink, service uplink" mode in the north direction, and links with a variety of heterogeneous, widely distributed IoT devices through the "data uplink, task downlink" mode in the south direction.
  • the entire platform integrates operation and maintenance, security and privacy guarantee systems, and serves consumers, supply chains, collaborating enterprises, developers, etc.
  • the platform specifically includes: an application layer, a core layer and a resource layer.
  • the application layer integrates a variety of common capabilities and intelligent algorithms, which convert intelligent service requirements from specific industry application scenarios into functional modules such as behavior recognition and face recognition, and further decompose them into multiple deep learning tasks and/or models such as CNNs and RNNs.
  • the core layer hosts the IDEC system, which performs fine-grained (i.e., operator-level) decomposition of deep learning tasks from the application layer above and unified management and efficient scheduling of edge resources below. Based on the resource graph and the computation graph, it intelligently allocates and optimizes tasks across multiple devices according to the best match between tasks and resources, finally realizing distributed training and/or inference of machine learning models.
  • the main functions of the core layer include: edge resource management, deep learning computing task decomposition, intelligent computing task allocation, etc.
  • the features and advantages of the core layer include: intelligent perception, heterogeneous compatibility, scheduling and orchestration, shared collaboration, distributed deployment, and intelligent adaptation.
  • the resource layer realizes capability abstraction and resource extraction on IoT devices through technologies such as virtualization and software definition, and is used for computing capability virtualization, storage capability virtualization, and network resource virtualization.
  • a full-stack optimized system design is realized, from top-level edge intelligent applications down to the widely distributed heterogeneous IoT edge devices at the bottom. Through this design, the IDEC system achieves heterogeneous compatibility, high performance, and intelligent self-adaptation, and realizes unified management and resource sharing for large numbers of scattered, resource-constrained heterogeneous IoT edge devices, thereby supporting decentralized, distributed training and/or inference of deep learning models across heterogeneous devices.
  • through the edge resource management module, intelligent perception, unified management and collaboration of IoT edge devices are realized, along with resource sharing and efficient scheduling, so as to make full use of the widely distributed, resource-constrained heterogeneous IoT devices.
  • operator-level decomposition of the deep learning task is realized, and the generated computation graph facilitates parallel processing and distributed computing, that is, the parallel processing and distributed execution of operators; moreover, it facilitates graph-level optimization (which can also be understood as operator-level optimization) to improve task execution performance.
  • the embodiment of the present application further provides an information processing apparatus, as shown in FIG. 10 , the apparatus includes:
  • a first functional component 1001 configured to generate a resource graph by abstracting the capabilities of IoT devices; the resource graph is used to manage and/or orchestrate available capabilities on heterogeneous IoT devices;
  • the second functional component 1002 is configured to obtain a task to be processed and generate a computation graph corresponding to the task to be processed;
  • the third functional component 1003 is configured to perform task allocation based on the resource graph and the computation graph.
  • the second functional component 1002 is configured to: decompose the task to be processed into at least one operator and determine the relationships between the operators; and generate, based on the at least one operator and the relationships between the operators, a calculation graph corresponding to the task to be processed.
  • the second functional component 1002 is configured to use the first strategy to decompose the to-be-processed task to obtain at least one operator.
  • the second functional component 1002 is configured to: abstract each of the at least one operator into a corresponding node; determine the relationships between nodes based on the relationships between the operators; and generate, based on the determined nodes and the relationships between the nodes, a computation graph corresponding to the task to be processed.
  • the second functional component 1002 is configured to optimize the generated computational graph
  • the third functional component 1003 is configured to perform task allocation based on the resource graph and the optimized computation graph.
  • the second functional component 1002 is configured to execute at least one of the following: operator fusion; constant merging; static memory planning pass; data layout transformation.
  • the first functional component 1001 is configured as:
  • discover IoT devices in the network; detect the capabilities of the IoT devices; for each IoT device, abstract the IoT device into a corresponding node based on the capabilities of the corresponding IoT device;
  • based on the abstracted nodes, a resource graph is generated.
  • the first functional component 1001 is configured to, when a change of an Internet of Things device is monitored, update the resource graph based on the monitored change of the Internet of Things device.
  • the third functional component 1003 is configured as:
  • adopt a second strategy to generate at least one task allocation strategy; determine the task allocation strategy with the best performance from the at least one task allocation strategy; and perform task allocation based on the best-performing task allocation strategy; the task allocation strategy is used to allocate the to-be-processed task to at least one IoT device.
  • the third functional component 1003 is configured as:
  • the second strategy is adopted to generate at least one resource subgraph; each resource subgraph includes a task allocation strategy; the nodes of the resource subgraph represent at least part of the capabilities of an IoT device; The edges of the resource subgraph represent the relationship between two adjacent nodes.
  • the third functional component 1003 is configured as:
  • predict the performance of each task allocation strategy; determine the best-performing task allocation strategy based on the predicted performance of each task allocation strategy.
  • the third functional component 1003 is configured as:
  • the performance of the corresponding task allocation strategy is predicted based on the first feature set and the corresponding second feature set.
  • the third functional component 1003 is configured to extract features of the computational graph through a feature extraction network to obtain a first feature set; and extract features of each resource sub-graph through the feature extraction network, A plurality of second feature sets are obtained.
  • the third functional component 1003 is configured to obtain prediction data corresponding to a corresponding task assignment strategy through a prediction network based on the first feature set and the corresponding second feature set; The prediction data of the corresponding task allocation strategy determines the prediction performance.
  • the third functional component 1003 is configured to perform weighting processing on the prediction data corresponding to the corresponding task allocation strategy according to a preset weight, so as to determine the prediction performance of the corresponding task allocation strategy.
  • the third functional component 1003 is configured to obtain the actual performance of the to-be-processed task when it is executed based on the task allocation strategy with the best performance after task allocation;
  • the optimal task allocation strategy and the obtained actual performance are stored in the training data set.
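The continuous-learning step described in the two items above can be sketched as follows; the function names, record fields and retraining threshold are illustrative assumptions, not part of this disclosure:

```python
# Hypothetical sketch of the continuous-learning loop: after each real
# execution, the chosen allocation strategy and the actually measured
# performance are appended to the training set, so the prediction network
# can be periodically retrained ("the more it is used, the smarter it gets").

training_set = []

def record_outcome(strategy_id, predicted, actual):
    training_set.append({"strategy": strategy_id,
                         "predicted": predicted,
                         "actual": actual})

def should_retrain(min_samples=3):
    # a simple trigger: retrain once enough new samples have accumulated
    return len(training_set) >= min_samples

record_outcome("A", predicted=7.9, actual=8.4)
record_outcome("A", predicted=7.7, actual=8.1)
record_outcome("B", predicted=9.2, actual=9.0)
```

The gap between `predicted` and `actual` in each record is exactly the supervision signal an end-to-end retraining pass would minimize.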
  • the function of the first functional component 1001 is equivalent to the function of the edge resource management module in the application embodiment of the present application
  • the function of the second functional component 1002 is equivalent to the function of the computing task decomposition module in the application embodiment of the present application
  • the function of the third functional component 1003 is equivalent to the function of the Intelligent Computing Task Assignment (ICTA) module in the application embodiment of the present application.
  • the first functional component 1001, the second functional component 1002 and the third functional component 1003 may be implemented by a processor in the apparatus.
  • the embodiment of the present application further provides an information processing apparatus, as shown in FIG. 11 , the apparatus includes:
  • the first processing unit 1101 is configured to obtain a task to be processed; and generate a calculation graph corresponding to the task to be processed; the task to be processed includes a calculation task; a node of the calculation graph represents an operator of the task to be processed; The edge of the computational graph represents the relationship between two adjacent nodes;
  • the second processing unit 1102 is configured to optimize the generated computation graph to obtain an optimized computation graph; the optimized computation graph is used for task allocation in combination with the resource graph; the resource graph is obtained by Capabilities are abstractly generated; the resource graph is used to manage and/or orchestrate available capabilities on heterogeneous IoT devices.
  • the first processing unit 1101 is configured as:
  • a calculation graph corresponding to the task to be processed is generated.
  • the first processing unit 1101 is configured to use a first strategy to decompose the to-be-processed task to obtain at least one operator.
  • the first processing unit 1101 is configured as:
  • a computation graph corresponding to the task to be processed is generated.
  • the second processing unit 1102 is configured to perform at least one of the following: operator fusion; constant merging; static memory planning pass; data layout transformation.
  • the function of the first processing unit 1101 and the function of the second processing unit 1102 are equivalent to the functions of the computing task decomposition module in the application embodiment of the present application.
  • the first processing unit 1101 and the second processing unit 1102 may be implemented by a processor in the device.
  • the embodiments of the present application further provide an electronic device.
  • the electronic device 1200 includes:
  • the processor 1202 is connected to the communication interface 1201 to realize information interaction with other electronic devices, and is configured to execute the method provided by one or more of the above technical solutions when running a computer program;
  • the memory 1203 stores computer programs that can run on the processor 1202 .
  • At least one functional component among the first functional component, the second functional component and the third functional component may be provided on the electronic device 1200 .
  • the processor 1202 is configured to: generate a resource graph by abstracting the capabilities of IoT devices, the resource graph being used to manage and/or orchestrate available capabilities on heterogeneous IoT devices; obtain a task to be processed and generate a computation graph corresponding to the task to be processed; and perform task allocation based on the resource graph and the computation graph.
  • the processor 1202 is configured to: decompose the task to be processed into at least one operator and determine the relationships between the operators; and generate, based on the at least one operator and the relationships between the operators, a calculation graph corresponding to the task to be processed.
  • the processor 1202 is configured to:
  • the task to be processed is decomposed using the first strategy to obtain at least one operator.
  • the processor 1202 is configured to:
  • a computation graph corresponding to the task to be processed is generated.
  • the processor 1202 is configured to:
  • task assignment is performed.
  • the processor 1202 is configured to perform at least one of the following operations: operator fusion; constant merging; static memory planning pass; data layout transformation.
  • the processor 1202 is configured to:
  • discover IoT devices in the network; detect the capabilities of the IoT devices; for each IoT device, abstract the IoT device into a corresponding node based on the capabilities of the corresponding IoT device;
  • based on the abstracted nodes, a resource graph is generated.
  • the processor 1202 is configured to:
  • when a change of an Internet of Things device is monitored, the resource graph is updated based on the monitored change of the Internet of Things device.
  • the processor 1202 is configured to:
  • adopt a second strategy to generate at least one task allocation strategy; determine the task allocation strategy with the best performance from the at least one task allocation strategy; and perform task allocation based on the best-performing task allocation strategy; the task allocation strategy is used to allocate the to-be-processed task to at least one IoT device.
  • the processor 1202 is configured to:
  • the second strategy is adopted to generate at least one resource subgraph; each resource subgraph includes a task allocation strategy; the nodes of the resource subgraph represent at least part of the capabilities of an IoT device; The edges of the resource subgraph represent the relationship between two adjacent nodes.
  • the processor 1202 is configured to:
  • predict the performance of each task allocation strategy; determine the best-performing task allocation strategy based on the predicted performance of each task allocation strategy.
  • the processor 1202 is configured to:
  • the performance of the corresponding task allocation strategy is predicted based on the first feature set and the corresponding second feature set.
  • the processor 1202 is configured to:
  • the features of the computational graph are extracted through the feature extraction network to obtain a first feature set; and the features of each resource sub-graph are extracted through the feature extraction network to obtain multiple second feature sets.
  • the processor 1202 is configured to:
  • the prediction data corresponding to the corresponding task allocation strategy is obtained through the prediction network; the prediction performance of the corresponding task allocation strategy is determined based on the prediction data corresponding to the corresponding task allocation strategy.
  • the processor 1202 is configured to:
  • weighting processing is performed on the prediction data corresponding to the corresponding task allocation strategy, so as to determine the prediction performance of the corresponding task allocation strategy.
  • the processor 1202 is configured to:
  • the actual performance of the task to be processed when the task allocation strategy with the best performance is executed is obtained; and the task allocation strategy with the best performance and the obtained actual performance are stored in the training data set.
  • the processor 1202 is configured to:
  • the task to be processed includes a computing task; a node of the computation graph represents an operator of the task to be processed; an edge of the computation graph represents the relationship between two adjacent nodes;
  • the generated calculation graph is optimized to obtain an optimized calculation graph; the optimized calculation graph is used for task allocation in combination with the resource graph; the resource graph is generated by abstracting the capabilities of the Internet of Things devices; the Resource graphs are used to manage and/or orchestrate available capabilities on heterogeneous IoT devices.
  • processor 1202 is configured as:
  • a calculation graph corresponding to the task to be processed is generated.
  • the processor 1202 is configured to:
  • the task to be processed is decomposed using the first strategy to obtain at least one operator.
  • the processor 1202 is configured to:
  • a computation graph corresponding to the task to be processed is generated.
  • the processor 1202 is configured to perform at least one of the following operations: operator fusion; constant merging; static memory planning pass; data layout transformation.
  • bus system 1204 is used to implement connection communication between these components.
  • bus system 1204 also includes a power bus, a control bus, and a status signal bus.
  • the various buses are labeled as bus system 1204 in FIG. 12 .
  • the memory 1203 in this embodiment of the present application is used to store various types of data to support the operation of the electronic device 1200 .
  • Examples of such data include: any computer program used to operate on the electronic device 1200 .
  • the methods disclosed in the above embodiments of the present application may be applied to the processor 1202 or implemented by the processor 1202 .
  • the processor 1202 may be an integrated circuit chip with signal processing capability. In the implementation process, each step of the above-mentioned method may be completed by an integrated logic circuit of hardware in the processor 1202 or an instruction in the form of software.
  • the above-mentioned processor 1202 may be a general-purpose processor, a DSP, a GPU, or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and the like.
  • the processor 1202 may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of this application.
  • a general purpose processor may be a microprocessor or any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present application can be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
  • the software module may be located in a storage medium, and the storage medium is located in the memory 1203, and the processor 1202 reads the information in the memory 1203, and completes the steps of the foregoing method in combination with its hardware.
  • the electronic device 1200 may be implemented by one or more of an Application Specific Integrated Circuit (ASIC), a DSP, a Programmable Logic Device (PLD), a Complex Programmable Logic Device (CPLD), an FPGA, a general-purpose processor, a GPU, a controller, a microcontroller (MCU, Micro Controller Unit), a microprocessor, various AI chips, brain-like chips, or other electronic components, for performing the aforementioned method.
  • the memory 1203 in this embodiment of the present application may be a volatile memory or a non-volatile memory, and may also include both volatile and non-volatile memory.
  • the non-volatile memory can be a ROM, a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Ferroelectric Random Access Memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory can be a disk memory or a tape memory.
  • the volatile memory can be a RAM, which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Synchronous Static RAM (SSRAM), Dynamic RAM (DRAM), Synchronous Dynamic RAM (SDRAM), Double Data Rate Synchronous Dynamic RAM (DDR SDRAM), Enhanced Synchronous Dynamic RAM (ESDRAM), SyncLink Dynamic RAM (SLDRAM), and Direct Rambus RAM (DRRAM).
  • the embodiment of the present application further provides an information processing system, including:
  • a first functional component configured to generate a resource graph by abstracting capabilities of IoT devices; the resource graph is used to manage and/or orchestrate available capabilities on heterogeneous IoT devices;
  • the second functional component is configured to obtain the task to be processed and generate a calculation graph corresponding to the task to be processed;
  • a third functional component configured to perform task assignment based on the resource graph and the computation graph
  • the first functional component, the second functional component and the third functional component are arranged on at least two electronic devices.
  • the system may include: a first electronic device 1301 and a second electronic device 1302 ; the first electronic device 1301 is provided with the second functional component; the second electronic device 1302 The first functional component and the third functional component are provided.
  • an embodiment of the present application further provides a storage medium, that is, a computer storage medium, specifically a computer-readable storage medium, for example including the memory 1203 storing a computer program, where the above-mentioned computer program can be executed by the processor 1202 of the electronic device 1200 to complete the steps of the aforementioned method.
  • the computer-readable storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, Flash Memory, magnetic surface memory, optical disk, or CD-ROM.


Abstract

本申请公开了一种信息处理方法、装置、系统、电子设备及存储介质。其中,方法包括:第一功能组件通过将物联网设备的能力进行抽象,生成资源图;所述资源图用于管理和/或编排异构物联网设备上的可用能力;第二功能组件获取待处理任务,并生成待处理任务对应的计算图;第三功能组件基于所述资源图和所述计算图,进行任务分配。

Description

信息处理方法、装置、系统、电子设备及存储介质
相关申请的交叉引用
本申请基于申请号为202110184807.5、申请日为2021年02月10日的中国专利申请提出,并要求该中国专利申请的优先权,该中国专利申请的全部内容在此引入本申请作为参考。
技术领域
本申请涉及物联网(IoT,Internet of Things)领域,尤其涉及一种信息处理方法、装置、系统、电子设备及存储介质。
背景技术
边缘计算是一种将计算任务卸载(即分配)在更接近物联网设备的网络边缘侧的计算方式。与云计算相比,边缘计算无需将大量的用户原始数据上传至云数据中心,因此,边缘计算可以有效地解决数据处理过程中的时延、可靠性、能耗、通信带宽消耗、用户隐私和安全等问题,特别是在对数据处理时延、用户隐私和可靠性等要求较高的应用场景中有较大的价值和广阔的应用前景,比如自动驾驶、虚拟现实(VR,Virtual Reality)、增强现实(AR,Augmented Reality)等应用场景。
然而,用于实现基于人工智能(AI,Artificial Intelligence)的应用场景的智能应用和/或服务通常需要执行对计算能力和/或存储空间要求较高的计算密集型的计算任务,这给资源受限(即计算能力和/或存储空间有限)且高度异构的边缘侧带来了较大的挑战,即如何充分利用资源受限且高度异构的物联网设备来执行计算任务成为亟待解决的问题。
发明内容
为解决相关技术问题,本申请实施例提供一种信息处理方法、装置、系统、电子设备及存储介质。
本申请实施例的技术方案是这样实现的:
本申请实施例提供了一种信息处理方法,包括:
第一功能组件通过将物联网设备的能力进行抽象,生成资源图;所述资源图用于管理和/或编排异构物联网设备上的可用能力;
第二功能组件获取待处理任务,并生成待处理任务对应的计算图;
第三功能组件基于所述资源图和所述计算图,进行任务分配。
在本公开的一些可选实施例中,所述生成待处理任务对应的计算图,包括:
所述第二功能组件将所述待处理任务分解为至少一个算子;并确定算子之间的关系;
基于所述至少一个算子及算子之间的关系,生成待处理任务对应的计算图。
在本公开的一些可选实施例中,所述将所述待处理任务分解为至少一个算子,包括:
所述第二功能组件采用第一策略对所述待处理任务进行分解,得到至少一个算子。
在本公开的一些可选实施例中,所述基于所述至少一个算子及算子之间的关系,生成待处理任务对应的计算图,包括:
所述第二功能组件将所述至少一个算子中的每个算子抽象成相应的节点;并基于算子之间的关系确定节点之间的关系;
基于确定的节点及节点之间的关系,生成待处理任务对应的计算图。
在本公开的一些可选实施例中,所述计算图的节点代表所述待处理任务的一个算子;所述计算图的边代表相邻两个节点之间的关系。
在本公开的一些可选实施例中,所述待处理任务包含以下至少之一:
需要进行训练的机器学习模型;
需要进行推理的机器学习模型。
在本公开的一些可选实施例中,所述方法还包括:
所述第二功能组件对生成的计算图进行优化;
所述第三功能组件基于所述资源图和优化后的计算图,进行任务分配。
在本公开的一些可选实施例中,所述对生成的计算图进行优化,包括以下至少之一:
算子融合;
常量合并;
静态内存规划传递;
数据布局转换。
在本公开的一些可选实施例中,所述通过将物联网设备的能力进行抽象,生成资源图,包括:
第一功能组件发现网络中的物联网设备;检测物联网设备的能力;针对每个物联网设备,基于相应物联网设备的能力,将物联网设备抽象成相应的节点;
基于抽象出的节点,生成资源图。
在本公开的一些可选实施例中,所述资源图的节点代表一个物联网设备的至少部分能力;所述资源图的边代表相邻两个节点之间的关系。
在本公开的一些可选实施例中,所述方法还包括:
第一功能组件监测到物联网设备发生变化时,基于监测到的物联网设备的变化情况,更新所述资源图。
在本公开的一些可选实施例中,所述基于所述资源图和所述计算图,进行任务分配,包括:
所述第三功能组件基于所述资源图和所述计算图,采用第二策略生成至少一种任务分配策略;从所述至少一种任务分配策略中确定性能最佳的任务分配策略;并基于所述性能最佳的任务分配策略,进行任务分配;所述任务分配策略用于将所述待处理任务分配到至少一个物联网设备上。
在本公开的一些可选实施例中,所述采用第二策略生成至少一种任务分配策略,包括:
所述第三功能组件基于所述计算图和资源图,采用第二策略,生成至少一个资源子图;每个资源子图包含一种任务分配策略;所述资源子图的节点代表一个物联网设备的至少部分能力;所述资源子图的边代表相邻两个节点之间的关系。
在本公开的一些可选实施例中,所述从所述至少一种任务分配策略中确定性能最佳的任务分配策略,包括:
所述第三功能组件预测每个任务分配策略的性能;基于预测的每个任务分配策略的性能,确定性能最佳的任务分配策略。
在本公开的一些可选实施例中,所述预测每个任务分配策略的性能,包括:
所述第三功能组件提取所述计算图的特征,得到第一特征集;并提取每个资源子图的特征,得到多个第二特征集;每个资源子图包含一种任务分配策略;
针对每个任务分配策略,基于所述第一特征集和相应的第二特征集,预测相应任务分配策略的性能。
在本公开的一些可选实施例中,所述提取所述计算图的特征,得到第一特征集;并提取每个资源子图的特征,得到多个第二特征集,包括:
所述第三功能组件通过特征提取网络提取所述计算图的特征,得到第一特征集;并通过所述特征提取网络提取每个资源子图的特征,得到多个第二特征集。
在本公开的一些可选实施例中,所述基于所述第一特征集和相应的第二特征集,预测相应任务分配策略的性能,包括:
所述第三功能组件基于所述第一特征集和相应的第二特征集,通过预测网络获得相应任务分配策略对应的预测数据;基于相应任务分配策略对应的预测数据确定相应任务分配策略的预测性能。
在本公开的一些可选实施例中,所述预测数据,包含以下至少之一:
执行所述待处理任务的预测时长;
执行所述待处理任务的预测能耗;
执行所述待处理任务的预测可靠性。
在本公开的一些可选实施例中,所述基于相应任务分配策略对应的预测数据确定相应任务分配策略的预测性能,包括:
所述第三功能组件根据预设权重,对相应任务分配策略对应的预测数据进行加权处理,以确定相应任务分配策略的预测性能。
在本公开的一些可选实施例中,所述特征提取网络是基于训练数据集训练得到的;所述训练过程能够产生优化的网络参数;所述优化的网络参数用于提取到有利于提升性能预测准确率的特征。
在本公开的一些可选实施例中,所述预测网络是基于训练数据集训练得到的;所述训练过程能够产生优化的网络参数;所述优化的网络参数用于提升性能预测的准确率。
在本公开的一些可选实施例中,所述训练数据集能够通过历史数据积累和/或随机游走产生新数据的方式不断更新,以使所述训练过程具有持续学习的能力。
在本公开的一些可选实施例中,所述方法还包括:
所述第三功能组件在进行任务分配后,获取所述待处理任务基于所述性能最佳的任务分配策略被执行时的实际性能;并将所述性能最佳的任务分配策略及获取的实际性能存储至所述训练数据集。
本申请实施例还提供了一种信息处理方法,包括:
获取待处理任务;并生成待处理任务对应的计算图;所述待处理任务包含计算任务;所述计算图的节点代表所述待处理任务的一个算子;所述计算图的边代表相邻两个节点之间的关系;
对生成的计算图进行优化,得到优化后的计算图;所述优化后的计算图用于结合资源图进行任务分配;所述资源图是通过将物联网设备的能力进行抽象生成的;所述资源图用于管理和/或编排异构物联网设备上的可用能力。
在本公开的一些可选实施例中,所述生成待处理任务对应的计算图,包括:
将所述待处理任务分解为至少一个算子;并确定算子之间的关系;
基于所述至少一个算子及算子之间的关系,生成待处理任务对应的计算图。
在本公开的一些可选实施例中,所述将所述待处理任务分解为至少一个算子,包括:
采用第一策略对所述待处理任务进行分解,得到至少一个算子。
在本公开的一些可选实施例中,所述基于所述至少一个算子及算子之间的关系,生成待处理任务对应的计算图,包括:
将所述至少一个算子中的每个算子抽象成相应的节点;并基于算子之 间的关系确定节点之间的关系;
基于确定的节点及节点之间的关系,生成待处理任务对应的计算图。
在本公开的一些可选实施例中,所述对生成的计算图进行优化,包括以下至少之一:
算子融合;
常量合并;
静态内存规划传递;
数据布局转换。
本申请实施例还提供了一种信息处理装置,包括:
第一功能组件,配置为通过将物联网设备的能力进行抽象,生成资源图;所述资源图用于管理和/或编排异构物联网设备上的可用能力;
第二功能组件,配置为获取待处理任务,并生成待处理任务对应的计算图;
第三功能组件,配置为基于所述资源图和所述计算图,进行任务分配。
本申请实施例还提供了一种信息处理装置,包括:
第一处理单元,配置为获取待处理任务;并生成待处理任务对应的计算图;所述待处理任务包含计算任务;所述计算图的节点代表所述待处理任务的一个算子;所述计算图的边代表相邻两个节点之间的关系;
第二处理单元,配置为对生成的计算图进行优化,得到优化后的计算图;所述优化后的计算图用于结合资源图进行任务分配;所述资源图是通过将物联网设备的能力进行抽象生成的;所述资源图用于管理和/或编排异构物联网设备上的可用能力。
本申请实施例还提供了一种信息处理系统,包括:
第一功能组件,配置为通过将物联网设备的能力进行抽象,生成资源图;所述资源图用于管理和/或编排异构物联网设备上的可用能力;
第二功能组件,配置为获取待处理任务,并生成待处理任务对应的计算图;
第三功能组件,配置为基于所述资源图和所述计算图,进行任务分配;其中,
所述第一功能组件、所述第二功能组件和所述第三功能组件设置在至少两个电子设备上。
本申请实施例还提供了一种电子设备,包括:处理器和配置为存储能够在处理器上运行的计算机程序的存储器,
其中,所述处理器配置为运行所述计算机程序时,执行上述任一方法的步骤。
本申请实施例还提供了一种存储介质,其上存储有计算机程序,所述计算机程序被处理器执行时实现上述任一方法的步骤。
本申请实施例提供的信息处理方法、装置、系统、电子设备及存储介质，第一功能组件通过将物联网设备的能力进行抽象，生成资源图；所述资源图用于管理和/或编排异构物联网设备上的可用能力；第二功能组件获取待处理任务，并生成待处理任务对应的计算图；第三功能组件基于所述资源图和所述计算图，进行任务分配。本申请实施例的方案，通过将物联网设备的能力进行抽象，生成用于管理和/或编排异构物联网设备上的可用能力的资源图，并基于待处理任务对应的计算图及所述资源图进行任务分配；如此，能够对资源受限且高度异构的物联网设备进行高效管理和灵活调度，即充分地利用资源受限且高度异构的物联网设备来执行待处理任务（例如计算密集型的深度学习任务）。
附图说明
图1为本申请实施例一种信息处理方法的流程示意图;
图2为本申请实施例另一种信息处理方法的流程示意图;
图3为本申请应用实施例的场景示意图;
图4为本申请应用实施例智能分布式边缘计算(IDEC,Intelligent Distributed Edge Computing)系统的结构示意图;
图5为本申请应用实施例服务能力抽象模块的应用场景示意图;
图6为本申请应用实施例资源知识图谱构建模块的示意图;
图7为本申请应用实施例计算图优化的示意图;
图8为本申请应用实施例智能计算任务分配(ICTA,Intelligent Computing Task Allocation)模块的示意图;
图9为本申请应用实施例基于IDEC的智能物联网边缘计算平台的结构示意图;
图10为本申请实施例一种信息处理装置的结构示意图;
图11为本申请实施例另一种信息处理装置的结构示意图;
图12为本申请实施例电子设备的结构示意图;
图13为本申请实施例信息处理系统的结构示意图。
具体实施方式
下面结合附图及实施例对本申请再作进一步详细的描述。
随着深度学习技术的突破性进展以及第五代移动通信技术（5G，5th Generation）的推广和普及，基于人工智能的物联网边缘智能应用和/或服务近年来也在不断增长，并在车联网、智慧养老、智慧社区、智慧城市、智慧家庭、工业互联网等领域取得了鼓舞人心的初步成果。在这种情况下，可以考虑利用边缘计算资源进行机器学习模型的分布式推理和/或训练。然而，与云计算强大的计算能力和丰富的存储空间相比，资源受限的边缘环境往往很难支撑计算密集型的深度学习任务，因此，可以考虑通过跨设备协作的分布式边缘计算方式解决边缘环境资源受限的问题。
相关技术中,边缘深度学习系统中的分布式训练和推理主要采用了粗粒度的层级模型分割和层调度的方法,将分割后的子模型分别部署在设备端、边缘侧以及云端。这种粗粒度的层级模型分割的底层实现完全依赖于第三方编程框架(也可以称为软件平台或算子库),例如:TensorFlow、Caffe、Torch等,无法充分地利用资源受限且高度异构的物联网设备来执行计算密集型的深度学习任务,从而限制了整体系统性能的提升。
基于此,在本申请的各种实施例中,通过将物联网设备的能力进行抽象,生成用于管理和/或编排异构物联网设备上的可用能力的资源图,并基于待处理任务对应的计算图及所述资源图进行任务分配;如此,能够对资源受限且高度异构的物联网设备进行高效管理和灵活调度,即充分地利用资源受限且高度异构的物联网设备来执行待处理任务(例如计算密集型的深度学习任务)。
本申请实施例提供一种信息处理方法,如图1所示,该方法包括:
步骤101:第一功能组件通过将物联网设备的能力进行抽象,生成资源图(也可以称为资源知识图或资源知识图谱);
这里,所述资源图用于管理(英文表达为Management)和/或编排(英文表达为Orchestration)异构物联网设备上的可用能力;
步骤102:第二功能组件获取待处理任务,并生成待处理任务对应的计算图(也可以称为计算流图或数据流图);
步骤103:第三功能组件基于所述资源图和所述计算图,进行任务分配。
这里，需要说明的是，在本申请的各种实施例中，所述资源图的节点代表一个物联网设备的至少部分能力；所述资源图的边代表相邻两个节点之间的关系（也可以称为相邻两个节点之间的关联关系（英文表达为Association Relationship）），所述关系可以包括通信关系和从属关系，所述通信关系可以体现为相邻两个节点之间的信息传输速率及传输时延等能够表征通信强度的信息。
实际应用时,所述待处理任务包含计算任务,所述计算任务可以包含一般的计算任务及训练和/或推理机器学习模型(也可以称为深度模型、深度学习模型或深度神经网络)等计算密集型的计算任务。计算密集型的计算任务对计算能力和/或存储能力的要求较高,更适用于基于所述资源图和所述计算图进行任务分配。换句话说,采用本申请实施例提供的信息处理方法,能够充分地利用资源受限且高度异构的物联网设备来执行计算密集型的计算任务。
其中，异构物联网设备是指：在一个包含多个物联网设备和服务器的网络中，一个物联网设备的硬件与另一个物联网设备的硬件不同，和/或，一个物联网设备的服务器与另一个物联网设备的服务器不同。其中，一个物联网设备的硬件与另一个物联网设备的硬件不同是指：一个物联网设备的中央处理器（CPU，Central Processing Unit）、图形处理器（GPU，Graphics Processing Unit）、总线接口芯片（BIC，Bus Interface Chip）、数字信号处理器（DSP，Digital Signal Processor）等处理类硬件或随机存取存储器（RAM，Random Access Memory）、只读存储器（ROM，Read Only Memory）等存储类硬件的型号与另一个物联网设备的硬件的型号不同；一个物联网设备的服务器与另一个物联网设备的服务器不同是指：一个物联网设备对应的后端程序或者操作系统与另一个物联网设备对应的后端程序或者操作系统不同，换句话说，两个物联网设备之间在软件层面存在不同。
实际应用时,所述物联网设备可以包括手机、个人电脑(PC,Personal Computer)、可穿戴智能设备、智能网关、计算盒子等;所述PC可以包括台式电脑、笔记本电脑、平板电脑等;所述可穿戴智能设备可以包括智能手表、智能眼镜等。
实际应用时,本申请实施例提供的信息处理方法应用于信息处理系统,该系统可以包括所述第一功能组件、所述第二功能组件和所述第三功能组件;其中,所述第一功能组件、所述第二功能组件和所述第三功能组件可以分别通过电子设备实现,比如服务器;当然,所述第一功能组件、所述第二功能组件和所述第三功能组件也可以设置在同一电子设备上,或者,所述第一功能组件、所述第二功能组件和所述第三功能组件中的任意两个功能组件可以设置在同一电子设备上。
其中,对于步骤101,在一实施例中,所述通过将物联网设备的能力进行抽象,生成资源图,可以包括:
所述第一功能组件发现网络中的物联网设备;检测物联网设备的能力;针对每个物联网设备,基于相应物联网设备的能力,将物联网设备抽象成相应的节点;
基于抽象出的节点,生成资源图。
具体地，实际应用时，所述发现物联网设备，也可以称为感知边缘网络中的物联网设备；所述边缘网络是指电信网络的边缘（边缘网络包括汇聚层网络和接入层网络的一部分或全部，是接入用户的最后一段网络）。其中，所述发现或感知也可以理解为检测，比如，所述第一功能组件可以基于动态主机配置协议（DHCP，Dynamic Host Configuration Protocol），利用零配置组网（ZEROCONF，ZERO CONFiguration networking）技术检测边缘网络中的物联网设备。当然，也可以根据需求设置所述第一功能组件采用其他方式发现或感知物联网设备，本申请实施例对此不作限定。
发现物联网设备后,所述第一功能组件可以通过与相应物联网设备进行信息交互来检测相应物联网设备的能力;示例性地,所述第一功能组件可以向相应物联网设备发送能力请求消息,并根据相应物联网设备针对所述能力请求消息所回复的消息确定相应物联网设备的能力。
这里,物联网设备的能力可以包含以下至少之一:
计算能力;
存储能力。
其中,所述物联网设备的能力是指物联网设备的服务能力,可以理解为相应物联网设备的资源;相应地,一个物联网设备的至少部分能力可以理解为一个物联网设备的至少部分资源,一个物联网设备上的可用能力可以理解为一个物联网设备上的可用资源,即闲置资源(也可以称为闲置能力、闲散能力、空闲能力、闲散资源或空闲资源);也就是说,计算能力是指相应物联网设备可用的计算资源;存储能力是指相应物联网设备可用的存储资源(即存储空间)。
另外,节点之间需要进行通信,以体现节点之间的通信关系。
基于此,在资源图中,异构物联网设备上的可用能力可以包括以下至少之一:
计算能力;
存储能力;
通信能力。
其中,所述通信能力又可以称为通信资源,具体可以理解为两个节点之间的通信强度,比如,由边缘网络提供的用于物联网设备之间通信的带宽资源、信息传输速度(即传输速率)、传输时延等;再比如,一个物联网设备的一个部分能力与另一个部分能力之间的传输速率、传输时延等。
实际应用时,所述第一功能组件可以利用软件定义技术将实体的物联网设备抽象成虚拟化的节点,所述节点可以包含相应物联网设备的能力信息。根据物联网设备的不同能力,可以抽象出不同的节点;对于一个物联网设备,抽象出的节点可以包括以下至少之一:
设备节点;能够代表相应物联网设备的计算能力和存储能力;
计算节点;能够代表相应物联网设备的计算能力;
存储节点;能够代表相应物联网设备的存储能力。
实际应用时,所述基于抽象出的节点,生成资源图,可以包括:
所述第一功能组件确定每个节点的特征,并确定多个节点之间的关系;所述特征至少用于描述相应节点对应的物联网设备信息及物联网设备的至少部分能力信息;
基于确定的特征及节点之间的关系,生成资源图。
这里，实际应用时，节点的特征会包含多个，因此，所述特征也可以称为特征向量、特征集、特征向量集或特征集合；由于所述特征包含多个描述信息（即相应节点对应的物联网设备信息及物联网设备的至少部分能力信息），因此，所述特征也可以称为信息或信息集。实际应用时，节点的特征可以用本体描述模型（英文表达为Ontology Description Model）来表征，所述本体描述模型也可以称为实体描述模型。
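上述"将物联网设备抽象成携带能力信息的节点、以边表示相邻节点间通信关系"的资源图构建过程，可以用如下极简的示意代码勾勒。其中的字段名（如cpu、memory_mb、bandwidth_mbps）、设备名与数值均为假设的示例，并非本申请实施例限定的本体描述模型：

```python
# 资源图的极简示意：节点携带设备能力信息，边携带通信关系信息
# （字段名、设备名与数值均为假设示例）

def make_node(device_id, cpu, memory_mb, node_type="device"):
    """将一个物联网设备（或其部分能力）抽象为虚拟化节点。"""
    return {"id": device_id, "type": node_type,
            "cpu": cpu, "memory_mb": memory_mb}

def make_edge(src, dst, bandwidth_mbps, latency_ms):
    """边表示相邻两个节点之间的通信关系（带宽、时延等）。"""
    return {"src": src, "dst": dst,
            "bandwidth_mbps": bandwidth_mbps, "latency_ms": latency_ms}

def build_resource_graph(devices, links):
    """由抽象出的节点及节点间关系生成资源图。"""
    nodes = {d["id"]: d for d in devices}
    edges = [make_edge(*link) for link in links]
    return {"nodes": nodes, "edges": edges}

resource_graph = build_resource_graph(
    devices=[make_node("camera-1", cpu=0.5, memory_mb=256),
             make_node("gateway-1", cpu=2.0, memory_mb=2048)],
    links=[("camera-1", "gateway-1", 100, 5)],
)
```

实际系统中，节点特征还会包含设备ID、位置、类型、状态、接口等本体描述信息，并随设备变化动态更新。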
实际应用时，由于物联网设备的能力以及物联网设备之间的关系是动态变化的，为了使虚拟化的资源图对应的信息与实体的物联网设备对应的信息保持一致，提高资源图的准确性，需要监测物联网设备的变化，以使资源图随着物联网设备的变化动态更新。
基于此,在一实施例中,所述方法还可以包括:
所述第一功能组件监测物联网设备;
监测到物联网设备发生变化时,基于监测到的物联网设备的变化情况,更新所述资源图。
对于步骤102,在一实施例中,所述生成待处理任务对应的计算图,可以包括:
所述第二功能组件将所述待处理任务分解为至少一个算子;并确定算子之间的关系;
基于所述至少一个算子及算子之间的关系,生成待处理任务对应的计算图。
其中,在一实施例中,所述将所述待处理任务分解为至少一个算子,可以包括:
所述第二功能组件采用第一策略对所述待处理任务进行分解,得到至少一个算子。
具体地,实际应用时,所述第二功能组件采用第一策略对所述待处理任务进行分解,可以包括:
首先,根据应用和服务中所需要实现的功能进行模型设计(也可以称为任务设计或程序设计)。所述应用和服务可以是一般的应用和服务(例如地图定位、网上银行、网购等),也可以是智能应用和服务(例如智能控制、自动驾驶等)。所述功能可以是一般功能(例如播放视频、访问浏览器、打开网页、编辑文件等),也可以是与AI相关的功能(例如人脸识别、行为识别、语音识别、自然语言处理等)。所述模型设计包括设计算法模型(即设计所述待处理任务,所述待处理任务也可以称为任务模型或程序模型,所述待处理任务包含计算任务)以实现相应的功能,例如设计神经网络结构以实现行为识别等功能。
其次，将设计的算法模型转换成图拓扑结构，即数据流图。如果所述算法模型对应的是某种计算任务，则又可将其抽象成的图拓扑结构称为计算图，或计算流图。所述计算图由节点和边组成，所述计算图的节点表示所述算法模型在程序实现时需要进行的某种类型的运算（即运算单元，也可称为算子，英文可以表示为Operation Node或Operator Node），即所述待处理任务的一个算子；所述算子可以是一般的数学运算算子或数组运算算子（例如：加法算子、乘法算子等），也可以是神经网络算子（即神经网络基本运算单元，例如：卷积算子、池化算子等）；所述节点包含如下特征或信息：相应节点所表示的算子在被运算（或执行）时对算力、存储等资源的消耗或需求，即算子的硬件执行代价，也可以理解为算子的硬件占用数据（例如：CPU占用率、GPU占用率、DSP占用率、FPGA占用率、内存占用率等）；所述占用率也可称为占用、占用比例、占用比率、使用、使用率、使用比例、使用比率、利用、利用率、利用比例或利用比率；所述计算图中的边表示相邻两个节点之间的关系，即相邻两个算子之间的关系，包括两个相邻算子之间的计算依赖关系或数据依赖关系，其方向可以表示运算的先后顺序或数据流向；所述边包含如下特征或信息：两个相邻算子之间传输数据的大小。所述节点和边的特征或信息可以通过将相应算子进行实际执行或在仿真环境下执行等方式获得。
在一实施例中,所述基于所述至少一个算子及算子之间的关系,生成待处理任务对应的计算图,可以包括:
所述第二功能组件将所述至少一个算子中的每个算子抽象成相应的节点;并基于算子之间的关系确定节点之间的关系;
基于确定的节点及节点之间的关系,生成待处理任务对应的计算图。
实际应用时,所述第二功能组件可以根据所述至少一个算子之间的计算依赖性、至少一个算子的运算先后顺序或至少一个算子之间的数据流向,确定节点之间的关系,并基于确定的节点及节点之间的关系,生成待处理任务对应的计算图。
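上述由算子及算子间依赖关系生成计算图、并按数据依赖关系确定运算先后顺序的过程，可以用如下示意代码勾勒（其中的算子名与依赖关系均为假设的示例，拓扑排序仅示意"无依赖算子可并行、有依赖算子按序执行"这一性质）：

```python
from collections import deque

# 计算图的极简示意：节点为算子，边为数据依赖关系（算子名均为假设示例）
ops = ["conv", "pool", "fc", "softmax"]
deps = [("conv", "pool"), ("pool", "fc"), ("fc", "softmax")]  # (上游, 下游)

def topological_order(ops, deps):
    """按数据依赖关系确定算子的运算先后顺序；入度为0的算子可并行执行。"""
    indegree = {op: 0 for op in ops}
    successors = {op: [] for op in ops}
    for u, v in deps:
        indegree[v] += 1
        successors[u].append(v)
    queue = deque(op for op in ops if indegree[op] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in successors[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    return order

print(topological_order(ops, deps))  # 期望输出：['conv', 'pool', 'fc', 'softmax']
```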
实际应用时,为了提升系统(即包括所述第一功能组件、所述第二功能组件和所述第三功能组件的信息处理系统)的性能,比如降低执行所述待处理任务的执行时长,所述第二功能组件可以对计算图进行优化。
基于此,在一实施例中,所述方法还可以包括:
所述第二功能组件对生成的计算图进行优化;
所述第三功能组件基于所述资源图和优化后的计算图,进行任务分配。
这里,所述对生成的计算图进行优化,可以包括以下至少之一:
算子融合(英文可以表示为Operator Fusion);用于将多个相邻的小算子结合成为一个算子;使得在执行所述待处理任务的过程中无需将所述多个相邻的小算子的中间结果保存至全局内存,以通过减少内存访问从而减少执行所述待处理任务的执行时长;
常量合并(或称为常量折叠,英文可以表示为Constant Folding);用于遍历计算图中的节点,找出完全能够静态计算的节点,即完全依赖于常量输入来进行计算的节点,在CPU上将这些节点计算出来,并替换这些节点,也就是将计算图中常量的计算合并起来,这种常量合并算法减少了不必要的重复计算量,提升了计算性能;
静态内存规划传递(或称为静态内存计划传递,英文可以表示为Static Memory Planning Pass);用于将内存预分配给所有中间结果张量(英文可以表示为Tensor,中间结果是以Tensor的形式存在的);通过预分配所有中间结果Tensor来进行计算图优化,能够节省运行时代价(例如使一个常数折叠Pass能够在计算图预计算阶段被静态地执行);
数据布局转换(英文可以表示为Data Layout Transformation);用于在数据生产者(英文可以表示为Producer)和数据消费者(英文可以表示为Consumer)之间出现了数据布局不匹配的情况时,进行数据布局转换。这里,Tensor操作是计算图的基本操作符,Tensor中涉及到的运算会根据不同的操作符拥有不同的数据布局需求。例如,一个深度学习加速器可能会使用4x4张量操作,所以需要将数据切割成4x4的块来存储以优化局部访存效率。实际应用时,为了优化数据布局,需要为每个操作符提供定制的数据布局。
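上述优化手段中，常量合并的思想可以用如下极简的示意代码体现：遍历计算图，找出输入完全为常量的节点并预先计算、替换。其中的图结构与算子种类（仅add、mul）均为假设的简化示例，并非实际实现：

```python
# 常量合并的极简示意：将完全依赖常量输入的节点替换为常量节点
# （图结构与算子种类均为假设的简化示例）
graph = {
    "c1":  {"op": "const", "value": 2},
    "c2":  {"op": "const", "value": 3},
    "add": {"op": "add", "inputs": ["c1", "c2"]},   # 完全依赖常量，可静态计算
    "mul": {"op": "mul", "inputs": ["add", "x"]},   # 依赖运行时输入x，不可折叠
    "x":   {"op": "input"},
}

def fold_constants(graph):
    """反复遍历节点，把输入全为常量的节点折叠成常量，直到不再变化。"""
    changed = True
    while changed:
        changed = False
        for name, node in graph.items():
            if node["op"] in ("const", "input"):
                continue
            ins = [graph[i] for i in node.get("inputs", [])]
            if ins and all(n["op"] == "const" for n in ins):
                values = [n["value"] for n in ins]
                result = sum(values) if node["op"] == "add" else values[0] * values[1]
                graph[name] = {"op": "const", "value": result}
                changed = True
    return graph

folded = fold_constants(graph)
print(folded["add"])  # 期望输出：{'op': 'const', 'value': 5}
```

折叠后，"add"节点在执行前即被替换为常量5，而依赖运行时输入的"mul"节点保持不变，从而减少了重复计算量。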
实际应用时,所述系统性能,可以包含以下至少之一:
执行所述待处理任务的时长;
执行所述待处理任务的能耗;
执行所述待处理任务的可靠性。
实际应用时,执行所述待处理任务的可靠性,可以体现为执行所述待处理任务的成功率。
对于步骤103,在一实施例中,所述基于所述资源图和所述计算图,进行任务分配,可以包括:
所述第三功能组件基于所述资源图和所述计算图,采用第二策略生成至少一种任务分配策略;从所述至少一种任务分配策略中确定性能最佳的任务分配策略;并基于所述性能最佳的任务分配策略,进行任务分配;所述任务分配策略用于将所述待处理任务映射(或分配)到至少一个物联网设备上。
这里,所述任务分配策略表示将待处理任务分配到至少一个物联网设备执行的策略,或者用于为所述计算图的每个节点分配相应资源图的至少一个节点,或者用于将待处理任务和物联网设备之间进行匹配,或者用于将待处理任务与资源之间进行匹配;换句话说,通过所述任务分配策略,可以确定至少一个物联网设备,并利用确定的至少一个物联网设备按照任务分配策略的指示执行待处理任务。实际应用时,所述任务分配策略也可以称为任务分配方法、任务分配方式、任务调度策略、任务调度方法、任务调度方式、任务编排策略、任务编排方法、任务编排方式等等。
具体地,实际应用时,所述基于所述性能最佳的任务分配策略进行任务分配,是指:基于所述性能最佳的任务分配策略,将所述待处理任务映射(即分配)到至少一个物联网设备上,以使所述至少一个物联网设备利用自身的至少部分能力,以并行和协作的方式执行所述待处理任务,例如实现所述机器学习模型的训练和/或推理。
实际应用时，所述将所述待处理任务映射到至少一个物联网设备上，还可以理解为：为所述待处理任务的每个算子分配至少一个物联网设备的至少部分能力；换句话说，为所述计算图的每个节点分配所述资源图的至少一个节点。由此可见，通过任务分配，实际上实现了待处理任务与物联网设备之间的匹配，或者说，实现了待处理任务和资源（即物联网设备上的可用资源）之间的匹配。
实际应用时,为所述计算图的每个节点分配的所述资源图的至少一个节点可以相同或不同;也就是说,一个物联网设备可以利用自身的至少部分能力实现多个算子对应的计算单元,同时,多个物联网设备可以以协作的方式实现一个算子对应的计算单元。此外,计算图中没有计算依赖关系的节点(即没有计算依赖关系的算子)可以在相同或不同的物联网设备上并行地执行(即运算或计算)。
实际应用时,由于所述任务分配策略能够指示为所述计算图的每个节点分配的所述资源图的至少一个节点,且所述至少一种任务分配策略可以基于所述资源图确定。因此,所述任务分配策略可以体现为从所述资源图中切分得到的资源子图,并且资源子图中包含了计算图中的每个节点与资源图中的至少一个节点之间的对应关系。所述第二策略可以采用图搜索、图优化、子图匹配、启发式方法等方式实现,或者采用随机游走的方法实现。
基于此,在一实施例中,所述采用第二策略生成至少一种任务分配策略,可以包括:
所述第三功能组件基于所述计算图和资源图,采用第二策略,生成至少一个资源子图;每个资源子图包含一种任务分配策略;所述资源子图的节点代表一个物联网设备的至少部分能力;所述资源子图的边代表相邻两个节点之间的关系。
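"每个资源子图包含一种任务分配策略"这一点，可以用如下示意代码体现：为计算图的每个节点枚举资源图节点，每个映射即一种候选分配策略。此处为便于说明用穷举代替图搜索、子图匹配或随机游走等第二策略的具体实现，算子名与设备名均为假设的示例：

```python
import itertools

# 任务分配策略候选生成的极简示意：
# 每种策略是 算子 -> 设备 的一个映射（对应一个资源子图）
compute_nodes = ["conv", "fc"]              # 计算图节点（假设示例）
resource_nodes = ["camera-1", "gateway-1"]  # 资源图节点（假设示例）

def enumerate_strategies(compute_nodes, resource_nodes):
    """枚举全部候选任务分配策略；实际实现可换用图搜索/随机游走等方式。"""
    for assignment in itertools.product(resource_nodes, repeat=len(compute_nodes)):
        yield dict(zip(compute_nodes, assignment))

strategies = list(enumerate_strategies(compute_nodes, resource_nodes))
print(len(strategies))  # 期望输出：4
```

其中既包含"多个算子放在同一设备"的策略，也包含"不同算子分布到不同设备协作执行"的策略。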
在一实施例中,所述从所述至少一种任务分配策略中确定性能最佳的任务分配策略,可以包括:
所述第三功能组件预测每个任务分配策略的性能;基于预测的每个任务分配策略的性能,确定性能最佳的任务分配策略。
具体地,在一实施例中,所述预测每个任务分配策略的性能,可以包括:
所述第三功能组件提取所述计算图的特征,得到第一特征集;并提取每个资源子图的特征,得到多个第二特征集;每个资源子图包含一种任务分配策略;
针对每个任务分配策略,基于所述第一特征集和相应的第二特征集,预测相应任务分配策略的性能。
实际应用时,所述特征集(即第一特征集和第二特征集)也可以简称为特征,或者,可以称为特征集合、特征向量或特征向量集。
实际应用时,所述第三功能组件可以通过特征提取网络提取所述第一特征集和所述第二特征集。
基于此,在一实施例中,所述提取所述计算图的特征,得到第一特征集;并提取每个资源子图的特征,得到多个第二特征集,可以包括:
所述第三功能组件通过特征提取网络提取所述计算图的特征,得到第一特征集;并通过所述特征提取网络提取每个资源子图的特征,得到多个第二特征集。
其中,实际应用时,所述计算图的特征,可以包含以下至少之一:
执行所述计算图的每个节点对应的算子需要占用的计算资源;
执行所述计算图的每个节点对应的算子需要占用的存储资源;
执行所述计算图的每个节点对应的算子需要占用的通信资源。
所述资源子图的特征,可以包含以下至少之一:
至少一个物联网设备上可用的计算资源;
至少一个物联网设备上可用的存储资源;
至少一个物联网设备上可用的通信资源。
实际应用时，可以基于图卷积网络（GCN，Graph Convolutional Network）构建所述特征提取网络，并可基于训练数据集对特征提取网络进行训练；所述训练过程能够产生优化的网络参数；所述优化的网络参数可以用于提取到有利于提升性能预测准确率的特征。
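图卷积式特征提取的核心思想是"节点特征沿图的边向邻居聚合"，可以用如下极简的示意代码体现。此处仅示意单层的均值邻域聚合，省略了实际GCN中的可学习权重与非线性激活，图结构与特征取值均为假设的示例：

```python
# 图卷积特征聚合的极简示意：节点新特征 = 自身与邻居特征的均值
# （省略可学习权重与激活函数；图结构与特征取值均为假设示例）
features = {"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [1.0, 1.0]}
adjacency = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}

def aggregate(features, adjacency):
    """对每个节点，将其自身特征与邻居特征逐维求均值。"""
    new_features = {}
    for node, feat in features.items():
        neighborhood = [feat] + [features[n] for n in adjacency[node]]
        new_features[node] = [sum(col) / len(neighborhood)
                              for col in zip(*neighborhood)]
    return new_features

out = aggregate(features, adjacency)
print(out["a"])  # 期望输出：[0.5, 0.5]
```

这样得到的节点特征融合了图拓扑结构信息，可分别作用于计算图与资源子图，得到上述第一特征集与第二特征集。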
实际应用时,所述第三功能组件可以通过预测网络预测相应任务分配策略的性能。
基于此,在一实施例中,所述基于所述第一特征集和相应的第二特征集,预测相应任务分配策略的性能,可以包括:
所述第三功能组件基于所述第一特征集和相应的第二特征集,通过预测网络获得相应任务分配策略对应的预测数据;基于相应任务分配策略对应的预测数据确定相应任务分配策略的预测性能。
其中,所述预测数据,可以包含以下至少之一:
执行所述待处理任务的预测时长;
执行所述待处理任务的预测能耗;
执行所述待处理任务的预测可靠性。
实际应用时,执行所述待处理任务的预测可靠性,可以体现为执行所述待处理任务的预测成功率。
实际应用时,针对所述待处理任务所对应的不同的应用场景,所述待处理任务对所述任务分配策略的性能的需求不同,比如,所述待处理任务需要在尽可能短的时间内执行完毕;再比如,所述待处理任务需要消耗尽可能少的能耗。
基于此,在一实施例中,所述基于相应任务分配策略对应的预测数据确定相应任务分配策略的预测性能,可以包括:
所述第三功能组件根据预设权重,对相应任务分配策略对应的预测数据进行加权处理,以确定相应任务分配策略的预测性能。
实际应用时,所述预设权重可以根据需求设置。
示例性地，假设相应任务分配策略对应的预测数据包括三个分量（即包含执行所述待处理任务的预测时长、预测能耗和预测可靠性），则可以通过以下公式按照每个分量对应的预设权重进行加权处理：
η=Q(λ_t,λ_e,λ_r,…)   (1)
其中，η表示相应任务分配策略的预测性能，Q(·)表示一种函数，其中包括了对每个分量（即每种预测数据，可以理解为性能指标）的加权信息，λ_t表示预测时长，λ_e表示预测能耗，λ_r表示预测可靠性。
表达式(1)中Q(·)的具体形式，即预设权重的具体取值，取决于不同场景对时延、能耗、可靠性等的不同要求，或者说重视程度或关注度；通过使用特定函数给不同性能指标进行加权来实现多种性能指标之间的权衡，并根据所设定的公式计算各项关键性能指标的加权值以得到整体系统性能。也就是说，通过表达式(1)得到的预测性能反映了与服务质量（QoS，Quality of Service）相关的整体系统性能。
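按式(1)的思路对各预测指标加权得到综合性能、并据此选出性能最佳策略的过程，可以用如下示意代码勾勒。其中Q(·)的具体形式（此处取线性加权和，时延与能耗取负权重以体现"越小越好"）、权重取值以及各策略的预测数据均为假设的示例：

```python
# 按式(1)思路加权得到综合性能并选出最佳策略的示意
# （Q(·)形式、权重与预测数据均为假设示例，实际取决于场景需求）
WEIGHTS = {"latency": -0.5, "energy": -0.2, "reliability": 1.0}

def performance(pred):
    """η = Q(λ_t, λ_e, λ_r)：各预测指标按预设权重的加权和。"""
    return sum(WEIGHTS[key] * pred[key] for key in WEIGHTS)

predictions = {
    "strategy-A": {"latency": 1.0, "energy": 2.0, "reliability": 0.9},
    "strategy-B": {"latency": 2.0, "energy": 1.0, "reliability": 0.9},
}
best = max(predictions, key=lambda s: performance(predictions[s]))
print(best)  # 期望输出：strategy-A
```

对时延更敏感的场景可增大latency的权重绝对值，从而在多种性能指标之间实现面向场景的权衡。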
实际应用时,可以基于深度神经网络(DNN,Deep Neural Networks)构建所述预测网络,并可基于训练数据集对预测网络进行训练;所述训练过程能够产生优化的网络参数;所述优化的网络参数可以用于提升性能预测的准确率。
实际应用时,为了提高性能预测的准确率,所述训练数据集可以通过历史数据积累和/或随机游走产生新数据的方式不断更新,以使所述训练过程具有持续学习的能力。这里,所述训练数据可以称为样本或训练样本,并可以包含任务分配策略及其对应的实际性能。
基于此,在一实施例中,所述方法还可以包括:
所述第三功能组件在进行任务分配后,获取所述待处理任务基于所述性能最佳的任务分配策略被执行时的实际性能;并将所述性能最佳的任务分配策略及获取的实际性能存储至所述训练数据集。
实际应用时,所述特征提取网络和所述预测网络可以在所述第三功能组件内部实现,即由所述第三功能组件进行训练和/或推理;也可以在所述第三功能组件外部实现,即由其他功能组件进行训练和/或推理。
相应地,本申请实施例还提供了一种信息处理方法,应用于第二功能组件,如图2所示,该方法包括:
步骤201:获取待处理任务;并生成待处理任务对应的计算图;
这里,所述待处理任务包含计算任务;所述计算图的节点代表所述待处理任务的一个算子;所述计算图的边代表相邻两个节点之间的关系;
步骤202:对生成的计算图进行优化,得到优化后的计算图;
这里,所述优化后的计算图用于结合资源图进行任务分配;所述资源图是通过将物联网设备的能力进行抽象生成的;所述资源图用于管理和/或编排异构物联网设备上的可用能力。
其中,在一实施例中,所述生成待处理任务对应的计算图,可以包括:
将所述待处理任务分解为至少一个算子;并确定算子之间的关系;
基于所述至少一个算子及算子之间的关系,生成待处理任务对应的计算图。
在一实施例中,所述将所述待处理任务分解为至少一个算子,可以包括:
采用第一策略对所述待处理任务进行分解,得到至少一个算子。
在一实施例中,所述基于所述至少一个算子及算子之间的关系,生成待处理任务对应的计算图,可以包括:
将所述至少一个算子中的每个算子抽象成相应的节点;并基于算子之间的关系确定节点之间的关系;
基于确定的节点及节点之间的关系,生成待处理任务对应的计算图。
在一实施例中,所述对生成的计算图进行优化,可以包括以下至少之一:
算子融合;
常量合并;
静态内存规划传递;
数据布局转换。
这里,需要说明的是,所述第二功能组件的具体处理过程已在上文详述,这里不再赘述。
本申请实施例提供的信息处理方法,第一功能组件通过将物联网设备的能力进行抽象,生成资源图;所述资源图用于管理和/或编排异构物联网设备上的可用能力;第二功能组件获取待处理任务,并生成待处理任务对应的计算图;第三功能组件基于所述资源图和所述计算图,进行任务分配。本申请实施例的方案,通过将物联网设备的能力进行抽象,生成用于管理和/或编排异构物联网设备上的可用能力的资源图,并基于待处理任务对应的计算图及所述资源图进行任务分配;如此,能够对资源受限且高度异构的物联网设备进行高效管理和灵活调度,即充分地利用资源受限且高度异构的物联网设备来执行待处理任务(例如计算密集型的深度学习任务)。
下面结合应用实施例对本申请再作进一步详细的描述。
如图3所示，本应用实施例的目的在于提供一种支持跨异构物联网设备高效深度学习的智能分布式边缘计算（IDEC）系统，所述IDEC系统也可称为协作式去中心化机器学习（CDML，Collaborative Decentralized Machine Learning）系统、协作分布式机器学习系统、基于设备协作的去中心化机器学习系统、或基于设备协作的分布式机器学习系统。通过边缘侧物联网设备的广泛连接和智能感知、统一的资源管理和算力共享、高效的设备协作和智能调度、算子级的计算任务分解、以及基于图卷积的任务分配和优化，实现支持深度模型分布式训练和/或推理的跨异构物联网设备协作的全栈优化的系统设计，进一步实现AI模型从云中心向更接近于物联网设备的网络边缘侧下沉，支持边缘智能服务和应用的高效部署和执行，解决物联网应用场景数据处理过程中的时延、可靠性、能耗、通信带宽消耗、用户隐私和安全等问题。
具体地,本应用实施例提供了一种IDEC系统,IDEC系统主要包括三大模块:边缘资源管理模块(或称为物联网设备资源管理模块,即上述第一功能组件)、计算任务分解模块(或称为机器学习计算任务分解模块,即上述第二功能组件)和智能计算任务分配(ICTA)模块(即上述第三功能组件)。如图4所示,IDEC系统南向对接广泛分布的物联网边缘基础设施(即边缘设备,也可称为物联网设备),通过边缘资源管理模块生成支持动态构建和更新的资源图,实现了多种异构物联网设备资源的动态感知、统一管理、高效调度和共享协作。IDEC系统北向将来自实际场景中的智能应用和服务的深度学习任务通过计算任务分解模块生成计算图,实现了细粒度的算子级计算任务分解,为并行计算和分布式处理提供条件,同时利于进行深度学习任务执行性能的图级优化。IDEC系统的中间层(即核心模块)为ICTA模块,ICTA在生成的资源图和计算图的基础上,实现了底层深度学习算子的跨设备分配,ICTA模块利用图卷积网络(GCN)和深度神经网络(DNN)等深度学习算法,通过对异构物联网设备上不同操作系统间复杂多变的任务调度问题的内在统计学规律的学习,实现了对应最佳系统性能的任务分配策略的智能决策,最大化地利用了物联网边缘侧的分散异构资源,从而提升了整体系统性能;同时,ICTA模块通过引入持续学习机制,使IDEC系统具备智能自适应性,实现了“越用越聪明”。
需要说明的是,IDEC系统南向联动的物联网基础设施,即边缘设备,主要包括两类:终端设备(即具有计算能力的智能物联网设备,例如:智能摄像头、智能网关、计算盒子、智能手机等,这类设备往往具有较高的异构性和资源受限性)和边缘服务器(即具有稍强的计算能力、存储能力及管理能力的智能物联网设备,负责托管和运行IDEC系统,以及一些大型深度学习模型);IDEC系统北向对接物联网领域多种智能边缘应用和服务,包括:智慧养老(也可以称为智能老年护理)、智能家居(也可称为智慧家庭)、车联网、智慧社区、智慧城市、工业互联网等。
下面对IDEC系统的功能进行详细描述。
首先,结合图5和图6描述边缘资源管理模块的功能。
相关技术中,对边缘侧广泛分布的物联网设备进行统一管理和调度是实现跨设备协作的分布式边缘计算的重要前提,但由于物联网设备的多样性、资源受限性以及硬件后端和网络的异构性,增加了资源共享与交互的复杂性和不确定性。
为解决上述问题，边缘资源管理模块采用虚拟化、软件定义以及知识图谱等技术，通过边缘设备服务能力抽象模块（或称为物联网设备服务能力抽象模块）的功能（如图5所示）及资源知识图谱构建模块的功能（如图6所示），实现对物联网分布式边缘基础设施上的异构资源的统一管理和编排，以及边缘设备的智能感知和协作。从而充分利用有限的边缘资源进行跨异构设备的协作式的资源调度和任务分配，实现资源利用率的最大化，为IDEC系统在资源受限的边缘计算环境下充分利用异构的边缘资源高效执行深度模型的分布式训练和/或推理提供了条件。
具体地,边缘设备服务能力抽象模块主要用于解决异构性的问题,其根本目标是打破异构硬件之间的界限,使多种多样的物联网设备能够以协作的方式执行深度学习任务。具体可以包括三层,如图5所示,边缘基础设施层实现对多种异构设备的识别和连接;资源池化层实现对边缘设备上的计算资源(例如:CPU、GPU、FPGA、ARM、AI芯片等)和存储资源(例如:缓存、RAM等)的细粒度感知和调度;能力抽象层利用虚拟化和软件定义技术将计算和存储资源转化成虚拟的计算节点和存储节点,便于统一管理和编排。边缘设备服务能力抽象模块促进了跨异构边缘设备的资源调度,对发现和匹配合适资源以满足特定的计算需求做出了贡献,由此广泛分布的多种边缘资源和处理能力可以被感知、重用和共享,提高了资源利用率,从而提升了边缘侧整体的服务能力。
为了进一步实现对可用边缘资源的动态感知和充分理解,如图6所示,资源知识图谱构建模块可以采用语义和知识引擎技术对互联互通的物联网设备进行描述和建模。在资源知识图谱中,节点代表不同的边缘设备或从边缘设备上抽象得到的细粒度的计算和/或存储能力,基于代表的不同的能力,虚拟化的节点可以包含设备节点、计算节点和存储节点;其中,设备节点的本体描述模型可以包括以下信息:物联网设备信息(包含设备ID、位置、类型、状态值、功能、所有者、接口、IP信息等)、以及能力信息(可用的CPU、GPU、FPGA、DSP、内存资源等)等。资源知识图谱中的边代表相邻节点间的关联关系。关联关系表示了异构边缘设备资源之间的互联互通,进一步体现了边缘设备的内部协作和共享机制。为了适应动态多变的物联网边缘场景,应对可用资源波动变化的挑战,资源知识图谱构建模块中引入了自动更新机制以与物理上的边缘设备的资源情况及连接状态保持一致。此外,调度编排策略和共享协作机制的使用也进一步提高了资源利用率以及整体的计算能力。基于资源知识图谱构建模块,IDEC系统能够实现异构分布式边缘设备上有限可用资源的高效管理和灵活调度,以满足计算任务的资源需求。
其次,结合图7描述计算任务分解模块的功能。
计算任务分解模块具备计算图构建和计算图优化的功能。
其中，计算图构建是指生成深度学习计算任务对应的计算图。具体地，深度学习计算任务通常是一些多层的深度神经网络模型，其组成的基本单元是深度学习算子，例如：卷积算子、池化算子等。用抽象化的节点表示算子，用边表示数据流向、数据依赖关系或计算依赖关系，以此可以构成能够表示深度学习模型的算子级程序实现过程的图结构，称为计算图、计算流图或数据流图，如图7所示，计算图是以图的形式对深度学习计算任务的一种表达。
计算图优化是对计算图中的算子在实际分配和执行前进行一些操作,以便于获得更好的系统性能,例如降低任务执行时间等。计算图优化的方法主要包括:算子融合、常量合并、静态内存规划传递及数据布局转换等。其中,算子融合是指将多个相邻的小算子结合成为一个算子而不将中间结果保存至全局内存,以通过减少内存访问从而减少执行时间。
通过对深度学习模型的计算图构建以及对计算图的优化,能够实现细粒度的算子级的计算任务分解,为算子的并行处理和分布式执行提供可能;同时,有利于进行算子融合、常量合并等图级优化,并为下一步的计算任务分配和最优化提供前提。
第三,结合图8描述ICTA模块的功能。
一方面,计算任务分解模块构建的计算图提供了算子的全局视图,但却并未指定实现每个算子的具体物联网设备以获得最佳的系统性能,即计算任务分配策略尚未确定。另一方面,资源图中提供了能够承载深度学习工作负载的物联网设备上的可用资源。因此,基于计算图和资源图,为了充分利用物联网设备上的分散资源以协作的方式高效地执行计算任务,ICTA模块通过以最优的分配方式将计算图中的深度学习算子合理地分配给资源图中有闲散资源的物联网设备,以达到计算任务和设备资源之间的最佳匹配,实现对应最佳系统性能的任务分配策略的智能决策。
如图8所示,ICTA模块具体可以包括:资源子图构建模块、特征提取模块和性能预测模块。
其中,资源子图构建模块配置为采用图搜索、图优化、子图匹配、启发式方法或随机游走方法等方式进行资源子图的构建,每个资源子图携带特定的任务分配策略。
特征提取模块配置为利用GCN算法分别提取资源图和计算图的图拓扑结构特征,提取的特征涵盖了算力、存储、通信等对深度学习计算任务的高效执行起决定性作用的维度的特征。
性能预测模块配置为采用DNN算法对给定的任务分配策略(即每个资源子图携带或对应的任务分配策略)在任务实际执行前进行系统性能的预测,重点关注的系统性能指标可以包括:执行时间(即时长)、能耗和可靠性(比如成功率)。实际应用时,性能预测模块可以根据不同应用场景的实际需求在这三个指标之间进行权衡(例如对于关注度大的指标乘以一个较大的权重),最终得到一个代表整体系统性能的综合指标。最后,性能预测模块根据得到的每个任务分配策略的综合指标,选择能够获得最佳系统性能的任务分配策略进行实际的任务分配。
实际应用时，可以对GCN模型（即上述特征提取网络）和DNN模型（即上述预测网络）进行端到端的训练，学习不同任务分配策略和系统性能之间的潜在对应关系，以及多种异构的物联网设备之间不同操作系统上任务调度的复杂的内在统计学规律，如此，能够提高系统性能预测的准确性。
通过资源子图构建模块、特征提取模块和性能预测模块,ICTA模块能够解决计算任务与设备资源的最优匹配问题,从而提高资源利用率和整体系统性能。ICTA模块根据系统性能最佳的任务分配策略将深度学习模型的计算单元(即算子)合理地分配给多种异构的物联网设备,如此,能够充分利用IDEC系统中的跨设备异构资源以多设备协作的方式分布式(或称为去中心化)地执行计算密集型深度学习任务,帮助分布式边缘计算系统提升边缘侧智能应用的部署和执行效率。此外,借助于基于历史样本积累和随机游走等策略的持续学习机制,ICTA模块能够实现“越用越聪明”,从而使得整个IDEC系统向集成了自适应和自学习能力的智能化更迈进了一步。
基于IDEC系统,本应用实施例还提供了一种智能物联网边缘计算平台,该平台北向通过“需求下行,服务上行”的模式与多个垂直行业的智能应用对接,南向通过“数据上行,任务下行”的模式与多种异构且分布广泛的物联网设备联动,整个平台在集成了运维、安全及隐私的多重保障体系下,可为消费者、供应链、协作企业及开发者等多类用户群体提供物联网智能应用和服务,实现多种边缘智能应用和服务在广泛分布的异构物联网设备上的部署和执行,进而实现端到端全栈优化的物联网边缘智能生态系统,从而统一市场并加速智能物联网解决方案的部署。如图9所示,该平台具体包括:应用层、核心层和资源层。
其中,应用层集成了多种共性能力和智能算法,用于将来自于行业应用中具体场景的智能服务需求转化为行为识别、人脸识别等功能模块,并进一步将其分解为CNN、RNN等多个深度学习任务和/或模型。
核心层搭载了IDEC系统,对上实现对来自应用层的深度学习任务的细粒度(即算子级)分解,对下实现对边缘资源的统一管理和高效调度,并基于两者(即资源图和计算图)通过按照任务和资源的最佳匹配模式进行任务在多个设备上的智能分配和优化,最终实现机器学习模型的分布式训练和/或推理。核心层的主要功能包括:边缘资源管理、深度学习计算任务分解、智能的计算任务分配等。核心层的特点及优势包括:智能感知、异构兼容、调度编排、共享协作、分布式部署和智能自适应等。
资源层通过虚拟化及软件定义等技术,实现对物联网设备上的能力抽象和资源提取,用于计算能力虚拟化、存储能力虚拟化和网络资源虚拟化。
本应用实施例提供的方案,具有以下优点:
1）实现了从顶层边缘智能应用到底层广泛分布的异构物联网边缘设备打通的全栈优化的系统设计，并通过全栈优化的系统设计，使得IDEC系统具有异构兼容、高性能、以及智能自适应的特点，实现了对大量分散的资源受限的多种异构物联网边缘设备的统一管理与资源共享，以支持协作式的跨异构设备的去中心化的深度学习模型分布式训练和/或推理。
2)通过边缘资源管理模块,实现了对物联网边缘设备的智能感知、统一管理和协作,并实现了针对物联网设备的资源共享和高效调度,以充分利用分布广泛的异构的资源受限物联网设备。
3)通过计算任务分解模块,实现了对深度学习任务的算子级分解,生成的计算图有助于并行处理和分布式计算的实施,即有利于算子的并行处理和分布式执行;并且,利于进行图级优化(也可以理解为算子级的优化)以提升任务执行性能。
4)考虑到了多种异构的物联网设备上不同操作系统之间任务调度的复杂性和不确定性,通过ICTA模块,基于对多层GCN和DNN网络的端到端训练,学习不同操作系统内在复杂的任务调度规律,以及不同任务分配策略和系统性能之间潜在的对应关系,实现在任务实际执行前对给定的任务分配策略在实际执行后可能得到的系统性能的准确预测,以便选择最优的任务分配策略;通过对计算任务和可用资源之间的最佳匹配,实现对最佳任务分配策略的智能决策,以此最大化对边缘资源的利用率,提升整体系统性能。
5)通过持续学习机制,实现自学习和自适应,达到“越用越聪明”的效果。
为了实现本申请实施例的方法,本申请实施例还提供了一种信息处理装置,如图10所示,该装置包括:
第一功能组件1001,配置为通过将物联网设备的能力进行抽象,生成资源图;所述资源图用于管理和/或编排异构物联网设备上的可用能力;
第二功能组件1002,配置为获取待处理任务,并生成待处理任务对应的计算图;
第三功能组件1003,配置为基于所述资源图和所述计算图,进行任务分配。
其中,在一实施例中,所述第二功能组件1002,配置为:
将所述待处理任务分解为至少一个算子;并确定算子之间的关系;
基于所述至少一个算子及算子之间的关系,生成待处理任务对应的计算图。
在一实施例中,所述第二功能组件1002,配置为采用第一策略对所述待处理任务进行分解,得到至少一个算子。
在一实施例中,所述第二功能组件1002,配置为:
将所述至少一个算子中的每个算子抽象成相应的节点;并基于算子之间的关系确定节点之间的关系;
基于确定的节点及节点之间的关系,生成待处理任务对应的计算图。
在一实施例中，所述第二功能组件1002，配置为对生成的计算图进行优化；
所述第三功能组件1003,配置为基于所述资源图和优化后的计算图,进行任务分配。
其中,在一实施例中,所述第二功能组件1002,配置为执行以下至少之一:
算子融合;
常量合并;
静态内存规划传递;
数据布局转换。
在一实施例中,所述第一功能组件1001,配置为:
发现网络中的物联网设备;检测物联网设备的能力;针对每个物联网设备,基于相应物联网设备的能力,将物联网设备抽象成相应的节点;
基于抽象出的节点,生成资源图。
在一实施例中,所述第一功能组件1001,配置为在监测到物联网设备发生变化时,基于监测到的物联网设备的变化情况,更新所述资源图。
在一实施例中,所述第三功能组件1003,配置为:
基于所述资源图和所述计算图,采用第二策略生成至少一种任务分配策略;从所述至少一种任务分配策略中确定性能最佳的任务分配策略;并基于所述性能最佳的任务分配策略,进行任务分配;所述任务分配策略用于将所述待处理任务分配到至少一个物联网设备上。
在一实施例中,所述第三功能组件1003,配置为:
基于所述计算图和资源图,采用第二策略,生成至少一个资源子图;每个资源子图包含一种任务分配策略;所述资源子图的节点代表一个物联网设备的至少部分能力;所述资源子图的边代表相邻两个节点之间的关系。
在一实施例中,所述第三功能组件1003,配置为:
预测每个任务分配策略的性能;基于预测的每个任务分配策略的性能,确定性能最佳的任务分配策略。
在一实施例中,所述第三功能组件1003,配置为:
提取所述计算图的特征,得到第一特征集;并提取每个资源子图的特征,得到多个第二特征集;每个资源子图包含一种任务分配策略;
针对每个任务分配策略,基于所述第一特征集和相应的第二特征集,预测相应任务分配策略的性能。
在一实施例中,所述第三功能组件1003,配置为通过特征提取网络提取所述计算图的特征,得到第一特征集;并通过所述特征提取网络提取每个资源子图的特征,得到多个第二特征集。
在一实施例中，所述第三功能组件1003，配置为基于所述第一特征集和相应的第二特征集，通过预测网络获得相应任务分配策略对应的预测数据；基于相应任务分配策略对应的预测数据确定相应任务分配策略的预测性能。
在一实施例中,所述第三功能组件1003,配置为根据预设权重,对相应任务分配策略对应的预测数据进行加权处理,以确定相应任务分配策略的预测性能。
在一实施例中,所述第三功能组件1003,配置为在进行任务分配后,获取所述待处理任务基于所述性能最佳的任务分配策略被执行时的实际性能;并将所述性能最佳的任务分配策略及获取的实际性能存储至所述训练数据集。
这里,所述第一功能组件1001的功能相当于本申请应用实施例中边缘资源管理模块的功能;所述第二功能组件1002的功能相当于本申请应用实施例中计算任务分解模块的功能;所述第三功能组件1003的功能相当于本申请应用实施例中智能计算任务分配(ICTA)模块的功能。
实际应用时,所述第一功能组件1001、所述第二功能组件1002和所述第三功能组件1003可由该装置中的处理器实现。
为了实现本申请实施例的方法,本申请实施例还提供了一种信息处理装置,如图11所示,该装置包括:
第一处理单元1101,配置为获取待处理任务;并生成待处理任务对应的计算图;所述待处理任务包含计算任务;所述计算图的节点代表所述待处理任务的一个算子;所述计算图的边代表相邻两个节点之间的关系;
第二处理单元1102,配置为对生成的计算图进行优化,得到优化后的计算图;所述优化后的计算图用于结合资源图进行任务分配;所述资源图是通过将物联网设备的能力进行抽象生成的;所述资源图用于管理和/或编排异构物联网设备上的可用能力。
其中,在一实施例中,所述第一处理单元1101,配置为:
将所述待处理任务分解为至少一个算子;并确定算子之间的关系;
基于所述至少一个算子及算子之间的关系,生成待处理任务对应的计算图。
在一实施例中,所述第一处理单元1101,配置为采用第一策略对所述待处理任务进行分解,得到至少一个算子。
在一实施例中,所述第一处理单元1101,配置为:
将所述至少一个算子中的每个算子抽象成相应的节点;并基于算子之间的关系确定节点之间的关系;
基于确定的节点及节点之间的关系,生成待处理任务对应的计算图。
在一实施例中,所述第二处理单元1102,配置为执行以下至少之一:
算子融合;
常量合并;
静态内存规划传递;
数据布局转换。
这里,所述第一处理单元1101的功能和所述第二处理单元1102的功能相当于本申请应用实施例中计算任务分解模块的功能。
实际应用时,所述第一处理单元1101和所述第二处理单元1102可由该装置中的处理器实现。
需要说明的是:上述实施例提供的信息处理装置在基于任务进行信息处理时,仅以上述各程序模块的划分进行举例说明,实际应用时,可以根据需要而将上述处理分配由不同的程序模块完成,即将装置的内部结构划分成不同的程序模块,以完成以上描述的全部或者部分处理。另外,上述实施例提供的信息处理装置与信息处理方法实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
基于上述程序模块的硬件实现,且为了实现本申请实施例的方法,本申请实施例还提供了一种电子设备,如图12所示,该电子设备1200包括:
通信接口1201,能够与其他电子设备进行信息交互;
处理器1202,与所述通信接口1201连接,以实现与其他电子设备进行信息交互,配置为运行计算机程序时,执行上述一个或多个技术方案提供的方法;
存储器1203,存储能够在所述处理器1202上运行的计算机程序。
这里,所述电子设备1200上可以设置第一功能组件、第二功能组件和第三功能组件中的至少一个功能组件。
具体地,在所述第一功能组件、所述第二功能组件和所述第三功能组件均设置在所述电子设备1200上的情况下,所述处理器1202,配置为:
通过将物联网设备的能力进行抽象,生成资源图;所述资源图用于管理和/或编排异构物联网设备上的可用能力;
获取待处理任务,并生成待处理任务对应的计算图;
基于所述资源图和所述计算图,进行任务分配。
其中,在一实施例中,所述处理器1202,配置为:
将所述待处理任务分解为至少一个算子;并确定算子之间的关系;
基于所述至少一个算子及算子之间的关系,生成待处理任务对应的计算图。
在一实施例中,所述处理器1202,配置为:
采用第一策略对所述待处理任务进行分解,得到至少一个算子。
在一实施例中,所述处理器1202,配置为:
将所述至少一个算子中的每个算子抽象成相应的节点;并基于算子之间的关系确定节点之间的关系;
基于确定的节点及节点之间的关系,生成待处理任务对应的计算图。
在一实施例中,所述处理器1202,配置为:
对生成的计算图进行优化;
基于所述资源图和优化后的计算图,进行任务分配。
在一实施例中,所述处理器1202,配置为执行以下操作中的至少之一:
算子融合;
常量合并;
静态内存规划传递;
数据布局转换。
在一实施例中,所述处理器1202,配置为:
发现网络中的物联网设备;检测物联网设备的能力;针对每个物联网设备,基于相应物联网设备的能力,将物联网设备抽象成相应的节点;
基于抽象出的节点,生成资源图。
在一实施例中,所述处理器1202,配置为:
监测到物联网设备发生变化时,基于监测到的物联网设备的变化情况,更新所述资源图。
在一实施例中,所述处理器1202,配置为:
基于所述资源图和所述计算图,采用第二策略生成至少一种任务分配策略;从所述至少一种任务分配策略中确定性能最佳的任务分配策略;并基于所述性能最佳的任务分配策略,进行任务分配;所述任务分配策略用于将所述待处理任务分配到至少一个物联网设备上。
在一实施例中,所述处理器1202,配置为:
基于所述计算图和资源图,采用第二策略,生成至少一个资源子图;每个资源子图包含一种任务分配策略;所述资源子图的节点代表一个物联网设备的至少部分能力;所述资源子图的边代表相邻两个节点之间的关系。
在一实施例中,所述处理器1202,配置为:
预测每个任务分配策略的性能;基于预测的每个任务分配策略的性能,确定性能最佳的任务分配策略。
在一实施例中,所述处理器1202,配置为:
提取所述计算图的特征,得到第一特征集;并提取每个资源子图的特征,得到多个第二特征集;每个资源子图包含一种任务分配策略;
针对每个任务分配策略,基于所述第一特征集和相应的第二特征集,预测相应任务分配策略的性能。
在一实施例中,所述处理器1202,配置为:
通过特征提取网络提取所述计算图的特征,得到第一特征集;并通过所述特征提取网络提取每个资源子图的特征,得到多个第二特征集。
在一实施例中,所述处理器1202,配置为:
基于所述第一特征集和相应的第二特征集,通过预测网络获得相应任务分配策略对应的预测数据;基于相应任务分配策略对应的预测数据确定相应任务分配策略的预测性能。
在一实施例中,所述处理器1202,配置为:
根据预设权重，对相应任务分配策略对应的预测数据进行加权处理，以确定相应任务分配策略的预测性能。
在一实施例中,所述处理器1202,配置为:
在进行任务分配后,获取所述待处理任务基于所述性能最佳的任务分配策略被执行时的实际性能;并将所述性能最佳的任务分配策略及获取的实际性能存储至所述训练数据集。
相应地,在所述第二功能组件设置在所述电子设备1200上的情况下,所述处理器1202,配置为:
获取待处理任务;并生成待处理任务对应的计算图;所述待处理任务包含计算任务;所述计算图的节点代表所述待处理任务的一个算子;所述计算图的边代表相邻两个节点之间的关系;
对生成的计算图进行优化,得到优化后的计算图;所述优化后的计算图用于结合资源图进行任务分配;所述资源图是通过将物联网设备的能力进行抽象生成的;所述资源图用于管理和/或编排异构物联网设备上的可用能力。
其中,在一实施例中,所述处理器1202,配置为:
将所述待处理任务分解为至少一个算子;并确定算子之间的关系;
基于所述至少一个算子及算子之间的关系,生成待处理任务对应的计算图。
在一实施例中,所述处理器1202,配置为:
采用第一策略对所述待处理任务进行分解,得到至少一个算子。
在一实施例中,所述处理器1202,配置为:
将所述至少一个算子中的每个算子抽象成相应的节点;并基于算子之间的关系确定节点之间的关系;
基于确定的节点及节点之间的关系,生成待处理任务对应的计算图。
在一实施例中,所述处理器1202,配置为执行以下操作中的至少之一:
算子融合;
常量合并;
静态内存规划传递;
数据布局转换。
需要说明的是:所述处理器1202具体执行上述操作的过程详见方法实施例,这里不再赘述。
当然,实际应用时,电子设备1200中的各个组件通过总线系统1204耦合在一起。可理解,总线系统1204用于实现这些组件之间的连接通信。总线系统1204除包括数据总线之外,还包括电源总线、控制总线和状态信号总线。但是为了清楚说明起见,在图12中将各种总线都标为总线系统1204。
本申请实施例中的存储器1203用于存储各种类型的数据以支持电子设备1200的操作。这些数据的示例包括:用于在电子设备1200上操作的任何计算机程序。
上述本申请实施例揭示的方法可以应用于处理器1202中,或者由处理器1202实现。处理器1202可能是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法的各步骤可以通过处理器1202中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器1202可以是通用处理器、DSP、GPU,或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。处理器1202可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者任何常规的处理器等。结合本申请实施例所公开的方法的步骤,可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于存储介质中,该存储介质位于存储器1203,处理器1202读取存储器1203中的信息,结合其硬件完成前述方法的步骤。
在示例性实施例中,电子设备1200可以被一个或多个应用专用集成电路(ASIC,Application Specific Integrated Circuit)、DSP、可编程逻辑器件(PLD,Programmable Logic Device)、复杂可编程逻辑器件(CPLD,Complex Programmable Logic Device)、FPGA、通用处理器、GPU、控制器、微控制器(MCU,Micro Controller Unit)、微处理器(Microprocessor)、各类AI芯片、类脑芯片、或者其他电子元件实现,用于执行前述方法。
可以理解,本申请实施例的存储器1203可以是易失性存储器或者非易失性存储器,也可包括易失性和非易失性存储器两者。其中,非易失性存储器可以是ROM、可编程只读存储器(PROM,Programmable Read-Only Memory)、可擦除可编程只读存储器(EPROM,Erasable Programmable Read-Only Memory)、电可擦除可编程只读存储器(EEPROM,Electrically Erasable Programmable Read-Only Memory)、FRAM、快闪存储器(Flash Memory)、磁表面存储器、光盘、或只读光盘(CD-ROM,Compact Disc Read-Only Memory);磁表面存储器可以是磁盘存储器或磁带存储器。易失性存储器可以是RAM,其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(SRAM,Static Random Access Memory)、同步静态随机存取存储器(SSRAM,Synchronous Static Random Access Memory)、动态随机存取存储器(DRAM,Dynamic Random Access Memory)、同步动态随机存取存储器(SDRAM,Synchronous Dynamic Random Access Memory)、双倍数据速率同步动态随机存取存储器(DDRSDRAM,Double Data Rate Synchronous Dynamic Random Access Memory)、增强型同步动态随机存取存储器(ESDRAM,Enhanced Synchronous Dynamic Random Access Memory)、同步连接动态随机存取存储器(SLDRAM,SyncLink Dynamic Random Access Memory)、直接内存总线随机存取存储器(DRRAM,Direct Rambus Random Access Memory)。本申请实施例描述的存储器旨在包括但不限于这些和任意其他适合类型的存储器。
为了实现本申请实施例提供的方法,本申请实施例还提供了一种信息处理系统,包括:
第一功能组件,配置为通过将物联网设备的能力进行抽象,生成资源图;所述资源图用于管理和/或编排异构物联网设备上的可用能力;
第二功能组件,配置为获取待处理任务,并生成待处理任务对应的计算图;
第三功能组件,配置为基于所述资源图和所述计算图,进行任务分配;
其中,所述第一功能组件、所述第二功能组件和所述第三功能组件设置在至少两个电子设备上。
示例性地,如图13所示,该系统可以包括:第一电子设备1301和第二电子设备1302;所述第一电子设备1301设置有所述第二功能组件;所述第二电子设备1302设置有所述第一功能组件和所述第三功能组件。
这里,需要说明的是:所述第一功能组件、所述第二功能组件和所述第三功能组件的具体处理过程已在上文详述,这里不再赘述。
在示例性实施例中,本申请实施例还提供了一种存储介质,即计算机存储介质,具体为计算机可读存储介质,例如包括存储计算机程序的存储器1203,上述计算机程序可由电子设备1200的处理器1202执行,以完成前述方法所述步骤。计算机可读存储介质可以是FRAM、ROM、PROM、EPROM、EEPROM、Flash Memory、磁表面存储器、光盘、或CD-ROM等存储器。
需要说明的是:“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。
另外,本申请实施例所记载的技术方案之间,在不冲突的情况下,可以任意组合。
以上所述,仅为本申请的较佳实施例而已,并非用于限定本申请的保护范围。

Claims (27)

  1. 一种信息处理方法,包括:
    第一功能组件通过将物联网设备的能力进行抽象,生成资源图;所述资源图用于管理和/或编排异构物联网设备上的可用能力;
    第二功能组件获取待处理任务,并生成待处理任务对应的计算图;
    第三功能组件基于所述资源图和所述计算图,进行任务分配。
  2. 根据权利要求1所述的方法,其中,所述生成待处理任务对应的计算图,包括:
    所述第二功能组件将所述待处理任务分解为至少一个算子;并确定算子之间的关系;
    基于所述至少一个算子及算子之间的关系,生成待处理任务对应的计算图。
  3. 根据权利要求2所述的方法,其中,所述将所述待处理任务分解为至少一个算子,包括:
    所述第二功能组件采用第一策略对所述待处理任务进行分解,得到至少一个算子。
  4. 根据权利要求2所述的方法,其中,所述基于所述至少一个算子及算子之间的关系,生成待处理任务对应的计算图,包括:
    所述第二功能组件将所述至少一个算子中的每个算子抽象成相应的节点;并基于算子之间的关系确定节点之间的关系;
    基于确定的节点及节点之间的关系,生成待处理任务对应的计算图。
  5. 根据权利要求1所述的方法,其中,所述计算图的节点代表所述待处理任务的一个算子;所述计算图的边代表相邻两个节点之间的关系。
  6. 根据权利要求2所述的方法,其中,所述方法还包括:
    所述第二功能组件对生成的计算图进行优化;
    所述第三功能组件基于所述资源图和优化后的计算图,进行任务分配。
  7. 根据权利要求1所述的方法,其中,所述通过将物联网设备的能力进行抽象,生成资源图,包括:
    第一功能组件发现网络中的物联网设备;检测物联网设备的能力;针对每个物联网设备,基于相应物联网设备的能力,将物联网设备抽象成相应的节点;
    基于抽象出的节点,生成资源图。
  8. 根据权利要求1所述的方法,其中,所述资源图的节点代表一个物联网设备的至少部分能力;所述资源图的边代表相邻两个节点之间的关系。
  9. 根据权利要求1所述的方法,其中,所述基于所述资源图和所述计算图,进行任务分配,包括:
    所述第三功能组件基于所述资源图和所述计算图,采用第二策略生成至少一种任务分配策略;从所述至少一种任务分配策略中确定性能最佳的任务分配策略;并基于所述性能最佳的任务分配策略,进行任务分配;所述任务分配策略用于将所述待处理任务分配到至少一个物联网设备上。
  10. 根据权利要求9所述的方法,其中,所述采用第二策略生成至少一种任务分配策略,包括:
    所述第三功能组件基于所述计算图和资源图,采用第二策略,生成至少一个资源子图;每个资源子图包含一种任务分配策略;所述资源子图的节点代表一个物联网设备的至少部分能力;所述资源子图的边代表相邻两个节点之间的关系。
  11. 根据权利要求9所述的方法,其中,所述从所述至少一种任务分配策略中确定性能最佳的任务分配策略,包括:
    所述第三功能组件预测每个任务分配策略的性能;基于预测的每个任务分配策略的性能,确定性能最佳的任务分配策略。
  12. 根据权利要求11所述的方法,其中,所述预测每个任务分配策略的性能,包括:
    所述第三功能组件提取所述计算图的特征,得到第一特征集;并提取每个资源子图的特征,得到多个第二特征集;每个资源子图包含一种任务分配策略;
    针对每个任务分配策略,基于所述第一特征集和相应的第二特征集,预测相应任务分配策略的性能。
  13. 根据权利要求12所述的方法,其中,所述提取所述计算图的特征,得到第一特征集;并提取每个资源子图的特征,得到多个第二特征集,包括:
    所述第三功能组件通过特征提取网络提取所述计算图的特征,得到第一特征集;并通过所述特征提取网络提取每个资源子图的特征,得到多个第二特征集。
  14. 根据权利要求12所述的方法,其中,所述基于所述第一特征集和相应的第二特征集,预测相应任务分配策略的性能,包括:
    所述第三功能组件基于所述第一特征集和相应的第二特征集,通过预测网络获得相应任务分配策略对应的预测数据;基于相应任务分配策略对应的预测数据确定相应任务分配策略的预测性能。
  15. 根据权利要求14所述的方法,其中,所述基于相应任务分配策略对应的预测数据确定相应任务分配策略的预测性能,包括:
    所述第三功能组件根据预设权重，对相应任务分配策略对应的预测数据进行加权处理，以确定相应任务分配策略的预测性能。
  16. 根据权利要求13所述的方法,其中,所述特征提取网络是基于训练数据集训练得到的;所述训练过程能够产生优化的网络参数;所述优化的网络参数用于提取到有利于提升性能预测准确率的特征。
  17. 根据权利要求14所述的方法,其中,所述预测网络是基于训练数据集训练得到的;所述训练过程能够产生优化的网络参数;所述优化的网络参数用于提升性能预测的准确率。
  18. 根据权利要求16或17所述的方法,其中,所述方法还包括:
    所述第三功能组件在进行任务分配后,获取所述待处理任务基于所述性能最佳的任务分配策略被执行时的实际性能;并将所述性能最佳的任务分配策略及获取的实际性能存储至所述训练数据集。
  19. 一种信息处理方法,包括:
    获取待处理任务;并生成待处理任务对应的计算图;所述待处理任务包含计算任务;所述计算图的节点代表所述待处理任务的一个算子;所述计算图的边代表相邻两个节点之间的关系;
    对生成的计算图进行优化,得到优化后的计算图;所述优化后的计算图用于结合资源图进行任务分配;所述资源图是通过将物联网设备的能力进行抽象生成的;所述资源图用于管理和/或编排异构物联网设备上的可用能力。
  20. 根据权利要求19所述的方法,其中,所述生成待处理任务对应的计算图,包括:
    将所述待处理任务分解为至少一个算子;并确定算子之间的关系;
    基于所述至少一个算子及算子之间的关系,生成待处理任务对应的计算图。
  21. 根据权利要求20所述的方法,其中,所述将所述待处理任务分解为至少一个算子,包括:
    采用第一策略对所述待处理任务进行分解,得到至少一个算子。
  22. 根据权利要求20所述的方法,其中,所述基于所述至少一个算子及算子之间的关系,生成待处理任务对应的计算图,包括:
    将所述至少一个算子中的每个算子抽象成相应的节点;并基于算子之间的关系确定节点之间的关系;
    基于确定的节点及节点之间的关系,生成待处理任务对应的计算图。
  23. 一种信息处理装置,包括:
    第一功能组件,配置为通过将物联网设备的能力进行抽象,生成资源图;所述资源图用于管理和/或编排异构物联网设备上的可用能力;
    第二功能组件,配置为获取待处理任务,并生成待处理任务对应的计算图;
    第三功能组件，配置为基于所述资源图和所述计算图，进行任务分配。
  24. 一种信息处理装置,包括:
    第一处理单元,配置为获取待处理任务;并生成待处理任务对应的计算图;所述待处理任务包含计算任务;所述计算图的节点代表所述待处理任务的一个算子;所述计算图的边代表相邻两个节点之间的关系;
    第二处理单元,配置为对生成的计算图进行优化,得到优化后的计算图;所述优化后的计算图用于结合资源图进行任务分配;所述资源图是通过将物联网设备的能力进行抽象生成的;所述资源图用于管理和/或编排异构物联网设备上的可用能力。
  25. 一种信息处理系统,包括:
    第一功能组件,配置为通过将物联网设备的能力进行抽象,生成资源图;所述资源图用于管理和/或编排异构物联网设备上的可用能力;
    第二功能组件,配置为获取待处理任务,并生成待处理任务对应的计算图;
    第三功能组件,配置为基于所述资源图和所述计算图,进行任务分配;其中,
    所述第一功能组件、所述第二功能组件和所述第三功能组件设置在至少两个电子设备上。
  26. 一种电子设备,包括:处理器和配置为存储能够在处理器上运行的计算机程序的存储器,
    其中,所述处理器配置为运行所述计算机程序时,执行权利要求1至18任一项所述方法的步骤,或者执行权利要求19至22任一项所述方法的步骤。
  27. 一种存储介质,其上存储有计算机程序,所述计算机程序被处理器执行时实现权利要求1至18任一项所述方法的步骤,或者实现权利要求19至22任一项所述方法的步骤。
PCT/CN2022/075516 2021-02-10 2022-02-08 Information processing method, apparatus, system, electronic device and storage medium WO2022171082A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2023548276A JP2024507133A (ja) 2021-02-10 2022-02-08 Information processing method, apparatus, system, electronic device and storage medium
EP22752243.0A EP4293965A1 (en) 2021-02-10 2022-02-08 Information processing method, apparatus, system, electronic device and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110184807.5 2021-02-10
CN202110184807.5A CN114915629B (zh) 2021-02-10 2021-02-10 Information processing method, apparatus, system, electronic device and storage medium

Publications (1)

Publication Number Publication Date
WO2022171082A1 (zh)

Family

ID=82761151

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/075516 WO2022171082A1 (zh) 2021-02-10 2022-02-08 Information processing method, apparatus, system, electronic device and storage medium

Country Status (4)

Country Link
EP (1) EP4293965A1 (zh)
JP (1) JP2024507133A (zh)
CN (1) CN114915629B (zh)
WO (1) WO2022171082A1 (zh)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115421930A (zh) * 2022-11-07 2022-12-02 山东海量信息技术研究院 Task processing method, system, apparatus and device, and computer-readable storage medium
CN116049908A (zh) * 2023-04-03 2023-05-02 北京数力聚科技有限公司 Blockchain-based multi-party privacy computing method and system
CN116208970A (zh) * 2023-04-18 2023-06-02 山东科技大学 Knowledge-graph-aware air-ground cooperative offloading and content acquisition method
CN116361120A (zh) * 2023-05-31 2023-06-30 山东浪潮科学研究院有限公司 Database heterogeneous resource management and scheduling method, apparatus, device and medium
CN116700934A (zh) * 2023-08-04 2023-09-05 浪潮电子信息产业股份有限公司 Scheduling method, apparatus and device for diverse heterogeneous computing-power devices, and storage medium
CN117255126A (zh) * 2023-08-16 2023-12-19 广东工业大学 Multi-objective-reinforcement-learning-based edge service composition method for data-intensive tasks
CN117349026A (zh) * 2023-12-04 2024-01-05 环球数科集团有限公司 Distributed computing-power scheduling system for AIGC model training
WO2024082692A1 (zh) * 2022-10-21 2024-04-25 华为技术有限公司 Task execution method and heterogeneous server

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110307957A1 (en) * 2010-06-15 2011-12-15 International Business Machines Corporation Method and System for Managing and Monitoring Continuous Improvement in Detection of Compliance Violations
CN107291538A (zh) * 2017-06-14 2017-10-24 中国人民解放军信息工程大学 Task-oriented mimic cloud construction method, and mimic-cloud-based task scheduling method, apparatus and system
CN108243246A (zh) * 2017-12-25 2018-07-03 北京市天元网络技术股份有限公司 Edge computing resource scheduling method, edge device and system
CN112272227A (zh) * 2020-10-22 2021-01-26 华侨大学 Computation-graph-based edge computing task scheduling method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200184366A1 (en) * 2018-12-06 2020-06-11 Fujitsu Limited Scheduling task graph operations
CN111342984B (zh) * 2018-12-18 2021-08-10 大唐移动通信设备有限公司 Information processing method, system and apparatus
CN110750342B (zh) * 2019-05-23 2020-10-09 北京嘀嘀无限科技发展有限公司 Scheduling method and apparatus, electronic device, and readable storage medium
CN112187859B (zh) * 2020-08-24 2022-05-24 国网浙江省电力有限公司信息通信分公司 Method for dynamically mapping Internet of Things services to edge network capabilities, and electronic device
CN112328378B (zh) * 2020-11-05 2023-03-24 南京星环智能科技有限公司 Task scheduling method, computer device and storage medium


Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024082692A1 (zh) * 2022-10-21 2024-04-25 华为技术有限公司 Task execution method and heterogeneous server
CN115421930A (zh) * 2022-11-07 2022-12-02 山东海量信息技术研究院 Task processing method, system, apparatus and device, and computer-readable storage medium
CN116049908A (zh) * 2023-04-03 2023-05-02 北京数力聚科技有限公司 Blockchain-based multi-party privacy computing method and system
CN116208970A (zh) * 2023-04-18 2023-06-02 山东科技大学 Knowledge-graph-aware air-ground cooperative offloading and content acquisition method
CN116361120A (zh) * 2023-05-31 2023-06-30 山东浪潮科学研究院有限公司 Database heterogeneous resource management and scheduling method, apparatus, device and medium
CN116361120B (zh) * 2023-05-31 2023-08-15 山东浪潮科学研究院有限公司 Database heterogeneous resource management and scheduling method, apparatus, device and medium
CN116700934A (zh) * 2023-08-04 2023-09-05 浪潮电子信息产业股份有限公司 Scheduling method, apparatus and device for diverse heterogeneous computing-power devices, and storage medium
CN116700934B (zh) * 2023-08-04 2023-11-07 浪潮电子信息产业股份有限公司 Scheduling method, apparatus and device for diverse heterogeneous computing-power devices, and storage medium
CN117255126A (zh) * 2023-08-16 2023-12-19 广东工业大学 Multi-objective-reinforcement-learning-based edge service composition method for data-intensive tasks
CN117349026A (zh) * 2023-12-04 2024-01-05 环球数科集团有限公司 Distributed computing-power scheduling system for AIGC model training
CN117349026B (zh) * 2023-12-04 2024-02-23 环球数科集团有限公司 Distributed computing-power scheduling system for AIGC model training

Also Published As

Publication number Publication date
EP4293965A1 (en) 2023-12-20
CN114915629B (zh) 2023-08-15
JP2024507133A (ja) 2024-02-16
CN114915629A (zh) 2022-08-16

Similar Documents

Publication Publication Date Title
WO2022171082A1 (zh) Information processing method, apparatus, system, electronic device and storage medium
Zhou et al. Edge intelligence: Paving the last mile of artificial intelligence with edge computing
Movahedi et al. An efficient population-based multi-objective task scheduling approach in fog computing systems
WO2022171066A1 (zh) Task allocation method based on Internet of Things devices, and network training method and apparatus
Chen et al. Data-driven task allocation for multi-task transfer learning on the edge
Memari et al. A latency-aware task scheduling algorithm for allocating virtual machines in a cost-effective and time-sensitive fog-cloud architecture
Mirmohseni et al. Using Markov learning utilization model for resource allocation in cloud of thing network
Abdulazeez et al. Offloading mechanisms based on reinforcement learning and deep learning algorithms in the fog computing environment
Kanungo Edge-to-Cloud Intelligence: Enhancing IoT Devices with Machine Learning and Cloud Computing
Hu et al. Software-defined edge computing (SDEC): Principles, open system architecture and challenges
Jin et al. A review of intelligent computation offloading in multiaccess edge computing
Ryabinin et al. Ontology-driven edge computing
Liu et al. A survey of state-of-the-art on edge computing: Theoretical models, technologies, directions, and development paths
Farahbakhsh et al. Context‐aware computation offloading for mobile edge computing
Cañete et al. Supporting IoT applications deployment on edge-based infrastructures using multi-layer feature models
Zhou et al. Knowledge transfer and reuse: A case study of AI-enabled resource management in RAN slicing
Alqarni et al. A survey of computational offloading in cloud/edge-based architectures: strategies, optimization models and challenges
Xiang et al. Energy-effective artificial internet-of-things application deployment in edge-cloud systems
Salehnia et al. An optimal task scheduling method in IoT-Fog-Cloud network using multi-objective moth-flame algorithm
Ghebleh et al. A multi-criteria method for resource discovery in distributed systems using deductive fuzzy system
Zhu et al. Deep reinforcement learning-based edge computing offloading algorithm for software-defined IoT
Klimenko Model and method of resource-saving tasks distribution for the fog robotics
Arif et al. A model-driven framework for optimum application placement in fog computing using a machine learning based approach
Li et al. Dependency-aware task offloading based on deep reinforcement learning in mobile edge computing networks
CN111400300B (zh) Edge device management method and apparatus, and management device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 22752243; Country of ref document: EP; Kind code of ref document: A1
WWE Wipo information: entry into national phase
    Ref document number: 2023548276; Country of ref document: JP
WWE Wipo information: entry into national phase
    Ref document number: 2022752243; Country of ref document: EP
NENP Non-entry into the national phase
    Ref country code: DE
ENP Entry into the national phase
    Ref document number: 2022752243; Country of ref document: EP; Effective date: 20230911