WO2015149491A1 - Apparatus, Method and System for Processing Network Resources


Info

Publication number
WO2015149491A1
Authority
WO
WIPO (PCT)
Prior art keywords
computing
information
task
computing task
routing
Prior art date
Application number
PCT/CN2014/087637
Other languages
English (en)
French (fr)
Inventor
柴晓前 (Chai Xiaoqian)
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to EP14888126.1A (granted as EP3113429B1)
Publication of WO2015149491A1
Priority to US15/272,542 (granted as US10200287B2)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/12 Avoiding congestion; Recovering from congestion
    • H04L 47/20 Traffic policing
    • H04L 12/00 Data switching networks
    • H04L 12/64 Hybrid switching systems
    • H04L 12/6418 Hybrid transport
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08 Configuration management of networks or network elements
    • H04L 41/0893 Assignment of logical groups to network elements
    • H04L 41/0894 Policy-based network configuration management
    • H04L 41/0896 Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/12 Shortest path evaluation
    • H04L 45/125 Shortest path evaluation based on throughput or bandwidth
    • H04L 49/00 Packet switching elements
    • H04L 49/35 Switches specially adapted for specific applications
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]

Definitions

  • the present invention relates to the field of communication network technologies, and in particular, to a device, method and system for processing network resources.
  • when processing data that requires a large amount of computing resources, the data is divided, and the divided parts are allocated to a plurality of computing nodes for processing. After each computing node processes its part of the data to obtain a partial computation result, the partial results produced by the computing nodes are aggregated to form the computation result corresponding to the data; this is distributed computing.
  • the Scheduler platform first decomposes the computing task submitted by the user equipment (UE) and then sends a computing resource request to the computing resource manager, where the request includes at least the number of computing nodes. After receiving the computing resource request, the computing resource manager allocates computing nodes to the computing task according to the number of computing nodes, and then feeds back to the Scheduler platform a computing resource response carrying the allocation result, where the allocation result includes the allocated computing node information.
  • the Scheduler platform sends the decomposed sub-computing tasks to the corresponding computing nodes, and then collects the partial computation results of the respective computing nodes, thereby completing the processing of the data that requires a large amount of computing resources, where one computing node executes one sub-computing task.
  • after the Scheduler platform sends the sub-computing tasks to the assigned computing nodes, the computing nodes interact with one another through network devices such as switches and routers.
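The distributed-computing flow of the background above, splitting the data, processing each part on a computing node, and aggregating the partial results, can be sketched as follows. This is an illustrative Java sketch only: the partitioning rule and the per-node work (a partial sum) are assumptions for demonstration, and node processing is simulated locally.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the split/process/aggregate flow of distributed computing:
// divide the data, let each "computing node" process its part, then
// aggregate the partial results into the final result.
public class DistributedSum {

    // Divide the data into roughly equal parts, one per computing node.
    static List<int[]> divide(int[] data, int nodes) {
        List<int[]> parts = new ArrayList<>();
        int chunk = (data.length + nodes - 1) / nodes;
        for (int start = 0; start < data.length; start += chunk) {
            int end = Math.min(start + chunk, data.length);
            int[] part = new int[end - start];
            System.arraycopy(data, start, part, 0, part.length);
            parts.add(part);
        }
        return parts;
    }

    // A computing node's work on its part: here, a partial sum.
    static long processOnNode(int[] part) {
        long sum = 0;
        for (int v : part) sum += v;
        return sum;
    }

    // Aggregate the partial results into the final result.
    static long aggregate(List<Long> partials) {
        long total = 0;
        for (long p : partials) total += p;
        return total;
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
        List<Long> partials = new ArrayList<>();
        for (int[] part : divide(data, 3)) partials.add(processOnNode(part));
        System.out.println(aggregate(partials)); // 55
    }
}
```

The inter-node traffic that the patent addresses arises in the step this sketch glosses over: in a real deployment the parts and partial results travel between nodes through switches and routers.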
  • An embodiment of the present invention provides a device, a method, and a system for processing network resources, which are used to solve the problem of communication congestion between network devices when processing network resources.
  • an embodiment of the present invention provides a routing policy decider, including:
  • the receiving module is configured to receive the computing environment information corresponding to the computing task and the network information of each computing node, both sent by the Scheduler platform, to provide the computing environment information to the bandwidth decision module and the generating module, and to provide the network information of the computing nodes to the generating module;
  • the bandwidth decision module is configured to decide, according to the computing environment information, the bandwidth allocated for the computing task;
  • the generating module is configured to generate routing configuration policy information for the computing task according to the network information of the computing nodes, the bandwidth allocated by the decision, and the computing environment information, and to provide the routing configuration policy information to the sending module;
  • the sending module is configured to send the routing configuration policy information to a routing configuration controller.
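A minimal sketch of how the modules above could fit together. The class, record, and method names, and the simple capped-bandwidth rule, are illustrative assumptions, not the patent's implementation:

```java
import java.util.List;

// Sketch of the routing policy decider's pipeline: receive computing
// environment info and per-node network info from the Scheduler platform,
// decide a bandwidth for the task, and generate routing configuration
// policy information to hand to the routing configuration controller.
public class RoutingPolicyDecider {

    record NodeNetworkInfo(String ip, int port) {}
    record ComputingEnvironmentInfo(String state, int priority, long requestedBandwidthBps) {}
    record RoutingConfigPolicy(List<NodeNetworkInfo> nodes, long bandwidthBps, String taskState) {}

    // Bandwidth decision module: decide the bandwidth allocated to the task.
    // (Assumed rule: grant the requested bandwidth, capped at a link limit.)
    static long decideBandwidth(ComputingEnvironmentInfo env, long linkCapacityBps) {
        return Math.min(env.requestedBandwidthBps(), linkCapacityBps);
    }

    // Generating module: combine node network info, decided bandwidth and
    // environment info into routing configuration policy information.
    static RoutingConfigPolicy generate(List<NodeNetworkInfo> nodes,
                                        long bandwidthBps,
                                        ComputingEnvironmentInfo env) {
        return new RoutingConfigPolicy(nodes, bandwidthBps, env.state());
    }

    public static void main(String[] args) {
        var env = new ComputingEnvironmentInfo("running", 7, 200_000_000L);
        var nodes = List.of(new NodeNetworkInfo("10.0.0.1", 5000),
                            new NodeNetworkInfo("10.0.0.2", 5000));
        long bw = decideBandwidth(env, 100_000_000L); // capped at link capacity
        RoutingConfigPolicy policy = generate(nodes, bw, env);
        System.out.println(policy.bandwidthBps()); // 100000000
    }
}
```

The sending step is omitted here; in the patent it is a separate module that forwards the generated policy to the routing configuration controller.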
  • the network information of each computing node includes an Internet Protocol (IP) address and a port number corresponding to each computing node; and the computing environment information includes the state of the computing task;
  • the generating module is configured to: when the state of the computing task is paused, generate the routing configuration policy information for the computing task according to the network information of the computing nodes and the bandwidth allocated by the decision;
  • when the state of the computing task is running, generate the routing configuration policy information for the computing task according to the network information of the respective computing nodes, the bandwidth allocated by the decision, and the first predetermined policy.
  • the computing environment information further includes a priority of the computing task, where the first predetermined policy specifically includes:
  • the generating module is further configured to: when the state of the computing task is running and the priority of the computing task is higher than or equal to a predetermined threshold, generate the routing configuration policy information for the computing task, according to the network information of the computing nodes and the bandwidth allocated by the decision, when the current routing configuration is performed, wherein the predetermined threshold is used to measure the priority of the computing task;
  • when the state of the computing task is running and the priority of the computing task is lower than the predetermined threshold, generate the routing configuration policy information for the computing task, according to the network information of the computing nodes and the bandwidth allocated by the decision, the next time a routing configuration needs to be performed.
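The state- and priority-based rules above amount to a small decision function: paused tasks, and running tasks at or above the priority threshold, get their routing configuration policy information generated at the current routing configuration, while low-priority running tasks wait for the next one. A minimal sketch in which all names are assumptions:

```java
// Sketch of the first predetermined policy described above: decide whether
// routing configuration policy information is generated at the current
// routing configuration or deferred to the next one.
public class GenerationTiming {

    enum When { CURRENT, NEXT }

    static When decide(String taskState, int priority, int threshold) {
        if (taskState.equals("paused")) {
            return When.CURRENT;               // paused: generate now
        }
        if (taskState.equals("running") && priority >= threshold) {
            return When.CURRENT;               // running, high priority: now
        }
        return When.NEXT;                      // running, low priority: defer
    }

    public static void main(String[] args) {
        System.out.println(decide("paused", 1, 5));  // CURRENT
        System.out.println(decide("running", 7, 5)); // CURRENT
        System.out.println(decide("running", 2, 5)); // NEXT
    }
}
```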
  • an embodiment of the present invention provides a Scheduler platform, including:
  • a receiving module, configured to receive computing task description information submitted by the user equipment (UE) and to provide the computing task description information to an obtaining module, a decomposition module, and a generating module, where the computing task description information includes a user identifier (ID) and required computing node information;
  • the obtaining module is configured to acquire, according to the computing task description information, a computing task corresponding to the user ID;
  • the decomposition module is configured to decompose the computing task into at least one sub-computing task according to the required computing node information
  • the acquiring module is further configured to acquire network information of each computing node corresponding to each of the sub-computing tasks, and provide network information of each computing node to the first sending module;
  • the generating module is configured to generate computing environment information of the computing task according to the computing task description information, and to provide the computing environment information to the first sending module;
  • the first sending module is configured to send the computing environment information and the network information of each computing node to a routing policy decider.
  • the network information of each computing node includes an Internet Protocol (IP) address and a port number corresponding to each computing node; the device further includes a specifying module and a second sending module;
  • the second sending module is configured to send, according to the network information of each computing node, the corresponding sub-computing task to each computing node, and to provide the sent information to the specifying module, where the sent information is used to notify the specifying module that the at least one sub-computing task has been sent;
  • the specifying module is configured to specify a state of the computing task, and provide a state of the computing task to the generating module;
  • the generating module is further configured to generate computing environment information of the computing task according to the computing task description information and the state of the computing task.
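The Scheduler-platform modules above (receive the task description, decompose the task, generate the computing environment information) can be sketched as follows; the one-sub-task-per-node decomposition rule and all names are illustrative assumptions:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the Scheduler platform's flow: take computing task description
// information (user ID plus required computing node info), decompose the
// task into sub-computing tasks, and build the computing environment
// information handed to the routing policy decider.
public class SchedulerPlatform {

    record TaskDescription(String userId, int requiredNodes, long bandwidthBps) {}
    record SubTask(String taskId, int index) {}
    record ComputingEnvironmentInfo(String userId, int requiredNodes,
                                    long bandwidthBps, String state) {}

    // Decomposition module: one sub-computing task per required node (assumed rule).
    static List<SubTask> decompose(String taskId, TaskDescription desc) {
        List<SubTask> subTasks = new ArrayList<>();
        for (int i = 0; i < desc.requiredNodes(); i++) {
            subTasks.add(new SubTask(taskId, i));
        }
        return subTasks;
    }

    // Generating module: computing environment information from the
    // description and the specified task state.
    static ComputingEnvironmentInfo environmentInfo(TaskDescription desc, String state) {
        return new ComputingEnvironmentInfo(desc.userId(), desc.requiredNodes(),
                                            desc.bandwidthBps(), state);
    }

    public static void main(String[] args) {
        var desc = new TaskDescription("user-42", 3, 10_000_000L);
        System.out.println(decompose("task-1", desc).size());        // 3
        System.out.println(environmentInfo(desc, "paused").state()); // paused
    }
}
```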
  • an embodiment of the present invention provides a method for processing a network resource, including:
  • the routing policy decider receives the computing environment information corresponding to the computing task and the network information of each computing node that are transmitted by the Scheduler platform;
  • the routing policy decider generates routing configuration policy information for the computing task according to the network information of the respective computing nodes, the bandwidth allocated by the decision, and the computing environment information;
  • the routing policy decider sends the routing configuration policy information to the routing configuration controller.
  • the network information of each computing node includes an Internet Protocol (IP) address and a port number corresponding to each computing node; and the computing environment information includes the state of the computing task;
  • the routing policy decider generates routing configuration policy information for the computing task according to the network information of the computing nodes, the bandwidth allocated by the decision, and the computing environment information, including:
  • when the state of the computing task is paused, the routing policy decider generates the routing configuration policy information for the computing task according to the network information of the computing nodes and the bandwidth allocated by the decision;
  • when the state of the computing task is running, the routing policy decider generates the routing configuration policy information for the computing task according to the network information of the respective computing nodes, the bandwidth allocated by the decision, and the first predetermined policy.
  • the computing environment information further includes a priority of the computing task, where the first predetermined policy specifically includes:
  • when the state of the computing task is running and the priority of the computing task is higher than or equal to a predetermined threshold, the routing policy decider generates the routing configuration policy information for the computing task, according to the network information of the computing nodes and the bandwidth allocated by the decision, when the current routing configuration is performed, wherein the predetermined threshold is used to measure the priority of the computing task;
  • when the state of the computing task is running and the priority of the computing task is lower than the predetermined threshold, the routing policy decider generates the routing configuration policy information for the computing task, according to the network information of the respective computing nodes and the bandwidth allocated by the decision, the next time a routing configuration needs to be performed.
  • the network information of each computing node includes an Internet Protocol IP address and a port number corresponding to each computing node.
  • the computing environment information includes a priority of the computing task, and the routing policy decider generates routing configuration policy information for the computing task according to the network information of the computing nodes, the bandwidth allocated by the decision, and the computing environment information, including:
  • when the priority of the computing task is higher than or equal to a predetermined threshold, the routing policy decider generates the routing configuration policy information for the computing task according to the network information of the computing nodes and the bandwidth allocated by the decision, wherein the predetermined threshold is used to measure the priority of the computing task;
  • when the priority of the computing task is lower than the predetermined threshold, the routing policy decider generates the routing configuration policy information for the computing task according to the network information of the respective computing nodes, the bandwidth allocated by the decision, and the first predetermined policy.
  • the computing environment information further includes a state of the computing task, where the first predetermined policy specifically includes:
  • when the priority of the computing task is lower than the predetermined threshold and the state of the computing task is paused, the routing policy decider generates the routing configuration policy information for the computing task, according to the network information of the respective computing nodes and the bandwidth allocated by the decision, at the current time or the next time a routing configuration is required;
  • when the priority of the computing task is lower than the predetermined threshold and the state of the computing task is running, the routing policy decider generates the routing configuration policy information for the computing task, according to the network information of the respective computing nodes and the bandwidth allocated by the decision, the next time a routing configuration is required.
  • an embodiment of the present invention provides another method for processing network resources, including:
  • the Scheduler platform decomposes the computing task into at least one sub-computing task according to the required computing node information, and acquires the network information of each computing node corresponding to each of the sub-computing tasks;
  • an embodiment of the present invention provides a network resource processing system, including:
  • a routing policy decider, configured to: receive the computing environment information corresponding to the computing task and the network information of each computing node sent by the Scheduler platform; decide, according to the computing environment information, the bandwidth allocated for the computing task; generate routing configuration policy information for the computing task according to the network information of the computing nodes, the bandwidth allocated by the decision, and the computing environment information; and send the routing configuration policy information to the routing configuration controller;
  • the Scheduler platform is configured to: receive the computing task description information submitted by the user equipment (UE), where the computing task description information includes a user identifier (ID) and required computing node information; obtain, according to the computing task description information, the computing task corresponding to the user ID; decompose the computing task into at least one sub-computing task according to the required computing node information; apply for a computing node for each sub-computing task, and acquire the network information of each computing node corresponding to each sub-computing task; generate the computing environment information of the computing task according to the computing task description information; and send the computing environment information and the network information of each computing node to the routing policy decider;
  • the route configuration controller is configured to receive the route configuration policy information sent by the routing policy decider.
  • An embodiment of the present invention provides a device, a method, and a system for processing a network resource.
  • in the embodiment of the present invention, the Scheduler platform sends the obtained computing environment information and the network information of each computing node to the receiving module of the routing policy decider; the generating module of the routing policy decider generates routing configuration policy information according to that computing environment information and network information; and the sending module of the routing policy decider then sends the routing configuration policy information to the routing configuration controller, so that the switch (network device) ultimately performs routing control on the data according to the routing configuration policy information, thereby preventing communication congestion between network devices when processing network resources.
  • FIG. 1 is a schematic structural diagram of a network resource processing system according to an embodiment of the present disclosure
  • FIG. 2 is a schematic structural diagram of a routing policy decider according to an embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of a routing policy decider according to an embodiment of the present disclosure;
  • FIG. 4 is a schematic structural diagram of a Scheduler platform according to an embodiment of the present disclosure;
  • FIG. 5 is a schematic structural diagram of a Scheduler platform according to an embodiment of the present disclosure;
  • FIG. 6 is a hardware structural diagram of a routing policy decision maker in a network resource processing system according to an embodiment of the present invention.
  • FIG. 7 is a hardware structural diagram of a Scheduler platform in a network resource processing system according to an embodiment of the present disclosure
  • FIG. 8 is a flowchart of a method for processing network resources according to an embodiment of the present invention.
  • FIG. 9 is a flowchart of a method for processing network resources according to an embodiment of the present invention.
  • FIG. 10 is a flowchart of a method for processing network resources according to an embodiment of the present disclosure, in which the routing configuration policy information is configured first and the computing task is processed afterwards;
  • FIG. 11 is a flowchart of a method for processing network resources in which a computing task and a routing configuration policy information are processed in parallel in a method for processing a network resource according to an embodiment of the present invention.
  • the processing device for network resources of the present invention is applicable to a processing system for network resources.
  • the system 10 may include: a JAVA CLIENT 101; a hypertext transfer protocol client (HTTP CLIENT) 102; a Representational State Transfer server (REST SERVER) 103; a Scheduler platform 104; a Resource Manager (RM) 105; a routing policy decider (shown as SpecMod) 106, which includes a Bandwidth Allocation Algorithm (BAA) module 1061, a Network Resource Manager (NRM) module 1062, and a Database (DB) module 1063; an OpenFlow Controller (OFC) 107; an OpenFlow Switch (OFS) 108; a virtual machine (VM) 109; and a node (NODE) 110.
  • the JAVA CLIENT 101 is a JAVA client program for submitting computing tasks to the Scheduler platform 104.
  • the HTTP CLIENT 102 is an HTTP client program for submitting computing tasks to the Scheduler platform 104 through the REST SERVER 103.
  • the REST SERVER 103 is used to encapsulate the capabilities of the Scheduler platform 104 so as to provide a more usable interface to the user.
  • the Scheduler platform 104 is configured to verify the computing task; apply to the RM 105 for the resources (i.e., computing nodes) for processing the computing task; and schedule and monitor the computing task, completing the processing of the result of the computing task.
  • the RM 105 is used to receive the registration of computing nodes and manage the registered computing nodes.
  • managing the computing nodes includes detecting computing node states and assigning computing nodes to the computing tasks in the Scheduler platform 104.
  • SpecMod is the routing policy decider 106.
  • the function of the SpecMod module specifically includes: providing routing policy information that guarantees the communication between the computing nodes of a computing task. For convenience of description, the present invention refers to this module as the routing policy decider.
  • the BAA module 1061 included in the SpecMod module is configured to generate routing configuration policy information for the computing task according to the information related to the computing task (including the state of the computing task, the priority of the computing task, and the like) and the network information of each computing node provided by the Scheduler platform 104.
  • the NRM module 1062 is an internal core module of the SpecMod module, and is used to invoke the BAA module 1061 to generate routing configuration policy information for the computing task.
  • the DB module 1063 is configured to store content related to generating routing configuration policy information.
  • the OFC 107 is configured to control the OFS, and specifically includes: calculating a routing path, maintaining an OFS state, and configuring a routing policy executable by the OFS.
  • the OFS 108, a switching device supporting OpenFlow, may be physical or virtual, and is used to execute the routing and QoS guarantees provided by the OFC 107.
  • the VM 109 is a virtual device for processing computing tasks.
  • the NODE 110 is a JAVA process running on a physical machine (HOST) or on the VM 109; it can connect to the RM 105 to report the process status, or connect to the Scheduler platform 104 to report the execution status of the task.
  • the routing policy decider 106 included in the solution is configured to: receive the computing environment information corresponding to the computing task and the network information of each computing node sent by the Scheduler platform 104; decide, according to the computing environment information, the bandwidth allocated for the computing task; generate routing configuration policy information for the computing task according to the network information of each computing node, the bandwidth allocated by the decision, and the computing environment information; and send the routing configuration policy information to the routing configuration controller.
  • the routing configuration controller may be the OFC 107 in FIG. 1.
  • the Scheduler platform 104 is configured to: receive computing task description information submitted by the UE, where the computing task description information includes a user identifier (ID) and required computing node information; obtain, according to the computing task description information, the computing task corresponding to the user ID; decompose the computing task into the at least one sub-computing task according to the required computing node information; apply for a computing node for each sub-computing task, and acquire the network information of each computing node corresponding to each sub-computing task; and generate the computing environment information of the computing task according to the computing task description information.
  • the computing environment information is necessary for the routing policy decider 106 to generate routing configuration policy information corresponding to the computing task; the computing environment information and the network information of each computing node are sent to the routing policy decider 106.
  • the routing configuration controller 107 is configured to receive routing configuration policy information sent by the routing policy decision maker.
  • the routing configuration policy information may include routing information corresponding to each computing node. Route control can be performed for the computing task through the routing information corresponding to each computing node.
  • the computing task description information may further include bandwidth requirement information;
  • the computing environment information includes at least one of the following: a user ID, required computing node information, bandwidth requirement information, a state of the computing task, and a priority of the computing task.
  • the required computing node information includes the number of computing nodes and/or configuration information of the computing node.
  • the RM 105 is configured to receive a computing node allocation request that is sent by the Scheduler platform 104 and that carries the required computing node information; and allocate a computing node to the computing task according to the number of computing nodes in the computing node allocation request and the configuration information of the computing node, And returning computing node information to the Scheduler platform 104, where the computing node information includes network information of each computing node.
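The allocation exchange above (a request carrying the number of computing nodes and their configuration, answered with the allocated nodes' network information) might look like the following sketch; the matching rule (a minimum-CPU requirement) and all names are assumptions:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the resource manager's allocation step: given a request carrying
// the number of computing nodes (and, here, a minimum-CPU configuration),
// pick matching registered nodes and return their network information.
public class ResourceManager {

    record RegisteredNode(String ip, int port, int cpus) {}
    record AllocationRequest(int nodeCount, int minCpus) {}

    final List<RegisteredNode> registered = new ArrayList<>();

    void register(RegisteredNode node) { registered.add(node); }

    // Returns the allocated nodes' network info, or an empty list if the
    // request cannot be satisfied.
    List<RegisteredNode> allocate(AllocationRequest req) {
        List<RegisteredNode> chosen = new ArrayList<>();
        for (RegisteredNode n : registered) {
            if (n.cpus() >= req.minCpus()) chosen.add(n);
            if (chosen.size() == req.nodeCount()) return chosen;
        }
        return new ArrayList<>(); // not enough matching nodes
    }

    public static void main(String[] args) {
        ResourceManager rm = new ResourceManager();
        rm.register(new RegisteredNode("10.0.0.1", 7000, 8));
        rm.register(new RegisteredNode("10.0.0.2", 7000, 2));
        rm.register(new RegisteredNode("10.0.0.3", 7000, 8));
        System.out.println(rm.allocate(new AllocationRequest(2, 4)).size()); // 2
    }
}
```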
  • the computing node 110 is configured to receive the sub-computing task sent by the Scheduler platform 104.
  • the computing node 110 can be a general term for computing nodes that process all sub-computing tasks, or can be a computing node corresponding to one of the sub-computing tasks.
  • the Scheduler platform 104 is also used to: decompose the computing task into at least one sub-computing task; send the at least one sub-computing task to the corresponding computing node 110; send a computing node allocation request carrying the number of computing nodes and the configuration information of the computing nodes to the computing resource manager 105; and receive the computing node information sent by the computing resource manager 105.
  • the switch 108 is configured to receive the OpenFlow configuration policy information sent by the route configuration controller 107, and perform routing control on the calculation task according to the OpenFlow configuration policy information.
  • the switch may be the OFS 108 in this solution.
  • the routing configuration controller 107 is further configured to convert the received routing configuration policy information into OpenFlow configuration policy information, and send the OpenFlow configuration policy information to the switch 108.
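The conversion step above, turning routing configuration policy information into switch-level configuration, can be illustrated with a simplified match/action flow-rule model. This is not the actual OpenFlow wire protocol or any real controller API; every field and string below is an assumption used only to show the shape of the transformation:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified illustration of the routing configuration controller turning
// per-node routing policy into match/action flow rules for a switch.
// This models the idea only; it is not the OpenFlow protocol itself.
public class RouteConfigController {

    record NodeRoute(String srcIp, String dstIp, int dstPort, long bandwidthBps) {}
    record FlowRule(String match, String action) {}

    static List<FlowRule> toFlowRules(List<NodeRoute> routes) {
        List<FlowRule> rules = new ArrayList<>();
        for (NodeRoute r : routes) {
            // Match traffic between two computing nodes of the task...
            String match = "ip_src=" + r.srcIp() + ",ip_dst=" + r.dstIp()
                         + ",tp_dst=" + r.dstPort();
            // ...and forward it with the decided bandwidth reserved.
            String action = "output:next_hop,rate_limit=" + r.bandwidthBps();
            rules.add(new FlowRule(match, action));
        }
        return rules;
    }

    public static void main(String[] args) {
        var routes = List.of(new NodeRoute("10.0.0.1", "10.0.0.2", 5000, 1_000_000L));
        System.out.println(toFlowRules(routes).get(0).match());
    }
}
```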
  • the routing configuration policy information may further include node bandwidth information related to the routing information corresponding to each computing node.
  • the routing configuration policy information includes routing information corresponding to each computing node.
  • the routing configuration policy information may further include node bandwidth information corresponding to each computing node.
  • the routing configuration policy information generated by the routing policy decider 106 includes inter-node bandwidth information between the computing nodes.
  • resource reservation can be performed for the computing task according to the node bandwidth information corresponding to each computing node or the node bandwidth information between the computing nodes.
  • the resource reservation may include aspects such as QoS reservation, exclusive bandwidth for the routes of the computing task, and the like.
  • the Scheduler platform is further configured to send the sub-computing tasks to the computing nodes 110 and specify the state of the computing task, where the state of the computing task is any one of the following: running, paused, ended, or error; and to generate the computing environment information of the computing task according to the computing task description information and the state of the computing task.
  • the Scheduler platform 104 is further configured to acquire a user level according to the user ID, and generate the priority of the computing task according to the user level corresponding to the user ID and the computing task level.
  • the user level may be included in the user's subscription information or a service level of the user may be assigned based on the user's subscription information.
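One way to combine a subscription-derived user level with a task level into the task priority described above; the weighting is an illustrative assumption, not the patent's formula:

```java
// Illustrative sketch: derive the priority of a computing task from the user
// level (taken from subscription information) and the computing task level.
// The weighting below is an assumption for demonstration only.
public class TaskPriority {

    static int priority(int userLevel, int taskLevel) {
        // Weight the user level more heavily than the task's own level.
        return userLevel * 10 + taskLevel;
    }

    public static void main(String[] args) {
        int threshold = 25;
        int p = priority(3, 2); // 32
        System.out.println(p >= threshold); // true: configure routing now
    }
}
```

The resulting priority is what the routing policy decider later compares against the predetermined threshold.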
• the Scheduler platform 102 is further configured to generate the computing environment information according to the computing task description information, the state of the computing task, and the priority of the computing task.
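The priority derivation described above can be sketched as follows. The patent does not specify a formula, so the weighting below is purely an assumption for illustration; only the inputs (user level and computing task level) come from the text.

```python
# Sketch: deriving a computing-task priority from the user level and the
# computing-task level. The linear weighting is an illustrative
# assumption; the original disclosure only says both inputs are used.

def task_priority(user_level, task_level, user_weight=2):
    # Higher user level and higher task level yield a higher priority.
    return user_weight * user_level + task_level

# Under this weighting, a level-3 user with a level-2 task outranks a
# level-2 user with a level-3 task.
p_high_user = task_priority(3, 2)
p_low_user = task_priority(2, 3)
```

The resulting priority is then carried in the computing environment information and later compared against the predetermined threshold.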
• the Scheduler platform 102 is further configured to change the state of the computing task to obtain the changed state of the computing task, where the changed state of the computing task is any one of the following: running, paused, ended, or error;
• the Scheduler platform sends the changed state of the computing task to the routing policy decider 106, and the changed state is used by the routing policy decider to determine whether to release the resources corresponding to the routing configuration policy information of the computing task.
• the change of the computing task state may be based on a user instruction, such as pause or stop; it may also be based on the condition of the computing node, such as completion of the computing node's work or a computing node error; or it may be based on the execution of the computing task, such as a computing task execution error.
• the routing policy decider 106 is further configured to receive the changed state of the computing task sent by the Scheduler platform 104; when the changed state of the computing task is from running to ended or from running to error, the routing policy decider 106 releases, according to the second predetermined policy, the resources corresponding to the routing configuration policy information of the computing task.
• the second predetermined policy may be a set time interval (the time interval is greater than or equal to 0, for example releasing immediately or after 10 s), or release at the next time the routing policy decider 106 performs routing configuration (that is, when the routing policy decider 106 configures new routing configuration policy information, the resources corresponding to the routing configuration policy information of this computing task are released).
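The second predetermined policy can be sketched in code. This is an illustrative model under assumed names (`ReleasePolicy`, `on_final_state`, `due_now`), not the patent's implementation; it only captures the two variants described above: release after a fixed interval, or deferred release at the next routing configuration.

```python
# Sketch of the "second predetermined policy": a finished or errored
# task's resources are released either after a time interval >= 0 s, or
# only when the next routing configuration happens. Class and method
# names are illustrative assumptions.

import time

class ReleasePolicy:
    def __init__(self, interval_s=None):
        # interval_s None means "defer until the next routing configuration".
        self.interval_s = interval_s
        self.pending = []  # (task_id, time the final state was observed)

    def on_final_state(self, task_id, now=None):
        self.pending.append((task_id, now if now is not None else time.time()))

    def due_now(self, now):
        if self.interval_s is None:
            return []  # deferred variant: see on_next_routing_configuration()
        return [t for t, seen in self.pending if now - seen >= self.interval_s]

    def on_next_routing_configuration(self):
        released = [t for t, _ in self.pending]
        self.pending.clear()
        return released

policy = ReleasePolicy(interval_s=10)
policy.on_final_state("task-1", now=0.0)
```

With a 10 s interval, "task-1" becomes releasable only once 10 s have elapsed since its final state was observed.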
  • network devices such as switches, routers, etc.
• in the embodiment of the invention, the Scheduler platform sends the obtained computing environment information and the network information of each computing node to the routing policy decider, and the routing policy decider generates the routing configuration policy information according to the computing environment information and the network information of each computing node.
  • the routing control and/or resource reservation takes into account the network state and the bandwidth requirement of the computing task in advance, thereby preventing communication congestion between network devices when processing network resources.
• FIG. 2 is a schematic diagram of a network resource processing apparatus according to an embodiment of the present invention, and is specifically a schematic structural diagram of a routing policy decider 20.
  • the routing policy decider 20 in FIG. 2 includes a receiving module 201, a decision bandwidth module 202, a generating module 203, and a sending module 204.
  • the routing policy decider 20 in FIG. 2 may be the routing policy decider 106 in FIG.
• the receiving module 201 is configured to receive the computing environment information corresponding to the computing task and the network information of each computing node transmitted by the Scheduler platform, provide the computing environment information to the decision bandwidth module 202 and the generating module 203, and provide the network information of each computing node to the generating module 203.
• the receiving module 201 may receive the computing environment information corresponding to the computing task and the network information of each computing node corresponding to the computing task; or the receiving module 201 may receive the computing environment information corresponding to the computing task and the network information of all computing nodes.
• the decision bandwidth module 202 is configured to decide the bandwidth allocated for the computing task according to the computing environment information, and provide the decided bandwidth to the generating module 203.
• the generating module 203 is configured to generate routing configuration policy information for the computing task according to the network information of each computing node, the decided bandwidth, and the computing environment information, and provide the routing configuration policy information to the sending module 204.
  • the generating module 203 may generate routing configuration policy information for the computing task according to the network information of each computing node, the bandwidth allocated by the decision, and the computing environment information.
• the network topology state may also be referred to when generating the routing configuration policy information, where the network topology state is the physical layout in which the transmission medium interconnects the various devices.
  • the sending module 204 is configured to send routing configuration policy information to the routing configuration controller.
• the solution generates routing configuration policy information according to the computing environment information obtained from the Scheduler platform, the network information of each computing node, and the bandwidth decided for the computing task.
• the switch (a network device) finally performs routing control according to the OpenFlow configuration policy information converted by the routing configuration controller, thereby preventing communication congestion between network devices when processing network resources.
• the present invention further provides another routing policy decider 30.
  • the device 30 further includes a release resource module 205.
• the release resource module 205 is configured to, when the changed state of the computing task is from running to ended or from running to error, release the resources corresponding to the routing configuration policy information of the computing task according to the second predetermined policy.
• the change of the computing task state may be based on a user instruction, such as pause or stop; it may also be based on the condition of the computing node, such as completion of the computing node's work or a computing node error; or it may be based on the execution of the computing task, such as a computing task execution error.
• the receiving module 201 is further configured to receive the changed state of the computing task sent by the Scheduler platform, and provide the changed state to the release resource module 205, where the changed state of the computing task is any one of the following: running, paused, ended, or error.
• the second predetermined policy may be a set time interval (the time interval is greater than or equal to 0, for example releasing immediately or after 10 s), or release at the next time the generating module 203 performs routing configuration (that is, when the generating module 203 configures routing configuration policy information for any other computing task, the resources corresponding to the routing configuration policy information of this computing task are released).
• the computing environment information further includes bandwidth requirement information, where the bandwidth requirement information includes required bandwidth information and/or a computing task type; the decision bandwidth module 202 is further configured to decide the bandwidth allocated for the computing task according to at least one item of the bandwidth requirement information.
• the decision bandwidth module 202 may also acquire the user level according to the user ID in the computing environment information, and decide the bandwidth allocated for the computing task according to the user level.
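The two bandwidth-decision inputs above (explicit bandwidth requirement information, or a user-level lookup) can be sketched as follows. The lookup table and fallback values are illustrative assumptions; the patent does not define concrete levels or amounts.

```python
# Sketch of the decision bandwidth module's choice: use an explicit
# bandwidth requirement from the computing environment information if
# present, otherwise derive a default from the user level. The table
# values and level names are assumptions for illustration.

LEVEL_BANDWIDTH_MBPS = {"gold": 9, "silver": 6, "bronze": 3}

def decide_bandwidth(env):
    """env: dict holding (part of) the computing environment information."""
    required = env.get("required_bandwidth_mbps")
    if required is not None:
        return required
    # Fall back to a user-level-based default (bronze if unknown).
    return LEVEL_BANDWIDTH_MBPS.get(env.get("user_level"), 3)

bw_explicit = decide_bandwidth({"required_bandwidth_mbps": 8})
bw_by_level = decide_bandwidth({"user_level": "silver"})
```

The decided bandwidth is then handed to the generating module along with the network information of each computing node.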
• the generating module 203 in the routing policy decider generates routing configuration policy information for the computing task according to the different information included in the computing environment information, in the following ways:
• the first way, in the case where the computing environment information includes the state of the computing task:
• the generating module 203 is configured to: when the state of the computing task is paused, generate routing configuration policy information for the computing task according to the network information of each computing node and the decided bandwidth, and apply the generated routing configuration policy information when routing configuration is performed during the current execution of the task; when the state of the computing task is running, generate routing configuration policy information for the computing task according to the network information of each computing node and the decided bandwidth and according to the first predetermined policy.
  • the first predetermined policy is a predetermined time or a predetermined configuration manner.
  • the computing environment information further includes a priority of the computing task.
• the first predetermined policy specifically includes:
• when the state of the computing task is running and the priority of the computing task is higher than or equal to a predetermined threshold, the generating module 203 generates routing configuration policy information for the computing task this time according to the network information of each computing node and the decided bandwidth, where the predetermined threshold is used to measure the priority of the computing task; when the state of the computing task is running and the priority of the computing task is lower than the predetermined threshold, the generating module 203 waits until routing configuration needs to be performed next time and, if the computing task is still in the running state, generates routing configuration policy information for the computing task according to the network information of each computing node and the decided bandwidth. Here, "routing configuration needs to be performed next time" may mean that the routing policy decider needs to generate a routing configuration policy for another computing task, or that a predetermined time period is reached, etc. (this interpretation also applies to other embodiments).
• when the generating module 203 waits until routing configuration needs to be performed next time before generating routing configuration policy information for the computing task, it needs to determine whether the computing task has finished executing. If the computing task is still executing, the generating module 203 incorporates the related information of the computing task (the network information of each computing node, etc.) into the new routing configuration policy information generated at that time; if the computing task has finished, the generating module 203 does not need to consider the related information of the computing task when generating the new routing configuration policy.
• the range of the predetermined threshold is not limited in the present scheme; the predetermined threshold is set based on the division of the importance levels (priorities) of the computing tasks.
• the second way, in the case where the computing environment information includes the priority of the computing task: the generating module 203 is configured to, when the priority of the computing task is higher than or equal to a predetermined threshold, generate routing configuration policy information for the computing task according to the network information of each computing node and the decided bandwidth, where the predetermined threshold is used to measure the priority of the computing task; when the priority of the computing task is lower than the predetermined threshold, generate routing configuration policy information for the computing task according to the network information of each computing node, the decided bandwidth, and the first predetermined policy.
  • the first predetermined policy in the second mode is the same as the first predetermined policy in the first mode.
• the first predetermined policy specifically includes the following (further, when the computing environment information also includes the state of the computing task):
• when the priority of the computing task is lower than the predetermined threshold and the state of the computing task is paused, the generating module 203 generates routing configuration policy information for the computing task according to the network information of each computing node and the decided bandwidth, either this time or when routing configuration needs to be performed next time; whether the routing configuration policy information is generated this time or next time may be configured in advance in the routing policy decider.
• when the priority of the computing task is lower than the predetermined threshold and the state of the computing task is running, the generating module 203 generates routing configuration policy information for the computing task according to the network information of each computing node and the decided bandwidth when routing configuration needs to be performed next time.
• when the computing environment information includes both the state and the priority of the computing task, the generating module 203 may give precedence to different conditions.
• when the state of the computing task is given precedence: if the state of the computing task is paused, the generating module 203 performs the operation of generating the routing configuration policy information corresponding to the computing task this time; if the state of the computing task is running, the generating module 203 considers whether the priority of the computing task is higher than or equal to the predetermined threshold, performing the operation this time when it is higher than or equal to the predetermined threshold, and performing the operation when routing configuration needs to be performed next time when it is lower than the predetermined threshold.
• when the priority of the computing task is given precedence: if the priority of the computing task is higher than or equal to the predetermined threshold, the generating module 203 performs the operation of generating the routing configuration policy information corresponding to the computing task this time; if the priority of the computing task is lower than the predetermined threshold, the generating module 203 considers whether the state of the computing task is running or paused, performing the operation according to a predetermined time (this time or after a certain time) when the state is paused, and performing the operation according to a predetermined configuration manner (at the next routing configuration) when the state is running.
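The two precedence orderings described above can be condensed into one decision function. This is a sketch only; the function name, parameter names, and the labels "now" / "next" / "scheduled" are illustrative assumptions standing in for "this time", "at the next routing configuration", and "at a predetermined time".

```python
# Sketch of when the generating module produces routing configuration
# policy information, following the two orderings in the text: state
# considered first, or priority considered first. Labels are assumed.

def when_to_configure(state, priority, threshold, state_first=True):
    if state_first:
        if state == "paused":
            return "now"
        # state == "running": fall back to the priority comparison.
        return "now" if priority >= threshold else "next"
    # Priority considered first.
    if priority >= threshold:
        return "now"
    # Low priority: a paused task may be configured at a predetermined
    # time, while a running task waits for the next routing configuration.
    return "scheduled" if state == "paused" else "next"
```

For example, a paused task is configured immediately under the state-first ordering, but only at a predetermined time under the priority-first ordering when its priority is below the threshold.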
• the first method is: after the Scheduler platform sends the decomposed sub-computing tasks to the corresponding computing nodes, the generating module 203 in the routing policy decider first configures the routing configuration policy information corresponding to the computing task, and after the configuration is completed informs the Scheduler platform, so that the computing nodes then begin processing their respective sub-computing tasks.
• specifically, after receiving a successful configuration response from the switch, the routing configuration controller informs the Scheduler platform that the configuration is complete.
• the routing policy decider sends the generated routing configuration policy information to the routing configuration controller, so that the routing configuration controller converts the routing configuration policy information into OpenFlow configuration policy information and sends the OpenFlow configuration policy information to the switch, and the switch performs routing control and/or resource reservation for the computing task according to the OpenFlow configuration policy information.
• the generating module 203 in the routing policy decider generates dynamic routing configuration policy information according to the computing environment information, the network information of each computing node, and the real-time network topology information; that is, the switch can perform routing control and/or resource reservation for the computing task according to the dynamically generated routing configuration policy, so as to avoid communication congestion when transmitting data.
  • the first method is applicable when the user level is high or the priority of the computing task is high.
• the second method is: after the Scheduler platform sends the decomposed sub-computing tasks to the corresponding computing nodes, the computing nodes begin processing their respective sub-computing tasks, and the generating module 203 in the routing policy decider determines, based on the content of the computing environment information, when to generate the routing configuration policy information corresponding to the computing task.
• after the generating module 203 generates the routing configuration policy information, the subsequent operations are the same as in the first method.
• the difference is that in the first network resource processing method, the computing nodes start processing their respective sub-computing tasks only after the generating module 203 has configured the routing configuration policy information, and the switch then transmits data according to the dynamically generated OpenFlow configuration policy information.
• in the second network resource processing method, each computing node executes its received sub-computing task in parallel with the allocation of routing configuration policy information to the computing task. Since it takes a certain delay for the routing configuration policy information to be loaded and take effect (how long depends on the decision of the generating module 203 in the routing policy decider), before the routing configuration policy allocated to the computing task takes effect, any communication required by the executing computing task is routed and transmitted according to the routing configuration policy pre-configured in the switch (in this case, the data communication of the computing task has no QoS guarantee); only after the routing configuration policy information allocated to the computing task takes effect does the switch perform routing control and/or resource reservation for the computing task according to the dynamically generated OpenFlow configuration policy information (in this case, the data communication of the computing task is guaranteed by QoS).
• the second method is applicable when the user level is low or the priority of the computing task is low; it ensures the network communication of computing tasks while minimizing frequent changes to the network, thereby improving the utilization of network resources.
  • the generating module 203 generates routing configuration policy information for the computing task according to the network information of each computing node, the bandwidth allocated by the decision, and the computing environment information. Specifically include:
• the routing configuration policy information may include routing information corresponding to each computing node; further, the routing configuration policy information may also include node bandwidth information related to the routing information corresponding to each computing node.
• when the network information includes an IP address, a port number, and a MAC address corresponding to each computing node, the generating module 203 generates routing configuration policy information including node bandwidth information corresponding to each computing node.
• when the network information includes the IP address, the port number, the MAC address, and the communication information between the computing nodes, the generating module 203 generates routing configuration policy information including the inter-node bandwidth information between the computing nodes.
• for example, when the network information does not include the communication information between the computing nodes, the routing configuration policy information generated by the generating module 203 includes: 6M reserved for the communication port of node A, 6M reserved for the communication port of node B, and 6M reserved for the communication port of node C.
• when the network information includes the communication information between the computing nodes, for example: node A interacts with node C, and node B interacts with node C, the routing configuration policy information generated by the generating module 203 includes: 9M (or less than 9M, according to its own policy) reserved for the communication port of node A, 9M (or less than 9M, according to its own policy) reserved for the communication port of node B, and 9M reserved for the communication port of node C.
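The per-port reservation in the example above can be sketched generically. This is an illustration under assumptions, not the patent's algorithm: it reserves the decided bandwidth on every port that participates in at least one communication pair, and does not attempt to reproduce the exact 6M/9M figures, which the decider may also reduce "according to its own policy".

```python
# Sketch: turning inter-node communication pairs into per-port bandwidth
# reservations. Every node appearing in at least one pair gets a
# reservation of the decided bandwidth; the function name and shapes
# are illustrative assumptions.

def port_reservations(pairs, decided_mbps):
    """pairs: iterable of (node_x, node_y) communication relationships."""
    nodes = set()
    for x, y in pairs:
        nodes.update((x, y))
    return {node: decided_mbps for node in sorted(nodes)}

# Node A talks to C, and B talks to C: all three ports get a reservation.
res = port_reservations([("A", "C"), ("B", "C")], decided_mbps=9)
```

A refinement could scale each node's reservation by how many pairs it participates in; the sketch keeps the flat per-port form for clarity.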
• FIG. 4 is a schematic diagram of a network resource processing apparatus according to another embodiment of the present invention, and is specifically a schematic structural diagram of a Scheduler platform.
• the Scheduler platform 40 in FIG. 4 includes: a receiving module 401, an obtaining module 402, a decomposition module 403, a generating module 404, and a first sending module 405.
• the Scheduler platform 40 in FIG. 4 may be the Scheduler platform 104 in FIG.
• the receiving module 401 is configured to receive the computing task description information submitted by the UE, and provide the computing task description information to the obtaining module 402, the decomposition module 403, and the generating module 404, where the computing task description information includes the user ID and the required computing node information.
  • the obtaining module 402 is configured to obtain a computing task corresponding to the user ID according to the computing task description information.
  • the decomposition module 403 is configured to decompose the computing task into at least one sub-computing task according to the required computing node information.
  • the obtaining module 402 is further configured to acquire network information of each computing node corresponding to each sub-calculation task, and provide network information of each computing node to the first sending module 405.
• the generating module 404 is configured to generate the computing environment information of the computing task according to the computing task description information, and provide the computing environment information to the first sending module 405, where the computing environment information is information necessary for the routing policy decider to generate the routing configuration policy information corresponding to the computing task.
  • the first sending module 405 is configured to send the computing environment information and the network information of each computing node to the routing policy decision maker.
• the embodiment of the present invention provides another Scheduler platform 50.
  • the apparatus 50 further includes a second sending module 406, a specifying module 407, and a changing module 408.
• the obtaining module 402 includes a sending unit 5021 and a receiving unit 5022.
  • the second sending module 406 is further configured to send the corresponding sub-calculation task to the computing node according to the network information of each computing node, and provide the sent information to the specifying module 407, where the sent information is used to notify the specifying module 407 that at least one has been sent. Sub-calculation task.
  • the specifying module 407 is configured to specify a state of the computing task and provide the state of the computing task to the generating module 404.
  • the generating module 404 generates computing environment information of the computing task according to the computing task description information and the state of the computing task.
• the computing task description information further includes a computing task level.
  • the obtaining module 402 is further configured to acquire a user level according to the user ID, and provide the user level to the generating module 404.
  • the generation module 404 then generates a priority for the computing task based on the user level and the computing task level.
  • the generating module 404 generates the computing environment information according to the computing task description information and the priority of the computing task.
  • the generating module 404 is further configured to generate the computing environment information according to the computing task description information, the status of the computing task, and the priority of the computing task.
• the computing task description information may include a user ID, required computing node information, bandwidth requirement information (optional), computing task acquisition information (optional), and a computing task level (optional).
  • the computing environment information includes at least one of the following: a user ID, required computing node information, bandwidth requirement information, a status of a computing task, and a priority of a computing task.
  • the obtaining module 402 obtains the computing task corresponding to the user ID according to the computing task description information, and specifically includes two methods:
• in the first method, the obtaining module 402 receives the computing task data packet sent by the UE, where the computing task data packet includes the computing task description information, and the obtaining module 402 parses the computing task data packet according to the computing task description information to obtain the computing task.
• in the second method, the obtaining module 402 acquires the computing task according to the computing task acquisition address or acquisition manner in the computing task description information.
  • the decomposition module 403 decomposes the computing task into at least one sub-computing task according to the required computing node information, and specifically includes:
• the required computing node information includes the configuration information of the computing nodes and the number of computing nodes, where the configuration information of a computing node may include a hardware configuration (memory, CPU, network, etc.) and a software configuration (operating system type, application libraries, etc.). It can be understood that the decomposition module 403 decomposes the computing task into at least one sub-computing task according to the number of computing nodes.
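The decomposition by node count can be sketched as follows. This is a minimal illustration under assumptions: the patent does not say how work is partitioned, so the even split over abstract "work items" (and the names `decompose`, `work_items`) are hypothetical.

```python
# Sketch: decomposing a computing task into sub-computing tasks according
# to the number of computing nodes. Splitting by near-even work-item
# ranges is an assumption for illustration only.

def decompose(work_items, node_count):
    """Split a list of work items into node_count sub-computing tasks."""
    if node_count <= 0:
        raise ValueError("node_count must be positive")
    base, extra = divmod(len(work_items), node_count)
    subtasks, start = [], 0
    for i in range(node_count):
        # The first `extra` sub-tasks absorb one leftover item each.
        size = base + (1 if i < extra else 0)
        subtasks.append(work_items[start:start + size])
        start += size
    return subtasks

subtasks = decompose(list(range(10)), 3)
```

Each resulting sub-computing task would then be sent to the computing node allocated by the computing resource manager.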
• the sending unit 5021 in the obtaining module 402 sends to the computing resource manager a computing node allocation request carrying the number of computing nodes and the configuration information of the computing nodes.
• the computing resource manager configures the computing node information for the computing task according to the content of the received computing node allocation request, and sends the configured computing node information to the Scheduler platform.
  • the receiving unit 5022 receives the computing node information sent by the computing resource manager.
  • the computing node information includes network information of each computing node.
  • the network information of each computing node includes one or more of the following information: a remote access address, an Internet IP address, a port number, and a MAC address corresponding to each computing node.
• the change module 408 is configured to change the state of the computing task, obtain the changed state of the computing task, and provide the changed state to the sending module 405, where the changed state of the computing task is any one of the following: running, paused, ended, or error. The sending module 405 then sends the changed state of the computing task to the routing policy decider, and the changed state is used by the routing policy decider to determine whether to release the resources corresponding to the routing configuration policy information of the computing task.
• the change of the computing task state may be based on a user instruction, such as pause or stop; it may also be based on the condition of the computing node, such as completion of the computing node's work or a computing node error; or it may be based on the execution of the computing task, such as a computing task execution error.
• in this solution, the generating module of the Scheduler platform generates the computing environment information, and the first sending module sends the computing environment information and the network information of each computing node obtained by the obtaining module to the routing policy decider, so that the routing policy decider generates routing configuration policy information according to the computing environment information and the network information of each computing node and sends the routing configuration policy information to the routing configuration controller; the switch (a network device) finally performs routing control and/or resource reservation according to the OpenFlow configuration policy information converted by the routing configuration controller, thereby preventing communication congestion between network devices when processing network resources.
  • FIG. 6 is a schematic diagram of a hardware structure of a routing policy decision maker.
  • the routing policy decider may include a memory 601, a transceiver 602, a processor 603, and a bus 604, wherein the memory 601, the transceiver 602, and the processor 603 are communicatively coupled by a bus 604.
  • the memory 601 may be a read only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM).
  • the memory 601 can store an operating system and other applications.
  • the program code for implementing the technical solution provided by the embodiment of the present invention is stored in the memory 601 and executed by the processor 603.
  • the transceiver 602 is used for communication between the device and other devices or communication networks such as, but not limited to, Ethernet, Radio Access Network (RAN), Wireless Local Area Network (WLAN), and the like.
  • the processor 603 can be a general-purpose central processing unit (CPU), a microprocessor, an application specific integrated circuit (ASIC), or one or more integrated circuits for executing related programs.
  • Bus 604 can include a path for communicating information between various components of the device, such as memory 601, transceiver 602, and processor 603.
  • Although FIG. 6 only shows the memory 601, the transceiver 602, the processor 603, and the bus 604, those skilled in the art should understand that, in a specific implementation process, the device also includes other components necessary for normal operation. Moreover, depending on particular needs, hardware components that implement other functions may also be included.
  • The transceiver 602 in the apparatus is configured to receive the computing environment information corresponding to the computing task and the network information of each computing node, both sent by the Scheduler platform, and to provide the computing environment information and the network information of each computing node to the processor 603.
  • The processor 603 is connected to the memory 601 and the transceiver 602, and is configured to: decide the bandwidth to be allocated for the computing task according to the computing environment information; generate routing configuration policy information for the computing task according to the network information of each computing node, the bandwidth allocated by the decision, and the computing environment information; and provide the routing configuration policy information to the transceiver 602.
  • the transceiver 602 is configured to send routing configuration policy information to the routing configuration controller.
  • The processor 603 is further configured to release, according to the second predetermined policy, the resources corresponding to the routing configuration policy information of the computing task when the state of the computing task changes from running to ended or from running to error.
  • The transceiver 602 is further configured to receive the changed state of the computing task sent by the Scheduler platform and provide the changed state to the processor 603, where the changed state of the computing task is any one of the following: running, paused, ended, or error.
  • The computing environment information may further include bandwidth requirement information, where the bandwidth requirement information includes required bandwidth information and/or a computing task type. The processor 603 is further configured to decide the bandwidth allocated for the computing task according to at least one item of the bandwidth requirement information.
  • Alternatively, the processor 603 acquires the user level according to the user ID in the computing environment information, and then decides the bandwidth allocated for the computing task according to the user level.
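  • The two bandwidth-decision inputs described above can be illustrated with a short sketch. All field names, the user-level table, and the per-task-type defaults are assumptions for illustration, not part of the claimed scheme:

```python
# Hypothetical mapping from user level to a bandwidth ceiling (Mbit/s).
USER_LEVEL_BANDWIDTH = {"gold": 1000, "silver": 500, "bronze": 100}

def decide_bandwidth(env_info, user_levels):
    """Decide the bandwidth (Mbit/s) to allocate for a computing task.

    env_info: computing environment information (assumed dict layout).
    user_levels: assumed lookup table mapping user ID -> user level.
    """
    bw_req = env_info.get("bandwidth_requirement", {})
    # Prefer explicitly required bandwidth when it is present.
    if "required_bandwidth" in bw_req:
        return bw_req["required_bandwidth"]
    # Otherwise derive a value from the computing task type.
    task_type_defaults = {"data_intensive": 800, "compute_intensive": 100}
    if "task_type" in bw_req:
        return task_type_defaults.get(bw_req["task_type"], 100)
    # Fall back to the user level looked up via the user ID.
    level = user_levels.get(env_info["user_id"], "bronze")
    return USER_LEVEL_BANDWIDTH[level]
```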
  • Depending on the content of the computing environment information, the manner in which the processor 603 generates the routing configuration policy information for the computing task also differs, as follows:
  • The first manner: the computing environment information includes the state of the computing task.
  • The processor 603 is specifically configured to: when the state of the computing task is paused, generate the routing configuration policy information for the computing task according to the network information of each computing node and the bandwidth allocated by the decision; when the state of the computing task is running, generate the routing configuration policy information for the computing task according to the network information of each computing node and the bandwidth allocated by the decision, and according to the first predetermined policy.
  • In this case, the computing environment information further includes the priority of the computing task, and the first predetermined policy specifically includes:
  • When the state of the computing task is running and the priority of the computing task is higher than or equal to a predetermined threshold, the processor 603 generates the routing configuration policy information for the computing task when performing the routing configuration this time, according to the network information of each computing node and the bandwidth allocated by the decision, where the predetermined threshold is used to measure the priority of the computing task. When the state of the computing task is running and the priority of the computing task is lower than the predetermined threshold, the processor 603 generates the routing configuration policy information for the computing task the next time routing configuration needs to be performed, according to the network information of each computing node and the bandwidth allocated by the decision.
  • The range of the predetermined threshold is not limited in this scheme; the predetermined threshold is set according to the division of the importance levels (priorities) of the computing tasks.
  • The second manner: the computing environment information includes the priority of the computing task. The processor 603 is specifically configured to: when the priority of the computing task is higher than or equal to a predetermined threshold, generate the routing configuration policy information for the computing task according to the network information of each computing node and the bandwidth allocated by the decision, where the predetermined threshold is used to measure the priority of the computing task; when the priority of the computing task is lower than the predetermined threshold, generate the routing configuration policy information for the computing task according to the network information of each computing node and the bandwidth allocated by the decision, and according to the first predetermined policy.
  • The first predetermined policy in the second manner is the same as that in the first manner. In this case, the computing environment information further includes the state of the computing task, and the first predetermined policy specifically includes:
  • When the priority of the computing task is lower than the predetermined threshold and the state of the computing task is paused, the processor 603 generates the routing configuration policy information for the computing task either this time or the next time routing configuration needs to be performed, according to the network information of each computing node and the bandwidth allocated by the decision. When the priority of the computing task is lower than the predetermined threshold and the state of the computing task is running, the processor 603 generates the routing configuration policy information for the computing task the next time routing configuration needs to be performed, according to the network information of each computing node and the bandwidth allocated by the decision.
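  • The first predetermined policy reduces to a timing decision: generate the routing configuration policy information this time, or defer it to the next routing-configuration round. A minimal sketch, assuming string state names and a numeric priority where a larger value means a more important task:

```python
def config_timing(state, priority, threshold):
    """Return when routing configuration policy information should be
    generated for a computing task: 'now' or 'next_round'.

    Assumes a numeric priority where higher means more important.
    """
    if state == "paused":
        # A paused task is not yet contending for the network, so its
        # routing configuration can be generated immediately.
        return "now"
    if state == "running":
        # A running high-priority task is reconfigured this time;
        # a low-priority one waits for the next configuration round.
        return "now" if priority >= threshold else "next_round"
    raise ValueError("unexpected task state: %s" % state)
```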
  • The processor 603 generates the routing configuration policy information for the computing task according to the network information of each computing node, the bandwidth allocated by the decision, and the computing environment information, which specifically includes:
  • the routing configuration policy information may include routing information corresponding to each computing node. Further, the routing configuration policy information may further include node bandwidth information related to routing information corresponding to each computing node.
  • When the network information includes the IP address, the port number, and the MAC address corresponding to each computing node, the processor 603 generates routing configuration policy information that includes the node bandwidth information corresponding to each computing node.
  • When the network information includes the IP address, the port number, the MAC address, and the communication information between the computing nodes, the processor 603 generates routing configuration policy information that includes the inter-node bandwidth information between the computing nodes.
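  • The two cases above differ only in whether per-node or inter-node bandwidth information appears in the generated policy. A hypothetical sketch, in which the dictionary keys and the message layout are illustrative assumptions:

```python
def build_routing_policy(network_info, bandwidth, comm_pairs=None):
    """Build routing configuration policy information.

    network_info: per-node dicts with 'ip', 'port', 'mac' (assumed keys).
    comm_pairs: optional (src_ip, dst_ip) pairs; present when the network
    information carries communication info between the computing nodes.
    """
    policy = {"routes": [{"ip": n["ip"], "port": n["port"], "mac": n["mac"]}
                         for n in network_info]}
    if comm_pairs:
        # Communication info present: reserve bandwidth per node pair.
        policy["inter_node_bandwidth"] = [
            {"src": s, "dst": d, "bandwidth": bandwidth}
            for s, d in comm_pairs]
    else:
        # Otherwise reserve a bandwidth share per node.
        policy["node_bandwidth"] = [
            {"ip": n["ip"], "bandwidth": bandwidth} for n in network_info]
    return policy
```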
  • By performing routing control and/or resource reservation for each computing task, this scheme can prevent communication congestion between network devices when processing network resources.
  • FIG. 7 is a schematic diagram of a hardware structure of a Scheduler platform.
  • the Scheduler platform can include a memory 701, a transceiver 702, a processor 703, and a bus 704.
  • The memory 701, the transceiver 702, and the processor 703 are communicatively connected by the bus 704.
  • Although FIG. 7 only shows the memory 701, the transceiver 702, the processor 703, and the bus 704, those skilled in the art should understand that, in a specific implementation process, the device also includes other components necessary for normal operation. Moreover, depending on particular needs, hardware components that implement other functions may also be included.
  • The transceiver 702 in the apparatus is configured to receive the computing task description information submitted by the UE and provide the computing task description information to the processor 703, where the computing task description information includes a user ID and required computing node information.
  • The processor 703 is connected to the memory 701 and the transceiver 702, and is specifically configured to: obtain the computing task corresponding to the user ID according to the computing task description information; decompose the computing task into at least one sub-computing task according to the required computing node information; acquire the network information of each computing node corresponding to each sub-computing task, and provide the network information of each computing node to the transceiver 702; and generate the computing environment information of the computing task according to the computing task description information, and provide the computing environment information to the transceiver 702.
  • The computing environment information is information necessary for the routing policy decider to generate the routing configuration policy information corresponding to the computing task.
  • The transceiver 702 is configured to send the computing environment information and the network information of each computing node to the routing policy decider.
  • The routing policy decider generates the routing configuration policy information according to the computing environment information, the network information of each computing node, and the network topology information acquired by the routing policy decider itself.
  • The transceiver 702 is further configured to send the corresponding sub-computing task to each computing node and provide the sending information to the processor 703, where the sending information is used to notify the processor 703 that the at least one sub-computing task has been sent.
  • Correspondingly, the processor 703 is further configured to specify the state of the computing task, and to generate the computing environment information of the computing task according to the computing task description information and the state of the computing task.
  • The computing task description information further includes a computing task level.
  • The processor 703 is further configured to acquire the user level according to the user ID, and then generate the priority of the computing task according to the user level and the computing task level.
  • the user level may be the user's subscription information or a service level assigned to the user based on the user's subscription information.
  • the processor 703 generates the computing environment information according to the calculation task description information and the priority of the computing task.
  • the processor 703 is further configured to generate the computing environment information according to the computing task description information, the status of the computing task, and the priority of the computing task.
  • The computing task description information may include a user ID, required computing node information, bandwidth requirement information (optional), computing task acquisition information (optional), and a computing task level (optional).
  • the computing environment information includes at least one of the following: a user ID, required computing node information, bandwidth requirement information, a status of a computing task, and a priority of a computing task.
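  • The two information structures above can be sketched as follows; the field names and types are illustrative assumptions, and the optional items default to empty:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TaskDescription:
    """Computing task description information submitted by the UE."""
    user_id: str
    required_nodes: dict                          # node count and configuration
    bandwidth_requirement: Optional[dict] = None  # optional
    task_fetch_info: Optional[str] = None         # optional acquisition address/manner
    task_level: Optional[int] = None              # optional computing task level

@dataclass
class ComputingEnvironmentInfo:
    """Computing environment information generated by the Scheduler platform."""
    user_id: str
    required_nodes: dict
    bandwidth_requirement: Optional[dict] = None
    state: str = "paused"                         # running, paused, ended, or error
    priority: Optional[int] = None
```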
  • processor 703 decomposes the computing task into at least one sub-computing task according to the required computing node information, which specifically includes:
  • The required computing node information includes the configuration information of the computing nodes and the number of computing nodes, where the configuration information of a computing node may include a hardware configuration (memory, CPU, network, and so on) and a software configuration (operating system type, application libraries, and so on). It can be understood that the processor 703 decomposes the computing task into at least one sub-computing task according to the number of computing nodes.
  • the transceiver 702 sends a computing node allocation request carrying the number of computing nodes and the configuration information of the computing node to the computing resource manager.
  • The computing resource manager configures the computing node information for the computing task according to the content of the received computing node allocation request and sends the configured computing node information to the Scheduler platform; the transceiver 702 then receives the computing node information sent by the computing resource manager.
  • the computing node information includes network information of each computing node.
  • the network information of each computing node includes one or more of the following information: a remote access address, an Internet IP address, a port number, and a MAC address corresponding to each computing node.
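  • The decomposition and node-allocation exchange described above can be sketched as follows; the round-robin splitting rule and the request fields are assumptions for illustration:

```python
def decompose_task(task_items, node_count):
    """Split a computing task into node_count sub-computing tasks.

    The round-robin split over work items is an illustrative assumption;
    the scheme only requires one sub-task per allocated computing node.
    """
    subtasks = [[] for _ in range(node_count)]
    for i, item in enumerate(task_items):
        subtasks[i % node_count].append(item)
    return subtasks

def build_allocation_request(node_count, node_config):
    """Computing-node allocation request that the Scheduler platform
    sends to the computing resource manager (assumed message shape)."""
    return {"node_count": node_count, "node_config": node_config}
```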
  • The processor 703 is further configured to change the state of the computing task, obtain the state of the computing task after the change, and provide the changed state to the transceiver 702, where the changed state is any one of the following: running, paused, ended, or error.
  • The transceiver 702 then sends the changed state of the computing task to the routing policy decider.
  • The changed state of the computing task is used by the routing policy decider to determine whether to release the resources corresponding to the routing configuration policy information of the computing task.
  • This solution can prevent communication congestion between network devices when processing network resources.
  • FIG. 8 is a flowchart of a method for processing network resources according to an embodiment of the present invention.
  • the method of Figure 8 can be performed by the apparatus 20 and apparatus 30 (i.e., routing policy decision maker) described in Figures 2-3.
  • 801. The routing policy decider receives the computing environment information corresponding to the computing task and the network information of each computing node, both sent by the Scheduler platform.
  • 802. The routing policy decider decides the bandwidth allocated for the computing task according to the computing environment information.
  • 803. The routing policy decider generates routing configuration policy information for the computing task according to the network information of each computing node, the bandwidth allocated by the decision, and the computing environment information.
  • 804. The routing policy decider sends the routing configuration policy information to the routing configuration controller.
  • In the prior art, when processing data (a computing task) that requires large computing resources, the Scheduler platform sends each decomposed sub-computing task to the allocated computing nodes, and the network devices (such as switches and routers) can route the resulting traffic only according to pre-configured static policies, which may cause communication congestion between the network devices. In contrast, in the method for processing network resources according to this embodiment of the present invention, the obtained computing environment information and the network information of each computing node are sent to the routing policy decider; the routing policy decider generates the routing configuration policy information according to the computing environment information and the network information of each computing node provided by the Scheduler platform, and then sends the routing configuration policy information to the routing configuration controller, so that the switch (the network device) finally transmits data according to the configuration delivered by the routing configuration controller, thereby preventing communication congestion between network devices when processing network resources.
  • The specific device acting as the routing configuration controller is not limited in this scheme; for example, the routing configuration controller may be an OpenFlow controller (OFC).
  • the computing environment information in step 801 includes at least one of the following information: a user ID, a status of the computing task, a priority of the computing task, and bandwidth requirement information.
  • In step 802, when the computing environment information includes the bandwidth requirement information, the routing policy decider decides the bandwidth allocated for the computing task according to the required bandwidth information and/or the computing task type in the bandwidth requirement information.
  • Alternatively, the routing policy decider acquires the user level according to the user ID in the computing environment information, and then decides the bandwidth allocated for the computing task according to the user level.
  • the routing policy decider generates routing configuration policy information for the computing task in different manners, as follows:
  • The first manner: the computing environment information includes the state of the computing task.
  • When the state of the computing task is paused, the routing policy decider generates the routing configuration policy information for the computing task according to the network information of each computing node and the bandwidth allocated by the decision.
  • When the state of the computing task is running, the routing policy decider generates the routing configuration policy information for the computing task according to the network information of each computing node and the bandwidth allocated by the decision, and according to the first predetermined policy.
  • the network information of each computing node includes an Internet Protocol IP address and a port number corresponding to each computing node. Further, the computing environment information further includes a priority of the computing task.
  • the first predetermined strategy specifically includes:
  • When the state of the computing task is running and the priority of the computing task is higher than or equal to a predetermined threshold, the routing policy decider generates the routing configuration policy information for the computing task when performing the routing configuration this time, according to the network information of each computing node and the bandwidth allocated by the decision, where the predetermined threshold is used to measure the priority of the computing task.
  • When the state of the computing task is running and the priority of the computing task is lower than the predetermined threshold, the routing policy decider generates the routing configuration policy information for the computing task the next time routing configuration needs to be performed, according to the network information of each computing node and the bandwidth allocated by the decision.
  • The specific form of the first predetermined policy is not limited in this scheme. For example, the first predetermined policy may be that the routing policy decider generates the routing configuration policy information for the computing task the next time routing configuration needs to be performed. It can be understood that, the next time routing configuration needs to be performed, the routing policy decider can take the information related to the computing task into account when generating the new routing configuration policy information, so that the new routing configuration policy information is applicable to the processing of the computing task.
  • The range of the predetermined threshold is not limited in this scheme; the predetermined threshold is set according to the division of the importance levels (priorities) of the computing tasks.
  • The second manner: the computing environment information includes the priority of the computing task. When the priority of the computing task is higher than or equal to a predetermined threshold, the routing policy decider generates the routing configuration policy information for the computing task according to the network information of each computing node and the bandwidth allocated by the decision, where the predetermined threshold is used to measure the priority of the computing task.
  • When the priority of the computing task is lower than the predetermined threshold, the routing policy decider generates the routing configuration policy information for the computing task according to the network information of each computing node and the bandwidth allocated by the decision, and according to the first predetermined policy.
  • The first predetermined policy in the second manner is the same as that in the first manner; in this case, the computing environment information further includes the state of the computing task, and the first predetermined policy specifically includes:
  • When the priority of the computing task is lower than the predetermined threshold and the state of the computing task is paused, the routing policy decider generates the routing configuration policy information for the computing task this time or the next time routing configuration needs to be performed, according to the network information of each computing node and the bandwidth allocated by the decision.
  • When the priority of the computing task is lower than the predetermined threshold and the state of the computing task is running, the routing policy decider generates the routing configuration policy information for the computing task the next time routing configuration needs to be performed, according to the network information of each computing node and the bandwidth allocated by the decision.
  • When the routing policy decider generates the routing configuration policy information for the computing task according to the network information of each computing node, the bandwidth allocated by the decision, and the computing environment information, the order in which the conditions are considered may differ, as follows:
  • When the state of the computing task is considered first:
  • When the state of the computing task is paused, the routing policy decider performs the operation of generating the routing configuration policy information corresponding to the computing task; when the state of the computing task is running, the routing policy decider further considers whether the priority of the computing task is higher than or equal to the predetermined threshold: when the priority of the computing task is higher than or equal to the predetermined threshold, the routing policy decider performs the operation of generating the routing configuration policy information corresponding to the computing task when performing the routing configuration this time; when the priority of the computing task is lower than the predetermined threshold, the routing policy decider performs the operation of generating the routing configuration policy information corresponding to the computing task the next time routing configuration needs to be performed.
  • When the priority of the computing task is considered first:
  • When the priority of the computing task is higher than or equal to the predetermined threshold, the routing policy decider performs the operation of generating the routing configuration policy information corresponding to the computing task; when the priority of the computing task is lower than the predetermined threshold, the routing policy decider further considers whether the state of the computing task is running or paused: when the state of the computing task is paused, the routing policy decider performs the operation of generating the routing configuration policy information corresponding to the computing task this time or the next time routing configuration needs to be performed; when the state of the computing task is running, the routing policy decider performs the operation of generating the routing configuration policy information corresponding to the computing task the next time routing configuration needs to be performed.
  • In addition, there are two processing methods. The first method is: after the Scheduler platform sends the decomposed sub-computing tasks to the corresponding computing nodes, the routing policy decider first configures the routing configuration policy information corresponding to the computing task, and after the configuration is completed, notifies the Scheduler platform so that the computing nodes start processing their respective sub-computing tasks.
  • Specifically, the routing configuration controller converts the routing configuration policy information generated by the routing policy decider into OpenFlow configuration policy information and sends the OpenFlow configuration policy information to the switch; the switch reserves resources and performs routing control according to the OpenFlow configuration policy information. After receiving the configuration-success response from the switch, the routing policy decider informs the Scheduler platform that the configuration is complete.
  • It should be noted that the routing policy decider generates dynamic routing configuration policy information according to the computing environment information, the network information of each computing node, and the real-time network topology information; that is, the switch can reserve resources and perform routing control according to the dynamically generated routing configuration policy, so as to avoid communication congestion when transmitting data.
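  • The conversion from routing configuration policy information into OpenFlow configuration policy information can be sketched as follows. This is not a real OpenFlow controller API; the entry layout (match plus actions, with a meter-like rate limit standing in for resource reservation) is an illustrative assumption:

```python
def to_openflow_entries(policy):
    """Convert routing configuration policy information into
    OpenFlow-style flow entries (match + actions).

    The entry layout is illustrative; a real deployment would emit
    flow-mod and meter-mod messages through an OpenFlow controller.
    """
    entries = []
    for link in policy.get("inter_node_bandwidth", []):
        entries.append({
            "match": {"ipv4_src": link["src"], "ipv4_dst": link["dst"]},
            # Rate limit models the per-link bandwidth reservation.
            "actions": [{"rate_limit_mbps": link["bandwidth"]},
                        {"output": "normal"}],
        })
    return entries
```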
  • the first method is applicable when the user level is high or the priority of the computing task is high.
  • The second method is: after the Scheduler platform sends the decomposed sub-computing tasks to the corresponding computing nodes, the computing nodes start processing their respective sub-computing tasks, and, in parallel, the routing policy decider generates the routing configuration policy information corresponding to the computing task according to the content of the computing environment information.
  • The processing after the routing policy decider generates the routing configuration policy information is the same as in the first method.
  • The difference is that, in the first method, the computing nodes begin to process their respective sub-computing tasks only after the routing policy decider has configured the routing configuration policy information and the switch has performed routing control and/or resource reservation according to the dynamically generated OpenFlow configuration policy information (in this case, the data communication of the computing task has a QoS guarantee).
  • In the second method, each computing node processes its received sub-computing task in parallel with the allocation of the routing configuration policy information for the computing task. Since loading the routing configuration policy information requires a certain delay before it takes effect (how long depends on the routing policy decider's decision), if the executing computing task has communication requirements before the routing configuration policy for the computing task takes effect, its traffic is routed and transmission-controlled according to the existing routing configuration policy previously configured on the switch (in this case, the data communication of the computing task has no QoS guarantee).
  • The second method is applicable when the user level is low or the priority of the computing task is low. This method ensures the network communication of computing tasks while minimizing frequent changes to the network, thereby improving the utilization of network resources.
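  • The choice between the two methods can be sketched as a single hedged rule; the numeric priority and threshold semantics are assumptions:

```python
def choose_configuration_method(priority, threshold):
    """Pick between the two processing methods described above.

    'configure_first': configure routing, then start sub-tasks
    (data communication has a QoS guarantee).
    'parallel': start sub-tasks while routing is configured in parallel
    (no QoS guarantee until the configuration takes effect).
    Assumes a numeric priority where higher means more important.
    """
    return "configure_first" if priority >= threshold else "parallel"
```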
  • When the network information includes the IP address, the port number, and the MAC address corresponding to each computing node, the routing policy decider generates routing configuration policy information that includes the node bandwidth information corresponding to each computing node.
  • When the network information includes the IP address, the port number, the MAC address, and the communication information between the computing nodes, the routing policy decider generates routing configuration policy information that includes the inter-node bandwidth information between the computing nodes.
  • Optionally, the method further includes: the routing policy decider receives the changed state of the computing task sent by the Scheduler platform, where the changed state is any one of the following: running, paused, ended, or error; when the state of the computing task changes from running to ended or from running to error, the routing policy decider releases, according to the second predetermined policy, the resources corresponding to the routing configuration policy information of the computing task.
  • The state of the computing task is the reference information based on which the routing policy decider reserves or releases network resources (such as bandwidth).
  • When the state of the computing task is running or paused, the routing policy decider reserves and stores the network resources allocated for the computing task; when the state of the computing task is ended or an error occurs, the routing policy decider releases the network resources allocated for the computing task, thereby improving the utilization of network resources.
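  • The reserve-and-release behaviour described above can be sketched as follows; the class and method names are assumptions, and only the resource bookkeeping is modelled:

```python
class RoutingPolicyDecider:
    """Minimal sketch of the reserve/release behaviour described above."""

    def __init__(self):
        self.reserved = {}  # task_id -> reserved bandwidth (Mbit/s)

    def reserve(self, task_id, bandwidth):
        """Reserve and store the network resources allocated for a task."""
        self.reserved[task_id] = bandwidth

    def on_state_change(self, task_id, new_state):
        """Second predetermined policy: release resources when a task
        ends or errors out; keep them while it is running or paused."""
        if new_state in ("end", "error"):
            self.reserved.pop(task_id, None)
```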
  • FIG. 9 is a flowchart of a method for processing network resources according to an embodiment of the present invention.
  • the method of Figure 9 can be performed by the apparatus 40, apparatus 50 (i.e., the Scheduler platform) depicted in Figures 4 and 5.
  • 901. The Scheduler platform receives the computing task description information submitted by the UE, where the computing task description information includes a user identifier (ID) and required computing node information.
  • 902. The Scheduler platform acquires the computing task corresponding to the user ID according to the computing task description information.
  • 903. The Scheduler platform decomposes the computing task into at least one sub-computing task according to the required computing node information, requests computing nodes for each sub-computing task, and acquires the network information of each computing node corresponding to each sub-computing task.
  • 904. The Scheduler platform generates the computing environment information of the computing task according to the computing task description information.
  • 905. The Scheduler platform sends the computing environment information and the network information of each computing node to the routing policy decider.
  • In this solution, the Scheduler platform sends the computing environment information and the network information of each computing node to the routing policy decider; the routing policy decider generates the routing configuration policy information according to the computing environment information and the network information of each computing node provided by the Scheduler platform, and then sends the routing configuration policy information to the routing configuration controller, so that the switch (the network device) finally performs routing control and/or resource reservation according to the OpenFlow configuration policy information converted by the routing configuration controller, thereby preventing communication congestion between network devices when processing network resources.
  • in step 902, the Scheduler platform may obtain the computing task corresponding to the user ID in either of two ways:
  • in the first way, the Scheduler platform receives a computing task data packet sent by the UE, where the computing task data packet contains the computing task description information, and parses the computing task data packet according to the computing task description information to obtain the computing task.
  • in the second way, the Scheduler platform obtains the computing task according to the computing task acquisition address or acquisition manner in the computing task description information.
  • the Scheduler platform decomposes the computing task into at least one sub-computing task according to the required computing node information, which specifically includes:
  • the required computing node information includes the configuration information of the computing nodes and the number of computing nodes, where the configuration information of the computing nodes may include hardware configuration (memory, CPU, network, etc.), software configuration (operating system type, application libraries), and the like. It can be understood that the Scheduler platform decomposes the computing task into at least one sub-computing task according to the number of computing nodes.
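As a hedged illustration of the decomposition step (the function and variable names below are hypothetical, not from the patent), the Scheduler platform can be imagined to split a task's input into as many sub-computing tasks as there are requested computing nodes:

```python
def decompose_task(task_input, node_count):
    """Split a computing task's input into node_count sub-computing tasks.

    Sketch only: the patent requires that the number of sub-computing
    tasks equals the number of computing nodes; the actual split strategy
    is left to the Scheduler platform.
    """
    if node_count < 1:
        raise ValueError("at least one computing node is required")
    chunk = (len(task_input) + node_count - 1) // node_count  # ceiling division
    return [task_input[i * chunk:(i + 1) * chunk] for i in range(node_count)]

# Example: 10 data items split across 3 requested computing nodes;
# each sub-task is later sent to one allocated computing node.
subtasks = decompose_task(list(range(10)), 3)
```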
  • the method further includes:
  • the Scheduler platform sends a compute node allocation request to the computing resource manager that carries the number of compute nodes and configuration information for the compute nodes.
  • the computing resource manager then configures the computing node information for the computing task based on the content of the computing node allocation request, and sends the configured computing node information to the Scheduler platform.
  • the Scheduler platform receives the computing node information sent by the computing resource manager, and the computing node information includes network information of each computing node. That is, the Scheduler platform obtains network information of each computing node.
  • the network information of each computing node includes one or more of the following information: a remote access address, an Internet IP address, a port number, and a MAC address corresponding to each computing node.
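For illustration, the per-node network information could be carried in a simple record like the following. The field names are assumptions; the patent only lists the remote access address, Internet IP address, port number, and MAC address as possible members, one or more of which may be present:

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class NodeNetworkInfo:
    # All fields optional: the text says the network information includes
    # "one or more" of these items per computing node.
    remote_access_address: Optional[str] = None
    ip_address: Optional[str] = None
    port: Optional[int] = None
    mac_address: Optional[str] = None

node = NodeNetworkInfo(ip_address="10.0.0.5", port=5000,
                       mac_address="aa:bb:cc:dd:ee:01")
# Keep only the fields actually supplied for this node.
info = {k: v for k, v in asdict(node).items() if v is not None}
```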
  • when the network information of each computing node includes the IP address, port number, and MAC address corresponding to each computing node, the routing configuration policy information generated by the routing policy decider includes the node bandwidth information corresponding to each computing node;
  • when the network information of each computing node further includes the communication information between the computing nodes, the routing configuration policy information generated by the routing policy decider includes the inter-node bandwidth information between the computing nodes.
  • routing configuration policy information may include routing information corresponding to each computing node. Further, the routing configuration policy information may further include node bandwidth information related to routing information corresponding to each computing node.
  • the method further includes:
  • the Scheduler platform sends the corresponding sub-computing task to each computing node and specifies the state of the computing task. When the state of the computing task specified by the Scheduler platform is pause, each computing node suspends processing of its corresponding sub-computing task; when the state of the computing task specified by the Scheduler platform is running, each computing node processes its corresponding sub-computing task.
  • the state of the computing task also includes error or ended. Both states are fed back by the computing nodes to the Scheduler platform; when either state occurs, each computing node stops processing its corresponding sub-computing task.
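A minimal sketch of how a computing node might react to the four task states named above (running, pause, ended, error); the function and return values are hypothetical illustrations, not part of the patent:

```python
RUNNING, PAUSE, ENDED, ERROR = "running", "pause", "ended", "error"

def node_action(task_state):
    """Map a computing-task state to the node-side behavior described above."""
    if task_state == PAUSE:
        return "suspend"   # hold the sub-task until the state becomes running
    if task_state == RUNNING:
        return "process"   # process the corresponding sub-computing task
    if task_state in (ENDED, ERROR):
        return "stop"      # both states stop processing of the sub-task
    raise ValueError(f"unknown task state: {task_state}")
```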
  • the Scheduler platform acquires a user level according to the user ID; the Scheduler platform then generates the priority of the computing task according to the user level and the computing task level.
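The patent does not fix a formula for combining the user level and the computing task level into a priority; one hypothetical combination, shown purely for illustration, is a weighted sum:

```python
def task_priority(user_level, task_level, user_weight=2):
    """Hypothetical priority: higher user level and task level give a higher
    priority. The weighting scheme is an assumption, not from the patent."""
    return user_weight * user_level + task_level

# With this weighting, a premium user (level 3) with a normal task (level 1)
# outranks a basic user (level 1) with an urgent task (level 2).
assert task_priority(3, 1) > task_priority(1, 2)
```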
  • in step 904, the Scheduler platform generates the computing environment information of the computing task according to the computing task description information and the state of the computing task.
  • the Scheduler platform generates computing environment information according to the computing task description information and the priority of the computing task.
  • calculation task description information may include a user ID, required calculation node information, bandwidth requirement information (optional), calculation task acquisition information (optional), and calculation task level (optional).
  • the computing environment information includes at least one of the following: a user ID, required computing node information, bandwidth requirement information, a status of a computing task, and a priority of a computing task.
  • the Scheduler platform changes the state of the computing task to obtain the changed state of the computing task, where the changed state is any one of the following: running, pause, ended, or error. The Scheduler platform then sends the changed state of the computing task to the routing policy decider; the changed state is used by the routing policy decider to determine whether to release the resources corresponding to the routing configuration policy information of the computing task.
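The release decision the routing policy decider makes on receiving a changed state can be sketched as follows (the state names are from the text; the function itself is a hypothetical sketch):

```python
def should_release_resources(changed_state):
    """The decider releases the resources tied to a task's routing
    configuration policy information when the task has ended or errored;
    running and pause keep the reservation in place."""
    return changed_state in ("ended", "error")

assert should_release_resources("ended")
assert not should_release_resources("running")
```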
  • FIG. 10 is a flowchart of a method for processing network resources in which the routing configuration policy information is configured before the computing task is processed, according to an embodiment of the present invention.
  • the Scheduler platform receives the calculation task description information sent by the UE.
  • the calculation task description information may include a user ID, required calculation node information, bandwidth requirement information (optional), calculation task acquisition information (optional), and calculation task level (optional).
  • the Scheduler platform decomposes the computing task according to the computing task description information to obtain at least one sub-computing task.
  • the Scheduler platform determines the computing node information required for the computing task based on the computing task description information.
  • the required computing node information includes the configuration information of the computing nodes and the number of computing nodes, where the configuration information of the computing nodes may include hardware configuration (memory, CPU, network, etc.), software configuration (operating system type, application libraries), and the like.
  • the Scheduler platform decomposes the computing task into sub-computing tasks corresponding to the number of computing nodes according to the number of computing nodes. It can be understood that the number of computing nodes is equal to the number of sub-computing tasks.
  • the Scheduler platform applies to the computing resource manager for a computing node required for the computing task according to the required computing node information.
  • the Scheduler platform sends a computing node allocation request that carries the number of computing nodes and the configuration information of the computing nodes to the computing resource manager, and then receives the computing node information sent by the computing resource manager, where the computing node information includes the network information of each computing node.
  • the Scheduler platform sends each sub-computing task to the corresponding computing node, and indicates that the state of each sub-computing task (the state of the computing task) is pause.
  • the Scheduler platform generates computing environment information.
  • the computing environment information includes at least one of the following: a user ID, required computing node information, bandwidth requirement information, a status of a computing task, and a priority of a computing task.
  • the Scheduler platform sends the computing environment information and the network information of each computing node to the routing policy decider.
  • the routing policy decider generates routing configuration policy information according to the received computing environment information and network information of each computing node.
  • the routing policy decider sends the routing configuration policy information to the routing configuration controller.
  • routing configuration controller in this embodiment may be an OFC.
  • the routing configuration controller converts the received routing configuration policy information into OpenFlow configuration policy information, and then sends it to the managed switch for configuration.
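The conversion step can be imagined as mapping each per-node routing entry to a flow-rule-like record. This is a schematic sketch only; real OpenFlow flow entries (match fields, actions, meters) are defined by the OpenFlow specification and are not reproduced by the patent, and all field names here are assumptions:

```python
def to_openflow_like_rules(routing_policy):
    """Translate abstract routing configuration policy information into
    simplified flow-rule dictionaries (schematic, not real OpenFlow)."""
    rules = []
    for entry in routing_policy:
        rule = {"match": {"ipv4_dst": entry["ip"], "tcp_dst": entry["port"]},
                "output_port": entry["switch_port"]}
        if "bandwidth_mbps" in entry:  # optional QoS-style reservation
            rule["reserve_mbps"] = entry["bandwidth_mbps"]
        rules.append(rule)
    return rules

policy = [{"ip": "10.0.0.5", "port": 5000, "switch_port": 2, "bandwidth_mbps": 6}]
rules = to_openflow_like_rules(policy)
```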
  • the type of the switch is not limited in this solution.
  • the switch may be an OFS.
  • after receiving the OpenFlow configuration policy information, the switch sends a successful configuration response to the routing policy decider. The successful configuration response is used to inform the routing policy decider that the switch has received the OpenFlow configuration policy information.
  • the routing policy decider sends a routing configuration result to the Scheduler platform.
  • the routing configuration result is used to inform the Scheduler platform that the routing configuration policy information corresponding to the computing task has been completed.
  • the route configuration policy information may be included in the route configuration result.
  • after receiving the routing configuration result, the Scheduler platform changes the state of the computing task to running, and sends the state of the computing task to the proxy module of each computing node.
  • after learning that the state of the computing task is running, each computing node begins processing its received sub-computing task.
  • the switch may perform resource reservation and data routing according to the OpenFlow configuration policy information.
  • the OpenFlow configuration policy information is dynamic network policy information, which can prevent the communication from being blocked when the switch transmits data.
  • the method of configuring the routing configuration policy information and then processing the computing task can dynamically generate routing configuration policy information for the computing task when the network topology is clear.
  • the routing configuration policy information can avoid communication congestion.
  • FIG. 11 is a flowchart of a method for processing network resources in which the computing task is processed in parallel with configuring the routing configuration policy information, according to an embodiment of the present invention.
  • the Scheduler platform receives the calculation task description information sent by the UE.
  • the Scheduler platform decomposes the computing task according to the computing task description information to obtain at least one sub-computing task.
  • the Scheduler platform applies to the computing resource manager for a computing node required for the computing task according to the computing node information.
  • the Scheduler platform sends the sub-calculation task to the corresponding computing node, and indicates that the status of each sub-calculation task (the state of the computing task) is running.
  • Each computing node begins to process its corresponding sub-computing task.
  • after receiving its corresponding sub-computing task, each computing node directly processes that sub-computing task.
  • the Scheduler platform generates computing environment information.
  • the Scheduler platform sends the computing environment information and the network information of each computing node to the routing policy decider.
  • the routing policy decider generates routing configuration policy information according to the received computing environment information and network information of each computing node.
  • the routing policy decider sends the routing configuration policy information to the routing configuration controller.
  • routing configuration controller in this embodiment may be an OFC.
  • the route configuration controller converts the received route configuration policy information into OpenFlow configuration policy information, and then sends the configuration to the managed switch.
  • the type of the switch is not limited in this solution.
  • the switch may be an OFS.
  • after receiving the OpenFlow configuration policy information, the switch sends a successful configuration response to the routing policy decider.
  • the routing policy decider sends a routing configuration result to the Scheduler platform.
  • the routing configuration result sent by the routing policy decider is only used to inform the Scheduler platform whether the routing policy decider has generated the routing configuration policy information, and does not affect the state of the computing task.
  • because the computing task and the routing configuration policy information are processed in parallel, the computing nodes can directly process their corresponding sub-computing tasks without waiting for the routing configuration policy information to be generated, so that the routing policy decider can adjust the network according to its own policy rather than immediately, which reduces the network instability caused by frequent network adjustments.
  • the method of first configuring the routing configuration policy information and then processing the computing task is applicable when the priority of the computing task is high, which can ensure the reliability of computing task processing.
  • the method of parallel processing between the calculation task and the configuration route configuration policy information is applicable to the case where the priority of the calculation task is low. This method can improve the utilization of network resources.
  • the two manners may also be used in combination: when the Scheduler platform has multiple computing tasks, some computing tasks may, according to the user level, the priority of the computing task, or another local policy, be handled by first configuring the routing configuration policy information and then processing the computing task, while the remaining computing tasks are handled by processing the computing task in parallel with configuring the routing configuration policy information.
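Choosing between the two modes per task could be sketched as a simple threshold dispatch. The threshold value, function names, and sample priorities below are assumptions for illustration; the patent leaves the selection policy to local configuration:

```python
def choose_mode(priority, threshold=5):
    """Pick 'configure-first' for high-priority tasks (processing reliability)
    and 'parallel' for low-priority tasks (network-resource utilization),
    as described above. The threshold is a hypothetical local policy."""
    return "configure-first" if priority >= threshold else "parallel"

# Two tasks on the same Scheduler platform can be handled differently.
tasks = [("t1", 8), ("t2", 3)]
modes = {name: choose_mode(p) for name, p in tasks}
```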
  • the disclosed system, apparatus, and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative
  • the division of the module or unit is only a logical function division, and the actual implementation may have another division manner, for example, multiple units or components may be combined or may be integrated into another system, or some features may be Ignore, or not execute.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • the technical solutions of the present invention essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product stored in a storage medium.
  • a number of instructions are included to cause a computer device (which may be a personal computer, server, or network device, etc.) or a processor to perform all or part of the steps of the methods described in various embodiments of the present invention.
  • the foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Abstract

The present invention discloses an apparatus, a method, and a system for processing network resources, relates to the field of communication network technologies, and is used to solve the problem of communication congestion between network devices when network resources are processed. A receiving module receives computing environment information corresponding to a computing task and network information of each computing node delivered by a Scheduler platform, provides the computing environment information to a bandwidth decision module and a generation module, and provides the network information of each computing node to the generation module; the bandwidth decision module decides, according to the computing environment information, the bandwidth allocated for the computing task; the generation module generates routing configuration policy information for the computing task according to the network information of each computing node, the bandwidth allocated by decision, and the computing environment information, and provides the routing configuration policy information to a sending module; the sending module sends the routing configuration policy information to a routing configuration controller. The solution provided in the embodiments of the present invention is suitable for use in processing network resources.

Description

Apparatus, method, and system for processing network resources
This application claims priority to Chinese Patent Application No. 201410127405.1, filed with the Chinese Patent Office on March 31, 2014 and entitled "Apparatus, method, and system for processing network resources", which is incorporated herein by reference in its entirety.
Technical Field
The present invention relates to the field of communication network technologies, and in particular, to an apparatus, a method, and a system for processing network resources.
Background
Generally, when data that requires considerable computing resources needs to be processed, the data is divided, and the divided parts are allocated to multiple computing nodes for processing. After each computing node processes its allocated part of the data to obtain a partial computation result, the partial computation results of these computing nodes are aggregated to form the computation result corresponding to the data; this is distributed computing.
When distributed computing is used in the prior art, the Scheduler platform first decomposes a computing task submitted by a received user equipment (User Equipment, UE), and then sends a computing resource application request to a computing resource manager, where the computing resource application request includes at least the number of computing nodes. After receiving the computing resource application request, the computing resource manager allocates computing nodes for the computing task according to the number of computing nodes, and then feeds back to the Scheduler platform a computing resource application response carrying an allocation result, where the allocation result includes information about the allocated computing nodes. The Scheduler platform sends the decomposed partial computing subtasks to the corresponding computing nodes and afterwards collects the partial computation results of the computing nodes, thereby completing the processing of the data requiring considerable computing resources, where one computing node executes one partial computing subtask.
However, when the data (computing task) requiring considerable computing resources is processed in the prior-art manner, after the Scheduler platform delivers the computing subtasks to the allocated computing nodes, when the computing nodes exchange large amounts of data with each other, the network devices (such as switches and routers) can perform routing control only according to a preconfigured static policy, which may cause communication congestion between the network devices.
Summary
Embodiments of the present invention provide an apparatus, a method, and a system for processing network resources, which are used to solve the problem of communication congestion between network devices when network resources are processed.
According to a first aspect, an embodiment of the present invention provides a routing policy decider, including:
a receiving module, configured to receive computing environment information corresponding to a computing task and network information of each computing node delivered by a Scheduler platform, provide the computing environment information to a bandwidth decision module and a generation module, and provide the network information of each computing node to the generation module;
the bandwidth decision module, configured to decide, according to the computing environment information, the bandwidth allocated for the computing task;
the generation module, configured to generate routing configuration policy information for the computing task according to the network information of each computing node, the bandwidth allocated by decision, and the computing environment information, and provide the routing configuration policy information to a sending module;
the sending module, configured to send the routing configuration policy information to a routing configuration controller.
With reference to the first aspect, in another implementation of the first aspect, the network information of each computing node includes an Internet Protocol IP address and a port number corresponding to each computing node, and the computing environment information includes the state of the computing task;
the generation module is specifically configured to: when the state of the computing task is pause, generate the routing configuration policy information for the computing task according to the network information of each computing node and the bandwidth allocated by decision;
when the state of the computing task is running, generate the routing configuration policy information for the computing task according to the network information of each computing node and the bandwidth allocated by decision and according to a first predetermined policy.
With reference to the first aspect or any one of the foregoing implementations of the first aspect, in another implementation of the first aspect, the computing environment information further includes the priority of the computing task, and the first predetermined policy specifically includes:
the generation module is further configured to: when the state of the computing task is running and the priority of the computing task is higher than or equal to a predetermined threshold, generate the routing configuration policy information for the computing task in the current routing configuration according to the network information of each computing node and the bandwidth allocated by decision, where the predetermined threshold is used to measure the level of the priority of the computing task;
when the state of the computing task is running and the priority of the computing task is lower than the predetermined threshold, generate the routing configuration policy information for the computing task according to the network information of each computing node and the bandwidth allocated by decision the next time routing configuration needs to be performed.
According to a second aspect, an embodiment of the present invention provides a Scheduler platform, including:
a receiving module, configured to receive computing task description information submitted by a user equipment UE, and provide the computing task description information to an acquiring module, a decomposition module, and a generation module, where the computing task description information includes a user identifier ID and required computing node information;
the acquiring module, configured to acquire, according to the computing task description information, the computing task corresponding to the user ID;
the decomposition module, configured to decompose the computing task into at least one sub-computing task according to the required computing node information;
the acquiring module, further configured to acquire the network information of each computing node that processes each sub-computing task, and provide the network information of each computing node to a first sending module;
the generation module, configured to generate computing environment information of the computing task according to the computing task description information, and provide the computing environment information to the first sending module;
the first sending module, configured to send the computing environment information and the network information of each computing node to a routing policy decider.
With reference to the second aspect, in another implementation of the second aspect, the network information of each computing node includes an Internet Protocol IP address and a port number corresponding to each computing node, and the apparatus further includes a designation module and a second sending module;
the second sending module, configured to send the corresponding sub-computing task to each computing node according to the network information of each computing node, and provide sent information to the designation module, where the sent information is used to inform the designation module that the at least one sub-computing task has been sent;
the designation module, configured to designate the state of the computing task and provide the state of the computing task to the generation module;
the generation module, further configured to generate the computing environment information of the computing task according to the computing task description information and the state of the computing task.
According to a third aspect, an embodiment of the present invention provides a method for processing network resources, including:
receiving, by a routing policy decider, computing environment information corresponding to a computing task and network information of each computing node delivered by a Scheduler platform;
deciding, by the routing policy decider according to the computing environment information, the bandwidth allocated for the computing task;
generating, by the routing policy decider, routing configuration policy information for the computing task according to the network information of each computing node, the bandwidth allocated by decision, and the computing environment information;
sending, by the routing policy decider, the routing configuration policy information to a routing configuration controller.
With reference to the third aspect, in another implementation of the third aspect, the network information of each computing node includes an Internet Protocol IP address and a port number corresponding to each computing node, and the computing environment information includes the state of the computing task; the generating, by the routing policy decider, routing configuration policy information for the computing task according to the network information of each computing node, the bandwidth allocated by decision, and the computing environment information includes:
when the state of the computing task is pause, generating, by the routing policy decider, the routing configuration policy information for the computing task according to the network information of each computing node and the bandwidth allocated by decision;
when the state of the computing task is running, generating, by the routing policy decider, the routing configuration policy information for the computing task according to the network information of each computing node and the bandwidth allocated by decision and according to a first predetermined policy.
With reference to the third aspect or any one of the foregoing implementations of the third aspect, in another implementation of the third aspect, the computing environment information further includes the priority of the computing task, and the first predetermined policy specifically includes:
when the state of the computing task is running and the priority of the computing task is higher than or equal to a predetermined threshold, generating, by the routing policy decider, the routing configuration policy information for the computing task in the current routing configuration according to the network information of each computing node and the bandwidth allocated by decision, where the predetermined threshold is used to measure the level of the priority of the computing task;
when the state of the computing task is running and the priority of the computing task is lower than the predetermined threshold, generating, by the routing policy decider, the routing configuration policy information for the computing task according to the network information of each computing node and the bandwidth allocated by decision the next time routing configuration needs to be performed.
With reference to the third aspect or any one of the foregoing implementations of the third aspect, in another implementation of the third aspect, the network information of each computing node includes an Internet Protocol IP address and a port number corresponding to each computing node, and the computing environment information includes the priority of the computing task; the generating, by the routing policy decider, routing configuration policy information for the computing task according to the network information of each computing node, the bandwidth allocated by decision, and the computing environment information includes:
when the priority of the computing task is higher than or equal to a predetermined threshold, generating, by the routing policy decider, the routing configuration policy information for the computing task according to the network information of each computing node and the bandwidth allocated by decision, where the predetermined threshold is used to measure the level of the priority of the computing task;
when the priority of the computing task is lower than the predetermined threshold, generating, by the routing policy decider, the routing configuration policy information for the computing task according to the network information of each computing node and the bandwidth allocated by decision and according to a first predetermined policy.
With reference to the third aspect or any one of the foregoing implementations of the third aspect, in another implementation of the third aspect, the computing environment information further includes the state of the computing task, and the first predetermined policy specifically includes:
when the priority of the computing task is lower than the predetermined threshold and the state of the computing task is pause, generating, by the routing policy decider, the routing configuration policy information for the computing task in the current routing configuration or the next time routing configuration needs to be performed, according to the network information of each computing node and the bandwidth allocated by decision;
when the priority of the computing task is lower than the predetermined threshold and the state of the computing task is running, generating, by the routing policy decider, the routing configuration policy information for the computing task according to the network information of each computing node and the bandwidth allocated by decision the next time routing configuration needs to be performed.
According to a fourth aspect, an embodiment of the present invention provides another method for processing network resources, including:
receiving, by a Scheduler platform, computing task description information submitted by a user equipment UE, where the computing task description information includes a user identifier ID and required computing node information;
acquiring, by the Scheduler platform according to the computing task description information, the computing task corresponding to the user ID;
decomposing, by the Scheduler platform, the computing task into at least one sub-computing task according to the required computing node information, and acquiring the network information of each computing node that processes each sub-computing task;
generating, by the Scheduler platform, computing environment information of the computing task according to the computing task description information;
sending, by the Scheduler platform, the computing environment information and the network information of each computing node to the routing policy decider.
According to a fifth aspect, an embodiment of the present invention provides a system for processing network resources, including:
a routing policy decider, configured to receive computing environment information corresponding to a computing task and network information of each computing node delivered by a Scheduler platform; decide, according to the computing environment information, the bandwidth allocated for the computing task; generate routing configuration policy information for the computing task according to the network information of each computing node, the bandwidth allocated by decision, and the computing environment information; and send the routing configuration policy information to a routing configuration controller;
the Scheduler platform, configured to receive computing task description information submitted by a user equipment UE, where the computing task description information includes a user identifier ID and required computing node information; acquire, according to the computing task description information, the computing task corresponding to the user ID; decompose the computing task into at least one sub-computing task according to the required computing node information, apply for a computing node for each sub-computing task, and acquire the network information of each computing node that processes each sub-computing task; generate computing environment information of the computing task according to the computing task description information; and send the computing environment information and the network information of each computing node to the routing policy decider;
the routing configuration controller, configured to receive the routing configuration policy information sent by the routing policy decider.
Embodiments of the present invention provide an apparatus, a method, and a system for processing network resources. In the prior art, when data (a computing task) requiring considerable computing resources is processed, after the Scheduler platform delivers the computing subtasks to the allocated computing nodes, when the computing nodes exchange large amounts of data, the network devices (such as switches and routers) can perform routing only according to a preconfigured static policy, which may cause communication congestion between network devices. In contrast, in the embodiments of the present invention, the Scheduler platform sends the acquired computing environment information and the network information of each computing node to the receiving module of the routing policy decider, the generation module of the routing policy decider generates routing configuration policy information according to the computing environment information and the network information of each computing node provided by the Scheduler platform, and the sending module of the routing policy decider then delivers the routing configuration policy information to the routing configuration controller, so that the switch (network device) finally performs routing control on the data according to the routing configuration policy information, thereby preventing communication congestion between network devices when processing network resources.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of a system for processing network resources according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a routing policy decider according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a routing policy decider according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a Scheduler platform according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a Scheduler platform according to an embodiment of the present invention;
FIG. 6 is a hardware structural diagram of a routing policy decider in the system for processing network resources according to an embodiment of the present invention;
FIG. 7 is a hardware structural diagram of a Scheduler platform in the system for processing network resources according to an embodiment of the present invention;
FIG. 8 is a flowchart of a method for processing network resources according to an embodiment of the present invention;
FIG. 9 is a flowchart of a method for processing network resources according to an embodiment of the present invention;
FIG. 10 is a flowchart of a method for processing network resources in which routing configuration policy information is configured before the computing task is processed, according to an embodiment of the present invention;
FIG. 11 is a flowchart of a method for processing network resources in which the computing task is processed in parallel with configuring the routing configuration policy information, according to an embodiment of the present invention.
Description of Embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention.
The apparatus for processing network resources of the present invention is applicable to a system for processing network resources. As shown in FIG. 1, the system 10 may include: a JAVA client (JAVA CLIENT) 101, a hypertext transfer protocol client (HTTP CLIENT) 102, a representational state transfer interface server (Representational State Transfer SERVER) 103, a scheduling (Scheduler) platform 104, a computing resource manager (Resource Manager, RM) 105, a routing policy decider (embodied in the figure as SpecMod) 106, a bandwidth allocation algorithm (Bandwidth Allocation Algorithm, BAA) module 1061, a network resource management (Network Resource Manager, NRM) module 1062, a database (Data Base, DB) module 1063, an OpenFlow controller (OpenFlow Controller, OFC) 107, an OpenFlow switch (OpenFlow Switch, OFS) 108, a virtual machine (Virtual Machine, VM) 109, and a node (NODE) 110.
The JAVA CLIENT 101 is a JAVA client program used to submit computing tasks to the Scheduler platform 104.
The HTTP CLIENT 102 is an HTTP client program used to submit computing tasks to the Scheduler platform 104 through the REST SERVER 103.
The REST SERVER 103 is configured to encapsulate the capabilities of the Scheduler platform 104 so as to provide users with a better-performing interface.
The Scheduler platform 104 is configured to verify computing tasks; apply to the RM 105 for resources (i.e., computing nodes) for processing computing tasks; schedule and monitor computing tasks; and complete operations such as result processing of computing tasks.
The RM 105 is configured to receive registration of computing nodes and manage the registered computing nodes, where managing the computing nodes includes detecting computing node states and allocating computing nodes for the computing tasks in the Scheduler platform 104.
SpecMod is the routing policy decider 106. The functions of the SpecMod module specifically include providing routing policy information that guarantees communication between the computing nodes of a computing task. For ease of description, the term routing policy decider is used in the present invention.
The BAA module 1061 included in the SpecMod module is configured to generate routing configuration policy information for a computing task according to the information about the computing task provided by the Scheduler platform 104 (including the state of the computing task, the priority of the computing task, and so on), the network information of each computing node, and other information.
The NRM module 1062 is the internal core module of the SpecMod module and is configured to invoke the BAA module 1061 to generate routing configuration policy information for computing tasks.
The DB module 1063 is configured to store content related to generating the routing configuration policy information.
The OFC 107 is configured to control the OFS, specifically including: computing routing paths, maintaining OFS states, and configuring routing policies executable by the OFS.
The OFS 108 is an OpenFlow-capable switching device, which may be physical or virtual, and is configured to execute the routing policies provided by the OFC 107 to perform routing and QoS guarantees.
The VM 109 is a virtual device configured to process computing tasks.
The NODE 110 is a JAVA process running on a physical machine (HOST) or the VM 109; it may connect to the RM 105 to report the process state, and may also connect to the Scheduler platform 104 to report the execution status of sub-computing tasks and the like.
With reference to FIG. 1, the routing policy decider 106 specifically included in this solution is configured to receive computing environment information corresponding to a computing task and network information of each computing node sent by the Scheduler platform 104; decide, according to the computing environment information, the bandwidth allocated for the computing task; generate routing configuration policy information for the computing task according to the network information of each computing node, the bandwidth allocated by decision, and the computing environment information; and send the routing configuration policy information to the routing configuration controller, where the routing configuration controller may be the OFC 107 in FIG. 1.
The Scheduler platform 104 is configured to receive computing task description information submitted by the UE, where the computing task description information includes a user identifier ID and required computing node information; acquire, according to the computing task description information, the computing task corresponding to the user ID; decompose the computing task into at least one sub-computing task according to the required computing node information, apply for a computing node for each sub-computing task, and acquire the network information of each computing node that processes each sub-computing task; generate computing environment information of the computing task according to the computing task description information, where the computing environment information is necessary information for the routing policy decider 106 to generate the routing configuration policy information corresponding to the computing task; and send the computing environment information and the network information of each computing node to the routing policy decider 106.
The routing configuration controller 107 is configured to receive the routing configuration policy information sent by the routing policy decider.
The routing configuration policy information may include the routing information corresponding to each computing node. Routing control can be performed for the computing task through the routing information corresponding to each computing node.
It should be noted that the computing task description information further includes bandwidth requirement information, and the computing environment information includes at least one of the following: the user ID, the required computing node information, the bandwidth requirement information, the state of the computing task, and the priority of the computing task.
It should be noted that the required computing node information includes the number of computing nodes and/or the configuration information of the computing nodes.
Further, the RM 105 is configured to receive a computing node allocation request carrying the required computing node information sent by the Scheduler platform 104; allocate computing nodes for the computing task according to the number of computing nodes and the configuration information of the computing nodes in the computing node allocation request; and return computing node information to the Scheduler platform 104, where the computing node information includes the network information of each computing node.
The computing node 110 is configured to receive the sub-computing task sent by the Scheduler platform 104.
It can be understood that the computing node 110 may be a collective term for the computing nodes that process all sub-computing tasks, or may be the computing node corresponding to one of the sub-computing tasks.
Correspondingly, the Scheduler platform 104 is further configured to decompose the computing task into at least one sub-computing task; send the at least one sub-computing task to the corresponding computing node 110; send a computing node allocation request carrying the number of computing nodes and the configuration information of the computing nodes to the computing resource manager 105; and receive the computing node information sent by the computing resource manager 105.
The switch 108 is configured to receive OpenFlow configuration policy information sent by the routing configuration controller 107, and perform routing control for the computing task according to the OpenFlow configuration policy information. In this solution, the switch may be the OFS 108.
Correspondingly, the routing configuration controller 107 is further configured to convert the received routing configuration policy information into OpenFlow configuration policy information, and send the OpenFlow configuration policy information to the switch 108.
Further optionally, the routing configuration policy information (OpenFlow configuration policy information) may further include node bandwidth information related to the routing information corresponding to each computing node.
When the network information of each computing node sent by the Scheduler platform 104 to the routing policy decider 106 includes the IP address, port number, and medium/media access control MAC (Medium/Media Access Control) address corresponding to each computing node, the routing configuration policy information generated by the routing policy decider 106 includes the routing information corresponding to each computing node. Preferably, the routing configuration policy information may further include the node bandwidth information corresponding to each computing node.
When the network information of each computing node sent by the Scheduler platform 104 to the routing policy decider 106 further includes the communication information between the computing nodes, the routing configuration policy information generated by the routing policy decider 106 includes the inter-node bandwidth information between the computing nodes.
It can be understood that resource reservation can be performed for the computing task according to the node bandwidth information corresponding to each computing node or the inter-node bandwidth information between the computing nodes. Resource reservation may include QoS reservation, dedicated bandwidth for routing the computing task, and the like.
It should be further noted that the Scheduler platform is further configured to send the sub-computing tasks to the computing nodes and designate the state of the computing task, where the state of the computing task is any one of the following: running, pause, ended, or error; and generate the computing environment information of the computing task according to the computing task description information and the state of the computing task.
It should be further noted that the Scheduler platform 104 is further configured to acquire the user level according to the user ID, and generate the priority of the computing task according to the user level corresponding to the user ID and the computing task level. The user level may be included in the user's subscription information, or a service level may be assigned to the user according to the user's subscription information.
It should be further noted that the Scheduler platform 104 is further configured to generate the computing environment information according to the computing task description information, the state of the computing task, and the priority of the computing task.
It should be further noted that the Scheduler platform 104 is further configured to change the state of the computing task to obtain the changed state of the computing task, where the changed state of the computing task is any one of the following: running, pause, ended, or error; and send the changed state of the computing task to the routing policy decider 106, where the changed state of the computing task is used by the routing policy decider to determine whether to release the resources corresponding to the routing configuration policy information of the computing task. The state of the computing task is preferably changed according to a user instruction, such as pause or stop; it may also be changed according to the readiness of the computing nodes, such as computing node preparation completed or a computing node reporting an error; and it may also be changed according to the execution status of the computing task, such as a computing task execution error.
Correspondingly, the routing policy decider 106 is further configured to receive the changed state of the computing task sent by the Scheduler platform 104; and when the changed state of the computing task changes from running to ended or from running to error, release the resources corresponding to the routing configuration policy information of the computing task according to a second predetermined policy.
The second predetermined policy may be setting a time interval (the time interval is greater than or equal to 0, for example, executing in the current round or after a time interval of 10 s), or releasing the resources corresponding to the routing configuration policy information of the computing task the next time the routing policy decider 106 performs routing configuration (i.e., when the routing policy decider 106 configures the corresponding routing configuration policy information for any computing task, that is, when the routing policy decider 106 configures new routing configuration policy information).
In the embodiments of the present invention, by contrast with the prior art described above, the Scheduler platform sends the acquired computing environment information and the network information of each computing node to the routing policy decider. Because the routing policy decider generates the routing configuration policy information according to the computing environment information and the network information of each computing node provided by the Scheduler platform, the routing control and/or resource reservation take into account in advance the network status and the bandwidth requirement of the computing task, thereby preventing communication congestion between network devices when processing network resources.
FIG. 2 shows an apparatus for processing network resources according to an embodiment of the present invention, specifically a schematic structural diagram of a routing policy decider. The routing policy decider 20 in FIG. 2 includes: a receiving module 201, a bandwidth decision module 202, a generation module 203, and a sending module 204. The routing policy decider 20 in FIG. 2 may be the routing policy decider 106 in FIG. 1.
The receiving module 201 is configured to receive computing environment information corresponding to a computing task and network information of each computing node delivered by the Scheduler platform, provide the computing environment information to the bandwidth decision module 202 and the generation module 203, and provide the network information of each computing node to the generation module 203.
What the receiving module 201 receives may be the computing environment information corresponding to the computing task and the network information of the computing nodes corresponding to the computing task; or it may be the computing environment information corresponding to the computing task and the network information of all computing nodes.
The bandwidth decision module 202 is configured to decide, according to the computing environment information, the bandwidth allocated for the computing task, and provide the bandwidth allocated by decision to the generation module 203.
The generation module 203 is configured to generate routing configuration policy information for the computing task according to the network information of each computing node, the bandwidth allocated by decision, and the computing environment information, and provide the routing configuration policy information to the sending module 204.
Specifically, the generation module 203 may generate the routing configuration policy information for the computing task according to the network information of each computing node, the bandwidth allocated by decision, and the computing environment information. Preferably, the network topology state is also referenced when generating the routing configuration policy information, where the network topology state is the state of the physical layout in which the transmission medium interconnects the various devices.
The sending module 204 is configured to send the routing configuration policy information to the routing configuration controller.
In this solution, the bandwidth allocated for the computing task is decided according to the computing environment information and the network information of each computing node acquired from the Scheduler platform, and the routing configuration policy information is generated accordingly. When this solution is implemented in the system architecture of FIG. 1, the switch (network device) finally performs routing control according to the OpenFlow configuration policy information converted by the routing configuration controller, thereby preventing communication congestion between network devices when processing network resources.
Further, the present invention provides another routing policy decider 30. As shown in FIG. 3, the apparatus 30 further includes a resource release module 205.
The resource release module 205 is configured to: when the changed state of the computing task changes from running to ended or from running to error, release the resources corresponding to the routing configuration policy information of the computing task according to a second predetermined policy. The state of the computing task is preferably changed according to a user instruction, such as pause or stop; it may also be changed according to the readiness of the computing nodes, such as computing node preparation completed or a computing node reporting an error; and it may also be changed according to the execution status of the computing task, such as a computing task execution error.
Correspondingly, the receiving module 201 is further configured to receive the changed state of the computing task sent by the Scheduler platform, and provide the changed state of the computing task to the resource release module 205, where the changed state of the computing task is any one of the following: running, pause, ended, or error.
The second predetermined policy may be setting a time interval (the time interval is greater than or equal to 0, for example, executing in the current round or after a time interval of 10 s), or releasing the resources corresponding to the routing configuration policy information of the computing task the next time the generation module 203 performs routing configuration (i.e., when the generation module 203 configures the corresponding routing configuration policy information for any computing task).
Further optionally, the computing environment information further includes bandwidth requirement information, where the bandwidth requirement information includes the required bandwidth information and/or the computing task type; the bandwidth decision module 202 is further configured to decide the bandwidth allocated for the computing task according to at least one kind of information in the bandwidth requirement information. In addition, when the computing environment information does not include the bandwidth requirement information, the bandwidth decision module 202 acquires the user level according to the user ID in the computing environment information and decides the bandwidth allocated for the computing task according to the user level.
Further optionally, depending on the information included in the computing environment information, the manner in which the generation module 203 in the routing policy decider generates the routing configuration policy information for the computing task also differs, specifically as follows:
In the first manner, when the computing environment information includes the state of the computing task:
The generation module 203 is specifically configured to: when the state of the computing task is pause, generate the routing configuration policy information for the computing task in the current round according to the network information of each computing node and the bandwidth allocated by decision, and apply the generated routing configuration policy information to the routing configuration during the execution of the current computing task; when the state of the computing task is running, generate the routing configuration policy information for the computing task according to the network information of each computing node and the bandwidth allocated by decision and according to a first predetermined policy.
The first predetermined policy is a predetermined time or a predetermined configuration manner.
Further, the computing environment information further includes the priority of the computing task. The first predetermined policy specifically includes:
When the state of the computing task is running and the priority of the computing task is higher than or equal to a predetermined threshold, the generation module 203 generates the routing configuration policy information for the computing task in the current routing configuration according to the network information of each computing node and the bandwidth allocated by decision, where the predetermined threshold is used to measure the level of the priority of the computing task. When the state of the computing task is running and the priority of the computing task is lower than the predetermined threshold, the generation module 203 generates the routing configuration policy information for the computing task according to the network information of each computing node and the bandwidth allocated by decision the next time routing configuration needs to be performed, provided the computing task is still in the running state. The next time routing configuration needs to be performed may be when the routing policy decider needs to generate a routing configuration policy for another computing task, or when a predetermined time period is reached, and so on (this explanation also applies to other embodiments).
It should be noted that when the generation module 203 needs to wait until the next routing configuration to generate the routing configuration policy information for the computing task, it needs to determine whether the computing task has finished executing. When the computing task is still being processed, the generation module 203 may, at the next routing configuration, take the relevant information of the computing task (the network information of each computing node, etc.) into account in the new routing configuration policy information. When the computing task has been processed, the generation module 203 does not need to consider the relevant information of the computing task when generating the new routing configuration policy.
Likewise, the range of the predetermined threshold is not limited in this solution; the predetermined threshold is set specifically according to the division of the importance levels (priorities) of the computing tasks.
In the second manner, when the computing environment information includes the priority of the computing task:
The generation module 203 is specifically configured to: when the priority of the computing task is higher than or equal to a predetermined threshold, generate the routing configuration policy information for the computing task in the current round according to the network information of each computing node and the bandwidth allocated by decision, where the predetermined threshold is used to measure the level of the priority of the computing task; when the priority of the computing task is lower than the predetermined threshold, generate the routing configuration policy information for the computing task according to the network information of each computing node and the bandwidth allocated by decision and according to a first predetermined policy.
The first predetermined policy in the second manner is the same as that in the first manner; it specifically includes (further, the computing environment information further includes the state of the computing task):
When the priority of the computing task is lower than the predetermined threshold and the state of the computing task is pause, the generation module 203 generates the routing configuration policy information for the computing task in the current round or the next time routing configuration needs to be performed, according to the network information of each computing node and the bandwidth allocated by decision; whether the routing configuration policy information is generated in the current round or at the next routing configuration may be preconfigured in the routing policy decider.
When the priority of the computing task is lower than the predetermined threshold and the state of the computing task is running, the generation module 203 generates the routing configuration policy information for the computing task according to the network information of each computing node and the bandwidth allocated by decision the next time routing configuration needs to be performed.
It is worth noting that, in the two manners in which the generation module 203 generates the routing configuration policy information for the computing task according to the network information of each computing node, the bandwidth allocated by decision, and the computing environment information, the conditions considered first differ.
In the first manner, the state of the computing task is considered first. When the state of the computing task is pause, the generation module 203 performs, in the current round, the operation of generating the routing configuration policy information corresponding to the computing task; when the state of the computing task is running, the generation module 203 then considers whether the priority of the computing task is higher than or equal to the predetermined threshold: when the priority is higher than or equal to the predetermined threshold, the generation module 203 performs the generating operation in the current routing configuration; when the priority is lower than the predetermined threshold, the generation module 203 performs the generating operation the next time routing configuration needs to be performed.
In the second manner, the priority of the computing task is considered first. When the priority of the computing task is higher than or equal to the predetermined threshold, the generation module 203 performs, in the current round, the operation of generating the routing configuration policy information corresponding to the computing task; when the priority is lower than the predetermined threshold, the generation module 203 then considers whether the state of the computing task is running or pause: when the state is pause, the generation module 203 performs the generating operation according to a predetermined time (in the current round or after a certain time); when the state is running, the generation module 203 performs the generating operation according to a predetermined configuration manner (the next routing configuration).
These two manners give rise to two methods for processing network resources.
In the first method, after the Scheduler platform delivers the decomposed sub-computing tasks to the corresponding computing nodes, the generation module 203 in the routing policy decider first configures the routing configuration policy information corresponding to the computing task, and after the configuration is completed, informs the Scheduler platform that the configuration is completed, so that the computing nodes start processing their respective sub-computing tasks. Informing the Scheduler platform that the configuration is completed may specifically be: the routing policy decider informs the Scheduler platform after receiving a successful configuration response from the switch. Specifically, the routing policy decider sends the generated routing configuration policy information to the network policy controller, so that after converting the routing configuration policy information into OpenFlow configuration policy information, the network policy controller sends the OpenFlow configuration policy information to the switch, and the switch performs routing control and/or resource reservation for the computing task according to the OpenFlow configuration policy information.
It should be noted that because the generation module 203 in the routing policy decider generates dynamic routing configuration policy information according to the computing environment information, the network information of each computing node, and real-time network topology information, the switch can perform routing control and/or resource reservation for the computing task according to the dynamically generated routing configuration policy, thereby avoiding communication congestion when transmitting data. The first method is suitable for use when the user level is high or the priority of the computing task is high.
In the second method, after the Scheduler platform delivers the decomposed sub-computing tasks to the corresponding computing nodes, the computing nodes start processing their respective sub-computing tasks, while the generation module 203 in the routing policy decider decides, according to the content of the computing environment information, when to generate the routing configuration policy information corresponding to the computing task. The operations performed after the generation module 203 generates the routing configuration policy information are the same as in the first method. The difference is that in the first method the computing nodes start processing their respective sub-computing tasks only after the generation module 203 has configured the routing configuration policy information, and the switch then performs data routing and QoS reservation according to the dynamically generated OpenFlow configuration policy information, so the data communication of the computing task is QoS-guaranteed. In the second method, the computing nodes executing their received sub-computing tasks and the allocation of the routing configuration policy information for the computing task are parallel processes. Because loading and activating the routing configuration policy information involves a certain delay (how long depends on the decision of the generation module 203 in the routing policy decider), before the routing configuration policy allocated for the computing task takes effect, if the executing computing task has communication requirements, routing and transmission control are performed according to the routing configuration policy preconfigured in the switch (at this time the data communication of the computing task has no QoS guarantee); only after the routing configuration policy information allocated for the computing task takes effect during the execution of the computing task are routing control and/or resource reservation performed for the computing task according to the dynamically generated OpenFlow configuration policy information (only then is the data communication of the computing task QoS-guaranteed).
It should be noted that the second method is suitable for use when the user level is low or the priority of the computing task is low. This manner guarantees the network communication of the computing task while minimizing frequent changes to the network, so as to improve the utilization of network resources.
Further optionally, the generation module 203 generates the routing configuration policy information for the computing task according to the network information of each computing node, the bandwidth allocated by decision, and the computing environment information, which specifically includes the following.
The routing configuration policy information may include the routing information corresponding to each computing node. Further optionally, the routing configuration policy information may also include node bandwidth information related to the routing information corresponding to each computing node.
When the network information includes the IP address, port number, and MAC address corresponding to each computing node, the generation module 203 generates routing configuration policy information that includes the node bandwidth information corresponding to each computing node.
When the network information includes the IP address, port number, and MAC address corresponding to each computing node as well as the communication information between the computing nodes, the generation module 203 generates routing configuration policy information that includes the inter-node bandwidth information between the computing nodes.
For example, suppose the network information contains three nodes: node A, node B, and node C, and the total bandwidth of the computing task is 9M. To prevent the total communication bandwidth of the nodes of the computing task from exceeding the decided total bandwidth, when the network information does not include the communication information between the computing nodes, the routing configuration policy information generated by the generation module 203 reserves 6M for the communication port of node A, 6M for the communication port of node B, and 6M for the communication port of node C. When the network information includes the communication information between the computing nodes, for example, node A interacts with node C and node B interacts with node C, the routing configuration policy information generated by the generation module 203 reserves 9M (or less than 9M, according to its own policy) for the communication port of node A, 9M (or less than 9M, according to its own policy) for the communication port of node B, and 9M for the communication port of node C.
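The worked example above (three nodes A, B, C with a 9M task total) can be reproduced in a short sketch. The function name is hypothetical, and the 2/n factor behind the even 6M-per-port split is my reading of the figures in the example (each unit of node-to-node traffic occupies two ports), not a formula stated in the text:

```python
def port_reservations(nodes, total_mbps, links=None):
    """Reserve per-port bandwidth for a task's computing nodes (sketch).

    Without inter-node communication information, every port gets an even
    share; with communication information, a communicating port may be
    given up to the task total (the decider may reserve less by policy).
    """
    if not links:
        share = total_mbps * 2 // len(nodes)   # 9M * 2 / 3 nodes = 6M each
        return {n: share for n in nodes}
    reservations = {}
    for n in nodes:
        degree = sum(1 for a, b in links if n in (a, b))
        reservations[n] = total_mbps if degree else 0
    return reservations

even = port_reservations(["A", "B", "C"], 9)
aware = port_reservations(["A", "B", "C"], 9, links=[("A", "C"), ("B", "C")])
```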
图4是本发明另一个实施例的网络资源的处理装置,具体为一种调度Scheduler平台的结构示意图。图4中的调度Scheduler平台40包括:接收模块401,获取模块402,分解模块403,生成模块404,第一发送模块405。其中,图4中的调度Scheduler平台40可以为图1中的Scheduler平台104。
接收模块401,用于接收UE提交的计算任务描述信息,并将计算任务描述信息提供给获取模块402、分解模块403和生成模块404,其中计算任务描述信息包括用户ID、所需的计算节点信息。
获取模块402,用于根据计算任务描述信息获取用户ID对应的计算任务。
分解模块403,用于根据所需的计算节点信息将计算任务分解成至少一个子计算任务。
获取模块402,还用于获取处理各个子计算任务对应的各个计算节点的网络信息,并将各个计算节点的网络信息提供给第一发送模块405。
生成模块404,用于根据计算任务描述信息生成计算任务的计算环境信息,并将计算环境信息提供给第一发送模块405,其中计算环境信息为路由策略决策器生成计算任务对应的路由配置策略信息的必要信息。
第一发送模块405,用于向路由策略决策器发送计算环境信息和各个计算节点的网络信息。
进一步的,本发明实施例提供另一种调度Scheduler平台50,如图5所示,该装置50还包括第二发送模块406、指定模块407和更改模块408;获取模块402包括发送单元5021和接收单元5022。
第二发送模块406,还用于根据各个计算节点的网络信息向计算节点发送各自对应的子计算任务,并向指定模块407提供已发送信息,已发送信息用于告知指定模块407已发送至少一个子计算任务。
指定模块407,用于指定计算任务的状态,并将计算任务的状态提供给生成模块404。生成模块404根据计算任务描述信息和计算任务的状态生成计算任务的计算环境信息。
进一步可选的,计算任务描述信息还包括计算任务级别。获取模块402,还用于根据用户ID获取用户级别,并将用户级别提供给生成模块404。然后生成模块404根据用户级别和计算任务级别生成计算任务的优先级。
进一步的,生成模块404根据计算任务描述信息和计算任务的优先级生成计算环境信息。
进一步可选的,生成模块404,还用于根据计算任务描述信息、计算任务的状态和计算任务的优先级生成计算环境信息。
可以理解的是,计算任务描述信息可以包括用户ID、所需的计算节点信息、带宽需求信息(可选的)、计算任务的获取信息(可选的)、计算任务级别(可选的)。计算环境信息包括以下至少一种信息:用户ID、所需的计算节点信息、带宽需求信息、计算任务的状态和计算任务的优先级。
进一步需要说明的是,获取模块402根据计算任务描述信息获取用户ID对应的计算任务,具体包括两种方式:
第一种方式,获取模块402接收UE发送的计算任务数据包,该计算任务数据包中包含计算任务描述信息,获取模块402根据计算任务描述信息解析计算任务数据包以获取计算任务。
第二种方式,获取模块402根据计算任务描述信息中的计算任务获取地址或获取方式获取计算任务。
进一步需要说明的是,分解模块403根据所需的计算节点信息将计算任务分解成至少一个子计算任务,具体包括:
其中,所需的计算节点信息中包括计算节点的配置信息、计算节点的数量,其中计算节点的配置信息可以包括硬件配置(内存、CPU、网络等)、软件配置(操作系统类型、应用程序库)等。可以理解的是,分解模块403根据计算节点的数量将计算任务分解成至少一个子计算任务。
进一步需要说明的是,在分解模块403将计算任务分解成至少一个子计算任务之后,获取模块402中的发送单元5021向计算资源管理器发送携带计算节点的数量和计算节点的配置信息的计算节点分配请求。在计算资源管理器根据接收的计算节点分配请求中的内容为该计算任务配置计算节点信息,并将配置的计算节点信息发送给Scheduler平台之后,接收单元5022接收计算资源管理器发送的计算节点信息,计算节点信息中包括各个计算节点的网络信息。其中,各个计算节点的网络信息中包括如下信息中的一个或多个:各个计算节点对应的远程访问地址、互联网IP地址、端口号和MAC地址。
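上述"发送计算节点分配请求——接收计算节点信息"的交互可以用如下示意性的Python片段表示(仅为帮助理解的示例;resource_manager 的接口形式为本文假设):

```python
def request_nodes(resource_manager, node_count, node_config):
    """向计算资源管理器申请计算节点,并取回各计算节点的网络信息(示意实现)。
    resource_manager 为一可调用对象,接收分配请求并返回配置好的计算节点信息列表。"""
    request = {"node_count": node_count, "node_config": node_config}  # 计算节点分配请求
    node_infos = resource_manager(request)
    # 各计算节点的网络信息可包括远程访问地址、IP地址、端口号和MAC地址等
    return [info["network"] for info in node_infos]
```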
进一步可选的,更改模块408,用于更改计算任务的状态,得到计算任务更改后的状态,并将计算任务更改后的状态提供给发送模块405,其中,计算任务更改后的状态为以下任意一种:运行、暂停、结束或出错。然后发送模块405向路由策略决策器发送计算任务更改后的状态,计算任务更改后的状态用于路由策略决策器判断是否释放计算任务对应的路由配置策略信息对应的资源。其中,更改计算任务状态优选的是依据用户指令,如暂停、停止等;还可以依据计算节点的准备情况,如计算节点准备完成、或者计算节点报错;还可以依据计算任务执行情况,如计算任务执行出错等等。
本方案中,Scheduler平台将生成模块生成的计算环境信息和获取模块获取的各个计算节点的网络信息发送给路由策略决策器,以便路由策略决策器根据计算环境信息和各个计算节点的网络信息生成路由配置策略信息,然后路由策略决策器将该路由配置策略信息下发给路由配置控制器,使得交换机(网络设备)最终根据路由配置控制器转换的OpenFlow配置策略信息进行路由控制和/或资源预留,从而能够防止处理网络资源时导致网络设备间的通信堵塞。
如图6所示,图6为路由策略决策器的硬件结构示意图。其中,路由策略决策器可包括存储器601、收发器602、处理器603和总线604,其中,存储器601、收发器602、处理器603通过总线604通信连接。
存储器601可以是只读存储器(Read Only Memory,ROM),静态存储设备,动态存储设备或者随机存取存储器(Random Access Memory,RAM)。存储器601可以存储操作系统和其他应用程序。在通过软件或者固件来实现本发明实施例提供的技术方案时,用于实现本发明实施例提供的技术方案的程序代码保存在存储器601中,并由处理器603来执行。
收发器602用于装置与其他设备或通信网络(例如但不限于以太网,无线接入网(Radio Access Network,RAN),无线局域网(Wireless Local Area Network,WLAN)等)之间的通信。
处理器603可以采用通用的中央处理器(Central Processing Unit,CPU),微处理器,应用专用集成电路(Application Specific Integrated Circuit,ASIC),或者一个或多个集成电路,用于执行相关程序,以实现本发明实施例所提供的技术方案。
总线604可包括一通路,在装置各个部件(例如存储器601、收发器602和处理器603)之间传送信息。
应注意,尽管图6所示的硬件仅仅示出了存储器601、收发器602、处理器603和总线604,但是在具体实现过程中,本领域的技术人员应当明白,该装置还包含实现正常运行所必须的其他器件。同时,根据具体需要,本领域的技术人员应当明白,还可包含实现其他功能的硬件器件。
具体的,图6所示的路由策略决策器用于实现图2-图3实施例所示的装置时,该装置中的收发器602,用于接收调度Scheduler平台传递的计算任务相对应的计算环境信息和各个计算节点的网络信息,并将计算环境信息和各个计算节点的网络信息提供给处理器603。
处理器603,分别与存储器601和收发器602连接;具体用于根据计算环境信息为计算任务决策分配的带宽;根据各个计算节点的网络信息、决策分配的带宽和计算环境信息为计算任务生成路由配置策略信息,并将路由配置策略信息提供给收发器602。
收发器602,用于将路由配置策略信息发送给路由配置控制器。
进一步的,处理器603,还用于当计算任务更改后的状态由运行更改为结束或者由运行更改为出错时,按照第二预定策略释放计算任务对应的路由配置策略信息对应的资源。
对应的,收发器602,还用于接收Scheduler平台发送的计算任务更改后的状态,并将计算任务更改后的状态提供给处理器603,其中,计算任务更改后的状态为以下任意一种:运行、暂停、结束或出错。
进一步可选的,计算环境信息中还包括带宽需求信息,其中带宽需求信息包括:所需带宽信息和/或计算任务类型;处理器603,还用于根据带宽需求信息中的至少一种信息为计算任务决策分配的带宽。另外,当计算环境信息中不包括带宽需求信息时,处理器603根据计算环境信息中的用户ID获取用户级别,并根据用户级别为计算任务决策分配的带宽。
进一步可选的,根据计算环境信息中包括的信息不同,处理器603为计算任务生成路由配置策略信息的方式亦不相同,具体如下:
第一种方式:在计算环境信息中包括计算任务的状态的情况下:
处理器603,具体用于当计算任务的状态为暂停时,根据各个计算节点的网络信息和决策分配的带宽在本次为计算任务生成路由配置策略信息;当计算任务的状态为运行时,根据各个计算节点的网络信息和决策分配的带宽并按照第一预定策略为计算任务生成路由配置策略信息。
其中,第一预定策略具体包括:
(进一步的,计算环境信息还包括计算任务的优先级。)
当计算任务的状态为运行,并且计算任务的优先级高于或等于预定阈值时,处理器603根据各个计算节点的网络信息和决策分配的带宽在本次进行路由配置时为计算任务生成路由配置策略信息,其中预定阈值用于衡量计算任务的优先级的高低;当计算任务的状态为运行,并且计算任务的优先级低于预定阈值时,处理器603根据各个计算节点的网络信息和决策分配的带宽在下一次需要进行路由配置时为计算任务生成路由配置策略信息。
在本方案中不限制预定阈值的范围,预定阈值的设置具体根据各个计算任务的等级重要性(优先级)的划分来确定预定阈值。
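与之对应,第一种方式(优先考虑计算任务的状态)的决策逻辑可以表示为如下示意性的Python片段(仅为帮助理解的示例,并非本方案的限定实现;函数名、状态取值与阈值均为本文假设):

```python
def decide_timing_state_first(state: str, priority: int, threshold: int = 5) -> str:
    """第一种方式:优先考虑计算任务的状态,返回生成路由配置策略信息的时机。"""
    if state == "paused":          # 暂停:在本次为计算任务生成路由配置策略信息
        return "now"
    if priority >= threshold:      # 运行且优先级高于或等于预定阈值:本次路由配置时生成
        return "now"
    return "next_config"           # 运行且优先级低于预定阈值:下一次需要路由配置时生成
```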
第二种方式,在计算环境信息包括计算任务的优先级的情况下:
处理器603,具体用于当计算任务的优先级高于或等于预定阈值时,根据各个计算节点的网络信息和决策分配的带宽在本次为计算任务生成路由配置策略信息,其中预定阈值用于衡量计算任务的优先级的高低;当计算任务的优先级低于预定阈值时,处理器603根据各个计算节点的网络信息和决策分配的带宽并按照第一预定策略为计算任务生成路由配置策略信息。
其中,第二种方式下的第一预定策略与第一种方式下的第一预定策略相同。
具体的,第一预定策略包括:
(进一步的,计算环境信息还包括计算任务的状态时)
当计算任务的优先级低于预定阈值,并且计算任务的状态为暂停时,处理器603根据各个计算节点的网络信息和决策分配的带宽在本次或者在下一次需要进行路由配置时为计算任务生成路由配置策略信息;当计算任务的优先级低于预定阈值,并且计算任务的状态为运行时,处理器603根据各个计算节点的网络信息和决策分配的带宽在下一次需要进行路由配置时为计算任务生成路由配置策略信息。
进一步可选的,处理器603根据各个计算节点的网络信息、决策分配的带宽和计算环境信息为计算任务生成路由配置策略信息。具体包括:
其中路由配置策略信息中可以包括各个计算节点对应的路由信息。进一步可选的,路由配置策略信息中还可以包括各个计算节点对应的路由信息相关的节点带宽信息。
当网络信息包括各个计算节点对应的IP地址、端口号和MAC地址时,处理器603生成包括各个计算节点对应的节点带宽信息的路由配置策略信息。
当网络信息包括各个计算节点对应的IP地址、端口号、MAC地址和各个计算节点之间的通信信息时,处理器603生成包括各个计算节点之间的节点间带宽信息的路由配置策略信息。
本方案通过为每个计算任务进行路由控制和/或资源预留,能够防止处理网络资源时导致网络设备间的通信堵塞。
如图7所示,图7为Scheduler平台的硬件结构示意图。其中,Scheduler平台可包括存储器701、收发器702、处理器703和总线704。其中,存储器701、收发器702、处理器703通过总线704通信连接。
其中在装置中对于存储器701、收发器702、处理器703和总线704的共同功能的概述,可参考图6中的路由策略决策器包括的存储器601、收发器602、处理器603和总线604的说明,在此不再一一赘述。
应注意,尽管图7所示的硬件仅仅示出了存储器701、收发器702、处理器703和总线704,但是在具体实现过程中,本领域的技术人员应当明白,该装置还包含实现正常运行所必须的其他器件。同时,根据具体需要,本领域的技术人员应当明白,还可包含实现其他功能的硬件器件。
具体的,图7所示的Scheduler平台用于实现图4-图5实施例所示的装置时,该装置中的收发器702,用于接收UE提交的计算任务描述信息,并将计算任务描述信息提供给处理器703,其中计算任务描述信息包括用户ID、所需的计算节点信息。
处理器703,分别与存储器701和收发器702连接;具体用于根据计算任务描述信息获取用户ID对应的计算任务;根据所需的计算节点信息将计算任务分解成至少一个子计算任务;获取处理各个子计算任务对应的各个计算节点的网络信息,并将各个计算节点的网络信息提供给收发器702;根据计算任务描述信息生成计算任务的计算环境信息,并将计算环境信息提供给收发器702,其中计算环境信息为路由策略决策器生成计算任务对应的路由配置策略信息的必要信息。
收发器702,用于向路由策略决策器发送计算环境信息和各个计算节点的网络信息。以便于路由策略决策器根据计算环境信息和各个计算节点的网络信息和自身获取的网络拓扑信息生成路由配置策略信息。
进一步可选的,收发器702,还用于向计算节点发送各自对应的子计算任务,并向处理器703提供已发送信息,已发送信息用于告知处理器703已发送至少一个子计算任务。
对应的,处理器703,还用于指定计算任务的状态,并根据计算任务描述信息和计算任务的状态生成计算任务的计算环境信息。
进一步可选的,计算任务描述信息还包括计算任务级别。处理器703,还用于根据用户ID获取用户级别。然后根据用户级别和计算任务级别生成计算任务的优先级。该用户级别可以为用户的订购信息,或者根据用户的订购信息赋予该用户的一个服务级别。
进一步的,处理器703根据计算任务描述信息和计算任务的优先级生成计算环境信息。
进一步可选的,处理器703,还用于根据计算任务描述信息、计算任务的状态和计算任务的优先级生成计算环境信息。
可以理解的是,计算任务描述信息可以包括用户ID、所需的计算节点信息、带宽需求信息(可选的)、计算任务的获取信息(可选的)、计算任务级别(可选的)。计算环境信息包括以下至少一种信息:用户ID、所需的计算节点信息、带宽需求信息、计算任务的状态和计算任务的优先级。
进一步需要说明的是,处理器703根据所需的计算节点信息将计算任务分解成至少一个子计算任务,具体包括:
其中,所需的计算节点信息中包括计算节点的配置信息、计算节点的数量,其中计算节点的配置信息可以包括硬件配置(内存、CPU、网络等)、软件配置(操作系统类型、应用程序库)等。可以理解的是,处理器703根据计算节点的数量将计算任务分解成至少一个子计算任务。
进一步需要说明的是,在处理器703将计算任务分解成至少一个子计算任务之后,收发器702向计算资源管理器发送携带计算节点的数量和计算节点的配置信息的计算节点分配请求。在计算资源管理器根据接收的计算节点分配请求中的内容为该计算任务配置计算节点信息,并将配置的计算节点信息发送给Scheduler平台之后,收发器702接收计算资源管理器发送的计算节点信息,计算节点信息中包括各个计算节点的网络信息。其中,各个计算节点的网络信息中包括如下信息中的一个或多个:各个计算节点对应的远程访问地址、互联网IP地址、端口号和MAC地址。
进一步可选的,处理器703,还用于更改计算任务的状态,得到计算任务更改后的状态,并将计算任务更改后的状态提供给收发器702,其中,计算任务更改后的状态为以下任意一种:运行、暂停、结束或出错。然后收发器702向路由策略决策器发送计算任务更改后的状态,计算任务更改后的状态用于路由策略决策器判断是否释放计算任务对应的路由配置策略信息对应的资源。
本方案能够防止处理网络资源时导致网络设备间的通信堵塞。
图8是本发明一个实施例的网络资源的处理方法的流程图。图8的方法可以由图2-图3描述的装置20和装置30(即路由策略决策器)来执行。
801,路由策略决策器接收Scheduler平台传递的计算任务相对应的计算环境信息和各个计算节点的网络信息。
802,路由策略决策器根据计算环境信息为计算任务决策分配的带宽。
803,路由策略决策器根据各个计算节点的网络信息、决策分配的带宽和计算环境信息为计算任务生成路由配置策略信息。
804,路由策略决策器将路由配置策略信息发送给路由配置控制器。
本发明实施例提供的一种网络资源的处理方法,与现有技术相比具有如下区别:现有技术在处理需要较多计算资源的数据(计算任务)时,Scheduler平台将各个子计算任务下发给分配的计算节点之后,当计算节点之间交互的数据较多时,网络设备(如交换机、路由器等)只能按照预先配置的静态策略进行路由,从而可能导致网络设备间出现通信堵塞;而本发明实施例中,Scheduler平台将获取的计算环境信息和各个计算节点的网络信息发送给路由策略决策器,路由策略决策器根据Scheduler平台提供的计算环境信息和各个计算节点的网络信息生成路由配置策略信息,然后路由策略决策器将该路由配置策略信息下发给路由配置控制器,使得交换机(网络设备)最终根据路由配置控制器下发的配置策略信息对数据进行传输,从而能够防止处理网络资源时导致网络设备间的通信堵塞。
需要说明的是,在本方案中不限制路由配置控制器的具体设备,优选的,在本实施例中路由配置控制器可以为OFC。
进一步可选的,在步骤801中的计算环境信息至少包括以下一种信息:用户ID、计算任务的状态、计算任务的优先级、带宽需求信息。
进一步可选的,在步骤802中,当计算环境信息中包括带宽需求信息时,路由策略决策器根据带宽需求信息中的所需带宽信息和/或计算任务类型为计算任务决策分配的带宽。另外,当计算环境信息中不包括带宽需求信息时,路由策略决策器根据计算环境信息中的用户ID获取用户级别,然后根据用户级别为计算任务决策分配的带宽。
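步骤802的带宽决策逻辑可以概括为如下示意性的Python片段(仅为帮助理解的示例,并非本方案的限定实现;计算环境信息与"级别-带宽"映射的数据结构均为本文假设):

```python
def decide_bandwidth(env_info, user_levels, level_bandwidth):
    """根据计算环境信息为计算任务决策分配的带宽(示意实现)。
    有带宽需求信息时,按所需带宽信息(或按计算任务类型查表)决策;
    否则根据用户ID获取用户级别,再按用户级别决策带宽。"""
    demand = env_info.get("bandwidth_demand")
    if demand:
        if "required_mbps" in demand:                # 带宽需求信息中的所需带宽信息
            return demand["required_mbps"]
        return level_bandwidth[demand["task_type"]]  # 或按计算任务类型决策
    level = user_levels[env_info["user_id"]]         # 无带宽需求信息:按用户级别决策
    return level_bandwidth[level]
```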
进一步可选的,在步骤803中,根据计算环境信息中包括的信息不同,路由策略决策器为计算任务生成路由配置策略信息的方式亦不相同,具体如下:
第一种方式:在计算环境信息中包括计算任务的状态的情况下:
当计算任务的状态为暂停时,路由策略决策器根据各个计算节点的网络信息和决策分配的带宽在本次为计算任务生成路由配置策略信息。
当计算任务的状态为运行时,路由策略决策器根据各个计算节点的网络信息和决策分配的带宽并按照第一预定策略为计算任务生成路由配置策略信息。
其中,各个计算节点的网络信息包括各个计算节点对应的互联网协议IP地址和端口号。进一步的,计算环境信息中还包括计算任务的优先级。
第一预定策略具体包括:
当计算任务的状态为运行,并且计算任务的优先级高于或等于预定阈值时,路由策略决策器根据各个计算节点的网络信息和决策分配的带宽在本次进行路由配置时为计算任务生成路由配置策略信息,其中预定阈值用于衡量计算任务的优先级的高低。
当计算任务的状态为运行,并且计算任务的优先级低于预定阈值时,路由策略决策器根据各个计算节点的网络信息和决策分配的带宽并在下一次需要进行路由配置时为计算任务生成路由配置策略信息。
在本方案中不限制第一预定策略的具体方式。举例来说,当计算任务的状态为运行,但计算任务的优先级高于或等于预定阈值时,路由策略决策器可以在本次生成该计算任务对应的路由配置策略信息。同样的,第一预定策略可以为路由策略决策器在下一次需要进行路由配置时为该计算任务生成路由配置策略信息。可以理解的是,下一次需要进行路由配置可以为路由策略决策器在下一次为某一个计算任务生成路由配置策略信息的同时,将该计算任务相关的信息考虑到新的路由配置策略信息中,以使得新的路由配置策略信息适用于该计算任务的处理操作。
同样的,在本方案中不限制预定阈值的范围,预定阈值的设置具体根据各个计算任务的等级重要性(优先级)的划分来确定预定阈值。
第二种方式,在计算环境信息包括计算任务的优先级的情况下:
当计算任务的优先级高于或等于预定阈值时,路由策略决策器根据各个计算节点的网络信息和决策分配的带宽在本次为计算任务生成路由配置策略信息,其中预定阈值用于衡量计算任务的优先级的高低。
当计算任务的优先级低于预定阈值时,路由策略决策器根据各个计算节点的网络信息和决策分配的带宽并按照第一预定策略为计算任务生成路由配置策略信息。
其中,第二种方式下的第一预定策略与第一种方式下的第一预定策略相同,第一预定策略具体包括:
进一步的,当计算任务的优先级低于预定阈值,并且计算任务的状态为暂停时,路由策略决策器根据各个计算节点的网络信息和决策分配的带宽在本次或者在下一次需要进行路由配置时为计算任务生成路由配置策略信息。
当计算任务的优先级低于预定阈值,并且计算任务的状态为运行时,路由策略决策器根据各个计算节点的网络信息和决策分配的带宽并在下一次需要进行路由配置时为计算任务生成路由配置策略信息。
值得说明的是,在路由策略决策器根据所述各个计算节点的网络信息、决策分配的所述带宽和计算环境信息为所述计算任务生成路由配置策略信息的两种方式中,优先考虑的条件不同。
第一种方式中优先考虑计算任务的状态。当计算任务的状态为暂停时,路由策略决策器在本次执行生成该计算任务对应的路由配置策略信息的操作;当计算任务的状态为运行时,路由策略决策器再考虑计算任务的优先级是否高于或等于预定阈值,在该计算任务的优先级高于或等于预定阈值时,路由策略决策器在本次进行路由配置时执行生成该计算任务对应的路由配置策略信息的操作;在该计算任务的优先级低于预定阈值时,路由策略决策器在下一次需要进行路由配置时执行生成该计算任务对应的路由配置策略信息的操作。
第二种方式中优先考虑计算任务的优先级。当计算任务的优先级高于或等于预定阈值时,路由策略决策器在本次执行生成该计算任务对应的路由配置策略信息的操作;当计算任务的优先级低于预定阈值时,路由策略决策器再考虑计算任务的状态为运行还是暂停,在该计算任务的状态为暂停时,路由策略决策器在本次或者在下一次需要进行路由配置时执行生成该计算任务对应的路由配置策略信息的操作;在该计算任务的状态为运行时,路由策略决策器在下一次需要进行路由配置时执行生成该计算任务对应的路由配置策略信息的操作。
根据这两种方式,出现两种网络资源的处理方法。第一种方法为:Scheduler平台在将分解的各个子计算任务下发到对应的计算节点之后,路由策略决策器优先配置计算任务对应的路由配置策略信息,在配置完毕之后告知Scheduler平台配置完毕,以便计算节点开始处理各自的子计算任务。其中,在配置完毕之后告知Scheduler平台配置完毕具体可以为:路由策略决策器在收到来自交换机的成功配置响应后,告知Scheduler平台配置完毕。具体的,路由策略决策器将生成的路由配置策略信息发送给网络策略控制器,以便网络策略控制器在将路由配置策略信息转换成OpenFlow配置策略信息之后,向交换机发送该OpenFlow配置策略信息,交换机按照OpenFlow配置策略信息预留资源和进行路由控制。
需要说明的是,由于路由策略决策器根据计算环境信息、各个计算节点的网络信息和实时的网络拓扑信息,生成动态的路由配置策略信息,即交换机可以根据所动态生成的路由配置策略预留资源和进行路由控制,从而避免传输数据时出现通信堵塞情况。第一种方法适用于用户级别高或者计算任务的优先级高时采用。
第二种方法为:Scheduler平台在将分解的各个子计算任务下发到对应的计算节点之后,计算节点开始执行处理各自子计算任务的操作,而路由策略决策器根据计算环境信息的内容来决定何时生成该计算任务对应的路由配置策略信息。在路由策略决策器生成路由配置策略信息之后的执行操作与第一种方法相同。不同的是,第一种网络资源的处理方法是在路由策略决策器配置完毕路由配置策略信息之后,计算节点才开始处理各自的子计算任务,然后交换机会按照动态生成的OpenFlow配置策略信息进行路由控制和/或资源预留(此时该计算任务的数据通信是有QoS保证的)。第二种网络资源的处理方法中,各个计算节点执行各自接收的子计算任务和为该计算任务分配路由配置策略信息是并行的过程,由于路由配置策略信息的加载生效需要一定的延迟(延迟多长时间取决于路由策略决策器决策),所以在为该计算任务分配的路由配置策略加载生效之前,若执行中的计算任务有通信需求,则其按照交换机之前配置的已有路由配置策略进行路由和传输控制(此时该计算任务的数据通信是无QoS保证的)。
需要说明的是,第二种方法适用于用户级别低或者计算任务的优先级低时采用。该方式在尽量减少对网络进行频繁更改的前提下,保证计算任务的网络通信,以提高网络资源的利用率。
进一步可选的,在步骤803中,当网络信息包括各个计算节点对应的IP地址、端口号和MAC地址时,路由策略决策器生成包括各个计算节点对应的节点带宽信息的路由配置策略信息。
当网络信息包括各个计算节点对应的IP地址、端口号、MAC地址和各个计算节点之间的通信信息时,路由策略决策器生成包括各个计算节点之间的节点间带宽信息的路由配置策略信息。
进一步可选的,在步骤803之后,该方法还包括:路由策略决策器接收Scheduler平台发送的计算任务更改后的状态,其中,计算任务更改后的状态为以下任意一种:运行、暂停、结束或出错;当计算任务更改后的状态由运行更改为结束或者由运行更改为出错时,路由策略决策器按照第二预定策略释放计算任务对应的路由配置策略信息对应的资源。
即计算任务的状态为路由策略决策器预留网络资源(如带宽等)或者释放网络资源的参照信息。当计算任务的状态为运行、或者暂停时,路由策略决策器保留并存储为该计算任务分配的网络资源;当计算任务的状态为结束或者出错时,路由策略决策器释放为该计算任务分配的网络资源。从而提高了网络资源的利用率。
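上述按任务状态预留或释放网络资源的判断可以表示为如下示意性的Python片段(仅为帮助理解的示例;此处将第二预定策略简化为立即释放,状态取值为本文假设,实际释放时机由路由策略决策器决定):

```python
def maybe_release_resources(prev_state: str, new_state: str, release) -> bool:
    """计算任务状态由运行更改为结束或出错时,释放其路由配置策略信息对应的资源;
    状态为运行或暂停时保留资源。release 为执行实际释放动作的回调(示意)。"""
    if prev_state == "running" and new_state in ("finished", "error"):
        release()
        return True        # 已释放为该计算任务分配的网络资源
    return False           # 运行或暂停:保留并存储已分配的网络资源
```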
图9是本发明一个实施例的网络资源的处理方法的流程图。图9的方法可以由图4和图5描述的装置40、装置50(即Scheduler平台)来执行。
901,Scheduler平台接收UE提交的计算任务描述信息,计算任务描述信息包括用户标识ID和所需的计算节点信息。
902,Scheduler平台根据计算任务描述信息获取用户ID对应的计算任务。
903,Scheduler平台根据所需的计算节点信息将计算任务分解成至少一个子计算任务,为各个子计算任务申请计算节点,并获取处理各个子计算任务对应的各个计算节点的网络信息。
904,Scheduler平台根据计算任务描述信息生成计算任务的计算环境信息。
905,Scheduler平台向路由策略决策器发送计算环境信息和各个计算节点的网络信息。
本方案中,Scheduler平台将获取的计算环境信息和各个计算节点的网络信息发送给路由策略决策器,路由策略决策器根据Scheduler平台提供的计算环境信息和各个计算节点的网络信息生成路由配置策略信息,然后路由策略决策器将该路由配置策略信息下发给路由配置控制器,使得交换机(网络设备)最终根据路由配置控制器转换的OpenFlow配置策略信息进行路由控制和/或资源预留,从而能够防止处理网络资源时导致网络设备间的通信堵塞。
进一步可选的,在步骤902中,Scheduler平台获取用户ID对应的计算任务的方式有两种,具体包括:
第一种方式,Scheduler平台接收UE发送的计算任务数据包,该计算任务数据包中包含计算任务描述信息,根据计算任务描述信息解析计算任务数据包以获取计算任务。
第二种方式,Scheduler平台根据计算任务描述信息中的计算任务获取地址或获取方式获取计算任务。
进一步可选的,在步骤903中,Scheduler平台根据所需的计算节点信息将计算任务分解成至少一个子计算任务,具体包括:
其中,所需的计算节点信息中包括计算节点的配置信息、计算节点的数量,其中计算节点的配置信息可以包括硬件配置(内存、CPU、网络等)、软件配置(操作系统类型、应用程序库)等。可以理解的是,Scheduler平台根据计算节点的数量将计算任务分解成至少一个子计算任务。
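按计算节点数量分解计算任务的过程可以用如下示意性的Python片段表示(仅为帮助理解的示例;这里采用轮流分配的简化拆分方式,实际拆分粒度与方式由Scheduler平台自身策略决定):

```python
def decompose_task(task_items, node_count):
    """将计算任务按计算节点的数量分解成等量的子计算任务(示意实现)。
    将任务单元轮流分配到各子计算任务中,子计算任务的数量等于计算节点的数量。"""
    subtasks = [[] for _ in range(node_count)]
    for index, item in enumerate(task_items):
        subtasks[index % node_count].append(item)
    return subtasks
```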
进一步可选的,在Scheduler平台根据所需的计算节点信息将计算任务分解成至少一个子计算任务之后,还包括:
Scheduler平台向计算资源管理器发送携带计算节点的数量和计算节点的配置信息的计算节点分配请求。然后计算资源管理器根据该计算节点分配请求中的内容为计算任务配置计算节点信息。然后将配置的计算节点信息发送给 Scheduler平台。
对应的,Scheduler平台接收计算资源管理器发送的计算节点信息,计算节点信息中包括各个计算节点的网络信息。即Scheduler平台获取各个计算节点的网络信息。其中,各个计算节点的网络信息中包括如下信息中的一个或多个:各个计算节点对应的远程访问地址、互联网IP地址、端口号和MAC地址。
当网络信息包括各个计算节点对应的IP地址、端口号和MAC地址时,路由策略决策器生成的路由配置策略信息中包括各个计算节点对应的节点带宽信息;
当网络信息包括各个计算节点对应的IP地址、端口号、MAC地址和各个计算节点之间的通信信息时,路由策略决策器生成的路由配置策略信息中包括各个计算节点之间的节点间带宽信息。
需要说明的是,路由配置策略信息中可以包括各个计算节点对应的路由信息。进一步可选的,路由配置策略信息中还可以包括各个计算节点对应的路由信息相关的节点带宽信息。
进一步可选的,在Scheduler平台获取各个计算节点的网络信息之后,还包括:
Scheduler平台向计算节点发送各自对应的子计算任务,并指定计算任务的状态。
值得说明的是,在Scheduler平台向计算节点发送各自对应的子计算任务之后,当Scheduler平台指定计算任务的状态为暂停时,则各个计算节点暂停对各自对应的子计算任务的处理;当Scheduler平台指定计算任务的状态为运行时,则各个计算节点处理各自对应的子计算任务。
当然,计算任务的状态还包括出错或者结束,这两种状态均为计算节点向Scheduler平台的反馈,当出现这两种状态时,各个计算节点停止对各自对应的子计算任务的处理。
进一步可选的,在步骤901之后,Scheduler平台根据用户ID获取用户级别;然后Scheduler平台根据用户级别和计算任务级别生成计算任务的优先级。
进一步可选的,在步骤904中,Scheduler平台根据计算任务描述信息和计算任务的状态生成计算任务的计算环境信息。
进一步可选的,在步骤904中,Scheduler平台根据计算任务描述信息和计算任务的优先级生成计算环境信息。
可以理解的是,计算任务描述信息可以包括用户ID、所需的计算节点信息、带宽需求信息(可选的)、计算任务的获取信息(可选的)、计算任务级别(可选的)。计算环境信息包括以下至少一种信息:用户ID、所需的计算节点信息、带宽需求信息、计算任务的状态和计算任务的优先级。
进一步可选的,在步骤905之后,Scheduler平台更改计算任务的状态,得到计算任务更改后的状态,其中,计算任务更改后的状态为以下任意一种:运行、暂停、结束或出错。然后Scheduler平台向路由策略决策器发送计算任务更改后的状态,计算任务更改后的状态用于路由策略决策器判断是否释放计算任务对应的路由配置策略信息对应的资源。
图10是本发明一个实施例中先配置路由配置策略信息再处理计算任务的网络资源的处理方法的流程图。
1001,Scheduler平台接收UE发送的计算任务描述信息。
计算任务描述信息可以包括用户ID、所需的计算节点信息、带宽需求信息(可选的)、计算任务的获取信息(可选的)、计算任务级别(可选的)。
1002,Scheduler平台根据计算任务描述信息将计算任务进行分解,得到至少一个子计算任务。
Scheduler平台根据计算任务描述信息确定计算任务所需的计算节点信息。其中所需的计算节点信息包括:计算节点的配置信息、计算节点的数量,计算节点的配置信息可以包括硬件配置(内存、CPU、网络等)、软件配置(操作系统类型、应用程序库)等。然后Scheduler平台根据计算节点的数量,将该计算任务分解成计算节点数量对应的子计算任务。可以理解的是,计算节点的数量与子计算任务的数量相等。
1003,Scheduler平台根据所需的计算节点信息向计算资源管理器申请计算任务所需的计算节点。
具体的,Scheduler平台向计算资源管理器发送携带计算节点的数量和计算节点的配置信息的计算节点分配请求;然后接收计算资源管理器发送的计算节点信息,计算节点信息中包括各个计算节点的网络信息。
1004,Scheduler平台将子计算任务发送到对应的计算节点上,并指示各个子计算任务的状态(计算任务的状态)为暂停。
在现有技术中,一般在Scheduler平台向计算节点下发各自对应的子计算任务之后,各个子计算任务的状态为运行,此时,各个计算节点就开始对各自对应的子计算任务的处理。
1005,Scheduler平台生成计算环境信息。
计算环境信息包括以下至少一种信息:用户ID、所需的计算节点信息、带宽需求信息、计算任务的状态和计算任务的优先级。
1006,Scheduler平台将计算环境信息和各个计算节点的网络信息发送给路由策略决策器。
1007,路由策略决策器根据接收的计算环境信息和各个计算节点的网络信息生成路由配置策略信息。
1008,路由策略决策器将路由配置策略信息发送给路由配置控制器。
可以理解的是,本实施例中的路由配置控制器可以为OFC。
1009,路由配置控制器将接收的路由配置策略信息转换成OpenFlow配置策略信息,然后发送给管理的交换机执行配置。
本方案中不限制交换机的类型,优选的,在本实施例中交换机可以为OFS。
1010,交换机在接收到OpenFlow配置策略信息之后,向路由策略决策器发送成功配置响应。
该成功配置响应用于告知路由策略决策器交换机已接收到OpenFlow配置策略信息。
1011,路由策略决策器向Scheduler平台发送路由配置结果。
该路由配置结果用于告知Scheduler平台已完成该计算任务对应的路由配置策略信息。可选的,路由配置结果中可以包括该路由配置策略信息。
1012,Scheduler平台在收到路由配置结果之后,将计算任务的状态改为运行,并向各个计算节点的代理模块发送计算任务的状态。
各个计算节点在获知计算任务的状态之后,开始处理各自接收的子计算任务。
交换机在接收到OpenFlow配置策略信息之后,可以按照该OpenFlow配置策略信息执行资源预留和数据路由。可以理解的是,该OpenFlow配置策略信息为动态的网络策略信息,可以避免交换机在传输数据时,出现通信堵塞情况。
该先配置路由配置策略信息再处理计算任务的方式,可以在清楚此时网络拓扑的情况下,动态的为该计算任务生成路由配置策略信息。该路由配置策略信息可以避免出现通信堵塞。
图11是本发明一个实施例中计算任务与配置路由配置策略信息并行处理的网络资源的处理方法的流程图。
1101,Scheduler平台接收UE发送的计算任务描述信息。
1102,Scheduler平台根据计算任务描述信息将计算任务进行分解,得到至少一个子计算任务。
1103,Scheduler平台根据计算节点信息向计算资源管理器申请计算任务所需的计算节点。
1104,Scheduler平台将子计算任务发送到对应的计算节点上,并指示各个子计算任务的状态(计算任务的状态)为运行。
1105,各个计算节点开始处理各自对应的子计算任务。
各个计算节点在接收到各自对应的子计算任务之后,直接处理各自对应的子计算任务。
1106,Scheduler平台生成计算环境信息。
1107,Scheduler平台将计算环境信息和各个计算节点的网络信息发送给路由策略决策器。
1108,路由策略决策器根据接收的计算环境信息和各个计算节点的网络信息生成路由配置策略信息。
1109,路由策略决策器将路由配置策略信息发送给路由配置控制器。
可以理解的是,本实施例中的路由配置控制器可以为OFC。
1110,路由配置控制器将接收的路由配置策略信息转换成OpenFlow配置策略信息,然后发送给管理的交换机执行配置。
本方案中不限制交换机的类型,优选的,在本实施例中交换机可以为OFS。
1111,交换机在接收到OpenFlow配置策略信息之后,向路由策略决策器发送成功配置响应。
1112,路由策略决策器向Scheduler平台发送路由配置结果。
可以理解的是,在本步骤中,路由策略决策器发送的路由配置结果对Scheduler平台的作用仅仅是获知路由策略决策器是否生成路由配置策略信息,对计算任务的状态不做影响。
该计算任务与配置路由配置策略信息并行处理的方式,可以在Scheduler平台接收到计算任务之后、路由配置策略信息尚未生成时,直接让各个计算节点处理各自对应的子计算任务,各个计算节点和各个交换机无需等待路由配置策略信息的生成,使得路由策略决策器可以根据自身策略调整网络,而无需立即调整网络,减少了频繁调整网络造成的网络不稳定。
值得说明的是,先配置路由配置策略信息再处理计算任务的方式,适用于计算任务的优先级较高时采用,该方式可以保证计算任务处理的可靠性。计算任务与配置路由配置策略信息并行处理的方式,适用于计算任务的优先级较低时采用,该方式可以提高网络资源的利用率。
另外,本发明也可以将先配置路由配置策略信息再处理计算任务的方式和计算任务与配置路由配置策略信息并行处理的方式结合使用,即在Scheduler平台有多个计算任务时,可以根据用户优先级或者计算任务优先级或其他本地策略,将一部分的计算任务采用先配置路由配置策略信息再处理计算任务的方式进行处理,另一部分的计算任务采用计算任务与配置路由配置策略信息并行处理的方式进行处理。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。上述描述的系统,装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统,装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本发明各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或处理器(processor)执行本发明各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本发明的具体实施方式,但本发明的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本发明揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本发明的保护范围之内。因此,本发明的保护范围应以所述权利要求的保护范围为准。

Claims (29)

  1. 一种路由策略决策器,其特征在于,包括:
    接收模块,用于接收调度Scheduler平台传递的与计算任务相对应的计算环境信息和各个计算节点的网络信息,并将所述计算环境信息提供给决策带宽模块和生成模块,将所述各个计算节点的网络信息提供给所述生成模块;
    所述决策带宽模块,用于根据所述计算环境信息为所述计算任务决策分配的带宽;
    所述生成模块,用于根据所述各个计算节点的网络信息、决策分配的所述带宽和所述计算环境信息为所述计算任务生成路由配置策略信息,并将所述路由配置策略信息提供给发送模块;
    所述发送模块,用于将所述路由配置策略信息发送给路由配置控制器。
  2. 根据权利要求1所述的路由策略决策器,其特征在于,所述计算环境信息中还包括带宽需求信息;
    所述决策带宽模块,还用于根据所述带宽需求信息为所述计算任务决策所述带宽。
  3. 根据权利要求1或2所述的路由策略决策器,其特征在于,所述各个计算节点的网络信息包括各个计算节点对应的互联网协议IP地址和端口号;所述计算环境信息包括所述计算任务的状态;
    所述生成模块,具体用于当所述计算任务的状态为暂停时,根据所述各个计算节点的网络信息和决策分配的所述带宽为所述计算任务生成所述路由配置策略信息;
    当所述计算任务的状态为运行时,根据所述各个计算节点的网络信息和决策分配的所述带宽并按照第一预定策略为所述计算任务生成所述路由配置策略信息。
  4. 根据权利要求3所述的路由策略决策器,其特征在于,所述计算环境信息还包括计算任务的优先级;所述第一预定策略具体包括:
    所述生成模块,还用于当所述计算任务的状态为运行,并且所述计算任务的优先级高于或等于预定阈值时,根据所述各个计算节点的网络信息和决策分配的所述带宽在本次进行路由配置时为所述计算任务生成所述路由配置策略信息,其中所述预定阈值用于衡量所述计算任务的优先级的高低;
    当所述计算任务的状态为运行,并且所述计算任务的优先级低于预定阈值时,根据所述各个计算节点的网络信息和决策分配的所述带宽并在下一次需要进行路由配置时为所述计算任务生成所述路由配置策略信息。
  5. 根据权利要求1或2所述的路由策略决策器,其特征在于,所述各个计算节点的网络信息包括各个计算节点对应的互联网协议IP地址和端口号;所述计算环境信息还包括计算任务的优先级;
    所述生成模块,具体用于当所述计算任务的优先级高于或等于预定阈值时,根据所述各个计算节点的网络信息和决策的所述带宽为所述计算任务生成所述路由配置策略信息,其中所述预定阈值用于衡量所述计算任务的优先级的高低;
    当所述计算任务的优先级低于预定阈值时,根据所述各个计算节点的网络信息和决策分配的所述带宽并按照第一预定策略为所述计算任务生成所述路由配置策略信息。
  6. 根据权利要求5所述的路由策略决策器,其特征在于,所述计算环境信息还包括计算任务的状态;所述第一预定策略具体包括:
    所述生成模块,还用于当所述计算任务的优先级低于预定阈值,并且所述计算任务的状态为暂停时,根据所述各个计算节点的网络信息和决策分配的所述带宽在本次或者在下一次需要进行路由配置时,按照预定时间为所述计算任务生成所述路由配置策略信息;
    当所述计算任务的优先级低于预定阈值,并且所述计算任务的状态为运行时,根据所述各个计算节点的网络信息和决策分配的所述带宽,在下一次需要进行路由配置时为所述计算任务生成所述路由配置策略信息。
  7. 根据权利要求1、2、4或6中任一项所述的路由策略决策器,其特征在于,所述装置还包括释放资源模块;
    所述接收模块,还用于接收所述Scheduler平台发送的计算任务更改后的状态,并将所述计算任务更改后的状态提供给所述释放资源模块,其中,所述计算任务更改后的状态为以下任意一种:运行、暂停、结束或出错;
    所述释放资源模块,用于当所述计算任务更改后的状态由运行更改为结束或者由运行更改为出错时,按照第二预定策略释放所述计算任务对应的所述路由配置策略信息对应的资源。
  8. 一种调度Scheduler平台,其特征在于,包括:
    接收模块,用于接收用户设备UE提交的计算任务描述信息,并将所述计算任务描述信息提供给获取模块、分解模块和生成模块,其中所述计算任务描述信息包括用户标识ID和所需的计算节点信息;
    所述获取模块,用于根据所述计算任务描述信息获取所述用户ID对应的计算任务;
    所述分解模块,用于根据所述所需的计算节点信息将所述计算任务分解成至少一个子计算任务;
    所述获取模块,还用于获取处理所述各个子计算任务对应的各个计算节点的网络信息,并将所述各个计算节点的网络信息提供给第一发送模块;
    所述生成模块,用于根据计算任务描述信息生成所述计算任务的计算环境信息,并将所述计算环境信息提供给所述第一发送模块;
    所述第一发送模块,用于向路由策略决策器发送所述计算环境信息和所述各个计算节点的网络信息。
  9. 根据权利要求8所述的Scheduler平台,其特征在于,所述各个计算节点的网络信息包括各个计算节点对应的互联网协议IP地址和端口号;所述装置还包括:指定模块和第二发送模块;
    所述第二发送模块,用于根据所述各个计算节点的网络信息向所述计算节点发送各自对应的子计算任务,并向所述指定模块提供已发送信息,所述已发送信息用于告知所述指定模块已发送所述至少一个子计算任务;
    所述指定模块,用于指定所述计算任务的状态,并将所述计算任务的状态提供给所述生成模块;
    所述生成模块,还用于根据计算任务描述信息和所述计算任务的状态生成所述计算任务的计算环境信息。
  10. 根据权利要求8或9所述的Scheduler平台,其特征在于,所述计算任务描述信息还包括计算任务级别;
    所述获取模块,还用于根据所述用户ID获取用户级别,并将所述用户级别提供给所述生成模块;
    所述生成模块,还用于根据所述用户级别和所述计算任务级别生成所述计算任务的优先级。
  11. 根据权利要求10所述的Scheduler平台,其特征在于,
    所述生成模块,还用于根据所述计算任务描述信息和所述计算任务的优先级生成所述计算环境信息;或者,
    所述生成模块,还用于根据所述计算任务描述信息、所述计算任务的状态和所述计算任务的优先级生成所述计算环境信息。
  12. 根据权利要求8、9或11中任一项所述的Scheduler平台,其特征在于,所述计算任务描述信息还包括带宽需求信息,所述计算环境信息包括以下至少一种信息:所述用户ID、所述所需的计算节点信息、所述带宽需求信息、所述计算任务的状态和所述计算任务的优先级。
  13. 根据权利要求12所述的Scheduler平台,其特征在于,所述装置还包括:更改模块;
    所述更改模块,用于更改所述计算任务的状态,得到计算任务更改后的状态,并将所述计算任务更改后的状态提供给所述发送模块,其中,所述计算任务更改后的状态为以下任意一种:运行、暂停、结束或出错;
    所述发送模块,还用于向所述路由策略决策器发送计算任务更改后的状态,所述计算任务更改后的状态用于所述路由策略决策器判断是否释放所述计算任务对应的所述路由配置策略信息对应的资源。
  14. 一种网络资源的处理方法,其特征在于,包括:
    路由策略决策器接收调度Scheduler平台传递的与计算任务相对应的计算环境信息和各个计算节点的网络信息;
    所述路由策略决策器根据所述计算环境信息为所述计算任务决策分配的带宽;
    所述路由策略决策器根据所述各个计算节点的网络信息、决策分配的所述带宽和所述计算环境信息为所述计算任务生成路由配置策略信息;
    所述路由策略决策器将所述路由配置策略信息发送给路由配置控制器。
  15. 根据权利要求14所述的网络资源的处理方法,其特征在于,所述计算环境信息中还包括带宽需求信息;所述路由策略决策器根据所述计算环境信息为所述计算任务决策分配的带宽,包括:
    所述路由策略决策器根据所述带宽需求信息为所述计算任务决策分配的所述带宽。
  16. 根据权利要求14或15所述的网络资源的处理方法,其特征在于,所述各个计算节点的网络信息包括各个计算节点对应的互联网协议IP地址和端口号;所述计算环境信息包括所述计算任务的状态;则所述路由策略决策器根据所述各个计算节点的网络信息、决策分配的所述带宽和所述计算环境信息为所述计算任务生成路由配置策略信息,包括:
    当所述计算任务的状态为暂停时,所述路由策略决策器根据所述各个计算节点的网络信息和决策分配的所述带宽为所述计算任务生成所述路由配置策略信息;
    当所述计算任务的状态为运行时,所述路由策略决策器根据所述各个计算节点的网络信息和决策分配的所述带宽并按照第一预定策略为所述计算任务 生成所述路由配置策略信息。
  17. 根据权利要求16所述的网络资源的处理方法,其特征在于,所述计算环境信息还包括计算任务的优先级,则所述第一预定策略具体包括:
    当所述计算任务的状态为运行,并且所述计算任务的优先级高于或等于预定阈值时,所述路由策略决策器根据所述各个计算节点的网络信息和决策分配的所述带宽在本次进行路由配置时为所述计算任务生成所述路由配置策略信息,其中所述预定阈值用于衡量所述计算任务的优先级的高低;
    当所述计算任务的状态为运行,并且所述计算任务的优先级低于预定阈值时,所述路由策略决策器根据所述各个计算节点的网络信息和决策分配的所述带宽并在下一次需要进行路由配置时为所述计算任务生成所述路由配置策略信息。
  18. 根据权利要求14或15所述的网络资源的处理方法,其特征在于,所述各个计算节点的网络信息包括各个计算节点对应的互联网协议IP地址和端口号;所述计算环境信息包括计算任务的优先级,所述路由策略决策器根据所述各个计算节点的网络信息、决策分配的所述带宽和所述计算环境信息为所述计算任务生成路由配置策略信息,包括:
    当所述计算任务的优先级高于或等于预定阈值时,所述路由策略决策器根据所述各个计算节点的网络信息和决策分配的所述带宽为所述计算任务生成所述路由配置策略信息,其中所述预定阈值用于衡量所述计算任务的优先级的高低;
    当所述计算任务的优先级低于预定阈值时,所述路由策略决策器根据所述各个计算节点的网络信息和决策分配的所述带宽并按照第一预定策略为所述计算任务生成所述路由配置策略信息。
  19. 根据权利要求18所述的网络资源的处理方法,其特征在于,所述计算环境信息还包括计算任务的状态;所述第一预定策略具体包括:
    当所述计算任务的优先级低于预定阈值,并且所述计算任务的状态为暂停时,所述路由策略决策器根据所述各个计算节点的网络信息和决策分配的所述带宽在本次或者在下一次需要进行路由配置时为所述计算任务生成所述路由配置策略信息;
    当所述计算任务的优先级低于预定阈值,并且所述计算任务的状态为运行时,所述路由策略决策器根据所述各个计算节点的网络信息和决策分配的所述带宽并在下一次需要进行路由配置时为所述计算任务生成所述路由配置策略信息。
  20. 根据权利要求14、15、17或19中任一项所述的网络资源的处理方法,其特征在于,所述方法还包括:
    所述路由策略决策器接收所述Scheduler平台发送的计算任务更改后的状态,其中,所述计算任务更改后的状态为以下任意一种:运行、暂停、结束或出错;
    当所述计算任务更改后的状态由运行更改为结束或者由运行更改为出错时,所述路由策略决策器按照第二预定策略释放所述计算任务对应的所述路由配置策略信息对应的资源。
  21. 一种网络资源的处理方法,其特征在于,包括:
    调度Scheduler平台接收用户设备UE提交的计算任务描述信息,所述计算任务描述信息包括用户标识ID和所需的计算节点信息;
    所述Scheduler平台根据所述计算任务描述信息获取所述用户ID对应的计算任务;
    所述Scheduler平台根据所述所需的计算节点信息将所述计算任务分解成至少一个子计算任务,并获取处理所述各个子计算任务对应的各个计算节点的网络信息;
    所述Scheduler平台根据计算任务描述信息生成所述计算任务的计算环境信息;
    所述Scheduler平台向所述路由策略决策器发送所述计算环境信息和所述各个计算节点的网络信息。
  22. 根据权利要求21所述的网络资源的处理方法,其特征在于,所述各个计算节点的网络信息包括各个计算节点对应的互联网协议IP地址和端口号;在所述Scheduler平台根据计算任务描述信息生成所述计算任务的计算环境信息之前,所述方法还包括:
    所述Scheduler平台根据所述各个计算节点的网络信息向所述计算节点发送各自对应的子计算任务,并指定所述计算任务的状态;
    所述Scheduler平台根据计算任务描述信息生成所述计算任务的计算环境信息,包括:
    所述Scheduler平台根据计算任务描述信息和所述计算任务的状态生成所述计算任务的计算环境信息。
  23. 根据权利要求21或22所述的网络资源的处理方法,其特征在于,所述计算任务描述信息还包括计算任务级别,在调度Scheduler平台接收用户设备UE提交的计算任务描述信息之后,所述方法还包括:
    所述Scheduler平台根据所述用户ID获取用户级别;
    所述Scheduler平台根据所述用户级别和所述计算任务级别生成所述计算任务的优先级。
  24. 根据权利要求23所述的网络资源的处理方法,其特征在于,所述Scheduler平台根据计算任务描述信息生成所述计算任务的计算环境信息,包括:
    所述Scheduler平台根据所述计算任务描述信息和所述计算任务的优先级生成所述计算环境信息;或者,
    所述Scheduler平台根据所述计算任务描述信息、所述计算任务的状态和所述计算任务的优先级生成所述计算环境信息。
  25. 根据权利要求21、22或24中任一项所述的网络资源的处理方法,其特征在于,所述计算任务描述信息还包括带宽需求信息,所述计算环境信息包括以下至少一种信息:所述用户ID、所述所需的计算节点信息、所述带宽需求信息、所述计算任务的状态和所述计算任务的优先级。
  26. 根据权利要求22或25所述的网络资源的处理方法,其特征在于,在所述Scheduler平台向所述路由策略决策器发送所述计算环境信息和所述各个计算节点的网络信息之后,所述方法还包括:
    所述Scheduler平台更改所述计算任务的状态,得到计算任务更改后的状态,其中,所述计算任务更改后的状态为以下任意一种:运行、暂停、结束或出错;
    所述Scheduler平台向所述路由策略决策器发送计算任务更改后的状态,所述计算任务更改后的状态用于所述路由策略决策器判断是否释放所述计算任务对应的所述路由配置策略信息对应的资源。
  27. 一种网络资源的处理系统,其特征在于,包括:
    路由策略决策器,用于接收调度Scheduler平台传递的与计算任务相对应的计算环境信息和各个计算节点的网络信息;根据所述计算环境信息为所述计算任务决策分配的带宽;根据所述各个计算节点的网络信息、决策分配的所述带宽和所述计算环境信息为所述计算任务生成路由配置策略信息;将所述路由配置策略信息发送给路由配置控制器;
    所述Scheduler平台,用于接收用户设备UE提交的计算任务描述信息,所述计算任务描述信息包括用户标识ID和所需的计算节点信息;根据所述计算任务描述信息获取所述用户ID对应的计算任务;根据所述所需的计算节点信息将所述计算任务分解成至少一个子计算任务,并为各个子计算任务申请计算节点,和获取处理所述各个子计算任务对应的各个计算节点的网络信息;根据计算任务描述信息生成所述计算任务的计算环境信息;向所述路由策略决策器发送所述计算环境信息和所述各个计算节点的网络信息;
    所述路由配置控制器,用于接收所述路由策略决策器发送的所述路由配置策略信息。
  28. 根据权利要求27所述的网络资源的处理系统,其特征在于,所述所需的计算节点信息包括计算节点的数量和计算节点的配置信息;所述系统还包括计算资源管理器;
    所述计算资源管理器,用于接收所述Scheduler平台发送的携带所述所需的计算节点信息的计算节点分配请求;根据所述计算节点分配请求中的所述计算节点的数量和所述计算节点的配置信息为所述计算任务分配计算节点,并向所述Scheduler平台返回计算节点信息,所述计算节点信息中包括各个计算节点的网络信息;
    所述计算节点,用于接收所述Scheduler平台发送的子计算任务;
    所述Scheduler平台,还用于将携带所述计算节点的数量和所述计算节点的配置信息的计算节点分配请求发送给所述计算资源管理器;接收所述计算资源管理器发送的所述计算节点信息。
  29. 根据权利要求27或28所述的网络资源的处理系统,其特征在于,所述系统还包括:交换机;
    所述路由配置控制器,还用于将接收的所述路由配置策略信息转换成OpenFlow配置策略信息;将所述OpenFlow配置策略信息发送给所述交换机;
    所述交换机,用于接收所述路由配置控制器发送的所述OpenFlow配置策略信息;根据所述OpenFlow配置策略信息为所述计算任务进行路由控制。
PCT/CN2014/087637 2014-03-31 2014-09-28 一种网络资源的处理装置、方法和系统 WO2015149491A1 (zh)
