CN113608869A - Task scheduling method and device, electronic equipment and computer storage medium

Task scheduling method and device, electronic equipment and computer storage medium

Info

Publication number
CN113608869A
CN113608869A (application CN202110825105.0A)
Authority
CN
China
Prior art keywords: task, node device, information, representing, node
Prior art date
Legal status: Pending
Application number
CN202110825105.0A
Other languages
Chinese (zh)
Inventor
杨健
聂自非
李英斌
崔文聪
王玉全
杨娜
李烨
刘巍
Current Assignee: China Media Group
Original Assignee: China Media Group
Priority date
Filing date
Publication date
Application filed by China Media Group
Priority to CN202110825105.0A
Publication of CN113608869A
Status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5083 Techniques for rebalancing the load in a distributed system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The embodiment of the application provides a task scheduling method and device, electronic equipment and a computer storage medium. The method comprises the following steps: acquiring a task; determining a target node device from a plurality of node devices according to the current load information of the plurality of node devices and the expected execution information of the task on the plurality of node devices; and scheduling the task to the target node device. When task scheduling is carried out, not only the current load information of the node devices is considered, but also the expected execution information of the task on the node devices is considered, so that the resource condition of the node devices can be measured in multiple dimensions, the maximum utilization of resources can be realized, and load balancing is achieved.

Description

Task scheduling method and device, electronic equipment and computer storage medium
Technical Field
The present application relates to the field of computer cluster technologies, and in particular, to a task scheduling method and apparatus, an electronic device, and a computer storage medium.
Background
In a production environment, in order to meet the processing capability and stability requirements of a service system, the system is usually built in a cluster mode. In the cluster mode, a plurality of servers jointly execute a certain type of service, and a scheduling service dispatches tasks among them.
Problems in the prior art:
the scheduling algorithms used by current systems cannot fully utilize system resources to realize load balancing, and the load balancing effect is poor.
Disclosure of Invention
The embodiment of the application provides a task scheduling method, a task scheduling device, electronic equipment and a computer storage medium, so as to solve the problems in the prior art.
According to a first aspect of embodiments of the present application, there is provided a task scheduling method, including:
acquiring a task;
determining target node equipment from the plurality of node equipment according to current load information of the plurality of node equipment and expected execution information of the task on the plurality of node equipment;
and scheduling the task to the target node equipment.
According to a second aspect of the embodiments of the present application, there is provided a task scheduling method, including:
acquiring a task fragment, wherein the task fragment is obtained by segmenting a multimedia information coding task according to a key frame interval GOP;
determining target node equipment from the plurality of node equipment according to current load information of the plurality of node equipment and expected execution information of the task fragments on the plurality of node equipment;
and scheduling the task fragment to the target node equipment.
According to a third aspect of embodiments of the present application, there is provided a task scheduling apparatus, including:
the acquisition module is used for acquiring tasks;
a determining module, configured to determine a target node device from the plurality of node devices according to current load information of the plurality of node devices and expected execution information of the task on the plurality of node devices;
and the scheduling module is used for scheduling the task to the target node equipment.
According to a fourth aspect of embodiments herein, there is provided an electronic device comprising one or more processors, and memory for storing one or more programs; the one or more programs, when executed by the one or more processors, implement the steps of the task scheduling method as described above.
According to a fifth aspect of embodiments of the present application, there is provided a computer storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the task scheduling method as described above.
By adopting the task scheduling method, the task scheduling device, the electronic equipment and the computer storage medium provided by the embodiment of the application, the tasks are scheduled and distributed according to the current load information of the plurality of node equipment and the expected execution information of the tasks on the plurality of node equipment. When task scheduling is carried out, not only the current load information of the node equipment but also the expected execution information of the task on the node equipment are considered, the resource condition of the node equipment can be measured in multiple dimensions, and compared with a mode of carrying out load balancing only according to the resource use condition of a single dimension in the prior art, the scheme provided by the embodiment of the application can realize the maximum utilization of resources and realize load balancing.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic diagram of an application scenario provided in an embodiment of the present application;
fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of a task scheduling method according to an embodiment of the present application;
fig. 4 is a schematic flowchart of another task scheduling method according to an embodiment of the present application;
fig. 5 is a schematic flowchart of another task scheduling method according to an embodiment of the present application;
fig. 6 is a schematic flowchart of another task scheduling method according to an embodiment of the present application;
fig. 7 is a schematic flowchart of another task scheduling method according to an embodiment of the present application;
fig. 8 is a schematic flowchart of another task scheduling method according to an embodiment of the present application;
fig. 9 is a schematic view of task analysis processing of an electronic device according to an embodiment of the present application;
fig. 10 is a block diagram of a task scheduling device according to an embodiment of the present application.
Detailed Description
In the process of implementing the application, the inventor finds that, among existing scheduling algorithms, the weighted round-robin and fair strategies consider only the load factor of the server when scheduling tasks, and load balancing cannot be achieved; in the IP Hash algorithm and the URL Hash algorithm, a client is bound to a server and all tasks of the client are dispatched to the server bound with it, so once the client bound to a server issues many tasks, the load of that server becomes high, and load balancing cannot be realized either.
In view of the foregoing problems, embodiments of the present application provide a task scheduling method, a task scheduling apparatus, an electronic device, and a computer storage medium, where a task is scheduled and allocated according to current load information of a plurality of node devices and expected execution information of the task on the plurality of node devices. When task scheduling is carried out, not only the current load information of the node equipment but also the expected execution information of the task on the node equipment are considered, the resource condition of the node equipment can be measured in multiple dimensions, and compared with a mode of carrying out load balancing only according to the resource use condition of a single dimension in the prior art, the scheme provided by the embodiment of the application can realize the maximum utilization of resources and realize load balancing.
The scheme in the embodiment of the application can be implemented in various computer languages, such as the object-oriented programming language Java and the interpreted scripting language JavaScript.
In order to make the technical solutions and advantages of the embodiments of the present application more apparent, the following further detailed description of the exemplary embodiments of the present application with reference to the accompanying drawings makes it clear that the described embodiments are only a part of the embodiments of the present application, and are not exhaustive of all embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
Referring to fig. 1, which is a schematic view of an application scenario provided in the embodiment of the present application, an electronic device 100 performs data communication with a client 200 and a node device 300 through multiple communication modes. The electronic device 100 may be enabled to communicate over a Local Area Network (LAN), a Wireless Local Area Network (WLAN), and other networks.
The client 200 is used to send tasks to the electronic device 100. The electronic device 100 is configured to allocate the task scheduling to the corresponding node device 300 by using the task scheduling method provided in the present application.
Among them, the client 200 may be understood as the business layer of a user, the electronic device 100 may be understood as an intermediate layer providing the scheduling service, and the node device 300 may be understood as the execution layer. Both the electronic device 100 and the node device 300 may be servers; the client 200 may be, but is not limited to, a mobile phone, a tablet computer, a wearable device, an in-vehicle device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, an Ultra-Mobile Personal Computer (UMPC), and the like.
The electronic device 100 may use one server to implement the scheduling service, or may use multiple servers to implement the scheduling service jointly.
As shown in fig. 2, a schematic structural diagram of an electronic device 100 provided in an embodiment of the present application is shown, where the electronic device 100 includes a memory 101, a processor 102, and a communication interface 103. The memory 101, processor 102 and communication interface 103 are electrically connected to each other, directly or indirectly, to enable the transfer or interaction of data. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The memory 101 may be used to store software programs and modules, such as program instructions/modules corresponding to the task scheduling method provided in the embodiments of the present application, and the processor 102 executes the software programs and modules stored in the memory 101, so as to execute various functional applications and data processing. The communication interface 103 may be used for communication of signaling or data with the node apparatus 300 and the client 200. The electronic device 100 may have a plurality of communication interfaces 103 in this application.
The Memory 101 may be, but is not limited to, a Random Access Memory (RAM), a Read Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
The processor 102 may be an integrated circuit chip having signal processing capabilities. The processor may be a general-purpose processor including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc.
Referring to fig. 3, fig. 3 is a schematic flowchart of a task scheduling method provided in the embodiment of the present application on the basis of the electronic device 100 shown in fig. 2, where the task scheduling method includes the following steps:
s401, acquiring a task.
The task may be understood as a set of one or more operations for achieving a specific purpose or function, for example, a Web request task, a distributed computing task, a multimedia information transcoding task, a multimedia information recording and playing task, a multimedia information on-demand task, and the like, and may also be understood as a task slice of the multimedia information transcoding task, a task slice of the multimedia information recording and playing task, or a task slice of the multimedia information on-demand task. Taking the transcoding task as an example, a task slice can be understood as being obtained by segmenting the multimedia information coding task according to the key frame interval, i.e. the group of pictures (GOP).
And S403, determining a target node device from the plurality of node devices according to the current load information of the plurality of node devices and the expected execution information of the task on the plurality of node devices.
It should be understood that the expected execution information of the task on any node device 300 may be understood as the expected information required by the node device 300 to execute the task. For example, the expected execution information may include at least one of correlation information and waiting duration; the correlation information is used to characterize the communication resources of the task on any node device 300, the waiting duration is used to characterize the waiting time of the task on any node device 300, and the current load information is used to characterize the current workload of the node device 300.
And S405, scheduling the task to the target node equipment.
It should be understood that different tasks may be assigned to the same node apparatus 300, and may also be assigned to different node apparatuses 300.
In this embodiment, there may be three embodiments for determining the target node device according to the current load information of the plurality of node devices and the expected execution information of the task on the plurality of node devices 300. The current load information is correspondingly provided with a first weight parameter, the correlation information is correspondingly provided with a second weight parameter, and the waiting duration is correspondingly provided with a third weight parameter.
The first embodiment may be: the expected execution information includes correlation information, and the target node device is determined from the plurality of node devices according to the current load information, the correlation information, the first weight parameter and the second weight parameter. The second embodiment may be: the expected execution information comprises waiting time length, and the target node equipment is determined from the plurality of node equipment according to the current load information, the waiting time length, the first weight parameter and the third weight parameter. The third embodiment may be: the expected execution information comprises correlation information and waiting time, and the target node equipment is determined from the plurality of node equipment according to the current load information, the correlation information, the waiting time, the first weight parameter, the second weight parameter and the third weight parameter.
Referring to fig. 4, a flowchart of another task scheduling method according to an embodiment of the present application is shown, where the first implementation manner may include the following steps:
and S403a, performing weighted summation according to the current load information, the correlation information, the first weight parameter and the second weight parameter to obtain a first distribution value of the task and any node device.
It should be understood that, the current load information of each node device is respectively multiplied by the first weight parameter to obtain the distributed load information of each node device; performing multiplication calculation on the correlation information of the task and each node device and the second weight parameter respectively to obtain distribution correlation information of the task and each node device; and respectively carrying out addition calculation on the distribution load information and the distribution correlation information to obtain a first distribution value of the task and each node device.
The first allocation value of the ith task and the jth node device 300 may be calculated using the following formula:
D_ij1 = C_ij * Q_1 + W_ij * Q_2
where D_ij1 represents the first allocation value of the ith task and the jth node device 300, Q_1 represents the first weight parameter, Q_2 represents the second weight parameter, W_ij represents the current load information of the jth node device 300 when processing the ith task, and C_ij represents the correlation information of the ith task and the jth node device 300.
The sum of the first weight parameter and the second weight parameter is 1, and the first weight parameter and the second weight parameter may be valued according to actual situations, which is not limited herein.
S403b, the node device with the largest first allocation value is determined as the target node device.
The first allocation values of the same task on different node devices 300 are compared, and the node device 300 with the largest first allocation value is determined as the target node device. It is to be understood that the larger the first allocation value, the better the node device 300 is suited to handle the task while maximizing resource utilization.
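As an illustration, a minimal Java sketch of this first embodiment is given below: it computes D_ij1 = C_ij * Q_1 + W_ij * Q_2 for each node and picks the node with the largest value. The array layout, parameter names and example weights are assumptions made for the sketch, not part of the patent.

```java
// Sketch (not the patent's implementation): pick the node with the largest
// first allocation value D_ij1 = C_ij*Q1 + W_ij*Q2 for a single task i.
public class FirstAllocationScheduler {
    /** Returns the index of the node with the largest first allocation value. */
    static int selectTargetNode(double[] currentLoad,   // W_ij for each node j
                                double[] correlation,   // C_ij for each node j
                                double q1, double q2) { // weights, q1 + q2 == 1
        int target = 0;
        double best = Double.NEGATIVE_INFINITY;
        for (int j = 0; j < currentLoad.length; j++) {
            // Weighted sum of the two dimensions for node j, following the formula above
            double dij1 = correlation[j] * q1 + currentLoad[j] * q2;
            if (dij1 > best) {
                best = dij1;
                target = j;
            }
        }
        return target;
    }

    public static void main(String[] args) {
        double[] load = {0.3, 0.6, 0.1};
        double[] corr = {0.2, 0.5, 0.4};
        System.out.println("Target node: " + selectTargetNode(load, corr, 0.6, 0.4));
    }
}
```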
Referring to fig. 5, a flowchart of another task scheduling method according to an embodiment of the present application is shown, where the second implementation manner may include the following steps:
and S403c, performing weighted summation according to the current load information, the waiting time length, the first weight parameter and the third weight parameter to obtain a second distribution value of the task and any node equipment.
It should be understood that, the current load information of each node device is respectively multiplied by the first weight parameter to obtain the distributed load information of each node device; performing multiplication calculation on the waiting time of the task on each node device and the third weight parameter respectively to obtain the distributed waiting time of the task and each node device; and respectively carrying out addition calculation on the distribution load information and the distribution waiting time to obtain a second distribution value of the task and each node device.
The second allocation value of the ith task and the jth node device 300 may be calculated using the following formula:
D_ij2 = W_ij * Q_1 + T_earliest_ij * Q_3
where D_ij2 represents the second allocation value of the ith task and the jth node device 300, Q_1 represents the first weight parameter, Q_3 represents the third weight parameter, W_ij represents the current load information of the jth node device 300 when processing the ith task, and T_earliest_ij represents the waiting time of the ith task on the jth node device 300.
The sum of the first weight parameter and the third weight parameter is 1, and the first weight parameter and the third weight parameter may be valued according to actual situations, which is not limited herein.
S403d, determining the node device with the largest second allocation value as the target node device.
The second allocation values of the same task on different node devices 300 are compared, and the node device 300 with the largest second allocation value is determined as the target node device. It is to be understood that the node device 300 with the larger second allocation value can handle the task while maximizing resource utilization.
Referring to fig. 6, a flowchart of another task scheduling method provided in an embodiment of the present application is shown, where the third implementation manner may include the following steps:
and S403e, performing weighted summation according to the current load information, the correlation information, the waiting time, the first weight parameter, the second weight parameter and the third weight parameter to obtain a third distribution value of the task and any node device.
It should be understood that, the current load information of each node device is respectively multiplied by the first weight parameter to obtain the distributed load information of each node device; performing multiplication calculation on the correlation information of the task and each node device and the second weight parameter respectively to obtain distribution correlation information of the task and each node device; performing multiplication calculation on the waiting time of the task on each node device and the third weight parameter respectively to obtain the distributed waiting time of the task and each node device; and respectively carrying out addition calculation on the distribution load information, the distribution correlation information and the distribution waiting time to obtain a third distribution value of the task and each node device.
The third allocation value of the ith task and the jth node device 300 may be calculated using the following formula:
D_ij3 = C_ij * Q_1 + W_ij * Q_2 + T_earliest_ij * Q_3
where D_ij3 represents the third allocation value of the ith task and the jth node device 300, Q_1 represents the first weight parameter, Q_2 represents the second weight parameter, Q_3 represents the third weight parameter, W_ij represents the current load information of the jth node device 300 when processing the ith task, C_ij represents the correlation information of the ith task and the jth node device 300, and T_earliest_ij represents the waiting time of the ith task on the jth node device 300.
The sum of the first weight parameter, the second weight parameter and the third weight parameter is 1, and the first weight parameter, the second weight parameter and the third weight parameter may be valued according to the actual situation, which is not limited herein. For example, the multimedia information transcoding task has higher requirements on the load of the node device 300, and the first weight parameter may be set higher when the multimedia information transcoding task is performed; the multimedia information recording task may have a higher requirement for the waiting time on the node device 300, and the third weight parameter may be set higher when the multimedia information recording task is performed.
S403f, determining the node device with the largest third allocation value as the target node device.
The third allocation values of the same task on different node devices 300 are compared, and the node device 300 with the largest third allocation value is determined as the target node device.
It is to be understood that the node device 300 with the largest third allocation value can handle the task while maximizing resource utilization. In other words, suppose the plurality of node devices 300 are a first node device, a second node device and a third node device, and the correlation information between the ith task and the second node device has the largest value; this typically means that the tasks exchanging the most data with the ith task already reside on the second node device, so the current load of the second node device is the largest, and the larger the workload of the second node device is, the longer the ith task needs to wait on it. Therefore, the first weight parameter, the second weight parameter and the third weight parameter need to be set so as to comprehensively weigh the three indexes of current load information, correlation information and waiting time, thereby maximizing the resource utilization of the node devices 300 and achieving load balancing. Equivalently, the target node device is determined from the three indexes of current load information, correlation information and waiting time in the manner of the Min-Min algorithm.
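The following hedged Java sketch combines all three indexes as in the third embodiment, D_ij3 = C_ij * Q_1 + W_ij * Q_2 + T_earliest_ij * Q_3, and picks, for each task, the node with the largest value. The per-task loop is only one possible way to organize the Min-Min style selection mentioned above; the matrices, names and weights are illustrative assumptions.

```java
// Sketch of the third embodiment: per-task selection over three weighted indexes.
import java.util.Arrays;

public class ThirdAllocationScheduler {
    /** assignment[i] = index of the target node for task i. */
    static int[] assignTasks(double[][] correlation,  // C[i][j]
                             double[][] load,         // W[i][j]: load of node j when handling task i
                             double[][] waiting,      // T_earliest[i][j]
                             double q1, double q2, double q3) {
        int tasks = correlation.length;
        int[] assignment = new int[tasks];
        for (int i = 0; i < tasks; i++) {
            double best = Double.NEGATIVE_INFINITY;
            for (int j = 0; j < correlation[i].length; j++) {
                // Weighted sum of correlation, load and waiting time for node j
                double dij3 = correlation[i][j] * q1 + load[i][j] * q2 + waiting[i][j] * q3;
                if (dij3 > best) {
                    best = dij3;
                    assignment[i] = j;
                }
            }
        }
        return assignment;
    }

    public static void main(String[] args) {
        double[][] c = {{0.4, 0.1}}, w = {{0.3, 0.5}}, t = {{0.2, 0.6}};
        System.out.println(Arrays.toString(assignTasks(c, w, t, 0.4, 0.3, 0.3)));
    }
}
```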
The correlation information can be determined by factors such as the precedence constraint relationship, the synchronous mutual exclusion relationship and the data traffic among tasks, and the precedence constraint relationship, the synchronous mutual exclusion relationship and the data traffic can be obtained in the compiling stage. The precedence constraint relationship represents the order in which a plurality of tasks are executed; the synchronous mutual exclusion relationship represents that, for example, while task a is executed it locks and uses program resource A, so that tasks b and c cannot use program resource A at that time.
In an alternative embodiment, data tasks may interact and communicate without being constrained by the precedence relationship, so the correlation information may be determined by considering only the data traffic. Taking the case where the correlation information is determined by the data traffic as an example, the correlation information of the ith task and the jth node device is calculated by the following formula:
C_ij = ( Σ_{t_k ∈ T_j} e_ik ) / ( Σ_{t_l ∈ T} e_il )
where C_ij represents the correlation information of the ith task and the jth node device, e_ik represents the data traffic between the ith task and the kth task, e_il represents the data traffic between the ith task and the lth task, T_j represents the set of all tasks on the jth node device, T represents the set of all tasks in the cluster, m represents the total number of tasks, and n represents the total number of node devices.
It should be understood that Σ_{t_l ∈ T} e_il represents the data traffic between the ith task and the tasks on all the node devices 300 of the whole cluster. If there is only one node device 300 in the cluster, or all the tasks of the whole cluster are on one node device 300, there is no inter-device communication, so Σ_{t_l ∈ T} e_il is 0. If the tasks are distributed over the node devices 300 in the cluster, the sum of the data traffic of the ith task with the tasks on the jth node device 300 is Σ_{t_k ∈ T_j} e_ik. The correlation information of the ith task and the jth node device is therefore the aggregate data traffic of the ith task on the jth node device 300 compared with the aggregate data traffic of the ith task in the entire cluster.
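A minimal Java sketch of this ratio follows, assuming a traffic matrix traffic[i][k] holding the data traffic between task i and task k; the parameter names and the zero-denominator guard are assumptions of the sketch.

```java
// Sketch of C_ij = (sum of e_ik over tasks on node j) / (sum of e_il over all tasks).
import java.util.List;

public class CorrelationInfo {
    /**
     * @param traffic      traffic[i][k] = data traffic between task i and task k
     * @param tasksOnNodeJ indices of the tasks currently placed on node j
     * @param taskI        index of the task being scheduled
     */
    static double correlation(double[][] traffic, List<Integer> tasksOnNodeJ, int taskI) {
        double onNodeJ = 0.0;     // traffic of task i with tasks already on node j
        for (int k : tasksOnNodeJ) {
            onNodeJ += traffic[taskI][k];
        }
        double inCluster = 0.0;   // traffic of task i with all tasks in the cluster
        for (int l = 0; l < traffic[taskI].length; l++) {
            inCluster += traffic[taskI][l];
        }
        return inCluster == 0.0 ? 0.0 : onNodeJ / inCluster;
    }

    public static void main(String[] args) {
        double[][] e = {{0, 2, 3}, {2, 0, 1}, {3, 1, 0}};
        System.out.println(correlation(e, List.of(1), 0)); // 2 / (2 + 3) = 0.4
    }
}
```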
To facilitate understanding of how to obtain the current load information, the current load information of the jth node device may be calculated by using the following formula:
W_j = u_1*U_j + u_2*M_j + u_3*P_j + u_4*D_j + u_5*S_j
where W_j represents the current load information of the jth node device, U_j represents the CPU utilization of the jth node device, M_j represents the memory usage of the jth node device, P_j represents the number of processes in the ready queue of the jth node device, D_j represents the disk utilization of the jth node device, S_j represents the network card traffic of the jth node device, u_1 represents the weight parameter of the CPU utilization, u_2 represents the weight parameter of the memory usage, u_3 represents the weight parameter of the number of processes in the ready queue, u_4 represents the weight parameter of the disk utilization, and u_5 represents the weight parameter of the network card traffic.
It is understood that u_1, u_2, u_3, u_4 and u_5 may be set according to the actual situation, which is not limited herein.
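A short Java sketch of this weighted load formula is given below; the concrete weight values and the normalization of each metric to a comparable scale are assumptions, since the patent leaves them to the actual situation.

```java
// Sketch of W_j = u1*U_j + u2*M_j + u3*P_j + u4*D_j + u5*S_j.
public class NodeLoad {
    static double currentLoad(double cpuUsage,        // U_j, e.g. 0..1
                              double memoryUsage,     // M_j, e.g. 0..1
                              double readyProcesses,  // P_j, normalized process count
                              double diskUsage,       // D_j, e.g. 0..1
                              double nicTraffic,      // S_j, normalized network card traffic
                              double[] u) {           // u[0..4], typically summing to 1
        return u[0] * cpuUsage + u[1] * memoryUsage + u[2] * readyProcesses
                + u[3] * diskUsage + u[4] * nicTraffic;
    }

    public static void main(String[] args) {
        double[] u = {0.3, 0.25, 0.15, 0.15, 0.15};
        System.out.println(currentLoad(0.7, 0.5, 0.2, 0.4, 0.3, u));
    }
}
```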
To facilitate understanding of how to calculate the waiting time, the waiting time of the ith task on the jth node device may be calculated by the following formula:
T_earliest_ij = T_release_i + Σ_{k=1}^{Queue_Length_j} T_Process_kj
where T_earliest_ij represents the waiting time of the ith task on the jth node device, T_release_i represents the issue time of the ith task, Queue_Length_j represents the length of the ready queue of the jth node device, T_Process_kj represents the execution time of the kth ready data task on the jth node device, and k indexes the ready data tasks on the jth node device.
It should be understood that the waiting time for a process residing on the node apparatus 300 to execute a task cannot be directly calculated. Therefore, it is necessary to estimate the earliest execution time of the task on the node device 300 according to the distribution time of the task, the length of the ready queue in the node device 300, the execution time of the ready data task, and the like, and further obtain the waiting time of the task on the node device 300.
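The following Java sketch illustrates one such estimate, taking the issue time plus the time needed to drain the ready queue as the earliest start; the exact combination of terms is an assumption consistent with, but not dictated by, the description above.

```java
// Hedged sketch: estimate the waiting time of a task on a node from the task's
// issue time and the execution times of the tasks already in the node's ready queue.
public class WaitingTimeEstimate {
    static double estimateWaiting(double releaseTime,         // T_release_i
                                  double[] queuedTaskTimes) { // T_Process_kj for node j's ready queue
        double drainTime = 0.0;
        for (double t : queuedTaskTimes) {
            drainTime += t;   // time to execute everything already queued
        }
        return releaseTime + drainTime;
    }

    public static void main(String[] args) {
        System.out.println(estimateWaiting(2.0, new double[]{1.5, 0.5, 3.0})); // 7.0
    }
}
```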
In an optional embodiment, in order to further optimize task allocation, a scheduling overhead index may also be taken into account when determining the target node device. The scheduling overhead represents the overhead of scheduling the task to the corresponding node device 300, for example the environment information of the task, including the transmission overhead of the working directory, task privileges, the network distance between the electronic device 100 and the node device 300, and the like.
In an alternative embodiment, in order to improve the stability and efficiency of task scheduling, referring to fig. 7, the above S401 may include the following steps:
s401a, a second task is acquired.
It should be understood that the second task is a multimedia information transcoding task, a multimedia information recording and broadcasting task, a multimedia information on demand task, and the like, and the multimedia information transcoding task, the multimedia information recording and broadcasting task, and the multimedia information on demand task are all complete tasks. For example, the second task may be a transcoding task for a movie.
S401b, the second task is segmented to obtain task fragments of the second task.
It should be understood that if the second task is a multimedia information transcoding task, the multimedia information transcoding task may be segmented at the GOP level to obtain task segments of the second task. The task slice can be understood as being obtained by segmenting the multimedia information coding task according to the key frame interval GOP. For example, the total duration of the transcoding task for a movie is 120 minutes, and the task slices may be transcoding task slices separated by 5 minutes.
S401c, the task slice is determined to be the first task.
It should be understood that the task in S401 is a task fragment, and when scheduling is performed subsequently, scheduling may be performed in units of task fragments. That is, a complete task is divided into a plurality of task fragments, and the plurality of task fragments are respectively scheduled to different node devices 300 for processing.
Compared with having one node device 300 process a complete task, dispatching the task fragments to different node devices 300 for processing means that, when a node device 300 fails while processing a certain task fragment, only that task fragment needs to be processed again rather than the complete task; this reduces the impact of equipment failure, saves processing time, and improves task scheduling efficiency. Dispatching the task fragments to different node devices 300 is also equivalent to having the node devices 300 process a complete task in parallel, so there is no situation in which one node device 300 processes a time-consuming task while the other node devices 300 sit idle. The task scheduling speed can thus be improved, and cluster resources can be used to the maximum extent.
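A hedged Java sketch of the segmentation step follows. Real GOP segmentation would cut on actual key-frame boundaries of the media stream; here a fixed slice duration stands in for the GOP interval, matching the 120-minute movie / 5-minute slice example above. Class and method names are illustrative.

```java
// Sketch: split a transcoding task into fixed-duration slices as a stand-in for GOP cuts.
import java.util.ArrayList;
import java.util.List;

public class TaskSegmenter {
    record TaskSlice(String taskId, int index, double startSec, double endSec) {}

    static List<TaskSlice> segment(String taskId, double totalSec, double sliceSec) {
        List<TaskSlice> slices = new ArrayList<>();
        int index = 0;
        for (double start = 0; start < totalSec; start += sliceSec) {
            double end = Math.min(start + sliceSec, totalSec);
            slices.add(new TaskSlice(taskId, index++, start, end));
        }
        return slices;
    }

    public static void main(String[] args) {
        // A 120-minute movie split into 5-minute slices yields 24 task fragments.
        System.out.println(segment("movie-001", 120 * 60, 5 * 60).size());
    }
}
```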
In an alternative implementation manner, referring to fig. 8, on the basis of the task scheduling method shown in fig. 3, before S403, the task scheduling method further includes:
s402, storing the tasks into corresponding task queues according to the parameter information of each task.
It should be understood that the task includes parameter information, which may include task type information, operation type information, a time stamp, and the like. The task type information may be a transcoding task type, an on-demand task type, a receiving task type, and the like, and the operation type information may be transcoding operation type information, on-demand operation type information, receiving operation type information, and the like.
And if the task is a second task, caching the second task into a task queue corresponding to the task type information and the operation type information according to the task type information and the operation type information of the second task. And if the second task is the transcoding task type, caching the second task into the transcoding task queue, and if the second task is the on-demand task type, caching the second task into the on-demand task queue.
If the task is the first task, the task is the task fragment. Before the second task is segmented, the second task is firstly cached into a task queue corresponding to the task type information and the operation type information according to the task type information and the operation type information of the second task, the second task is segmented to obtain task fragments, and finally the task fragments are dispatched to target node equipment.
Therefore, before the segmentation, the second task is respectively cached into different task queues according to the task type information and the operation type information, so that the electronic device 100 can distribute and manage the task conveniently.
For convenience of understanding, please refer to fig. 9, which is a schematic view of the task analysis processing of an electronic device according to an embodiment of the present disclosure. After receiving the second task sent by the client 200, the electronic device 100 first performs registration verification, that is, verifies whether the client 200 is registered with the electronic device 100; if the client 200 is not registered, the second task sent by the client 200 is rejected. The electronic device 100 then parses the parameter information of the second task, and if the parameter information does not include the task type information and/or the operation type information, the second task is rejected; if the parameter information contains the task type information and the operation type information, it is checked whether a task queue corresponding to the task type information and the operation type information has been established. If so, the second task is cached directly into the corresponding task queue; otherwise, the task queue is created first and the second task is then cached into it.
In an alternative embodiment, if the scheduling of the second task needs to consider the priority, before the second task is stored in the task queue, the priority of the second task needs to be compared with the priorities of all the second tasks that have been buffered in the task queue, and the storage position of the second task in the task queue is determined according to the priorities. And if the priorities of the two second tasks are the same, the two second tasks are stored according to the time sequence of receiving the second tasks.
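The queue handling described above can be sketched in Java as follows: second tasks are routed into a queue keyed by task type and operation type, ordered by priority and then by arrival time. The key format and field names are assumptions of the sketch.

```java
// Sketch: per-type task queues with priority ordering and arrival-time tie-breaking.
import java.util.Comparator;
import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;

public class TaskQueues {
    record Task(String id, String taskType, String opType, int priority, long receivedAtMs) {}

    private final Map<String, PriorityQueue<Task>> queues = new HashMap<>();

    void enqueue(Task task) {
        String key = task.taskType() + "/" + task.opType();  // e.g. "transcode/transcode-op"
        queues.computeIfAbsent(key, k -> new PriorityQueue<>(
                Comparator.comparingInt(Task::priority).reversed()   // higher priority first
                        .thenComparingLong(Task::receivedAtMs)))     // then earlier arrival
              .add(task);
    }

    Task poll(String taskType, String opType) {
        PriorityQueue<Task> q = queues.get(taskType + "/" + opType);
        return q == null ? null : q.poll();
    }
}
```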
Referring to fig. 8, after the step S405, the method may further include the following steps:
s406, task fragment results corresponding to the task fragments processed by the target node equipment are obtained from the target node equipment.
It should be understood that the task fragmentation result may be obtained by the electronic device 100 from the target node device, or may be obtained by an idle node device 300 of the plurality of node devices 300 from the target node device.
And S407, integrating the task slicing results to obtain task results.
It should be understood that each task slice includes a timestamp, and the corresponding generated task slice result also includes a corresponding timestamp. And integrating according to the time stamp in the task slicing result to obtain a task result.
For example, if the first target node device includes a first task fragmentation result and a second task fragmentation result, the second target node device includes a third task fragmentation result, and the third target node device includes a fourth task fragmentation result and a fifth fragmentation result. The time stamp of the first task slicing result is 1s, the time stamp of the second task slicing result is 3s, the time stamp of the third task slicing result is 2s, the time stamp of the fourth task slicing result is 5s, and the time stamp of the fifth task slicing result is 4 s. And integrating according to the time stamp of each task fragmentation result to obtain an integrated result which is sequentially sequenced and integrated according to the first task fragmentation result, the third task fragmentation result, the second task fragmentation result, the fifth task fragmentation result and the fourth task fragmentation result.
When the task fragmentation results are integrated, a Max-Min algorithm can be adopted to realize fragmentation integration, and an integration result is obtained.
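As a simplified illustration, the Java sketch below orders fragment results by their timestamps before concatenation; it does not reproduce the Max-Min based integration mentioned above, and the record fields are assumptions.

```java
// Sketch: order fragment results by timestamp so they can be concatenated into the task result.
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class ResultIntegrator {
    record SliceResult(String taskId, double timestampSec, byte[] payload) {}

    /** Returns the fragment results sorted by timestamp, ready for concatenation. */
    static List<SliceResult> integrate(List<SliceResult> fragments) {
        return fragments.stream()
                .sorted(Comparator.comparingDouble(SliceResult::timestampSec))
                .collect(Collectors.toList());
    }
}
```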
To further improve the stability of task scheduling, please continue with fig. 8, the task scheduling method may further include:
s408, recording the processing state information of the task.
The processing status information includes storage information of the task, a mapping relation with the node device 300, progress information and status information of the task, and the like.
It should be understood that after a task is cached into a task queue, the electronic device 100 records the processing state information of the task and updates the recorded state information and progress information in real time. After the second task is segmented into task fragments and the task fragments are distributed to the corresponding node devices 300, the state information and progress information of the task fragments are likewise updated and recorded in real time. After receiving a task, the node device 300 calls back and reports the state information of the task in real time.
Before the electronic device 100 performs task allocation, the electronic device 100 is further configured to read the health status of each node device 300 and record the health status of each node device. Since the health status of the node device 300 and the processing status information of the task are recorded at the same time, when the node device 300 fails, the electronic device 100 can quickly switch the node device 300 to process the task fragment.
In the process of task fragment scheduling, when the processing of a task fragment fails, the electronic device 100 returns the task fragment to its original queue, reorders it to wait for redistribution, and records the number of sending attempts and the failure reason. When a task fragment has been actively retransmitted more than three times, the electronic device 100 no longer retransmits it automatically; if the task fragment still needs to be processed, it can be maintained manually through the task management page provided by the electronic device 100.
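A minimal Java sketch of this failure handling is given below: a failed fragment is requeued with its send count and failure reason recorded, and is no longer retried automatically after three attempts, after which it is left for manual maintenance. All names are illustrative.

```java
// Sketch: requeue failed fragments, stop automatic retries after three attempts.
import java.util.ArrayDeque;
import java.util.Deque;

public class FragmentRetryHandler {
    static final int MAX_AUTO_RETRIES = 3;

    static class Fragment {
        final String id;
        int sendCount = 0;
        String lastFailureReason;
        Fragment(String id) { this.id = id; }
    }

    private final Deque<Fragment> queue = new ArrayDeque<>();

    /** Called when a node reports that processing a fragment failed. */
    void onFragmentFailed(Fragment fragment, String reason) {
        fragment.sendCount++;
        fragment.lastFailureReason = reason;
        if (fragment.sendCount <= MAX_AUTO_RETRIES) {
            queue.addFirst(fragment);   // requeue for automatic redistribution
        } else {
            // Left for manual maintenance, e.g. via a task management page.
            System.out.println("Fragment " + fragment.id + " needs manual handling: " + reason);
        }
    }
}
```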
In alternative embodiments, the electronic device 100 may also report the processing state information to the client 200, or the client 200 may actively acquire the processing state information.
In an alternative embodiment, in order to implement the optimal task scheduling method, the following assumptions may be made: the performance, structure and processing capability of the processors of the electronic device 100 and the node devices 300 are identical; tasks interact and communicate without being constrained by a precedence relationship; all the information reported by the node devices 300 is authentic and credible; and the data traffic between tasks co-resident on one node device 300 does not incur communication cost.
In order to facilitate understanding of the implementation principle of the task scheduling method provided in the present application, the following description will be given by taking scheduling of multimedia information encoding tasks as an example: acquiring task fragments, wherein the task fragments are obtained by segmenting a multimedia information coding task according to a key frame interval GOP; determining target node equipment from the plurality of node equipment according to current load information of the plurality of node equipment and expected execution information of the task fragments on the plurality of node equipment; and scheduling the task fragments to the target node equipment.
It should be understood that before the multimedia information coding task is segmented, the multimedia information coding task is stored into a corresponding coding task queue according to the parameter information of the multimedia information coding task, and the storage information, the progress information and the state information of the multimedia information coding task are recorded. When the task fragment is scheduled to the target node device, the mapping relation, the progress information and the state information of the task fragment and the target node device are also recorded. After the target node equipment processes the task fragments, acquiring corresponding task fragment results from the target node equipment; and integrating the task slicing results to obtain task results. And the task result is a multimedia information coding task result corresponding to the multimedia information coding task.
In order to implement the task scheduling method corresponding to the foregoing S401 to S408 and possible sub-steps thereof, an embodiment of the present application provides a task scheduling device, please refer to fig. 10, where fig. 10 is a block schematic diagram of a task scheduling device provided in an embodiment of the present application, and the task scheduling device 500 includes: the system comprises an acquisition module 501, a determination module 502, a scheduling module 503, a segmentation module 504, an integration module 505, a queue storage module 506 and a recording module 507.
The obtaining module 501 is used for obtaining tasks.
In an alternative embodiment, the obtaining module 501 is further configured to obtain the second task.
In an optional implementation manner, the obtaining module 501 is further configured to obtain, from the target node device, a task fragment result corresponding to a task fragment processed by the target node device.
The determining module 502 is configured to determine a target node device from the plurality of node devices 300 according to the current load information of the plurality of node devices 300 and the expected execution information of the task on the plurality of node devices 300.
In an optional embodiment, the expected execution information of the task on any node device comprises at least one of relevance information and waiting time; the relevance information is used for representing communication resources of the task on any node device, and the waiting time length is used for representing the waiting time of the task on any node device.
In an optional embodiment, the expected execution information includes correlation information, the current load information is correspondingly provided with a first weight parameter, and the correlation information is correspondingly provided with a second weight parameter. The determining module 502 is further configured to perform weighted summation according to the current load information, the correlation information, the first weight parameter, and the second weight parameter, so as to obtain a first distribution value of the task and any node device; and determining the node equipment with the maximum first distribution value as the target node equipment.
In an optional embodiment, the expected execution information includes a waiting duration, the current load information is correspondingly provided with a first weight parameter, and the waiting duration is correspondingly provided with a third weight parameter. The determining module 502 is further configured to perform weighted summation according to the current load information, the waiting duration, the first weight parameter, and the third weight parameter, so as to obtain a second allocation value of the task and any node device; and determining the node equipment with the maximum second distribution value as the target node equipment.
In an optional embodiment, the expected execution information includes correlation information and a waiting duration, the current load information is correspondingly provided with a first weight parameter, the correlation information is correspondingly provided with a second weight parameter, and the waiting duration is correspondingly provided with a third weight parameter. The determining module 502 is further configured to perform weighted summation according to the current load information, the correlation information, the waiting duration, the first weight parameter, the second weight parameter, and the third weight parameter, so as to obtain a third distribution value of the task and any node device; and determining the node equipment with the maximum third distribution value as the target node equipment.
In an alternative embodiment, the determining module 502 is further configured to determine the task slice as the first task.
The scheduling module 503 is configured to schedule the task to the target node device.
The segmentation module 504 is configured to segment the second task to obtain a task segment of the second task.
The integration module 505 is configured to integrate the task slicing results to obtain task results.
The queue storage module 506 is configured to store the task into the task queue corresponding to the parameter information according to the parameter information of each task.
The recording module 507 is used for recording processing state information of the task.
It should be understood that the obtaining module 501, the determining module 502, the scheduling module 503, the dividing module 504, the integrating module 505, the queue storing module 506 and the recording module 507 may cooperatively implement the above-mentioned S401 to S408 and possible sub-steps thereof.
In summary, the present application provides a task scheduling method, a task scheduling apparatus, an electronic device, and a computer storage medium, which perform scheduling distribution on a task according to current load information of a plurality of node devices and expected execution information of the task on the plurality of node devices. When task scheduling is carried out, not only the current load information of the node equipment is considered, but also the expected execution information of the task on the node equipment is considered, so that the resource condition of the node equipment can be measured in multiple dimensions, the maximum utilization of resources can be realized, and load balance is realized.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (20)

1. A method for task scheduling, the method comprising:
acquiring a task;
determining target node equipment from the plurality of node equipment according to current load information of the plurality of node equipment and expected execution information of the task on the plurality of node equipment;
and scheduling the task to the target node equipment.
2. The method of claim 1, wherein the expected execution information of the task on any node device comprises at least one of correlation information and waiting duration; the correlation information is used for representing communication resources of the task on any node device, and the waiting duration is used for representing the waiting time of the task on any node device.
3. The method of claim 2, wherein the expected execution information comprises the correlation information, wherein the current load information is set with a first weight parameter, and wherein the correlation information is set with a second weight parameter;
the step of determining a target node device from the plurality of node devices according to the current load information of the plurality of node devices and the expected execution information of the task on the plurality of node devices further comprises:
carrying out weighted summation according to the current load information, the correlation information, the first weight parameter and the second weight parameter to obtain a first distribution value of the task and any node equipment;
and determining the node equipment with the maximum first allocation value as the target node equipment.
4. The method of claim 2, wherein the expected execution information comprises the waiting duration, wherein the current load information is set with a first weight parameter, and wherein the waiting duration is set with a third weight parameter;
the step of determining a target node device from the plurality of node devices according to the current load information of the plurality of node devices and the expected execution information of the task on the plurality of node devices comprises:
carrying out weighted summation according to the current load information, the waiting duration, the first weight parameter and the third weight parameter to obtain a second distribution value of the task and any node equipment;
and determining the node equipment with the maximum second distribution value as the target node equipment.
5. The method of claim 2, wherein the expected execution information comprises the correlation information and the waiting duration, the current load information is set with a first weight parameter, the correlation information is set with a second weight parameter, and the waiting duration is set with a third weight parameter;
the step of determining a target node device from the plurality of node devices according to the current load information of the plurality of node devices and the expected execution information of the task on the plurality of node devices comprises:
performing a weighted summation of the current load information, the correlation information and the waiting duration using the first weight parameter, the second weight parameter and the third weight parameter to obtain a third allocation value of the task for any node device;
and determining the node device with the maximum third allocation value as the target node device.
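Claims 3 to 5 differ only in which terms enter the weighted summation. A minimal sketch of the most general case (claim 5) follows; it assumes the three weight parameters already encode whether a larger value of each term should raise or lower the allocation value, since the claims only specify a weighted summation followed by taking the maximum.

```python
from typing import Dict, Tuple

def third_allocation_value(load_w: float, correlation_c: float, wait_t: float,
                           a1: float, a2: float, a3: float) -> float:
    # Weighted summation of current load, correlation information and waiting duration.
    return a1 * load_w + a2 * correlation_c + a3 * wait_t

def choose_target(metrics: Dict[int, Tuple[float, float, float]],
                  a1: float, a2: float, a3: float) -> int:
    # metrics maps node_id -> (W_j, C_ij, T_wait_ij); the node with the
    # maximum allocation value is chosen as the target node device.
    return max(metrics, key=lambda j: third_allocation_value(*metrics[j], a1, a2, a3))
```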
6. The method according to any of claims 1-5, wherein the current load information of the jth node device is calculated using the following formula:
W_j = u_1·U_j + u_2·M_j + u_3·P_j + u_4·D_j + u_5·S_j
wherein W_j represents the current load information of the jth node device, U_j represents the CPU utilization of the jth node device, M_j represents the memory usage of the jth node device, P_j represents the number of processes in the ready queue of the jth node device, D_j represents the disk utilization of the jth node device, S_j represents the network card traffic of the jth node device, u_1 represents the weight parameter of the CPU utilization, u_2 represents the weight parameter of the memory usage, u_3 represents the weight parameter of the number of processes in the ready queue, u_4 represents the weight parameter of the disk utilization, and u_5 represents the weight parameter of the network card traffic.
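The load formula of claim 6 is a plain weighted sum and can be transcribed directly; only the Python names below are illustrative. The claim does not say how the five indicators are normalised, so the example values are placeholders.

```python
def current_load(u1: float, u2: float, u3: float, u4: float, u5: float,
                 cpu_util: float, mem_usage: float, ready_procs: float,
                 disk_util: float, nic_traffic: float) -> float:
    # W_j = u1*U_j + u2*M_j + u3*P_j + u4*D_j + u5*S_j
    return (u1 * cpu_util + u2 * mem_usage + u3 * ready_procs
            + u4 * disk_util + u5 * nic_traffic)

# Placeholder values only, e.g.:
# current_load(0.3, 0.2, 0.2, 0.2, 0.1, 0.65, 0.50, 12, 0.40, 300.0)
```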
7. The method according to any one of claims 2 to 5, wherein the correlation information of the ith task and the jth node device is calculated by using the following formula:
C_ij = (formula image FDA0003173302740000021, not reproduced in this text)
wherein C_ij represents the correlation information between the ith task and the jth node device, e_ik represents the data traffic between the ith task and the kth task, e_il represents the data traffic between the ith task and the lth task, T_j represents the set of all tasks on the jth node device, T represents the set of all tasks in the cluster, m represents the total number of tasks, and n represents the total number of node devices.
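Because the formula image for C_ij is not reproduced above, the sketch below is an assumption rather than a transcription: it reads the correlation information as the share of task i's data traffic that is exchanged with tasks already placed on node j, which is consistent with the symbols defined above but not verified against the drawing.

```python
from typing import Dict, Iterable, Set, Tuple

def correlation(i: int,
                tasks_on_node_j: Set[int],
                all_tasks: Iterable[int],
                traffic: Dict[Tuple[int, int], float]) -> float:
    # traffic[(i, k)] is the data traffic e_ik between task i and task k.
    to_node = sum(traffic.get((i, k), 0.0) for k in tasks_on_node_j)
    total = sum(traffic.get((i, l), 0.0) for l in all_tasks if l != i)
    return to_node / total if total > 0 else 0.0
```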
8. The method according to any one of claims 2 to 5, wherein the waiting duration of the ith task on the jth node device is calculated using the following formula:
T_earlast_ij = (formula image FDA0003173302740000031, not reproduced in this text)
wherein T_earlast_ij represents the waiting duration of the ith task on the jth node device, T_release_i represents the issue time of the ith task, Queue_Length_j represents the length of the ready queue in the jth node device, T_Process_kj represents the total execution time of the ready tasks on the jth node device, and k represents the execution time unit of the ready tasks on the jth node device.
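The formula image for the waiting duration is likewise not reproduced, so this sketch only illustrates one plausible reading of the symbols above: the task cannot start before its issue time plus the execution time of everything already in the node's ready queue. Treat it as an assumption, not the claimed formula.

```python
from typing import List

def waiting_duration(t_release_i: float,
                     ready_queue_exec_times_j: List[float]) -> float:
    # ready_queue_exec_times_j holds T_Process_kj for each of the
    # Queue_Length_j ready tasks on node j.
    return t_release_i + sum(ready_queue_exec_times_j)
```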
9. The method according to any one of claims 1 to 5, wherein the task is a first task, and the step of acquiring a task comprises:
acquiring a second task;
segmenting the second task to obtain a task fragment of the second task;
and determining the task fragment as the first task.
10. The method of claim 9, wherein after the step of scheduling the task to the target node device, the method further comprises:
acquiring, from the target node device, a task fragment result corresponding to the task fragment processed by the target node device;
and integrating the task fragment results to obtain a task result.
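Claims 9 and 10 together describe a split/schedule/merge pattern. The helpers split_fn, schedule_fn and merge_fn below are assumed to be supplied by the surrounding system; the claims do not prescribe how the segmentation or the integration is performed.

```python
from typing import Callable, List

def run_split_task(second_task,
                   split_fn: Callable[[object], List[object]],
                   schedule_fn: Callable[[object], object],
                   merge_fn: Callable[[List[object]], object]):
    fragments = split_fn(second_task)              # segment the second task into task fragments
    results = [schedule_fn(f) for f in fragments]  # each fragment is scheduled and processed
    return merge_fn(results)                       # integrate fragment results into the task result
```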
11. The method according to any of claims 1-5, wherein the task comprises parameter information, and wherein the method further comprises, before the step of determining the target node device from the plurality of node devices based on current load information of the plurality of node devices and expected execution information of the task on the plurality of node devices:
and storing each task into the task queue corresponding to its parameter information.
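A minimal sketch of claim 11, assuming the parameter information is a hashable key (for example a priority level or task type; the claim does not say which):

```python
from collections import defaultdict, deque

task_queues = defaultdict(deque)  # parameter information -> queue of tasks

def enqueue(task, parameter_info) -> None:
    # Store the task into the task queue corresponding to its parameter information.
    task_queues[parameter_info].append(task)
```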
12. The method of claim 1, further comprising:
and recording the processing state information of the task.
13. A method for task scheduling, the method comprising:
acquiring a task fragment, wherein the task fragment is obtained by segmenting a multimedia information encoding task according to the key frame interval (group of pictures, GOP);
determining a target node device from a plurality of node devices according to current load information of the plurality of node devices and expected execution information of the task fragment on the plurality of node devices;
and scheduling the task fragment to the target node device.
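For claim 13, a sketch of cutting an encoding job at key-frame (GOP) boundaries so that each fragment can be encoded independently. The key-frame indices are assumed to be known; in practice they would come from the container or encoder, which the claim does not prescribe.

```python
from typing import List, Tuple

def split_by_gop(frame_count: int, keyframe_indices: List[int]) -> List[Tuple[int, int]]:
    """Return (start, end) frame ranges, one task fragment per GOP."""
    bounds = sorted(set(keyframe_indices) | {0}) + [frame_count]
    return [(bounds[i], bounds[i + 1]) for i in range(len(bounds) - 1)]

# e.g. split_by_gop(300, [0, 120, 240]) -> [(0, 120), (120, 240), (240, 300)]
```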
14. A task scheduling apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to acquire a task;
a determining module, configured to determine a target node device from a plurality of node devices according to current load information of the plurality of node devices and expected execution information of the task on the plurality of node devices;
and a scheduling module, configured to schedule the task to the target node device.
15. The apparatus according to claim 14, wherein the expected execution information of the task on any node device comprises at least one of correlation information and a waiting duration; the correlation information is used for representing communication resources of the task on the node device, and the waiting duration is used for representing the time the task waits on the node device.
16. The apparatus according to claim 14 or 15, wherein the current load information of the jth node device is calculated by using the following formula:
W_j = u_1·U_j + u_2·M_j + u_3·P_j + u_4·D_j + u_5·S_j
wherein W_j represents the current load information of the jth node device, U_j represents the CPU utilization of the jth node device, M_j represents the memory usage of the jth node device, P_j represents the number of processes in the ready queue of the jth node device, D_j represents the disk utilization of the jth node device, S_j represents the network card traffic of the jth node device, u_1 represents the weight parameter of the CPU utilization, u_2 represents the weight parameter of the memory usage, u_3 represents the weight parameter of the number of processes in the ready queue, u_4 represents the weight parameter of the disk utilization, and u_5 represents the weight parameter of the network card traffic.
17. The apparatus of claim 15, wherein the correlation information between the ith task and the jth node device is calculated by using the following formula:
C_ij = (formula image FDA0003173302740000051, not reproduced in this text)
wherein C_ij represents the correlation information between the ith task and the jth node device, e_ik represents the data traffic between the ith task and the kth task, e_il represents the data traffic between the ith task and the lth task, T_j represents the set of all tasks on the jth node device, T represents the set of all tasks in the cluster, m represents the total number of tasks, and n represents the total number of node devices.
18. The apparatus of claim 15, wherein the waiting duration of the ith task on the jth node device is calculated using the following formula:
T_earlast_ij = (formula image FDA0003173302740000052, not reproduced in this text)
wherein T_earlast_ij represents the waiting duration of the ith task on the jth node device, T_release_i represents the issue time of the ith task, Queue_Length_j represents the length of the ready queue in the jth node device, T_Process_kj represents the total execution time of the ready tasks on the jth node device, and k represents the execution time unit of the ready tasks on the jth node device.
19. An electronic device comprising one or more processors, and memory for storing one or more programs; the one or more programs, when executed by the one or more processors, implement the method of any of claims 1 to 12 or the method of claim 13.
20. A computer storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method of any one of claims 1 to 12 or of the method of claim 13.
CN202110825105.0A 2021-07-21 2021-07-21 Task scheduling method and device, electronic equipment and computer storage medium Pending CN113608869A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110825105.0A CN113608869A (en) 2021-07-21 2021-07-21 Task scheduling method and device, electronic equipment and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110825105.0A CN113608869A (en) 2021-07-21 2021-07-21 Task scheduling method and device, electronic equipment and computer storage medium

Publications (1)

Publication Number Publication Date
CN113608869A 2021-11-05

Family

ID=78304992

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110825105.0A Pending CN113608869A (en) 2021-07-21 2021-07-21 Task scheduling method and device, electronic equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN113608869A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103686206A (en) * 2014-01-02 2014-03-26 中安消技术有限公司 Video transcoding method and system in cloud environment
CN110809167A (en) * 2018-08-06 2020-02-18 中国移动通信有限公司研究院 Video playing method and device, electronic equipment and storage medium
CN111506398A (en) * 2020-03-03 2020-08-07 平安科技(深圳)有限公司 Task scheduling method and device, storage medium and electronic device
CN111813513A (en) * 2020-06-24 2020-10-23 中国平安人寿保险股份有限公司 Real-time task scheduling method, device, equipment and medium based on distribution
CN112667376A (en) * 2020-12-23 2021-04-16 数字广东网络建设有限公司 Task scheduling processing method and device, computer equipment and storage medium

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114115154A (en) * 2021-11-25 2022-03-01 广东三维家信息科技有限公司 Node scheduling method and device for production line, electronic equipment and storage medium
CN114115154B (en) * 2021-11-25 2024-04-05 广东三维家信息科技有限公司 Node scheduling method and device for production line, electronic equipment and storage medium
CN114862606A (en) * 2022-06-13 2022-08-05 新疆益盛鑫创展科技有限公司 Insurance information processing method and device based on cloud service
CN114862606B (en) * 2022-06-13 2023-05-09 新疆益盛鑫创展科技有限公司 Insurance information processing method and device based on cloud service
CN116308772A (en) * 2022-12-16 2023-06-23 蚂蚁区块链科技(上海)有限公司 Transaction distribution method, node and blockchain system
CN116308772B (en) * 2022-12-16 2023-10-13 蚂蚁区块链科技(上海)有限公司 Transaction distribution method, node and blockchain system

Similar Documents

Publication Publication Date Title
CN113608869A (en) Task scheduling method and device, electronic equipment and computer storage medium
CN106776005B (en) Resource management system and method for containerized application
CN112153700B (en) Network slice resource management method and equipment
US11010188B1 (en) Simulated data object storage using on-demand computation of data objects
CN109218355B (en) Load balancing engine, client, distributed computing system and load balancing method
US8539078B2 (en) Isolating resources between tenants in a software-as-a-service system using the estimated costs of service requests
US20200364608A1 (en) Communicating in a federated learning environment
US9201690B2 (en) Resource aware scheduling in a distributed computing environment
CN104038540B (en) Method and system for automatically selecting application proxy server
US9569236B2 (en) Optimization of virtual machine sizing and consolidation
CN111813513A (en) Real-time task scheduling method, device, equipment and medium based on distribution
WO2019091387A1 (en) Method and system for provisioning resources in cloud computing
US20140089509A1 (en) Prediction-based provisioning planning for cloud environments
CN103761146B (en) A kind of method that MapReduce dynamically sets slots quantity
US20150081908A1 (en) Computer-based, balanced provisioning and optimization of data transfer resources for products and services
US9535749B2 (en) Methods for managing work load bursts and devices thereof
CA3073377A1 (en) Distributed multicloud service placement engine and method therefor
CN104580194A (en) Virtual resource management method and device oriented to video applications
CN115033340A (en) Host selection method and related device
CN109656717A (en) A kind of containerization cloud resource distribution method
US11032392B1 (en) Including prior request performance information in requests to schedule subsequent request performance
CN115118784A (en) Computing resource scheduling method, device and system
CN113867973B (en) Resource allocation method and device
CN113079062B (en) Resource adjusting method and device, computer equipment and storage medium
US20150079966A1 (en) Methods for facilitating telecommunication network administration and devices thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination