CN108121605B - YARN-based cgroup memory control optimization method and system - Google Patents

YARN-based cgroup memory control optimization method and system

Info

Publication number
CN108121605B
Authority
CN
China
Prior art keywords
container
memory
cgroup
directory
oom
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711493850.XA
Other languages
Chinese (zh)
Other versions
CN108121605A (en)
Inventor
叶铿
董喆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WUHAN FIBERHOME INFORMATION INTEGRATION TECHNOLOGIES Co.,Ltd.
Original Assignee
Wuhan Fiberhome Software Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Fiberhome Software Technology Co., Ltd.
Priority to CN201711493850.XA
Publication of CN108121605A
Application granted
Publication of CN108121605B
Status: Active

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061 Partitioning or combining of resources
    • G06F9/5072 Grid computing

Abstract

A YARN-based cgroup memory control optimization method comprises the following steps: S1, when each NodeManager (NM) starts, setting a node-level memory hard limit (hardlimit) on the parent directory of the control group (cgroup) memory control directory in which containers reside; S2, enabling event_control to turn on monitoring of out-of-memory (oom) events, and enabling use_hierarchy; S3, enabling oom_kill_disable to turn off the kernel's oom kill function, and handing oom kill processing to a separate Container Monitor thread started by the NM; S4, converting the hard limit (hardlimit) set when the user submits an application (app) into the soft limit (softlimit) of the directory in which the container resides in the cgroup.

Description

YARN-based cgroup memory control optimization method and system
Technical Field
The invention relates to the technical field of big data YARN platforms, and in particular to a YARN-based cgroup memory control optimization method and system.
Background
The basic idea of YARN is to split the two main functions of the JobTracker (resource management and job scheduling/monitoring) by creating a global ResourceManager (RM) and a per-application ApplicationMaster (AM). The application here refers to a conventional MapReduce job or a Spark job.
At the heart of the YARN hierarchy is the ResourceManager. This entity controls the entire cluster and manages the allocation of applications to the underlying computing resources. The ResourceManager arbitrates the various resource components (compute, memory, bandwidth, etc.) among the base NodeManagers (YARN's per-node agents). The ResourceManager also allocates resources together with the ApplicationMaster, and works with the NodeManager, which launches and monitors the underlying applications and provides resources in the form of Containers. In this architecture, the ApplicationMaster assumes some of the roles of the former TaskTracker, and the ResourceManager assumes the role of the JobTracker.
The ApplicationMaster manages each instance of an application running within YARN. It is responsible for negotiating resources from the ResourceManager and, through the NodeManager, monitoring the execution of containers and their resource usage (allocation of CPU, memory, and so on). Note that while today's resources are fairly traditional (CPU cores, memory), the future will bring new resource types based on the task at hand (such as graphics processing units or dedicated processing devices). From YARN's perspective, the ApplicationMaster is user code and therefore a potential security risk; YARN assumes that ApplicationMasters may be faulty or even malicious and therefore treats them as unprivileged code.
The NodeManager manages each node in a YARN cluster. It provides services on every node in the cluster, from overseeing the lifecycle management of containers to monitoring resources and tracking node health. Whereas MRv1 managed the execution of Map and Reduce tasks through slots, the NodeManager manages abstract containers, which represent the per-node resources available to a particular application. YARN continues to use the HDFS layer: its NameNode provides metadata services, and its DataNodes provide replicated storage services scattered across the cluster.
To use a YARN cluster, a client first submits a request containing an application. The ResourceManager negotiates the necessary resources for a Container and starts an ApplicationMaster to represent the submitted application. Using a resource-request protocol, the ApplicationMaster negotiates Container resources on each node for the application's use. While the application executes, the ApplicationMaster monitors the containers until completion. When the application completes, the ApplicationMaster deregisters its Containers with the ResourceManager, and the execution cycle is complete.
With the continuing spread of the big data computing platform YARN, all kinds of offline and online tasks are chosen to run on the platform. cgroups are used to precisely control the resources a container uses at runtime, so that many different tasks can run simultaneously in relatively independent containers.
Disclosure of Invention
In view of this, the present invention provides a YARN-based cgroup memory control optimization method and system.
A YARN-based cgroup memory control optimization method comprises the following steps:
S1, when each NodeManager (NM) starts, setting a node-level memory hard limit (hardlimit) on the parent directory of the control group (cgroup) memory control directory in which containers reside;
S2, enabling event_control to turn on monitoring of out-of-memory (oom) events, and enabling use_hierarchy;
S3, enabling oom_kill_disable to turn off the kernel's oom kill function, and handing oom kill processing to a separate Container Monitor thread started by the NM;
S4, converting the hard limit (hardlimit) set when the user submits an application (app) into the soft limit (softlimit) of the directory in which the container resides in the cgroup.
In the YARN-based cgroup memory control optimization method of the present invention,
the groups controlled by the cgroup in step S2 adopt a hierarchical structure.
In the YARN-based cgroup memory control optimization method of the present invention,
in step S3, the Container Monitor monitors oom events of the parent directory of the container memory cgroup directory by using an interface of the linux kernel, so as to perform unified oom kill processing.
In the YARN-based cgroup memory control optimization method of the present invention,
the method by which the Container Monitor monitors oom events of the parent directory of the container memory cgroup directory using an interface of the linux kernel comprises the following steps:
the Container Monitor process monitors oom events of the parent directory of the container memory cgroup directory using the memory cgroup api, and when an oom event is triggered, compares containers and selects the one using the most memory resources to kill;
in step S4, the hardlimit is kept consistent with that of the parent directory.
In the YARN-based cgroup memory control optimization method of the present invention,
when the container is started, the limit_in_bytes of the container memory cgroup directory is set to the same value as the limit_in_bytes of the parent directory, and soft_limit_in_bytes is set to the container's estimated memory resource usage as set by the user.
The invention also provides a YARN-based cgroup memory control optimization system, which comprises the following units:
a memory setting unit, configured to set, when each NodeManager (NM) starts, a node-level memory hard limit (hardlimit) on the parent directory of the control group (cgroup) memory control directory in which containers reside;
a monitoring start unit, configured to enable event_control to turn on monitoring of out-of-memory (oom) events, and to enable use_hierarchy;
a kill opening unit, configured to enable oom_kill_disable to turn off the kernel's oom kill function, and to hand oom kill processing to a separate Container Monitor thread started by the NM;
and a directory modification unit, configured to convert the hard limit (hardlimit) set when the user submits an application (app) into the soft limit (softlimit) of the directory in which the container resides in the cgroup.
In the YARN-based cgroup memory control optimization system of the present invention,
the groups controlled by the cgroup in the monitoring start unit adopt a hierarchical structure.
In the YARN-based cgroup memory control optimization system of the present invention,
the Container Monitor in the kill opening unit monitors oom events of the parent directory of the container memory cgroup directory by using an interface of the linux kernel, so as to perform unified oom kill processing.
In the YARN-based cgroup memory control optimization system of the present invention,
the method by which the Container Monitor monitors oom events of the parent directory of the container memory cgroup directory using an interface of the linux kernel comprises the following steps:
the Container Monitor process monitors oom events of the parent directory of the container memory cgroup directory using the memory cgroup api, and when an oom event is triggered, compares containers and selects the one using the most memory resources to kill;
in the directory modification unit, the hardlimit is kept consistent with that of the parent directory.
In the YARN-based cgroup memory control optimization system of the present invention,
when the container is started, the limit_in_bytes of the container memory cgroup directory is set to the same value as the limit_in_bytes of the parent directory, and soft_limit_in_bytes is set to the container's estimated memory resource usage as set by the user.
Compared with the prior art, the YARN-based cgroup memory control optimization method and system provided by the invention have the following beneficial effects:
when each NodeManager (NM) starts, a node-level memory hard limit (hardlimit), generally a large value, is set on the parent directory of the cgroup memory control directory in which containers reside. At the same time, event_control is enabled, turning on oom event monitoring; use_hierarchy is enabled, so that the groups controlled by the cgroup can adopt a hierarchical structure; and oom_kill_disable is enabled, turning off the kernel's oom kill function and handing it to a separate thread started by the NM, the Container Monitor, which monitors oom events of the parent directory of the container memory cgroup directory through an interface of the linux kernel and performs oom kill processing for containers. The memory hardlimit set when the user submits is then converted into the container's softlimit, while the container's hardlimit in the cgroup directory is kept consistent with that of the parent directory. In this way, memory is controlled uniformly across the entire computing node, the machine's memory use efficiency is improved, and containers are no longer killed merely because the user's estimate of the memory used by the app's resources was inaccurate.
Drawings
Fig. 1 is a block diagram of a YARN-based cgroup memory control optimization system according to an embodiment of the present invention.
Detailed Description
At the heart of the YARN hierarchy is the ResourceManager. This entity controls the entire cluster and manages the allocation of applications to the underlying computing resources. The ResourceManager arbitrates the various resource components (compute, memory, bandwidth, etc.) among the base NodeManagers (YARN's per-node agents).
The invention adopts an improved cgroup memory control mode: when the resources of a computing node in the YARN platform are sufficient, the node's memory can be fully used in combination with the resources each task is expected to use as set by the user, thereby improving the overall throughput of the YARN platform.
The memory resources a container may use on the YARN platform are set by the user when the application is submitted, and the memory resources the container actually uses are controlled through the cgroup on the computing node.
Generally, however much memory a user sets is the most the container can use, regardless of whether ample memory remains free on the compute node. This creates a contradiction: some of an app's containers run short of memory while memory on the compute node sits abundant, so the node's memory resources cannot be effectively utilized.
A YARN-based cgroup memory control optimization method comprises the following steps:
S1, when each NodeManager (NM) starts, setting a node-level memory hard limit (hardlimit) on the parent directory of the control group (cgroup) memory control directory in which containers reside;
S2, enabling event_control to turn on monitoring of out-of-memory (oom) events, and enabling use_hierarchy;
S3, enabling oom_kill_disable to turn off the kernel's oom kill function, and handing oom kill processing to a separate Container Monitor thread started by the NM;
S4, converting the hard limit (hardlimit) set when the user submits an application (app) into the soft limit (softlimit) of the directory in which the container resides in the cgroup.
In the YARN-based cgroup memory control optimization method of the present invention,
the groups controlled by the cgroup in step S2 adopt a hierarchical structure.
In the YARN-based cgroup memory control optimization method of the present invention,
in step S3, the Container Monitor monitors oom events of the parent directory of the container memory cgroup directory by using an interface of the linux kernel, so as to perform unified oom kill processing.
In the YARN-based cgroup memory control optimization method of the present invention,
the method by which the Container Monitor monitors oom events of the parent directory of the container memory cgroup directory using an interface of the linux kernel comprises the following steps:
the Container Monitor process monitors oom events of the parent directory of the container memory cgroup directory using the memory cgroup api, and when an oom event is triggered, compares containers and selects the one using the most memory resources to kill;
in step S4, the hardlimit is kept consistent with that of the parent directory.
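In cgroup v1, the kernel interface in question is the eventfd-based notification registered through cgroup.event_control. A minimal sketch of how the Container Monitor might register for and wait on oom events of the parent directory follows; the path is an assumption, and os.eventfd requires Python 3.10 or later.

```python
import os

PARENT = "/sys/fs/cgroup/memory/hadoop-yarn"  # hypothetical parent directory

def handle_oom():
    # Placeholder: the comparison-and-kill logic is sketched in the
    # embodiment further below.
    pass

# Create an eventfd for the kernel to signal, and open memory.oom_control.
efd = os.eventfd(0)  # Python 3.10+
oom_fd = os.open(os.path.join(PARENT, "memory.oom_control"), os.O_RDONLY)

# Writing "<eventfd> <oom_control fd>" into cgroup.event_control asks the
# kernel to signal efd whenever an oom event occurs in this hierarchy.
with open(os.path.join(PARENT, "cgroup.event_control"), "w") as f:
    f.write(f"{efd} {oom_fd}")

while True:
    os.read(efd, 8)  # blocks until the kernel reports an oom event
    handle_oom()
```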
In the YARN-based cgroup memory control optimization method of the present invention,
when the container is started, the limit_in_bytes of the container memory cgroup directory is set to the same value as the limit_in_bytes of the parent directory, and soft_limit_in_bytes is set to the container's estimated memory resource usage as set by the user.
As shown in Fig. 1, the present invention further provides a YARN-based cgroup memory control optimization system, which comprises the following units:
a memory setting unit, configured to set, when each NodeManager (NM) starts, a node-level memory hard limit (hardlimit) on the parent directory of the control group (cgroup) memory control directory in which containers reside;
a monitoring start unit, configured to enable event_control to turn on monitoring of out-of-memory (oom) events, and to enable use_hierarchy;
a kill opening unit, configured to enable oom_kill_disable to turn off the kernel's oom kill function, and to hand oom kill processing to a separate Container Monitor thread started by the NM;
and a directory modification unit, configured to convert the hard limit (hardlimit) set when the user submits an application (app) into the soft limit (softlimit) of the directory in which the container resides in the cgroup.
In the YARN-based cgroup memory control optimization system of the present invention,
the groups controlled by the cgroup in the monitoring start unit adopt a hierarchical structure.
In the YARN-based cgroup memory control optimization system of the present invention,
the Container Monitor in the kill opening unit monitors oom events of the parent directory of the container memory cgroup directory by using an interface of the linux kernel, so as to perform unified oom kill processing.
In the YARN-based cgroup memory control optimization system of the present invention,
the method by which the Container Monitor monitors oom events of the parent directory of the container memory cgroup directory using an interface of the linux kernel comprises the following steps:
the Container Monitor process monitors oom events of the parent directory of the container memory cgroup directory using the memory cgroup api, and when an oom event is triggered, compares containers and selects the one using the most memory resources to kill;
in the directory modification unit, the hardlimit is kept consistent with that of the parent directory.
In the YARN-based cgroup memory control optimization system of the present invention,
when the container is started, the limit_in_bytes of the container memory cgroup directory is set to the same value as the limit_in_bytes of the parent directory, and soft_limit_in_bytes is set to the container's estimated memory resource usage as set by the user.
Compared with the prior art, the YARN-based cgroup memory control optimization method and system provided by the invention have the following beneficial effects:
when each NodeManager (NM) starts, a node-level memory hard limit (hardlimit), generally a large value, is set on the parent directory of the cgroup memory control directory in which containers reside. At the same time, event_control is enabled, turning on oom event monitoring; use_hierarchy is enabled, so that the groups controlled by the cgroup can adopt a hierarchical structure; and oom_kill_disable is enabled, turning off the kernel's oom kill function and handing it to a separate thread started by the NM, the Container Monitor, which monitors oom events of the parent directory of the container memory cgroup directory through an interface of the linux kernel and performs oom kill processing for containers. The memory hardlimit set when the user submits is then converted into the container's softlimit, while the container's hardlimit in the cgroup directory is kept consistent with that of the parent directory. In this way, memory is controlled uniformly across the entire computing node, the machine's memory use efficiency is improved, and containers are no longer killed merely because the user's estimate of the memory used by the app's resources was inaccurate.
When the NM starts, limit_in_bytes is modified to a larger memory value related to the computing node; event_control (the oom event monitoring function) is opened at the same time; use_hierarchy is set, so that the groups controlled by the cgroup can adopt a hierarchical structure; oom_kill_disable is set, closing the kernel's oom kill function; and a Container Monitor process is started. This thread uses the memory cgroup api to monitor oom events of the directory and, when an oom event is found to have triggered, compares containers and selects the one using the most memory resources to kill, thereby preventing the overall memory usage from being exceeded.
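A sketch of that comparison step: on an oom event, the monitor might scan each container's child cgroup under the parent directory and kill the container currently using the most memory. The directory layout and helper names are assumptions for illustration.

```python
import os
import signal

PARENT = "/sys/fs/cgroup/memory/hadoop-yarn"  # hypothetical parent directory

def pick_victim():
    """Return the child cgroup (container) with the highest memory usage."""
    victim, victim_usage = None, -1
    for entry in os.listdir(PARENT):
        usage_file = os.path.join(PARENT, entry, "memory.usage_in_bytes")
        if not os.path.isfile(usage_file):
            continue  # skip control files; only container directories count
        with open(usage_file) as f:
            usage = int(f.read())
        if usage > victim_usage:
            victim, victim_usage = entry, usage
    return victim

def kill_container(container_id):
    """SIGKILL every task recorded in the victim container's cgroup."""
    with open(os.path.join(PARENT, container_id, "cgroup.procs")) as f:
        for line in f:
            if line.strip():
                os.kill(int(line), signal.SIGKILL)
```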
When a container is started, its limit_in_bytes is set to the same value as the limit_in_bytes of the parent directory (i.e., the directory configured above), and its soft_limit_in_bytes is the container's estimated memory resource usage as set by the user.
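A sketch of these container-start settings, under the same assumed directory layout: the container's hard limit mirrors the parent's, while its soft limit records the user's estimate.

```python
import os

PARENT = "/sys/fs/cgroup/memory/hadoop-yarn"  # hypothetical parent directory

def setup_container_cgroup(container_id, estimated_bytes):
    cdir = os.path.join(PARENT, container_id)
    os.makedirs(cdir, exist_ok=True)
    # Hard limit: same value as the parent's limit_in_bytes, so a single
    # container may use any memory still free under the node-level limit.
    with open(os.path.join(PARENT, "memory.limit_in_bytes")) as f:
        parent_limit = f.read().strip()
    with open(os.path.join(cdir, "memory.limit_in_bytes"), "w") as f:
        f.write(parent_limit)
    # Soft limit: the memory usage the user estimated at app submission.
    with open(os.path.join(cdir, "memory.soft_limit_in_bytes"), "w") as f:
        f.write(str(estimated_bytes))
```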
It is understood that those skilled in the art may make various other changes and modifications based on the technical idea of the present invention, and all such changes and modifications shall fall within the scope of protection of the claims of the present invention.

Claims (8)

1. A YARN-based cgroup memory control optimization method, characterized by comprising the following steps:
S1, when each NodeManager (NM) starts, setting a node-level memory hard limit (hardlimit) on the parent directory of the control group (cgroup) memory control directory in which containers reside;
S2, enabling event_control to turn on monitoring of out-of-memory (oom) events, and enabling use_hierarchy;
S3, enabling oom_kill_disable to turn off the kernel's oom kill function, and handing oom kill processing to a separate Container Monitor thread started by the NM;
S4, converting the hard limit (hardlimit) set when the user submits an application (app) into the soft limit (softlimit) of the directory in which the container resides in the cgroup;
when the container is started, the limit_in_bytes of the container memory cgroup directory is set to the same value as the limit_in_bytes of the parent directory, and soft_limit_in_bytes is the container's estimated memory resource usage as set by the user.
2. The YARN-based cgroup memory control optimization method of claim 1, wherein the groups controlled by the cgroup in step S2 adopt a hierarchical structure.
3. The YARN-based cgroup memory control optimization method of claim 1, wherein
in step S3, the Container Monitor monitors oom events of the parent directory of the container memory cgroup directory by using an interface of the linux kernel, so as to perform unified oom kill processing.
4. The YARN-based cgroup memory control optimization method of claim 3, wherein
the method by which the Container Monitor monitors oom events of the parent directory of the container memory cgroup directory using an interface of the linux kernel comprises the following steps:
the Container Monitor process monitors oom events of the parent directory of the container memory cgroup directory using the memory cgroup api, and when an oom event is triggered, compares containers and selects the one using the most memory resources to kill;
in step S4, the hardlimit is kept consistent with that of the parent directory.
5. A YARN-based cgroup memory control optimization system, characterized in that it comprises the following units:
a memory setting unit, configured to set, when each NodeManager (NM) starts, a node-level memory hard limit (hardlimit) on the parent directory of the control group (cgroup) memory control directory in which containers reside;
a monitoring start unit, configured to enable event_control to turn on monitoring of out-of-memory (oom) events, and to enable use_hierarchy;
a kill opening unit, configured to enable oom_kill_disable to turn off the kernel's oom kill function, and to hand oom kill processing to a separate Container Monitor thread started by the NM;
a directory modification unit, configured to convert the hard limit (hardlimit) set when the user submits an application (app) into the soft limit (softlimit) of the directory in which the container resides in the cgroup;
when the container is started, the limit_in_bytes of the container memory cgroup directory is set to the same value as the limit_in_bytes of the parent directory, and soft_limit_in_bytes is the container's estimated memory resource usage as set by the user.
6. The YARN-based cgroup memory control optimization system of claim 5, wherein
the groups controlled by the cgroup in the monitoring start unit adopt a hierarchical structure.
7. The YARN-based cgroup memory control optimization system of claim 5, wherein
the Container Monitor in the kill opening unit monitors oom events of the parent directory of the container memory cgroup directory by using an interface of the linux kernel, so as to perform unified oom kill processing.
8. The YARN-based cgroup memory control optimization system of claim 7, wherein
the method by which the Container Monitor monitors oom events of the parent directory of the container memory cgroup directory using an interface of the linux kernel comprises the following steps:
the Container Monitor process monitors oom events of the parent directory of the container memory cgroup directory using the memory cgroup api, and when an oom event is triggered, compares containers and selects the one using the most memory resources to kill;
in the directory modification unit, the hardlimit is kept consistent with that of the parent directory.
CN201711493850.XA 2017-12-31 2017-12-31 YARN-based cgroup memory control optimization method and system Active CN108121605B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711493850.XA CN108121605B (en) 2017-12-31 2017-12-31 YARN-based cgroup memory control optimization method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711493850.XA CN108121605B (en) 2017-12-31 2017-12-31 YARN-based cgroup memory control optimization method and system

Publications (2)

Publication Number Publication Date
CN108121605A CN108121605A (en) 2018-06-05
CN108121605B (en) 2021-11-16

Family

ID=62232704

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711493850.XA Active CN108121605B (en) 2017-12-31 2017-12-31 YARN-based cgroup memory control optimization method and system

Country Status (1)

Country Link
CN (1) CN108121605B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116185642B (en) * 2023-04-24 2023-07-18 安徽海马云科技股份有限公司 Container memory optimization method and device, storage medium and electronic device
CN116302849B (en) * 2023-05-20 2023-08-11 北京长亭科技有限公司 Linux socket closing event monitoring method and device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103150259B * 2013-03-22 2016-03-30 Huawei Technologies Co., Ltd. Memory reclamation method and device
US9602423B2 * 2013-06-28 2017-03-21 Pepperdata, Inc. Systems, methods, and devices for dynamic resource monitoring and allocation in a cluster system
CN103593242B * 2013-10-15 2017-04-05 Beihang University Resource sharing control system based on the Yarn framework
CN103870314B * 2014-03-06 2017-01-25 Institute of Information Engineering, Chinese Academy of Sciences Method and system for running different types of virtual machines simultaneously on a single node
US9686141B2 * 2014-09-10 2017-06-20 Ebay Inc. Systems and methods for resource sharing between two resource allocation systems
CN106020976B * 2016-05-13 2018-06-01 Beijing Baidu Netcom Science and Technology Co., Ltd. Method and apparatus for offloading the out-of-memory handling flow to user space

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Soft limit and Hard limit in Linux; weixin_34111790; https://blog.csdn.net/weixin_34111790/article/details/92144330; 20140427; pp. 1-6 *
Setting the stack size hard limit and soft limit on Ubuntu; mosaic; https://blog.csdn.net/mosaic/article/details/6172654; 20110204; pp. 1-6 *

Also Published As

Publication number Publication date
CN108121605A (en) 2018-06-05

Similar Documents

Publication Publication Date Title
US10049133B2 (en) Query governor across queries
US9977727B2 (en) Methods and systems for internally debugging code in an on-demand service environment
Lin et al. Moon: Mapreduce on opportunistic environments
US9106391B2 (en) Elastic auto-parallelization for stream processing applications based on a measured throughput and congestion
US9578091B2 (en) Seamless cluster servicing
US10769026B2 (en) Dynamically pausing large backups
US11614967B2 (en) Distributed scheduling in a virtual machine environment
US20140282540A1 (en) Performant host selection for virtualization centers
US10623281B1 (en) Dynamically scheduled checkpoints in distributed data streaming system
Fan et al. Agent-based service migration framework in hybrid cloud
Shukla et al. Toward reliable and rapid elasticity for streaming dataflows on clouds
Petrov et al. Adaptive performance model for dynamic scaling Apache Spark Streaming
CN108121605B (en) YARN-based cgroup memory control optimization method and system
US20220038355A1 (en) Intelligent serverless function scaling
CN105302641A (en) Node scheduling method and apparatus in virtual cluster
Narayanan et al. Analysis and exploitation of dynamic pricing in the public cloud for ml training
Liu et al. Optimizing shuffle in wide-area data analytics
CN111258746A (en) Resource allocation method and service equipment
CN109960579B (en) Method and device for adjusting service container
Stavrinides et al. The impact of checkpointing interval selection on the scheduling performance of real‐time fine‐grained parallel applications in SaaS clouds under various failure probabilities
CN105260244A (en) Task scheduling method and device for distributed system
WO2018206793A1 (en) Multicore processing system
Zhang et al. N-storm: Efficient thread-level task migration in apache storm
Ibrahim et al. Improving mapreduce performance with progress and feedback based speculative execution
CN112698914B (en) Workflow task container generation system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220210

Address after: 430000, No. 88, Postal Academy Road, Hongshan District, Wuhan, Hubei

Patentee after: WUHAN FIBERHOME INFORMATION INTEGRATION TECHNOLOGIES Co.,Ltd.

Address before: 430000, First Floor, Optical Fiber Chemical Industry Building, No. 4, Guanshan Second Road, Donghu Development Zone, Wuhan, Hubei Province

Patentee before: WUHAN FIBERHOME SOFTWARE TECHNOLOGY CO.,LTD.