CN110874272A - Resource allocation method and device, computer readable storage medium and electronic device


Info

Publication number
CN110874272A
Authority
CN
China
Prior art keywords
task
executed
resource
memory usage
target
Prior art date
Legal status
Pending
Application number
CN202010044959.0A
Other languages
Chinese (zh)
Inventor
费伟
Current Assignee
Beijing Yiyi Medical Cloud Technology Co Ltd
Original Assignee
Beijing Yiyi Medical Cloud Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Yiyi Medical Cloud Technology Co Ltd filed Critical Beijing Yiyi Medical Cloud Technology Co Ltd
Priority to CN202010044959.0A
Publication of CN110874272A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention relates to a resource allocation method and apparatus, a computer readable storage medium and an electronic device, in the technical field of computers. The method comprises the following steps: receiving a resource configuration request comprising a task to be executed, and, in response to the resource configuration request, acquiring from a configuration center a target configuration parameter corresponding to the task name of the task to be executed, where the task name comprises a registration identifier and the registration identifier corresponds one-to-one to the target configuration parameter; analyzing the target configuration parameter to obtain the target physical memory usage and target virtual memory usage required by the task to be executed during execution; and configuring corresponding memory resources for the task to be executed according to the target physical memory usage and the target virtual memory usage. The invention improves the execution efficiency of the task to be executed.

Description

Resource allocation method and device, computer readable storage medium and electronic device
Technical Field
The embodiment of the invention relates to the technical field of computers, in particular to a resource allocation method, a resource allocation device, a computer readable storage medium and electronic equipment.
Background
Distributed computing, a research hotspot in the information field, mainly connects a large number of resources through computer networks or the Internet to provide services such as storage and computing to different users. In existing distributed resource scheduling systems, users apply in advance for resources such as memory and CPU according to their own requirements, and the system allocates resources on the managed computing nodes to the users' computing tasks according to those requirements.
Furthermore, when a user submits a big data computing task, the required amounts of resources such as memory and CPU must be preset; after the distributed resource scheduling system has allocated them, the user computes with the resources on the specific computing node. Throughout the execution of the user's computing task, the computing node of the distributed resource scheduling system monitors the user's resource usage in real time, and if the usage exceeds the pre-applied quota, the user's task process is terminated directly.
This solution therefore has the following drawbacks: first, when the computing task is a timed task, the user does not know how many resources are needed when submitting the task, so if the application is too large, resources are greatly wasted; second, if the pre-application is too small, the computing task risks being terminated during execution, so it cannot be executed effectively and execution efficiency is reduced; third, if the resources applied for by the task have to be tuned by repeated trial runs, the execution time of the task to be executed is greatly increased.
Therefore, it is desirable to provide a new resource allocation method and apparatus.
It is to be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present invention and therefore may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
An object of the present invention is to provide a resource allocation method, a resource allocation apparatus, a computer-readable storage medium, and an electronic device, which overcome, at least to some extent, the problem of low execution efficiency due to the limitations and disadvantages of the related art.
According to an aspect of the present disclosure, there is provided a resource configuration method, including:
receiving a resource configuration request comprising a task to be executed, and responding to the resource configuration request to acquire a target configuration parameter corresponding to the task name of the task to be executed from a configuration center; the task name comprises a registration identifier, and the registration identifier corresponds one-to-one to the target configuration parameter;
analyzing the target configuration parameters to obtain target physical memory usage and target virtual memory usage required by the task to be executed in the execution process;
and configuring corresponding memory resources for the task to be executed according to the target physical memory usage and the target virtual memory usage.
In an exemplary embodiment of the present disclosure, the resource configuration method further includes:
acquiring a task running log of the task to be executed from a task running database, and formatting the task running log to obtain a first processing result comprising a registration identifier, a timestamp, actual physical memory usage and actual virtual memory usage of the task to be executed;
acquiring a resource use log of the task to be executed from a resource use database, and analyzing the resource use log to obtain a second processing result comprising a registration identifier, a task name, a total task resource use amount and an execution time of the task to be executed;
associating the first processing result and the second processing result according to the registration identifier, and obtaining the current configuration parameters of the task to be executed according to the associated first processing result and second processing result;
and updating the target configuration parameters by using the current configuration parameters.
In an exemplary embodiment of the present disclosure, the resource configuration method further includes:
receiving an identification acquisition request for acquiring the registration identification of the task to be executed, and responding to the identification acquisition request to generate the registration identification of the task to be executed;
and sending the registration identification to a sender of the identification acquisition request, so that the sender generates a task name according to the registration identification and generates a resource configuration request of the task to be executed according to the task name.
In an exemplary embodiment of the present disclosure, the resource configuration method further includes:
and acquiring a task running log of the task to be executed from a computing node according to a preset log acquisition program, and storing the task running log to a path corresponding to the task to be executed in the task running database.
In an exemplary embodiment of the present disclosure, storing the task execution log to a path corresponding to the task to be executed in the task execution database includes:
partitioning the path corresponding to the task to be executed according to a preset time period to obtain a plurality of sub-storage intervals;
and storing the task running log into a sub storage interval corresponding to the time period of the task running log according to the time period of the task running log.
In an exemplary embodiment of the present disclosure, obtaining the resource usage log of the task to be executed from the resource usage database includes:
and acquiring the resource use log of the task to be executed through a task interface corresponding to the task to be executed.
In an exemplary embodiment of the present disclosure, obtaining the current configuration parameter of the task to be executed according to the associated first processing result and second processing result includes:
obtaining the maximum physical memory usage amount and the maximum virtual memory usage amount required by the task to be executed in the execution process according to the associated first processing result and second processing result;
and expanding the maximum physical memory usage and the maximum virtual memory usage by a preset multiple, and generating the current configuration parameters according to the expanded maximum physical memory usage and maximum virtual memory usage.
According to an aspect of the present disclosure, there is provided a resource configuration apparatus, including:
the parameter acquisition module is used for receiving a resource configuration request comprising a task to be executed and responding to the resource configuration request to acquire a target configuration parameter corresponding to the task name of the task to be executed from a configuration center; the task name comprises a registration identifier, and the registration identifier corresponds one-to-one to the target configuration parameter;
the parameter analysis module is used for analyzing the target configuration parameters to obtain target physical memory usage and target virtual memory usage required by the task to be executed in the execution process;
and the resource configuration module is used for configuring corresponding memory resources for the task to be executed according to the target physical memory usage and the target virtual memory usage.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a resource configuration method as described in any one of the above.
According to an aspect of the present disclosure, there is provided an electronic device including:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform any of the resource configuration methods described above via execution of the executable instructions.
The embodiment of the invention provides a resource configuration method. A target configuration parameter corresponding to the task name of a task to be executed is obtained from a configuration center; the target configuration parameter is then analyzed to obtain the target physical memory usage and target virtual memory usage required by the task to be executed during execution; finally, corresponding memory resources are configured for the task to be executed according to the target physical memory usage and the target virtual memory usage. First, this solves the prior-art problem that, when the computing task is a timed task, the user does not know how many resources are needed when submitting the task and greatly wastes resources if the application is too large. Second, it solves the prior-art problem that, if the pre-application is too small, the computing task risks being terminated during execution and cannot be executed effectively, which reduces execution efficiency; the execution efficiency of the task to be executed is thereby improved. Third, it solves the prior-art problem that tuning the pre-applied resources greatly increases the execution time of the task to be executed, thereby shortening the execution period of the task to be executed.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 schematically shows a flow chart of a resource configuration method according to an exemplary embodiment of the present invention.
Fig. 2 is a diagram schematically showing an example of the structure of a universal resource management system according to an example embodiment of the present invention.
Fig. 3 schematically shows a flow chart of another resource configuration method according to an exemplary embodiment of the present invention.
Fig. 4 schematically shows a flow chart of another resource configuration method according to an exemplary embodiment of the present invention.
Fig. 5 schematically shows a flow chart of another resource configuration method according to an exemplary embodiment of the present invention.
Fig. 6 schematically shows a block diagram of a resource configuration apparatus according to an exemplary embodiment of the present invention.
Fig. 7 schematically illustrates an electronic device for implementing the resource configuration method according to an exemplary embodiment of the present invention.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the invention.
Furthermore, the drawings are merely schematic illustrations of the invention and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
Internet technology has developed rapidly, especially since 2000, when it began to spread at high speed: portal websites, shopping websites, video websites, live-streaming websites, short-video websites and the like have sprung up like bamboo shoots after rain, the number of users keeps growing, all kinds of data increase exponentially, and item orders, user access logs, video data, chat data and the like have reached PB and even EB scale. This data holds great potential value, and how to analyze and mine it has become a hotspot. Facing different computing requirements, resources need to be coordinated and allocated on demand across industries, departments and individuals so that the value of computing resources can be exploited to the greatest extent; after all, computing resources are limited.
Based on this, the present exemplary embodiment first provides a resource configuration method, which may be run on a server, a server cluster, a cloud server, or the like; of course, those skilled in the art may also operate the method of the present invention on other platforms as needed, and this is not particularly limited in this exemplary embodiment. Referring to fig. 1, the resource allocation method may include the following steps:
step S110, receiving a resource configuration request including a task to be executed, responding to the resource configuration request, and acquiring a target configuration parameter corresponding to the task name of the task to be executed from a configuration center; the task name comprises a registration identifier, and the registration identifier corresponds to the target configuration parameter one to one.
And step S120, analyzing the target configuration parameters to obtain the target physical memory usage amount and the target virtual memory usage amount required by the task to be executed in the execution process.
Step S130, configuring corresponding memory resources for the task to be executed according to the target physical memory usage amount and the target virtual memory usage amount.
In the resource configuration method, a target configuration parameter corresponding to the task name of the task to be executed is obtained from a configuration center; the target configuration parameter is then analyzed to obtain the target physical memory usage and target virtual memory usage required by the task to be executed during execution; finally, corresponding memory resources are configured for the task to be executed according to the target physical memory usage and the target virtual memory usage. First, this solves the prior-art problem that, when the computing task is a timed task, the user does not know how many resources are needed when submitting the task and greatly wastes resources if the application is too large. Second, it solves the prior-art problem that, if the pre-application is too small, the computing task risks being terminated during execution and cannot be executed effectively, which reduces execution efficiency; the execution efficiency of the task to be executed is thereby improved. Third, it solves the prior-art problem that tuning the pre-applied resources greatly increases the execution time of the task to be executed, thereby shortening the execution period of the task to be executed.
Hereinafter, each step involved in the resource allocation method according to the exemplary embodiment of the present invention will be explained and explained in detail with reference to the drawings.
First, terms related to exemplary embodiments of the present invention are explained and explained.
And (3) Yarn: yarn (Another Resource coordinator) is a universal Resource management system and is responsible for Resource allocation and task scheduling of Yarn cluster, and Yarn mainly comprises 3 components: RM (resource Manager), NM (Node Manager), and AM (Application Master), Yarn may integrate various computing frameworks such as spark, mapreduce, flex, hive, etc.
Wherein Spark is an open-source memory type calculation scheduling framework; mapreduce is a calculation framework based on map and reduce models, all data calculation is abstracted into map (data loading and key/value data generation), and reduce (grouped calculation according to key); flink is an open-source, streaming-compatible, batch-processing computing framework.
Specifically, referring to fig. 2, RM (Resource Manager) 201, NM (node Manager) 202, AM (Application Master) 203, and client 204 may be communicatively connected through a remote procedure call protocol. Each node in the system may include a node manager and one or more application managers. The client is used to submit jobs to the AM. A plurality of tasks to be performed are included in the job.
The RM is responsible for resource management and scheduling of the entire system: according to the resource requests of the AM, it allocates resources to each task in a job and feeds the allocation results back to the AM. The NM is responsible for resource management of its own node: after the AM has obtained a container for each task, the NM executes the task corresponding to the container and isolates the network bandwidth of each task, and it regularly reports the node's resource usage and the running state of each container on the node to the RM. To sum up, each job contains a plurality of tasks; the AM applies for a Container for each task, the RM is responsible for assigning a Container to each task, the NM is responsible for running and managing Containers, and each task is run by one Container.
A configuration center: parameter settings may be made on a project basis, while classification parameter settings may be made on a scene basis within the project.
Resource monitoring: all the computing tasks are executed on the Yarn, and the resource scheduling system can monitor the tasks, so that unused resources are prevented from exceeding used resources.
Collecting logs: the logs of the Yarn computing nodes can be collected in real time and collected to a data warehouse for structured analysis.
A data warehouse: all unstructured and structured data can be subjected to metadata establishment, cleaning and table establishment, and are opened for user query.
Compute node (node manager): a distributed computing resource scheduling system is provided with nodes which are specially used for data computing.
Resource scheduling node (resource manager): the distributed computing resource scheduling system is specially used for nodes for resource application and scheduling.
Data structuring: the ordinary text data generates various structured fields according to a certain rule.
Structuring data: there is a strict metadata definition, data in a fixed field format.
In step S110, a resource configuration request including a task to be executed is received, and a target configuration parameter corresponding to a task name of the task to be executed is acquired from a configuration center in response to the resource configuration request; the task name comprises a registration identifier, and the registration identifier corresponds to the target configuration parameter one to one.
In this exemplary embodiment, as shown in fig. 2, when the application manager 203 in the universal resource management system receives a resource configuration request, sent by the client 204, that includes a task to be executed, the target configuration parameter corresponding to the task name of the task to be executed may be obtained from the configuration center in response to that request. The task name may comprise a user-defined name and a registration identifier (registration id) issued by the configuration center; the registration identifier corresponds one-to-one to the target configuration parameter required by the task to be executed and is unique, so the task name is also unique. In this way, the problem of configuration parameters failing to correspond to tasks to be executed, which would make execution impossible, is avoided.
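By way of illustration only, the lookup described in step S110 might look as follows, assuming the configuration center exposes an HTTP interface keyed by the registration identifier and that the registration identifier is appended to the user-defined name with an underscore; the address, the naming rule and the payload layout are assumptions for this sketch, not part of the disclosed system.

```python
# Illustrative sketch of step S110; the configuration-center URL and the
# naming rule (user-defined name + "_" + registration id) are assumptions.
import json
import urllib.request

CONFIG_CENTER_URL = "http://config-center.example.com/api/params"  # assumed address

def extract_registration_id(task_name: str) -> str:
    # The registration identifier is carried at the end of the task name.
    return task_name.rsplit("_", 1)[-1]

def fetch_target_config(task_name: str) -> dict:
    # One registration identifier corresponds to exactly one set of
    # target configuration parameters in the configuration center.
    reg_id = extract_registration_id(task_name)
    with urllib.request.urlopen(f"{CONFIG_CENTER_URL}/{reg_id}") as resp:
        return json.loads(resp.read())
```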
In step S120, the target configuration parameters are analyzed to obtain a target physical memory usage amount and a target virtual memory usage amount required by the task to be executed in the execution process.
In step S130, a corresponding memory resource is configured for the task to be executed according to the target physical memory usage amount and the target virtual memory usage amount.
Step S120 and step S130 are explained below. After the target configuration parameter is obtained, it may be analyzed to obtain the target physical memory usage and target virtual memory usage required by the task to be executed during execution, and corresponding memory resources are then allocated to the task to be executed according to those two amounts. In this way, resource waste is avoided, as is the failure to execute caused by forced interruption.
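As a rough illustration of steps S120 and S130, the following sketch parses a target configuration parameter payload into the two memory amounts and turns them into a per-container memory setting; the field names, the MB unit and the returned mapping are assumed for this example, since the patent does not fix a concrete format.

```python
# Sketch of steps S120-S130; field names and MB units are assumed.
def parse_target_config(params: dict) -> tuple[int, int]:
    """Return (target physical memory, target virtual memory) in MB."""
    phys_mb = int(params["target_physical_memory_mb"])  # assumed field name
    virt_mb = int(params["target_virtual_memory_mb"])   # assumed field name
    return phys_mb, virt_mb

def configure_memory_resources(params: dict) -> dict:
    # The returned mapping stands in for whatever per-container resource
    # specification the computing framework in use actually expects.
    phys_mb, virt_mb = parse_target_config(params)
    return {"physical-memory-mb": phys_mb, "virtual-memory-mb": virt_mb}
```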
It should be noted that the task to be executed is a timed task, i.e., a task that needs to be executed once a day or once every preset time interval, for example computing over collected WeChat data or data from a certain shopping platform. The target physical memory usage and target virtual memory usage are determined from the usage actually required by the last run (for example, the previous day), so they neither waste resources excessively nor leave the task to be executed short of resources.
Fig. 3 schematically illustrates another resource configuration method according to an exemplary embodiment of the present invention. Referring to fig. 3, the resource allocation method may include steps S310 to S340, which will be described in detail below.
In step S310, a task running log of the task to be executed is obtained from a task running database, and the task running log is formatted to obtain a first processing result including a registration identifier, a timestamp, an actual physical memory usage amount, and an actual virtual memory usage amount of the task to be executed.
In this exemplary embodiment, in order to obtain the task execution log of the task to be executed from the task execution database, the task execution log needs to be stored first. Therefore, the resource allocation method may further include: and acquiring a task running log of the task to be executed from a computing node according to a preset log acquisition program, and storing the task running log to a path corresponding to the task to be executed in the task running database. Storing the task running log to a path corresponding to the task to be executed in the task running database may include: firstly, partitioning a path corresponding to the task to be executed according to a preset time period to obtain a plurality of sub-storage intervals; and secondly, storing the task running log into a sub storage interval corresponding to the time period of the task running log according to the time period of the task running log.
Specifically, the Node Manager (NM) of YARN monitors task resource usage and outputs it to the logs of the computing nodes in real time, so the resource usage of all tasks over their full life cycle can be summarized by collecting the computing node logs. In addition, every computing node has its own log output, in particular formatted text describing the resource usage of computing tasks, such as the subtask ID, time and physical memory usage; the relevant log data is obtained in real time by a log collection program, the corresponding fields are structured, and the field data is stored in a unified way in the subtask running log data warehouse.
Therefore, a log collection program can be deployed on all distributed computing nodes; a common choice is a collector such as Flume. The collected paths corresponding to the tasks to be executed are then partitioned according to preset time periods to obtain a plurality of sub storage intervals, and each task running log is stored into the sub storage interval corresponding to its time period and gathered into the path of the corresponding subtask data warehouse. For example, a table app_container_log can be constructed and partitioned by day, e.g. dt=2019-10-11; the corresponding HDFS distributed file path may then be:
/user/hive/warehouse/app_container_log/dt=2019-10-11, and a new partition can be generated every day.
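A small sketch of this day-partitioned layout is given below, reusing the example table and warehouse path quoted above; the routing helper and its argument type are illustrative only.

```python
# Sketch of the daily partitioning described above; the warehouse root follows
# the example HDFS path in the text.
from datetime import date

WAREHOUSE_ROOT = "/user/hive/warehouse/app_container_log"

def partition_path(day: date) -> str:
    # e.g. /user/hive/warehouse/app_container_log/dt=2019-10-11
    return f"{WAREHOUSE_ROOT}/dt={day.isoformat()}"

def route_log_record(record_day: date) -> str:
    # Each task running log record is stored in the sub storage interval
    # (daily partition) that its time period falls into.
    return partition_path(record_day)
```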
Further, after the task operation log is stored, the task operation log of the task to be executed can be obtained from a task operation database, and the task operation log is formatted to obtain a first processing result including a registration identifier, a timestamp, an actual physical memory usage amount and an actual virtual memory usage amount of the task to be executed, so that a second processing result obtained according to the resource usage log can be associated subsequently, a relatively comprehensive current configuration parameter corresponding to the task to be executed can be obtained, and the target configuration parameter is updated through the current configuration parameter, and therefore the accuracy of the target configuration parameter is improved.
In step S320, a resource usage log of the task to be executed is obtained from a resource usage database, and the resource usage log is analyzed to obtain a second processing result including a registration identifier, a task name, a total amount of resource usage of the task and an execution time of the task to be executed.
In this example embodiment, first, a resource usage log of the task to be executed is obtained through a task interface corresponding to the task to be executed; secondly, the resource usage log is analyzed to obtain a second processing result which comprises the registration identification, the task name, the total usage amount of the task resource and the execution time of the task to be executed.
Specifically, the resource usage log of YARN can be obtained through the task interface corresponding to the task to be executed. The specific resource scheduling service access address may be, for example:
/ws/v1/cluster/apps?finishedTimeBegin=%s&finishedTimeEnd=%s;
therefore, a task scheduling log table app _ log can be constructed in advance, meanwhile, partitioning can be performed according to days, dt =2019-10-11 and the like, and fields such as task id and task name are structured and written into the day partition of the table. And, performing structured analysis on the resource usage log of the task, namely acquiring JSON fields such as registration identification, task name, total usage amount of task resources, execution time and the like, and structuring the JSON fields into a corresponding task log data warehouse (resource usage database).
In step S330, the first processing result and the second processing result are associated according to the registration identifier, and the current configuration parameter of the task to be executed is obtained according to the associated first processing result and second processing result.
First, the first processing result and the second processing result are associated according to the registration identifier; second, the current configuration parameters of the task to be executed are obtained from the associated first and second processing results. This specifically comprises: obtaining the maximum physical memory usage and maximum virtual memory usage required by the task to be executed during execution from the associated first and second processing results; expanding the maximum physical memory usage and the maximum virtual memory usage by a preset multiple; and generating the current configuration parameters from the expanded values.
Specifically, information such as the task id, the task name, the maximum physical memory usage during task execution and the maximum CPU usage can be analyzed and mined from the subtask resource usage warehouse and the task running log warehouse. Once the real resource usage has been analyzed, the ID that the current task registered in the configuration center can be obtained via the uniqueness of the task name (app_name), whose format follows a rule planned in advance. A reasonable resource application amount with 20% redundancy is then derived from the analyzed data and the corresponding parameters in the configuration center are refreshed, so that the next execution of the task applies for resources according to the latest parameters, ensuring efficient task execution and reduced resource waste. In other words, all task execution logs can be analyzed to obtain the maximum memory usage, and a reasonable buffer is automatically written to the configuration center so that a reasonable resource configuration is obtained the next time the task is executed.
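A minimal sketch of this association and buffering step is given below, assuming both processing results are available as lists of dictionaries whose keys (registration_id, actual_physical_memory_mb, actual_virtual_memory_mb) are illustrative, and using the 20% redundancy (a 1.2x factor) mentioned in the text.

```python
# Sketch of step S330: associate the two processing results by registration id,
# take the maxima observed during execution, and expand them by a preset
# multiple to form the current configuration parameters.
BUFFER_FACTOR = 1.2  # "reasonable 20% redundant resource application amount"

def current_config(first_results: list[dict], second_results: list[dict],
                   reg_id: str) -> dict:
    runs = [r for r in first_results if r["registration_id"] == reg_id]
    jobs = [r for r in second_results if r["registration_id"] == reg_id]
    if not runs or not jobs:
        raise ValueError(f"no associated records for registration id {reg_id}")
    # Memory maxima come from the task running log (first processing result).
    max_phys = max(r["actual_physical_memory_mb"] for r in runs)
    max_virt = max(r["actual_virtual_memory_mb"] for r in runs)
    return {
        "target_physical_memory_mb": int(max_phys * BUFFER_FACTOR),
        "target_virtual_memory_mb": int(max_virt * BUFFER_FACTOR),
    }
```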
In step S340, the target configuration parameters are updated by using the current configuration parameters.
In this example embodiment, after obtaining the current configuration parameters, the target configuration parameters in the configuration center may be updated by using the current configuration parameters. By the method, the target configuration parameters of the task to be executed can be continuously adjusted according to the actual situation so as to adapt to the target physical memory usage and the target virtual memory usage actually required by the task to be executed, so that the resource waste is reduced, and the task to be executed can be timely executed.
Fig. 4 schematically illustrates another resource configuration method according to an exemplary embodiment of the present invention. Referring to fig. 4, the resource allocation method may include step S410 and step S420, which will be described in detail below.
In step S410, an identifier obtaining request for obtaining the registration identifier of the task to be executed is received, and the registration identifier of the task to be executed is generated in response to the identifier obtaining request.
In step S420, the registration identifier is sent to the sender of the identifier obtaining request, so that the sender generates a task name according to the registration identifier, and generates a resource configuration request of the task to be executed according to the task name.
Step S410 and step S420 are explained below. First, every task running on YARN, whether a Spark, MapReduce or Flink task, can register a unique task key in the configuration center and set its own task running parameters. Meanwhile, according to the agreed rule, when the task is submitted to YARN its task name carries, at the end, the registration identifier that the task registered in the configuration center, so that the task log can be associated with the corresponding registration identifier when it is analyzed. Furthermore, all batch computing tasks can be registered in the configuration center, and resource usage can then be configured according to each task's parameter configuration rule. The sender of the above identifier acquisition request may be the client (Client) mentioned above. In this way, the situation in which the target configuration parameters are inconsistent with the tasks to be executed during batch computation can be avoided.
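The registration flow of steps S410 and S420 could be sketched as follows; the use of a UUID for the registration identifier and the underscore used to join the two parts of the task name are assumptions standing in for whatever unique-key scheme and naming rule the configuration center actually applies.

```python
# Sketch of steps S410-S420; uuid is only a stand-in for the configuration
# center's actual unique-key scheme.
import uuid

def generate_registration_id() -> str:
    # Issued by the configuration center in response to the identifier
    # acquisition request.
    return uuid.uuid4().hex

def build_task_name(user_defined_name: str, registration_id: str) -> str:
    # Agreed rule: the task name carries the registration id at the end, so
    # log analysis can associate every run with its configuration entry.
    return f"{user_defined_name}_{registration_id}"
```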
Hereinafter, the resource allocation method according to the exemplary embodiment of the present invention will be further explained with reference to fig. 5. Referring to fig. 5, the resource allocation method may include the steps of:
in step S510, a set of configuration center services, a set of systems that manage configurations based on unique names (keys), is installed.
In step S520, a log collection service is deployed on all distributed computing nodes; the log collection program generally adopts a collector such as Flume. The logs are then gathered into the path of the corresponding subtask data warehouse: for example, a table app_container_log is constructed and partitioned by day (dt=2019-10-11 and so on), and the corresponding HDFS distributed file path is /user/hive/warehouse/app_container_log/dt=2019-10-11, with a new partition generated each day.
In step S530, a resource scheduling log is obtained through an interface; the specific resource scheduling service access address may be /ws/v1/cluster/apps?finishedTimeBegin=%s&finishedTimeEnd=%s, as given above. A task scheduling log table app_log is constructed in advance and partitioned by day (dt=2019-10-11 and so on), and fields such as task id and task name are structured and written into the day partition of the table.
In step S540, the timed log analysis and mining program analyzes the collected task execution log and resource usage log tables so as to extract the task name (which includes the unique key registered in the configuration center), the maximum physical memory usage and the maximum CPU usage, and may then generate the resource application amount for the next execution of the task as 1.2 times the maximum. When the resource configuration request of the timed task is received again, resources can be configured according to this resource application amount.
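To complete the loop, the newly analysed parameters are written back to the configuration center so that the next run applies for resources with them; the PUT endpoint in the sketch below is a hypothetical illustration of such a write-back, not a documented interface of any particular configuration center.

```python
# Sketch of refreshing the configuration center after step S540; the endpoint
# and JSON body shape are assumptions.
import json
import urllib.request

def refresh_config(config_center_url: str, reg_id: str, new_params: dict) -> None:
    req = urllib.request.Request(
        url=f"{config_center_url}/{reg_id}",
        data=json.dumps(new_params).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    # The next execution of the timed task reads these refreshed parameters.
    urllib.request.urlopen(req)
```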
Moreover, all tasks register an id in the configuration center and configure default parameters. The task name follows the agreed rule, for example a freely defined name plus the configuration center registration id, and this combination is set as the task name used for execution. When any task is executed, its parameters are obtained through the configuration center interface, and a corresponding parsing program needs to be written in order to use the configuration center parameters.
The resource allocation method provided by the exemplary embodiment of the present invention has at least the following advantages:
First, all tasks are registered in the configuration center according to the rules, and the configuration format of the task parameters is agreed in advance for each type of task.
Second, since the company's work involves task scheduling across multiple clusters, and a computing program may run on several clusters whose data volumes differ, the need to automate these parameters is particularly clear; if every user had to configure each cluster one by one, the workload would be huge and execution efficiency extremely low, whereas with this method the workload is reduced and execution efficiency improved.
Third, whenever any cluster executes any task, the resource application parameters are obtained through the configuration center, and after execution the automatic analysis program analyzes the current resource usage and generates a new, optimal version of the resource application parameters. Previously, resource waste could exceed 60%; this method greatly reduces the amount of wasted computing resources, keeping it at roughly 20%, a saving of about 40 percentage points, while the success rate of task execution is greatly improved and manual intervention is reduced.
The present disclosure also provides a resource allocation device. Referring to fig. 6, the resource configuration apparatus may include a parameter obtaining module 610, a parameter parsing module 620, and a resource configuration module 630. Wherein:
the parameter obtaining module 610 may be configured to receive a resource configuration request including a task to be executed, and obtain, from a configuration center, a target configuration parameter corresponding to a task name of the task to be executed in response to the resource configuration request; the task name comprises a registration identifier, and the registration identifier corresponds to the target configuration parameter one to one.
The parameter analyzing module 620 may be configured to analyze the target configuration parameter to obtain a target physical memory usage amount and a target virtual memory usage amount required by the task to be executed in the execution process.
The resource configuration module 630 may be configured to configure a corresponding memory resource for the task to be executed according to the target physical memory usage amount and the target virtual memory usage amount.
In an example embodiment of the present disclosure, the resource configuration apparatus may further include:
the task running log obtaining module may be configured to obtain a task running log of the task to be executed from a task running database, and format the task running log to obtain a first processing result including a registration identifier, a timestamp, an actual physical memory usage amount, and an actual virtual memory usage amount of the task to be executed.
The resource usage log obtaining module may be configured to obtain a resource usage log of the task to be executed from a resource usage database, and analyze the resource usage log to obtain a second processing result including a registration identifier, a task name, a total amount of resource usage of the task, and an execution time of the task to be executed.
The processing result management module may be configured to associate the first processing result and the second processing result according to the registration identifier, and obtain the current configuration parameter of the task to be executed according to the associated first processing result and second processing result.
A configuration parameter update module, configured to update the target configuration parameter with the current configuration parameter.
In an example embodiment of the present disclosure, the resource configuration apparatus further includes:
the registration identifier obtaining module may be configured to receive an identifier obtaining request for obtaining the registration identifier of the task to be executed, and generate the registration identifier of the task to be executed in response to the identifier obtaining request.
The registration identifier sending module may be configured to send the registration identifier to a sender of the identifier obtaining request, so that the sender generates a task name according to the registration identifier, and generates the resource configuration request of the task to be executed according to the task name.
In an example embodiment of the present disclosure, the resource configuration apparatus further includes:
and the task running log acquisition module can be used for acquiring the task running log of the task to be executed from a computing node according to a preset log acquisition program and storing the task running log to a path corresponding to the task to be executed in the task running database.
In an example embodiment of the present disclosure, storing the task execution log in the task execution database under a path corresponding to the task to be executed includes:
partitioning the path corresponding to the task to be executed according to a preset time period to obtain a plurality of sub-storage intervals;
and storing the task running log into a sub storage interval corresponding to the time period of the task running log according to the time period of the task running log.
In an example embodiment of the present disclosure, obtaining the resource usage log of the task to be executed from the resource usage database includes:
and acquiring the resource use log of the task to be executed through a task interface corresponding to the task to be executed.
In an example embodiment of the present disclosure, obtaining the current configuration parameter of the task to be executed according to the associated first processing result and second processing result includes:
obtaining the maximum physical memory usage amount and the maximum virtual memory usage amount required by the task to be executed in the execution process according to the associated first processing result and second processing result;
and expanding the maximum physical memory usage and the maximum virtual memory usage by a preset multiple, and generating the current configuration parameters according to the expanded maximum physical memory usage and maximum virtual memory usage.
The specific details of each module in the resource allocation apparatus have been described in detail in the corresponding resource allocation method, and therefore are not described herein again.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the invention. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Moreover, although the steps of the methods of the present invention are depicted in the drawings in a particular order, this does not require or imply that the steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
In an exemplary embodiment of the present invention, there is also provided an electronic device capable of implementing the above method.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or program product. Thus, various aspects of the invention may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," a "module," or a "system."
An electronic device 700 according to this embodiment of the invention is described below with reference to fig. 7. The electronic device 700 shown in fig. 7 is only an example and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 7, electronic device 700 is embodied in the form of a general purpose computing device. The components of the electronic device 700 may include, but are not limited to: the at least one processing unit 710, the at least one memory unit 720, a bus 730 connecting different system components (including the memory unit 720 and the processing unit 710), and a display unit 740.
Wherein the storage unit stores program code that is executable by the processing unit 710 such that the processing unit 710 performs the steps according to various exemplary embodiments of the present invention as described in the above section "exemplary method" of the present specification. For example, the processing unit 710 may perform step S110 as shown in fig. 1: receiving a resource configuration request comprising a task to be executed, and responding to the resource configuration request to acquire a target configuration parameter corresponding to the task name of the task to be executed from a configuration center; the task name comprises a registration identifier, and the registration identifier corresponds one-to-one to the target configuration parameter; step S120: analyzing the target configuration parameters to obtain target physical memory usage and target virtual memory usage required by the task to be executed in the execution process; step S130: and configuring corresponding memory resources for the task to be executed according to the target physical memory usage and the target virtual memory usage.
The storage unit 720 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM) 7201 and/or a cache memory unit 7202, and may further include a read only memory unit (ROM) 7203.
The storage unit 720 may also include a program/utility 7204 having a set (at least one) of program modules 7205, such program modules 7205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 730 may be any representation of one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 700 may also communicate with one or more external devices 800 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 700, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 700 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 750. Also, the electronic device 700 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the internet) via the network adapter 760. As shown, the network adapter 760 communicates with the other modules of the electronic device 700 via the bus 730. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 700, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiment of the present invention can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to make a computing device (which can be a personal computer, a server, a terminal device, or a network device, etc.) execute the method according to the embodiment of the present invention.
In an exemplary embodiment of the present invention, there is also provided a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, aspects of the invention may also be implemented in the form of a program product comprising program code means for causing a terminal device to carry out the steps according to various exemplary embodiments of the invention described in the above section "exemplary methods" of the present description, when said program product is run on the terminal device.
According to the program product for realizing the method, the portable compact disc read only memory (CD-ROM) can be adopted, the program code is included, and the program product can be operated on terminal equipment, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
Furthermore, the above-described figures are merely schematic illustrations of processes involved in methods according to exemplary embodiments of the invention, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.

Claims (10)

1. A method for resource allocation, comprising:
receiving a resource configuration request comprising a task to be executed, and acquiring, in response to the resource configuration request, a target configuration parameter corresponding to the task name of the task to be executed from a configuration center; wherein the task name comprises a registration identifier, and the registration identifier corresponds one-to-one with the target configuration parameter;
analyzing the target configuration parameters to obtain target physical memory usage and target virtual memory usage required by the task to be executed in the execution process;
and configuring corresponding memory resources for the task to be executed according to the target physical memory usage and the target virtual memory usage.
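For illustration only, the following is a minimal Python sketch of the flow in claim 1. The configuration center is modeled as a plain dictionary keyed by registration identifier, and the task-name scheme and field names (`phys_mem_mb`, `virt_mem_mb`) are illustrative assumptions, not part of the claimed method.

```python
import json
from dataclasses import dataclass

@dataclass
class MemoryResources:
    physical_mb: int  # target physical memory usage for the task
    virtual_mb: int   # target virtual memory usage for the task

def configure_resources(request: dict, config_center: dict) -> MemoryResources:
    """Look up the target configuration parameter by the registration
    identifier embedded in the task name, parse it, and return the memory
    resources to configure for the task to be executed."""
    task_name = request["task_name"]          # e.g. "etl_job__reg-0001"
    reg_id = task_name.rsplit("__", 1)[-1]    # registration identifier (assumed naming scheme)
    target_param = config_center[reg_id]      # one-to-one: registration identifier -> parameter
    parsed = json.loads(target_param)
    return MemoryResources(physical_mb=int(parsed["phys_mem_mb"]),
                           virtual_mb=int(parsed["virt_mem_mb"]))

# Example: a configuration-center stub keyed by registration identifier.
config_center = {"reg-0001": '{"phys_mem_mb": 2048, "virt_mem_mb": 4096}'}
print(configure_resources({"task_name": "etl_job__reg-0001"}, config_center))
```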
2. The method for resource allocation according to claim 1, further comprising:
acquiring a task running log of the task to be executed from a task running database, and formatting the task running log to obtain a first processing result comprising a registration identifier, a timestamp, actual physical memory usage and actual virtual memory usage of the task to be executed;
acquiring a resource use log of the task to be executed from a resource use database, and analyzing the resource use log to obtain a second processing result comprising a registration identifier, a task name, a total task resource use amount and an execution time of the task to be executed;
associating the first processing result and the second processing result according to the registration identifier, and obtaining the current configuration parameters of the task to be executed according to the associated first processing result and second processing result;
and updating the target configuration parameters by using the current configuration parameters.
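For illustration only, a minimal Python sketch of the association and update steps in claim 2, assuming the first and second processing results are lists of dictionaries; the field names (`reg_id`, `actual_phys_mem_mb`, `actual_virt_mem_mb`, `total_resource`, `duration_s`) are illustrative assumptions rather than terms of the claim.

```python
import json

def derive_current_parameters(first_results, second_results):
    """Join the formatted task-run records (first processing result) with the
    parsed resource-use records (second processing result) on the registration
    identifier, and derive current configuration parameters per task."""
    usage_by_reg = {r["reg_id"]: r for r in second_results}
    current = {}
    for rec in first_results:
        usage = usage_by_reg.get(rec["reg_id"])
        if usage is None:
            continue  # no matching resource-use record for this task
        current[rec["reg_id"]] = {
            "phys_mem_mb": rec["actual_phys_mem_mb"],
            "virt_mem_mb": rec["actual_virt_mem_mb"],
            "total_resource": usage["total_resource"],
            "duration_s": usage["duration_s"],
        }
    return current

def update_config_center(config_center: dict, current: dict) -> None:
    """Overwrite the stored target configuration parameters with the current ones."""
    for reg_id, params in current.items():
        config_center[reg_id] = json.dumps(params)
```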
3. The method for resource allocation according to claim 1, further comprising:
receiving an identifier acquisition request for acquiring the registration identifier of the task to be executed, and generating, in response to the identifier acquisition request, the registration identifier of the task to be executed;
and sending the registration identifier to the sender of the identifier acquisition request, so that the sender generates a task name according to the registration identifier and generates a resource configuration request for the task to be executed according to the task name.
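For illustration only, a minimal Python sketch of claim 3, assuming a UUID-based registration identifier and a double-underscore task-name scheme; both are illustrative assumptions, not requirements of the claim.

```python
import uuid

def handle_identifier_request() -> str:
    """Generate a registration identifier for the task to be executed and
    return it to the sender of the identifier acquisition request."""
    return f"reg-{uuid.uuid4().hex[:8]}"

def build_resource_request(base_name: str, reg_id: str) -> dict:
    """On the sender side: embed the registration identifier in the task name
    and build the resource configuration request from that task name."""
    return {"task_name": f"{base_name}__{reg_id}"}

print(build_resource_request("etl_job", handle_identifier_request()))
```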
4. The resource allocation method according to claim 2, wherein the resource allocation method further comprises:
and acquiring a task running log of the task to be executed from a computing node according to a preset log acquisition program, and storing the task running log to a path corresponding to the task to be executed in the task running database.
5. The resource allocation method according to claim 4, wherein storing the task running log to the path corresponding to the task to be executed in the task running database comprises:
partitioning the path corresponding to the task to be executed according to a preset time period to obtain a plurality of sub-storage intervals;
and storing the task running log into the sub-storage interval corresponding to the time period to which the task running log belongs.
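For illustration only, a minimal Python sketch of the partitioning in claim 5, assuming a one-hour preset time period and one sub-directory per sub-storage interval; both are illustrative assumptions.

```python
from datetime import datetime, timezone

def sub_interval_path(base_path: str, log_timestamp: float, period_s: int = 3600) -> str:
    """Partition the task's storage path into sub-storage intervals by a preset
    time period (one hour here) and return the interval the timestamp falls into."""
    bucket_start = int(log_timestamp) // period_s * period_s
    label = datetime.fromtimestamp(bucket_start, tz=timezone.utc).strftime("%Y%m%d%H")
    return f"{base_path}/{label}"

# A log written now is stored under the current hour's sub-interval.
print(sub_interval_path("/task_run_db/etl_job", datetime.now(tz=timezone.utc).timestamp()))
```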
6. The resource allocation method according to claim 2, wherein obtaining the resource use log of the task to be executed from the resource use database comprises:
and acquiring the resource use log of the task to be executed through a task interface corresponding to the task to be executed.
7. The resource allocation method according to claim 2, wherein obtaining the current configuration parameters of the task to be executed according to the associated first processing result and second processing result comprises:
obtaining the maximum physical memory usage amount and the maximum virtual memory usage amount required by the task to be executed in the execution process according to the associated first processing result and second processing result;
and expanding the maximum physical memory usage and the maximum virtual memory usage by a preset multiple, and generating the current configuration parameters according to the expanded maximum physical memory usage and maximum virtual memory usage.
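For illustration only, a minimal Python sketch of claim 7, assuming an expansion factor of 1.5; the actual preset multiple is an assumption and is not fixed by the claim.

```python
def expand_current_parameters(max_phys_mb: int, max_virt_mb: int, factor: float = 1.5) -> dict:
    """Expand the maximum observed physical and virtual memory usage by a
    preset multiple to form the current configuration parameters."""
    return {"phys_mem_mb": int(max_phys_mb * factor),
            "virt_mem_mb": int(max_virt_mb * factor)}

# Example: a task that peaked at 1229 MB physical / 2560 MB virtual memory.
print(expand_current_parameters(1229, 2560))
```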
8. A resource allocation apparatus, comprising:
a parameter acquisition module, configured to receive a resource configuration request comprising a task to be executed and, in response to the resource configuration request, acquire a target configuration parameter corresponding to the task name of the task to be executed from a configuration center, wherein the task name comprises a registration identifier, and the registration identifier corresponds one-to-one with the target configuration parameter;
a parameter analysis module, configured to analyze the target configuration parameters to obtain target physical memory usage and target virtual memory usage required by the task to be executed in the execution process; and
a resource configuration module, configured to configure corresponding memory resources for the task to be executed according to the target physical memory usage and the target virtual memory usage.
9. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the resource allocation method of any one of claims 1 to 7.
10. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the resource allocation method of any one of claims 1 to 7 by executing the executable instructions.
CN202010044959.0A 2020-01-16 2020-01-16 Resource allocation method and device, computer readable storage medium and electronic device Pending CN110874272A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010044959.0A CN110874272A (en) 2020-01-16 2020-01-16 Resource allocation method and device, computer readable storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010044959.0A CN110874272A (en) 2020-01-16 2020-01-16 Resource allocation method and device, computer readable storage medium and electronic device

Publications (1)

Publication Number Publication Date
CN110874272A true CN110874272A (en) 2020-03-10

Family

ID=69718434

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010044959.0A Pending CN110874272A (en) 2020-01-16 2020-01-16 Resource allocation method and device, computer readable storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN110874272A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101694633A (en) * 2009-09-30 2010-04-14 曙光信息产业(北京)有限公司 Equipment, method and system for dispatching of computer operation
CN102906696A (en) * 2010-03-26 2013-01-30 维尔图尔梅特里克斯公司 Fine grain performance resource management of computer systems
CN104391749A (en) * 2014-11-26 2015-03-04 北京奇艺世纪科技有限公司 Resource allocation method and device
CN107220116A (en) * 2017-05-25 2017-09-29 深信服科技股份有限公司 Sandbox environment task processing method and system under a kind of NUMA architecture

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111488333A (en) * 2020-05-18 2020-08-04 网易(杭州)网络有限公司 Data processing method and device, storage medium and electronic equipment
CN111488333B (en) * 2020-05-18 2023-07-11 网易(杭州)网络有限公司 Data processing method and device, storage medium and electronic equipment
CN111782466A (en) * 2020-06-28 2020-10-16 京东数字科技控股有限公司 Big data task resource utilization detection method and device
CN113778658A (en) * 2020-09-29 2021-12-10 北京沃东天骏信息技术有限公司 Task allocation method and device, electronic equipment and storage medium
CN112558995A (en) * 2020-12-24 2021-03-26 恩亿科(北京)数据科技有限公司 Flink integration method and system based on TBDS Hadoop
EP4040291A1 (en) * 2021-02-07 2022-08-10 Beijing Tusen Zhitu Technology Co., Ltd. Vehicle control and task processing method and apparatus, computing device and system
CN113660231A (en) * 2021-08-06 2021-11-16 上海浦东发展银行股份有限公司 Message parsing method, device, equipment and storage medium
CN116501474A (en) * 2023-06-08 2023-07-28 之江实验室 System, method and device for processing batch homogeneous tasks
CN116501474B (en) * 2023-06-08 2023-09-22 之江实验室 System, method and device for processing batch homogeneous tasks

Similar Documents

Publication Publication Date Title
CN110874272A (en) Resource allocation method and device, computer readable storage medium and electronic device
US11909604B2 (en) Automatic provisioning of monitoring for containerized microservices
CN111831420B (en) Method for task scheduling, related device and computer program product
WO2019200984A1 (en) Life cycle management method for distributed application, managers, device and medium
CN110083455B (en) Graph calculation processing method, graph calculation processing device, graph calculation processing medium and electronic equipment
CN111290854A (en) Task management method, device and system, computer storage medium and electronic equipment
CN110046041B (en) Data acquisition method based on battery scheduling framework
WO2021203979A1 (en) Operation and maintenance processing method and apparatus, and computer device
CN113742031B (en) Node state information acquisition method and device, electronic equipment and readable storage medium
CN109117252B (en) Method and system for task processing based on container and container cluster management system
WO2023246347A1 (en) Digital twin processing method and digital twin system
CN109190025B (en) Information monitoring method, device, system and computer readable storage medium
CN111679911B (en) Management method, device, equipment and medium of GPU card in cloud environment
CN111782341B (en) Method and device for managing clusters
CN110781180A (en) Data screening method and data screening device
CN113590437B (en) Alarm information processing method, device, equipment and medium
CN114201294A (en) Task processing method, device and system, electronic equipment and storage medium
CN109213743B (en) Data query method and device
CN108009010B (en) Management device, system, method, electronic device and storage medium for thin client
CN109324892B (en) Distributed management method, distributed management system and device
US20230342369A1 (en) Data processing method and apparatus, and electronic device and storage medium
CN114610798A (en) Resource allocation management method, system, device, storage medium and electronic equipment
CN114510531A (en) Database synchronization method and device, electronic equipment and storage medium
CN112363774A (en) Storm real-time task configuration method and device
CN110727457A (en) Component management method, device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200310