CN115269206A - Data processing method and platform based on resource allocation - Google Patents

Data processing method and platform based on resource allocation

Info

Publication number
CN115269206A
CN115269206A (application No. CN202211181889.9A / CN202211181889A)
Authority
CN
China
Prior art keywords
data processing
data
processing unit
resource allocation
unit determines
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211181889.9A
Other languages
Chinese (zh)
Other versions
CN115269206B (en)
Inventor
陈丽辉
张德文
周可彬
李子威
张迪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Sanxiang Bank Co Ltd
Original Assignee
Hunan Sanxiang Bank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Sanxiang Bank Co Ltd
Priority to CN202211181889.9A
Publication of CN115269206A
Application granted
Publication of CN115269206B
Legal status: Active (current)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5033Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering data affinity
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5044Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering hardware capabilities
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5018Thread allocation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/508Monitor

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention relates to the technical field of information processing, and in particular to a data processing method and platform based on resource allocation. The platform comprises a request receiving unit, a data processing unit, a resource allocation unit and a resource management unit. The request receiving unit acquires a user's data processing request on a server; the data processing unit determines the processing mode of the data processing request according to the matching degree between the quantity of allocable resources and the quantity of pre-allocated resources; when the data processing unit determines that the processing mode is to allocate resources according to the pre-allocated resource quantity, the resource allocation unit allocates the resources and the resource management unit adjusts the allocated quantity according to the performance influence coefficient of the data processing request on the server; when the data processing unit determines that the processing mode is to add the request to the to-be-processed data list, the resource management unit releases enabled processes whose processing is delayed. The accuracy of resource allocation during server data processing is thereby improved.

Description

Data processing method and platform based on resource allocation
Technical Field
The present invention relates to the field of information processing technologies, and in particular, to a data processing method and platform based on resource allocation.
Background
In existing solutions, the allocation and management of system resources depend entirely on the operating system's resource allocation mechanism. However, abnormal processes and threads are difficult for the operating system to discover and handle in time, which severely reduces the service efficiency of the server.
Chinese patent publication No. CN108052396A discloses a resource allocation method and system. The method includes: acquiring a start instruction input by a user, starting a service program according to the instruction, and generating at least one thread; acquiring at least one thread applying for the computing resources of the same hardware accelerator card; allocating a service mutex lock to the target thread according to the order in which threads apply; distributing the target thread to the target computing unit with the fewest queued threads and releasing the target thread's service mutex lock; if the number of threads queued ahead of the target thread is zero, processing the target thread's service data; allocating the service mutex lock to the target thread; resetting the target thread's flag bit, pointing the active pointer of the target thread's queue to the next thread whose service data is to be processed, releasing the target thread's service mutex lock, and cancelling the target thread. Although this invention can significantly improve the utilization of computing resources and the application value of the hardware accelerator card, it does not consider the impact of allocation on system resources, so resources are not allocated accurately according to the amount of data to be processed when a data processing request is received.
Disclosure of Invention
Therefore, the present invention provides a data processing method and platform based on resource allocation, to solve the prior-art problem that resources are not allocated accurately according to the amount of data to be processed when a data processing request is received.
In order to achieve the above object, an aspect of the present invention provides a data processing method based on resource allocation, including the following steps:
s1, a request receiving unit acquires a data processing request of a user in a server, and a data processing unit determines the quantity of pre-allocated resources according to the data processing request, wherein the quantity of the pre-allocated resources comprises the quantity of processes of the server and the quantity of threads under each process;
s2, the data processing unit determines a processing mode of the data processing request according to the matching degree of the number of the allocable resources and the number of the pre-allocated resources, wherein the processing mode comprises the steps of allocating resources according to the number of the pre-allocated resources and adding data into a to-be-processed data list;
s3, when the data processing unit determines that the processing mode is to allocate resources according to the pre-allocated resource quantity, the resource allocation unit allocates resources, and the resource management unit adjusts the allocated resource quantity according to the performance influence coefficient of the data processing request on the server;
and S4, when the data processing unit determines that the processing mode is to add the data processing request to the to-be-processed data list, the resource management unit releases enabled processes whose processing is delayed.
Further, in step S1, when the data processing unit determines the pre-allocated resource quantity according to the data processing request, the data processing unit determines the required number of processes according to the comparison between the number W of data types in the data processing request and the preset data-type quantities,
wherein the data processing unit is provided with a first preset data-type quantity W1, a second preset data-type quantity W2, a first process quantity P1, a second process quantity P2 and a third process quantity P3, with W1 < W2 and P1 < P2 < P3;
when W ≤ W1, the data processing unit determines the number of processes to be P1;
when W1 < W ≤ W2, the data processing unit determines the number of processes to be P2;
when W > W2, the data processing unit determines the number of processes to be P3.
Further, when the data processing unit has finished determining the number of processes, the data processing unit determines the number of threads pre-allocated to the process corresponding to each data type according to the comparison between the to-be-processed data quantity Di of each data type in the data processing request and the preset data quantities,
the data processing unit is provided with a first preset data volume Db1, a second preset data volume Db2, a first thread quantity T1, a second thread quantity T2 and a third thread quantity T3, wherein Db1 is less than Db2, and T1 is less than T2 and less than T3;
if Di is less than Db1, the data processing unit determines that the number of threads pre-allocated to the ith process is T1;
if Db1 is not more than Di and less than Db2, the server determines the number of threads pre-allocated to the ith process as T2;
if Db2 is less than or equal to Di, the server determines that the number of threads pre-allocated to the ith process is T3;
where i denotes the i-th data type, i = 1,2,3, …, m, and m is a positive integer.
Further, in step S2, when the data processing unit determines the processing mode of the data processing request according to the matching degree between the number of allocable resources and the number of pre-allocated resources, the data obtaining unit obtains the number of remaining processes and the number of remaining threads, and the data processing unit calculates the matching degree G according to the obtained number of remaining processes and the obtained number of remaining threads
[Formula for G is provided only as an image in the original publication and is not reproduced here]
where Pn is the number of processes, P10 is the number of remaining processes, K1 is the coefficient of the lowest remaining process quantity, α is the thread-quantity influence weight, Tz is the thread quantity, T10 is the remaining thread quantity, K2 is the coefficient of the lowest remaining thread quantity, β is the thread influence weight, n = 1,2,3 and z = 1,2,3.
Further, when the data processing unit finishes calculating the matching degree G, the data processing unit determines the processing mode of the data processing request according to the comparison result of the matching degree G and the preset matching degree G0,
if G is less than or equal to G0, the data processing unit determines that the processing mode of the data processing request is to allocate resources according to the number of pre-allocated resources;
and if G > G0, the data processing unit determines that the processing mode of the data processing request is to add the request to the to-be-processed data list.
Further, in step S3, when the data processing unit determines that the processing mode of the data processing request is to allocate resources according to the pre-allocated resource quantity, the resource allocation unit allocates Pn processes and sets the number of threads for each process to Tz.
Further, when the resource allocation unit has completed resource allocation, the data acquisition unit acquires the processor utilization and memory utilization after the allocation, and the data processing unit calculates the performance impact coefficient U from the acquired processor utilization and memory utilization,
[Formula for U is provided only as an image in the original publication and is not reproduced here]
where C1 is the processor utilization, C10 is the preset processor utilization, α1 is the processor-utilization influence weight, R1 is the memory utilization, R10 is the preset memory utilization, and β1 is the memory-utilization influence weight.
Further, when the data processing unit completes the calculation of the performance impact coefficient U, the resource management unit determines whether to adjust the amount of the allocated resources according to a comparison result between the performance impact coefficient U and a preset performance impact coefficient U0,
wherein the resource management unit is provided with a thread-quantity adjustment coefficient Kt, where 0.5 ≤ Kt < 1;
if U is larger than or equal to U0, the resource management unit determines to use Kt to adjust the number of threads corresponding to each process in the resource allocation;
and if U is less than U0, the resource management unit determines not to adjust the quantity of the allocated resources.
Further, in step S4, when the resource management unit releases enabled processes whose processing is delayed, the data acquisition unit acquires the processing delay time Qe of each process, and the resource management unit determines whether to release a process according to the comparison between its processing delay time Qe and the preset delay time Q0;
if Qe is larger than or equal to Q0, the resource management unit determines to release the process corresponding to the e-th delay time and sends error information to the user;
and if Qe is less than Q0, the resource management unit determines not to release the process corresponding to the e-th delay time.
Another aspect of the present invention provides a data processing platform based on resource allocation, including:
the request receiving unit is used for receiving a data processing request sent by a user;
a data acquisition unit for acquiring resource utilization data of the server;
the data processing unit is respectively connected with the request receiving unit and the data acquisition unit and is used for determining the quantity of pre-allocated resources according to the data processing request acquired by the request receiving unit and calculating a performance influence coefficient according to the processor utilization rate and the memory utilization rate of the server acquired by the data acquisition unit;
the resource allocation unit is connected with the data processing unit and used for determining the process quantity in the resource allocation quantity and the thread quantity pre-allocated in the process according to the data processing type quantity determined by the data processing unit;
and the resource management unit is respectively connected with the resource allocation unit and the data processing unit and is used for monitoring the consumption of resources in the server and releasing abnormal resources.
Compared with the prior art, the invention has the advantage that the request receiving unit acquires the user's data processing request on the server, the data processing unit determines the pre-allocated resource quantity according to the data processing request, and the data processing unit determines the processing mode of the request according to the matching degree between the allocable resource quantity and the pre-allocated resource quantity; when the data processing unit determines that the processing mode is to allocate resources according to the pre-allocated resource quantity, the resource allocation unit allocates the resources and the resource management unit adjusts the allocated quantity according to the performance influence coefficient of the request on the server; when the data processing unit determines that the processing mode is to add the request to the to-be-processed data list, the resource management unit releases enabled processes whose processing is delayed, so that the accuracy of resource allocation during server data processing is improved.
Furthermore, the data processing unit determines the number of the required processes according to the comparison result of the number of the data types in the data processing request and the number of the preset data types, and allocates different processes according to different data processing types, so that the accuracy of resource allocation during data processing of the server is improved.
Furthermore, the data processing unit determines the number of threads pre-allocated to the process corresponding to each data type according to the comparison result between the data amount to be processed of each data type in the data processing request and the preset data amount, so that the unnecessary thread resource allocation amount under each process is reduced, and the resource allocation accuracy is improved when the server performs data processing.
Further, the data acquisition unit acquires the remaining process quantity and the remaining thread quantity, the data processing unit calculates the matching degree from the acquired remaining process quantity and remaining thread quantity, and once the matching degree has been calculated, the data processing unit determines the processing mode of the data processing request according to the comparison between the matching degree and the preset matching degree. This ensures a reasonable judgment of whether the resource allocation is appropriate and improves the accuracy of resource allocation during server data processing.
Further, when the data processing unit determines that the processing mode of the data processing request is to allocate resources according to the pre-allocated resource quantity, the resource allocation unit allocates a process and allocates the thread quantity corresponding to each process, so that the rationality of resource allocation is ensured, and the accuracy of resource allocation when the server performs data processing is improved.
Further, when the resource allocation unit has completed resource allocation, the data acquisition unit acquires the processor utilization and memory utilization after the allocation, the data processing unit calculates the performance impact coefficient from them, and the resource management unit then determines whether to adjust the allocated resource quantity according to the comparison between the performance impact coefficient and the preset performance impact coefficient. This reduces the influence of resource allocation on system resource utilization and improves the accuracy of resource allocation during server data processing.
Further, when the resource management unit releases enabled processes whose processing is delayed, the data acquisition unit acquires the processing delay time of each process and the resource management unit determines whether to release a process according to the comparison between its processing delay time and the preset delay time. Releasing high-delay processes prevents abnormal resource consumption by zombie processes and zombie threads, which improves the accuracy of resource allocation during server data processing.
Drawings
FIG. 1 is a logic diagram of a data processing method based on resource allocation according to the present invention;
fig. 2 is a connection relationship block diagram of a data processing platform based on resource allocation according to the present invention.
Detailed Description
In order that the objects and advantages of the invention will be more clearly understood, the invention is further described in conjunction with the following examples; it should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are only for explaining the technical principle of the present invention, and do not limit the scope of the present invention.
Referring to fig. 1, fig. 1 is a logic block diagram of a data processing method based on resource allocation according to an embodiment of the present invention.
The data processing method based on resource allocation comprises the following steps:
s1, a request receiving unit acquires a data processing request of a user in a server, and a data processing unit determines the quantity of pre-allocated resources according to the data processing request, wherein the quantity of the pre-allocated resources comprises the quantity of processes of the server and the quantity of threads under each process;
s2, the data processing unit determines a processing mode of the data processing request according to the matching degree of the number of the allocable resources and the number of the pre-allocated resources, wherein the processing mode comprises the steps of allocating resources according to the number of the pre-allocated resources and adding data into a to-be-processed data list;
s3, when the data processing unit determines that the processing mode is to allocate resources according to the pre-allocated resource quantity, the resource allocation unit allocates resources, and the resource management unit adjusts the allocated resource quantity according to the performance influence coefficient of the data processing request on the server;
and S4, when the data processing unit determines that the processing mode is to add the data processing request to the to-be-processed data list, the resource management unit releases enabled processes whose processing is delayed.
Specifically, in step S1, when the data processing unit determines the amount of pre-allocated resources according to the data processing request, the data processing unit determines the required number of processes according to the comparison result of the number W of data types in the data processing request and the preset number of data types,
wherein the data processing unit is provided with a first preset data-type quantity W1, a second preset data-type quantity W2, a first process quantity P1, a second process quantity P2 and a third process quantity P3, with W1 < W2 and P1 < P2 < P3,
when W ≤ W1, the data processing unit determines the number of processes to be P1;
when W1 < W ≤ W2, the data processing unit determines the number of processes to be P2;
when W > W2, the data processing unit determines the number of processes to be P3.
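For illustration, the three-band rule above can be sketched in code. The thresholds W1, W2 and the process counts P1, P2, P3 are configuration values; the concrete numbers used below are assumptions for the example, not values fixed by the patent.

```python
def determine_process_count(num_data_types: int,
                            w1: int = 2, w2: int = 5,
                            p1: int = 1, p2: int = 2, p3: int = 4) -> int:
    """Map the number W of data types in a request to a pre-allocated process
    count, following the three bands W <= W1, W1 < W <= W2 and W > W2.
    Threshold and count values here are illustrative assumptions."""
    if num_data_types <= w1:
        return p1
    if num_data_types <= w2:
        return p2
    return p3
```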
Specifically, when the data processing unit has finished determining the number of processes, the data processing unit determines the number of threads pre-allocated to the process corresponding to each data type according to the comparison between the to-be-processed data quantity Di of each data type in the data processing request and the preset data quantities,
the data processing unit is provided with a first preset data volume Db1, a second preset data volume Db2, a first thread quantity T1, a second thread quantity T2 and a third thread quantity T3, wherein Db1 is less than Db2, and T1 is less than T2 and less than T3;
if Di is less than Db1, the data processing unit determines that the number of threads pre-allocated to the ith process is T1;
if Db1 is less than or equal to Di and less than Db2, the server determines the number of threads pre-allocated to the ith process to be T2;
if Db2 is less than or equal to Di, the server determines that the number of threads pre-allocated to the ith process is T3;
where i denotes the i-th data type, i = 1,2,3, …, m, and m is a positive integer.
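Similarly, the per-type thread pre-allocation can be sketched as follows; the data-quantity thresholds Db1, Db2 and the thread counts T1, T2, T3 are illustrative assumptions, and the sample request at the end is hypothetical.

```python
def determine_thread_count(pending_amount: int,
                           db1: int = 10_000, db2: int = 100_000,
                           t1: int = 1, t2: int = 2, t3: int = 4) -> int:
    """Map the to-be-processed data quantity Di of one data type to the thread
    count pre-allocated to that type's process (Di < Db1, Db1 <= Di < Db2, Db2 <= Di)."""
    if pending_amount < db1:
        return t1
    if pending_amount < db2:
        return t2
    return t3

# Hypothetical request with m = 3 data types; threads are pre-allocated per type.
request = {"transactions": 250_000, "audit_logs": 42_000, "reports": 3_000}
threads_per_type = {name: determine_thread_count(amount) for name, amount in request.items()}
# threads_per_type -> {'transactions': 4, 'audit_logs': 2, 'reports': 1}
```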
Specifically, in step S2, when the data processing unit determines the processing mode of the data processing request according to the matching degree between the number of allocable resources and the number of pre-allocated resources, the data obtaining unit obtains the number of remaining processes and the number of remaining threads, and the data processing unit calculates the matching degree G according to the obtained number of remaining processes and the obtained number of remaining threads
[Formula for G is provided only as an image in the original publication and is not reproduced here]
where Pn is the number of processes, P10 is the number of remaining processes, K1 is the coefficient of the lowest remaining process quantity, α is the thread-quantity influence weight, Tz is the thread quantity, T10 is the remaining thread quantity, K2 is the coefficient of the lowest remaining thread quantity, β is the thread influence weight, n = 1,2,3 and z = 1,2,3.
Specifically, when the data processing unit completes the calculation of the matching degree G, the data processing unit determines the processing mode of the data processing request according to the comparison result of the matching degree G and the preset matching degree G0,
if G is less than or equal to G0, the data processing unit determines that the processing mode of the data processing request is to allocate resources according to the pre-allocation resource quantity;
If G > G0, the data processing unit determines that the processing mode of the data processing request is to add the request to the to-be-processed data list.
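Since the published formula for G is available only as an image, the following sketch assumes a simple weighted-ratio form built from the variables defined above (Pn, P10, K1, α, Tz, T10, K2, β); it illustrates the decision flow between allocating and queuing, not the exact published formula, and all numeric defaults are assumptions.

```python
def matching_degree(pn: int, p10: int, tz: int, t10: int,
                    k1: float = 1.0, k2: float = 1.0,
                    alpha: float = 0.5, beta: float = 0.5) -> float:
    """Assumed weighted-ratio form of the matching degree G: the larger the share
    of the remaining processes (P10) and threads (T10) that the pre-allocation
    (Pn, Tz) would consume, the larger G becomes."""
    return alpha * (pn * k1) / max(p10, 1) + beta * (tz * k2) / max(t10, 1)

def choose_processing_mode(pn: int, p10: int, tz: int, t10: int,
                           g0: float = 0.8) -> str:
    """G <= G0: allocate according to the pre-allocated quantity;
    G > G0: add the request to the to-be-processed data list."""
    g = matching_degree(pn, p10, tz, t10)
    return "allocate" if g <= g0 else "enqueue"

# Example: 2 processes / 6 threads requested, 8 processes / 64 threads remaining.
print(choose_processing_mode(pn=2, p10=8, tz=6, t10=64))  # -> "allocate"
```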
Specifically, in step S3, when the data processing unit determines that the processing mode of the data processing request is to allocate resources according to the pre-allocated resource quantity, the resource allocation unit allocates Pn processes and sets the number of threads for each process to Tz.
Specifically, when the resource allocation unit has completed resource allocation, the data acquisition unit acquires the processor utilization and memory utilization after the allocation, and the data processing unit calculates the performance impact coefficient U from the acquired processor utilization and memory utilization,
[Formula for U is provided only as an image in the original publication and is not reproduced here]
where C1 is the processor utilization, C10 is the preset processor utilization, α1 is the processor-utilization influence weight, R1 is the memory utilization, R10 is the preset memory utilization, and β1 is the memory-utilization influence weight.
Specifically, when the data processing unit completes the calculation of the performance impact coefficient U, the resource management unit determines whether to adjust the amount of the allocated resources according to the comparison result of the performance impact coefficient U and the preset performance impact coefficient U0,
wherein the resource management unit is provided with a thread-quantity adjustment coefficient Kt, where 0.5 ≤ Kt < 1;
if U is larger than or equal to U0, the resource management unit determines to use Kt to adjust the number of threads corresponding to each process in the resource allocation;
and if U is less than U0, the resource management unit determines not to adjust the quantity of the allocated resources.
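The performance-impact check and the Kt adjustment can be sketched in the same spirit; the form of U below (a weighted combination of measured CPU and memory utilization against the preset values C10 and R10) is an assumption, since the published formula is available only as an image, and the default values are illustrative.

```python
def performance_impact(cpu_util: float, mem_util: float,
                       cpu_ref: float = 0.7, mem_ref: float = 0.7,
                       alpha1: float = 0.5, beta1: float = 0.5) -> float:
    """Assumed form of the performance impact coefficient U: a weighted
    combination of the measured processor utilization C1 and memory
    utilization R1 against the preset utilizations C10 and R10."""
    return alpha1 * cpu_util / cpu_ref + beta1 * mem_util / mem_ref

def adjust_threads(threads_per_process: dict, u: float,
                   u0: float = 1.0, kt: float = 0.5) -> dict:
    """If U >= U0, scale each process's thread count by Kt (assumed here to lie
    strictly between 0 and 1); otherwise leave the allocation unchanged."""
    if u >= u0:
        return {proc: max(1, int(count * kt)) for proc, count in threads_per_process.items()}
    return threads_per_process

# Example: high utilization after allocation triggers a thread reduction.
u = performance_impact(cpu_util=0.9, mem_util=0.8)   # -> about 1.21
print(adjust_threads({"proc-1": 4, "proc-2": 2}, u))  # -> {'proc-1': 2, 'proc-2': 1}
```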
Specifically, in step S4, when the resource management unit releases enabled processes whose processing is delayed, the data acquisition unit acquires the processing delay time Qe of each process, and the resource management unit determines whether to release a process by comparing its processing delay time Qe with the preset delay time Q0;
if Qe is larger than or equal to Q0, the resource management unit determines to release the process corresponding to the e-th delay time and sends error information to the user;
and if Qe is less than Q0, the resource management unit determines not to release the process corresponding to the e-th delay time.
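A minimal sketch of the delayed-release step, assuming the delay Qe is measured as elapsed time per enabled process and that releasing a process is represented here by removing its entry and printing an error; the real platform would terminate the process and notify the user.

```python
import time

def release_delayed_processes(enabled: dict, q0: float = 5.0) -> list:
    """Release every enabled process whose processing delay Qe meets or exceeds
    the preset delay Q0, and report an error for each released process.
    `enabled` maps a process id to the time its current work item started."""
    released = []
    now = time.monotonic()
    for pid, started_at in list(enabled.items()):
        qe = now - started_at
        if qe >= q0:
            del enabled[pid]  # stand-in for terminating the delayed process
            released.append(pid)
            print(f"error: process {pid} released after {qe:.1f}s of delay")
    return released
```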
Referring to fig. 2, fig. 2 is a block diagram of the connection relationships of the data processing platform based on resource allocation according to an embodiment of the present invention.
The data processing platform based on resource allocation of the embodiment of the invention comprises:
the request receiving unit is used for receiving a data processing request sent by a user;
a data acquisition unit for acquiring resource utilization data of the server;
the data processing unit is respectively connected with the request receiving unit and the data acquisition unit and used for determining the pre-allocation resource quantity according to the data processing request acquired by the request receiving unit and calculating the performance influence coefficient according to the processor utilization rate and the memory utilization rate of the server acquired by the data acquisition unit;
the resource allocation unit is connected with the data processing unit and used for determining the process quantity in the resource allocation quantity and the thread quantity pre-allocated in the process according to the data processing type quantity determined by the data processing unit;
and the resource management unit is respectively connected with the resource allocation unit and the data processing unit and is used for monitoring the consumption of resources in the server and releasing abnormal resources.
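For orientation, the unit topology described above can be sketched as plain classes whose constructor arguments mirror the stated connections; the method bodies are placeholders, not the patented implementation.

```python
class RequestReceivingUnit:
    """Receives the data processing requests sent by users."""
    def receive(self, request: dict) -> dict:
        return request

class DataAcquisitionUnit:
    """Collects the server's resource-utilization data (values stubbed here)."""
    def snapshot(self) -> dict:
        return {"cpu": 0.42, "mem": 0.55, "free_processes": 8, "free_threads": 64}

class DataProcessingUnit:
    """Connected to the request receiving and data acquisition units; determines
    the pre-allocated resource quantity and the performance impact coefficient."""
    def __init__(self, receiver: RequestReceivingUnit, acquirer: DataAcquisitionUnit):
        self.receiver = receiver
        self.acquirer = acquirer

class ResourceAllocationUnit:
    """Connected to the data processing unit; allocates the decided process and
    thread quantities."""
    def __init__(self, processor: DataProcessingUnit):
        self.processor = processor

class ResourceManagementUnit:
    """Connected to the resource allocation and data processing units; monitors
    resource consumption and releases abnormal (delayed) resources."""
    def __init__(self, allocator: ResourceAllocationUnit, processor: DataProcessingUnit):
        self.allocator = allocator
        self.processor = processor
```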
So far, the technical solutions of the present invention have been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of the present invention is obviously not limited to these specific embodiments. Equivalent changes or substitutions of related technical features can be made by those skilled in the art without departing from the principle of the invention, and the technical scheme after the changes or substitutions can be within the protection scope of the invention.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention; various modifications and alterations to this invention will become apparent to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A data processing method based on resource allocation, comprising the steps of:
s1, a request receiving unit acquires a data processing request of a user in a server, and a data processing unit determines the quantity of pre-allocated resources according to the data processing request, wherein the quantity of the pre-allocated resources comprises the quantity of processes of the server and the quantity of threads under each process;
s2, the data processing unit determines a processing mode of the data processing request according to the matching degree of the number of the allocable resources and the number of the pre-allocated resources, wherein the processing mode comprises the steps of allocating resources according to the number of the pre-allocated resources and adding data into a to-be-processed data list;
s3, when the data processing unit determines that the processing mode is to allocate resources according to the pre-allocated resource quantity, the resource allocation unit allocates resources, and the resource management unit adjusts the allocated resource quantity according to the performance influence coefficient of the data processing request on the server;
and S4, when the data processing unit determines that the processing mode is to add the data processing request to the to-be-processed data list, the resource management unit releases enabled processes whose processing is delayed.
2. The data processing method based on resource allocation according to claim 1, wherein in step S1, when the data processing unit determines the amount of pre-allocated resources according to the data processing request, the data processing unit determines the required number of processes according to the comparison result between the amount of data types W in the data processing request and the preset amount of data types,
wherein the data processing unit is provided with a first preset data-type quantity W1, a second preset data-type quantity W2, a first process quantity P1, a second process quantity P2 and a third process quantity P3, with W1 < W2 and P1 < P2 < P3,
when W ≤ W1, the data processing unit determines the number of processes to be P1;
when W1 < W ≤ W2, the data processing unit determines the number of processes to be P2;
when W > W2, the data processing unit determines the number of processes to be P3.
3. The data processing method based on resource allocation according to claim 2, wherein when the data processing unit has finished determining the number of processes, the data processing unit determines the number of threads pre-allocated to the process corresponding to each data type according to the comparison between the to-be-processed data quantity Di of each data type in the data processing request and the preset data quantities,
the data processing unit is provided with a first preset data volume Db1, a second preset data volume Db2, a first thread quantity T1, a second thread quantity T2 and a third thread quantity T3, wherein Db1 is less than Db2, and T1 is less than T2 and less than T3;
if Di is less than Db1, the data processing unit determines that the number of threads pre-allocated to the ith process is T1;
if Db1 is not more than Di and less than Db2, the server determines the number of threads pre-allocated to the ith process as T2;
if Db2 is less than or equal to Di, the server determines that the number of threads pre-allocated to the ith process is T3;
where i denotes the i-th data type, i = 1,2,3, …, m, and m is a positive integer.
4. The data processing method based on resource allocation according to claim 3, wherein in step S2, when the data processing unit determines the processing mode of the data processing request according to the matching degree between the number of allocable resources and the number of pre-allocated resources, the data processing unit obtains the number of remaining processes and the number of remaining threads, and the data processing unit calculates the matching degree G according to the obtained number of remaining processes and the obtained number of remaining threads
[Formula for G is provided only as an image in the original publication and is not reproduced here]
where Pn is the number of processes, P10 is the number of remaining processes, K1 is the coefficient of the lowest remaining process quantity, α is the thread-quantity influence weight, Tz is the thread quantity, T10 is the remaining thread quantity, K2 is the coefficient of the lowest remaining thread quantity, β is the thread influence weight, n = 1,2,3 and z = 1,2,3.
5. The data processing method based on resource allocation according to claim 4, wherein when the data processing unit completes calculating the matching degree G, the data processing unit determines the processing mode of the data processing request according to the comparison result of the matching degree G and a preset matching degree G0,
if G is less than or equal to G0, the data processing unit determines that the processing mode of the data processing request is to allocate resources according to the number of pre-allocated resources;
and if G > G0, the data processing unit determines that the processing mode of the data processing request is to add the request to the to-be-processed data list.
6. The method according to claim 5, wherein in step S3, when the data processing unit determines that the processing mode of the data processing request is to allocate resources according to the pre-allocated resource quantity, the resource allocation unit allocates Pn processes and sets the number of threads for each process to Tz.
7. The data processing method based on resource allocation according to claim 6, wherein when the resource allocation unit has completed resource allocation, the data obtaining unit obtains the processor utilization and memory utilization after the allocation, and the data processing unit calculates the performance impact coefficient U from the obtained processor utilization and memory utilization,
[Formula for U is provided only as an image in the original publication and is not reproduced here]
where C1 is the processor utilization, C10 is the preset processor utilization, α1 is the processor-utilization influence weight, R1 is the memory utilization, R10 is the preset memory utilization, and β1 is the memory-utilization influence weight.
8. The data processing method based on resource allocation according to claim 7, wherein when the data processing unit has finished calculating the performance impact coefficient U, the resource management unit determines whether to adjust the allocated resource quantity according to the comparison between the performance impact coefficient U and the preset performance impact coefficient U0, wherein the resource management unit is provided with a thread-quantity adjustment coefficient Kt, where 0.5 ≤ Kt < 1;
if U is larger than or equal to U0, the resource management unit determines to use Kt to adjust the number of threads corresponding to each process in the resource allocation;
and if U is less than U0, the resource management unit determines not to adjust the quantity of the allocated resources.
9. The data processing method based on resource allocation according to claim 8, wherein in said step S4, when said resource management unit releases enabled processes whose processing is delayed, said data obtaining unit obtains the processing delay time Qe of each process, and said resource management unit determines whether to release a process according to the comparison between its processing delay time Qe and the preset delay time Q0,
if Qe is more than or equal to Q0, the resource management unit determines to release the process corresponding to the e-th delay time and sends error information to the user;
and if Qe is less than Q0, the resource management unit determines not to release the process corresponding to the e-th delay time.
10. A data processing platform applying the resource-allocation-based data processing method according to any one of claims 1-9, comprising:
the request receiving unit is used for receiving a data processing request sent by a user;
a data acquisition unit for acquiring resource utilization data of the server;
the data processing unit is respectively connected with the request receiving unit and the data acquisition unit and is used for determining the quantity of pre-allocated resources according to the data processing request acquired by the request receiving unit and calculating a performance influence coefficient according to the processor utilization rate and the memory utilization rate of the server acquired by the data acquisition unit;
the resource allocation unit is connected with the data processing unit and used for determining the process quantity in the resource allocation quantity and the thread quantity pre-allocated in the process according to the data processing type quantity determined by the data processing unit;
and the resource management unit is respectively connected with the resource allocation unit and the data processing unit and is used for monitoring the consumption of resources in the server and releasing abnormal resources.
CN202211181889.9A 2022-09-27 2022-09-27 Data processing method and platform based on resource allocation Active CN115269206B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211181889.9A CN115269206B (en) 2022-09-27 2022-09-27 Data processing method and platform based on resource allocation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211181889.9A CN115269206B (en) 2022-09-27 2022-09-27 Data processing method and platform based on resource allocation

Publications (2)

Publication Number Publication Date
CN115269206A true CN115269206A (en) 2022-11-01
CN115269206B CN115269206B (en) 2023-01-10

Family

ID=83757268

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211181889.9A Active CN115269206B (en) 2022-09-27 2022-09-27 Data processing method and platform based on resource allocation

Country Status (1)

Country Link
CN (1) CN115269206B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116700999A (en) * 2023-08-07 2023-09-05 上海观安信息技术股份有限公司 Data processing method, device, computer equipment and storage medium

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110231513A1 (en) * 2000-09-25 2011-09-22 Yevgeny Korsunsky Application distribution control network apparatus
CN103336722A (en) * 2013-07-16 2013-10-02 上海大学 Virtual machine CPU source monitoring and dynamic distributing method
CN103458525A (en) * 2012-06-01 2013-12-18 北京邮电大学 Method and device for scheduling policy selection in heterogeneous network
CN104079503A (en) * 2013-03-27 2014-10-01 华为技术有限公司 Method and device of distributing resources
CN104537008A (en) * 2014-12-16 2015-04-22 语联网(武汉)信息技术有限公司 Item set capacity value extracting method and device based on matching degree
CN105700948A (en) * 2014-11-24 2016-06-22 阿里巴巴集团控股有限公司 Method and device for scheduling calculation task in cluster
CN108121601A (en) * 2017-11-08 2018-06-05 上海格蒂电力科技有限公司 A kind of application resource dispatching device and method based on weight
CN109992366A (en) * 2017-12-29 2019-07-09 华为技术有限公司 Method for scheduling task and dispatching device
CN110275777A (en) * 2019-06-10 2019-09-24 广州市九重天信息科技有限公司 Resource scheduling system
US20190303308A1 (en) * 2018-04-03 2019-10-03 Vmware, Inc. Distributed storage system and method for managing storage access bandwidth for multiple clients
CN111625331A (en) * 2020-05-20 2020-09-04 拉扎斯网络科技(上海)有限公司 Task scheduling method, device, platform, server and storage medium
CN112131004A (en) * 2020-04-29 2020-12-25 章稳建 Data processing method based on communication of Internet of things and cloud computing server
CN113469423A (en) * 2021-06-18 2021-10-01 北京明略软件系统有限公司 Resource allocation method, device, storage medium and electronic equipment
US20210392087A1 (en) * 2020-06-16 2021-12-16 Hitachi, Ltd. Computer system and operation management method for computer system
US20220058060A1 (en) * 2020-08-18 2022-02-24 Core Scientific, Inc. Ranking computing resources

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110231513A1 (en) * 2000-09-25 2011-09-22 Yevgeny Korsunsky Application distribution control network apparatus
CN103458525A (en) * 2012-06-01 2013-12-18 北京邮电大学 Method and device for scheduling policy selection in heterogeneous network
CN104079503A (en) * 2013-03-27 2014-10-01 华为技术有限公司 Method and device of distributing resources
CN103336722A (en) * 2013-07-16 2013-10-02 上海大学 Virtual machine CPU source monitoring and dynamic distributing method
CN105700948A (en) * 2014-11-24 2016-06-22 阿里巴巴集团控股有限公司 Method and device for scheduling calculation task in cluster
CN104537008A (en) * 2014-12-16 2015-04-22 语联网(武汉)信息技术有限公司 Item set capacity value extracting method and device based on matching degree
CN108121601A (en) * 2017-11-08 2018-06-05 上海格蒂电力科技有限公司 A kind of application resource dispatching device and method based on weight
CN109992366A (en) * 2017-12-29 2019-07-09 华为技术有限公司 Method for scheduling task and dispatching device
US20190303308A1 (en) * 2018-04-03 2019-10-03 Vmware, Inc. Distributed storage system and method for managing storage access bandwidth for multiple clients
CN110275777A (en) * 2019-06-10 2019-09-24 广州市九重天信息科技有限公司 Resource scheduling system
CN112131004A (en) * 2020-04-29 2020-12-25 章稳建 Data processing method based on communication of Internet of things and cloud computing server
CN111625331A (en) * 2020-05-20 2020-09-04 拉扎斯网络科技(上海)有限公司 Task scheduling method, device, platform, server and storage medium
US20210392087A1 (en) * 2020-06-16 2021-12-16 Hitachi, Ltd. Computer system and operation management method for computer system
US20220058060A1 (en) * 2020-08-18 2022-02-24 Core Scientific, Inc. Ranking computing resources
CN113469423A (en) * 2021-06-18 2021-10-01 北京明略软件系统有限公司 Resource allocation method, device, storage medium and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李丽娜 (Li Lina): "Research on Elastic Resource Scheduling for Large-Scale Stream Data Processing", China Doctoral Dissertations Full-text Database (Information Science and Technology Series) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116700999A (en) * 2023-08-07 2023-09-05 上海观安信息技术股份有限公司 Data processing method, device, computer equipment and storage medium
CN116700999B (en) * 2023-08-07 2023-10-03 上海观安信息技术股份有限公司 Data processing method, device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN115269206B (en) 2023-01-10

Similar Documents

Publication Publication Date Title
US8812639B2 (en) Job managing device, job managing method and job managing program
US7206890B2 (en) System and method for reducing accounting overhead during memory allocation
US8108874B2 (en) Minimizing variations of waiting times of requests for services handled by a processor
US8078674B2 (en) Server device operating in response to received request
US20060195845A1 (en) System and method for scheduling executables
US20070053381A1 (en) Method, apparatus and computer program product for sharing resources
CN115269206B (en) Data processing method and platform based on resource allocation
US20200104177A1 (en) Resource allocation system, management device, method, and program
CN104850505B (en) The EMS memory management process and system stacked based on chain type
US20180314435A1 (en) Deduplication processing method, and storage device
WO2019205370A1 (en) Electronic device, task distribution method and storage medium
WO2019024235A1 (en) Electronic device, server allocation control method and computer readable storage medium
CN107704322B (en) Request distribution method and device
CN114155026A (en) Resource allocation method, device, server and storage medium
CN113327053A (en) Task processing method and device
CN111858014A (en) Resource allocation method and device
CN115640113A (en) Multi-plane flexible scheduling method
KR20160139082A (en) Method and System for Allocation of Resource and Reverse Auction Resource Allocation in hybrid Cloud Server
CN108845860B (en) Method and device for managing quota and electronic equipment
CN110073321B (en) Storage controller and IO request processing method
CN111813564B (en) Cluster resource management method and device and container cluster management system
CN112948501B (en) Data analysis method, device and system
CN114528109A (en) Resource request method, device and system
CN114157717A (en) Micro-service dynamic current limiting system and method
WO2021017310A1 (en) Electronic voucher generation method and apparatus, computer device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant