CN116820766A - Computer resource distribution system and method based on big data technology - Google Patents
Computer resource distribution system and method based on big data technology
- Publication number: CN116820766A
- Application number: CN202310779463.1A
- Authority: CN (China)
- Legal status: Pending (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abstract
The invention discloses a computer resource distribution system and method based on big data technology, relating to the technical field of computer networks. The method acquires the historical running records of a local computer, sets a running load threshold, extracts the overload running characteristics of the local computer, and obtains a rule model of local overload running; processes that satisfy a target rule are intercepted. A local database dependency index and a liveness index are calculated for the currently running processes and the intercepted processes in the local computer, a process retention index is derived, and alternative processes are screened out preliminarily. From the alternative processes, the process to be offloaded from the local computer to run in a cloud computer is selected, and the instructions transmitted into and out of this target process are managed: instructions entering the target process are marked and forwarded to the cloud process of the target process, and the instructions returned from the cloud process corresponding to the target process are likewise managed.
Description
Technical Field
The invention relates to the technical field of computer networks, in particular to a system and a method for distributing computer resources based on big data technology.
Background
With the development of computer networks, computer information service resources are deployed through network and virtualization technology on resource pools formed of multiple groups of servers. The computing power, storage space and information services provided by the servers are used by users through network connections; cloud computing and cloud storage are typical examples.
When the resources of a local computer are exhausted or about to be exhausted, the reliability of the local computer decreases, and it may lag, freeze or even crash. Expanding the local computer's resources at that point requires a significant amount of time, and tasks on the local computer must be paused while the hardware is physically expanded; on the other hand, preparing excessive local computer resources in advance causes resource waste. A flexible computer resource distribution system and method is therefore needed that satisfies ordinary use of the local computer while flexibly supplementing its resources when they are about to be exhausted.
Disclosure of Invention
The present invention is directed to a system and a method for allocating computer resources based on big data technology, so as to solve the problems set forth in the background art.
In order to solve the technical problems, the invention provides the following technical scheme: a computer resource distribution system and method based on big data technology, the method includes:
step S100: acquiring the historical running records of a local computer, extracting the historical record of each parameter item during the running of the local computer, calculating the running state parameters of the local computer, and collecting these parameters into a running state data set of the local computer;
step S200: setting a running load threshold, extracting the overload running characteristics of the local computer, obtaining a rule model of local overload running, and intercepting, according to the target rules for overload running contained in the rule model, any process that satisfies a target rule;
step S300: calculating a local database dependency index and a liveness index for the currently running processes and the intercepted processes in the local computer, calculating their process retention indexes, sorting these indexes, and selecting alternative processes from the sorted sequence;
step S400: selecting, from the alternative processes, the process to be offloaded from the local computer to run in a cloud computer, setting it as the target process, and loading the target process into the cloud computer;
step S500: managing the instructions transmitted into and out of the target process: the local management module marks instructions entering the target process, forwards them to the cloud process of the target process, and manages the instructions returned from the cloud process corresponding to the target process.
Further, step S100 includes:
step S101: acquiring a history running record of a local computer, and extracting the history record of each parameter item in the running process of the local computer;
step S102: calculating the running state parameter F_i of the local computer at the i-th moment, where F_i = t_i × (n_1 + n_2 + n_3 + … + n_m), t_i represents the response delay of the local computer at the i-th moment, and n_1, n_2, n_3, …, n_m represent the running load rates of the 1st, 2nd, 3rd, …, m-th running parameter items of the local computer respectively; each calculated state parameter corresponds to one process or a combination of several processes;
parameter items representing the running state of the local computer include, for example, CPU load, GPU load, memory load, disk occupancy and network load; the local computer has m running parameter items, and calculating the load rate of each item yields the values n_1, n_2, n_3, …, n_m.
Further, step S200 includes:
step S201: setting a running load threshold and extracting the overload process records of the local computer from its running state data set: when the local computer runs the process combination W, its running state parameter is below the running load threshold, and when it runs process H on top of W, its running state parameter is above the threshold; an overload process record is k(h, β), where h is an overload running characteristic, h = {W, H}, W is a combination of one or more processes, and β is the amount by which the local computer's running state parameter exceeds the running load threshold in that record;
step S202: setting a target process combination X, where X = W + H, obtaining the j historical records of the target process combination X, and calculating the excess expectation β_X of each target process combination X, where β_X is the mathematical expectation of the β values over the j overload process records;
step S203: setting a target rule judgment threshold; when the excess expectation of a target process combination is higher than the target rule judgment threshold, that target process combination is set as a target rule;
step S204: intercepting processes that apply to start according to the target rules: let the currently running processes of the local computer be B_j and let B_e apply to start; if B is the target process combination corresponding to some target rule and B = B_j + B_e, the B_e process is intercepted, where B_j is a combination of one or more processes.
Further, step S300 includes:
step S301: calculating the local database dependency index of the current processes in the local computer: the local database dependency index of a process A running in the local computer is σ_A = R_lA / R_tA, where R_lA represents the amount of data process A reads from the local computer per unit time during its current run and R_tA represents the total amount of data process A reads per unit time; the local database is the data information stored in the local computer;
the data read by a process comes either from the local computer or from a non-local computer; by comparing the sources, a process that reads most of its data from the local computer is better suited to being processed by the local computer, while a process that reads little of its data from the local computer is better suited to being processed by the cloud computer;
step S302: calculating the liveness index of the current processes in the local computer: the liveness index of process A running in the local computer is η_A = S_lA / S_tA, where S_lA represents the time the user operates process A during its current run and S_tA represents the total time process A has been open during its current run;
the operating time of a process covers the user's interaction with it: the time the user spends entering information into the process plus the time spent reading the process's output. Processes the user operates more frequently are kept for local processing, while processes operated less frequently are handed to the cloud computer;
step S303: extracting the historical running records of the intercepted processes from the overall historical running log of the local computer and calculating their local database dependency and liveness indexes by the methods of steps S301 and S302: the local database dependency index of an intercepted process C is σ_C = R̄_lC / R̄_tC, where R̄_lC represents the historical average amount of data process C reads from the local computer per unit time and R̄_tC represents the historical average of the total data process C reads per unit time; the liveness index of the intercepted process C is η_C = S̄_lC / S̄_tC, where S̄_lC represents the operating duration of process C over its historical runs and S̄_tC represents its total historical open duration;
step S304: calculating the process retention index of the currently running and intercepted processes in the local computer: the retention index of a running process A is p_A = σ_A × η_A and that of an intercepted process C is p_C = σ_C × η_C; the process retention indexes are sorted from largest to smallest, and the processes corresponding to the last two (smallest) retention indexes in the sorted sequence are selected as alternative processes.
Further, step S400 includes:
step S401: acquiring instruction communication records of an alternative process and a non-alternative process, calculating the communication density of the process, and acquiring a resource load record of the alternative process in the local computer, wherein the non-alternative process represents a process which is not selected as the alternative process from the processes of the current running process and the intercepted process in the local computer;
step S402: calculating the process load index of the alternative processes: let one of the alternative processes be process D; its process load index is μ_D = λ_1(1 − q_1D) + λ_2·q_2D, where q_1D represents the process communication density of process D, q_1D = g_d / G_d, g_d represents the number of instructions exchanged between process D and the non-alternative processes per unit time, G_d represents the total number of instructions process D handles per unit time, and q_2D represents the load factor of process D, q_2D = u_1 + u_2 + u_3 + … + u_m, where u_1, u_2, u_3, …, u_m represent the running load rates of the 1st, 2nd, 3rd, …, m-th running parameter items of the local computer while process D runs; λ_1 and λ_2 are the coefficients of q_1D and q_2D respectively;
step S403: comparing the process load indexes of the two alternative processes, selecting the one with the larger index as the process to be offloaded from the local computer to the cloud computer, setting it as the target process, loading it into the cloud computer, and creating a cloud process corresponding to the target process;
after the preliminary selection, the two processes at the end of the retention index ranking are set as alternative processes, and the communication relationship between each of them and the remaining non-alternative processes is calculated; this relationship is represented by q_1: a larger q_1 indicates a closer communication relationship between the alternative process and the non-alternative processes, while q_2 represents the amount of local computer resources the process consumes. The alternative process with the less close communication relationship and the larger resource consumption is selected as the process uploaded to the cloud computer.
Further, step S500 includes:
step S501: reserving the instruction incoming interface and the instruction outgoing interface of the target process in the local computer: the instruction incoming interface of the target process in the local computer is set as the local incoming interface, the instruction outgoing interface of the target process in the local computer as the local outgoing interface, the instruction incoming interface of the cloud process in the cloud computer as the cloud incoming interface, and the instruction outgoing interface of the cloud process in the cloud computer as the cloud outgoing interface;
step S502: when detecting that an instruction is input at a local input interface, setting the instruction as a start instruction, marking the start instruction and encrypting the marked instruction;
step S503: transmitting the encrypted starting instruction to a cloud input interface corresponding to the cloud process, decrypting the starting instruction, inputting the starting instruction into the cloud process, and setting the fed-back instruction as a result instruction after cloud process operation;
step S504: encrypting the result instruction sent out from the cloud outgoing interface, forwarding it to the local outgoing interface of the target process, decrypting it, and sending the result instructions out from the local outgoing interface of the target process in the time order of their marks;
the method comprises the steps of reserving an incoming port and an outgoing port of a local process, enabling an instruction of an incoming target process to be incoming according to an original path, marking the instruction, and enabling a receiving sequence of a result instruction to lose a corresponding relation with a starting instruction under the influence of a network environment, and enabling the result instruction to keep a corresponding sequence of a reason corresponding to the starting instruction corresponding to the result instruction when the result instruction is sent out from a local outgoing interface of the target process through the instruction marking.
In order to better implement the method, a computer resource allocation system based on big data technology is also provided, and the system comprises: the system comprises a local computer running state database management module, a process intercepting module, an alternative process selection module and a target process management module, wherein the local computer running state database management module is used for managing a local computer running state database, the process intercepting module is used for intercepting a process, the alternative process selection module is used for selecting an alternative process from a current running process and an intercepted process in a local computer, and the target process management module is used for selecting a target process and managing instructions of the target process.
Further, the local computer running state database management module includes: a response delay monitoring unit, a running load monitoring unit, a running state parameter calculation unit and a running state data management unit, wherein the response delay monitoring unit is used for monitoring the response delay of the local computer, the running load monitoring unit is used for monitoring the running load rates of all running parameter items of the local computer, the running state parameter calculation unit is used for calculating the running state parameters, and the running state data management unit is used for managing the running state data.
Further, the alternative process selection module includes: a local database dependency index calculation unit, a liveness index calculation unit, a process retention index calculation unit and an alternative process selection unit, wherein the local database dependency index calculation unit is used for calculating the local database dependency index, the liveness index calculation unit is used for calculating the liveness index, the process retention index calculation unit is used for calculating the process retention index, and the alternative process selection unit is used for selecting the alternative processes.
Further, the target process management module includes a cloud process instruction transmission interface management unit, which is used for managing the instructions transmitted to and from the cloud process.
Compared with the prior art, the invention has the following beneficial effects: by collecting historical data, the method discovers the process combinations under which the local computer runs overloaded and intercepts processes likely to cause local overload; a process evaluation mechanism is introduced that evaluates the liveness and resource occupation of processes, and processes with lower liveness and larger resource occupation are selected for transfer to the cloud computer for processing. This reduces the resource load of the local computer while minimizing the impact of process migration on the user, and the process management module keeps the instruction transmission mode of the migrated process consistent with the original.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
FIG. 1 is a schematic diagram of a computer resource allocation system based on big data technology according to the present invention;
FIG. 2 is a flow chart of a method for allocating computer resources based on big data technology according to the present invention;
FIG. 3 is a schematic diagram of the target process management of the computer resource allocation system based on big data technology of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, 2 and 3, the present invention provides the following technical solutions:
step S100: acquiring the historical running records of a local computer, extracting the historical record of each parameter item during the running of the local computer, calculating the running state parameters of the local computer, and collecting these parameters into a running state data set of the local computer;
wherein, step S100 includes:
step S101: acquiring a history running record of a local computer, and extracting the history record of each parameter item in the running process of the local computer;
step S102: calculating the running state parameter F_i of the local computer at the i-th moment, where F_i = t_i × (n_1 + n_2 + n_3 + … + n_m), t_i represents the response delay of the local computer at the i-th moment, and n_1, n_2, n_3, …, n_m represent the running load rates of the 1st, 2nd, 3rd, …, m-th running parameter items of the local computer respectively; each calculated state parameter corresponds to one process or a combination of several processes.
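As an illustrative sketch (not part of the patent text; the function name, units and sample values are assumptions), the calculation of step S102 can be expressed as:

```python
def running_state_parameter(response_delay, load_rates):
    """Compute F_i = t_i * (n_1 + ... + n_m) for one sampling moment.

    response_delay: t_i, the response delay of the local computer (hypothetical units).
    load_rates: [n_1, ..., n_m], load rates of the m running parameter items
                (e.g. CPU, GPU, memory, disk, network), each in [0, 1].
    """
    return response_delay * sum(load_rates)

# Hypothetical sample: 0.2 s response delay, five running parameter items
f_i = running_state_parameter(0.2, [0.6, 0.1, 0.5, 0.3, 0.2])
print(f_i)
```

A larger F_i indicates a more heavily loaded moment; the series of F_i values over time forms the running state data set of step S100.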
Step S200: setting a running load threshold, extracting the overload running characteristics of the local computer, obtaining a rule model of local overload running, and intercepting, according to the target rules for overload running contained in the rule model, any process that satisfies a target rule;
wherein, step S200 includes:
step S201: setting a running load threshold and extracting the overload process records of the local computer from its running state data set: when the local computer runs the process combination W, its running state parameter is below the running load threshold, and when it runs process H on top of W, its running state parameter is above the threshold; an overload process record is k(h, β), where h is an overload running characteristic, h = {W, H}, W is a combination of one or more processes, and β is the amount by which the local computer's running state parameter exceeds the running load threshold in that record;
step S202: setting a target process combination X, where X = W + H, obtaining the j historical records of the target process combination X, and calculating the excess expectation β_X of each target process combination X, where β_X is the mathematical expectation of the β values over the j overload process records;
step S203: setting a target rule judgment threshold; when the excess expectation of a target process combination is higher than the target rule judgment threshold, that target process combination is set as a target rule;
step S204: intercepting processes that apply to start according to the target rules: let the currently running processes of the local computer be B_j and let B_e apply to start; if B is the target process combination corresponding to some target rule and B = B_j + B_e, the B_e process is intercepted, where B_j is a combination of one or more processes;
let a target rule be B = {B_1, B_2, B_3, B_4}, where B_1, B_2, B_3 and B_4 represent 4 different processes: when the local computer runs B_1, B_2, B_3 and B_4 simultaneously, its running state parameter is greater than the running load threshold, but when it runs any three of B_1, B_2, B_3 and B_4 simultaneously, its running state parameter is less than the running load threshold;
example 1 of interception: when the local computer is running B_1, B_2 and B_3 simultaneously and the B_4 process applies to run, the B_4 process is intercepted;
example 2 of interception: when the local computer is running B_1, B_3 and B_4 simultaneously and the B_2 process applies to run, the B_2 process is intercepted.
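A minimal sketch of the interception check of step S204 (not from the patent; the process names, rule data and function signature are assumptions):

```python
def should_intercept(running, applying, target_rules):
    """Intercept the applying process B_e if the running processes B_j plus
    B_e would cover a target process combination B.

    running: set of currently running process names (B_j).
    applying: name of the process applying to start (B_e).
    target_rules: list of sets, each a target process combination B.
    """
    candidate = running | {applying}
    # Intercept when some rule is completed by the applying process
    return any(applying in rule and rule <= candidate for rule in target_rules)

rule_b = {"B1", "B2", "B3", "B4"}  # hypothetical target rule
# Example 1: B1, B2, B3 running and B4 applies -> intercepted
print(should_intercept({"B1", "B2", "B3"}, "B4", [rule_b]))  # True
# Only B1, B2 running and B4 applies -> the rule is not completed
print(should_intercept({"B1", "B2"}, "B4", [rule_b]))  # False
```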
Step S300: calculating a local database dependency index and a liveness index for the currently running processes and the intercepted processes in the local computer, calculating their process retention indexes, sorting these indexes, and selecting alternative processes from the sorted sequence;
wherein, step S300 includes:
step S301: calculating the local database dependency index of the current processes in the local computer: the local database dependency index of a process A running in the local computer is σ_A = R_lA / R_tA, where R_lA represents the amount of data process A reads from the local computer per unit time during its current run and R_tA represents the total amount of data process A reads per unit time; the local database is the data information stored in the local computer;
step S302: calculating the liveness index of the current processes in the local computer: the liveness index of process A running in the local computer is η_A = S_lA / S_tA, where S_lA represents the time the user operates process A during its current run and S_tA represents the total time process A has been open during its current run;
step S303: extracting the historical running records of the intercepted processes from the overall historical running log of the local computer and, following the methods of steps S301 and S302, calculating the local database dependency index and liveness index of the intercepted processes: the local database dependency index of an intercepted process C is σ_C = R̄_lC / R̄_tC, where R̄_lC represents the historical average amount of data process C reads from the local computer per unit time and R̄_tC represents the historical average of the total data process C reads per unit time; the liveness index of the intercepted process C is η_C = S̄_lC / S̄_tC, where S̄_lC represents the operating duration of process C over its historical runs and S̄_tC represents its total historical open duration;
step S304: calculating the process retention index of the currently running and intercepted processes in the local computer: the retention index of a running process A is p_A = σ_A × η_A and that of an intercepted process C is p_C = σ_C × η_C; the process retention indexes are sorted from largest to smallest, and the processes corresponding to the last two (smallest) retention indexes in the sorted sequence are selected as alternative processes.
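The retention-index calculation and preliminary screening of steps S301 to S304 can be sketched as follows (the process names and measurements are hypothetical, not from the patent):

```python
def retention_index(local_read, total_read, operating_time, open_time):
    """p = sigma * eta: local database dependency index times liveness index."""
    sigma = local_read / total_read    # share of data read from the local computer
    eta = operating_time / open_time   # share of open time the user actively operates
    return sigma * eta

# Hypothetical per-process measurements:
# (data read locally, total data read, operating time, open time)
processes = {
    "A": (80, 100, 30, 60),   # sigma 0.8, eta 0.5 -> p = 0.40
    "C": (20, 100, 10, 50),   # sigma 0.2, eta 0.2 -> p = 0.04
    "E": (50, 100, 40, 80),   # sigma 0.5, eta 0.5 -> p = 0.25
}
indexes = {name: retention_index(*v) for name, v in processes.items()}
# Sort from largest to smallest; the last two (smallest retention) processes
# become the alternative processes considered for offloading
ranked = sorted(indexes, key=indexes.get, reverse=True)
alternatives = ranked[-2:]
print(alternatives)  # ['E', 'C']
```

A small retention index marks a process that reads little local data and is rarely operated by the user, which is exactly the kind of process the method prefers to offload.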
Step S400: selecting a process which is unloaded from the local into the cloud computer to run from the alternative process, setting the process which is unloaded from the local into the cloud computer to run as a target process, and loading the target process into the cloud computer;
wherein, step S400 includes:
step S401: acquiring instruction communication records of an alternative process and a non-alternative process, calculating the communication density of the process, and acquiring a resource load record of the alternative process in the local computer, wherein the non-alternative process represents a process which is not selected as the alternative process from the processes of the current running process and the intercepted process in the local computer;
step S402: calculating the process load index of the alternative processes: let one of the alternative processes be process D; its process load index is μ_D = λ_1(1 − q_1D) + λ_2·q_2D, where q_1D represents the process communication density of process D, q_1D = g_d / G_d, g_d represents the number of instructions exchanged between process D and the non-alternative processes per unit time, G_d represents the total number of instructions process D handles per unit time, and q_2D represents the load factor of process D, q_2D = u_1 + u_2 + u_3 + … + u_m, where u_1, u_2, u_3, …, u_m represent the running load rates of the 1st, 2nd, 3rd, …, m-th running parameter items of the local computer while process D runs; λ_1 and λ_2 are the coefficients of q_1D and q_2D respectively;
step S403: comparing the process load indexes of two processes in the alternative processes, selecting a process with a larger process load index as a process which is unloaded from the local into the cloud computer to run, setting the process which is unloaded from the local into the cloud computer to run as a target process, loading the target process into the cloud computer, and creating a cloud process corresponding to the target process;
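A sketch of the load-index comparison of steps S402 and S403 (the coefficient values λ_1 = λ_2 = 0.5 and all measurements are assumptions for illustration):

```python
def process_load_index(comms_with_locals, total_instructions, load_rates,
                       lam1=0.5, lam2=0.5):
    """mu_D = lam1 * (1 - q1) + lam2 * q2.

    q1 = g_d / G_d: share of the process's instructions exchanged with the
         non-alternative (locally retained) processes; higher means tighter coupling.
    q2 = sum of the m running load rates the process imposes on the local machine.
    lam1, lam2: weighting coefficients (hypothetical values).
    """
    q1 = comms_with_locals / total_instructions
    q2 = sum(load_rates)
    return lam1 * (1 - q1) + lam2 * q2

# Two hypothetical alternative processes; the larger index is offloaded
mu_d = process_load_index(10, 100, [0.4, 0.2, 0.1])   # loosely coupled, heavy
mu_e = process_load_index(80, 100, [0.1, 0.1, 0.05])  # tightly coupled, light
target = "D" if mu_d > mu_e else "E"
print(target)  # D
```

Because q_1D enters as (1 − q_1D), a loosely coupled process and a resource-hungry process both push μ_D up, matching the selection rationale described above.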
The cloud computer may be, for example, a remote server system, a remote computer, or a virtual computer.
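The load-index computation and target selection of steps S401-S403 can be sketched as follows. The function names, the coefficient values λ1 = λ2 = 0.5, and the sample figures are illustrative assumptions, not specified by the patent:

```python
# Sketch of steps S401-S403: compute mu_D = lambda1*(1 - q1D) + lambda2*q2D
# for each alternative process and pick the one with the larger index as the
# target process to offload into the cloud computer.

def process_load_index(g_d, G_d, load_rates, lam1=0.5, lam2=0.5):
    """g_d: instructions exchanged with non-alternative processes per unit time;
    G_d: total instructions processed by the process per unit time;
    load_rates: running load rates u_1..u_m of the local computer's
    operation parameter items while the process runs."""
    q1 = g_d / G_d          # process communication density q1D
    q2 = sum(load_rates)    # load factor q2D
    return lam1 * (1 - q1) + lam2 * q2

def select_target(candidates):
    """candidates: {process_name: (g_d, G_d, load_rates)}.
    Step S403: the process with the larger load index becomes the target."""
    return max(candidates, key=lambda p: process_load_index(*candidates[p]))

candidates = {
    "D1": (20, 100, [0.30, 0.25, 0.10]),   # mu = 0.5*0.8 + 0.5*0.65 = 0.725
    "D2": (60, 100, [0.20, 0.15, 0.05]),   # mu = 0.5*0.4 + 0.5*0.40 = 0.400
}
target = select_target(candidates)  # "D1": less local chatter, heavier load
```

A process that communicates little with local processes (small q1D) and burdens the local machine heavily (large q2D) scores highest, which matches the intent of offloading the least locally entangled, most expensive process.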
Step S500: managing the instructions incoming to and outgoing from the target process: the local management module marks the instructions incoming to the target process, forwards them to the cloud process of the target process, and manages the instructions returned from the cloud process corresponding to the target process;
wherein, step S500 includes:
step S501: reserving the instruction incoming interface and the instruction outgoing interface of the target process in the local computer, setting the instruction incoming interface of the target process in the local computer as the local incoming interface, setting the instruction outgoing interface of the target process in the local computer as the local outgoing interface, setting the instruction incoming interface of the cloud process in the cloud computer as the cloud incoming interface, and setting the instruction outgoing interface of the cloud process in the cloud computer as the cloud outgoing interface;
step S502: when an instruction is detected at the local incoming interface, setting the instruction as a start instruction, marking the start instruction, and encrypting the marked instruction;
step S503: transmitting the encrypted start instruction to the cloud incoming interface corresponding to the cloud process, decrypting the start instruction, inputting it into the cloud process, and, after the cloud process runs, setting the fed-back instruction as the result instruction;
step S504: encrypting the result instruction transmitted from the cloud outgoing interface, forwarding it to the local outgoing interface of the target process, decrypting it, and transmitting the result instructions from the local outgoing interface of the target process according to the marked time sequence.
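Steps S501-S504 can be sketched as a local manager that marks incoming instructions with sequence numbers and releases the results in marked order, even if the cloud returns them out of order. The class and method names are illustrative, and the XOR "cipher" merely stands in for the encryption step, whose algorithm the patent does not specify:

```python
import heapq

KEY = 0x5A  # placeholder key for the stand-in XOR cipher

def encrypt(data: bytes) -> bytes:
    return bytes(b ^ KEY for b in data)

decrypt = encrypt  # XOR with the same key is its own inverse

class LocalManager:
    """Marks instructions entering the local incoming interface (S502) and
    emits result instructions from the local outgoing interface in the
    marked time sequence (S504)."""
    def __init__(self):
        self.seq = 0        # next mark to assign
        self.next_out = 0   # next mark allowed to leave
        self.pending = []   # min-heap of (mark, decrypted result)

    def incoming(self, instruction: bytes):
        mark = self.seq
        self.seq += 1
        return mark, encrypt(instruction)  # forwarded to the cloud (S503)

    def cloud_result(self, mark: int, encrypted_result: bytes):
        heapq.heappush(self.pending, (mark, decrypt(encrypted_result)))
        released = []
        while self.pending and self.pending[0][0] == self.next_out:
            released.append(heapq.heappop(self.pending)[1])
            self.next_out += 1
        return released  # results leave in marked order

mgr = LocalManager()
m0, e0 = mgr.incoming(b"op0")
m1, e1 = mgr.incoming(b"op1")
held = mgr.cloud_result(m1, encrypt(b"r1"))   # r1 arrives early: held back
released = mgr.cloud_result(m0, encrypt(b"r0"))
```

Here `held` is empty because the result marked 1 cannot leave before the result marked 0, and `released` then contains both results in order.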
The system comprises: the system comprises a local computer running state database management module, a process intercepting module, an alternative process selection module and a target process management module, wherein the local computer running state database management module is used for managing a local computer running state database, the process intercepting module is used for intercepting a process, the alternative process selection module is used for selecting an alternative process from a current running process and an intercepted process in a local computer, and the target process management module is used for selecting a target process and managing instructions of the target process.
Fig. 3 shows an example of target process management, where y is a target process in the local computer and Y is the process in the cloud computer corresponding to the y process. An instruction sent to the y process in the local computer is forwarded through the target process instruction incoming interface management unit to the cloud process instruction incoming interface management unit, the Y process processes the instruction in place of the y process, the processing result is forwarded through the cloud process instruction outgoing interface management unit to the target process instruction outgoing interface management unit, and the fed-back result instruction is used for the operation of subsequent processes.
Wherein the local computer running state database management module comprises: a response time delay monitoring unit, an operation load monitoring unit, an operation state parameter calculation unit and an operation state data management unit, wherein the response time delay monitoring unit is used for monitoring the response time delay of the local computer, the operation load monitoring unit is used for monitoring the operation load rates of all operation parameter items of the local computer, the operation state parameter calculation unit is used for calculating the operation state parameters, and the operation state data management unit is used for managing the operation state data.
Wherein the alternative process selection module comprises: a local database dependency index calculation unit, a liveness index calculation unit, a process retention index calculation unit and an alternative process selection unit, wherein the local database dependency index calculation unit is used for calculating the local database dependency index, the liveness index calculation unit is used for calculating the liveness index, the process retention index calculation unit is used for calculating the process retention index, and the alternative process selection unit is used for selecting the alternative processes.
Wherein the target process management module comprises: a target process instruction incoming interface management unit, a target process instruction outgoing interface management unit, a cloud process instruction incoming interface management unit and a cloud process instruction outgoing interface management unit, wherein the target process instruction interface management units are used for managing the instructions entering and leaving the target process in the local computer, and the cloud process instruction interface management units are used for managing the instructions entering and leaving the cloud process.
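The selection performed by the alternative process selection module (the retention-index computation of steps S301-S304) can be sketched as follows; the process names and the sample readings are illustrative assumptions:

```python
# Sketch of steps S301-S304: retention index p = sigma * eta, where
# sigma = (data read from the local computer) / (total data read) and
# eta = (operated duration) / (total open duration). The indexes are sorted
# from large to small and the two smallest become the alternative processes.

def retention_index(read_local, read_total, operated, opened):
    sigma = read_local / read_total   # local database dependency index
    eta = operated / opened           # liveness index
    return sigma * eta

processes = {
    "A": retention_index(80, 100, 50, 60),   # strongly local, very active
    "B": retention_index(30, 100, 10, 60),   # weakly local, mostly idle
    "C": retention_index(90, 100, 55, 60),
    "D": retention_index(40, 100, 20, 60),
}
ranked = sorted(processes, key=processes.get, reverse=True)  # large -> small
alternatives = ranked[-2:]  # the two smallest retention indexes
```

Processes that depend little on locally stored data and are rarely active keep the smallest retention indexes, so they are the cheapest candidates to move off the local computer.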
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Finally, it should be noted that the foregoing description covers only preferred embodiments of the present invention, and the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in the foregoing embodiments or substitute equivalents for some of their technical features. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.
Claims (10)
1. A computer resource allocation method based on big data technology, characterized by comprising the following steps:
step S100: acquiring a historical operation record of a local computer, extracting the historical record of each parameter item in the operation process of the local computer, calculating the operation state parameters of the local computer, and collecting the operation state parameters of the local computer to obtain an operation state data set of the local computer;
step S200: setting an operation load threshold, extracting the overload operation characteristics of the local computer, obtaining a rule model of the overload operation of the local computer, and intercepting, according to a target rule of the overload operation state of the local computer in the rule model, a process that meets the target rule;
step S300: calculating the local database dependency index and the liveness index of the currently running processes and intercepted processes in the local computer, calculating the process retention indexes, sequencing the process retention indexes of the currently running processes and intercepted processes, and selecting the alternative processes from the sequence;
step S400: selecting, from the alternative processes, a process to be offloaded from the local computer to run in the cloud computer, setting that process as the target process, and loading the target process into the cloud computer;
step S500: managing the instructions incoming to and outgoing from the target process: the local management module marks the instructions incoming to the target process, forwards them to the cloud process of the target process, and manages the instructions returned from the cloud process corresponding to the target process.
2. The method for allocating computer resources based on big data technology according to claim 1, wherein step S100 comprises:
step S101: acquiring a history running record of a local computer, and extracting the history record of each parameter item in the running process of the local computer;
step S102: calculating the operation state parameter Fi of the local computer at the i-th moment, wherein Fi = ti × (n1 + n2 + n3 + … + nm), where ti represents the response time delay of the local computer at the i-th moment, and n1, n2, n3, …, nm respectively represent the operation load rates of the 1st, 2nd, 3rd, …, m-th operation parameter items of the local computer; the calculation result of each state parameter corresponds to one process or a combination of several processes.
3. The method for allocating computer resources based on big data technology according to claim 2, wherein step S200 comprises:
step S201: setting an operation load threshold, and extracting the overload process records of the local computer from the operation state data set of the local computer, wherein when the local computer runs a process combination W, its operation state parameter is lower than the operation load threshold, and when the local computer additionally runs a process H on the basis of running W, its operation state parameter is higher than the operation load threshold; a certain overload process record is k(h, β), where h is a certain overload operation characteristic, h = {W, H}, W is a combination of one or more processes, and β is the amount by which the operation state parameter of the local computer in the overload process record exceeds the operation load threshold;
step S202: setting a certain target process combination X, wherein X = W + H; given j historical records of the target process combination X, calculating the excess expectation βX of each target process combination X, where βX is the mathematical expectation of the β values in the j overload process records;
step S203: setting a target rule judgment threshold, and when the excess expectation of a target process combination is higher than the target rule judgment threshold, setting that target process combination as a target rule;
step S204: intercepting an application-started process according to the target rule: the currently running processes of the local computer are Bj, and at this moment a process Be applies to start, wherein B is the target process combination corresponding to a certain target rule and B = Bj + Be; in this case the process Be is intercepted, wherein Bj is a combination of one or more processes.
4. The method for allocating computer resources based on big data technology according to claim 3, wherein step S300 comprises:
step S301: calculating the local database dependency index of a currently running process in the local computer, wherein the local database dependency index of a running process A in the local computer is σA = RlA/RtA, where RlA represents the data quantity read from the local computer per unit time during the current run of process A, RtA represents the total data quantity read per unit time by process A, and the local database is the data information stored in the local computer;
step S302: calculating the liveness index of a currently running process in the local computer, wherein the liveness index of the running process A in the local computer is ηA = SlA/StA, where SlA represents the operated duration of process A during the current run, and StA represents the total open duration of process A during the current run;
step S303: extracting the historical running records of the intercepted processes from the total historical running log of the local computer, and calculating the local database dependency index and the liveness index of the intercepted processes according to the methods of steps S301 and S302, wherein the local database dependency index of an intercepted process C is σC = R̄lC/R̄tC, where R̄lC represents the historical average of the data quantity read from the local computer per unit time by process C, and R̄tC represents the historical average of the total data quantity read per unit time by process C; the liveness index of the intercepted process C is ηC = S̄lC/S̄tC, where S̄lC represents the historical average operated duration of process C, and S̄tC represents the historical average total running duration of process C;
step S304: calculating the process retention indexes of the currently running processes and the intercepted processes in the local computer, wherein the process retention index of running process A is pA = σA × ηA and the process retention index of intercepted process C is pC = σC × ηC; sorting the process retention indexes from large to small, and selecting the processes corresponding to the last two process retention indexes in the sorted sequence as the alternative processes.
5. The method for allocating computer resources based on big data technology as defined in claim 4, wherein step S400 comprises:
step S401: acquiring the instruction communication records between the alternative processes and the non-alternative processes, calculating the process communication density, and acquiring the resource load records of the alternative processes in the local computer, wherein a non-alternative process is any currently running or intercepted process in the local computer that was not selected as an alternative process;
step S402: calculating a process load index for each alternative process, wherein, taking one alternative process as process D, the process load index of process D is μD = λ1(1 − q1D) + λ2·q2D, where q1D represents the process communication density of process D, q1D = gd/Gd, gd represents the number of instructions communicated between process D and the non-alternative processes in a unit time, Gd represents the total number of instructions processed by process D in a unit time, q2D represents the load factor of process D, q2D = u1 + u2 + u3 + … + um, where u1, u2, u3, …, um respectively represent the running load rates of the 1st, 2nd, 3rd, …, m-th running parameter items of the local computer while process D runs, and λ1 and λ2 respectively represent the coefficients of q1D and q2D;
step S403: comparing the process load indexes of the two alternative processes, selecting the process with the larger process load index as the process to be offloaded from the local computer into the cloud computer to run, setting that process as the target process, loading the target process into the cloud computer, and creating a cloud process corresponding to the target process.
6. The method for allocating computer resources based on big data technology according to claim 5, wherein step S500 comprises:
step S501: reserving the instruction incoming interface and the instruction outgoing interface of the target process in the local computer, setting the instruction incoming interface of the target process in the local computer as the local incoming interface, setting the instruction outgoing interface of the target process in the local computer as the local outgoing interface, setting the instruction incoming interface of the cloud process in the cloud computer as the cloud incoming interface, and setting the instruction outgoing interface of the cloud process in the cloud computer as the cloud outgoing interface;
step S502: when an instruction is detected at the local incoming interface, setting the instruction as a start instruction, marking the start instruction, and encrypting the marked instruction;
step S503: transmitting the encrypted start instruction to the cloud incoming interface corresponding to the cloud process, decrypting the start instruction, inputting it into the cloud process, and, after the cloud process runs, setting the fed-back instruction as the result instruction;
step S504: encrypting the result instruction transmitted from the cloud outgoing interface, forwarding it to the local outgoing interface of the target process, decrypting it, and transmitting the result instructions from the local outgoing interface of the target process according to the marked time sequence.
7. A computer resource allocation system for use in a big data technology based computer resource allocation method according to any of claims 1-6, the system comprising: the system comprises a local computer running state database management module, a process intercepting module, an alternative process selection module and a target process management module, wherein the local computer running state database management module is used for managing a local computer running state database, the process intercepting module is used for intercepting a process, the alternative process selection module is used for selecting an alternative process from a current running process and an intercepted process in a local computer, and the target process management module is used for selecting a target process and managing instructions of the target process.
8. The computer resource allocation system of claim 7, wherein the local computer running state database management module comprises: a response time delay monitoring unit, an operation load monitoring unit, an operation state parameter calculation unit and an operation state data management unit, wherein the response time delay monitoring unit is used for monitoring the response time delay of the local computer, the operation load monitoring unit is used for monitoring the operation load rates of all operation parameter items of the local computer, the operation state parameter calculation unit is used for calculating the operation state parameters, and the operation state data management unit is used for managing the operation state data.
9. The computer resource allocation system of claim 8, wherein the alternative process selection module comprises: a local database dependency index calculation unit, a liveness index calculation unit, a process retention index calculation unit and an alternative process selection unit, wherein the local database dependency index calculation unit is used for calculating the local database dependency index, the liveness index calculation unit is used for calculating the liveness index, the process retention index calculation unit is used for calculating the process retention index, and the alternative process selection unit is used for selecting the alternative processes.
10. The computer resource allocation system of claim 9, wherein the target process management module comprises: a target process instruction incoming interface management unit, a target process instruction outgoing interface management unit, a cloud process instruction incoming interface management unit and a cloud process instruction outgoing interface management unit, wherein the target process instruction interface management units are used for managing the instructions entering and leaving the target process in the local computer, and the cloud process instruction interface management units are used for managing the instructions entering and leaving the cloud process.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310779463.1A CN116820766A (en) | 2023-06-29 | 2023-06-29 | Computer resource distribution system and method based on big data technology |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116820766A true CN116820766A (en) | 2023-09-29 |
Family
ID=88118013
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310779463.1A Pending CN116820766A (en) | 2023-06-29 | 2023-06-29 | Computer resource distribution system and method based on big data technology |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116820766A (en) |
Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010182287A (en) * | 2008-07-17 | 2010-08-19 | Steven C Kays | Intelligent adaptive design |
US20110019715A1 (en) * | 2009-07-24 | 2011-01-27 | At&T Mobility Ii Llc | Asymmetrical receivers for wireless communication |
CN102479108A (en) * | 2010-11-26 | 2012-05-30 | 中国科学院声学研究所 | Terminal resource management system for multi-application process embedded system and method |
US20150095296A1 (en) * | 2013-09-27 | 2015-04-02 | Ebay Inc. | Method and apparatus for a data confidence index |
US20160283274A1 (en) * | 2015-03-27 | 2016-09-29 | Commvault Systems, Inc. | Job management and resource allocation |
US20160292608A1 (en) * | 2015-04-03 | 2016-10-06 | Alibaba Group Holding Limited | Multi-cluster management method and device |
US10489215B1 (en) * | 2016-11-02 | 2019-11-26 | Nutanix, Inc. | Long-range distributed resource planning using workload modeling in hyperconverged computing clusters |
CN111104208A (en) * | 2019-11-15 | 2020-05-05 | 深圳市优必选科技股份有限公司 | Process scheduling management method and device, computer equipment and storage medium |
WO2020125716A1 (en) * | 2018-12-21 | 2020-06-25 | 中兴通讯股份有限公司 | Method for realizing network optimization and related device |
US20200401938A1 (en) * | 2019-05-29 | 2020-12-24 | The Board Of Trustees Of The Leland Stanford Junior University | Machine learning based generation of ontology for structural and functional mapping |
CN112214388A (en) * | 2020-11-04 | 2021-01-12 | 腾讯科技(深圳)有限公司 | Memory monitoring method, device, equipment and computer readable storage medium |
WO2021136137A1 (en) * | 2019-12-31 | 2021-07-08 | 华为技术有限公司 | Resource scheduling method and apparatus, and related device |
CN114781901A (en) * | 2022-05-07 | 2022-07-22 | 新疆云计算信息科技有限公司 | Method for analyzing economic operation monitoring index |
CN114816763A (en) * | 2022-05-27 | 2022-07-29 | 江苏控智电子科技有限公司 | Computer resource allocation system and method adopting big data technology |
CN115562941A (en) * | 2022-10-14 | 2023-01-03 | 刘祥金 | Big data-based monitoring system and method for computer resource allocation |
US20230091063A1 (en) * | 2021-09-17 | 2023-03-23 | The Toronto-Dominion Bank | Systems and methods for real-time processing of resource requests |
CN116302518A (en) * | 2023-02-24 | 2023-06-23 | 阿里云计算有限公司 | Cloud resource allocation processing method, device and system, storage medium and processor |
Non-Patent Citations (4)
Title |
---|
J. Wang, J. Cao, S. Wang, Z. Yao and W. Li: "IRDA: Incremental Reinforcement Learning for Dynamic Resource Allocation", IEEE Transactions on Big Data, 20 April 2020 (2020-04-20) *
T. Kim and J. M. Chang: "Profitable and Energy-Efficient Resource Optimization for Heterogeneous Cloud-Based Radio Access Networks", IEEE Access, 13 March 2019 (2019-03-13) *
LI Wensheng: "Research on the Mechanism of Interaction between Distributed Resources and the Power Grid and Its Coordinated Scheduling Technology", China Excellent Master's Theses Electronic Journal, 15 June 2013 (2013-06-15) *
LI Xiaofei; GUAN Xiaoqian: "Research on the Ubiquitous Learning Environment under the Integration of Information Technology and Electronic Books", Journal of Jiangxi Radio & TV University, no. 04, 27 November 2018 (2018-11-27) *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022037337A1 (en) | Distributed training method and apparatus for machine learning model, and computer device | |
TWI620075B (en) | Server and cloud computing resource optimization method thereof for cloud big data computing architecture | |
CN107621973B (en) | Cross-cluster task scheduling method and device | |
KR20200139780A (en) | Graph data processing method, method and device for publishing graph data calculation tasks, storage medium and computer apparatus | |
KR20120102664A (en) | Allocating storage memory based on future use estimates | |
CN110764714B (en) | Data processing method, device and equipment and readable storage medium | |
CN115134368B (en) | Load balancing method, device, equipment and storage medium | |
CN111143039B (en) | Scheduling method and device of virtual machine and computer storage medium | |
CN108153594B (en) | Resource fragment sorting method of artificial intelligence cloud platform and electronic equipment | |
CN113672375B (en) | Resource allocation prediction method, device, equipment and storage medium | |
CN115033340A (en) | Host selection method and related device | |
CN108415962A (en) | A kind of cloud storage system | |
US7096335B2 (en) | Structure and method for efficient management of memory resources | |
CN110602207A (en) | Method, device, server and storage medium for predicting push information based on off-network | |
CN110308901A (en) | Handle data variable method, apparatus, equipment and storage medium in front end page | |
CN116566696B (en) | Security assessment system and method based on cloud computing | |
CN116089477B (en) | Distributed training method and system | |
CN116820766A (en) | Computer resource distribution system and method based on big data technology | |
CN115827178A (en) | Edge calculation task allocation method and device, computer equipment and related medium | |
CN112685157B (en) | Task processing method, device, computer equipment and storage medium | |
CN113347238A (en) | Message partitioning method, system, device and storage medium based on block chain | |
CN110348681B (en) | Power CPS dynamic load distribution method | |
US20060015593A1 (en) | Three dimensional surface indicating probability of breach of service level | |
CN111598390A (en) | Server high availability evaluation method, device, equipment and readable storage medium | |
CN111049919B (en) | User request processing method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||