CN114995984B - Distributed super-concurrent cloud computing system - Google Patents
- Publication number
- CN114995984B (application CN202210844330.3A / CN202210844330A)
- Authority
- CN
- China
- Prior art keywords
- task
- cloud computing
- computing
- data
- concurrent
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4812—Task transfer initiation or dispatching by interrupt, e.g. masked
- G06F9/4831—Task transfer initiation or dispatching by interrupt, e.g. masked with variable priority
- G06F9/4837—Task transfer initiation or dispatching by interrupt, e.g. masked with variable priority time dependent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/5018—Thread allocation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/5021—Priority
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention provides a distributed super-concurrent cloud computing system comprising a task receiving module, a task monitoring module, a computing processing module, a buffer area, a task storage module and a database, wherein the task receiving module is used for receiving the tasks to be processed. The cloud computing task requests received by the server are monitored in real time, the priority of each task is obtained, and a task distribution sequence is created according to the priority and the task release time. A process state network is created to fit a process state function, which is obtained from the state values of each process as they change over time; a process state space is established from the state function to obtain a process decision, and tasks are allocated according to that decision. The invention solves the problems that, in the prior art, super-concurrent data processing systems under cloud computing cannot perform self-operation on concurrent data or effectively avoid the occurrence of concurrent events, so that the concurrency probability remains high, large amounts of data in the system cannot be computed rapidly, and the cloud computing efficiency for super-concurrent data needs to be improved.
Description
Technical Field
The invention relates to the technical field of cloud computing, in particular to a distributed super-concurrent cloud computing system.
Background
Cloud computing is a form of distributed computing in which a huge data-processing program is decomposed, through the network cloud, into countless small programs that are then processed and analyzed by a system composed of multiple servers, with the results returned to the user. In the prior art, super-concurrent data processing systems under cloud computing cannot perform self-operation on concurrent data or effectively avoid the occurrence of concurrent events, so the concurrency probability remains high, large amounts of data in the system cannot be computed rapidly, and the cloud computing efficiency for super-concurrent data needs to be improved.
Chinese patent application No. CN201811174061.4, published 2019.02.01, discloses a concurrent data processing method based on cloud computing, comprising a cloud processing unit, an execution recording module, a data monitoring module, a process monitoring module, a data correcting module, a timing unit, a controller, a display module and a data input module. The data monitoring module obtains the priority number corresponding to a to-be-processed process information group: an annotation SSS is appended to a to-be-processed process requiring urgent resolution, an annotation SS is appended under normal (non-urgent) conditions, and an annotation S is appended when resolution may be delayed; the priority number Hi of each to-be-processed process is obtained from its annotation, and the data monitoring module reads the corresponding annotations to determine the urgency of the process to be processed.
However, in implementing the technical solution of the above application, the inventor of the present application found that it has at least the following technical problems: the self-operation of concurrent data cannot be performed in the super-concurrent data processing system under cloud computing, and the occurrence of concurrent events cannot be effectively avoided, so that the concurrency probability remains high, a large amount of data in the system cannot be computed rapidly, and the cloud computing efficiency of the super-concurrent data needs to be improved.
Disclosure of Invention
The invention provides a distributed super-concurrent cloud computing system, which solves the problems that, in the prior art, super-concurrent data processing systems under cloud computing cannot perform self-operation on concurrent data or effectively avoid the occurrence of concurrent events, so that the concurrency probability remains high, large amounts of data in the system cannot be computed rapidly, and the cloud computing efficiency for super-concurrent data needs to be improved. Orderly processing of cloud computing tasks is realized, tasks are reasonably allocated, and the cloud computing efficiency of super-concurrent data is improved.
The invention provides a distributed super-concurrent cloud computing system, which specifically comprises the following technical scheme:
a distributed super-concurrent cloud computing system, comprising:
the system comprises a task receiving module, a task monitoring module, a calculation processing module, a buffer area, a task storage module and a database;
the task monitoring module is used for monitoring the cloud computing task requests received by the server in real time; when the number of requests received within a predetermined time interval exceeds the threshold number of tasks that the computing processing module can process, it extracts the maximum number of requests the computing processing module can handle and sends them to the computing processing module, storing the remaining requests in a preset buffer area; the task monitoring module is connected with the computing processing module and the buffer area in a data transmission manner;
the computing processing module comprises a task receiving unit, a task allocation unit, a storage scheduling unit, a plurality of computing units and an algorithm updating unit; the task receiving unit receives a cloud computing task request, stores it in the task storage module, and sends it to the task allocation unit; the task allocation unit allocates the computing tasks to the computing units; each computing unit sends a data reading request to the storage scheduling unit according to its allocated computing task, creates a process, executes the corresponding computing task on the cloud computing task request and the original data, realizing distributed computing, and sends the computing result to the algorithm updating unit; the storage scheduling unit reads the stored data from the database based on the data reading request, merges the read data into the original data of the request, and sends that original data to the computing unit; the algorithm updating unit updates the algorithm with the new data in each period, ensuring the real-time accuracy of the algorithm so that super-concurrent computing tasks are completed efficiently by the distributed processing method; the computing processing module is connected with the task storage module and the database in a data transmission manner.
A distributed super-concurrent cloud computing processing method comprises the following steps:
S1, the task monitoring module monitors the cloud computing task requests received by the server in real time, obtains the priority of each task, and creates a task distribution sequence according to the priority and the task release time;
and S2, the task allocation unit creates a process state network to fit the process state function of each process in the computing unit; the process state function is obtained from the state values of the process as they change over time, a process state space is established from the state function to obtain a process decision, and the tasks are allocated according to the process decision.
Further, the step S1 includes:
the client marks the priority of each cloud computing task request; the priority of the task is obtained, and a task distribution sequence is created according to the priority and the task release time, with cloud computing task requests of the same priority arranged in chronological order and higher-priority requests placed before lower-priority ones.
Further, the step S2 includes:
creating a process state network to fit the process state function, where the process state function represents the process states corresponding to different moments; in the neural network training process, the process states of different processes at different moments are input, and a process state function with time as its variable is fitted for the current process through training.
Further, the step S2 includes:
first, each variable input to the process state network undergoes a neural network transformation; logic gates are then introduced to control the variable intervals, dividing the process state function into several sub-function segments; when a given logic gate is activated, the weight and offset of the corresponding sub-function, wrapped by that gate, become nonzero, so that the function value of that segment is output.
Further, the step S2 includes:
setting a switch function for each logic gate, constructing the switch gate using an activation function, and fitting each sub-function segment according to the switch gate of its interval, wherein different process states comprise different data errors and calculation progress.
Further, the step S2 includes:
obtaining the unit process processing time of the computing unit from the state functions of the different processes, where the unit processing time represents the time taken to process one unit of process progress and the total amount of processes that the current-period computing unit can process is known; a restart threshold parameter is set, and if the current process does not complete its computation within the threshold time range, the process is restarted.
Further, the step S2 includes:
a process state space is established from the state function, a process decision is created over the process state space and solved, and the tasks are distributed according to the process decision; meanwhile, the algorithm updating unit updates the algorithm with the new data in each period, ensuring the real-time accuracy of the algorithm so that super-concurrent computing tasks are completed efficiently by the distributed processing method.
The invention has at least the following technical effects or advantages:
1. A task distribution sequence is created according to the task priority and the task release time, so that tasks are processed in an orderly manner, the task processing speed is increased, and the processing efficiency of the super-concurrent data is improved.
2. A process state function is obtained from the state values of each process as they change over time; each variable input to the process state network undergoes a neural network transformation, and logic gates are introduced to control the variable intervals, increasing the accuracy of the fitted complex process state function. The processing efficiency of each process can thus be obtained, processes with low computation efficiency are restarted, and computing tasks are allocated according to the processing efficiency of each process, improving the cloud computing efficiency of the super-concurrent data.
3. The technical scheme effectively solves the problems that super-concurrent data processing systems under cloud computing cannot perform self-operation on concurrent data or effectively avoid the occurrence of concurrent events, so that the concurrency probability remains high, large amounts of data in the system cannot be computed rapidly, and the cloud computing efficiency for super-concurrent data needs to be improved. Moreover, the system and method have undergone a series of effect investigations; verification shows that cloud computing tasks are finally processed in an orderly manner, tasks are reasonably allocated, and the cloud computing efficiency of super-concurrent data is improved.
Drawings
FIG. 1 is a diagram of a distributed hyper-concurrent cloud computing system according to the present invention;
FIG. 2 is a flowchart of a distributed hyper-concurrent cloud computing processing method according to the present invention;
fig. 3 is a detailed structural diagram of a computing processing module according to the present invention.
Detailed Description
The embodiment of the application provides a distributed super-concurrent cloud computing system, which solves the following problems in the prior art: the self-operation of concurrent data cannot be performed in the super-concurrent data processing system under cloud computing, and the occurrence of concurrent events cannot be effectively avoided, so that the concurrency probability remains high, a large amount of data in the system cannot be computed rapidly, and the cloud computing efficiency of the super-concurrent data needs to be improved.
In order to solve the above problems, the technical solution in the embodiment of the present application has the following general idea:
A task distribution sequence is created according to the task priority and the task release time, so that tasks are processed in an orderly manner, the task processing speed is increased, and the processing efficiency of the super-concurrent data is improved. A process state function is obtained from the state values of each process as they change over time; each variable input to the process state network undergoes a neural network transformation, and logic gates are introduced to control the variable intervals, increasing the accuracy of the fitted complex process state function, so that the processing efficiency of each process can be obtained, processes with low computation efficiency can be restarted, computing tasks can be allocated according to the processing efficiency of each process, and the cloud computing efficiency of the super-concurrent data is improved.
In order to better understand the technical solution, the technical solution will be described in detail with reference to the drawings and the specific embodiments.
Referring to fig. 1, a distributed hyper-concurrent cloud computing system according to the present invention includes the following components:
the task receiving module 10, the task monitoring module 20, the calculation processing module 30, the buffer 40, the task storage module 50 and the database 60.
The task receiving module 10 is configured to receive a cloud computing task request sent by a client, and schedule and distribute the cloud computing task request to a task monitoring module 20 of a cloud computing platform, and the task receiving module 10 is connected with the task monitoring module 20 in a data transmission manner;
the task monitoring module 20 is configured to monitor the cloud computing task requests received by the server in real time; when it detects that the number of requests received within a predetermined time interval exceeds the threshold number of tasks that the computing processing module 30 can process, it extracts the maximum number of requests the computing processing module 30 can handle and sends them to the computing processing module 30, storing the remaining requests in the preset buffer area 40; the task monitoring module 20 is connected with the computing processing module 30 and the buffer area 40 in a data transmission manner;
the computing processing module 30 includes a task receiving unit 301, a task allocating unit 302, a storage scheduling unit 303, a plurality of computing units 304, and an algorithm updating unit 305, as shown in fig. 3, the task receiving unit 301 receives a cloud computing task request, stores the cloud computing task request in the task storage module 50, and sends the cloud computing task request to the task allocating unit 302; the task allocation unit 302 allocates the calculation tasks to the respective calculation units 304; the computing unit 304 sends a data reading request to the storage scheduling unit 303 according to the allocated computing task, creates a process, executes a corresponding computing task according to the cloud computing task request and the original data, implements distributed computing, and sends a computing result to the algorithm updating unit 305; the storage scheduling unit 303 reads the storage data from the database 60 based on the data reading request, and merges the read storage data to obtain the original data of the data reading request, and sends the original data of the data reading request to the computing unit 304; the algorithm updating unit 305 updates the algorithm according to the new data in each period, so that the real-time accuracy of the algorithm is ensured, and the super-concurrent computation task is efficiently completed by a distributed processing method; the calculation processing module 30 is connected with the task storage module 50 and the database 60 in a data transmission manner;
the buffer area 40 is configured to buffer a cloud computing task request that cannot be processed in time by the computing processing module 30, and the buffer area 40 sends the buffered cloud computing task request to the computing processing module 30 in a data transmission manner;
the task storage module 50 is configured to store historical tasks for subsequent query;
the database 60 is used for storing the raw data.
Referring to fig. 2, a distributed ultra-concurrent cloud computing processing method according to the present invention includes the following steps:
S1, the task monitoring module monitors the cloud computing task requests received by the server in real time, obtains the priority of each task, and creates a task distribution sequence according to the priority and the task release time;
A cloud computing platform is built at the server side. The task receiving module 10 receives the cloud computing task requests sent by clients and dispatches them to the task monitoring module 20 of the cloud computing platform. The task monitoring module 20 monitors the cloud computing task requests received by the server in real time; when it detects that the number of requests received within a predetermined time interval exceeds the threshold number of tasks that the computing processing module 30 can process, it extracts the maximum number of requests the computing processing module 30 can handle, sends them to the computing processing module 30, and stores the remaining requests in the preset buffer area 40. The specific extraction method is as follows:
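The monitoring step above can be sketched as follows; the class and attribute names (`TaskMonitor`, `capacity`, `buffer`) are illustrative assumptions and do not appear in the patent:

```python
from collections import deque

class TaskMonitor:
    """Sketch of the monitoring step: forward up to the computing module's
    capacity per interval and park any overflow in the buffer area."""

    def __init__(self, capacity):
        self.capacity = capacity   # task-number threshold of the computing module
        self.buffer = deque()      # preset buffer area for overflow requests

    def dispatch(self, new_requests):
        """Return the batch forwarded to the computing module; buffer the rest."""
        # Previously buffered requests are served before newly arrived ones.
        pending = list(self.buffer) + list(new_requests)
        self.buffer = deque(pending[self.capacity:])
        return pending[:self.capacity]
```

With a capacity of 3, dispatching five requests forwards the first three and leaves two in the buffer; a later interval with no new arrivals drains the buffer first.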
The client marks the priority of each cloud computing task request; the priority of the task is obtained, and a task distribution sequence is created according to the priority and the task release time. The method for creating the task distribution sequence is as follows:
Each cloud computing task request is assigned a priority level from 1 to K, where K is the total number of priority levels. Requests with the same priority are arranged in order of their release times, and every higher-priority request is placed before every lower-priority request; the position of each request in the resulting ordering is thus determined by its priority level and release time, forming the task distribution sequence.
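This ordering rule can be sketched minimally as follows; the field names `priority` and `release_time` are assumed for illustration, with larger `priority` values treated as more urgent:

```python
def build_distribution_sequence(requests):
    """Order task requests by descending priority, breaking ties among
    equal-priority requests by ascending release time."""
    return sorted(requests, key=lambda r: (-r["priority"], r["release_time"]))
```

For two priority-2 requests released at times 9 and 3 and one priority-1 request, the sequence is: the earlier priority-2 request, the later priority-2 request, then the priority-1 request.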
The beneficial effects of step S1 are as follows: a task distribution sequence is created according to the task priority and the task release time, so that tasks are processed in an orderly manner, the task processing speed is increased, and the processing efficiency of the super-concurrent data is improved.
S2, the task allocation unit creates a process state network to fit the process state function of each process in the computing unit; the process state function is obtained from the state values of the process as they change over time, a process state space is created from the state function to obtain a process decision, and tasks are allocated according to the process decision.
The computing processing module 30 includes a task receiving unit 301, a task allocation unit 302, a storage scheduling unit 303, a plurality of computing units 304 and an algorithm updating unit 305. The task receiving unit 301 receives a cloud computing task request, stores it in the task storage module 50, and sends it to the task allocation unit 302; the task allocation unit 302 allocates computing tasks to the computing units 304; each computing unit 304 sends a data reading request to the storage scheduling unit 303 according to its allocated computing task; the storage scheduling unit 303 reads the stored data from the database 60 based on the data reading request, merges the read data into the original data of the request, and sends that original data to the computing unit 304; the computing unit 304 then creates a process and executes the corresponding computing task on the cloud computing task request and the original data, realizing distributed computing. The specific allocation process of the task allocation unit 302 is as follows:
First, the total amount of process information that each computing unit 304 can process within a period and the process information currently being processed by that unit are obtained; the total number of processes the computing unit 304 can handle in the current period is recorded, along with each process it is currently processing. A process state function is then obtained from the state values of each process as they change over the period; from this the processing efficiency of each process is derived, processes with low computation efficiency are restarted, and computing tasks are allocated according to the processing efficiency of each process, improving the computation efficiency of the super-concurrent data.
And creating a process state network fitting process state function, wherein the process state function represents the process states corresponding to different moments. In the neural network training process, process states of different processes corresponding to different moments are input, and a process state function of the current process with time as a variable is fitted through training.
In order to increase the accuracy of the fitted complex process state function, firstly, each variable input to the process state network is subjected to neural network transformation, and as a specific embodiment, the neural network transformation process is as follows:
input=Input(time)
dense11=Dense (activation=’softplus’, units=100) (input)
dense12=Dense (activation=’softplus’, units=100) (Dense11)
newinput= Dense (units=100) (dense12)
the transformation process is as follows: inputting time, calling a parameter activation in the command Dense to carry out nonlinear transformation, selecting softplus in the activation as an activation function, and setting the neuron node number units as 100.
Then, a logic gate is introduced, a variable interval is controlled by the logic gate, and the process state function is divided into a plurality of sections of subfunctions. When a logic gate is activated, the weight and offset of the corresponding sub-function become non-0 under the wrapping of the logic gate, thereby outputting the function value of the segment.
A switch gate is constructed for each layer from the activation function; the position of each gate is specified by translating the activation function by a set distance along the time axis. The gates are then combined by conjunction and disjunction (an opening gate and a closing gate together delimiting an interval) so that each combined gate is active on exactly one interval, and the sub-function of each interval is fitted according to its switch gate. The process state function is composed of the initial process state, the state error, and the computation progress of the process at time t; different process states contain different data errors and computation progress.
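The interval gating described above can be sketched with sigmoid switch gates; the gate steepness `k`, the interval endpoints, and the sub-functions below are illustrative assumptions, not values from the patent:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def interval_gate(t, a, b, k=50.0):
    """Soft indicator for the interval [a, b): close to 1 inside, close to
    0 outside. Built as the difference of two translated activation gates,
    mirroring the opening-gate / closing-gate combination in the text."""
    return sigmoid(k * (t - a)) - sigmoid(k * (t - b))

def piecewise_state(t, breakpoints, subfunctions):
    """Sum of sub-functions, each wrapped (weighted) by its interval gate,
    so only the active segment contributes its function value."""
    total = 0.0
    for (a, b), f in zip(breakpoints, subfunctions):
        total += interval_gate(t, a, b) * f(t)
    return total
```

Inside [0, 1) the gate is nearly 1 and only the first sub-function's value is emitted; outside it the gate collapses toward 0.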
The unit process processing time of the computing unit 304 is obtained from the state functions of the different processes; it represents the time taken to process one unit of process progress. A restart threshold parameter is set, and if the current process does not complete its computation within the threshold time range, the process is restarted.
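The restart rule can be sketched as follows; the names `unit_process_time`, `needs_restart`, and the factor of 1.5 are assumptions for illustration, since the patent's formula for the unit time is not reproduced here:

```python
def unit_process_time(total_time, progress_units):
    """Time taken per unit of process progress over the period."""
    return total_time / progress_units

def needs_restart(elapsed, expected_units, unit_time, restart_factor=1.5):
    """A process is restarted when its elapsed time exceeds the restart
    threshold applied to the time its expected progress should take."""
    return elapsed > restart_factor * expected_units * unit_time
```

With a unit time of 2.0 and five expected units, the threshold window is 15.0: a process still running after 20.0 is restarted, one at 12.0 is not.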
Establishing a process state space T according to a state function:
creating a process decision according to the process state space:
wherein the content of the first and second substances,indicating the process decision, i.e. the allocation basis of the task allocation unit 302,representing the correlation function between the data in the current task,the disturbance function representing the disturbance among the data in the current task is a function change obeyed by all disturbance factors influencing the data calculation.
The process decision is then solved, and the tasks are distributed according to the solution. Meanwhile, the algorithm updating unit 305 updates the algorithm with the new data in each period, ensuring the real-time accuracy of the algorithm so that super-concurrent computing tasks are completed efficiently by the distributed processing method.
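A simplified sketch of efficiency-based allocation follows; the greedy strategy, the efficiency scores, and the per-assignment load penalty are illustrative assumptions, and the patent's decision function itself is not reproduced:

```python
def allocate_tasks(tasks, unit_efficiencies):
    """Greedily assign each task to the computing unit with the highest
    remaining efficiency score, decrementing the score per assignment
    as a crude load penalty."""
    scores = dict(unit_efficiencies)   # unit id -> processing-efficiency score
    assignment = {}
    for task in tasks:
        best = max(scores, key=scores.get)
        assignment[task] = best
        scores[best] -= 1
    return assignment
```

A unit with a much higher score absorbs consecutive tasks until its penalized score falls below the others.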
The beneficial effects of step S2 are as follows: a process state function is obtained from the state values of each process as they change over time; each variable input to the process state network undergoes a neural network transformation, and logic gates are introduced to control the variable intervals, increasing the accuracy of the fitted complex process state function. The processing efficiency of each process can thus be obtained, processes with low computation efficiency are restarted, and computing tasks are allocated according to the processing efficiency of each process, improving the computation efficiency of the super-concurrent data.
In summary, this completes the distributed super-concurrent cloud computing system of the present invention.
The technical scheme in the embodiment of the application at least has the following technical effects or advantages:
effect investigation:
according to the technical scheme, the problem that self-operation of the concurrent data cannot be carried out in the super-concurrent data processing system under cloud computing is effectively solved, the occurrence of concurrent events cannot be effectively avoided, the concurrency probability is high or low, a large amount of data in the system cannot be rapidly computed, and the cloud computing efficiency of the super-concurrent data needs to be improved. Moreover, the system or the method is subjected to a series of effect researches, and finally cloud computing tasks can be processed in order through verification, the tasks are reasonably distributed, and the cloud computing efficiency of super-concurrent data is improved.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (5)
1. A distributed super-concurrent cloud computing system, comprising:
the system comprises a task receiving module, a task monitoring module, a calculation processing module, a buffer area, a task storage module and a database;
the task monitoring module is used for monitoring, in real time, the cloud computing task requests received by the server; when the number of cloud computing task requests received within a preset time interval exceeds the threshold number of tasks the computing processing module can process, it extracts the maximum number of cloud computing task requests the computing processing module can handle and sends them to the computing processing module, and stores the cloud computing task requests remaining after extraction in a preset buffer area; the task monitoring module is connected with the computing processing module and the buffer area in a data transmission mode;
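The admission step in this claim element can be sketched as below. The function name `dispatch` and the use of a `deque` as the buffer area are assumptions of the sketch, not details from the patent.

```python
from collections import deque

def dispatch(requests, capacity, buffer):
    """Forward at most `capacity` requests to the computing module;
    requests beyond the module's capacity wait in the preset buffer area."""
    forwarded = requests[:capacity]
    buffer.extend(requests[capacity:])  # overflow is stored for later processing
    return forwarded
```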
the computing processing module comprises a task receiving unit, a task allocation unit, a storage scheduling unit, a plurality of computing units and an algorithm updating unit, wherein the task receiving unit receives a cloud computing task request, stores the cloud computing task request into the task storage module and sends the cloud computing task request to the task allocation unit; the task allocation unit allocates the computing tasks to the computing units; the computing unit sends a data reading request to the storage scheduling unit according to the distributed computing task, creates a process, executes a corresponding computing task according to the cloud computing task request and the original data, realizes distributed computing, and sends a computing result to the algorithm updating unit; the storage scheduling unit reads the storage data from the database based on the data reading request, and combines the read storage data to obtain the original data of the data reading request and sends the original data of the data reading request to the computing unit; the algorithm updating unit updates the algorithm according to the new data in each period, so that the real-time accuracy of the algorithm is ensured, and the super-concurrent computation task is efficiently completed by a distributed processing method; the computing processing module is connected with the task storage module and the database in a data transmission mode;
the operation method of the distributed super-concurrent cloud computing system comprises the following steps:
S1, the task monitoring module monitors, in real time, the cloud computing task requests received by the server, obtains the priority of each task, and establishes a task distribution sequence according to priority and task release time;
S2, the task allocation unit fits, with a process state network, the process state function of each process in the computing unit, the process state function being obtained from the time-varying state values of the process; a process state space is established from the state function to obtain a process decision, and the tasks are allocated according to the process decision;
the step S2 includes:
establishing a process state network to fit the process state function, wherein the process state function represents the process state at different moments; during neural network training, the process states of different processes at different moments are input, and the process state function of the current process, with time as its variable, is fitted by training;
first, each variable input into the process state network undergoes a neural network transformation; then logic gates are introduced, each logic gate controlling one variable interval and dividing the process state function into several sub-functions; when a logic gate is activated, the weight and bias of the corresponding sub-function become non-zero under the wrapping of the logic gate, so that the function value of that segment is output.
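A toy numerical sketch of the gated sub-functions follows. The sigmoid construction of the gate, the linear form of each sub-function, and the steepness constant `k` are illustrative choices, not details taken from the patent.

```python
import math

def gate(t, lo, hi, k=50.0):
    """Smooth logic gate: close to 1 when lo < t < hi, close to 0 outside.
    Built from two sigmoids so the 'switch' is differentiable and trainable."""
    sig = lambda x: 1.0 / (1.0 + math.exp(-k * x))
    return sig(t - lo) * sig(hi - t)

def process_state(t, segments):
    """segments: one (lo, hi, weight, bias) tuple per sub-function w*t + b.
    Each sub-function's weight and bias are wrapped by its gate, so they
    contribute (are effectively non-zero) only on the gate's interval."""
    return sum(gate(t, lo, hi) * (w * t + b) for lo, hi, w, b in segments)
```

With two segments, the function follows `2t` on `[0, 1)` and the constant `5` on `[1, 2)`, with a smooth hand-off at the boundary.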
2. The distributed super-concurrent cloud computing system of claim 1, wherein the step S1 comprises:
the client marks the priority of each cloud computing task request; the priority of each task is obtained, and a task distribution sequence is established according to priority and task release time: cloud computing task requests with the same priority are ordered by release time, and higher-priority requests are placed before lower-priority ones.
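The ordering rule in this claim can be sketched with a heap. The class name `TaskQueue` and the use of `heapq` are assumptions of the sketch; only the ordering (higher priority first, ties broken by release time) comes from the claim.

```python
import heapq
import itertools

class TaskQueue:
    """Pops higher-priority requests first; equal priorities pop by release time."""
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker keeps ordering deterministic

    def push(self, priority, release_time, task):
        # priority is negated because heapq is a min-heap and
        # higher-priority requests must come out first
        heapq.heappush(self._heap, (-priority, release_time, next(self._seq), task))

    def pop(self):
        return heapq.heappop(self._heap)[-1]
```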
3. The distributed super-concurrent cloud computing system of claim 1, wherein the step S2 comprises:
setting a switch function for each logic gate, constructing the switch gate with an activation function, and fitting each sub-function segment according to the switch gate of its interval, wherein different process states include different data errors and computation progress.
4. The distributed super-concurrent cloud computing system of claim 3, wherein the step S2 comprises:
obtaining the unit process processing time of the computing unit according to the state functions of the different processes (the formula appears only as an image in the source and is omitted here):
wherein one symbol denotes the time taken to process the process progress information, and another denotes the total number of processes the computing unit can handle in the current period; a restart threshold parameter is set, and if the current process does not complete its calculation within that time range, the process is restarted.
5. The distributed super-concurrent cloud computing system of claim 4, wherein the step S2 comprises:
meanwhile, the algorithm updating unit updates the algorithm with the new data of each period, keeping the algorithm accurate in real time, so that the super-concurrent computation tasks are completed efficiently by distributed processing.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210844330.3A CN114995984B (en) | 2022-07-19 | 2022-07-19 | Distributed super-concurrent cloud computing system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114995984A CN114995984A (en) | 2022-09-02 |
CN114995984B true CN114995984B (en) | 2022-10-25 |
Family
ID=83021662
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210844330.3A Active CN114995984B (en) | 2022-07-19 | 2022-07-19 | Distributed super-concurrent cloud computing system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114995984B (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019196127A1 (en) * | 2018-04-11 | 2019-10-17 | 深圳大学 | Cloud computing task allocation method and apparatus, device, and storage medium |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101739293B (en) * | 2009-12-24 | 2012-09-26 | 航天恒星科技有限公司 | Method for scheduling satellite data product production tasks in parallel based on multithread |
CN104317654A (en) * | 2014-10-09 | 2015-01-28 | 南京大学镇江高新技术研究院 | Data center task scheduling method based on dynamic temperature prediction model |
CN108762896B (en) * | 2018-03-26 | 2022-04-12 | 福建星瑞格软件有限公司 | Hadoop cluster-based task scheduling method and computer equipment |
US11900155B2 (en) * | 2019-11-28 | 2024-02-13 | EMC IP Holding Company LLC | Method, device, and computer program product for job processing |
CN112925616A (en) * | 2019-12-06 | 2021-06-08 | Oppo广东移动通信有限公司 | Task allocation method and device, storage medium and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105573866B (en) | The method and system of batch input data is handled with fault-tolerant way | |
CN110837592B (en) | Method, apparatus and computer readable storage medium for data archiving | |
US11556389B2 (en) | Resource usage prediction for cluster provisioning | |
CN103645957B (en) | A kind of resources of virtual machine management-control method and device | |
CN109213600A (en) | A kind of GPU resource dispatching method and device based on AI cloud | |
CN107407918A (en) | Programmable logic controller (PLC) is extended using app | |
CN105184367A (en) | Model parameter training method and system for depth neural network | |
CN108694090A (en) | A kind of cloud computing resource scheduling method of Based on Distributed machine learning | |
CN106844483A (en) | A kind of daily record data method for stream processing | |
CN106610870A (en) | Method and device for adjusting quantity of processing nodes | |
CN115981562A (en) | Data processing method and device | |
CN114995984B (en) | Distributed super-concurrent cloud computing system | |
CN112884164B (en) | Federal machine learning migration method and system for intelligent mobile terminal | |
CN111679970B (en) | Method for predicting running environment state of robot software system | |
WO2023193653A1 (en) | Content operation method and apparatus, and server and storage medium | |
CN111277626A (en) | Server upgrading method and device, electronic equipment and medium | |
CN114792133B (en) | Deep reinforcement learning method and device based on multi-agent cooperation system | |
DE112012004468T5 (en) | Application-level speculative processing | |
CN110826695A (en) | Data processing method, device and computer readable storage medium | |
JP7453229B2 (en) | Data processing module, data processing system, and data processing method | |
CN115187097A (en) | Task scheduling method and device, electronic equipment and computer storage medium | |
US9152451B2 (en) | Method of distributing processor loading between real-time processor threads | |
CN114398163A (en) | Thread pool management method and device, computer equipment and storage medium | |
CN117453376B (en) | Control method, device, equipment and storage medium for high-throughput calculation | |
CN108733502A (en) | Method for the wrong identification in operating system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||