CN104580503A - Efficient dynamic load balancing system and method for processing large-scale data - Google Patents

Efficient dynamic load balancing system and method for processing large-scale data

Info

Publication number
CN104580503A
CN104580503A (application CN201510037687.0A)
Authority
CN
China
Prior art keywords
node
computing cluster
central control system
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510037687.0A
Other languages
Chinese (zh)
Inventor
高永虎
张清
张广勇
沈铂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inspur Electronic Information Industry Co Ltd
Original Assignee
Inspur Electronic Information Industry Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inspur Electronic Information Industry Co Ltd
Priority to CN201510037687.0A
Publication of CN104580503A
Legal status: Pending


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001: Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1004: Server selection for load balancing
    • H04L 67/1029: Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers using data related to the state of servers by a load balancer

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses an efficient dynamic load balancing system and method for processing large-scale data, and belongs to the technical field of large-scale data processing. The system comprises a central control system, a computing cluster system, a storage system and a high-speed network. The node of the central control system adopts a mixed CPU+GPU heterogeneous architecture; the nodes of the computing cluster system adopt either a mixed CPU+GPU heterogeneous architecture or a pure CPU architecture; the storage system is divided into shared storage and local storage, the shared storage nodes adopt a CPU architecture, and the local storage holds data for the central control node or the computing cluster node in which it resides; the high-speed network connects the central control node, the computing cluster nodes and the shared storage nodes into a centralized system for processing large-scale data. The invention solves the problem that current server computing systems, limited by insufficient network bandwidth and small storage capacity, cannot process large-scale data.

Description

Efficient dynamic load balancing system and method for processing large-scale data
Technical field
The present invention relates to the technical field of large-scale data processing, and in particular to an efficient dynamic load balancing system and method for processing large-scale data.
Background art
Society today is experiencing an explosion of data. The volume of information keeps growing, and so do the demands placed on data-processing capability. High-performance computing is needed not only in oil exploration, weather forecasting, aerospace and defense, and scientific research; demand is also growing rapidly in finance, e-government, education, enterprise applications, online gaming and many other fields.
Computational speed is critical for high-performance computing. High-performance computing is moving toward multi-core and many-core processors and uses heterogeneous parallelism to increase speed; CPU+GPU is now a mature heterogeneous co-computing model, well suited to highly parallel applications and algorithms. However, because the data volume of some applications keeps growing, limits such as network bandwidth and system memory mean that simply adding hardware to a single server can no longer meet demand. A method is therefore needed that can process large-scale data on the limited hardware already available.
Summary of the invention
The technical task of the present invention is to provide an efficient dynamic load balancing system and method for processing large-scale data: a dynamically load-balanced mixed CPU+GPU heterogeneous cluster architecture that makes full use of the performance of the available equipment, significantly improves the efficiency of the whole system, and solves the problem that current server computing systems, with insufficient network bandwidth and small memory capacity, cannot process larger-scale data.
The technical task of the present invention is achieved in the following manner.
An efficient dynamic load balancing system for processing large-scale data is a mixed CPU+GPU heterogeneous cluster architecture comprising a central control system, a computing cluster system, a storage system and a high-speed network. The node of the central control system adopts a mixed CPU+GPU heterogeneous architecture. The nodes of the computing cluster system adopt either a mixed CPU+GPU heterogeneous architecture or a pure CPU architecture. The storage system is divided into shared storage and local storage: the shared storage nodes adopt a CPU architecture; local storage resides in the central control node and in every computing cluster node; the shared storage is divided into primary storage and backup storage, which act as redundant copies holding identical computation data; the local storage holds data for the central control node or computing cluster node in which it resides. The high-speed network interconnects the central control node, the computing cluster nodes and the shared storage nodes, forming a centralized system for processing large-scale data.
In this system, the central control node controls the computing cluster nodes and the storage nodes.
The system has one central control node, at least one shared storage node, and at least two computing cluster nodes.
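By way of illustration, the topology just described (one central control node, at least one shared storage node holding identical primary and backup copies, and two or more CPU-only or mixed CPU+GPU computing cluster nodes, all joined by the high-speed network) can be represented with a small data model. The following Python sketch is illustrative only; every class name, field and example value is an assumption made for this sketch and is not defined in the patent.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class ComputeNode:
    """A computing cluster node: pure CPU or mixed CPU+GPU."""
    name: str
    gpu_count: int = 0                                         # 0 means a pure-CPU node
    local_store: Dict[str, Any] = field(default_factory=dict)  # per-node local cache

@dataclass
class SharedStorage:
    """Shared storage node (CPU architecture) with redundant copies."""
    primary: Dict[str, Any] = field(default_factory=dict)
    backup: Dict[str, Any] = field(default_factory=dict)

    def write(self, key: str, value: Any) -> None:
        # Primary and backup hold identical computation data (redundancy).
        self.primary[key] = value
        self.backup[key] = value

@dataclass
class Cluster:
    """One central control node, several compute nodes and at least one
    shared storage node, all assumed reachable over the high-speed network."""
    control_node: ComputeNode
    compute_nodes: List[ComputeNode]
    shared_storage: List[SharedStorage]

# Example matching embodiment 1: one control node, one shared storage node,
# two computing cluster nodes.
cluster = Cluster(
    control_node=ComputeNode("control", gpu_count=2),
    compute_nodes=[ComputeNode("gpu-node-1", 4), ComputeNode("cpu-node-1", 0)],
    shared_storage=[SharedStorage()],
)
```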
An efficient dynamic load balancing method for processing large-scale data uses any one of the systems described above to process large-scale data and comprises the following steps:
(1) The central control node is interconnected with all computing cluster nodes through the high-speed network; it controls each computing cluster node, dynamically distributes computation tasks to the computing cluster nodes, and receives the results they return;
(2) The computing cluster nodes are interconnected with the shared storage nodes through the high-speed network, as is the central control node; the shared storage nodes send computation task data to the computing cluster nodes on the orders of the central control node;
(3) The computing cluster nodes carry out the computation tasks; each computing cluster node contains several GPU processors of the same model, which raises the degree of parallelism and the computing capability of a single node, while GPUs of the same model make the division of computation tasks straightforward;
(4) The local storage in the central control node or in a computing cluster node caches the data that node needs locally;
(5) The shared storage nodes hold the computation data needed by the computing cluster nodes and the computation results, and send computation data to the computing cluster nodes over the high-speed network; at the same time, the shared storage nodes use a primary plus backup storage scheme, which guarantees the safety of the data.
In this efficient dynamic load balancing method, the central control node collects the computing capability information of all computing cluster nodes, dynamically divides the computation data, and orders the shared storage nodes to send computation data to the selected computing cluster nodes. Following this order, a shared storage node first divides the computation data into data blocks and then dynamically sends different numbers of blocks to the corresponding computing cluster nodes. Each computing cluster node receives the computation data sent by the shared storage node and transfers its results to the central control node, which consolidates the received results and stores them back to the shared storage node.
While receiving the next data block, a computing cluster node computes on the current block and simultaneously sends back a block it has already finished.
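The paragraph above states that the computation data is divided into blocks and that different numbers of blocks are sent to different computing cluster nodes according to their computing capability, without fixing a particular formula. The sketch below shows one possible proportional split; the largest-remainder rounding and the function name are assumptions made for illustration only.

```python
def allocate_blocks(num_blocks, capabilities):
    """Split `num_blocks` data blocks across nodes in proportion to their
    capability scores, so faster nodes receive more blocks.

    capabilities: dict mapping node name -> positive capability score.
    Returns a dict mapping node name -> number of blocks to send.
    """
    total = sum(capabilities.values())
    shares = {n: int(num_blocks * c / total) for n, c in capabilities.items()}
    # Hand out any blocks lost to integer truncation, largest remainder first.
    leftover = num_blocks - sum(shares.values())
    remainders = sorted(capabilities,
                        key=lambda n: (num_blocks * capabilities[n] / total) % 1,
                        reverse=True)
    for name in remainders[:leftover]:
        shares[name] += 1
    return shares

# Example: 100 blocks over two GPU nodes and one CPU-only node.
print(allocate_blocks(100, {"gpu-node-1": 4.0, "gpu-node-2": 2.0, "cpu-node-1": 1.0}))
# -> {'gpu-node-1': 57, 'gpu-node-2': 29, 'cpu-node-1': 14}
```

Under this reading, a node with twice the capability score receives roughly twice as many blocks, which is one plausible interpretation of the dynamic division described above.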
In this efficient dynamic load balancing method, the workflow of the large-scale data processing system is as follows:
1. The central control node collects the number of GPU cards in each computing cluster node, generates each node's computing capability information from its card count, and sends this information to the shared storage node; the computing capability information comprises the number of GPU cards in each computing cluster node, the communication capacity of the high-speed network, and the computing capability of the GPU cards (one way such a score might be formed is sketched after this list);
2. Based on the computing capability information sent by the central control node, the shared storage node first divides the data into basic blocks of a size suitable for transmission, then assigns each computing cluster node its share of computation blocks, and finally sends the blocks to the computing cluster nodes dynamically;
3. A computing cluster node computes while it receives data; if data arrives faster than it can be computed, the surplus is cached temporarily in local storage; if no data is arriving, the node takes blocks from local storage; only when local storage is also empty does the node wait;
4. As soon as a computing cluster node finishes a block, it sends the result to the central control node; if the network is busy, the result is cached in local storage first and sent to the central control node when the network is idle;
5. The central control node performs the necessary processing on the results received from each computing cluster node and then sends them to the shared storage node; during the computation, the central control node periodically collects the necessary information from the computing cluster nodes, caches it locally, and stores it to the shared storage node.
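Step 1 states that the computing capability information comprises the number of GPU cards per node, the computing capability of the cards and the communication capacity of the high-speed network, but does not say how these quantities are combined. The sketch below folds them into a single per-node score using an assumed linear weighting; the weights, the CPU-only baseline and all names are illustrative, and the score could then drive a proportional block split such as the one sketched earlier.

```python
def capability_score(gpu_count, gpu_tflops, link_gbps,
                     compute_weight=1.0, network_weight=0.1):
    """Collapse the capability information from step 1 (GPU card count,
    per-card compute throughput, high-speed-network bandwidth) into one
    score used to apportion data blocks. The linear weighting is an
    illustrative assumption; the patent only lists the quantities."""
    if gpu_count == 0:
        compute = 1.0                      # assumed baseline for a pure-CPU node
    else:
        compute = gpu_count * gpu_tflops   # aggregate GPU throughput of the node
    return compute_weight * compute + network_weight * link_gbps

# The central control node gathers per-node figures and builds the capability
# table that it then sends to the shared storage node (step 1 of the workflow).
capabilities = {
    "gpu-node-1": capability_score(gpu_count=4, gpu_tflops=5.0, link_gbps=56),
    "cpu-node-1": capability_score(gpu_count=0, gpu_tflops=0.0, link_gbps=56),
}
```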
The efficient dynamic load balancing system and method for processing large-scale data of the present invention have the following advantages:
1. The cluster is deployed on the existing hardware according to the amount of data to be processed, and the dynamic load balancing method adapts itself to the system platform, making the scheme reliable, efficient and feasible;
2. The load balancing method adapts to the cluster system, which may be a complex mixed cluster composed of one or more pure-CPU subsystems and CPU+GPU heterogeneous subsystems;
3. Dynamic load balancing is achieved among the computing cluster nodes, so equipment utilization is high; different computing devices such as CPUs and GPUs reach a computational balance and do not wait for one another, no computing device in the system sits idle, and the whole cluster runs efficiently;
4. When the existing hardware is constrained, for example by memory capacity or network bandwidth, transmitting the data in blocks and processing it asynchronously with the computation makes it possible to process large-scale data effectively;
5. The system achieves high performance: the load balancing method dynamically assigns different computation tasks to different computing cluster nodes according to the computing capability of each node and of the computing devices within it, giving flexible task division and efficient dynamic load balancing;
6. According to the characteristics of the application algorithm, and because different computing devices such as CPUs and GPUs have different computing capabilities, the computation tasks that each device dynamically acquires should be sized differently;
7. The present invention scales out to multiple servers and can process larger-scale data; computation is balanced across the nodes of the cluster and across the computing devices within each node, so the performance of the existing equipment is used to the greatest possible extent, the overall operating efficiency of the system improves, and the running time of the program is greatly shortened.
Brief description of the drawings
The present invention is further described below with reference to the accompanying drawings.
Figure 1 is a structural block diagram of the efficient dynamic load balancing system for processing large-scale data;
Figure 2 is a block diagram of the communication between the nodes of the efficient dynamic load balancing system for processing large-scale data.
Detailed description of the embodiments
The efficient dynamic load balancing system and method for processing large-scale data of the present invention are described in detail below with reference to the accompanying drawings and specific embodiments.
Embodiment 1:
The efficient dynamic load balancing system for processing large-scale data of the present invention is a mixed CPU+GPU heterogeneous cluster architecture comprising a central control system, a computing cluster system, a storage system and a high-speed network. The node of the central control system adopts a mixed CPU+GPU heterogeneous architecture. The nodes of the computing cluster system adopt either a mixed CPU+GPU heterogeneous architecture or a pure CPU architecture. The storage system is divided into shared storage and local storage: the shared storage nodes adopt a CPU architecture; local storage resides in the central control node and in every computing cluster node; the shared storage is divided into primary storage and backup storage, which act as redundant copies holding identical computation data; the local storage holds data for the central control node or computing cluster node in which it resides. The high-speed network interconnects the central control node, the computing cluster nodes and the shared storage nodes, forming a centralized system for processing large-scale data.
The central control node controls the computing cluster nodes and the storage nodes.
There is one central control node, one shared storage node and two computing cluster nodes.
Embodiment 2:
The efficient dynamic load balancing system for processing large-scale data of the present invention is a mixed CPU+GPU heterogeneous cluster architecture comprising a central control system, a computing cluster system, a storage system and a high-speed network. The node of the central control system adopts a mixed CPU+GPU heterogeneous architecture. The nodes of the computing cluster system adopt either a mixed CPU+GPU heterogeneous architecture or a pure CPU architecture. The storage system is divided into shared storage and local storage: the shared storage nodes adopt a CPU architecture; local storage resides in the central control node and in every computing cluster node; the shared storage is divided into primary storage and backup storage, which act as redundant copies holding identical computation data; the local storage holds data for the central control node or computing cluster node in which it resides. The high-speed network interconnects the central control node, the computing cluster nodes and the shared storage nodes, forming a centralized system for processing large-scale data.
The central control node controls the computing cluster nodes and the storage nodes.
There is one central control node, two shared storage nodes and five computing cluster nodes.
Embodiment 3:
The efficient dynamic load balancing method for processing large-scale data of the present invention uses any one of the systems described above to process large-scale data and comprises the following steps:
(1) The central control node is interconnected with all computing cluster nodes through the high-speed network; it controls each computing cluster node, dynamically distributes computation tasks to the computing cluster nodes, and receives the results they return;
(2) The computing cluster nodes are interconnected with the shared storage nodes through the high-speed network, as is the central control node; the shared storage nodes send computation task data to the computing cluster nodes on the orders of the central control node;
(3) The computing cluster nodes carry out the computation tasks; each computing cluster node contains several GPU processors of the same model, which raises the degree of parallelism and the computing capability of a single node, while GPUs of the same model make the division of computation tasks straightforward;
(4) The local storage in the central control node or in a computing cluster node caches the data that node needs locally;
(5) The shared storage nodes hold the computation data needed by the computing cluster nodes and the computation results, and send computation data to the computing cluster nodes over the high-speed network; at the same time, the shared storage nodes use a primary plus backup storage scheme, which guarantees the safety of the data.
The central control node collects the computing capability information of all computing cluster nodes, dynamically divides the computation data, and orders the shared storage nodes to send computation data to the selected computing cluster nodes. Following this order, a shared storage node first divides the computation data into data blocks and then dynamically sends different numbers of blocks to the corresponding computing cluster nodes. Each computing cluster node receives the computation data sent by the shared storage node and transfers its results to the central control node, which consolidates the received results and stores them back to the shared storage node.
While receiving the next data block, a computing cluster node computes on the current block and simultaneously sends back a block it has already finished.
The system adopts centralized control and storage and divides the data into large blocks, which the shared storage nodes distribute dynamically to the computing cluster nodes. With computation and communication running asynchronously, the central control node must size the data blocks according to the communication capacity of the network and the computing capability of the computing cluster nodes, so that computation and transmission cover each other and the system reaches its best performance. Overlapping transmission with computation not only shortens the computation time; thanks to the blocking technique, hardware with small memory and low bandwidth can also be applied to the computation of large-scale data.
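The asynchronous overlap described above, in which a computing cluster node receives the next block, computes on the current block and returns a finished block at the same time, can be pictured with ordinary threads and queues. In the Python sketch below, queues stand in for the high-speed network and the local store, `compute_fn` stands in for the real CPU or GPU kernel, and None marks the end of the block stream; it is a simplified illustration under these assumptions, not the patent's implementation.

```python
import queue
import threading

def compute_node_pipeline(recv_q, send_q, compute_fn, local_cache):
    """Overlap communication and computation on a compute-cluster node:
    a receiver thread buffers incoming blocks into the local store, the
    main loop computes on the current block, and a sender thread returns
    finished results, so transfer of the next block and return of the
    previous one hide behind the current computation."""
    results = queue.Queue()

    def receiver():
        while True:
            block = recv_q.get()          # next block from shared storage
            local_cache.put(block)        # stash locally if compute is behind
            if block is None:
                return

    def sender():
        while True:
            result = results.get()
            if result is None:
                return
            send_q.put(result)            # ship result to the central control node

    threading.Thread(target=receiver, daemon=True).start()
    sender_thread = threading.Thread(target=sender)
    sender_thread.start()

    while True:
        block = local_cache.get()         # oldest buffered block, or wait
        if block is None:
            results.put(None)
            break
        results.put(compute_fn(block))    # compute while the I/O threads keep running
    sender_thread.join()

# Minimal usage: three blocks and a stand-in "kernel" that squares them.
rq, sq, cache = queue.Queue(), queue.Queue(), queue.Queue()
for b in [1, 2, 3, None]:
    rq.put(b)
compute_node_pipeline(rq, sq, compute_fn=lambda x: x * x, local_cache=cache)
print([sq.get() for _ in range(3)])       # -> [1, 4, 9]
```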
Embodiment 4:
In the efficient dynamic load balancing method for processing large-scale data of the present invention, the workflow of the large-scale data processing system is as follows:
1. The central control node collects the number of GPU cards in each computing cluster node, generates each node's computing capability information from its card count, and sends this information to the shared storage node; the computing capability information comprises the number of GPU cards in each computing cluster node, the communication capacity of the high-speed network, and the computing capability of the GPU cards;
2. Based on the computing capability information sent by the central control node, the shared storage node first divides the data into basic blocks of a size suitable for transmission, then assigns each computing cluster node its share of computation blocks, and finally sends the blocks to the computing cluster nodes dynamically;
3. A computing cluster node computes while it receives data; if data arrives faster than it can be computed, the surplus is cached temporarily in local storage; if no data is arriving, the node takes blocks from local storage; only when local storage is also empty does the node wait;
4. As soon as a computing cluster node finishes a block, it sends the result to the central control node; if the network is busy, the result is cached in local storage first and sent to the central control node when the network is idle;
5. The central control node performs the necessary processing on the results received from each computing cluster node and then sends them to the shared storage node; during the computation, the central control node periodically collects the necessary information from the computing cluster nodes, caches it locally, and stores it to the shared storage node.
In this large-scale data processing system, computation and communication are asynchronous: each data block that finishes computing can be sent into the network immediately. Hiding the transmission behind the computation shortens the running time of the whole system, and processing the computation data in blocks lets the system be applied even where the network bandwidth is low and the memory is insufficient, while still handling large data. The timed backup provides fault tolerance and effectively prevents the failure of a node from crashing the system.
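The timed backup mentioned above, in which the central control node periodically caches the necessary node information locally and writes it to shared storage so that a node failure does not bring down the whole run, might be organised as in the sketch below. The checkpoint interval, the polling period and the `collect_state` and `write_checkpoint` callables are all assumptions introduced for illustration.

```python
import time

def run_with_periodic_backup(collect_state, write_checkpoint,
                             interval_s=60.0, should_stop=lambda: False):
    """While the computation runs, periodically snapshot the information the
    central control node has cached about the computing cluster nodes, so
    that a failure can be recovered from the last backup.

    collect_state:    callable returning a dict of per-node information.
    write_checkpoint: callable that stores the snapshot to shared storage
                      (which itself keeps identical primary and backup copies).
    """
    local_cache = {}                       # information is cached locally first
    last_backup = time.monotonic()
    while not should_stop():
        local_cache.update(collect_state())
        if time.monotonic() - last_backup >= interval_s:
            write_checkpoint(dict(local_cache))   # persist to shared storage
            last_backup = time.monotonic()
        time.sleep(1.0)                    # polling period, illustrative only
```

On restart, the last checkpoint could be reloaded from either the primary or the backup copy held by the shared storage node.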
The embodiments above enable those skilled in the art to implement the present invention easily. It should be understood, however, that the present invention is not limited to the embodiments described; on the basis of the disclosed embodiments, those skilled in the art may combine the different technical features arbitrarily to obtain different technical solutions.

Claims (7)

1. An efficient dynamic load balancing system for processing large-scale data, characterized in that it is a mixed CPU+GPU heterogeneous cluster architecture comprising a central control system, a computing cluster system, a storage system and a high-speed network; the node of the central control system adopts a mixed CPU+GPU heterogeneous architecture; the nodes of the computing cluster system adopt either a mixed CPU+GPU heterogeneous architecture or a pure CPU architecture; the storage system is divided into shared storage and local storage, the shared storage nodes adopt a CPU architecture, local storage resides in the central control node and in every computing cluster node, the shared storage is divided into primary storage and backup storage which act as redundant copies holding identical computation data, and the local storage holds data for the central control node or computing cluster node in which it resides; the high-speed network interconnects the central control node, the computing cluster nodes and the shared storage nodes, forming a centralized system for processing large-scale data.
2. The efficient dynamic load balancing system for processing large-scale data according to claim 1, characterized in that the central control node controls the computing cluster nodes and the storage nodes.
3. The efficient dynamic load balancing system for processing large-scale data according to claim 1, characterized in that there is one central control node, at least one shared storage node, and at least two computing cluster nodes.
4. An efficient dynamic load balancing method for processing large-scale data, characterized in that it uses the system for processing large-scale data of any one of claims 1 to 3 to process large-scale data and comprises the following steps:
(1) The central control node is interconnected with all computing cluster nodes through the high-speed network; it controls each computing cluster node, dynamically distributes computation tasks to the computing cluster nodes, and receives the results they return;
(2) The computing cluster nodes are interconnected with the shared storage nodes through the high-speed network, as is the central control node; the shared storage nodes send computation task data to the computing cluster nodes on the orders of the central control node;
(3) The computing cluster nodes carry out the computation tasks; each computing cluster node contains several GPU processors of the same model;
(4) The local storage in the central control node or in a computing cluster node caches the data that node needs locally;
(5) The shared storage nodes hold the computation data needed by the computing cluster nodes and the computation results, and send computation data to the computing cluster nodes over the high-speed network; at the same time, the shared storage nodes use a primary plus backup storage scheme.
5. The efficient dynamic load balancing method for processing large-scale data according to claim 4, characterized in that the central control node collects the computing capability information of all computing cluster nodes, dynamically divides the computation data, and orders the shared storage nodes to send computation data to the selected computing cluster nodes; following this order, a shared storage node first divides the computation data into data blocks and then dynamically sends different numbers of blocks to the corresponding computing cluster nodes; each computing cluster node receives the computation data sent by the shared storage node and transfers its results to the central control node, which consolidates the received results and stores them back to the shared storage node.
6. The efficient dynamic load balancing method for processing large-scale data according to claim 5, characterized in that, while receiving the next data block, a computing cluster node computes on the current data block and simultaneously sends back a data block it has already finished.
7. the method for the process large-scale data of a kind of high-efficiency dynamic load balancing according to claim 4, is characterized in that the workflow of the system processing large-scale data is:
1., central control system interior joint is responsible for the quantity of the GPU card collecting each computing cluster system interior joint, the card quantity different according to each computing cluster system interior joint, generate the computing capability information of each computing cluster system interior joint, this computing capability information is sent to and shares storage interior joint; Computing capability information comprises the quantity of each computing cluster system interior joint GPU card, the communication capacity of express network, the computing capability of GPU card;
2. the computing capability information storing interior joint and send according to central control system interior joint, is shared, first data are divided into the suitable basic data block sent, then be the calculated data block that each computing cluster system interior joint distributes respective amount, then data block sent to dynamically computing cluster system interior joint;
3., computing cluster system interior joint reception data carry out calculating simultaneously, if transmit the very fast of data and calculating does not complete, calculated data can be stored into temporarily in local storage, if when there is no transfer of data, then obtain from this locality stores, if this locality also not, does not need to wait for;
4. the result of calculating can be sent to central control system interior joint while, computing cluster system interior joint completes calculated data block, if transmit busy, can first by data temporary storage in this locality store in, wait for network idle time send it to central control system interior joint again;
5., the result of calculation of each computing cluster system interior joint that will receive of central control system interior joint, carry out necessary process operation, then send to shared storage interior joint, store at the information cache of necessity of the collection computing cluster system interior joint of computing interval central control system interior joint timing to local and be stored into shared storage interior joint.
CN201510037687.0A (priority date 2015-01-26, filing date 2015-01-26): Efficient dynamic load balancing system and method for processing large-scale data, published as CN104580503A (pending) (en)

Priority Applications (1)

Application Number: CN201510037687.0A; Priority Date: 2015-01-26; Filing Date: 2015-01-26; Title: Efficient dynamic load balancing system and method for processing large-scale data


Publications (1)

Publication Number: CN104580503A; Publication Date: 2015-04-29

Family

ID=53095660

Family Applications (1)

Application Number: CN201510037687.0A; Publication: CN104580503A (pending); Title: Efficient dynamic load balancing system and method for processing large-scale data

Country Status (1)

Country Link
CN (1) CN104580503A (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050243094A1 (en) * 2004-05-03 2005-11-03 Microsoft Corporation Systems and methods for providing an enhanced graphics pipeline
CN101751376A (en) * 2009-12-30 2010-06-23 中国人民解放军国防科学技术大学 Quickening method utilizing cooperative work of CPU and GPU to solve triangular linear equation set
CN104301434A (en) * 2014-10-31 2015-01-21 浪潮(北京)电子信息产业有限公司 High speed communication architecture and method based on trunking

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106897148A (en) * 2017-02-28 2017-06-27 郑州云海信息技术有限公司 A kind of system and method for generating micro-downburst
CN107920080A (en) * 2017-11-22 2018-04-17 郑州云海信息技术有限公司 A kind of characteristic acquisition method and system
CN108989398A (en) * 2018-06-27 2018-12-11 郑州云海信息技术有限公司 A kind of virtual shared memory cell and the cluster storage system based on cloud storage
CN108989398B (en) * 2018-06-27 2021-02-02 苏州浪潮智能科技有限公司 Virtual shared storage unit and cluster storage system based on cloud storage
CN109343791A (en) * 2018-08-16 2019-02-15 武汉元鼎创天信息科技有限公司 A kind of big data all-in-one machine
CN109343791B (en) * 2018-08-16 2021-11-09 武汉元鼎创天信息科技有限公司 Big data all-in-one
CN110333945A (en) * 2019-05-09 2019-10-15 成都信息工程大学 A kind of dynamic load balancing method, system and terminal
CN112511576A (en) * 2019-09-16 2021-03-16 触景无限科技(北京)有限公司 Internet of things data processing system and data processing method
CN113225362A (en) * 2020-02-06 2021-08-06 北京京东振世信息技术有限公司 Server cluster system and implementation method thereof
CN113225362B (en) * 2020-02-06 2024-04-05 北京京东振世信息技术有限公司 Server cluster system and implementation method thereof
CN113094183A (en) * 2021-06-09 2021-07-09 苏州浪潮智能科技有限公司 Training task creating method, device, system and medium of AI (Artificial Intelligence) training platform
CN113094183B (en) * 2021-06-09 2021-09-17 苏州浪潮智能科技有限公司 Training task creating method, device, system and medium of AI (Artificial Intelligence) training platform

Similar Documents

Publication Publication Date Title
CN104580503A (en) Efficient dynamic load balancing system and method for processing large-scale data
CN110619595B (en) Graph calculation optimization method based on interconnection of multiple FPGA accelerators
CN105159610B (en) Large-scale data processing system and method
CN111221624B (en) Container management method for regulation and control cloud platform based on Docker container technology
CN101778002B (en) Large-scale cluster system and building method thereof
CN102567080B (en) Virtual machine position selection system facing load balance in cloud computation environment
CN103227838B (en) A kind of multi-load equilibrium treatment apparatus and method
CN105471985A (en) Load balance method, cloud platform computing method and cloud platform
CN104023088A (en) Storage server selection method applied to distributed file system
CN103188345A (en) Distributive dynamic load management system and distributive dynamic load management method
CN102158513A (en) Service cluster and energy-saving method and device thereof
CN105491138A (en) Load rate based graded triggering distributed load scheduling method
CN104301434B (en) A kind of high-speed communication framework and method based on cluster
CN104023062A (en) Heterogeneous computing-oriented hardware architecture of distributed big data system
CN110109756A (en) A kind of network target range construction method, system and storage medium
CN103595780A (en) Cloud computing resource scheduling method based on repeat removing
CN104239555A (en) MPP (massively parallel processing)-based parallel data mining framework and MPP-based parallel data mining method
CN103441918A (en) Self-organizing cluster server system and self-organizing method thereof
CN104375882A (en) Multistage nested data drive calculation method matched with high-performance computer structure
CN104618406A (en) Load balancing algorithm based on naive Bayesian classification
CN104461748A (en) Optimal localized task scheduling method based on MapReduce
CN110990154A (en) Big data application optimization method and device and storage medium
CN113259469A (en) Edge server deployment method, system and storage medium in intelligent manufacturing
CN107197039B (en) A kind of PAAS platform service packet distribution method and system based on CDN
CN102480502A (en) I/O load equilibrium method and I/O server

Legal Events

C06 / PB01: Publication
C10 / SE01: Entry into substantive examination (entry into force of request for substantive examination)
WD01: Invention patent application deemed withdrawn after publication
Application publication date: 2015-04-29