CN103870338A - Distributive parallel computing platform and method based on CPU (central processing unit) core management - Google Patents

Distributive parallel computing platform and method based on CPU (central processing unit) core management

Info

Publication number
CN103870338A
CN103870338A
Authority
CN
China
Prior art keywords
task
computing
cpu
platform
check
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410079473.5A
Other languages
Chinese (zh)
Inventor
杨冬
何春江
李文博
周智强
张丹丹
张松树
麻常辉
陈勇
裘微江
刘铭
臧主峰
李星
陈继林
郭中华
康建东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
China Electric Power Research Institute Co Ltd CEPRI
Electric Power Research Institute of State Grid Shandong Electric Power Co Ltd
Original Assignee
State Grid Corp of China SGCC
China Electric Power Research Institute Co Ltd CEPRI
Electric Power Research Institute of State Grid Shandong Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, China Electric Power Research Institute Co Ltd CEPRI, Electric Power Research Institute of State Grid Shandong Electric Power Co Ltd filed Critical State Grid Corp of China SGCC
Priority to CN201410079473.5A priority Critical patent/CN103870338A/en
Publication of CN103870338A publication Critical patent/CN103870338A/en
Pending legal-status Critical Current


Landscapes

  • Multi Processors (AREA)

Abstract

The invention discloses a distributed parallel computing platform based on CPU (central processing unit) core management. The platform comprises a data node server, a scheduling node server, and a computing node server. The data node server comprises at least one computer with a multi-core CPU and is used for storing historical results and hosting the database service node; the scheduling node server comprises at least one computer with a multi-core CPU and is used for scheduling and managing calculation tasks; the computing node server comprises at least one computer with a multi-core CPU and is used to process the data submitted by users and to invoke third-party calculation programs to participate in the computation, the third-party core calculation programs being deployed on the computing nodes. The beneficial effects of the platform are that it makes full use of CPU multi-core technology, greatly improves the computational efficiency of multi-task processing, and makes full use of computer resources; a multi-core processor has performance and efficiency advantages over a single-core processor and can become a widely adopted computation model.

Description

Distributed parallel computing platform and method based on CPU core management
Technical field
The present invention relates to the field of distributed parallel computing for electric power system simulation, and in particular to a distributed parallel computing platform and method based on CPU core management.
Background technology
A distributed parallel computing platform implements, in a multi-machine environment, calculation task distribution, task scheduling, result collection, error handling, and so on. It can rapidly complete power system simulation calculations, and it interoperates with application software through a standard extension interface.
Parallel cluster hardware has developed rapidly over the past three years: a parallel computing unit has grown from a dual-socket, dual-core machine to today's four-socket, six-core machine, so the usable CPU core count per machine has risen from 4 cores three years ago to 24 cores. With the construction of the D5000 platform project and the Shandong cloud simulation pilot project, the distributed parallel computing platform is being integrated into the D5000 platform and the cloud simulation platform as a basic function, playing a key, foundational supporting role in power system calculation and analysis.
At present, distributed parallel computing platforms have been successfully applied to the on-line operating state; to the large-grid early-warning and decision-support system and the security analysis system of the on-line study state; to early-warning calculations of the off-line research state; to the network version of comprehensive stability computation; to the cloud simulation platform; and so on.
Judging from current applications, the strengths and weaknesses are both clear. In the on-line operating state, whether a calculation is periodic, event-triggered, or manually triggered, the distributed parallel computing platform runs efficiently and stably when the calculation tasks saturate the available computing resources. For the on-line and off-line research states, when the total CPU core count of the parallel computing cluster is smaller than the total task count — that is, when tasks are saturated — operation is likewise efficient and stable. When the cluster's tasks are unsaturated, however, operation is stable but resource utilization is low: none of the current distributed computing platforms exploits the advantage of multi-core CPU processing, so multi-task computing efficiency is low and stability is limited. For example, with 40 fault calculations, a cluster of 52 computing blades, and 8 cores of computing resource per blade, the cluster can provide 416 CPU cores; while a single batch of tasks monopolizes the whole parallel computing cluster, the core utilization and duty cycle are both only 9.6%. The "barrel effect" on cluster computation time is also obvious: the total time of a single batch of tasks (stage and round) equals the longest single calculation time within the round.
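The 9.6% utilization figure above follows directly from the stated numbers. A quick sketch, using only the values given in the text:

```python
# Reproduce the utilization example from the text: 40 fault calculations
# running on a cluster of 52 computing blades, 8 cores per blade, with
# the legacy platform effectively using one core per task.
faults = 40
blades = 52
cores_per_blade = 8

total_cores = blades * cores_per_blade   # cores the cluster can provide
utilization = faults / total_cores       # fraction of cores actually busy

print(total_cores)                   # 416
print(round(utilization * 100, 1))   # 9.6 (percent), matching the text
```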
Summary of the invention
The object of the present invention is to address the above problems by proposing a distributed parallel computing platform and method based on CPU core management. The method can greatly improve grid computation and analysis capability and can improve the operational efficiency and stability of the distributed computing platform; in addition, today's large-memory computers, distributed storage technology, and so on provide strong technical support for multi-core management in the distributed parallel computing platform.
To achieve these goals, the present invention adopts following technical scheme:
A distributed parallel computing platform based on CPU core management, comprising:
Data node server: comprises at least one computer with a multi-core CPU, used for storing historical results and hosting the database service node.
Scheduling node server: comprises at least one computer with a multi-core CPU, used for scheduling and managing calculation tasks.
Computing node server: comprises at least one computer with a multi-core CPU, used to process the data submitted by users and to invoke third-party calculation programs to participate in the computation; the third-party core calculation programs are deployed on the computing nodes.
Calculation tasks are sent by the computing node server to the scheduling node server; the scheduling node server performs calculation-task scheduling and distributed parallel computation of the data by counting the idle CPU cores of the whole cluster, and stores the calculation results to the data node server.
A Linux operating system is installed on each of the computers, and the Linux SSH server is configured so that the computers can log in to one another without passwords.
A distributed parallel computing method based on CPU core management, comprising:
Building the parallel distributed computing platform based on core management, loading the configuration file, and reading the configuration information.
The parallel distributed computing platform collects and aggregates the idle CPU core count of the whole cluster.
For the calculation task data submitted by a client user, together with a configuration file TaskList containing the task calculation category, timeout, and calculation parameter information, the parallel distributed computing platform updates the cluster's idle CPU core count in real time and processes the calculation tasks in parallel.
The concrete method by which the parallel distributed computing platform aggregates the cluster's idle CPU core count is:
Each computing node, at a set interval, packs its local resource information, including CPU core count, hard disk space, and memory size, and sends the package to the scheduling node server.
After receiving a computing node's resource packet, the scheduling node server stores the CPU core count in a global structure variable.
The scheduling node's DistComp process starts a timer function. The timer function periodically checks whether every computing node has reported its resource information, judging from the node information update time whether a node's report has timed out; if it has, DistComp removes that node's CPU cores from the cluster's usable CPU core count, and stores the CPU core count currently usable by the cluster.
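The three steps above amount to a per-node heartbeat table plus a periodic sweep that evicts stale entries. A minimal sketch of that bookkeeping — class and field names are illustrative, not taken from the patent:

```python
import time

class CoreTracker:
    """Tracks reported CPU core counts per computing node and evicts
    nodes whose reports have gone stale (the DistComp timer check)."""

    def __init__(self, timeout_s=60.0):
        self.timeout_s = timeout_s
        self.nodes = {}  # node name -> (core_count, last_report_time)

    def report(self, node, cores, now=None):
        """Called when a node's resource packet arrives at the scheduler."""
        self.nodes[node] = (cores, time.time() if now is None else now)

    def sweep(self, now=None):
        """Periodic timer function: remove nodes that stopped reporting."""
        now = time.time() if now is None else now
        for node in list(self.nodes):
            _, last = self.nodes[node]
            if now - last > self.timeout_s:
                del self.nodes[node]  # reject its cores from the cluster total

    def total_cores(self):
        """CPU core count the cluster can currently use."""
        return sum(cores for cores, _ in self.nodes.values())
```

A usable cluster core count is then always `total_cores()`; a node that crashes simply stops reporting and drops out after one sweep, which is also how the platform detects a dead calculation service.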
The concrete flow by which the parallel distributed computing platform processes calculation tasks in parallel is:
When the scheduling node initializes, it starts one task processing thread and one message receiving thread, which continuously process tasks and handle messages sent back from other nodes, respectively.
After a user client issues a calculation task request, the scheduling node processes the necessary information in the request, stores the request as a new task, and places the new task in the pending task queue.
When the task processing thread finds that a new task has arrived, it first checks whether the cluster currently has idle cores available; if not, it returns to the thread and continues waiting. If idle core resources are available, the new task, after the task name is added for the calculation, is packed together with its data into a request to a computing node, and the platform's total idle core count is updated to the current total minus the core count occupied by this task.
A new task timer function is created to check whether the task has timed out. When the scheduling node receives the task's completion message, or the task's calculation times out, the scheduling node updates the platform's total idle core count to the current total plus the core count the task occupied, and writes the task into the database result table.
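The idle-core accounting in the two paragraphs above — subtract a task's core count on dispatch, add it back on completion or timeout — can be sketched as follows (names are illustrative, not from the patent):

```python
class IdleCoreLedger:
    """Platform-wide idle CPU core count, updated as tasks are
    dispatched and as they complete or time out."""

    def __init__(self, total_idle_cores):
        self.idle = total_idle_cores

    def try_dispatch(self, task_cores):
        """Dispatch only if enough idle cores exist; otherwise the
        task processing thread goes back to waiting."""
        if task_cores > self.idle:
            return False
        self.idle -= task_cores  # cores now occupied by the task
        return True

    def release(self, task_cores):
        """On a task-complete message or a task timeout, return the
        occupied cores to the platform's idle total."""
        self.idle += task_cores
```

The timer-based timeout and the completion message both funnel into the same `release` call, which is what keeps the global count consistent even when a computing node never answers.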
The beneficial effects of the invention are: the invention makes full use of CPU multi-core technology, greatly improving the computational efficiency of multi-task processing and making full use of computer resources. A multi-core processor has performance and efficiency advantages over a single-core processor and will become a widely adopted computation model.
Earlier distributed computing platforms did not exploit the advantage of multi-core CPU processing: multi-task computing efficiency was low and stability was limited. After re-architecting and applying CPU multi-core technology to the distributed parallel computing platform, computing speed and stability improve markedly; as long as computing resources are sufficient, any number of tasks can be completed in a short time.
Brief description of the drawings
Fig. 1 is a network diagram of the distributed parallel computing platform of the present invention;
Fig. 2 is the CPU core count collection flowchart of the distributed parallel computing platform of the present invention;
Fig. 3 is the task processing flowchart of the distributed parallel computing platform of the present invention;
Fig. 4 is the task data flow diagram of the distributed parallel computing platform of the present invention;
Fig. 5 is the result data flow diagram of the distributed parallel computing platform of the present invention.
Embodiment:
The present invention is further described below with reference to the accompanying drawings and embodiments:
One, building a distributed parallel computing platform based on CPU core management
The structure of the distributed parallel computing platform is shown in Fig. 1.
Hardware configuration
Data node server: comprises at least one computer with a multi-core CPU, used for storing historical results and hosting the database service node.
Scheduling node server: comprises at least one computer with a multi-core CPU, used for scheduling and managing calculation tasks.
Computing node server: comprises at least one computer with a multi-core CPU, used to process the data submitted by users and to invoke third-party calculation programs to participate in the computation; the third-party core calculation programs are deployed on the computing nodes.
Calculation tasks are sent by the computing node server to the scheduling node server; the scheduling node server performs calculation-task scheduling and distributed parallel computation of the data by counting the idle CPU cores of the whole cluster, and stores the calculation results to the data node server.
Software configuration
Operating system: Linux (any mainstream UNIX-like operating system will do)
Memory: 2 GB or more
Hard disk: 30 GB or more
CPU: 1 core or more
Platform deployment
1, newly-built user
Create the parallel computing platform user ndsa (if this user already exists on a machine, delete it first). This user performs power system simulation calculations in the distributed environment. Perform the following operations on all nodes:
$ su - root
$ groupadd ndsa
$ useradd -m -g ndsa ndsa
$ passwd ndsa    (set the password to ndsa)
2, platform bag is installed
Extract the relevant tgz packages on the data, scheduling, and computing nodes respectively. The directory structure after extraction is as follows:
Bin: platform and communication middleware executable programs
Conf: platform configuration files
Data: platform computation data directory
Lib: platform library file directory
Log: platform run-time log file directory
Temp: platform test directory
Tools: platform tool file directory
Task: staged result file storage directory
Result: temporary result file storage directory
Tools: parallel computing platform control scripts
Senddata: storage directory on the submitting side for issued simulation data
3, SSH configuration
This sets up password-free SSH login between platform nodes for the ndsa user.
Step 1:
Log in as the ndsa user on the scheduling node and on all computing nodes and execute the following commands:
$ rm -rf /home/ndsa/.ssh
$ ssh-keygen -t rsa    (press Enter at every prompt of this command)
Step 2:
Execute on the scheduling node:
$ cp /home/ndsa/.ssh/id_rsa.pub /home/ndsa/.ssh/authorized_keys
Step 3:
Execute the copy commands on the scheduling node:
$ scp -rp /home/ndsa/.ssh/* ndsa@<computing node name 1>:/home/ndsa/.ssh/
$ scp -rp /home/ndsa/.ssh/* ndsa@<computing node name 2>:/home/ndsa/.ssh/
... (and so on for the other computing nodes)
4, environment variable configuration
Edit the .bashrc file:
$ cd /home/ndsa/
$ vi .bashrc
Add:
export LD_LIBRARY_PATH=/home/ndsa/lib:/home/ndsa/lib64
Execute the command:
$ source .bashrc    (or restart the system)
Edit the .bash_profile file:
$ vi .bash_profile
Add:
PATH=$PATH:$HOME/bin:/sbin
export PATH
Execute the command:
$ source .bash_profile    (or restart the system)
Two, platform CPU core management
After the above steps, the entire core-management-based parallel distributed computing platform is essentially built. When the platform starts, it first loads the relevant configuration files and reads the necessary configuration information, such as the platform mode (on-line or off-line), network card information, and node information, and then enters the event loop. The function of each node category of the platform is compact and single, following modular and object-oriented design; the node functions are described as follows:
1, gateway node (configurable): the on-line trigger node (yjq), mainly used for issuing calculation tasks of the on-line distributed platform and for some third-party control programs.
2, data node: the historical result storage directory and, most importantly, the database service node.
3, scheduling node: the core of the whole parallel distributed computing platform, responsible for task scheduling and control, result recovery, database operations, and platform management.
4, computing node: responsible for processing the data submitted by users, invoking third-party calculation programs to participate in the computation, and sending the calculation results to the scheduling node.
The parallel distributed computing platform collects computing resources (CPU core counts) as shown in Fig. 2:
First, every 20 seconds (adjustable), each computing node packs its local resource information, including CPU core count, hard disk space, and memory size, and sends the package to the scheduling server. Then, after the scheduling node receives that node's resource packet, it stores the CPU core count in a global structure variable. Finally, the scheduling node's DistComp process starts a timer function that periodically checks whether every computing node has reported its resource information, judging from the node information update time whether a node's report has timed out; if it has, DistComp removes that node's CPU cores from the cluster's usable core count. This both maintains the CPU core resource information effectively and detects whether a node's calculation service is alive, greatly strengthening the stability and maintainability of the platform.
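The reporting loop described above can also be sketched from the computing-node side. The packet fields (core count, disk space, memory size) follow the text, while the field names, JSON encoding, and the transport callback are illustrative assumptions:

```python
import json
import os
import time

def build_resource_packet(node_name):
    """Assemble the periodic resource report a computing node sends to
    the scheduling node (field names are illustrative)."""
    return {
        "node": node_name,
        "cpu_cores": os.cpu_count(),  # the core count the scheduler aggregates
        "disk_free_bytes": None,      # would come from e.g. shutil.disk_usage
        "mem_free_bytes": None,       # would come from /proc/meminfo on Linux
        "timestamp": time.time(),
    }

def report_loop(node_name, send, interval_s=20, rounds=1):
    """Every `interval_s` seconds (20 s in the text, adjustable),
    serialize and send the packet; `send` is the transport callback."""
    for _ in range(rounds):
        send(json.dumps(build_resource_packet(node_name)))
        if rounds > 1:
            time.sleep(interval_s)
```

In the real platform the `send` callback would be the communication middleware; here it is left abstract so the sketch stays transport-agnostic.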
Three, user data protocol
Besides the necessary computation data directories (conf, data, para, etc.), the off-line single-stage calculation task data submitted by a client user must include a configuration file, TaskList, that describes the task division in detail, giving each task's calculation category, timeout, and fault or section count. With it, the scheduling node can store, distribute, and issue calculation tasks unambiguously, without risk of confusion.
The TaskList file content format is as follows:
[Table: TaskList file format — shown in the original publication only as an image (Figure BDA0000473302460000071) and not reproduced here.]
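Since the original table survives only as an image, the following is a purely hypothetical illustration of what a TaskList might contain, built from the fields the text names (calculation category, timeout, fault or section count); the column names and values are invented and the patent's actual format may differ:

```
# Hypothetical TaskList layout -- illustrative only, not the patent's format.
TaskName    CalcCategory    Timeout(s)    FaultCount
task_001    transient       600           40
task_002    stability       300           12
```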
Four, parallel computing platform task processing
The platform's handling of calculation tasks is relatively complex, and the flow varies with the point of emphasis; the processing flow for CPU-core-managed tasks is shown in Fig. 3:
The flow analysis shows that the platform depends heavily on CPU cores: only when idle cores exist does the platform issue a calculation task to a computing node to participate in the computation; otherwise the platform waits until a core becomes free.
When the platform's scheduling node initializes, it starts one task processing thread and one message receiving thread, which continuously process tasks and handle messages sent back from other nodes, respectively. After a user client issues a calculation task request, the scheduling node processes some necessary information of the task and then places the new task in the pending task queue.
When the task processing thread finds that a new task has arrived, it first checks whether the cluster currently has idle cores available; if not, it returns to the thread and continues waiting. If idle core resources are available at that moment, the new task, after header information is added for the calculation, is packed together with its data into a request to a computing node, and the platform's current total idle core count is updated to the previous total minus the core count this task occupies.
Finally, a new task timer function is created to check whether the task has timed out. When the scheduling node receives the task's completion message, or the task's calculation times out, the scheduling node updates the current total idle core count to the previous total plus the core count the task occupied, and then writes the task into the database result table.
Five, platform program description
[Table: platform program description — shown in the original publication only as images (Figure BDA0000473302460000072, Figure BDA0000473302460000081) and not reproduced here.]
Platform start and stop
Execute the platform start/stop functions at the command-line prompt of the ndsa user on the scheduling server node:
Start the platform: startplatformd
Stop the platform: stopplatformd
Fig. 4 and Fig. 5 show the task data flow and the result data flow of the distributed parallel computing platform, respectively. After the client submits the task information, the calculation result data are obtained through scheduling management and parallel computation, and the result data are returned to the client by the scheduling management.
Although the specific embodiments of the present invention have been described above with reference to the accompanying drawings, they do not limit the scope of the invention. Those skilled in the art should understand that, on the basis of the technical solution of the present invention, various modifications or variations that can be made without creative effort still fall within the protection scope of the present invention.

Claims (5)

1. A distributed parallel computing platform based on CPU core management, characterized in that it comprises:
a data node server: comprising at least one computer with a multi-core CPU, used for storing historical results and hosting the database service node;
a scheduling node server: comprising at least one computer with a multi-core CPU, used for scheduling and managing calculation tasks;
a computing node server: comprising at least one computer with a multi-core CPU, used to process the data submitted by users and to invoke third-party calculation programs to participate in the computation, the third-party core calculation programs being deployed on the computing nodes;
wherein calculation tasks are sent by the computing node server to the scheduling node server; the scheduling node server performs calculation-task scheduling and distributed parallel computation of the data by counting the idle CPU cores of the whole cluster, and stores the calculation results to the data node server.
2. The distributed parallel computing platform based on CPU core management as claimed in claim 1, characterized in that a Linux operating system is installed on each said computer, and the Linux SSH server is configured so that the computers can log in to one another without passwords.
3. A distributed parallel computing method based on CPU core management as claimed in claim 1, characterized in that it comprises:
building the parallel distributed computing platform based on core management, loading the configuration file, and reading the configuration information;
the parallel distributed computing platform collecting and aggregating the idle CPU core count of the whole cluster;
for the calculation task data submitted by a client user, together with a configuration file TaskList containing the task calculation category, timeout, and calculation parameter information, the parallel distributed computing platform updating the cluster's idle CPU core count in real time and processing the calculation tasks in parallel.
4. The distributed parallel computing method based on CPU core management as claimed in claim 3, characterized in that the concrete method by which the parallel distributed computing platform aggregates the cluster's idle CPU core count is:
each computing node, at a set interval, packing its local resource information, including CPU core count, hard disk space, and memory size, and sending the package to the scheduling node server;
after receiving a computing node's resource packet, the scheduling node server storing the CPU core count in a global structure variable;
the scheduling node's DistComp process starting a timer function, the timer function periodically checking whether every computing node has reported its resource information, judging from the node information update time whether a node's report has timed out, and, if it has, DistComp removing that node's CPU cores from the cluster's usable CPU core count and storing the CPU core count currently usable by the cluster.
5. The distributed parallel computing method based on CPU core management as claimed in claim 3, characterized in that the concrete flow by which the parallel distributed computing platform processes calculation tasks in parallel is:
when the scheduling node initializes, starting one task processing thread and one message receiving thread, which continuously process tasks and handle messages sent back from other nodes, respectively;
after a user client issues a calculation task request, the scheduling node processing the necessary information in the request, storing the request as a new task, and placing the new task in the pending task queue;
when the task processing thread finds that a new task has arrived, first checking whether the cluster currently has idle cores available; if not, returning to the thread to continue waiting; if idle core resources are available, packing the new task, after the task name is added for the calculation, together with its data into a request to a computing node, and updating the platform's total idle core count to the current total minus the core count occupied by this task;
creating a new task timer function to check whether the task has timed out; when the scheduling node receives the task's completion message, or the task's calculation times out, the scheduling node updating the platform's total idle core count to the current total plus the core count the task occupied, and writing the task into the database result table.
CN201410079473.5A 2014-03-05 2014-03-05 Distributive parallel computing platform and method based on CPU (central processing unit) core management Pending CN103870338A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410079473.5A CN103870338A (en) 2014-03-05 2014-03-05 Distributive parallel computing platform and method based on CPU (central processing unit) core management

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410079473.5A CN103870338A (en) 2014-03-05 2014-03-05 Distributive parallel computing platform and method based on CPU (central processing unit) core management

Publications (1)

Publication Number Publication Date
CN103870338A true CN103870338A (en) 2014-06-18

Family

ID=50908900

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410079473.5A Pending CN103870338A (en) 2014-03-05 2014-03-05 Distributive parallel computing platform and method based on CPU (central processing unit) core management

Country Status (1)

Country Link
CN (1) CN103870338A (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104035817A (en) * 2014-07-08 2014-09-10 领佰思自动化科技(上海)有限公司 Distributed parallel computing method and system for physical implementation of large scale integrated circuit
CN104331327A (en) * 2014-12-02 2015-02-04 山东乾云启创信息科技有限公司 Optimization method and optimization system for task scheduling in large-scale virtualization environment
CN106055311A (en) * 2016-05-26 2016-10-26 浙江工业大学 Multi-threading Map Reduce task parallelizing method based on assembly line
CN107370796A (en) * 2017-06-30 2017-11-21 香港红鸟科技股份有限公司 A kind of intelligent learning system based on Hyper TF
CN107506932A (en) * 2017-08-29 2017-12-22 广州供电局有限公司 Power grid risk scenes in parallel computational methods and system
CN108256263A (en) * 2018-02-07 2018-07-06 中国电力科学研究院有限公司 A kind of electric system hybrid simulation concurrent computational system and its method for scheduling task
CN108762725A (en) * 2018-05-31 2018-11-06 飞天诚信科技股份有限公司 A kind of method and system that distributed random number is generated and detected
CN108762929A (en) * 2018-05-30 2018-11-06 郑州云海信息技术有限公司 The method and apparatus that processor core number is managed under SQL database
CN109189580A (en) * 2018-09-17 2019-01-11 武汉虹旭信息技术有限责任公司 A kind of multitask development model and its method based on multi-core platform
CN109815002A (en) * 2017-11-21 2019-05-28 中国电力科学研究院有限公司 A kind of distributed paralleling calculation platform and its method based on in-circuit emulation
CN109951470A (en) * 2019-03-12 2019-06-28 湖北大学 A kind of information of multiple computing device Distributed Parallel Computing issues and result method for uploading
CN111427690A (en) * 2020-03-25 2020-07-17 杭州意能电力技术有限公司 Parallel computing method for distributed processing units
CN111459665A (en) * 2020-03-27 2020-07-28 重庆电政信息科技有限公司 Distributed edge computing system and distributed edge computing method
CN112182770A (en) * 2020-10-10 2021-01-05 中国运载火箭技术研究院 Online iterative computation method and device, computer storage medium and electronic equipment
CN112671889A (en) * 2020-12-21 2021-04-16 高新兴智联科技有限公司 Method for realizing distributed Internet of things middleware supporting multiple protocols
CN113590331A (en) * 2021-08-05 2021-11-02 山东派盟网络科技有限公司 Task processing method, control device and storage medium
CN114077581A (en) * 2021-11-24 2022-02-22 北京白板科技有限公司 Database based on data aggregation storage mode

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101349974A (en) * 2007-07-16 2009-01-21 中兴通讯股份有限公司 Method for improving multi-core CPU processing ability in distributed system
CN101436098A (en) * 2008-12-24 2009-05-20 华为技术有限公司 Method and apparatus for reducing power consumption of multiple-core symmetrical multiprocessing system
CN101977313A (en) * 2010-09-20 2011-02-16 中国科学院计算技术研究所 Video signal coding device and method
US20110246748A1 (en) * 2010-04-06 2011-10-06 Vanish Talwar Managing Sensor and Actuator Data for a Processor and Service Processor Located on a Common Socket

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Liu Zhao: "Design and Implementation of a Massive Video Conversion Platform Based on Cloud Computing", China Masters' Theses Full-text Database, Information Science and Technology Series *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104035817A (en) * 2014-07-08 2014-09-10 Lingbaisi Automation Technology (Shanghai) Co., Ltd. Distributed parallel computing method and system for physical implementation of large-scale integrated circuits
CN104331327B (en) * 2014-12-02 2017-07-11 Shandong Qianyun Qichuang Information Technology Co., Ltd. Optimization method and optimization system for task scheduling in large-scale virtualization environments
CN104331327A (en) * 2014-12-02 2015-02-04 Shandong Qianyun Qichuang Information Technology Co., Ltd. Optimization method and optimization system for task scheduling in large-scale virtualization environments
CN106055311B (en) * 2016-05-26 2018-06-26 Zhejiang University of Technology Pipeline-based multithreaded MapReduce task parallelization method
CN106055311A (en) * 2016-05-26 2016-10-26 Zhejiang University of Technology Pipeline-based multithreaded MapReduce task parallelization method
CN107370796A (en) * 2017-06-30 2017-11-21 Hong Kong Red Bird Technology Co., Ltd. Intelligent learning system based on Hyper TF
CN107370796B (en) * 2017-06-30 2021-01-08 Shenzhen Zhixing Technology Co., Ltd. Intelligent learning system based on Hyper TF
CN107506932A (en) * 2017-08-29 2017-12-22 Guangzhou Power Supply Bureau Co., Ltd. Parallel computation method and system for power grid risk scenarios
CN109815002A (en) * 2017-11-21 2019-05-28 China Electric Power Research Institute Co., Ltd. Distributed parallel computing platform and method based on online simulation
CN108256263A (en) * 2018-02-07 2018-07-06 China Electric Power Research Institute Co., Ltd. Power system hybrid simulation parallel computing system and task scheduling method
CN108762929A (en) * 2018-05-30 2018-11-06 Zhengzhou Yunhai Information Technology Co., Ltd. Method and apparatus for managing the number of processor cores under a SQL database
CN108762929B (en) * 2018-05-30 2022-03-22 Zhengzhou Yunhai Information Technology Co., Ltd. Method and apparatus for managing the number of processor cores under a SQL database
CN108762725B (en) * 2018-05-31 2021-01-01 Feitian Technologies Co., Ltd. Distributed random number generation and detection method and system
CN108762725A (en) * 2018-05-31 2018-11-06 Feitian Technologies Co., Ltd. Distributed random number generation and detection method and system
CN109189580A (en) * 2018-09-17 2019-01-11 Wuhan Hongxu Information Technology Co., Ltd. Multitask development model and method based on a multi-core platform
CN109951470A (en) * 2019-03-12 2019-06-28 Hubei University Information distribution and result uploading method for distributed parallel computing across multiple computing devices
CN111427690A (en) * 2020-03-25 2020-07-17 Hangzhou Yineng Electric Power Technology Co., Ltd. Parallel computing method for distributed processing units
CN111427690B (en) * 2020-03-25 2023-04-18 Hangzhou Yineng Electric Power Technology Co., Ltd. Parallel computing method for distributed processing units
CN111459665A (en) * 2020-03-27 2020-07-28 Chongqing Dianzheng Information Technology Co., Ltd. Distributed edge computing system and distributed edge computing method
CN112182770A (en) * 2020-10-10 2021-01-05 China Academy of Launch Vehicle Technology Online iterative computation method and device, computer storage medium and electronic equipment
CN112671889A (en) * 2020-12-21 2021-04-16 Gosuncn Zhilian Technology Co., Ltd. Method for realizing distributed Internet of Things middleware supporting multiple protocols
CN112671889B (en) * 2020-12-21 2022-05-10 Gosuncn Zhilian Technology Co., Ltd. Method for realizing distributed Internet of Things middleware supporting multiple protocols
CN113590331A (en) * 2021-08-05 2021-11-02 Shandong Paimeng Network Technology Co., Ltd. Task processing method, control device and storage medium
CN114077581A (en) * 2021-11-24 2022-02-22 Beijing Baiban Technology Co., Ltd. Database based on a data aggregation storage mode

Similar Documents

Publication Publication Date Title
CN103870338A (en) Distributive parallel computing platform and method based on CPU (central processing unit) core management
Sun et al. Modeling a dynamic data replication strategy to increase system availability in cloud computing environments
CN107729138B (en) Method and device for analyzing high-performance distributed vector space data
CN108810115B (en) Load balancing method and device suitable for distributed database and server
CN104166600A (en) Data backup and recovery methods and devices
Zhang et al. Accelerating MapReduce with distributed memory cache
CN103617067A (en) Electric power software simulation system based on cloud computing
CN103701635A (en) Method and device for configuring Hadoop parameters online
Thakkar et al. Renda: resource and network aware data placement algorithm for periodic workloads in cloud
CN101794993A (en) Grid simulation real-time parallel computing platform based on MPI (Multi Point Interface) and application thereof
CN109614227A (en) Task resource allocation method, device, electronic equipment and computer-readable medium
dos Anjos et al. Smart: An application framework for real time big data analysis on heterogeneous cloud environments
Lu et al. Assessing MapReduce for internet computing: a comparison of Hadoop and BitDew-MapReduce
WO2023231145A1 (en) Data processing method and system based on cloud platform, and electronic device and storage medium
CN106383861A (en) Data synchronization method and apparatus used for databases
Krevat et al. Applying performance models to understand data-intensive computing efficiency
US9934268B2 (en) Providing consistent tenant experiences for multi-tenant databases
Huang et al. Research and application of microservice in power grid dispatching control system
Li et al. Building an HPC-as-a-service toolkit for user-interactive HPC services in the cloud
CN106033211B (en) Method and device for controlling glue-head cleaning of a glue-coating board
Fang et al. A parallel computing framework for cloud services
Abyaneh et al. Malcolm: Multi-agent learning for cooperative load management at rack scale
CN111367875B (en) Ticket file processing method, system, equipment and medium
Leijiao et al. Framework design of cloud computing technology application in power system transient simulation
Pugdeethosapol et al. Dynamic configuration of the computing nodes of the ALICE O2 system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20140618