CN102306205A - Method and device for allocating transactions - Google Patents

Method and device for allocating transactions

Info

Publication number
CN102306205A
Authority
CN
China
Prior art keywords
transaction
resource
degree
cpu
parallelism
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201110303344A
Other languages
Chinese (zh)
Inventor
赵雷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou University
Original Assignee
Suzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou University filed Critical Suzhou University
Priority to CN201110303344A priority Critical patent/CN102306205A/en
Publication of CN102306205A publication Critical patent/CN102306205A/en
Pending legal-status Critical Current

Abstract

The embodiment of the invention discloses a method and a device for allocating transactions. The method and the device are applied to an asymmetric multiprocessor cluster and allocate each transaction to an appropriate central processing unit (CPU) for execution according to the intra-transaction parallelism degree of the transaction. The method disclosed by the embodiment of the invention comprises the following steps: acquiring the intra-transaction parallelism degree of each transaction in a queue; sorting the transactions in the queue according to a sorting rule, based on the intra-transaction parallelism degrees; sorting the idle CPUs in the system according to the same sorting rule, based on the number of computing cores of each CPU; and allocating the sorted transactions in sequence to the sorted CPUs for execution, so that CPUs with more cores execute transactions with higher intra-transaction parallelism degrees and CPUs with fewer cores execute transactions with lower intra-transaction parallelism degrees. By applying the technical scheme of the invention, the problem of load imbalance in the asymmetric multiprocessor cluster can be solved and the performance of the database management system is improved.

Description

Transaction allocation method and device
Technical field
The present invention relates to the field of database technology, and in particular to a transaction allocation method and device.
Background art
A database management system (DBMS) is a data processing core mechanism developed to meet the needs of data processing. In a DBMS, a program execution unit that accesses, and possibly updates, various data in the database is called a transaction, and the central processing unit (CPU) carries out the work of executing transactions. Originally, a DBMS used only a single single-core CPU to execute transactions. Later, as user application levels rose, a single single-core CPU could no longer meet the demands of practical applications, so vendors adopted symmetric multiprocessor (Symmetric Multi-Processor, SMP) clusters to solve this problem. An SMP system is a database transaction execution cluster that aggregates multiple single-core CPUs; the DBMS allocates transactions symmetrically across the SMP cluster for execution, which greatly improves the data processing capacity of the system.
At present, multi-core CPU technology has greatly improved the processing power of a single CPU, and a multi-core CPU is well suited to executing transactions with a high intra-transaction parallelism degree. Meanwhile, user application levels in database administration keep rising; to satisfy more demanding applications, users equip the SMP cluster with multi-core CPUs. A multi-core CPU can execute multiple statements of the same transaction in parallel, which greatly strengthens the processing power of the cluster. A cluster equipped with CPUs of differing core counts is no longer a symmetric multiprocessor (SMP) cluster; it becomes a new kind of asymmetric multiprocessor cluster.
However, in an asymmetric multiprocessor cluster, the current transaction allocation method assigns transactions to randomly chosen CPUs for execution. This may cause a CPU with many cores to execute a transaction with a low intra-transaction parallelism degree while a CPU with few cores executes a transaction with a high intra-transaction parallelism degree, resulting in load imbalance across the asymmetric multiprocessor cluster and reducing the performance of the DBMS.
Summary of the invention
The embodiment of the invention provides a transaction allocation method and device, used in an asymmetric multiprocessor cluster to assign each transaction to an appropriate CPU for execution according to its intra-transaction parallelism degree, thereby solving the load imbalance of the asymmetric multiprocessor cluster and improving the performance of the DBMS.
A transaction allocation method comprises:
obtaining the intra-transaction parallelism degree of each transaction in a queue;
sorting the transactions in the queue according to a sorting rule, based on the intra-transaction parallelism degrees;
sorting the idle CPUs in the system according to the same sorting rule, based on the number of computing cores of each central processing unit (CPU);
allocating the sorted transactions in sequence to the sorted CPUs for execution, so that CPUs with more cores execute transactions with higher intra-transaction parallelism degrees and CPUs with fewer cores execute transactions with lower intra-transaction parallelism degrees.
A transaction allocation device comprises:
a parallelism degree acquisition module, configured to obtain the intra-transaction parallelism degree of each transaction in a queue;
a transaction sorting module, configured to sort the transactions in the queue according to a sorting rule, based on the intra-transaction parallelism degrees;
a central processing unit (CPU) sorting module, configured to sort the idle CPUs in the system according to the same sorting rule, based on the number of computing cores of each CPU;
an allocation module, configured to allocate the sorted transactions in sequence to the sorted CPUs for execution, so that CPUs with more cores execute transactions with higher intra-transaction parallelism degrees and CPUs with fewer cores execute transactions with lower intra-transaction parallelism degrees.
As can be seen from the above technical scheme, the embodiment of the invention has the following advantages:
The transactions are sorted according to a sorting rule based on their intra-transaction parallelism degrees; the idle CPUs are sorted according to the same sorting rule based on their core counts; finally the sorted transactions are allocated in sequence to the sorted CPUs for execution, so that CPUs with more cores execute transactions with higher intra-transaction parallelism degrees and CPUs with fewer cores execute transactions with lower intra-transaction parallelism degrees. This balances the load of the asymmetric multiprocessor cluster and thereby improves the performance of the DBMS.
Description of drawings
Fig. 1 is a basic flowchart of the transaction allocation method of the first embodiment of the invention;
Fig. 2 is a detailed flowchart of the transaction allocation method of the second embodiment of the invention;
Fig. 3 is the resource precedence graph of an example transaction in the second embodiment of the invention;
Fig. 4 is a detailed flowchart of the transaction allocation method of the third embodiment of the invention;
Fig. 5 is the resource precedence graph of an example transaction in the third embodiment of the invention;
Fig. 6 is a basic structural diagram of the transaction allocation device of the fourth embodiment of the invention;
Fig. 7 is a detailed structural diagram of the transaction allocation device of the fifth embodiment of the invention.
Embodiment
The embodiment of the invention provides a transaction allocation method, used in an asymmetric multiprocessor cluster to assign each transaction to an appropriate CPU for execution according to its intra-transaction parallelism degree, thereby solving the load imbalance of the asymmetric multiprocessor cluster and improving the performance of the DBMS. The embodiment of the invention also provides a related device for implementing the method. Both are described in detail below.
The basic flow of the transaction allocation method of the first embodiment of the invention is shown in Fig. 1 and mainly comprises the following steps:
101. Obtain the intra-transaction parallelism degree of each transaction in the queue.
The DBMS obtains the intra-transaction parallelism degree of each transaction in the queue.
The intra-transaction parallelism degree of a transaction is a number N, meaning that at most N statements of the transaction can be executed in parallel. To avoid parallelizable sub-queries having to wait during execution of the transaction, a CPU with N cores should be assigned to execute it; this prevents parallelizable statements from waiting while also not wasting CPU resources.
102. Sort the transactions in the queue according to a sorting rule, based on their intra-transaction parallelism degrees.
The DBMS sorts all transactions in the queue according to a predefined sorting rule, based on the intra-transaction parallelism degree of each transaction obtained in step 101.
103. Sort the idle CPUs in the system according to the same sorting rule, based on the number of computing cores of each CPU.
The DBMS sorts the idle CPUs in the system by their number of computing cores, using the same sorting rule as in step 102.
104. Allocate the sorted transactions in sequence to the sorted CPUs for execution.
The DBMS allocates the transactions of the queue sorted in step 102, in sequence, to the CPUs sorted in step 103 for execution, so that CPUs with more cores execute transactions with higher intra-transaction parallelism degrees and CPUs with fewer cores execute transactions with lower intra-transaction parallelism degrees. This completes the allocation of transactions.
In the present embodiment, the sorting rule used in steps 102 and 103 must satisfy the needs of the method: when the transactions sorted by this rule are allocated in sequence to the CPUs sorted by the same rule, CPUs with more cores must execute the transactions with higher intra-transaction parallelism degrees and CPUs with fewer cores the transactions with lower intra-transaction parallelism degrees.
In the method of the present embodiment, the intra-transaction parallelism degree of each transaction is obtained first; the transactions are then sorted according to a sorting rule based on these degrees; the idle CPUs are sorted according to the same sorting rule based on their core counts; finally the sorted transactions are allocated in sequence to the sorted CPUs for execution, so that CPUs with more cores execute transactions with higher intra-transaction parallelism degrees and CPUs with fewer cores execute transactions with lower intra-transaction parallelism degrees. This balances the load of the asymmetric multiprocessor cluster and thereby improves the performance of the DBMS.
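For illustration, a minimal Python sketch of steps 101 to 104 is given below; the Transaction and Cpu types, their field names and the descending sorting rule are assumptions made for the example, not part of the claimed method.

    from dataclasses import dataclass

    @dataclass
    class Transaction:
        tx_id: int
        parallelism: int   # intra-transaction parallelism degree (step 101)

    @dataclass
    class Cpu:
        cpu_id: int
        cores: int         # number of computing cores

    def allocate(queue, idle_cpus):
        # Step 102: sort transactions by intra-transaction parallelism degree (descending rule assumed).
        txs = sorted(queue, key=lambda t: t.parallelism, reverse=True)
        # Step 103: sort idle CPUs by core count with the same rule.
        cpus = sorted(idle_cpus, key=lambda c: c.cores, reverse=True)
        # Step 104: assign in sequence; zip stops at the shorter list, so leftover
        # transactions would simply wait for the next batch of idle CPUs.
        return list(zip(txs, cpus))

    queue = [Transaction(1, 2), Transaction(2, 4), Transaction(3, 1)]
    idle = [Cpu(0, 1), Cpu(1, 8), Cpu(2, 4)]
    for tx, cpu in allocate(queue, idle):
        print(f"transaction {tx.tx_id} (parallelism {tx.parallelism}) -> CPU {cpu.cpu_id} ({cpu.cores} cores)")

In this example the transaction with parallelism degree 4 is paired with the 8-core CPU and the transaction with parallelism degree 1 with the single-core CPU, which is exactly the mapping the sorting rule is required to produce.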
The second embodiment of the invention describes the transaction allocation method of the first embodiment in detail. The detailed flow of the transaction allocation method of the second embodiment is shown in Fig. 2 and mainly comprises the following steps:
201. Obtain the resource snapshot of every statement of every transaction in the waiting queue.
The DBMS precompiles every statement of every transaction in the waiting queue, i.e. performs syntactic and lexical analysis, to find the operation keyword and the operands of the statement. There are four operation keywords: select, insert, delete and update; select is a read operation and the others are write operations. The operands are the data tables operated on.
Through the above syntactic and lexical analysis, the resource snapshot of each statement can be obtained. The resource snapshot indicates the read resource set and the write resource set of the statement. For example, a resource snapshot may take the form E_x^y: read {A, B}, write {C}, where x is the unique number of the transaction, y is the sequence number of the statement within the transaction, the set {A, B} indicates that the statement reads data from tables A and B, and the set {C} indicates that the statement writes data to table C. The set {A, B} is the read resource set of statement E_x^y and the set {C} is its write resource set.
202. According to the resource snapshot, obtain the read resource set and the write resource set of each statement.
The resource snapshot indicates the read resource set and the write resource set of each statement of the transactions in the waiting queue, so the system can obtain the read resource set and the write resource set of a statement from its resource snapshot.
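For illustration, the following Python sketch derives a resource snapshot from a single statement by naive keyword matching; an actual DBMS would use its own parser, and the regular expressions here are simplifying assumptions that ignore joins, subqueries, aliases and quoting.

    import re

    def resource_snapshot(sql):
        """Return (read_set, write_set) of table names for one statement (steps 201-202)."""
        s = sql.strip().lower()
        reads, writes = set(), set()
        if s.startswith("select"):
            # select is the only read operation; its operands are the tables after FROM.
            m = re.search(r"\bfrom\s+([\w\s,]+?)(?:\bwhere\b|$)", s)
            if m:
                reads.update(t.strip() for t in m.group(1).split(",") if t.strip())
        elif s.startswith(("insert", "update", "delete")):
            # insert/update/delete are write operations; the operand is the target table.
            m = re.search(r"\b(?:into|update|from)\s+(\w+)", s)
            if m:
                writes.add(m.group(1))
        return reads, writes

    print(resource_snapshot("SELECT x FROM A, B WHERE A.id = B.id"))  # reads {'a', 'b'}, writes nothing
    print(resource_snapshot("INSERT INTO C VALUES (1)"))              # reads nothing, writes {'c'}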
203. According to the read resource sets and write resource sets, build a resource precedence graph for each transaction in the waiting queue.
The DBMS builds, from the read resource sets and write resource sets obtained in step 202, a resource precedence graph for each transaction in the waiting queue; the resource precedence graph indicates the dependencies between the statements of the transaction.
In the graph, every statement of the transaction is represented by a node. For example, suppose a transaction contains 9 statements, E1 to E9. Two statements, a first statement and a second statement, have a dependency if the intersection of the read resource set of the first statement and the write resource set of the second statement is not empty, or the intersection of the write resource set of the first statement and the read resource set of the second statement is not empty, or the intersection of the write resource set of the first statement and the write resource set of the second statement is not empty. In that case a directed edge is drawn between the two nodes representing the first and second statements, pointing from the node of the earlier statement to the node of the later statement. Fig. 3 shows the resource precedence graph assumed to be built for this example transaction.
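For illustration, a minimal Python sketch of the graph construction under the read/write, write/read and write/write conflict rules described above; the Statement tuple and the adjacency-list representation are choices made for the example.

    from typing import NamedTuple

    class Statement(NamedTuple):
        name: str
        reads: set
        writes: set

    def build_precedence_graph(stmts):
        """Edges point from the earlier statement to the later, dependent statement."""
        edges = {s.name: set() for s in stmts}
        for i, earlier in enumerate(stmts):
            for later in stmts[i + 1:]:
                rw = earlier.reads & later.writes      # read-write conflict
                wr = earlier.writes & later.reads      # write-read conflict
                ww = earlier.writes & later.writes     # write-write conflict
                if rw or wr or ww:
                    edges[earlier.name].add(later.name)
        return edges

    # Example: E1 writes A, E2 reads A, E3 reads only B -> edge E1 -> E2, E3 independent.
    g = build_precedence_graph([
        Statement("E1", set(), {"A"}),
        Statement("E2", {"A"}, set()),
        Statement("E3", {"B"}, set()),
    ])
    print(g)  # {'E1': {'E2'}, 'E2': set(), 'E3': set()}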
204. Compute the width of the resource precedence graph; the width of the resource precedence graph equals the intra-transaction parallelism degree of the transaction.
Take the resource precedence graph of Fig. 3 as an example. First count the nodes with in-degree 0: only E1 has in-degree 0, so record the graph width as max=1. Then remove the in-degree-0 nodes and the directed edges starting from them, while retaining any in-degree-0 node whose out-degree is 0. After E1 and the edges starting from E1 are removed, 3 nodes have in-degree 0, namely E2, E3 and E4; since this number is larger than the number of in-degree-0 nodes in the previous step, the width is updated to max=3 (otherwise the max value is kept unchanged). Repeat the step: remove the in-degree-0 nodes and their outgoing edges, retaining those with out-degree 0; afterwards 2 nodes have in-degree 0, namely E4 and E5, so max=3 is kept. Repeating again, 4 nodes have in-degree 0, namely E4, E6, E7 and E8, so max=4 is recorded. Repeating once more, 3 nodes have in-degree 0, namely E4, E8 and E9, so max=4 is kept. The width of the resource precedence graph thus obtained is 4, so the intra-transaction parallelism degree of the transaction is 4.
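For illustration, a Python sketch of the width computation as described in the walkthrough above; following that walkthrough, in-degree-0 nodes whose out-degree is also 0 are retained in later rounds, and the graph representation matches the previous sketch.

    def graph_width(edges):
        """Width of the resource precedence graph = intra-transaction parallelism degree."""
        nodes = set(edges)
        max_width = 0
        while nodes:
            indeg = {n: 0 for n in nodes}
            for src in nodes:
                for dst in edges[src]:
                    if dst in nodes:
                        indeg[dst] += 1
            layer = {n for n in nodes if indeg[n] == 0}
            if not layer:          # cycle guard; a well-formed precedence graph is acyclic
                break
            max_width = max(max_width, len(layer))
            # Remove in-degree-0 nodes and their outgoing edges, but retain those
            # whose out-degree is 0 (they stay parallel with the later layers).
            removable = {n for n in layer if any(d in nodes for d in edges[n])}
            if not removable:      # only sinks remain; the width can no longer grow
                break
            nodes -= removable
        return max_width

    print(graph_width({"E1": {"E2"}, "E2": set(), "E3": set()}))  # 2: E2 and E3 can run in parallel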
205. Sort the transactions in the waiting queue in descending order of their intra-transaction parallelism degrees.
The DBMS sorts all transactions in the waiting queue according to the descending sorting rule, based on the intra-transaction parallelism degree of each transaction obtained in step 204.
206. Sort the idle CPUs in the system in descending order of their number of computing cores.
The DBMS sorts the idle CPUs in the system by their number of computing cores, using the same descending sorting rule as in step 205.
207. Allocate the transactions of the sorted waiting queue in sequence to the sorted CPUs for execution.
The DBMS allocates the transactions of the waiting queue sorted in step 205, in sequence, to the CPUs sorted in step 206 for execution, so that CPUs with more cores execute transactions with higher intra-transaction parallelism degrees and CPUs with fewer cores execute transactions with lower intra-transaction parallelism degrees. This completes the allocation of transactions.
In the present embodiment, the sorting rule used in steps 205 and 206 satisfies the needs of the method: when the transactions sorted by the descending rule are allocated in sequence to the CPUs sorted by the same rule, CPUs with more cores execute the transactions with higher intra-transaction parallelism degrees and CPUs with fewer cores the transactions with lower intra-transaction parallelism degrees.
In the method of the present embodiment, the resource snapshot of every statement of every transaction in the waiting queue is obtained first; the read resource set and write resource set of each statement are obtained from the resource snapshot; a resource precedence graph is then built for each transaction and its width is computed, giving the intra-transaction parallelism degree of the transaction; the transactions are sorted according to the descending sorting rule based on these degrees; the idle CPUs are sorted according to the same descending rule based on their core counts; finally the sorted transactions are allocated in sequence to the sorted CPUs for execution, so that CPUs with more cores execute transactions with higher intra-transaction parallelism degrees and CPUs with fewer cores execute transactions with lower intra-transaction parallelism degrees. This balances the load of the asymmetric multiprocessor cluster and thereby improves the performance of the DBMS.
The third embodiment of the invention describes the transaction allocation method of the first embodiment in detail. The detailed flow of the transaction allocation method of the third embodiment is shown in Fig. 4 and mainly comprises the following steps:
401. Obtain the resource snapshot of every statement of every transaction in the waiting queue.
The DBMS precompiles every statement of every transaction in the waiting queue, i.e. performs syntactic and lexical analysis, to find the operation keyword and the operands of the statement. There are four operation keywords: select, insert, delete and update; select is a read operation and the others are write operations. The operands are the data tables operated on.
Through the above syntactic and lexical analysis, the resource snapshot of each statement can be obtained. The resource snapshot indicates the read resource set and the write resource set of the statement. For example, a resource snapshot may take the form F_x^y: read {A, B}, write {C}, where x is the unique number of the transaction, y is the sequence number of the statement within the transaction, the set {A, B} indicates that the statement reads data from tables A and B, and the set {C} indicates that the statement writes data to table C. The set {A, B} is the read resource set of statement F_x^y and the set {C} is its write resource set.
402. According to the resource snapshot, obtain the read resource set and the write resource set of each statement.
The resource snapshot indicates the read resource set and the write resource set of each statement of the transactions in the waiting queue, so the system can obtain the read resource set and the write resource set of a statement from its resource snapshot.
403. According to the read resource sets and write resource sets, build a resource precedence graph for each transaction in the waiting queue.
The DBMS builds, from the read resource sets and write resource sets obtained in step 402, a resource precedence graph for each transaction in the waiting queue; the resource precedence graph indicates the dependencies between the statements of the transaction.
In the graph, every statement of the transaction is represented by a node. For example, suppose a transaction contains 9 statements, F1 to F9. Two statements, a first statement and a second statement, have a dependency if the intersection of the read resource set of the first statement and the write resource set of the second statement is not empty, or the intersection of the write resource set of the first statement and the read resource set of the second statement is not empty, or the intersection of the write resource set of the first statement and the write resource set of the second statement is not empty. In that case a directed edge is drawn between the two nodes representing the first and second statements, pointing from the node of the earlier statement to the node of the later statement. Fig. 5 shows the resource precedence graph assumed to be built for this example transaction.
404. Compute the width of the resource precedence graph; the width of the resource precedence graph equals the intra-transaction parallelism degree of the transaction.
Take the resource precedence graph of Fig. 5 as an example. First count the nodes with in-degree 0: only F1 has in-degree 0, so record the graph width as max=1. Then remove the in-degree-0 nodes and the directed edges starting from them, while retaining any in-degree-0 node whose out-degree is 0. After F1 and the edges starting from F1 are removed, 3 nodes have in-degree 0, namely F2, F3 and F4; since this number is larger than the number of in-degree-0 nodes in the previous step, the width is updated to max=3 (otherwise the max value is kept unchanged). Repeat the step: remove the in-degree-0 nodes and their outgoing edges, retaining those with out-degree 0; afterwards 2 nodes have in-degree 0, namely F4 and F5, so max=3 is kept. Repeating again, 4 nodes have in-degree 0, namely F4, F6, F7 and F8, so max=4 is recorded. Repeating once more, 3 nodes have in-degree 0, namely F4, F8 and F9, so max=4 is kept. The width of the resource precedence graph thus obtained is 4, so the intra-transaction parallelism degree of the transaction is 4.
405. Sort the transactions in the waiting queue in ascending order of their intra-transaction parallelism degrees.
The DBMS sorts all transactions in the waiting queue according to the ascending sorting rule, based on the intra-transaction parallelism degree of each transaction obtained in step 404.
406. Sort the idle CPUs in the system in ascending order of their number of computing cores.
The DBMS sorts the idle CPUs in the system by their number of computing cores, using the same ascending sorting rule as in step 405.
407. Allocate the transactions of the sorted waiting queue in sequence to the sorted CPUs for execution.
The DBMS allocates the transactions of the waiting queue sorted in step 405, in sequence, to the CPUs sorted in step 406 for execution, so that CPUs with more cores execute transactions with higher intra-transaction parallelism degrees and CPUs with fewer cores execute transactions with lower intra-transaction parallelism degrees. This completes the allocation of transactions.
In the present embodiment, the sorting rule used in steps 405 and 406 satisfies the needs of the method: when the transactions sorted by the ascending rule are allocated in sequence to the CPUs sorted by the same rule, CPUs with more cores still execute the transactions with higher intra-transaction parallelism degrees and CPUs with fewer cores the transactions with lower intra-transaction parallelism degrees.
In the method of the present embodiment, the resource snapshot of every statement of every transaction in the waiting queue is obtained first; the read resource set and write resource set of each statement are obtained from the resource snapshot; a resource precedence graph is then built for each transaction and its width is computed, giving the intra-transaction parallelism degree of the transaction; the transactions are sorted according to the ascending sorting rule based on these degrees; the idle CPUs are sorted according to the same ascending rule based on their core counts; finally the sorted transactions are allocated in sequence to the sorted CPUs for execution, so that CPUs with more cores execute transactions with higher intra-transaction parallelism degrees and CPUs with fewer cores execute transactions with lower intra-transaction parallelism degrees. This balances the load of the asymmetric multiprocessor cluster and thereby improves the performance of the DBMS.
The structure of the transaction allocation device of the fourth embodiment of the invention is shown in Fig. 6 and mainly comprises:
a parallelism degree acquisition module 601, which obtains the intra-transaction parallelism degree of each transaction in the queue and sends the intra-transaction parallelism degree data to the transaction sorting module 602;
a transaction sorting module 602, which receives the intra-transaction parallelism degree data sent by the parallelism degree acquisition module 601 and sorts the transactions in the queue according to a sorting rule, based on the intra-transaction parallelism degrees;
a CPU sorting module 603, which sorts the idle CPUs in the system according to the same sorting rule, based on the number of computing cores of each central processing unit (CPU);
an allocation module 604, which allocates the sorted transactions in sequence to the sorted CPUs for execution, so that CPUs with more cores execute transactions with higher intra-transaction parallelism degrees and CPUs with fewer cores execute transactions with lower intra-transaction parallelism degrees.
In the device of the present embodiment, the parallelism degree acquisition module 601 first obtains the intra-transaction parallelism degree of each transaction; the transaction sorting module 602 sorts the transactions according to a sorting rule based on these degrees; the CPU sorting module 603 sorts the idle CPUs according to the same sorting rule based on their core counts; finally the allocation module 604 allocates the sorted transactions in sequence to the sorted CPUs for execution, so that CPUs with more cores execute transactions with higher intra-transaction parallelism degrees and CPUs with fewer cores execute transactions with lower intra-transaction parallelism degrees. This balances the load of the asymmetric multiprocessor cluster and thereby improves the performance of the database management system.
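For illustration, the module structure of Fig. 6 could be sketched roughly as below; the class names, method names and dictionary fields are assumptions made for the example, and each module simply wraps one of the steps shown in the earlier sketches.

    class ParallelismAcquisitionModule:            # module 601
        def get_degrees(self, queue):
            # The intra-transaction parallelism degree is assumed to be available per
            # transaction (e.g. computed as the width of its resource precedence graph).
            return {tx["id"]: tx["parallelism"] for tx in queue}

    class TransactionSortingModule:                # module 602
        def sort(self, queue, degrees):
            return sorted(queue, key=lambda tx: degrees[tx["id"]], reverse=True)

    class CpuSortingModule:                        # module 603
        def sort(self, idle_cpus):
            return sorted(idle_cpus, key=lambda cpu: cpu["cores"], reverse=True)

    class AllocationModule:                        # module 604
        def allocate(self, sorted_txs, sorted_cpus):
            # Pair the i-th sorted transaction with the i-th sorted CPU.
            return list(zip(sorted_txs, sorted_cpus))

    queue = [{"id": 1, "parallelism": 2}, {"id": 2, "parallelism": 4}]
    cpus = [{"cpu": 0, "cores": 2}, {"cpu": 1, "cores": 8}]
    m601, m602, m603, m604 = (ParallelismAcquisitionModule(), TransactionSortingModule(),
                              CpuSortingModule(), AllocationModule())
    pairs = m604.allocate(m602.sort(queue, m601.get_degrees(queue)), m603.sort(cpus))
    print(pairs)  # transaction 2 (parallelism 4) is paired with CPU 1 (8 cores)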
The fifth embodiment of the invention describes the transaction allocation device of the fourth embodiment in detail. The detailed structure of the device of the fifth embodiment is shown in Fig. 7 and mainly comprises:
a parallelism degree acquisition module 701, which obtains the intra-transaction parallelism degree of each transaction in the queue and sends the intra-transaction parallelism degree data to the transaction sorting module 702. The parallelism degree acquisition module 701 further comprises: a resource snapshot acquisition unit 7011, which obtains the resource snapshot of every statement of every transaction in the queue; a resource set acquisition unit 7012, which obtains the read resource set and the write resource set of each statement from the resource snapshot obtained by the resource snapshot acquisition unit 7011; a precedence graph building unit 7013, which builds a resource precedence graph for each transaction in the queue from the read resource sets and write resource sets obtained by the resource set acquisition unit 7012; and a precedence graph width computing unit 7014, which computes the width of the resource precedence graph built by the precedence graph building unit 7013, the width of the resource precedence graph being equal to the intra-transaction parallelism degree of the corresponding transaction in the queue;
a transaction sorting module 702, which receives the intra-transaction parallelism degree data sent by the parallelism degree acquisition module 701 and sorts the transactions in the queue according to a sorting rule, based on the intra-transaction parallelism degrees;
a CPU sorting module 703, which sorts the idle CPUs in the system according to the same sorting rule, based on the number of computing cores of each central processing unit (CPU);
an allocation module 704, which allocates the sorted transactions in sequence to the sorted CPUs for execution, so that CPUs with more cores execute transactions with higher intra-transaction parallelism degrees and CPUs with fewer cores execute transactions with lower intra-transaction parallelism degrees.
In the device of the present embodiment, the resource snapshot acquisition unit 7011 first obtains the resource snapshot of every statement of every transaction in the queue; the resource set acquisition unit 7012 obtains the read resource set and write resource set of each statement from the resource snapshot; the precedence graph building unit 7013 then builds a resource precedence graph for each transaction and the precedence graph width computing unit 7014 computes its width, giving the intra-transaction parallelism degree of the transaction; the transaction sorting module 702 sorts the transactions according to a sorting rule based on these degrees; the CPU sorting module 703 sorts the idle CPUs according to the same sorting rule based on their core counts; finally the allocation module 704 allocates the sorted transactions in sequence to the sorted CPUs for execution, so that CPUs with more cores execute transactions with higher intra-transaction parallelism degrees and CPUs with fewer cores execute transactions with lower intra-transaction parallelism degrees. This balances the load of the asymmetric multiprocessor cluster and thereby improves the performance of the database management system.
Those of ordinary skill in the art will appreciate that all or part of the steps of the methods in the above embodiments may be implemented by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like.
The transaction allocation method and device provided by the present invention have been described in detail above. Those of ordinary skill in the art may make changes to the specific embodiments and the scope of application according to the idea of the embodiments of the invention. In summary, the contents of this description should not be construed as limiting the present invention.

Claims (6)

1. A transaction allocation method, characterized by comprising:
obtaining the intra-transaction parallelism degree of each transaction in a queue;
sorting the transactions in the queue according to a sorting rule, based on the intra-transaction parallelism degrees;
sorting the idle CPUs in the system according to the same sorting rule, based on the number of computing cores of each central processing unit (CPU);
allocating the sorted transactions in the queue in sequence to the sorted CPUs for execution, so that CPUs with more cores execute transactions with higher intra-transaction parallelism degrees and CPUs with fewer cores execute transactions with lower intra-transaction parallelism degrees.
2. The method according to claim 1, characterized in that obtaining the intra-transaction parallelism degree of each transaction in the queue comprises the steps of:
obtaining the resource snapshot of every statement of every transaction in the queue, the resource snapshot indicating the read resource set and the write resource set of the statement, the read resource set being the set of operands of the read operations in the statement and the write resource set being the set of operands of the write operations in the statement;
obtaining, according to the resource snapshot, the read resource set and the write resource set of the statement;
building, according to the read resource sets and the write resource sets, a resource precedence graph for each transaction in the queue, the resource precedence graph indicating the dependencies between the statements of the transaction;
computing the width of the resource precedence graph, the width of the resource precedence graph being equal to the intra-transaction parallelism degree of each transaction in the queue.
3. The method according to claim 1 or 2, characterized in that
the sorting rule is an ascending sorting rule or a descending sorting rule.
4. The method according to claim 1 or 2, characterized in that
the queue is a waiting queue.
5. A transaction allocation device, characterized by comprising:
a parallelism degree acquisition module, configured to obtain the intra-transaction parallelism degree of each transaction in a queue;
a transaction sorting module, configured to sort the transactions in the queue according to a sorting rule, based on the intra-transaction parallelism degrees;
a central processing unit (CPU) sorting module, configured to sort the idle CPUs in the system according to the same sorting rule, based on the number of computing cores of each CPU;
an allocation module, configured to allocate the sorted transactions in the queue in sequence to the sorted CPUs for execution, so that CPUs with more cores execute transactions with higher intra-transaction parallelism degrees and CPUs with fewer cores execute transactions with lower intra-transaction parallelism degrees.
6. The device according to claim 5, characterized in that the parallelism degree acquisition module comprises:
a resource snapshot acquisition unit, configured to obtain the resource snapshot of every statement of every transaction in the queue, the resource snapshot indicating the read resource set and the write resource set of the statement, the read resource set being the set of operands of the read operations in the statement and the write resource set being the set of operands of the write operations in the statement;
a resource set acquisition unit, configured to obtain, according to the resource snapshot, the read resource set and the write resource set of the statement;
a precedence graph building unit, configured to build, according to the read resource sets and the write resource sets, a resource precedence graph for each transaction in the queue, the resource precedence graph indicating the dependencies between the statements of the transaction;
a precedence graph width computing unit, configured to compute the width of the resource precedence graph, the width of the resource precedence graph being equal to the intra-transaction parallelism degree of each transaction in the queue.
CN201110303344A 2011-09-30 2011-09-30 Method and device for allocating transactions Pending CN102306205A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110303344A CN102306205A (en) 2011-09-30 2011-09-30 Method and device for allocating transactions

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110303344A CN102306205A (en) 2011-09-30 2011-09-30 Method and device for allocating transactions

Publications (1)

Publication Number Publication Date
CN102306205A true CN102306205A (en) 2012-01-04

Family

ID=45380067

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110303344A Pending CN102306205A (en) 2011-09-30 2011-09-30 Method and device for allocating transactions

Country Status (1)

Country Link
CN (1) CN102306205A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7152026B1 (en) * 2001-12-07 2006-12-19 Ncr Corp. Versioned node configurations for parallel applications
CN101216783A (en) * 2007-12-29 2008-07-09 中国建设银行股份有限公司 Process for optimizing ordering processing for multiple affairs
CN102354289A (en) * 2011-09-21 2012-02-15 苏州大学 Concurrent transaction scheduling method and related device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Xin et al., "Performance Analysis and Evaluation of Transaction Processing Applications on Multi-core Platforms", Journal of Computer Research and Development (《计算机研究与发展》), Issue S1, 2011-02-15, pp. 348-353; relevant to claims 1-6 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102831016B (en) * 2012-08-01 2014-10-01 浪潮(北京)电子信息产业有限公司 Physical machine recycle method of cloud computing and device thereof
CN102831016A (en) * 2012-08-01 2012-12-19 浪潮(北京)电子信息产业有限公司 Physical machine recycle method of cloud computing and device thereof
WO2014107958A1 (en) * 2013-01-10 2014-07-17 惠州Tcl移动通信有限公司 Method and mobile device for application automatically adapting to mode of multi-core processor
CN104102684A (en) * 2013-04-11 2014-10-15 株式会社日立制作所 Data reflecting method
CN108494848B (en) * 2015-10-30 2020-09-22 大连大学 Enterprise message pushing method based on MQTT
CN108494848A (en) * 2015-10-30 2018-09-04 大连大学 Enterprise message method for pushing based on MQTT
CN110413419A (en) * 2018-04-28 2019-11-05 北京京东尚科信息技术有限公司 A kind of method and apparatus that rule executes
CN109542516A (en) * 2018-11-13 2019-03-29 西安邮电大学 A kind of acceleration arm processor concurrent working system and its working method
CN109710387A (en) * 2018-12-06 2019-05-03 成都佰纳瑞信息技术有限公司 A kind of policy engine and its application method for block chain affairs priority ranking
CN109710387B (en) * 2018-12-06 2020-12-15 成都佰纳瑞信息技术有限公司 Policy engine for block chain transaction priority ordering and use method thereof
CN113535367A (en) * 2021-09-07 2021-10-22 北京达佳互联信息技术有限公司 Task scheduling method and related device
CN115509694A (en) * 2022-10-08 2022-12-23 北京火山引擎科技有限公司 Transaction processing method and device, electronic equipment and storage medium
CN115509694B (en) * 2022-10-08 2024-04-30 北京火山引擎科技有限公司 Transaction processing method, device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN102306205A (en) Method and device for allocating transactions
CN102354289B (en) Concurrent transaction scheduling method and related device
CN105487930B (en) A kind of optimizing and scheduling task method based on Hadoop
Slagter et al. An improved partitioning mechanism for optimizing massive data analysis using MapReduce
CN101446962B (en) Data conversion method, device thereof and data processing system
US8813073B2 (en) Compiling apparatus and method of a multicore device
US20070143759A1 (en) Scheduling and partitioning tasks via architecture-aware feedback information
US8996464B2 (en) Efficient partitioning techniques for massively distributed computation
US11132383B2 (en) Techniques for processing database tables using indexes
Bender et al. Cache-adaptive algorithms
US20150227586A1 (en) Methods and Systems for Dynamically Allocating Resources and Tasks Among Database Work Agents in an SMP Environment
CN101743534A (en) By increasing and shrinking resources allocation and dispatch
CN102799486A (en) Data sampling and partitioning method for MapReduce system
US20170193077A1 (en) Load balancing for large in-memory databases
CN104111936A (en) Method and system for querying data
CN110347515B (en) Resource optimization allocation method suitable for edge computing environment
JP2018515844A (en) Data processing method and system
CN102708009A (en) Method for sharing GPU (graphics processing unit) by multiple tasks based on CUDA (compute unified device architecture)
CN105867998A (en) Virtual machine cluster deployment algorithm
Wang et al. A fast work-efficient sssp algorithm for gpus
CN103064991A (en) Mass data clustering method
CN107391508B (en) Data loading method and system
CN104778088A (en) Method and system for optimizing parallel I/O (input/output) by reducing inter-progress communication expense
US8700822B2 (en) Parallel aggregation system
CN114860449A (en) Data processing method, device, equipment and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20120104