WO2001025962A1 - Database organization for increasing performance by splitting tables - Google Patents

Info

Publication number
WO2001025962A1
Authority
WO
WIPO (PCT)
Prior art keywords
splitting
split
tables
records
subtables
Prior art date
Application number
PCT/RO1999/000017
Other languages
French (fr)
Inventor
Alin Alexandru Alexandrescu
Original Assignee
S.C. Medicarom Group S.R.L.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by S.C. Medicarom Group S.R.L.
Priority to AU36837/00A
Publication of WO2001025962A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/22: Indexing; Data structures therefor; Storage structures
    • G06F 16/2228: Indexing structures
    • G06F 16/2246: Trees, e.g. B+trees


Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention is a new technique for improving database performance for large tables, by splitting large tables into smaller tables. The process may be performed dynamically, automatically and transparently. The splitting may be performed upon any field of the table, although a key field is generally preferred. A control table is used to keep track of which tables are split to what depth, and to determine when to split tables further, or merge tables. All Structured Query Language (SQL) accesses to the original tables are mapped into SQL accesses to the split tables without the intervention of the programmer.

Description

DATABASE ORGANIZATION FOR INCREASING PERFORMANCE BY
SPLITTING TABLES
TECHNICAL FIELD
This invention relates to database organization, and the retrieval of data from tables. It is particularly suited to improving performance for large relational databases.
BACKGROUND ART
Since the earliest days of computer databases, computer scientists and programmers have analyzed, designed, and implemented new techniques for faster data access, and continue to do so. Almost all modern databases are built around the relational database model, in which data is organized logically into tables. Each row of the table is a data record, or part of a data record, and the columns of the table represent the individual fields or elements of each data record. A row of a table may also be referred to as an element of the table. For example, a table of patients in a hospital database consists of a collection of patient records. Each record may contain the patient's number, name, address, telephone number (and other contact information).
Figure 1 illustrates the data in such a table of Patient Data, PTable 100. Each row (105, 106, 107, and so on) in the table 100 is a separate data record, and each column (101, 102,...) represents a particular attribute. PNumber 101 is the patient's number, PName 102 is the patient's name, PAddress 103 is the address, and PTelNumber 104 is the telephone number. In the relational database model used by most modern databases, the table has several properties: There is generally a primary key: one or more attributes that uniquely identify a row. In PTable in Figure 1, it might be the patient number (PNumber 101). There may be one or more secondary keys, used for lookup and joining tables. For example, in Figure 1, the patient name (PName 102) might be used for lookup, although the name is not necessarily unique.
The number of attributes and their types are typically fixed. Data types are chosen from a small set of common types, including Strings (fixed length and variable length), Integers, Date and Time. In Figure 1, PName 102 and PAddress 103 are strings, whereas PNumber 101 is an integer.
The number of rows, or records of data, can grow without bound. The rows are unordered. Thus access to a record by either a primary or secondary key should be rapid, but search access via any other attribute will be slow, as it requires searching the entire table sequentially. In Figure 1 just three records are shown, but the table could have many more records.
In a typical database there are many tables. For example, a hospital database might need to track which room or ward was occupied by what patient, and the facilities available in that room. Figure 2 illustrates such a table 200. The size of tables varies and is not usually known a priori. For example, in Figure 1, PTable, a table of patients, including those discharged, might contain tens of thousands of records, and be far larger than table 200, RTable, in Figure 2. Table 200 is a table of hospital rooms and who is currently occupying them. Accordingly it is generally smaller than table 100.
The table organization above is merely a logical organization. Implementing a database requires storing tables in one or more files. Traditionally, each table is stored in just one file, though internally the data may be organized in many ways to speed access. In medium to large applications, there may be hundreds of thousands of records in a single table, such as table 100. Hence, speeding up access to records within large tables is critical in managing large databases. If a record can be located "directly", with no searching, then this will be thousands of times faster than if each record in the table must be retrieved and examined to locate the desired record.
There are two competing technologies for organizing records within a table to speed access: trees and hashing. The advantages and disadvantages of such approaches can be understood by considering an example. Suppose we wish to store a table of numbers, and allow for numbers to be inserted, removed and searched for rapidly. The simplest organization is to store the numbers in a sorted or unsorted list. The drawback of this approach is that if the list is unsorted, searches are slow, whereas if the list is sorted, insertions are slow (inserting a number in the middle requires moving all the numbers above it up to create more space). Both trees and hashing perform better than sorted or unsorted lists.
Fig. 3 shows a B-tree 300. In a tree organization, each node in the tree is a bucket that can hold a few numbers. The size of the buckets is fixed. In tree 300, buckets are labelled with the range of numbers they contain, or with a single number if the bucket contains just that one number. For example, bucket 301 contains the two numbers 500 and 501, and bucket 302 contains the number 280. The tree grows dynamically as numbers are inserted or removed, i.e., buckets may be created or deleted, and the distribution of numbers to buckets changed. Numbers are located by searching from the top or root bucket (301).
From the "root", and any other non-leaf bucket, "greater" numbers are to the right and lesser numbers to the left. Searching for a number requires descending a tree from the top. For example, searching for number "511", starting from the root 301, would find that the number was greater than the root bucket range "500-510", descending to the right reaches the bucket 303 containing "750-790". The number is less than this range, so descending to the left reaches the bucket 304 containing "620-630". The number is again less than this range, so descent continues to the left, where the number is found in the bucket 305.
The time required to find, insert, or delete a number in such a tree organization is proportional to the logarithm of the number of objects in the table, i.e., O(log N). Logarithmic access times mean that as the number of objects in the table doubles, one more level of the tree is added, and one more operation is required for access. With 10 items in each bucket, and 10,000,000 numbers in the tree, there will be about 20 levels in the tree, and at worst about twenty accesses are required to find a number if the tree is kept balanced. If the number of items doubles to 20,000,000 then there will be 21 levels in the tree.
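The arithmetic behind these depth figures can be reproduced with a short sketch (a hypothetical calculation only, assuming a balanced binary tree over fixed-size buckets of 10 items, as in the example above):

    import math

    def tree_depth(num_records, bucket_size=10):
        # Leaf buckets needed to hold the records, then the depth of a
        # balanced binary tree over those buckets (one level per doubling).
        buckets = num_records / bucket_size
        return math.ceil(math.log2(buckets))

    print(tree_depth(10_000_000))   # about 20 levels
    print(tree_depth(20_000_000))   # one more doubling: 21 levels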
Fig. 4 shows a hash table 400 containing the same numbers as the tree table 300 shown in Fig. 3. In a hash table organization, the number and size of each bucket is fixed, and a "hashing" function is used to decide which bucket to place an object in. A hashing function is simply a function that takes a key as a parameter, and generates a bucket number as a result. In the example shown in Fig. 4 there are 5 buckets; the hashing function used is simply that numbers in the range 0-199 go in the first bucket 401, numbers in the range 200-399 go in the second bucket, and so on. In the example in Fig. 4, each bucket can contain at most 4 numbers. With such an approach, if the numbers are not uniformly distributed, then buckets will start to overflow. In the example above, two buckets have overflowed (402 and 404).
If no overflow occurs, a hash table is a very efficient organization. The hashing function allows a direct lookup of a number. However, once overflow occurs, performance can degrade very quickly. In the example above, as new numbers are added in the range 600-800, a longer and longer chain of numbers will form after bucket 404. Many refinements and improvements over the two basic organizations above have been proposed, implemented and tested. In general, tree structures are the most common approach used for commercial database tools, as tree structures provide reasonably good performance under most circumstances. Hashing provides better performance, but only if the distribution of the data is known a priori. Custom databases may use a combination of hashing and tree organizations.
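The hash table of Fig. 4 can be sketched as follows (a minimal illustration assuming the bucket ranges and capacity of that example, with the overflow chain modelled as a plain list; it is not the patent's implementation):

    def bucket_index(n, bucket_width=200):
        # Range-based hashing as in Fig. 4: 0-199 -> bucket 0,
        # 200-399 -> bucket 1, and so on, for numbers 0-999.
        return n // bucket_width

    CAPACITY = 4                          # at most 4 numbers per bucket
    buckets = [[] for _ in range(5)]      # five fixed buckets
    overflow = [[] for _ in range(5)]     # overflow chain per bucket

    def insert(n):
        i = bucket_index(n)
        if len(buckets[i]) < CAPACITY:
            buckets[i].append(n)
        else:
            # Once a bucket is full, new numbers chain after it, and
            # lookups in that range degrade toward a linear search.
            overflow[i].append(n)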
Commercial database tools typically place each table in a single file, and searching for data in a table using a key typically requires random access to that file (no matter how the data is organized internally, e.g., as a tree structure or hash table). Hence large databases, with large tables, will generate large files. Operating systems are generally optimized for dealing with small to medium sized files, and for sequential access to such files. A major impediment to database performance is random access to large files. Optimizing access and organization within a table is one technique for improving performance, but another technique is reducing the size of the table. This allows the table's associated file itself to be reduced. No practical, transparent, general-purpose scheme for achieving such reduction is known in the prior art.
SUMMARY OF THE INVENTION
As the number of record elements (records), or rows, in a table increases, the time required to access and update record elements (generally located by a key) increases. Hence the larger a database becomes, the slower access to record data becomes. Many different algorithms and methods have been developed to improve access times to large tables of data, including B*-trees and hashing. Such methods are designed to internally organize data to speed access to data within a single table. The present invention uses a supplementary approach: dynamically and transparently splitting large tables into smaller tables. In one embodiment, when a data record element is requested by a key, the table holding the data (and the position of the data in the table) is rapidly located with little or no overhead. The splitting is based upon the data in one or more predetermined fields (typically primary key fields or secondary key fields).
The methods of the present invention may be transparent because no modification to existing database processes is required (such processes typically use industry-standard Structured Query Language (SQL) commands). The process is automatic because no manual intervention is required to split tables. The methods used are customizable to different environments and database characteristics by adjusting parameters such as the table size at which a split occurs.
One method of improving database performance according to the present invention includes splitting a table into sub-tables recursively. As used herein, a phrase such as "recursively splitting a table" implies splitting a table into new sub-tables and, generally, splitting at least one of those sub-tables into subsequent tables, and so on. Thus, the first table is generally not itself split multiple times, but rather the resultant split tables may be further split. The first table may be an original, or schema, table wherein the data therein is transferred to the sub-tables (or split tables), resulting in an "empty" schema table. While the entire schema table is "emptied" in a preferred embodiment, in other embodiments only a portion of the data is transferred; the latter will be better suited to some applications and optimization techniques.
The table splitting is generally based on a splitting threshold (the maximum desirable table record count), a splitting function (either a built-in function or a user-supplied function), and one or more predetermined fields upon which tables shall be split. Data may be organized in a database on a computer system by forming the data into a table having records and at least one field. When the number of records exceeds a first table threshold, which may or may not be predetermined, the table is split into a plurality of subtables. The table split is based upon the field, e.g., PATIENTS split on NAME. This includes placing records having a parameter, e.g., a letter, of a corresponding value, e.g., "A", into the one of the plurality of subtables corresponding to that value, e.g., Table-A.
The data may be further organized by splitting at least one of the subtables, e.g., Table-C, into a plurality of subsequent subtables, e.g., Table-CA, Table-CB, ... Table-CZ. The one sub-table, here Table-C, is split when it exceeds a second table threshold, which may or may not be predetermined. In one preferred embodiment the thresholds are optimized based upon hardware specifications; the thresholds may be set equal due to hardware criteria or other criteria. The subtable split, here of Table-C, is based upon the same field, e.g., NAME, and includes placing records having a subsequent parameter, e.g., a second letter, of a second corresponding value, e.g., "I", into the one of the plurality of subsequent sub-tables corresponding to the second corresponding value, e.g., Table-CI. It will be apparent from the teachings herein that the parameters are not limited to alphabetic data and that the "second" parameter need not be "adjacent" to the first parameter. Further, the sorting scheme may be optimized to account for disproportionate distributions, e.g., splitting the "A" and "E" tables, but combining the "Q" and "Z" tables, where there is a large proportion of data associated with "A" and "E" as compared to "Q" and "Z". A splitting scheme of this kind is sketched below.
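As a sketch of the letter-based scheme just described (the table names and the helper below are hypothetical illustrations, not code prescribed by the patent):

    def table_name(name_key, depth):
        # Build the split-table suffix from the first 'depth' letters of
        # the NAME key: depth 1 gives Table-C, depth 2 gives Table-CI, ...
        suffix = name_key[:depth].upper()
        return "Table-" + suffix

    print(table_name("Adams", 1))    # Table-A (first-level split)
    print(table_name("Cicero", 2))   # Table-CI (Table-C was split again)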
A method of managing data in a computer system according to the present invention includes logically ordering the data into records having at least one field of a predetermined format, e.g., alphabetic. The records are placed into the table, and the table is dynamically split such that the records are distributed into split tables. The split is based upon a splitting function. The splitting function may be applied to a key field in each of the records, generally the same key field, e.g., NAME.
A method of computer database management of the present invention includes splitting a table of data into a fixed number of tables to form split tables when the number of records in the table exceeds a threshold. At least one of the split tables is itself split into a number of tables, typically greater than two, when the records in that one split table (i.e., the split table to be further split) exceed the threshold. The number of tables (into which the split table is split) may be equal to the fixed number of tables. For instance, the table may be split into twenty-six split tables, each associated with a letter, A to Z; and the split table associated with the letter "A" may be further split into twenty-six tables, A to Z. Generally, the splitting is based upon a predetermined field of the records, e.g., the NAME field.
Accordingly, an objective of the present invention is to provide a computer system having improved database performance.
Another objective is to provide an improved method of organizing data.
A further objective is to provide improved database performance by providing access methods transparent to the user.
A further objective is to provide a more reliable computer system. A still further objective is to improve reliability by using relatively smaller files to store relatively smaller tables of data records.
Another objective of the present invention is to reduce administrative costs of a business or service. A further objective is to reduce these costs by reducing errors, malfunctions, costly investment, repairs and the like associated with operating a large- scale database computer system. A still further objective is to reduce these costs by improving a user's efficiency by reducing computer lag time.
Other objectives and advantages of the present invention will be apparent to those skilled in database organization from the teachings disclosed herein.
BRIEF DESCRIPTION OF THE DRAWINGS
To more clearly convey the concepts of the present invention, exemplary embodiments are discussed and described without extensive discussion of the prior art. The present invention will be further understood by reference to the attached drawings described briefly below, the detailed description of exemplary embodiments and the claims. FIG. 1 illustrates elements in a prior art relational database table. FIG. 2 illustrates elements in a prior art database table which may be linked to the table in Fig. 1.
FIG. 3 shows a tree organization of data elements.
FIG. 4 shows a hash table organization of the data elements shown in Fig. 3. FIG. 5 is a secondary table used to support fast indexing of a table by a secondary key.
FIG. 6 is a table prior to splitting, such as a schema table. FIG. 7 is a split control tree according to the present invention.
FIG. 8 is the collection of split tables for the original schema table, corresponding to the split control tree.
FIG. 9 illustrates how an optimal splitting of the table can be determined.
DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS
While the exemplary embodiments of the invention are discussed in terms of medium to large-scale relational databases, other applications will be apparent from the teachings herein. Improvements in performance can be gained by "splitting" large tables into smaller sized tables. For example, a table of Patients could be split into a separate table for each state of residence. Such splitting is occasionally used in very large commercial databases, such as a multi-state bank that might have a separate account table for each state. However, prior art splitting is static: any change in the states that the bank operates in, or the addition of a new state to the union, requires costly manual reorganization and re-splitting of the original table. In addition, as data is added to the tables, what might have been a very "even" distribution of data may become very irregular. For example, the aforementioned bank might divest itself of branches in one state and acquire more in its home state, leading to a situation where almost all the data resides in just one table.
What is needed is a method of automatically splitting the table into smaller tables, to meet the following criteria:
Automatic splitting of tables, so that when a table reaches a certain size, or splitting threshold, it is automatically split into smaller tables recursively. By recursively, we mean that each split table is itself a candidate for splitting into further subtables. In a common embodiment, each subtable is split into a fixed number of tables when the rows of data in the subtable exceed its respective threshold. By so doing, the invention can ensure that no table ever exceeds the splitting threshold, even if the original table is arbitrarily large.
Coalescing of small tables into a single larger table, in the situation where repeated deletion of records from split tables means that coalescing all the split tables would result in a single table smaller than the splitting threshold.
Transparency: whether or not a table was split should not be visible to the database programmer, with the exception of an increase of performance.
Tuneability: different database applications, implementations and host operating systems have different performance characteristics. Such characteristics are highly dependent upon the local environment: the number and size of discs, processor speed, disc-to-memory bandwidth, memory size, disc cache size, and the versions of the operating system and database, to name but a few. Hence, the major parameters of the splitting algorithm should be customizable. The major parameters include the splitting threshold (the largest table size before splitting commences) and the splitting factor (the number of sub-tables into which a single table is split).
Criterion 1 is satisfied in this invention by the use of "recursion". Once a table reaches the splitting threshold it is broken into smaller tables, and when one of these sub-tables again reaches the splitting threshold it again is split into smaller tables, and so on.
Automation is achieved by the management system described below, which tracks the size of each table and the level to which it has been split.
Criterion 2 is again satisfied by the management system described below. Criterion 3 is transparency. The requirement that the splitting not be "visible" to database programmers would seem to be difficult to achieve, as all database access is normally via an access process such as Structured Query Language (SQL) queries. Such SQL queries always have explicit table names as parameters. The names of tables and their properties are normally stored in an externally visible database schema. Transparency is achieved in the invention by maintaining the original database schema, and mapping any access to a schema table (original table) into an access of one or more mapped (potentially split) tables at run time (i.e., a split table access process). Other access techniques will be apparent to those skilled in the art.
Criterion 4 is satisfied in the invention by two techniques. Firstly, the characteristics of the splitting algorithm are not "hard coded" (meaning that the characteristics can be changed on database initialization, or even on the fly with an automatic database reorganization, which, although slow, requires no manual intervention, and can potentially be accomplished without requiring the database to be off-line).
The splitting of tables into sub-tables may be done using some simple criteria that improve performance for most database accesses by localizing access to a single sub-table. Most accesses to tables use primary keys: a unique value that is normally used to locate a record. Typical examples of primary keys are a customer number, account number, social security number or a driver's license number. By splitting the table based upon primary keys, any access to the schema table based upon a primary key can be mapped into (faster) access to a single table. In some cases, a table is accessed by both a primary and a secondary key. A secondary key is an access key that is used for other queries or "joins". For example, the patient name might be a secondary key on the PTable 100, shown in Figure 1. By creating a separate table of just the patient name and number, PName and PNumber, and making PName the primary key of this table, fast access by patient name is possible. Figure 5 illustrates such a secondary table 500 providing fast indexing on a secondary key. An alternative approach is to use a multidimensional split: instead of splitting the table by a single attribute or key, it can be split by several keys simultaneously. For example, a single level split of PTable 100 (shown in Figure 1) might be based upon PName 102, into 26 separate tables (a separate table for each letter of the alphabet, with the first letter of the patient's name determining the table). Or the table could be split into 10 separate tables based on the PNumber 101 (with the first digit of the patient number determining which table the data resides in). Or the split could be accomplished on both PName 102 and PNumber 101 at once (resulting in 260 tables), as sketched below.
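A multidimensional split of this kind might look like the following sketch (the naming convention here is an illustrative assumption, not taken from the patent):

    def multidimensional_table(pname, pnumber):
        # Split on the first letter of PName (26 ways) and the first
        # digit of PNumber (10 ways) at once: 26 * 10 = 260 tables.
        letter = pname[0].upper()
        digit = str(pnumber)[0]
        return "PTable-%s-%s" % (letter, digit)

    print(multidimensional_table("Andrew", 345))   # PTable-A-3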
A fundamental issue is how splitting should be accomplished. Splitting can be conceptualized as "external hashing" or "large-scale hashing", in which the splitting function takes a key and generates a table number, whereas a hashing function takes a key and generates a bucket number within a table. The splitting function is not limited to "hashing functions". There are several fundamental differences between prior art hashing requirements and the less stringent requirements of the splitting used in the instant invention. In hashing, the goal is to have just a few records at each hashed location, or bucket, and typically thousands of buckets. In contrast, according to one method of the invention, a whole table is a bucket and there are thousands of records in each bucket. A file (or a table, or both) is typically split into just a dozen or more split files. Also, in hashing, the goal is a very uniform hashing function, as there is a high penalty for a non-uniform hashing function. By contrast, uniformity, although a desirable property of splitting, is not a requirement. Finally, in hashing, there must be an explicit mechanism to deal with overflow. In splitting, no explicit mechanism is required: if the size of a table goes beyond the optimal splitting threshold, the penalty is merely slower access to that table. It should be understood that the splitting may be delayed until an optimum time, or may be performed in "real time".
One hash function used to split the tables may be based purely upon the data type of the field upon which the data is split, the splitting factor, and the depth of splitting. In general, there are two broad classes of data: strings (e.g. ASCII alphanumeric strings) and numeric data (e.g. binary); however, hybrids of these classes are not unknown. ASCII data can be considered as a string of characters, STRING, indexed by the character number I. Thus in the ASCII data "Apple", STRING[1] is 'A', STRING[2] is 'p', and so on. If the splitting factor is SP, and the depth of the splitting is D, then the subtable number of a given STRING is given by the splitting function: STRING[D] MOD SP
(where MOD is the modulus or "remainder" operation; for example, an ASCII code of 216 MOD 10 yields a result of 6).
Binary or numerical data is split in a similar fashion: the subtable number of a given binary NUMBER is given by the splitting function:
NUMBER MOD SP for a first-level split (D = 1)
(NUMBER / SP^(D-1)) MOD SP for split levels greater than 1, where the division is integer division (i.e., successive integer divisions of NUMBER by SP, as illustrated in the worked examples below)
Other types of data are sometimes used, such as DATES and UNICODE strings. Others skilled in the art can easily see how to apply similar functions to such data types.
For example, consider the NUMBER "316" and a splitting factor of 10. The first-level subtable index will be 6 (316 MOD 10 = 6), and the second-level subtable index will be 1 ((316 / 10) MOD 10 = 31 MOD 10 = 1).
A hashing operation using division is very fast, especially if SP is a power of 2 (division by a power of 2 reduces to a bit shift).
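The two splitting functions above can be written out as follows (a sketch under the assumption, consistent with the worked examples, that deeper numeric levels divide by successive powers of the splitting factor; "//" is Python's integer division):

    def split_index_string(s, sp, d):
        # STRING[D] MOD SP: ASCII code of the D-th character, modulo SP.
        return ord(s[d - 1]) % sp

    def split_index_number(n, sp, d):
        # NUMBER MOD SP at level 1; (NUMBER / SP^(D-1)) MOD SP deeper,
        # i.e. successive integer divisions by the splitting factor SP.
        return (n // sp ** (d - 1)) % sp

    print(split_index_number(316, 10, 1))      # 6, as in the example above
    print(split_index_number(316, 10, 2))      # 1
    print(split_index_string("Apple", 10, 1))  # ord('A') = 65, so 5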
There are several refinements possible to this method. One refinement is that it is not strictly necessary for the depth of the split and the index number to be the same. As described above, "Apple" would be allocated to a first-level split table based on the first letter 'A', and "316" would be allocated to a first-level split table based on the last digit '6' (strings are read and split left to right, whereas numbers are stored and split right to left).
Another possible refinement is that the implementation could, additionally, "test" each letter or digit to see which letter or digit generates the most uniform distribution, or bypass letters or digits with a non-uniform distribution. For example, if the key field were a patient id string, the first letter of every patient id might be either 'X' or 'E', so that a split with a splitting factor of 26 would yield 24 empty tables, a table containing only patients whose ids began with 'X', and another table of patients whose ids began with 'E'. Splitting on the second letter might produce a much more uniform distribution of data, and an implementation could decide to split only on the second letter (by counting to determine the uniformity of a split before performing the split), as sketched below.
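Such a uniformity test could be sketched as follows (the id values and the selection rule, picking the position whose largest would-be subtable is smallest, are illustrative assumptions):

    from collections import Counter

    def most_uniform_position(keys, candidate_positions, sp=26):
        # For each candidate character position, count how the keys would
        # distribute over SP subtables, and pick the position whose
        # fullest subtable is smallest (the most uniform split).
        best, best_peak = None, None
        for p in candidate_positions:
            counts = Counter(ord(k[p].upper()) % sp for k in keys)
            peak = max(counts.values())
            if best_peak is None or peak < best_peak:
                best, best_peak = p, peak
        return best

    # Every id starts with 'X' or 'E', so position 0 is non-uniform
    # and position 1 (the second character) is chosen instead.
    ids = ["X1042", "E2231", "X3310", "E4188"]
    print(most_uniform_position(ids, [0, 1]))   # 1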
In practice, such refinements may not result in a significant performance gain. If a split is not uniform, resulting in split tables larger than the splitting threshold, then the resulting tables will simply be split again. What is important is that the splitting function be consistent, so that the same key field in different tables will be split in the same way. Finally, since the size of the tables is tracked, there is no requirement that the same splitting threshold be used for all tables or subtables. For space, performance, or hardware considerations, a user might decide not to split a certain table, even if it exceeded a given splitting threshold.
Once a split has been performed, the resulting tables must be named, and a record kept of how many record elements are in each split table and subtable and of the depth to which a schema, or original, table has been split. A simple method is that the name of the original table is given a suffix for each level of split, where the suffix corresponds to the index number of the table. For example, consider a key value of "345" in an original schema table called "PTable". If this table were split into a first-level set of 10 tables (using a splitting factor of 10), then the key would be found in the table "PTable-5", and if this table were split to a further level, then the key value would be found in the table "PTable-5-4". After a split has been performed, the parent table (also referred to as a "first table") no longer holds any data. Alternatively, only a portion of the data need be split out and down. Figures 6, 7 and 8 illustrate the technique, using a simplified example of a table 600 with just 18 numbers (here records), a splitting threshold of 10, and a splitting factor of 5; note the 5 subtables in control tree 700 of Fig. 7. The split is performed on a numeric field 602, and the modulus function above implies that in a first-level split 701, numbers ending in 0 or 5 will be in table PTable-0 (reference number 702), numbers ending in 1 or 6 will be in table 704, numbers ending in 2 or 7 will be in table 706, and so on. The initial split of the table 600 of Fig. 6 yielded a subtable 706 with over 10 records. The algorithm was therefore applied again, resulting in level two 711 of the control tree 700. The final tables are illustrated in Fig. 8 with the associated data.
A splitting factor of 5 means that the table a data item resides in is obtained by successive divisions by 5. For example, the number "167", when divided by 5, yields "33" with a remainder of 2. The remainder of 2 implies that the data must reside in PTable-2. However, this table has already been split. To determine the next-level subtable, we further divide "33" by 5, yielding "6" with a remainder of 3. This implies that the data will reside in PTable-2-3, labelled 718 in Figs. 7 and 8 (where it is indeed located). This lookup is sketched below.
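The lookup just described can be expressed as a short routine (a sketch only; the set of already-split tables would in practice come from the split control tree described below):

    def locate_table(key, sp, split_tables, base="PTable"):
        # Walk down the split hierarchy: at each split level, the
        # remainder mod SP names the subtable; the quotient carries
        # down to the next level, until an unsplit (leaf) table.
        name, n = base, key
        while name in split_tables:
            name = "%s-%d" % (name, n % sp)
            n //= sp
        return name

    # Reproduces the example: key 167, splitting factor 5, with PTable
    # and PTable-2 already split.
    print(locate_table(167, 5, {"PTable", "PTable-2"}))   # PTable-2-3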
The split control tree illustrated in Figure 7 contains all the information about the current state of the split, including the number of record elements at each level of the tree. Nodes in a tree can be classified as either interior nodes or leaf nodes. Interior nodes (such as PTable 600 and PTable-2 706 in Figure 7) have descendants. These interior nodes have been "split" into split tables, which are their descendants. Such interior or intermediate tables generally contain no records (all of the data they used to contain is now held in subtables, or split tables). By contrast, leaf nodes have no descendants and correspond to tables that have not been split. As records are added and deleted, the split control tree keeps track of the number of record elements. It will invoke a split of a leaf table when the addition of a record causes it to contain more than the splitting threshold, or a merger of split tables when the deletion of a record causes the sum of the records in the descendants of a table to fall below the splitting threshold.
For example, in Fig. 8, adding more records that map to table 718 (PTable-2-3), such as 567, 667, or 117, may cause it to split into subtables if the total exceeds 10. Conversely, deleting 2 records from this table will cause the total number of records in PTable-2, table 706 (which has been split), to fall below the splitting threshold. In that case the tables PTable-2-0 (712), PTable-2-1 (714), PTable-2-2 (716), PTable-2-3 (718), and PTable-2-4 (720) will be merged back into a single table PTable-2 (706).
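The bookkeeping that drives these splits and mergers might be sketched as follows (illustrative only; the class and method names are assumptions, and the actual redistribution of records is elided):

```python
class SplitNode:
    """One node of the split control tree: a table's name, its record
    count, and its split tables (children), if any."""

    def __init__(self, name, threshold, factor):
        self.name, self.threshold, self.factor = name, threshold, factor
        self.count = 0          # records in this table (0 once split)
        self.children = []      # empty for leaf tables

    def total(self):
        """Records in this table plus all of its descendants."""
        return self.count + sum(c.total() for c in self.children)

    def on_insert(self):
        self.count += 1
        if not self.children and self.count > self.threshold:
            self.split()        # a leaf grew past the splitting threshold

    def on_delete(self):
        self.count -= 1

    def maybe_merge(self):
        # An interior node whose descendants together hold fewer records
        # than the threshold is collapsed back into a single table.
        if self.children and self.total() < self.threshold:
            self.merge()

    def split(self):
        self.children = [SplitNode(f"{self.name}-{i}", self.threshold, self.factor)
                         for i in range(self.factor)]
        # ...redistribute this table's records among the children...
        self.count = 0

    def merge(self):
        # ...copy the children's records back into this table...
        self.count = self.total()
        self.children = []
```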
Such "splitting" and "merging" of tables may be a relatively rare and expensive operation, and there is no requirement that it be done at the point at which the threshold is exceeded. Instead, the splitting and merging might be done when the database is offline or relatively quiescent, and only when a table has gone more that, say 5% above, or 5% below the splitting threshold (to avoid repeatedly splitting and merging a table if the number of records in the table is fluctuating close to the splitting threshold). It is possible to split or merge tables while the database is still online, by a slightly more complex encoding of the access algorithm (once a split is commenced, updates will not actually change the state of the table itself until the split is complete; instead a change log will be kept and searched and updated until the split is complete, at which time it will be applied to the split tables).
Another issue in the implementation is the mapping of SQL table accesses to accesses in the split tables. The mechanism by which this is done is that any SELECT statement applied to a schema table is converted to a SELECT statement applied to a split table, using the hashing function. For example:
SELECT PNumber, PAddress FROM PTable WHERE PName = "Bill Andrew"

is mapped into

SELECT PNumber, PAddress FROM PTableA WHERE PName = "Bill Andrew"

where PTableA is located by searching down the split control tree for PTable, applying the hashing function recursively to the key "Bill Andrew" to find the name of the leaf split table ("PTableA") that contains the record. The same approach can be applied to any access of the table using a key for which the table has been split. If the table has not been split on that key field, then each of the subtables must be accessed. In the case of secondary keys, the table can be normalized to split out the secondary keys into a separate table, which is then split on the secondary key (as described earlier and shown in Figure 5).
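This rewriting might be sketched as follows (Python carrying the SQL as strings; the `leaf_table_for` callback stands in for the recursive walk of the split control tree described above and is an assumption, and the double-quoted literal simply follows the style of the example):

```python
import re

def map_select(sql, schema_table, leaf_table_for):
    """Rewrite a SELECT on the schema table into a SELECT on the leaf
    split table to which the WHERE-clause key hashes."""
    match = re.search(r'WHERE\s+(\w+)\s*=\s*"([^"]+)"', sql)
    key_field, key_value = match.group(1), match.group(2)
    leaf = leaf_table_for(schema_table, key_field, key_value)
    return sql.replace(f"FROM {schema_table}", f"FROM {leaf}")

query = 'SELECT PNumber, PAddress FROM PTable WHERE PName = "Bill Andrew"'
print(map_select(query, "PTable", lambda t, f, v: "PTableA"))
# -> SELECT PNumber, PAddress FROM PTableA WHERE PName = "Bill Andrew"
```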
Other database operations such as INSERT, DELETE, and JOIN are handled similarly.
The split control tree can itself be stored and maintained as a table (with each row consisting of the name of a table, the number of records it contains, its parent table, and whether it has been split). However, because all access to tables is via this tree, and the tree is updated on any addition or deletion of records in any table, the split control tree for each table would normally be kept in memory for speed of access.
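Such a persistent form of the tree might look like the following sqlite3 sketch (column names and record counts are illustrative assumptions matching the row contents listed above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE split_control (
        table_name   TEXT PRIMARY KEY,
        record_count INTEGER NOT NULL,   -- 0 for tables that have been split
        parent_table TEXT,               -- NULL for the schema (root) table
        is_split     INTEGER NOT NULL    -- 0 = leaf, 1 = has subtables
    )
""")
conn.executemany(
    "INSERT INTO split_control VALUES (?, ?, ?, ?)",
    [("PTable",     0, None,       1),
     ("PTable-2",   0, "PTable",   1),
     ("PTable-2-3", 3, "PTable-2", 0)],   # counts here are placeholders
)
# In practice the tree is small and would be cached in memory, since
# every table access consults it and every insert/delete updates it.
```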
An important issue is determining "optimal" values for the splitting factor and the splitting threshold. In fact, the splitting factor is not particularly significant: a lower splitting factor will merely make the tree deeper, and the lookup of which table contains the data a little slower, while a higher splitting factor will make the distribution of table sizes more irregular. For example, consider a table of 8,000 records. With a splitting factor of 8, a larger number of smaller tables is likely to be created than with a splitting factor of 2.
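To make the depth effect concrete (a back-of-envelope calculation, not taken from the embodiment): with N records, a splitting threshold T, and a splitting factor f, roughly ceil(log_f(N/T)) levels of splitting are needed, assuming a uniform distribution:

```python
from math import ceil, log

def tree_depth(records, threshold, factor):
    """Approximate split-tree depth, assuming records split uniformly."""
    return max(0, ceil(log(records / threshold, factor)))

print(tree_depth(8000, 10, 2))   # -> 10 levels with a splitting factor of 2
print(tree_depth(8000, 10, 8))   # -> 4 levels with a splitting factor of 8
```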
One way to determine the optimal splitting threshold is via benchmarking (testing), as the optimal splitting size is highly dependent upon the application and its environment. There are two extremes: (1) no splitting whatsoever (splitting threshold > size of table); (2) a split down to the point that each table has only one record (i.e. a splitting threshold of 1).
Fig. 9 is a graph 900 of table access time 904 versus splitting threshold 906. The splitting threshold (which may be directly related to the table size) is plotted on a log scale.
Both of the extremes discussed above have worse performance than some intermediate splitting threshold, e.g. the minimum 903 in Fig. 9. In practice, there is no easy way to determine formally where the minimum 903 in the curve lies. Instead, an approximation to the minimum 903 can be determined by a synthetic benchmark: a small test program that determines a good splitting threshold for a given environment. The synthetic benchmark generates pseudo-random data to create a temporary table of any specified size, then applies a query to the table and times the access. From this, the synthetic benchmark can plot the access time versus table size and display the result to the user, who can then decide at what size to start splitting tables. A typical curve of access time versus table size (labelled 902) is plotted in Fig. 9 with the dotted line. It will almost invariably be monotonically increasing (as the size of the table increases, the access time will always increase). If the overhead for determining the subtable in which a data item lies is sufficiently small, this implies that a very high level of splitting may be desirable. Alternatively, starting with a single table of a given size, MAX, one can plot the access time by primary key versus the size of the table (again by generating synthetic data) as the table is split into smaller pieces. This is curve 901. This approach simply ignores the relatively small overhead (for heavy splitting) of hashing to determine in which subtable a data item lies.
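Such a benchmark might be sketched as follows (illustrative only; `build_split_table` and `query_by_key` stand for the application's own table construction and keyed lookup, and are assumptions):

```python
import random
import time

def benchmark(thresholds, table_size, queries, build_split_table, query_by_key):
    """Time keyed lookups against the same synthetic data, split at each
    candidate threshold, yielding points on the access-time curve."""
    data = [(random.randrange(10**9), "payload") for _ in range(table_size)]
    results = []
    for threshold in thresholds:
        tables = build_split_table(data, threshold)
        sample = random.sample(data, queries)
        start = time.perf_counter()
        for key, _ in sample:
            query_by_key(tables, key)
        elapsed = time.perf_counter() - start
        results.append((threshold, elapsed / queries))
    return results   # (splitting threshold, mean access time) pairs to plot
```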
In practice, several other factors limit the degree to which a table can be split, notably the maximum number of tables that the implementation of the database can support. Suppose that this number is MAXTABLES, and the design has NUMTABLES tables. Then no one table can be split into more than MAXTABLES - NUMTABLES subtables. Another significant factor is that splitting a table into smaller tables is of no further benefit once the access time falls below a threshold of acceptable performance.
Thus, although there have been described particular embodiments of the present invention for improved methods of database organization, it is not intended that such references be construed as limitations upon the scope of the invention except as set forth in the following claims.

Claims

What is claimed is:
1. A method of improving database performance in a computer system comprising the steps of: splitting a first table into a plurality of subtables; and basing the splitting upon a predetermined field of the first table.
2. The method of claim 1, comprising the steps of: recursively splitting the subtables; and basing the recursive splitting upon the predetermined field of the first table.
3. The method of claim 1, wherein the step of basing the splitting upon a predetermined field comprises the step of applying a splitting function to the predetermined field to sort the data.
4. The method of claim 3, wherein the step of applying the splitting function includes applying a hashing function.
5. The method of claim 1, comprising the steps of: providing the first table with a first predetermined number of fields; and providing each of the subtables with a second predetermined number of fields.
6. The method of claim 5, comprising the step of selecting the second predetermined number of fields to equal the first predetermined number of fields.
7. The method of claim 1, comprising the step of collapsing the subtables into the first table.
8. The method of claim 7, comprising the steps of: splitting the first table when a plurality of records in the first table are at least equal to a predetermined threshold; and collapsing the subtables into the first table when the records in the subtables fall below a predetermined threshold.
9. The method of claim 1, comprising the steps of: recursively splitting the first table; and tracking the splitting.
10. The method of claim 9, wherein the step of tracking includes tracking how many tables are split and how deep the original table is split.
11. The method of claim 9, wherein the step of tracking includes forming a control tree.
12. The method of claim 11, comprising the step of mapping an access process to an original table into a split access process to a split table.
13. The method of claim 9, comprising the step of mapping an access process to an original table into a split access process to a split table.
14. The method of claim 13, wherein the access process is a structured query language (SQL) process.
15. A method of organizing data stored in a database on a computer system comprising the steps of: forming the data into a table having records and at least one field; splitting the table into a plurality of subtables when the number of records in the table exceeds a first table threshold; basing the table split on the field, including placing records having a parameter of a corresponding value into one of the plurality of subtables corresponding to the corresponding value; splitting at least one of the subtables into a plurality of subsequent subtables when the number of records in the one subtable exceeds a second table threshold; and basing the subtable split on the field, including placing records having a subsequent parameter of a second corresponding value into one of the plurality of subsequent subtables corresponding to the second corresponding value.
16. The method of claim 15, wherein the second table threshold equals the first table threshold.
17. The method of claim 15, wherein the field is an alphabetic field.
18. The method of claim 15, wherein the field is a numeric field.
19. The method of claim 15, wherein the step of splitting the table includes determining at least one optimal parameter for the table splitting.
20. The method of claim 15, comprising the step of collapsing the subtables into the table when the number of records at least equals the table limit number.
21. A method in a computer system of managing data comprising the steps of: logically ordering the data into records having at least one field having a predetermined format; placing the records into a table; and dynamically splitting the table into split tables such that the records are placed in the split tables based upon a splitting function applied to a key field in each of the records.
22. The method of claim 21, further comprising the step of recursively splitting the table.
23. The method of claim 22, further comprising the step of recursively splitting the table such that the records are placed in split tables based upon a respective second parameter in each of the records.
24. The method of claim 21, comprising the step of allowing the predetermined format to be a hybrid.
25. The method of claim 24, comprising the step of allowing the hybrid format to be an alphanumeric format.
26. The method of claim 21, further comprising the step of mapping an access process to an original table into a split table access process.
27. The method of claim 26, wherein the step of mapping occurs without programmer intervention and is transparent to a user.
28. A method of computer database management comprising the steps of: splitting a table of data into a fixed number of tables to form split tables when the number of records in the table exceeds a threshold; splitting at least one of the split tables into a number of tables when the records in the one split table exceed the threshold, wherein the number of tables equals the fixed number of tables; and basing the splitting upon a predetermined field of the records.