WO2002025481A2 - High performance relational database management system - Google Patents

High performance relational database management system

Info

Publication number
WO2002025481A2
WO2002025481A2 · PCT/CA2001/000665
Authority
WO
WIPO (PCT)
Prior art keywords
data
performance
database
histogram
hunks
Prior art date
Application number
PCT/CA2001/000665
Other languages
French (fr)
Other versions
WO2002025481A3 (en)
Inventor
Lore Christensen
Original Assignee
Linmor Technologies Inc.
Priority date
Filing date
Publication date
Priority claimed from CA2319918A1
Application filed by Linmor Technologies Inc.
Priority to AU2001258115A1
Priority to GB2382903A
Publication of WO2002025481A2
Publication of WO2002025481A3

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G06F16/278Data partitioning, e.g. horizontal or vertical partitioning


Abstract

A high performance relational database management system, leveraging the functionality of a high speed communications network, comprising at least one performance monitor server computer connected to the network for receiving network management data objects from at least one data collection node device so as to create a distributed database. A histogram routine running on the performance monitoring server computers partitions the distributed database into data hunks. The data hunks are then imported into a plurality of delegated database engine instances running on the performance monitoring server computers so as to parallel process the data hunks. A performance monitor client computer connected to the network is then typically used to access the processed data to monitor object performance.

Description

High Performance Relational Database Management System
Field of the Invention
The present invention relates to the parallel processing of relational databases within a high speed data network, and more particularly to a system for the high performance management of relational databases.
Background of the Invention
Network management is a large field that is expanding in both users and technology. On UNIX networks, the network manager of choice is the Simple Network Management Protocol (SNMP). This has gained great acceptance and is now spreading rapidly into the field of PC networks. On the Internet, Java-based SNMP applications are becoming readily available.
SNMP consists of a simply composed set of network communication specifications that cover all the basics of network management in a method that can be configured to exert minimal management traffic on an existing network.
The problems seen in high capacity management implementations only became manifest recently, with the development of highly scalable versions of relational database management solutions. In the scalability arena, performance degradation becomes apparent when the number of managed objects reaches a few hundred.
The known difficulties relate either to the lack of a relational database engine and query language in the design, or to memory-intensive serial processing in the implementation: specifically, access-speed scalability limitations, inter-operability problems, and custom-designed query interfaces that do not provide the flexibility and ease of use that a commercial interface would offer. Networks now have to manage ever larger numbers of network objects as true scalability takes hold. With vendors developing hardware having ever finer granularity of network objects under management, whether via SNMP or other means, the number of objects being monitored by network management systems is now in the millions. Database sizes are growing at a corresponding rate, leading to increased processing times. As well, the applications that work with the processed data are being called upon to deliver their results in real time or near real time, adding yet another demand for more efficient database methods.
The current trend is towards hundreds of physical devices, which translates to millions of managed objects. A typical example of an object would be a PVC element (a VPI/VCI pair on an incoming or outgoing port) on an ATM (Asynchronous Transfer Mode) switch.
The effect of high scalability on the volume of managed objects grew rapidly as industry started increasing the granularity of databases. This uncovered still another problem that typically manifested as processing bottlenecks within the network. As one problem was solved, it created another that was previously masked.
In typical management implementations, when scalability processing bottlenecks appear in one area, a plan is developed and implemented to eliminate them, at which point they typically will just "move" down the system to manifest themselves in another area. Each subsequent processing bottleneck is uncovered through performance benchmarking measurements once the previous hurdle has been cleared. The limitations imposed by the lack of parallel database processing operations, and other scalability bottlenecks, translate to a limit on the number of managed objects that can be reported on in a timely fashion.
The serial nature of the existing accessors precludes their application in reporting on large managed networks. While some speed and throughput improvements have been demonstrated by modifying existing reporting scripts to fork multiple concurrent instances of a program, the repeated and concurrent raw access to the flat files imposes a fundamental limitation on this approach.
For the foregoing reasons, there exists in the industry a need for an improved relational database management system that provides for high capacity, scalability, backwards compatibility and real-time or near-real-time results.
Summary of the Invention
The present invention is directed to a high performance relational database management system that satisfies this need. The system, leveraging the functionality of a high speed communications network, comprises receiving collected data objects from at least one data collection node using at least one performance monitoring server computer whereby a distributed database is created.
The distributed database is then partitioned into data hunks using a histogram routine running on at least one performance monitoring server computer. The data hunks are then imported into at least one delegated database engine instance located on at least one performance monitoring server computer so as to parallel process the data hunks whereby processed data is generated. The processed data is then accessed using at least one performance monitoring client computer to monitor data object performance.
The performance monitor server computers each comprise at least one central processing unit. Database engine instances are located on the performance monitor server computers at a ratio of one engine instance per central processing unit, and the total number of engine instances is at least two, so as to enable parallel processing of the distributed database.
At least one database engine instance is used to maintain a versioned master vector table. The versioned master vector table generates a histogram routine used to facilitate the partitioning of the distributed database. This invention addresses the storage and retrieval of very large volumes of collected network performance data, allowing database operations to be applied in parallel to subsections of the working data set using multiple instances of a database, by making parallel the above operations, which were previously executed serially. Complex performance reports consisting of data from millions of managed network objects can now be generated in real time. This results in impressive gains in scalability for real-time performance management solutions, since each component has its own level of scalability.
Other aspects and features of the present invention will become apparent to those ordinarily skilled in the art upon review of the following description of specific embodiments of the invention in conjunction with the accompanying figures.
Brief Description of the Drawings
These and other features, aspects, and advantages of the present invention will become better understood with regard to the following description, appended claims, and accompanying drawings, where:
Figure 1 is a schematic overview of the high performance relational database management system; Figure 2 is a schematic view of the performance monitor server computer and its components; and
Figure 3 is a schematic overview of the high performance relational database management system.
Detailed Description of the Presently Preferred Embodiment
As shown in figure 1, the high performance relational database management system, leveraging the functionality of a high speed communications network 14, comprises at least one performance monitor server computer 10 connected to the network 14 for receiving network management data objects from at least one data collection node device 12 so as to create a distributed database 16. As shown in figure 2, a histogram routine 20 running on the performance monitoring server computers 10 partitions the distributed database 16 into data hunks 24. The data hunks 24 are then imported into a plurality of delegated database engine instances 22 running on the performance monitoring server computers 10 so as to parallel process the data hunks 24, whereby processed data 26 is generated.
As shown in figure 3, at least one performance monitor client computer 28 connected to the network 14 accesses the processed data 26 whereby data object performance is monitored.
At least one database engine instance 22 is used to maintain a versioned master vector table 30. The versioned master vector table 30 generates the histogram routine 20 used to facilitate the partitioning of the distributed database 16. In order to divide the total number of managed objects among the database engines 22, the histogram routine 20 divides the indices active at the time of a topology update into the required number of work ranges. Dividing the highest active index by the number of sub-partitions is not an option, since there is no guarantee that retired objects will be linearly distributed throughout the partitions.
The histogram routine 20 comprises dividing the total number of active object identifiers by the desired number of partitions so as to establish the optimum number of objects per partition, generating an n-point histogram of desired granularity from the active indices, and summing adjacent histogram-generated values until a target partition size is reached, but not exceeded.
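The three steps of the histogram routine can be sketched as follows. This is an illustrative Python sketch only; the function name, the `granularity` parameter, and the data shapes are assumptions of this sketch, not part of the specification.

```python
def histogram_partition(active_ids, num_partitions, granularity=1000):
    """Divide active object identifiers into contiguous index ranges of
    roughly equal population: compute the optimum objects per partition,
    build an n-point histogram of the active indices, then sum adjacent
    bins until the target size is reached but not exceeded."""
    active_ids = sorted(active_ids)
    target = len(active_ids) / num_partitions   # optimum objects per partition
    lo, hi = active_ids[0], active_ids[-1]
    bin_width = max(1, (hi - lo + 1) // granularity)
    # n-point histogram of the active indices
    bins = [0] * granularity
    for oid in active_ids:
        bins[min((oid - lo) // bin_width, granularity - 1)] += 1
    # sum adjacent bins until the target partition size would be exceeded
    ranges, start, count = [], lo, 0
    for i, b in enumerate(bins):
        if count + b > target and count > 0 and len(ranges) < num_partitions - 1:
            end = lo + i * bin_width - 1        # close the current work range
            ranges.append((start, end))
            start, count = end + 1, 0
        count += b
    ranges.append((start, hi))                  # last range takes the remainder
    return ranges
```

Because ranges are cut on bin boundaries, partition sizes are approximate; this is precisely why the routine histograms the active indices rather than dividing the highest active index evenly.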
In order to make the current distribution easily available to all interested processes, a versioned master vector table 30 is created on the prime database engine 32. The topology and data import tasks refer to this table to determine the latest index division information. The table is maintained by the topology import process. Objects are instantiated in the subservient topological tables by means of a bulk update routine. Most RDBMSs provide a facility for bulk update. This command allows arbitrarily separated and formatted data to be opened and read into a table by the server back end directly. A task is provided which, when invoked, opens the object table file and reads in each entry sequentially. Each new or redistributed object record is massaged into a format acceptable to an update routine, and the result written to one of n temporary copy files or relations based on the object index ranges in the current histogram. Finally, the task opens a command channel to each back end and issues the copy command; update commands are then issued to set "lastseen" times for objects that have either left the system's management sphere or been locally reallocated to another back end.
The smaller tables are pre-processed in the same way, but are not divided prior to the copy. This ensures that each back end will see these relations identically. In order to distribute the incoming reporting data across the partitioned database engines, a routine is invoked against the most recent flat file data hunk and its output treated as a streaming data source. The distribution strategy is analogous to that used for the topology data. The data import transforms the routine output into a series of lines suitable for the back end's copy routine. The task compares the object index of each performance record against the ranges in the current histogram, and appends it to the respective copy file. A command channel is opened to each back end and the copy command given. For data import, reallocation tracking is automatic since the histogram ranges are always current.
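The record-routing step of the import task, which compares each record's object index against the current histogram ranges and appends it to the respective copy file, can be sketched as below. This is a hypothetical illustration: real code would write temporary copy files and then issue the bulk-copy command over a command channel to each back end, neither of which is shown.

```python
import io
from bisect import bisect_right

def route_records(records, ranges):
    """Append each (object_index, line) record to the copy buffer of the
    back end whose histogram range contains the index."""
    starts = [lo for lo, _ in ranges]            # range lower bounds, ascending
    buffers = [io.StringIO() for _ in ranges]    # one copy buffer per back end
    for oid, line in records:
        i = bisect_right(starts, oid) - 1        # locate the containing range
        buffers[i].write(line + "\n")
    return [b.getvalue() for b in buffers]
```

Because the lookup consults whatever ranges are current, a record whose object was reallocated to a different back end is routed to its new home automatically, matching the text's observation that reallocation tracking is free.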
One common paradigm used in distributed-memory parallel computing is data decomposition, or partitioning. This involves dividing the working data set into independent partitions. Identical tasks, running on distinct hardware, can then operate on different portions of the data concurrently. Data decomposition is often favored as a first choice by parallel application designers, since the approach minimizes communication and task synchronization overhead during the computational phase. For a very large relational database, partitioning can lead to impressive gains in performance. When certain conditions are met, many common database operations can be applied in parallel to subsections of the data set. For example, if a table D is partitioned into work units D⁰, D¹, …, Dⁿ, then a unary operator f is a candidate for parallelism if and only if

f(D) = f(D⁰) ∪ f(D¹) ∪ … ∪ f(Dⁿ)
Similarly, if a second relation O is decomposed using the same scheme, then certain binary operators can be invoked in parallel if and only if

f(D, O) = f(D⁰, O⁰) ∪ f(D¹, O¹) ∪ … ∪ f(Dⁿ, Oⁿ)
The unary operators projection and selection, and the binary operators union, intersection and set difference, are unconditionally partitionable. Taken together, these operators are members of a class of problems that can collectively be termed "embarrassingly parallel". This could be understood as meaning they are so inherently parallel that it is embarrassing to attack them serially.
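The unconditional partitionability of a unary operator such as selection can be checked directly: the union of the operator applied to each work unit equals the operator applied to the whole table, regardless of how the table is split. A minimal sketch, with an arbitrary strided partitioning chosen purely for illustration:

```python
def select(rows, pred):
    """Unary selection operator: keep the rows satisfying the predicate."""
    return [r for r in rows if pred(r)]

# Partition the table into four disjoint work units and apply the
# selection to each unit independently; the union of the partial
# results equals the serial result, so f(D) = f(D0) U ... U f(Dn).
table = list(range(20))
parts = [table[i::4] for i in range(4)]      # any disjoint partitioning works
pred = lambda r: r % 3 == 0
parallel = [r for p in parts for r in select(p, pred)]
serial = select(table, pred)
assert sorted(parallel) == sorted(serial)
```

The same check holds for projection and for the binary set operators when both operands are partitioned identically.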
Certain operators are amenable to parallelism only conditionally. Grouping and join are in this category. Grouping works as long as partitioning is done by the grouping attribute. Similarly, a join requires that the join attribute also be used for partitioning.
Tables do not grow symmetrically as the total number of managed objects increases. The object and variable tables soon dwarf the others as more objects are placed under management. For one million managed objects and a thirty-minute transport interval, the incoming data to be processed can be on the order of 154 Megabytes in size. A million-element object table will be about 0.25 Gigabytes at its initial creation. This file will also grow over time, as some objects are retired and new discoveries appear. Considering the operations required in the production of a performance report, it is possible to design a parallel database scheme that will allow a parallel join of distributed sub-components of the data and object tables by using the object identifiers as the partitioning attribute. The smaller attribute, class and variable tables need not be partitioned. In order to make them available for binary operators such as joins, they need only be replicated across the separate database engines. This replication is cheap and easy given the small size of the files in question.
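The join condition above can also be illustrated concretely: when both relations are partitioned on the join attribute (here the object identifier, modelled as field 0 of each tuple), every matching pair falls inside a single partition, so the union of per-partition joins equals the full join. The table contents and range boundaries below are invented for the sketch.

```python
def join(left, right, key):
    """Naive equi-join of two lists of tuples on the given field index."""
    return [(l, r) for l in left for r in right if l[key] == r[key]]

# Both relations partitioned by the join attribute (object identifier).
left = [(i, "perf%d" % i) for i in range(10)]    # performance records
right = [(i, "obj%d" % i) for i in range(10)]    # object records
ranges = [(0, 4), (5, 9)]                        # index ranges per engine

partial = []
for lo, hi in ranges:
    lpart = [l for l in left if lo <= l[0] <= hi]
    rpart = [r for r in right if lo <= r[0] <= hi]
    partial += join(lpart, rpart, 0)             # join runs per partition

assert sorted(partial) == sorted(join(left, right, 0))
```

Had the relations been partitioned on any other attribute, matching rows could land in different partitions and the per-partition joins would miss them, which is why the object identifier is chosen as the partitioning attribute.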
The appearance and retirement of entities in tables is tracked by two time-stamp attributes, representing the time the entity became known to the system and the time it departed, respectively. Versioned entities include monitored objects, collection classes and network management variables.
If a timeline contains an arbitrary interval spanning two instants, start and end, an entity can appear or disappear in one of seven possible relative positions. An entity cannot disappear before it becomes known, and it is not permissible for existence to have a zero duration. This means that there are six possible endings for the first start position, five for the second, and so on until the last.
One extra case is required to express an object that both appears and disappears within the subject interval. Therefore, the final count of the total number of cases is determined by the formula:

1 + Σ_{n=1}^{6} n = 1 + 21 = 22
There are twenty-two possible entity existence scenarios for any interval with a real duration. Time-domain versioning of tables is a salient feature of the design.
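The count of twenty-two can be verified mechanically. Reading the text as saying there are seven relative positions on the timeline, with an entity appearing at one position and disappearing at a strictly later one (no zero-duration existence), the appear/disappear pairs number 6 + 5 + … + 1 = 21, plus the one extra within-interval case. The combinatorial reading is this sketch's interpretation, not the patent's own wording.

```python
from itertools import combinations

# Seven relative positions; an entity appears at one position and
# disappears at a strictly later one, so the pairs are the 2-element
# combinations of the seven positions: 6 endings for the first start
# position, 5 for the second, and so on.
positions = range(7)
pairs = list(combinations(positions, 2))   # appear < disappear
extra = 1                                  # appears and disappears in-interval
total = extra + len(pairs)                 # 1 + sum(1..6) = 22 scenarios
```

This matches the formula 1 + Σ_{n=1}^{6} n = 22 and the stated twenty-two entity existence scenarios.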
A simple and computationally cheap intersection can be used since the domains are equivalent for both selections. Each element of the table need only be processed once, with both conditions applied together. Application programmers will access the distributed database via an application programming interface (API) providing C, C++, TCL and PERL bindings. Upon initialization, the library establishes read-only connections to the partitioned database servers, and queries are executed by broadcasting selection and join criteria to each server. The results returned are aggregated and returned to the application. To minimize memory requirements in large queries, provision is made for returning the results as either an input stream or a cache file. This allows applications to process very large data arrays in a flow-through manner.
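The broadcast-and-aggregate pattern the client library is described as using can be sketched as follows. The `execute` method and the fake servers are hypothetical stand-ins for the read-only partition connections; the real library also supports streaming and cache-file result delivery, which this sketch omits.

```python
import concurrent.futures

def broadcast_query(servers, sql):
    """Fan a query out to every partitioned database server and return
    the union of the partial result sets, as the client library does."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(servers)) as pool:
        partials = pool.map(lambda s: s.execute(sql), servers)
    rows = []
    for part in partials:
        rows.extend(part)          # aggregate per-partition results
    return rows

class FakeServer:
    """Stand-in for a read-only connection to one database partition."""
    def __init__(self, rows):
        self.rows = rows
    def execute(self, sql):
        return [r for r in self.rows if r % 2 == 0]   # toy selection

servers = [FakeServer([1, 2, 3]), FakeServer([4, 5, 6])]
result = broadcast_query(servers, "SELECT ...")
```

Since selection is unconditionally partitionable, the aggregated rows are exactly what a single non-parallel server would have returned for the same query.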
A limited debug and general-access user interface is provided in the form of an interactive monitor, familiar to many database users. The monitor handles the multiple connections and uses a simple query rewrite rule system to ensure that returns match the expected behavior of a non-parallel database. To prevent poorly conceived queries from swamping the system's resources, a built-in limit on the maximum number of rows returned is set at monitor startup. Provision is made for increasing the limit during a session.
As the number of total managed objects increases, the corresponding object and variable data tables increase at a non-linear rate. For example, it was found through one test implementation that one million managed objects with a thirty-minute data sample transport interval generated incoming performance management data on the order of 154 Megabytes. A one-million-element object table will be about 250 Megabytes at its initial creation. This file will also grow over time as some objects are retired and new discoveries appear.
Considering the operations required in the production of a performance report, it is possible to design a parallel database scheme that will allow a parallel join of distributed sub-components of the data and object tables by using the object identifiers as the partitioning attribute. This involves partitioning data and object tables by index, importing the partitioned network topology data delegated to multiple instances of the database engine, and invoking an application routine against the most recent flat file performance data hunk and directing the output to multiple database engines.
The API and the user debug and access interfaces are compliant with standard relational database access methods, thereby permitting legacy or in-place implementations to remain compatible.
This invention addresses the storage and retrieval of very large volumes of collected network performance data, allowing database operations to be applied in parallel to subsections of the working data set by multiple instances of a database engine, parallelizing operations that were previously executed serially. Complex performance reports drawing on data from millions of managed network objects can now be generated in real time. The result is exceptional scalability for real-time performance management solutions, since each component has its own level of scalability.
Today's small computers are capable of delivering several tens of millions of operations per second, and continuing increases in power are foreseen. When interconnected by an appropriate high-speed network, the combined computational power of such computer systems can be applied to a variety of computationally intensive applications. In other words, network computing, coupled with prudent application design, can provide supercomputer-level performance. The network-based approach can also aggregate several similar multiprocessors into a configuration that would otherwise be economically and technically difficult to achieve, even with prohibitively expensive supercomputer hardware.
With this invention, scalability limits are advanced, achieving an unprecedented level of monitoring influence.

Claims

What is claimed is:
1. A high performance relational database management system, leveraging the functionality of a high speed communications network, comprising the steps of: (i) receiving collected data objects from at least one data collection node using at least one performance monitoring computer whereby a distributed database is created;
(ii) partitioning the distributed database into data hunks using a histogram routine running on at least one performance monitoring server computer; (iii) importing the data hunks into a plurality of delegated database engine instances located on at least one performance monitoring server computer so as to parallel process the data hunks whereby processed data is generated; and
(iv) accessing the processed data using at least one performance client computer to monitor data object performance.
2. The system according to claim 1, wherein at least one database engine instance is located on the performance monitor server computers on a ratio of one engine instance to one central processing unit whereby the total number of engine instances is at least two so as to enable the parallel processing of the distributed database.
3. The system according to claim 2, wherein at least one database engine instance is used to maintain a versioned master vector table.
4. The system according to claim 3, wherein the versioned master vector table generates a histogram routine used to facilitate the partitioning of the distributed database.
5. The system according to claim 4, wherein the histogram routine comprises the steps of: (i) dividing the total number of active object identifiers by the desired number of partitions so as to establish the optimum number of objects per partition;
(ii) generating an n-point histogram of desired granularity from the active indices; and (iii) summing adjacent histogram routine generated values until a target partition size is reached but not exceeded.
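The three steps recited above can be sketched as follows. This is an illustrative implementation, not the patented routine itself: the function name, bin-assignment arithmetic, and example inputs are all invented for the sketch.

```python
# Illustrative sketch of the recited steps: (i) compute the target
# partition size, (ii) build an n-point histogram of active object
# identifiers, (iii) sum adjacent bins into partitions without
# exceeding the target size.
def histogram_partitions(active_ids, num_partitions, n_bins):
    # Step (i): optimum number of objects per partition.
    target = len(active_ids) // num_partitions
    # Step (ii): n-point histogram over the range of active identifiers.
    lo, hi = min(active_ids), max(active_ids)
    width = (hi - lo + 1) / n_bins
    counts = [0] * n_bins
    for oid in active_ids:
        counts[min(int((oid - lo) / width), n_bins - 1)] += 1
    # Step (iii): sum adjacent bins until the target is reached
    # but not exceeded; start a new partition when it would be.
    partitions, current = [], 0
    for c in counts:
        if current and current + c > target:
            partitions.append(current)
            current = 0
        current += c
    partitions.append(current)
    return partitions

# 100 uniformly spread identifiers into 4 partitions via a 20-bin histogram.
print(histogram_partitions(list(range(100)), 4, 20))
# → [25, 25, 25, 25]
```

With skewed identifier distributions the same procedure yields partitions of near-equal object counts rather than equal key ranges, which is the point of histogram-based partitioning.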
6. The system according to claim 1, wherein the performance monitor server comprises an application programming interface compliant with a standard relational database query language.
7. A high performance relational database management system, leveraging the functionality of a high speed communications network, comprising:
(i) at least one performance monitor server computer connected to the network for receiving network management data objects from at least one data collection node device whereby a distributed database is created;
(ii) a histogram routine running on the performance monitoring server computers for partitioning the distributed database into data hunks;
(iii) at least two database engine instances running on the performance monitoring server computers so as to parallel process the data hunks whereby processed data is generated; and
(iv) at least one performance monitor client computer connected to the network for accessing the processed data whereby data object performance is monitored.
8. The system according to claim 7, wherein at least one database engine instance is located on the performance monitoring server computers on a ratio of one engine instance to one central processing unit whereby the total number of engine instances for the system is at least two so as to enable the parallel processing of the distributed database.
9. The system according to claim 8, wherein at least one database engine instance is used to maintain a versioned master vector table.
10. The system according to claim 9, wherein the versioned master vector table generates a histogram routine used to facilitate the partitioning of the distributed database.
11. The system according to claim 10, wherein the histogram routine comprises the steps of:
(i) dividing the total number of active object identifiers by the desired number of partitions so as to establish the optimum number of objects per partition;
(ii) generating an n-point histogram of desired granularity from the active indices; and
(iii) summing adjacent histogram routine generated values until a target partition size is reached but not exceeded.
12. The system according to claim 7, wherein the performance monitor server comprises an application programming interface compliant with a standard relational database query language.
13. The system according to claim 1, wherein at least one performance monitor client computer is connected to the network so as to communicate remotely with the performance monitor server computers.
14. A storage medium readable by an install server computer in a high performance relational database management system including the install server, leveraging the functionality of a high speed communications network, the storage medium encoding a computer process comprising: (i) a processing portion for receiving collected data objects from at least one data collection node using at least one performance monitoring computer whereby a distributed database is created;
(ii) a processing portion for partitioning the distributed database into data hunks using a histogram routine running on at least one performance monitoring server computer;
(iii) a processing portion for importing the data hunks into a plurality of delegated database engine instances located on at least one performance monitoring server computer so as to parallel process the data hunks whereby processed data is generated; and
(iv) a processing portion for accessing the processed data using at least one performance client computer to monitor data object performance.
15. The system according to claim 14, wherein at least one database engine instance is located on the data processor server computers on a ratio of one engine instance to one central processing unit whereby the total number of engine instances is at least two so as to enable the parallel processing of the distributed database.
16. The system according to claim 15, wherein one ofthe database engine instances is designated as a prime database engine instance used to maintain a versioned master vector table.
17. The system according to claim 16, wherein the versioned master vector table generates a histogram routine used to facilitate the partitioning of the distributed database.
18. The system according to claim 14, wherein the histogram routine comprises the steps of: (i) dividing the total number of active object identifiers by the desired number of partitions so as to establish the optimum number of objects per partition;
(ii) generating an n-point histogram of desired granularity from the active indices; and (iii) summing adjacent histogram routine generated values until a target partition size is reached but not exceeded.
19. The system according to claim 14, wherein the performance monitor server comprises an application programming interface compliant with a standard relational database query language.
PCT/CA2001/000665 2000-09-18 2001-05-23 High performance relational database management system WO2002025481A2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
AU2001258115A AU2001258115A1 (en) 2000-09-18 2001-05-23 High performance relational database management system
GB0306173A GB2382903A (en) 2000-09-18 2001-05-23 High performance relational database management system

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CA2,319,918 2000-09-18
CA002319918A CA2319918A1 (en) 2000-09-18 2000-09-18 High performance relational database management system
CA002345309A CA2345309A1 (en) 2000-09-18 2001-04-26 High performance relational database management system
CA2,345,309 2001-04-26

Publications (2)

Publication Number Publication Date
WO2002025481A2 true WO2002025481A2 (en) 2002-03-28
WO2002025481A3 WO2002025481A3 (en) 2003-01-16

Family

ID=25682088

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2001/000665 WO2002025481A2 (en) 2000-09-18 2001-05-23 High performance relational database management system

Country Status (4)

Country Link
AU (1) AU2001258115A1 (en)
CA (1) CA2345309A1 (en)
GB (1) GB2382903A (en)
WO (1) WO2002025481A2 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0849911A2 (en) * 1996-12-18 1998-06-24 Nortel Networks Corporation Communications network monitoring
WO1999053703A1 (en) * 1998-04-14 1999-10-21 Telefonaktiebolaget Lm Ericsson (Publ) Method and apparatus for radio network management

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
BARLOS F ET AL: "A load balanced multicomputer relational database system for highly skewed data" PARALLEL COMPUTING, ELSEVIER PUBLISHERS, AMSTERDAM, NL, vol. 21, no. 9, 1 September 1995 (1995-09-01), pages 1451-1483, XP004062644 ISSN: 0167-8191 *
BARU C K ET AL: "DB2 PARALLEL EDITION" IBM SYSTEMS JOURNAL, IBM CORP. ARMONK, NEW YORK, US, vol. 34, no. 2, 21 March 1995 (1995-03-21), pages 292-322, XP000526619 ISSN: 0018-8670 *
CHAMBERLIN, DON: "A complete Guide to DB2 universal database" 1998 , MORGAN KAUFMANN PUBLISHERS , USA XP002218606 ISBN 1-55860-482-0; excerpt: pages 589-599 and 656-658 the whole document *
DEWITT D J ET AL: "Gamma-a high performance dataflow database machine" PROCEEDINGS OF VERY LARGE DATA BASES. TWELFTH INTERNATIONAL CONFERENCE ON VERY LARGE DATA BASES, KYOTO, JAPAN, 25-28 AUG. 1986, pages 228-237, XP002218604 1986, Los Altos, CA, USA, Morgan Kaufmann Publishers, USA ISBN: 0-934613-18-4 *
HUA K A ET AL: "An adaptive data placement scheme for parallel database computer systems" VERY LARGE DATA BASES. 16TH INTERNATIONAL CONFERENCE ON VERY LARGE DATA BASES, BRISBANE, QLD., AUSTRALIA, 13-16 AUG. 1990, pages 493-506, XP002218605 1990, Palo Alto, CA, USA, Morgan Kaufmann, USA *
KHANH QUOC NGUYEN ET AL: "An enhanced hybrid range partitioning strategy for parallel database systems" DATABASE AND EXPERT SYSTEMS APPLICATIONS, 1997. PROCEEDINGS., EIGHTH INTERNATIONAL WORKSHOP ON TOULOUSE, FRANCE 1-2 SEPT. 1997, LOS ALAMITOS, CA, USA,IEEE COMPUT. SOC, US, 1 September 1997 (1997-09-01), pages 289-294, XP010243307 ISBN: 0-8186-8147-0 *
MUTHUKRISHNAN S ET AL: "On rectangular partitionings in two dimensions: algorithms, complexity, and applications" DATABASE THEORY - ICDT'99. 7TH INTERNATIONAL CONFERENCE. PROCEEDINGS, PROCEEDINGS OF INTERNATIONAL CONFERENCE ON DATABASE THEORY, JERUSALEM, ISRAEL, 10-12 JAN. 1999, pages 236-256, XP002218603 1999, Berlin, Germany, Springer-Verlag, Germany ISBN: 3-540-65452-6 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005098655A2 (en) 2004-02-21 2005-10-20 Datallegro, Inc. Ultra-shared-nothing parallel database
EP1716505A2 (en) * 2004-02-21 2006-11-02 Datallegro, Inc. Ultra-shared-nothing parallel database
EP1716505A4 (en) * 2004-02-21 2009-10-21 Datallegro Inc Ultra-shared-nothing parallel database
AU2005231230B2 (en) * 2004-02-21 2010-05-27 Microsoft Technology Licensing, Llc Ultra-shared-nothing parallel database
US7818349B2 (en) 2004-02-21 2010-10-19 Datallegro, Inc. Ultra-shared-nothing parallel database
EP1647901A1 (en) * 2004-10-15 2006-04-19 Samsung Electronics Co.,Ltd. System and method for collecting network performance data and storing it in a single relational table.
WO2014026270A1 (en) * 2012-08-13 2014-02-20 Aria Solutions, Inc. High performance real-time relational database system and methods for using same
CN109992206A (en) * 2019-03-27 2019-07-09 新华三技术有限公司成都分公司 Distributed data storage method and relevant apparatus
CN109992206B (en) * 2019-03-27 2022-05-10 新华三技术有限公司成都分公司 Data distribution storage method and related device

Also Published As

Publication number Publication date
AU2001258115A1 (en) 2002-04-02
GB2382903A (en) 2003-06-11
CA2345309A1 (en) 2002-03-18
WO2002025481A3 (en) 2003-01-16
GB0306173D0 (en) 2003-04-23

Similar Documents

Publication Publication Date Title
US11816126B2 (en) Large scale unstructured database systems
US20020049759A1 (en) High performance relational database management system
EP1654683B1 (en) Automatic and dynamic provisioning of databases
EP2182448A1 (en) Federated configuration data management
Emara et al. Distributed data strategies to support large-scale data analysis across geo-distributed data centers
Su et al. Sdquery dsi: integrating data management support with a wide area data transfer protocol
Malik et al. Sketching distributed data provenance
CN105677761A (en) Data sharding method and system
CN113407600A (en) Enhanced real-time calculation method for dynamically synchronizing multi-source large table data in real time
Mehmood et al. Distributed real-time ETL architecture for unstructured big data
CN110381136A (en) A kind of method for reading data, terminal, server and storage medium
CN112507026B (en) Distributed high-speed storage method based on key value model, document model and graph model
Liu et al. Using provenance to efficiently improve metadata searching performance in storage systems
Benlachmi et al. A comparative analysis of hadoop and spark frameworks using word count algorithm
US11960616B2 (en) Virtual data sources of data virtualization-based architecture
Polak et al. Organization of quality-oriented data access in modern distributed environments based on semantic interoperability of services and systems
WO2002025481A2 (en) High performance relational database management system
CN108306916A (en) Big data multi-internet integration scientific research all-in-one machine stage apparatus
Bhuiyan et al. MIRAGE: An Iterative MapReduce based FrequentSubgraph Mining Algorithm
Kumova Dynamically adaptive partition-based data distribution management
Bhuiyan et al. FSM-H: frequent subgraph mining algorithm in hadoop
Papanikolaou Distributed algorithms for skyline computation using apache spark
CN110569310A (en) Management method of relational big data in cloud computing environment
Xu et al. VSFS: A versatile searchable file system for HPC analytics
Guerrieri et al. Etsch: Partition-centric graph processing

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

ENP Entry into the national phase

Ref document number: 0306173

Country of ref document: GB

Kind code of ref document: A

Free format text: PCT FILING DATE = 20010523

Format of ref document f/p: F

121 Ep: the epo has been informed by wipo that ep was designated in this application
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP