US20120215756A1 - Gateways having localized in memory databases and business logic execution - Google Patents
- Publication number
- US20120215756A1 (application US13/461,401; application number US201213461401A)
- Authority
- US
- United States
- Prior art keywords
- hyper
- memory
- engine
- gateway
- datasets
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/958—Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
- G06F16/972—Access to data in other repository systems, e.g. legacy data or dynamic Web page generation
Definitions
- FIG. 1 is a schematic depiction of an exemplary embodiment of a gateway according to the present disclosure in use between a legacy software system and an enterprise application;
- FIG. 2 is an exemplary embodiment of the gateway of FIG. 1 ;
- FIG. 3 is an exemplary embodiment of a rules service sequence diagram of an integration gateway portion of FIG. 2 ;
- FIG. 4 is an exemplary embodiment of a query service sequence diagram of the integration gateway portion of FIG. 2 ;
- FIG. 5 is an exemplary embodiment of a rules service sequence diagram of the domain gateway portion of FIG. 2 ;
- FIG. 6 is an exemplary embodiment of a query service sequence diagram of the domain gateway portion of FIG. 2 .
- Gateway 10 provides a clusterable solution that provides programmatic access to very large datasets resident on the gateway.
- gateway 10 includes an in-memory database that allows for the storing, indexing, updating, and searching of large amounts of structured data from database 16 entirely in the memory of the gateway. In this manner, and by leveraging the 64-bit technology currently available, gateway 10 is configured to provide a resolution or query time for vast quantities of data at the microsecond level.
- Gateway 10 is shown, by way of example, in use between one or more legacy software systems 12 (two shown) and an enterprise application 14 having a database 16 .
- gateway 10 enables real time exchange of data between software systems 12 and database 16 of enterprise application 14 .
- Gateway 10 is in communication with software system 12 via a first communication channel 18 . Similarly, gateway 10 is in communication with enterprise application 14 via a second communication channel 20 .
- First and second communication channels can be any known communication device and/or protocol such as, but not limited to, the Internet, a Local Area Network (LAN), a Wide Area Network (WAN), a wireless network, and others.
- Data within database 16 can include data in Extensible Markup Language (XML) format, metadata, and other formats.
- gateway 10 , software system 12 , first communication channel 18 , enterprise application 14 , and second communication channel 20 are configured to communicate the data resident in database 16 .
- It is contemplated by the present disclosure for the data in database 16 to be any type of object-oriented data, data fields, Hyper Text Markup Language (HTML), any script, such as JavaScript, Jython, or SQL, and any other data format.
- Software systems 12 can be those commonly used by a health care organization, while enterprise application 14 can be the Portico Foundation Software, which is shown and described in co-pending U.S. application Ser. No. 11/430,753 filed on May 9, 2006, the entire contents of which are incorporated by reference herein. However, it is contemplated by the present disclosure for gateway 10 to find uses between any software applications where the transmission of object oriented data is required.
- Gateway 10 is described in more detail with reference to the architecture of the gateway as illustrated in FIG. 2 .
- Gateway 10 is configured to support asynchronous and synchronous communications, as well as request and reply communication models.
- gateway 10 is both horizontally and vertically extensible and scalable by adding additional nodes of the gateway on a particular server and/or by adding additional servers running additional nodes of the gateway.
- node means one instance of gateway 10 running in one instance of a Java Virtual Machine (JVM).
- gateway 10 has an architecture that allows gateway 10 to be assembled as a cluster of nodes, where each node does not communicate with or even know of the existence of the remaining nodes in the cluster. Thus, each instance of gateway 10 shares nothing with other instances of the gateway when assembled in a cluster.
- a cluster means a plurality of nodes running on separate servers and/or separate processors on the same server.
- the gateways are typically placed behind a hardware load balancer (not shown) to balance the load of interactions among the various nodes in the cluster. Therefore, gateway 10 can be extended horizontally. Gateway 10 can also be scaled vertically due to a multi-threaded architecture that provides a nearly linear increase in performance as additional processors are added to an individual node in the cluster.
- gateway 10 is described herein by way of example only in use with the JAVA programming construct. However, it is contemplated by the present disclosure for gateway 10 to find use with other programming constructs such as, but not limited to, MICROSOFT.NET, Java Server Pages, Java Server Faces, C++, C, Perl, Jython, Python, Ruby, Groovy, PHP, and others.
- Gateway 10 includes an integration gateway portion 22 , a domain gateway portion 24 , and a hyper-memory portion 26 .
- hyper-memory portion 26 is an in-memory database that allows for the storing, indexing, updating, and searching of large amounts of structured data from database 16 via integration and domain gateway portions 22 , 24 .
- Hyper-memory portion 26 does not replace the persistent, transactional, relational data store such as is implemented by modern Relational Database Management Systems. Rather, hyper-memory portion 26 loads, indexes, and searches large quantities of data in a scalable and high-performing fashion.
- gateway 10 is described herein by way of example including both integration and domain gateway portions 22 , 24 . Of course, it is contemplated by the present disclosure for gateway 10 to have only integration gateway portion 22 , only domain gateway portion 24 , or any combinations thereof. Moreover, it is contemplated by the present disclosure for gateway 10 to include only hyper-memory portion 26 .
- gateway 10 can include a Request Broker Service (RBS) service 28 , a communication layer 30 , and an Interface Definition Service 32 .
- Integration gateway portion 22 includes an integration rules engine 34 , a search engine 36 , and a first virtual machine 38 .
- First virtual machine 38 is an implementation of a computational engine, also known as a processor, that is capable of executing an instruction set. This instruction set is a list of all instructions and their variations that the processor is capable of executing.
- the instruction set of first virtual machine 38 includes operations specific to, but not limited to, using the business objects 40 to search and retrieve data stored in database 16 .
- the instruction set of virtual machine 38 is known as a “byte code”.
- Search engine 36 parses XML search queries and generates corresponding sets of instructions in this instruction set “language” that are then given to virtual machine 38 to execute, in a manner similar to how the Java byte code language is processed by the Java Virtual Machine (JVM).
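The compile-and-execute relationship between search engine 36 and virtual machine 38 described above can be sketched as a miniature pipeline. The instruction names and the `MiniVm` class below are illustrative assumptions only; the gateway's actual byte code is not disclosed in this application:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: a query is "compiled" into a small instruction list,
// then handed to a minimal virtual machine for execution -- analogous to how
// search engine 36 emits instructions for virtual machine 38 to execute.
public class MiniVm {
    // One instruction: an opcode plus an operand (e.g. a table or column name).
    record Instruction(String opcode, String operand) {}

    // "Compile" a trivial key=value query into an instruction sequence.
    static List<Instruction> compile(String table, String column, String value) {
        List<Instruction> program = new ArrayList<>();
        program.add(new Instruction("OPEN", table));
        program.add(new Instruction("FILTER", column + "=" + value));
        program.add(new Instruction("EMIT", "*"));
        return program;
    }

    // "Execute" the program; here we only trace it, whereas a real engine
    // would invoke business objects or index lookups for each opcode.
    static String execute(List<Instruction> program) {
        StringBuilder trace = new StringBuilder();
        for (Instruction ins : program) {
            trace.append(ins.opcode()).append('(').append(ins.operand()).append(") ");
        }
        return trace.toString().trim();
    }
}
```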
- Rules engine 34 enables external system 12 to execute a business object or rule 40 resident on enterprise software 14 and retrieve the results.
- Search engine 36 enables external system 12 to query database 16 resident on enterprise software 14 so that the external system is insulated from the database structure of the enterprise software and need only specify what information is required and the output format.
- integration gateway portion 22 is described by way of example as including both integration rules engine 34 and search engine 36 . However, it is contemplated by the present disclosure for integration gateway portion 22 to have only rules engine 34 , only search engine 36 , or any combinations thereof.
- Domain gateway portion 24 includes a domain rules engine 42 .
- Domain rules engine 42 enables external system 12 to execute a domain object or rule 44 resident in enterprise software 14 and retrieve the results from hyper-memory 48 .
- Hyper-memory portion 26 includes a hyper-memory engine 46 , a hyper-memory 48 , and a second virtual machine 50 .
- Hyper-memory engine 46 is an in-memory data engine that stores and indexes data entirely in hyper-memory 48 .
- hyper-memory portion 26 provides a high speed caching layer within gateway 10 in which datasets of information are accessed.
- Hyper-memory 48 includes data structures stored in random access memory (RAM) which usually includes, but is not limited to, integrated circuits (IC) attached directly or via sockets to the motherboard of a computer.
- Types of RAM in which hyper-memory 48 may be stored include, but are not limited to, SDRAM, DDR, RDRAM, DDR2, DDR3, and others.
- Hyper-memory 48 is intended for high-speed (microsecond level) access to records. Therefore, hyper-memory engine 46 avoids unnecessary JAVA object creation and leverages performance best practices, algorithms, and data structures that enable such high speeds.
- Second virtual machine 50 is an implementation of a computational engine, also known as a processor, that is capable of executing an instruction set.
- This instruction set is a list of all instructions and their variations that the processor is capable of executing.
- the instruction set of virtual machine 50 includes operations specific to, but not limited to, using hyper-memory engine 46 to search and retrieve data stored in hyper-memory 48 .
- the instruction set of virtual machine 50 is known as a “byte code”.
- Search engine 36 parses XML search queries and generates corresponding sets of instructions in this instruction set “language” that are then given to virtual machine 50 to execute, in a manner similar to how the Java byte code language is processed by the Java Virtual Machine (JVM).
- Hyper-memory portion 26 also includes a mass loader interface 52 and an event reader 54 .
- Mass loader interface 52 reads data from database 16 and, via hyper-memory engine 46 , stores the data in hyper-memory 48 .
- Event reader 54 is the top-level interface that parses events that are stored in, but not limited to, an XML format in database 16 .
- Hyper-memory engine 46 hosts and manages hyper-memory 48 by providing programmatic access to the data and structures loaded into hyper-memory 48 via application program interface (API) calls.
- the API calls from hyper-memory engine 46 include schema definition, data manipulation, data searching, data processing, and instrumentation.
- FIG. 3 is an exemplary embodiment of a rules service sequence diagram 60 of integration gateway portion 22 .
- a request for rule execution is placed in the JAVA message service (JMS) queue.
- the JMS request is unpacked by communication layer 30 and the payload of the request is passed on to Request Broker Service 28 .
- the JMS request is parsed by RBS 28 and classified as a rule.
- RBS 28 then invokes integration rules engine 34 so that the integration rules engine 34 executes the rule, which invokes business objects 40 as needed.
- business objects 40 retrieve data from database 16 and the database serves the data to the business objects.
- the rule is executed by integration rules engine 34 using the data from business objects 40 so that the output is formatted according to the rule.
- integration rules engine 34 sends the output to RBS 28 , which passes the output to the communication layer 30 .
- the communication layer 30 places the output in the JMS queue.
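The FIG. 3 sequence above can be condensed into a dispatch sketch: the communication layer unpacks a message, the request broker classifies it as a rule or a query, and the matching engine produces the output that is returned to the JMS queue. The class, method names, and payload convention below are assumptions for illustration, not the patent's published code:

```java
// Illustrative sketch of the FIG. 3 / FIG. 4 dispatch performed by RBS 28:
// classify the unpacked payload, then route rules to the rules engine and
// queries to the search engine. The "<rule" payload convention is assumed.
public class RequestBroker {
    // Classify a payload as a "rule" or a "query", as RBS 28 does.
    static String classify(String payload) {
        return payload.startsWith("<rule") ? "rule" : "query";
    }

    // Dispatch: the returned output would flow back through the
    // communication layer into the JMS queue.
    static String dispatch(String payload) {
        if (classify(payload).equals("rule")) {
            return executeRule(payload);
        }
        return executeQuery(payload);
    }

    static String executeRule(String payload) {
        // A real engine would invoke business objects 40 against database 16.
        return "rule-output";
    }

    static String executeQuery(String payload) {
        // A real engine would generate a query path and run it on the VM.
        return "query-output";
    }
}
```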
- FIG. 4 is an exemplary embodiment of a query service sequence diagram 62 of integration gateway portion 22 .
- a request for query execution is placed in the JAVA message service (JMS) queue.
- the JMS request is unpacked by communication layer 30 and the payload of the request is passed on to Request Broker Service 28 .
- the JMS request is parsed by RBS 28 and classified as a query.
- RBS 28 then invokes search engine 36 .
- Search engine 36 generates the query path based on the metadata from the Object Relational Service (ORS) of the query engine.
- the first virtual machine 38 leverages the query path to invoke the business objects 40 in the right sequence.
- the business objects 40 retrieve data from database 16 and the database serves the data to the business objects.
- First virtual machine 38 returns the results from business objects 40 to the search engine 36 .
- the search engine 36 prepares the reply based on the metadata from the IDS 32 .
- Then, RBS 28 passes the output to the communication layer 30 and the communication layer 30 places the reply in the JMS queue.
- FIG. 5 is an exemplary embodiment of a rules service sequence diagram 64 of domain gateway portion 24 .
- a request for rule execution is placed in the JAVA message service (JMS) queue.
- the JMS request is unpacked by communication layer 30 and the payload of the request is passed on to Request Broker Service 28 .
- the JMS request is parsed by RBS 28 and classified as a rule.
- RBS 28 then invokes rules engine 42 so that the engine executes the rule, which invokes domain objects 44 as needed.
- domain objects 44 request hyper-memory engine 46 to retrieve data from hyper-memory 48 .
- Domain objects 44 return the data to rules engine 42 , which executes the rule using the data so that the output is formatted according to the rule.
- engine 42 sends the output to RBS 28 , which passes the output to the communication layer 30 .
- the communication layer 30 places the output in the JMS queue.
- FIG. 6 is an exemplary embodiment of a query service sequence diagram 66 of domain gateway portion 24 .
- a request for query execution is placed in the JAVA message service (JMS) queue.
- the JMS request is unpacked by communication layer 30 and the payload of the request is passed on to Request Broker Service 28 .
- the JMS request is parsed by RBS 28 and classified as a query.
- RBS 28 then invokes search engine 36 .
- Search engine 36 generates the query path based on the metadata from the ORS.
- the second virtual machine 50 leverages the query path to invoke hyper-memory engine 46 in the right sequence.
- hyper-memory engine 46 retrieves data from hyper-memory 48 and serves the data to second virtual machine 50 .
- Second virtual machine 50 returns the results to the search engine 36 .
- the search engine 36 prepares the reply based on the metadata from the IDS 32 .
- Then, RBS 28 passes the output to the communication layer 30 and the communication layer 30 places the reply in the JMS queue.
- the schema definition includes but is not limited to: table creation, table dropping, index creation, reindexing, and index deletion.
- the data manipulation includes but is not limited to: record insertion, record deletion, record updating, reloading tables, cloning tables, and truncating tables.
- the data searching includes but is not limited to: Rowid Search by Key/Value pair, Rowid Search by Map of Key/Values, Inner Rowid Join by one Joining Column, Tablescan Search by Map of Key/Values, Tablescan Search by Map of Key/Values and word-distance algorithm, and Get All Rowids by Tablename.
- the data processing includes but is not limited to: Rowid UNION, Rowid INTERSECTION, Convert Rowids To List of String Maps, Convert Rowid to Char Arrays, and Convert Rowid to String Map.
- the instrumentation includes but is not limited to: Get Record Count by Tablename, Get Schema for JMX Interrogation, Get Statistics in a User-friendly String, Get Table Names, and Get Total Records Loaded.
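The API categories above can be sketched with a minimal in-memory stand-in covering one call from each group: schema definition (`createTable`), data manipulation (`insert`), data searching (`searchByKeyValue`, returning "rowids"), and instrumentation (`recordCount`). All class and method names are illustrative assumptions, not the engine's actual API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal stand-in for the hyper-memory engine 46 API surface described above.
public class HyperMemorySketch {
    private final Map<String, List<Map<String, String>>> tables = new HashMap<>();

    // Schema definition: create an empty table.
    public void createTable(String name) { tables.put(name, new ArrayList<>()); }

    // Data manipulation: insert a record, returning its rowid (index here).
    public int insert(String table, Map<String, String> record) {
        List<Map<String, String>> rows = tables.get(table);
        rows.add(record);
        return rows.size() - 1;
    }

    // Data searching: rowid search by key/value pair.
    public List<Integer> searchByKeyValue(String table, String key, String value) {
        List<Integer> rowids = new ArrayList<>();
        List<Map<String, String>> rows = tables.get(table);
        for (int i = 0; i < rows.size(); i++) {
            if (value.equals(rows.get(i).get(key))) rowids.add(i);
        }
        return rowids;
    }

    // Instrumentation: record count by table name.
    public int recordCount(String table) { return tables.get(table).size(); }
}
```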
- hyper-memory portion 26 stores data in hyper-memory 48 in logical “tables” that have a “schema” including column names.
- the indexes of the tables use data structures such as, but not limited to, two level digital ternary search trees (trie) that branch to ternary trees. Each node in the index has a set that lists all matching rowids, which map to individual records stored in hyper-memory 48 . Additional metadata is stored on the index nodes.
- the tables in hyper-memory 48 are made up of “records”, which are in turn made up of individual “fields”, similar to a relational database. At a physical level the records are stored in “blocks”, and blocks in “extents”. Any given record in hyper-memory 48 has a unique “rowid” that identifies its absolute location in the extent/block/record data structures and never changes.
- a rowid contains the extent, block, and record number in a bit-shifted format. In other words, by doing simple bit shifting operations on the rowid, the extent, block, and record number for the rowid are calculated by hyper-memory engine 46 .
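The bit-shifted rowid layout can be sketched as follows. The field widths chosen here (16-bit extent, 24-bit block, 24-bit record) are illustrative assumptions; the application states only that the three numbers are packed into one value and recovered by simple shifts:

```java
// Sketch of the bit-shifted rowid described above: extent, block, and record
// number packed into one long, recovered with shifts and masks -- no lookup
// tables or object allocation needed on the access path.
public class RowId {
    static final int BLOCK_BITS = 24, RECORD_BITS = 24;

    // Pack extent, block, and record number into a single long rowid.
    static long encode(long extent, long block, long record) {
        return (extent << (BLOCK_BITS + RECORD_BITS)) | (block << RECORD_BITS) | record;
    }

    // Recover each component with simple bit operations.
    static long extent(long rowid) { return rowid >>> (BLOCK_BITS + RECORD_BITS); }
    static long block(long rowid)  { return (rowid >>> RECORD_BITS) & ((1L << BLOCK_BITS) - 1); }
    static long record(long rowid) { return rowid & ((1L << RECORD_BITS) - 1); }
}
```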
- indexes generated by hyper-memory engine 46 support fuzzy searches, and have their statistics maintained dynamically. Therefore there is never a need to recompute statistics, as they are always up to date.
- the indexes assigned by hyper-memory engine 46 to records in hyper-memory 48 can track the following in real time: the total number of records indexed; the number of distinct values indexed; at any given node in the index, which nodes are its children (to support fuzzy searching); and at any given node in the index, how many children nodes lie below that node.
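The dynamically maintained statistics can be illustrated with a simplified index whose counters are updated on every insertion, so nothing ever needs recomputing. The real engine uses a two-level digital/ternary search tree; the sorted-map stand-in below is an assumption chosen only to model the statistics and prefix-scan behavior:

```java
import java.util.NavigableMap;
import java.util.Set;
import java.util.TreeMap;
import java.util.TreeSet;

// Simplified stand-in for the trie/ternary-tree index: a sorted map from key
// to the set of matching rowids, with statistics maintained inline on every
// insert so they are always up to date.
public class LiveStatsIndex {
    private final NavigableMap<String, Set<Integer>> index = new TreeMap<>();
    private int totalRecordsIndexed = 0;

    public void add(String key, int rowid) {
        index.computeIfAbsent(key, k -> new TreeSet<>()).add(rowid);
        totalRecordsIndexed++;   // statistic updated inline, never recomputed
    }

    // Real-time statistics, as described for hyper-memory engine 46.
    public int totalRecordsIndexed() { return totalRecordsIndexed; }
    public int distinctValues()      { return index.size(); }

    // A prefix scan stands in for the children-aware fuzzy lookup.
    public Set<Integer> prefixSearch(String prefix) {
        Set<Integer> hits = new TreeSet<>();
        for (Set<Integer> rowids
                : index.subMap(prefix, prefix + Character.MAX_VALUE).values()) {
            hits.addAll(rowids);
        }
        return hits;
    }
}
```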
- indexes assigned by hyper-memory engine 46 to records in hyper-memory 48 employ a modular architecture to support not only standard “greater than, lesser than” indexing schemes, but “fuzzy” and “domain specific” indexing schemes.
- an index that uses special address specific fuzzy matching logic can be created and managed by hyper-memory engine 46 so as to allow for instantaneous “fuzzy address searches”.
- hyper-memory portion 26 works by organizing hyper-memory 48 to optimally store Data and Indexes. Data is stored in memory 48 by the creation and management of Extents, Blocks, and Records by engine 46 .
- the Record represents an individual set of keys and values.
- the key values are unique within a given Record.
- the values are stored in memory 48 , in a static char array.
- Hyper-memory engine 46 keeps track at the “table schema” level what order the columns are stored in, and therefore what column each individual value is mapped to.
- the Block represents a set of Records.
- the Extent represents a set of Blocks.
- Each extent logically includes a “Block map”, which is a static array of Blocks. Therefore each “extent” is one instance of a “Block map”.
- Each Block within the Block map is a set of records, the number of records being up to the configured Block Size. So the total record capacity of a table in hyper-memory 48 is equal to the number of Extents multiplied by the Block map size multiplied by the Block size.
- a table including one extent, with a block map size of 1 million, and a block size of 16, can contain up to 16 million records.
- the size of the Block and/or the size of the Extent within hyper-memory 48 are configurable via deployment descriptors and runtime property configuration files at the table level.
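The capacity relationship above reduces to a single multiplication, sketched below with the sizes from the example in the text (one extent, a block map of 1,000,000, and a block size of 16):

```java
// Capacity sketch for the Extent/Block/Record layout: total record capacity
// of a table equals extents x blockMapSize x blockSize.
public class TableCapacity {
    static long capacity(long extents, long blockMapSize, long blockSize) {
        return extents * blockMapSize * blockSize;
    }
}
```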
- Indexes are stored entirely in memory 48 as sets of integer arrays that are initially scaled in size based on the configured Extent and Block sizes. It is impossible to determine how much RAM an index is going to need from the Extent and Block sizes alone until the actual records are known, because the distribution of the data (its uniqueness or lack thereof) impacts the amount of RAM the index data structure will finally require once generated.
- the indexes automatically grow, allocating additional RAM as needed.
- the Indexes support a modular architecture in which the actual indexing scheme may be extended and enhanced beyond basic “greater than/less than” logic.
- This type of indexing is supported by default in hyper-memory portion 26 .
- hyper-memory portion 26 can also index two differently written address strings with an index that knows and understands that the strings are addresses, and that in the domain of addresses they are logically equivalent, using an address index according to the present disclosure.
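As an illustration of such domain-specific address indexing, consider a naive normalization-based sketch in which two differently written addresses map to the same index key. The abbreviation table, the normalization rules, and the sample addresses in the usage below are assumptions; the application does not disclose its actual address-matching logic:

```java
import java.util.Map;

// Naive sketch of address-aware indexing: differently written addresses
// normalize to one canonical key, so the index treats them as equivalent.
public class AddressKey {
    // Assumed (hypothetical) abbreviation table for illustration only.
    private static final Map<String, String> ABBREV = Map.of(
            "STREET", "ST", "AVENUE", "AVE", "ROAD", "RD");

    static String normalize(String address) {
        StringBuilder key = new StringBuilder();
        // Uppercase, strip punctuation, and abbreviate each token.
        for (String token
                : address.toUpperCase().replaceAll("[^A-Z0-9 ]", "").split("\\s+")) {
            key.append(ABBREV.getOrDefault(token, token)).append(' ');
        }
        return key.toString().trim();
    }
}
```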
- gateway 10 having hyper-memory portion 26 allows for domain specific indexing and provides the ability to define, load, index, access and query vast quantities of data with absolutely no disk input-output. Additionally, gateway 10 having hyper-memory portion 26 provides query resolution times of less than about 1 millisecond.
- Gateway 10 does not require the computational overhead related to transactional integrity (e.g., Oracle) and does not require computational or network input-output overhead related to managing a distributed cache (e.g., Tangosol's Coherence).
- Hyper-memory portion 26 uses JAVA static types to avoid “churning”.
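The churn-avoidance strategy can be sketched by storing field values in one large char array (as described for the record values above) rather than as per-record String objects, so lookups compare in place without allocating on the hot path. The sizing and layout here are illustrative assumptions:

```java
// Sketch of churn avoidance: values live in a single char array, and search
// compares characters in place instead of creating Strings, so the garbage
// collector has nothing to reclaim on the lookup path.
public class CharStore {
    private final char[] data;
    private int used = 0;

    public CharStore(int capacity) { data = new char[capacity]; }

    // Append a value; return its offset so callers can refer to it without
    // creating a new object per record.
    public int put(String value) {
        int offset = used;
        value.getChars(0, value.length(), data, offset);
        used += value.length();
        return offset;
    }

    // Compare in place -- no substring/String allocation while searching.
    public boolean matches(int offset, String value) {
        for (int i = 0; i < value.length(); i++) {
            if (data[offset + i] != value.charAt(i)) return false;
        }
        return true;
    }
}
```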
- Hyper-memory portion 26 can be deployed in a clustered environment, in which multiple nodes in the cluster all load and provide access to the same, or different, sets of data.
- the configuration of what node loads what data into hyper-memory 48 is provided by the Object Relational Service (ORS) of search engine 36 and the Interface Definition Service 32 .
- Hyper-memory portion 26 provides query times at the microsecond level by using the digital trie and ternary tree based indexes, by minimizing JAVA object creation in the code execution path, and by optimizing the data structures within memory 48 .
- Hyper-memory portion 26 is easily scaled to provide the ability to handle many simultaneous queries and lookups. High performance within hyper-memory portion 26 is provided by the minimization of JAVA object creation in the code execution path, the use of JAVA primitive types as opposed to JAVA objects, the Index data structures, and read-write locking synchronization in preference to mutual exclusion locking wherever possible.
- a read-write lock is a synchronization strategy in which multiple readers may simultaneously hold a lock when there are no threads attempting to write to hyper-memory 48 . Multiple threads may read data and use indexes in hyper-memory 48 without “blocking”, even when the same data structure elements and indexes are being used at the same time. Read-write locking provides a higher level of concurrency than is possible with older technologies based on more common locking strategies such as mutual exclusion locking. Therefore hyper-memory portion 26 supports higher throughput than previously possible.
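The read-write locking strategy described above is available directly in the standard JAVA concurrency library; a minimal sketch (the class name and counter are illustrative) follows:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of read-write locking: many readers proceed concurrently without
// blocking one another, and block only while a writer holds the lock.
public class GuardedTable {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private int recordCount = 0;

    public int read() {
        lock.readLock().lock();          // shared: concurrent with other readers
        try { return recordCount; }
        finally { lock.readLock().unlock(); }
    }

    public void write() {
        lock.writeLock().lock();         // exclusive: blocks readers and writers
        try { recordCount++; }
        finally { lock.writeLock().unlock(); }
    }
}
```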
- hyper-memory portion 26 also allows for Domain specific Indexing, which includes a modular Index architecture allowing for development of new indexing schemes as needed.
- gateway 10 can use a business rule 40 from rules engine 34 to load foreign database objects into hyper-memory 48 , index the objects, and reload them periodically using the embedded scheduler, possibly solving some of the data store synchronization issues.
- gateway 10 can be used to gain orders of magnitude in performance by loading parts of a foreign system's data store into hyper-memory 48 and joining it with data already in the hyper-memory, instead of making fine-grained calls to the external system 12 .
- gateway 10 not only ensures real time exchange of data, but is also configured to solve data integration problems experienced during data transformation between two or more non-coherent databases.
Description
- This application is a continuation of U.S. application Ser. No. 11/566,073 filed Dec. 1, 2006, now pending, the entire contents of which are incorporated herein by reference. Additionally, this application is related to U.S. application Ser. No. 11/430,753 filed on May 9, 2006, the entire contents of which are incorporated by reference herein.
- 1. Field of the Invention
- The present disclosure is related to gateways. More particularly, the present disclosure is related to gateways having localized in-memory databases and business logic execution capabilities.
- 2. Description of Related Art
- Enterprises are increasingly being asked to expose business logic and information through service layers. The performance of the exposed business logic and data access can be a limiting factor on their usefulness and commercial success.
- One approach for exposing business logic and information via a service layer is to install an application known as a gateway. This application sits between the consumers of its services and data, and the actual sources of the data and business logic, which are typically based on older relational database technology and programming languages. Unfortunately, prior gateways have not provided sufficient resolution or query time when using the large amounts of data and business logic common in today's enterprise applications, because of their reliance on older relational database technology and methods of invoking and executing said business logic.
- Many prior solutions attempt to lower the resolution time by better organizing, relating, and indexing the data passing through the gateway, and by providing Remote Procedure Call style access to business logic. Unfortunately, the database indexing schemes offered by relational databases are disk based, and the data and business logic are typically located on a separate physical server. As such, all of these solutions are limited by the input-output rate of the disk as well as by network latency.
- Recently, some solutions that may be used as an alternate “backend” for a gateway instead of a relational database have attempted to accelerate the data resolution time by storing some of the data directly in the RAM or cache memory of the gateway. These solutions, such as those available from Prevayler and Tangosol, have shown some success at overcoming the limitations imposed on the resolution time by the disk input-output rate. Unfortunately, scalability of such systems has proven difficult and expensive for several reasons: their reliance on distributed caches introduces an element of network latency into data and business logic access; their usage of Java objects in their implementation causes undesirable “object churning” of the items in the database, which negatively impacts the performance and speed of these systems; and these components are not tightly coupled with a complete gateway solution.
- As used herein, the term “object churning” refers to the act within a JAVA construct of continually creating and discarding objects (including arrays) from the memory heap. This object creation and destruction is managed by a software component known as a “garbage collector”, and the overhead involved becomes a bottleneck when such a system is under high load.
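By way of illustration (hypothetical code, not from the disclosure), the first method below allocates a fresh array on every call and thereby feeds the garbage collector, while the second reuses a single preallocated buffer:

```java
// Hypothetical illustration of "object churning" versus buffer reuse.
public class Churn {
    // Churning: a new char[] is created (and later garbage collected) per call.
    static char[] copyChurning(String value) {
        return value.toCharArray();
    }

    // Reuse: one preallocated buffer absorbs every call; no per-call allocation.
    // (Not thread-safe; illustration only.)
    static final char[] BUFFER = new char[256];

    static int copyReusing(String value) {
        value.getChars(0, value.length(), BUFFER, 0); // fill the shared buffer
        return value.length();                        // number of valid chars
    }

    public static void main(String[] args) {
        int n = copyReusing("hello");
        System.out.println(new String(BUFFER, 0, n)); // prints "hello"
    }
}
```

Under high load, the first style keeps the garbage collector busy; the second style is the kind of allocation avoidance the disclosure attributes to the hyper-memory engine.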
- Accordingly, there is a continuing need for gateways and methods that overcome, alleviate and/or mitigate one or more of the aforementioned and other deleterious effects of the prior art.
- A gateway is provided that includes an integration gateway portion, a domain gateway portion, and a hyper-memory portion. The integration gateway portion has an integration rules engine, a search engine, and a first virtual machine. The domain gateway portion has a domain rules engine. The hyper-memory portion has a hyper-memory engine, a hyper-memory, and a second virtual machine. The integration gateway portion accesses a database via the integration rules engine and the first virtual machine or via the search engine and the first virtual machine. The domain gateway portion accesses datasets of the database that are resident in the hyper-memory via the domain rules engine and the hyper-memory engine or via the search engine, the second virtual machine, and the hyper-memory engine.
- A gateway is also provided that includes a search engine, a first virtual machine, a second virtual machine, a hyper-memory engine, and a hyper-memory having datasets of a database resident thereon. The search engine and the first virtual machine access the database upon receipt of an integration gateway search request. The search engine, the second virtual machine, and the hyper-memory engine access the datasets from the hyper-memory upon receipt of a domain gateway search request.
- A hyper-memory portion for a gateway is also provided. The hyper-memory portion includes a hyper-memory and a hyper-memory engine in the hyper-memory for storing and indexing data entirely within the hyper-memory.
- The above-described and other features and advantages of the present disclosure will be appreciated and understood by those skilled in the art from the following detailed description, drawings, and appended claims.
-
FIG. 1 is a schematic depiction of an exemplary embodiment of a gateway according to the present disclosure in use between a legacy software system and an enterprise application; -
FIG. 2 is an exemplary embodiment of the gateway of FIG. 1 ; -
FIG. 3 is an exemplary embodiment of a rules service sequence diagram of an integration gateway portion of FIG. 2 ; -
FIG. 4 is an exemplary embodiment of a query service sequence diagram of the integration gateway portion of FIG. 2 ; -
FIG. 5 is an exemplary embodiment of a rules service sequence diagram of the domain gateway portion of FIG. 2 ; and -
FIG. 6 is an exemplary embodiment of a query service sequence diagram of the domain gateway portion of FIG. 2 . - Referring to the drawings and in particular to
FIG. 1 , an exemplary embodiment of a gateway according to the present disclosure is generally referred to by reference numeral 10. Gateway 10 is a clusterable solution that provides programmatic access to very large datasets resident on the gateway. - Advantageously,
gateway 10 includes an in-memory database that allows for the storing, indexing, updating, and searching of large amounts of structured data from database 16 entirely in the memory of the gateway. In this manner, and by leveraging the 64-bit technology currently available, gateway 10 is configured to provide a resolution or query time for vast quantities of data at the microsecond level. -
Gateway 10 is shown, by way of example, in use between one or more legacy software systems 12 (two shown) and an enterprise application 14 having a database 16. In this embodiment, gateway 10 enables real time exchange of data between software systems 12 and database 16 of enterprise application 14. - Gateway 10 is in communication with
software system 12 via a first communication channel 18. Similarly, gateway 10 is in communication with enterprise application 14 via a second communication channel 20. First and second communication channels 18, 20 can be any known communication device and/or protocol such as, but not limited to, the Internet, a Local Area Network (LAN), a Wide Area Network (WAN), a wireless network, and others. - Data within
database 16 can include data in Extensible Markup Language (XML), metadata, and other formats. In this embodiment, gateway 10, software system 12, first communication channel 18, enterprise application 14, and second communication channel 20 are configured to communicate the data resident in database 16. However, it is contemplated by the present disclosure for the data in database 16 to be any type of object oriented data, data fields, Hyper Text Markup Language (HTML), any script, such as JavaScript, Jython, or SQL, and any other data format. -
Software systems 12 can be those commonly used by a health care organization, while enterprise application 14 can be the Portico Foundation Software, which is shown and described in co-pending U.S. application Ser. No. 11/430,753 filed on May 9, 2006, the entire contents of which are incorporated by reference herein. However, it is contemplated by the present disclosure for gateway 10 to find uses between any software applications where the transmission of object oriented data is required. -
Gateway 10 is described in more detail with reference to the architecture of the gateway as illustrated in FIG. 2 . Gateway 10 is configured to support asynchronous and synchronous communications, as well as request and reply communication models. Moreover, gateway 10 is both horizontally and vertically extensible and scalable by adding additional nodes of the gateway on a particular server and/or by adding additional servers running additional nodes of the gateway. As used herein, the term node means one instance of gateway 10 running in one instance of a Java Virtual Machine (JVM). - Advantageously,
gateway 10 has an architecture that allows gateway 10 to be assembled as a cluster of nodes, where each node does not communicate with or even know of the existence of the remaining nodes in the cluster. Thus, each instance of gateway 10 shares nothing with other instances of the gateway when assembled in a cluster. As used herein, a cluster means a plurality of nodes running on separate servers and/or separate processors on the same server. When arranging gateway 10 in a cluster, the gateways are typically placed behind a hardware load balancer (not shown) to balance the load of interactions among the various nodes in the cluster. Therefore, gateway 10 can be extended horizontally. Gateway 10 can also be scaled vertically due to a multi-threaded architecture that provides a nearly linear increase in performance as additional processors are added to an individual node in the cluster. - For purposes of clarity,
gateway 10 is described herein by way of example only in use with the JAVA programming construct. However, it is contemplated by the present disclosure for gateway 10 to find use with other programming constructs such as, but not limited to, MICROSOFT.NET, Java Server Pages, Java Server Faces, C++, C, Perl, Jython, Python, Ruby, Groovy, PHP, and others. -
Gateway 10 includes an integration gateway portion 22, a domain gateway portion 24, and a hyper-memory portion 26. Advantageously, hyper-memory portion 26 is an in-memory database that allows for the storing, indexing, updating, and searching of large amounts of structured data from database 16 via integration and domain gateway portions 22, 24. Hyper-memory portion 26 does not replace the persistent, transactional, relational data store such as is implemented by modern Relational Database Management Systems. Rather, hyper-memory portion 26 loads, indexes, and searches large quantities of data in a scalable and high-performing fashion. - It should be recognized that
gateway 10 is described herein by way of example including both integration and domain gateway portions 22, 24. However, it is contemplated by the present disclosure for gateway 10 to have only integration gateway portion 22, only domain gateway portion 24, or any combinations thereof. Moreover, it is contemplated by the present disclosure for gateway 10 to include only hyper-memory portion 26. - Integration and
domain gateway portions 22, 24 are in communication with software system 12 and enterprise application 14 via first and second communication channels 18, 20, respectively. Gateway 10 can include a Request Broker Service (RBS) 28, a communication layer 30, and an Interface Definition Service 32. -
Integration gateway portion 22 includes an integration rules engine 34, a search engine 36, and a first virtual machine 38. First virtual machine 38 is an implementation of a computational engine, also known as a processor, that is capable of executing an instruction set. This instruction set is a list of all instructions and their variations that the processor is capable of executing. The instruction set of first virtual machine 38 includes operations specific to, but not limited to, using the business objects 40 to search and retrieve data stored in database 16. The instruction set of virtual machine 38 is known as a “byte code”. Search engine 36 parses XML search queries and generates corresponding sets of instructions in this instruction set “language” that are then given to virtual machine 38 to execute, in a manner similar to how the Java byte code language is processed by the Java Virtual Machine (JVM). -
Rules engine 34 enables external system 12 to execute a business object or rule 40 resident on enterprise software 14 and retrieve the results. Search engine 36 enables external system 12 to query database 16 resident on enterprise software 14 so that the external system is insulated from the database structure of the enterprise software and has only to specify what information is required and the output format. - In the illustrated embodiment,
integration gateway portion 22 is described by way of example as including both integration rules engine 34 and search engine 36. However, it is contemplated by the present disclosure for integration gateway portion 22 to have only rules engine 34, only search engine 36, or any combinations thereof. -
Domain gateway portion 24 includes a domain rules engine 42. Domain rules engine 42 enables external system 12 to execute a domain object or rule 44 resident in enterprise software 14 and retrieve the results from hyper-memory 48. - Hyper-
memory portion 26 includes a hyper-memory engine 46, a hyper-memory 48, and a second virtual machine 50. Hyper-memory engine 46 is an in-memory data engine that stores and indexes entirely in memory 48. Thus, hyper-memory portion 26 provides a high speed caching layer within gateway 10 in which datasets of information are accessed. - Hyper-
memory 48 includes data structures stored in random access memory (RAM) which usually includes, but is not limited to, integrated circuits (IC) attached directly or via sockets to the motherboard of a computer. Types of RAM that Hyper-memory 48 may be stored in include, but are not limited to, SDRAM, DDR, RDRAM, DDR 2, DDR 3, and others. - Hyper-
memory 48 is intended for high-speed (microsecond level) access to records. Therefore, hyper-memory engine 46 avoids unnecessary JAVA object creation and leverages performance best practices, algorithms, and data structures that enable such high speeds. - Second
virtual machine 50 is an implementation of a computational engine, also known as a processor, that is capable of executing an instruction set. This instruction set is a list of all instructions and their variations that the processor is capable of executing. The instruction set of virtual machine 50 includes operations specific to, but not limited to, using the hyper-memory engine 46 to search and retrieve data stored in hyper-memory 48. The instruction set of virtual machine 50 is known as a “byte code”. Search engine 36 parses XML search queries and generates corresponding sets of instructions in this instruction set “language” that are then given to virtual machine 50 to execute, in a manner similar to how the Java byte code language is processed by the Java Virtual Machine (JVM). - Hyper-
memory portion 26 also includes a mass loader interface 52 and an event reader 54. Mass loader interface 52 reads data from database 16 and, via hyper-memory engine 46, stores the data in hyper-memory 48. Event reader 54 is the top-level interface that parses events that are stored in, but not limited to, an XML format in database 16. - Hyper-
memory engine 46 hosts and manages hyper-memory 48 by providing programmatic access to the data and structures loaded into hyper-memory 48 via application program interface (API) calls. The API calls from hyper-memory engine 46 include schema definition, data manipulation, data searching, data processing, and instrumentation. - The operation of integration and
domain gateway portions 22, 24 of gateway 10 is described with reference to the sequence diagrams shown in FIGS. 3 through 6. -
FIG. 3 is an exemplary embodiment of a rules service sequence diagram 60 of integration gateway portion 22. At the beginning of rules service sequence diagram 60, a request for rule execution is placed in the JAVA message service (JMS) queue. The JMS request is unpacked by communication layer 30 and the payload of the request is passed on to Request Broker Service 28. The JMS request is parsed by RBS 28 and classified as a rule. RBS 28 then invokes integration rules engine 34 so that the integration rules engine 34 executes the rule, which invokes business objects 40 as needed. In response, business objects 40 retrieve data from database 16 and the database serves the data to the business objects. The rule is executed by integration rules engine 34 using the data from business objects 40 so that the output is formatted according to the rule. Then, integration rules engine 34 sends the output to RBS 28, which passes the output to the communication layer 30. Finally, the communication layer 30 places the output in the JMS queue. -
FIG. 4 is an exemplary embodiment of a query service sequence diagram 62 of integration gateway portion 22. At the beginning of query service sequence diagram 62, a request for query execution is placed in the JAVA message service (JMS) queue. The JMS request is unpacked by communication layer 30 and the payload of the request is passed on to Request Broker Service 28. The JMS request is parsed by RBS 28 and classified as a query. RBS 28 then invokes search engine 36. Search engine 36 generates the query path based on the metadata from the Object Relational Service (ORS) of the query engine. The first virtual machine 38 leverages the query path to invoke the business objects 40 in the right sequence. In turn, the business objects 40 retrieve data from database 16 and the database serves the data to the business objects. First virtual machine 38 returns the results from business objects 40 to the search engine 36. The search engine 36 prepares the reply based on the metadata from the IDS 32. Then, RBS 28 passes the reply to the communication layer 30 and the communication layer 30 places the reply in the JMS queue. -
FIG. 5 is an exemplary embodiment of a rules service sequence diagram 64 of domain gateway portion 24. At the beginning of rules service sequence diagram 64, a request for rule execution is placed in the JAVA message service (JMS) queue. The JMS request is unpacked by communication layer 30 and the payload of the request is passed on to Request Broker Service 28. The JMS request is parsed by RBS 28 and classified as a rule. RBS 28 then invokes rules engine 42 so that the engine executes the rule, which invokes domain objects 44 as needed. In response, domain objects 44 request hyper-memory engine 46 to retrieve data from hyper-memory 48. Domain objects 44 return the data to rules engine 42, which executes the rule using the data from domain objects 44 so that the output is formatted according to the rule. Then, engine 42 sends the output to RBS 28, which passes the output to the communication layer 30. Finally, the communication layer 30 places the output in the JMS queue. -
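The common skeleton of these request sequences (unpack, classify as rule or query, dispatch to the matching engine, reply) might be sketched as follows; the class and method names here are illustrative assumptions, not the actual implementation:

```java
// Illustrative sketch of the request-broker flow: an unpacked JMS payload is
// classified as a rule or a query and dispatched to the matching engine.
public class RequestBroker {
    interface Engine { String handle(String payload); }

    private final Engine rulesEngine;
    private final Engine searchEngine;

    RequestBroker(Engine rulesEngine, Engine searchEngine) {
        this.rulesEngine = rulesEngine;
        this.searchEngine = searchEngine;
    }

    // Classify the payload and route it; the reply is then handed back to
    // the communication layer, which places it in the JMS queue.
    String dispatch(String payload) {
        if (payload.startsWith("<rule")) return rulesEngine.handle(payload);
        if (payload.startsWith("<query")) return searchEngine.handle(payload);
        throw new IllegalArgumentException("unclassifiable request: " + payload);
    }
}
```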
FIG. 6 is an exemplary embodiment of a query service sequence diagram 66 of domain gateway portion 24. At the beginning of query service sequence diagram 66, a request for query execution is placed in the JAVA message service (JMS) queue. The JMS request is unpacked by communication layer 30 and the payload of the request is passed on to Request Broker Service 28. The JMS request is parsed by RBS 28 and classified as a query. RBS 28 then invokes search engine 36. Search engine 36 generates the query path based on the metadata from the ORS. The second virtual machine 50 leverages the query path to invoke the hyper-memory engine 46 in the right sequence. In turn, hyper-memory engine 46 retrieves data from hyper-memory 48 and serves the data to second virtual machine 50. Second virtual machine 50 returns the results to the search engine 36. The search engine 36 prepares the reply based on the metadata from the IDS 32. Then, RBS 28 passes the reply to the communication layer 30 and the communication layer 30 places the reply in the JMS queue. - The schema definition includes but is not limited to: table creation, table dropping, index creation, reindexing, and index deletion. The data manipulation includes but is not limited to: record insertion, record deletion, record updating, reloading tables, cloning tables, and truncating tables. The data searching includes but is not limited to: Rowid Search by Key/Value pair, Rowid Search by Map of Key/Values, Inner Rowid Join by one Joining Column, Tablescan Search by Map of Key/Values, Tablescan Search by Map of Key/Values and word-distance algorithm, and Get All Rowids by Tablename. The data processing includes but is not limited to: Rowid UNION, Rowid INTERSECTION, Convert Rowids To List of String Maps, Convert Rowid to Char Arrays, and Convert Rowid to String Map.
The instrumentation includes but is not limited to: Get Record Count by Tablename, Get Schema for JMX Interrogation, Get Statistics in a User-friendly String, Get Table Names, and Get Total Records Loaded.
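A hypothetical sketch of how the five API groups enumerated above might be surfaced; the names and signatures below are assumptions for illustration, not the engine's actual API:

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the five API groups exposed by the hyper-memory
// engine; names and signatures are illustrative, not the patent's actual API.
interface HyperMemoryApi {
    // Schema definition
    void createTable(String table, List<String> columns);
    void createIndex(String table, String column);

    // Data manipulation
    int insertRecord(String table, Map<String, String> record); // returns rowid
    void deleteRecord(String table, int rowid);

    // Data searching
    int[] searchByKeyValue(String table, String key, String value);

    // Data processing
    int[] union(int[] rowidsA, int[] rowidsB);
    Map<String, String> toStringMap(String table, int rowid);

    // Instrumentation
    long recordCount(String table);
}
```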
- Thus, hyper-
memory portion 26 stores data in hyper-memory 48 in logical “tables” that have a “schema” including column names. The indexes of the tables use data structures such as, but not limited to, two-level digital search trees (tries) that branch to ternary trees. Each node in the index has a set that lists all matching rowids, which map to individual records stored in hyper-memory 48. Additional metadata is stored on the index nodes. - The tables in hyper-
memory 48 comprise “records” which are made up of individual “fields”, similar to a relational database. At a physical level the records are stored in “blocks”, and blocks in “extents”. Any given record in hyper-memory 48 has a unique “rowid” that identifies its absolute location in the extent/block/record data structures and never changes. A rowid contains the extent, block, and record number in a bit shifted format. In other words, by doing simple bit shifting operations on the rowid, the extent, block, and record number for the rowid is calculated by hyper-memory engine 46. - Additionally, the indexes generated by hyper-
memory engine 46 support fuzzy searches, and have their statistics maintained dynamically. Therefore there is never a need to recompute statistics, as they are always up to date. - The indexes assigned by hyper-
memory engine 46 to records in hyper-memory 48 can track the following in real time: how many total records are indexed; how many distinct values are indexed; at any given node in the index, which nodes are its children (to support fuzzy searching); and, at any given node in the index, how many children nodes lie below that node. - The indexes assigned by hyper-
memory engine 46 to records in hyper-memory 48 employ a modular architecture to support not only standard “greater than, lesser than” indexing schemes, but “fuzzy” and “domain specific” indexing schemes. For example, an index that uses special address specific fuzzy matching logic can be created and managed by hyper-memory engine 46 so as to allow for instantaneous “fuzzy address searches”. - Thus, hyper-
memory portion 26 works by organizing hyper-memory 48 to optimally store Data and Indexes. Data is stored in memory 48 by the creation and management of Extents, Blocks, and Records by engine 46. - In hyper-
memory portion 26, the Record represents an individual set of keys and values. The key values are unique within a given Record. At the physical level, only the values are stored in memory 48, in a static char array. Hyper-memory engine 46 keeps track at the “table schema” level of what order the columns are stored in, and therefore what column each individual value is mapped to. - In hyper-
memory portion 26, the Block represents a set of Records. Further, the Extent represents a set of Blocks. Each extent logically includes a “Block map”, which is a static array of Blocks. Therefore each “extent” is one instance of a “Block map”. Each Block within the Block map is a set of records, the number of records being up to the configured Block Size. So the total record capacity of a table in hyper-memory 48 is equal to: -
(Number of Extents)×(Block Map Size)×(Block Size) - So a table including one extent, with a block map size of 1 million, and a block size of 16, can contain up to 16 million records. Advantageously, the size of the Block and/or the size of the Extent within hyper-
memory 48 are configurable via deployment descriptors and runtime property configuration files at the table level. - Indexes are stored entirely in
memory 48 as sets of integer arrays that are initially scaled in size based on the data Extent and Block sizes. It is impossible to determine how much RAM an index is going to need based on Extent and Block size alone until the actual records are known. This is because the distribution of data (its uniqueness or lack thereof) impacts the amount of RAM the index data structure will finally require once generated. Advantageously, as records are added to the tables in hyper-memory 48, the indexes automatically grow, consuming additional RAM as needed. The Indexes support a modular architecture in which the actual indexing scheme may be extended and enhanced beyond basic “greater than less than” logic. - For example, a prior art index might consider the following two strings of data to be unique and distinct:
-
- 123 Any Street USA
- 123 Any St. USA.
- This type of indexing is supported by default in hyper-
memory portion 26. By implementing domain specific indexes, hyper-memory portion 26 also indexes the above two values with an index that knows and understands that the two strings are addresses, and that in the domain of addresses they are logically equivalent. In other words, according to a domain specific “Address Index” of the present disclosure: -
- 123 Any Street USA=123 Any St. USA
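A minimal sketch of such a domain specific normalization (the abbreviation table, class, and method names are illustrative assumptions, not the disclosed index):

```java
import java.util.Map;

// Hypothetical address normalizer: two address strings index to the same key
// when they differ only in common abbreviations.
public class AddressKey {
    private static final Map<String, String> ABBREV = Map.of(
        "street", "st", "st.", "st",
        "avenue", "ave", "ave.", "ave",
        "road", "rd", "rd.", "rd");

    static String normalize(String address) {
        StringBuilder out = new StringBuilder();
        for (String token : address.toLowerCase().split("\\s+")) {
            if (out.length() > 0) out.append(' ');
            out.append(ABBREV.getOrDefault(token, token)); // canonical form
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // Both forms collapse to the same index key.
        System.out.println(normalize("123 Any Street USA")
            .equals(normalize("123 Any St. USA"))); // prints "true"
    }
}
```

An index keyed on the normalized form treats the two spellings as one value, which is the effect the “Address Index” above describes.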
- Accordingly,
gateway 10 having hyper-memory portion 26 allows for domain specific indexing and provides the ability to define, load, index, access and query vast quantities of data with absolutely no disk input-output. Additionally,gateway 10 having hyper-memory portion 26 provides query resolution times of less than about 1 millisecond. -
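The bit-shifted rowid scheme and the capacity formula described above might be sketched as follows; the field widths chosen here are assumptions, since the disclosure does not fix them:

```java
// Sketch of a bit-shifted rowid: extent, block, and record number packed into
// one long. Field widths are assumed for illustration.
public class Rowid {
    static final int RECORD_BITS = 8;  // up to 256 records per block (assumed)
    static final int BLOCK_BITS = 24;  // up to ~16.7M blocks per extent (assumed)

    static long encode(long extent, long block, long record) {
        return (extent << (BLOCK_BITS + RECORD_BITS)) | (block << RECORD_BITS) | record;
    }

    // Simple bit shifting recovers each component, as the text describes.
    static long extent(long rowid) { return rowid >>> (BLOCK_BITS + RECORD_BITS); }
    static long block(long rowid)  { return (rowid >>> RECORD_BITS) & ((1L << BLOCK_BITS) - 1); }
    static long record(long rowid) { return rowid & ((1L << RECORD_BITS) - 1); }

    // Total record capacity of a table, per the formula in the text:
    // (Number of Extents) x (Block Map Size) x (Block Size)
    static long capacity(long extents, long blockMapSize, long blockSize) {
        return extents * blockMapSize * blockSize;
    }

    public static void main(String[] args) {
        long rid = encode(2, 1_000_000, 15);
        System.out.println(extent(rid) + "/" + block(rid) + "/" + record(rid)); // 2/1000000/15
        System.out.println(capacity(1, 1_000_000, 16)); // 16000000
    }
}
```

The capacity call reproduces the worked example in the text: one extent with a block map size of 1 million and a block size of 16 yields 16 million records.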
Gateway 10 does not require the computational overhead related to transactional integrity (e.g., Oracle) and does not require computational or network input-output overhead related to managing a distributed cache (e.g., Tangosol's Coherence). Hyper-memory portion 26 uses JAVA static types to avoid “churning”. -
memory portion 26 can be deployed in a clustered environment, in which multiple nodes in the cluster all load and provide access to the same, or different, sets of data. The configuration of what node loads what data into hyper-memory 48 is provided by the Object Relational Service (ORS) ofsearch engine 36 and theInterface Definition Service 32. - Hyper-
memory portion 26 provides query times at the microsecond level by using the digital trie and ternary tree based indexes, by minimizing JAVA object creation in the code execution path, and by optimizing the data structures within memory 48. - Hyper-
memory portion 26 is easily scaled to provide the ability to handle many simultaneous queries and lookups. High performance within hyper-memory portion 26 is provided by the minimization of JAVA object creation in code execution path, the use of JAVA primitive types as opposed to JAVA objects, the Index data structures, and read-write locking synchronization in preference to mutual exclusion locking wherever possible. A read-write lock is a synchronization strategy in which multiple readers may simultaneously hold a lock when there are no threads attempting to write to hyper-memory 48. Multiple threads may read data and use indexes in hyper-memory 48 without “blocking”, even when the same data structure elements and indexes are being used at the same time. Read-write locking provides a higher level of concurrency than is possible with older technologies based on more common locking strategies such as mutual exclusion locking. Therefore hyper-memory portion 26 supports higher throughput than previously possible. - Further, hyper-
memory portion 26 also allows for Domain specific Indexing, which includes a modular Index architecture allowing for development of new indexing schemes as needed. - In some embodiments,
gateway 10 can use a business rule 40 from rules engine 34 to load foreign database objects into hyper-memory 48, index the objects, and reload them periodically using the embedded scheduler, possibly solving some data store synchronization issues. Alternately, gateway 10 can be used to gain orders of magnitude in performance by loading parts of a foreign system's data store into hyper-memory 48 and joining it with data already in the hyper-memory instead of making fine-grained calls to the external system 12. Thus, gateway 10 not only ensures real time exchange of data, but is also configured to solve data integration problems experienced during data transformation between two or more non-coherent databases. - It should also be noted that the terms “first”, “second”, “third”, “upper”, “lower”, and the like may be used herein to modify various elements. These modifiers do not imply a spatial, sequential, or hierarchical order to the modified elements unless specifically stated.
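The read-write locking strategy described above, in which multiple readers proceed concurrently and writers take exclusive access, can be sketched with standard java.util.concurrent primitives (a minimal illustration, not the actual implementation):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of read-write locking: many readers hold the lock simultaneously;
// a writer waits for exclusive access.
public class RwLockedTable {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final Map<Integer, String> records = new HashMap<>();

    String read(int rowid) {
        lock.readLock().lock();          // shared: does not block other readers
        try { return records.get(rowid); }
        finally { lock.readLock().unlock(); }
    }

    void write(int rowid, String value) {
        lock.writeLock().lock();         // exclusive: blocks readers and writers
        try { records.put(rowid, value); }
        finally { lock.writeLock().unlock(); }
    }
}
```

Because reads take only the shared lock, concurrent queries never block one another, which is the throughput advantage over mutual exclusion locking claimed above.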
- While the present disclosure has been described with reference to one or more exemplary embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the present disclosure. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the disclosure without departing from the scope thereof. Therefore, it is intended that the present disclosure not be limited to the particular embodiment(s) disclosed as the best mode contemplated, but that the disclosure will include all embodiments falling within the scope of the appended claims.
Claims (8)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/461,401 US20120215756A1 (en) | 2006-12-01 | 2012-05-01 | Gateways having localized in memory databases and business logic execution |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/566,073 US8181187B2 (en) | 2006-12-01 | 2006-12-01 | Gateways having localized in-memory databases and business logic execution |
US13/461,401 US20120215756A1 (en) | 2006-12-01 | 2012-05-01 | Gateways having localized in memory databases and business logic execution |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/566,073 Continuation US8181187B2 (en) | 2006-12-01 | 2006-12-01 | Gateways having localized in-memory databases and business logic execution |
Publications (1)
Publication Number | Publication Date |
---|---|
US20120215756A1 true US20120215756A1 (en) | 2012-08-23 |
Family
ID=39477059
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/566,073 Active 2030-12-13 US8181187B2 (en) | 2006-12-01 | 2006-12-01 | Gateways having localized in-memory databases and business logic execution |
US13/461,401 Abandoned US20120215756A1 (en) | 2006-12-01 | 2012-05-01 | Gateways having localized in memory databases and business logic execution |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/566,073 Active 2030-12-13 US8181187B2 (en) | 2006-12-01 | 2006-12-01 | Gateways having localized in-memory databases and business logic execution |
Country Status (1)
Country | Link |
---|---|
US (2) | US8181187B2 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120303901A1 (en) * | 2011-05-28 | 2012-11-29 | Qiming Chen | Distributed caching and analysis system and method |
CN104252509A (en) * | 2013-11-05 | 2014-12-31 | 深圳市华傲数据技术有限公司 | Expression execution method and expression execution device |
CN106897458A (en) * | 2017-03-10 | 2017-06-27 | 广州白云电器设备股份有限公司 | A kind of storage and search method towards electromechanical equipment data |
CN111915095B (en) * | 2020-08-12 | 2022-08-05 | 华侨大学 | Passenger transport line recommendation method, device and equipment based on ternary tree search |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20010056422A1 (en) * | 2000-02-16 | 2001-12-27 | Benedict Charles C. | Database access system |
US20030149683A1 (en) * | 1998-11-19 | 2003-08-07 | Lee Terry Seto | Method and apparatus for obtaining an identifier for a logical unit of data in a database |
US20040267731A1 (en) * | 2003-04-25 | 2004-12-30 | Gino Monier Louis Marcel | Method and system to facilitate building and using a search database |
US20060148572A1 (en) * | 2004-12-17 | 2006-07-06 | Lee Hun J | Database cache system |
Family Cites Families (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5557514A (en) * | 1994-06-23 | 1996-09-17 | Medicode, Inc. | Method and system for generating statistically-based medical provider utilization profiles |
US6256637B1 (en) * | 1998-05-05 | 2001-07-03 | Gemstone Systems, Inc. | Transactional virtual machine architecture |
US6457021B1 (en) * | 1998-08-18 | 2002-09-24 | Microsoft Corporation | In-memory database system |
US6178519B1 (en) * | 1998-12-10 | 2001-01-23 | Mci Worldcom, Inc. | Cluster-wide database system |
US6350219B1 (en) * | 1999-07-01 | 2002-02-26 | Pendulum Fitness, Inc. | Variable resistance exercise machine |
US7099898B1 (en) * | 1999-08-12 | 2006-08-29 | International Business Machines Corporation | Data access system |
US6957212B2 (en) * | 2001-04-24 | 2005-10-18 | Innopath Software, Inc. | Apparatus and methods for intelligently caching applications and data on a gateway |
US6687702B2 (en) * | 2001-06-15 | 2004-02-03 | Sybass, Inc. | Methodology providing high-speed shared memory access between database middle tier and database server |
US20030135495A1 (en) * | 2001-06-21 | 2003-07-17 | Isc, Inc. | Database indexing method and apparatus |
MXPA04004201A (en) * | 2001-11-01 | 2005-01-25 | Verisign Inc | Method and system for updating a remote database. |
AU2003276815A1 (en) * | 2002-06-13 | 2003-12-31 | Cerisent Corporation | Xml-db transactional update system |
EP1552426A4 (en) * | 2002-06-13 | 2009-01-21 | Mark Logic Corp | A subtree-structured xml database |
EP1649390B1 (en) * | 2003-07-07 | 2014-08-20 | IBM International Group BV | Optimized sql code generation |
US7409379B1 (en) * | 2003-07-28 | 2008-08-05 | Sprint Communications Company L.P. | Application cache management |
US7243088B2 (en) * | 2003-08-06 | 2007-07-10 | Oracle International Corporation | Database management system with efficient version control |
US20050188055A1 (en) * | 2003-12-31 | 2005-08-25 | Saletore Vikram A. | Distributed and dynamic content replication for server cluster acceleration |
US7424467B2 (en) * | 2004-01-26 | 2008-09-09 | International Business Machines Corporation | Architecture for an indexer with fixed width sort and variable width sort |
US20050198062A1 (en) * | 2004-03-05 | 2005-09-08 | Shapiro Richard B. | Method and apparatus for accelerating data access operations in a database system |
US8880502B2 (en) * | 2004-03-15 | 2014-11-04 | International Business Machines Corporation | Searching a range in a set of values in a network with distributed storage entities |
US7493305B2 (en) * | 2004-04-09 | 2009-02-17 | Oracle International Corporation | Efficient queriability and manageability of an XML index with path subsetting |
US7516121B2 (en) * | 2004-06-23 | 2009-04-07 | Oracle International Corporation | Efficient evaluation of queries using translation |
US20050289175A1 (en) * | 2004-06-23 | 2005-12-29 | Oracle International Corporation | Providing XML node identity based operations in a value based SQL system |
US20050289450A1 (en) * | 2004-06-23 | 2005-12-29 | Microsoft Corporation | User interface virtualization |
US20050289186A1 (en) * | 2004-06-29 | 2005-12-29 | Microsoft Corporation | DDL replication without user intervention |
US7395258B2 (en) * | 2004-07-30 | 2008-07-01 | International Business Machines Corporation | System and method for adaptive database caching |
US7739244B2 (en) * | 2004-10-14 | 2010-06-15 | Oracle International Corporation | Operating logging for online recovery in shared memory information systems |
US8212832B2 (en) * | 2005-12-08 | 2012-07-03 | Ati Technologies Ulc | Method and apparatus with dynamic graphics surface memory allocation |
- 2006-12-01: US application Ser. No. 11/566,073 filed (issued as US8181187B2; status Active)
- 2012-05-01: US application Ser. No. 13/461,401 filed (published as US20120215756A1; status Abandoned)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030149683A1 (en) * | 1998-11-19 | 2003-08-07 | Lee Terry Seto | Method and apparatus for obtaining an identifier for a logical unit of data in a database |
US20010056422A1 (en) * | 2000-02-16 | 2001-12-27 | Benedict Charles C. | Database access system |
US20040267731A1 (en) * | 2003-04-25 | 2004-12-30 | Gino Monier Louis Marcel | Method and system to facilitate building and using a search database |
US20060148572A1 (en) * | 2004-12-17 | 2006-07-06 | Lee Hun J | Database cache system |
Also Published As
Publication number | Publication date |
---|---|
US20080133537A1 (en) | 2008-06-05 |
US8181187B2 (en) | 2012-05-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11263211B2 (en) | | Data partitioning and ordering |
Pokorny | | NoSQL databases: a step to database scalability in web environment |
US6240422B1 (en) | | Object to relational database mapping infrastructure in a customer care and billing system |
US7779386B2 (en) | | Method and system to automatically regenerate software code |
US7599948B2 (en) | | Object relational mapping layer |
US8983919B2 (en) | | Systems and methods for improving database performance |
US8204875B2 (en) | | Support for user defined aggregations in a data stream management system |
US8244658B2 (en) | | System, method and computer program product for generating a set of instructions to an on-demand database service |
JP4814628B2 (en) | | Data access layer class generator |
US6321235B1 (en) | | Global caching and sharing of SQL statements in a heterogeneous application environment |
US7818346B2 (en) | | Database heap management system with variable page size and fixed instruction set address resolution |
US20090106440A1 (en) | | Support for incrementally processing user defined aggregations in a data stream management system |
US9171036B2 (en) | | Batching heterogeneous database commands |
US20050234882A1 (en) | | Data structure for a hardware database management system |
US20120215756A1 (en) | | Gateways having localized in memory databases and business logic execution |
US10671411B2 (en) | | Cloning for object-oriented environment |
US20080114735A1 (en) | | Systems and methods for managing information |
US20070255750A1 (en) | | System to disclose the internal structure of persistent database objects |
US9424296B2 (en) | | Indexing of database queries |
US8433720B2 (en) | | Enabling an application to interact with an LDAP directory as though the LDAP directory were a database object |
Binder et al. | | Multiversion concurrency control for the generalized search tree |
Liao et al. | | Research of CouchDB Storage Plugin for Big Data Query Engine Apache Drill |
US20080098345A1 (en) | | Accessing extensible markup language documents |
Gahm | | ABAP performance tuning |
US20090077006A1 (en) | | Computer-implemented database systems and related methods |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment | Owner name: PORTICO SYSTEMS, PENNSYLVANIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:FRASER, SCOTT EDWARD, MR.;MUPPALLA, SURESH VENKATA, MR.;REEL/FRAME:028137/0821. Effective date: 20070315 |
AS | Assignment | Owner name: MCKESSON HEALTH SOLUTIONS LLC, CALIFORNIA. Free format text: MERGER;ASSIGNOR:PORTICO SYSTEMS OF DELAWARE, INC.;REEL/FRAME:028901/0231. Effective date: 20120330 |
AS | Assignment | Owner name: MCKESSON TECHNOLOGIES INC., CALIFORNIA. Free format text: MERGER;ASSIGNOR:MCKESSON HEALTH SOLUTIONS LLC;REEL/FRAME:032635/0172. Effective date: 20131220 |
STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |