US9436710B2 - Systems, methods, and media for managing an in-memory NoSQL database - Google Patents

Systems, methods, and media for managing an in-memory NoSQL database

Info

Publication number
US9436710B2
Authority
US
United States
Prior art keywords
key
server
keys
group
command
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US13/853,030
Other versions
US20130226971A1 (en)
Inventor
Yiftach Shoolman
Ofer Bengal
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Redis Ltd
Original Assignee
REDIS LABS Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US13/247,371 external-priority patent/US8972488B2/en
Priority to US13/853,030 priority Critical patent/US9436710B2/en
Application filed by REDIS LABS Ltd filed Critical REDIS LABS Ltd
Assigned to GARANTIA DATA, LTD. reassignment GARANTIA DATA, LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BENGAL, OFER, SHOOLMAN, YIFTACH
Publication of US20130226971A1 publication Critical patent/US20130226971A1/en
Assigned to REDIS LABS LTD. reassignment REDIS LABS LTD. CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: GARANTIA DATA LTD
Assigned to SILICON VALLEY BANK reassignment SILICON VALLEY BANK SECURITY AGREEMENT Assignors: REDIS LABS LTD
Priority to US15/255,680 priority patent/US9984106B2/en
Application granted granted Critical
Publication of US9436710B2 publication Critical patent/US9436710B2/en
Assigned to SILICON VALLEY BANK reassignment SILICON VALLEY BANK FIRST AMENDMENT TO INTELLECTUAL PROPERTY SECURITY AGREEMENT Assignors: REDIS LABS LTD.
Priority to US15/988,880 priority patent/US10635649B2/en
Assigned to SILICON VALLEY BANK reassignment SILICON VALLEY BANK SECOND AMENDMENT TO INTELLECTUAL PROPERTY SECURITY AGREEMENT Assignors: REDIS LABS LTD
Assigned to REDIS LTD reassignment REDIS LTD CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: REDIS LABS LTD
Assigned to SILICON VALLEY BANK reassignment SILICON VALLEY BANK SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: REDIS LTD
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • G06F17/30289
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22Indexing; Data structures therefor; Storage structures
    • G06F16/2228Indexing structures
    • G06F16/2255Hash tables
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/21Design, administration or maintenance of databases
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22Indexing; Data structures therefor; Storage structures
    • G06F16/2228Indexing structures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/28Databases characterised by their database models, e.g. relational or object models
    • G06F17/30321
    • G06F17/30575
    • G06F17/30587
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/30Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
    • H03M7/3084Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction using adaptive string matching, e.g. the Lempel-Ziv method
    • H03M7/3088Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction using adaptive string matching, e.g. the Lempel-Ziv method employing the use of a dictionary, e.g. LZ78
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/30Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
    • H03M7/70Type of the data to be coded, other than image and sound
    • H03M7/707Structured documents, e.g. XML

Definitions

  • NoSQL (Not Only SQL) database management systems are also referred to as non-relational databases or unstructured databases.
  • NoSQL is a broad class of database management systems that can differ from classic RDBMS in some significant ways: (1) there are no inherent relations between stored objects; (2) the data stores may not require fixed table schemas; and (3) NoSQL avoids join operations and typically scales horizontally.
  • In-memory non-relational databases are a subset of NoSQL databases, and are designed so that all (or a major part) of the user's dataset is stored in RAM.
  • These include NoSQL databases that are entirely served from RAM.
  • Benchmarks have shown that in-memory NoSQL databases can be up to 100 times faster than HDD-based NoSQL databases and 10-20 times faster than SSD-based NoSQL databases.
  • However, the performance of in-memory NoSQL databases can suffer when the databases are implemented in a distributed fashion, where database data structures are spread over multiple nodes.
  • This is particularly true for complex multi-key commands, such as UNION, INTERSECT, and SORT, which can require object migration between nodes and operation blocking.
  • the delays resulting from object migration and operation blocking may be especially harmful to the performance of in-memory NoSQL databases, such as Redis, that are expected to execute commands at sub-millisecond latency.
  • In some embodiments, systems for managing an in-memory NoSQL database are provided, the systems comprising a hardware processor that is configured to: identify a group of keys capable of being included in a complex multi-key database command, the group including a first key and a second key, the first key being associated with a first value and the second key being associated with a second value, the first value and the second value being stored on different servers that are part of the database; and cause the first value and the second value to be both stored on a given server.
  • In some embodiments, methods for managing an in-memory NoSQL database are provided, the methods comprising: identifying, by a hardware processor, a group of keys capable of being included in a complex multi-key database command, the group including a first key and a second key, the first key being associated with a first value and the second key being associated with a second value, the first value and the second value being stored on different servers that are part of the database; and causing the first value and the second value to be both stored on a given server.
  • In some embodiments, non-transitory computer-readable media are provided containing computer-executable instructions that, when executed by a processor, cause the processor to perform a method for managing an in-memory NoSQL database, the method comprising: identifying a group of keys capable of being included in a complex multi-key database command, the group including a first key and a second key, the first key being associated with a first value and the second key being associated with a second value, the first value and the second value being stored on different servers that are part of the database; and causing the first value and the second value to be both stored on a given server.
  • FIG. 1 is a diagram of an example of a cluster architecture in accordance with some embodiments.
  • FIG. 2 is a diagram of an example of a node architecture in accordance with some embodiments.
  • FIG. 3 is a diagram of an example of a process for managing a database in accordance with some embodiments.
  • FIG. 4A is a diagram of an example of a process for identifying a group of keys in accordance with some embodiments.
  • FIG. 4B is a diagram of another example of a process for identifying a group of keys in accordance with some embodiments.
  • FIG. 4C is a diagram of yet another example of a process for identifying a group of keys in accordance with some embodiments.
  • FIG. 5A is a diagram of an example of a process for executing database commands in accordance with some embodiments.
  • FIG. 5B is a diagram of another example of a process for executing database commands in accordance with some embodiments.
  • FIG. 5C is a diagram of yet another example of a process for executing database commands in accordance with some embodiments.
  • FIG. 6 is a diagram of an example of a connection between a data management controller and servers and other data management controllers in accordance with some embodiments.
  • an in-memory non-relational database can be implemented as a system 100 as illustrated in FIG. 1 .
  • system 100 can include one or more clusters 102 , one or more applications 104 , one or more service managers 106 , and one or more domain name servers 108 .
  • Clusters 102 can be one or more clusters of nodes 110 for providing in-memory data storage as well as related functionality as described further below.
  • Applications 104 can be one or more applications that use data and related information stored in nodes 102 .
  • Service manager (SM) 106 can be an entity that manages user interaction with the system and maps user configuration to cluster resources.
  • the SM can be designed to support single- and multi-tenancy modes of operation, and can allow the system to be supplied as a service.
  • the SM can also communicate with a cluster node manager of a node (described below) for aggregating service statistics and alarms. When deployed as a service, the SM can also be responsible for usage reports and billing.
  • DNS 108 can provide a mechanism for domain name server functions as known in the art.
  • node 200 can include a data management controller (DMC) 202 , a cluster node manager (CNM) 204 , zero or more servers 206 , and a common cluster storage 208 .
  • server(s) 206 can be used to provide in-memory non-relational database functionality.
  • the server(s) can be based on the open-source Redis server with enhanced functionality.
  • the server(s) can represent one of the following options: (1) a Memcached Bucket; (2) a partition of a Memcached Bucket; (3) a Redis database (DB); or (4) a partition of a Redis DB.
  • node 200 may be implemented using physical hardware, such as a computer and/or a blade server. Additionally or alternatively, in some embodiments, node 200 may be implemented as a virtual server in a virtual environment, such as a computer cloud. In some embodiments, DMC 202 and CNM 204 may be implemented as one or more processes that are executed on node 200. Additionally or alternatively, in some embodiments, DMC 202 and CNM 204 may be implemented as one or more processes that are executed on a device other than a device on which other components of node 200 are executed. Any suitable system topology may be used, in some embodiments.
  • each of servers 206 may be implemented as one or more software processes that are executed on node 200 .
  • any of servers 206 may be implemented in 32-bit computer software instead of 64-bit software. Because the size of a word in 32-bit software is half the size of a 64-bit word, using 32-bit computer software may decrease the memory overhead resulting from the use of pointers (which are one word in size).
  • servers 206 may be executed on 64-bit hardware (e.g., executed by a 64-bit hardware processor).
  • although servers 206 are depicted as processes executed on the same node, in some embodiments, one or more of servers 206 may be executed on one or more devices different from those executing the rest of servers 206.
  • complex data structures may include large numbers of pointers for each value stored in them in order to permit commands, such as search, intersect, or union, to be executed.
  • the large numbers of pointers may result in a significant memory overhead.
  • elements in a skip list 500, when skip list 500 is implemented using 64-bit software, may have 4-byte values, with each 4-byte value being associated with 3 pointers (24 bytes in total).
  • the overhead costs introduced by the management of pointers may reach 87.5% (including memory alignment overhead).
  • this overhead may be reduced by implementing one or more of servers 206 in 32-bit software.
  • servers implemented in 32-bit software may reduce the memory overhead by half without causing any significant loss of performance and/or functionality, as compared to servers implemented in 64-bit software.
  • in 32-bit software, pointers can be only 4 bytes in size, rather than 8 bytes as is the case with 64-bit architectures. This can represent a 50% savings in memory overhead for every stored object.
  • servers running on 32-bit software architectures may run out of memory faster than those servers that are implemented in 64-bit software.
  • the memory address space of 32-bit servers may include approximately 4 billion (2^32) different addresses, whereas the memory space available in 64-bit architectures may be virtually unlimited (e.g., it may include about 1.8×10^19 addresses).
  • 32-bit servers may be much more likely to run out of available memory space than their 64-bit counterparts when deployed in a data-intensive setting. Accordingly, as is discussed further below, techniques for rebalancing the data that is stored on servers may be deployed to prevent the servers from running out of available memory addresses when the servers are implemented using low-bit (e.g., 32-bit) software.
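  • For illustration only, the following sketch (not part of the patent) reproduces the overhead arithmetic above: a 4-byte value carrying three pointers occupies 32 bytes after 8-byte alignment under 64-bit pointers, so the non-value overhead reaches 87.5%, while 4-byte pointers halve the pointer bytes.

```python
# Illustrative arithmetic only (not from the patent): per-element memory
# overhead of a skip-list element holding a 4-byte value and three pointers,
# under 64-bit vs 32-bit pointer sizes.
def overhead_ratio(value_bytes, num_pointers, pointer_bytes, alignment):
    raw = value_bytes + num_pointers * pointer_bytes
    padded = -(-raw // alignment) * alignment   # round up to the alignment boundary
    return (padded - value_bytes) / padded

print(f"64-bit pointers: {overhead_ratio(4, 3, 8, 8):.1%}")   # 87.5%, as cited above
print(f"32-bit pointers: {overhead_ratio(4, 3, 4, 4):.1%}")   # pointer bytes cut in half
```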
  • DMC 202 and/or CNM 204 may be used to implement a database (or bucket) that is hosted on a plurality of servers (e.g., servers 206 ). Any suitable type of servers may be used to implement the database.
  • the database may include low-bit servers only (e.g., servers that are implemented in 32-bit software or otherwise have a smaller address space than high-bit servers), high-bit servers only (e.g., servers having a larger address space than the low-bit servers, such as 64-bit servers), or any suitable combination of low-bit and high-bit servers (e.g., a combination of 32-bit servers and 64-bit servers executed on 64-bit processors).
  • DMC 202 and/or CNM 204 may implement an intermediate layer that is used to organize the database into partitions.
  • the intermediate layer may be situated between applications that use the database and the plurality of servers.
  • the intermediate layer may be transparent to both client applications that use the database and to the servers that host the database.
  • Each partition may be a logical unit that includes a plurality of database data values.
  • the number of partitions in a database may be fixed throughout the life of the database, in some embodiments.
  • the size of the partitions may be variable, in some embodiments. Whether a data structure belongs to a given partition may depend on whether a key associated with the database data structure is mapped (e.g., by a hashing function and/or a mapping structure) to an ID of the given partition.
  • Servers that host the database may store multiple partitions or only one partition. When the address space capacity of servers comes close to being reached, new servers may be instantiated, and partitions stored on the nearly full servers may be relocated to the new servers. The implementation and use of partitions is discussed in further detail in U.S. patent application Ser.
  • data that is stored in the database may be organized in key-value pairs.
  • each key-value pair may be a database object.
  • the key of each pair may be any symbol, letter, digit, number, string, and/or any other suitable information representation that is usable to retrieve the same pair's data structure from the database.
  • the value of each pair may be a digit, a string, a number, a hash, a linked list, a data structure, and/or any other suitable information representation. Any suitable implementation of database data objects may be used in some embodiments.
  • a database data structure (or a value that is part of the data structure) may be considered to be associated with a key, if that key is usable to retrieve the data structure's value from the database. Additionally or alternatively, in some embodiments, a database data structure may be considered to be associated with a key, if that key references the data structure.
  • DMC 202 may place database values that tend to be requested by the same complex multi-key commands on the same server of the database.
  • a complex multi-key operation may be an operation that requires the database to combine and/or compare the values associated with each of the keys specified by the complex multi-key operation. Examples of complex multi-key operations include the UNION and INTERSECT commands. As noted above, complex multi-key operations may have increased latency (as compared to other operations) because they may require data to be accessed on multiple servers and provided to the server that ultimately executes the complex multi-key operation. Thus, by grouping database values that tend to be referenced together in complex multi-key operations, a decrease in the latency of these operations may be achieved.
  • FIG. 3 is a diagram of an example of a process 300 for managing a database in accordance with some embodiments of the disclosed subject matter.
  • process 300 may be executed by DMC 202 and/or by any other suitable component of the database and/or of node 200.
  • At step 310, database values that are likely to be referenced together by complex multi-key commands are identified and stored on the same server. More particularly, in some embodiments, a group of two or more keys is identified and database values associated with the keys in the group are caused to be stored on the same server. Examples of processes for identifying the group of keys are provided with respect to FIGS. 4A-4C.
  • a first mapping structure that relates keys to group IDs (i.e., IDs of groups of keys) is configured.
  • Configuring the first mapping structure may include one or more of: instantiating the first mapping structure, updating the first mapping structure, and/or performing any other suitable action.
  • Any of the group IDs may include a number, a string, an alphanumerical string, and/or any other suitable type of identifiers.
  • the first mapping structure may include a table that relates a set of keys to the ID of the group that the keys in the set are considered to be part of.
  • the first mapping structure may include a regular expression that defines a group of keys.
  • the first mapping structure may include data (e.g., a table), executable code, a hash function, and/or any suitable type of function and/or information that is usable to identify the ID of a group that a given key is part of.
  • the first mapping structure may be configured to relate the keys in the group identified at step 310 to an ID of the group.
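  • As a hypothetical sketch of such a first mapping structure (all names are illustrative, not from the patent), a key can be resolved to a group ID either through an explicit key table or through a regular-expression group definition like the “user#” example discussed below:

```python
import re

# Hypothetical sketch (illustrative names): a first mapping structure that
# resolves a key to a group ID via an explicit table or regex group definitions.
class KeyToGroupMap:
    def __init__(self):
        self.key_table = {}    # explicit key -> group ID entries
        self.patterns = []     # (compiled regex, group ID) pairs

    def add_key(self, key, group_id):
        self.key_table[key] = group_id

    def add_pattern(self, pattern, group_id):
        self.patterns.append((re.compile(pattern), group_id))

    def group_of(self, key):
        if key in self.key_table:
            return self.key_table[key]
        for regex, group_id in self.patterns:
            if regex.match(key):
                return group_id
        return None

groups = KeyToGroupMap()
groups.add_pattern(r"user#[0-9]+", "grp-users")   # mirrors the "user#" example
print(groups.group_of("user#42"))                 # -> grp-users
```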
  • a second mapping structure is configured that relates group IDs to server IDs.
  • the second mapping structure may include data (e.g., a table), executable code, a hash function, and/or any suitable type of function and/or information that is capable of being used to identify a server ID that is associated with a given group ID. Any suitable implementation of the second mapping structure may be used in some embodiments.
  • the second mapping structure may be implemented as a table that relates group IDs to server IDs. Configuring the second mapping structure may include one or more of: instantiating the mapping structure, updating the mapping structure, and/or performing any other suitable action.
  • server IDs may include a number, a string, an alphanumerical string, and/or any other suitable type of identifiers.
  • the second mapping structure may be configured to relate the ID of the group of keys that is identified at step 310 to the server where values associated with keys from the group are relocated.
  • the second mapping structure may be configured to relate a group ID to a partition ID.
  • the second mapping structure may include data (e.g., a table), executable code, a hash function, and/or any suitable type of function and/or information that is capable of being used to identify a partition ID that is associated with a given group ID. Any suitable implementation of the second mapping structure may be used in some embodiments.
  • the second mapping structure may be implemented as a table that relates group IDs to partition IDs. Configuring the second mapping structure may include one or more of: instantiating the mapping structure, updating the mapping structure, and/or performing any other suitable action.
  • any of the partition IDs may include a number, a string, an alphanumerical string, and/or any other suitable type of identifiers.
  • the second mapping structure may identify a partition ID that is associated with that group of keys.
  • the second mapping structure may include a hashing function that maps group IDs to partition IDs.
  • the hashing function may be one that maps a space of Group IDs to a space of partition IDs.
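  • A hypothetical sketch of a second mapping structure along these lines (identifiers are illustrative): the group ID is hashed into a fixed partition space and the partition is then looked up in a partition-to-server table:

```python
import hashlib

# Hypothetical sketch (illustrative names): a second mapping structure that
# hashes group IDs into a fixed partition space and then maps partitions to
# servers through a table. The 4096-partition figure echoes the example given
# later in the text.
NUM_PARTITIONS = 4096

def partition_of(group_id: str) -> int:
    digest = hashlib.sha1(group_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_PARTITIONS

partition_to_server = {partition_of("grp-users"): "server-7"}   # illustrative entry

def server_of(group_id: str) -> str:
    return partition_to_server[partition_of(group_id)]

print(server_of("grp-users"))   # -> server-7
```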
  • a third mapping structure is configured that relates server IDs to node IDs.
  • the third mapping structure may include data (e.g., a table), executable code, and/or any suitable type of information that is capable of being used to identify a node ID that is associated with a given server ID.
  • the third mapping structure may be implemented as a table that relates server IDs to node IDs. Any suitable implementation of the third mapping structure may be used in some embodiments.
  • Configuring the third mapping structure may include one or more of: instantiating the mapping structure, updating the mapping structure, and/or performing any other suitable action.
  • Any of the server IDs may include a number, a string, an alphanumerical string, and/or any other suitable type of identifiers.
  • the third mapping structure may identify a node (e.g., physical hardware, such as a computer and/or a blade server, and/or a virtual server in a virtual environment, such as a computer cloud) on which the server is executing.
  • At step 350, one or more database commands are executed. Examples of processes for executing commands are provided in relation to FIGS. 5A-5C. Further examples of processes for executing commands are also provided in the '011 application.
  • the group of keys that is identified at step 310 is updated. Updating the group of keys may include adding one or more keys to the group and/or removing one or more keys from the group. Adding a key to the group of keys may include configuring the first mapping structure to associate the key with the ID of the group. Removing a key from the group may include modifying the first mapping structure to remove any associations between the key being removed and the ID of the group. Additionally or alternatively, in some embodiments, updating the group of keys may include deleting, from the first mapping structure, the ID of the group as well as any associations between the ID and keys in the group, and identifying a new group of keys in the manner discussed with respect to step 310. In some embodiments, the group of keys may be updated at predetermined time intervals (e.g., every 10 minutes) and/or after the execution of a predetermined number of commands (e.g., 10 commands).
  • FIG. 4A depicts a diagram of an example of a process 400 A for identifying a group of keys in accordance with some embodiments of the disclosed subject matter.
  • process 400 A may be used to identify a group of keys as specified by step 310 of process 300 .
  • Process 400 A may be used additionally or alternatively to processes 400 B and 400 C (described herein) in some embodiments.
  • a definition for a group of keys is obtained.
  • the definition may be received as user input via a local input device, such as a keyboard, or over a communications network.
  • the group definition may specify a symbol pattern. Keys associated with values in the database that match the specified pattern may be considered to belong to the group.
  • the group definition may include a regular expression and/or any other suitable type of expression. For example, a group of keys defined as “^user#[0-9]*” may include all keys that start with the string “user#” followed by at least one digit.
  • At 420 A, a server is identified. Any suitable method for identifying the server may be used, in some embodiments.
  • the server may be identified based on user input, based on a hash function of a group identifier, based on a predetermined rule, or based on any other suitable function, information, and/or criterion.
  • the identified server may be one in which a value associated with one of the keys in the group is already stored.
  • At 430 A, at least some values stored in the database that are associated with different keys in the group are relocated to the identified server. Relocating the values may include retrieving each value from its current location, storing the value on the identified server, and deleting the value from its current location, thus leaving a copy of the value on the identified server only.
  • the value retrieval and storage may be performed in accordance with the techniques discussed in the '011 application. As discussed above, storing values that are associated with keys from the same group on the same server may increase the speed at which complex multi-key commands are executed by the database.
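  • A minimal sketch of this pattern-based relocation, assuming the servers are represented as plain dictionaries (purely illustrative, not the patented implementation):

```python
import re

# Hypothetical sketch (dict-based stand-ins, illustrative names): every key that
# matches the group definition is relocated to one chosen server, leaving a
# single copy of each value on that server.
servers = {
    "server-1": {"user#1": "v1", "session#9": "s9"},
    "server-2": {"user#2": "v2"},
}

def relocate_group(pattern: str, target: str) -> None:
    regex = re.compile(pattern)
    for name, store in servers.items():
        if name == target:
            continue
        for key in [k for k in store if regex.match(k)]:
            servers[target][key] = store.pop(key)   # store on target, delete the original

relocate_group(r"user#[0-9]+", "server-1")
print(servers)   # user#2 now lives on server-1 next to user#1
```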
  • FIG. 4B depicts a diagram of an example of a process 400 B for identifying a group of keys in accordance with some embodiments of the disclosed subject matter. In some embodiments, process 400 B may be used to identify a group of keys as specified by step 310 of process 300. Process 400 B may be used additionally or alternatively to processes 400 A and 400 C (described herein) in some embodiments.
  • a command is received that includes at least a first key and a second key.
  • Any suitable type of command may be received, in some embodiments.
  • the command may be a UNION command, an INTERSECT command, or a SORT command.
  • process 400 B proceeds to 440 B. Otherwise, process 400 B terminates.
  • adding the second key may include modifying the first mapping structure to relate the second key to the ID of the first group of keys.
  • a first server that is hosting a first value that is associated with the first key is identified.
  • the first server may be identified by using techniques discussed in the '011 application.
  • a second server that is hosting a second value that is associated with the second key is identified.
  • the second server may also be identified by using techniques discussed in the '011 application.
  • the second value is relocated so that the first value and the second value are stored on the same server.
  • relocating the second database value may include copying the second database value to the server identified at 460 B.
  • the relocation may be performed in the manner discussed with respect to step 310 .
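  • A hypothetical sketch of process 400 B using dictionary stand-ins for the mapping structures and servers (names are illustrative): the second key is folded into the first key's group and its value is moved next to the first value:

```python
# Hypothetical sketch (dict stand-ins, illustrative names): after a complex
# multi-key command touching keys on different servers, the second key joins
# the first key's group and its value is relocated next to the first value.
def handle_multi_key(first, second, key_to_group, key_to_server, servers):
    key_to_group[second] = key_to_group[first]       # update the first mapping structure
    src, dst = key_to_server[second], key_to_server[first]
    if src != dst:                                   # relocate the second value
        servers[dst][second] = servers[src].pop(second)
        key_to_server[second] = dst

servers = {"s1": {"a": {1, 2}}, "s2": {"b": {2, 3}}}
key_to_group = {"a": "g1", "b": "g2"}
key_to_server = {"a": "s1", "b": "s2"}
handle_multi_key("a", "b", key_to_group, key_to_server, servers)   # e.g. after an INTERSECT on a, b
print(servers)   # both values now live on s1
```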
  • FIG. 4C depicts a diagram of an example of a process 400 C for identifying a group of keys in accordance with some embodiments of the disclosed subject matter.
  • process 400 C may be used to identify a group of keys as specified by step 310 of process 300.
  • Process 400 C may be used additionally or alternatively to processes 400 A and 400 B (disclosed herein) in some embodiments.
  • A plurality of key pairs is identified, where each pair may include a first key and a second key.
  • Each first key and each second key may be associated with a different value that is stored in the database.
  • a ranking score is assigned to each of the pairs.
  • the ranking score of each pair may be based on a count of the commands executed by the database that include both the pair's first key and the pair's second key.
  • the count may be based on the instructions executed during a predetermined period of time (e.g., in the last 10 minutes), on the number of instructions that include both keys in the set of N most recently executed instructions, or on any other suitable criterion.
  • the greater the count, the higher the ranking score.
  • Any suitable type of instructions may be used to keep the count, in some embodiments.
  • the count may be based on INTERSECT instructions, UNION instructions, and/or any suitable type of instructions. Additionally or alternatively, the count may be based only on the number of complex multi-key instructions that have been invoked during the period.
  • the pairs in the plurality are grouped based on their ranking scores to identify a group of keys.
  • the pairs may be grouped based on their ranking scores to determine a group of pairs. Once the group of pairs is determined, the keys from the pairs in the group may be included in the group of keys.
  • the keys may be grouped by identifying a subset of the plurality of pairs based on the pairs' ranking scores, and identifying a group of pairs from the subset, such that the first element in each pair of the group is also part of another pair in the group.
  • the subset of the pairs may be identified by selecting all pairs whose ranking score exceeds a predetermined threshold or by selecting a subset of the plurality of pairs that have the highest ranking scores out of all pairs in the plurality. Any suitable method for identifying the subset of pairs may be used in some embodiments.
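  • The ranking and grouping steps might look like the following sketch (the command history and threshold are invented for illustration): key pairs are counted across recent multi-key commands and pairs above a threshold are merged into groups:

```python
from collections import Counter
from itertools import combinations

# Hypothetical sketch (invented command history and threshold): count how often
# key pairs co-occur in recent multi-key commands, keep pairs above a threshold,
# and merge overlapping pairs into groups of keys.
recent_commands = [("INTERSECT", "a", "b"), ("UNION", "a", "b", "c"), ("INTERSECT", "a", "b")]

pair_counts = Counter()
for cmd in recent_commands:
    for pair in combinations(sorted(cmd[1:]), 2):
        pair_counts[pair] += 1

THRESHOLD = 2
groups = []
for (k1, k2), count in pair_counts.items():
    if count < THRESHOLD:
        continue
    for group in groups:                 # merge into an existing group if the keys overlap
        if k1 in group or k2 in group:
            group.update((k1, k2))
            break
    else:
        groups.append({k1, k2})

print(groups)   # [{'a', 'b'}] -- 'c' co-occurred with the others only once
```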
  • At 440 C, a server is identified.
  • the server may be identified in the manner discussed with respect to step 420 A.
  • At 450 C, for each key in the group, a value associated with that key is identified.
  • the value may be identified by using techniques discussed in the '011 application.
  • At 460 C at least some of the values identified at step 450 C are relocated to the server identified at 440 C.
  • the relocation may be performed in the manner discussed with respect to step 430 A.
  • FIG. 5A depicts a diagram of an example of a process 500 A for executing database commands in accordance with some embodiments of the disclosed subject matter.
  • process 500 A may be used to execute one or more commands as specified by step 350 of process 300 .
  • Process 500 A may be used additionally or alternatively to processes 500 B and 500 C (described herein) in some embodiments.
  • a database command is received. Any suitable command may be received in some embodiments.
  • the command may be a SET command, a GET command, an INTERSECT command, a UNION command, and/or any other suitable type of command.
  • the command may be received from any application and/or device that is capable of using the database.
  • the ID of the group of keys that the first key and the second key are part of is determined.
  • the ID of the group of keys may be determined by using the first mapping structure.
  • the ID of the group of keys is mapped to a server ID in order to identify a server that is hosting values that are associated with keys from the group of keys.
  • the mapping may be performed based on the second mapping structure.
  • the ID of the group of keys may be mapped to a partition ID (e.g., by using the second mapping structure, such as a hash function). Afterwards, the partition ID may be used to identify a server ID. Each partition may be a logical unit that includes a plurality of database data values. Further discussion of the implementation and use of partitions is provided in the '011 application.
  • the mapping of the ID of the group of keys to the partition ID may be performed by using the second mapping structure.
  • the second mapping structure may include (or implement) a hashing function.
  • the server ID is mapped to a node ID in order to identify the node (e.g., a physical device, such as a computer, or a virtual machine) that is hosting the server.
  • the mapping may be performed based on the third mapping structure.
  • the command is forwarded to the server hosting the values associated with keys in the group of keys and subsequently executed by that server.
  • a response to the command may be received from the first server and forwarded to the application and/or device that transmitted the command.
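  • The following sketch (illustrative names and data only) chains the three mapping structures to route a complex multi-key command to the single server and node that hold the group's values, in the spirit of process 500 A:

```python
# Illustrative routing sketch (invented names and data): a key is resolved to a
# group, the group to a server, and the server to a node, and the command is
# forwarded to that single server.
key_to_group = {"user#42": "grp-users", "user#43": "grp-users"}   # first mapping structure
group_to_server = {"grp-users": "server-7"}                        # second mapping structure
server_to_node = {"server-7": "node-2"}                            # third mapping structure

def route(command: str, *keys: str) -> str:
    group_ids = {key_to_group[k] for k in keys}
    assert len(group_ids) == 1, "keys span multiple groups"
    server_id = group_to_server[group_ids.pop()]
    node_id = server_to_node[server_id]
    return f"forward '{command}' to {server_id} on {node_id}"

print(route("SINTERSTORE dest user#42 user#43", "user#42", "user#43"))
```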
  • FIG. 5B depicts a diagram of an example of a process 500 B for executing database commands in accordance with some embodiments of the disclosed subject matter.
  • process 500 B may be used to execute one or more commands as specified by step 350 of process 300 .
  • Process 500 B may be used additionally or alternatively to processes 500 A and 500 C (described herein) in some embodiments.
  • a command is received that includes a first key.
  • the command may be received from any application and/or device that is capable of using the database. Any suitable type of command may be received in some embodiments.
  • any suitable type of first key may be received, in some embodiments.
  • the command may be a SORT command, and the first key may be associated with a value that includes a set of references to other values (e.g., a set of second keys) that are to be sorted.
  • Executing the command by a database server may include retrieving each of the values referenced by the set of second keys, sorting the values, and providing an indication of the sorted values to the application and/or device that submitted the command.
  • a value that is associated with the first key is identified.
  • the value may be identified by using the techniques discussed in the '011 application.
  • the execution of commands other than the command received at 510 B is blocked.
  • the blocking of other commands may include refraining from executing any commands that arrive at the entity executing process 500 B (e.g., DMC 202).
  • for each second key, a value associated with that second key is identified.
  • Any suitable method for identifying values associated with second keys may be used in some embodiments.
  • copies of the identified values are obtained, and each of the obtained copies is stored on a predetermined server.
  • the blocking of the execution of commands other than the command received at 510 B is stopped, and the execution of the commands is allowed to proceed.
  • the instruction is forwarded to the predetermined server and subsequently executed by that server.
  • a response to the command may be received from the first server and forwarded to the application and/or device that transmitted the command.
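  • A rough sketch of the blocking behavior described for process 500 B, using a lock as a stand-in for the command-blocking mechanism (all structures are illustrative):

```python
import threading

# Rough sketch (dict stand-ins, illustrative names): while the values referenced
# by a SORT-style command are gathered onto one staging server, other commands
# are held back by a lock; the lock is released before the staged sort runs.
command_lock = threading.Lock()

def execute_sort(first_key, key_to_server, servers, staging_server):
    with command_lock:                               # block other commands
        refs = servers[key_to_server[first_key]][first_key]
        for ref in refs:                             # copy each referenced value
            servers[staging_server][ref] = servers[key_to_server[ref]][ref]
    # lock released: the staging server now holds every value it needs
    return sorted(servers[staging_server][ref] for ref in refs)

servers = {"s1": {"list#1": ["x", "y"]}, "s2": {"x": 3, "y": 1}, "staging": {}}
key_to_server = {"list#1": "s1", "x": "s2", "y": "s2"}
print(execute_sort("list#1", key_to_server, servers, "staging"))   # -> [1, 3]
```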
  • FIG. 5C depicts a diagram of an example of a process 500 C for executing database commands in accordance with some embodiments of the disclosed subject matter.
  • process 500 C may be used to execute one or more commands as specified by step 350 of process 300 .
  • Process 500 C may be used additionally or alternatively to processes 500 A and 500 B (described herein) in some embodiments.
  • a command is received indicating the start of a database transaction.
  • Any suitable command that indicates the start of a transaction may be received in some embodiments.
  • the command may be a MULTI command.
  • a second command that is part of the transaction is received.
  • Any suitable command that can be part of a transaction may be received in some embodiments.
  • the command may be an INTERSECT command, a UNION command, a SORT command, a GET command, or a SET command.
  • the command may include at least one key.
  • a server is identified based on the key.
  • the identified server may be one that is storing a database value that is associated with the key. Any suitable method for identifying the server may be used.
  • the key may be mapped to a group ID and the group ID may be mapped to a server ID (e.g., using a hash function).
  • the key may be mapped to a partition ID and the partition ID may be mapped to the server ID.
  • a command for initiating a transaction is transmitted to the identified server.
  • Any suitable command for initiating a transaction may be transmitted in some embodiments.
  • a MULTI command may be transmitted.
  • the command for initiating the transaction may be transmitted to the server only when such a command has not already been transmitted to that server during a previous iteration of steps 520 C- 560 C.
  • the second command is transmitted to the identified server.
  • the identified server may execute the second command and generate a response.
  • a command marking the end of a transaction is transmitted to all servers on which a transaction has been started at step 540 C during various iterations of steps 520 C- 540 C. Any suitable command for ending a transaction may be transmitted. For example, an EXEC command may be transmitted to each of the servers.
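  • A hypothetical sketch of this per-server transaction handling (the send callback and key routing are stand-ins): a transaction-opening command is sent to each server the first time the transaction touches it, and a closing command is broadcast to every opened server at the end:

```python
# Hypothetical sketch (the send callback and key routing are stand-ins): open a
# transaction on each server the first time a command in the transaction touches
# it, forward each command to its server, then broadcast the closing command.
def run_transaction(commands, server_for_key, send):
    started = set()
    for cmd, key in commands:            # each command carries at least one key
        server = server_for_key(key)
        if server not in started:        # open the transaction once per server
            send(server, "MULTI")
            started.add(server)
        send(server, cmd)                # forward the command itself
    for server in started:               # close every opened transaction
        send(server, "EXEC")

log = []
run_transaction(
    [("SET user#1 a", "user#1"), ("SET user#2 b", "user#2")],
    server_for_key=lambda k: "server-1" if k.endswith("1") else "server-2",
    send=lambda srv, msg: log.append((srv, msg)),
)
print(log)   # MULTI, forwarded commands, and EXEC per touched server
```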
  • DMC 202 can also be used to perform client authentication in some embodiments. Any suitable one or more client authentication mechanisms can be used. For example, Simple Authentication and Security Layer (SASL) authentication, password authentication, source IP authentication, Amazon Web Service Security Group, and/or any other suitable authentication mechanisms can be used in some embodiments.
  • a server can be identified by a port number that is appended to an IP address of a node (e.g., NodeIP:ServerPort).
  • DMC 202 can also be used to perform protocol translation.
  • DMC 202 can perform a translation from a protocol used by clients/applications requesting data to a protocol used by servers 206 .
  • a client/application can request data using Memcached ASCII, Memcached binary, Redis old protocol, Redis unified protocol, and/or any other suitable protocol.
  • Servers 206 can communicate using a Redis unified protocol modified to include extensions that allow the servers to support operations which are not natively supported by a Redis server (e.g., Memcached commands) without breaking the Redis unified protocol semantics.
  • DMC 202 can perform a translation between these protocols.
  • DMC 202 can perform translation between the Memcached protocol and a modified Redis unified protocol. In doing so, the following can be performed.
  • a client may issue a Memcached cas command using the following format: cas <key> <flags> <exptime> <bytes> <cas unique> [noreply]\r\n.
  • cas is a check and set operation, which means “store this data but only if no one else has updated since I last fetched it.”
  • This command can then be translated to a new command, RCAS, using the following format: RCAS <key> <flags> F(<exptime>) <cas unique> <bytes/body>.
  • the DMC can perform the following steps when translating a Memcached cas command received from the client to the new RCAS command. If a noreply argument is used by the client, the DMC does not forward this argument to the server. Instead, the DMC remembers that the initial request included this option, and therefore will not transfer the Redis reply to the client. If the Memcached expiration time (exptime) uses the Unix time format, the DMC can translate the Unix time to an offset in seconds from the current time. The DMC can then receive a response to the RCAS command as follows. A “+OK\r\n” response can be received, which indicates success. A “-NOT_STORED” response can be received, which indicates that the data was not stored, but not because of an error.
  • a “-EXIST” response can be received, which indicates that the item trying to be stored with the RCAS command has been modified since it was last fetched.
  • a “-NOT_FOUND\r\n” response can be received, which indicates that the item trying to be stored with an RCAS command did not exist or has been deleted.
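  • The cas-to-RCAS translation might be sketched as follows (the 30-day Unix-time convention and the exact field handling here are assumptions, not taken from the patent):

```python
import time

# Hypothetical sketch of the cas -> RCAS translation. F() is modeled here as a
# conversion of an absolute Unix expiration time into a relative offset; the
# 30-day cutoff follows the common Memcached convention and is an assumption.
UNIX_TIME_CUTOFF = 60 * 60 * 24 * 30   # values above ~30 days are absolute Unix times

def translate_cas(line: str):
    # cas <key> <flags> <exptime> <bytes> <cas unique> [noreply]
    _, key, flags, exptime, nbytes, cas_unique, *rest = line.strip().split()
    noreply = "noreply" in rest          # remembered locally; not forwarded to the server
    exp = int(exptime)
    if exp > UNIX_TIME_CUTOFF:           # absolute Unix time -> offset in seconds from now
        exp = max(0, exp - int(time.time()))
    return f"RCAS {key} {flags} {exp} {cas_unique} {nbytes}", noreply

print(translate_cas("cas user#1 0 1700000000 5 31337 noreply\r\n"))
```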
  • DMC 202 can also perform load balancing in some embodiments to balance loads among a master server 206 (which can respond to read and write requests for values) for a database (or bucket) partition and one or more slave servers 206 (which can respond to read requests for data objects) for the same partition.
  • for example, the DMC can balance loads for a partition that resides over multiple servers 206 (e.g., one master server and N (zero or more) slave servers).
  • DMC 202 can redirect requests to and responses from a node on which a data object is stored when a request for that data object is received at the wrong node. In some embodiments, this redirection of responses can occur via dedicated connections 604 to one or more other DMCs in other nodes as shown in FIG. 6 or one or more servers in other nodes.
  • DMC 202 can perform connection pooling in some embodiments. Any suitable technique for connection pooling can be used in some embodiments. For example, as shown in FIG. 6, in some embodiments, the DMC can maintain a number of connections 602 with each server 206, since each addition or removal of a connection from a server can degrade its throughput and increase its latency. In some embodiments, connections 602 between a DMC 202 and servers 206 can be long-lived, persistent connections that reduce the overhead associated with connection setup and teardown between client and server.
  • DMC 202 can additionally or alternatively use Unix domain socket connections between the DMC and the servers in some embodiments.
  • Unix domain socket connections can be used to eliminate the processing overhead associated with managing TCP connections between the DMC and the servers.
  • DMC 202 can additionally or alternatively perform request pipelining in some embodiments. Requests can be pipelined by the DMC by sending multiple requests to the same server over the same connection before responses to the earlier of those requests are received from the server.
  • the number of connections and the size of a pipeline between DMC 202 and each server 206 can additionally or alternatively be changed based on the average object size in the server. For example, in some embodiments, when the average object size in a server is less than 1 kB, the DMC may open fewer (or reduce the number of) persistent connections with the server and increase the pipeline size (i.e., the number of requests that can be sent to the server without waiting for its response). However, when the average object size is over 20 kB, for example, the DMC may increase the number of persistent connections with the server and reduce the pipeline size.
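  • A sketch of such a sizing heuristic (the connection and pipeline numbers are illustrative; only the 1 kB and 20 kB thresholds come from the text above):

```python
# Sketch of a sizing heuristic (connection and pipeline numbers are illustrative;
# only the 1 kB / 20 kB thresholds come from the text above): small objects favor
# few connections with deep pipelines, large objects the opposite.
def tune_connection_pool(avg_object_bytes: int) -> dict:
    if avg_object_bytes < 1_000:
        return {"connections": 2, "pipeline_depth": 200}
    if avg_object_bytes > 20_000:
        return {"connections": 16, "pipeline_depth": 10}
    return {"connections": 8, "pipeline_depth": 50}

print(tune_connection_pool(512))      # small objects: fewer connections, deeper pipeline
print(tune_connection_pool(50_000))   # large objects: more connections, shallower pipeline
```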
  • the DMC can additionally or alternatively prioritize light processing requests over heavy processing requests coming from different client connections. Any suitable basis for selecting a processing request as being light or heavy can be used in some embodiments.
  • a Redis GET request can be considered to be a light request, whereas a Redis ZRANGEBYSCORE request can be considered to be a heavy request.
  • the DMC can additionally or alternatively send the same response to multiple requesters when a read request for a data object is received while another read request for the same data object has already been sent by the DMC to a server, but a corresponding response has not yet been received. In doing so, the DMC can queue one or more later requests until the response for the first request is received. The DMC can then send that response to the client that sent the first request and to all the clients with queued requests.
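  • A minimal sketch of this response sharing (callback-based stand-ins, not the patented implementation): requests for a key already in flight are queued and answered from the single server response:

```python
# Minimal sketch (callback stand-ins, not the patented implementation): reads of
# a key that is already in flight are queued, and the single server response is
# fanned out to every waiting client.
pending = {}   # key -> list of waiting client callbacks

def read(key, client_callback, send_to_server):
    if key in pending:                    # a request for this key is already in flight
        pending[key].append(client_callback)
        return
    pending[key] = [client_callback]
    send_to_server(key)

def on_server_response(key, value):
    for callback in pending.pop(key, []):   # same response for every queued client
        callback(value)

sent = []
read("user#1", lambda v: print("client A:", v), sent.append)
read("user#1", lambda v: print("client B:", v), sent.append)   # coalesced with the first read
on_server_response("user#1", "value")
print(sent)   # ['user#1'] -- only one round trip to the server
```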
  • the DMC can additionally or alternatively provide a cache for frequently requested data objects.
  • the objects cached by the DMC can be requested even more frequently than the frequently used objects that are replicated across servers 206 as described below.
  • data management controller (DMC) 202 can be used to manage the data flow of node 200 , to manage the data flow between node 200 and one or more other nodes in the same cluster, and to manage the data flow between node 200 and one or more nodes in another cluster.
  • CNM 204, as a cluster manager, can make auto-sharding (scaling-out) decisions when a specific database (or Bucket) has to be split over more than one server due to load or memory constraints. This can be accomplished in any suitable manner.
  • the CNM can instruct a server to send one or more user data set partitions to a new server, and then the CNM can instruct the DMC to redirect requests destined to this partition(s) to the new server.
  • CNM 204 as a cluster manager can make auto merging (scaling-in) decisions when a specific database (or Bucket) has to be merged from two or more servers to fewer servers due to reduced load or a reduction in memory size.
  • This can be accomplished in any suitable manner.
  • the CNM can instruct a server to send one or more user data set partitions to a target server that already manages a portion of the user data set, and then the CNM can instruct the DMC to redirect requests destined to this partition(s) to the target server.
  • cluster node manager (CNM) 204 can be used to perform node management functions and cluster management functions.
  • the cluster node manager can be used to monitor the status of each node, configure the node, control inter- and intra-node communications, elect nodes for aggregation and supervision functions, and provide a management interface for the cluster operation.
  • CNM 204 as a node manager can monitor for failures and/or degraded status in other nodes and in servers of its own node. Any suitable mechanism can be used for determining that another node or server has failed or become degraded in some embodiments.
  • periodic messages can be sent out by each server in a node to the CNM of the node.
  • the CNM can detect a failure when a message is missing.
  • each CNM can send out a periodic message to CNM(s) in one or more other nodes.
  • the CNM(s) in the one or more other nodes can detect a failure when this message is missing.
  • each server can send out a memory fragmentation status message to the CNM in the same node to alert the CNM if its performance has become degraded.
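  • A hypothetical sketch of missing-heartbeat detection (the timeout value is illustrative): the CNM records when each server last reported and flags servers whose periodic message is overdue:

```python
import time

# Hypothetical sketch (the timeout is illustrative): the CNM records when each
# server last reported and flags servers whose periodic message is overdue.
HEARTBEAT_TIMEOUT = 5.0   # seconds without a message before a failure is suspected
last_seen = {}

def record_heartbeat(server_id):
    last_seen[server_id] = time.monotonic()

def failed_servers():
    now = time.monotonic()
    return [s for s, t in last_seen.items() if now - t > HEARTBEAT_TIMEOUT]

record_heartbeat("server-1")
print(failed_servers())   # [] while reports keep arriving
```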
  • Any suitable action can be taken in response to a failure or degraded status.
  • a CNM can cause a server or another node to restart its software upon a failure being detected, to perform a de-fragmentation process upon a degraded status being detected, etc.
  • when the fragmentation ratio crosses a pre-defined threshold, the following can be performed: (1) another server on the same node or on a different node can be started; (2) the fragmented server can be backed up; (3) all clients can be switched to the new server; and (4) the fragmented server can be released.
  • CNM 204 as a node manager can receive, process, and monitor local configuration data, and/or monitor and collect node statistics, such as DMC statistics and server statistics.
  • CNM 204 as a node manager can translate server statistics to other non-relational database statistics. For example, when a Memcached user asks to get his/her Memcached statistics for the user's server 206 that is based on a Redis server, the request can be forwarded to the relevant node that contains the user's server 206 that holds the user dataset, and the CNM in that node can process this request and translate the Redis statistics to Memcached statistics.
  • CNM 204 as a node manager can provide statistics information to other components upon request.
  • CNM 204 as a node manager can monitor the DMC and server(s) for performance alerts.
  • the CNM as a node manager can monitor cases where the throughput or the latency of the node or the server crosses pre-defined thresholds.
  • CNM 204 as a node manager can control the load of the replication processes across the servers which reside on the same node, and control the multi-cluster replication operation of the server(s) within a node.
  • CNM 204 as a cluster manager can manage cluster configuration by communicating with another CNM designated as a Node Configuration Manager for allocating and releasing resources.
  • CNM 204 as a cluster manager can update the cluster DNS servers with any configuration change of the database (or Bucket). These updates can be used to ensure that only one protocol hop will be used between the clients and the cluster nodes in some embodiments.
  • CNM 204 as a cluster manager can maintain a global configuration map that maps all user datasets (e.g., databases or Buckets) and their partitions to cluster nodes and servers.
  • the CNM when the CNM receives a request to create a user dataset, or receives a request to increase the size of a user dataset, the CNM can perform an admission control process which will look for additional resources over existing nodes of a cluster or alternatively will add a new node to the cluster.
  • the CNM when the CNM receives a request to delete a user dataset, or receives a request to decrease the size of a user data set, the CNM can perform an admission control process which will release the resources.
  • CNM 204 as a cluster manager can maintain a global rebalancing process in order to ensure that all resources of a cluster are spread equally across nodes.
  • CNM 204 as a cluster manager can forward monitoring requests from the system manager to the proper CNM and aggregate statistics from multiple servers that serve the same database (or Bucket). Information about levels of utilization of the servers 206 can be exchanged between cluster node managers (CNM) 204 in different nodes.
  • CNM 204 as a cluster manager, together with another CNM designated as a Node Replication Controller, can perform cross-cluster replication operations.
  • DMC 202 and CNM 204 can be combined to form a combined control process.
  • server(s) 206 can be used to provide in-memory non-relational database functionality and any other suitable server process(es).
  • the server(s) can be based on the open-source Redis server with enhanced functionality.
  • the server(s) can represent one of the following options: (1) a Memcached Bucket; (2) a partition of a Memcached Bucket; (3) a Redis database (DB); or (4) a partition of a Redis DB.
  • each cluster node can include N servers, and there can be any suitable number of servers, including zero, in some embodiments.
  • server 206 can also support data operations which are not natively supported by a Redis server, such as the cas and prepend commands and the flag and cas arguments of Memcached. Commands and arguments which are not natively supported by the Redis server are transmitted to server 206 using the extension(s) of the modified Redis unified protocol, which extend the supported Redis commands while preserving Redis unified protocol semantics. Server 206 supports these extensions by adding new elements to the object key data structure and by implementing the command logic behind the new commands. For example, new elements added to the Redis object key data structure can hold information such as the Memcached flags and cas values.
  • the server in addition to functionality performed by a Redis server, can also perform real-time (or near real-time) class-based compression.
  • a set of global dictionaries for finding duplicated strings across multiple objects can be provided. These dictionaries can treat all the non-relational objects that belong to a certain class as one big object. As a result, an object can be compressed based on many data patterns which appeared in objects previously processed by the compression process.
  • any suitable compression dictionaries can be used in some embodiments.
  • the compression dictionaries can be an LZW/LZ78 dictionary, an LZP dictionary, or any other suitable compression dictionary.
  • a classification mechanism can be used to classify objects according to data patterns that were found in the objects and can provide a dedicated global dictionary per object class. This mechanism can find new classes of objects and merge classes of objects. As a result, the total size of each compression dictionary can be reduced, which contributes to additional enhancement in the compression ratio.
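  • As an illustrative analogue (not the patented mechanism), zlib's preset-dictionary support can show how a per-class shared dictionary lets short objects compress against patterns seen in earlier objects of the same class:

```python
import zlib

# Illustrative analogue only: zlib's preset-dictionary (zdict) support stands in
# for the per-class global dictionaries described above. The dictionary content
# and class name are invented for the example.
class_dictionaries = {
    "session": b'{"user_id": , "expires": , "token": "',
}

def compress(obj_class: str, payload: bytes) -> bytes:
    comp = zlib.compressobj(zdict=class_dictionaries.get(obj_class, b""))
    return comp.compress(payload) + comp.flush()

def decompress(obj_class: str, blob: bytes) -> bytes:
    decomp = zlib.decompressobj(zdict=class_dictionaries.get(obj_class, b""))
    return decomp.decompress(blob)

data = b'{"user_id": 42, "expires": 1700000000, "token": "abc"}'
assert decompress("session", compress("session", data)) == data
```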
  • a real-time (or near real-time) compaction mechanism for cached objects can be provided.
  • this compaction mechanism can delete cached objects immediately (or soon) after they have expired.
  • a mechanism for storing frequently used objects in a partition that is replicated across multiple nodes can be provided. In this way, multiple nodes can each serve the frequently used objects simultaneously thus improving performance. In some embodiments, these frequently used objects can always be stored in an uncompressed format.
  • CCS 208 (the common cluster storage) is an internal cluster repository service (based on the Redis architecture).
  • this service can include a per-node and a per-cluster configuration, statistics, and alert information. All the nodes in a cluster can be synchronized with the per-cluster configuration.
  • the DMC and CNM can be registered to the CCS to receive configuration change events which are relevant to their operations.
  • each cluster can include one or more Backup Nodes 112 .
  • the cluster architecture described herein can be provided as a service.
  • a user of this service can be registered through the service manager (SM), which communicates with a CNM designated as a Cluster Orchestrator to allocate user resources in the cluster.
  • the SM may allocate resources across multiple clusters.
  • a user resource can be a Memcached Bucket, a Redis database (DB), or any other suitable non-relational database.
  • Each database (or bucket) can be hosted over one or more servers 206 , depending on the transaction load imposed and the memory size of its dataset.
  • each database (or bucket) can be composed of multiple partitions (e.g., 4096 or any other suitable number).
  • keys can be equally spread over the different partitions by applying a hash function technique. Partition space may be scaled dynamically (in or out) if necessary, using a consistent hashing algorithm.
  • a partition in a cluster can be hosted by one master server 206 , and by zero or more slave servers 206 .
  • Master servers can serve both read and write requests, and slave servers can only serve read requests, in some embodiments.
  • when a database (or a bucket) is created, the user can be provided with a single DNS address or a list of DNS addresses to be used for accessing the database (or the bucket) in some embodiments.
  • the user then needs to configure its application servers to use these addresses.
  • This list can include DNS addresses for both master and slave servers 206 that hold the database (or the bucket). Multiple DNS addresses may point to the same server 206 .
  • Application requests may be sent directly to the node where the key is hosted, or may be redirected to that node by another node (through the DMC).
  • any one or more nodes, any one or more clusters, the service manager, and the DNS can be implemented in one or more general or special purpose devices, such as a general purpose computer, a special purpose computer, a client, a server, etc.
  • Any of these general or special purpose devices can include any suitable components such as one or more hardware processors (each of which can be a microprocessor, a digital signal processor, a controller, etc.), memory, communication interfaces, display controllers, input devices, etc.
  • Any one or more of the connections between different components herein can be formed from any suitable communication link or combination of communication links.
  • the communication links can include the Internet, a local area network, a wide area network, a telephone network, a satellite network, a cable network, etc.
  • any suitable computer readable media can be used for storing instructions for performing the processes described herein.
  • computer readable media can be transitory or non-transitory.
  • non-transitory computer readable media can include media such as magnetic media (such as hard disks, floppy disks, etc.), optical media (such as compact discs, digital video discs, Blu-ray discs, etc.), semiconductor media (such as flash memory, electrically programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), etc.), any suitable media that is not fleeting or devoid of any semblance of permanence during transmission, and/or any suitable tangible media.
  • transitory computer readable media can include signals on networks, in wires, conductors, optical fibers, circuits, any suitable media that is fleeting and devoid of any semblance of permanence during transmission, and/or any suitable intangible media.
  • FIGS. 3, 4A-4C, and 5A-5C are provided as examples only. At least some of the steps of processes 300, 400A-400C, and 500A-500C may be performed in a different order than represented, performed concurrently, or omitted. It is to be understood that the processes may be executed by at least one processor that is part of an in-memory NoSQL database (e.g., a processor that is part of Node 200) or any other suitable processor. In some embodiments, processes 300, 400A-400C, and 500A-500C may be executed as part of the operation of the in-memory NoSQL database.
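As a minimal illustration of the hash-based spreading of keys over partitions noted in the list above, the following Python sketch (hypothetical names and placement table, not the patented implementation) maps each key to one of a fixed number of partitions and each partition to a server; a production system would use a consistent hashing algorithm so that scaling the partition space in or out relocates only a small fraction of the keys.

    import hashlib

    NUM_PARTITIONS = 4096  # fixed partition space, matching the example above

    def partition_for_key(key: str) -> int:
        # Hash the key and reduce the digest into the fixed partition space.
        digest = hashlib.sha1(key.encode("utf-8")).digest()
        return int.from_bytes(digest[:4], "big") % NUM_PARTITIONS

    # Hypothetical placement table: partition ID -> server ID.
    SERVERS = ["server-0", "server-1", "server-2"]
    partition_to_server = {p: SERVERS[p % len(SERVERS)] for p in range(NUM_PARTITIONS)}

    def server_for_key(key: str) -> str:
        return partition_to_server[partition_for_key(key)]

    for key in ("user#1001", "user#1002", "session:abc"):
        print(key, "->", "partition", partition_for_key(key), "on", server_for_key(key))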

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Systems, methods, and media for managing an in-memory NoSQL database are provided. In some embodiments, systems for managing an in-memory NoSQL database are provided, the systems comprising a hardware processor that is configured to: identify a group of keys capable of being included in a complex multi-key database command, the group including a first key and a second key, the first key being associated with a first value and the second key being associated with a second value, the first value and the second value being stored on different servers that are part of the database; and cause the first value and the second value to be both stored on a given server.

Description

CROSS REFERENCE TO RELATED APPLICATIONS
This application claims the benefit of U.S. Provisional Patent Application No. 61/616,426, filed Mar. 28, 2012, which is hereby incorporated by reference herein in its entirety. This application also claims the benefit of U.S. Provisional Patent Application No. 61/802,062, filed Mar. 15, 2013, which is hereby incorporated by reference herein in its entirety. This application is also a continuation-in-part of U.S. patent application Ser. No. 13/247,371, filed Sep. 28, 2011, which claims the benefit of U.S. Provisional Patent Application No. 61/387,255, filed Sep. 28, 2010, and U.S. Provisional Patent Application No. 61/427,492, filed Dec. 28, 2010, each of which is hereby incorporated by reference herein in its entirety.
BACKGROUND
Many Web 2.0 and Software as a Service (SaaS) applications rely heavily on user-created content. This reliance drives the need for (a) efficient and reliable scaling technologies for supporting rapid data growth; and (b) better storage and retrieval technology. Much of this user-created content only requires a primary key for store and retrieve commands rather than complex querying and management functionality offered by traditional Relational Database Management Systems (RDBMS's). The excess RDBMS functionality involves expensive hardware and highly skilled personnel, typically making it unsuitable for these types of applications. In addition, RDBMS replication capabilities are limited and typically prefer consistency over performance and availability. Despite many developments in recent years, scaling-out a relational database is still very complex.
During recent years, NoSQL (Not Only SQL) database management systems (which are also referred to as non-relational databases or unstructured databases) have emerged in order to solve these RDBMS deficiencies. NoSQL is a broad class of database management systems that can differ from classic RDBMS in some significant ways: (1) there are no inherent relations between stored objects; (2) the data stores may not require fixed table schemas; and (3) NoSQL avoids join operations and typically scales horizontally.
In-memory non-relational databases are a subset of NoSQL databases, and are designed in a way that all of (or a major part of) the user's dataset is stored in RAM. In-memory NoSQL databases (i.e., NoSQL databases entirely served from RAM) have become popular among developers for their ability to improve application performance and reduce operational costs. Benchmarks have shown that in-memory NoSQL databases can be up to 100 times faster than HDD-based NoSQL databases and 10-20 times faster than SSD-based NoSQL databases.
The performance of in-memory NoSQL databases, however, can suffer when the databases are implemented in a distributed fashion where database data structures are spread over multiple nodes. In such implementations, the execution of complex multi-key commands, such as UNION, INTERSECT, and SORT, may require migrating objects across nodes and even blocking database operations in order to ensure data consistency. The delays resulting from object migration and operation blocking may be especially harmful to the performance of in-memory NoSQL databases, such as Redis, that are expected to execute commands at sub-millisecond latency.
Accordingly, the need exists for systems, methods, and media for managing an in-memory NoSQL database.
SUMMARY
Systems, methods, and media for managing an in-memory NoSQL database are provided. In some embodiments, systems for managing an in-memory NoSQL database are provided, the systems comprising a hardware processor that is configured to: identify a group of keys capable of being included in a complex multi-key database command, the group including a first key and a second key, the first key being associated with a first value and the second key being associated with a second value, the first value and the second value being stored on different servers that are part of the database; and cause the first value and the second value to be both stored on a given server.
In some embodiments, methods for managing an in-memory NoSQL database are provided, the methods comprising: identifying, by a hardware processor, a group of keys capable of being included in a complex multi-key database command, the group including a first key and a second key, the first key being associated with a first value and the second key being associated with a second value, the first value and the second value being stored on different servers that are part of the database; and causing the first value and the second value to be both stored on a given server.
In some embodiments, non-transitory computer-readable media containing computer-executable instructions that, when executed by a processor, cause the processor to perform a method for managing an in-memory NoSQL database are provided, the method comprising: identifying a group of keys capable of being included in a complex multi-key database command, the group including a first key and a second key, the first key being associated with a first value and the second key being associated with a second value, the first value and the second value being stored on different servers that are part of the database; and causing the first value and the second value to be both stored on a given server.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a diagram of an example of a cluster architecture in accordance with some embodiments.
FIG. 2 is a diagram of an example of a node architecture in accordance with some embodiments.
FIG. 3 is a diagram of an example of a process for managing a database in accordance with some embodiments.
FIG. 4A is a diagram of an example of a process for identifying a group of keys in accordance with some embodiments.
FIG. 4B is a diagram of another example of a process for identifying a group of keys in accordance with some embodiments.
FIG. 4C is a diagram of yet another example of a process for identifying a group of keys in accordance with some embodiments.
FIG. 5A is a diagram of an example of a process for executing database commands in accordance with some embodiments.
FIG. 5B is a diagram of another example of a process for executing database commands in accordance with some embodiments.
FIG. 5C is a diagram of yet another example of a process for executing database commands in accordance with some embodiments.
FIG. 6 is a diagram of an example of a connection between a data management controller and servers and other data management controllers in accordance with some embodiments.
DETAILED DESCRIPTION
Systems, methods, and media for providing an in-memory NoSQL cluster with complex data structures are provided.
In accordance with some embodiments, an in-memory non-relational database can be implemented as a system 100 as illustrated in FIG. 1. As shown, system 100 can include one or more clusters 102, one or more applications 104, one or more service managers 106, and one or more domain name servers 108. Clusters 102 can be one or more clusters of nodes 110 for providing in-memory data storage as well as related functionality as described further below. Applications 104 can be one or more applications that use data and related information stored in nodes 110. Service manager (SM) 106 can be an entity that manages the user interaction with the system, as well as maps user configuration to cluster resources. The SM can be designed to support single- and multi-tenancy modes of operation, and can allow the system to be supplied as a service. The SM can also communicate with a cluster node manager of a node (described below) for aggregating service statistics and alarms. When deployed as a service, the SM can also be responsible for usage reports and billing. DNS 108 can provide a mechanism for domain name server functions as known in the art.
Turning to FIG. 2, an example of a node 200 that can be used as a node 110 in a cluster 102 in accordance with some embodiments is illustrated. As shown, node 200 can include a data management controller (DMC) 202, a cluster node manager (CNM) 204, zero or more servers 206, and a common cluster storage 208. As described more fully below, in some embodiments, server(s) 206 can be used to provide in-memory non-relational database functionality. In some embodiments, the server(s) can be based on the open-source Redis server with enhanced functionality. In some embodiments, the server(s) can represent one of the following options: (1) a Memcached Bucket; (2) a partition of a Memcached Bucket; (3) a Redis database (DB); or (4) a partition of a Redis DB.
In some embodiments, node 200 may be implemented using physical hardware, such as a computer and/or a blade server. Additionally or alternatively, in some embodiments, node 200 may be implemented as a virtual server in a virtual environment, such as a computer cloud. In some embodiments, DMC 202 and CNM 204 may be implemented as one or more processes that are executed on node 200. Additionally or alternatively, in some embodiments, DMC 202 and CNM 204 may be implemented as one or more processes that are executed on a device other than a device on which other components of node 200 are executed. Any suitable system topology may be used, in some embodiments.
In some embodiments, each of servers 206 may be implemented as one or more software processes that are executed on node 200. Moreover, in some embodiments, any of servers 206 may be implemented in 32-bit computer software instead of 64-bit software. Because the size of a word in 32-bit software is half the size of a 64-bit word, using 32-bit computer software may cause a decrease in memory overhead resulting from the use of pointers (which are one word in size). Regardless of whether any of servers 206 are implemented in 32-bit software or 64-bit software, servers 206 may be executed on 64-bit hardware (e.g., executed by a 64-bit hardware processor). Although servers 206 are depicted as processes executed on the same node, in some embodiments, one or more of servers 206 may be executed on one or more different devices than the rest of servers 206.
In some embodiments, complex data structures, such as skip lists and linked lists, may include large numbers of pointers for each value stored in them in order to permit commands, such as search, intersect, or union, to be executed. The large numbers of pointers may result in a significant memory overhead. For example, elements in a skip list (when the skip list is implemented using 64-bit software) may have 4-byte values, with each 4-byte value being associated with 3 pointers (24 bytes in total). Thus, in some instances, the overhead costs introduced by the management of pointers may reach 87.5% (including memory alignment overhead).
In some embodiments, this overhead may be reduced by implementing one or more of servers 206 in 32-bit software. Using servers implemented in 32-bit software may reduce the memory overhead by half without causing any significant loss of performance and/or functionality, as compared to servers implemented in 64-bit software. As noted above, in embodiments in which servers 206 are implemented in 32-bit computer software, pointers can be only 4-bytes in size, rather than 8 bytes as is the case with 64-bit architectures. This can represent a 50% savings in memory overhead for every stored object.
In some aspects, servers running on 32-bit software architectures may run out of memory faster than those servers that are implemented in 64-bit software. The memory address space of 32-bit servers may include approximately 4.3 billion (2^32) different addresses, whereas the memory space available in 64-bit architectures may be virtually unlimited (e.g., it may include approximately 1.8×10^19 addresses). Thus, 32-bit servers may be much more likely to run out of available memory space than their 64-bit counterparts when deployed in a data-intensive setting. Accordingly, as is discussed further below, techniques for rebalancing the data that is stored on servers may be deployed that prevent the servers from running out of available memory addresses when the servers are implemented using low-bit (e.g., 32-bit) software.
In some embodiments, DMC 202 and/or CNM 204 may be used to implement a database (or bucket) that is hosted on a plurality of servers (e.g., servers 206). Any suitable type of servers may be used to implement the database. The database may include low-bit servers only (e.g., servers that are implemented in 32-bit software or otherwise have a smaller address space than high-bit servers), high-bit servers only (e.g., servers having a larger address space than the low-bit servers, such as 64-bit servers), or any suitable combination of low-bit and high-bit servers (e.g., a combination of 32-bit servers and 64-bit servers executed on 64-bit processors).
DMC 202 and/or CNM 204 may implement an intermediate layer that is used to organize the database into partitions. The intermediate layer may be situated between applications that use the database and the plurality of servers. The intermediate layer may be transparent to both client applications that use the database and to the servers that host the database.
Each partition may be a logical unit that includes a plurality of database data values. The number of partitions in a database may be fixed throughout the life of the database, in some embodiments. The size of the partitions may be variable, in some embodiments. Whether a data structure belongs to a given partition may depend on whether a key associated with the database data structure is mapped (e.g., by a hashing function and/or a mapping structure) to an ID of the given partition. Servers that host the database may store multiple partitions or only one partition. When the address space capacity of servers becomes close to being reached, new servers may be instantiated, and partitions stored in those servers may be relocated to the new servers. The implementation and use of partitions is discussed in further detail in U.S. patent application Ser. No. 13/853,011, filed Mar. 28, 2013, and entitled “SYSTEMS, METHODS, AND MEDIA FOR MANAGING RAM RESOURCES FOR IN-MEMORY NoSQL DATABASES” (hereinafter “the '011 application”), which is hereby incorporated by reference herein in its entirety.
In some embodiments, data that is stored in the database may be organized in key-value pairs. In some embodiments, each key-value pair may be a database object. The key of each pair may be any symbol, letter, digit, number, string, and/or any other suitable information representation that is usable to retrieve the same pair's data structure from the database. The value of each pair may be a digit, a string, a number, a hash, a linked list, a data structure, and/or any other suitable information representation. Any suitable implementation of database data objects may be used in some embodiments. In some embodiments, a database data structure (or a value that is part of the data structure) may be considered to be associated with a key, if that key is usable to retrieve the data structure's value from the database. Additionally or alternatively, in some embodiments, a database data structure may be considered to be associated with a key, if that key references the data structure.
In some embodiments, DMC 202 may assign database values that tend to be requested by the same complex multi-key commands to the same server in the database. In some embodiments, a complex multi-key operation may be an operation that requires the database to combine and/or compare the values associated with each of the keys specified by the complex multi-key operation. Examples of complex multi-key operations include the UNION and INTERSECT commands. As noted above, complex multi-key operations may have increased latency (as compared to other operations) because they may require data to be accessed on multiple servers and provided to the server that ultimately executes the complex multi-key operations. Thus, by grouping on the same server database values that tend to be referenced together in complex multi-key operations, a decrease in the latency of these operations may be achieved.
FIG. 3 is a diagram of an example of a process 300 for managing a database in accordance with some embodiments of the disclosed subject matter. In some embodiments, process 300 may be executed by DMC 202 and/or by any other suitable component of the database and/or of node 200.
At 310, database values that are likely to be referenced together by complex multi-key commands are identified and stored on the same server. More particularly, in some embodiments, a group of two or more keys is identified and database values associated with the keys in the group are caused to be stored on the same server. Examples of processes for identifying the group of keys are provided with respect to FIGS. 4A-4C.
At 320, a first mapping structure that relates keys to group IDs (i.e., IDs of groups of keys) is configured. Configuring the first mapping structure may include one or more of: instantiating the first mapping structure, updating the first mapping structure, and/or performing any other suitable action. Any of the group IDs may include a number, a string, an alphanumerical string, and/or any other suitable type of identifiers. In some embodiments, for example, the first mapping structure may include a table that relates a set of keys with an ID for a group the keys in the set are considered to be part of. Additionally or alternatively, in some embodiments, the first mapping structure may include a regular expression that defines a group of keys. Thus, the first mapping structure may include data (e.g., a table), executable code, a hash function, and/or any suitable type of function and/or information that is usable to identify the ID of a group that a given key is part of. In some embodiments, the first mapping structure may be configured to relate the keys in the group identified at step 310 to an ID of the group.
At 330, a second mapping structure is configured that relates group IDs to server IDs. The second mapping structure may include data (e.g., a table), executable code, a hash function, and/or any suitable type of function and/or information that is capable of being used to identify a server ID that is associated with a given group ID. Any suitable implementation of the second mapping structure may be used in some embodiments. For example, the second mapping structure may be implemented as a table that relates group IDs to server IDs. Configuring the second mapping structure may include one or more of: instantiating the mapping structure, updating the mapping structure, and/or performing any other suitable action. Any of the server IDs may include a number, a string, an alphanumerical string, and/or any other suitable type of identifiers. In some embodiments, the second mapping structure may be configured to relate the ID of the group of keys that is identified at step 310 to the server where values associated with keys from the group are relocated.
Additionally or alternatively, in some embodiments, the second mapping structure may be configured to relate a group ID to a partition ID. The second mapping structure may include data (e.g., a table), executable code, a hash function, and/or any suitable type of function and/or information that is capable of being used to identify a partition ID that is associated with a given group ID. Any suitable implementation of the second mapping structure may be used in some embodiments. In some embodiments, the second mapping structure may be implemented as a table that relates group IDs to partition IDs. Configuring the second mapping structure may include one or more of: instantiating the mapping structure, updating the mapping structure, and/or performing any other suitable action. Any of the partition IDs may include a number, a string, an alphanumerical string, and/or any other suitable type of identifiers. In some embodiments, for any one of the groups of keys identified in the first mapping structure, the second mapping structure may identify a partition ID that is associated with that group of keys. In some embodiments, data corresponding to a partition may include a hashing function that maps group IDs to partition IDs. For example, the hashing function may be one that maps a space of Group IDs to a space of partition IDs.
At 340, a third mapping structure is configured that relates server IDs to node IDs. The third mapping structure may include data (e.g., a table), executable code, and/or any suitable type of information that is capable of being used to identify a node ID that is associated with a given server ID. In some embodiments, the third mapping structure may be implemented as a table that relates server IDs to node IDs. Any suitable implementation of the third mapping structure may be used in some embodiments. Configuring the third mapping structure may include one or more of: instantiating the mapping structure, updating the mapping structure, and/or performing any other suitable action. Any of the server IDs may include a number, a string, an alphanumerical string, and/or any other suitable type of identifiers. In some embodiments, for any one of the plurality of servers that are used to host the database, the third mapping structure may identify a node (e.g., physical hardware, such as a computer and/or a blade server, and/or a virtual server in a virtual environment, such as a computer cloud) on which the server is executing.
At 350, one or more database commands are executed. Examples of processes for executing commands are provided in relation to FIGS. 5A-5C. Further examples of processes for executing commands are also provided in the '011 application.
At 360, the group of keys that is identified at step 310 is updated. Updating the group of keys may include adding one or more keys to the group and/or removing one or more keys from the group. Adding a key to the group of keys may include configuring the first mapping structure to associate the key with the ID of the group. Removing a key from the group may include modifying the first mapping structure to remove any associations between the key being removed and the ID of the group. Additionally or alternatively, in some embodiments, updating the group of keys may include deleting, from the first mapping structure, the ID of the group as well as any associations between the ID and keys in the group, and identifying a new group of keys in the manner discussed with respect to step 310. In some embodiments, the group of keys may be updated at predetermined time intervals (e.g., 10 minutes) and/or after the execution of a predetermined number of commands (e.g., 10 commands).
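The three mapping structures configured at steps 320-340 can be made concrete with the following sketch (the table contents, group names, and IDs are hypothetical; as noted above, a real implementation could equally use hash functions or regular expressions instead of literal tables).

    import re

    # First mapping structure (step 320): relates keys to group IDs, here via
    # explicit entries plus a regular expression that defines one group of keys.
    key_to_group = {"user#1001:friends": "grp-users"}
    group_patterns = [(re.compile(r"^user#[0-9]+"), "grp-users")]

    # Second mapping structure (step 330): relates group IDs to server IDs
    # (it could instead relate group IDs to partition IDs).
    group_to_server = {"grp-users": "srv-2"}

    # Third mapping structure (step 340): relates server IDs to node IDs.
    server_to_node = {"srv-2": "node-7"}

    def group_id_for(key):
        # Explicit assignments take precedence over pattern-defined groups.
        if key in key_to_group:
            return key_to_group[key]
        for pattern, group_id in group_patterns:
            if pattern.match(key):
                return group_id
        return None

    def locate(key):
        group_id = group_id_for(key)
        server_id = group_to_server.get(group_id)
        node_id = server_to_node.get(server_id)
        return group_id, server_id, node_id

    print(locate("user#1002:friends"))   # ('grp-users', 'srv-2', 'node-7')
    print(locate("unrelated-key"))       # (None, None, None)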
FIG. 4A depicts a diagram of an example of a process 400A for identifying a group of keys in accordance with some embodiments of the disclosed subject matter. In some embodiments, process 400A may be used to identify a group of keys as specified by step 310 of process 300. Process 400A may be used additionally or alternatively to processes 400B and 400C (described herein) in some embodiments.
At 410A, a definition for a group of keys is obtained. In some embodiments, the definition may be received as user input via a local input device, such as a keyboard, or over a communications network. In some embodiments, the group definition may specify a symbol pattern. Keys associated with values in the database that match the specified pattern may be considered to belong to the group. In some embodiments, the group definition may include a regular expression and/or any other suitable type of expression. For example, a group of keys defined as "^user#[0-9]+" may include all keys that start with the string "user#" followed by at least one digit.
At 420A, a server is identified. Any suitable method for identifying the server may be used, in some embodiments. For example, in some embodiments, the server may be identified based on user input, based on a hash function of a group identifier, based on a predetermined rule, or based on any other suitable function, information, and/or criterion. Additionally or alternatively, in some embodiments, the identified server may be one in which a value associated with one of the keys in the group is already stored.
At 430A, at least some values stored in the database that are associated with different keys in the group are relocated to the identified server. Relocating the values may include retrieving each value from its current location, storing the value on the identified server, and deleting the value from its current location, thus leaving a copy of the value on the identified server only. In some embodiments, the value retrieval and storage may be performed in accordance with the techniques discussed in the '011 application. As discussed above, storing values that are associated with keys from a group of keys on the same server may increase the speed at which complex multi-key commands are executed by the database.
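A minimal sketch of the relocation performed at 430A is shown below; the per-server stores and the find_server() helper are hypothetical stand-ins, and the retrieve/store/delete sequence simply leaves a single copy of each value on the identified server, as described above.

    # Hypothetical in-memory stand-ins for the servers hosting the database.
    servers = {
        "srv-1": {"user#1:friends": {"a", "b"}},
        "srv-2": {"user#2:friends": {"b", "c"}},
    }

    def find_server(key):
        # Locate the server currently holding the value associated with a key.
        for server_id, store in servers.items():
            if key in store:
                return server_id
        return None

    def relocate(keys, target_server):
        # Move the values for the given keys so they all reside on target_server.
        for key in keys:
            source = find_server(key)
            if source is None or source == target_server:
                continue
            value = servers[source][key]          # retrieve from current location
            servers[target_server][key] = value   # store on the identified server
            del servers[source][key]              # delete the original copy

    relocate({"user#1:friends", "user#2:friends"}, "srv-1")
    print(servers)   # both values now reside on srv-1 only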
FIG. 4B depicts a diagram of an example of a process 400B for identifying a group of keys in accordance with some embodiments of the disclosed subject matter. In some embodiments, process 400B may be used to identify a group of keys as specified by step 310 of process 300. Process 400B may be used additionally or alternatively to processes 400A and 400C (described herein) in some embodiments.
At 410B, a command is received that includes at least a first key and a second key. Any suitable type of command may be received, in some embodiments. For example, the command may be a UNION command, an INTERSECT command, or a SORT command.
At 420B, a determination is made whether the first key and the second key are assigned to different groups of keys. The determination may be performed based on the first mapping structure. If it is determined that the first key and the second key are assigned to different groups, the execution of process 400B terminates. Otherwise, the execution of process 400B proceeds to step 430B.
At 430B, a determination is made whether the first key is assigned to a first group. If it is determined that the first key is assigned to the first group of keys, process 400B proceeds to 440B. Otherwise, process 400B terminates.
At 440B, the second key is added to the first group. In some embodiments, adding the second key may include modifying the first mapping structure to relate the second key to the ID of the first group of keys.
At 450B, a first server that is hosting a first value that is associated with the first key is identified. In some embodiments, the first server may be identified by using techniques discussed in the '011 application.
At 460B, a second server that is hosting a second value that is associated with the second key is identified. In some embodiments, the second server may also be identified by using techniques discussed in the '011 application.
At 470B, the second value is relocated from the second server to the first server. Relocating the second database value may include copying the second database value to the server identified at 450B. In some embodiments, the relocation may be performed in the manner discussed with respect to step 310.
FIG. 4C depicts a diagram of an example of a process 400C for identifying a group of keys in accordance with some embodiments of the disclosed subject matter. In some embodiments, process 400C may be used to identify a group of keys as specified by step 310 of process 300. Process 400C may be used additionally or alternatively to processes 400A and 400B (disclosed herein) in some embodiments.
At 410C, a plurality of key pairs is identified. Each pair may include a first key and a second key. Each first key and each second key may be associated with a different value that is stored in the database.
At 420C, a ranking score is assigned to each of the pairs. In some embodiments, the ranking score of each pair may be based on a count of commands that include both the pair's first key and the pair's second key that have been executed by the database. For example, the count may be based on the instructions executed during a predetermined period of time (e.g., in the last 10 minutes), on the number of instructions that include both keys in the set of N most recently-executed instructions, or based on any other suitable criterion. In some embodiments, the greater the count, the higher the ranking score. Any suitable type of instructions may be used to keep the count, in some embodiments. For example, the count may be based on INTERSECT instructions, UNION instructions, and/or any suitable type of instructions. Additionally or alternatively, the count may be based only on the number of complex multi-key instructions that have been invoked during the period.
At 430C, the pairs in the plurality are grouped based on their ranking scores to identify a group of keys. In some embodiments, the pairs may be grouped based on their ranking scores to determine a group of pairs. Once the group of pairs is determined, the keys from the pairs in the group may be included in the group of keys. In some embodiments, the keys may be grouped by identifying a subset of the plurality of pairs based on the pairs' ranking scores, and identifying a group of pairs from the subset, such that the first element in each pair of the group is also part of another pair in the group. The subset of the pairs may be identified by selecting all pairs whose ranking score exceeds a predetermined threshold or by selecting a subset of the plurality of pairs that have the highest ranking scores out of all pairs in the plurality. Any suitable method for identifying the subset of pairs may be used in some embodiments.
At 440C, a server is identified. The server may be identified in the manner discussed with respect to step 420A.
At 450C, for each key in the group of keys, a value associated with that key is identified. In some embodiments, the value may be identified by using techniques discussed in the '011 application.
At 460C, at least some of the values identified at step 450C are relocated to the server identified at 440C. The relocation may be performed in the manner discussed with respect to step 420A.
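One way to realize the ranking at 420C and the grouping at 430C is sketched below, assuming a hypothetical log of recently executed multi-key commands and an arbitrary score threshold; the pair score is simply the number of recent commands that name both keys, and surviving pairs are merged into groups of keys.

    from collections import Counter
    from itertools import combinations

    # Recent multi-key commands, each represented by the keys it referenced.
    recent_commands = [
        ("INTERSECT", ["user#1:friends", "user#2:friends"]),
        ("UNION", ["user#1:friends", "user#2:friends", "user#3:friends"]),
        ("INTERSECT", ["user#1:friends", "user#2:friends"]),
        ("UNION", ["cart:9", "cart:10"]),
    ]
    SCORE_THRESHOLD = 2   # hypothetical cutoff for grouping a pair

    # Step 420C: rank each key pair by how often both keys appear in a command.
    pair_scores = Counter()
    for _command, keys in recent_commands:
        for a, b in combinations(sorted(set(keys)), 2):
            pair_scores[(a, b)] += 1

    # Step 430C: merge the highly ranked pairs into groups of keys.
    groups = []
    for (a, b), score in pair_scores.items():
        if score < SCORE_THRESHOLD:
            continue
        merged = {a, b}
        remaining = []
        for group in groups:
            if group & merged:
                merged |= group          # fold overlapping groups together
            else:
                remaining.append(group)
        remaining.append(merged)
        groups = remaining

    print(pair_scores.most_common(3))
    print(groups)   # e.g. [{'user#1:friends', 'user#2:friends'}]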
FIG. 5A depicts a diagram of an example of a process 500A for executing database commands in accordance with some embodiments of the disclosed subject matter. In some embodiments, process 500A may be used to execute one or more commands as specified by step 350 of process 300. Process 500A may be used additionally or alternatively to processes 500B and 500C (described herein) in some embodiments.
At 510A, a database command is received. The command may include at least a first key and a second key. Any suitable command may be received in some embodiments. For example, the command may be a SET command, a GET command, an INTERSECT command, a UNION command, and/or any other suitable type of command. The command may be received from any application and/or device that is capable of using the database.
At 520A, a determination is made whether the first key and the second key are part of the same group of keys. In some embodiments, the determination may be made by using the first mapping structure. If the first key and the second key are not part of the same group of keys, the execution of process 500A terminates. In such instances, the command may be executed using the technique(s) discussed in the '011 application. More specifically, for example, the first key may be mapped to a partition ID, the partition ID may be mapped to the ID of a server storing a value that is associated with the first key, and afterwards the server may be used, at least in part, to execute the instruction. Otherwise, if the first key and the second key are found to be part of the same group of keys, process 500A proceeds to step 530A.
At 530A, the ID of the group of keys which the first key and the second key are part of is determined. In some embodiments, the ID of the group of keys may be determined by using the first mapping structure.
At 540A, the ID of the group of keys is mapped to a server ID in order to identify a server that is hosting values that are associated with keys from the group of keys. In some embodiments, the mapping may be performed based on the second mapping structure.
Additionally or alternatively, in some embodiments, the ID of the group of keys (or an expression (or portion thereof), such as a regular expression, matching keys from the group) may be mapped to a partition ID (e.g., by using the second mapping structure, such as a hash function). Afterwards, the partition ID may be used to identify a server ID. Each partition may be a logical unit that includes a plurality of database data values. Further discussion of the implementation and use of partitions is provided in the '011 application. In some embodiments the mapping of the ID of the group of keys to the partition ID may be performed by using the second mapping structure. As discussed above, in some embodiments, the second mapping structure may include (or implement) a hashing function.
At 550A, the server ID is mapped to a node ID in order to identify the node (e.g., a physical device, such as a computer, or a virtual machine) that is hosting the server. In some embodiments, the mapping may be performed based on the third mapping structure.
At 560A, the command is forwarded to the server hosting the values associated with keys in the group of keys and subsequently executed by that server. In some embodiments, a response to the command may be received from that server and forwarded to the application and/or device that transmitted the command.
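Steps 520A-560A can be summarized as the routing function below, with hypothetical mapping tables and a forward_to() stub in place of the actual network forwarding; the fallback path for keys that do not share a group would follow the per-key partition mapping of the '011 application.

    # Hypothetical mapping structures (see steps 320-340 of process 300).
    key_to_group = {"user#1:friends": "grp-users", "user#2:friends": "grp-users"}
    group_to_server = {"grp-users": "srv-2"}
    server_to_node = {"srv-2": "node-7"}

    def forward_to(node_id, server_id, command, keys):
        # Stand-in for forwarding the command to the server over the network.
        return f"{command}({', '.join(keys)}) executed on {server_id}@{node_id}"

    def route_multi_key_command(command, first_key, second_key):
        group_a = key_to_group.get(first_key)       # steps 520A-530A
        group_b = key_to_group.get(second_key)
        if group_a is None or group_a != group_b:
            return None                             # fall back to per-key routing
        server_id = group_to_server[group_a]        # step 540A
        node_id = server_to_node[server_id]         # step 550A
        return forward_to(node_id, server_id, command, [first_key, second_key])  # 560A

    print(route_multi_key_command("INTERSECT", "user#1:friends", "user#2:friends"))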
FIG. 5B depicts a diagram of an example of a process 500B for executing database commands in accordance with some embodiments of the disclosed subject matter. In some embodiments, process 500B may be used to execute one or more commands as specified by step 350 of process 300. Process 500B may be used additionally or alternatively to processes 500A and 500C (described herein) in some embodiments.
At 510B a command is received that includes a first key. The command may be received from any application and/or device that is capable of using the database. Any suitable type of command may be received in some embodiments. Furthermore, any suitable type of first key may be received, in some embodiments. For example, in some embodiments, the command may be a SORT command, and the first key may be associated with a value that includes a set of references to other values (e.g., a set of second keys) that are to be sorted. Executing the command, by a database server, may include retrieving each of the values referenced by the set of second keys, sorting the values, and providing an indication of the sorted values to the application and/or device that submitted the command.
At 520B, a value that is associated with the first key is identified. As discussed above, in some embodiments, the value may be identified by using the techniques discussed in the '011 application.
At 530B, a determination is made whether the value identified at step 520B includes two or more references to other values. For example, in some embodiments, a determination may be made whether the value includes a plurality of second keys. Although in this example a determination is made whether the value includes a plurality of references to other values, in other examples the determination may be whether the command is one whose syntax requires the first key to point to a value that includes references to other values (e.g., a SORT command). Upon a negative determination, the execution of process 500B terminates. Otherwise, if it is determined that the value includes references to other values (or the command is of the predetermined type), the execution of process 500B proceeds to step 540B.
At 540B, the execution of commands, other than the command received at 510B, is blocked. In some embodiments, the blocking of other commands may include refraining from executing any commands that arrive at the entity executing process 500B (e.g., DMC 202).
At 550B, for each second key (or other type of reference) that is included in the value of the first key, a value associated with that second key is identified. Any suitable method for identifying values associated with second keys may be used in some embodiments.
At 560B, a copy of each of the values associated with (e.g., referenced by) the second keys is obtained.
At 570B, each of the obtained copies is stored on a predetermined server.
At 580B, the blocking of the execution of commands other than the command received at 510B is stopped, and the execution of the commands is allowed to proceed.
At 590B, the command is forwarded to the predetermined server and subsequently executed by that server. In some embodiments, a response to the command may be received from the predetermined server and forwarded to the application and/or device that transmitted the command.
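The copy-then-execute flow of process 500B can be sketched as follows, under simplifying assumptions: dictionaries stand in for servers, an in-process lock stands in for the blocking of other commands, and a SORT-like command is modeled as sorting the values referenced by the second keys.

    import threading

    # Hypothetical servers: the first key's value is a list of second keys whose
    # values are to be sorted (as with a SORT command).
    servers = {
        "srv-1": {"scores:list": ["score:a", "score:c"]},
        "srv-2": {"score:a": 7, "score:c": 2},
    }
    PREDETERMINED_SERVER = "srv-1"
    command_lock = threading.Lock()   # stands in for blocking other commands

    def find_value(key):
        for store in servers.values():
            if key in store:
                return store[key]
        return None

    def execute_sort(first_key):
        value = find_value(first_key)                    # step 520B
        if not isinstance(value, list):                  # step 530B
            return None
        with command_lock:                               # steps 540B and 580B
            for second_key in value:                     # steps 550B-570B
                servers[PREDETERMINED_SERVER][second_key] = find_value(second_key)
        # Step 590B: the command is executed on the predetermined server, which
        # now holds copies of every referenced value.
        local = servers[PREDETERMINED_SERVER]
        return sorted(local[k] for k in value)

    print(execute_sort("scores:list"))   # [2, 7]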
FIG. 5C depicts a diagram of an example of a process 500C for executing database commands in accordance with some embodiments of the disclosed subject matter. In some embodiments, process 500C may be used to execute one or more commands as specified by step 350 of process 300. Process 500C may be used additionally or alternatively to processes 500A and 500B (described herein) in some embodiments.
At 510C a command is received indicating the start of a database transaction. Any suitable command that indicates the start of a transaction may be received in some embodiments. For example, the command may be a MULTI command.
At 520C, a second command that is part of the transaction is received. Any suitable command that can be part of a transaction may be received in some embodiments. For example, the command may be an INTERSECT command, a UNION command, a SORT command, a GET command, or a SET command. The command may include at least one key.
At 530C, a server is identified based on the key. The identified server may be one that is storing a database value that is associated with the key. Any suitable method for identifying the server may be used. For example, as discussed with respect to FIG. 5A, the key may be mapped to a group ID and the group ID may be mapped to a server ID (e.g., using a hash function). Additionally or alternatively, as discussed in the '011 application, the key may be mapped to a partition ID and the partition ID may be mapped to the server ID.
At 540C, a command for initiating a transaction is transmitted to the identified server. Any suitable command for initiating a transaction may be transmitted in some embodiments. For example, a MULTI command may be transmitted. In some embodiments, the command for initiating a transaction may be transmitted to the identified server only when such a command has not already been transmitted to that server during a previous iteration of steps 520C-560C.
At 550C, the second command is transmitted to the identified server. Upon receiving the command, that server may execute the command and generate a response.
At 560C, a determination is made whether a third command marking the end of the transaction started at step 510C is received. Any suitable command for ending a transaction may be received, in some embodiments. For example, an EXEC command may be received. If a third command is not received, steps 520C-550C are executed again with respect to another second command that is part of the transaction started at step 510C. Otherwise, if the third command is received, the execution of process 500C proceeds to step 570C.
At 570C, a command marking the end of a transaction is transmitted to all servers on which a transaction has been started at step 540C during various iterations of steps 520C-540C. Any suitable command for ending a transaction may be transmitted. For example, an EXEC command may be transmitted to each of the servers.
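A rough sketch of the MULTI/EXEC fan-out in process 500C is shown below, with a hypothetical send() stub in place of real server connections and a simplistic routing rule; the point is only that a transaction-start command is sent lazily to each server the first time a command is routed to it, and a transaction-end command is later sent to exactly those servers.

    def send(server_id, command):
        # Stand-in for transmitting a command to a server and awaiting its reply.
        print(f"-> {server_id}: {command}")

    def server_for_key(key):
        # Hypothetical routing: key -> group/partition -> server (see FIG. 5A).
        return "srv-1" if key.startswith("user#") else "srv-2"

    def run_transaction(commands):
        # commands: (command name, key) tuples received between MULTI and EXEC.
        servers_in_transaction = set()
        for name, key in commands:                  # steps 520C-550C
            server_id = server_for_key(key)         # step 530C
            if server_id not in servers_in_transaction:
                send(server_id, "MULTI")            # step 540C (once per server)
                servers_in_transaction.add(server_id)
            send(server_id, f"{name} {key}")        # step 550C
        for server_id in servers_in_transaction:    # step 570C
            send(server_id, "EXEC")

    run_transaction([("GET", "user#1:friends"), ("SET", "cart:9"), ("GET", "user#2:friends")])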
Returning to FIG. 2, DMC 202 can also be used to perform client authentication in some embodiments. Any suitable one or more client authentication mechanisms can be used. For example, Simple Authentication and Security Layer (SASL) authentication, password authentication, source IP authentication, Amazon Web Service Security Group, and/or any other suitable authentication mechanisms can be used in some embodiments.
Any suitable mechanism for identifying a server in a node can be used. For example, a server can be identified by a port number that is appended to an IP address of a node (e.g., NodeIP:ServerPort).
In accordance with some embodiments, DMC 202 can also be used to perform protocol translation. For example, in some embodiments, DMC 202 can perform a translation from a protocol used by clients/applications requesting data to a protocol used by servers 206. More particularly, for example, a client/application can request data using Memcached ASCII, Memcached binary, Redis old protocol, Redis unified protocol, and/or any other suitable protocol. Servers 206 can communicate using a Redis unified protocol modified to include extensions that allow the servers to support operations which are natively not supported by a Redis server (e.g., Memcached commands) without breaking the Redis unified protocol semantics. DMC 202 can then perform a translation between these protocols.
For example, in accordance with some embodiments, DMC 202 can perform translation between the Memcached protocol and a modified Redis unified protocol. In doing so, the following can be performed. First, a client may issue a Memcached cas command using the following format: cas <key> <flags> <exptime> <bytes> <cas unique> [noreply]\r\n. Note that cas is a check and set operation, which means “store this data but only if no one else has updated since I last fetched it.” This command can then be translated to a new command, RCAS, using the following format: RCAS <key> <flags> F(<exptime>) <cas unique> <bytes/body>.
In doing so, the DMC can perform the following steps when translating a Memcached cas command received from the client to the new RCAS command. If a noreply argument is used by the client, the DMC does not forward this argument to the server. Instead, the DMC remembers that the initial request included this option, and therefore will not transfer the Redis reply to the client. If the Memcached expiration time (exptime) uses Unix time format, the DMC can translate the Unix time to an offset in seconds from the current time. The DMC can then receive a response to the RCAS command as follows. A "+OK\r\n" response can be received, which indicates success. A "-NOT_STORED" response can be received, which indicates that the data was not stored, but not because of an error. A "-EXIST" response can be received, which indicates that the item trying to be stored with the RCAS command has been modified since it was last fetched. A "-NOT_FOUND\r\n" response can be received, which indicates that the item trying to be stored with the RCAS command did not exist or has been deleted. These received messages can then respectively be translated to "STORED\r\n", "NOT_STORED", "EXIST\r\n", and "NOT_FOUND" of the Memcached protocol.
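A schematic of this request/response translation might look like the following; the helper names are hypothetical, the RCAS line format follows the description above, and the 30-day rule used to decide whether exptime is a Unix timestamp mirrors standard Memcached behavior rather than anything stated in the description.

    import time

    def translate_cas_request(line, body):
        # Memcached: cas <key> <flags> <exptime> <bytes> <cas unique> [noreply]
        parts = line.split()
        key, flags, exptime, nbytes, cas_unique = parts[1:6]
        noreply = len(parts) > 6 and parts[6] == "noreply"
        exp = int(exptime)
        if exp > 60 * 60 * 24 * 30:                # treat large values as Unix time
            exp = max(0, exp - int(time.time()))   # F(): offset in seconds from now
        rcas = f"RCAS {key} {flags} {exp} {cas_unique} {nbytes}"
        return rcas, body, noreply

    RESPONSE_MAP = {
        "+OK": "STORED\r\n",
        "-NOT_STORED": "NOT_STORED\r\n",
        "-EXIST": "EXIST\r\n",
        "-NOT_FOUND": "NOT_FOUND\r\n",
    }

    def translate_rcas_response(server_reply, noreply):
        if noreply:
            return None   # the DMC suppresses the reply toward the client
        return RESPONSE_MAP.get(server_reply.strip(), "SERVER_ERROR unexpected reply\r\n")

    request, _, noreply = translate_cas_request("cas user#1 0 1700000000 5 42", b"hello")
    print(request)
    print(translate_rcas_response("+OK", noreply))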
DMC 202 can also perform load balancing in some embodiments to balance loads among a master server 206 (which can respond to read and write requests for values) for a database (or bucket) partition and one or more slave servers 206 (which can respond to read requests for data objects) for the same partition. For example, in some embodiments, the DMC for a partition that resides over multiple servers 206 (e.g., one master server and N (zero or more) slave servers) can redirect a request for a data object on that partition to a relevant least-used server 206 based on shared usage statistics.
In some embodiments, DMC 202 can redirect requests to and responses from a node on which a data object is stored when a request for that data object is received at the wrong node. In some embodiments, this redirection of responses can occur via dedicated connections 604 to one or more other DMCs in other nodes as shown in FIG. 6 or one or more servers in other nodes.
To improve the performance of servers 206, DMC 202 can perform connection pooling in some embodiments. Any suitable technique for connection pooling can be used in some embodiments. For example, as shown in FIG. 6, in some embodiments, the DMC can maintain a number of connections 602 with each server 206, because each addition or removal of a connection from a server can degrade its throughput and increase its latency. In some embodiments, connections 602 between a DMC 202 and servers 206 can be long-lived, persistent connections that reduce the overhead associated with connection setup and teardown between client and server.
To improve the performance of servers 206, DMC 202 can additionally or alternatively use Unix domain socket connections between the DMC and the servers in some embodiments. When the DMC and the server reside on the same physical/virtual machine, Unix domain socket connections can be used to eliminate the processing overhead associated with managing TCP connections between the DMC and the servers.
To improve the performance of servers 206, DMC 202 can additionally or alternatively perform request pipelining in some embodiments. Requests can be pipelined by the DMC by sending multiple requests to the same server over the same connection before responses to the earlier of those requests are received from the server.
In some embodiments, the number of connections and the size of a pipeline between DMC 202 and each server 206 can additionally or alternatively be changed based on the average object size in the server. For example, in some embodiments, when the average object size in a server is less than 1 kB, the DMC may open fewer (or reduce the number of) persistent connections with the server and increase the pipeline size (i.e., the number of requests that can be sent to the server without waiting for its response). However, when the average object size is over 20 kB, for example, the DMC may increase the number of persistent connections with the server and reduce the pipeline size.
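A toy version of that sizing heuristic is shown below; the 1 kB and 20 kB thresholds are taken from the example above, while the connection and pipeline counts are arbitrary placeholders that only illustrate the direction of the trade-off.

    def tune_connection_pool(avg_object_size_bytes):
        # Returns (number of persistent connections, pipeline depth).
        # Small objects -> fewer connections, deeper pipelines;
        # large objects -> more connections, shallower pipelines.
        if avg_object_size_bytes < 1024:
            return 4, 256
        if avg_object_size_bytes > 20 * 1024:
            return 32, 8
        return 16, 64

    print(tune_connection_pool(512))     # (4, 256)
    print(tune_connection_pool(4096))    # (16, 64)
    print(tune_connection_pool(65536))   # (32, 8)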
In some embodiments, the DMC can additionally or alternatively prioritize light processing requests over heavy processing requests coming from different client connections. Any suitable basis for selecting a processing request as being light or heavy can be used in some embodiments. For example, a Redis GET request can be considered to be a light request, whereas a Redis ZRANGEBYSCORE request can be considered to be a heavy request.
In some embodiments, the DMC can additionally or alternatively send the same response to multiple requesters when a read request for a data object is received while another read request for the same data object has already been sent by the DMC to a server, but a corresponding response has not yet been received. In doing so, the DMC can queue one or more later requests until the response for the first request is received. The DMC can then send that response to the client that sent the first request and to all the clients with queued requests.
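That coalescing behavior can be sketched with asyncio as follows, assuming a hypothetical fetch_from_server() coroutine; the first GET for a key performs the server round trip while concurrent GETs for the same key wait on the pending result, and every requester receives the same response.

    import asyncio

    pending = {}   # key -> asyncio.Future holding the in-flight response

    async def fetch_from_server(key):
        # Stand-in for the round trip to the server holding the value.
        await asyncio.sleep(0.05)
        print(f"server hit for {key}")
        return {"a", "b"}

    async def coalesced_get(key):
        if key in pending:
            # Another read for this key is already in flight: wait for its result.
            return await pending[key]
        future = asyncio.get_running_loop().create_future()
        pending[key] = future
        try:
            response = await fetch_from_server(key)
            future.set_result(response)
            return response
        finally:
            del pending[key]

    async def main():
        # Three concurrent clients request the same key; the server is hit once
        # and the same response is returned to every requester.
        results = await asyncio.gather(*(coalesced_get("user#1:friends") for _ in range(3)))
        print(results)

    asyncio.run(main())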
In some embodiments, the DMC can additionally or alternatively provide a cache for frequently requested data objects. In some embodiments, the objects cached by the DMC can be requested even more frequently than the frequently used objects that are replicated across servers 206 as described below.
In accordance with some embodiments, data management controller (DMC) 202 can be used to manage the data flow of node 200, to manage the data flow between node 200 and one or more other nodes in the same cluster, and to manage the data flow between node 200 and one or more nodes in another cluster.
In accordance with some embodiments, Cluster Node Manager (CNM) 204, as a cluster manager, can make auto-sharding (scaling-out) decisions when a specific database (or Bucket) has to be split over more than one server due to load or memory constraints. This can be accomplished in any suitable manner. For example, the CNM can instruct a server to send one or more user data set partitions to a new server, and then the CNM can instruct the DMC to redirect requests destined to this partition(s) to the new server. Similarly, in some embodiments, CNM 204 as a cluster manager can make auto-merging (scaling-in) decisions when a specific database (or Bucket) has to be merged from two or more servers to fewer servers due to reduced load or a reduction in memory size. This can be accomplished in any suitable manner. For example, the CNM can instruct a server to send one or more user data set partitions to a target server that already manages a portion of the user data set, and then the CNM can instruct the DMC to redirect requests destined to this partition(s) to the target server.
In accordance with some embodiments, cluster node manager (CNM) 204 can be used to perform node management functions and cluster management functions. For example, the cluster node manager can be used to monitor the status of each node, configure the node, control inter- and intra-node communications, elect nodes for aggregation and supervision functions, and provide a management interface for the cluster operation.
In accordance with some embodiments, CNM 204 as a node manager can monitor for failures and/or degraded status in other nodes and in servers of its own node. Any suitable mechanism can be used for determining that another node or server has failed or become degraded in some embodiments. For example, in some embodiments, periodic messages can be sent out by each server in a node to the CNM of the node. The CNM can detect a failure when a message is missing. As another example, in some embodiments, each CNM can send out a periodic message to CNM(s) in one or more other nodes. The CNM(s) in the one or more other nodes can detect a failure when this message is missing. As yet another example, each server can send out a memory fragmentation status message to the CNM in the same node to alert the CNM if its performance has become degraded. Any suitable action can be taken in response to a failure or degraded status. For example, in some embodiments, a CNM can cause a server or another node to restart its software upon a failure being detected, to perform a de-fragmentation process upon a degraded status being detected, etc. In some embodiments, when the fragmentation ratio crosses a pre-defined threshold, the following can be performed: (1) another server on the same node or on a different node can be started; (2) the fragmented server can be backed-up; (3) all clients can be switched to the new server; and (4) the fragmented server can be released.
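A bare-bones version of the heartbeat and fragmentation checks described above could look like the following; the timeout and threshold values are placeholders, and restart() and defragment() are hypothetical hooks standing in for the CNM's actual recovery actions.

    import time

    HEARTBEAT_TIMEOUT_S = 5.0         # placeholder: missing-message window
    FRAGMENTATION_THRESHOLD = 0.5     # placeholder: ratio that triggers action

    last_heartbeat = {}               # server ID -> timestamp of last message
    fragmentation = {}                # server ID -> last reported ratio

    def record_heartbeat(server_id, frag_ratio):
        last_heartbeat[server_id] = time.time()
        fragmentation[server_id] = frag_ratio

    def check_servers(restart, defragment):
        now = time.time()
        for server_id, seen in last_heartbeat.items():
            if now - seen > HEARTBEAT_TIMEOUT_S:
                restart(server_id)                    # failure detected
            elif fragmentation.get(server_id, 0.0) > FRAGMENTATION_THRESHOLD:
                defragment(server_id)                 # degraded status detected

    record_heartbeat("srv-1", 0.1)
    record_heartbeat("srv-2", 0.8)
    check_servers(restart=lambda s: print("restart", s),
                  defragment=lambda s: print("defragment", s))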
In some embodiments, CNM 204 as a node manager can receive, process, and monitor local configuration data, and/or monitor and collect node statistics, such as DMC statistics and server statistics.
In some embodiments, CNM 204 as a node manager can translate server statistics to other non-relational database statistics. For example, when a Memcached user asks to get his/her Memcached statistics for the user's server 206 that is based on a Redis server, the request can be forwarded to the relevant node that contains the user's server 206 that holds the user dataset, and the CNM in that node can process this request and translate the Redis statistics to Memcached statistics.
In some embodiments, CNM 204 as a node manager can provide statistics information to other components upon request.
In some embodiments, CNM 204 as a node manager can monitor the DMC and server(s) for performance alerts. For example, the CNM as a node manager can monitor cases where the throughput or the latency of the node or the server crosses pre-defined thresholds.
In some embodiments, CNM 204 as a node manager can control the load of the replication processes across the servers which reside on the same node, and control the multi-cluster replication operation of the server(s) within a node.
In some embodiments, CNM 204 as a cluster manager can manage cluster configuration by communicating with another CNM designated as a Node Configuration Manager for allocating and releasing resources.
In some embodiments, CNM 204 as a cluster manager can update the cluster DNS servers with any configuration change of the database (or Bucket). These updates can be used to ensure that only one protocol hop will be used between the clients and the cluster nodes in some embodiments.
In some embodiments, CNM 204 as a cluster manager can maintain a global configuration map that maps all user datasets (e.g., databases or Buckets) and their partitions to cluster nodes and servers. In some embodiments, when the CNM receives a request to create a user dataset, or receives a request to increase the size of a user dataset, the CNM can perform an admission control process which will look for additional resources over existing nodes of a cluster or alternatively will add a new node to the cluster. In some embodiments, when the CNM receives a request to delete a user dataset, or receives a request to decrease the size of a user dataset, the CNM can perform an admission control process which will release the resources.
In some embodiments, CNM 204 as a cluster manager can maintain a global rebalancing process in order to ensure that all resources of a cluster are spread equally across nodes.
In some embodiments, CNM 204 as a cluster manager can forward monitoring requests from the system manager to the proper CNM and aggregate statistics from multiple servers that serve the same database (or Bucket). Information about levels of utilization of the servers 206 can be exchanged between cluster node managers (CNM) 204 in different nodes.
In some embodiments, CNM 204 as a cluster manager, together with another CNM designated as the Node Replication Controller, can perform cross-cluster replication operations.
In some embodiments, DMC 202 and CNM 204, each of which is a control process, can be combined to form a combined control process.
In some embodiments, server(s) 206 can be used to provide in-memory non-relational database functionality and any other suitable server process(es). In some embodiments, the server(s) can be based on the open-source Redis server with enhanced functionality. In some embodiments, the server(s) can represent one of the following options: (1) a Memcached Bucket; (2) a partition of a Memcached Bucket; (3) a Redis database (DB); or (4) a partition of a Redis DB.
In some embodiments, each node of a cluster includes N servers, where N can be any suitable number, including zero.
In some embodiments, in addition to functionality performed by a Redis server, server 206 can also support data operations which are not natively supported by a Redis server, such as the cas and prepend commands and the flag and cas arguments of Memcached. Commands and arguments which are not natively supported by the Redis server are transmitted to server 206 using the extension(s) of the modified Redis unified protocol, which extend the supported Redis commands while preserving Redis unified protocol semantics. Server 206 supports these extensions by adding new elements to the object key data structure and by implementing the command logic behind the new commands. Examples of the new elements that are added to the Redis Object Key Data Structure (illustrated in the sketch following this list) are:
    • i. flags field: to support the Memcached flag functionality; and
    • ii. cas field: to support the Memcached cas functionality.
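Redis implements its object keys in C; the Python stand-in below only illustrates how the two extra per-key elements named above could support Memcached-style check-and-set semantics. The field and function names are assumptions for illustration, not the patent's data structure.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ExtendedKey:
    value: bytes
    flags: int = 0        # i.  Memcached-style opaque flags stored with the key
    cas: int = 0          # ii. version counter for Memcached check-and-set


def check_and_set(key: ExtendedKey, expected_cas: int, new_value: bytes,
                  new_flags: Optional[int] = None) -> bool:
    """Succeed only if nobody modified the key since `expected_cas` was read."""
    if key.cas != expected_cas:
        return False
    key.value = new_value
    if new_flags is not None:
        key.flags = new_flags
    key.cas += 1           # every successful write bumps the cas counter
    return True
```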
In some embodiments, in addition to functionality performed by a Redis server, the server can also perform real-time (or near real-time) class-based compression. In order to perform this compression, in some embodiments, a set of global dictionaries for finding duplicated strings across multiple objects can be provided. These dictionaries can treat all the non-relational objects that belong to a certain class as one big object. As a result, an object can be compressed based on many data patterns which appeared in objects previously processed by the compression process.
Any suitable compression dictionaries can be used in some embodiments. For example, the compression dictionaries can be an LZW/LZ78 dictionary, an LZP dictionary, or any other suitable compression dictionary.
These dictionaries can be saved globally rather than attached to each object, which provides an additional reduction in the size of each compressed object, in some embodiments.
In some embodiments, a classification mechanism can be used to classify objects according to data patterns that were found in the objects and can provide a dedicated global dictionary per object class. This mechanism can find new classes of objects and merge classes of objects. As a result, the total size of each compression dictionary can be reduced, which contributes to additional enhancement in the compression ratio.
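The sketch below illustrates the idea of one global dictionary per object class. The patent names LZW/LZ78- and LZP-style dictionaries; zlib preset dictionaries are used here only because they ship with Python, and the class names and dictionary contents are made-up examples.

```python
import zlib

# One global dictionary per object class, built from patterns seen in
# previously processed objects of that class (contents are illustrative).
CLASS_DICTS = {
    "session": b'{"user_id": "", "last_seen": "", "cart": []}',
    "profile": b'{"name": "", "email": "", "preferences": {}}',
}


def compress(obj_class: str, payload: bytes) -> bytes:
    c = zlib.compressobj(zdict=CLASS_DICTS[obj_class])
    return c.compress(payload) + c.flush()


def decompress(obj_class: str, blob: bytes) -> bytes:
    d = zlib.decompressobj(zdict=CLASS_DICTS[obj_class])
    return d.decompress(blob) + d.flush()


# Because objects of the same class share one global dictionary, the dictionary
# is not embedded in every compressed blob, so each stored object stays smaller.
```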
In some embodiments, in addition to functionality performed by a Redis server, a real-time (or near real-time) compaction mechanism for cached objects can be provided. In some embodiments, this compaction mechanism can delete cached objects immediately (or soon) after they have expired.
In some embodiments, in addition to functionality performed by a Redis server, a mechanism for storing frequently used objects in a partition that is replicated across multiple nodes can be provided. In this way, multiple nodes can each serve the frequently used objects simultaneously, thus improving performance. In some embodiments, these frequently used objects can always be stored in an uncompressed format.
In accordance with some embodiments, CCS 208 is an internal cluster repository service (based on the Redis architecture). In some embodiments, this service can include a per-node and a per-cluster configuration, statistics, and alert information. All the nodes in a cluster can be synchronized with the per-cluster configuration. The DMC and CNM can be registered to the CCS to receive configuration change events which are relevant to their operations.
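Because the CCS is described as a Redis-based repository to which the DMC and CNM register for configuration change events, one plausible registration mechanism is a publish/subscribe channel. The sketch below assumes the redis-py client and a hypothetical channel name purely for illustration; the patent does not specify the event mechanism.

```python
import redis  # assumed client library (redis-py), not named in the patent


def watch_ccs(host: str = "localhost", port: int = 6379) -> None:
    """Register for configuration change events and react to each one."""
    ccs = redis.Redis(host=host, port=port)
    pubsub = ccs.pubsub()
    pubsub.subscribe("ccs:config-changes")      # hypothetical channel name
    for message in pubsub.listen():
        if message["type"] == "message":
            # Re-read only the configuration keys relevant to this component.
            print("configuration changed:", message["data"])
```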
As shown in FIG. 1, in accordance with some embodiments, each cluster can include one or more Backup Nodes 112.
In some embodiments, the cluster architecture described herein can be provided as a service. A user of this service can be registered through the service manager (SM), which communicates with a CNM designated as a Cluster Orchestrator to allocate user resources in the cluster. In some embodiments, the SM may allocate resources across multiple clusters.
In some embodiments, a user resource can be a Memcached Bucket, a Redis database (DB), or any other suitable non-relational database. Each database (or bucket) can be hosted on one or more servers 206, depending on the transaction load imposed and the memory size of its dataset. In some embodiments, each database (or bucket) can be composed of multiple partitions (e.g., 4096 or any other suitable number). In some embodiments, keys can be spread equally over the different partitions by applying a hash function technique. Partition space may be scaled dynamically (in or out), if necessary, using a consistent hashing algorithm.
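A minimal sketch of the key-to-partition hashing step follows. CRC32 is used only as a stand-in hash function and 4096 only echoes the example partition count above; the patent does not mandate either choice.

```python
import zlib

PARTITIONS = 4096   # the example partition count mentioned above


def key_to_partition(key: bytes) -> int:
    """Spread keys evenly over the partition space with a hash function."""
    return zlib.crc32(key) % PARTITIONS


# A separate partition -> (node, master server) table, adjusted by the
# consistent-hashing algorithm when partitions move, would complete the lookup.
# Example: key_to_partition(b"user:1001") yields a stable id in [0, 4096).
```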
In some embodiments, a partition in a cluster can be hosted by one master server 206 and by zero or more slave servers 206. Master servers can serve both read and write requests, and slave servers can only serve read requests, in some embodiments.
When a database (or a bucket) is created, the user can be provided with a single DNS address or a list of DNS addresses to be used for accessing the database (or the bucket) in some embodiments. The user then needs to configure its application servers to use these addresses. This list can include DNS addresses for both master and slave servers 206 that hold the database (or the bucket). Multiple DNS addresses may point to the same server 206.
Application requests may be sent directly to the node where the key is hosted, or may be redirected to that node by another node (through the DMC).
In accordance with some embodiments, any one or more nodes, any one or more clusters, the service manager, and the DNS can be implemented in one or more general or special purpose devices, such as a general purpose computer, a special purpose computer, a client, a server, etc. Any of these general or special purpose devices can include any suitable components such as one or more hardware processors (each of which can be a microprocessor, a digital signal processor, a controller, etc.), memory, communication interfaces, display controllers, input devices, etc. Any one or more of the connections between different components herein can be formed from any suitable communication link or combination of communication links. For example, the communication links can include the Internet, a local area network, a wide area network, a telephone network, a satellite network, a cable network, etc.
In some embodiments, any suitable computer readable media can be used for storing instructions for performing the processes described herein. For example, in some embodiments, computer readable media can be transitory or non-transitory. For example, non-transitory computer readable media can include media such as magnetic media (such as hard disks, floppy disks, etc.), optical media (such as compact discs, digital video discs, Blu-ray discs, etc.), semiconductor media (such as flash memory, electrically programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), etc.), any suitable media that is not fleeting or devoid of any semblance of permanence during transmission, and/or any suitable tangible media. As another example, transitory computer readable media can include signals on networks, in wires, conductors, optical fibers, circuits, any suitable media that is fleeting and devoid of any semblance of permanence during transmission, and/or any suitable intangible media.
It should be noted that FIGS. 3, 4A-4C, and 5A-5C are provided as examples only. At least some of the steps of processes 300, 400A-400C, and 500A-500B may be performed in a different order than represented, performed concurrently, or omitted. It is to be understood that the processes may be executed by at least one processor that is part of an in-memory NoSQL database (e.g., a processor that is part of Node 200 or any other suitable processor). In some embodiments, processes 300, 400A-400C, and 500A-500B may be executed as part of the operation of the in-memory NoSQL database.
Features of the disclosed implementations can be combined and rearranged in various ways. The provision of the examples described herein (as well as clauses phrased as “such as,” “e.g.”, “including” and the like) should not be interpreted as limiting the claimed subject matter to the specific examples; rather, the examples are intended to illustrate only some of many possible aspects.
Although the invention has been described and illustrated in the foregoing illustrative implementations, it is understood that the present disclosed subject matter has been made only by way of example, and that numerous changes in the details of implementation of the invention can be made without departing from the spirit and scope of the invention, which is limited only by the claims which follow.

Claims (18)

What is claimed is:
1. A system for managing an in-memory NoSQL database, comprising:
a hardware processor that is configured to:
identify a first key associated with a first value and a second key associated with a second value as being part of a group of keys in response to determining that:
i) the first value is stored on a first server that is part of the in-memory NoSQL database and the second value is stored on a second server that is part of the in-memory NoSQL database; and
ii) (a) symbols of the first key and symbols of the second key both match a predetermined symbol pattern, (b) the first key and the second key have both been included in a received multi-key NoSQL database command, or (c) symbols of the first key and symbols of the second key both match a predetermined symbol and the first key and the second key have both been included in a received multi-key NoSQL database command; and
in response to identifying the first key and the second key as being part of the group of keys, cause the first value and the second value to be both stored on: the first server; the second server; a third server; the first server and the second server; the first server and the third server; the second server and the third server; or the first server, the second server and the third server.
2. The system of claim 1, wherein the first key and the second key have been both determined to have been included in a received multi-key NoSQL database command and wherein identifying the first key and the second key as being part of the group of keys comprises including the first key and the second key in the group based on a count of received database commands that include both the first key and the second key.
3. The system of claim 1, wherein the first key and the second key have been both determined to have been included in a received multi-key NoSQL database command and wherein identifying the first key and the second key as being part of the group of keys comprises including the first key and the second key in the group in response to receiving a command that includes both the first key and the second key.
4. The system of claim 1, wherein the hardware processor is further configured to:
receive a command including at least two keys;
determine whether the at least two keys are part of the group of keys;
in response to determining that the at least two keys are part of the group of keys, determine an identifier of the group of keys;
identify one of the first server, the second server, the third server, or a fourth server that is part of the in-memory NoSQL database based on the identifier of the group of keys as being one in which values associated with keys from the group are stored; and
forward the command to the one of the first server, the second server, the third server or the fourth server.
5. The system of claim 4, wherein identifying the one of the first server, the second server, the third server, or the fourth server includes:
mapping the identifier of the group of keys to an identifier of a partition; and
mapping the identifier of the partition to an identifier of the one of the first server, the second server, the third server, or the fourth server.
6. The system of claim 5, wherein the mapping the identifier of the group of keys to an identifier of a partition includes performing a hash function on at least a portion of the identifier of the group of keys.
7. A method for managing an in-memory NoSQL database, comprising:
identifying, by a hardware processor, a first key associated with a first value and a second key associated with a second value as being part of a group of keys in response to determining that:
i) the first value is stored on a first server that is part of the in-memory NoSQL database and the second value is stored on a second server that is part of the in-memory NoSQL database; and
ii) (a) symbols of the first key and symbols of the second key both match a predetermined symbol pattern, (b) the first key and the second key have both been included in a received multi-key NoSQL database command, or (c) symbols of the first key and symbols of the second key both match a predetermined symbol and the first key and the second key have both been included in a received multi-key NoSQL database command; and
in response to identifying the first key and the second key as being part of the group of keys, causing the first value and the second value to be both stored on: the first server; the second server; a third server; the first server and the second server; the first server and the third server; the second server and the third server; or the first server, the second server and the third server.
8. The method of claim 7, wherein the first key and the second key have been both determined to have been included in a received multi-key NoSQL database command and wherein identifying the first key and the second key as being part of the group of keys comprises including the first key and the second key in the group based on a count of received database commands that include both the first key and the second key.
9. The method of claim 7, wherein the first key and the second key have been both determined to have been included in a received multi-key NoSQL database command and wherein identifying the first key and the second key as being part of the group of keys comprises including the first key and the second key in the group in response to receiving a command that includes both the first key and the second key.
10. The method of claim 7, further comprising:
receiving a command including at least two keys;
determining whether the at least two keys are part of the group of keys;
in response to determining that the at least two keys are part of the group of keys, determining an identifier of the group of keys;
identifying one of the first server, the second server, the third server, or a fourth server that is part of the in-memory NoSQL database based on the identifier of the group of keys as being one in which values associated with keys from the group are stored; and
forwarding the command to the one of the first server, the second server, the third server, or the fourth server.
11. The method of claim 10, wherein identifying the one of the first server, the second server, the third server, or the fourth server includes:
mapping the identifier of the group of keys to an identifier of a partition; and
mapping the identifier of the partition to an identifier of the one of the first server, the second server, the third server, or the fourth server.
12. The method of claim 11, wherein the mapping the identifier of the group of keys to an identifier of a partition includes performing a hash function on at least a portion of the identifier of the group of keys.
13. A non-transitory computer-readable medium containing computer-executable instructions that, when executed by a processor, cause the processor to perform a method for managing an in-memory NoSQL database, the method comprising:
identifying a first key associated with a first value and a second key associated with a second value as being part of a group of keys in response to determining that:
i) the first value is stored on a first server that is part of the in-memory NoSQL database and the second value is stored on a second server that is part of the in-memory NoSQL database; and
ii) (a) symbols of the first key and symbols of the second key both match a predetermined symbol pattern, (b) the first key and the second key have both been included in a received multi-key NoSQL database command, or (c) symbols of the first key and symbols of the second key both match a predetermined symbol and the first key and the second key have both been included in a received multi-key NoSQL database command; and
in response to identifying the first key and the second key as being part of the group of keys, causing the first value and the second value to be both stored on: the first server; the second server; a third server; the first server and the second server; the first server and the third server; the second server and the third server; or the first server, the second server and the third server.
14. The non-transitory computer-readable medium of claim 13, wherein the first key and the second key have been both determined to have been included in a received multi-key NoSQL database command and wherein identifying the first key and the second key as being part of the group of keys comprises including the first key and the second key in the group based on a count of received database commands that include both the first key and the second key.
15. The non-transitory computer-readable medium of claim 13, wherein the first key and the second key have been both determined to have been included in a received multi-key NoSQL database command and wherein identifying the first key and the second key as being part of the group of keys comprises including the first key and the second key in the group in response to receiving a command that includes both the first key and the second key.
16. The non-transitory computer-readable medium of claim 13, wherein the method further comprises:
receiving a command including at least two keys;
determining whether the at least two keys are part of the group of keys;
in response to determining that the at least two keys are part of the group of keys, determining an identifier of the group of keys;
identifying one of the first server, the second server, the third server, or a fourth server that is part of the in-memory NoSQL database based on the identifier of the group of keys as being one in which values associated with keys from the group are stored; and
forwarding the command to the one of the first server, the second server, the third server, or the fourth server.
17. The non-transitory computer-readable medium of claim 16, wherein identifying the one of the first server, the second server, the third server, or the fourth server includes:
mapping the identifier of the group of keys to an identifier of a partition; and
mapping the identifier of the partition to an identifier of the one of the first server, the second server, the third server, or the fourth server.
18. The non-transitory computer-readable medium of claim 17, wherein the mapping the identifier of the group of keys to an identifier of a partition includes performing a hash function on at least a portion of the identifier of the group of keys.
US13/853,030 2010-09-28 2013-03-28 Systems, methods, and media for managing an in-memory NoSQL database Active 2032-02-23 US9436710B2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US13/853,030 US9436710B2 (en) 2010-09-28 2013-03-28 Systems, methods, and media for managing an in-memory NoSQL database
US15/255,680 US9984106B2 (en) 2010-09-28 2016-09-02 Systems, methods, and media for managing an in-memory NOSQL database
US15/988,880 US10635649B2 (en) 2010-09-28 2018-05-24 Systems, methods, and media for managing an in-memory NoSQL database

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
US38725510P 2010-09-28 2010-09-28
US201061427492P 2010-12-28 2010-12-28
US13/247,371 US8972488B2 (en) 2010-09-28 2011-09-28 System, methods, and media for providing in-memory non-relational databases
US201261616426P 2012-03-28 2012-03-28
US201361802062P 2013-03-15 2013-03-15
US201361802158P 2013-03-15 2013-03-15
US13/853,030 US9436710B2 (en) 2010-09-28 2013-03-28 Systems, methods, and media for managing an in-memory NoSQL database

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/247,371 Continuation-In-Part US8972488B2 (en) 2010-09-28 2011-09-28 System, methods, and media for providing in-memory non-relational databases

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/255,680 Continuation US9984106B2 (en) 2010-09-28 2016-09-02 Systems, methods, and media for managing an in-memory NOSQL database

Publications (2)

Publication Number Publication Date
US20130226971A1 US20130226971A1 (en) 2013-08-29
US9436710B2 true US9436710B2 (en) 2016-09-06

Family

ID=49004448

Family Applications (3)

Application Number Title Priority Date Filing Date
US13/853,030 Active 2032-02-23 US9436710B2 (en) 2010-09-28 2013-03-28 Systems, methods, and media for managing an in-memory NoSQL database
US15/255,680 Active US9984106B2 (en) 2010-09-28 2016-09-02 Systems, methods, and media for managing an in-memory NOSQL database
US15/988,880 Active US10635649B2 (en) 2010-09-28 2018-05-24 Systems, methods, and media for managing an in-memory NoSQL database

Family Applications After (2)

Application Number Title Priority Date Filing Date
US15/255,680 Active US9984106B2 (en) 2010-09-28 2016-09-02 Systems, methods, and media for managing an in-memory NOSQL database
US15/988,880 Active US10635649B2 (en) 2010-09-28 2018-05-24 Systems, methods, and media for managing an in-memory NoSQL database

Country Status (1)

Country Link
US (3) US9436710B2 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150317335A1 (en) * 2014-04-30 2015-11-05 International Business Machines Corporation Generating a schema of a not-only-structured-query-language database
US20160080402A1 (en) * 2014-09-12 2016-03-17 Level 3 Communications, Llc Dynamic configuration of settings in response to ddos attack
CN107451190A (en) * 2017-06-26 2017-12-08 北京五八信息技术有限公司 Can persistence non-relational database data processing method and device
US11093458B2 (en) * 2015-08-11 2021-08-17 International Business Machines Corporation Automatic attribute structural variation detection for not only structured query language database
US20220382680A1 (en) * 2021-05-28 2022-12-01 Zoom Video Communications, Inc. Versioning Group Cache System

Families Citing this family (73)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9189503B2 (en) 2012-12-06 2015-11-17 Microsoft Technology Licensing, Llc Database scale-out
US9853826B2 (en) * 2013-02-25 2017-12-26 Qualcomm Incorporated Establishing groups of internet of things (IOT) devices and enabling communication among the groups of IOT devices
US11301514B2 (en) 2013-03-02 2022-04-12 Leon Guzenda System and method to identify islands of nodes within a graph database
US10789294B2 (en) 2013-03-02 2020-09-29 Leon Guzenda Method and system for performing searches of graphs as represented within an information technology system
US20190324678A1 (en) * 2013-09-09 2019-10-24 Whitecanyon Software, Inc. System and Method for Encrypted Disk Drive Sanitizing
WO2015088483A1 (en) * 2013-12-09 2015-06-18 Hewlett-Packard Development Company, L.P. Nosql database data consistency
US9665533B2 (en) 2013-12-20 2017-05-30 Rambus Inc. Blob pools, selectors, and command set implemented within a memory appliance for accessing memory
CN104794123B (en) 2014-01-20 2018-07-27 阿里巴巴集团控股有限公司 A kind of method and device building NoSQL database indexes for semi-structured data
CN105574026B (en) * 2014-10-15 2019-12-13 腾讯科技(深圳)有限公司 Method and device for supporting transaction of non-relational database
US10102030B2 (en) 2015-10-26 2018-10-16 International Business Machines Corporation Using 64-bit storage to queue incoming transaction server requests
US10346423B2 (en) * 2015-11-17 2019-07-09 Leon Guzenda Minimizing resource contention while loading graph structures into a distributed database
CN105808670A (en) * 2016-02-29 2016-07-27 武汉颂大教育科技股份有限公司 NoSQL based task distribution method for realizing electronic scoring
JP6674099B2 (en) * 2016-06-10 2020-04-01 富士通株式会社 Information management program, information management method, and information management device
US11593377B2 (en) 2016-09-26 2023-02-28 Splunk Inc. Assigning processing tasks in a data intake and query system
US11874691B1 (en) 2016-09-26 2024-01-16 Splunk Inc. Managing efficient query execution including mapping of buckets to search nodes
US11281706B2 (en) 2016-09-26 2022-03-22 Splunk Inc. Multi-layer partition allocation for query execution
US10353965B2 (en) 2016-09-26 2019-07-16 Splunk Inc. Data fabric service system architecture
US11615104B2 (en) 2016-09-26 2023-03-28 Splunk Inc. Subquery generation based on a data ingest estimate of an external data system
US11580107B2 (en) 2016-09-26 2023-02-14 Splunk Inc. Bucket data distribution for exporting data to worker nodes
US11003714B1 (en) 2016-09-26 2021-05-11 Splunk Inc. Search node and bucket identification using a search node catalog and a data store catalog
US11461334B2 (en) 2016-09-26 2022-10-04 Splunk Inc. Data conditioning for dataset destination
US12013895B2 (en) 2016-09-26 2024-06-18 Splunk Inc. Processing data using containerized nodes in a containerized scalable environment
US11599541B2 (en) 2016-09-26 2023-03-07 Splunk Inc. Determining records generated by a processing task of a query
US10956415B2 (en) 2016-09-26 2021-03-23 Splunk Inc. Generating a subquery for an external data system using a configuration file
US11023463B2 (en) 2016-09-26 2021-06-01 Splunk Inc. Converting and modifying a subquery for an external data system
US11314753B2 (en) 2016-09-26 2022-04-26 Splunk Inc. Execution of a query received from a data intake and query system
US11604795B2 (en) 2016-09-26 2023-03-14 Splunk Inc. Distributing partial results from an external data system between worker nodes
US10977260B2 (en) 2016-09-26 2021-04-13 Splunk Inc. Task distribution in an execution node of a distributed execution environment
US11222066B1 (en) 2016-09-26 2022-01-11 Splunk Inc. Processing data using containerized state-free indexing nodes in a containerized scalable environment
US11416528B2 (en) 2016-09-26 2022-08-16 Splunk Inc. Query acceleration data store
US20180089324A1 (en) 2016-09-26 2018-03-29 Splunk Inc. Dynamic resource allocation for real-time search
US11442935B2 (en) 2016-09-26 2022-09-13 Splunk Inc. Determining a record generation estimate of a processing task
US11620336B1 (en) * 2016-09-26 2023-04-04 Splunk Inc. Managing and storing buckets to a remote shared storage system based on a collective bucket size
US11269939B1 (en) 2016-09-26 2022-03-08 Splunk Inc. Iterative message-based data processing including streaming analytics
US11126632B2 (en) 2016-09-26 2021-09-21 Splunk Inc. Subquery generation based on search configuration data from an external data system
US11586627B2 (en) 2016-09-26 2023-02-21 Splunk Inc. Partitioning and reducing records at ingest of a worker node
US11321321B2 (en) 2016-09-26 2022-05-03 Splunk Inc. Record expansion and reduction based on a processing task in a data intake and query system
US11294941B1 (en) 2016-09-26 2022-04-05 Splunk Inc. Message-based data ingestion to a data intake and query system
US11562023B1 (en) 2016-09-26 2023-01-24 Splunk Inc. Merging buckets in a data intake and query system
US10984044B1 (en) 2016-09-26 2021-04-20 Splunk Inc. Identifying buckets for query execution using a catalog of buckets stored in a remote shared storage system
US11232100B2 (en) 2016-09-26 2022-01-25 Splunk Inc. Resource allocation for multiple datasets
US11663227B2 (en) 2016-09-26 2023-05-30 Splunk Inc. Generating a subquery for a distinct data intake and query system
US11567993B1 (en) 2016-09-26 2023-01-31 Splunk Inc. Copying buckets from a remote shared storage system to memory associated with a search node for query execution
US11106734B1 (en) 2016-09-26 2021-08-31 Splunk Inc. Query execution using containerized state-free search nodes in a containerized scalable environment
US11243963B2 (en) 2016-09-26 2022-02-08 Splunk Inc. Distributing partial results to worker nodes from an external data system
US11550847B1 (en) 2016-09-26 2023-01-10 Splunk Inc. Hashing bucket identifiers to identify search nodes for efficient query execution
US11860940B1 (en) 2016-09-26 2024-01-02 Splunk Inc. Identifying buckets for query execution using a catalog of buckets
US11163758B2 (en) 2016-09-26 2021-11-02 Splunk Inc. External dataset capability compensation
US11250056B1 (en) 2016-09-26 2022-02-15 Splunk Inc. Updating a location marker of an ingestion buffer based on storing buckets in a shared storage system
US11921672B2 (en) 2017-07-31 2024-03-05 Splunk Inc. Query execution at a remote heterogeneous data store of a data fabric service
US11989194B2 (en) 2017-07-31 2024-05-21 Splunk Inc. Addressing memory limits for partition tracking among worker nodes
US12118009B2 (en) 2017-07-31 2024-10-15 Splunk Inc. Supporting query languages through distributed execution of query engines
US10896182B2 (en) 2017-09-25 2021-01-19 Splunk Inc. Multi-partitioning determination for combination operations
US11151137B2 (en) 2017-09-25 2021-10-19 Splunk Inc. Multi-partition operation in combination operations
CN110019049A (en) * 2017-09-30 2019-07-16 阿里巴巴集团控股有限公司 A kind of processing method and processing device of operation requests
CN108052551B (en) * 2017-11-28 2021-04-30 北京航天云路有限公司 Method for storing large amount of time sequence data realized on REDIS
US10506032B2 (en) * 2018-04-26 2019-12-10 Slack Technologies, Inc. Automated load distribution for a group-based communication platform
US11334543B1 (en) 2018-04-30 2022-05-17 Splunk Inc. Scalable bucket merging for a data intake and query system
US11645237B2 (en) * 2018-05-10 2023-05-09 International Business Machines Corporation Replicating data utilizing a virtual file system and cloud storage
CN109165101B (en) * 2018-09-11 2021-03-19 南京朝焱智能科技有限公司 Memory server design method based on Redis
CN109491988B (en) * 2018-11-05 2021-12-14 北京中安智达科技有限公司 Data real-time association method supporting full-scale updating
CN110032604B (en) 2019-02-02 2021-12-07 北京奥星贝斯科技有限公司 Data storage device, translation device and database access method
CN109951560B (en) * 2019-03-27 2023-11-14 努比亚技术有限公司 Method, terminal and storage medium for improving concurrency and application speed of application service
WO2020220216A1 (en) 2019-04-29 2020-11-05 Splunk Inc. Search time estimate in data intake and query system
US11715051B1 (en) 2019-04-30 2023-08-01 Splunk Inc. Service provider instance recommendations using machine-learned classifications and reconciliation
US11494380B2 (en) 2019-10-18 2022-11-08 Splunk Inc. Management of distributed computing framework components in a data fabric service system
US11922222B1 (en) 2020-01-30 2024-03-05 Splunk Inc. Generating a modified component for a data intake and query system using an isolated execution environment image
US11704313B1 (en) 2020-10-19 2023-07-18 Splunk Inc. Parallel branch operation using intermediary nodes
CN113127490B (en) * 2021-04-23 2023-02-24 山东英信计算机技术有限公司 Key name generation method and device and computer readable storage medium
US12072939B1 (en) 2021-07-30 2024-08-27 Splunk Inc. Federated data enrichment objects
US12105715B2 (en) * 2021-10-19 2024-10-01 Salesforce, Inc. Tenant identification for cache keys
US11734278B2 (en) 2021-10-19 2023-08-22 Salesforce, Inc. Cache management for multiple tenants
US12093272B1 (en) 2022-04-29 2024-09-17 Splunk Inc. Retrieving data identifiers from queue for search of external data system

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7565349B2 (en) 2005-11-10 2009-07-21 International Business Machines Corporation Method for computing frequency distribution for many fields in one pass in parallel
US7451298B2 (en) 2006-08-03 2008-11-11 Apple Inc. Processing exceptions from 64-bit application program executing in 64-bit processor with 32-bit OS kernel by switching to 32-bit processor mode
US20090119321A1 (en) 2007-11-02 2009-05-07 International Business Machines Corporation Methods and computer program products for implementing low-cost pointer compression and decompression
JP5452602B2 (en) 2009-08-12 2014-03-26 三菱電機株式会社 Data transfer device, data transfer method, and data transfer system
US20110184913A1 (en) * 2010-01-28 2011-07-28 Microsoft Corporation Distributed data backup
US8447754B2 (en) 2010-04-19 2013-05-21 Salesforce.Com, Inc. Methods and systems for optimizing queries in a multi-tenant store
US8799240B2 (en) 2011-06-23 2014-08-05 Palantir Technologies, Inc. System and method for investigating large amounts of data
US8745014B2 (en) * 2011-10-19 2014-06-03 Pivotal Software, Inc. Time series data mapping into a key-value database

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060059253A1 (en) 1999-10-01 2006-03-16 Accenture Llp. Architectures for netcentric computing systems
US20080215602A1 (en) 2003-10-21 2008-09-04 Jerome Samson "Methods and Apparatus for Fusing Databases"
US7420992B1 (en) 2005-03-17 2008-09-02 Packeteer, Inc. Adaptive network traffic compression mechanism including dynamic selection of compression algorithms
US20080235292A1 (en) 2005-10-03 2008-09-25 Amadeus S.A.S. System and Method to Maintain Coherence of Cache Contents in a Multi-Tier System Aimed at Interfacing Large Databases
US20070226203A1 (en) 2006-03-23 2007-09-27 Microsoft Corporation Generation of query and update views for object relational mapping
US20100185593A1 (en) 2006-10-04 2010-07-22 Salesforce.Com, Inc. Methods and systems for recursive saving of hierarchical objects to a database
US20110022575A1 (en) 2007-02-01 2011-01-27 Paul Leslie Tomkins System and method of conclusively verifying the correctness of an information system without needing to test every combination of behaviour at run-time
US20100153466A1 (en) 2008-12-17 2010-06-17 Tomas Burger Systems and methods to facilitate report creation for non-relational databases
US20100318498A1 (en) * 2009-06-11 2010-12-16 Vivek Swarnakar Methods and apparatus for organizing data in a database
US20120054197A1 (en) * 2010-08-30 2012-03-01 Openwave Systems Inc. METHOD AND SYSTEM FOR STORING BINARY LARGE OBJECTS (BLObs) IN A DISTRIBUTED KEY-VALUE STORAGE SYSTEM
US20120078974A1 (en) * 2010-09-23 2012-03-29 Microsoft Corporation Data model dualization
US20120117054A1 (en) 2010-11-08 2012-05-10 Lakshmikant Shrinivas Query Analysis in a Database
US9015345B2 (en) * 2010-12-15 2015-04-21 Microsoft Corporation API supporting server and key based networking
US20120284317A1 (en) * 2011-04-26 2012-11-08 Dalton Michael W Scalable Distributed Metadata File System using Key-Value Stores
US8805984B2 (en) 2011-07-14 2014-08-12 Red Hat, Inc. Multi-operational transactional access of in-memory data grids in a client-server environment

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
DeCandia, G., et al., "Dynamo: Amazon's Highly Available Key-Value Store", In Proceedings of the Symposium on Operating Systems Principles (SOSP '07), Stevenson, WA, USA, Oct. 14-17, 2007, pp. 205-220.
IBM , "A System of Reducing Down Time in a Cached In-Memory Database Environmental Using Checkpointing Mechanism", IP.Com Prior Art Database Technical Disclosure, Apr. 14, 2009, pp. 1-4.
Notice of Allowance dated Jul. 23, 2013 in U.S. Appl. No. 13/247,263.
Notice of Allowance dated Nov. 14, 2014 in U.S. Appl. No. 13/247,371.
Notice of Allowance dated Oct. 1, 2014 in U.S. Appl. No. 13/853,011.
Office Action dated Aug. 10, 2015 in U.S. Appl. No. 14/064,891.
Office Action dated Aug. 4, 2014 in U.S. Appl. No. 14/064,891.
U.S. Appl. No. 13/853,011, filed Mar. 28, 2013.

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150317335A1 (en) * 2014-04-30 2015-11-05 International Business Machines Corporation Generating a schema of a not-only-structured-query-language database
US10055429B2 (en) * 2014-04-30 2018-08-21 International Business Machines Corporation Generating a schema of a not-only-structured-query-language database
US10936556B2 (en) 2014-04-30 2021-03-02 International Business Machines Corporation Generating a schema of a Not-only-Structured-Query-Language database
US20160080402A1 (en) * 2014-09-12 2016-03-17 Level 3 Communications, Llc Dynamic configuration of settings in response to ddos attack
US9712546B2 (en) * 2014-09-12 2017-07-18 Level 3 Communications, Llc Dynamic configuration of settings in response to DDOS attack
US10218720B2 (en) 2014-09-12 2019-02-26 Level 3 Communications, Llc Dynamic configuration of settings in response to DDoS attack
US10609052B2 (en) 2014-09-12 2020-03-31 Level 3 Communications, Llc Dynamic configuration of settings in response to DDoS attack
US11190532B2 (en) 2014-09-12 2021-11-30 Level 3 Communications, Llc Dynamic configuration of settings in response to DDoS attack
US11093458B2 (en) * 2015-08-11 2021-08-17 International Business Machines Corporation Automatic attribute structural variation detection for not only structured query language database
CN107451190A (en) * 2017-06-26 2017-12-08 北京五八信息技术有限公司 Can persistence non-relational database data processing method and device
US20220382680A1 (en) * 2021-05-28 2022-12-01 Zoom Video Communications, Inc. Versioning Group Cache System

Also Published As

Publication number Publication date
US20160371268A1 (en) 2016-12-22
US10635649B2 (en) 2020-04-28
US9984106B2 (en) 2018-05-29
US20180267969A1 (en) 2018-09-20
US20130226971A1 (en) 2013-08-29

Similar Documents

Publication Publication Date Title
US10635649B2 (en) Systems, methods, and media for managing an in-memory NoSQL database
US10984018B2 (en) System, methods, and media for compressing non-relational database objects
US8954478B2 (en) Systems, methods, and media for managing RAM resources for in-memory NoSQL databases
US11445019B2 (en) Methods, systems, and media for providing distributed database access during a network split
US11409721B2 (en) System and method for providing access to a sharded database using a cache and a shard technology
US10275184B2 (en) Framework for volatile memory query execution in a multi node cluster
US11308100B2 (en) Dynamically assigning queries to secondary query processing resources
US10657154B1 (en) Providing access to data within a migrating data partition
US20200159661A1 (en) Cache Coherence for File System Interfaces
US8990243B2 (en) Determining data location in a distributed data store
US10275489B1 (en) Binary encoding-based optimizations at datastore accelerators
US20150254325A1 (en) Managing a distributed database across a plurality of clusters
US9875259B2 (en) Distribution of an object in volatile memory across a multi-node cluster
US10860604B1 (en) Scalable tracking for database udpates according to a secondary index
US20200409584A1 (en) Load balancing for scalable storage system
US11126611B2 (en) Code dictionary generation based on non-blocking operations
US11216421B2 (en) Extensible streams for operations on external systems
US11232000B1 (en) Moving database partitions from replica nodes
Schales et al. FCCE: highly scalable distributed feature collection and correlation engine for low latency big data analytics
US10379780B2 (en) Statistics management for scale-out storage
US11620194B1 (en) Managing failover between data streams
WO2024096984A1 (en) Data deduplication for replication-based migration of virtual machines

Legal Events

Date Code Title Description
AS Assignment

Owner name: GARANTIA DATA, LTD., ISRAEL

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SHOOLMAN, YIFTACH;BENGAL, OFER;REEL/FRAME:030626/0525

Effective date: 20130605

AS Assignment

Owner name: REDIS LABS LTD., ISRAEL

Free format text: CHANGE OF NAME;ASSIGNOR:GARANTIA DATA LTD;REEL/FRAME:034746/0491

Effective date: 20140205

AS Assignment

Owner name: SILICON VALLEY BANK, MASSACHUSETTS

Free format text: SECURITY AGREEMENT;ASSIGNOR:REDIS LABS LTD;REEL/FRAME:036011/0849

Effective date: 20150623

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: SILICON VALLEY BANK, CALIFORNIA

Free format text: FIRST AMENDMENT TO INTELLECTUAL PROPERTY SECURITY AGREEMENT;ASSIGNOR:REDIS LABS LTD.;REEL/FRAME:042328/0088

Effective date: 20170424

AS Assignment

Owner name: SILICON VALLEY BANK, MASSACHUSETTS

Free format text: SECOND AMENDMENT TO INTELLECTUAL PROPERTY SECURITY AGREEMENT;ASSIGNOR:REDIS LABS LTD;REEL/FRAME:048301/0621

Effective date: 20190207

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

Year of fee payment: 4

AS Assignment

Owner name: REDIS LTD, ISRAEL

Free format text: CHANGE OF NAME;ASSIGNOR:REDIS LABS LTD;REEL/FRAME:057494/0270

Effective date: 20210610

AS Assignment

Owner name: SILICON VALLEY BANK, CALIFORNIA

Free format text: SECURITY INTEREST;ASSIGNOR:REDIS LTD;REEL/FRAME:062315/0312

Effective date: 20230106

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8