CN115062092A - Database access method, device, system and storage medium - Google Patents

Database access method, device, system and storage medium

Info

Publication number
CN115062092A
Authority
CN
China
Prior art keywords
database
protocol
partition
node
client
Prior art date
Legal status
Granted
Application number
CN202210956615.6A
Other languages
Chinese (zh)
Other versions
CN115062092B (en)
Inventor
赵百强
岑苏君
汪翔
沈春辉
张为
李飞飞
Current Assignee
Alibaba Cloud Computing Ltd
Original Assignee
Alibaba Cloud Computing Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Cloud Computing Ltd
Priority to CN202210956615.6A
Publication of CN115062092A
Application granted
Publication of CN115062092B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24 Querying
    • G06F16/245 Query processing
    • G06F16/2458 Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F16/2471 Distributed queries
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/2866 Architectures; Arrangements
    • H04L67/2895 Intermediate processing functionally located close to the data provider application, e.g. reverse proxies
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/08 Protocols for interworking; Protocol conversion

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Computer Security & Cryptography (AREA)
  • Computing Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Embodiments of the present application provide a database access method, device, system, and storage medium. In the embodiments, a protocol proxy node is added to the server side of a first database. The protocol proxy node can emulate the server side of another, second database. When a client of the second database, which supports a different client protocol, sends an operation request toward the first database, the protocol proxy node can use the emulated server side of the second database to convert the request so that it follows the client protocol of the first database. The converted request can then be used to access the first database. In this way, the first database becomes compatible with the client protocols of other databases, which helps improve its client compatibility.

Description

Database access method, device, system and storage medium
Technical Field
The present application relates to the field of database technologies, and in particular, to a database access method, device, system, and storage medium.
Background
With the development of information technology, data volumes are growing explosively, and databases of various models are being developed and used. This variety of databases, however, is inconvenient for software users: developers must build corresponding client software for each database in order to communicate with its server and access it, so adapting to databases of different models carries high learning costs and low development efficiency. Therefore, how to improve the client compatibility of a database, so that a user can access other databases with the client of an existing database, has become a technical problem to be solved in this field.
Disclosure of Invention
Aspects of the present application provide a database access method, device, system, and storage medium, so as to improve the client compatibility of a database and improve development efficiency.
An embodiment of the present application provides a database system, including: a server side of a first database and a client side of a second database; the first database and the second database support different client protocols;
the server side of the first database comprises: a protocol proxy node;
the protocol proxy node is used for simulating a server side of the second database; performing protocol conversion on a first operation request provided by a client of the second database and following a client protocol supported by the second database by using the simulated server of the second database to obtain a second operation request following the client protocol supported by the first database; operating the first database based on the second operation request to obtain an operation result corresponding to the second operation request; and providing the operation result to the client of the second database.
An embodiment of the present application further provides a database access method, including:
emulating a server side of a second database by using a protocol proxy node in the server side of the first database; the first database and the second database support different client protocols;
performing protocol conversion on a first operation request provided by a client of the second database and following a client protocol supported by the second database by using the simulated server of the second database to obtain a second operation request following the client protocol supported by the first database;
operating the first database based on the second operation request to obtain an operation result corresponding to the second operation request;
and providing the operation result to the client of the second database.
An embodiment of the present application further provides a computing device, including: a memory, a processor, and a communications component; wherein the memory is used for storing a computer program;
the processor is coupled to the memory for executing the computer program for performing the steps in the database access method described above.
Embodiments of the present application also provide a computer-readable storage medium storing computer instructions, which, when executed by one or more processors, cause the one or more processors to perform the steps of the database access method described above.
In the embodiments of the present application, a protocol proxy node is added to the server side of a first database. The protocol proxy node can emulate the server side of another, second database. When a client of the second database, which supports a different client protocol, sends an operation request toward the first database, the protocol proxy node can use the emulated server side of the second database to convert the request so that it follows the client protocol of the first database. The converted request can then be used to access the first database. In this way, the first database becomes compatible with the client protocols of other databases, which helps improve its client compatibility.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic structural diagram of a database system according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a second database provided in the embodiment of the present application;
fig. 3 is a schematic structural diagram of a server of a first database according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a database access process provided by an embodiment of the present application;
fig. 5 is a schematic structural diagram of another database system provided in the embodiment of the present application;
FIG. 6 is a flowchart of a verification process for an operation request according to an embodiment of the present application;
fig. 7 is a schematic flowchart of a database access method according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a computing device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In some embodiments of the present application, a protocol proxy node is added to the server side of a first database. The protocol proxy node can emulate the server side of another, second database. When a client of the second database, which supports a different client protocol, sends an operation request toward the first database, the protocol proxy node can use the emulated server side of the second database to convert the request so that it follows the client protocol of the first database. The converted request can then be used to access the first database. In this way, the first database becomes compatible with the client protocols of other databases, which helps improve its client compatibility. For developers, there is no need to develop a dedicated client for the first database, which improves database development efficiency.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
It should be noted that: like reference numerals refer to like objects in the following figures and embodiments, and thus, once an object is defined in one figure or embodiment, further discussion thereof is not required in subsequent figures and embodiments.
Fig. 1 is a schematic structural diagram of a database system according to an embodiment of the present application. As shown in fig. 1, the database system mainly includes: a server 20 of a first database 10 and a client 30 of a second database, where the first database 10 and the second database support different client protocols. In the embodiments of the present application, the data models supported by the first database and the second database may be the same or different, but their client protocols are different. The first database and the second database may be any two databases whose supported client protocols differ; the specific implementation forms of the first database 10 and the second database are not limited.
For example, in some embodiments, the first database 10 may be a multi-model database, a relational database, a MySQL database, or the like. A multi-model database supports data storage and management in multiple models. For example, some multi-model databases support unified access to and fusion processing of a wide variety of data, such as tables, time series, text, objects, streams, and spatial data, and are compatible with a variety of standard interfaces and integrate seamlessly with third-party ecosystem tools. A wide-table engine is a distributed non-relational (NoSQL) system designed for massive semi-structured and structured data. In this embodiment, the first database 10 may be the wide-table engine of a multi-model database, or the like.
Of course, the second database may also be a distributed NoSQL database. For example, the second database may be an HBase database or the like. The HBase database is a highly reliable, high-performance, column-oriented, scalable distributed NoSQL database built on HDFS, oriented to massive semi-structured and structured data.
The server side of a database is the computer device that manages the database, responds to clients' operation requests, and provides database operation services to users; it generally has the capability to carry the service and to guarantee it. The server may be a single server device, a cloud server array, or a virtual machine (VM) running in a cloud server array. In addition, the server device may also be another computing device with the corresponding service capability, such as a terminal device (running a service program) like a computer.
Because the client protocols supported by the first database 10 and the second database differ, the client 30 of the second database cannot communicate with the server 20 of the first database 10 and therefore cannot access the first database 10.
In this embodiment, the client 30 of the second database may be a client developed for the second database, or an open-source client such as an open-source HBase client. Therefore, if the first database can be accessed using the client 30 of the second database, the client-ecosystem compatibility of the first database is improved: developers do not need to develop dedicated clients for the first database 10, and learning and maintenance costs are reduced.
To solve the above technical problem, in the embodiment of the present application, a protocol proxy node 21 is added to the server 20 of the first database 10. The protocol proxy node 21 may be a software function module, a container, or a container group deployed on a physical machine or a virtual machine. There may be one or more protocol proxy nodes 21, where a plurality means two or more. Different protocol proxy nodes 21 may be deployed on the same physical machine or on different physical machines.
In the embodiment of the present application, as shown in fig. 1, the protocol proxy node 21 may emulate the server side of the second database. In some embodiments, the protocol proxy node 21 may reuse the code logic of the server side of the second database to implement this emulation. How the protocol proxy node 21 emulates the server side of the second database is described in detail below and is not repeated here.
In this embodiment, the protocol proxy node 21 emulates the server side of the second database and can therefore respond, as that server, to operation requests provided by the client 30 of the second database. Such an operation request follows the client protocol supported by the second database. To make the first database compatible with the client protocol of the second database, the protocol proxy node 21 may use the emulated server side of the second database to perform protocol conversion on the operation request, obtaining an operation request that conforms to the client protocol supported by the first database 10. For convenience of description and distinction, the operation request provided by the client 30 of the second database, which conforms to the client protocol of the second database, is defined as a first operation request; the operation request obtained after protocol conversion, which follows the client protocol supported by the first database 10, is defined as a second operation request.
Specifically, the protocol proxy node 21 may perform protocol parsing on the first operation request to obtain the request content it contains; the protocol proxy node 21 may then convert that request content into the data structure of the client protocol supported by the first database 10, obtaining a second operation request that conforms to that protocol.
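The two-step conversion described here (protocol parsing, then data-structure conversion) can be sketched as follows. This is a minimal illustrative sketch: the field names and the two protocols' request structures are invented for the example, since the patent does not specify concrete formats.

```python
def convert_request(first_request: dict) -> dict:
    """Convert a first operation request (second database's protocol) into a
    second operation request (first database's protocol). All field names
    here are hypothetical."""
    # Step 1: protocol parsing -- extract the request content.
    table = first_request["table"]
    op = first_request["operation"]          # e.g. "get" or "put"
    row_key = first_request["row"]
    # Step 2: data-structure conversion into the first database's protocol
    # (assumed here to use different field names and byte-string keys).
    second_request = {
        "target_table": table,
        "action": {"get": "READ", "put": "WRITE"}[op],
        "key": row_key.encode("utf-8"),
    }
    return second_request
```

The proxy would then execute `second_request` against the first database and translate the result back along the same path.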
Further, the protocol proxy node 21 may use the second operation request to operate the first database 10, obtain the operation result corresponding to the second operation request, and provide the operation result to the client 30 of the second database, so that the client 30 of the second database can access the first database.
In the embodiment of the application, a protocol proxy node is added to the server side of the first database. The protocol proxy node can emulate the server side of another, second database. When a client of the second database, which supports a different client protocol, sends an operation request toward the first database, the protocol proxy node can use the emulated server side of the second database to convert the request so that it follows the client protocol of the first database. The converted request can then be used to access the first database. In this way, the first database becomes compatible with the client protocols of other databases, which helps improve its client compatibility. For developers, there is no need to develop a dedicated client for the first database, which improves database development efficiency.
For the above embodiment, the way the protocol proxy node 21 emulates the server side of the second database differs depending on the structure of the second database's engine. In some embodiments, the protocol proxy node 21 may directly reuse the code logic of the second database's server side to implement the emulation; that is, the protocol proxy node 21 includes nodes corresponding to the server side of the second database. For example, the second database may be a distributed database, such as a distributed NoSQL database. As shown in fig. 2, the server side of a distributed database may adopt a master/slave management architecture. Accordingly, the server side may include: a management and control node (Master), a plurality of partition service nodes (Region Servers), a distributed coordination service node (ZooKeeper), and the like, where a plurality means two or more.
A partition (Region) is obtained by dividing a data table (Table) of the database. Each data table initially has only one partition; as data is inserted, the management and control node can split the table horizontally according to certain rules to form two partitions. As more and more rows are added to the data table, more and more partitions are created, which may not fit on one physical machine and may instead be distributed across multiple physical machines. A data table can thus be split horizontally into several partitions according to row keys (Row Keys); each partition has a start row (Start Row) and an end row (Stop Row) that delimit its data range.
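The start-row/stop-row addressing described above can be illustrated as follows. The partition layout and names here are hypothetical; an empty stop row stands for the unbounded end of the key space, a common convention in range-partitioned stores.

```python
def locate_partition(partitions, row_key):
    """Return the name of the partition whose [start_row, stop_row) range
    covers row_key. An empty stop_row means the range is unbounded above
    (the table's last partition)."""
    for start, stop, name in partitions:
        if row_key >= start and (stop == "" or row_key < stop):
            return name
    raise KeyError(row_key)

# A table split horizontally into three partitions by row key:
parts = [("", "g", "region-1"), ("g", "p", "region-2"), ("p", "", "region-3")]
```

Each lookup is a simple range comparison, which is why row-key design determines how evenly load spreads across partitions.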
The management and control node is responsible for managing the plurality of partition service nodes to achieve load balance among them. It also manages and allocates partitions (Regions), for example allocating the new partitions produced when a partition splits, and migrating the partitions of an exited partition service node to other partition service nodes. The partition service nodes may also host and manage the metadata partitions (actually stored on the distributed storage nodes) of the data tables in the database, and so on.
A partition service node stores and manages its local partitions; it is responsible for reading and writing the database's storage nodes and for managing the data in the data tables. A client of the database can obtain metadata from a partition service node and thereby find the partition service node where a given row-key range (i.e., partition) is located, in order to read and write data.
The distributed coordination service node (such as a ZooKeeper node) can store the metadata of the whole database cluster and the cluster's state information, and implements active/standby switchover of the management and control node.
For the database engine structure of the second database shown in fig. 2, the process by which a client of the second database performs a read or write operation mainly includes:
Step 1: the client 30 of the second database reads the address of the metadata partition from the distributed coordination service node.
Step 2: the client 30 of the second database reads the metadata partition from the partition service node at that address. The metadata partition records the address mapping relationship of each partition, that is, on which partition service node each partition is located.
Step 3: the client obtains from the metadata partition the address of the partition where the data to be read or written is located, that is, the partition service node serving that partition.
Step 4: the client 30 of the second database communicates with that partition service node, which performs the read or write operation on the data.
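The four-step addressing flow above can be sketched in miniature. All node names and the data layout are invented for illustration; real systems resolve network addresses and issue RPCs where this sketch does dictionary lookups.

```python
# Step-1 source: the coordinator knows where the metadata partition lives.
coordination_service = {"meta_partition_address": "region-server-1"}

# Partition service nodes: one hosts the metadata partition, one hosts data.
region_servers = {
    "region-server-1": {"meta": {"user_table,rowA": "region-server-2"}},
    "region-server-2": {"user_table": {"rowA": "value-1"}},
}

def read(table, row):
    # Step 1: read the metadata partition's address from the coordinator.
    meta_addr = coordination_service["meta_partition_address"]
    # Steps 2-3: read the metadata partition to find which partition
    # service node holds the partition covering this row.
    data_addr = region_servers[meta_addr]["meta"][f"{table},{row}"]
    # Step 4: read the data from that partition service node.
    return region_servers[data_addr][table][row]
```

Note that only steps 1-3 touch metadata; once the client caches the mapping, subsequent reads go straight to step 4.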
Of course, the first database may also be a distributed database, such as a Lindorm database. The database engine of the first database may have the same or a similar architecture to that of the second database, so the management and service logic of the first database's server side is the same as or similar to that of the second database's server side. In embodiments where the first database is a distributed database, its data tables may likewise be divided into partitions and stored in a distributed manner on multiple physical machines. Accordingly, as shown in fig. 3, the server side of the first database may include a partition service node 22 for managing the partitions of the first database. For a description of the distributed database architecture, refer to the description of fig. 2, which is not repeated here.
In the embodiment of the present application, for convenience of description and distinction, the partition service node 22 of the first database is defined as the first partition service node 22, and the partition service node of the second database as the second partition service node. Under the distributed architecture shown in fig. 2, the server side of the second database includes: a second partition service node and a distributed coordination service node. The protocol proxy node 21 corresponds to the first partition service node 22; for example, protocol proxy nodes 21 and first partition service nodes 22 may be in one-to-one correspondence. A corresponding protocol proxy node 21 and first partition service node 22 may be deployed on the same physical machine or on different physical machines. Preferably, they are deployed on the same physical machine, so that the protocol proxy node 21 can forward operation requests directly to the partition service node 22 on the same machine, avoiding cross-machine access, reducing the network hop count, and helping optimize database performance.
In this embodiment of the present application, to implement the emulation of the server side of the second database, as shown in fig. 3, the protocol proxy node 21 may include: a client proxy component 211 and a distributed coordination service proxy component 212. The client proxy component 211 may be used to emulate a partition service node of the second database (i.e., a second partition service node); the distributed coordination service proxy component 212 may be used to emulate the distributed coordination service node of the second database.
Specifically, the client proxy component 211 may emulate a partition of the second database according to the metadata information of the partitions of the data table to be accessed by the first operation request and the client protocol of the second database, obtaining a virtual partition whose data structure satisfies the client protocol supported by the second database. The virtual partition is stored on the protocol proxy node 21 where the client proxy component 211 is located. By hosting and managing virtual partitions, the client proxy component 211 realizes the emulation of the second database's partition service nodes.
In addition to data tables, the first database includes a metadata table. Each row of the metadata table records the information of one partition; its row key may include a table name, a start row key, timestamp information, and so on. The metadata table records the address mapping relationship of the partitions, that is, the address of the partition service node serving each partition. When a data table is particularly large, it is split into many partitions, and the metadata table storing the metadata information of these partitions also becomes very large; the metadata table then also needs to be divided into a plurality of metadata partitions (Meta Regions), each of which records the metadata of part of the data tables. A metadata partition is thus obtained by dividing the metadata table of the first database. The first partition service node 22 of the first database may also host and manage the metadata partitions of the first database.
The metadata information of a partition may include: a partition identification and a partition address. The partition identification may include: the table name of the data table to which the partition belongs, the partition name (Region Name), and the row-key (Row Key) range of the partition. The partition address is used to locate the partition, that is, on which partition service node the partition resides, and may be represented by the name of the physical machine on which the partition is located (i.e., the hostname) and a port number.
Because the protocol proxy node 21 and the first partition service node 22 both belong to the server side of the first database and follow the same protocol, the client proxy component 211 in the protocol proxy node 21 can query the metadata partitions stored on the first partition service node 22 to obtain the metadata information of the partitions of the first database, and from it obtain the metadata information of the partitions of the data table to be accessed. Specifically, the client proxy component 211 may query the metadata partitions of the first database using the partition identification, in particular the identifier of the data table to be accessed contained in it, to obtain the metadata information of the partitions of that data table.
Based on the metadata information of the partitions of the data table to be accessed, when emulating a partition of the second database according to that metadata information and the client protocol of the second database, the client proxy component 211 may modify the partition address to the address of the protocol proxy node where the client proxy component 211 is located, and modify the partition name of the first database into a data format recognizable by the client of the second database according to the client protocol supported by the second database, thereby obtaining a virtual partition (Virtual Region). In the embodiment where the corresponding protocol proxy node 21 and first partition service node 22 are deployed on the same physical machine, they share the same hostname, so when generating a virtual partition only the port number of the partition address needs to be changed to the port number of the protocol proxy node 21 on that physical machine.
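The virtual-partition generation described above can be sketched as follows, assuming the same-machine deployment (same hostname, only the port changes). The metadata field names and the byte re-encoding of the region name are illustrative assumptions, not formats given in the patent.

```python
def make_virtual_partition(meta: dict, proxy_port: int) -> dict:
    """Rewrite a first-database partition's metadata into a virtual
    partition addressed at the co-located protocol proxy node: keep the
    hostname, substitute the proxy's port, and re-encode the region name
    into the form the second database's client expects (assumed here to
    be a UTF-8 byte string)."""
    host, _old_port = meta["address"].split(":")
    return {
        "region_name": meta["region_name"].encode("utf-8"),  # assumed format
        "address": f"{host}:{proxy_port}",
    }

# A hypothetical partition record served by the first partition service node:
meta = {"region_name": "user_table,g,1660000000", "address": "host-1:8020"}
```

Because only the port differs, a client that dials the virtual partition's address reaches the proxy on the same machine as the real partition, which is what avoids the extra network hop.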
The distributed coordination service proxy component 212 may emulate the distributed coordination service node of the second database. In particular, it can reuse the code logic of the second database's distributed coordination service node to implement that node's functionality.
Based on the simulated partition service node and the simulated distributed coordination service node of the second database, the client 30 of the second database can read and/or write the first database. Accordingly, the first operation request initiated by the client 30 of the second database may be a read operation request and/or a write operation request, and the read operation request and/or the write operation request carries the partition identifier.
For the client 30 of the second database, the address of the target virtual partition corresponding to the partition identifier, i.e. the protocol proxy node at which the target virtual partition is located, may be obtained. The embodiment of the present application does not limit the specific implementation form in which the client 30 of the second database obtains the address of the first protocol proxy node 21a where the target virtual partition is located.
In this embodiment, each time the client 30 of the second database acquires the address of a virtual partition, it may cache that address, that is, cache the mapping relationship between the virtual partition and the protocol proxy node. When an operation request is initiated subsequently, the cached address mapping relation can be used to address the virtual partition, which helps improve addressing speed. Correspondingly, when initiating a read operation request and/or a write operation request, the client 30 of the second database may perform matching in the cached address mapping relations according to the partition identifier; if the address mapping relation of the target virtual partition is matched, the address of the target virtual partition can be obtained from it.
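The client-side caching step can be sketched as below. The class and parameter names are hypothetical; `resolve` stands in for the metadata query round trip to a protocol proxy node described in the following paragraphs.

```python
class VirtualPartitionCache:
    """Client-side cache of virtual-partition addresses (illustrative sketch)."""
    def __init__(self, resolve):
        self._resolve = resolve   # metadata query to a protocol proxy node
        self._addrs = {}          # partition identifier -> proxy node address

    def lookup(self, partition_id):
        # Cache hit: the metadata round trip is skipped entirely.
        if partition_id not in self._addrs:
            self._addrs[partition_id] = self._resolve(partition_id)
        return self._addrs[partition_id]

calls = []
cache = VirtualPartitionCache(lambda pid: calls.append(pid) or f"proxy-for-{pid}")
addr1 = cache.lookup("t1,p0")   # miss: resolved via metadata query
addr2 = cache.lookup("t1,p0")   # hit: served from the local cache
```

The second lookup answers from the cache, which is the addressing-speed benefit the text describes; later sections cover how stale entries are invalidated.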
In other embodiments, since the metadata partition records the address mapping relations of the partitions, i.e. the addresses of the partition service nodes where the partitions are located, the client 30 of the second database may not cache the address of the target virtual partition locally. In that case, the client 30 of the second database may initiate a metadata query request to a protocol proxy node 21 based on the read operation request and/or the write operation request (corresponding to step 3 in fig. 4). The metadata query request may include the partition identifier carried by the read operation request and/or the write operation request. In the embodiment of the present application, for convenience of description and distinction, the protocol proxy node where the target virtual partition is located is defined as the first protocol proxy node 21a, and the protocol proxy node that receives the metadata query request is defined as the second protocol proxy node 21b. The first protocol proxy node 21a and the second protocol proxy node 21b may be the same protocol proxy node or different protocol proxy nodes. The second protocol proxy node 21b may be any one of the plurality of protocol proxy nodes, or the protocol proxy node where the metadata partition of the target virtual partition is located.
For the case where the address of the metadata partition is obtained, the client 30 of the second database may cache the obtained address of the metadata partition; and initiates a metadata query request to the second protocol proxy node 21b to which the address of the metadata partition points, based on the address of the metadata partition.
In embodiments where the client 30 of the second database does not cache the address of the metadata partition, the client 30 of the second database may obtain the address of the metadata partition before initiating the metadata query request to the second protocol proxy node 21b. Specifically, the client 30 of the second database may access the distributed coordination service broker component based on the connection address of the distributed coordination service broker component, and acquire the address of the metadata partition of the virtual partition from it (corresponding to step 1 in fig. 4); a metadata query request may then be initiated based on that address (corresponding to step 2 of fig. 4). The address of the metadata partition of the virtual partition is the address of the protocol proxy node where the metadata partition is located, that is, the address of the second protocol proxy node 21b.
In some embodiments, to implement load balancing of traffic of the first partitioned service node of the first database, as shown in fig. 4 and 5, the service end of the first database may further include: a load balancing node 23. The load balancing node 23 is configured to distribute the operation requests to the partitions of the first database to different first partition service nodes 22 in a balanced manner.
For the embodiment in which the first partition service node 22 storing the partition and the protocol proxy node 21 storing the virtual partition generated based on the metadata information of the partition are located in the same physical machine, since the load balancing node 23 of the first database has distributed the partitions uniformly on different first partition service nodes 22, the virtual partitions are also distributed uniformly on different protocol proxy nodes 21, and the load balancing of the protocol proxy node 21 is realized. Each protocol proxy node 21 will cache the metadata information of all the virtual partitions corresponding to each data table locally.
In an embodiment of the present application, the distributed coordination service broker component 212 may determine the address of the load balancing node 23 as the address of the metadata partition of the virtual partition; and returns the address of the load balancing node 23 to the client 30 of the second database (corresponding to step 1 of fig. 4). Accordingly, the client 30 of the second database may initiate a metadata query request to the load balancing node based on the address of the load balancing node (corresponding to step 2 of fig. 4).
Further, the load balancing node 23 may randomly select one protocol proxy node from the plurality of protocol proxy nodes as the second protocol proxy node 21b, and forward the metadata query request to the second protocol proxy node 21b (corresponding to step 3 of fig. 4). The metadata query request includes the partition identifier.
Correspondingly, the second protocol proxy node 21b may use the partition identifier to obtain the metadata information of the target virtual partition corresponding to that identifier, and determine, based on this metadata information, the first protocol proxy node 21a where the target virtual partition is located from the plurality of protocol proxy nodes. Specifically, the second protocol proxy node 21b obtains the metadata information of the virtual partitions of the data table to be accessed by using the identifier of the data table contained in the partition identifier, and determines the first protocol proxy node 21a where the target virtual partition is located from the plurality of protocol proxy nodes based on the partition name, the row key range, and the metadata information of the virtual partitions of the data table to be accessed. Further, the address of the first protocol proxy node 21a may be returned to the client 30 of the second database as the address of the target virtual partition. Accordingly, the client 30 of the second database receives the address of the first protocol proxy node 21a returned by the second protocol proxy node 21b as the address of the target virtual partition.
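The row-key-range lookup the second protocol proxy node performs can be sketched as below. This assumes (as is common in range-partitioned stores, though the patent does not spell it out) that each partition owns the keys from its start key up to the next partition's start key.

```python
import bisect

def find_target_proxy(row_key, ranges):
    """ranges: list of (start_key, proxy_addr) sorted by start_key; each
    partition owns keys from its start_key up to the next start_key."""
    starts = [s for s, _ in ranges]
    # Index of the last partition whose start_key is <= row_key.
    i = bisect.bisect_right(starts, row_key) - 1
    return ranges[i][1]

# Hypothetical virtual-partition metadata for one data table: two
# partitions, splitting the key space at "m".
ranges = [("", "proxy-a"), ("m", "proxy-b")]
```

A query for any row key thus resolves to exactly one proxy node address, which is returned to the client as the address of the target virtual partition.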
Further, the client 30 of the second database may initiate a read operation request and/or a write operation request to the first protocol proxy node 21a where the target virtual partition is located based on the address of the target virtual partition (corresponding to step 4 of fig. 4).
Accordingly, the first protocol proxy node 21a may perform protocol conversion on the read operation request and/or the write operation request according to the client protocol supported by the first database, so as to obtain a target operation request satisfying that protocol as the second operation request (corresponding to step 5 in fig. 4). Further, the first protocol proxy node 21a may perform read and/or write operations on the target partition corresponding to the partition identifier based on the target operation request, so as to obtain the operation result corresponding to the target operation request.
Specifically, the first protocol proxy node 21a may provide the target operation request to a target partition service node corresponding to the target virtual partition. The target partition service node corresponding to the target virtual partition may be a first partition service node where a partition of a first database virtualized by the target virtual partition is located. In some embodiments, the protocol agent node where the target virtual partition is located and the target partition service node corresponding to the target virtual partition may be located in the same physical machine, and accordingly, the target partition service node corresponding to the target virtual partition is the first partition service node located in the same physical machine as the protocol agent node where the target virtual partition is located. The target partition service node can perform read and/or write operations on the target virtual partition based on the target operation request to obtain an operation result corresponding to the target operation request.
Further, the target partition service node may return the operation result of the target operation request to the first protocol proxy node 21a, and the first protocol proxy node 21a may return the operation result to the client 30 of the second database, realizing the reading and/or writing of the first database by the client 30 of the second database. Specifically, the first protocol proxy node 21a may perform protocol conversion on the operation result according to the client protocol supported by the second database, to obtain an operation result conforming to that protocol, and return it to the client 30 of the second database.
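The two-way protocol conversion at the first protocol proxy node can be summarized in one round-trip helper. The converter and request shapes below are toy stand-ins, not the real client protocols of either database.

```python
def handle_operation(request, to_first_protocol, execute, to_second_protocol):
    """Round trip at the first protocol proxy node: convert the request
    into the first database's client protocol, execute it on the target
    partition, then convert the result back for the second database's
    client (all converter names are illustrative)."""
    target_request = to_first_protocol(request)   # second -> first protocol
    result = execute(target_request)              # read/write on target partition
    return to_second_protocol(result)             # result: first -> second protocol

# Toy converters standing in for real protocol codecs.
reply = handle_operation(
    {"op": "get", "row": "r1"},
    to_first_protocol=lambda r: ("GET", r["row"]),
    execute=lambda t: {"r1": "v1"}[t[1]],         # stand-in partition store
    to_second_protocol=lambda v: {"value": v},
)
```

The client of the second database only ever sees requests and results in its own protocol; the conversion is confined to the proxy node.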
For the embodiment in which the protocol proxy node where the target virtual partition is located and the target partition service node corresponding to the target virtual partition are located in the same physical machine, forwarding the target operation request and its result between them is communication between different logical nodes within the same physical machine. This avoids cross-machine forwarding of traffic, reduces the number of network hops, and helps improve the access efficiency of the database.
In practical applications, problems may arise from an abnormality of a protocol proxy node 21 or from changes to the partitions of the first database. If a protocol proxy node 21 becomes abnormal while the address of a virtual partition still points to it, the client 30 of the second database will fail to connect and trigger a re-query of the partition's address; if the virtual partition is not updated, the returned address is still that of the abnormal protocol proxy node 21, so the operation request fails after the client 30 of the second database retries a certain number of times. Another problem arises when a partition of the first database is moved, split, or merged. For example, suppose partition 1 of the first database corresponds to virtual partition 1 of a protocol proxy node 21, and partition 1 is split into partition 2 and partition 3, which are distributed across different first partition service nodes 22 to achieve load balancing. If virtual partition 1 of the protocol proxy node 21 cannot sense the change of partition 1 in time, operation requests destined for partition 2 and partition 3 are still sent to virtual partition 1, and the load balancing effect is lost.
To solve the above problems, in the embodiment of the present application, the protocol proxy node 21 may periodically query the first partition service node 22 according to a set query period to obtain the metadata partition of the first database. Optionally, the protocol proxy node 21 may periodically query the metadata partition of the first database in the first partition service node 22 at the granularity of a data table. Further, the protocol proxy node 21 may obtain the metadata information of the partitions of the first database from the metadata partition obtained in the current query period, and update the virtual partitions in the protocol proxy node 21 according to that metadata information.
Optionally, the protocol proxy node 21 may update the address of the partition of the first database acquired in the current query cycle to the address of the protocol proxy node 21; and the name of the partition of the first database acquired in the current query period is modified into a data format recognizable by the client 30 of the second database, so as to obtain an updated virtual partition. Further, the protocol proxy node 21 replaces the virtual partition of the data table in the cache with the updated virtual partition.
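The periodic refresh loop can be sketched as a background worker. This is a minimal sketch under the assumption that `fetch_partitions` queries the first partition service node for the current metadata and `rebuild_virtual` replaces the cached virtual partitions; both are hypothetical names.

```python
import threading
import time

def start_refresh(period_s, fetch_partitions, rebuild_virtual, stop_event):
    """Periodically re-read partition metadata and rebuild the locally
    cached virtual partitions until stop_event is set (illustrative)."""
    def loop():
        # Event.wait returns False on timeout (run one refresh) and True
        # once the event is set (exit the loop).
        while not stop_event.wait(period_s):
            metas = fetch_partitions()    # query the metadata partition
            rebuild_virtual(metas)        # swap in the updated virtual partitions
    worker = threading.Thread(target=loop, daemon=True)
    worker.start()
    return worker

# Demo: run a fast refresh loop briefly, then stop it.
calls = []
stop = threading.Event()
worker = start_refresh(0.01, lambda: calls.append(1) or [], lambda metas: None, stop)
time.sleep(0.05)
stop.set()
worker.join(timeout=1.0)
```

In a real deployment the period would be much longer than the demo's 10 ms, trading metadata freshness against query load on the first partition service nodes.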
In the process of updating the virtual partitions, it is also necessary to ensure that the protocol proxy node 21 corresponding to the updated address of a virtual partition is a non-abnormal node. Based on this, in the embodiment of the present application, as shown in fig. 4, a liveness detection agent component 213 may also be provided for each protocol proxy node. The liveness detection agent components 213 of the plurality of protocol proxy nodes 21 may probe one another to monitor the health status of the plurality of protocol proxy nodes 21.
Alternatively, the liveness detection agent component 213 may detect the liveness of the multiple protocol proxy nodes using a gossip protocol. Specifically, the liveness detection agent component 213 in a protocol proxy node 21 may randomly communicate with other protocol proxy nodes 21 according to a set detection period (e.g., 1 second) and exchange health status information of the protocol proxy nodes 21. If a protocol proxy node 21 fails to answer the exchange requests of other protocol proxy nodes 21 for N consecutive times, it is determined to be an abnormal protocol proxy node 21, where N is an integer not less than 2. The protocol proxy node that requested the exchange of health status information with the abnormal protocol proxy node 21 then transmits the abnormality information of the abnormal protocol proxy node 21 to the other protocol proxy nodes 21.
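The "abnormal after N consecutive non-answers" rule can be sketched as a small tracker kept by each liveness detection agent component. The class is illustrative; the real component would also gossip its conclusions to peers as described above.

```python
class LivenessTracker:
    """Marks a peer abnormal after n consecutive missed gossip
    exchanges (n >= 2), per the rule described in the text."""
    def __init__(self, n=3):
        self.n = n
        self.misses = {}   # peer address -> consecutive missed exchanges

    def record(self, peer, answered):
        # Any successful exchange resets the consecutive-miss counter.
        self.misses[peer] = 0 if answered else self.misses.get(peer, 0) + 1

    def is_abnormal(self, peer):
        return self.misses.get(peer, 0) >= self.n

tracker = LivenessTracker(n=3)
for _ in range(3):
    tracker.record("proxy-2", answered=False)
```

Counting only consecutive misses (and resetting on any answer) keeps a briefly slow node from being ejected by a single dropped exchange.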
A newly added protocol proxy node 21 exchanges information with a configured seed protocol proxy node, so that the newly added protocol proxy node 21 can quickly acquire the health status of the protocol proxy nodes in the cluster; the seed protocol proxy node, in turn, gradually sends the information of the newly joined protocol proxy node 21 to the other protocol proxy nodes. Thus, each protocol proxy node 21 maintains a list of healthy protocol proxy nodes according to its liveness detection agent component 213. Through the liveness detection agent components 213, all protocol proxy nodes 21 in the cluster learn of abnormal protocol proxy nodes and newly added protocol proxy nodes, and update their maintained lists of normal protocol proxy nodes in time.
Based on the list of normal protocol proxy nodes maintained by the protocol proxy node 21, when updating a virtual partition according to the metadata information of the partitions of the first database acquired in the current query period, the protocol proxy node 21 can judge, according to that list, whether the protocol proxy node corresponding to the address to be updated of the virtual partition is a normal protocol proxy node. If it is, the address of the virtual partition is updated to the address to be updated, i.e. the address of that protocol proxy node.
Correspondingly, if the protocol proxy node corresponding to the address to be updated is an abnormal protocol proxy node, the target protocol proxy node can be selected from the list of normal protocol proxy nodes. In the embodiment of the present application, a specific implementation manner of selecting a target protocol proxy node from a list of normal protocol proxy nodes is not limited.
In some embodiments, a normal protocol proxy node may be randomly selected from the list of normal protocol proxy nodes as the target protocol proxy node. In other embodiments, to load balance virtual partitions in a protocol proxy node, a load balancing algorithm may be used to select a target protocol proxy node from a list of normal protocol proxy nodes. Among them, the load balancing algorithm includes but is not limited to:
(1) Round-robin load balancing: the virtual partitions are allocated to different normal protocol proxy nodes one by one in chronological order. If a protocol proxy node becomes abnormal, it can be automatically removed.
(2) Weighted load balancing: a weight specifies the polling probability; the weight of a protocol proxy node is proportional to its access ratio, which suits the case where the performance of the protocol proxy nodes is uneven. The higher the weight of a protocol proxy node, the greater its probability of being accessed.
(3) Hash algorithm: a hash calculation is performed on the name of the virtual partition, and each virtual partition request is allocated to a normal protocol proxy node according to the hash result of the virtual partition's name.
(4) Fair method: virtual partitions are allocated according to the response time of the protocol proxy nodes; those with shorter response time are allocated with priority.
In addition to the load balancing algorithms described above, embodiments of the present application may also use a consistent hashing algorithm to select the target protocol proxy node. Specifically, a hash calculation may be performed on the name of the virtual partition to obtain a hash result of the name; a target protocol proxy node is then selected from the list of normal protocol proxy nodes according to that hash result. Specifically, the hash result of the virtual partition's name may be divided by the total number of normal protocol proxy nodes to obtain a remainder X; the Xth normal protocol proxy node in the list is then taken from the list as the target protocol proxy node.
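The remainder-based selection just described can be sketched directly; the choice of md5 as the hash function is an arbitrary stand-in, as the patent does not name one.

```python
import hashlib

def pick_by_name_hash(virtual_partition_name, normal_nodes):
    """Hash the virtual partition's name, divide by the number of normal
    protocol proxy nodes, and index the list with the remainder, as
    described above (md5 is an illustrative hash choice)."""
    h = int(hashlib.md5(virtual_partition_name.encode()).hexdigest(), 16)
    return normal_nodes[h % len(normal_nodes)]

nodes = ["proxy-1", "proxy-2", "proxy-3"]
```

A given virtual partition name always maps to the same node while the list of normal nodes is unchanged; note that if the list length changes, the remainder changes for most names, which is the weakness a full consistent-hash ring would mitigate.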
The implementation manner of selecting the target protocol proxy node provided in the above embodiment is merely an example, and is not limited. After determining the normal target protocol proxy node, the protocol proxy node 21 may write the address of the target protocol proxy node 21 as the address of the virtual partition into the virtual partition to obtain an updated virtual partition. Therefore, the information of the virtual partitions is updated in time, and the virtual partitions of the abnormal protocol proxy nodes can be dispersed to other normal protocol proxy nodes in time.
The above embodiment ensures the validity of the virtual partitions of the protocol proxy node 21 by periodically updating them so that they stay adapted to the partitions of the first database 10. However, after the client 30 of the second database looks up the metadata information of a virtual partition, it caches the address mapping relation of that virtual partition locally, so the client 30 of the second database also needs to become aware of changes to the virtual partition in time. Since the read operation request and/or the write operation request initiated by the client 30 of the second database carries the partition identifier, in some embodiments of the present application, after receiving a read operation request and/or a write operation request, the protocol proxy node 21 checks the request to determine whether it is a request that this protocol proxy node 21 should execute.
Specifically, for the first protocol proxy node 21a that receives the read operation request and/or the write operation request, before performing protocol conversion on the read operation request and/or the write operation request, it may be verified whether the read operation request and/or the write operation request is a request that needs to be executed by the first protocol proxy node 21 a.
Specifically, before the first protocol proxy node 21a performs protocol conversion on the read operation request and/or the write operation request, and provided that the first protocol proxy node 21a stores the target virtual partition corresponding to the partition identifier carried in the request, the health status of the third protocol proxy node, i.e. the node where the target virtual partition recorded by the first protocol proxy node 21a is located, may be determined according to the list of normal protocol proxy nodes. The third protocol proxy node and the first protocol proxy node 21a may be the same node or different nodes, depending on whether the target virtual partition has been updated.
For the case where the client 30 of the second database caches the address of the target virtual partition, the first protocol proxy node 21a is the protocol proxy node corresponding to that cached address. In this case, the information of the target virtual partition stored by the first protocol proxy node 21a may have changed without the client 30 of the second database perceiving the change, so the address of the target virtual partition stored by the first protocol proxy node 21a may differ from the address cached by the client 30 of the second database; that is, the third protocol proxy node is a different node from the first protocol proxy node 21a. For the case where the client 30 of the second database does not cache the address of the target virtual partition, the first protocol proxy node 21a is the protocol proxy node determined through the flow of fig. 4; however, after the second protocol proxy node 21b returns the address of the first protocol proxy node 21a to the client 30 of the second database, and before the first protocol proxy node 21a receives the read operation request and/or the write operation request, the target virtual partition may also have changed. This, too, can result in the address of the target virtual partition stored by the first protocol proxy node 21a not being the address of the first protocol proxy node 21a, and in the third protocol proxy node being a different node from the first protocol proxy node 21a.
Further, in the case that the health status of the third protocol proxy node is abnormal, or the health status of the third protocol proxy node is normal and the third protocol proxy node is the first protocol proxy node 21a, the first protocol proxy node 21a executes the read operation request and/or the write operation request. The step of the first protocol proxy node 21a executing the read operation request and/or the write operation request includes: according to a client protocol supported by a first database, performing protocol conversion on the read operation request and/or the write operation request to obtain a target operation request meeting the client protocol supported by the first database, and taking the target operation request as a second operation request; and performing read and/or write operation and the like on the target partition corresponding to the partition identifier based on the target operation request.
An abnormal health status of the third protocol proxy node indicates that the third protocol proxy node cannot execute the read operation request and/or the write operation request. Since all the protocol proxy nodes are equivalent in nature and can process read and write operation requests, the request can then be executed by the protocol proxy node that received it, i.e. the first protocol proxy node 21a. For the case where the health status of the third protocol proxy node is normal and the third protocol proxy node is the same node as the first protocol proxy node 21a, the read operation request and/or the write operation request was sent to the correct protocol proxy node, and therefore the first protocol proxy node 21a can execute it.
Accordingly, if the health status of the third protocol proxy node is normal but the third protocol proxy node is not the same node as the first protocol proxy node 21a, the information of the virtual partition stored by the protocol proxy node differs from that cached by the client 30 of the second database, and the client 30 of the second database needs to query the address of the target virtual partition again. In this case, the first protocol proxy node 21a may notify the client 30 of the second database to re-query the address of the target virtual partition, and asynchronously update its own virtual partitions. The process by which the client 30 of the second database re-queries the address of the target virtual partition may refer to the related content of fig. 4 and is not repeated here.
Of course, in practical applications, the first protocol proxy node 21a may not store the target virtual partition at all. If the first protocol proxy node 21a stores no virtual partition of the data table to be accessed, this indicates that the protocol proxy nodes have not yet generated virtual partitions for the partitions of that data table. The data table to be accessed is the data table corresponding to the data table identifier in the partition identifier carried by the read operation request and/or the write operation request; the data table identifier may be represented by the name of the data table, i.e., the table name. Since all the protocol proxy nodes are equivalent, in this case the first protocol proxy node 21a asynchronously updates its own virtual partitions and executes the read operation request and/or the write operation request.
In some embodiments, the first protocol proxy node 21a may store virtual partitions of the data table to be accessed but not the target virtual partition, which means that the information of the virtual partitions stored by the protocol proxy node differs from that cached by the client 30 of the second database. In this case, the first protocol proxy node 21a may likewise notify the client 30 of the second database to re-query the address of the target virtual partition, and asynchronously update its own virtual partitions.
To facilitate understanding of the verification process for the read/write operation request, an exemplary explanation is provided below in conjunction with the verification flowchart shown in fig. 6. The checking process is mainly executed by the first protocol proxy node 21a, as shown in fig. 6, the checking process mainly includes the following steps:
s1, obtaining the partition identification of the data table to be accessed from the received read operation request and/or write operation request.
S2, judging whether the first protocol proxy node stores a virtual partition corresponding to the table name of the data table to be accessed contained in the partition identifier. If yes, go to step S3; if not, go to step S8.
S3, searching, according to the partition name carried by the read operation request and/or the write operation request, whether the first protocol proxy node stores the information of the target virtual partition corresponding to the partition name; if found, go to step S4; if not found, go to step S7.
S4, judging, according to the list of normal protocol proxy nodes, whether the third protocol proxy node where the target virtual partition stored by the first protocol proxy node is located is a normal protocol proxy node; if yes, go to step S5; if not, go to step S6.
S5, judging whether the third protocol proxy node and the first protocol proxy node are the same node; if yes, go to step S6; if not, go to step S7.
S6, the first protocol proxy node executes the read operation request and/or the write operation request.
And S7, informing the client of the second database to inquire the address of the target virtual partition again and asynchronously updating the virtual partition of the first protocol proxy node.
S8, the first protocol proxy node asynchronously updates the virtual partition of the first protocol proxy node and executes the read operation request and/or the write operation request.
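Steps S1 to S8 above can be condensed into a single dispatch function. The dictionary shape of `proxy` is an illustrative stand-in for the proxy node's cached state; the returned action strings name the outcomes S6, S7, and S8.

```python
def check_request(proxy, table, partition_name):
    """Returns the action for a read/write request, following steps
    S1-S8 of the flow above (structures are illustrative)."""
    if table not in proxy["virtual"]:                     # S2: no virtual partition
        return "update_async_and_execute"                 # S8
    third = proxy["virtual"][table].get(partition_name)   # S3: look up target vp
    if third is None:
        return "notify_requery_and_update"                # S7
    if third not in proxy["normal_nodes"]:                # S4: third node abnormal
        return "execute"                                  # S6
    if third == proxy["self"]:                            # S5: same node
        return "execute"                                  # S6
    return "notify_requery_and_update"                    # S7

proxy = {
    "self": "proxy-1",
    "normal_nodes": {"proxy-1", "proxy-2"},
    "virtual": {"orders": {"p0": "proxy-1", "p1": "proxy-3", "p2": "proxy-2"}},
}
```

For instance, a request for `("orders", "p1")` lands on an abnormal third node (`proxy-3`), so per S4/S6 the receiving node executes it itself, while `("orders", "p2")` points at a healthy but different node, triggering the S7 re-query notification.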
Through the verification of the read operation request and/or the write operation request, the protocol proxy node can update the information of the virtual partition, and the client 30 of the second database can be informed to re-search the address of the target virtual partition and update the cache of the client 30 in time, so that the conditions of request failure and/or load imbalance caused by the abnormal protocol proxy node or the change of the partition of the first database are avoided.
In the embodiment of the present application, in addition to performing read/write operations on the first database 10, the client 30 of the second database may perform Data Definition Language (DDL) operations on it. A DDL operation is a database operation at the granularity of a data table; for a distributed database, it is executed by the master node (Master). Therefore, in this embodiment, the client agent component 211 may further simulate the management node of the second database, reusing the code logic of that management node to execute DDL requests initiated by the client 30 of the second database. In this case, the first operation request initiated by the client 30 of the second database is a DDL request. To implement DDL operations of the client 30 of the second database on the first database, the client agent component 211 may obtain the DDL request initiated by the client 30 and, according to the client protocol supported by the first database, convert it into a target DDL request conforming to that protocol, which serves as the second operation request.
Based on the target DDL request, the client agent component 211 may perform the DDL operation on the first database and obtain the operation result corresponding to the DDL request.
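A minimal sketch of this conversion step follows, with entirely hypothetical field names, since the patent does not specify the wire formats of either protocol:

```python
def convert_ddl_request(ddl_request):
    """Translate a DDL request that follows the second database's client
    protocol into a target DDL request for the first database."""
    # Parse the request content out of the second database's format.
    content = {
        "operation": ddl_request["op"],            # e.g. 'CREATE_TABLE'
        "table": ddl_request["table"],
        "columns": ddl_request.get("families", []),
    }
    # Re-encode it using the data structures of the first database's
    # client protocol; this dict merely stands in for that protocol.
    return {"protocol": "first_db", "ddl": content}
```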
In the embodiment of the present application, given the load balancing node 23 of the first database, the client 30 of the second database may request the address of the management node from the distributed coordination service agent component 212 before initiating a DDL request. The distributed coordination service agent component 212 may provide the address of the load balancing node 23 to the client 30 of the second database as the address of the management node. The client 30 may then initiate the DDL request to the load balancing node 23 based on that address. Further, the load balancing node 23 may randomly select a target protocol proxy node from the plurality of protocol proxy nodes and provide the DDL request to it.
The client agent component 211 of the target protocol proxy node performs the above-mentioned step of converting the DDL request into a target DDL request conforming to the client protocol supported by the first database, and then performs the step of executing the DDL operation on the first database based on the target DDL request to obtain the corresponding operation result, thereby implementing the DDL operation of the client 30 of the second database on the first database 10.
According to the embodiment of the application, the server side of the first database is made compatible with the client protocol of the second database, so a user can access the first database directly with the client of the second database. For such a user, the first database can be used as if it were the second database simply by changing the database connection address; no code or dependencies need to change, which improves the usability of the first database. Because this compatibility is implemented by the protocol proxy node on the server side of the first database, when a new version of the first database is released only the server needs to be upgraded; the client code does not change, and the client need not be restarted.
In addition to the system embodiments described above, the embodiments of the present application also provide a database access method, and the following provides an exemplary description of the database access method provided in the embodiments of the present application.
Fig. 7 is a schematic flowchart of a database access method according to an embodiment of the present application. As shown in fig. 7, the database access method mainly includes the following steps:
701. Simulate a server of the second database by using a protocol proxy node in the server of the first database; the first database and the second database support different client protocols.
702. Using the simulated server of the second database, perform protocol conversion on a first operation request that is provided by the client of the second database and follows the client protocol supported by the second database, so as to obtain a second operation request that follows the client protocol supported by the first database.
703. Operate the first database based on the second operation request to obtain an operation result corresponding to the second operation request.
704. Provide the operation result to the client of the second database.
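The four steps can be sketched as a single proxy class; the dict-backed store and field names are illustrative assumptions only, not the patent's actual implementation:

```python
class ProtocolProxyNode:
    """Sketch of steps 701-704: the proxy emulates the second database's
    server and bridges the two client protocols."""

    def __init__(self, first_db):
        # Step 701: the proxy lives inside the first database's server;
        # a plain dict stands in for the first database here.
        self.first_db = first_db

    def handle(self, first_request):
        # Step 702: convert the first operation request into a second
        # operation request following the first database's protocol.
        second_request = {"op": first_request["op"],
                          "key": first_request["row"]}
        # Step 703: operate the first database with the converted request.
        if second_request["op"] == "get":
            result = self.first_db.get(second_request["key"])
        else:
            self.first_db[second_request["key"]] = first_request["value"]
            result = "ok"
        # Step 704: return the operation result to the client.
        return result
```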
Because the client protocols supported by the first database and the second database are different, the client of the second database cannot communicate with the server of the first database, and the client of the second database cannot access the first database.
In this embodiment of the application, the client of the second database may be a client that has been developed for the second database, and may also be an open-source client, such as an open-source HBase client. Therefore, if the first database can be accessed by the client 30 of the second database, the client ecological compatibility of the first database can be improved, so that developers do not need to develop specific clients for the first database, and the learning and maintenance cost can be reduced.
In order to solve the above technical problem, in the embodiment of the present application, a protocol proxy node is added at a server of a first database. For the description of the protocol proxy node, reference may be made to the related contents of the above system embodiments, and details are not described herein again. In step 701, a protocol proxy node may be utilized to emulate a server of a second database. In some embodiments, the protocol agent node may reuse code logic of the server of the second database to implement emulation of the server of the second database. The specific implementation of how the protocol proxy node simulates the server of the second database will be described in detail below, and will not be described herein again.
In this embodiment, the protocol proxy node simulates the server of the second database and can therefore respond to operation requests provided by the client of the second database, which follow the client protocol supported by the second database. To make the first database compatible with that protocol, in step 702 the simulated server of the second database may be used to perform protocol conversion on the operation request provided by the client of the second database, yielding an operation request that complies with the client protocol supported by the first database. In the embodiment of the present application, for convenience of description and distinction, the operation request provided by the client of the second database, which conforms to the client protocol of the second database, is defined as the first operation request; the operation request obtained by protocol-converting the first operation request, which follows the client protocol supported by the first database, is defined as the second operation request.
Specifically, the protocol analysis may be performed on the first operation request to obtain request content included in the first operation request; further, according to the data structure of the client protocol supported by the first database, the data structure conversion may be performed on the request content included in the first operation request to obtain a second operation request conforming to the client protocol supported by the first database.
Further, in step 703, the second operation request may be used to operate the first database and obtain the corresponding operation result; and in step 704, the operation result is provided to the client of the second database, thereby completing the access of that client to the first database.
In the embodiment of the application, a protocol proxy node is added at the server side of the first database. The protocol proxy node can simulate the server of another, second database, so that for an operation request issued by a client of a second database supporting a different client protocol, the simulated server of the second database can convert the request into the client protocol of the first database. The converted request can then be used to access the first database, making the first database compatible with the client protocols of other databases and improving its client compatibility. Developers need not develop dedicated clients for the first database, which improves database development efficiency.
In the above embodiment, the way in which the protocol proxy node simulates the server of the second database varies with the structure of the second database's engine. In some embodiments, the protocol proxy node may directly reuse the code logic of the server of the second database to implement the simulation; that is, the protocol proxy node contains a node corresponding to the server of the second database. For example, the second database may be a distributed database, such as a distributed NoSQL database. For a description of the structure of a distributed database, refer to the related content of fig. 2, which is not repeated here.
Of course, the first database may also be a distributed database, such as a Lindorm database. The database engine of the first database and the database engine of the second database may have the same or similar architecture, so that the management and service logic of the service end of the first database to the first database is the same or similar to that of the service end of the second database. For the embodiment in which the first database is a distributed database, the data table of the first database may also be divided into partitions, and distributed storage is performed on a plurality of physical machines. Accordingly, the server side of the first database may include: and the partition service node is used for managing the partition of the first database. For the description of the architecture of the distributed database, reference may be made to the description of fig. 2, which is not described herein again.
In the embodiment of the present application, for convenience of description and differentiation, a partition service node of a first database is defined as a first partition service node; and defining the partition service node of the second database as a second partition service node. Based on the above architecture of the distributed database shown in fig. 2, the server of the second database includes: a second partition service node and a distributed coordination service node. Wherein the protocol proxy node corresponds to the first partitioned service node. For example, the protocol proxy node may have a one-to-one correspondence with the first partitioned service node. The corresponding protocol agent node and the first partition service node can be deployed in the same physical machine or different physical machines. Preferably, the corresponding protocol agent node and the first partition service node are deployed in the same physical machine, so that the subsequent protocol agent node can directly forward the operation request to the partition service node of the same physical machine, cross-machine access can be avoided, network hop count is reduced, and optimization of database performance is facilitated.
In this embodiment of the application, in order to implement the simulation of the server of the second database, the protocol proxy node may simulate the partition of the second database according to the metadata information of the partition of the to-be-accessed data table requested by the first operation request and the client protocol of the second database, so as to obtain a virtual partition whose data structure meets the client protocol supported by the second database. The virtual partition is stored on the protocol proxy node, and simulation of the partition service node of the second database is achieved.
The first database includes a metadata table in addition to the data tables. For the description of the metadata table, reference may be made to the related contents of the above system embodiments, and details are not repeated here. The metadata partition of the first database is likewise stored and managed by the first partition service node.
Because the protocol agent node and the first partition service node both belong to the service end of the first database and follow the same protocol, the protocol agent node can inquire the metadata partition stored by the first partition service node and acquire the metadata information of the partition of the first database; and acquiring the metadata information of the partition of the data table to be accessed from the metadata information of the partition of the first database. Specifically, the partition identifier may be utilized to query the metadata partition stored in the first database, and obtain metadata information of the partition of the data table to be accessed.
Based on the metadata information of the partitions of the data table to be accessed, when simulating the partitions of the second database according to that metadata and the client protocol of the second database, the partition address of the data table to be accessed may be modified into the address of the protocol proxy node, and the partition name of the first database may be modified into a data format recognizable by the client of the second database according to the client protocol supported by the second database, so as to obtain a virtual partition (Virtual Region). For the embodiment in which the corresponding protocol proxy node and first partition service node are deployed in the same physical machine, the two share the same host name; when generating the virtual partition, only the port number of the partition address of the data table to be accessed needs to be changed to the port number of the protocol proxy node on that physical machine.
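Under the co-deployment assumption above (proxy and partition service node on the same host), generating a virtual partition reduces to rewriting the port and re-encoding the name; the field names and name encoding below are hypothetical:

```python
def build_virtual_partition(partition_meta, proxy_port):
    """Build a virtual partition from a first-database partition's
    metadata: keep the host, substitute the proxy's port, and re-encode
    the name into a format the second database's client can parse."""
    host = partition_meta["address"].split(":")[0]
    return {
        # Same physical machine, so only the port number changes.
        "address": f"{host}:{proxy_port}",
        # Hypothetical name encoding: "<table>,<start key>".
        "name": f"{partition_meta['table']},{partition_meta['start_key']}",
    }
```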
In an embodiment of the present application, the protocol proxy node may also be utilized to emulate a distributed coordination service node of the second database. Specifically, the code logic of the distributed coordination service node of the second database can be reused to realize the function of the distributed coordination service node.
Based on the simulated partition service node and distributed coordination service node of the second database, read and/or write operations of the client of the second database on the first database can be realized. Accordingly, the first operation request initiated by the client of the second database may be a read operation request and/or a write operation request, which includes the partition identifier of the partition to be accessed.
For the client of the second database, the address of the target virtual partition corresponding to the partition identifier, that is, in which protocol proxy node the target virtual partition is located, may be obtained. In the embodiment of the present application, a specific implementation form of the client of the second database obtaining the address of the first protocol proxy node where the target virtual partition is located is not limited.
In this embodiment of the present application, the client of the second database may cache the address of the virtual partition each time the client acquires the address of the virtual partition, that is, cache the mapping relationship between the virtual partition and the protocol proxy node. Therefore, when the operation request is initiated subsequently, the address mapping relation of the cached virtual partition can be utilized to address the virtual partition, which is beneficial to improving the addressing speed. Correspondingly, when a client of the second database initiates a read operation request and/or a write operation request, matching can be performed in the address mapping relation of the cached virtual partition according to the partition identifier; if the address mapping relation of the target virtual partition is matched, the address of the target virtual partition can be obtained from the address mapping relation of the target virtual partition.
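The client-side cache described here behaves like a simple memo over the metadata lookup; this sketch assumes a callback in place of the real metadata query:

```python
class SecondDbClient:
    """Sketch of the second database client's cache of virtual-partition
    addresses (partition identifier -> protocol proxy node address)."""

    def __init__(self):
        self.cache = {}

    def locate(self, partition_id, query_metadata):
        addr = self.cache.get(partition_id)
        if addr is None:
            # Cache miss: fall back to a metadata query, then cache the
            # returned address mapping for later requests.
            addr = query_metadata(partition_id)
            self.cache[partition_id] = addr
        return addr
```

Subsequent requests for the same partition identifier are answered from the cache, which is exactly the addressing-speed benefit the paragraph above describes.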
In other embodiments, since the metadata partition records the address mapping relationship of each partition, i.e., the address of the partition service node hosting the partition, the client of the second database may choose not to cache the address of the target virtual partition locally. Instead, it may initiate a metadata query request to a protocol proxy node based on the read operation request and/or the write operation request. The metadata query request may include the partition identifier carried by the read operation request and/or the write operation request. In the embodiment of the application, for convenience of description and distinction, the protocol proxy node where the target virtual partition is located is defined as the first protocol proxy node, and the protocol proxy node that receives the metadata query request is defined as the second protocol proxy node. The first protocol proxy node and the second protocol proxy node may be the same protocol proxy node or different protocol proxy nodes. The second protocol proxy node may be any one of the plurality of protocol proxy nodes, or the protocol proxy node where the metadata partition of the target virtual partition is located.
For the case where the address of the metadata partition is obtained, the client of the second database may cache the obtained address of the metadata partition; and according to the address of the metadata partition, a metadata query request is sent to a second protocol proxy node pointed by the address of the metadata partition.
If the client of the second database has not cached the address of the metadata partition, it may obtain that address before initiating the metadata query request to the second protocol proxy node. Specifically, the client of the second database may access the distributed coordination service agent component based on its connection address, and obtain from it the address of the metadata partition of the virtual partition. Correspondingly, the protocol proxy node may, in response to the address query request sent by the client of the second database, obtain the address of the metadata partition of the virtual partition and return it to the client. The metadata query request may then be initiated based on that address. The address of the metadata partition of the virtual partition is the address of the protocol proxy node where the metadata partition is located, i.e., the address of the second protocol proxy node.
In some embodiments, to balance the traffic of the first partition service nodes of the first database, the server side of the first database may further include a load balancing node, which distributes operation requests for the partitions of the first database evenly across the different first partition service nodes.
For the embodiment that the first partition service node storing the partition and the protocol proxy node storing the virtual partition generated based on the metadata information of the partition are located in the same physical machine, the load balancing node of the first database distributes the partitions uniformly on different first partition service nodes, so that the virtual partitions are also uniformly distributed on different protocol proxy nodes, and the load balancing of the protocol proxy nodes is realized. Each protocol proxy node will cache the metadata information of all the virtual partitions corresponding to each data table locally.
In the embodiment of the application, the protocol proxy node can be used for determining that the address of the load balancing node is the address of the metadata partition of the virtual partition; and returning the address of the load balancing node to the client of the second database. Accordingly, the client of the second database may initiate a metadata query request to the load balancing node based on the address of the load balancing node.
Further, the load balancing node may randomly select one protocol proxy node from the plurality of protocol proxy nodes as the second protocol proxy node, and forward the metadata query request to it. The metadata query request includes the partition identifier.
Correspondingly, the second protocol proxy node can use the partition identifier to acquire the metadata information of the target virtual partition, and determine from that metadata the first protocol proxy node where the target virtual partition is located among the plurality of protocol proxy nodes; that is, the address information contained in the metadata information of the target virtual partition is the address of the first protocol proxy node. Further, the address of the first protocol proxy node may be returned to the client of the second database as the address of the target virtual partition, and the client of the second database may receive it as such.
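The two hops (load balancer to a random second protocol proxy node, then metadata lookup of the first protocol proxy node) might be sketched as follows, with hypothetical data shapes:

```python
import random

def route_metadata_query(proxy_nodes, virtual_partitions, partition_id,
                         rng=random):
    """The load balancing node picks a random second protocol proxy node;
    that node resolves the partition identifier to the address of the
    first protocol proxy node via the virtual partition's metadata."""
    second_node = rng.choice(proxy_nodes)        # any node can answer
    meta = virtual_partitions[partition_id]      # target virtual partition
    return second_node, meta["address"]          # first proxy node's address
```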
Further, the client of the second database may initiate a read operation request and/or a write operation request to the first protocol proxy node where the target virtual partition is located based on the address of the target virtual partition.
Correspondingly, for the server side of the first database, the first protocol proxy node can perform protocol conversion on the read operation request and/or the write operation request according to the client protocol supported by the first database, so as to obtain a target operation request meeting the client protocol supported by the first database, and the target operation request is used as a second operation request. Further, the first protocol proxy node may perform read and/or write operations on the target partition corresponding to the partition identifier based on the target operation request, so as to obtain an operation result corresponding to the target operation request.
Specifically, the first protocol proxy node may provide the target operation request to the target partition service node corresponding to the target virtual partition, that is, the first partition service node where the partition of the first database virtualized by the target virtual partition is located. In some embodiments, the protocol proxy node where the target virtual partition is located and the target partition service node may be located in the same physical machine; in that case, the target partition service node is the first partition service node co-located with that protocol proxy node. The target partition service node can perform read and/or write operations on the target partition based on the target operation request to obtain the corresponding operation result.
Further, the target partition service node may return an operation result of the target operation request to the first protocol agent node; the first protocol proxy node can return the operation result to the client of the second database, and read and/or write of the client of the second database to the first database is realized. Specifically, the first protocol proxy node may perform protocol conversion on the operation result of the target operation request according to the client protocol supported by the second database to obtain an operation result conforming to the client protocol supported by the second database; and returning the operation result following the client protocol supported by the second database to the client of the second database, so as to realize the reading and/or writing of the first database by the second database.
For the embodiment that the protocol agent node where the target virtual partition is located and the target partition service node corresponding to the target virtual partition are located in the same physical machine, the protocol agent node and the target partition service node forward the target operation request and the result corresponding to the target operation request, and belong to communication between different logic nodes in the same physical machine, cross-machine forwarding of traffic can be avoided, network hop count is reduced, and improvement of access efficiency of a database is facilitated.
In practical applications, problems may arise from an exception of a protocol proxy node or from changes to the partitions of the first database. If a protocol proxy node becomes abnormal while the address of a virtual partition still points to it, the client of the second database will fail to connect and trigger a re-query of the partition address; if the virtual partition is not updated, the returned address remains that of the abnormal protocol proxy node, so the client's operation fails after a certain number of retries. Another problem arises when partitions of the first database are moved, split, or merged: if the virtual partitions on the protocol proxy nodes cannot sense the change in time, operation requests destined for the changed partitions are still sent to the original virtual partitions, and load balancing is not achieved.
To solve the above problem, in this embodiment, the protocol proxy node may periodically query the first partition service node according to a set query period to obtain the metadata partition of the first database. Optionally, the protocol proxy node may periodically query the metadata partition of the first database in the first partition service node at the granularity of a data table. Further, the protocol proxy node may obtain the metadata information of the partitions of the first database from the metadata partition obtained in the current query period, and update its virtual partitions according to that metadata information.
Optionally, the protocol proxy node may update the address of the partition of the first database acquired in the current query cycle to the address of the protocol proxy node; and modifying the partition name of the first database acquired in the current query period into a data format recognizable by the client of the second database to obtain an updated virtual partition. Further, the protocol proxy node replaces the virtual partition of the data table in the cache with the updated virtual partition.
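One way to picture the per-cycle refresh, under the same assumptions as before (co-deployed nodes, dict-shaped metadata):

```python
def refresh_virtual_partitions(proxy, fetch_metadata_partition, proxy_port):
    """Each query period, re-read the first database's metadata partition
    and rebuild the proxy's cached virtual partitions in one swap."""
    updated = {}
    for part in fetch_metadata_partition():      # current partition metadata
        host = part["address"].split(":")[0]
        updated[part["name"]] = {
            "address": f"{host}:{proxy_port}",   # point at the proxy's port
            "name": part["name"],
        }
    proxy["virtual_partitions"] = updated        # replace the cached copy
    return updated
```

Replacing the whole cache at once means partitions that were moved, split, or merged since the last cycle simply disappear from or appear in the new map.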
In the process of updating a virtual partition, it must also be ensured that the protocol proxy node corresponding to the updated virtual partition's address is a non-abnormal node. Based on this, in the embodiment of the present application, a liveness detection agent component may also be set up for each protocol proxy node; this component probes the protocol proxy nodes so as to monitor their health states.
Optionally, the liveness detection agent component may probe the plurality of protocol proxy nodes via a gossip protocol. For a specific implementation of probing the protocol proxy nodes, reference may be made to the related contents of the above system embodiment, and details are not repeated here.
Based on the list of normal protocol proxy nodes maintained by the protocol proxy node, in the process of updating a virtual partition according to the metadata information of the partitions of the first database acquired in the current query period, it can be judged, from that list and that metadata information, whether the protocol proxy node corresponding to the address to be written into the virtual partition is a normal protocol proxy node. If it is, the address of the virtual partition is updated to that address, i.e., the address of the corresponding protocol proxy node.
Correspondingly, if the protocol proxy node corresponding to the address to be updated is an abnormal protocol proxy node, the target protocol proxy node can be selected from the list of normal protocol proxy nodes. In the embodiment of the present application, a specific implementation manner of selecting a target protocol proxy node from a list of normal protocol proxy nodes is not limited.
In some embodiments, a normal protocol proxy node may be randomly selected from the list of normal protocol proxy nodes as the target protocol proxy node. In other embodiments, to balance virtual partitions across the protocol proxy nodes, a load balancing algorithm may be used to select the target protocol proxy node from the list of normal protocol proxy nodes; optionally, a consistent hashing algorithm may be adopted. Specifically, a hash may be computed over the name of the virtual partition, and the target protocol proxy node is then selected from the list of normal protocol proxy nodes by consistent hashing on that hash result.
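A standard consistent-hash ring with virtual nodes would realize this selection; the replica count and hash function below are implementation choices, not specified by the patent:

```python
import hashlib
from bisect import bisect_right

def pick_target_node(virtual_partition_name, healthy_nodes, replicas=16):
    """Pick a normal protocol proxy node for a virtual partition by
    consistent hashing on the partition's name."""
    def h(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)
    # Place each healthy node on the ring several times for smoothness.
    ring = sorted((h(f"{node}#{i}"), node)
                  for node in healthy_nodes for i in range(replicas))
    keys = [k for k, _ in ring]
    # The partition maps to the first ring position at or after its hash.
    idx = bisect_right(keys, h(virtual_partition_name)) % len(ring)
    return ring[idx][1]
```

Because only a failed node's ring positions disappear, most virtual partitions keep their previously assigned nodes when the list of normal nodes changes, which is the dispersal property the next paragraph relies on.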
The implementation manner of selecting the target protocol proxy node provided in the above embodiment is merely an example, and is not limited. After determining the normal target protocol proxy node, the protocol proxy node may write the address of the target protocol proxy node as the address of the virtual partition into the virtual partition to obtain an updated virtual partition. Therefore, the information of the virtual partitions is updated in time, and the virtual partitions of the abnormal protocol proxy nodes can be dispersed to other normal protocol proxy nodes in time.
The above-described embodiments can ensure the validity of the virtual partitions of the protocol proxy nodes by periodically updating them, so that the virtual partitions of the protocol proxy nodes remain adapted to the partitions of the first database. However, after the client of the second database looks up the metadata information of a virtual partition, it caches the address mapping relationship of the virtual partition locally. Therefore, the client of the second database also needs to perceive changes of the virtual partitions in time. Since a read operation request and/or write operation request initiated by the client of the second database carries a partition identifier, in order to enable the client of the second database to perceive changes of the virtual partitions in time, in some embodiments of the present application, after receiving the read operation request and/or the write operation request, the protocol proxy node needs to verify the request and determine whether it is a request that should be executed by this protocol proxy node.
Specifically, for the first protocol proxy node that receives the read operation request and/or the write operation request, before performing protocol conversion on the read operation request and/or the write operation request, it may be verified whether the read operation request and/or the write operation request is a request that needs to be executed by the first protocol proxy node.
Specifically, before the first protocol proxy node performs protocol conversion on the read operation request and/or the write operation request, and under the condition that the first protocol proxy node stores the target virtual partition corresponding to the partition identifier carried by the request, the health state of the third protocol proxy node where the stored target virtual partition is located may be determined according to the list of normal protocol proxy nodes. The third protocol proxy node and the first protocol proxy node may be the same node or different nodes, depending on whether the target virtual partition has been updated.
Further, the first protocol agent node executes a read operation request and/or a write operation request when the health status of the third protocol agent node is abnormal, or the health status of the third protocol agent node is normal and the third protocol agent node is the first protocol agent node. The step of executing the read operation request and/or the write operation request by the first protocol proxy node comprises the following steps: performing protocol conversion on the read operation request and/or the write operation request according to the client protocol supported by the first database to obtain a target operation request meeting the client protocol supported by the first database, and using the target operation request as a second operation request; and performing read and/or write operation and the like on the target partition corresponding to the partition identifier based on the target operation request.
An abnormal health state of the third protocol proxy node indicates that the third protocol proxy node cannot execute the read operation request and/or the write operation request. Since the information of the virtual partitions stored by the plurality of protocol proxy nodes is the same, the protocol proxy node that received the request, i.e. the first protocol proxy node, can execute the read operation request and/or the write operation request. In the case where the health state of the third protocol proxy node is normal and the third protocol proxy node is the same node as the first protocol proxy node, the read operation request and/or the write operation request was sent to the correct protocol proxy node, and therefore the first protocol proxy node can execute it.
Correspondingly, for the case that the health status of the third protocol proxy node is normal, but the third protocol proxy node is not the same node as the first protocol proxy node, it indicates that the information of the virtual partition stored by the protocol proxy node is different from the information of the virtual partition stored by the client of the second database, and the client of the second database is required to search the address of the target virtual partition again. Therefore, in the case where the health status of the third protocol proxy node is normal, but the third protocol proxy node is not the same node as the first protocol proxy node, the first protocol proxy node may notify the client of the second database to re-query the address of the target virtual partition and asynchronously update the virtual partition of the first protocol proxy node. The process of the client of the second database for re-searching the address of the target virtual partition may refer to the related content in fig. 4, which is not described herein again.
Of course, in practical applications, the first protocol proxy node may not store the target virtual partition, which indicates that the information of the virtual partition stored by the protocol proxy node differs from the information of the virtual partition cached by the client of the second database. However, if the first protocol proxy node does not store any partition of the data table to be accessed, it indicates that the protocol proxy nodes have not yet generated virtual partitions for the partitions of that data table, because the virtual partition information stored by all protocol proxy nodes is the same. The data table to be accessed is the data table corresponding to the data table identifier in the partition identifier carried by the read operation request and/or the write operation request. Based on this, if the first protocol proxy node does not store any virtual partition of the data table to be accessed, the first protocol proxy node asynchronously updates its virtual partitions and executes the read operation request and/or the write operation request.
In some embodiments, the first protocol proxy node may store virtual partitions of the data table to be accessed but not the target virtual partition, which indicates that the information of the virtual partition stored by the protocol proxy node deviates from the information of the virtual partition cached by the client of the second database. Therefore, in this case, the first protocol proxy node may notify the client of the second database to re-query the address of the target virtual partition, and asynchronously update the virtual partitions of the first protocol proxy node.
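The verification logic of the preceding paragraphs can be collected into a single decision routine. This is a sketch under stated assumptions, not the patent's implementation: the function and verdict names are invented, and partition identifiers are assumed to embed the data table identifier as a prefix so that "stores a partition of the data table to be accessed" can be tested:

```python
from enum import Enum

class Verdict(Enum):
    EXECUTE = "execute"                    # convert the protocol and run the request
    EXECUTE_AFTER_ASYNC_UPDATE = "async"   # no virtual partitions for the table yet
    RETRY_CLIENT = "retry"                 # client cache is stale: re-query the address
                                           # (the node also updates asynchronously)

def verify_request(first_node: str, partition_id: str, table_id: str,
                   stored_partitions: dict, normal_nodes: set) -> Verdict:
    """Decide how the first protocol proxy node handles an incoming request.

    stored_partitions: locally stored mapping partition_id -> owning node
    address (i.e. the third protocol proxy node for that virtual partition).
    """
    target_node = stored_partitions.get(partition_id)
    if target_node is not None:
        # Target virtual partition is stored locally: check the third node.
        if target_node not in normal_nodes or target_node == first_node:
            return Verdict.EXECUTE
        # Healthy but a different node: the client's cached mapping is stale.
        return Verdict.RETRY_CLIENT
    # No target virtual partition stored. If no partition of the data table
    # to be accessed is stored either, no virtual partition was generated yet.
    if not any(pid.startswith(table_id) for pid in stored_partitions):
        return Verdict.EXECUTE_AFTER_ASYNC_UPDATE
    # The table is known but this partition is not: mappings have diverged.
    return Verdict.RETRY_CLIENT
```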
Through the verification of the read operation request and/or the write operation request, the protocol proxy node can update the information of the virtual partition, and can also inform the client of the second database to re-search the address of the target virtual partition and update the cache of the client in time, so that the conditions of request failure and/or load imbalance caused by the abnormal protocol proxy node or the change of the partition of the first database are avoided.
In the embodiment of the present application, in addition to performing read-write operations on the first database, the client of the second database may also perform data definition language (DDL) operations on it. A DDL operation is a database operation at the granularity of a data table, and for a distributed database, DDL operations are executed by the management and control node of the distributed database. Therefore, in this embodiment of the present application, the protocol agent component may further simulate the management and control node of the second database, and may reuse the code logic of the management and control node of the second database to execute a DDL request initiated by the client of the second database. In this embodiment, the first operation request initiated by the client of the second database is a DDL request. To implement the DDL operation of the client of the second database on the first database, the DDL request initiated by the client of the second database may be obtained and converted, according to the client protocol supported by the first database, into a target DDL request following that protocol, which serves as the second operation request.
For a client agent component in a protocol agent node, based on a target DDL request, a DDL operation may be performed on a first database to obtain an operation result corresponding to the DDL request.
In this embodiment, based on the load balancing node of the first database, the client of the second database may request the address of the management and control node from the distributed coordination service agent component before initiating the DDL request. The distributed coordination service agent component may provide the address of the load balancing node as the address of the management and control node to the client of the second database. Accordingly, the client of the second database may initiate the DDL request to the load balancing node based on the address of the load balancing node. Further, the load balancing node may randomly select a target protocol proxy node from the plurality of protocol proxy nodes and provide the DDL request to the target protocol proxy node.
The target protocol proxy node executes the step of converting the DDL request into a target DDL request following the client protocol supported by the first database; the client agent component executes the step of performing the DDL operation on the first database based on the target DDL request to obtain an operation result corresponding to the DDL operation, thereby enabling the client of the second database to perform DDL operations on the first database.
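The DDL path described above can be sketched end to end. The function names and the dictionary request shape are hypothetical; the conversion and execution steps are injected as callables because their concrete form depends on the two databases' client protocols:

```python
import random

def handle_ddl_flow(client_ddl_request: dict, proxy_nodes: list,
                    convert_ddl, execute_ddl):
    """End-to-end sketch of a DDL request from the second database's client.

    The load balancing node (whose address was handed out to the client as
    the management and control node address) randomly picks a target protocol
    proxy node; that node converts the request to the first database's client
    protocol, and the client agent component executes it.
    """
    target_proxy = random.choice(proxy_nodes)          # load balancing node
    target_ddl_request = convert_ddl(client_ddl_request)  # protocol conversion
    return target_proxy, execute_ddl(target_ddl_request)  # DDL on first database
```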
According to the embodiment of the application, the server side of the first database is compatible with the client protocol of the second database, so that a user can directly use the client of the second database to access the first database. For the user of the client of the second database, the first database can be used as the second database by only changing the connection address of the database, the code and the dependence do not need to be changed, and the use convenience of the first database can be improved. Because the protocol proxy node is compatible with the server of the first database in the embodiment of the application, when the new version of the first database is released, only the server needs to be upgraded, the code of the client does not need to be changed, and the client does not need to be restarted.
It should be noted that, the executing subjects of the steps of the method provided in the foregoing embodiments may be the same device, or different devices may also be used as the executing subjects of the method. For example, the execution subjects of steps 701 and 702 may be device a; for another example, the execution subject of step 701 may be device a, and the execution subject of step 702 may be device B; and so on.
In addition, in some of the flows described in the above embodiments and the drawings, a plurality of operations occurring in a specific order are included, but it should be clearly understood that these operations may be executed out of order or in parallel as they appear herein, and the sequence numbers of the operations, such as 701, 702, etc., are merely used to distinguish various operations, and the sequence numbers themselves do not represent any execution order. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel.
Accordingly, embodiments of the present application also provide a computer-readable storage medium storing computer instructions, which, when executed by one or more processors, cause the one or more processors to perform the steps of the database access method described above.
Fig. 8 is a schematic structural diagram of a computing device according to an embodiment of the present application. As shown in fig. 8, the computing device includes: a memory 80a, a processor 80b, and a communication component 80c. The memory 80a is used for storing computer programs.
The processor 80b is coupled to the memory and communication component 80c for executing computer programs for: simulating a server of a second database by using a protocol agent node in the server of the first database; the first database and the second database support different client protocols; performing protocol conversion on a first operation request which is provided by a client of a second database and follows a client protocol supported by the second database by using the simulated server of the second database to obtain a second operation request which follows the client protocol supported by the first database; operating the first database based on the second operation request to obtain an operation result corresponding to the second operation request; the results of the operation are provided to the client of the second database through the communication component 80 c.
In some embodiments, the processor 80b, when simulating the server of the second database by using the protocol proxy node in the server of the first database, is specifically configured to: simulating the partition of the second database by using the protocol proxy node according to the metadata information of the partition of the to-be-accessed data table of the first operation request and the client protocol of the second database to obtain a virtual partition of which the data structure meets the client protocol supported by the second database; a distributed coordination service node of the second database is emulated with the protocol proxy node to manage addresses of metadata partitions of the first database.
Optionally, the processor 80b is specifically configured to, when simulating the partition of the second database: modifying the address of the partition of the data table to be accessed into the address of the protocol proxy node by using the protocol proxy node; and according to a client protocol supported by the second database, modifying the name of the partition of the first database into a data format which can be recognized by the client of the second database so as to obtain the virtual partition.
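The two modifications just described — rewriting the partition address to the protocol proxy node's address and renaming the partition into a client-recognizable format — can be sketched as follows. The helper name `rename_for_client` is hypothetical: the concrete encoding depends on the client protocol supported by the second database:

```python
def build_virtual_partition(partition_meta: dict, proxy_address: str,
                            rename_for_client) -> dict:
    """Simulate a second-database partition from first-database metadata.

    partition_meta: metadata of a partition of the data table to be accessed.
    rename_for_client: rewrites the first database's partition name into a
    data format the second database's client can recognize.
    """
    return {
        # Clients will route requests for this partition to the proxy node.
        "address": proxy_address,
        "name": rename_for_client(partition_meta["name"]),
    }
```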
In some embodiments, the protocol agent node is a plurality; the first operation request includes: a read operation request and/or a write operation request; the read operation request and/or the write operation request include: and (5) identifying the partitions. Correspondingly, when the simulated server of the second database is used to perform protocol conversion on the first operation request provided by the client of the second database and complying with the client protocol supported by the second database, the processor 80b is specifically configured to: acquiring a read operation request and/or a write operation request provided by a client of a second database by using a first protocol proxy node in a plurality of protocol proxy nodes; the first protocol proxy node identifies the address of a corresponding target virtual partition for the partition; performing protocol conversion on the read operation request and/or the write operation request according to the client protocol supported by the first database to obtain a target operation request meeting the client protocol supported by the first database, and using the target operation request as a second operation request;
accordingly, when the processor 80b operates the first database based on the second operation request, it is specifically configured to: and utilizing the first protocol proxy node to perform read and/or write operation on the target partition corresponding to the partition identification based on the target operation request.
Optionally, when the processor 80b performs, by using the first protocol proxy node, a read and/or write operation on the target partition corresponding to the partition identifier based on the target operation request, the following steps are specifically performed: and providing the target operation request to a target partition service node corresponding to the target virtual partition by using the first protocol proxy node, so that the target partition service node performs read and/or write operation on the target virtual partition based on the target operation request.
In some embodiments, the first partitioned service node is further configured to manage a metadata partition of the first database; the metadata partition is obtained by dividing a metadata table of a data table of the first database. Accordingly, the processor 80b is further configured to: acquiring a metadata query request initiated by a client of a second database by using a second protocol proxy node in a plurality of protocol proxy nodes; the metadata query request includes: the partition identification carried by the read operation request and/or the write operation request; determining a first protocol proxy node where a target virtual partition corresponding to the partition identifier is located from a plurality of protocol proxy nodes based on the partition identifier; the address of the first protocol proxy node is returned to the client of the second database as the address of the target virtual partition by the communication component 80c for the client of the second database to use the address of the first protocol proxy node as the address of the target virtual partition.
Optionally, the processor 80b is further configured to: acquiring an address query request of a metadata partition initiated by a client of a second database by using a protocol proxy node; in response to the address query request for the metadata partition, the address of the metadata partition is provided to the client of the second database through the communication component 80c for the client of the second database to initiate the metadata query request based on the address of the metadata partition.
Optionally, when the processor 80b provides the address of the metadata partition to the client of the second database, it is specifically configured to: determining the address of a load balancing node in a server of a first database as the address of a metadata partition of a virtual partition; the address of the load balancing node is returned to the client of the second database through the communication component 80c, so that the client of the second database initiates a metadata query request to the load balancing node based on the address of the load balancing node; randomly selecting one protocol proxy node from the plurality of protocol proxy nodes by using the load balancing node as a second protocol proxy node; the metadata query request is forwarded to the second protocol proxy node via the communication component 80 c.
Optionally, the processor 80b is further configured to: and acquiring the DDL request initiated by the client of the second database by utilizing the protocol proxy node. Correspondingly, when the simulated server of the second database is utilized to perform protocol conversion on the first operation request provided by the client of the second database and conforming to the client protocol supported by the second database, the processor 80b is specifically configured to: and converting the DDL request into a target DDL request meeting the client protocol supported by the first database as a second operation request according to the client protocol supported by the first database. Accordingly, when the processor 80b operates the first database based on the second operation request, it is specifically configured to: simulating a control node of a second database by using the protocol agent node; and performing DDL operation on the first database based on the target DDL request to obtain an operation result corresponding to the DDL request.
Optionally, the processor 80b is further configured to: obtaining, via the communication component 80c, a management node address request provided by a client of the second database; responding to the address request of the control node, providing the address of the load balancing node of the first database as the address of the control node to the client of the second database, so that the client of the second database initiates a DDL request to the load balancing node through the communication component 80c based on the address of the load balancing node; randomly selecting a target protocol proxy node from a plurality of protocol proxy nodes by using a load balancing node; and utilizing the target protocol proxy node to convert the DDL request into a target DDL request meeting the client protocol supported by the first database and perform DDL operation on the first database based on the target DDL request.
Optionally, there are a plurality of first partition service nodes; the plurality of protocol proxy nodes correspond to the plurality of first partition service nodes one-to-one, and each corresponding pair of protocol proxy node and first partition service node is deployed on the same physical machine. The processor 80b is configured to: dispatch the partitions of the first database to the plurality of first partition service nodes in a balanced manner by using the load balancing node of the first database.
Optionally, the processor 80b is further configured to: according to a set query period, periodically querying a first partition service node to obtain a metadata partition of a first database; acquiring metadata information of a partition of a first database from a metadata partition of the first database; and updating the virtual partition according to the metadata information of the partition of the first database acquired in the current query period.
Optionally, the processor 80b is further configured to: and detecting a plurality of protocol agent nodes and maintaining a list of normal protocol agent nodes. Correspondingly, when the processor 80b updates the virtual partition according to the metadata information of the partition of the first database acquired in the current query cycle, the processor is specifically configured to: judging whether the protocol proxy node corresponding to the address to be updated of the virtual partition is a normal protocol proxy node or not according to the list of the normal protocol proxy nodes and the metadata information of the partition of the first database acquired in the current query period; if the judgment result is negative, selecting a target protocol proxy node from the list of normal protocol proxy nodes; and writing the address of the target protocol proxy node serving as the address of the virtual partition into the virtual partition to obtain the updated virtual partition.
Optionally, when the target protocol proxy node is selected from the list of normal protocol proxy nodes, the method is specifically configured to: performing hash calculation on the name of the virtual partition to obtain a hash result of the name of the virtual partition; and selecting a target protocol proxy node from the list of normal protocol proxy nodes by adopting a consistent hash algorithm according to the hash result of the name of the virtual partition.
Optionally, when probing the plurality of protocol proxy nodes, the processor 80b is specifically configured to: probe the plurality of protocol proxy nodes using an epidemic (gossip) protocol.
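A minimal sketch of one such epidemic probing round is shown below, assuming each node keeps a local health view and probes a few random peers per round (the fan-out, the state encoding, and the injected `probe` callable are all illustrative assumptions; the patent does not fix these details):

```python
import random

def gossip_round(node: str, peers: dict, probe, fanout: int = 2) -> list:
    """One round of epidemic (gossip) probing of protocol proxy nodes.

    peers: node address -> locally believed health state ("normal"/"abnormal").
    probe: liveness check returning the probed peer's current state.
    Each round, `node` probes up to `fanout` random peers and merges what it
    learns, so the list of normal protocol proxy nodes converges cluster-wide.
    """
    others = [p for p in peers if p != node]
    for peer in random.sample(others, min(fanout, len(others))):
        peers[peer] = probe(peer)
    # The maintained list of normal protocol proxy nodes.
    return sorted(p for p, state in peers.items() if state == "normal")
```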
In some embodiments, the client of the second database is further configured to: caching the address of the acquired virtual partition. Accordingly, the processor 80b is further configured to: before performing protocol conversion on a read operation request and/or a write operation request, determining the health state of a third protocol proxy node where a target virtual partition stored by a first protocol proxy node is located according to a normal protocol proxy node list under the condition that the first protocol proxy node stores the target virtual partition; and when the health state of the third protocol agent node is abnormal, or the health state of the third protocol agent node is normal and the third protocol agent node is the first protocol agent node, the first protocol agent node executes a read operation request and/or a write operation request.
Optionally, the processor 80b is further configured to: and under the condition that the health state of the third protocol agent node is normal but the third protocol agent node is not the same as the first protocol agent node, informing the client of the second database to re-search the address of the target virtual partition and asynchronously updating the virtual partition of the first protocol agent node.
Optionally, the processor 80b is further configured to: before the protocol conversion is carried out on the read operation request and/or the write operation request, under the condition that the first protocol proxy node stores the virtual partition of the data table to be accessed but does not store the target virtual partition, the client of the second database is informed to search the address of the target virtual partition again, and the virtual partition of the first protocol proxy node is asynchronously updated; the data table to be accessed is a data table corresponding to the data table identifier in the partition identifiers carried by the read operation request and/or the write operation request.
Optionally, the processor 80b is further configured to: before the protocol conversion is carried out on the read operation request and/or the write operation request, under the condition that the first protocol proxy node does not store the virtual partition of the data table to be accessed, the virtual partition of the first protocol proxy node is asynchronously updated; and performs a read operation request and/or a write operation request.
In some optional implementations, as shown in fig. 8, the computing device may further include: a power supply component 80d, and the like. Fig. 8 only schematically shows some components, which does not mean that the computing device must include all the components shown in fig. 8, nor that the computing device can include only the components shown in fig. 8.
In the embodiment of the application, a protocol proxy node is additionally arranged at a server of a first database, and the protocol proxy node can simulate servers of other second databases, so that for operation requests of clients of the second databases with different supported client protocols to the first database, the protocol proxy node can utilize the simulated server of the second database to perform client protocol conversion on the operation requests, so that the operation requests follow the client protocol of the first database, and thus, the operation requests after protocol conversion can be used for accessing the first database, thereby realizing the compatibility of the first database with the client protocols of the other databases, and being beneficial to improving the client compatibility of the first database.
In embodiments of the present application, the memory is used to store computer programs and may be configured to store other various data to support operations on the device on which it is located. Wherein the processor may execute a computer program stored in the memory to implement the corresponding control logic. The memory may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
In the embodiments of the present application, the processor may be any hardware processing device that can execute the above described method logic. Alternatively, the processor may be a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), or a Micro Controller Unit (MCU); programmable devices such as Field-Programmable Gate Arrays (FPGAs), Programmable Array Logic devices (PALs), Generic Array Logic devices (GALs), or Complex Programmable Logic Devices (CPLDs) may also be used; or an Advanced RISC Machine (ARM) processor, a System on Chip (SoC), or the like, but is not limited thereto.
In embodiments of the present application, the communication component is configured to facilitate wired or wireless communication between the device in which it is located and other devices. The device in which the communication component is located can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, 4G, 5G or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component may also be implemented based on Near Field Communication (NFC) technology, Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, or other technologies.
In embodiments of the present application, a power supply component is configured to provide power to various components of the device in which it is located. The power components may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which the power component is located.
It should be noted that, the descriptions of "first", "second", etc. in this document are used for distinguishing different messages, devices, modules, etc., and do not represent a sequential order, nor do they limit the types of "first" and "second".
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
The storage medium of the computer is a readable storage medium, which may also be referred to as a readable medium. Readable storage media, whether permanent or non-permanent, removable or non-removable, may store information by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, a computer-readable medium does not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (14)

1. A database access method, comprising:
simulating a server of a second database by using a protocol proxy node in a server of a first database; the first database and the second database support different client protocols;
performing protocol conversion on a first operation request provided by a client of the second database and conforming to a client protocol supported by the second database by using the simulated server of the second database to obtain a second operation request conforming to the client protocol supported by the first database;
operating the first database based on the second operation request to obtain an operation result corresponding to the second operation request;
and providing the operation result to the client of the second database.
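The flow of claim 1 can be illustrated with a minimal sketch: a protocol proxy node inside the first database's server accepts a request in the second database's client protocol, translates it into the first database's native form, executes it, and returns the result. All class names, request fields, and the in-memory backing store below are illustrative assumptions, not the patent's actual implementation.

```python
class ProtocolProxyNode:
    """Emulates the second database's server on top of the first database."""

    def __init__(self, first_db):
        # Hypothetical stand-in for the first database: a plain dict of tables.
        self.first_db = first_db

    def translate(self, request):
        # Protocol conversion: a request in the second database's client
        # protocol (here a dict) becomes the first database's native form
        # (here a tuple). Real systems would parse/serialize wire formats.
        return (request["op"], request["table"],
                request.get("row"), request.get("value"))

    def handle(self, request):
        op, table, row, value = self.translate(request)
        if op == "put":
            self.first_db.setdefault(table, {})[row] = value
            return {"status": "ok"}
        if op == "get":
            return {"status": "ok",
                    "value": self.first_db.get(table, {}).get(row)}
        return {"status": "unsupported"}


proxy = ProtocolProxyNode(first_db={})
proxy.handle({"op": "put", "table": "t1", "row": "r1", "value": "v1"})
result = proxy.handle({"op": "get", "table": "t1", "row": "r1"})
print(result["value"])  # v1
```

The client of the second database never sees the first database's protocol; it only talks to the proxy's emulated server.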
2. The method of claim 1, wherein simulating the server side of the second database using the protocol proxy node in the server side of the first database comprises:
simulating the partitions of the second database by using the protocol proxy node, according to the metadata information of the partitions of the data table to be accessed by the first operation request and the client protocol of the second database, to obtain a virtual partition of which the data structure conforms to the client protocol supported by the second database;
emulating, with the protocol proxy node, a distributed coordination service node of the second database to manage addresses of metadata partitions of the first database.
3. The method of claim 2, wherein there are a plurality of protocol proxy nodes; the first operation request includes a read operation request and/or a write operation request; and the read operation request and/or the write operation request carry a partition identifier;
the performing protocol conversion on the first operation request provided by the client of the second database and conforming to the client protocol supported by the second database by using the simulated server of the second database includes:
acquiring a read operation request and/or a write operation request provided by the client of the second database by using a first protocol proxy node among the plurality of protocol proxy nodes; the first protocol proxy node holds the target virtual partition corresponding to the partition identifier;
according to the client protocol supported by the first database, performing protocol conversion on the read operation request and/or the write operation request to obtain a target operation request meeting the client protocol supported by the first database, and taking the target operation request as the second operation request;
the operating the first database based on the second operation request comprises:
and performing, by using the first protocol proxy node, a read operation and/or a write operation on the target partition corresponding to the partition identifier based on the target operation request.
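Claim 3's routing step can be sketched as follows: the read/write request carries a partition identifier, and the first protocol proxy node converts the request and applies it to the target partition it hosts. The request shape, partition storage, and names are assumptions made for illustration only.

```python
class PartitionedProxy:
    """Hypothetical first protocol proxy node hosting several partitions."""

    def __init__(self, partitions):
        # partition_id -> dict standing in for the target partition's storage
        self.partitions = partitions

    def convert(self, request):
        # Protocol conversion: second-database request -> first-database form.
        return (request["op"], request["partition_id"],
                request.get("key"), request.get("value"))

    def execute(self, request):
        op, pid, key, value = self.convert(request)
        partition = self.partitions[pid]  # route by the carried partition id
        if op == "write":
            partition[key] = value
            return "ok"
        if op == "read":
            return partition.get(key)


proxy = PartitionedProxy({"p1": {}, "p2": {}})
proxy.execute({"op": "write", "partition_id": "p1", "key": "k", "value": 42})
print(proxy.execute({"op": "read", "partition_id": "p1", "key": "k"}))  # 42
```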
4. The method of claim 3, wherein the partition service node of the first database is configured to manage a metadata partition of the first database; the metadata partition is obtained by dividing a metadata table of a data table of the first database; the method further comprises the following steps:
acquiring a metadata query request initiated by the client of the second database by using a second protocol proxy node among the plurality of protocol proxy nodes; the metadata query request includes the partition identifier carried by the read operation request and/or the write operation request;
determining, from the plurality of protocol proxy nodes, the first protocol proxy node where the target virtual partition is located, based on the partition identifier and the metadata information of the partitions of the first database;
and returning the address of the first protocol proxy node as the address of the target virtual partition to the client of the second database, so that the client of the second database takes the address of the first protocol proxy node as the address of the target virtual partition.
5. The method of claim 4, further comprising:
acquiring an address query request of a metadata partition initiated by a client of the second database by using the protocol proxy node;
in response to the address query request of the metadata partition, determining the address of a load balancing node in the server of the first database as the address of the metadata partition, and returning the address of the load balancing node to the client of the second database, so that the client of the second database initiates the metadata query request to the load balancing node based on that address;
the method further comprises the following steps:
randomly selecting one protocol proxy node from the plurality of protocol proxy nodes as the second protocol proxy node by using the load balancing node;
forwarding the metadata query request to the second protocol proxy node.
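Claims 4 and 5 together describe an indirection: the "metadata partition address" handed to the client is really the load balancing node's address; the balancer then forwards each metadata query to a randomly chosen proxy node, which maps the partition identifier to the address of the proxy hosting the target virtual partition. The sketch below assumes all addresses and the partition map; these are not the patent's concrete values.

```python
import random


class LoadBalancer:
    """Hypothetical load balancing node fronting the proxy fleet."""

    def __init__(self, proxy_addresses, partition_map):
        self.proxy_addresses = proxy_addresses  # addresses of all proxy nodes
        self.partition_map = partition_map      # partition_id -> hosting proxy

    def metadata_partition_address(self):
        # Claim 5: the metadata-partition address query is answered with the
        # balancer's own (assumed) address.
        return "lb:9000"

    def resolve(self, partition_id):
        # Claim 5: randomly pick a "second protocol proxy node" to serve the
        # metadata query (forwarding itself is elided in this sketch) ...
        chosen = random.choice(self.proxy_addresses)
        # ... which, per claim 4, returns the address of the first protocol
        # proxy node where the target virtual partition lives.
        del chosen  # unused beyond illustrating the random selection
        return self.partition_map[partition_id]


lb = LoadBalancer(["proxy-a:8000", "proxy-b:8000"], {"p1": "proxy-a:8000"})
addr = lb.resolve("p1")
print(addr)  # proxy-a:8000
```

The client then caches this address and sends subsequent reads/writes for that partition directly to the resolved proxy.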
6. The method of claim 2, further comprising:
acquiring a DDL request initiated by the client of the second database by using the protocol proxy node;
the performing, by using the simulated server of the second database, protocol conversion on the first operation request provided by the client of the second database and conforming to the client protocol supported by the second database includes:
converting the DDL request into a target DDL request meeting the client protocol supported by the first database according to the client protocol supported by the first database, and taking the target DDL request as the second operation request;
the operating the first database based on the second operation request to obtain an operation result corresponding to the second operation request includes:
simulating a management and control node of the second database by using the protocol proxy node; and performing a DDL operation on the first database based on the target DDL request to obtain an operation result corresponding to the DDL request.
7. The method of claim 6, further comprising:
acquiring a management and control node address request provided by the client of the second database;
responding to the address request of the management and control node, providing the address of the load balancing node of the first database as the address of the management and control node to the client of the second database, so that the client of the second database initiates the DDL request to the load balancing node based on the address of the load balancing node;
randomly selecting a target protocol proxy node from a plurality of protocol proxy nodes by using the load balancing node;
utilizing the target protocol proxy node to execute the steps of converting the DDL request into a target DDL request meeting a client protocol supported by the first database and performing DDL operation on the first database based on the target DDL request.
8. The method of claim 4, further comprising:
periodically querying the partition service node of the first database according to a set query period to acquire a metadata partition of the first database;
acquiring metadata information of a partition of the first database from a metadata partition of the first database;
and updating the virtual partition according to the metadata information of the partition of the first database acquired in the current query period.
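The refresh loop of claim 8 — query the partition service node each cycle and rebuild the virtual partitions from the returned metadata — can be sketched as below. The timer is replaced by an explicit loop, and the metadata shape is an assumption for illustration.

```python
def poll_and_update(partition_service, virtual_partitions, cycles):
    """Each cycle, fetch current partition metadata and refresh the
    virtual partitions exposed to the second database's clients."""
    for _ in range(cycles):
        metadata = partition_service()  # metadata of the real partitions
        for pid, info in metadata.items():
            # Rebuild the virtual partition from this cycle's metadata.
            virtual_partitions[pid] = {"range": info["range"],
                                       "proxy": info["proxy"]}
    return virtual_partitions


# Hypothetical partition service node returning one partition's metadata.
service = lambda: {"p1": {"range": (0, 100), "proxy": "proxy-a"}}
vparts = poll_and_update(service, {}, cycles=3)
print(vparts["p1"]["proxy"])  # proxy-a
```

In a real deployment the loop would run on the configured query period rather than a fixed cycle count.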
9. The method of claim 8, further comprising:
detecting the protocol proxy nodes and maintaining a list of normal protocol proxy nodes;
the updating the virtual partition according to the metadata information of the partition of the first database acquired in the current query cycle includes:
judging, according to the list of normal protocol proxy nodes and the metadata information of the partitions of the first database acquired in the current query period, whether the protocol proxy node corresponding to the to-be-updated address of the virtual partition is a normal protocol proxy node;
if not, selecting a target protocol proxy node from the list of normal protocol proxy nodes;
and writing the address of the target protocol proxy node as the address of the virtual partition into the virtual partition to obtain an updated virtual partition.
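The health-aware update of claim 9 amounts to: before writing a proxy address into a virtual partition, check it against the list of normal (healthy) proxy nodes, and substitute a healthy one if the intended proxy is down. The function and list below are assumptions used to illustrate the rule.

```python
def update_partition_address(virtual_partition, intended_proxy, healthy):
    """Write intended_proxy into the virtual partition only if it is on the
    healthy list; otherwise fall back to a healthy target proxy node."""
    if intended_proxy in healthy:
        virtual_partition["proxy"] = intended_proxy
    else:
        # Claim 9: the intended proxy is abnormal, so select a target
        # protocol proxy node from the list of normal nodes instead.
        virtual_partition["proxy"] = healthy[0]
    return virtual_partition


vp = update_partition_address({}, "proxy-dead", ["proxy-a", "proxy-b"])
print(vp["proxy"])  # proxy-a
```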
10. The method of claim 8, wherein the client of the second database is further configured to: caching the acquired address of the virtual partition;
before performing protocol conversion on the read operation request and/or the write operation request, the method further includes:
in a case where the target virtual partition is stored in the first protocol proxy node, determining, according to the list of normal protocol proxy nodes, the health state of a third protocol proxy node where the target virtual partition stored in the first protocol proxy node is located;
when the health state of the third protocol proxy node is abnormal, or the health state of the third protocol proxy node is normal and the third protocol proxy node is the first protocol proxy node, executing, by the first protocol proxy node, the read operation request and/or the write operation request;
and when the health state of the third protocol proxy node is normal but the third protocol proxy node is not the first protocol proxy node, notifying the client of the second database to look up the address of the target virtual partition again, and asynchronously updating the virtual partitions of the first protocol proxy node.
11. The method of claim 10, wherein the first protocol proxy node, prior to protocol converting the read operation request and/or the write operation request, further comprises:
when the first protocol proxy node stores the virtual partitions of the data table to be accessed but does not store the target virtual partition, notifying the client of the second database to look up the address of the target virtual partition again, and asynchronously updating the virtual partitions of the first protocol proxy node; the data table to be accessed is the data table corresponding to the data table identifier in the partition identifier;
asynchronously updating the virtual partition of the first protocol proxy node under the condition that the first protocol proxy node does not store the virtual partition of the data table to be accessed; and executing the read operation request and/or the write operation request.
12. A database system, comprising: a server side of a first database and a client side of a second database; the first database and the second database support different client protocols;
the server side of the first database comprises: a protocol proxy node;
the protocol proxy node is used for simulating a server side of the second database; performing protocol conversion on a first operation request provided by the client of the second database and conforming to the client protocol supported by the second database by using the simulated server of the second database, to obtain a second operation request conforming to the client protocol supported by the first database; operating the first database based on the second operation request to obtain an operation result corresponding to the second operation request; and providing the operation result to the client of the second database.
13. A computing device, comprising: a memory, a processor, and a communications component; wherein the memory is used for storing a computer program;
the processor is coupled to the memory and the communication component for executing the computer program for performing the steps of the method of any of claims 1-11.
14. A computer-readable storage medium having stored thereon computer instructions, which, when executed by one or more processors, cause the one or more processors to perform the steps of the method of any one of claims 1-11.
CN202210956615.6A 2022-08-10 2022-08-10 Database access method, device, system and storage medium Active CN115062092B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210956615.6A CN115062092B (en) 2022-08-10 2022-08-10 Database access method, device, system and storage medium


Publications (2)

Publication Number Publication Date
CN115062092A true CN115062092A (en) 2022-09-16
CN115062092B CN115062092B (en) 2023-02-03

Family

ID=83208555

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210956615.6A Active CN115062092B (en) 2022-08-10 2022-08-10 Database access method, device, system and storage medium

Country Status (1)

Country Link
CN (1) CN115062092B (en)


Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1494022A (en) * 2002-10-30 2004-05-05 华为技术有限公司 Method accessing data bank through protocol agency mode
US20100030995A1 (en) * 2008-07-30 2010-02-04 International Business Machines Corporation Method and apparatus for applying database partitioning in a multi-tenancy scenario
CN104298757A (en) * 2014-10-22 2015-01-21 福建星网视易信息系统有限公司 Method and system allowing compatibility with mobile clients and databases different in version
CN106897344A (en) * 2016-07-21 2017-06-27 阿里巴巴集团控股有限公司 The data operation request treatment method and device of distributed data base
CN107203387A (en) * 2017-06-13 2017-09-26 广东神马搜索科技有限公司 Target database access method and system
CN109739877A (en) * 2018-11-21 2019-05-10 比亚迪股份有限公司 Database Systems and data managing method
CN109815214A (en) * 2018-12-29 2019-05-28 深圳云天励飞技术有限公司 Data bank access method, system, device and storage medium
EP3594824A1 (en) * 2018-07-11 2020-01-15 ServiceNow, Inc. External data management in a remote network management platform
CN111125218A (en) * 2019-12-13 2020-05-08 成都安恒信息技术有限公司 Database compatibility method based on protocol analysis and compatibility proxy device thereof
CN112905567A (en) * 2021-03-23 2021-06-04 杭州沃趣科技股份有限公司 Database replacement method, device, system and medium based on network protocol conversion
CN113392415A (en) * 2021-06-18 2021-09-14 作业帮教育科技(北京)有限公司 Access control method and system for data warehouse and electronic equipment
CN113448942A (en) * 2020-03-27 2021-09-28 阿里巴巴集团控股有限公司 Database access method, device, equipment and storage medium
CN114070833A (en) * 2021-11-18 2022-02-18 中国工商银行股份有限公司 Multi-protocol service compatible method, system, device, medium, and program product
CN114547019A (en) * 2020-11-24 2022-05-27 网联清算有限公司 Database reading and writing method, device, server and medium
CN114625762A (en) * 2020-11-27 2022-06-14 华为技术有限公司 Metadata acquisition method, network equipment and system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
He Kun: "Distributed Database Architecture Based on In-Memory Databases", Programmer *
Lu Jie et al.: "Research on a Data Distribution Mechanism for XML-Based Messaging Systems", Computer Engineering and Design *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116320055A (en) * 2023-01-03 2023-06-23 广州市玄武无线科技股份有限公司 Network protocol conversion method and system
CN116320055B (en) * 2023-01-03 2023-12-05 广州市玄武无线科技股份有限公司 Network protocol conversion method and system

Also Published As

Publication number Publication date
CN115062092B (en) 2023-02-03

Similar Documents

Publication Publication Date Title
US9563673B2 (en) Query method for a distributed database system and query apparatus
AU2021101420A4 (en) Small-file storage optimization system based on virtual file system in KUBERNETES user-mode application
US8832130B2 (en) System and method for implementing on demand cloud database
US8386540B1 (en) Scalable relational database service
US9037677B2 (en) Update protocol for client-side routing information
US9336284B2 (en) Client-side statement routing in distributed database
CN110019004B (en) Data processing method, device and system
US20130332612A1 (en) Transmission of map/reduce data in a data center
US20140358977A1 (en) Management of Intermediate Data Spills during the Shuffle Phase of a Map-Reduce Job
US20130275457A1 (en) Client-side statement routing for partitioned tables
CN107844274B (en) Hardware resource management method, device and terminal based on super-fusion storage system
CN111090440B (en) Information processing method, system, device and storage medium
US11811839B2 (en) Managed distribution of data stream contents
CN115062092B (en) Database access method, device, system and storage medium
Chang et al. Integration and optimization of multiple big data processing platforms
US9648103B2 (en) Non-uniform file access in a distributed file system
CN111274004B (en) Process instance management method and device and computer storage medium
KR20130038517A (en) System and method for managing data using distributed containers
Zhang et al. HDCache: a distributed cache system for real-time cloud services
US20220075655A1 (en) Efficient accelerator offload in multi-accelerator framework
Garefalakis et al. Strengthening consistency in the cassandra distributed key-value store
CN115729693A (en) Data processing method and device, computer equipment and computer readable storage medium
CN113343045A (en) Data caching method and network equipment
US11036702B1 (en) Generation of search indexes for disparate device information
KR20160050735A (en) Apparatus for Spatial Query in Big Data Environment and Computer-Readable Recording Medium with Program therefor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant