CN108777718B - Method and device for a service system to access a read-many-write-few system through a client - Google Patents

Method and device for a service system to access a read-many-write-few system through a client

Info

Publication number
CN108777718B
CN108777718B (application number CN201810652330.7A)
Authority
CN
China
Prior art keywords
cache
data
mode
read
manager
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810652330.7A
Other languages
Chinese (zh)
Other versions
CN108777718A (en)
Inventor
赵国钦
李效锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advanced Nova Technology Singapore Holdings Ltd
Original Assignee
Advanced New Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Advanced New Technologies Co Ltd filed Critical Advanced New Technologies Co Ltd
Priority to CN201810652330.7A priority Critical patent/CN108777718B/en
Publication of CN108777718A publication Critical patent/CN108777718A/en
Application granted granted Critical
Publication of CN108777718B publication Critical patent/CN108777718B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/56Provisioning of proxy services
    • H04L67/568Storing data temporarily at an intermediate stage, e.g. caching
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1097Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/14Session management
    • H04L67/141Setup of application sessions
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/56Provisioning of proxy services
    • H04L67/568Storing data temporarily at an intermediate stage, e.g. caching
    • H04L67/5682Policies or rules for updating, deleting or replacing the stored data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/60Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L67/63Routing a service request depending on the request content or context
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/50Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Abstract

The embodiments of this specification provide a method and a device for reading data between service systems. In one embodiment, a first service system and a read-many-write-few second service system are connected through a network. The second service system comprises a manager, a database and at least one cache library, where each cache library is a distributed cache of the database and the manager provides control over the database and the distributed cache. The first service system is provided with a client program of the manager, and the method is performed by that client program. The method comprises: acquiring a read request for reading data in the second service system; determining a cache library address according to the read request, the address corresponding to one of the at least one cache library; establishing a connection with that cache library according to its address; and sending the read request to the cache library over the connection to read the data. The embodiments of this specification can greatly reduce network time consumption and increase system throughput.

Description

Method and device for a service system to access a read-many-write-few system through a client
Technical Field
The present description relates to the field of computer technology, and more particularly, to a method and apparatus for reading data from a read-many-write-few system.
Background
As the name implies, a read-many-write-few system is one that is read far more often than it is written. As a service provider, such a system offers read and write services at the same time, but its load comes mainly from the read operations of service callers. For this reason, the system sets up a master database and several read libraries (read replicas) at the database layer: read operations go to the read libraries, while write operations go to the master database and are then synchronized to the read libraries.
To handle the large number of read requests from service invokers, the system also sets up a distributed cache. When a read request arrives, the distributed cache is queried first; if the data is not in the cache, the database is read and the result is written back into the cache. When data is modified, the database is updated first and the change is then propagated to the distributed cache.
However, as read traffic grows and query latency requirements tighten, network time becomes an increasingly serious bottleneck for the system. Improved solutions are therefore desired that effectively reduce the network overhead between the service invoker and the service provider.
Disclosure of Invention
One or more embodiments of this specification describe a method and apparatus that can reduce the time spent establishing connections and transmitting data, and thereby increase system throughput.
According to a first aspect, a data reading method for a read-many-write-few system is provided. The first service system and the second service system are connected through a network. The second service system is a read-many-write-few system and comprises a manager, a database and at least one cache library, where the at least one cache library is a distributed cache of the database and the manager provides control over the database and the at least one distributed cache. The first service system is provided with a client program of the manager of the second service system, and the method is performed by that client program. The method comprises: acquiring a read request for reading data in the second service system; in a first mode, determining a cache library address according to the read request, the address corresponding to a first cache library of the at least one cache library; establishing a first connection with the first cache library according to the cache library address; and sending the read request to the first cache library over the first connection to read first data.
In one possible approach, the method further includes, in a second mode, establishing a second connection with the manager and sending the read request to the manager over the second connection to read second data from the at least one distributed cache or the database. In a further possible approach, the method includes comparing the first data and the second data and determining the availability of the first mode from the comparison result.
In one possible arrangement, the client program includes a mode switch, and the method includes selecting one of the first mode and the second mode to read data according to the mode switch. In a further possible arrangement, the method includes toggling the mode switch to switch between the first mode and the second mode.
In one possible scheme, determining the cache library address according to the read request comprises determining a key according to the read request, and determining the cache library address from the key and a key-value mapping.
In one possible approach, the method further includes sending a request to the manager to read the database in the event that the first cache library fails to hit the data.
According to a second aspect, a data reading apparatus for a first service system to access a second service system is provided. The first service system and the second service system are connected through a network. The second service system is a read-many-write-few system and comprises a manager, a database and at least one cache library, where the at least one cache library is a distributed cache of the database and the manager provides control over the database and the at least one distributed cache. The data reading apparatus includes: an obtaining unit configured to obtain a read request for reading data in the second service system; a determining unit configured to determine, in a first mode, a cache library address according to the read request, the address corresponding to a first cache library of the at least one cache library; a first connection unit configured to establish a first connection with the first cache library of the at least one cache library according to the cache library address; and a first reading unit configured to send the read request to the first cache library over the first connection and read first data.
According to a third aspect, there is provided a computer readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method of the first aspect.
According to a fourth aspect, there is provided a server comprising a storage device, a network interface, and a processor communicatively coupled to the storage device and the network interface, the storage device storing a client program of a manager, the manager providing control of a database and at least one distributed cache, the at least one cache library being a distributed cache of the database, the processor being operable to execute the client program, thereby implementing the method of the first aspect.
The method and device provided by the embodiments of this specification can greatly reduce network time consumption, increase system throughput, improve application access speed, and reduce database load.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic diagram of a scenario in which a business system disclosed in the present specification accesses a read-many-write-few system;
FIG. 2 illustrates a schematic diagram of accessing a distributed cache by a client according to one embodiment of the present description;
FIG. 3 is a flow diagram illustrating a cache library miss in the first mode;
FIG. 4 illustrates a flowchart of a method for accessing a distributed cache by a client, according to one embodiment;
FIG. 5 illustrates a detailed block diagram of a server that may be used to implement the various techniques described above, according to an embodiment;
FIG. 6 is a schematic structural diagram of an apparatus provided in an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a cache library client embedded in the manager client.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar modules or modules having the same or similar functionality throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application.
FIG. 1 is a schematic diagram of a scenario in which a business system disclosed in this specification accesses a read-many-write-few system. As shown in FIG. 1, there is a first service system and a second service system, connected to each other through a network. The first business system is represented by, for example, servers A1, A2 and A3. In one example, the first service system is a server cluster in which every server has the same function, and a load-balancing server dynamically assigns a user's request to a server for processing according to the load of each server.
The second business system comprises, for example, manager B, database B-M0 and at least one cache library B-H1, B-H2, B-H3 and B-H4, together constituting a read-many-write-few system. The cache libraries B-H1, B-H2, B-H3 and B-H4 are distributed caches of database B-M0. Database B-M0 can be deployed on the server where the manager is located or deployed independently; it stores a large amount of data, which it makes queryable through published services while also allowing new data to be written. Manager B implements control over database B-M0 and the distributed caches B-H1, B-H2, B-H3 and B-H4. The cache libraries B-H1, B-H2, B-H3 and B-H4 may store at least part of the data of database B-M0 as needed, in one example the relatively stable, infrequently changing data of database B-M0. Of course, database B-M0 and the cache libraries may also store files, objects, pictures, videos and the like, all collectively referred to below as data for simplicity.
Manager B may distribute the data of database B-M0 across the cache libraries B-H1, B-H2, B-H3 and B-H4 through data sharding. Sharding may follow rules such as range (interval) sharding, hash sharding or slot sharding; other rules that facilitate reading and writing the data are of course also possible. For hash sharding, the hash algorithm may be a static hash or a consistent hash. For example, based on a data item D and the total number N of cache libraries, the hash value corresponding to D is computed by the consistent hashing algorithm, and the corresponding cache library can be found from that hash value.
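By way of illustration only, the following minimal Python sketch shows static hash sharding of the kind mentioned above; the helper name shard_for and the key format are assumptions, and only the library names B-H1 to B-H4 come from FIG. 1. A consistent-hash variant for the client side is sketched further below.

    import hashlib

    # Cache library names follow FIG. 1; the key format "member:<id>" is illustrative only.
    CACHE_LIBRARIES = ["B-H1", "B-H2", "B-H3", "B-H4"]

    def shard_for(key: str) -> str:
        """Static hash sharding: map a data key to one of the N cache libraries."""
        digest = hashlib.md5(key.encode("utf-8")).hexdigest()
        return CACHE_LIBRARIES[int(digest, 16) % len(CACHE_LIBRARIES)]

    print(shard_for("member:10086"))  # prints the library that would hold this record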
In one possible scenario, there is a relationship or dependency between the service of the first service system and the service of the second service system, and therefore, data of the second service system needs to be accessed. For example, the first service system is responsible for transactions, the second service system is responsible for managing member information, and the first service system confirms the progress of transactions by inquiring the member information in the second service system.
The servers A1-A3, the manager B, the database B-M0 and the cache libraries B-H1, B-H2, B-H3 and B-H4 are typically deployed on different servers or communication devices, but may also be deployed on the same server or communication device. When distributed across different servers or communication devices, they can be connected through the Internet, a private network or other networks. This specification places no limitation on this.
Fig. 2 is a schematic diagram illustrating a structure for accessing the distributed cache through the client in the scenario of FIG. 1, according to one embodiment. As shown in FIG. 2, server A1 in the first business system has installed a client program of the manager of the second business system. When the client program is installed, the connection addresses of manager B, database B-M0 and the cache libraries B-H1, B-H2, B-H3 and B-H4 can be loaded from configuration information, or the corresponding connection addresses can be obtained from the server side.
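As a purely hypothetical example of such configuration information (all host names and ports below are invented for illustration and not taken from the disclosure), the client program might load something like the following:

    # Hypothetical connection configuration loaded by the manager's client program on A1.
    CONNECTION_CONFIG = {
        "manager": "b-manager.example.internal:9000",      # address 0 of manager B
        "database": "b-m0.example.internal:3306",          # master database B-M0
        "cache_libraries": {
            "B-H1": "b-h1.example.internal:6379",
            "B-H2": "b-h2.example.internal:6379",
            "B-H3": "b-h3.example.internal:6379",          # address 3 in FIG. 2
            "B-H4": "b-h4.example.internal:6379",
        },
    }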
Database B-M0 is the master database, to which data can be both read and written. The cache libraries B-H1, B-H2, B-H3 and B-H4 serve as caches of the master database and each store at least part of the data of database B-M0 according to the sharding rule. Manager B runs a server program that implements control over database B-M0 and the distributed caches B-H1, B-H2, B-H3 and B-H4 and, in response to client access requests, provides the client with read-write access to database B-M0 and read access to the caches B-H1, B-H2, B-H3 and B-H4.
In one access mode, the client program completes data read and write operations through the server program in the manager. Using manager B's address 0, the client program establishes a communication link L1 with the server program of manager B. Once link L1 is established, manager B writes new or modified data to database B-M0 for a write request; for a read operation, it uses the corresponding sharding rule to initiate a read request to one of the distributed caches B-H1, B-H2, B-H3 and B-H4 (e.g., B-H3), reads the data from that cache library over the connection L1' between manager B and cache library B-H3, and returns it to server A1. This manner of reading data, accessing the cache library through the remote server, is referred to in this specification as the second mode.
According to the embodiments of this specification, in another access mode the client program establishes a connection directly with the target cache library, without going through the manager, to access the data. Specifically, when the first service system needs to read data in the second service system, the client program obtains a read request for reading data in the second service system. In one example, the read request carries the USER identification (USER ID, UID for short) of the user. The client program then determines, according to the read request, a cache library address corresponding to the target cache library from which the data can be read, e.g. B-H3; establishes a connection L2 with cache library B-H3 according to address 3 of cache library B-H3; and reads the data by sending the read request to cache library B-H3 over connection L2. In one example, a key is determined from the read request, and the corresponding cache library address is then determined from the cache library address list according to the key and the key-value mapping. Specifically, the client program may compute the hash values of the cache libraries in advance and place each cache library on a hash ring spanning 0 to 2^32; on receiving a read request, it computes the hash of the key of the data to be read with the same method, maps it onto the ring, searches clockwise from that position, and reads the data from the cache library corresponding to the first address found. The client program then obtains the query object using the deserialization method provided by the cache library. Those skilled in the art will appreciate that the cache library address may also be obtained in other ways, for example by querying manager B, which then provides the address to the client program.
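A minimal Python sketch of this ring lookup, assuming MD5 as the hash function and purely illustrative addresses (the actual hash function, serialization format and connection API are not fixed by the disclosure), might look as follows:

    import bisect
    import hashlib

    def hash32(value: str) -> int:
        """32-bit hash used to place both cache library addresses and data keys on the ring."""
        return int(hashlib.md5(value.encode("utf-8")).hexdigest(), 16) % (2 ** 32)

    class ConsistentHashRing:
        """Clockwise lookup on a 0..2^32 ring, as described for the first mode."""

        def __init__(self, addresses):
            self._points = sorted((hash32(addr), addr) for addr in addresses)
            self._ring = [h for h, _ in self._points]

        def locate(self, key: str) -> str:
            """Return the address of the first cache library found clockwise from the key."""
            index = bisect.bisect_right(self._ring, hash32(key)) % len(self._points)
            return self._points[index][1]

    # Resolve the target cache library for the key carried in the read request (e.g. a UID),
    # then connect to that address directly and send the read request over connection L2.
    addresses = ["b-h1:6379", "b-h2:6379", "b-h3:6379", "b-h4:6379"]  # illustrative only
    target = ConsistentHashRing(addresses).locate("UID:10086")
    print(target)
    # connection = connect(target)              # establish the first connection (hypothetical API)
    # first_data = connection.get("UID:10086")  # read and deserialize the first data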
This manner of reading data, in which the client interacts directly with the distributed cache, is referred to in this specification as the first mode. One way to implement the first mode is to embed the cache library's client program, or part of it, in the client program of the manager. FIG. 7 is a schematic diagram of a cache library client embedded in the manager client. The cache library client is configured with a list of cache library addresses, e.g. B-H1 … B-H4. In one example, when the number of cache libraries increases or decreases, the cache library address list in the first service system can be updated in time through interaction between the distributed cache libraries and their clients.
The first mode and the second mode differ significantly in time consumption. Taking the A1/B-H3 path in the second mode as an example, reading data involves server A1 establishing communication with manager B over link L1, manager B establishing communication with cache B-H3 over link L1', and the data travelling from cache B-H3 to manager B and then to server A1. The network time here includes the time for server A1 to connect to manager B and transmit data, plus the time for manager B to connect to the distributed cache B-H3 and transmit data. By comparison, the network time of a read in the first mode includes only the time for server A1 to connect to the distributed cache B-H3 and transmit data, so network time consumption can be greatly reduced and system throughput increased.
Of course, a cache library may also miss. FIG. 3 is a flow diagram illustrating a cache library miss in the first mode. As shown in FIG. 3, in step S32 the first mode is adopted and the client accesses the cache library directly. In step S34 it is determined whether the cache library hits the data. If so, the process proceeds to step S37 and the data is read. If not, the process proceeds to step S36, which instructs the manager to query the database or perform other operations.
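The miss handling of FIG. 3 can be sketched in Python as follows; the stub classes and the method names get and read_database are assumptions standing in for the real cache-library connection and manager interface:

    class InMemoryCacheStub:
        """Stand-in for a cache library connection; only a get() lookup is assumed."""
        def __init__(self, data):
            self._data = data
        def get(self, key):
            return self._data.get(key)

    class ManagerStub:
        """Stand-in for the manager's server program; read_database() is a hypothetical call."""
        def read_database(self, key):
            return "value-of-" + key

    def read_first_mode_with_fallback(key, cache, manager):
        """Steps S34 to S37 of FIG. 3: try the cache library directly; on a miss,
        ask the manager to query database B-M0 instead (step S36)."""
        value = cache.get(key)                 # S34: does the cache library hit the data?
        if value is not None:
            return value                       # S37: hit, return the cached data
        return manager.read_database(key)      # S36: miss, delegate to the manager

    print(read_first_mode_with_fallback("UID:1", InMemoryCacheStub({}), ManagerStub()))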
Clearly, the first mode, in which the client interacts directly with the distributed cache, and the second mode, which goes through the server, behave very differently. Embodiments of this specification therefore provide the ability to select between the two modes and to switch between them. FIG. 4 shows a flowchart of a method for accessing the distributed cache through the client according to one embodiment. The method of this embodiment provides one-click switching in certain scenarios, allowing dynamic selection of either the first mode, direct access through the client, or the second mode, remote service through the server. The method can be implemented on the basis of the scenario shown in FIG. 1 and the structure shown in FIG. 2.
As shown in FIG. 4, in step S42 the client receives a read request, which is used to read data in the second service system. In step S44 the client determines whether the dual-read mode is to be used. If so, the dual-read mode is turned on in step S46: result a is first read in the first mode, with the client accessing the cache library directly, and result b is then read through the server in the second mode. Of course, the order of the two modes in step S46 may be reversed, the two modes may run in parallel, or the second mode may be executed before the first. In one example, results a and b are hashed and compared, and the time taken by each path is recorded. If results a and b are consistent, the first mode, in which the client accesses the cache library directly, is available. Such a checking mechanism helps to discover and resolve problems in advance, or in real time, when server A1 accesses the cache library. In addition, the client may periodically or aperiodically report the recorded timings and comparison results to the manager, which analyses them and arranges corresponding measures, such as updating the distributed cache earlier.
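A minimal Python sketch of such a dual-read check, assuming the two read paths are supplied as callables and SHA-256 is used for the comparison digests (the disclosure does not fix a particular hash), might be:

    import hashlib
    import time

    def dual_read(key, read_in_first_mode, read_in_second_mode):
        """Dual-read check of step S46: fetch the same key in both modes, compare digests
        and record the time taken by each path."""
        start = time.perf_counter()
        result_a = read_in_first_mode(key)      # client -> cache library
        middle = time.perf_counter()
        result_b = read_in_second_mode(key)     # client -> manager -> cache library
        end = time.perf_counter()

        digest_a = hashlib.sha256(repr(result_a).encode()).hexdigest()
        digest_b = hashlib.sha256(repr(result_b).encode()).hexdigest()
        return {
            "consistent": digest_a == digest_b,          # first mode usable only if True
            "first_mode_seconds": middle - start,
            "second_mode_seconds": end - middle,
        }

    # Example with trivial readers; in practice these would be the two real read paths.
    report = dual_read("UID:10086", lambda k: {"uid": k}, lambda k: {"uid": k})
    print(report["consistent"], report["first_mode_seconds"], report["second_mode_seconds"])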
If it is determined at step S44 that the dual-read mode is not used, then at step S47 it is determined whether the mode switch is turned on. If the mode switch is on, the process proceeds to step S49 and the client reads the data through the remote server in the second mode, i.e. the cache library is accessed via the manager. If the switch is off, the process proceeds to step S48 and the client accesses the cache library directly to read the data in the first mode.
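The dispatch of steps S47 to S49 can be sketched in Python as follows; mode_switch_on and the two reader callables are hypothetical parameters standing in for the client's actual switch state and read paths:

    def read_with_mode_switch(key, mode_switch_on, read_in_first_mode, read_in_second_mode):
        """Steps S47 to S49 of FIG. 4: the mode switch decides whether the read goes through
        manager B (second mode) or straight to the cache library (first mode)."""
        if mode_switch_on:
            return read_in_second_mode(key)   # S49: remote read through the manager
        return read_in_first_mode(key)        # S48: direct read from the cache library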
With the mode switch, the client can flexibly select the first or second mode according to the load of the second service system. For example, the client may choose to work in the second mode when the second service system is not busy or the cache hit rate is low, and in the first mode when the second service system carries heavy traffic or the data must be read as quickly as possible.
By means of the mode switch, the client can also dynamically and quickly switch between the first mode and the second mode.
In the early stage of launching the read service, the availability of the first mode can be assessed through the dual-read mode. If verification shows that the first mode is available, the client switches it on and reads the cache library directly. While reading in the first mode, the client can continuously verify the performance of direct cache access based on monitoring and service throughput. If a local problem in the second service system or the network prevents the first mode from working normally, the mode switch can be flipped quickly so that the client works in the second mode and accesses the cache library remotely through the server, which greatly reduces the risk of the client accessing the cache library directly; once the problem is resolved, the switch can be flipped back so that the client returns to the first mode. The mode switch thus enables seamless switching during service without requiring dedicated on-duty staff.
In one possible implementation, the first service system is a server cluster in which every server has the same function, and a load-balancing server dynamically assigns a user's request to a server for processing according to the load of each server. Every server in the cluster integrates the client program of the manager of the second service system and is configured with a mode switch, so that its client program can switch between the first mode and the second mode. The mode switches of the servers in the cluster can be switched synchronously or independently as circumstances require. In one example, to keep the switch-over safe, one or a few servers in the first service system may first be configured with their mode switches set to the first mode, accessing the cache library directly through the client. If monitoring these servers over a period of time shows the first mode working normally, the mode switch of every server in the cluster can then be set to the first mode, either at once or gradually.
Although the first mode, in which the client accesses the cache library directly, helps read data quickly, the distributed cache must be updated promptly because it is otherwise difficult to ensure data consistency between the database and the cache library. In one possible implementation, the manager of the second service system writes an update task at the same time as it updates the database, which keeps the task transactionally consistent with the operated data. The update task can then be driven by a timed job, which supervises the content of every distributed cache node until it is fully updated and guards against incremental-data problems under concurrency.
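A minimal Python sketch of this write path, assuming hypothetical database and cache interfaces (transaction, put, refresh) and an in-memory queue standing in for the persisted update-task store, might be:

    import queue

    update_tasks = queue.Queue()   # stands in for a persisted update-task table

    def write_with_update_task(database, key, value):
        """Manager-side write: update master database B-M0 and record an update task in the
        same transaction, so the distributed cache is refreshed later. database.transaction
        and database.put are hypothetical interfaces."""
        with database.transaction():
            database.put(key, value)
            update_tasks.put(key)

    def refresh_cache_job(cache_clients):
        """Timed job: drain pending update tasks and push the fresh value to every cache
        library that should hold the key. cache.refresh is a hypothetical call."""
        while not update_tasks.empty():
            key = update_tasks.get()
            for cache in cache_clients:
                cache.refresh(key)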
Fig. 5 illustrates a detailed block diagram of a server of the first business system that can be used to implement the various techniques described above, according to an embodiment of this specification. The block diagram illustrates the hardware on which the principle shown in FIG. 2 and the method flow shown in FIG. 4 can be implemented. As shown in FIG. 5, the server may include a processor 102, representing a microprocessor or controller, for controlling the overall operation of the server. A data bus 115 may facilitate data transfer between the storage device 140, the processor 102, and a controller 111, and the controller 111 may be used to interact with and control various devices via a device control bus 117.
The server also includes the storage device 140, which may store the client program as well as the access addresses of the database and its cache libraries. The server may also include random access memory (RAM) 120 and read-only memory (ROM) 122. The ROM 122 may store, in a non-volatile manner, programs, utilities or processes to be executed, such as an operating system. The RAM 120, also referred to as memory, provides volatile data storage and holds instructions and data related to the operating system and the client program.
In operation, the client application program is loaded from the storage device 140 into the RAM 120 and, executed by the processor 102, performs the corresponding operations whereby the client accesses the distributed cache to read data from it.
In one example, the client application obtains a read request for reading data in the second business system; in the first mode, it determines a cache library address according to the read request, the address corresponding to a first cache library of the at least one cache library; establishes a first connection with the first cache library according to that address; and sends the read request to the first cache library over the first connection to read first data. In another example, in the second mode, the client application establishes a second connection with the manager and sends the read request to the manager over that connection to read second data from the at least one distributed cache. Of course, the client application may compare the first data and the second data and determine the availability of the first mode from the comparison result.
It should be understood that the server described herein may in many respects utilize or be combined with the previously described method embodiments.
Fig. 6 is a schematic structural diagram of an apparatus provided in an embodiment of this specification. The apparatus corresponds in many respects to, and can be combined with, the method embodiments described above, and its individual modules can be implemented by software, hardware or a combination of the two. Specifically, the apparatus may include an obtaining unit 200 configured to obtain a read request for reading data in the second service system; a determining unit 202 configured to determine, according to the read request, a cache library address corresponding to a first cache library of the at least one cache library; a first connection unit 204 configured to establish a first connection with the first cache library of the at least one cache library according to the cache library address; and a first reading unit 206 configured to send the read request to the first cache library over the first connection and read first data. Note that the determining unit 202, the first connection unit 204 and the first reading unit 206 operate in the first mode.
In one possible solution, the data reading apparatus includes a second connection unit 212 configured to establish a second connection with the manager; and a second reading unit 214 configured to send a read request to the manager via the second connection for reading the second data from the at least one distributed cache via the second connection. The second connection unit 212 and the second reading unit 214 operate in a second mode different from the first mode. In a further possible approach, the data reading apparatus comprises a first mode availability determination unit 226 configured to compare the first data and the second data, and determine the first mode availability according to the comparison result. In another further possible approach, the data reading device includes a mode switch 238; in one example, the mode switch 238 selects one of the first mode and the second mode to read data; in another example, the mode switch 238 switches between a first mode and a second mode.
In one possible solution, the determining unit includes a key determining unit configured to determine a key according to the read request; and the address determination unit is configured to determine the cache bank address according to the key and the key value pair.
In one possible solution, the data reading apparatus further includes a third reading unit that sends a request for reading the database to the manager in a case where the first cache bank fails to hit the data.
According to an embodiment of another aspect, there is also provided a computer-readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method described in connection with fig. 2 and 4.
Those skilled in the art will recognize that, in one or more of the examples described above, the functions described in this invention may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium.
The above-mentioned embodiments, objects, technical solutions and advantages of the present invention are further described in detail, it should be understood that the above-mentioned embodiments are only exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made on the basis of the technical solutions of the present invention should be included in the scope of the present invention.

Claims (16)

1. A data reading method for a first service system to access a second service system, wherein the first service system is connected with the second service system through a network; the second service system is a read-many-write-few system, and comprises a manager, a database and at least one cache library, wherein the at least one cache library is a distributed cache of the database, and the manager provides control over the database and the at least one distributed cache; the first business system is provided with a client program of a manager of the second business system; the method is performed by the client program; the method comprises the following steps:
acquiring a reading request, wherein the reading request is used for reading data in a second service system;
in a first mode, according to a reading request, determining a cache bank address, wherein the cache bank address corresponds to a first cache bank in at least one cache bank;
establishing a first connection with a first cache bank according to the cache bank address;
and sending a reading request to the first cache library through the first connection to read the first data.
2. The method of claim 1, further comprising in the second mode, establishing a second connection with the manager and sending a read request to the manager over the second connection to read the second data from the at least one distributed cache or database over the second connection.
3. The method of claim 2, further comprising comparing the first data and the second data, and determining the availability of the first mode based on the comparison.
4. The method of claim 2, wherein the client program includes a mode switch, the method including selecting one of the first mode and the second mode to read data based on the mode switch.
5. The method of claim 4, wherein the method comprises switching the mode switch to switch between a first mode and a second mode.
6. The method of claim 1, wherein said determining a cache bank address based on the read request comprises determining a key based on the read request; and determining the address of the cache bank according to the key and the key value pair.
7. The method of claim 1, further comprising sending a request to a manager to read the database if the first cache bank fails to hit the data.
8. A data reading device for a first service system to access a second service system, wherein the first service system and the second service system are connected through a network; the second service system is a read-many-write-few system, and comprises a manager, a database and at least one cache library, wherein the at least one cache library is a distributed cache of the database, and the manager provides control over the database and the at least one distributed cache; the data reading apparatus includes:
an obtaining unit configured to obtain a read request, where the read request is used to read data in a second service system;
a determining unit configured to determine, in a first mode, a cache bank address according to a read request, the cache bank address corresponding to a first cache bank of at least one cache bank;
the first connection unit is configured to establish a first connection with a first cache bank in the at least one cache bank according to the cache bank address;
and the first reading unit is configured to send a reading request to the first cache bank through the first connection and read the first data.
9. A data reading apparatus according to claim 8, wherein the data reading apparatus comprises a second connection unit configured to establish a second connection with the manager in the second mode, and a second reading unit configured to send a read request to the manager via the second connection to read the second data from the at least one distributed cache or the database via the second connection.
10. A data reading apparatus according to claim 9, wherein the data reading apparatus comprises a first mode availability determination unit configured to compare the first data and the second data, and determine the first mode availability based on a result of the comparison.
11. A data reading apparatus according to claim 9, wherein the data reading apparatus comprises a mode switching unit for selecting one of the first mode and the second mode to read the data.
12. A data reading apparatus according to claim 11, wherein the mode switch is for switching between a first mode and a second mode.
13. The data reading apparatus according to claim 8, wherein the determination unit includes a key determination unit configured to determine a key according to the reading request; and the address determination unit is configured to determine the cache bank address according to the key and the key value pair.
14. A data reading apparatus according to claim 8, the apparatus further comprising a third reading unit that sends a request to read the database to the manager in case the first cache bank fails to hit the data.
15. A computer-readable storage medium, on which a computer program is stored which, when executed in a computer, causes the computer to carry out the method of any one of claims 1-7.
16. A server, comprising: a storage device, a network interface, and a processor communicatively coupled to the storage device and the network interface, the storage device storing a client program of a manager, the manager providing control of a database and at least one distributed cache, the at least one cache being a distributed cache of the database, the processor being operable to execute the client program to implement the method of any of claims 1-7.
CN201810652330.7A 2018-06-22 2018-06-22 Method and device for a service system to access a read-many-write-few system through a client Active CN108777718B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810652330.7A CN108777718B (en) 2018-06-22 Method and device for a service system to access a read-many-write-few system through a client

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810652330.7A CN108777718B (en) 2018-06-22 Method and device for a service system to access a read-many-write-few system through a client

Publications (2)

Publication Number Publication Date
CN108777718A CN108777718A (en) 2018-11-09
CN108777718B true CN108777718B (en) 2021-03-23

Family

ID=64025420

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810652330.7A Active CN108777718B (en) 2018-06-22 Method and device for a service system to access a read-many-write-few system through a client

Country Status (1)

Country Link
CN (1) CN108777718B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110674147B (en) * 2019-08-28 2023-02-28 视联动力信息技术股份有限公司 Data processing method, device and computer readable storage medium
CN111339139A (en) * 2020-02-21 2020-06-26 广州市百果园信息技术有限公司 Data processing method, device, equipment and storage medium
CN112732751B (en) * 2020-12-30 2023-04-28 北京懿医云科技有限公司 Medical data processing method, device, storage medium and equipment
CN113127570B (en) * 2021-05-18 2022-11-04 上海莉莉丝科技股份有限公司 Data operation method, system, equipment and storage medium of distributed server

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103186552A (en) * 2011-12-28 2013-07-03 北京新媒传信科技有限公司 Method and system for visiting data by client in business service
CN105338026A (en) * 2014-07-24 2016-02-17 阿里巴巴集团控股有限公司 Data resource acquisition method, device and system
CN105574010A (en) * 2014-10-13 2016-05-11 阿里巴巴集团控股有限公司 Data querying method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102750324A (en) * 2012-05-28 2012-10-24 华为技术有限公司 File storage system, file storage device and file access method
CN107231395A (en) * 2016-03-25 2017-10-03 阿里巴巴集团控股有限公司 Date storage method, device and system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103186552A (en) * 2011-12-28 2013-07-03 北京新媒传信科技有限公司 Method and system for visiting data by client in business service
CN105338026A (en) * 2014-07-24 2016-02-17 阿里巴巴集团控股有限公司 Data resource acquisition method, device and system
CN105574010A (en) * 2014-10-13 2016-05-11 阿里巴巴集团控股有限公司 Data querying method and device

Also Published As

Publication number Publication date
CN108777718A (en) 2018-11-09

Similar Documents

Publication Publication Date Title
CN108777718B (en) Method and device for a service system to access a read-many-write-few system through a client
US11126605B2 (en) System and method for clustering distributed hash table entries
US9888062B2 (en) Distributed storage system including a plurality of proxy servers and method for managing objects
US8463867B2 (en) Distributed storage network
US9875262B2 (en) System and method for fetching the latest versions of stored data objects
US8495013B2 (en) Distributed storage system and method for storing objects based on locations
US10789217B2 (en) Hierarchical namespace with strong consistency and horizontal scalability
US9052962B2 (en) Distributed storage of data in a cloud storage system
JP2019212336A (en) Distributed caching cluster management
CN110474940B (en) Request scheduling method, device, electronic equipment and medium
CN111464615A (en) Request processing method, device, server and storage medium
CN111444157B (en) Distributed file system and data access method
US8549274B2 (en) Distributive cache accessing device and method for accelerating to boot remote diskless computers
US11741081B2 (en) Method and system for data handling
CN111399760A (en) NAS cluster metadata processing method and device, NAS gateway and medium
CN113190619B (en) Data read-write method, system, equipment and medium for distributed KV database
CN107181773A (en) Data storage and data managing method, the equipment of distributed memory system
CN112948178A (en) Data processing method, device, system, equipment and medium
CN112631994A (en) Data migration method and system
US11138231B2 (en) Method and system for data handling
CN116974465A (en) Data loading method, device, equipment and computer storage medium
EP3686751A1 (en) Method and system for data handling
CN111221857B (en) Method and apparatus for reading data records from a distributed system
CN113553314A (en) Service processing method, device, equipment and medium of super-convergence system
US20180275874A1 (en) Storage system and processing method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20200927

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant after: Innovative advanced technology Co.,Ltd.

Address before: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant before: Advanced innovation technology Co.,Ltd.

Effective date of registration: 20200927

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant after: Advanced innovation technology Co.,Ltd.

Address before: Fourth Floor, Capital Building, P.O. Box 847, Grand Cayman, Cayman Islands

Applicant before: Alibaba Group Holding Ltd.

GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20240206

Address after: 128 Beach Road, #20-01 Guoco Midtown, Singapore

Patentee after: Advanced Nova Technology (Singapore) Holdings Ltd.

Country or region after: Singapore

Address before: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Patentee before: Innovative advanced technology Co.,Ltd.

Country or region before: United Kingdom