CN105635263A - Access processing method based on background cache and adapter - Google Patents

Access processing method based on background cache and adapter

Info

Publication number
CN105635263A
Authority
CN
China
Prior art keywords
machine
shared memory
access
address
addresses
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510994074.6A
Other languages
Chinese (zh)
Other versions
CN105635263B (en)
Inventor
Zhang Jianxin
Kang Wei
Liu Shuo
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Travelsky Technology Co Ltd
China Travelsky Holding Co
Original Assignee
China Travelsky Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Travelsky Technology Co Ltd filed Critical China Travelsky Technology Co Ltd
Priority to CN201510994074.6A priority Critical patent/CN105635263B/en
Publication of CN105635263A publication Critical patent/CN105635263A/en
Application granted granted Critical
Publication of CN105635263B publication Critical patent/CN105635263B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/56 Provisioning of proxy services
    • H04L67/568 Storing data temporarily at an intermediate stage, e.g. caching

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Hardware Redundancy (AREA)

Abstract

The embodiment of the invention discloses an access processing method based on background cache, an adapter, and a system thereof. The method comprises the steps that: the adapter receives a request string sent by a foreground system, the request string being formed by concatenating access request groups with their corresponding host IP addresses and standby machine IP addresses; whether all the IP addresses in the request string exist in a shared memory is queried; if not all the IP addresses in the request string exist in the shared memory, the shared memory is initialized and the query is repeated; and if all the IP addresses in the request string exist in the shared memory, the machine with the smallest current IP connection count is found and connected, a result is obtained, and the result is returned to the foreground system. The access load of the background computing system is thereby balanced, so that the data pressure on the background computing system is greatly reduced, the cache hit rate is improved, and efficient active-standby switching is achieved.

Description

Access processing method and adapter based on background cache
Technical field
The present invention relates to data processing technology, and in particular to an access processing method and adapter based on background cache.
Background technology
In recent years the domestic civil aviation industry has developed rapidly, shopping engines have become particularly important, and the background computing system faces enormous pressure when processing fare data. In order to reduce the data pressure on the background computing system, balance access, and switch quickly in the event of a machine failure, an adapter for accessing the computing system becomes indispensable.
The adapter is responsible for connecting the foreground system to the computing system. If the computing system fails or its access load is too high, the data exchange of the foreground system needs to be redirected to the backup system. At this point the adapter is expected to switch to the backup system automatically and start an emergency flow that preserves the performance of the foreground system. After the computing system recovers, the adapter switches the data back to the computing system and resumes using the computing system's hosts.
Considering that the background computing system has a per-machine cache mechanism, the cache hit rate can be improved by classifying requests so that the same class of requests always reaches a particular machine; doing so, however, destroys the load balance of the background computing system as a whole. Therefore, both caching and load balancing during access need to be optimized.
Therefore, a new access pattern or access method needs to be proposed, so that when the foreground system calls the background computing system, load-balanced access to the computing system is achieved, the data pressure on the computing system is reduced, and efficient active-standby switching and a high cache hit rate are attained.
Summary of the invention
To solve the existing technical problems, the embodiments of the present invention provide an access processing method based on background cache and an adapter.
To achieve the above object, the technical solution of the embodiments of the present invention is realized as follows:
An embodiment of the present invention provides an access processing method based on background cache, the method comprising:
the adapter receives a request string sent by a foreground system, the request string being formed by concatenating access request groups with their corresponding host IP addresses and standby machine IP addresses;
querying whether all IP addresses in the request string exist in a shared memory;
if not all IP addresses in the request string are in the shared memory, initializing the shared memory and querying again whether all IP addresses in the request string exist in the shared memory;
if all IP addresses in the request string are in the shared memory, finding the machine with the smallest current IP connection count, connecting to that machine, obtaining a result, and returning the result to the foreground system.
The method further comprises: before connecting to the machine with the smallest current IP connection count, updating the IP connection count of the machine, and controlling the control block of the shared memory to place a read-write lock on the data block of the machine; and after the result is obtained, controlling the control block to release the read-write lock on the data block of the machine and updating the IP connection count of the machine again.
Further, if all IP addresses in the request string are in the shared memory, it is determined whether currently only the host of the computing system is accessed, or both the host and the standby machine of the computing system are accessed; if currently only the host of the computing system is accessed, the host IP address list of the shared memory is accessed to find the machine with the smallest IP connection count; if currently both the host and the standby machine of the computing system are accessed, the host IP address list and the standby machine IP address list of the shared memory are accessed to find the machine with the smallest IP connection count.
The initialization comprises:
controlling the control block of the shared memory to place a read-write lock on all data blocks;
querying again whether all IP addresses of the request string are in the shared memory; if not, adding the IP addresses that are not in the shared memory to the shared memory, updating the IP address count of the control block, and controlling the control block to release the read-write lock on all data blocks;
if so, controlling the control block to release the read-write lock on all data blocks.
Before receiving the access request sent by the foreground system, the method further comprises:
the foreground system, in a format agreed by protocol, groups the access requests according to a predetermined strategy, each group corresponding to one group of host IP addresses and one group of standby machine IP addresses, and sends to the adapter the request string formed by concatenating each access request group with its corresponding host IP addresses and standby machine IP addresses.
An embodiment of the present invention further provides an adapter, the adapter comprising:
a receiving module, configured to receive a request string sent by a foreground system, the request string being formed by concatenating access request groups with their corresponding host IP addresses and standby machine IP addresses;
a query module, configured to query whether all IP addresses in the request string exist in a shared memory;
an initialization module, configured to, when the result of the query module is that not all IP addresses in the request string are in the shared memory, initialize the shared memory and return to the query module;
an access module, configured to, when the result of the query module is that all IP addresses in the request string are in the shared memory, find the machine with the smallest current IP connection count, connect to that machine, obtain a result, and return the result to the foreground system.
The access module is further configured to:
before connecting to the machine with the smallest current IP connection count, update the IP connection count of the machine, and control the control block of the shared memory to place a read-write lock on the data block of the machine;
after the result is obtained, control the control block to release the read-write lock on the data block of the machine and update the IP connection count of the machine again.
The access module is configured to: if all IP addresses in the request string are in the shared memory, determine whether currently only the host of the computing system is accessed, or both the host and the standby machine of the computing system are accessed; if currently only the host of the computing system is accessed, access the host IP address list of the shared memory to find the machine with the smallest IP connection count; if currently both the host and the standby machine of the computing system are accessed, access the host IP address list and the standby machine IP address list of the shared memory to find the machine with the smallest IP connection count.
The initialization module is configured to: control the control block of the shared memory to place a read-write lock on all data blocks; query again whether all IP addresses of the request string are in the shared memory; if not, add the IP addresses that are not in the shared memory to the shared memory, update the IP address count of the control block, and control the control block to release the read-write lock on all data blocks; if so, control the control block to release the read-write lock on all data blocks.
An embodiment of the present invention further provides an access processing system based on background cache, the system comprising a foreground system, a computing system, and the above adapter, wherein the foreground system is configured to, in a format agreed by protocol, group the access requests according to a predetermined strategy, each group corresponding to one group of host IP addresses and one group of standby machine IP addresses, and send to the adapter the request string formed by concatenating each access request group with its corresponding host IP addresses and standby machine IP addresses.
The embodiments of the present invention combine techniques such as shared memory, counters, and control locks so that when the foreground system calls the background computing system, access to the background computing system is load balanced, which greatly reduces the data pressure on the background computing system. Together with the caching strategy of the background computing system, the load-balanced access strategy further improves the cache hit rate and achieves efficient active-standby switching.
Brief description of the drawings
In the accompanying drawings (which are not necessarily drawn to scale), like reference numerals may describe similar parts in different views. Like reference numerals with different letter suffixes may represent different examples of similar parts. The drawings generally show, by way of example and not limitation, the embodiments discussed herein.
Fig. 1 is a schematic flowchart of an access processing method according to an embodiment of the present invention;
Fig. 2 is another schematic flowchart of an access processing method according to an embodiment of the present invention;
Fig. 3 is another schematic flowchart of an access processing method according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of an adapter according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of an access processing system according to an embodiment of the present invention.
Detailed description of the embodiments
The embodiments of the present invention combine techniques such as shared memory, counters, and control locks to convert the direct access of the foreground system to the computing system into a three-layer call relation coordinated by the adapter in the middle, so that when the foreground system calls the background computing system, access to the background computing system is load balanced.
As shown in Fig. 1, the access processing method based on background cache provided by an embodiment of the present invention may mainly comprise the following steps:
Step 101: the foreground system, in a format agreed by protocol, groups the access requests according to a strategy, each group corresponding to one group of host IP addresses and one group of standby machine IP addresses, concatenates each access request with its corresponding host IP addresses and standby machine IP addresses into a request string, and sends it to the adapter; the adapter receives the request string, parses it, and decomposes it into the host IP address group and the standby machine IP address group;
Step 102: the adapter traverses the IP address list in the shared memory and queries whether all IP addresses of the request string exist in the shared memory;
If all IP addresses of the request string exist in the shared memory, continue with step 109; if not all IP addresses of the request string exist in the shared memory, continue with step 103;
Step 103: the control block of the shared memory places a read-write lock on all of its data blocks;
Step 104: traverse the IP address list in the shared memory again;
Step 105: query whether the IP addresses of the request string are in the IP address list of the shared memory; if so, continue with step 108; otherwise continue with step 106;
Step 106: add the IP addresses that are not in the shared memory to the shared memory;
Step 107: update the IP address count of the control block;
Step 108: the control block of the shared memory releases the read-write lock on all data blocks;
Step 109: determine whether currently only the host system of the computing system is accessed, or both the host system and the standby system of the computing system are accessed;
Step 1010: if only the host is accessed, access the shared memory, query the host IP address list, and find the IP address with the smallest connection count; if both the host system and the standby system are accessed, access the shared memory, query the host IP address list and the standby machine IP address list, and find the machine with the smallest current IP connection count;
Step 1011: the control block increases the IP connection count of the machine with the smallest IP connection count by 1 and places a read-write lock on that machine's data block;
Step 1012: the adapter connects to the machine with the smallest current IP connection count and obtains a result;
Step 1013: the control block unlocks the data block of the machine with the smallest IP connection count and decreases its IP connection count by 1;
Step 1014: the adapter returns the obtained result to the foreground system, and the flow ends.
A timeout may also occur in step 1012; if no result is received before the timeout, the adapter still performs step 1013 and, in step 1014, returns a timeout message to the foreground system.
Specifically, the access method of the embodiment of the present invention may also be realized by the flow shown in Fig. 2, as follows:
Step 201: the foreground system, in a format agreed by protocol, groups the access requests according to a strategy, each group corresponding to one group of host IP addresses and one group of standby machine IP addresses, concatenates each access request with its corresponding host IP addresses and standby machine IP addresses into a request string, and sends it to the adapter; the adapter receives the request string, parses it, and decomposes it into the host IP address group and the standby machine IP address group;
Step 202: using the IP addresses of the machines in the computing system provided by the foreground system, the adapter queries whether these addresses are all in the shared memory;
Here, the concrete shared memory location is found through the shared memory key value (ipckey) configured in the system, and the corresponding structured data is read from that memory, for example as sketched below.
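By way of non-limiting illustration, the lookup by the configured key may be performed with the System V shared memory calls available under Linux. The following C++ sketch is an assumption made for clarity, not the patented implementation; the helper name attach_shared_memory is hypothetical.

// Illustrative sketch only: locate the shared memory segment through the configured
// IPC key (ipckey) and attach it, so that the caller can interpret the raw memory
// as the control block followed by the data blocks.
#include <sys/ipc.h>
#include <sys/shm.h>
#include <cstddef>
#include <cstdio>

void* attach_shared_memory(key_t ipckey, std::size_t size) {
    // Look up (or create on first use) the segment identified by the IPC key.
    int shmid = shmget(ipckey, size, IPC_CREAT | 0666);
    if (shmid == -1) {
        perror("shmget");
        return nullptr;
    }
    // Attach the segment into this process's address space.
    void* addr = shmat(shmid, nullptr, 0);
    return addr == reinterpret_cast<void*>(-1) ? nullptr : addr;
}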
Step 203: if not all of them exist in the shared memory, go to step 204; otherwise, go directly to step 208;
Step 204: the control block of the shared memory places a read-write lock on all of its data blocks, preventing write conflicts between multiple processes;
Here, the control block is data inside the shared memory; its locking mechanism prevents write conflicts between multiple processes, for example as sketched below.
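One possible realization of such a cross-process lock, given purely as an assumption rather than the patented mechanism, is a process-shared pthread mutex stored inside the control block; a System V semaphore would serve equally well.

#include <pthread.h>

// The mutex lives inside the shared memory, so every adapter process that attaches
// the segment sees the same lock object.
struct ControlBlockLock {
    pthread_mutex_t mutex;
};

// Called once by the process that creates and initializes the shared memory.
void init_control_block_lock(ControlBlockLock* lock) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    // PTHREAD_PROCESS_SHARED makes the mutex usable across processes.
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&lock->mutex, &attr);
    pthread_mutexattr_destroy(&attr);
}

// Writers bracket every update of the IP list or connection counts with these calls.
void lock_data_blocks(ControlBlockLock* lock)   { pthread_mutex_lock(&lock->mutex); }
void unlock_data_blocks(ControlBlockLock* lock) { pthread_mutex_unlock(&lock->mutex); }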
Step 205: after locking, traverse the IP address list in the shared memory again and check whether any addresses have not yet been initialized in the shared memory. If all addresses are already present, continue with step 207; otherwise continue with step 206;
Step 206: add the IP addresses that are not in the shared memory to the shared memory;
Step 207: update the IP address count of the control block, and the control block releases the read-write lock on all data blocks;
In steps 206-207, the control block needs to record the total number of existing IP addresses (the IP address count) and build an index for each IP address.
Step 208: the adapter checks the active/standby information provided by the monitoring program and determines whether currently only the host system of the computing system is accessed, or both the host system and the standby system of the computing system are accessed;
Here, a monitoring program is used to monitor the host system of the computing system and check whether the host system is operating normally. A corresponding value is set in the shared memory to indicate to the adapter whether to use the host system only, or the host system and the standby system.
Step 209: access the shared memory and find the machine with the smallest IP connection count in the host (standby) machine IP address list.
Here, the machine with the smallest IP connection count in the host (or standby) IP address list is found through the shared memory. Concretely: the memory address where the control block resides is found through the ipckey, the memory is interpreted as structured data, and, with the IP address as the key, the connection count corresponding to each IP is read in a loop. Each time the computing system is requested through an IP address, the IP connection count of that address is increased by 1, and after the result (or a timeout) is obtained the IP connection count is decreased by 1.
If the host is currently in use, the machine with the smallest IP connection count is looked for among the hosts; if the standby machines are currently in use, the machine with the smallest IP connection count is looked for among the standby machines, for example as sketched below.
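The selection itself reduces to a linear scan over the connection counters. The following sketch assumes the shared memory has already been formatted into (IP address, connection count) pairs and is illustrative only.

#include <string>
#include <utility>
#include <vector>

// Return the IP address with the fewest current connections, or an empty string if
// the list is empty; ties resolve to the earliest entry, which is sufficient here.
std::string least_connected_ip(const std::vector<std::pair<std::string, int>>& ipConnCounts) {
    std::string bestIp;
    int bestCount = -1;
    for (const auto& entry : ipConnCounts) {
        if (bestCount < 0 || entry.second < bestCount) {
            bestIp = entry.first;
            bestCount = entry.second;
        }
    }
    return bestIp;
}

In the adapter this scan covers only the host list when the shared memory state is P, and both the host and standby lists when the state is A.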
Step 210: the control block increases the access count (i.e. the IP connection count) of the machine with the smallest IP connection count by 1 and places a read-write lock on its data block;
Step 211: the adapter connects to the machine with the smallest connection count in the background computing system, sends the request, and obtains the result or a timeout; the control block then releases the read-write lock on the data block of that machine and decreases its access count (i.e. the IP connection count) by 1;
In step 211, the adapter sends the request to the selected computing system machine, obtains the result, and returns the result to the main process.
Step 212: return the obtained result information or timeout information to the foreground system.
The embodiments of the present invention employ shared memory technology, control lock technology, and so on; the adapter and the monitoring program may be developed in C++, which is not described further here.
As shown in Fig. 3, the access processing method of the embodiment of the present invention may also be realized by the following flow:
Step 301: the foreground system, in a format agreed by protocol, groups the access requests according to a strategy, each group corresponding to one group of host IP addresses and one group of standby machine IP addresses, concatenates them into a request string, and sends it to the adapter of the computing system.
Here, the foreground may split the access requests according to a certain key-value strategy; each group of requests split according to this strategy corresponds to a particular key value, each key value is pre-configured with n host machines (n being an integer not less than 1) and m standby machines (m being an integer not less than 1), and each group of requests is sent to the adapter together with the corresponding (host/standby) IP addresses.
Since data is sent between processes, the foreground system and the adapter may be deployed in the same environment. The embodiment of the present invention is deployed in Tuxedo mode: the foreground system may split a request according to the above principle into multiple requests and send them to the adapters through the tpcall (tpacall) mechanism of Tuxedo; multiple adapters can each receive a corresponding request and then parse the addresses in the request, which is where the transmission protocol between the two comes in.
Since there can be multiple IP addresses, and they carry an active/standby relation, the request is of variable length; the usual approach is to use delimiters or to mark byte lengths. For example, if delimiters are used, delimiter 1 can separate the host IPs, the standby IPs, and the actual request, and delimiter 2 can separate the individual IP addresses within the host (or standby) IP address list, for example as sketched below.
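As a concrete but purely illustrative example of the delimiter variant, the sketch below uses '|' as delimiter 1 and ',' as delimiter 2; the actual characters are a matter of the protocol agreed between the foreground system and the adapter, and the structure and function names here are assumptions.

#include <sstream>
#include <string>
#include <vector>

struct ParsedRequest {
    std::vector<std::string> hostIps;     // host IP address list
    std::vector<std::string> standbyIps;  // standby machine IP address list
    std::string payload;                  // the actual access request
};

static std::vector<std::string> split(const std::string& s, char delim) {
    std::vector<std::string> parts;
    std::stringstream ss(s);
    std::string item;
    while (std::getline(ss, item, delim)) parts.push_back(item);
    return parts;
}

// Assumed layout: "<hostIp1,hostIp2,...>|<standbyIp1,standbyIp2,...>|<request payload>"
ParsedRequest parse_request_string(const std::string& requestString) {
    ParsedRequest parsed;
    std::vector<std::string> sections = split(requestString, '|');   // delimiter 1
    if (sections.size() >= 3) {
        parsed.hostIps    = split(sections[0], ',');                  // delimiter 2
        parsed.standbyIps = split(sections[1], ',');
        parsed.payload    = sections[2];
    }
    return parsed;
}

If the request payload itself may contain the delimiter characters, the byte-length marking mentioned above is the safer variant.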
Step 302: using the computing system IP addresses provided, the adapter checks whether these addresses are all in the shared memory.
Specifically, shared memory structured data is used: under Linux, the ipckey is mapped to a memory address holding fixed data, the data is interpreted as a structure, and the corresponding IP information is searched for in that structured data.
Step 303: if not all of them exist in the shared memory, initialize through steps 304 to 307; if they all exist, go directly to step 308.
Step 304: the control block of the shared memory locks all of its data blocks;
In practical applications, the data in the shared memory is operated on directly, and locking and unlocking this data prevents conflicts when the IP data is read and written later.
In practical applications, the shared memory is created when the service starts and is mainly divided into two parts: the control block and the data blocks. The control block is used to lock and unlock the read-write of the data blocks, and each data block stores the relevant information of one machine. One control block can manage multiple data blocks at the same time, and one data block corresponds to one machine of the computing system.
The control block may comprise the following information: the current background state, the number of machines stored in the shared memory, the IP address count, and the index built for each IP. The current background state (status) value is P or A: P indicates that only the host computing system is accessed and the standby computing system is not accessed, and A indicates that both the host computing system and the standby computing system can be accessed. The number of machines stored in the shared memory is recorded as serverNo.
Each data block stores all the information relevant to one machine; a data block may comprise information such as the machine's IP (sitaServerIp[16]), the machine's port (port), and the machine's IP connection count (connNums). A possible layout is sketched below.
Step 305: after locking, traverse the IP address list in the shared memory again and check whether any addresses have not yet been initialized in the shared memory. If all addresses are already present, continue with step 307; otherwise continue with step 306;
Specifically, the IP address information in the shared memory is read again, to guard against the data in this memory having been modified by another process between the first check and the lock, which would otherwise cause problems. If this IP address information is still found to be missing on the second read, the IP address information is written into the structured data and the corresponding data is initialized. Finally the overall IP structure index is updated and the whole structure is written back into the shared memory.
Step 306: add the IP addresses that are not in the shared memory to the shared memory.
Step 307: update the IP address count in the control block, and the control block unlocks all data blocks. A sketch of this double-checked initialization follows.
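The sketch below covers steps 304-307 under the assumed layout (repeated here in trimmed form so the example stands alone): the lock is taken, the IP list is re-scanned because another process may have completed the initialization in the meantime, any missing addresses are appended, the address count is refreshed, and the lock is released. All names are illustrative.

#include <pthread.h>
#include <cstring>
#include <string>
#include <vector>

struct DataBlock { char sitaServerIp[16]; int port; int connNums; };
struct ControlBlock { char status; int serverNo; int ipCount; pthread_mutex_t rwLock; };

static bool contains_ip(const DataBlock* blocks, int count, const std::string& ip) {
    for (int i = 0; i < count; ++i) {
        if (ip == blocks[i].sitaServerIp) return true;
    }
    return false;
}

void initialize_missing_ips(ControlBlock* ctrl, DataBlock* blocks, int capacity,
                            const std::vector<std::string>& requestIps) {
    pthread_mutex_lock(&ctrl->rwLock);                  // step 304: lock all data blocks
    for (const std::string& ip : requestIps) {          // step 305: re-check under the lock
        if (!contains_ip(blocks, ctrl->ipCount, ip) && ctrl->ipCount < capacity) {
            DataBlock& nb = blocks[ctrl->ipCount];      // step 306: append the missing IP
            std::strncpy(nb.sitaServerIp, ip.c_str(), sizeof(nb.sitaServerIp) - 1);
            nb.sitaServerIp[sizeof(nb.sitaServerIp) - 1] = '\0';
            nb.port = 0;
            nb.connNums = 0;
            ++ctrl->ipCount;                            // step 307: update the address count
        }
    }
    pthread_mutex_unlock(&ctrl->rwLock);                // step 307: unlock all data blocks
}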
Step 308: check the active/standby information provided by the monitoring program and determine whether currently only the host system of the computing system is accessed, or both the host system and the standby system of the computing system are accessed.
Here, the monitoring program is used to receive monitoring messages and perform the related operations. If the monitoring system finds that the resource strain of the computing system exceeds the set threshold, the shared memory state is updated to A; if the host system of the computing system reports recovery, the shared memory state is updated to P.
Step 309: access the shared memory and find the machine with the smallest IP connection count in the host (standby) machine IP address list.
Step 310: the control block increases the access count of the machine with the smallest IP connection count by 1 and places a read-write lock on its data block;
Step 311: the adapter connects to the machine with the smallest IP connection count, sends the request to the computing system, and obtains the result returned by the computing system or timeout information; the control block then unlocks the data block of the machine with the smallest IP connection count and decreases its access count by 1.
Step 312: return the result information (or timeout information) to the foreground system.
In the embodiments of the present invention, access requests are processed through the cooperation of three parts: the foreground system, the adapter used for access processing, and the background computing system. When the foreground system calls the background computing system, techniques such as shared memory, counters, and control locks are used to achieve load-balanced access to the computing system, reducing the pressure on the computing system, enabling active-standby switching, and yielding a high cache hit rate. The shared memory technology used by the embodiments of the present invention, together with periodic monitoring of the background system and counting locks on the accessed system, accesses the computing system according to the strategy, which strengthens the cache hit rate of the computing system and balances the load to a certain extent.
As shown in Fig. 4, an embodiment of the present invention further provides an adapter for realizing the above access processing method; the adapter comprises the following parts:
a receiving module 41, configured to receive a request string sent by a foreground system, the request string being formed by concatenating access request groups with their corresponding host IP addresses and standby machine IP addresses;
a query module 42, configured to query whether all IP addresses in the request string exist in a shared memory;
an initialization module 43, configured to, when the result of the query module 42 is that not all IP addresses in the request string are in the shared memory, initialize the shared memory and return to the query module;
an access module 44, configured to, when the result of the query module 42 is that all IP addresses in the request string are in the shared memory, find the machine with the smallest current IP connection count, connect to that machine, obtain a result, and return the result to the foreground system.
The access module 44 is further configured to: before connecting to the machine with the smallest current IP connection count, update the IP connection count of the machine, and control the control block of the shared memory to place a read-write lock on the data block of the machine; and after the result is obtained, control the control block to release the read-write lock on the data block of the machine and update the IP connection count of the machine again.
The access module 44 is configured to: if all IP addresses in the request string are in the shared memory, determine whether currently only the host of the computing system is accessed, or both the host and the standby machine of the computing system are accessed; if currently only the host of the computing system is accessed, access the host IP address list of the shared memory to find the machine with the smallest IP connection count; if currently both the host and the standby machine of the computing system are accessed, access the host IP address list and the standby machine IP address list of the shared memory to find the machine with the smallest IP connection count.
The initialization module 43 is configured to: control the control block of the shared memory to place a read-write lock on all data blocks; query again whether all IP addresses of the request string are in the shared memory; if not, add the IP addresses that are not in the shared memory to the shared memory, update the IP address count of the control block, and control the control block to release the read-write lock on all data blocks; if so, control the control block to release the read-write lock on all data blocks.
As shown in Fig. 5, an embodiment of the present invention further provides an access processing system, which comprises a foreground system, a background computing system, and the adapter shown in Fig. 4. In actual deployment there are multiple adapters, multiple foreground systems, and likewise multiple computing systems, and one foreground system can hand a request string to multiple adapters for processing at the same time. Here, the foreground system may, in a format agreed by protocol, group the access requests according to a predetermined strategy, each group corresponding to one group of host IP addresses and one group of standby machine IP addresses, and send to the adapter the request string formed by concatenating each access request group with its corresponding host IP addresses and standby machine IP addresses.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, optical storage, and the like) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to work in a specific way, so that the instructions stored in this computer-readable memory produce an article of manufacture comprising an instruction apparatus that realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The above are only preferred embodiments of the present invention and are not intended to limit the protection scope of the present invention.

Claims (10)

1. An access processing method based on background cache, characterized in that the method comprises:
an adapter receives a request string sent by a foreground system, the request string being formed by concatenating access request groups with their corresponding host IP addresses and standby machine IP addresses;
querying whether all IP addresses in the request string exist in a shared memory;
if not all IP addresses in the request string are in the shared memory, initializing the shared memory and querying again whether all IP addresses in the request string exist in the shared memory;
if all IP addresses in the request string are in the shared memory, finding the machine with the smallest current IP connection count, connecting to that machine, obtaining a result, and returning the result to the foreground system.
2. The method according to claim 1, characterized in that the method further comprises:
before connecting to the machine with the smallest current IP connection count, updating the IP connection count of the machine, and controlling the control block of the shared memory to place a read-write lock on the data block of the machine;
after the result is obtained, controlling the control block to release the read-write lock on the data block of the machine and updating the IP connection count of the machine again.
3. The method according to claim 1 or 2, characterized in that:
if all IP addresses in the request string are in the shared memory, it is determined whether currently only the host of the computing system is accessed, or both the host and the standby machine of the computing system are accessed; if currently only the host of the computing system is accessed, the host IP address list of the shared memory is accessed to find the machine with the smallest IP connection count; if currently both the host and the standby machine of the computing system are accessed, the host IP address list and the standby machine IP address list of the shared memory are accessed to find the machine with the smallest IP connection count.
4. The method according to claim 1, characterized in that the initialization comprises:
controlling the control block of the shared memory to place a read-write lock on all data blocks;
querying again whether all IP addresses of the request string are in the shared memory; if not, adding the IP addresses that are not in the shared memory to the shared memory, updating the IP address count of the control block, and controlling the control block to release the read-write lock on all data blocks;
if so, controlling the control block to release the read-write lock on all data blocks.
5. The method according to claim 1, characterized in that, before receiving the access request sent by the foreground system, the method further comprises:
the foreground system, in a format agreed by protocol, groups the access requests according to a predetermined strategy, each group corresponding to one group of host IP addresses and one group of standby machine IP addresses, and sends to the adapter the request string formed by concatenating each access request group with its corresponding host IP addresses and standby machine IP addresses.
6. An adapter, characterized in that the adapter comprises:
a receiving module, configured to receive a request string sent by a foreground system, the request string being formed by concatenating access request groups with their corresponding host IP addresses and standby machine IP addresses;
a query module, configured to query whether all IP addresses in the request string exist in a shared memory;
an initialization module, configured to, when the result of the query module is that not all IP addresses in the request string are in the shared memory, initialize the shared memory and return to the query module;
an access module, configured to, when the result of the query module is that all IP addresses in the request string are in the shared memory, find the machine with the smallest current IP connection count, connect to that machine, obtain a result, and return the result to the foreground system.
7. The adapter according to claim 6, characterized in that the access module is further configured to:
before connecting to the machine with the smallest current IP connection count, update the IP connection count of the machine, and control the control block of the shared memory to place a read-write lock on the data block of the machine;
after the result is obtained, control the control block to release the read-write lock on the data block of the machine and update the IP connection count of the machine again.
8. The adapter according to claim 6 or 7, characterized in that the access module is configured to:
if all IP addresses in the request string are in the shared memory, determine whether currently only the host of the computing system is accessed, or both the host and the standby machine of the computing system are accessed; if currently only the host of the computing system is accessed, access the host IP address list of the shared memory to find the machine with the smallest IP connection count; if currently both the host and the standby machine of the computing system are accessed, access the host IP address list and the standby machine IP address list of the shared memory to find the machine with the smallest IP connection count.
9. The adapter according to claim 6, characterized in that the initialization module is configured to:
control the control block of the shared memory to place a read-write lock on all data blocks;
query again whether all IP addresses of the request string are in the shared memory; if not, add the IP addresses that are not in the shared memory to the shared memory, update the IP address count of the control block, and control the control block to release the read-write lock on all data blocks;
if so, control the control block to release the read-write lock on all data blocks.
10. An access processing system based on background cache, characterized in that the system comprises a foreground system, a computing system, and the adapter according to any one of claims 6 to 9, wherein
the foreground system is configured to, in a format agreed by protocol, group the access requests according to a predetermined strategy, each group corresponding to one group of host IP addresses and one group of standby machine IP addresses, and send to the adapter the request string formed by concatenating each access request group with its corresponding host IP addresses and standby machine IP addresses.
CN201510994074.6A 2015-12-25 2015-12-25 Access processing method and adapter based on backstage caching Active CN105635263B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510994074.6A CN105635263B (en) 2015-12-25 2015-12-25 Access processing method and adapter based on backstage caching

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510994074.6A CN105635263B (en) 2015-12-25 2015-12-25 Access processing method and adapter based on backstage caching

Publications (2)

Publication Number Publication Date
CN105635263A true CN105635263A (en) 2016-06-01
CN105635263B CN105635263B (en) 2019-03-05

Family

ID=56049735

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510994074.6A Active CN105635263B (en) 2015-12-25 2015-12-25 Access processing method and adapter based on backstage caching

Country Status (1)

Country Link
CN (1) CN105635263B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110597904A (en) * 2018-05-25 2019-12-20 海能达通信股份有限公司 Data synchronization method, standby machine and host machine

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1133993A (en) * 1995-03-15 1996-10-23 三菱电机株式会社 Multi-computer system
CA2230550A1 (en) * 1997-03-14 1998-09-14 At&T Corp. Hosting a network service on a cluster of servers using a single-address image
CN1968166A (en) * 2005-11-18 2007-05-23 联通新时讯通信有限公司 Network structure-based intelligent terminal application system
CN101340327A (en) * 2008-08-21 2009-01-07 腾讯科技(深圳)有限公司 Method, system and domain name parsing server implementing load balance of network server
CN101938504A (en) * 2009-06-30 2011-01-05 深圳市融创天下科技发展有限公司 Cluster server intelligent dispatching method and system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1133993A (en) * 1995-03-15 1996-10-23 三菱电机株式会社 Multi-computer system
CA2230550A1 (en) * 1997-03-14 1998-09-14 At&T Corp. Hosting a network service on a cluster of servers using a single-address image
CN1968166A (en) * 2005-11-18 2007-05-23 联通新时讯通信有限公司 Network structure-based intelligent terminal application system
CN101340327A (en) * 2008-08-21 2009-01-07 腾讯科技(深圳)有限公司 Method, system and domain name parsing server implementing load balance of network server
CN101938504A (en) * 2009-06-30 2011-01-05 深圳市融创天下科技发展有限公司 Cluster server intelligent dispatching method and system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110597904A (en) * 2018-05-25 2019-12-20 海能达通信股份有限公司 Data synchronization method, standby machine and host machine
CN110597904B (en) * 2018-05-25 2023-11-24 海能达通信股份有限公司 Data synchronization method, standby machine and host machine

Also Published As

Publication number Publication date
CN105635263B (en) 2019-03-05

Similar Documents

Publication Publication Date Title
CN103248667B (en) A kind of resource access method of distributed system and system
CN102043686B (en) Disaster tolerance method, backup server and system of memory database
CN103019884B (en) Memory page de-weight method and memory page de-weight device based on virtual machine snapshot
CN106708653B (en) Mixed tax big data security protection method based on erasure code and multiple copies
KR20190020105A (en) Method and device for distributing streaming data
CN102118281A (en) Method, device and network equipment for automatic testing
US20230305724A1 (en) Data management method and apparatus, computer device, and storage medium
CN103559319A (en) Cache synchronization method and equipment for distributed cluster file system
CN108595346B (en) Feature library file management method and device
CN103136215A (en) Data read-write method and device of storage system
CN106775501A (en) Elimination of Data Redundancy method and system based on nonvolatile memory equipment
JP2015528957A (en) Distributed file system, file access method, and client device
CN106326014A (en) Resource access method and device
CN105608197A (en) Method and system for obtaining Memcache data under high concurrency
CN115712500A (en) Memory release method, memory recovery method, memory release device, memory recovery device, computer equipment and storage medium
CN116301656A (en) Data storage method, system and equipment based on log structure merging tree
CN113392040B (en) Address mapping method, device and equipment
CN111382429A (en) Instruction execution method, instruction execution device and storage medium
CN105635263A (en) Access processing method based on background cache and adapter
CN103136343A (en) Shared resource real-time interaction method
CN110221778A (en) Processing method, system, storage medium and the electronic equipment of hotel's data
CN105068896A (en) Data processing method and device based on RAID backup
CN102760212B (en) Virtual desktop malicious code detecting method based on storage mirroring cloning mechanism
CN102053863B (en) Method and system for dynamically interacting data
CN117056363B (en) Data caching method, system, equipment and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP02 Change in the address of a patent holder
CP02 Change in the address of a patent holder

Address after: 100085 Yumin Street, Houshayu Town, Shunyi District, Beijing

Patentee after: CHINA TRAVELSKY HOLDING Co.

Address before: 100010, Beijing, Dongcheng District East Fourth Street, West 157

Patentee before: CHINA TRAVELSKY HOLDING Co.