CN110300132A - Server data caching method, device and system - Google Patents
Server data caching method, device and system
- Publication number
- CN110300132A (application number CN201810238424.XA)
- Authority
- CN
- China
- Prior art keywords
- server
- cache server
- cache
- access request
- buffer memory
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/1004—Server selection for load balancing
- H04L67/1008—Server selection for load balancing based on parameters of servers, e.g. available memory or workload
- H04L67/568—Storing data temporarily at an intermediate stage, e.g. caching
- H04L67/63—Routing a service request depending on the request content or context
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Information Transfer Between Computers (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Computer And Data Communications (AREA)
Abstract
The present invention provides a server data caching method, device, and system, relating to the field of computer networks. It solves the problem of excessive bandwidth consumption and time delay caused by multiple cache servers pulling data from a second-level cache or going back to the origin. The method comprises: when a uniform resource locator (URL) access request meeting preset traffic-splitting conditions is detected, an Ethernet device configures a shared cache server group for the URL access request, the shared cache server group comprising a first cache server and at least one second cache server; and the Ethernet device notifies each second cache server to obtain the cached data of the URL access request through the first cache server. The technical solution provided by the invention is applicable to load balancing systems and achieves fast, efficient, stable, and reliable load balancing of access hot spots.
Description
Technical field
The present invention relates to the field of computer networks, and in particular to a server data caching method, device, and system.
Background art
Cache server clusters usually use the Switch software to disperse back-end traffic. When overheated traffic concentrates on a certain Switch server in the cluster, the requests are usually scattered and shared evenly across the back-end cache servers. Cache servers that do not hold the corresponding data must then fetch it from the second-level cache or go back to the origin, occupying large amounts of bandwidth and possibly overwhelming the origin site or the second-level cache.
A typical service cluster architecture is shown in Fig. 1. Taking a hot-spot URL that saturates the network card as an example, the prior art focuses on scattering the hot-spot URL and distributing it across all cache servers of the back-end group; the architecture after the scattering is shown in Fig. 2.
The prior art has the following problems:
1. For the other machines of the same group, the data cached for this URL before the scattering becomes invalid cache, which reduces resource utilization.
2. When a cache server holds no copy of the data, it pulls the data from the second-level cache or the origin site, wasting bandwidth. When the number of cache servers involved after the scattering is large, the bandwidth of the second-level cache or the origin site is very likely to be saturated; in severe cases the origin site goes down and the service becomes unavailable.
3. No preparation is made in advance for burst traffic. Ad-hoc scattering and scheduling incur time delay: the scattering takes at least 15 minutes to take effect, which is ineffective at relieving the situation.
4. After the scattering, the cache servers must again go back to the origin for the data, facing problems such as slow cross-network or cross-province links, or even link interruption, which degrades the user experience.
Summary of the invention
The present invention seeks to address the problems described above.
According to a first aspect of the invention, a server data caching method is provided, comprising:
when a uniform resource locator (URL) access request meeting preset traffic-splitting conditions is detected, an Ethernet device configures a shared cache server group for the URL access request, the shared cache server group comprising a first cache server and at least one second cache server; and
the Ethernet device notifies each second cache server to obtain the cached data of the URL access request through the first cache server.
Preferably, the method further comprises:
the Ethernet device detects the working state of the back-end cache servers, and synchronizes the detection results of the cache servers' working state with the other Ethernet devices.
Preferably, the step of the Ethernet device configuring a shared cache server group for the URL access request comprises:
configuring the cache server corresponding to this machine as the first cache server;
determining, according to a preset shared-cache algorithm, the other Ethernet devices to which the URL access request is assigned; and
configuring the cache servers that correspond to those other Ethernet devices and whose working state is normal as the second cache servers.
Preferably, the step of the Ethernet device notifying each second cache server to obtain the cached data of the URL access request through the first cache server comprises:
the Ethernet device sends the IP address of the first cache server to each second cache server.
According to another aspect of the invention, a further server data caching method is provided, comprising:
when receiving a URL access request, a cache server checks the shared cache server group configuration of the URL; and
when this machine is configured as a second cache server, the cache server obtains the cached data of the URL access request from the first cache server in the shared cache server group.
Preferably, after the step of the cache server checking the shared cache server group configuration of the URL when receiving the URL access request, the method further comprises:
when this machine is not configured into the shared cache server group of the URL, performing a back-to-origin operation against the second-level cache or the origin data server to obtain the cached data.
Preferably, the method further comprises:
receiving a notification sent by an Ethernet device, the notification containing the configuration of the shared cache server group.
According to another aspect of the invention, a server data caching device is provided, comprising:
a shared-cache configuration module, configured to, when a URL access request meeting the preset traffic-splitting conditions is detected, configure a shared cache server group for the URL access request, the shared cache server group comprising a first cache server and at least one second cache server; and
a configuration distribution module, configured to notify each second cache server to obtain the cached data of the URL access request through the first cache server.
Preferably, the device further comprises:
a resource detection module, configured to detect the working state of the back-end cache servers; and
a resource information synchronization module, configured to synchronize the detection results of the cache servers' working state with the other Ethernet devices.
According to another aspect of the invention, a further server data caching device is provided, comprising:
a shared-configuration determining module, configured to check the shared cache server group configuration of the URL when a URL access request is received; and
a cached-data acquisition module, configured to obtain the cached data of the URL access request from the first cache server in the shared cache server group when this machine is configured as a second cache server.
Preferably, the cached-data acquisition module is further configured to perform a back-to-origin operation against the second-level cache or the origin data server to obtain the cached data when this machine is not configured into the shared cache server group of the URL.
According to another aspect of the invention, a server data caching system is provided, comprising at least one Ethernet device and the cache server corresponding to each Ethernet device;
the Ethernet device is configured to, when a URL access request meeting the preset traffic-splitting conditions is detected, configure the shared cache server group for the URL access request, the shared cache server group comprising a first cache server and at least one second cache server, and to notify each second cache server to obtain the cached data of the URL access request through the first cache server; and
the cache server is configured to check the shared cache server group configuration of the URL when receiving a URL access request and, when this machine is configured as a second cache server, obtain the cached data of the URL access request from the first cache server in the shared cache server group.
The present invention provides a server data caching method, device, and system. When a URL access request meeting the preset traffic-splitting conditions is detected, an Ethernet device configures a shared cache server group for the URL access request, the shared cache server group comprising a first cache server and at least one second cache server, and notifies each second cache server to obtain the cached data of the URL access request through the first cache server. This forms a network architecture in which the second-level cache or the origin is pulled only once and the cached data is shared among the cache servers across which the traffic is split. It solves the problem of excessive bandwidth consumption and time delay caused by multiple cache servers pulling from the second-level cache or going back to the origin, achieves fast, efficient, stable, and reliable load balancing of access hot spots, and makes rational use of the advantage of in-group caching: machines of the same group are generally under the same switching network, so fetches do not cross networks, are faster, and occupy no node egress bandwidth.
Other features and advantages of the present invention will become apparent from the following description of exemplary embodiments, read with reference to the accompanying drawings.
Detailed description of the invention
The accompanying drawings, which are incorporated into and constitute a part of the specification, illustrate embodiments of the present invention and serve, together with the description, to explain the principles of the invention. In the drawings, like reference numerals denote like elements. The drawings described below are some, not all, embodiments of the present invention; those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 schematically illustrates an existing server cluster architecture;
Fig. 2 schematically illustrates an existing system architecture with scattered, distributed caching;
Fig. 3 schematically illustrates the flow of a server data caching method provided by embodiment one of the present invention;
Fig. 4 schematically illustrates the architecture of a server data caching system provided by embodiment two of the present invention;
Fig. 5 schematically illustrates the structure of a server data caching device provided by embodiment three of the present invention;
Fig. 6 schematically illustrates the structure of another server data caching device provided by embodiment three of the present invention;
Fig. 7 schematically illustrates the architecture of a server data caching system provided by embodiment three of the present invention.
Detailed description of the embodiments
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely in conjunction with the accompanying drawings. Obviously, the described embodiments are a part, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on these embodiments without creative effort fall within the protection scope of the present invention. It should be noted that, in the absence of conflict, the embodiments of the present application and the features therein may be combined with one another arbitrarily.
The prior art has the following problems:
1. For the other machines of the same group, the data cached for this URL before the scattering becomes invalid cache, which reduces resource utilization.
2. When a cache server holds no copy of the data, it pulls the data from the second-level cache or the origin site, wasting bandwidth. When the number of cache servers involved after the scattering is large, the bandwidth of the second-level cache or the origin site is very likely to be saturated; in severe cases the origin site goes down and the service becomes unavailable.
3. No preparation is made in advance for burst traffic; the scattering is configured only after the hot-spot URL has appeared, with time delay and low efficiency, and the configuration is cumbersome, so it is ineffective at relieving the situation.
4. After the scattering, the cache servers must again go back to the origin for the data, facing problems such as slow cross-network or cross-province links, or even link interruption, which degrades the user experience.
To solve the above problems, embodiments of the present invention provide a server data caching method, device, and system. A shared cache server group is configured for a hot-spot URL access request, and the first cache server whose data is already cached serves as the data source: the second cache servers in the shared cache server group request the data from the first cache server, making maximum use of the data it has cached. This effectively solves the problem of excessive bandwidth consumption and time delay caused by multiple cache servers pulling from the second-level cache or going back to the origin, and achieves fast, efficient, stable, and reliable load balancing of access hot spots.
Embodiment one of the present invention is described first in conjunction with the drawings.
The embodiment of the invention provides a server data caching method. The flow of scattering and caching the traffic of a hot-spot URL with this method is shown in Fig. 3, and comprises:
Step 301: an Ethernet device detects the working state of the back-end cache servers and synchronizes the detection results of the cache servers' working state with the other Ethernet devices.
In the embodiments of the present invention, the Ethernet device is specifically a Switch server, and the cache server may specifically be a Cache server, or another server with a caching function such as a tengine server or a squid server. This step is described taking a Cache server as an example, as follows:
1. The Switch server judges whether the URL access request is configured with local-machine hashing and a shared-cache configuration (which indicates whether the URL access request supports a shared cache server group). If not, it works in the default mode. If a shared-cache configuration exists, it proceeds to the next step.
2. Each Switch server internally maintains a liveness-probe pool of back-end cache servers containing all available cache servers. The working state of every cache server is checked once per detection cycle (for example, whether its service port is normal); cache servers whose detection result is abnormal are removed from the liveness pool, and the detection results are synchronized to the other Switch servers.
The Switch defines whether back-end distribution uses group hashing or local-machine hashing, and the calculation formula is stored in the proxy module, where custom functionality cannot be implemented. Because the shared cache server group must calculate the IP of the machine in the group that holds the cache, the algorithm needs to be extracted, and the in-group liveness-probe chain must be synchronized along with the back-end liveness checks. This ensures that the other Switches in the group encounter no problem when hashing the URL to the local machine, and avoids hashing URL access requests onto a downed cache server or onto a cache server whose traffic is already saturated.
It should be noted that this step has no strict sequential relationship with the subsequent steps: the probing of the back-end cache servers continues throughout and is synchronized and updated at any time.
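By way of illustration only, the detection cycle of step 301 might be sketched as below in Go. The TCP port probe and the syncToPeers hook are assumptions made for the example; the text above specifies only that the working state is checked once per cycle and the result synchronized to the other Switch servers.

```go
package main

import (
	"fmt"
	"net"
	"sync"
	"time"
)

// BackendPool is the liveness-probe pool of back-end cache servers that
// each Switch server maintains internally.
type BackendPool struct {
	mu    sync.RWMutex
	alive map[string]bool // cache server address -> working state
}

// probe checks whether the service port of one cache server answers.
// A failed dial marks the server abnormal and removes it from the pool.
func (p *BackendPool) probe(addr string) {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	p.mu.Lock()
	defer p.mu.Unlock()
	if err != nil {
		delete(p.alive, addr)
		return
	}
	conn.Close()
	p.alive[addr] = true
}

// probeLoop runs one detection cycle per interval, then hands a snapshot
// of the results to syncToPeers, a placeholder for synchronizing with the
// other Switch servers.
func (p *BackendPool) probeLoop(addrs []string, interval time.Duration, syncToPeers func(map[string]bool)) {
	for {
		for _, a := range addrs {
			p.probe(a)
		}
		p.mu.RLock()
		snapshot := make(map[string]bool, len(p.alive))
		for k, v := range p.alive {
			snapshot[k] = v
		}
		p.mu.RUnlock()
		syncToPeers(snapshot)
		time.Sleep(interval)
	}
}

func main() {
	pool := &BackendPool{alive: map[string]bool{}}
	go pool.probeLoop(
		[]string{"10.0.0.11:80", "10.0.0.12:80"},
		30*time.Second,
		func(s map[string]bool) { fmt.Println("sync to peer Switch servers:", s) },
	)
	select {} // probing continues throughout, as noted above
}
```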
Step 302: when a URL access request meeting the preset traffic-splitting conditions is detected, the Ethernet device configures a shared cache server group for the URL access request.
In the embodiment of the present invention, the shared cache server group comprises a first cache server and at least one second cache server. The first cache server and the second cache servers are identical cache servers; only when they are configured into the same shared cache server group does the first cache server, as the designated cache server, go back to the origin or pull from the second-level cache to obtain the cached data, while the second cache servers obtain the cached data from the first cache server. The same cache server may serve as the first cache server in the shared cache server group configuration of one URL access request and as a second cache server in that of another.
The embodiments of the present invention also provide a server data caching system, whose architecture is shown in Fig. 4. The Switch servers distribute the URL access requests, the shared cache server group shares the cached data, and the LVS load balancing system, as the front-end load balancer, evenly distributes requests onto the Switch servers in any situation. Multiple Switch servers constitute the Switch request-distribution system, which calculates the first cache server in the back-end shared Cache server group that holds the data and sends its IP address to the other, second cache servers; this is the in-group shared-cache configuration part. According to the information sent by the Switch server, the second cache servers in the shared Cache server group fetch the data from the first cache server of the same group. The second-level cache, as a caching buffer system, relieves back-to-origin pressure. After the hot-spot URL access requests are scattered, this distributed architecture, with the hot-spot URL configured for in-group shared caching, directly improves the carrying capacity of every Cache server and resolves the problem before the hot-spot URL takes hold.
This step specifically comprises:
1. configuring the cache server corresponding to this machine as the first cache server;
2. determining, according to the preset shared-cache algorithm, the other Ethernet devices to which the URL access request is assigned; and
3. configuring the cache servers that correspond to those other Ethernet devices and whose working state is normal as the second cache servers.
A URL access request that meets the traffic-splitting conditions may also be called a hot-spot URL, and is identified by the business: the customer may announce it (for example, the requests for a page running a campaign today are expected to be very heavy), or the number of requests for the URL is so large that server processes, CPU resources, and the like are exhausted, in which case it is regarded as a hot-spot URL.
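As a sketch of the configuration step, the function below maps a hot-spot URL deterministically onto one healthy cache server from the synchronized pool. A plain modulo hash stands in for the preset shared-cache algorithm (the description later mentions a consistent-hashing module), and the server addresses are illustrative.

```go
package main

import (
	"fmt"
	"hash/crc32"
	"sort"
)

// firstCacheServer deterministically maps a hot-spot URL onto one healthy
// cache server from the synchronized pool. Every Switch server runs the
// same calculation over the same pool, so all of them agree on which
// machine is the first cache server for this URL.
func firstCacheServer(url string, healthy []string) string {
	if len(healthy) == 0 {
		return "" // no healthy back end: caller falls back to default mode
	}
	sort.Strings(healthy) // identical ordering on every Switch server
	h := crc32.ChecksumIEEE([]byte(url))
	return healthy[int(h%uint32(len(healthy)))]
}

func main() {
	pool := []string{"10.0.0.12:80", "10.0.0.11:80", "10.0.0.13:80"}
	// Every Switch server computes the same first cache server for this URL.
	fmt.Println(firstCacheServer("http://example.com/hot/page", pool))
}
```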
Step 303: the Ethernet device notifies each second cache server to obtain the cached data of the URL access request through the first cache server.
The Ethernet device sends the IP address of the first cache server to each second cache server.
Still taking the architecture shown in Fig. 4 as an example, the Switch server calls the consistent-hashing module to calculate the IP address of the first cache server that holds the cached data, sets a header share_loop:IP, and passes it to each cache server through the Switch servers. The algorithm extracted earlier takes the synchronized address pool as input, calculates the IP of the first cache server, and notifies each cache server of this IP, which guarantees that the second cache servers in the same shared cache server group all fetch the data from the first cache server.
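A minimal sketch of this notification step, assuming the Switch server acts as an HTTP reverse proxy: it sets the share_loop header to the first cache server's IP before forwarding the request to a second cache server. The share_loop:IP header and the idea of notifying the second cache server come from the description; the proxy wiring and the addresses are assumptions for the example.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

// newSwitchProxy forwards a hot-spot URL request to a second cache server,
// carrying the first cache server's IP in the share_loop header so the
// receiver knows which machine in the group already holds the cached data.
func newSwitchProxy(secondCacheAddr, firstCacheIP string) *httputil.ReverseProxy {
	target, err := url.Parse("http://" + secondCacheAddr)
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(target)
	director := proxy.Director
	proxy.Director = func(r *http.Request) {
		director(r)
		r.Header.Set("share_loop", firstCacheIP) // notify: fetch from this IP
	}
	return proxy
}

func main() {
	// Requests reaching this Switch server are handed to the second cache
	// server 10.0.0.13, which is told that 10.0.0.11 holds the cached copy.
	log.Fatal(http.ListenAndServe(":8080", newSwitchProxy("10.0.0.13:80", "10.0.0.11")))
}
```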
Step 304: the cache server receives the notification sent by the Ethernet device.
In this step, the notification contains the configuration of the shared cache server group. From this configuration it can be determined whether the identity of this cache server is the first cache server or a second cache server. Further, the notification is a header share_loop:IP carrying the IP address of the first cache server.
The cache server saves the IP indicated by the notification. For example, a dynamic_catch module may be defined to take the IP information out of the header and store it in a variable CacheIP.
Step 305: when receiving a URL access request, the cache server checks the shared cache server group configuration of the URL.
In this step, upon receiving a URL access request, the logic judges whether the data of this URL is already cached on this machine, and if so, returns the data directly. If this machine holds no cached data associated with the URL, then, when this machine is configured as a second cache server, the cache server obtains the cached data of the URL access request from the first cache server in the shared cache server group. Specifically, when the IP address of the first cache server is saved on this machine and this machine is not the first cache server, the flow proceeds to step 306 and the cached data is requested from that IP address. When this machine has no shared cache server group configuration, or is itself the first cache server, the flow proceeds to step 307 to pull the data.
Still following the example in step 304, the dynamically loaded Cache_src function module of the cache server obtains the value of the CacheIP variable and confirms through the liveness-probe function between Caches that the service at that IP is normal; otherwise it goes directly to the parent node to obtain the cache.
Step 306: when this machine is configured as a second cache server, the cache server obtains the cached data of the URL access request from the first cache server in the shared cache server group.
Still following the example in step 304, in this step the Cache_request module of the cache server initiates a back-to-origin-like request to the IP address stored in CacheIP and fetches the data without passing on the share_loop header, avoiding an endless loop.
Step 307: when this machine is not configured into the shared cache server group of the URL, it performs a back-to-origin operation against the second-level cache or the origin data server to obtain the cached data.
Embodiment two of the present invention is described below in conjunction with the drawings.
The present invention provides a server data caching system whose architecture is shown in Fig. 4. When a hot-spot URL is anticipated, or a hot spot bursts temporarily, the URL configuration is modified to add the shared-cache setting and the in-group local-machine hash setting, configuring the shared server group for that URL.
S1: The front-end LVS load balancing spreads the hot-spot URL roughly evenly over the Switch servers. Assume the request falls on Switch server B.
S2: Switch server B detects that the URL is configured with the local-machine hash algorithm and in-group shared caching, and therefore calculates two values in the system: the local-machine hash value, which determines that the request is forwarded to cache server E, and the group hash value, which identifies the one server in the group that holds the cache, assumed to be D.
S3: The Switch system passes the request directly to cache server E, attaching a header share_loop:D_IP (share_loop tells the Cache to enable the in-group shared-cache function, and D_IP is the calculated D server that holds the cache).
S4: Cache server E first judges internally whether a cache of the URL exists, and returns the data directly if so. If not, it takes the D_IP out of share_loop and synchronously requests the data from cache server D.
S5: Cache server E obtains the data quickly over the intranet and caches one copy, directly removing the need to fetch the data from the second-level cache or the origin site.
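The S1 to S5 exchange can be traced end to end with a toy, self-contained simulation; the server letters follow the example above, and the addresses and payload are illustrative.

```go
package main

import "fmt"

// A toy trace of S1 to S5: Switch server B routes a hot-spot URL to cache
// server E and tells it, via share_loop, that D already holds the data.
func main() {
	url := "http://example.com/hot/page"
	dIP, eIP := "10.0.0.4", "10.0.0.5" // D holds the cache; E receives the request

	caches := map[string]map[string][]byte{
		dIP: {url: []byte("hot payload")}, // D is warm (group hash result)
		eIP: {},                           // E is cold (local-machine hash result)
	}

	// S3: B passes the request to E with the header share_loop:D_IP.
	shareLoop := dIP
	// S4: E misses locally, so it fetches from D_IP instead of the origin.
	if _, ok := caches[eIP][url]; !ok {
		data := caches[shareLoop][url]
		caches[eIP][url] = data // S5: E caches one copy, fetched over the intranet
		fmt.Printf("E fetched %q from D (%s); no back-to-origin needed\n", data, dIP)
	}
}
```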
Embodiment three of the present invention is described below in conjunction with the drawings.
The embodiment of the invention provides a server data caching device, whose structure is shown in Fig. 5, comprising:
a shared-cache configuration module 501, configured to, when a URL access request meeting the preset traffic-splitting conditions is detected, configure a shared cache server group for the URL access request, the shared cache server group comprising a first cache server and at least one second cache server; and
a configuration distribution module 502, configured to notify each second cache server to obtain the cached data of the URL access request through the first cache server.
Preferably, the device further comprises:
a resource detection module 503, configured to detect the working state of the back-end cache servers.
Preferably, the device further comprises:
a resource information synchronization module 504, configured to synchronize the detection results of the cache servers' working state with the other Ethernet devices.
The server data caching device shown in Fig. 5 may be integrated in an Ethernet device, with the corresponding functions realized by the Ethernet device.
A further server data caching device, whose structure is shown in Fig. 6, comprises:
a shared-configuration determining module 601, configured to check the shared cache server group configuration of the URL when a URL access request is received; and
a cached-data acquisition module 602, configured to obtain the cached data of the URL access request from the first cache server in the shared cache server group when this machine is configured as a second cache server.
Preferably, the cached-data acquisition module 602 is further configured to perform a back-to-origin operation against the second-level cache or the origin data server to obtain the cached data when this machine is not configured into the shared cache server group of the URL.
The server data caching device shown in Fig. 6 may be integrated on a cache server, with the corresponding functions realized by the cache server.
The embodiment of the invention also provides a server data caching system, whose architecture is shown in Fig. 7, comprising at least one Ethernet device and the cache server corresponding to each Ethernet device;
the Ethernet device is configured to, when a URL access request meeting the preset traffic-splitting conditions is detected, configure the shared cache server group for the URL access request, the shared cache server group comprising a first cache server and at least one second cache server, and to notify each second cache server to obtain the cached data of the URL access request through the first cache server; and
the cache server is configured to check the shared cache server group configuration of the URL when receiving a URL access request and, when this machine is configured as a second cache server, obtain the cached data of the URL access request from the first cache server in the shared cache server group.
The embodiments of the present invention provide a server data caching method, device, and system. When a URL access request meeting the preset traffic-splitting conditions is detected, an Ethernet device configures a shared cache server group for the URL access request, the shared cache server group comprising a first cache server and at least one second cache server, and notifies each second cache server to obtain the cached data of the URL access request through the first cache server. This forms a network architecture in which the second-level cache or the origin is pulled only once and the cached data is shared among the cache servers across which the traffic is split. It solves the problem of excessive bandwidth consumption and time delay caused by multiple cache servers pulling from the second-level cache or going back to the origin, achieves fast, efficient, stable, and reliable load balancing of access hot spots, and makes rational use of the advantage of in-group caching: machines of the same group are generally under the same switching network, so fetches do not cross networks, are faster, and occupy no node egress bandwidth.
The above descriptions may be implemented individually or combined in various ways, and all such variants fall within the protection scope of the present invention.
Finally, it should be noted that the above embodiments merely illustrate, rather than limit, the technical solutions of the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (12)
1. A server data caching method, characterized by comprising:
when a uniform resource locator (URL) access request meeting preset traffic-splitting conditions is detected, an Ethernet device configures a shared cache server group for the URL access request, the shared cache server group comprising a first cache server and at least one second cache server; and
the Ethernet device notifies each second cache server to obtain the cached data of the URL access request through the first cache server.
2. The server data caching method according to claim 1, characterized in that the method further comprises:
the Ethernet device detects the working state of the back-end cache servers, and synchronizes the detection results of the cache servers' working state with the other Ethernet devices.
3. The server data caching method according to claim 2, characterized in that the step of the Ethernet device configuring a shared cache server group for the URL access request comprises:
configuring the cache server corresponding to this machine as the first cache server;
determining, according to a preset shared-cache algorithm, the other Ethernet devices to which the URL access request is assigned; and
configuring the cache servers that correspond to those other Ethernet devices and whose working state is normal as the second cache servers.
4. The server data caching method according to claim 3, characterized in that the step of the Ethernet device notifying each second cache server to obtain the cached data of the URL access request through the first cache server comprises:
the Ethernet device sends the IP address of the first cache server to each second cache server.
5. A server data caching method, characterized by comprising:
when receiving a URL access request, a cache server checks the shared cache server group configuration of the URL; and
when this machine is configured as a second cache server, the cache server obtains the cached data of the URL access request from the first cache server in the shared cache server group.
6. The server data caching method according to claim 5, characterized in that, after the step of the cache server checking the shared cache server group configuration of the URL when receiving the URL access request, the method further comprises:
when this machine is not configured into the shared cache server group of the URL, performing a back-to-origin operation against the second-level cache or the origin data server to obtain the cached data.
7. The server data caching method according to claim 5, characterized in that the method further comprises:
receiving a notification sent by an Ethernet device, the notification containing the configuration of the shared cache server group.
8. A server data caching device, characterized by comprising:
a shared-cache configuration module, configured to, when a URL access request meeting preset traffic-splitting conditions is detected, configure a shared cache server group for the URL access request, the shared cache server group comprising a first cache server and at least one second cache server; and
a configuration distribution module, configured to notify each second cache server to obtain the cached data of the URL access request through the first cache server.
9. The server data caching device according to claim 8, characterized in that the device further comprises:
a resource detection module, configured to detect the working state of the back-end cache servers; and
a resource information synchronization module, configured to synchronize the detection results of the cache servers' working state with the other Ethernet devices.
10. A server data caching device, characterized by comprising:
a shared-configuration determining module, configured to check the shared cache server group configuration of the URL when a URL access request is received; and
a cached-data acquisition module, configured to obtain the cached data of the URL access request from the first cache server in the shared cache server group when this machine is configured as a second cache server.
11. The server data caching device according to claim 10, characterized in that:
the cached-data acquisition module is further configured to perform a back-to-origin operation against the second-level cache or the origin data server to obtain the cached data when this machine is not configured into the shared cache server group of the URL.
12. A server data caching system, characterized by comprising at least one Ethernet device and the cache server corresponding to each Ethernet device;
the Ethernet device is configured to, when a URL access request meeting preset traffic-splitting conditions is detected, configure the shared cache server group for the URL access request, the shared cache server group comprising a first cache server and at least one second cache server, and to notify each second cache server to obtain the cached data of the URL access request through the first cache server; and
the cache server is configured to check the shared cache server group configuration of the URL when receiving a URL access request and, when this machine is configured as a second cache server, obtain the cached data of the URL access request from the first cache server in the shared cache server group.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810238424.XA CN110300132A (en) | 2018-03-22 | 2018-03-22 | Server data caching method, device and system |
CN201911234778.8A CN111131402B (en) | 2018-03-22 | 2018-03-22 | Method, device, equipment and medium for configuring shared cache server group |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810238424.XA CN110300132A (en) | 2018-03-22 | 2018-03-22 | Server data caching method, device and system |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911234778.8A Division CN111131402B (en) | 2018-03-22 | 2018-03-22 | Method, device, equipment and medium for configuring shared cache server group |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110300132A true CN110300132A (en) | 2019-10-01 |
Family
ID=68025663
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911234778.8A Active CN111131402B (en) | 2018-03-22 | 2018-03-22 | Method, device, equipment and medium for configuring shared cache server group |
CN201810238424.XA Pending CN110300132A (en) | 2018-03-22 | 2018-03-22 | Server data caching method, device and system |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911234778.8A Active CN111131402B (en) | 2018-03-22 | 2018-03-22 | Method, device, equipment and medium for configuring shared cache server group |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN111131402B (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090282159A1 (en) * | 2008-04-09 | 2009-11-12 | Level 3 Communications, Llc | Content delivery in a network |
CN101764824A (en) * | 2010-01-28 | 2010-06-30 | 深圳市同洲电子股份有限公司 | Distributed cache control method, device and system |
CN102986189A (en) * | 2010-05-09 | 2013-03-20 | 思杰系统有限公司 | Systems and methods for allocation of classes of service to network connections corresponding to virtual channels |
CN104301741A (en) * | 2014-09-26 | 2015-01-21 | 北京奇艺世纪科技有限公司 | Data live broadcast system and method |
CN104320487A (en) * | 2014-11-11 | 2015-01-28 | 网宿科技股份有限公司 | HTTP dispatching system and method for content delivery network |
CN104580393A (en) * | 2014-12-18 | 2015-04-29 | 北京蓝汛通信技术有限责任公司 | Method and device for expanding server cluster system and server cluster system |
CN104935680A (en) * | 2015-06-18 | 2015-09-23 | 中国互联网络信息中心 | Recursive domain name service system and method of multi-level shared cache |
CN107517241A (en) * | 2016-06-16 | 2017-12-26 | 中兴通讯股份有限公司 | Request scheduling method and device |
CN107801086A (en) * | 2017-10-20 | 2018-03-13 | 广东省南方数字电视无线传播有限公司 | The dispatching method and system of more caching servers |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5864854A (en) * | 1996-01-05 | 1999-01-26 | Lsi Logic Corporation | System and method for maintaining a shared cache look-up table |
JP4221646B2 (en) * | 2002-06-26 | 2009-02-12 | 日本電気株式会社 | Shared cache server |
US9065835B2 (en) * | 2008-07-23 | 2015-06-23 | International Business Machines Corporation | Redirecting web content |
US9081501B2 (en) * | 2010-01-08 | 2015-07-14 | International Business Machines Corporation | Multi-petascale highly efficient parallel supercomputer |
US9361116B2 (en) * | 2012-12-28 | 2016-06-07 | Intel Corporation | Apparatus and method for low-latency invocation of accelerators |
CN104935648B (en) * | 2015-06-03 | 2018-07-17 | 北京快网科技有限公司 | The CDN system and file of a kind of high performance-price ratio push away in advance, the method for fragment cache memory |
- 2018-03-22: divisional application CN201911234778.8A, published as CN111131402B (status: Active)
- 2018-03-22: application CN201810238424.XA, published as CN110300132A (status: Pending)
Also Published As
Publication number | Publication date |
---|---|
CN111131402A (en) | 2020-05-08 |
CN111131402B (en) | 2022-06-03 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20191001 |