CN104580435A - Method and device for caching network connections - Google Patents
- Publication number: CN104580435A (application number CN201410836954.6A)
- Authority: China (CN)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications (H—Electricity; H04—Electric communication technique; H04L—Transmission of digital information, e.g. telegraphic communication)
- H04L67/02 — Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
- H04L65/40 — Support for services or applications (network arrangements, protocols or services for supporting real-time applications in data packet communication)
- H04L67/1001 — Protocols in which an application is distributed across nodes in the network, for accessing one among a plurality of replicated servers
- H04L69/16 — Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]
Abstract
Embodiments of the invention provide a method and device for caching network connections. The method comprises: after a server access is completed, determining the latest selection probability of each network connection in a cache according to the characteristics of the server load-balancing mode, the latest selection probability representing the probability that the server corresponding to the network connection will be selected next time under the load-balancing mode; sorting the network connections in the cache according to the latest selection probability; replacing the network connection with the minimum latest selection probability in the cache; and retaining the network connection with the maximum latest selection probability in the cache. The embodiments avoid the cache misses that would be caused by replacing the connection with the maximum latest selection probability, thereby improving the cache hit rate of the network connections.
Description
Technical field
The present invention relates to the field of communication technology, and in particular to a caching method and device for network connections.
Background art
A typical web access follows the client-server model. As shown in Figure 1, a program such as a browser in the client 101 sends a request, and a web server 103 responds to the request and returns the corresponding data. A proxy server 102 sits between the client 101 and the web servers 103, forwarding requests and data, and distributing requests evenly across the web servers 103 through load balancing. Suppose load balancing over four web servers is implemented in round-robin (polling) mode; the requests of each client are then assigned to different web servers one by one in the order web server 1, web server 2, web server 3, web server 4.
When the number of concurrent accesses is large, the proxy server keeps the TCP (Transmission Control Protocol) connections to the web servers alive without consuming excessive system resources, using a preset number of buffers to cache the connections. The preset number is usually smaller than the number of web servers; for example, when there are four web servers, there may be three buffers.
Suppose buffer 1, buffer 2 and buffer 3 are used to cache the TCP connections of web servers 1 to 4, and that in the current state buffer 1, buffer 2 and buffer 3 cache the TCP connections of web server 1, web server 2 and web server 3 respectively. When the existing scheme performs cache replacement in combination with the round-robin mode described above, it may, after the content in buffer 1 has been used to access web server 1, replace the content in buffer 2, causing a cache miss when web server 2 is accessed and forcing the connection to web server 2 to be re-established; further, after web server 2 is accessed, the content in buffer 3 is replaced, causing a cache miss when web server 3 is accessed and forcing that connection to be re-established. The existing scheme therefore suffers from a low cache hit rate.
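The failure mode described above can be reproduced with a short simulation. The sketch below is illustrative only: the patent gives no code, and a least-recently-used (LRU) policy stands in for the "existing scheme". With four round-robin servers and three cached connections, LRU eviction discards exactly the connection the balancer will need next, so every access misses:

```python
from collections import OrderedDict

def simulate_lru_hits(num_servers=4, cache_size=3, accesses=100):
    """Count cache hits when an LRU buffer holds connections to
    round-robin-selected servers (the background-art scenario)."""
    cache = OrderedDict()  # server id -> connection, ordered by recency
    hits = 0
    for i in range(accesses):
        server = i % num_servers           # round-robin selection
        if server in cache:
            hits += 1
            cache.move_to_end(server)      # mark as most recently used
        else:
            if len(cache) >= cache_size:
                cache.popitem(last=False)  # evict least recently used
            cache[server] = f"conn-{server}"
    return hits

# LRU always evicts the very connection round-robin will need next:
print(simulate_lru_hits())  # prints 0
```

With a cache as large as the server pool the problem disappears (only the first pass misses), which is why the interesting case is a cache smaller than the number of servers.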
Summary of the invention
In view of the above problems, the present invention is proposed in order to provide a caching method and a caching device for network connections that overcome, or at least partially solve, the problems described above.
According to one aspect of the present invention, a caching method for network connections is provided, comprising:
after a server access is completed, determining the latest selection probability of each network connection in a cache according to the characteristics of the server load-balancing mode, wherein the latest selection probability represents the probability that the server corresponding to the network connection will be selected next time under the load-balancing mode;
sorting the network connections in the cache according to the latest selection probability; and
replacing the network connection with the minimum latest selection probability in the cache, and retaining the network connection with the maximum latest selection probability in the cache.
Optionally, the load-balancing mode is the round-robin mode, and the step of determining the latest selection probability of each network connection in the cache according to the characteristics of the server load-balancing mode comprises:
determining the latest selection probability of each network connection in the cache according to the last-used time of each connection, wherein the connection whose last-used time is furthest from the current time has the maximum latest selection probability, and the connection whose last-used time is nearest to the current time has the minimum latest selection probability.
Optionally, the load-balancing mode is the hash mode, and the step of determining the latest selection probability of each network connection in the cache according to the characteristics of the server load-balancing mode comprises:
determining the latest selection probability of each network connection in the cache according to the last selection probability of each connection, wherein the connection that was selected last time has the minimum latest selection probability, and the connection with the minimum last selection probability has the maximum latest selection probability.
Optionally, the load-balancing mode is the least-connections mode, and the step of determining the latest selection probability of each network connection in the cache according to the characteristics of the server load-balancing mode comprises:
determining the latest selection probability of each network connection in the cache according to the number of requests historically handled by the server corresponding to each connection, wherein the connection of the server with the largest historical request count has the minimum latest selection probability, and the connection of the server with the smallest historical request count has the maximum latest selection probability.
Optionally, the load-balancing mode is the fastest-response mode, and the step of determining the latest selection probability of each network connection in the cache according to the characteristics of the server load-balancing mode comprises:
determining the latest selection probability of each network connection in the cache according to the response time of the server corresponding to each connection, wherein the connection of the server with the shortest response time has the maximum latest selection probability, and the connection of the server with the longest response time has the minimum latest selection probability.
According to a further aspect of the present invention, a caching device for network connections is provided, comprising:
a probability determination module, configured to determine, after a server access is completed, the latest selection probability of each network connection in a cache according to the characteristics of the server load-balancing mode, wherein the latest selection probability represents the probability that the server corresponding to the network connection will be selected next time under the load-balancing mode;
a probability sorting module, configured to sort the network connections in the cache according to the latest selection probability; and
a replacement and retention module, configured to replace the network connection with the minimum latest selection probability in the cache, and to retain the network connection with the maximum latest selection probability in the cache.
Optionally, the load-balancing mode is the round-robin mode, and the probability determination module comprises:
a first probability determination submodule, configured to determine the latest selection probability of each network connection in the cache according to the last-used time of each connection, wherein the connection whose last-used time is furthest from the current time has the maximum latest selection probability, and the connection whose last-used time is nearest to the current time has the minimum latest selection probability.
Optionally, the load-balancing mode is the hash mode, and the probability determination module comprises:
a second probability determination submodule, configured to determine the latest selection probability of each network connection in the cache according to the last selection probability of each connection, wherein the connection that was selected last time has the minimum latest selection probability, and the connection with the minimum last selection probability has the maximum latest selection probability.
Optionally, the load-balancing mode is the least-connections mode, and the probability determination module comprises:
a third probability determination submodule, configured to determine the latest selection probability of each network connection in the cache according to the number of requests historically handled by the corresponding server, wherein the connection of the server with the largest historical request count has the minimum latest selection probability, and the connection of the server with the smallest historical request count has the maximum latest selection probability.
Optionally, the load-balancing mode is the fastest-response mode, and the probability determination module comprises:
a fourth probability determination submodule, configured to determine the latest selection probability of each network connection in the cache according to the response time of the corresponding server, wherein the connection of the server with the shortest response time has the maximum latest selection probability, and the connection of the server with the longest response time has the minimum latest selection probability.
According to the caching method and device of the embodiments of the present invention, after a server access is completed, the latest selection probability of each network connection in the cache is determined according to the characteristics of the server load-balancing mode; the connections are sorted by this probability; the connection with the minimum latest selection probability is replaced, and the connection with the maximum latest selection probability is retained. Since the latest selection probability represents the probability that the server corresponding to a connection will be selected next time under the load-balancing mode, replacing only the minimum-probability connection and retaining the maximum-probability connection avoids the cache misses that would be caused by evicting the connection most likely to be needed next, thereby improving the cache hit rate of the network connections.
The above description is only an overview of the technical solution of the present invention. In order that the technical means of the present invention can be understood more clearly and implemented according to the contents of the specification, and that the above and other objects, features and advantages of the present invention can become more apparent, specific embodiments of the present invention are set forth below.
Brief description of the drawings
By reading the following detailed description of the alternative embodiments, various other advantages and benefits will become clear to those of ordinary skill in the art. The accompanying drawings are only for the purpose of illustrating the alternative embodiments and are not to be considered limiting of the present invention. Throughout the drawings, identical reference symbols denote identical parts. In the drawings:
Fig. 1 shows a schematic structural diagram of an HTTP access system;
Fig. 2 shows a schematic flow diagram of the steps of a caching method for network connections according to an example of the present invention; and
Fig. 3 shows a schematic structural diagram of a caching device for network connections according to an embodiment of the invention.
Embodiments
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the disclosure, it should be understood that the disclosure can be implemented in various forms and should not be limited by the embodiments set forth here; rather, these embodiments are provided so that the disclosure will be understood more thoroughly and its scope conveyed completely to those skilled in the art.
Referring to Fig. 2, a schematic flow diagram of the steps of a caching method for network connections according to an embodiment of the present invention is shown; the method may comprise the following steps:
Step 201: after a server access is completed, determine the latest selection probability of each network connection in the cache according to the characteristics of the server load-balancing mode, wherein the latest selection probability represents the probability that the server corresponding to the network connection will be selected next time under the load-balancing mode;
Step 202: sort the network connections in the cache according to the latest selection probability; and
Step 203: replace the network connection with the minimum latest selection probability in the cache, and retain the network connection with the maximum latest selection probability.
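Steps 202 and 203 can be sketched as follows. This is a minimal illustration under stated assumptions: the function and variable names are the editor's own, not from the patent, and the probabilities are taken as already determined per step 201.

```python
def replace_in_cache(cache, latest_selection_probability):
    """One replacement round over a connection cache (steps 202-203).

    cache: dict mapping server id -> cached network connection.
    latest_selection_probability: dict mapping server id -> the
    probability that the load balancer selects that server next
    (the quantity determined in step 201).
    Returns the id whose connection was evicted.
    """
    # Step 202: sort cached connections by latest selection probability.
    ranked = sorted(cache, key=lambda s: latest_selection_probability[s])
    # Step 203: replace the minimum-probability connection; the
    # maximum-probability connection (ranked[-1]) is kept untouched.
    victim = ranked[0]
    del cache[victim]
    return victim

cache = {1: "conn-1", 2: "conn-2", 3: "conn-3"}
probabilities = {1: 0.05, 2: 0.60, 3: 0.35}
evicted = replace_in_cache(cache, probabilities)
print(evicted)  # prints 1: server 1's connection had the lowest probability
```

The freed slot would then receive the connection established for the next cache-missing server; the patent leaves the insertion side of the replacement unspecified.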
The embodiments of the present invention can be applied in various proxy servers. Such a proxy server sits between clients and servers; it forwards requests and data between them, distributes requests evenly across the servers through load balancing, and caches the network connections to the servers so as to improve the communication rate between client and server without consuming excessive system resources.
The server load-balancing process is essentially: after a server access is completed, the characteristics of the load-balancing mode determine how the next server is selected, and the new access request is forwarded to it. A server access is thus the process of forwarding an access request to the selected server.
In this load-balancing process, a cache hit occurs when the server selected according to the characteristics of the load-balancing mode matches a network connection in the cache, that is, when the cache holds a network connection corresponding to the selected server.
To save system resources, the cache is usually smaller than the number of servers. If the set of connections stored in the cache never changed once the cache reached its maximum capacity, the connection of at least one server could never be used, and the purpose of load balancing could not be achieved. Cache replacement, replacing one or more cached network connections with other network connections, is therefore necessary to achieve load balancing.
To solve the low cache hit rate of the existing scheme, the embodiments of the present invention determine, after a server access is completed, the latest selection probability of each network connection in the cache according to the characteristics of the server load-balancing mode, sort the connections by this probability, replace the connection with the minimum latest selection probability and retain the connection with the maximum latest selection probability. Since the latest selection probability represents the probability that the server corresponding to a connection will be selected next time under the load-balancing mode, replacing only the minimum-probability connection and retaining the maximum-probability connection avoids the cache misses that would be caused by evicting the connection most likely to be needed next, thereby improving the cache hit rate of the network connections.
The embodiments of the present invention can determine the latest selection probability of each network connection in the cache according to the characteristics of the server load-balancing mode by the following schemes.
Scheme one
In scheme one, the load-balancing mode may be the round-robin mode. The step of determining the latest selection probability of each network connection in the cache according to the characteristics of the server load-balancing mode may then comprise: determining the latest selection probability of each network connection in the cache according to the last-used time of each connection, wherein the connection whose last-used time is furthest from the current time has the maximum latest selection probability, and the connection whose last-used time is nearest to the current time has the minimum latest selection probability.
The characteristic of the round-robin mode is that, in the load-balancing process, each new request is issued to the next server in turn, continuing round and round; that is, the servers are selected in turn with equal status.
According to this characteristic of the round-robin mode, if the server corresponding to some cached network connection has just been used, the connection will not be used again until a full polling cycle has passed. The latest selection probability of each cached connection can therefore be determined from its last-used time: as a rule, the more recent the last-used time of a cached connection, the smaller its latest selection probability. Sorting the cached connections by latest selection probability is thus equivalent to sorting them by last-used time.
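Under these assumptions, scheme one reduces to ranking cached connections by last-used time. A minimal sketch follows; the names are illustrative, and the patent prescribes only the ordering of probabilities, not concrete values, so evenly spaced scores are an editorial choice:

```python
def round_robin_probabilities(last_used):
    """Derive latest selection probabilities for round-robin balancing.

    last_used: dict mapping server id -> timestamp of last use.
    The least recently used connection is due again soonest, so it
    gets the highest value; the just-used connection gets the lowest.
    """
    ranked = sorted(last_used, key=lambda s: last_used[s])  # oldest first
    n = len(ranked)
    # Evenly spaced scores in (0, 1]; only the ordering matters.
    return {server: (n - i) / n for i, server in enumerate(ranked)}

last_used = {1: 30.0, 2: 10.0, 3: 20.0}  # server 1 was used most recently
probs = round_robin_probabilities(last_used)
# server 2 (oldest use) ranks highest; server 1's connection is the
# eviction candidate, exactly as the round-robin cycle requires
```

Feeding these probabilities into the replacement step of the method then evicts the just-used connection rather than the one the balancer will select next.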
In application example 1 of the present invention, suppose buffer 1, buffer 2 and buffer 3 are used to cache the TCP connections of web server 1, web server 2, web server 3 and web server 4, and that in the current state buffer 1, buffer 2 and buffer 3 cache the TCP connections of web server 1, web server 2 and web server 3 respectively.
When the embodiment of the present invention performs cache replacement in combination with the round-robin mode, then after the content in buffer 1 has been used to access web server 1, the last-used times of the cached connections give the order of latest selection probabilities: buffer 1 < buffer 2 < buffer 3. The content in buffer 1 is therefore replaced while buffers 2 and 3 are retained, guaranteeing a cache hit when web server 2 is accessed, with no need to re-establish the connection to web server 2. After web server 2 is accessed, the order of latest selection probabilities becomes buffer 2 < buffer 1 < buffer 3, so the content in buffer 2 is replaced, guaranteeing a cache hit when web server 3 is accessed, again with no need to re-establish the connection. The present invention thus greatly improves the cache hit rate.
Scheme two
In scheme two, the load-balancing mode may be the hash mode. The step of determining the latest selection probability of each network connection in the cache according to the characteristics of the server load-balancing mode may then comprise: determining the latest selection probability of each network connection in the cache according to the last selection probability of each connection, wherein the connection that was selected last time has the minimum latest selection probability, and the connection with the minimum last selection probability has the maximum latest selection probability.
The hash mode sends requests to servers according to a fixed rule via a one-way hash function, and usually has the following characteristics:
1. Balance: the hash results should be evenly distributed across the servers, so as to solve the load-balancing problem;
2. Monotonicity: when a server is added or deleted, the same key still accesses the same value;
3. Dispersion: data should be stored dispersed across the servers.
For example, in an application embodiment of the present invention, the hash mode may disperse requests by taking the result modulo the number of servers: first the integer hash value of the key is computed, then this hash value is taken modulo the number of servers, and the server is selected according to the remainder.
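The modulo variant just described can be sketched as follows. This is an illustration only: MD5 stands in for whatever integer hash function a deployment actually uses, and the key format is hypothetical.

```python
import hashlib

def select_server_by_hash(key, num_servers):
    """Select a server by taking the integer hash value of the key
    modulo the number of servers, as in the modulo variant above."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_servers

# The same key is always routed to the same server, while distinct
# keys spread across the servers (the balance/dispersion properties).
server = select_server_by_hash("user-42", 4)
```

Because the mapping is deterministic per key, the connection selected last time is, for a varied request stream, the one least likely to be selected again immediately, which is the intuition behind scheme two's probability ordering.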
According to this characteristic of the hash mode, the last selection probability of each connection is normally opposite to its next selection probability; that is, a large last selection probability implies a small next selection probability. The latest selection probability of each cached connection can therefore be determined from its last selection probability.
Taking application example 1, when the embodiment of the present invention performs cache replacement in combination with the hash mode, then after the content in buffer 1 has been used to access web server 1, the last selection probability of the connection in buffer 1 was 100%, so its latest selection probability for the next access can be taken as approximately 0. The content in buffer 1 is therefore replaced while buffers 2 and 3 are retained, guaranteeing a cache hit when web server 2 is accessed, with no need to re-establish the connection to web server 2. After web server 2 is accessed, the latest selection probability of the connection in buffer 2 is likewise approximately 0, so the content in buffer 2 is replaced, guaranteeing a cache hit when web server 3 is accessed, with no need to re-establish the connection. The present invention thus greatly improves the cache hit rate.
Scheme three
In scheme three, the load-balancing mode may be the least-connections mode. The step of determining the latest selection probability of each network connection in the cache according to the characteristics of the server load-balancing mode may then comprise: determining the latest selection probability of each network connection in the cache according to the number of requests historically handled by the server corresponding to each connection, wherein the connection of the server with the largest historical request count has the minimum latest selection probability, and the connection of the server with the smallest historical request count has the maximum latest selection probability.
The least-connections mode keeps a balanced record of the requests sent to each server and issues the next request to the server with the smallest historical request count. Scheme three therefore determines the latest selection probability of each cached connection from the historical request count of the corresponding server: the more requests a server has handled, the smaller the latest selection probability of its connection.
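A least-connections ranking along these lines might look as follows. It is illustrative: the patent fixes only the ordering of probabilities, not their values, and the request counts shown are hypothetical.

```python
def least_connections_probabilities(request_counts):
    """Latest selection probabilities under least-connections balancing.

    request_counts: dict mapping server id -> number of requests the
    server has historically handled. The least-loaded server is
    selected next, so its connection gets the highest value.
    """
    ranked = sorted(request_counts, key=lambda s: request_counts[s])
    n = len(ranked)
    # Evenly spaced scores in (0, 1]; only the ordering matters.
    return {server: (n - i) / n for i, server in enumerate(ranked)}

counts = {1: 1_000_000, 2: 800_000, 3: 1_100_000}  # cached servers 1-3
probs = least_connections_probabilities(counts)
# server 3 (most loaded) ranks lowest, so its connection would be evicted
```

Combined with the replacement step, the connection of the busiest server is evicted while the connection of the least-loaded server, the balancer's next target, stays cached.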
Taking application example 1, when the embodiment of the present invention performs cache replacement in combination with the least-connections mode, then after the content in buffer 1 has been used to access web server 1, suppose the historical request counts of web server 1, web server 2, web server 3 and web server 4 are 1,000,000, 800,000, 1,100,000 and 900,000 respectively. The order of latest selection probabilities of the cached connections is then buffer 3 < buffer 1 < buffer 2, so the content in buffer 3 is replaced while buffers 1 and 2 are retained, guaranteeing a cache hit when web server 2 is accessed, with no need to re-establish the connection to web server 2. The present invention can thus improve the cache hit rate.
Scheme four
In scheme four, the load-balancing mode may be the fastest-response mode. The step of determining the latest selection probability of each network connection in the cache according to the characteristics of the server load-balancing mode may then comprise: determining the latest selection probability of each network connection in the cache according to the response time of the server corresponding to each connection, wherein the connection of the server with the shortest response time has the maximum latest selection probability, and the connection of the server with the longest response time has the minimum latest selection probability.
The fastest-response mode records the network response time of each server and dispatches the next arriving request to the server with the shortest response time. Scheme four therefore determines the latest selection probability of each cached connection from the response time of the corresponding server: the shorter a server's response time, the larger the latest selection probability of its connection.
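A fastest-response ranking can likewise be sketched. Here inverse response time is used as a normalised weight, which is an editorial assumption: the patent requires only that shorter response times yield larger probabilities, and the measurements shown are hypothetical.

```python
def fastest_response_probabilities(response_times_ms):
    """Latest selection probabilities under fastest-response balancing.

    response_times_ms: dict mapping server id -> measured response
    time in milliseconds. Weights are inverse response times,
    normalised to sum to 1, so the fastest server ranks highest.
    """
    inverse = {s: 1.0 / t for s, t in response_times_ms.items()}
    total = sum(inverse.values())
    return {s: w / total for s, w in inverse.items()}

times = {1: 10, 2: 20, 3: 25}  # ms, for the three cached servers
probs = fastest_response_probabilities(times)
# server 1 (fastest) ranks highest; server 3's connection would be evicted
```

Any monotone decreasing function of response time would serve equally well, since the replacement step only compares the resulting values.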
Taking application example 1, when the embodiment of the present invention performs cache replacement in combination with the fastest-response mode, then after the content in buffer 1 has been used to access web server 1, suppose the response times of web server 1, web server 2, web server 3 and web server 4 are 10 ms, 20 ms, 25 ms and 30 ms respectively. The order of latest selection probabilities of the cached connections is then buffer 3 < buffer 2 < buffer 1, so the content in buffer 3 is replaced while buffers 1 and 2 are retained, guaranteeing a cache hit when web server 1 is accessed, with no need to re-establish the connection to web server 1. The present invention can thus improve the cache hit rate.
The characteristics of the round-robin, hash, least-connections and fastest-response modes, together with the corresponding schemes for determining the latest selection probability of each cached network connection, have been described in detail above. It should be noted that those skilled in the art may adopt any of the above schemes according to actual conditions, or may adopt other load-balancing modes and determine the latest selection probability of each cached connection according to the characteristics of those modes; the embodiments of the present invention place no limitation on the specific load-balancing mode or on the corresponding scheme for determining the latest selection probability.
In summary, by replacing only the network connection with the lowest latest selection probability in the cache and retaining the network connection with the highest latest selection probability, the embodiments of the present invention avoid the cache-hit failures that would result from evicting the connection most likely to be selected next, and can thereby improve the cache hit rate of network connections.
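The core policy summarized above can be sketched generically (names assumed; the probability scorer is whichever load-balancing-specific scheme applies):

```python
def replace_lowest(cached_connections, probability_of):
    """After a server access completes: score every cached network
    connection with its latest selection probability, sort ascending,
    evict the minimum and retain the rest -- in particular the maximum,
    so the connection most likely to be selected next stays cached."""
    ranked = sorted(cached_connections, key=probability_of)
    victim, survivors = ranked[0], ranked[1:]
    return victim, survivors
```

With probabilities {connA: 0.5, connB: 0.2, connC: 0.3}, connB is evicted and connA, the most likely next selection, survives.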
For simplicity of description, the method embodiments are expressed as a series of action combinations; those skilled in the art should understand, however, that the embodiments of the present invention are not limited by the described order of actions, because according to the embodiments some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions involved are not necessarily required by the embodiments of the present invention.
Referring to Fig. 3, a schematic structural diagram of a network-connection caching apparatus according to an embodiment of the present invention is shown. The apparatus may comprise the following modules:

a probability determination module 301, configured to determine, after a server access completes, the latest selection probability of each network connection in the cache according to the characteristics of the server load-balancing mode, wherein the latest selection probability represents the probability that, according to the load-balancing mode, the server corresponding to a network connection will be selected next time;

a probability sorting module 302, configured to sort the network connections in the cache according to the latest selection probabilities; and

a replacement-and-retention module 303, configured to replace the network connection with the lowest latest selection probability in the cache and to retain the network connection with the highest latest selection probability in the cache.
In a preferred embodiment of the present invention, the load-balancing mode is the round-robin (polling) mode, and the probability determination module 301 may further comprise:

a first probability determination submodule, configured to determine the latest selection probability of each network connection in the cache according to the most recent use time of each network connection in the cache.
In another preferred embodiment of the present invention, the load-balancing mode is the hash mode, and the probability determination module 301 may further comprise:

a second probability determination submodule, configured to determine the latest selection probability of each network connection in the cache according to the previous selection probability of each network connection in the cache.
In another preferred embodiment of the present invention, the load-balancing mode is the least-connections mode, and the probability determination module 301 may further comprise:

a third probability determination submodule, configured to determine the latest selection probability of each network connection in the cache according to the number of accesses historically processed by the server corresponding to each network connection in the cache.
In another preferred embodiment of the present invention, the load-balancing mode is the fastest-response mode, and the probability determination module 301 may further comprise:

a fourth probability determination submodule, configured to determine the latest selection probability of each network connection in the cache according to the response time of the server corresponding to each network connection in the cache, wherein the connection corresponding to the server with the shortest response time has the highest latest selection probability, and the connection corresponding to the server with the longest response time has the lowest.
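The four probability-determination submodules can be summarized as simple scoring rules (a hedged sketch; the function names and the score representation are assumptions, with a higher score standing for a higher latest selection probability):

```python
def polling_score(seconds_since_last_use):
    # Round-robin (polling): the connection used closest to the
    # current time gets the highest latest selection probability.
    return -seconds_since_last_use

def hash_score(selected_last_time, previous_probability):
    # Hash: the connection selected last time gets the lowest
    # probability; the lowest previous probability becomes highest.
    return float("-inf") if selected_last_time else -previous_probability

def least_connections_score(historical_access_count):
    # Least-connections ("minimum disappearance") mode: the server with
    # the fewest historically processed accesses gets the highest probability.
    return -historical_access_count

def fastest_response_score(response_time_ms):
    # Fastest response: the shortest response time gets the
    # highest probability.
    return -response_time_ms
```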
As the apparatus embodiments are substantially similar to the method embodiments, their description is relatively brief; for relevant details, refer to the description of the method embodiments.
The algorithms and displays provided herein are not inherently related to any particular computer, virtual system, or other device. Various general-purpose systems may also be used with the teachings herein, and the structure required to construct such systems is apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It should be understood that various programming languages may be used to implement the content of the invention described herein, and the descriptions of specific languages above are provided to disclose the best mode of the invention.
Numerous specific details are described in the specification provided herein. It will be understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures, and techniques are not shown in detail so as not to obscure the understanding of this description.
Similarly, it should be understood that, in order to streamline the disclosure and aid the understanding of one or more of the various inventive aspects, features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof in the foregoing description of exemplary embodiments. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. The claims following the detailed description are therefore hereby expressly incorporated into that description, with each claim standing on its own as a separate embodiment of the present invention.
Those skilled in the art will appreciate that the modules in the devices of an embodiment may be adaptively changed and arranged in one or more devices different from that embodiment. The modules, units, or components of an embodiment may be combined into one module, unit, or component, and may additionally be divided into multiple submodules, subunits, or subcomponents. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, an equivalent, or a similar purpose.
Furthermore, those skilled in the art will understand that, although some embodiments described herein include some features that are included in other embodiments but not others, combinations of features of different embodiments are meant to be within the scope of the present invention and to form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The component embodiments of the present invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the network-connection caching method and apparatus according to the embodiments of the present invention. The present invention may also be embodied as device or apparatus programs (e.g., computer programs and computer program products) for performing part or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may take the form of one or more signals; such signals may be downloaded from an Internet platform, provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the present invention, and those skilled in the art may design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The present invention may be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not indicate any ordering; these words may be interpreted as names.
Claims (10)
1. A caching method for network connections, comprising:

after a server access completes, determining the latest selection probability of each network connection in a cache according to the characteristics of a server load-balancing mode, wherein the latest selection probability represents the probability that, according to the load-balancing mode, the server corresponding to a network connection will be selected next time;

sorting the network connections in the cache according to the latest selection probabilities; and

replacing the network connection with the lowest latest selection probability in the cache, and retaining the network connection with the highest latest selection probability in the cache.
2. The method of claim 1, wherein the load-balancing mode is the round-robin (polling) mode, and the step of determining the latest selection probability of each network connection in the cache according to the characteristics of the server load-balancing mode comprises:

determining the latest selection probability of each network connection in the cache according to the most recent use time of each network connection in the cache, wherein the network connection whose most recent use time is closest to the current time has the highest latest selection probability, and the network connection whose most recent use time is farthest from the current time has the lowest latest selection probability.
3. The method of claim 1, wherein the load-balancing mode is the hash mode, and the step of determining the latest selection probability of each network connection in the cache according to the characteristics of the server load-balancing mode comprises:

determining the latest selection probability of each network connection in the cache according to the previous selection probability of each network connection in the cache, wherein the network connection selected last time has the lowest latest selection probability, and the network connection with the lowest previous selection probability has the highest latest selection probability.
4. The method of claim 1, wherein the load-balancing mode is the least-connections mode, and the step of determining the latest selection probability of each network connection in the cache according to the characteristics of the server load-balancing mode comprises:

determining the latest selection probability of each network connection in the cache according to the number of accesses historically processed by the server corresponding to each network connection in the cache, wherein the network connection corresponding to the server with the largest number of historically processed accesses has the lowest latest selection probability, and the network connection corresponding to the server with the smallest number of historically processed accesses has the highest latest selection probability.
5. The method of claim 1, wherein the load-balancing mode is the fastest-response mode, and the step of determining the latest selection probability of each network connection in the cache according to the characteristics of the server load-balancing mode comprises:

determining the latest selection probability of each network connection in the cache according to the response time of the server corresponding to each network connection in the cache, wherein the network connection corresponding to the server with the shortest response time has the highest latest selection probability, and the network connection corresponding to the server with the longest response time has the lowest latest selection probability.
6. A caching apparatus for network connections, comprising:

a probability determination module, configured to determine, after a server access completes, the latest selection probability of each network connection in the cache according to the characteristics of the server load-balancing mode, wherein the latest selection probability represents the probability that, according to the load-balancing mode, the server corresponding to a network connection will be selected next time;

a probability sorting module, configured to sort the network connections in the cache according to the latest selection probabilities; and

a replacement-and-retention module, configured to replace the network connection with the lowest latest selection probability in the cache and to retain the network connection with the highest latest selection probability in the cache.
7. The apparatus of claim 6, wherein the load-balancing mode is the round-robin (polling) mode, and the probability determination module comprises:

a first probability determination submodule, configured to determine the latest selection probability of each network connection in the cache according to the most recent use time of each network connection in the cache, wherein the network connection whose most recent use time is closest to the current time has the highest latest selection probability, and the network connection whose most recent use time is farthest from the current time has the lowest latest selection probability.
8. The apparatus of claim 6, wherein the load-balancing mode is the hash mode, and the probability determination module comprises:

a second probability determination submodule, configured to determine the latest selection probability of each network connection in the cache according to the previous selection probability of each network connection in the cache, wherein the network connection selected last time has the lowest latest selection probability, and the network connection with the lowest previous selection probability has the highest latest selection probability.
9. The apparatus of claim 6, wherein the load-balancing mode is the least-connections mode, and the probability determination module comprises:

a third probability determination submodule, configured to determine the latest selection probability of each network connection in the cache according to the number of accesses historically processed by the server corresponding to each network connection in the cache, wherein the network connection corresponding to the server with the largest number of historically processed accesses has the lowest latest selection probability, and the network connection corresponding to the server with the smallest number of historically processed accesses has the highest latest selection probability.
10. The apparatus of claim 6, wherein the load-balancing mode is the fastest-response mode, and the probability determination module comprises:

a fourth probability determination submodule, configured to determine the latest selection probability of each network connection in the cache according to the response time of the server corresponding to each network connection in the cache, wherein the network connection corresponding to the server with the shortest response time has the highest latest selection probability, and the network connection corresponding to the server with the longest response time has the lowest latest selection probability.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410836954.6A CN104580435B (en) | 2014-12-27 | 2014-12-27 | A kind of caching method and device of network connection |
PCT/CN2015/095455 WO2016101748A1 (en) | 2014-12-27 | 2015-11-24 | Method and device for caching network connection |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410836954.6A CN104580435B (en) | 2014-12-27 | 2014-12-27 | A kind of caching method and device of network connection |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104580435A true CN104580435A (en) | 2015-04-29 |
CN104580435B CN104580435B (en) | 2019-03-08 |
Family
ID=53095592
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410836954.6A Active CN104580435B (en) | 2014-12-27 | 2014-12-27 | A kind of caching method and device of network connection |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN104580435B (en) |
WO (1) | WO2016101748A1 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016101748A1 (en) * | 2014-12-27 | 2016-06-30 | 北京奇虎科技有限公司 | Method and device for caching network connection |
CN106060164A (en) * | 2016-07-12 | 2016-10-26 | Tcl集团股份有限公司 | Telescopic cloud server system and communication method thereof |
CN106657399A (en) * | 2017-02-20 | 2017-05-10 | 北京奇虎科技有限公司 | Background server selection method and device realized based on middleware |
CN107333235A (en) * | 2017-06-14 | 2017-11-07 | 珠海市魅族科技有限公司 | WiFi connections probability forecasting method, device, terminal and storage medium |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106713163A (en) * | 2016-12-29 | 2017-05-24 | 杭州迪普科技股份有限公司 | Method and apparatus for deploying server load |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6154767A (en) * | 1998-01-15 | 2000-11-28 | Microsoft Corporation | Methods and apparatus for using attribute transition probability models for pre-fetching resources |
CN101184021A (en) * | 2007-12-14 | 2008-05-21 | 华为技术有限公司 | Method, equipment and system for implementing stream media caching replacement |
CN101455057A (en) * | 2006-06-30 | 2009-06-10 | 国际商业机器公司 | A method and apparatus for caching broadcasting information |
CN103347068A (en) * | 2013-06-26 | 2013-10-09 | 中国(南京)未来网络产业创新中心 | Method for accelerating network caching based on proxy cluster |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6317778B1 (en) * | 1998-11-23 | 2001-11-13 | International Business Machines Corporation | System and method for replacement and duplication of objects in a cache |
CN102098290A (en) * | 2010-12-17 | 2011-06-15 | 天津曙光计算机产业有限公司 | Elimination and replacement method of transmission control protocol (TCP) streams |
CN104580435B (en) * | 2014-12-27 | 2019-03-08 | 北京奇虎科技有限公司 | A kind of caching method and device of network connection |
- 2014-12-27: CN application CN201410836954.6A filed; granted as CN104580435B, status Active
- 2015-11-24: WO application PCT/CN2015/095455 filed; published as WO2016101748A1, status Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6154767A (en) * | 1998-01-15 | 2000-11-28 | Microsoft Corporation | Methods and apparatus for using attribute transition probability models for pre-fetching resources |
CN101455057A (en) * | 2006-06-30 | 2009-06-10 | 国际商业机器公司 | A method and apparatus for caching broadcasting information |
CN101184021A (en) * | 2007-12-14 | 2008-05-21 | 华为技术有限公司 | Method, equipment and system for implementing stream media caching replacement |
CN103347068A (en) * | 2013-06-26 | 2013-10-09 | 中国(南京)未来网络产业创新中心 | Method for accelerating network caching based on proxy cluster |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016101748A1 (en) * | 2014-12-27 | 2016-06-30 | 北京奇虎科技有限公司 | Method and device for caching network connection |
CN106060164A (en) * | 2016-07-12 | 2016-10-26 | Tcl集团股份有限公司 | Telescopic cloud server system and communication method thereof |
CN106657399A (en) * | 2017-02-20 | 2017-05-10 | 北京奇虎科技有限公司 | Background server selection method and device realized based on middleware |
CN107333235A (en) * | 2017-06-14 | 2017-11-07 | 珠海市魅族科技有限公司 | WiFi connections probability forecasting method, device, terminal and storage medium |
CN107333235B (en) * | 2017-06-14 | 2020-09-15 | 珠海市魅族科技有限公司 | WiFi connection probability prediction method and device, terminal and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN104580435B (en) | 2019-03-08 |
WO2016101748A1 (en) | 2016-06-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104580435A (en) | Method and device for caching network connections | |
CN102647482B (en) | Method and system for accessing website | |
CN102970284B (en) | User profile processing method and server | |
US9888048B1 (en) | Supporting millions of parallel light weight data streams in a distributed system | |
CN102946436B (en) | A kind of download system | |
CN105337787A (en) | Multi-server monitoring method, device and system | |
US9280370B2 (en) | System structure management device, system structure management method, and program | |
CN103152354B (en) | To method, system and client device that dangerous website is pointed out | |
CN104579765A (en) | Disaster tolerance method and device for cluster system | |
CN105939313A (en) | State code redirecting method and device | |
CN103391312A (en) | Resource offline downloading method and device | |
CN104572968A (en) | Page updating method and device | |
CN103533080A (en) | Dispatching method and device for LVS (Linux virtual server) | |
CN104991921A (en) | Data query method, client and server | |
CN104468834A (en) | Method and device for processing Cookie data and browser client side | |
CN105530311A (en) | Load distribution method and device | |
CN107333248A (en) | A kind of real-time sending method of short message and system | |
CN105045789A (en) | Game server database buffer memory method and system | |
CN102932434B (en) | A kind of method and device for carrying out load balancing to server | |
CN104486397A (en) | Method for carrying out data transmission in browser, client and mobile terminal | |
CN103634410A (en) | Data synchronization method based on content distribution network (CDN), client end and server | |
US10142415B2 (en) | Data migration | |
CN104519138A (en) | Data transmission method and data transmission system based on distributed FTP | |
CN104580428A (en) | Data routing method, data management device and distributed storage system | |
CN103647622A (en) | Method, apparatus and system for realizing computer room-spanning data transmission |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 2022-07-18
Address after: Room 801, 8th floor, No. 104, floors 1-19, Building 2, Yard 6, Jiuxianqiao Road, Chaoyang District, Beijing 100015
Patentee after: BEIJING QIHOO TECHNOLOGY Co., Ltd.
Address before: Room 112, Block D, No. 28 Xinjiekou Outer Street, Xicheng District, Beijing 100088 (Desheng Park)
Patentee before: BEIJING QIHOO TECHNOLOGY Co., Ltd.; Qizhi Software (Beijing) Co., Ltd.