WO2016101748A1 - Method and device for caching network connection - Google Patents

Method and device for caching network connection

Info

Publication number
WO2016101748A1
WO2016101748A1 (application PCT/CN2015/095455, also published as CN2015095455W)
Authority
WO
WIPO (PCT)
Prior art keywords
network connection
cache
probability
selection probability
latest
Prior art date
Application number
PCT/CN2015/095455
Other languages
French (fr)
Chinese (zh)
Inventor
任勇全
赵安安
陈磊
Original Assignee
北京奇虎科技有限公司
奇智软件(北京)有限公司
Priority date
Filing date
Publication date
Application filed by 北京奇虎科技有限公司, 奇智软件(北京)有限公司 filed Critical 北京奇虎科技有限公司
Publication of WO2016101748A1 publication Critical patent/WO2016101748A1/en

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 — Network arrangements or protocols for supporting network services or applications
    • H04L67/01 — Protocols
    • H04L67/02 — Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 — Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/40 — Support for services or applications
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 — Network arrangements or protocols for supporting network services or applications
    • H04L67/01 — Protocols
    • H04L67/10 — Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 — Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00 — Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/16 — Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]

Definitions

  • The present invention relates to the field of communications technologies, and in particular to a method and apparatus for caching a network connection.
  • The ordinary web access process follows a typical client-server model. The user issues a request through a program on the client 101, such as a browser, and the web server 103 responds to the request with the corresponding data. The proxy server 102 sits between the client 101 and the web servers 103, forwarding requests and data, and can distribute requests evenly across the web servers 103 through load balancing. Assuming load balancing over four web servers is implemented by polling, each client request is assigned to a different web server in turn, in the order web server 1, web server 2, web server 3, web server 4.
  • When the number of concurrent accesses is large, the proxy server keeps TCP (Transmission Control Protocol) connections to the web servers open, so as not to waste system resources re-establishing them, and may use a preset number of cache areas to cache those TCP connections. The preset number is usually smaller than the number of web servers; for example, when there are 4 web servers, there may be 3 cache areas.
  • Assume cache area 1, cache area 2, and cache area 3 are used to cache TCP connections to web server 1, web server 2, web server 3, and web server 4, and that in the current state they hold the TCP connections to web server 1, web server 2, and web server 3 respectively. Under the existing scheme, when cache replacement is combined with the polling mode above, the content of cache area 2 is replaced after the content of cache area 1 is used to access web server 1, so the cache misses when web server 2 is accessed and the connection to web server 2 must be re-established; likewise, after web server 2 is accessed, the content of cache area 3 is replaced, so the cache misses when web server 3 is accessed and that connection must also be re-established. The existing scheme therefore suffers from a low cache hit rate.
  • In view of the above problems, the present invention has been proposed to provide a method and apparatus for caching a network connection that overcome, or at least partially solve, those problems.
  • a method for caching a network connection, including:
  • after completing a server access, determining the latest selection probability of each network connection in the cache according to the characteristics of the server's load balancing mode, where the latest selection probability indicates the probability that the server corresponding to a network connection will be selected next under that load balancing mode; sorting the network connections in the cache according to the latest selection probabilities; and replacing the network connection with the smallest latest selection probability in the cache while retaining the network connection with the largest latest selection probability in the cache.
  • a computer program comprising computer readable code which, when run on a computing device, causes the computing device to perform the method for caching a network connection described above.
  • a computer readable medium storing the computer program described above.
  • a cache device for network connection including:
  • a probability determining module, configured to determine, after a server access is completed, the latest selection probability of each network connection in the cache according to the characteristics of the server's load balancing mode, where the latest selection probability indicates the probability that the server corresponding to a network connection will be selected next under that load balancing mode;
  • a probability sorting module, configured to sort the network connections in the cache according to the latest selection probabilities; and
  • a replacement-retention module, configured to replace the network connection with the smallest latest selection probability in the cache and to retain the network connection with the largest latest selection probability in the cache.
  • With the method and apparatus for caching a network connection, after a server access is completed, the latest selection probability of each network connection in the cache is determined according to the characteristics of the server's load balancing mode; the network connections in the cache are sorted by that probability; the network connection with the smallest latest selection probability is replaced; and the network connection with the largest latest selection probability is retained. Because the latest selection probability indicates how likely each connection's server is to be selected next under the load balancing mode, replacing only the connection with the smallest probability while retaining the one with the largest avoids the cache-hit failures caused by replacing the connection most likely to be needed, and thereby improves the cache hit rate of network connections.
  • FIG. 1 is a schematic structural diagram of an HTTP access system;
  • FIG. 2 is a flow chart showing the steps of a method for caching a network connection according to an embodiment of the present invention;
  • FIG. 3 is a schematic structural diagram of a cache device for a network connection according to an embodiment of the present invention;
  • FIG. 4 schematically shows a block diagram of a computing device for performing the method according to the invention; and
  • FIG. 5 schematically shows a storage unit for holding or carrying program code implementing the method according to the invention.
  • FIG. 2 is a schematic flow chart of a method for caching a network connection according to an embodiment of the present invention, which may specifically include the following steps:
  • Step 201: after completing a server access, determine the latest selection probability of each network connection in the cache according to the characteristics of the server's load balancing mode, where the latest selection probability indicates the probability that the server corresponding to a network connection will be selected next under that load balancing mode;
  • Step 202: sort the network connections in the cache according to the latest selection probabilities; and
  • Step 203: replace the network connection with the smallest latest selection probability in the cache, and retain the network connection with the largest latest selection probability in the cache.
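The three steps above amount to a probability-ranked eviction policy. The following Python sketch is illustrative only: the function name `replace_connection` and the `probability_of` callback are assumptions, with `probability_of` standing in for whichever mode-specific estimator (polling, hash, lowest-missing, or fastest-response) is in use; none of these names appear in the disclosure.

```python
def replace_connection(cache, new_conn, probability_of):
    """Evict the cached connection least likely to be selected next.

    cache: dict mapping server id -> open connection
    new_conn: (server_id, connection) pair to insert
    probability_of: callable(server_id) -> latest selection probability
    Returns the evicted connection, or None if no eviction was needed.
    """
    server_id, conn = new_conn
    if server_id in cache:          # already cached: just refresh, nothing to evict
        cache[server_id] = conn
        return None
    # Step 202: rank cached connections by their latest selection probability.
    ranked = sorted(cache, key=probability_of)
    # Step 203: the smallest-probability connection is replaced; the rest stay.
    victim = ranked[0]
    evicted = cache.pop(victim)
    cache[server_id] = conn
    return evicted
```

For example, with cached connections to servers 1, 2, and 3 whose latest selection probabilities are 0.05, 0.5, and 0.45, inserting a connection to server 4 evicts server 1's connection and keeps the two most likely to be selected next.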
  • The embodiment of the present invention can be applied to various proxy servers. A proxy server sits between the client and the servers; it forwards requests and data between them, implements load balancing so that requests are distributed evenly across the servers, and can also cache the network connections to the servers, increasing the communication speed between client and server without consuming system resources.
  • The load balancing process mainly works as follows: after a server access is completed, the next server is chosen according to the characteristics of the load balancing mode, and a new access request is forwarded to it. Here, one server access refers to the process of selecting a server and forwarding an access request to it.
  • A cache hit succeeds during load balancing when the server selected according to the characteristics of the load balancing mode matches a network connection in the cache, that is, when a network connection corresponding to the selected server already exists in the cache.
  • The maximum number of network connections that the cache can store is usually smaller than the number of servers, so supporting load balancing requires cache replacement, in which one or more network connections in the cache are replaced with other network connections.
  • Accordingly, after a server access is completed, the embodiment of the present invention determines the latest selection probability of each network connection in the cache according to the characteristics of the server's load balancing mode, sorts the network connections by that probability, replaces the connection with the smallest latest selection probability, and retains the connection with the largest. Because the latest selection probability indicates how likely each connection's server is to be selected next under the load balancing mode, replacing only the least likely connection while retaining the most likely one avoids the cache-hit failures caused by evicting the connection most likely to be needed, and thereby improves the cache hit rate of network connections.
  • Scheme 1: the load balancing mode may be a polling (round-robin) mode.
  • In this case, the step of determining the latest selection probability of each network connection in the cache according to the characteristics of the load balancing mode may specifically include: determining the latest selection probability of each network connection according to its latest usage time, where the network connection whose latest usage time is farthest from the current time has the largest latest selection probability, and the network connection used most recently has the smallest.
  • The polling mode sends each new request to the next server in turn during load balancing, cycling through the servers repeatedly from the start, so that every server is selected alternately and on an equal footing.
  • Under polling, the server corresponding to a cached network connection is not used again until a full polling cycle has passed. Therefore, the latest selection probability of each network connection can be determined from its latest usage time: the more recent a connection's latest usage time, the smaller its latest selection probability. Sorting the cached connections by latest selection probability is then equivalent to sorting them by latest usage time.
  • Taking FIG. 1 as an example again, suppose cache area 1, cache area 2, and cache area 3 are used to cache TCP connections to web server 1, web server 2, web server 3, and web server 4, and that in the current state they hold the TCP connections to web server 1, web server 2, and web server 3 respectively.
  • After the content of cache area 1 is used to access web server 1, the embodiment of the present invention determines, from the latest usage times, that the latest selection probabilities of the cached connections are ordered cache area 1 < cache area 2 < cache area 3. The content of cache area 1 can therefore be replaced while the contents of cache areas 2 and 3 are retained, so the cache hits when web server 2 is accessed and the connection to web server 2 need not be re-established. Then, after web server 2 is accessed, the ordering becomes cache area 2 < cache area 1 < cache area 3, so the content of cache area 2 is replaced, the cache hits when web server 3 is accessed, and the connection to web server 3 need not be re-established. It can be seen that the present invention greatly improves the cache hit rate.
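As a rough illustration of scheme 1, the eviction choice under polling reduces to replacing the most recently used connection, since it has the smallest latest selection probability. The function name and data shape below are assumptions, not part of the disclosure.

```python
def polling_victim(last_used):
    """last_used: dict mapping server id -> timestamp of that connection's
    most recent use. Under round-robin, the newest usage time implies the
    smallest latest selection probability, so that connection is evicted."""
    return max(last_used, key=last_used.get)

# Cache areas 1-3 hold connections to servers 1-3; server 1 was just accessed,
# so its connection is the replacement candidate, as in the worked example.
assert polling_victim({1: 103.0, 2: 101.0, 3: 102.0}) == 1
```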
  • Scheme 2: the load balancing mode may be a hash mode.
  • In this case, the step of determining the latest selection probability of each network connection in the cache may specifically include: determining the latest selection probability of each network connection according to its previous selection probability, where the network connection selected last time has the smallest latest selection probability, and the network connection with the smallest previous selection probability has the largest.
  • The hash mode distributes requests to servers according to fixed rules via a one-way, irreversible hash function, which usually has the following characteristics:
  • balance means that the hash results should be distributed evenly across the servers, solving the load balancing problem;
  • monotonicity means that when a server is added or deleted, the same key still accesses the same value; and
  • dispersibility means that data should be distributed and stored across the servers.
  • For example, the hash mode may disperse requests by taking the hash value modulo the number of servers: the integer hash value of the key is computed first, that value is taken modulo the number of servers, and the server is selected according to the resulting remainder.
  • Under a hash mode, a network connection's previous selection probability and its next selection probability are usually inverted, that is, a connection just selected with a large probability has a small probability of being selected next. Therefore, the latest selection probability of each network connection in the cache can be determined from its previous selection probability.
  • Taking FIG. 1 as an example again, when cache replacement is combined with the hash mode: after the content of cache area 1 is used to access web server 1, the previous selection probability of the network connection in cache area 1 was 100%, so its next latest selection probability can be taken to be approximately 0.
  • The content of cache area 1 can therefore be replaced while the contents of cache areas 2 and 3 are retained, ensuring that the cache hits when web server 2 is accessed and the connection to web server 2 need not be re-established. Then, after web server 2 is accessed, the latest selection probability of the network connection in cache area 2 is approximately 0, so its content is replaced, the cache hits when web server 3 is accessed, and the connection to web server 3 need not be re-established. It can be seen that the present invention greatly improves the cache hit rate.
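A minimal sketch of the scheme-2 behaviour described above, assuming MD5 as the integer hash (the disclosure does not name a particular hash function, and both helper names are hypothetical): the key's integer hash modulo the server count selects the server, and the connection with the highest previous selection probability is the eviction candidate.

```python
import hashlib

def pick_server(key, n_servers):
    """Hash mode: integer hash of the key, taken modulo the server count."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % n_servers

def hash_victim(previous_probability):
    """previous_probability: dict mapping server id -> selection probability
    in the last round. The just-selected server (highest previous probability)
    has the smallest latest selection probability, so its connection is evicted."""
    return max(previous_probability, key=previous_probability.get)
```

For example, after cache area 1's server was selected with probability 100%, `hash_victim({1: 1.0, 2: 0.0, 3: 0.0})` returns `1`, matching the worked example.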
  • Scheme 3: the load balancing mode may be a lowest-missing mode.
  • In this case, the step of determining the latest selection probability of each network connection in the cache may specifically include: determining the latest selection probability of each network connection according to the number of historical accesses processed by its corresponding server, where the connection whose server has processed the most historical accesses has the smallest latest selection probability, and the connection whose server has processed the fewest has the largest.
  • That is, under scheme 3, the more historical accesses a server has processed, the smaller the latest selection probability of its network connection.
  • Taking FIG. 1 as an example again, when cache replacement is combined with the lowest-missing mode: after the content of cache area 1 is used to access web server 1, suppose the numbers of historical accesses processed by web server 1, web server 2, web server 3, and web server 4 are 1,000,000, 800,000, 1,100,000, and 900,000 respectively.
  • The latest selection probabilities of the cached connections are then ordered cache area 3 < cache area 1 < cache area 2, so the content of cache area 3 can be replaced while the contents of cache areas 1 and 2 are retained, ensuring that the cache hits when web server 2 is accessed and the connection to web server 2 need not be re-established. It can be seen that the present invention can improve the cache hit rate.
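The scheme-3 ordering can be sketched as follows; the helper name and data shapes are assumptions for illustration only.

```python
def least_accesses_victim(access_counts, cached_servers):
    """The server that has processed the most accesses has the smallest latest
    selection probability, so its cached connection is the one to replace."""
    return max(cached_servers, key=lambda s: access_counts[s])

# Historical access counts from the example: 1,000,000 / 800,000 / 1,100,000 / 900,000.
counts = {1: 1_000_000, 2: 800_000, 3: 1_100_000, 4: 900_000}
assert least_accesses_victim(counts, [1, 2, 3]) == 3   # cache area 3 is replaced
```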
  • Scheme 4: the load balancing mode may be a fastest-response mode.
  • In this case, the step of determining the latest selection probability of each network connection in the cache may specifically include: determining the latest selection probability of each network connection according to the response time of its corresponding server.
  • The connection whose server has the shortest response time has the largest latest selection probability, and the connection whose server has the longest response time has the smallest.
  • The fastest-response mode records the network response time of each server and assigns each incoming request to the server with the shortest response time.
  • That is, under scheme 4, the shorter a server's response time, the larger the latest selection probability of its network connection.
  • Taking FIG. 1 as an example again, when cache replacement is combined with the fastest-response mode: after the content of cache area 1 is used to access web server 1, suppose the response times of web server 1, web server 2, web server 3, and web server 4 are 10 ms, 20 ms, 25 ms, and 30 ms respectively.
  • The latest selection probabilities of the cached connections are then ordered cache area 3 < cache area 2 < cache area 1, so the content of cache area 3 can be replaced while the contents of cache areas 1 and 2 are retained, ensuring that the cache hits when web server 1 is accessed and the connection to web server 1 need not be re-established. It can be seen that the present invention can improve the cache hit rate.
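The scheme-4 eviction choice reduces to replacing the slowest cached server's connection. Again, the helper name and data shapes below are illustrative assumptions.

```python
def fastest_response_victim(response_ms, cached_servers):
    """The server with the shortest response time has the largest latest
    selection probability, so the slowest cached server's connection is
    the one to replace."""
    return max(cached_servers, key=lambda s: response_ms[s])

# Response times from the example, in milliseconds.
times = {1: 10, 2: 20, 3: 25, 4: 30}
assert fastest_response_victim(times, [1, 2, 3]) == 3   # cache area 3 is replaced
```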
  • The foregoing describes the characteristics of the polling mode, the hash mode, the lowest-missing mode, and the fastest-response mode, together with the corresponding schemes for determining the latest selection probability of each network connection in the cache. It should be noted that, depending on the actual situation, a person skilled in the art may adopt any one of the foregoing schemes, or may use another load balancing mode and determine the latest selection probability of each network connection according to the characteristics of that mode; the present invention places no limit on the load balancing mode used or on the corresponding scheme for determining the latest selection probability.
  • In summary, the embodiment of the present invention replaces only the network connection with the smallest latest selection probability in the cache while retaining the network connection with the largest, thereby avoiding the cache-hit failures caused by replacing the connection most likely to be selected next and improving the cache hit rate of network connections.
  • FIG. 3 shows a schematic structural diagram of a cache device for a network connection according to an embodiment of the present invention, which may specifically include the following modules:
  • the probability determining module 301, configured to determine, after a server access is completed, the latest selection probability of each network connection in the cache according to the characteristics of the server's load balancing mode, where the latest selection probability indicates the probability that the server corresponding to a network connection will be selected next under that load balancing mode;
  • the probability sorting module 302, configured to sort the network connections in the cache according to the latest selection probabilities; and
  • the replacement-retention module 303, configured to replace the network connection with the smallest latest selection probability in the cache and to retain the network connection with the largest latest selection probability in the cache.
  • the load balancing mode is a polling mode
  • the probability determining module 301 may further include:
  • the first probability determining submodule is configured to determine a latest selection probability of each network connection in the cache according to a latest usage time of each network connection in the cache.
  • the load balancing mode is a hash mode
  • the probability determining module 301 may further include:
  • the second probability determining submodule is configured to determine a latest selection probability of each network connection in the cache according to a previous selection probability of each network connection in the cache.
  • the load balancing mode is the lowest missing mode
  • the probability determining module 301 may further include:
  • the third probability determining submodule is configured to determine the latest selection probability of each network connection in the cache according to the number of historical accesses processed by the server corresponding to each network connection.
  • the load balancing mode is the fastest response mode
  • the probability determining module 301 may further include:
  • the fourth probability determining submodule is configured to determine the latest selection probability of each network connection in the cache according to the response time of the server corresponding to each network connection, where the connection whose server has the shortest response time has the largest latest selection probability, and the connection whose server has the longest response time has the smallest.
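The four submodules can be pictured as interchangeable ranking strategies behind the probability determining module 301. This dispatcher is a hypothetical sketch; the mode names and the statistics dictionary are illustrative, not from the disclosure.

```python
def probability_rank(mode, stats, cached_servers):
    """Return cached server ids ordered from smallest to largest latest
    selection probability, per the submodule matching the load balancing mode."""
    if mode == "polling":             # first submodule: newest usage time ranks lowest
        key = lambda s: -stats["last_used"][s]
    elif mode == "hash":              # second: highest previous probability ranks lowest
        key = lambda s: -stats["prev_prob"][s]
    elif mode == "lowest_missing":    # third: most processed accesses ranks lowest
        key = lambda s: -stats["accesses"][s]
    elif mode == "fastest_response":  # fourth: longest response time ranks lowest
        key = lambda s: -stats["response_ms"][s]
    else:
        raise ValueError("unknown load balancing mode: %s" % mode)
    return sorted(cached_servers, key=key)
```

The first element of the returned list is the replacement candidate; the last is the connection to retain.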
  • As the device embodiment is substantially similar to the method embodiment, its description is relatively brief; for relevant details, refer to the description of the method embodiment.
  • the various component embodiments of the present invention may be implemented in hardware, or in a software module running on one or more processors, or in a combination thereof.
  • a microprocessor or digital signal processor may be used in practice to implement some or all of the functionality of some or all of the components of the method and apparatus for caching network connections in accordance with embodiments of the present invention.
  • the invention can also be implemented as a device or device program (e.g., a computer program and a computer program product) for performing some or all of the methods described herein.
  • Such a program implementing the invention may be stored on a computer readable medium or may be in the form of one or more signals. Such signals may be downloaded from an internet platform, provided on a carrier signal, or provided in any other form.
  • Figure 4 illustrates a computing device, such as a search engine server, that can implement the above described method in accordance with the present invention.
  • the computing device conventionally includes a processor 410 and a computer program product or computer readable medium in the form of a memory 430.
  • the memory 430 may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read Only Memory), an EPROM, a hard disk, or a ROM.
  • the memory 430 has a storage space 450 that stores program code 451 for performing any of the method steps described above.
  • storage space 450 storing program code may include various program code 451 for implementing various steps in the above methods, respectively.
  • the program code can be read from or written to one or more computer program products.
  • These computer program products include program code carriers such as hard disks, compact disks (CDs), memory cards or floppy disks.
  • Such computer program products are typically portable or fixed storage units, such as the storage unit shown in FIG. 5.
  • The storage unit may have storage segments, storage spaces, and the like arranged similarly to the memory 430 in the computing device of FIG. 4.
  • the program code can be compressed, for example, in an appropriate form.
  • The storage unit comprises computer readable code 451' for performing the steps of the method according to the invention, that is, code that can be read by a processor such as the processor 410; when this code is run by the server, it causes the server to execute each of the steps in the methods described above.

Abstract

Provided in an embodiment of the present invention are a method and device for caching a network connection, the method specifically comprising: after completing one server access, determining a most recent selection probability of each network connection in a cache according to characteristics of a server load balancing mode, the most recent selection probability representing the probability that the server corresponding to the network connection will be selected next on the basis of the load balancing mode; ordering the network connections in the cache according to the most recent selection probabilities; and replacing the network connection having the smallest most recent selection probability in the cache, and preserving the network connection having the largest most recent selection probability in the cache. The embodiment of the present invention can avoid the problem of cache hit failure caused by replacing the network connection having the largest most recent selection probability in the cache, thus improving a network connection cache hit ratio.

Description

Method and device for caching a network connection
Technical Field
The present invention relates to the field of communications technologies, and in particular to a method and apparatus for caching a network connection.
Background
The ordinary web access process follows a typical client-server model. As shown in FIG. 1, the user issues a request through a program on the client 101, such as a browser, and the web server 103 responds to the request with the corresponding data. The proxy server 102 sits between the client 101 and the web servers 103, forwarding requests and data, and can distribute requests evenly across the web servers 103 through load balancing. Assuming load balancing over four web servers is implemented by polling, each client request can be assigned to a different web server in turn, in the order web server 1, web server 2, web server 3, web server 4.
When the number of concurrent accesses is large, the proxy server keeps TCP (Transmission Control Protocol) connections to the web servers open, so as not to waste system resources re-establishing them, and may use a preset number of cache areas to cache the web servers' TCP connections. The preset number is usually smaller than the number of web servers; for example, when there are 4 web servers, there may be 3 cache areas.
Assume cache area 1, cache area 2, and cache area 3 are used to cache TCP connections to web server 1, web server 2, web server 3, and web server 4, and that in the current state they hold the TCP connections to web server 1, web server 2, and web server 3 respectively. Under the existing scheme, when cache replacement is combined with the polling mode above, the content of cache area 2 is replaced after the content of cache area 1 is used to access web server 1, so the cache misses when web server 2 is accessed and the connection to web server 2 must be re-established; likewise, after web server 2 is accessed, the content of cache area 3 is replaced, so the cache misses when web server 3 is accessed and the connection to web server 3 must be re-established. It can be seen that the existing scheme suffers from a low cache hit rate.
Summary of the Invention
In view of the above problems, the present invention is proposed in order to provide a method and an apparatus for caching a network connection that overcome the above problems or at least partially solve them.
According to one aspect of the present invention, there is provided a method for caching a network connection, comprising:
after a server access is completed, determining the latest selection probability of each network connection in a cache according to the characteristics of the server load-balancing mode, wherein the latest selection probability represents the probability that the server corresponding to a network connection will be selected next time under the load-balancing mode;
sorting the network connections in the cache according to the latest selection probabilities; and
replacing the network connection with the smallest latest selection probability in the cache, and retaining the network connection with the largest latest selection probability in the cache.
According to another aspect of the present invention, there is provided a computer program comprising computer-readable code which, when run on a computing device, causes the computing device to perform the method for caching a network connection described above.
According to still another aspect of the present invention, there is provided a computer-readable medium storing the computer program described above.
According to yet another aspect of the present invention, there is provided an apparatus for caching a network connection, comprising:
a probability determination module configured to determine, after a server access is completed, the latest selection probability of each network connection in a cache according to the characteristics of the server load-balancing mode, wherein the latest selection probability represents the probability that the server corresponding to a network connection will be selected next time under the load-balancing mode;
a probability sorting module configured to sort the network connections in the cache according to the latest selection probabilities; and
a replacement and retention module configured to replace the network connection with the smallest latest selection probability in the cache and to retain the network connection with the largest latest selection probability in the cache.
With the method and apparatus for caching a network connection according to the embodiments of the present invention, after a server access is completed, the latest selection probability of each network connection in the cache is determined according to the characteristics of the server load-balancing mode; the network connections in the cache are sorted according to the latest selection probabilities; the network connection with the smallest latest selection probability is replaced, and the network connection with the largest latest selection probability is retained. Since the latest selection probability represents the probability that the server corresponding to a network connection will be selected next time under the load-balancing mode, replacing only the connection with the smallest latest selection probability while retaining the connection with the largest avoids the cache misses that would be caused by replacing the connection most likely to be needed next, and thereby improves the cache hit rate of network connections.
The above description is merely an overview of the technical solutions of the present invention. In order that the technical means of the present invention may be understood more clearly and implemented according to the contents of the specification, and in order that the above and other objects, features, and advantages of the present invention may become more apparent, specific embodiments of the invention are set forth below.
Brief Description of the Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art from a reading of the following detailed description of optional embodiments. The drawings are provided only for the purpose of illustrating optional embodiments and are not to be considered limiting of the invention. Throughout the drawings, the same reference numerals denote the same components. In the drawings:
FIG. 1 is a schematic structural diagram of an HTTP access system;
FIG. 2 is a schematic flowchart of the steps of a method for caching a network connection according to an example of the present invention;
FIG. 3 is a schematic structural diagram of an apparatus for caching a network connection according to an embodiment of the present invention;
FIG. 4 schematically shows a block diagram of a computing device for performing the method according to the present invention; and
FIG. 5 schematically shows a storage unit for holding or carrying program code implementing the method according to the present invention.
Detailed Description
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be implemented in various forms and should not be limited by the embodiments set forth herein. Rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and its scope fully conveyed to those skilled in the art.
Referring to FIG. 2, a schematic flowchart of the steps of a method for caching a network connection according to an embodiment of the present invention is shown. The method may specifically comprise the following steps:
Step 201: after a server access is completed, determine the latest selection probability of each network connection in the cache according to the characteristics of the server load-balancing mode, wherein the latest selection probability represents the probability that the server corresponding to a network connection will be selected next time under the load-balancing mode;
Step 202: sort the network connections in the cache according to the latest selection probabilities; and
Step 203: replace the network connection with the smallest latest selection probability in the cache, and retain the network connection with the largest latest selection probability in the cache.
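Steps 201 to 203 can be sketched as a small, self-contained routine. The function below is a hypothetical illustration rather than the patent's implementation: `selection_probability` stands for whatever rule the chosen load-balancing mode implies, and the connection and server names are invented for the example.

```python
def replace_lowest_probability(cache, selection_probability, new_connection):
    """One replacement round following steps 201-203.

    cache: list of cached network connections (here, server names).
    selection_probability: callable mapping a cached connection to the
        probability that its server is selected next (step 201).
    Returns the updated cache and the evicted connection.
    """
    # Step 202: sort cached connections by latest selection probability.
    ranked = sorted(cache, key=selection_probability)
    # Step 203: replace the connection least likely to be needed next,
    # retaining the connection most likely to be needed.
    evicted = ranked[0]
    return ranked[1:] + [new_connection], evicted

# Example: under round robin, the server just served ("web1") has the
# lowest probability of being selected next.
probability = {"web1": 0.0, "web2": 0.5, "web3": 0.5}.get
cache, evicted = replace_lowest_probability(["web1", "web2", "web3"],
                                            probability, "web4")
```

The probability function is the only mode-specific part; the schemes described below each supply a different way of computing it.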
The embodiments of the present invention may be applied to various proxy servers. Such a proxy server sits between clients and servers; it forwards requests and data between them, distributes requests evenly across the servers through load balancing, and caches the network connections to the servers so as to increase the communication speed between client and server without exhausting system resources.
The server load-balancing process is essentially as follows: after a server access is completed, the next server is selected according to the characteristics of the load-balancing mode, and the new access request is forwarded to it; here, one server access means the process of forwarding an access request to the selected server.
During server load balancing, a cache hit succeeds when the server selected according to the characteristics of the load-balancing mode matches a network connection in the cache, that is, when the cache contains the network connection corresponding to the selected server.
To save system resources, the maximum number of network connections the cache can store is usually smaller than the number of servers. Consequently, if the network connections stored in the cache never changed, there would always be a server whose network connection is never used, defeating the purpose of load balancing. Cache replacement achieves load balancing by replacing one or more network connections in the cache with other network connections.
To solve the problem of the low cache hit rate of the existing scheme, the embodiments of the present invention determine, after a server access is completed, the latest selection probability of each network connection in the cache according to the characteristics of the server load-balancing mode; sort the network connections in the cache according to the latest selection probabilities; replace the network connection with the smallest latest selection probability; and retain the network connection with the largest latest selection probability. Since the latest selection probability represents the probability that the server corresponding to a network connection will be selected next time under the load-balancing mode, replacing only the connection with the smallest latest selection probability while retaining the connection with the largest avoids the cache misses that would be caused by replacing the connection most likely to be needed next, thereby improving the cache hit rate of network connections.
The embodiments of the present invention may determine the latest selection probability of each network connection in the cache according to the characteristics of the server load-balancing mode in any of the following schemes.
Scheme 1
In Scheme 1, the load-balancing mode may be the round-robin mode. In this case, the step of determining the latest selection probability of each network connection in the cache according to the characteristics of the server load-balancing mode may specifically comprise: determining the latest selection probability of each network connection in the cache according to the most recent use time of each network connection, wherein the network connection whose most recent use time is closest to the current time has the smallest latest selection probability, and the network connection whose most recent use time is furthest from the current time has the largest latest selection probability.
The round-robin mode is characterized in that, during load balancing, each new request is sent to the next server in turn, continuously and cyclically; that is, the servers are selected in rotation on an equal footing.
From the characteristics of the round-robin mode it follows that if the server corresponding to a cached network connection has just been used, it will not be used again until one polling cycle has elapsed. Therefore, the latest selection probability of each network connection in the cache can be determined from its most recent use time: in general, the more recently a cached network connection was used, the smaller its latest selection probability. Accordingly, sorting the network connections in the cache by their latest selection probabilities may be carried out by sorting them by their most recent use times.
In application example 1 of the present invention, assume that cache area 1, cache area 2, and cache area 3 are used to cache the TCP connections to Web server 1, Web server 2, Web server 3, and Web server 4, and that in the current state cache area 1, cache area 2, and cache area 3 hold the TCP connections to Web server 1, Web server 2, and Web server 3, respectively.
When the embodiment of the present invention performs cache replacement under the round-robin mode, then after the content of cache area 1 has been used to access Web server 1, the latest selection probabilities of the cached network connections, determined from their most recent use times, are ordered as cache area 1 < cache area 2 < cache area 3. The content of cache area 1 is therefore replaced while the contents of cache areas 2 and 3 are retained, which guarantees a cache hit when Web server 2 is accessed, so the connection to Web server 2 need not be re-established. Next, after Web server 2 is accessed, the latest selection probabilities are ordered as cache area 2 < cache area 1 < cache area 3, so the content of cache area 2 is replaced, which guarantees a cache hit when Web server 3 is accessed, and the connection to Web server 3 need not be re-established. It can be seen that the present invention greatly improves the cache hit rate.
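The effect of Scheme 1 can be checked with a short simulation. The sketch below is an illustration with assumed server names, not code from the specification: it dispatches requests round-robin over four servers through a three-slot connection cache and compares two eviction policies — evicting the most recently used connection, which is what the scheme amounts to under round robin, and evicting the least recently used connection, which collapses under cyclic access.

```python
from itertools import cycle

SERVERS = ["web1", "web2", "web3", "web4"]

def hit_rate(n_requests, cache_size=3, evict="mru"):
    """Round-robin dispatch with a connection cache.

    evict="mru": on a miss, drop the most recently used connection
    (its server was just served, so under round robin it has the
    smallest latest selection probability).
    evict="lru": drop the least recently used connection instead.
    """
    cache = []                      # last element = most recently used
    dispatcher = cycle(SERVERS)
    hits = 0
    for _ in range(n_requests):
        server = next(dispatcher)
        if server in cache:
            hits += 1
            cache.remove(server)    # re-appended below as most recent
        elif len(cache) >= cache_size:
            # cache full: replace the connection with the smallest
            # latest selection probability
            cache.pop(-1 if evict == "mru" else 0)
        cache.append(server)
    return hits / n_requests
```

Over a long run the MRU-style policy sustains a hit rate of roughly two thirds, while LRU-style eviction never hits again once the cyclic pattern has filled the cache — a pathology similar to the prior-art behaviour described in the background.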
Scheme 2
In Scheme 2, the load-balancing mode may be the hash mode. In this case, the step of determining the latest selection probability of each network connection in the cache according to the characteristics of the server load-balancing mode may specifically comprise: determining the latest selection probability of each network connection in the cache according to its selection probability in the previous round, wherein the network connection selected in the previous round has the smallest latest selection probability, and the network connection whose previous selection probability was smallest has the largest latest selection probability.
The hash mode sends requests to servers according to a rule based on an injective, irreversible hash function, and typically has the following properties:
1. Balance: the hash results should be distributed evenly across the servers, so as to solve the load-balancing problem;
2. Monotonicity: when servers are added or removed, the value accessed through the same key remains the same;
3. Dispersion: data should be stored spread out across the servers.
For example, in one application embodiment of the present invention, the hash mode may disperse requests according to the result of a modulo operation on the number of servers: an integer hash value of the key is computed first, the integer hash value is then taken modulo the number of servers, and the server is selected according to the resulting remainder.
From the characteristics of the hash mode described above, a network connection's previous selection probability and its next selection probability are usually opposed; that is, a large previous selection probability implies a small next selection probability. Therefore, the latest selection probability of each network connection in the cache can be determined from its previous selection probability.
For the above application example 1, when the embodiment of the present invention performs cache replacement under the hash mode, then after the content of cache area 1 has been used to access Web server 1, since the network connection in cache area 1 had a latest selection probability of 100% in the previous round, its latest selection probability for the next round can be determined to be approximately 0. The content of cache area 1 is therefore replaced while the contents of cache areas 2 and 3 are retained, which guarantees a cache hit when Web server 2 is accessed, so the connection to Web server 2 need not be re-established. Next, after Web server 2 is accessed, the latest selection probability of the network connection in cache area 2 is determined to be approximately 0, so the content of cache area 2 is replaced, which guarantees a cache hit when Web server 3 is accessed, and the connection to Web server 3 need not be re-established. It can be seen that the present invention greatly improves the cache hit rate.
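A concrete hash rule of the kind mentioned above (an integer hash of the key, reduced modulo the server count) can be sketched as follows. The key format, digest choice, and server names are assumptions for illustration; the scheme itself only requires some hash rule.

```python
import hashlib

SERVERS = ["web1", "web2", "web3", "web4"]

def pick_server(key):
    """Select a server by hashing the request key and taking the
    integer hash value modulo the number of servers."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]
```

Because the digest is deterministic, the same key always reaches the same server while the server list is unchanged, and a well-mixed digest spreads distinct keys roughly evenly across the servers (the balance and dispersion properties above).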
Scheme 3
In Scheme 3, the load-balancing mode may be the least-miss mode. In this case, the step of determining the latest selection probability of each network connection in the cache according to the characteristics of the server load-balancing mode may specifically comprise: determining the latest selection probability of each network connection in the cache according to the number of accesses historically processed by the server corresponding to each network connection, wherein the network connection of the server that has historically processed the most accesses has the smallest latest selection probability, and the network connection of the server that has historically processed the fewest accesses has the largest latest selection probability.
The least-miss mode balances the requests recorded for each server by sending the next request to the server that has historically processed the fewest accesses. Scheme 3 can therefore determine the latest selection probability of each network connection in the cache from the number of accesses historically processed by its corresponding server; specifically, the more accesses a server has historically processed, the smaller the latest selection probability of its network connection.
For the above application example 1, when the embodiment of the present invention performs cache replacement under the least-miss mode, then after the content of cache area 1 has been used to access Web server 1, assume that Web server 1, Web server 2, Web server 3, and Web server 4 have historically processed 1,000,000, 800,000, 1,100,000, and 900,000 accesses, respectively. The latest selection probabilities of the cached network connections are then ordered as cache area 3 < cache area 1 < cache area 2, so the content of cache area 3 is replaced while the contents of cache areas 1 and 2 are retained, which guarantees a cache hit when Web server 2 is accessed, so the connection to Web server 2 need not be re-established. It can be seen that the present invention can improve the cache hit rate.
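Under Scheme 3 the ranking needed by steps 202 and 203 falls directly out of the per-server counters. The sketch below reproduces the example just given; the server names and dictionary layout are assumptions for illustration.

```python
def rank_by_least_miss(cached_servers, processed_counts):
    """Order cached connections from smallest to largest latest
    selection probability: the more accesses a server has already
    processed, the less likely the least-miss balancer is to pick
    it next."""
    return sorted(cached_servers,
                  key=lambda s: processed_counts[s], reverse=True)

# Historical access counts from the example above.
counts = {"web1": 1_000_000, "web2": 800_000,
          "web3": 1_100_000, "web4": 900_000}
ranking = rank_by_least_miss(["web1", "web2", "web3"], counts)
# first entry = candidate for replacement, last entry = retained
```

The resulting order, cache area 3 < cache area 1 < cache area 2, matches the ordering stated in the example.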
Scheme 4
In Scheme 4, the load-balancing mode may be the fastest-response mode. In this case, the step of determining the latest selection probability of each network connection in the cache according to the characteristics of the server load-balancing mode may specifically comprise: determining the latest selection probability of each network connection in the cache according to the response time of the server corresponding to each network connection, wherein the network connection of the server with the shortest response time has the largest latest selection probability, and the network connection of the server with the longest response time has the smallest latest selection probability.
The fastest-response mode records the network response time of every server and assigns the next arriving request to the server with the shortest response time. Scheme 4 can therefore determine the latest selection probability of each network connection in the cache from the response time of its corresponding server; specifically, the shorter a server's response time, the larger the latest selection probability of its network connection.
For the above application example 1, when the embodiment of the present invention performs cache replacement under the fastest-response mode, then after the content of cache area 1 has been used to access Web server 1, assume that the response times of Web server 1, Web server 2, Web server 3, and Web server 4 are 10 ms, 20 ms, 25 ms, and 30 ms, respectively. The latest selection probabilities of the cached network connections are then ordered as cache area 3 < cache area 2 < cache area 1, so the content of cache area 3 is replaced while the contents of cache areas 1 and 2 are retained, which guarantees a cache hit when Web server 1 is accessed, so the connection to Web server 1 need not be re-established. It can be seen that the present invention can improve the cache hit rate.
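Scheme 4 is the same ranking with measured response time in place of the access counter. The millisecond figures below repeat the example above; as before, the names are assumptions for illustration.

```python
def rank_by_response_time(cached_servers, response_ms):
    """Order cached connections from smallest to largest latest
    selection probability: the slower a server responds, the less
    likely the fastest-response balancer is to pick it next."""
    return sorted(cached_servers,
                  key=lambda s: response_ms[s], reverse=True)

# Measured response times (ms) from the example above.
times_ms = {"web1": 10, "web2": 20, "web3": 25, "web4": 30}
ranking = rank_by_response_time(["web1", "web2", "web3"], times_ms)
# cache area 3 < cache area 2 < cache area 1, as in the example
```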
The characteristics of the round-robin, hash, least-miss, and fastest-response modes, and the corresponding schemes for determining the latest selection probability of each network connection in the cache, have been described in detail above. It should be noted that a person skilled in the art may adopt any of the above schemes according to the actual situation, or may adopt another load-balancing mode and determine the latest selection probability of each network connection in the cache according to the characteristics of that mode; the embodiments of the present invention place no limitation on the specific load-balancing mode or on the corresponding scheme for determining the latest selection probability according to its characteristics.
In summary, by replacing only the network connection with the smallest latest selection probability in the cache and retaining the network connection with the largest latest selection probability, the embodiments of the present invention avoid the cache misses that would be caused by replacing the connection most likely to be needed next, and thereby improve the cache hit rate of network connections.
As for the method embodiments, they are described as a series of action combinations for simplicity of description; however, a person skilled in the art should understand that the embodiments of the present invention are not limited by the described order of actions, since according to the embodiments of the present invention certain steps may be performed in another order or simultaneously. Furthermore, a person skilled in the art should also understand that the embodiments described in the specification are all optional embodiments, and the actions involved are not necessarily required by the embodiments of the present invention.
Referring to FIG. 3, a schematic structural diagram of an apparatus for caching a network connection according to an embodiment of the present invention is shown. The apparatus may specifically comprise the following modules:
a probability determination module 301 configured to determine, after a server access is completed, the latest selection probability of each network connection in a cache according to the characteristics of the server load-balancing mode, wherein the latest selection probability represents the probability that the server corresponding to a network connection will be selected next time under the load-balancing mode;
a probability sorting module 302 configured to sort the network connections in the cache according to the latest selection probabilities; and
a replacement and retention module 303 configured to replace the network connection with the smallest latest selection probability in the cache and to retain the network connection with the largest latest selection probability in the cache.
In an optional embodiment of the present invention, the load-balancing mode is the round-robin mode, and the probability determination module 301 may further comprise:
a first probability determination submodule configured to determine the latest selection probability of each network connection in the cache according to the most recent use time of each network connection in the cache.
In another preferred embodiment of the present invention, the load-balancing mode is the hash mode, and the probability determination module 301 may further comprise:
a second probability determination submodule configured to determine the latest selection probability of each network connection in the cache according to the previous selection probability of each network connection in the cache.
In still another preferred embodiment of the present invention, the load-balancing mode is the least-miss mode, and the probability determination module 301 may further comprise:
a third probability determination submodule configured to determine the latest selection probability of each network connection in the cache according to the number of accesses historically processed by the server corresponding to each network connection in the cache.
In another preferred embodiment of the present invention, the load-balancing mode is the fastest-response mode, and the probability determination module 301 may further comprise:
a fourth probability determination submodule configured to determine the latest selection probability of each network connection in the cache according to the response time of the server corresponding to each network connection, wherein the network connection of the server with the shortest response time has the largest latest selection probability, and the network connection of the server with the longest response time has the smallest latest selection probability.
As for the apparatus embodiment, since it is substantially similar to the method embodiment, its description is relatively simple; for relevant details, reference may be made to the corresponding parts of the description of the method embodiment.
The component embodiments of the present invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. A person skilled in the art should understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the method and apparatus for caching a network connection according to the embodiments of the present invention. The present invention may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for performing part or all of the method described herein. Such a program implementing the present invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an Internet platform, provided on a carrier signal, or provided in any other form.
For example, Fig. 4 illustrates a computing device, such as a search engine server, that can implement the above-described method according to the present invention. The computing device conventionally includes a processor 410 and a computer program product or computer-readable medium in the form of a memory 430. The memory 430 may be an electronic memory such as flash memory, EEPROM (Electrically Erasable Programmable Read-Only Memory), EPROM, a hard disk, or ROM. The memory 430 has a storage space 450 storing program code 451 for performing any of the method steps described above. For example, the storage space 450 storing program code may include individual pieces of program code 451 for implementing the various steps of the above methods, respectively. The program code may be read from or written into one or more computer program products. These computer program products include program code carriers such as hard disks, compact discs (CDs), memory cards, or floppy disks. Such a computer program product is typically a portable or fixed storage unit, for example as shown in Fig. 5. The storage unit may have storage segments, storage space, and the like arranged similarly to the memory 430 in the computing device of Fig. 4. The program code may, for example, be compressed in an appropriate form. Typically, the storage unit includes computer-readable code 451', i.e., code readable by a processor such as the processor 410, which, when run by a server, causes the server to perform the steps of the methods described above.
Reference herein to "one embodiment," "an embodiment," or "one or more embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. In addition, it should be noted that instances of the phrase "in one embodiment" herein do not necessarily all refer to the same embodiment.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the present invention may be practiced without these specific details. In some instances, well-known methods, structures, and techniques have not been shown in detail so as not to obscure the understanding of this description.
It should be noted that the above embodiments illustrate rather than limit the present invention, and that those skilled in the art may devise alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The present invention may be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any order; these words may be interpreted as names.
Furthermore, it should be noted that the language used in this specification has been principally selected for readability and instructional purposes, and not to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. With respect to the scope of the present invention, the disclosure made herein is illustrative rather than restrictive, and the scope of the invention is defined by the appended claims.

Claims (12)

  1. A method of caching network connections, comprising:
    after a server access is completed, determining a latest selection probability of each network connection in a cache according to a characteristic of the server load-balancing mode, wherein the latest selection probability indicates the probability that, under the load-balancing mode, the server corresponding to a network connection will be selected next time;
    sorting the network connections in the cache according to the latest selection probability; and
    replacing the network connection with the smallest latest selection probability in the cache, and retaining the network connection with the largest latest selection probability in the cache.
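The three steps of claim 1 (re-score, sort, replace/retain) can be sketched as follows. This is a minimal illustrative reading in Python, not the patented implementation; the `Connection` class, the externally supplied `scores` mapping, and the cache contents are assumptions introduced for the example.

```python
from dataclasses import dataclass

@dataclass
class Connection:
    server: str
    latest_selection_probability: float = 0.0

def replace_and_retain(cache, scores):
    # Step 1: after a server access, assign each cached connection its
    # latest selection probability under the current load-balancing mode.
    for conn in cache:
        conn.latest_selection_probability = scores[conn.server]
    # Step 2: sort the cached connections by that probability, descending.
    cache.sort(key=lambda c: c.latest_selection_probability, reverse=True)
    # Step 3: replace (evict) the smallest, retain the largest.
    evicted = cache.pop()
    retained = cache[0]
    return evicted, retained

cache = [Connection("a"), Connection("b"), Connection("c")]
evicted, retained = replace_and_retain(cache, {"a": 0.5, "b": 0.2, "c": 0.3})
print(evicted.server, retained.server)  # b a
```

How the probabilities in `scores` are derived depends on the load-balancing mode, as the dependent claims below specify.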
  2. The method of claim 1, wherein, when the load-balancing mode is a polling (round-robin) mode, the step of determining the latest selection probability of each network connection in the cache according to the characteristic of the server load-balancing mode comprises:
    determining the latest selection probability of each network connection in the cache according to the most recent use time of each network connection in the cache, wherein the network connection whose most recent use time is closest to the current time has the largest latest selection probability, and the network connection whose most recent use time is farthest from the current time has the smallest latest selection probability.
  3. The method of claim 1, wherein, when the load-balancing mode is a hash mode, the step of determining the latest selection probability of each network connection in the cache according to the characteristic of the server load-balancing mode comprises:
    determining the latest selection probability of each network connection in the cache according to the previous selection probability of each network connection in the cache, wherein the network connection selected last time has the smallest latest selection probability, and the network connection with the smallest previous selection probability has the largest latest selection probability.
  4. The method of claim 1, wherein, when the load-balancing mode is a lowest-miss mode, the step of determining the latest selection probability of each network connection in the cache according to the characteristic of the server load-balancing mode comprises:
    determining the latest selection probability of each network connection in the cache according to the historical number of accesses processed by the server corresponding to each network connection in the cache, wherein the network connection whose server has processed the largest number of historical accesses has the smallest latest selection probability, and the network connection whose server has processed the smallest number of historical accesses has the largest latest selection probability.
  5. The method of claim 1, wherein, when the load-balancing mode is a fastest-response mode, the step of determining the latest selection probability of each network connection in the cache according to the characteristic of the server load-balancing mode comprises:
    determining the latest selection probability of each network connection in the cache according to the response time of the server corresponding to each network connection in the cache, wherein the network connection whose server has the shortest response time has the largest latest selection probability, and the network connection whose server has the longest response time has the smallest latest selection probability.
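Claims 2–5 each pair a load-balancing mode with a different scoring rule for the cached connections. Below is one hedged Python sketch of those four rules; the concrete formulas that turn raw measurements into scores are assumptions introduced for illustration — the claims fix only the ordering (which connection scores largest and which smallest), not the formulas.

```python
def scores_polling(last_used, now):
    # Claim 2 (polling/round-robin): the more recently used, the higher the score.
    return {s: 1.0 / (1.0 + now - t) for s, t in last_used.items()}

def scores_hash(previous, last_selected):
    # Claim 3 (hash): invert last round's probabilities, and force the
    # connection that was actually selected last time to rank lowest.
    scores = {s: 1.0 - p for s, p in previous.items()}
    scores[last_selected] = min(scores.values()) - 1.0
    return scores

def scores_lowest_miss(handled):
    # Claim 4 (lowest-miss): fewer historically processed accesses -> higher score.
    return {s: 1.0 / (1.0 + n) for s, n in handled.items()}

def scores_fastest_response(response_ms):
    # Claim 5 (fastest response): shorter response time -> higher score.
    return {s: 1.0 / ms for s, ms in response_ms.items()}

lm = scores_lowest_miss({"a": 10, "b": 2, "c": 7})
fr = scores_fastest_response({"a": 50.0, "b": 20.0, "c": 35.0})
print(max(lm, key=lm.get), max(fr, key=fr.get))  # b b
```

Any of these score maps can then be fed to the replace/retain step of claim 1; only the ranking they induce matters for eviction.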
  6. A computer program comprising computer-readable code which, when run on a computing device, causes the computing device to perform the method of caching network connections according to any one of claims 1 to 5.
  7. A computer-readable medium storing the computer program of claim 6.
  8. A device for caching network connections, comprising:
    a probability determining module, configured to determine, after a server access is completed, a latest selection probability of each network connection in a cache according to a characteristic of the server load-balancing mode, wherein the latest selection probability indicates the probability that, under the load-balancing mode, the server corresponding to a network connection will be selected next time;
    a probability sorting module, configured to sort the network connections in the cache according to the latest selection probability; and
    a replacement-and-retention module, configured to replace the network connection with the smallest latest selection probability in the cache, and to retain the network connection with the largest latest selection probability in the cache.
  9. The device of claim 8, wherein, when the load-balancing mode is a polling (round-robin) mode, the probability determining module comprises:
    a first probability determining submodule, configured to determine the latest selection probability of each network connection in the cache according to the most recent use time of each network connection in the cache, wherein the network connection whose most recent use time is closest to the current time has the largest latest selection probability, and the network connection whose most recent use time is farthest from the current time has the smallest latest selection probability.
  10. The device of claim 8, wherein, when the load-balancing mode is a hash mode, the probability determining module comprises:
    a second probability determining submodule, configured to determine the latest selection probability of each network connection in the cache according to the previous selection probability of each network connection in the cache, wherein the network connection selected last time has the smallest latest selection probability, and the network connection with the smallest previous selection probability has the largest latest selection probability.
  11. The device of claim 8, wherein, when the load-balancing mode is a lowest-miss mode, the probability determining module comprises:
    a third probability determining submodule, configured to determine the latest selection probability of each network connection in the cache according to the historical number of accesses processed by the server corresponding to each network connection in the cache, wherein the network connection whose server has processed the largest number of historical accesses has the smallest latest selection probability, and the network connection whose server has processed the smallest number of historical accesses has the largest latest selection probability.
  12. The device of claim 8, wherein, when the load-balancing mode is a fastest-response mode, the probability determining module comprises:
    a fourth probability determining submodule, configured to determine the latest selection probability of each network connection in the cache according to the response time of the server corresponding to each network connection in the cache, wherein the network connection whose server has the shortest response time has the largest latest selection probability, and the network connection whose server has the longest response time has the smallest latest selection probability.
PCT/CN2015/095455 2014-12-27 2015-11-24 Method and device for caching network connection WO2016101748A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410836954.6 2014-12-27
CN201410836954.6A CN104580435B (en) 2014-12-27 2014-12-27 A kind of caching method and device of network connection

Publications (1)

Publication Number Publication Date
WO2016101748A1 (en)

Family

ID=53095592

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/095455 WO2016101748A1 (en) 2014-12-27 2015-11-24 Method and device for caching network connection

Country Status (2)

Country Link
CN (1) CN104580435B (en)
WO (1) WO2016101748A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106713163A (en) * 2016-12-29 2017-05-24 杭州迪普科技股份有限公司 Method and apparatus for deploying server load

Families Citing this family (4)

Publication number Priority date Publication date Assignee Title
CN104580435B (en) * 2014-12-27 2019-03-08 北京奇虎科技有限公司 A kind of caching method and device of network connection
CN106060164B (en) * 2016-07-12 2021-03-23 Tcl科技集团股份有限公司 Telescopic cloud server system and communication method thereof
CN106657399B (en) * 2017-02-20 2020-08-18 北京奇虎科技有限公司 Background server selection method and device based on middleware
CN107333235B (en) * 2017-06-14 2020-09-15 珠海市魅族科技有限公司 WiFi connection probability prediction method and device, terminal and storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
US6317778B1 (en) * 1998-11-23 2001-11-13 International Business Machines Corporation System and method for replacement and duplication of objects in a cache
CN101455057A (en) * 2006-06-30 2009-06-10 国际商业机器公司 A method and apparatus for caching broadcasting information
CN102098290A (en) * 2010-12-17 2011-06-15 天津曙光计算机产业有限公司 Elimination and replacement method of transmission control protocol (TCP) streams
CN104580435A (en) * 2014-12-27 2015-04-29 北京奇虎科技有限公司 Method and device for caching network connections

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US6154767A (en) * 1998-01-15 2000-11-28 Microsoft Corporation Methods and apparatus for using attribute transition probability models for pre-fetching resources
CN101184021B (en) * 2007-12-14 2010-06-02 成都市华为赛门铁克科技有限公司 Method, equipment and system for implementing stream media caching replacement
CN103347068B (en) * 2013-06-26 2016-03-09 江苏省未来网络创新研究院 A kind of based on Agent cluster network-caching accelerated method


Also Published As

Publication number Publication date
CN104580435B (en) 2019-03-08
CN104580435A (en) 2015-04-29

Similar Documents

Publication Publication Date Title
WO2016101748A1 (en) Method and device for caching network connection
US20150317091A1 (en) Systems and methods for enabling local caching for remote storage devices over a network via nvme controller
US20160132541A1 (en) Efficient implementations for mapreduce systems
US10044797B2 (en) Load balancing of distributed services
US9864538B1 (en) Data size reduction
US9594696B1 (en) Systems and methods for automatic generation of parallel data processing code
US8239337B2 (en) Network device proximity data import based on weighting factor
US10691731B2 (en) Efficient lookup in multiple bloom filters
US9705977B2 (en) Load balancing for network devices
US10296485B2 (en) Remote direct memory access (RDMA) optimized high availability for in-memory data storage
KR101719500B1 (en) Acceleration based on cached flows
US20160203102A1 (en) Efficient remote pointer sharing for enhanced access to key-value stores
US10915524B1 (en) Scalable distributed data processing and indexing
JP6770396B2 (en) Methods, systems, and programs to reduce service reactivation time
US20190377683A1 (en) Cache pre-fetching using cyclic buffer
US20170004087A1 (en) Adaptive cache management method according to access characteristics of user application in distributed environment
EP3369238B1 (en) Method, apparatus, computer-readable medium and computer program product for cloud file processing
US8918588B2 (en) Maintaining a cache of blocks from a plurality of data streams
WO2018111696A1 (en) Partial storage of large files in distinct storage systems
US11048758B1 (en) Multi-level low-latency hashing scheme
CN112732667A (en) Usability enhancing method and system for distributed file system
US11023440B1 (en) Scalable distributed data processing and indexing
JP2016081492A (en) Different type storage server and file storage method thereof
CN113806249B (en) Object storage sequence lifting method, device, terminal and storage medium
KR20150015356A (en) Method and apparatus for delivering content from content store in content centric networking

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 15871817; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 15871817; Country of ref document: EP; Kind code of ref document: A1