WO2016101748A1 - Network connection caching method and device - Google Patents

Network connection caching method and device

Info

Publication number
WO2016101748A1
WO2016101748A1 (PCT/CN2015/095455)
Authority
WO
WIPO (PCT)
Prior art keywords
network connection
cache
probability
selection probability
latest
Prior art date
Application number
PCT/CN2015/095455
Other languages
English (en)
Chinese (zh)
Inventor
任勇全
赵安安
陈磊
Original Assignee
北京奇虎科技有限公司
奇智软件(北京)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京奇虎科技有限公司 and 奇智软件(北京)有限公司
Publication of WO2016101748A1 publication Critical patent/WO2016101748A1/fr

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/40 Support for services or applications
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/02 Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/16 Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]

Definitions

  • the present invention relates to the field of communications technologies, and in particular, to a method and apparatus for caching a network connection.
  • the ordinary web access process follows a typical client–server model.
  • the user issues a request through a program on the client 101, such as a browser; the web server 103 responds to the request and returns the corresponding data, while the proxy server 102 forwards requests and data between the client 101 and the web servers 103. Through load balancing, requests can be distributed evenly across the web servers 103. Assuming load balancing is implemented over four web servers by polling, each client request can be assigned to a different web server in turn, in the order web server 1, web server 2, web server 3, web server 4.
  • when the number of concurrent accesses is large, maintaining a TCP (Transmission Control Protocol) connection with every web server would consume substantial system resources, so the proxy server may instead use a preset number of cache areas to cache TCP connections to the web servers.
  • the preset number is usually smaller than the number of web servers; for example, when the number of web servers is 4, the number of cache areas may be 3, and so on.
  • for example, cache area 1, cache area 2, and cache area 3 are used to cache the TCP connections of web server 1, web server 2, web server 3, and web server 4, and suppose that cache areas 1, 2, and 3 currently cache the TCP connections to web servers 1, 2, and 3 respectively.
  • in the existing scheme, when cache replacement is combined with the polling mode above, after web server 1 is accessed through the content of cache area 1, the content of cache area 2 is replaced; the cache lookup then fails when web server 2 is accessed, and the connection to web server 2 has to be re-established. Likewise, after web server 2 is accessed, the content of cache area 3 is replaced, causing a miss when web server 3 is accessed. The cache hit rate of the existing scheme is therefore low.
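The failure mode of the existing scheme can be reproduced with a short, self-contained sketch (function and names are illustrative, not from the patent): a plain least-recently-used cache in front of round-robin balancing evicts exactly the connection that the balancer will ask for next.

```python
from collections import OrderedDict

def round_robin_with_lru(num_servers=4, cache_size=3, accesses=12):
    """Count cache hits when a plain LRU cache fronts round-robin balancing."""
    cache = OrderedDict()                  # server id -> cached connection (stub)
    hits = 0
    for i in range(accesses):
        server = i % num_servers           # round-robin: next server in turn
        if server in cache:
            hits += 1
            cache.move_to_end(server)      # just used -> most recently used
        else:
            if len(cache) >= cache_size:
                cache.popitem(last=False)  # evict the least recently used
            cache[server] = f"conn-to-{server}"
    return hits

# With 4 servers and only 3 cache areas, LRU always evicts exactly the
# connection that round-robin will need next: every access misses.
print(round_robin_with_lru())  # 0
```

The sketch only models hit counting, not real TCP connections, but it shows why a generic LRU policy degenerates to a 0% hit rate in the scenario described above.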
  • the present invention has been made in order to provide a method and apparatus for caching a network connection that overcomes the above problems or at least partially solves the above problems.
  • a method for caching a network connection including:
  • after completing a server access, determining the latest selection probability of each network connection in the cache according to the characteristics of the server load balancing mode, wherein the latest selection probability indicates the probability that the server corresponding to the network connection is selected next time under the load balancing mode; sorting the network connections in the cache according to the latest selection probability; and replacing the network connection with the smallest latest selection probability in the cache while retaining the network connection with the largest latest selection probability in the cache.
  • a computer program comprising computer readable code which, when run on a computing device, causes the computing device to perform the method for caching a network connection described above.
  • a computer readable medium storing a computer program as described above.
  • a cache device for network connection including:
  • the probability determining module is configured to determine, according to the characteristics of the server load balancing mode, the latest selection probability of each network connection in the cache after a server access is completed; wherein the latest selection probability indicates the probability that the server corresponding to the network connection is selected next time under the load balancing mode;
  • a probability ordering module configured to sort network connections in the cache according to the latest selection probability
  • a replacement retention module configured to replace the network connection with the smallest latest selection probability in the cache, and to retain the network connection with the largest latest selection probability in the cache.
  • with the method and apparatus for caching a network connection, after a server access is completed, the latest selection probability of each network connection in the cache is determined according to the characteristics of the server load balancing mode; the network connections in the cache are sorted according to the latest selection probability; and the network connection with the smallest latest selection probability is replaced while the network connection with the largest latest selection probability is retained. Because the latest selection probability indicates the probability that the server corresponding to a network connection is selected next time under the load balancing mode, the embodiment of the present invention replaces only the connection least likely to be selected and keeps the connection most likely to be selected, which avoids the cache hit failures caused by replacing the network connection with the largest latest selection probability and thus improves the cache hit rate of network connections.
  • FIG. 1 is a schematic structural diagram of an HTTP access system
  • FIG. 2 is a flow chart showing the steps of a method for caching a network connection according to an example of the present invention
  • FIG. 3 is a schematic structural diagram of a cache device for network connection according to an embodiment of the present invention.
  • Figure 4 shows schematically a block diagram of a computing device for performing the method according to the invention
  • Fig. 5 schematically shows a storage unit for holding or carrying program code implementing the method according to the invention.
  • FIG. 2 is a schematic flow chart of a method for caching a network connection according to an embodiment of the present invention, which may specifically include the following steps:
  • Step 201: After completing a server access, determine the latest selection probability of each network connection in the cache according to the characteristics of the server load balancing mode, wherein the latest selection probability indicates the probability that the server corresponding to the network connection is selected next time under the load balancing mode;
  • Step 202: Sort the network connections in the cache according to the latest selection probability;
  • Step 203: Replace the network connection with the smallest latest selection probability in the cache, and retain the network connection with the largest latest selection probability in the cache.
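Steps 201–203 can be summarized in a minimal sketch (illustrative names; the probability function would be supplied by whichever load balancing mode is in use):

```python
def replace_by_selection_probability(cache, latest_probability):
    """Keep the cached connections most likely to be selected next.

    cache: dict mapping server id -> cached network connection.
    latest_probability: callable(server_id) -> probability that the load
    balancer selects this server next (Step 201, mode-specific).
    Returns the server id whose connection was evicted.
    """
    # Step 202: rank cached connections by their latest selection probability.
    ranked = sorted(cache, key=latest_probability)
    # Step 203: replace only the least likely connection; retain the rest.
    victim = ranked[0]
    del cache[victim]
    return victim

# Example: three cached connections; server 'b' is least likely next.
cache = {"a": "conn-a", "b": "conn-b", "c": "conn-c"}
probs = {"a": 0.5, "b": 0.1, "c": 0.4}
evicted = replace_by_selection_probability(cache, probs.get)
print(evicted)  # b
```

After eviction the caller would cache the newly established connection in the freed slot; that bookkeeping is omitted here.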
  • the embodiment of the present invention can be applied to various proxy servers. A proxy server sits between the client and the servers; it forwards requests and data between them, and implements load balancing over the servers so that requests are distributed evenly among them. It can also cache the network connections to the servers, which increases the communication speed between client and server and avoids the system resources consumed by repeatedly re-establishing connections.
  • the load balancing process of a server is mainly as follows: after completing a server access, the next server is selected according to the characteristics of the load balancing mode, and a new access request is forwarded to it; here, one server access consists of forwarding an access request to the selected server.
  • the condition that the cache hit succeeds in the load balancing process of the server is specifically as follows: the server selected according to the characteristics of the load balancing mode is successfully matched with the network connection in the cache, that is, the network connection corresponding to the selected server exists in the cache.
  • the maximum number of network connections that the cache can store is usually smaller than the number of servers.
  • cache replacement serves this load balancing process by replacing one or more network connections in the cache with other network connections.
  • to this end, after completing a server access, the embodiment of the present invention determines the latest selection probability of each network connection in the cache according to the characteristics of the server load balancing mode, sorts the network connections in the cache according to the latest selection probability, replaces the network connection with the smallest latest selection probability, and retains the network connection with the largest latest selection probability. Because the latest selection probability indicates the probability that the server corresponding to a network connection is selected next time under the load balancing mode, replacing only the least likely connection avoids the cache hit failures caused by evicting the connection with the largest latest selection probability, thereby improving the cache hit rate of network connections.
  • Scheme 1: the load balancing mode may be a polling (round-robin) mode.
  • in this case, the step of determining the latest selection probability of each network connection in the cache according to the characteristics of the server load balancing mode may specifically include: determining the latest selection probability of each network connection in the cache according to its latest usage time, wherein the network connection whose latest usage time is farthest from the current time has the largest latest selection probability, and the network connection used most recently has the smallest.
  • in the polling mode, each new request in the load balancing process is sent to the next server in turn, so the servers are selected successively and the cycle restarts after the last one; that is, every server is selected alternately on an equal footing.
  • consequently, the server behind each cached network connection is used again only after a full polling period, so the latest selection probability of each network connection in the cache can be determined from its latest usage time: the newer a connection's latest usage time, the smaller its latest selection probability. Sorting the cached network connections by latest selection probability therefore amounts to sorting them by latest usage time.
  • for example, cache area 1, cache area 2, and cache area 3 are used to cache the TCP connections of web server 1, web server 2, web server 3, and web server 4, and in the current state cache areas 1, 2, and 3 cache the TCP connections to web servers 1, 2, and 3 respectively.
  • after web server 1 is accessed through cache area 1, the embodiment of the present invention determines, from the latest usage times of the cached network connections, that the latest selection probabilities are ordered cache area 1 < cache area 2 < cache area 3. The content of cache area 1 can therefore be replaced and the contents of cache areas 2 and 3 retained, which guarantees a cache hit when web server 2 is accessed, with no need to re-establish the connection to web server 2. Then, after web server 2 is accessed, the order of the latest selection probabilities becomes cache area 2 < cache area 1 < cache area 3, so the content of cache area 2 can be replaced, guaranteeing a cache hit when web server 3 is accessed with no need to re-establish that connection. It can be seen that this greatly improves the cache hit rate.
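Under the polling assumption above, the scheme reduces to evicting the most recently used connection, since the connection just used has the smallest latest selection probability. A sketch (illustrative names; eviction here happens lazily on a miss rather than proactively after each access, which gives the same steady-state behaviour):

```python
def round_robin_with_mru_eviction(num_servers=4, cache_size=3, accesses=12):
    """Evict the most recently used connection: under round-robin, the
    connection just used has the smallest latest selection probability."""
    cache = {}       # server id -> latest usage time
    hits = 0
    for t in range(accesses):
        server = t % num_servers
        if server in cache:
            hits += 1
        elif len(cache) >= cache_size:
            victim = max(cache, key=cache.get)   # newest usage time ...
            del cache[victim]                    # ... smallest latest probability
        cache[server] = t                        # record latest usage time
    return hits

# Same workload as the LRU counter-example (4 servers, 3 areas, 12
# accesses): after warm-up, 3 of every 4 accesses now hit the cache.
print(round_robin_with_mru_eviction())  # 6 (plain LRU scores 0 here)
```

The evicted connection is always the one whose server was polled last, which is exactly the server that will not be needed again for a full polling period.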
  • Scheme 2: the load balancing mode may be a hash mode.
  • in this case, the step of determining the latest selection probability of each network connection in the cache according to the characteristics of the server load balancing mode may include: determining the latest selection probability of each network connection in the cache according to its previous selection probability, wherein the network connection selected last time has the smallest latest selection probability, and the network connection whose previous selection probability was smallest has the largest latest selection probability.
  • the hash method sends each request to a server according to a fixed rule through a one-way, irreversible hash function, which usually has the following characteristics:
  • balance means that the hash results should be distributed evenly across the servers, solving the load balancing problem;
  • monotonicity means that when a server is added or deleted, access for the same key still yields a consistent result;
  • dispersion means that data should be distributed and stored across the servers.
  • specifically, the hash mode may disperse requests according to the result of taking the hash modulo the number of servers: the integer hash value of the key is computed first, that value is then reduced modulo the number of servers, and the server is selected according to the resulting remainder.
  • for each network connection, the previous selection probability and the next selection probability are usually reversed; that is, the larger the previous selection probability, the smaller the next one. The latest selection probability of each network connection in the cache can therefore be determined from its previous selection probability.
  • when cache replacement is combined with the hash mode, after web server 1 is accessed through the content of cache area 1, the previous selection probability of the network connection in cache area 1 was 100%, so its next latest selection probability can be taken as approximately 0.
  • the content of cache area 1 can therefore be replaced and the contents of cache areas 2 and 3 retained, guaranteeing a cache hit when web server 2 is accessed with no need to re-establish the connection to web server 2. Then, after web server 2 is accessed, the latest selection probability of the network connection in cache area 2 is approximately 0, so the content of cache area 2 can be replaced, guaranteeing a cache hit when web server 3 is accessed with no need to re-establish that connection. It can be seen that this greatly improves the cache hit rate.
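A minimal sketch of the hash mode described above (illustrative names; the inversion rule simply encodes the text's assumption that the previous and next selection probabilities are reversed):

```python
import hashlib

NUM_SERVERS = 4

def pick_server(key):
    """Hash-mode selection: one-way hash of the key, modulo the server count."""
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SERVERS

def latest_probabilities(previous):
    """Invert last round's selection probabilities, per the assumption that
    a connection selected with high probability last time is unlikely to
    be selected next (and vice versa)."""
    total = sum(1.0 - p for p in previous.values())
    return {conn: (1.0 - p) / total for conn, p in previous.items()}

# Cache area 1 was just selected (previous probability 100%), so its
# latest selection probability is ~0 and it becomes the eviction victim.
previous = {"area-1": 1.0, "area-2": 0.0, "area-3": 0.0}
latest = latest_probabilities(previous)
victim = min(latest, key=latest.get)
print(victim)  # area-1
```

The choice of SHA-256 and the 8-byte truncation are arbitrary illustrations; any one-way hash reduced modulo the server count fits the mode described.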
  • Scheme 3: the load balancing mode may be the lowest-miss mode.
  • in this case, the step of determining the latest selection probability of each network connection in the cache according to the characteristics of the server load balancing mode may specifically include: determining the latest selection probability of each network connection in the cache according to the number of accesses historically processed by its corresponding server, wherein the network connection of the server with the largest historical access count has the smallest latest selection probability, and the network connection of the server with the smallest historical access count has the largest.
  • scheme 3 thus determines the latest selection probability of each network connection in the cache from the historical access count of its corresponding server; specifically, the more accesses a server has historically processed, the smaller the latest selection probability of its network connection.
  • when cache replacement is combined with the lowest-miss mode, after web server 1 is accessed through the content of cache area 1, suppose the historical access counts of web server 1, web server 2, web server 3, and web server 4 are 100w, 80w, 110w, and 90w respectively, where w denotes 10,000. The latest selection probabilities of the cached network connections are then ordered cache area 3 < cache area 1 < cache area 2, so the content of cache area 3 can be replaced and the contents of cache areas 1 and 2 retained, guaranteeing a cache hit when web server 2 is accessed with no need to re-establish the connection to web server 2. It can be seen that this improves the cache hit rate.
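The ranking in this example can be sketched as follows (illustrative names; "web-N" stands for web server N):

```python
def rank_for_replacement(history_counts, cached_servers):
    """Order cached connections for replacement under the lowest-miss mode:
    the server with the most historically processed requests has the
    smallest latest selection probability, so its connection comes first."""
    return sorted(cached_servers, key=lambda s: history_counts[s], reverse=True)

# Historical access counts from the example above (1w = 10,000 requests).
history = {"web-1": 1_000_000, "web-2": 800_000,
           "web-3": 1_100_000, "web-4": 900_000}
cached = ["web-1", "web-2", "web-3"]           # cache areas 1-3
victim = rank_for_replacement(history, cached)[0]
print(victim)  # web-3 -> replace cache area 3, keep areas 1 and 2
```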
  • Scheme 4: the load balancing mode may be the fastest-response mode.
  • in this case, the step of determining the latest selection probability of each network connection in the cache according to the characteristics of the server load balancing mode may specifically include: determining the latest selection probability of each network connection in the cache according to the response time of its corresponding server, wherein the network connection of the server with the shortest response time has the largest latest selection probability, and the network connection of the server with the longest response time has the smallest.
  • the fastest-response method records the network response time of each server and assigns the next incoming request to the server with the shortest response time.
  • scheme 4 thus determines the latest selection probability of each network connection in the cache from the response time of its corresponding server; specifically, the shorter a server's response time, the larger the latest selection probability of its network connection.
  • when cache replacement is combined with the fastest-response mode, after web server 1 is accessed through the content of cache area 1, suppose the response times of web server 1, web server 2, web server 3, and web server 4 are 10 ms, 20 ms, 25 ms, and 30 ms respectively. The latest selection probabilities of the cached network connections are then ordered cache area 3 < cache area 2 < cache area 1, so the content of cache area 3 can be replaced and the contents of cache areas 1 and 2 retained, guaranteeing a cache hit when web server 1 is accessed with no need to re-establish the connection to web server 1. It can be seen that this improves the cache hit rate.
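The fastest-response ranking from this example can be sketched the same way (illustrative names; response times of 10, 20, 25 and 30 ms assumed, consistent with the stated ordering):

```python
def rank_for_replacement(response_ms, cached_servers):
    """Order cached connections for replacement under the fastest-response
    mode: the longest response time means the smallest latest selection
    probability, so that connection is the first candidate for eviction."""
    return sorted(cached_servers, key=lambda s: response_ms[s], reverse=True)

# Measured response times, in milliseconds, from the example above.
response_ms = {"web-1": 10, "web-2": 20, "web-3": 25, "web-4": 30}
cached = ["web-1", "web-2", "web-3"]           # cache areas 1-3
victim = rank_for_replacement(response_ms, cached)[0]
print(victim)  # web-3 -> replace cache area 3, keep areas 1 and 2
```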
  • the foregoing describes the characteristics of the polling mode, the hash mode, the lowest-miss mode, and the fastest-response mode, together with the corresponding schemes for determining the latest selection probability of each network connection in the cache. It should be noted that, depending on the actual situation, a person skilled in the art may adopt any one of the foregoing solutions, or may adopt another load balancing mode and determine the latest selection probability of each network connection in the cache according to that mode's characteristics; the present invention places no restriction on other load balancing modes or on their corresponding schemes for determining the latest selection probability.
  • in summary, the embodiment of the present invention replaces only the network connection with the smallest latest selection probability in the cache and retains the network connection with the largest latest selection probability, which avoids the cache hit failures caused by replacing the connection most likely to be selected next and can therefore improve the cache hit rate of network connections.
  • referring to FIG. 3, a schematic structural diagram of a cache device for a network connection according to an embodiment of the present invention is shown, which may specifically include the following modules:
  • the probability determining module 301 is configured to determine, according to the characteristics of the server load balancing mode, the latest selection probability of each network connection in the cache after a server access is completed; wherein the latest selection probability indicates the probability that the server corresponding to the network connection is selected next time under the load balancing mode;
  • the probability ranking module 302 is configured to sort the network connections in the cache according to the latest selection probability
  • the replacement retention module 303 is configured to replace the network connection with the smallest latest selection probability in the cache, and to retain the network connection with the largest latest selection probability in the cache.
  • the load balancing mode is a polling mode
  • the probability determining module 301 may further include:
  • the first probability determining submodule is configured to determine a latest selection probability of each network connection in the cache according to a latest usage time of each network connection in the cache.
  • the load balancing mode is a hash mode
  • the probability determining module 301 may further include:
  • the second probability determining submodule is configured to determine a latest selection probability of each network connection in the cache according to a previous selection probability of each network connection in the cache.
  • the load balancing mode is the lowest-miss mode
  • the probability determining module 301 may further include:
  • the third probability determining submodule is configured to determine the latest selection probability of each network connection in the cache according to the number of accesses historically processed by the server corresponding to each network connection in the cache.
  • the load balancing mode is the fastest response mode
  • the probability determining module 301 may further include:
  • the fourth probability determining submodule is configured to determine the latest selection probability of each network connection in the cache according to the response time of the server corresponding to each network connection in the cache, wherein the network connection of the server with the shortest response time has the largest latest selection probability, and the network connection of the server with the longest response time has the smallest.
  • since the device embodiment is substantially similar to the method embodiment, its description is relatively brief; for relevant details, reference may be made to the description of the method embodiment.
  • the various component embodiments of the present invention may be implemented in hardware, or in a software module running on one or more processors, or in a combination thereof.
  • a microprocessor or digital signal processor may be used in practice to implement some or all of the functionality of some or all of the components of the method and apparatus for caching network connections in accordance with embodiments of the present invention.
  • the invention can also be implemented as a device or device program (e.g., a computer program and a computer program product) for performing some or all of the methods described herein.
  • Such a program implementing the invention may be stored on a computer readable medium or may be in the form of one or more signals. Such signals may be downloaded from an internet platform, provided on a carrier signal, or provided in any other form.
  • Figure 4 illustrates a computing device, such as a search engine server, that can implement the above described method in accordance with the present invention.
  • the computing device conventionally includes a processor 410 and a computer program product or computer readable medium in the form of a memory 430.
  • the memory 430 may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read Only Memory), an EPROM, a hard disk, or a ROM.
  • the memory 430 has a storage space 450 that stores program code 451 for performing any of the method steps described above.
  • storage space 450 storing program code may include various program code 451 for implementing various steps in the above methods, respectively.
  • the program code can be read from or written to one or more computer program products.
  • These computer program products include program code carriers such as hard disks, compact disks (CDs), memory cards or floppy disks.
  • such computer program products are typically portable or fixed storage units such as the one shown in FIG. 5.
  • the storage unit may have storage segments, storage spaces, and the like arranged similarly to the memory 430 in the computing device of FIG. 4.
  • the program code can be compressed, for example, in an appropriate form.
  • the storage unit comprises computer readable code 451', i.e., code readable by a processor such as the processor 410, which, when run by a computing device, causes that device to perform each of the steps of the method described above.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Multimedia (AREA)
  • Information Transfer Between Computers (AREA)
  • Computer And Data Communications (AREA)

Abstract

According to one embodiment, the invention relates to a method and device for caching a network connection, the method comprising in particular the following steps: after completion of an access to a server, determining a latest selection probability of each network connection in a cache according to characteristics of a server load balancing mode, the latest selection probability representing the probability that the server corresponding to the network connection will be selected next on the basis of the load balancing mode; ordering the network connections in the cache according to the latest selection probabilities; and replacing the network connection having the smallest latest selection probability in the cache, while retaining the network connection having the largest latest selection probability in the cache. The embodiment of the present invention can avoid the cache hit failure caused by replacing the network connection having the largest latest selection probability in the cache, thereby improving the cache hit rate of the network connection.
PCT/CN2015/095455 2014-12-27 2015-11-24 Procédé et dispositif de mise en mémoire cache de connexion réseau WO2016101748A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410836954.6A CN104580435B (zh) 2014-12-27 2014-12-27 一种网络连接的缓存方法和装置
CN201410836954.6 2014-12-27

Publications (1)

Publication Number Publication Date
WO2016101748A1 2016-06-30

Family

ID=53095592

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/095455 WO2016101748A1 (fr) 2014-12-27 2015-11-24 Procédé et dispositif de mise en mémoire cache de connexion réseau

Country Status (2)

Country Link
CN (1) CN104580435B (fr)
WO (1) WO2016101748A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106713163A (zh) * 2016-12-29 2017-05-24 杭州迪普科技股份有限公司 一种调配服务器负载的方法及装置

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104580435B (zh) * 2014-12-27 2019-03-08 北京奇虎科技有限公司 一种网络连接的缓存方法和装置
CN106060164B (zh) * 2016-07-12 2021-03-23 Tcl科技集团股份有限公司 一种可伸缩的云服务器系统及其通信方法
CN106657399B (zh) * 2017-02-20 2020-08-18 北京奇虎科技有限公司 基于中间件实现的后台服务器选择方法及装置
CN107333235B (zh) * 2017-06-14 2020-09-15 珠海市魅族科技有限公司 WiFi连接概率预测方法、装置、终端及存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6317778B1 (en) * 1998-11-23 2001-11-13 International Business Machines Corporation System and method for replacement and duplication of objects in a cache
CN101455057A (zh) * 2006-06-30 2009-06-10 国际商业机器公司 高速缓存广播信息的方法和装置
CN102098290A (zh) * 2010-12-17 2011-06-15 天津曙光计算机产业有限公司 一种tcp流淘汰替换方法
CN104580435A (zh) * 2014-12-27 2015-04-29 北京奇虎科技有限公司 一种网络连接的缓存方法和装置

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6154767A (en) * 1998-01-15 2000-11-28 Microsoft Corporation Methods and apparatus for using attribute transition probability models for pre-fetching resources
CN101184021B (zh) * 2007-12-14 2010-06-02 成都市华为赛门铁克科技有限公司 一种实现流媒体缓存置换的方法、设备及系统
CN103347068B (zh) * 2013-06-26 2016-03-09 江苏省未来网络创新研究院 一种基于代理集群网络缓存加速方法



Also Published As

Publication number Publication date
CN104580435B (zh) 2019-03-08
CN104580435A (zh) 2015-04-29

Similar Documents

Publication Publication Date Title
WO2016101748A1 (fr) Procédé et dispositif de mise en mémoire cache de connexion réseau
US20150317091A1 (en) Systems and methods for enabling local caching for remote storage devices over a network via nvme controller
US20160132541A1 (en) Efficient implementations for mapreduce systems
US10044797B2 (en) Load balancing of distributed services
US9864538B1 (en) Data size reduction
US10083193B2 (en) Efficient remote pointer sharing for enhanced access to key-value stores
US9594696B1 (en) Systems and methods for automatic generation of parallel data processing code
US8239337B2 (en) Network device proximity data import based on weighting factor
US10691731B2 (en) Efficient lookup in multiple bloom filters
US9705977B2 (en) Load balancing for network devices
US10296485B2 (en) Remote direct memory access (RDMA) optimized high availability for in-memory data storage
KR101719500B1 (ko) 캐싱된 플로우들에 기초한 가속
US10915524B1 (en) Scalable distributed data processing and indexing
JP6770396B2 (ja) サービスの再活性化時間を短縮するための方法、システム、およびプログラム
US20190377683A1 (en) Cache pre-fetching using cyclic buffer
EP3369238B1 (fr) Procédé, dispositif, support lisible par ordinateur et produit de programme informatique pour le traitement de fichiers en nuage
US11023440B1 (en) Scalable distributed data processing and indexing
WO2018111696A1 (fr) Stockage partiel de grands fichiers dans des systèmes de stockage distincts
US11048758B1 (en) Multi-level low-latency hashing scheme
CN112732667A (zh) 一种分布式文件系统的可用性增强方法及系统
JP2016081492A (ja) 異種記憶サーバおよびそのファイル記憶方法
CN113806249B (zh) 一种对象存储有序列举方法、装置、终端及存储介质
KR20150015356A (ko) 콘텐트 중심 네트워크 내의 콘텐트 스토어로부터 콘텐트를 전달하는 방법 및 장치

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15871817

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15871817

Country of ref document: EP

Kind code of ref document: A1