WO2017219813A1 - Traffic processing method and transparent buffer system - Google Patents

Traffic processing method and transparent buffer system

Info

Publication number
WO2017219813A1
Authority
WO
WIPO (PCT)
Prior art keywords
cache system
syn
tcp
client
transparent cache
Prior art date
Application number
PCT/CN2017/085382
Other languages
French (fr)
Chinese (zh)
Inventor
黄凌云
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2017219813A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/16 Multipoint routing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/24 Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L 47/2416 Real-time traffic

Definitions

  • the present invention relates to the field of computers, and in particular, to a traffic processing method and a transparent cache system.
  • HTTP: Hypertext Transfer Protocol
  • Web Cache: Internet cache
  • The Web Cache caches popular resources locally and provides services to clients directly from the local network, greatly reducing the traffic sent to the upper-level network.
  • The basic workflow of the Web Cache is as follows: the Web Cache system performs real-time analysis and statistics on clients' uplink requests and selects the most popular resources for local caching; when the Web Cache system subsequently receives an uplink request from a client, it determines whether the target resource has already been cached locally; if so, the Web Cache system reads the resource data locally and returns it to the client, so that the client does not need to fetch the resource from the origin website.
  • Transparent caching is a type of Web Cache; upstream and downstream traffic is usually directed to the transparent cache system by configuring routing policies on a router.
  • the transparent caching system interacts with the client or web server using the web server address or client address, so that the client and the web server are not aware of the transparent caching system.
  • In the prior art, the Web Cache system detects asymmetrically routed traffic through trial and error and records the Internet Protocol (IP) addresses corresponding to that traffic in a local asymmetric-routing address record table.
  • IP: Internet Protocol
  • the embodiment of the invention provides a traffic processing method and a transparent cache system, which can implement normal access of the client to the web server during asymmetric routing.
  • In one aspect, a traffic processing method for a transparent cache system is provided, comprising: when the transparent cache system receives, from a client, a Synchronize Sequence Numbers (SYN) message for establishing a first TCP connection between the client and a service server, performing IP layer forwarding on the SYN message; and, if the transparent cache system does not receive a SYN+Acknowledgement (ACK) response message for the SYN message, performing IP layer forwarding on data packets of the TCP stream transmitted over the first TCP connection when the transparent cache system receives them.
  • SYN: Synchronize Sequence Numbers
  • ACK: Acknowledgement
  • After receiving the SYN packet for establishing the first TCP connection between the client and the service server, the transparent cache system does not immediately establish a TCP connection with the client; it performs IP layer forwarding on the SYN packet instead, and, if no SYN+ACK response packet for the SYN packet is received, it also performs IP layer forwarding on data packets of the TCP stream transmitted over the first TCP connection, so that the client can access the service server normally even when routing is asymmetric.
  • In one possible design, the method further includes: if the transparent cache system receives the SYN+ACK response message for the SYN message, simulating, through the TCP protocol stack, the establishment of a second TCP connection between the transparent cache system and the client and of a third TCP connection between the transparent cache system and the service server; and, when the transparent cache system receives a data packet of the TCP stream transmitted over the first TCP connection, performing cache service processing on the data packet using the second TCP connection and the third TCP connection.
  • When the transparent cache system receives the SYN packet sent by the client to the service server, it does not immediately establish the second TCP connection between the transparent cache system and the client or the third TCP connection between the transparent cache system and the service server; instead, it performs IP layer forwarding on the SYN packet, and only when it receives the SYN+ACK response message sent by the service server to the client does it establish the second TCP connection with the client and the third TCP connection with the service server. Subsequently, when the transparent cache system receives data packets of the TCP stream transmitted over the first TCP connection, it performs cache service processing on them using the second TCP connection and the third TCP connection.
  • In this manner, on the one hand, cache acceleration can be provided normally for symmetrically routed TCP streams; on the other hand, compared with the commonly used approach in which the transparent cache system establishes the second and third TCP connections immediately upon receiving the SYN message sent by the client to the service server, the embodiment of the present invention avoids the communication interruption that would occur if those connections were established while the TCP stream transmitted over the first TCP connection is in fact asymmetrically routed.
  • In one possible design, simulating, through the TCP protocol stack, the establishment of the second TCP connection between the transparent cache system and the client and of the third TCP connection between the transparent cache system and the service server includes: the transparent cache system simulating, in the TCP protocol stack and according to the SYN packet and the SYN+ACK response packet, the three-way handshake between the transparent cache system and the client, thereby establishing the second TCP connection; and the transparent cache system simulating, in the TCP protocol stack and according to the SYN packet and the SYN+ACK response packet, the three-way handshake between the transparent cache system and the service server, thereby establishing the third TCP connection.
  • It can be seen that, in this embodiment of the present invention, as soon as the transparent cache system receives the SYN+ACK response message sent by the service server to the client, it can simulate, through the TCP protocol stack and according to the SYN message and the SYN+ACK response message, the establishment of the second TCP connection with the client and of the third TCP connection with the service server, without having to wait for the ACK message sent by the client to the service server and then simulate both connections according to the SYN packet, the SYN+ACK response packet, and the ACK packet. The processing manner is flexible.
  • In one possible design, before the second TCP connection between the transparent cache system and the client and the third TCP connection between the transparent cache system and the service server are simulated through the TCP protocol stack, the method further includes: the transparent cache system sending the SYN+ACK response packet to the client by means of IP layer forwarding; the transparent cache system receiving the ACK message that the client sends to the service server for the SYN+ACK response message; and the transparent cache system sending the ACK message to the service server by means of IP layer forwarding.
  • In that design, simulating, through the TCP protocol stack, the establishment of the second TCP connection between the transparent cache system and the client and of the third TCP connection between the transparent cache system and the service server includes: the transparent cache system simulating, in the TCP protocol stack and according to the SYN packet, the SYN+ACK response packet, and the ACK packet, the three-way handshake between the transparent cache system and the client, thereby establishing the second TCP connection; and the transparent cache system simulating, in the TCP protocol stack and according to the SYN packet, the SYN+ACK response packet, and the ACK packet, the three-way handshake between the transparent cache system and the service server, thereby establishing the third TCP connection.
  • It can be seen that, in this embodiment of the present invention, the transparent cache system may also wait until it receives the ACK message sent by the client to the service server, and then simulate, through the TCP protocol stack and according to the SYN message, the SYN+ACK response message, and the ACK message, the establishment of the second TCP connection with the client and of the third TCP connection with the service server. The processing manner is flexible.
  • In another aspect, an embodiment of the present invention provides a transparent cache system that can implement the functions performed by the transparent cache system in the foregoing method examples; the functions may be implemented by hardware, or by hardware executing corresponding software.
  • the hardware or software includes one or more modules corresponding to the above functions.
  • the transparent cache system includes a processor and a transceiver configured to support the transparent cache system to perform the corresponding functions in the above methods.
  • the transceiver is used to support communication between the transparent cache system and other network elements.
  • the transparent cache system can also include a memory for coupling with the processor that holds the program instructions and data necessary for the transparent cache system.
  • an embodiment of the present invention provides a computer storage medium for storing computer software instructions for use in the transparent cache system, including a program designed to perform the above aspects.
  • Compared with the prior art, the embodiment of the present invention identifies whether each TCP stream is asymmetrically routed. If the TCP stream is symmetrically routed, the system provides the cache acceleration service normally; if the TCP stream is asymmetrically routed, the system bypasses it, without affecting the client's normal access to the service server. This mechanism adds no extra performance overhead on top of the normal service processing flow.
  • FIG. 1 is a schematic diagram of an application scenario of a traffic processing method for a transparent cache system according to an embodiment of the present disclosure
  • FIG. 2 is a schematic structural diagram of a transparent cache system according to an embodiment of the present invention.
  • FIG. 3 is a flowchart of a method for processing traffic of a transparent cache system according to an embodiment of the present invention
  • FIG. 3a is a flowchart of another method for processing traffic of a transparent cache system according to an embodiment of the present invention.
  • FIG. 4 is a signal flow diagram of a traffic processing method for a transparent cache system according to an embodiment of the present invention.
  • FIG. 5 is a signal flow diagram of another traffic processing method for a transparent cache system according to an embodiment of the present invention.
  • FIG. 6 is a structural diagram of another transparent cache system according to an embodiment of the present invention.
  • FIG. 7 is a structural diagram of another transparent cache system according to an embodiment of the present invention.
  • TCP: a connection-oriented, reliable, IP-based transport layer protocol in the Open System Interconnection (OSI) reference model, specified by the Internet Engineering Task Force (IETF) in RFC 793.
  • OSI: Open System Interconnection
  • IETF: Internet Engineering Task Force
  • HTTP is the most widely used network protocol on the Internet. HTTP provides a way to publish and receive HyperText Markup Language (HTML) pages. RFC 2616 describes HTTP 1.1.
  • Web Cache is a device deployed between a client and a service server.
  • the service server can also be called a web server.
  • The Web Cache monitors the client's uplink requests and saves the service server's responses locally, including HTML web pages, images, and file downloads; when a subsequent request for the same resource is received, the saved copy of the response is returned to the client instead of forwarding the request to the original web server.
  • the transparent cache system is a kind of Web Cache that uses a web server address or a client address to interact with a client or a web server, so that the client and the web server have no perception of the transparent cache system.
  • Symmetric routing is a routing method in which round-trip packets are transmitted through the same routing path.
  • Asymmetric routing is a routing method in which round-trip packets are transmitted through different routing paths.
  • FIG. 1 is a schematic diagram of an application scenario of a method for processing a traffic of a transparent cache system according to an embodiment of the present invention.
  • The scenario mainly involves a client (Client) 101, a service server (Server) 102, and a transparent cache system (Cache) 103. The transparent cache system 103 is located between the client 101 and the service server 102, and the transmission path of the TCP stream between the client 101 and the service server 102 may be either a symmetric route or an asymmetric route; FIG. 1 shows only the asymmetric case, in which round-trip packets are transmitted over different routing paths.
  • The transparent cache system is typically deployed on only one of the paths and can obtain only some of the packets of these TCP flows (uplink or downlink), so service layer processing cannot be performed normally.
  • For asymmetrically routed traffic, the transparent cache system therefore needs to be able to identify it and let it pass through.
  • the embodiment of the present invention provides a traffic processing method for a transparent cache system.
  • When the transparent cache system 103 receives a SYN packet for establishing a first TCP connection between the client 101 and the service server 102, it does not immediately establish a TCP connection between the client 101 and the transparent cache system 103 or a TCP connection between the service server 102 and the transparent cache system 103. Instead, it first determines whether the corresponding TCP stream is symmetrically routed, according to whether a SYN+ACK response message for the SYN message is received. If the stream is asymmetrically routed, the data packets are forwarded at the IP layer; if it is symmetrically routed, the TCP connection between the client 101 and the transparent cache system 103 and the TCP connection between the service server 102 and the transparent cache system 103 are established, and the data packets undergo cache service processing, thereby achieving precise control of the traffic.
  • The transparent cache system may specifically be a Web Cache Server. The Web Cache Server includes a Linux kernel (Kernel), a socket (Socket) management module, and a service processing module, where the Linux Kernel includes a TCP protocol stack, an IP protocol stack, and a Web Cache Kernel module, and the Web Cache Kernel module specifically includes a TCP flow table management submodule.
  • When the Web Cache Server receives the SYN packet sent by the client to the web server, it does not establish a TCP connection with the client; instead, it forwards the packet at the IP layer through the IP protocol stack, and packets forwarded at the IP layer do not enter the TCP protocol stack.
  • When the SYN+ACK response of the web server is received, the Web Cache Server determines that both the upstream and downstream packets of the TCP stream pass through the Web Cache Server, so the stream is symmetrically routed; the Web Cache Server then reuses the saved SYN and SYN+ACK packets to recover, in the TCP protocol stack, the TCP connections with the client and with the web server, so that the Web Cache Server can communicate with each of them separately.
  • For asymmetrically routed TCP flows, the Web Cache Server can let the traffic pass through without affecting the client's access to the web server; for symmetrically routed TCP flows, the service processing module can provide cache acceleration services through the TCP protocol stack, for example HTTP protocol parsing, calculation of a unique identifier for the target resource, accumulation of resource access popularity, and target resource index lookup.
  • FIG. 3 is a flowchart of a method for processing traffic of a transparent cache system according to an embodiment of the present disclosure, where the method includes:
  • Step 301 Perform an IP layer forwarding process on the SYN packet when the transparent cache system receives the SYN packet from the client for establishing the first TCP connection between the client and the service server.
  • Step 302 If the transparent cache system does not receive the SYN+ACK response packet for the SYN packet, then when the transparent cache system receives a data packet of the TCP stream transmitted over the first TCP connection, IP layer forwarding is performed on the data packet.
  • It can be seen that, in this embodiment of the present invention, after receiving the SYN packet for establishing the first TCP connection between the client and the service server, the transparent cache system does not immediately establish a TCP connection with the client; it performs IP layer forwarding on the SYN packet instead, and, if no SYN+ACK response packet for the SYN packet is received, it also performs IP layer forwarding on data packets of the TCP stream transmitted over the first TCP connection, so that the client can access the service server normally even when routing is asymmetric.
  • FIG. 3a is a flowchart of another method for processing traffic of a transparent cache system according to an embodiment of the present invention.
  • In addition to the foregoing steps 301 and 302, the method includes:
  • Step 303 If the transparent cache system receives the SYN+ACK response message for the SYN message, the second TCP connection between the transparent cache system and the client is established by simulation through the TCP protocol stack, and the third TCP connection between the transparent cache system and the service server is established by simulation through the TCP protocol stack.
  • The transparent cache system may simulate, in the TCP protocol stack and according to the SYN message and the SYN+ACK response message, the three-way handshake between the transparent cache system and the client, thereby establishing the second TCP connection with the client; and simulate, in the TCP protocol stack and according to the SYN packet and the SYN+ACK response packet, the three-way handshake between the transparent cache system and the service server, thereby establishing the third TCP connection with the service server.
  • Alternatively, the transparent cache system sends the SYN+ACK response message to the client by means of IP layer forwarding; the transparent cache system receives the ACK message that the client sends to the service server for the SYN+ACK response message; and the transparent cache system sends the ACK message to the service server by means of IP layer forwarding.
  • The transparent cache system then simulates, in the TCP protocol stack and according to the SYN packet, the SYN+ACK response packet, and the ACK packet, the three-way handshake between the transparent cache system and the client, thereby establishing the second TCP connection with the client; and simulates, in the TCP protocol stack and according to the SYN packet, the SYN+ACK response packet, and the ACK packet, the three-way handshake between the transparent cache system and the service server, thereby establishing the third TCP connection with the service server.
  • Step 304 When the transparent cache system receives a data packet of the TCP stream transmitted over the first TCP connection, the second TCP connection and the third TCP connection are used to perform cache service processing on the data packet.
  • It can be seen that, in this embodiment, when the transparent cache system receives the SYN packet sent by the client to the service server, it does not immediately establish the second TCP connection between the transparent cache system and the client or the third TCP connection between the transparent cache system and the service server; instead, it performs IP layer forwarding on the SYN packet, and only when it receives the SYN+ACK response message sent by the service server to the client does it establish the second TCP connection with the client and the third TCP connection with the service server.
  • Compared with establishing those connections immediately upon receiving the SYN message, this avoids the communication interruption that would occur if the second and third TCP connections were established while the TCP stream transmitted over the first TCP connection is in fact asymmetrically routed.
  • FIG. 4 is a signal flow diagram of a traffic processing method for a transparent cache system according to an embodiment of the present invention.
  • This embodiment is directed to processing a symmetrically routed TCP flow, where the Web Cache establishes the TCP connections with the client and with the web server only after receiving the response from the web server. The method includes:
  • Step 401 The client sends a SYN packet to the web server, and the Web Cache receives the packet.
  • Step 402 The Web Cache performs IP layer forwarding, and forwards the SYN packet of the client to the web server.
  • the Web Cache generates a corresponding flow table in the TCP flow table management module and saves the SYN packet.
  • Step 403 The Web Cache receives the SYN+ACK response packet sent by the web server to the client, and determines that the uplink and downlink packets of the TCP stream pass through the Web Cache, and therefore belong to a symmetric route.
  • Step 404 The Web Cache updates the flow table status in the TCP flow table management module, identifies the current TCP flow as a symmetric route, and saves the SYN+ACK message.
  • Step 405 The Web Cache sends the SYN+ACK response to the client.
  • Step 406 The Web Cache receives an ACK message sent by the client to the web server.
  • Step 407 The Web Cache uses the SYN packet, SYN+ACK packet, and ACK packet information (source/destination IP addresses, port numbers, and TCP sequence numbers) from steps 402, 404, and 406 to simulate the three-way handshake in the TCP protocol stack and restore the TCP connection between the Web Cache and the client.
  • The above simulation differs from a normal TCP connection establishment process: because the TCP connection is actually established between the client and the web server, the Web Cache converts it into two TCP connections in its local TCP protocol stack based on the information of that connection. Unlike the usual approach of calling a system API to create a socket, this technique uses the packet information to construct a socket structure and inserts it directly into the kernel TCP connection table.
  • Step 408 The Web Cache uses the SYN packet, SYN+ACK packet, and ACK packet information (source/destination IP addresses, port numbers, and TCP sequence numbers) from steps 402, 404, and 406 to simulate the three-way handshake in the TCP protocol stack and restore the TCP connection between the Web Cache and the web server.
  • This likewise uses the packet information to construct a socket structure and inserts it directly into the kernel TCP connection table.
  • Step 409 The Web Cache sends an ACK packet to the web server.
  • At this point, the Web Cache has successfully established TCP connections with both the client and the web server; subsequently received packets of this flow are no longer forwarded at the IP layer but are delivered to the TCP protocol stack for processing.
  • Step 404 is optional.
  • Alternatively, the Web Cache may restore the TCP connections with the client and the web server in the simulated operating-system protocol stack immediately after receiving the SYN+ACK response packet; that is, starting from step 403, steps 407 and 408 are performed first, and then steps 405, 406, and 409 are performed in sequence.
  • This is possible because, after receiving the SYN packet and the SYN+ACK response packet, the Web Cache can calculate the sequence number of the ACK packet from the SYN+ACK packet, and the source/destination IP addresses and port numbers carried in the ACK packet are also available from the SYN packet and the SYN+ACK packet. Therefore, the three-way handshake can be simulated in the TCP protocol stack even before the ACK packet is obtained, restoring both the TCP connection between the Web Cache and the client and the TCP connection between the Web Cache and the web server.
  • the execution sequence of the steps 407 and 408 is not specifically limited.
  • Step 407 may be performed before step 408, step 408 may be performed before step 407, or steps 407 and 408 may be performed simultaneously.
  • Web Cache receives the SYN packet sent by the client (IP: 10.1.1.10) to the web server (IP: 30.1.1.2).
  • the Web Cache (IP: 20.1.1.2) performs IP layer forwarding and forwards the SYN packet of the client (IP: 10.1.1.10) to the Web server (IP: 30.1.1.2).
  • the Web Cache generates a corresponding flow table in the TCP flow table management module and saves the SYN packet.
  • The Web Cache receives the SYN+ACK response packet sent by the web server (IP: 30.1.1.2) to the client (IP: 10.1.1.10) and determines that both the upstream and downstream packets of the TCP stream pass through the Web Cache, so the route is symmetric.
  • Web Cache (IP:20.1.1.2) updates the flow table status in the TCP flow table management module, identifies the current TCP flow as a symmetric route, and saves the SYN+ACK message.
  • Web Cache (IP:20.1.1.2) sends a SYN+ACK response to the client (IP: 10.1.1.10).
  • Web Cache (IP:20.1.1.2) receives an ACK message from the client (IP: 10.1.1.10) to the web server (IP: 30.1.1.2).
  • The Web Cache (IP: 20.1.1.2) uses the SYN packet, SYN+ACK packet, and ACK packet information to simulate the three-way handshake in the TCP protocol stack and restore the TCP connection between the Web Cache (IP: 20.1.1.2) and the client (IP: 10.1.1.10). Because the Web Cache performs IP address translation, the peer that the client (IP: 10.1.1.10) sees for this TCP connection is the web server (IP: 30.1.1.2).
  • The Web Cache (IP: 20.1.1.2) uses the SYN packet, SYN+ACK packet, and ACK packet information to simulate the three-way handshake in the TCP protocol stack and restore the TCP connection between the Web Cache (IP: 20.1.1.2) and the web server (IP: 30.1.1.2). Because the Web Cache performs IP address translation, the peer that the web server (IP: 30.1.1.2) sees for this TCP connection is the client (IP: 10.1.1.10).
  • Web Cache (IP: 20.1.1.2) sends an ACK message to the web server (IP: 30.1.1.2).
  • FIG. 5 is a signal flow diagram of another traffic processing method for a transparent cache system according to an embodiment of the present invention.
  • This embodiment is directed to processing an asymmetrically routed TCP flow: the Web Cache does not receive the response from the web server, and the TCP stream is bypassed. The method includes:
  • Step 501 The client sends a SYN packet to the web server, and the Web Cache receives the packet.
  • Step 502 The Web Cache performs IP layer forwarding, and forwards the SYN packet of the client to the web server.
  • the Web Cache generates a corresponding flow table in the TCP flow table management module and saves the SYN packet.
  • Step 503 The SYN+ACK response packet of the web server is returned to the client through another routing path (without passing through the Web Cache).
  • Step 504 The client sends an ACK packet to the web server, and the web cache receives the packet.
  • By default, a TCP connection is treated as being in the asymmetric routing state, and packets of such a connection received by the Web Cache are simply forwarded. Since the Web Cache has not received the SYN+ACK response packet from the web server, the current TCP stream remains in the IP layer forwarding state on the Web Cache (that is, the Web Cache considers the TCP stream to be asymmetrically routed); the Web Cache therefore performs IP layer forwarding and forwards the client's ACK message to the web server.
  • In other words, the Web Cache lets the connection between the client and the web server pass through: the TCP connection handshake packets are forwarded at the IP layer, the Web Cache itself does not establish a TCP connection with either the client or the web server, and the Web Cache continues to forward subsequent packets of this flow at the IP layer.
  • Web Cache receives the SYN packet sent by the client (IP: 10.1.1.20) to the web server (IP: 30.1.1.2).
  • the Web Cache (IP: 20.1.1.2) performs IP layer forwarding and forwards the SYN packet of the client (IP: 10.1.1.20) to the Web server (IP: 30.1.1.2).
  • The SYN+ACK response of the web server (IP: 30.1.1.2) is returned to the client (IP: 10.1.1.20) via another routing path.
  • Web Cache receives an ACK message from the client (IP: 10.1.1.20) to the web server (IP: 30.1.1.2).
  • the Web Cache (IP: 20.1.1.2) performs IP layer forwarding and forwards the ACK message of the client (IP: 10.1.1.20) to the Web server (IP: 30.1.1.2).
  • FIG. 6 is a structural diagram of another transparent cache system according to an embodiment of the present invention.
  • The transparent cache system is configured to execute the traffic processing method for a transparent cache system provided by the foregoing embodiments of the present invention, and includes a receiving unit 601, a processing unit 602, and a sending unit 603.
  • the receiving unit 601 is configured to receive a packet from the client or the service server, where the packet is a SYN packet or a data packet.
  • The processing unit 602 is configured to: when the receiving unit 601 receives a SYN packet for establishing a first Transmission Control Protocol (TCP) connection between the client and the service server, perform, through the sending unit 603, IP layer forwarding on the SYN packet; and, if the receiving unit 601 does not receive the SYN+ACK response message for the SYN packet, perform, through the sending unit 603, IP layer forwarding on data packets of the TCP stream transmitted over the first TCP connection when the receiving unit 601 receives them.
  • The processing unit 602 is further configured to: if the receiving unit 601 receives the SYN+ACK response packet for the SYN packet, simulate, by using a TCP protocol stack, establishing a second TCP connection between the transparent cache system and the client, and simulate, by using the TCP protocol stack, establishing a third TCP connection between the transparent cache system and the service server; and, when the receiving unit 601 receives a data packet of the TCP stream transmitted over the first TCP connection, perform cache service processing on the data packet by using the second TCP connection and the third TCP connection.
  • The processing unit 602 is specifically configured to: simulate, in the TCP protocol stack and according to the SYN packet and the SYN+ACK response packet received by the receiving unit 601, the three-way handshake between the transparent cache system and the client, thereby establishing the second TCP connection between the transparent cache system and the client; and simulate, in the TCP protocol stack and according to the SYN packet and the SYN+ACK response packet received by the receiving unit 601, the three-way handshake between the transparent cache system and the service server, thereby establishing the third TCP connection between the transparent cache system and the service server.
  • The sending unit 603 is further configured to: before the processing unit 602 simulates establishing the second TCP connection between the transparent cache system and the client and the third TCP connection between the transparent cache system and the service server by using the TCP protocol stack, send the SYN+ACK response packet to the client by means of IP layer forwarding.
  • the receiving unit 601 is further configured to receive an ACK message sent by the client to the service server for the SYN+ACK response message;
  • The sending unit 603 is further configured to send, by means of IP layer forwarding, the ACK message received by the receiving unit 601 to the service server.
  • The processing unit 602 is configured to: simulate, in the TCP protocol stack and according to the SYN packet, the SYN+ACK response packet, and the ACK packet, the three-way handshake between the transparent cache system and the client, thereby establishing the second TCP connection between the transparent cache system and the client; and simulate, in the TCP protocol stack and according to the SYN packet, the SYN+ACK response packet, and the ACK packet, the three-way handshake between the transparent cache system and the service server, thereby establishing the third TCP connection between the transparent cache system and the service server.
  • FIG. 7 is a structural diagram of another transparent cache system according to an embodiment of the present invention.
  • the transparent cache system is configured to execute a traffic processing method for a transparent cache system provided by the foregoing embodiment of the present invention, where the system includes:
  • the memory 701 is configured to store program instructions.
  • the processor 702 is configured to perform the following operations according to the program instructions stored in the memory 701:
  • processor 702 is further configured to perform the following operations according to the program instructions stored in the memory 701:
  • simulating, by using the TCP protocol stack, establishing the second TCP connection between the transparent cache system and the client, and simulating, by using the TCP protocol stack, establishing the third TCP connection between the transparent cache system and the service server;
  • The operation in which the processor 702 simulates, by using the TCP protocol stack, establishing the second TCP connection between the transparent cache system and the client and the third TCP connection between the transparent cache system and the service server includes:
  • processor 702 is further configured to perform the following operations according to the program instructions stored in the memory 701:
  • the ACK message is sent to the service server through the communication interface 703 by means of IP layer forwarding.
  • The operation in which the processor 702 simulates, by using the TCP protocol stack, establishing the second TCP connection between the transparent cache system and the client and the third TCP connection between the transparent cache system and the service server includes:
  • Non-transitory media include, for example, a random access memory, a read-only memory, a flash memory, a hard disk, a solid-state disk, a magnetic tape, a floppy disk, an optical disc, and any combination thereof.

Abstract

The embodiments of the present invention relate to a traffic processing method and a transparent buffer system, the method comprising: when a transparent buffer system receives from a client a synchronize sequence numbers (SYN) message for establishing a first Transmission Control Protocol (TCP) connection between the client and a service server, performing Internet Protocol (IP) layer forwarding of the SYN message; and, if the transparent buffer system does not receive a SYN+acknowledgment (ACK) response message for the SYN message, performing IP layer forwarding of data messages of the TCP stream transmitted by means of the first TCP connection when the transparent buffer system receives them. As can be seen from the above, in the embodiments of the present invention, a client can access a web server normally even when routing is asymmetric.

Description

Traffic processing method and transparent cache system
This application claims priority to Chinese Patent Application No. 201610464005.9, filed with the Chinese Patent Office on June 23, 2016 and entitled "Traffic processing method and transparent cache system", which is incorporated herein by reference in its entirety.
Technical field
The present invention relates to the field of computers, and in particular, to a traffic processing method and a transparent cache system.
Background
With the rapid development of the Internet, network traffic has grown quickly, posing enormous challenges to network operators. Expanding the network infrastructure requires huge investment and struggles to keep pace with the growth of user traffic. About 80% of current network traffic comes from the Hypertext Transfer Protocol (HTTP); for example, video, file downloads, and web browsing all use HTTP. Because HTTP follows a client/server model, every user who accesses a resource has to fetch the same data from the origin site, and the more users access the same resource, the more duplicate traffic is generated. Providing a cache service through an Internet cache (Web Cache) system is an effective way to solve this problem.
Based on the idea of trading storage for bandwidth and localizing traffic, a Web Cache caches popular resources locally and serves clients directly from the local network, greatly reducing the traffic sent to the upper-level network. The basic workflow of a Web Cache is as follows: the Web Cache system performs real-time analysis and statistics on clients' uplink requests and selects the most popular resources for local caching; when the Web Cache system subsequently receives an uplink request from a client, it determines whether the target resource has already been cached locally; if so, the Web Cache system reads the resource data locally and returns it to the client, so that the client does not need to fetch the resource from the origin website.
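As an informal illustration of this workflow (not part of the patent disclosure), the following Python sketch shows one way a cache node could track request popularity and decide between serving a local copy and fetching from the origin; the class name, threshold, and fetch callback are hypothetical.

```python
from collections import Counter

class WebCacheSketch:
    """Minimal sketch of the hot-resource caching workflow described above."""

    def __init__(self, hot_threshold=3):
        self.hits = Counter()          # real-time per-URL request statistics
        self.store = {}                # locally cached responses, keyed by URL
        self.hot_threshold = hot_threshold

    def handle_request(self, url, fetch_from_origin):
        """Serve from the local cache when possible, otherwise fetch from the origin."""
        self.hits[url] += 1
        if url in self.store:
            return self.store[url]     # cache hit: no traffic to the upper-level network
        body = fetch_from_origin(url)  # cache miss: fetch from the origin site
        if self.hits[url] >= self.hot_threshold:
            self.store[url] = body     # only sufficiently popular resources are cached
        return body

# Example use with a stand-in origin fetcher:
cache = WebCacheSketch()
print(cache.handle_request("http://example.com/index.html", lambda u: b"<html>...</html>"))
```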
Transparent caching is a type of Web Cache. Upstream and downstream traffic is usually directed to the transparent cache system by configuring routing policies on a router. The transparent cache system interacts with the client or the web server using the web server's address or the client's address, so that neither the client nor the web server is aware of the transparent cache system.
However, asymmetric routing exists in parts of live networks; that is, the round-trip packets between a client and a server are transmitted over different routing paths. The Web Cache system is usually deployed on only one of the paths and can obtain only some of the packets of these Transmission Control Protocol (TCP) flows, for example only the uplink packets or only the downlink packets, so some packets cannot be processed normally at the service layer.
In the prior art, the Web Cache system detects asymmetrically routed traffic through trial and error and records the Internet Protocol (IP) addresses corresponding to that traffic in a local asymmetric-routing address record table. The Web Cache system forwards the traffic of these addresses directly at the IP layer and does not deliver it to the service layer for processing.
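For contrast with the per-flow mechanism introduced later, this prior-art, per-address behaviour can be sketched roughly as follows; the table and function names are illustrative only and do not come from the patent.

```python
# Prior-art sketch: addresses learned (by trial and error) to be asymmetrically
# routed are recorded, and all of their traffic is forwarded at the IP layer.
asymmetric_addresses = set()            # local asymmetric-routing address record table

def handle_packet_prior_art(packet, ip_forward, service_layer):
    """Bypass the service layer for any packet involving a recorded address."""
    if packet["src_ip"] in asymmetric_addresses or packet["dst_ip"] in asymmetric_addresses:
        ip_forward(packet)              # never delivered to the service layer
    else:
        service_layer(packet)           # may later be flagged asymmetric by trial and error
```

Because the decision is keyed on IP addresses rather than on individual TCP flows, a client whose address is newly flagged has its current connection cut off, which is the drawback noted next.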
However, with this approach, when a client's IP address is identified by the Web Cache system as an asymmetric-routing IP address, the Web Cache system interrupts the client's current connection, causing that access to fail.
Summary of the invention
The embodiments of the present invention provide a traffic processing method and a transparent cache system, which enable a client to access a web server normally when routing is asymmetric.
In one aspect, a traffic processing method for a transparent cache system is provided, the method comprising: when the transparent cache system receives, from a client, a Synchronize Sequence Numbers (SYN) message for establishing a first TCP connection between the client and a service server, performing IP layer forwarding on the SYN message; and, if the transparent cache system does not receive a SYN+Acknowledgement (ACK) response message for the SYN message, performing IP layer forwarding on data packets of the TCP stream transmitted over the first TCP connection when the transparent cache system receives them.
It can be seen from the above that, in the embodiments of the present invention, after receiving the SYN packet for establishing the first TCP connection between the client and the service server, the transparent cache system does not immediately establish a TCP connection with the client; it performs IP layer forwarding on the SYN packet instead, and, if no SYN+ACK response packet for the SYN packet is received, it also performs IP layer forwarding on data packets of the TCP stream transmitted over the first TCP connection, so that the client can access the service server normally even when routing is asymmetric.
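A minimal sketch of this per-flow behaviour follows, assuming a simple in-memory flow table keyed by the TCP four-tuple; the state names and handler functions are hypothetical and only illustrate the decision described above.

```python
# Per-flow state sketch: a flow is bypassed (IP layer forwarding) until the
# SYN+ACK answering its SYN has been seen by the transparent cache system.
flows = {}   # four-tuple -> {"state": ..., "syn": ..., "syn_ack": ...}

def on_syn(key, syn_packet, ip_forward):
    flows[key] = {"state": "SYN_FORWARDED", "syn": syn_packet}
    ip_forward(syn_packet)                      # no TCP connection is established yet

def on_syn_ack(key, syn_ack_packet):
    if key in flows:                            # both directions pass through the cache
        flows[key].update(state="SYMMETRIC", syn_ack=syn_ack_packet)

def on_data_packet(key, packet, ip_forward, cache_service):
    flow = flows.get(key)
    if flow is None or flow["state"] != "SYMMETRIC":
        ip_forward(packet)                      # no SYN+ACK seen: keep bypassing at the IP layer
    else:
        cache_service(packet)                   # symmetric route: handled by the cache service
```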
In one possible design, the method further includes: if the transparent cache system receives the SYN+ACK response message for the SYN message, simulating, through the TCP protocol stack, the establishment of a second TCP connection between the transparent cache system and the client and of a third TCP connection between the transparent cache system and the service server; and, when the transparent cache system receives a data packet of the TCP stream transmitted over the first TCP connection, performing cache service processing on the data packet using the second TCP connection and the third TCP connection.
It can be seen that, in the embodiments of the present invention, when the transparent cache system receives the SYN packet sent by the client to the service server, it does not immediately establish the second TCP connection with the client or the third TCP connection with the service server; instead, it performs IP layer forwarding on the SYN packet, and only when it receives the SYN+ACK response message sent by the service server to the client does it establish those two connections. Subsequently, when the transparent cache system receives data packets of the TCP stream transmitted over the first TCP connection, it performs cache service processing on them using the second and third TCP connections. In this manner, on the one hand, cache acceleration can be provided normally for symmetrically routed TCP streams; on the other hand, compared with the commonly used approach in which the transparent cache system establishes the second and third TCP connections immediately upon receiving the SYN message from the client, the embodiments of the present invention avoid the communication interruption that would occur if those connections were established while the TCP stream transmitted over the first TCP connection is in fact asymmetrically routed.
In one possible design, simulating, through the TCP protocol stack, the establishment of the second TCP connection between the transparent cache system and the client and of the third TCP connection between the transparent cache system and the service server includes: the transparent cache system simulating, in the TCP protocol stack and according to the SYN packet and the SYN+ACK response packet, the three-way handshake between the transparent cache system and the client, thereby establishing the second TCP connection; and the transparent cache system simulating, in the TCP protocol stack and according to the SYN packet and the SYN+ACK response packet, the three-way handshake between the transparent cache system and the service server, thereby establishing the third TCP connection.
It can be seen that, in the embodiments of the present invention, as soon as the transparent cache system receives the SYN+ACK response message sent by the service server to the client, it can simulate, through the TCP protocol stack and according to the SYN message and the SYN+ACK response message, the establishment of the second TCP connection with the client and of the third TCP connection with the service server, without having to wait for the ACK message sent by the client to the service server and then simulate both connections according to the SYN packet, the SYN+ACK response packet, and the ACK packet. The processing manner is flexible.
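To make the simulation concrete, the sketch below derives, from a saved SYN and SYN+ACK alone, the endpoint and sequence-number information a simulated three-way handshake would need; the field names are illustrative, and how the resulting connection entries would actually be injected into a kernel TCP connection table is outside the scope of this sketch.

```python
def derive_simulated_connections(syn, syn_ack):
    """Derive cache-client and cache-server connection parameters from SYN and SYN+ACK."""
    client = (syn["src_ip"], syn["src_port"])
    server = (syn["dst_ip"], syn["dst_port"])
    client_isn = syn["seq"]                    # client's initial sequence number
    server_isn = syn_ack["seq"]                # server's initial sequence number

    # The client's final ACK of the handshake necessarily acknowledges server_isn + 1,
    # which is why the cache does not have to wait for the ACK packet itself.
    client_next = (client_isn + 1) & 0xFFFFFFFF
    server_next = (server_isn + 1) & 0xFFFFFFFF

    cache_to_client = {"local": server, "peer": client,   # cache speaks with the server's address
                       "snd_nxt": server_next, "rcv_nxt": client_next}
    cache_to_server = {"local": client, "peer": server,   # cache speaks with the client's address
                       "snd_nxt": client_next, "rcv_nxt": server_next}
    return cache_to_client, cache_to_server
```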
In one possible design, before the second TCP connection between the transparent cache system and the client and the third TCP connection between the transparent cache system and the service server are simulated through the TCP protocol stack, the method further includes: the transparent cache system sending the SYN+ACK response packet to the client by means of IP layer forwarding; the transparent cache system receiving the ACK message that the client sends to the service server for the SYN+ACK response message; and the transparent cache system sending the ACK message to the service server by means of IP layer forwarding.
In one possible design, simulating, through the TCP protocol stack, the establishment of the second TCP connection between the transparent cache system and the client and of the third TCP connection between the transparent cache system and the service server includes: the transparent cache system simulating, in the TCP protocol stack and according to the SYN packet, the SYN+ACK response packet, and the ACK packet, the three-way handshake between the transparent cache system and the client, thereby establishing the second TCP connection; and the transparent cache system simulating, in the TCP protocol stack and according to the SYN packet, the SYN+ACK response packet, and the ACK packet, the three-way handshake between the transparent cache system and the service server, thereby establishing the third TCP connection.
It can be seen that, in the embodiments of the present invention, the transparent cache system may also wait until it receives the ACK message sent by the client to the service server, and then simulate, through the TCP protocol stack and according to the SYN message, the SYN+ACK response message, and the ACK message, the establishment of the second TCP connection with the client and of the third TCP connection with the service server. The processing manner is flexible.
In another aspect, an embodiment of the present invention provides a transparent cache system that can implement the functions performed by the transparent cache system in the foregoing method examples. The functions may be implemented by hardware, or by hardware executing corresponding software, where the hardware or software includes one or more modules corresponding to the above functions.
In one possible design, the transparent cache system includes a processor and a transceiver. The processor is configured to support the transparent cache system in performing the corresponding functions of the above methods, and the transceiver is used to support communication between the transparent cache system and other network elements. The transparent cache system may further include a memory coupled to the processor, which holds the program instructions and data necessary for the transparent cache system.
In still another aspect, an embodiment of the present invention provides a computer storage medium for storing the computer software instructions used by the above transparent cache system, including a program designed to perform the above aspects.
Compared with the prior art, the embodiments of the present invention identify whether each TCP stream is asymmetrically routed. If the TCP stream is symmetrically routed, the system provides the cache acceleration service normally; if the TCP stream is asymmetrically routed, the system bypasses it, without affecting the client's normal access to the service server. This mechanism adds no extra performance overhead on top of the normal service processing flow.
Brief description of the drawings
FIG. 1 is a schematic diagram of an application scenario of a traffic processing method for a transparent cache system according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a transparent cache system according to an embodiment of the present invention;
FIG. 3 is a flowchart of a traffic processing method for a transparent cache system according to an embodiment of the present invention;
FIG. 3a is a flowchart of another traffic processing method for a transparent cache system according to an embodiment of the present invention;
FIG. 4 is a signal flow diagram of a traffic processing method for a transparent cache system according to an embodiment of the present invention;
FIG. 5 is a signal flow diagram of another traffic processing method for a transparent cache system according to an embodiment of the present invention;
FIG. 6 is a structural diagram of another transparent cache system according to an embodiment of the present invention;
FIG. 7 is a structural diagram of another transparent cache system according to an embodiment of the present invention.
Detailed description
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly below with reference to the accompanying drawings.
First, the terms involved in the embodiments of the present invention are briefly described.
TCP is a connection-oriented, reliable, IP-based transport layer protocol in the Open System Interconnection (OSI) reference model, specified by the Internet Engineering Task Force (IETF) in RFC 793. The protocol number of TCP in IP packets is 6.
HTTP is the most widely used network protocol on the Internet. HTTP provides a way to publish and receive HyperText Markup Language (HTML) pages. RFC 2616 describes HTTP 1.1.
A Web Cache is a device deployed between a client and a service server, where the service server may also be called a web server. The Web Cache monitors the client's uplink requests and saves the service server's responses locally, including HTML web pages, images, and file downloads. When a subsequent request for the same resource is received, the saved copy of the response is returned to the client instead of forwarding the request to the original web server.
A transparent cache system is a kind of Web Cache that uses the web server's address or the client's address to interact with the client or the web server, so that neither the client nor the web server is aware of the transparent cache system.
Symmetric routing is a routing mode in which round-trip packets are transmitted over the same routing path.
Asymmetric routing is a routing mode in which round-trip packets are transmitted over different routing paths.
图1为本发明实施例提供的用于透明缓存系统的流量处理方法的应用场景示意图,参照图1,该场景中主要涉及客户端(Client)101、业务服务器(Server)102和透明缓存系统(Cache)103,其中,透明缓存系统103的位置处于客户端101和业务服务器102之间,客户端101与业务服务器102之间的TCP流的传输路径可能为对称路由也可能为非对称路由,图1中仅示出非对称路由的情况,往返数据包通过不同的路由路径进行传输。通常透明缓存系统部署在其中一条路径上、只能获得这些TCP流的部分报文(上行报文或者下行报文),无法正常的进行业务层的处理。对于非对称路由的流量,透明缓存系统需要能够进行识别和放行处理。FIG. 1 is a schematic diagram of an application scenario of a method for processing a traffic of a transparent cache system according to an embodiment of the present invention. Referring to FIG. 1 , the scenario mainly involves a client 101, a server 102, and a transparent cache system. Cache) 103, wherein the location of the transparent cache system 103 is between the client 101 and the service server 102, and the transmission path of the TCP stream between the client 101 and the service server 102 may be a symmetric route or an asymmetric route. Only the case of asymmetric routing is shown in 1, and round-trip packets are transmitted through different routing paths. Generally, the transparent cache system is deployed on one of the paths and can only obtain some packets (uplinks or downlinks) of these TCP flows. The service layer cannot be processed normally. For traffic with asymmetric routes, the transparent cache system needs to be able to be identified and released.
本发明实施例提供了一种用于透明缓存系统的流量处理方法,当透明缓存系统103接收到用于建立客户端101与业务服务器102之间的第一TCP连接的SYN报文时,先不建立客户端101与透明缓存系统103之间的TCP连接以及业务服务器102与透明缓存系统103之间的TCP连接,而是先根据是否接收到针对该SYN报文的SYN+ACK响应报文确定相应的TCP流是否属于对称路由,若属于非对称路由,对数据报文进行IP层转发处理,若属于对称路由,再建立客户端101与透明缓存系统103之间的TCP连接以及业务服务器102与透明缓存系统103之间的TCP连接,对数据报文进行缓存业务处理,从而实现对流量的精确控制。The embodiment of the present invention provides a traffic processing method for a transparent cache system. When the transparent cache system 103 receives a SYN packet for establishing a first TCP connection between the client 101 and the service server 102, Establishing a TCP connection between the client 101 and the transparent cache system 103 and a TCP connection between the service server 102 and the transparent cache system 103, but first determining whether the SYN+ACK response message for the SYN message is received according to whether a TCP connection is received between the service server 102 and the transparent cache system 103. Whether the TCP stream belongs to a symmetric route, if it belongs to an asymmetric route, performs IP layer forwarding processing on the data packet, and if it belongs to a symmetric route, establishes a TCP connection between the client 101 and the transparent cache system 103, and the service server 102 and the transparent The TCP connection between the cache systems 103 performs a cache service processing on the data packets, thereby achieving precise control of traffic.
图2为本发明实施例提供的一种透明缓存系统的结构示意图,该透明缓存系统具体可以称为Web Cache Server,Web Cache Server包括Linux内核(Kernel)、套接字(Socket)管理模块和业务处理模块,其中,Linux Kernel包括TCP协议栈、IP协议栈和Web Cache Kernel模块(module),Web Cache Kernel module具体包括TCP流表管理子模块。Web Cache Server收到客户端发送给Web服务器的SYN报文时,Web Cache Server先不建立与客户端的TCP连接,而是通过IP协议栈进行IP层转发,IP层转发的报文不会进入TCP协议栈;等到收到Web服务器的SYN+ACK响应时,Web Cache Server判断该TCP流的上行和下行报文都经过了Web Cache Server、属于对称路由,之后Web Cache Server再用保存下来的SYN、 SYN+ACK等报文在TCP协议栈中恢复与客户端或Web服务器的TCP连接,使得Web Cache Server能够与客户端或Web服务器分别进行通信。因此,对于非对称路由TCP流,Web Cache Server能够进行放行、不影响客户端到Web服务器的访问;对于对称路由TCP流,Web Cache能够通过TCP协议栈由业务处理模块正常提供缓存加速服务,例如,HTTP协议解析、目标资源唯一标识计算、资源访问热度累计、目标资源索引查询等。2 is a schematic structural diagram of a transparent cache system according to an embodiment of the present invention. The transparent cache system may be specifically referred to as a Web Cache Server, and the Web Cache Server includes a Linux kernel (Kernel), a socket (Socket) management module, and a service. The processing module, wherein the Linux Kernel includes a TCP protocol stack, an IP protocol stack, and a Web Cache Kernel module, and the Web Cache Kernel module specifically includes a TCP flow table management submodule. When the Web Cache Server receives the SYN packet sent by the client to the Web server, the Web Cache Server does not establish a TCP connection with the client. Instead, it forwards the IP layer through the IP protocol stack. The packets forwarded by the IP layer do not enter the TCP. The protocol stack; when the SYN+ACK response of the web server is received, the Web Cache Server determines that the upstream and downstream packets of the TCP stream pass through the Web Cache Server and belong to the symmetric route, and then the Web Cache Server reuses the saved SYN. The SYN+ACK packet recovers the TCP connection with the client or the web server in the TCP protocol stack, so that the Web Cache Server can communicate with the client or the web server separately. Therefore, for asymmetrically routed TCP flows, the Web Cache Server can be released without affecting client-to-web server access; for symmetrically routed TCP flows, Web Cache can provide cache acceleration services by the service processing module through the TCP protocol stack, for example. , HTTP protocol parsing, target resource unique identifier calculation, resource access heat accumulation, target resource index query, and the like.
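For illustration only, the following sketch shows what one entry managed by the TCP flow table management submodule might hold. The field names, the two-state model, and the use of Python are assumptions made for readability; the embodiment does not prescribe a particular data layout.

```python
from dataclasses import dataclass
from enum import Enum


class RouteState(Enum):
    """Route classification of one client<->web-server TCP flow (illustrative)."""
    UNKNOWN = 0     # no SYN+ACK seen yet: treated as asymmetric, bypassed at the IP layer
    SYMMETRIC = 1   # SYN+ACK seen on this path: eligible for cache acceleration


@dataclass
class FlowEntry:
    """One entry of the (assumed) TCP flow table, keyed by the connection 4-tuple."""
    client_ip: str
    client_port: int
    server_ip: str
    server_port: int
    state: RouteState = RouteState.UNKNOWN
    saved_syn: bytes = b""      # raw SYN kept so the handshake can be replayed later
    saved_synack: bytes = b""   # raw SYN+ACK kept once (and if) it arrives
```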
FIG. 3 is a flowchart of a traffic processing method for a transparent cache system according to an embodiment of the present invention. The method includes the following steps.
Step 301: When the transparent cache system receives, from a client, a SYN packet for establishing a first TCP connection between the client and a service server, the transparent cache system performs IP layer forwarding processing on the SYN packet.
Step 302: If the transparent cache system does not receive a SYN+ACK response packet for the SYN packet, then when the transparent cache system receives a data packet of the TCP flow transmitted over the first TCP connection, the transparent cache system performs IP layer forwarding processing on the data packet.
As can be seen from the above, in this embodiment of the present invention, after receiving the SYN packet for establishing the first TCP connection between the client and the service server, the transparent cache system does not immediately establish a TCP connection with the client, but performs IP layer forwarding processing on the SYN packet. In addition, if no SYN+ACK response packet for the SYN packet is received, then when a data packet of the TCP flow transmitted over the first TCP connection is received, IP layer forwarding processing is performed on the data packet, so that the client can access the service server normally in the case of asymmetric routing.
FIG. 3a is a flowchart of another traffic processing method for a transparent cache system according to an embodiment of the present invention. In addition to the foregoing step 301 and step 302, the method further includes the following steps.
Step 303: If the transparent cache system receives a SYN+ACK response packet for the SYN packet, the transparent cache system establishes, by simulation through the TCP protocol stack, a second TCP connection between the transparent cache system and the client, and establishes, by simulation through the TCP protocol stack, a third TCP connection between the transparent cache system and the service server.
In an example, the transparent cache system may simulate, in the TCP protocol stack according to the SYN packet and the SYN+ACK response packet, the three-way handshake between the transparent cache system and the client to establish the second TCP connection between the transparent cache system and the client; and the transparent cache system simulates, in the TCP protocol stack according to the SYN packet and the SYN+ACK response packet, the three-way handshake between the transparent cache system and the service server to establish the third TCP connection between the transparent cache system and the service server.
In another example, the transparent cache system sends the SYN+ACK response packet to the client by means of IP layer forwarding; the transparent cache system receives an ACK packet for the SYN+ACK response packet that the client sends to the service server; and the transparent cache system sends the ACK packet to the service server by means of IP layer forwarding. The transparent cache system then simulates, in the TCP protocol stack according to the SYN packet, the SYN+ACK response packet, and the ACK packet, the three-way handshake between the transparent cache system and the client to establish the second TCP connection between the transparent cache system and the client; and simulates, in the TCP protocol stack according to the SYN packet, the SYN+ACK response packet, and the ACK packet, the three-way handshake between the transparent cache system and the service server to establish the third TCP connection between the transparent cache system and the service server.
Step 304: When the transparent cache system receives a data packet of the TCP flow transmitted over the first TCP connection, the transparent cache system performs cache service processing on the data packet by using the second TCP connection and the third TCP connection.
As can be seen from the above, in this embodiment of the present invention, when the transparent cache system receives the SYN packet sent by the client to the service server, it does not immediately establish the second TCP connection between the transparent cache system and the client or the third TCP connection between the transparent cache system and the service server, but performs IP layer forwarding processing on the SYN packet. Only after receiving the SYN+ACK response packet for the SYN packet, sent by the service server to the client, does it establish the second TCP connection between the transparent cache system and the client and the third TCP connection between the transparent cache system and the service server. Subsequently, when the transparent cache system receives a data packet of the TCP flow transmitted over the first TCP connection, it performs cache service processing on the data packet by using the second TCP connection and the third TCP connection. In this way, on the one hand, the cache acceleration service can be provided normally for symmetrically routed TCP flows; on the other hand, compared with the commonly used approach in which the transparent cache system immediately establishes the second TCP connection with the client and the third TCP connection with the service server upon receiving the SYN packet sent by the client to the service server, this embodiment of the present invention avoids the communication interruption that would occur when the TCP flow transmitted over the first TCP connection is asymmetrically routed.
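A condensed, user-space sketch of this per-packet decision is given below. It assumes the FlowEntry and RouteState definitions from the earlier sketch are in scope; ip_forward, restore_connections, and cache_service are placeholder stubs standing in for the real forwarding, handshake-replay, and cache-acceleration paths, not actual APIs.

```python
def ip_forward(pkt: bytes) -> None:
    """Placeholder for IP-layer forwarding (the bypass path); no TCP state is touched."""


def restore_connections(entry: "FlowEntry") -> None:
    """Placeholder for step 303: replay both three-way handshakes in the local TCP stack."""


def cache_service(entry: "FlowEntry", pkt: bytes) -> None:
    """Placeholder for step 304: cache acceleration over the second and third TCP connections."""


def handle_packet(flow_table: dict, key: tuple, kind: str, pkt: bytes = b"") -> None:
    """Per-packet dispatch; `kind` collapses real TCP parsing into
    'SYN', 'SYN+ACK', 'ACK', or 'DATA' for readability."""
    entry = flow_table.setdefault(key, FlowEntry(*key))
    if kind == "SYN":
        entry.saved_syn = pkt
        ip_forward(pkt)                       # step 301: forward only, no local TCP connection yet
    elif kind == "SYN+ACK":
        entry.saved_synack = pkt
        entry.state = RouteState.SYMMETRIC    # both directions traverse the cache
        restore_connections(entry)            # step 303: build the second and third TCP connections
        ip_forward(pkt)                       # the response itself still has to reach the client
    elif entry.state is RouteState.SYMMETRIC:
        cache_service(entry, pkt)             # step 304: symmetric route, normal cache processing
    else:
        ip_forward(pkt)                       # step 302: asymmetric route, bypass at the IP layer
```

Under this arrangement, classifying a flow is a side effect of handling the packets that arrive anyway, which is consistent with the earlier statement that the mechanism adds no extra performance overhead to the normal service processing flow.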
Live network traffic contains both symmetrically routed TCP flows and asymmetrically routed TCP flows. The following two specific embodiments respectively describe how the traffic processing method for a transparent cache system handles a symmetrically routed TCP flow and an asymmetrically routed TCP flow.
FIG. 4 is a signal flow diagram of a traffic processing method for a transparent cache system according to an embodiment of the present invention. This embodiment addresses the processing of a symmetrically routed TCP flow: the Web Cache establishes the TCP connections with the client and the web server only after receiving the web server's response. The method includes the following steps.
Step 401: The client sends a SYN packet to the web server, and the Web Cache receives the packet.
Step 402: The Web Cache performs IP layer forwarding and forwards the client's SYN packet to the web server. The Web Cache generates a corresponding flow table entry in the TCP flow table management module and saves the SYN packet.
Step 403: The Web Cache receives the SYN+ACK response packet sent by the web server to the client, and determines that both the uplink and downlink packets of the TCP flow pass through the Web Cache, so the flow is symmetrically routed.
Step 404: The Web Cache updates the flow table state in the TCP flow table management module, marks the current TCP flow as symmetrically routed, and saves the SYN+ACK packet.
Step 405: The Web Cache sends the SYN+ACK response to the client.
Step 406: The Web Cache receives the ACK packet sent by the client to the web server.
Step 407: Using the information of the SYN packet, the SYN+ACK packet, and the ACK packet from steps 402, 404, and 406 (source/destination IP addresses and port numbers, TCP sequence numbers), the Web Cache simulates the three-way handshake in the TCP protocol stack and restores the TCP connection between the Web Cache and the client.
Here, "simulates" is relative to the normal TCP connection establishment procedure. Because the TCP connection is actually established between the client and the web server, the Cache converts it, based on the information of this TCP connection, into two TCP connections in its local TCP protocol stack. Unlike the usual way of creating a socket by calling the system API, this technique uses the packet information to create the socket structure and place it directly into the kernel TCP connection table.
Step 408: Using the information of the SYN packet, the SYN+ACK packet, and the ACK packet from steps 402, 404, and 406 (source/destination IP addresses and ports, TCP sequence numbers), the Web Cache simulates the three-way handshake in the TCP protocol stack and restores the TCP connection between the Web Cache and the web server.
Here, unlike the usual way of creating a socket by calling the system API, this technique uses the packet information to create the socket structure and place it directly into the kernel TCP connection table.
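The socket creation of steps 407 and 408 is Linux-kernel-internal; the following user-space sketch only illustrates which fields of the saved SYN and SYN+ACK are enough to describe both restored connections. The dictionary layout, field names, and parameter representation are assumptions for illustration, not the kernel's actual structures.

```python
def rebuild_connections(syn: dict, synack: dict) -> tuple[dict, dict]:
    """Illustrative stand-in for steps 407/408: derive the two connection records
    from the saved handshake packets instead of calling connect()/accept().
    `syn` and `synack` are dicts of the fields the cache saved from the packets
    (an assumed representation)."""
    seq_mask = 2 ** 32
    # Connection the client believes it has with the web server
    # (the cache answers using the web server's address and port):
    client_side = {
        "local":   (syn["dst_ip"], syn["dst_port"]),
        "remote":  (syn["src_ip"], syn["src_port"]),
        "rcv_nxt": (syn["seq"] + 1) % seq_mask,      # next byte expected from the client
        "snd_nxt": (synack["seq"] + 1) % seq_mask,   # next byte to send toward the client
    }
    # Connection the web server believes it has with the client
    # (the cache speaks using the client's address and port):
    server_side = {
        "local":   (syn["src_ip"], syn["src_port"]),
        "remote":  (syn["dst_ip"], syn["dst_port"]),
        "snd_nxt": (syn["seq"] + 1) % seq_mask,      # next byte to send toward the server
        "rcv_nxt": (synack["seq"] + 1) % seq_mask,   # next byte expected from the server
    }
    return client_side, server_side
```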
Step 409: The Web Cache sends an ACK packet to the web server.
At this point, the Web Cache has successfully established TCP connections with the client and with the web server; subsequently received packets are no longer forwarded at the IP layer but are delivered to the TCP protocol stack for processing.
In this embodiment of the present invention, step 404 is optional. An alternative is as follows: after the Web Cache receives the SYN+ACK response packet from the web server, it simulates the operating system protocol stack to restore the TCP connections with the client and the web server. That is, starting from step 403 above, steps 407 and 408 are performed in sequence, and then steps 405, 406, and 409 are performed in sequence.
Because the sequence numbers of TCP packets are consecutive, after receiving the SYN packet and the SYN+ACK response packet, the Web Cache can derive the sequence number of the ACK packet from the SYN+ACK packet. In addition, the source/destination IP addresses and port numbers carried in the ACK packet are already present in the SYN packet and the SYN+ACK packet. Therefore, even before the ACK packet is obtained, the three-way handshakes can be simulated in the TCP protocol stack to restore the TCP connection between the Web Cache and the client and the TCP connection between the Web Cache and the web server.
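The derivation mentioned here is simple modular arithmetic over the handshake sequence numbers; the small sketch below spells it out (illustrative only, with invented example values).

```python
def client_ack_numbers(syn_seq: int, synack_seq: int) -> tuple[int, int]:
    """Fields of the client's final handshake ACK, derived without seeing the
    ACK itself: its sequence number follows the client's SYN, and it
    acknowledges the server's SYN+ACK (all arithmetic is modulo 2**32)."""
    seq = (syn_seq + 1) % 2 ** 32        # the SYN consumed one sequence number
    ack = (synack_seq + 1) % 2 ** 32     # acknowledges the server's SYN
    return seq, ack


# Example: the SYN carries seq=1000 and the SYN+ACK carries seq=5000
assert client_ack_numbers(1000, 5000) == (1001, 5001)
```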
The execution order of step 407 and step 408 is not specifically limited. For example, step 407 may be performed before step 408, step 408 may be performed before step 407, or step 407 and step 408 may be performed at the same time.
An example of the processing flow for a symmetrically routed TCP flow is given below:
The Web Cache (IP: 20.1.1.2) receives a SYN packet sent by the client (IP: 10.1.1.10) to the web server (IP: 30.1.1.2).
The Web Cache (IP: 20.1.1.2) performs IP layer forwarding and forwards the SYN packet of the client (IP: 10.1.1.10) to the web server (IP: 30.1.1.2). The Web Cache generates a corresponding flow table entry in the TCP flow table management module and saves the SYN packet.
The Web Cache (IP: 20.1.1.2) receives the SYN+ACK response packet sent by the web server (IP: 30.1.1.2) to the client (IP: 10.1.1.10), and determines that both the uplink and downlink packets of the TCP flow pass through the Web Cache, so the flow is symmetrically routed.
The Web Cache (IP: 20.1.1.2) updates the flow table state in the TCP flow table management module, marks the current TCP flow as symmetrically routed, and saves the SYN+ACK packet.
The Web Cache (IP: 20.1.1.2) sends the SYN+ACK response to the client (IP: 10.1.1.10).
The Web Cache (IP: 20.1.1.2) receives the ACK packet sent by the client (IP: 10.1.1.10) to the web server (IP: 30.1.1.2).
Using the SYN packet, SYN+ACK packet, and ACK packet information, the Web Cache (IP: 20.1.1.2) simulates the three-way handshake in the TCP protocol stack and restores the TCP connection between the Web Cache (IP: 20.1.1.2) and the client (IP: 10.1.1.10). Because the Web Cache performs IP address translation, the TCP connection peer seen by the client (IP: 10.1.1.10) is the web server (IP: 30.1.1.2).
Using the SYN packet, SYN+ACK packet, and ACK packet information, the Web Cache (IP: 20.1.1.2) simulates the three-way handshake in the TCP protocol stack and restores the TCP connection between the Web Cache (IP: 20.1.1.2) and the web server (IP: 30.1.1.2). Because the Web Cache performs IP address translation, the TCP connection peer seen by the web server (IP: 30.1.1.2) is the client (IP: 10.1.1.10).
The Web Cache (IP: 20.1.1.2) sends an ACK packet to the web server (IP: 30.1.1.2).
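For the example addresses above, the two restored connections can be pictured as follows. The port numbers (80 and 51320) are invented for illustration, since the example specifies only IP addresses.

```python
# What the client (10.1.1.10) sees: a connection to the web server (30.1.1.2),
# even though the Web Cache (20.1.1.2) is the endpoint actually answering.
client_view = {"local": ("10.1.1.10", 51320), "remote": ("30.1.1.2", 80)}

# What the web server (30.1.1.2) sees: a connection from the client (10.1.1.10),
# even though the Web Cache is the endpoint actually speaking.
server_view = {"local": ("30.1.1.2", 80), "remote": ("10.1.1.10", 51320)}

# The Web Cache's own address (20.1.1.2) never appears in either view.
```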
FIG. 5 is a signal flow diagram of another traffic processing method for a transparent cache system according to an embodiment of the present invention. This embodiment addresses the processing of an asymmetrically routed TCP flow: the Web Cache does not receive the web server's response and bypasses the TCP flow. The method includes the following steps.
Step 501: The client sends a SYN packet to the web server, and the Web Cache receives the packet.
Step 502: The Web Cache performs IP layer forwarding and forwards the client's SYN packet to the web server. The Web Cache generates a corresponding flow table entry in the TCP flow table management module and saves the SYN packet.
Step 503: The SYN+ACK response packet of the web server is returned to the client through another routing path (not through the Web Cache).
Step 504: The client sends an ACK packet to the web server, and the Web Cache receives the packet.
In this embodiment of the present invention, a TCP connection is regarded as being in the asymmetric routing state by default, and the Web Cache forwards every packet of the TCP connection that it receives. Because the Web Cache has not received the web server's SYN+ACK response packet, the current TCP flow is still in the IP layer forwarding state on the Web Cache (that is, the Web Cache considers the TCP flow to be asymmetrically routed); the Web Cache therefore performs IP layer forwarding and forwards the client's ACK packet to the web server.
Throughout the whole process, the Web Cache performs IP layer forwarding on the connection establishment packets, that is, the TCP handshake packets, between the client and the web server; the Web Cache itself does not establish a TCP connection with the client or the web server. For subsequent HTTP interaction packets between the client and the web server, the Web Cache likewise performs IP layer forwarding.
An example of the processing flow for an asymmetrically routed TCP flow is given below:
The Web Cache (IP: 20.1.1.2) receives a SYN packet sent by the client (IP: 10.1.1.20) to the web server (IP: 30.1.1.2).
The Web Cache (IP: 20.1.1.2) performs IP layer forwarding and forwards the SYN packet of the client (IP: 10.1.1.20) to the web server (IP: 30.1.1.2).
The SYN+ACK response of the web server (IP: 30.1.1.2) is returned to the client (IP: 10.1.1.20) through another routing path.
The Web Cache (IP: 20.1.1.2) receives the ACK packet sent by the client (IP: 10.1.1.20) to the web server (IP: 30.1.1.2).
The Web Cache (IP: 20.1.1.2) performs IP layer forwarding and forwards the ACK packet of the client (IP: 10.1.1.20) to the web server (IP: 30.1.1.2).
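Replaying this asymmetric example through the dispatch sketch given earlier shows that the flow never leaves the bypass path. The helper and type names come from those earlier sketches, and the client port is invented for illustration.

```python
flows: dict = {}
key = ("10.1.1.20", 50001, "30.1.1.2", 80)   # client port 50001 is assumed for illustration

handle_packet(flows, key, "SYN")    # forwarded at the IP layer; flow entry created
# The SYN+ACK travels back over another routing path, so it is never seen here.
handle_packet(flows, key, "ACK")    # still RouteState.UNKNOWN, so it is simply forwarded
handle_packet(flows, key, "DATA")   # subsequent HTTP packets are likewise bypassed

assert flows[key].state is RouteState.UNKNOWN   # the flow is never promoted to SYMMETRIC
```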
FIG. 6 is a structural diagram of another transparent cache system according to an embodiment of the present invention. The transparent cache system is configured to perform the traffic processing method for a transparent cache system provided in the foregoing embodiments of the present invention. The system includes a receiving unit 601, a processing unit 602, and a sending unit 603.
The receiving unit 601 is configured to receive a packet from a client or a service server, where the packet is a SYN packet or a data packet.
The processing unit 602 is configured to: when the receiving unit 601 receives, from the client, a SYN packet for establishing a first Transmission Control Protocol (TCP) connection between the client and the service server, instruct the sending unit 603 to perform IP layer forwarding processing on the SYN packet; and, if the receiving unit 601 does not receive a SYN+ACK response packet for the SYN packet, then when the receiving unit 601 receives a data packet of the TCP flow transmitted over the first TCP connection, instruct the sending unit 603 to perform IP layer forwarding processing on the data packet.
Optionally, the processing unit 602 is further configured to: if the receiving unit 601 receives a SYN+ACK response packet for the SYN packet, establish, by simulation through the TCP protocol stack, a second TCP connection between the transparent cache system and the client, and establish, by simulation through the TCP protocol stack, a third TCP connection between the transparent cache system and the service server; and, when the receiving unit 601 receives a data packet of the TCP flow transmitted over the first TCP connection, perform cache service processing on the data packet by using the second TCP connection and the third TCP connection.
Optionally, the processing unit 602 is specifically configured to: simulate, in the TCP protocol stack according to the SYN packet and the SYN+ACK response packet received by the receiving unit 601, the three-way handshake between the transparent cache system and the client to establish the second TCP connection between the transparent cache system and the client; and simulate, in the TCP protocol stack according to the SYN packet and the SYN+ACK response packet received by the receiving unit 601, the three-way handshake between the transparent cache system and the service server to establish the third TCP connection between the transparent cache system and the service server.
Optionally, the sending unit 603 is further configured to: before the processing unit 602 establishes, by simulation through the TCP protocol stack, the second TCP connection between the transparent cache system and the client and establishes, by simulation through the TCP protocol stack, the third TCP connection between the transparent cache system and the service server, send the SYN+ACK response packet to the client by means of IP layer forwarding;
the receiving unit 601 is further configured to receive an ACK packet for the SYN+ACK response packet that the client sends to the service server; and
the sending unit 603 is further configured to send the ACK packet received by the receiving unit 601 to the service server by means of IP layer forwarding.
Optionally, the processing unit 602 is specifically configured to: simulate, in the TCP protocol stack according to the SYN packet, the SYN+ACK response packet, and the ACK packet, the three-way handshake between the transparent cache system and the client to establish the second TCP connection between the transparent cache system and the client; and simulate, in the TCP protocol stack according to the SYN packet, the SYN+ACK response packet, and the ACK packet, the three-way handshake between the transparent cache system and the service server to establish the third TCP connection between the transparent cache system and the service server.
FIG. 7 is a structural diagram of another transparent cache system according to an embodiment of the present invention. The transparent cache system is configured to perform the traffic processing method for a transparent cache system provided in the foregoing embodiments of the present invention. The system includes:
a memory 701, a processor 702, and a communication interface 703;
the memory 701 is configured to store program instructions; and
the processor 702 is configured to perform the following operations according to the program instructions stored in the memory 701:
when a SYN packet for establishing a first TCP connection between a client and a service server is received from the client through the communication interface 703, performing IP layer forwarding processing on the SYN packet; and
if no SYN+ACK response packet for the SYN packet is received through the communication interface 703, then when a data packet of the TCP flow transmitted over the first TCP connection is received through the communication interface 703, performing IP layer forwarding processing on the data packet.
Optionally, the processor 702 is further configured to perform the following operations according to the program instructions stored in the memory 701:
if a SYN+ACK response packet for the SYN packet is received through the communication interface 703, establishing, by simulation through the TCP protocol stack, a second TCP connection between the transparent cache system and the client, and establishing, by simulation through the TCP protocol stack, a third TCP connection between the transparent cache system and the service server; and
when a data packet of the TCP flow transmitted over the first TCP connection is received through the communication interface 703, performing cache service processing on the data packet by using the second TCP connection and the third TCP connection.
Optionally, the operations performed by the processor 702 of establishing, by simulation through the TCP protocol stack, the second TCP connection between the transparent cache system and the client and establishing, by simulation through the TCP protocol stack, the third TCP connection between the transparent cache system and the service server include:
simulating, in the TCP protocol stack according to the SYN packet and the SYN+ACK response packet, the three-way handshake between the transparent cache system and the client to establish the second TCP connection between the transparent cache system and the client; and
simulating, in the TCP protocol stack according to the SYN packet and the SYN+ACK response packet, the three-way handshake between the transparent cache system and the service server to establish the third TCP connection between the transparent cache system and the service server.
Optionally, the processor 702 is further configured to perform the following operations according to the program instructions stored in the memory 701:
before the establishing, by simulation through the TCP protocol stack, of the second TCP connection between the transparent cache system and the client and the establishing, by simulation through the TCP protocol stack, of the third TCP connection between the transparent cache system and the service server, sending the SYN+ACK response packet to the client through the communication interface 703 by means of IP layer forwarding;
receiving, through the communication interface 703, an ACK packet for the SYN+ACK response packet that the client sends to the service server; and
sending the ACK packet to the service server through the communication interface 703 by means of IP layer forwarding.
Optionally, the operations performed by the processor 702 of establishing, by simulation through the TCP protocol stack, the second TCP connection between the transparent cache system and the client and establishing, by simulation through the TCP protocol stack, the third TCP connection between the transparent cache system and the service server include:
simulating, in the TCP protocol stack according to the SYN packet, the SYN+ACK response packet, and the ACK packet, the three-way handshake between the transparent cache system and the client to establish the second TCP connection between the transparent cache system and the client; and
simulating, in the TCP protocol stack according to the SYN packet, the SYN+ACK response packet, and the ACK packet, the three-way handshake between the transparent cache system and the service server to establish the third TCP connection between the transparent cache system and the service server.
A person skilled in the art should further appreciate that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of the examples have been described above generally in terms of their functions. Whether these functions are performed by hardware or software depends on the particular application and the design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of the present invention.
A person of ordinary skill in the art may understand that all or some of the steps of the methods in the foregoing embodiments may be completed by a processor instructed by a program. The program may be stored in a computer-readable storage medium, and the storage medium is a non-transitory medium, for example, a random access memory, a read-only memory, a flash memory, a hard disk, a solid state disk, a magnetic tape, a floppy disk, an optical disc, or any combination thereof.
The foregoing descriptions are merely preferred specific implementations of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

  1. A traffic processing method for a transparent cache system, wherein the method comprises:
    when the transparent cache system receives, from a client, a synchronization sequence number (SYN) packet for establishing a first Transmission Control Protocol (TCP) connection between the client and a service server, performing network protocol (IP) layer forwarding processing on the SYN packet; and
    if the transparent cache system does not receive a SYN+acknowledgment (ACK) response packet for the SYN packet, then when the transparent cache system receives a data packet of a TCP flow transmitted over the first TCP connection, performing IP layer forwarding processing on the data packet.
  2. The method according to claim 1, wherein the method further comprises:
    if the transparent cache system receives a SYN+ACK response packet for the SYN packet, establishing, by simulation through a TCP protocol stack, a second TCP connection between the transparent cache system and the client, and establishing, by simulation through the TCP protocol stack, a third TCP connection between the transparent cache system and the service server; and
    when the transparent cache system receives a data packet of the TCP flow transmitted over the first TCP connection, performing cache service processing on the data packet by using the second TCP connection and the third TCP connection.
  3. The method according to claim 2, wherein the establishing, by the transparent cache system by simulation through the TCP protocol stack, of the second TCP connection between the transparent cache system and the client, and the establishing, by simulation through the TCP protocol stack, of the third TCP connection between the transparent cache system and the service server comprise:
    simulating, by the transparent cache system in the TCP protocol stack according to the SYN packet and the SYN+ACK response packet, the three-way handshake between the transparent cache system and the client to establish the second TCP connection between the transparent cache system and the client; and
    simulating, by the transparent cache system in the TCP protocol stack according to the SYN packet and the SYN+ACK response packet, the three-way handshake between the transparent cache system and the service server to establish the third TCP connection between the transparent cache system and the service server.
  4. The method according to claim 2, wherein before the establishing, by simulation through the TCP protocol stack, of the second TCP connection between the transparent cache system and the client, and the establishing, by simulation through the TCP protocol stack, of the third TCP connection between the transparent cache system and the service server, the method further comprises:
    sending, by the transparent cache system, the SYN+ACK response packet to the client by means of IP layer forwarding;
    receiving, by the transparent cache system, an ACK packet for the SYN+ACK response packet that the client sends to the service server; and
    sending, by the transparent cache system, the ACK packet to the service server by means of IP layer forwarding.
  5. The method according to claim 4, wherein the establishing, by the transparent cache system by simulation through the TCP protocol stack, of the second TCP connection between the transparent cache system and the client, and the establishing, by simulation through the TCP protocol stack, of the third TCP connection between the transparent cache system and the service server comprise:
    simulating, by the transparent cache system in the TCP protocol stack according to the SYN packet, the SYN+ACK response packet, and the ACK packet, the three-way handshake between the transparent cache system and the client to establish the second TCP connection between the transparent cache system and the client; and
    simulating, by the transparent cache system in the TCP protocol stack according to the SYN packet, the SYN+ACK response packet, and the ACK packet, the three-way handshake between the transparent cache system and the service server to establish the third TCP connection between the transparent cache system and the service server.
  6. A transparent cache system, wherein the system comprises a receiving unit, a processing unit, and a sending unit;
    the receiving unit is configured to receive a packet from a client or a service server, wherein the packet is a synchronization sequence number (SYN) packet or a data packet; and
    the processing unit is configured to: when the receiving unit receives, from the client, a SYN packet for establishing a first Transmission Control Protocol (TCP) connection between the client and the service server, instruct the sending unit to perform network protocol (IP) layer forwarding processing on the SYN packet; and, if the receiving unit does not receive a SYN+acknowledgment (ACK) response packet for the SYN packet, then when the receiving unit receives a data packet of a TCP flow transmitted over the first TCP connection, instruct the sending unit to perform IP layer forwarding processing on the data packet.
  7. The system according to claim 6, wherein:
    the processing unit is further configured to: if the receiving unit receives a SYN+ACK response packet for the SYN packet, establish, by simulation through a TCP protocol stack, a second TCP connection between the transparent cache system and the client, and establish, by simulation through the TCP protocol stack, a third TCP connection between the transparent cache system and the service server; and, when the receiving unit receives a data packet of the TCP flow transmitted over the first TCP connection, perform cache service processing on the data packet by using the second TCP connection and the third TCP connection.
  8. The system according to claim 7, wherein the processing unit is specifically configured to: simulate, in the TCP protocol stack according to the SYN packet and the SYN+ACK response packet received by the receiving unit, the three-way handshake between the transparent cache system and the client to establish the second TCP connection between the transparent cache system and the client; and simulate, in the TCP protocol stack according to the SYN packet and the SYN+ACK response packet received by the receiving unit, the three-way handshake between the transparent cache system and the service server to establish the third TCP connection between the transparent cache system and the service server.
  9. The system according to claim 7, wherein:
    the sending unit is further configured to: before the processing unit establishes, by simulation through the TCP protocol stack, the second TCP connection between the transparent cache system and the client and establishes, by simulation through the TCP protocol stack, the third TCP connection between the transparent cache system and the service server, send the SYN+ACK response packet to the client by means of IP layer forwarding;
    the receiving unit is further configured to receive an ACK packet for the SYN+ACK response packet that the client sends to the service server; and
    the sending unit is further configured to send the ACK packet received by the receiving unit to the service server by means of IP layer forwarding.
  10. The system according to claim 9, wherein the processing unit is specifically configured to: simulate, in the TCP protocol stack according to the SYN packet, the SYN+ACK response packet, and the ACK packet, the three-way handshake between the transparent cache system and the client to establish the second TCP connection between the transparent cache system and the client; and simulate, in the TCP protocol stack according to the SYN packet, the SYN+ACK response packet, and the ACK packet, the three-way handshake between the transparent cache system and the service server to establish the third TCP connection between the transparent cache system and the service server.
PCT/CN2017/085382 2016-06-23 2017-05-22 Traffic processing method and transparent buffer system WO2017219813A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610464005.9A CN105959228B (en) 2016-06-23 2016-06-23 Traffic processing method and transparent cache system
CN201610464005.9 2016-06-23

Publications (1)

Publication Number Publication Date
WO2017219813A1 true WO2017219813A1 (en) 2017-12-28

Family

ID=56903528

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/085382 WO2017219813A1 (en) 2016-06-23 2017-05-22 Traffic processing method and transparent buffer system

Country Status (2)

Country Link
CN (1) CN105959228B (en)
WO (1) WO2017219813A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112202769A (en) * 2020-09-29 2021-01-08 南京艾科朗克信息科技有限公司 Protocol processing system and method for realizing TCP (transmission control protocol) quick report of securities counter
CN112671869A (en) * 2020-12-15 2021-04-16 北京天融信网络安全技术有限公司 Network bridge transparent proxy method, device, electronic equipment and storage medium

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105959228B (en) * 2016-06-23 2020-06-16 华为技术有限公司 Traffic processing method and transparent cache system
CN107995233B (en) * 2016-10-26 2021-12-17 阿里巴巴集团控股有限公司 Method for establishing connection and corresponding equipment
CN108023900B (en) * 2016-10-31 2020-11-27 中国电信股份有限公司 Method and system for realizing transparent cache
CN107528908A (en) * 2017-09-04 2017-12-29 北京新流万联网络技术有限公司 The method and system of HTTP transparent proxy caches

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1011244A2 (en) * 1998-12-16 2000-06-21 Lucent Technologies Inc. Method and apparatus for transparently directing requests for web objects to proxy caches
US20100281168A1 (en) * 2009-04-30 2010-11-04 Blue Coat Systems, Inc. Assymmetric Traffic Flow Detection
CN103491065A (en) * 2012-06-14 2014-01-01 中兴通讯股份有限公司 Transparent proxy and transparent proxy realization method
CN105959228A (en) * 2016-06-23 2016-09-21 华为技术有限公司 Flow processing method and transparent cache system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101043446A (en) * 2007-03-08 2007-09-26 华为技术有限公司 Method and apparatus for data transmission process
CN101257450A (en) * 2008-03-28 2008-09-03 华为技术有限公司 Network safety protection method, gateway equipment, client terminal as well as network system
CN101594359A (en) * 2009-07-01 2009-12-02 杭州华三通信技术有限公司 Defence synchronous flood attack method of transmission control protocol and transmission control protocol proxy
CN102316044B (en) * 2011-09-29 2015-06-03 迈普通信技术股份有限公司 Method for realizing mutual separation of control and forwarding, and device
CN103209175A (en) * 2013-03-13 2013-07-17 深圳市同洲电子股份有限公司 Method and device for building data transmission connection

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FUJIKAWA, KENJI: "LIN6 Extensions for Simultaneous Utilization of Multiple Wireless Base Stations", INFORMATION AND TELECOMMUNICATION TECHNOLOGIES, 2005. APSITT 2005 PROCEEDINGS. 6TH ASIA-PACIFIC SYMPOSIUM, 9 November 2005 (2005-11-09) - 13 February 2006 (2006-02-13), pages 282-287, XP032391634, DOI: 10.1109/APSITT.2005.203671 *

Also Published As

Publication number Publication date
CN105959228B (en) 2020-06-16
CN105959228A (en) 2016-09-21


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17814551

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17814551

Country of ref document: EP

Kind code of ref document: A1