CN117692512A - Data processing method, device, related equipment and storage medium

Info

Publication number
CN117692512A
Authority
CN
China
Prior art keywords
data packets
network connection
connection session
processing node
data
Prior art date
Legal status
Pending
Application number
CN202311698332.7A
Other languages
Chinese (zh)
Inventor
文曦畅
吴隆烽
姚舜
Current Assignee
Shenzhen Shenxinfu Information Security Co ltd
Original Assignee
Shenzhen Shenxinfu Information Security Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Shenxinfu Information Security Co ltd
Priority to CN202311698332.7A
Publication of CN117692512A

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application discloses a data processing method, a device, related equipment and a storage medium, wherein the method is applied to a processing node and comprises the following steps: receiving the first N data packets of the network connection session sent by the drainage end; the network connection session is a connection session sent by the user terminal and used for accessing the target server, and N is an integer greater than or equal to 1; detecting the first N data packets to determine detection results; at least sending the detection result to the drainage end, so that the drainage end determines whether to drain each data packet of the network connection session according to the detection result; each data packet at least comprises other data packets after the first N data packets.

Description

Data processing method, device, related equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a data processing method, a data processing device, related equipment and a storage medium.
Background
As digitization progresses, cloud technology is used by an increasing number of users. Cloud technology has changed the way enterprises and individuals access and store data, and offers advantages such as scalability, low cost and high performance. At present, the traffic processing modes of cloud-technology products include a full drainage mode, in which all traffic is introduced into the cloud, or in which more of the security capabilities are placed on the cloud. The full drainage mode imposes high bandwidth cost and performance requirements on the network access point, and the mode of realizing security capabilities on the cloud leads to limitations in the compatibility, complexity and scalability of the cloud.
Disclosure of Invention
In order to solve the technical problems, embodiments of the present application provide a data processing method, apparatus, related device, and storage medium.
In a first aspect, an embodiment of the present application provides a data processing method, applied to a processing node, where the method includes:
receiving the first N data packets of the network connection session sent by the drainage end; the network connection session is a connection session sent by the user terminal and used for accessing the target server, and N is an integer greater than or equal to 1;
detecting the first N data packets to determine detection results;
at least sending the detection result to the drainage end, so that the drainage end determines whether to drain each data packet of the network connection session according to the detection result; each data packet at least comprises other data packets after the first N data packets.
In a second aspect, an embodiment of the present application provides a data processing method, applied to a drainage end, where the method includes:
transmitting the first N data packets of the network connection session to the processing node; the network connection session is a connection session sent by the user terminal and used for accessing the target server, and N is an integer greater than or equal to 1;
Receiving at least the detection result sent by the processing node;
determining whether to drain each data packet of the network connection session according to the detection result; each data packet at least comprises other data packets after the first N data packets.
In a third aspect, an embodiment of the present application provides a data processing apparatus, applied to a processing node, the apparatus including:
the first receiving unit is used for receiving the first N data packets of the network connection session sent by the drainage end; the network connection session is a connection session sent by the user terminal and used for accessing the target server, and N is an integer greater than or equal to 1;
the detection unit is used for detecting the first N data packets to determine detection results;
the first sending unit is used for sending the detection result to the drainage end at least, so that the drainage end determines whether to drain each data packet of the network connection session according to the detection result; each data packet at least comprises other data packets after the first N data packets.
In a fourth aspect, an embodiment of the present application provides a data processing apparatus, applied to a drainage end, where the apparatus includes:
A second sending unit, configured to send first N data packets of the network connection session to the processing node; the network connection session is a connection session sent by the user terminal and used for accessing the target server, and N is an integer greater than or equal to 1;
a second receiving unit, configured to at least receive the detection result sent by the processing node;
the determining unit is used for determining whether to drain each data packet of the network connection session according to the detection result; each data packet at least comprises other data packets after the first N data packets.
In a fifth aspect, an embodiment of the present application provides an electronic device, including a memory and a processor, where the memory stores computer executable instructions, and the processor executes the computer executable instructions on the memory to implement a method according to any one of the embodiments of the first aspect or the embodiments of the second aspect.
In a sixth aspect, embodiments of the present application provide a computer storage medium having stored thereon executable instructions that when executed by a processor implement the method according to any one of the embodiments of the first aspect or the embodiments of the second aspect.
According to the technical scheme, the processing node can detect information contained in the first N data packets of the network connection session, generate detection results of the first N data packets, and send the detection results to the drainage end, so that the drainage end can judge whether to drain all the data packets of the network connection session according to the detection results; under the condition that each data packet of the network connection session is not drained according to the detection result, each data packet of the network connection session can be directly sent to the target server by the drainage end, so that the flow introduced into the processing node can be reduced to a certain extent, and the problem that the user cannot surf the internet due to the blocking of the outlet IP address of the processing node is solved.
Drawings
FIG. 1 is a first flowchart of a data processing method according to an embodiment of the present application;
FIG. 2 is a second flowchart of a data processing method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a cloud security access service architecture according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a drainage and bypass manner of a network connection session according to an embodiment of the present application;
FIG. 5 is a third flowchart of a data processing method according to an embodiment of the present application;
FIG. 6 is a flowchart of a data processing method according to an embodiment of the present application;
FIG. 7 is a first schematic diagram of the structural components of a data processing apparatus according to an embodiment of the present application;
FIG. 8 is a second schematic diagram of the structural components of a data processing apparatus according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, are intended to be within the scope of the present application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, it is not necessary to define and explain it in the following figures.
The term "and/or" is used herein to describe only one relationship, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist together, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
As digitization progresses, cloud technology is used by an increasing number of users. Cloud technology has changed the way enterprises and individuals access and store data, and offers advantages such as scalability, low cost and high performance. However, in the current process of directing client traffic to the cloud, the requirements on the transmission performance of the client terminal and of the cloud processing node are increased. Therefore, how to process and forward user traffic more efficiently is an important issue that currently needs to be solved.
Based on the above background, the embodiments of the present application provide a data processing method, apparatus, related device, and storage medium.
Next, a data processing method according to an embodiment of the present application will be described.
Fig. 1 is a flowchart of a data processing method provided in an embodiment of the present application, which is applied to a processing node, as shown in fig. 1, and the method includes the following steps:
s101: receiving the first N data packets of the network connection session sent by the drainage end;
s102: detecting the first N data packets to determine detection results;
s103: and at least sending the detection result to the drainage end, so that the drainage end determines whether to drain each data packet of the network connection session according to the detection result.
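As an illustrative aid only (not part of the claimed method), the processing-node side of steps S101 to S103 can be sketched roughly as follows; the function and field names (handle_session_head, DetectionResult, identify_application, security_check) are assumptions introduced here for readability.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class DetectionResult:
    session_id: str
    should_drain: bool   # whether the drainage end should drain the whole session
    app_info: str = ""   # application information identified from the first N packets


def handle_session_head(session_id: str, first_n_packets: List[bytes]) -> DetectionResult:
    """S101: the first N packets of a session arrive from the drainage end;
    S102: detect them; S103: the caller returns the result to the drainage end."""
    app_info = identify_application(first_n_packets)   # e.g. rule matching on packet features
    is_safe = security_check(first_n_packets)          # e.g. attack-behaviour feature matching
    # Drain the whole session only when further cloud-side inspection is needed.
    return DetectionResult(session_id, should_drain=not is_safe, app_info=app_info)


def identify_application(packets: List[bytes]) -> str:
    # Placeholder: a real implementation would parse an application identifier or
    # match application-recognition rules against the packet contents.
    return "unknown"


def security_check(packets: List[bytes]) -> bool:
    # Placeholder: a real implementation would compare behaviour features extracted
    # from the packets against an attack-behaviour feature library.
    return True
```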
In this embodiment, the processing node is a device having a data traffic processing function (such as security identification of traffic and traffic application class identification).
In this embodiment of the present application, the drainage end may be a physical machine device or a virtual machine device.
In one embodiment, the drainage end may include a software drainage device or a hardware drainage device, and may further include an agentless drainage device. The software drainage device comprises proxy software installed on the processing node and can forward data processing requests or data streams; the agentless drainage device performs the drainage operation while the application runs on the user terminal, without the aid of software installed on the processing node; the hardware drainage device is typically deployed at the exit of a user network branch, which may include a gateway, in which case the data streams sent to the gateway can be forwarded by the hardware drainage device to the processing node. For example, in the case where the drainage end is deployed in a SASE architecture, the processing node may be a POP node.
In this application implementation, the network connection session is a connection session sent by the user terminal and used for accessing the target server, for example, the network connection session is an access flow sent by the user terminal and used for accessing a certain service platform. The target server is a server device in the internet for providing corresponding response data for the network connection session.
In one embodiment, the drainage end may include a user terminal; by way of example, the user terminal may comprise a computer, a server, a mobile electronic device, etc.; by way of example, mobile electronic devices may include smartphones, tablets, notebooks, etc.
In this embodiment of the present application, N is an integer greater than or equal to 1, where the value of N is determined according to an actual application scenario, and exemplary, the value of N may be a value less than or equal to 100. When a user needs to access a server, a session connection is established with the server by using the user terminal, and a session connection process includes transmission of a plurality of data packets, for example, transmission of a plurality of data access data packets. Each data packet carries a plurality of items of information, such as tenant, user, network protocol (e.g. TCP, UDP, ICMP), internet surfing behavior (e.g. browsing web pages, audio, video) and the like.
In the embodiment of the application, the user terminal includes multiple types, such as a desktop computer in an office area of an enterprise, a notebook computer used by a user on a business trip or stationed outside the office, a tablet computer, a mobile phone and the like. Each processing node can be connected with a plurality of user terminals, where data transmission between each user terminal and the processing node is realized through a drainage end; the drainage end can be arranged at the user terminal side and can specifically be any of various intelligent drainage devices, drainage plug-ins and the like.
In this embodiment of the present application, each data packet of the network connection session includes various information required for accessing the internet; after the drainage end receives the first N data packets of the network connection session sent by the user terminal and forwards them, the processing node detects the various items of information included in the first N data packets and generates a corresponding detection result.
In this embodiment of the present application, each data packet of the network connection session includes information of at least one of the following dimensions: tenant, user, network protocol, internet surfing behavior, Internet Protocol (IP) address, domain name, and application information. When the processing node detects the first N data packets of the network connection session, it detects the first N data packets based on the dimensions listed above to generate a detection result.
It is understood that, in addition to the several dimensions listed above, each data packet of the network connection session may further include information of other dimensions, and the information included in each data packet of the network connection session in the embodiment of the present application is not specifically limited.
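Purely for illustration, the dimensional information listed above could be carried in a structure like the following; the field names are assumptions and do not come from the embodiment.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class PacketInfo:
    # Dimensions that a data packet of the network connection session may carry.
    tenant: Optional[str] = None
    user: Optional[str] = None
    protocol: Optional[str] = None     # e.g. "TCP", "UDP", "ICMP"
    behavior: Optional[str] = None     # internet surfing behavior, e.g. browsing, audio, video
    ip_address: Optional[str] = None
    domain: Optional[str] = None
    app_info: Optional[str] = None
```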
In one embodiment, the detection result generated by the processing node detecting each item of information of the first N data packets sent by the drainage end includes a result of whether each data packet of the network connection session is drained.
In one embodiment, the detection result generated by the processing node detecting each item of information of the first N data packets sent by the drainage end includes a result for the drainage end to determine whether to drain each data packet of the network connection session.
For example, the detection result includes application information corresponding to the network connection session, and the application information is sent to the drainage end, so that the drainage end can judge whether to drain each data packet of the network connection session according to the application information; for another example, the detection result includes a result of whether the network connection session is safe, and the drainage end can determine whether to drain each data packet of the network connection session according to the result of whether the network connection session is safe.
In this embodiment of the present application, when the drainage end determines, according to the detection result, to drain each data packet of the network connection session, the drainage end drains each data packet of the network connection session to the processing node; the processing node detects each data packet of the network connection session, and when the detection passes, the processing node sends each data packet of the network connection session to the target server.
It should be noted that, when the drainage end determines to drain, the path through which each data packet of the network connection session flows is: the user terminal-the drainage terminal-the processing node-the destination server, the data packet flow path is called a drainage path.
It can be understood that, when the processing node can directly determine from the first N data packets that each data packet of the network connection session is to be drained, the first N data packets do not need to be sent back to the drainage end; in addition to sending the drainage result to the drainage end, the processing node directly sends the first N data packets to the target server or directly discards them.
In the embodiment of the application, the drainage end directly sends each data packet of the network connection session to the target server under the condition that the drainage of each data packet of the network connection session is not determined according to the detection result.
It should be noted that, when the drainage end determines that no drainage exists, paths through which the first N data packets of the network connection session flow are: the user terminal, the drainage terminal, the processing node, the drainage terminal and the target server; the paths followed by the other data packets following the first N data packets of the network connection session are: user terminal-drainage terminal-target server.
It can be understood that, when the processing node can directly detect the first N data packets and determine that the data packets of the network connection session are not drained, the processing node sends the first N data packets to the drainage end in addition to the result of not draining, so that the drainage end sends the first N data packets and other data packets after the first N data packets to the target server.
In one embodiment, the step S102 includes:
identifying application information corresponding to the first N data packets of the network connection session according to the characteristic information contained in the first N data packets; the detection result comprises the application information;
or,
identifying application information corresponding to the first N data packets of the network connection session according to the characteristic information contained in the first N data packets; comparing the application information with preset drainage information, and determining whether the application information is drained; the detection result comprises a result of whether the application information is drained.
Specifically, the processing node can identify application information corresponding to the network connection session through feature information contained in the first N data packets of the network connection session.
The following way of identifying application information corresponding to the first N data packets to the processing node is listed as follows:
mode one: the first N data packets may include application identifiers, and after the processing node receives the first N data packets, the processing node parses the first N data packets, so as to obtain the application identifiers.
Mode two: if the first N data packets do not contain the application identifier, the processing node can perform rule matching on the first N data packets and/or traffic data corresponding to the first N data packets according to the application identification rule after receiving the first N data packets, and determine the application identifier according to the rule matching result; for example, application recognition rules corresponding to different applications may be different; the application recognition rule may be extracted by a security expert according to application recognition experience, or may be automatically extracted by an automated tool; by way of example, the application identification may be determined by a characteristic value or protocol of a packet containing the first N packets, a port, a direction, a packet length match, a packet content match, etc.
Mode three: if the first N data packets do not contain the application identifier, the processing node receives the first N data packets, performs feature extraction on the first N data packets in an artificial intelligence mode, and identifies the application identifier according to a feature extraction result.
After the processing node identifies the application identifier corresponding to the network connection session according to the first N data packets, the application information corresponding to the network connection session can be determined according to the application identifier.
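The three identification modes described above can be combined, as a rough sketch, in the following order of preference; the rule format and the helper names (parse_app_id, rule_matches, classify_with_model) are placeholders assumed for illustration.

```python
from typing import Dict, List, Optional


def identify_app_id(packets: List[bytes], rules: Dict[str, dict]) -> Optional[str]:
    # Mode one: the packets themselves carry an application identifier.
    app_id = parse_app_id(packets)
    if app_id is not None:
        return app_id
    # Mode two: match application-recognition rules (protocol, port, direction,
    # packet length, packet content, and so on) against the packets.
    for candidate, rule in rules.items():
        if rule_matches(rule, packets):
            return candidate
    # Mode three: fall back to feature extraction with a trained model.
    return classify_with_model(packets)


def parse_app_id(packets: List[bytes]) -> Optional[str]:
    return None        # placeholder for mode one


def rule_matches(rule: dict, packets: List[bytes]) -> bool:
    return False       # placeholder for mode two


def classify_with_model(packets: List[bytes]) -> Optional[str]:
    return None        # placeholder for mode three
```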
In one possible scheme, after identifying the application information corresponding to the network connection session, the processing node directly sends the application information corresponding to the network connection session to the drainage end; the drainage terminal is preset with drainage information, wherein the drainage information comprises an application information base needing drainage and/or an application information base needing no drainage; the drainage end can determine whether to drain each data packet of the network connection session by matching the application information of the network connection session sent by the processing node with the drainage information.
For example, a first application information base needing to be drained and/or a second application information base needing not to be drained are preset in the drainage end; the first application information base comprises application identification information of a video application A1, a video application A2, a game application A3 and a shopping application A4; the second application information base includes application identification information of a video application B1, a video application B2, a game application B3, and a shopping application B4. If the processing node recognizes that the application identifier corresponding to the network connection session is A1, the application identifier A1 is sent to the drainage end, and after the drainage end receives the application identifier A1 and matches it with the first application information base needing to be drained, each data packet of the network connection session is confirmed to be drained; if the processing node recognizes that the application identifier corresponding to the network connection session is B1, the processing node sends the application identifier B1 to the drainage end, and after the drainage end receives the application identifier B1, the drainage end determines not to drain each data packet of the network connection session by matching the application identifier B1 with the second application information base which does not need to be drained.
In another possible scheme, the preset drainage information in the processing node comprises an application information base needing to be drained and/or an application information base not needing to be drained, after the processing node recognizes the application information corresponding to the network connection session, the processing node further matches the recognized application information with the preset drainage information in the processing node to obtain a determination result of whether to drain the application information, the determination result of whether to drain the application information is sent to a drainage end, and the drainage end determines whether to drain each data packet of the network connection session.
For example, a first application information base needing to be subjected to drainage and/or a second application information base not needing to be subjected to drainage are preset in the processing node; the first application information base comprises application identification information of a video application A1, a video application A2, a game application A3 and a shopping application A4; the second application information base includes application identification information of a video application B1, a video application B2, a game application B3, and a shopping application B4. If the processing node recognizes that the application identifier corresponding to the network connection session is A1, after the application identifier A1 is matched with the first application information base needing to be drained, each data packet of the network connection session is confirmed to be drained; then, a confirmation result of draining each data packet of the network connection session is sent to the drainage end, and after receiving the confirmation result of draining each data packet of the network connection session, the drainage end drains each data packet of the network connection session to the processing node. If the processing node identifies that the application identifier corresponding to the network connection session is B1, after the application identifier B1 is matched with the second application information base which does not need to be drained, it is determined that the data packets of the network connection session are not drained; then a determination result of not needing to drain the data packets of the network connection session is sent to the drainage end, and after receiving the determination result of not draining the data packets of the network connection session, the drainage end directly sends each subsequently received data packet of the network connection session to the target server.
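A minimal sketch of this matching step, using the A1-A4 / B1-B4 example above, might look as follows; the set names and the function are illustrative assumptions.

```python
from typing import Optional

# The bases follow the example above: A1-A4 need drainage, B1-B4 do not.
DRAIN_APPS = {"A1", "A2", "A3", "A4"}       # first application information base
NO_DRAIN_APPS = {"B1", "B2", "B3", "B4"}    # second application information base


def decide_drain_by_app(app_id: str) -> Optional[bool]:
    """Return True to drain, False not to drain, None if the application is not listed."""
    if app_id in DRAIN_APPS:
        return True
    if app_id in NO_DRAIN_APPS:
        return False
    return None
```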
In one embodiment, the step S102 includes:
performing security detection on the first N data packets to obtain a result of whether the data packets are secure; the detection result comprises the result of whether safety is ensured or not;
or,
performing security detection on the first N data packets to obtain a result of whether the data packets are secure; determining whether to drain each data packet of the network connection session according to the result of whether to be safe, and generating a first determination result; the detection result includes the first determination result.
Specifically, the processing node can identify whether the first N data packets are safe or not by identifying each item of information contained in the first N data packets of the network connection session, so as to determine whether the network connection session is safe or not.
The following way for the processing node to determine whether the first N data packets are secure is listed as follows:
mode one: after receiving the first N data packets, the processing node can perform rule matching on the first N data packets and/or traffic data corresponding to the first N data packets according to preset safety traffic rules, and determine whether the first N data packets are safe or not according to rule matching results; the above-mentioned safety traffic rules may be extracted by safety specialists according to application recognition experience, or automatically by an automated tool, for example; illustratively, whether the first N data packets are secure may be determined by a characteristic value or protocol of the data packet containing the first N data packets, a port, a direction, a data packet length match, a data packet content match, etc.
Mode two: after receiving the first N data packets, the processing node performs feature extraction on the first N data packets in an artificial intelligence mode, and judges whether the first N data packets are safe or not according to feature extraction results.
In one possible scheme, after identifying whether the network connection session is safe, the processing node directly sends a result of whether the network connection session is safe to the drainage end; and the drainage end can determine whether to drain each data packet of the network connection session according to the result of whether the network connection session is safe or not.
For example, the processing node stores an attack behavior feature library, and after receiving the first N data packets of the network connection session, the processing node performs feature detection on information contained in the first N data packets to obtain behavior features corresponding to the first N data packets; comparing the behavior characteristics corresponding to the first N data packets with an attack behavior characteristic library, if the characteristics corresponding to the detected behavior characteristics exist in the attack behavior characteristic library, determining that the network connection session is unsafe by the processing node, and then transmitting a result of determining that the network connection session is unsafe to the drainage end; after receiving the result of determining that the network connection session is unsafe, the drainage end determines to drain each data packet of the network connection session, that is, after all the data packets of the network connection session are drained to the processing node to perform security detection, the processing node sends each data packet to the target server.
If the processing node compares the behavior characteristics obtained by the characteristic detection of the information contained in the first N data packets with the attack behavior characteristic library, and determines that the characteristics corresponding to the detected behavior characteristics do not exist in the attack behavior characteristic library, the processing node determines that the network connection session is safe, and then the processing node sends a result of determining the security of the network connection session to the drainage end; after receiving the result of determining the security of the network connection session sent by the processing node, the drainage end determines not to drain each data packet of the network connection session.
In another possible scheme, after identifying whether the network connection session is safe or not, the processing node further generates a result of whether to drain the network connection session according to the identified result of whether to safe or not, and sends the result of whether to drain the network connection session according to the identified result of whether to safe or not to the drainage end, and the drainage end determines whether to drain each data packet of the network connection session or not.
For example, the processing node stores an attack behavior feature library, and after receiving the first N data packets of the network connection session, the processing node performs feature detection on information contained in the first N data packets to obtain behavior features corresponding to the first N data packets; comparing the behavior characteristics corresponding to the first N data packets with an attack behavior characteristic library, and if the characteristics corresponding to the detected behavior characteristics exist in the attack behavior characteristic library, determining that the network connection session is unsafe by the processing node, thereby further generating a result of conducting drainage on each data packet of the network connection session; after receiving the result of the flow guiding of each data packet of the network connection session sent by the processing node, the flow guiding end determines to flow guiding of each data packet of the network connection session, and after the flow guiding of each data packet of the network connection session to the processing node for security detection, the processing node sends each data packet to the target server.
If the processing node compares the behavior characteristics obtained by the characteristic detection of the information contained in the first N data packets with the attack behavior characteristic library, and determines that the characteristics corresponding to the detected behavior characteristics do not exist in the attack behavior characteristic library, the processing node determines that the network connection session is safe, and then the processing node further generates a result of not conducting drainage on each data packet of the network connection session and sends the result of not conducting drainage on each data packet of the network connection session to a drainage end; after receiving the result of not conducting the drainage on each data packet of the network connection session, the drainage end determines not to conduct the drainage on each data packet of the network connection session, namely, directly sends each data packet of the network connection session to the target server.
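A simplified sketch of this security-detection branch, assuming the attack behavior feature library is represented as a set of feature strings, is given below; the helper names are illustrative only.

```python
from typing import List, Set


def security_detect(first_n_packets: List[bytes], attack_features: Set[str]) -> bool:
    """Return True if the first N packets are judged safe, False otherwise."""
    observed = extract_behavior_features(first_n_packets)
    # The session is unsafe if any observed behaviour feature is in the attack library.
    return not (observed & attack_features)


def extract_behavior_features(packets: List[bytes]) -> Set[str]:
    # Placeholder for feature detection on the information contained in the packets.
    return set()


def decide_drain_by_security(is_safe: bool) -> bool:
    # As in the text: drain the session for full inspection when it is judged unsafe.
    return not is_safe
```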
In one embodiment, the step S102 includes:
matching the information contained in the first N data packets with a flow strategy rule to obtain a matching result; the detection result comprises a matching result;
or,
matching the information contained in the first N data packets with a flow policy rule to obtain a matching result, determining whether to stream each data packet of the network connection session according to the matching result, and generating a second determination result; the detection result includes the second determination result.
Specifically, the processing node can identify the traffic class corresponding to the network connection session by identifying each item of information contained in the first N data packets of the network connection session.
In one embodiment, the flow policy rules include flow rules corresponding to bypass flow and/or flow rules corresponding to non-bypass flow.
The following describes the way in which the processing node identifies the traffic class corresponding to the first N data packets:
When the information contained in the first N data packets of the network connection session is matched against the traffic policy rules, the matching may specifically be performed between the information contained in each of the first N data packets and the traffic rule corresponding to non-bypassable traffic in the traffic policy rules, so as to determine whether the network connection session belongs to non-bypassable traffic.
In the present embodiment, non-bypassable traffic includes, but is not limited to, the following: Secure Sockets Layer (SSL) decryption traffic, antivirus traffic, data leakage prevention (DLP) traffic, and content audit traffic.
It can be understood that the non-bypassable traffic in the embodiment of the present application is not limited to the traffic listed above, and is specifically determined according to the transmission information involved in the application scenario; since the processing node mainly implements the security capabilities of the cloud, any data needing security protection may be classified as non-bypassable traffic. Conversely, most internet traffic, other than traffic requiring SSL full decryption, antivirus scanning, DLP, content audit and the like, can be classified as bypassable traffic.
In one possible scheme, after the processing node matches the first N data packets with the flow policy rules to obtain a matching result, directly sending the matching result to the drainage end; and the drainage end can determine whether to drain each data packet of the network connection session according to the result of the matching.
For example, the processing node matches the first N data packets with the bypass flow rule, after obtaining the result of matching the first N data packets with the bypass flow rule, sends the result of matching the first N data packets with the bypass flow rule to the drainage end, and after the drainage end receives the result of matching the first N data packets sent by the processing node with the bypass flow rule, determines that each data packet of the network connection session is not drained.
For another example, the processing node matches the first N data packets with the non-detourable flow rule, and after obtaining a result that the first N data packets are not matched with the non-detourable flow rule, sends the result that the first N data packets are not matched with the non-detourable flow rule to the drainage end, and after receiving the result that the first N data packets sent by the processing node are not matched with the non-detourable flow rule, the drainage end determines to drain each data packet of the network connection session.
For example, the processing node stores a detourable traffic rule including detourable IP address and account information, such as detourable IP1 and account 1, detourable IP1 and account 2, detourable IP1 and account 3; after receiving the first N data packets of the network connection session, the processing node analyzes information contained in the first N data packets to obtain access IP and account information corresponding to the network connection session, if the access IP is judged to be IP1 and the account information is judged to be account 1, the processing node determines that the first N data packets are matched with the bypass flow rule, sends a result of the matching of the first N data packets and the bypass flow rule to the drainage end, and the drainage end determines that each data packet of the network connection session is not drained after receiving the result of the matching of the first N data packets and the bypass flow rule.
If the processing node receives the first N data packets of the network connection session, the processing node analyzes the information contained in the first N data packets to obtain access IP and account information corresponding to the network connection session, and if the access IP is judged to be non-IP 1 and/or the account information is judged to be any one of non-account 1, account 2 and account 3, the processing node determines that the first N data packets are not matched with the bypass flow rule, sends the result of the mismatch of the first N data packets and the bypass flow rule to a drainage end, and the drainage end determines to drain each data packet of the network connection session after receiving the result of the mismatch of the first N data packets and the bypass flow rule.
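Using the IP1/account example above, the rule-matching step at the processing node could be sketched as follows; the rule representation is an assumption made for illustration.

```python
# Rule set following the example above: traffic from IP1 with account 1, 2 or 3 may bypass.
BYPASSABLE_RULES = {("IP1", "account1"), ("IP1", "account2"), ("IP1", "account3")}


def matches_bypassable_rule(access_ip: str, account: str) -> bool:
    """True if the first N packets match the bypassable traffic rule (no drainage needed)."""
    return (access_ip, account) in BYPASSABLE_RULES


# e.g. ("IP1", "account1") matches, so the session is not drained;
# ("IP2", "account1") does not match, so each data packet of the session is drained.
```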
In one possible scheme, after the processing node matches the first N data packets with the traffic policy rules to obtain a matching result, it further determines, according to the matching result, whether to drain each data packet of the network connection session, and sends the determination result of whether to drain to the drainage end, so that the drainage end determines, according to the received result, whether to drain each data packet of the network connection session.
For example, the processing node matches the first N data packets with the bypass flow rule, further generates indication information for not conducting drainage on the network connection session after obtaining a result of matching the first N data packets with the bypass flow rule, and determines not to conduct drainage on each data packet of the network connection session after receiving the indication information.
For example, the processing node stores a detourable traffic rule including detourable IP address and account information, such as detourable IP1 and account 1, detourable IP1 and account 2, detourable IP1 and account 3; after receiving the first N data packets of the network connection session, the processing node analyzes information contained in the first N data packets to obtain access IP and account information corresponding to the network connection session, if the access IP is judged to be IP1 and the account information is judged to be account 1, the processing node determines that the first N data packets are matched with the bypass flow rule, then further generates a determination result of not conducting drainage on the network connection session, and sends the determination result of not conducting drainage on the network connection session to a drainage end; and the drainage end receives a determination result that the network connection session is not drained, and determines that all data packets of the network connection session are not drained.
If the processing node receives the first N data packets of the network connection session, analyzing information contained in the first N data packets to obtain access IP and account information corresponding to the network connection session, if the access IP is judged to be non-IP 1 and/or the account information is not any one of account 1, account 2 and account 3, the processing node determines that the first N data packets are not matched with the bypass flow rule, then further generates a determination result for conducting drainage on the network connection session, and sends the determination result for conducting drainage on the network connection session to a drainage end; and the drainage end receives a determination result of drainage of the network connection session and determines to drain each data packet of the network connection session.
In one embodiment, the traffic policy rules may be issued to the processing node by the control center, and may specifically include a plurality of rules; for example, the specific information included in each rule may be of the form "traffic matching rule xxx takes the bypass/drainage path"; for example, a rule may specifically be "traffic whose destination address matches 1.2.3.4 takes the bypass path". The bypass path here is the path through which each data packet of the network traffic flows when each data packet of the network connection session is not drained.
By matching the first N data packets with the traffic policy rules, the processing node can directly generate an indication of whether each data packet of the network connection session should be drained or should take the bypass path.
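A possible (assumed) representation of such control-center traffic policy rules, together with the matching that yields a drain-or-bypass indication, is sketched below; the schema and field names are illustrative, not prescribed by the embodiment.

```python
from typing import Dict, List

# Hypothetical shape of the traffic policy rules issued by the control center.
TRAFFIC_POLICY_RULES: List[Dict] = [
    {"match": {"dst_ip": "1.2.3.4"}, "action": "bypass"},   # the example rule from the text
]


def apply_policy(packet_fields: Dict[str, str]) -> str:
    """Return "bypass" or "drain" for the session described by packet_fields."""
    for rule in TRAFFIC_POLICY_RULES:
        if all(packet_fields.get(key) == value for key, value in rule["match"].items()):
            return rule["action"]
    return "drain"   # conservative default; the embodiment does not prescribe one
```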
In the embodiment of the application, the processing node detects information contained in the first N data packets of the network connection session to generate detection results of the first N data packets, and sends the detection results to the drainage end, so that the drainage end can judge whether to drain each data packet of the network connection session according to the detection results; under the condition that each data packet of the network connection session is not drained according to the detection result, each data packet of the network connection session can be directly sent to the target server by the drainage end, so that the flow introduced into the processing node can be reduced to a certain extent, and the problem that the user cannot surf the internet due to the blocking of the outlet IP address of the processing node is solved.
Fig. 2 is a second flowchart of a data processing method according to an embodiment of the present application, applied to a processing node, as shown in fig. 2, where the method includes the following steps:
s201: receiving the first N data packets of the network connection session sent by the drainage end;
S202: detecting the first N data packets to determine whether to drain each data packet of the network connection session;
s203: generating indication information for streaming each data packet of the network connection session under the condition that each data packet of the network connection session is not drained;
s204: and sending the indication information and the first N data packets to the drainage end.
In this embodiment of the present application, after receiving a network connection session sent by a user terminal, a drainage end sends first N data packets of the network connection session to a processing node.
After receiving the first N data packets sent by the drainage end, the processing node detects the first N data packets, and the specific manner of detecting the first N data packets by the processing node may refer to the description of the specific scheme of detecting the first N data packets by the processing node in the embodiment of fig. 1.
The processing node detects the first N data packets, and can generate indication information whether to stream each data packet of the network connection session according to the detection result.
Illustratively, the processing node may identify, from the first N packets of the network session connection, a name of an application of the user terminal corresponding to the network session connection, e.g., may identify that the application of the user terminal corresponding to the network session connection is a hypertext transfer protocol (HTTP, hypertext Transfer Protocol) request, or domain name system (DNS, domain Name System) resolution, etc. After the name of the application corresponding to the network session connection is identified, the name of the application is further matched with an application information base which does not need to be subjected to drainage in the preset drainage information in the processing node, if the name of the application is matched with the application information base which does not need to be subjected to drainage, each data packet of the network session is determined to be not required to be subjected to drainage, indication information which does not need to be subjected to drainage is generated, and then the indication information is sent to a drainage end.
Here, since the processing node determines that the data packets of the network connection session do not need to be drained, that is, determines that the data packets of the network connection session can be directly sent to the target server by the drainage end, the processing node needs to resend the first N data packets of the network connection session to the drainage end, so that the drainage end resends the first N data packets to the target server.
After receiving the indication information, sent by the processing node, that each data packet of the network connection session does not need to be drained, the drainage end sends the first N data packets returned by the processing node, together with the other data packets after the first N data packets received from the user terminal, to the target server.
Here, as an optional implementation manner, after receiving the first N data packets of the network connection session sent by the user terminal, the drainage end may perform buffering and mirroring on the first N data packets to obtain buffered first N data packets and mirrored first N data packets; and then, the drainage end sends the first N data packets of the mirror image to the processing node, the processing node detects the first N data packets of the mirror image to obtain a detection result, and after the processing node sends the detection result to the drainage end, if the drainage end judges that each data packet of the network connection session is not drained according to the detection result, the drainage end can directly send the cached first N data packets and other data packets after the received first N data packets of the network connection session to the target server. In this embodiment, the processing node is not required to send the first N data packets to the drainage end, the original first N data packets are not required to be modified, and the drainage end directly sends the first N data packets to the target server, so that the problem of IP exit change does not exist, that is, the original access IP seen by the target server is the user terminal, but not the processing node, and the condition of access blocking does not exist.
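The cache-and-mirror variant described above can be sketched at the drainage end roughly as follows; the class and method names are assumptions introduced for illustration.

```python
from typing import List


class SessionBuffer:
    """Cache the first N packets and hand mirrored copies to the processing node, so that
    on a 'no drainage' verdict the cached originals can go straight to the target server."""

    def __init__(self, n: int):
        self.n = n
        self.cached: List[bytes] = []

    def add(self, packet: bytes) -> List[bytes]:
        """Cache a packet; once N packets have arrived, return mirrored copies of them."""
        if len(self.cached) < self.n:
            self.cached.append(packet)
            if len(self.cached) == self.n:
                return [bytes(p) for p in self.cached]   # mirror for the processing node
        return []

    def release(self) -> List[bytes]:
        """On a 'no drainage' verdict, hand back the cached originals for direct sending."""
        packets, self.cached = self.cached, []
        return packets
```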
According to the technical scheme, the processing node detects information contained in the first N data packets of the network connection session, generates detection results of the first N data packets, and sends the detection results to the drainage end, so that the drainage end can judge whether to drain all the data packets of the network connection session according to the detection results; under the condition that each data packet of the network connection session is not drained according to the detection result, each data packet of the network connection session can be directly sent to the target server by the drainage end, so that the flow introduced into the processing node can be reduced to a certain extent, and the problem that the user cannot surf the internet due to the blocking of the outlet IP address of the processing node is solved.
Next, a scheme for draining or streaming each data packet of the network connection session according to the embodiment of the present application will be described with reference to fig. 3 and fig. 4.
The technical solution of the embodiments of the present application may be applied to various types of cloud technology products. In the following, a specific application of the technical solution of the embodiments of the present application in a SASE architecture will be described with reference to the Secure Access Service Edge (SASE) cloud architecture system shown in fig. 3.
The SASE in fig. 3 is a cloud-based platform that can provide networking and security functions directly to endpoints (e.g., user terminals) connected to the platform that incorporates services such as routing, SD-WAN, firewall, and secure Web gateway. SASE no longer employs a data-centric network design, but instead treats the data center as another endpoint. The cloud security system provides a cloud security solution, and achieves security capability by guiding traffic to the cloud.
In the SASE architecture of fig. 3, the standard SD-WAN device is a secure intelligent router, abbreviated as SDW-R. BYOD is a drainage client.
It should be noted that, in fig. 3, the SD-WAN device and BYOD are both one of the drainage terminals, and the AF (a firewall) is also one of the drainage terminals besides the SD-WAN and BYOD. The form of the drainage end is not limited to a hardware drainage device, but can be an application software installed at a user terminal or a software drainage device in a plug-in form.
In fig. 3, a Point of Presence (POP) is an edge node of the access network and typically carries the processing of data traffic in SASE. By establishing public cloud POP nodes in a plurality of areas, such as a plurality of provinces, services such as Uniform Resource Locator (URL) filtering, bandwidth and flow management, terminal management and control, and leakage analysis are provided for users, so that users do not need to purchase traditional hardware equipment; by merely installing a drainage plug-in on a laptop, or deploying a lightweight drainage gateway at the network outlet to cooperate with the cloud service, they can realize exactly the same functions as local hardware security equipment.
SAAS, software, is a service that allows users to connect and use cloud-based applications over the Internet.
vAC, virtualized online behavior management; vDLP, virtualized data leakage prevention; vAF, virtualized firewalls; vEDR, virtualized endpoint detection and response; vVPN, virtualized virtual private network.
In SASE architecture, POP devices are located outside the edge of the enterprise network, with a variety of services deployed in the POP devices, by which user terminals can implement data transmission and request processing between them and the target network. In practical application, the POP device can uniformly forward the processing request of the flow data sent by the user terminal to the SASE cloud for processing, and the request processing mode has higher requirements on the bandwidth of the POP device, the data processing capacity of the SASE cloud and the like.
And, various services are deployed in POP devices included in the SASE, and by means of the services, the user terminal can realize data transmission and request processing between the user terminal and the SASE cloud. In practical application, the POP device can uniformly forward the processing request of the flow data sent by the user terminal to the SASE cloud for processing without difference, so that higher requirements are provided for the bandwidth of the POP device, the data processing capacity of the SASE cloud and the like.
Based on the SASE cloud structure in FIG. 3, in FIG. 4, SDW-R and BYOD are both drainage ends. The POP point includes firewall as a service (FwaaS) and a data communication interface (DP). The POP point includes a Software Security Framework (SSF) that implements cloud security capabilities, including, for example: application identification, user authentication, application control, Transmission Control Protocol (TCP) proxy, and L7 policy (i.e., application layer policy). The POP point can interface with the data of a data center, such as a Structured Query Language (SQL) database of the data center.
The control center in fig. 4 is a cloud device that configures the traffic policy rules, for example, "traffic matching rule xxx takes the bypass/drainage path"; FwaaS performs the actual matching of data packets against the traffic policy rules, obtains a drainage/bypass result, and sends the result to the drainage end.
The flow paths in fig. 4 include three paths, specifically a drainage path, a bypass slow path and a bypass fast path.
The drainage path means that user traffic is drained to the POP through the SDW-R/BYOD, analyzed and audited by the FwaaS security stack, and then exits to the Internet from the DP outbound port (that is, it is transmitted from the DP outbound port to the server that the user terminal wants to access).
In this embodiment of the present application, the drainage end may perform a flow drainage function, as shown in the drainage path in fig. 4, where the drainage end SDW-R or BYOD may drain the flow of the user terminal to the POP point, and after the POP processes (e.g. performs user authentication) the flow data, the POP transmits the flow of the user terminal to the internet.
The bypass slow path means that the first N data packets of each connection session are drained to the POP, where FwaaS analyzes and audits them and issues the bypass decision; the decision is returned from the POP to the drainage end, and the traffic finally exits to the Internet from the drainage end (i.e., is transmitted from the drainage end to the server the user terminal is to access).
The bypass fast path means that the (N+1)-th data packet and the subsequent traffic of each connection session exit to the Internet directly from the drainage end and no longer pass through the POP.
In this embodiment of the present application, the bypass channel may be referred to as a bypass model, and the second transmission path may be referred to as a drainage model.
In an actual application scenario, FwaaS on the POP can determine, starting from the first packet of each connection session, whether the data packets of that session follow the drainage model or the bypass model. If they follow the bypass model, the drainage end switches from the bypass slow path to the bypass fast path after the first N data packets of the session have been received, and the data packets after the N-th are transmitted directly from the drainage end to the Internet.
The technical solution of the embodiments of the present application can utilize the FwaaS application identification and the rich security stack capability in the POP to determine, by auditing the first N data packets of each network connection session, whether the network connection session can be bypassed and when to switch from the bypass slow path to the bypass fast path. The traffic introduced into the POP point is thus reduced without a significant loss of the main network security capability.
According to this technical solution, the traffic egress of the network connection session can be switched. Specifically, when the drainage end transmits the data packets of the network connection session along the bypass slow path, the traffic data of the packets flows out to the Internet from the data egress of the POP; when the drainage end transmits the data packets of the network connection session along the bypass fast path, the traffic data of the packets flows out to the Internet from the data egress of the drainage end.
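For illustration only, the following Python sketch shows how a drainage end might implement the switch from the bypass slow path to the bypass fast path described above. It is a minimal sketch under assumed interfaces: the names SessionState, on_packet, on_detection_result, forward_to_pop and forward_to_internet, and the fixed value of N, are hypothetical and do not appear in this application.

```python
from dataclasses import dataclass

N = 3  # illustrative assumption: number of leading packets audited per session

@dataclass
class SessionState:
    sent: int = 0                 # packets of this session seen so far
    decision: str | None = None   # None (undecided), "drain", or "bypass"

sessions: dict[tuple, SessionState] = {}

def on_packet(flow_key, packet, forward_to_pop, forward_to_internet):
    """Drainage-end handling of one packet of a network connection session.

    flow_key identifies the session (e.g. a 5-tuple); forward_to_pop and
    forward_to_internet are hypothetical callables standing in for the path
    through the POP and the bypass fast path (direct to the Internet).
    """
    state = sessions.setdefault(flow_key, SessionState())
    state.sent += 1

    if state.decision == "bypass" and state.sent > N:
        # Bypass fast path: packets after the N-th exit directly to the Internet.
        forward_to_internet(packet)
    else:
        # Drainage mode, undecided, or still within the first N packets:
        # the packet goes through the POP (drainage path / bypass slow path).
        forward_to_pop(packet)

def on_detection_result(flow_key, result):
    """Record the drain/bypass decision returned by the POP for this session."""
    sessions.setdefault(flow_key, SessionState()).decision = result  # "drain" or "bypass"
```

A real drainage end would also expire per-session state; the simple packet counter here only mirrors the cut from the bypass slow path to the bypass fast path described above.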
Fig. 5 is flow chart III of a method for processing a network connection session, provided in an embodiment of the present application and applied to a drainage end. As shown in fig. 5, the method includes the following steps:
S501: transmitting the first N data packets of the network connection session to the processing node;
S502: Receiving at least the detection result sent by the processing node;
S503: Determining whether to drain each data packet of the network connection session according to the detection result.
In one embodiment, the detection result, generated by the processing node detecting the information of the first N data packets sent by the drainage end, includes the result of whether the data packets of the network connection session are to be drained.
In another embodiment, the detection result, generated by the processing node detecting the information of the first N data packets sent by the drainage end, includes information used by the drainage end to determine whether to drain the data packets of the network connection session.
For example, the detection result includes application information corresponding to the network connection session, and the application information is sent to the drainage end, so that the drainage end can judge whether to drain each data packet of the network connection session according to the application information; for another example, the detection result includes a result of whether the network connection session is safe, and the drainage end can determine whether to drain each data packet of the network connection session according to the result of whether the network connection session is safe.
In this embodiment of the present application, when the drainage end determines, according to the detection result, to drain the data packets of a network connection session, the drainage end drains the data packets of the network connection session to the processing node; the processing node detects the data packets of the network connection session, and if they pass the detection, the processing node sends them to the target server.
In an optional embodiment of the present application, if the drainage end determines to drain each data packet of the network connection session according to the detection result, the drainage end sends other data packets after the first N data packets of the network connection session to the processing node;
and if the drainage end determines that all the data packets of the network connection session are not drained according to the detection result, the first N data packets of the network connection session and other data packets after the first N data packets are sent to a target server.
It should be noted that, when the drainage end determines to drain, the path through which each data packet of the network connection session flows is: user terminal - drainage end - processing node - target server; this packet flow path is called the drainage path.
It can be understood that, when the processing node can directly detect the first N data packets and determines that the data packets of the network connection session are to be drained, the first N data packets do not need to be sent back to the drainage end; in addition to sending the drainage result to the drainage end, the processing node directly sends the first N data packets to the target server or directly discards them.
In the embodiment of the present application, when the drainage end determines, according to the detection result, not to drain the data packets of the network connection session, the drainage end directly sends the data packets of the network connection session to the target server.
It should be noted that, when the drainage end determines not to drain, the path through which the first N data packets of the network connection session flow is: user terminal - drainage end - processing node - drainage end - target server; the path followed by the other data packets after the first N data packets of the network connection session is: user terminal - drainage end - target server.
It can be understood that, when the processing node can directly detect the first N data packets and determine that the data packets of the network connection session are not drained, the processing node sends the first N data packets to the drainage end in addition to the result of not draining, so that the drainage end sends the first N data packets and other data packets after the first N data packets to the target server.
The following introduces several kinds of specific information that the detection result may include, and the schemes by which the drainage end processes the data packets of the network connection session according to the different detection results.
In the first scheme, the detection result generated by the processing node detecting the first N data packets of the network connection session includes the application information corresponding to the network connection session; after identifying the application information corresponding to the network connection session, the processing node directly sends it to the drainage end. The drainage end is preset with drainage information, which includes an application information base requiring drainage and/or an application information base not requiring drainage; the drainage end can determine whether to drain the data packets of the network connection session by matching the application information of the network connection session sent by the processing node against the drainage information.
For example, a first application information base requiring drainage and/or a second application information base not requiring drainage are preset in the drainage end. The first application information base includes application identification information of video application A1, video application A2, game application A3 and shopping application A4; the second application information base includes application identification information of video application B1, video application B2, game application B3 and shopping application B4. If the processing node identifies that the application identifier corresponding to the network connection session is A1, it sends the application identifier A1 to the drainage end; after receiving the application identifier A1 and matching it against the first application information base requiring drainage, the drainage end determines to drain the data packets of the network connection session. If the processing node identifies that the application identifier corresponding to the network connection session is B1, it sends the application identifier B1 to the drainage end; after receiving the application identifier B1 and matching it against the second application information base not requiring drainage, the drainage end determines not to drain the data packets of the network connection session.
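A minimal sketch of the drainage-end matching in this first scheme, assuming the two libraries are simple sets of application identifiers; the names DRAIN_APP_IDS, NO_DRAIN_APP_IDS and decide_drain, and the fallback behaviour for identifiers found in neither library, are illustrative assumptions.

```python
# Hypothetical preset drainage information at the drainage end (first scheme).
DRAIN_APP_IDS = {"A1", "A2", "A3"}           # applications that require drainage
NO_DRAIN_APP_IDS = {"B1", "B2", "B3", "B4"}  # applications that do not require drainage

def decide_drain(app_id: str, default: bool = True) -> bool:
    """Return True to drain the session's packets, False to bypass them.

    app_id is the application identifier reported by the processing node.
    Identifiers in neither library fall back to `default` (an assumption;
    this case is not specified in the embodiment).
    """
    if app_id in DRAIN_APP_IDS:
        return True
    if app_id in NO_DRAIN_APP_IDS:
        return False
    return default

# Example: the processing node reports application identifier "A1" or "B1".
assert decide_drain("A1") is True    # drain every packet of the session
assert decide_drain("B1") is False   # bypass: send packets straight to the target server
```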
In the second scheme, the drainage information preset in the processing node includes an application information base requiring drainage and/or an application information base not requiring drainage. After identifying the application information corresponding to the network connection session, the processing node further matches the identified application information against the drainage information preset in the processing node to obtain a determination result of whether to drain, and sends this determination result to the drainage end; the drainage end accordingly determines whether to drain the data packets of the network connection session.
For example, a first application information base requiring drainage and/or a second application information base not requiring drainage are preset in the processing node. The first application information base includes application identification information of video application A1, video application A2, game application A3 and shopping application A4; the second application information base includes application identification information of video application B1, video application B2, game application B3 and shopping application B4. If the processing node identifies that the application identifier corresponding to the network connection session is A1, then after matching the application identifier A1 against the first application information base requiring drainage, it determines to drain the data packets of the network connection session and sends this determination result to the drainage end; after receiving the determination result of draining the data packets of the network connection session, the drainage end sends the data packets of the network connection session to the processing node. If the processing node identifies that the application identifier corresponding to the network connection session is B1, then after matching the application identifier B1 against the second application information base not requiring drainage, it determines not to drain the data packets of the network connection session and sends this determination result to the drainage end; after receiving the determination result of not draining the data packets of the network connection session, the drainage end sends the subsequently received data packets of the network connection session to the target server.
The third scheme is that the processing node identifies whether the network connection session is safe or not according to the first N data packets of the network connection session, and after identifying whether the network connection session is safe or not, the processing node directly sends a result of whether the network connection session is safe or not to the drainage end; and the drainage end determines whether to drain each data packet of the network connection session according to the result of whether the network connection session is safe or not.
For example, the processing node stores an attack behavior feature library, and after receiving the first N data packets of the network connection session, the processing node performs feature detection on information contained in the first N data packets to obtain behavior features corresponding to the first N data packets; comparing the behavior characteristics corresponding to the first N data packets with an attack behavior characteristic library, if the characteristics corresponding to the detected behavior characteristics exist in the attack behavior characteristic library, determining that the network connection session is unsafe by the processing node, and then transmitting a result of determining that the network connection session is unsafe to the drainage end; and after receiving the result of determining that the network connection session is unsafe, the drainage end determines to drain each data packet of the network connection session.
If the processing node compares the behavior characteristics obtained by the characteristic detection of the information contained in the first N data packets with the attack behavior characteristic library, and determines that the characteristics corresponding to the detected behavior characteristics do not exist in the attack behavior characteristic library, the processing node determines that the network connection session is safe, and then the processing node sends a result of determining the security of the network connection session to the drainage end; after receiving the result of determining the security of the network connection session sent by the processing node, the drainage end determines not to drain each data packet of the network connection session.
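The third scheme can be pictured with the following sketch, assuming behaviour features are reduced to string labels and the attack behaviour feature library is a set of such labels; the names ATTACK_FEATURES, extract_features and detect_session_security, the example feature labels, and the representation of packets as dicts are hypothetical.

```python
# Hypothetical attack-behaviour feature library stored on the processing node.
ATTACK_FEATURES = {"sql_injection_probe", "port_scan_burst", "c2_beacon_pattern"}

def extract_features(first_n_packets) -> set[str]:
    """Placeholder feature extraction over the first N packets of a session.

    A real implementation would inspect payloads and behaviour; here each
    packet is assumed to be a dict that may carry a precomputed "feature" label.
    """
    return {p["feature"] for p in first_n_packets if "feature" in p}

def detect_session_security(first_n_packets) -> dict:
    """Return the detection result sent to the drainage end (third scheme).

    The session is reported as unsafe if any extracted behaviour feature
    appears in the attack-behaviour feature library.
    """
    unsafe = bool(extract_features(first_n_packets) & ATTACK_FEATURES)
    return {"secure": not unsafe}

# Example: leading packets showing a port-scan-like burst make the session unsafe,
# so the drainage end will decide to drain its packets through the processing node.
result = detect_session_security([{"feature": "port_scan_burst"}, {}])
assert result == {"secure": False}
```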
In the fourth scheme, after identifying whether the network connection session is safe, the processing node further generates, according to the identified result of whether it is safe, a result of whether to drain the network connection session, and sends this result to the drainage end; the drainage end accordingly determines whether to drain the data packets of the network connection session.
For example, the processing node stores an attack behavior feature library, and after receiving the first N data packets of the network connection session, the processing node performs feature detection on information contained in the first N data packets to obtain behavior features corresponding to the first N data packets; comparing the behavior characteristics corresponding to the first N data packets with an attack behavior characteristic library, and if the characteristics corresponding to the detected behavior characteristics exist in the attack behavior characteristic library, determining that the network connection session is unsafe by the processing node, thereby further generating a result of conducting drainage on each data packet of the network connection session; and the drainage end determines to drain each data packet of the network connection session after receiving a result of draining each data packet of the network connection session sent by the processing node.
If the processing node compares the behavior characteristics obtained by the characteristic detection of the information contained in the first N data packets with the attack behavior characteristic library, and determines that the characteristics corresponding to the detected behavior characteristics do not exist in the attack behavior characteristic library, the processing node determines that the network connection session is safe, and then the processing node further generates a result of not conducting drainage on each data packet of the network connection session and sends the result of not conducting drainage on each data packet of the network connection session to a drainage end; and after receiving the result of not conducting drainage on each data packet of the network connection session, the drainage end determines not to conduct drainage on each data packet of the network connection session.
The fifth scheme is that the processing node matches the first N data packets of the network connection session with the flow policy rules, and after a result of whether the data packets are matched is obtained, the result of whether the data packets are matched is directly sent to the drainage end; and the drainage end determines whether to drain each data packet of the network connection session according to the result of the matching.
For example, the processing node stores a bypassable traffic rule including bypassable IP addresses and account information, such as bypassable IP1 with account 1, IP1 with account 2, and IP1 with account 3. After receiving the first N data packets of the network connection session, the processing node parses the information contained in the first N data packets to obtain the access IP and account information corresponding to the network connection session; if the access IP is determined to be IP1 and the account information to be account 1, the processing node determines that the first N data packets match the bypass traffic rule and sends the matching result to the drainage end; after receiving the result that the first N data packets match the bypass traffic rule, the drainage end determines not to drain the data packets of the network connection session.
If, after receiving the first N data packets of the network connection session, the processing node parses the information contained in them to obtain the access IP and account information corresponding to the network connection session and determines that the access IP is not IP1 and/or the account information is not any of account 1, account 2 and account 3, the processing node determines that the first N data packets do not match the bypass traffic rule and sends the mismatch result to the drainage end; after receiving the result that the first N data packets do not match the bypass traffic rule, the drainage end determines to drain the data packets of the network connection session.
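The rule matching of the fifth scheme might look like the following sketch, assuming the bypassable traffic rule is a set of (IP, account) pairs and that the access IP and account are parsed from the first N packets; the names BYPASS_RULE and match_bypass_rule and the dict-based packet representation are illustrative assumptions.

```python
# Hypothetical bypassable-traffic rule on the processing node (fifth scheme):
# sessions from IP1 under account 1, 2 or 3 may bypass the processing node.
BYPASS_RULE = {("IP1", "account1"), ("IP1", "account2"), ("IP1", "account3")}

def match_bypass_rule(first_n_packets) -> dict:
    """Match the session's access IP and account against the bypass rule.

    Each packet is assumed to be a dict carrying "src_ip" and "account"
    parsed from the first N packets. The returned dict is the detection
    result sent to the drainage end: matched -> do not drain (bypass),
    not matched -> drain.
    """
    first = first_n_packets[0]
    matched = (first["src_ip"], first["account"]) in BYPASS_RULE
    return {"matched": matched}

# Example: IP1/account1 matches, so the drainage end will not drain the session;
# IP2/account1 does not match, so the session's packets will be drained to the node.
assert match_bypass_rule([{"src_ip": "IP1", "account": "account1"}]) == {"matched": True}
assert match_bypass_rule([{"src_ip": "IP2", "account": "account1"}]) == {"matched": False}
```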
In the sixth scheme, the processing node matches the first N data packets of the network connection session with the traffic policy rule, and further generates a result of whether to drain the network connection session according to the result of whether to match after the result of whether to match is obtained, and sends the result of whether to drain the network connection session according to the result of whether to match to the drainage end, and the drainage end determines whether to drain each data packet of the network connection session.
For example, the processing node stores a bypassable traffic rule including bypassable IP addresses and account information, such as bypassable IP1 with account 1, IP1 with account 2, and IP1 with account 3. After receiving the first N data packets of the network connection session, the processing node parses the information contained in the first N data packets to obtain the access IP and account information corresponding to the network connection session; if the access IP is determined to be IP1 and the account information to be account 1, the processing node determines that the first N data packets match the bypass traffic rule, then further generates a determination result of not draining the network connection session and sends it to the drainage end; the drainage end receives the determination result of not draining the network connection session and determines not to drain the data packets of the network connection session.
If the processing node receives the first N data packets of the network connection session, analyzing information contained in the first N data packets to obtain access IP and account information corresponding to the network connection session, if the access IP is judged to be non-IP 1 and/or the account information is not any one of account 1, account 2 and account 3, the processing node determines that the first N data packets are not matched with the bypass flow rule, then further generates a determination result for conducting drainage on the network connection session, and sends the determination result for conducting drainage on the network connection session to a drainage end; and the drainage end receives a determination result of drainage of the network connection session and determines to drain each data packet of the network connection session.
In the embodiment of the application, the processing node detects information contained in the first N data packets of the network connection session to generate detection results of the first N data packets, and sends the detection results to the drainage end, so that the drainage end can judge whether to drain each data packet of the network connection session according to the detection results; under the condition that each data packet of the network connection session is not drained according to the detection result, each data packet of the network connection session can be directly sent to the target server by the drainage end, so that the flow introduced into the processing node can be reduced to a certain extent, and the problem that the user cannot surf the internet due to the blocking of the outlet IP address of the processing node is solved.
Fig. 6 is a flow chart diagram of a processing method of a network connection session, provided in an embodiment of the present application, applied to a drainage end, as shown in fig. 6, where the method includes the following steps:
S601: Transmitting the first N data packets of the network connection session to the processing node;
S602: Receiving at least the detection result sent by the processing node; the detection result includes indication information that the first N data packets match the bypass traffic rule;
S603: Determining, according to the detection result, not to drain the data packets of the network connection session.
In this embodiment of the present application, after receiving a network connection session sent by a user terminal, a drainage end sends first N data packets of the network connection session to a processing node.
The processing node matches the first N data packets with the bypass flow rule, and after the result of matching the first N data packets with the bypass flow rule is obtained, the result of matching the first N data packets with the bypass flow rule is sent to the drainage end, and after the drainage end receives the result of matching the first N data packets sent by the processing node with the bypass flow rule, it is determined that each data packet of the network connection session is not drained.
Because the processing node determines that all the data packets of the network connection session do not need to be drained, namely, it is determined that all the data packets of the network connection session can be directly sent to the target server by the drainage end, the processing node needs to resend the first N data packets of the network connection session to the drainage end, and the drainage end resends the first N data packets to the target server.
Here, as an optional implementation manner, after receiving the first N data packets of the network connection session sent by the user terminal, the drainage end may perform buffering and mirroring on the first N data packets to obtain buffered first N data packets and mirrored first N data packets; and then, the drainage end sends the first N data packets of the mirror image to the processing node, the processing node detects the first N data packets of the mirror image to obtain a detection result, and after the processing node sends the detection result to the drainage end, if the drainage end judges that each data packet of the network connection session is not drained according to the detection result, the drainage end can directly send the cached first N data packets and other data packets after the received first N data packets of the network connection session to the target server. In this embodiment, the processing node is not required to send the first N data packets to the drainage end, the original first N data packets are not required to be modified, and the drainage end directly sends the first N data packets to the target server, so that the problem of IP exit change does not exist, that is, the original access IP seen by the target server is the user terminal, but not the processing node, and the condition of access blocking does not exist.
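The cache-and-mirror option can be sketched as follows; the callables send_to_processing_node, send_to_target_server and wait_for_result, and the shape of the detection result, are assumptions made only for illustration.

```python
import copy

def handle_first_n(first_n_packets, send_to_processing_node,
                   send_to_target_server, wait_for_result):
    """Sketch of the cache-and-mirror option at the drainage end.

    The original first N packets are cached locally while mirror copies are
    sent to the processing node for detection. If the result says "do not
    drain", the cached originals (not copies returned by the node) go
    straight to the target server, so the target server still sees the user
    terminal's original source IP and there is no egress IP change.
    """
    cached = list(first_n_packets)                 # cached originals
    mirrored = [copy.deepcopy(p) for p in cached]  # mirror copies for detection

    send_to_processing_node(mirrored)
    result = wait_for_result()                     # detection result from the node

    if result.get("drain"):
        # Drainage mode: subsequent packets of the session are sent to the
        # processing node; it already holds mirror copies of the first N.
        return "drain"

    # Bypass mode: forward the cached originals directly to the target server.
    for packet in cached:
        send_to_target_server(packet)
    return "bypass"
```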
According to the technical scheme, the processing node detects information contained in the first N data packets of the network connection session, generates detection results of the first N data packets, and sends the detection results to the drainage end, so that the drainage end can judge whether to drain all the data packets of the network connection session according to the detection results; under the condition that each data packet of the network connection session is not drained according to the detection result, each data packet of the network connection session can be directly sent to the target server by the drainage end, so that the flow introduced into the processing node can be reduced to a certain extent, and the problem that the user cannot surf the internet due to the blocking of the outlet IP address of the processing node is solved.
Fig. 7 is a schematic diagram of a structural composition of a data processing apparatus according to an embodiment of the present application, which is applied to a processing node, where the apparatus includes:
a first receiving unit 701, configured to receive first N data packets of a network connection session sent by a drainage end; the network connection session is a connection session sent by the user terminal and used for accessing the target server, and N is an integer greater than or equal to 1;
a detecting unit 702, configured to detect the first N data packets to determine a detection result;
A first sending unit 703, configured to send at least the detection result to the drainage end, so that the drainage end determines whether to drain each data packet of the network connection session according to the detection result; each data packet at least comprises other data packets after the first N data packets.
In one embodiment, the detecting unit 702 is configured to: identifying application information corresponding to the first N data packets of the network connection session according to the characteristic information contained in the first N data packets; the detection result comprises the application information;
or,
identifying application information corresponding to the first N data packets of the network connection session according to the characteristic information contained in the first N data packets; comparing the application information with preset drainage information, and determining whether the application information is drained; the detection result comprises a result of whether the application information is drained.
In one embodiment, the detecting unit 702 is configured to: performing security detection on the first N data packets to obtain a result of whether the data packets are secure; the detection result comprises the result of whether safety is ensured or not;
or,
performing security detection on the first N data packets to obtain a result of whether the data packets are secure; determining whether to drain each data packet of the network connection session according to the result of whether to be safe, and generating a first determination result; the detection result includes the first determination result.
In one embodiment, the detecting unit 702 is configured to: matching the information contained in the first N data packets with a flow strategy rule to obtain a matching result; the detection result comprises a matching result;
or,
matching the information contained in the first N data packets with a flow policy rule to obtain a matching result, determining whether to drain each data packet of the network connection session according to the matching result, and generating a second determination result; the detection result includes the second determination result.
In one embodiment, each data packet of the network session connection includes at least one of the following dimensions: tenant, user, network protocol, internet surfing behavior, internet protocol IP address, domain name, application information.
Those skilled in the art will appreciate that the implementation of the functions of the units in the data processing apparatus shown in fig. 7 can be understood with reference to the foregoing description of the data processing method. The functions of the respective units in the data processing apparatus shown in fig. 7 may be realized by a program running on a processor or by a specific logic circuit.
Fig. 8 is a schematic diagram II of the structural composition of the data processing apparatus provided in the embodiment of the present application, where the data processing apparatus is applied to a drainage end, and the apparatus includes:
A second sending unit 801, configured to send first N data packets of a network connection session to a processing node; the network connection session is a connection session sent by the user terminal and used for accessing the target server, and N is an integer greater than or equal to 1;
a second receiving unit 802, configured to at least receive the detection result sent by the processing node;
a determining unit 803, configured to determine whether to drain each data packet of the network connection session according to the detection result; each data packet at least comprises other data packets after the first N data packets.
In one embodiment, the apparatus further comprises:
a third sending unit, configured to send, if it is determined according to the detection result that each data packet of the network connection session is drained, other data packets after the first N data packets of the network connection session to the processing node; and if the data packets of the network connection session are not drained according to the detection result, sending the first N data packets of the network connection session and other data packets after the first N data packets to a target server.
Those skilled in the art will appreciate that the implementation of the functions of the units in the data processing apparatus shown in fig. 8 can be understood with reference to the foregoing description of the data processing method. The functions of the respective units in the data processing apparatus shown in fig. 8 may be realized by a program running on a processor or by a specific logic circuit.
The embodiment of the application also provides electronic equipment. Fig. 9 is a schematic hardware structure of an electronic device according to an embodiment of the present application, as shown in fig. 9, where the electronic device includes: a communication component 903 for data transmission, at least one processor 901 and a memory 902 for storing a computer program capable of running on the processor 901. The various components in the terminal are coupled together by a bus system 904. It is appreciated that the bus system 904 is used to facilitate connected communications between these components. The bus system 904 includes a power bus, a control bus, and a status signal bus in addition to a data bus. But for clarity of illustration, the various buses are labeled as bus system 904 in fig. 9.
Wherein the processor 901 performs at least the steps of the method shown in fig. 2, 3, 5 or 6 when executing the computer program.
It is to be appreciated that the memory 902 can be either volatile memory or nonvolatile memory, and can include both volatile and nonvolatile memory. Wherein the nonvolatile Memory may be Read Only Memory (ROM), programmable Read Only Memory (PROM, programmable Read-Only Memory), erasable programmable Read Only Memory (EPROM, erasable Programmable Read-Only Memory), electrically erasable programmable Read Only Memory (EEPROM, electrically Erasable Programmable Read-Only Memory), magnetic random access Memory (FRAM, ferromagnetic random access Memory), flash Memory (Flash Memory), magnetic surface Memory, optical disk, or compact disk Read Only Memory (CD-ROM, compact Disc Read-Only Memory); the magnetic surface memory may be a disk memory or a tape memory. The volatile memory may be random access memory (RAM, random Access Memory), which acts as external cache memory. By way of example, and not limitation, many forms of RAM are available, such as static random access memory (SRAM, static Random Access Memory), synchronous static random access memory (SSRAM, synchronous Static Random Access Memory), dynamic random access memory (DRAM, dynamic Random Access Memory), synchronous dynamic random access memory (SDRAM, synchronous Dynamic Random Access Memory), double data rate synchronous dynamic random access memory (ddr SDRAM, double Data Rate Synchronous Dynamic Random Access Memory), enhanced synchronous dynamic random access memory (ESDRAM, enhanced Synchronous Dynamic Random Access Memory), synchronous link dynamic random access memory (SLDRAM, syncLink Dynamic Random Access Memory), direct memory bus random access memory (DRRAM, direct Rambus Random Access Memory). The memory 902 described in embodiments of the present application is intended to comprise, without being limited to, these and any other suitable types of memory.
The method disclosed in the embodiments of the present application may be applied to the processor 901 or implemented by the processor 901. Processor 901 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor 901 or instructions in the form of software. The processor 901 may be a general purpose processor, DSP, or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like. The processor 901 may implement or perform the methods, steps, and logic blocks disclosed in embodiments of the present application. The general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the method disclosed in the embodiments of the present application may be directly embodied in a hardware decoding processor or implemented by a combination of hardware and software modules in the decoding processor. The software modules may be located in a storage medium in a memory 902 and the processor 901 reads information in the memory 902, in combination with its hardware, performing the steps of the method as described above.
In an exemplary embodiment, the electronic device may be implemented by one or more application specific integrated circuits (ASIC, Application Specific Integrated Circuit), DSPs, programmable logic devices (PLD, Programmable Logic Device), complex programmable logic devices (CPLD, Complex Programmable Logic Device), FPGAs, general purpose processors, controllers, MCUs, microprocessors, or other electronic components, for performing the aforementioned data processing method.
Embodiments of the present application also provide a computer readable storage medium having a computer program stored thereon, wherein the program is at least for performing the steps of the method shown in fig. 2, 3, 5 or 6 when the program is executed by a processor. The computer readable storage medium may be a memory in particular. The memory may be the memory 902 shown in fig. 9.
The technical solutions described in the embodiments of the present application may be arbitrarily combined without any conflict.
In the several embodiments provided in the present application, it should be understood that the disclosed method and intelligent device may be implemented in other manners. The device embodiments described above are only illustrative; for example, the division of the units is only a logical function division, and there may be other divisions in practice, such as: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be electrical, mechanical, or in other forms.
The units described as separate units may or may not be physically separate, and units displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may all be integrated in one processing unit, or each unit may be used separately as one unit, or two or more units may be integrated in one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application.

Claims (10)

1. A data processing method, for application to a processing node, the method comprising:
Receiving the first N data packets of the network connection session sent by the drainage end; the network connection session is a connection session sent by the user terminal and used for accessing the target server, and N is an integer greater than or equal to 1;
detecting the first N data packets to determine detection results;
at least sending the detection result to the drainage end, so that the drainage end determines whether to drain each data packet of the network connection session according to the detection result; each data packet at least comprises other data packets after the first N data packets.
2. The method of claim 1, wherein detecting the first N data packets to determine a detection result comprises:
identifying application information corresponding to the first N data packets of the network connection session according to the characteristic information contained in the first N data packets; the detection result comprises the application information;
or,
identifying application information corresponding to the first N data packets of the network connection session according to the characteristic information contained in the first N data packets; comparing the application information with preset drainage information, and determining whether the application information is drained; the detection result comprises a result of whether the application information is drained.
3. The method of claim 1, wherein detecting the first N data packets to determine a detection result comprises:
performing security detection on the first N data packets to obtain a result of whether the data packets are secure; the detection result comprises the result of whether safety is ensured or not;
or,
performing security detection on the first N data packets to obtain a result of whether the data packets are secure; determining whether to drain each data packet of the network connection session according to the result of whether to be safe, and generating a first determination result; the detection result includes the first determination result.
4. The method of claim 1, wherein detecting the first N data packets to determine a detection result comprises:
matching the information contained in the first N data packets with a flow strategy rule to obtain a matching result; the detection result comprises a matching result;
or,
matching the information contained in the first N data packets with a flow policy rule to obtain a matching result, determining whether to drain each data packet of the network connection session according to the matching result, and generating a second determination result; the detection result includes the second determination result.
5. A data processing method, applied to a drainage end, the method comprising:
transmitting the first N data packets of the network connection session to the processing node; the network connection session is a connection session sent by the user terminal and used for accessing the target server, and N is an integer greater than or equal to 1;
receiving at least the detection result sent by the processing node;
determining whether to drain each data packet of the network connection session according to the detection result; each data packet at least comprises other data packets after the first N data packets.
6. The method of claim 5, wherein the method further comprises:
if the data packets of the network connection session are determined to be drained according to the detection result, other data packets after the first N data packets of the network connection session are sent to the processing node;
and if the data packets of the network connection session are not drained according to the detection result, sending the first N data packets of the network connection session and other data packets after the first N data packets to a target server.
7. A data processing apparatus for application to a processing node, the apparatus comprising:
The first receiving unit is used for receiving the first N data packets of the network connection session sent by the drainage end; the network connection session is a connection session sent by the user terminal and used for accessing the target server, and N is an integer greater than or equal to 1;
the detection unit is used for detecting the first N data packets to determine detection results;
the first sending unit is used for sending the detection result to the drainage end at least, so that the drainage end determines whether to drain each data packet of the network connection session according to the detection result; each data packet at least comprises other data packets after the first N data packets.
8. A data processing apparatus for use at a drainage port, the apparatus comprising:
a second sending unit, configured to send first N data packets of the network connection session to the processing node; the network connection session is a connection session sent by the user terminal and used for accessing the target server, and N is an integer greater than or equal to 1;
a second receiving unit, configured to at least receive the detection result sent by the processing node;
the determining unit is used for determining whether to drain each data packet of the network connection session according to the detection result; each data packet at least comprises other data packets after the first N data packets.
9. An electronic device comprising a memory having stored thereon computer executable instructions, and a processor which when executed performs the method of any of claims 1 to 4 or claims 5 to 6.
10. A storage medium having stored thereon a computer program, which when executed by a processor performs the steps of the method of any of claims 1 to 4 or the steps of the method of any of claims 5 to 6.