CN116961974A - Network anomaly detection method, device and storage medium - Google Patents

Network anomaly detection method, device and storage medium

Info

Publication number
CN116961974A
CN116961974A (application CN202211476980.3A)
Authority
CN
China
Prior art keywords
rule
candidate
service
target
index value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211476980.3A
Other languages
Chinese (zh)
Inventor
周鹏飞
张凯
杨泽
郝立扬
李靖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202211476980.3A priority Critical patent/CN116961974A/en
Publication of CN116961974A publication Critical patent/CN116961974A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/14Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1408Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
    • H04L63/1425Traffic logging, e.g. anomaly detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiments of the present application disclose a network anomaly detection method, device and storage medium, which can be applied to scenarios such as cloud technology, network security, artificial intelligence, intelligent transportation, and assisted driving. The method can reduce labor cost and improve both detection performance and detection coverage. The method comprises the following steps: acquiring access flow information of each service device related to a target service; selecting at least one candidate field from the service fields of the access flow information of each service device, where each candidate field is used for reflecting the portrait features of a service device suspected of abnormal behavior; generating at least one candidate rule and a device set corresponding to each candidate rule based on each candidate field; selecting at least one target rule from the at least one candidate rule based on the evaluation index value of the device set corresponding to each candidate rule; and when the service fields in the access flow information of a service device match at least one target rule, determining that the service device is a device with abnormal behavior.

Description

Network anomaly detection method, device and storage medium
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a network anomaly detection method, a network anomaly detection device and a storage medium.
Background
In recent years, Internet services have grown explosively, and accompanying abnormal behaviors such as illegal intrusion have intensified, seriously affecting the revenue and user experience of Internet products.
In conventional schemes, detecting whether a device is involved in abnormal behavior generally requires detection rules set from expert experience, with detection performed manually. For example, based on the empirical rule that a device frequently switches among multiple Internet Protocol (IP) addresses in a short time, an analyst must manually identify whether device A exhibits abnormal behavior. However, this approach consumes substantial labor cost and can only identify abnormal behaviors matching detection rules within the scope of expert experience, so detection coverage is low and detection performance is poor.
Disclosure of Invention
The embodiments of the present application provide a network anomaly detection method, device and storage medium, which can automatically detect whether a service device is involved in abnormal behavior, not only saving labor cost but also improving detection performance and detection coverage.
In a first aspect, an embodiment of the present application provides a network anomaly detection method. The method comprises the following steps: acquiring access flow information of each service device related to a target service, wherein the access flow information of each service device is used for indicating an access condition when the service device accesses the target service; selecting at least one candidate field from the service fields of the access flow information of each service device, wherein each candidate field is used for reflecting the portrait features of a service device suspected of abnormal behavior; generating at least one candidate rule and a device set corresponding to each candidate rule based on each candidate field, wherein each candidate rule is used for screening each service device, and the device set corresponding to each candidate rule consists of the service devices satisfying the corresponding candidate rule; selecting at least one target rule from the at least one candidate rule based on an evaluation index value of the device set corresponding to each candidate rule, wherein the evaluation index value is used for indicating the degree of abnormality of the candidate rule corresponding to the device set; and when the service fields in the access flow information of a service device match the at least one target rule, determining that the service device is a device with abnormal behavior.
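As a concrete illustration of the final matching step of the first aspect, the following minimal Python sketch flags a service device whose service fields satisfy at least one target rule. Representing a target rule as a {field: value} conjunction is an assumption of this sketch; the claims do not fix a concrete rule format.

```python
def is_anomalous(device_record, target_rules):
    # A target rule is modeled as a {field: value} conjunction (an assumption;
    # the text only states that service fields are "matched" against target rules).
    return any(
        all(device_record.get(field) == value for field, value in rule.items())
        for rule in target_rules
    )
```

A device matching every field-value pair of any one target rule is reported as a device with abnormal behavior; a device matching no rule passes.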
In a second aspect, an embodiment of the present application provides an anomaly detection apparatus. The anomaly detection apparatus includes an acquisition unit and a processing unit. The acquisition unit is configured to acquire access flow information of each service device related to a target service, where the access flow information of each service device is used for indicating the access condition of the service device when accessing the target service. The processing unit is configured to: select at least one candidate field from the service fields of the access flow information of each service device, where each candidate field is used for reflecting the portrait features of a service device suspected of abnormal behavior; generate at least one candidate rule and a device set corresponding to each candidate rule based on each candidate field, where each candidate rule is used for screening each service device, and the device set corresponding to each candidate rule consists of the service devices satisfying the corresponding candidate rule; select at least one target rule from the at least one candidate rule based on an evaluation index value of the device set corresponding to each candidate rule, where the evaluation index value is used for indicating the degree of abnormality of the candidate rule corresponding to the device set; and when the service fields in the access flow information of a service device match the at least one target rule, determine that the service device is a device with abnormal behavior.
In some optional embodiments, the acquiring unit is configured to acquire the access flow information in a preset time period from the access flow information of each service device, and the processing unit is configured to determine each service field in the access flow information in the preset time period as the at least one candidate field.
In other optional embodiments, the processing unit is configured to delete, from the service fields of the access flow information of each service device, service fields matching a preset field and/or service fields whose values are null, so as to obtain the at least one candidate field, where the preset field is used to indicate that the corresponding service device does not exhibit the abnormal behavior.
In other alternative embodiments, the processing unit is configured to: calculate a first evaluation index value of the device set corresponding to each candidate field, where the first evaluation index value is used for indicating the degree of abnormality of the corresponding candidate field; when the first evaluation index value satisfies a first preset condition, determine the corresponding candidate field as a target field; generate at least one initial rule according to the at least one target field; calculate a second evaluation index value of the device set corresponding to each initial rule, where the second evaluation index value is used for indicating the degree of abnormality of the corresponding initial rule; and when the second evaluation index value satisfies a second preset condition, determine the corresponding initial rule as a candidate rule, and determine the device set corresponding to the initial rule as the device set corresponding to the candidate rule.
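The two-stage generation in this embodiment (candidate fields → target fields → initial rules → candidate rules) can be sketched as follows. The evaluation index here is a hypothetical blacklist proportion, and the two thresholds stand in for the first and second preset conditions; all names, thresholds and data shapes are illustrative assumptions rather than the patented implementation.

```python
from itertools import combinations

BLACKLIST = {"dev3", "dev4"}  # hypothetical set of known-bad device IDs

def anomaly_score(device_set):
    # Hypothetical evaluation index: the share of known-bad devices in the set
    # (the text only says the index reflects the degree of abnormality).
    if not device_set:
        return 0.0
    return sum(d["id"] in BLACKLIST for d in device_set) / len(device_set)

def generate_candidate_rules(devices, fields, field_thr=0.5, rule_thr=0.5, max_len=2):
    # Stage 1: a candidate field value becomes a "target" pair if its device
    # set's first evaluation index clears field_thr.
    target_pairs = []
    for field in fields:
        for value in sorted({d[field] for d in devices}):
            subset = [d for d in devices if d[field] == value]
            if anomaly_score(subset) >= field_thr:
                target_pairs.append((field, value))
    # Stage 2: combine target pairs into conjunctive initial rules and keep
    # those whose second evaluation index also clears rule_thr.
    candidate_rules = []
    for r in range(1, max_len + 1):
        for combo in combinations(target_pairs, r):
            if len({f for f, _ in combo}) < r:
                continue  # skip combinations that reuse a field
            rule = dict(combo)
            subset = [d for d in devices
                      if all(d[f] == v for f, v in rule.items())]
            if subset and anomaly_score(subset) >= rule_thr:
                candidate_rules.append((rule, sorted(d["id"] for d in subset)))
    return candidate_rules
```

Each returned entry pairs a candidate rule with its device set, mirroring the correspondence described in the embodiment.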
In other alternative embodiments, the processing unit is configured to: calculate a third evaluation index value of the device set corresponding to a first candidate rule and a fourth evaluation index value of a first device set corresponding to a second candidate rule, where the first candidate rule and the second candidate rule are any two different rules in the at least one candidate rule, the third evaluation index value is used for indicating the degree of abnormality of the first candidate rule, and the fourth evaluation index value is used for indicating the degree of abnormality of the second candidate rule; and when the third evaluation index value is larger than the fourth evaluation index value and satisfies a third preset condition, determine the first candidate rule as the target rule.
In other optional embodiments, the processing unit is further configured to, after determining the first candidate rule as the target rule, remove from the first device set the service devices it shares with the device set corresponding to the first candidate rule, obtaining a second device set related to the second candidate rule; calculate a fifth evaluation index value of the second device set; and delete the second candidate rule when the fifth evaluation index value does not satisfy the third preset condition.
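One possible reading of this selection-and-rejection embodiment, sketched in Python: candidate rules are considered in descending order of their device sets' evaluation index, devices already covered by a selected rule are rejected from later rules' sets, and a rule whose residual set no longer satisfies the condition is deleted. The greedy global ranking and the scoring function are assumptions; the text itself describes pairwise comparison.

```python
def select_target_rules(candidate_rules, blacklist, thr=0.5):
    # candidate_rules: list of (rule, device_ids); blacklist: known-bad device IDs.
    def score(ids):  # hypothetical evaluation index of a device set
        return sum(i in blacklist for i in ids) / len(ids) if ids else 0.0

    ranked = sorted(candidate_rules, key=lambda rv: score(rv[1]), reverse=True)
    targets, covered = [], set()
    for rule, ids in ranked:
        residual = [i for i in ids if i not in covered]  # reject shared devices
        if residual and score(residual) >= thr:          # re-evaluate residual set
            targets.append(rule)
            covered.update(ids)
        # otherwise the rule is deleted: its residual set fails the condition
    return targets
```

In the worked example below, a broad rule whose anomalous devices are already covered by a sharper rule is dropped, since its residual set contains only benign devices.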
In other optional embodiments, the processing unit is further configured to delete a target rule when detecting that the validity of that target rule has expired.
In other alternative embodiments, the processing unit is configured to: calculating the survival time of each target rule; and deleting the corresponding target rule after the survival time of the target rule expires.
In other alternative embodiments, the obtaining unit is configured to obtain a complaint proportion of each target rule, and the processing unit is configured to delete the corresponding target rule when its complaint proportion is larger than a preset threshold.
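The rule-maintenance embodiments above (survival-time expiry and complaint-proportion checks) might be combined into a single pruning pass, as in this sketch; the bookkeeping fields and the 1% complaint threshold are illustrative assumptions.

```python
import time

def prune_target_rules(rules, now=None, max_complaint_ratio=0.01):
    # Each rule carries hypothetical bookkeeping: creation time, a time-to-live
    # ("survival time"), and counters for matches ("hits") and user complaints.
    now = time.time() if now is None else now
    kept = []
    for r in rules:
        expired = now - r["created"] > r["ttl_seconds"]
        complaint_ratio = r["complaints"] / r["hits"] if r["hits"] else 0.0
        if not expired and complaint_ratio <= max_complaint_ratio:
            kept.append(r)  # rule remains valid
    return kept
```

A rule is removed either when its survival time has elapsed or when too many of the devices it flagged were complained about (suggesting false positives).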
A third aspect of an embodiment of the present application provides an anomaly detection apparatus, including: a processor, an input/output (I/O) interface, and a memory. The memory is configured to store program instructions. The processor is configured to execute the program instructions in the memory to perform the anomaly detection method corresponding to the implementation manners of the first aspect.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the method corresponding to the embodiments of the first aspect described above.
A fifth aspect of the embodiments of the present application provides a computer program product comprising instructions which, when run on a computer or processor, cause the computer or processor to perform the method of the embodiments of the first aspect described above.
From the above technical solutions, the embodiment of the present application has the following advantages:
in the embodiments of the present application, because the access flow information of each service device can indicate the access condition of the corresponding service device when accessing the target service, once the access flow information of each service device related to the target service is obtained, at least one candidate field can be selected from the service fields of that access flow information, each candidate field reflecting the portrait features of a service device suspected of abnormal behavior. Then, at least one candidate rule and a device set corresponding to each candidate rule can be determined according to each candidate field. Each candidate rule can be used to screen the service devices, and the device set corresponding to each candidate rule consists of the service devices satisfying that rule. In addition, because the evaluation index value indicates the degree of abnormality of the candidate rule corresponding to a device set, at least one target rule can be selected from the at least one candidate rule according to the evaluation index value of the device set corresponding to each candidate rule. Further, when the service fields in the access flow information of a service device match at least one target rule, the service device can be determined to be a device with abnormal behavior. In this way, the candidate rules and the target rules are derived directly or indirectly from the access flow information of the service devices related to the target service, rather than configured from expert experience, so the method can also detect abnormal behaviors of devices beyond those covered by expert-defined detection rules, improving detection coverage.
Moreover, the application can automatically generate the candidate rules and target rules and automatically detect whether a service device is involved in abnormal behavior, without manual participation, thereby saving labor cost and improving detection performance.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a system architecture provided by an embodiment of the present application;
FIG. 2 is a flowchart of a network anomaly detection method provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of a candidate rule generation flow provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of device set intersection provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of a target rule selection process provided by an embodiment of the present application;
FIG. 6 is an overall flowchart of a network anomaly detection method provided by an embodiment of the present application;
FIG. 7 is a schematic structural diagram of an anomaly detection apparatus provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of another structure of an anomaly detection apparatus provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of the hardware structure of an anomaly detection apparatus provided by an embodiment of the present application.
Detailed Description
The embodiments of the present application provide a network anomaly detection method, device and storage medium, which can automatically detect whether a service device is involved in abnormal behavior, not only saving labor cost but also improving detection performance and detection coverage.
It will be appreciated that in the specific embodiments of the present application, related data such as user information, personal data of a user, etc. are involved, and when the above embodiments of the present application are applied to specific products or technologies, user permission or consent is required, and the collection, use and processing of related data is required to comply with relevant laws and regulations and standards of relevant countries and regions.
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments described herein may be implemented in other sequences than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The network anomaly detection method provided by the embodiment of the application is realized based on artificial intelligence (artificial intelligence, AI). Artificial intelligence is the theory, method, technique and application system that uses a digital computer or a digital computer-controlled machine to simulate, extend and expand human intelligence, sense the environment, acquire knowledge and use the knowledge to obtain optimal results. In other words, artificial intelligence is an integrated technology of computer science that attempts to understand the essence of intelligence and to produce a new intelligent machine that can react in a similar way to human intelligence. Artificial intelligence, i.e. research on design principles and implementation methods of various intelligent machines, enables the machines to have functions of sensing, reasoning and decision.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, involving both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. Artificial intelligence software technology mainly comprises directions such as computer vision, speech processing, natural language processing, and machine learning/deep learning.
In the embodiments of the present application, the artificial intelligence techniques mainly include the above-mentioned directions of machine learning and the like. For example, deep learning (deep learning) in Machine Learning (ML) may be involved, including artificial neural networks, and the like.
The network anomaly detection method provided by the application can be applied to an anomaly detection apparatus with data processing capability, such as a server. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (CDN), big data, and artificial intelligence platforms; the application is not particularly limited in this regard.
The abnormality detection device mentioned above may have a machine learning capability. Machine Learning (ML) is a multi-domain interdisciplinary, involving multiple disciplines such as probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, etc. It is specially studied how a computer simulates or implements learning behavior of a human to acquire new knowledge or skills, and reorganizes existing knowledge structures to continuously improve own performance. Machine learning is the core of artificial intelligence, a fundamental approach to letting computers have intelligence, which is applied throughout various areas of artificial intelligence. Machine learning and deep learning typically involve neural networks and the like.
In related schemes, detecting whether a device is involved in abnormal behavior consumes a large amount of labor cost, and only abnormal behaviors matching detection rules within the scope of expert experience can be identified, so detection coverage is low and detection performance is poor.
Therefore, in order to solve the above-mentioned technical problems, the embodiment of the present application provides a method for detecting network anomalies. The method for detecting network anomaly can be applied to the system architecture schematic diagram shown in fig. 1. As shown in fig. 1, the system architecture may comprise a management device and at least one service device, such as service device 1, service device 2, service device 3, etc. Each service device can access the same target service, and further access flow information in the access process is generated. Thus, each service device transmits the respective access flow information to the management device, and the management device selects at least one candidate field according to the service field of the access flow information of each service device. And because each candidate field can represent the portrait characteristic of each service device when the service device is suspected to have abnormal behavior, the management device determines at least one candidate rule and a device set corresponding to each candidate rule. Further, since the evaluation index value can indicate the degree of abnormality of the candidate rule corresponding to the device set, the management device can determine at least one target rule from at least one candidate rule based on the evaluation index value of the device set corresponding to each candidate rule, and further perform abnormality detection on each service device.
The anomaly detection method provided by the embodiment of the application can be applied to various application scenes such as cloud technology, network security, artificial intelligence, intelligent transportation, auxiliary driving and the like. The described management device may be a server or the like. A service device may be understood as a device such as a terminal device that has access to a target service. The mentioned terminal devices may include, but are not limited to, smart phones, desktop computers, notebook computers, tablet computers, smart speakers, car devices, smart watches, etc. In addition, the mentioned management device and service device may be directly connected or indirectly connected by means of wired communication or wireless communication, etc., and the present application is not particularly limited.
In order to facilitate understanding of the technical scheme of the present application, a method for detecting an abnormality provided by the embodiment of the present application is described below with reference to the accompanying drawings.
Fig. 2 shows a flowchart of a network anomaly detection method according to an embodiment of the present application. As shown in fig. 2, the method for detecting network anomaly may include the following steps:
201. and acquiring access flow information of each service device related to the target service, wherein the access flow information of each service device is used for indicating the access condition of the corresponding service device when the corresponding service device accesses the target service.
In this example, each service device may access the same target service, thereby generating access flow information in the access process. After generating its access flow information, each service device may send it to the management device. In this way, the management device can acquire the access flow information of each service device related to the target service.
The access flow information may include, but is not limited to, device information of the corresponding service device, local operating environment information of the service device, access times, etc. The device information of the service device may include, but is not limited to, content such as a device identity (ID) and a device model. The local operating environment information may include, but is not limited to, the browser version, operating system version, etc. used by the service device when accessing the target service. The access times may include, but are not limited to, information such as the time when the service device first accessed the target service and the latest time the target service was accessed. For example, as a schematic description, Table 1 shows the service fields in access flow information, which can be understood with specific reference to the following Table 1:
TABLE 1
As shown in Table 1, the access flow information may be reflected by service fields such as device ID, model, browser version, operating system version, first access time, ..., and latest access time. For example, for service device 1, its access flow information may include the following: model m, browser version n, operating system version 1, a first access time of 1 January 2022, a latest access time of 5 April, and so on.
In addition, the described target services may include, but are not limited to, web pages, text, pictures, videos, applications, applets, and the like, and the embodiments of the application are not limited to the description.
202. At least one candidate field is selected from service fields of the access flow information of each service device, and each candidate field is used for reflecting portrait characteristics when each service device is suspected to have abnormal behaviors.
In this example, the management device, after receiving the access flow information sent by each service device, can determine at least one candidate field according to the access flow information of each service device. Moreover, each candidate field can reflect the portrait features of each business device when the business device is suspected to have abnormal behaviors to a certain extent.
The access flow information of each service device may span a long time and contain considerable redundant and non-key information. Based on this, in order to detect quickly and in time whether a service device is involved in abnormal behavior, the management device may filter the access flow information by time range, or by removing redundant service fields and service fields with null values, which can be understood with reference to the following three modes:
In mode (1), the management device may obtain the access flow information in a preset time period from the access flow information of each service device. Then, the management device determines each service field in the access flow information within the preset time period as at least one candidate field.
In mode (2), the management device may delete the service fields matching a preset field from the service fields of the access flow information of each service device, so as to obtain at least one candidate field.
In mode (3), the management device may delete the service fields with null values from the service fields of the access flow information of each service device, so as to obtain at least one candidate field.
The preset time period mentioned in mode (1) may be the K hours (K > 0) preceding the current access time, the day before the current access time, any given day, etc.; the embodiment of the present application is not limited in this respect. In addition, the preset field described in mode (2) indicates that the service device does not exhibit abnormal behavior. In other words, if a service field of the access flow information of a service device matches the preset field, that field does not involve malicious traces or abnormal behaviors.
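A minimal sketch applying the three filtering modes together; the record layout, field names and time representation are illustrative assumptions.

```python
def select_candidate_fields(records, window_start, window_end, preset_fields):
    # Mode (1): keep only access flow records inside the preset time window.
    in_window = [r for r in records
                 if window_start <= r["access_time"] <= window_end]
    candidates = set()
    for record in in_window:
        for field, value in record.items():
            if field == "access_time":
                continue
            if field in preset_fields:        # mode (2): field marked as benign
                continue
            if value is None or value == "":  # mode (3): null-valued field
                continue
            candidates.add(field)
    return sorted(candidates)
```

The surviving fields form the candidate fields passed to rule generation; fields outside the window, matching the preset list, or holding null values are dropped.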
Note that any one of the above modes (1) to (3) may be selected to determine the candidate fields; alternatively, any two modes, or all three modes together, may be used; the embodiment of the present application is not limited in this respect. For example, taking the case where the candidate fields are determined by modes (1) to (3) together, the management device applies modes (1), (2) and (3) to the access flow information of each service device, and the finally determined at least one candidate field can be understood with reference to the content shown in Table 2:
TABLE 2
F1 | F2 | F3 | F4
Device ID | Model | Browser version | Operating system version
As can be seen from table 2 above, the candidate fields may include a device ID, model, browser version, operating system version.
It should be noted that table 2 is merely an exemplary description, and in practical application, the determined candidate field may be another service field, which is not limited in the embodiment of the present application.
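As a concrete illustration of modes (1) to (3) applied together, the following Python sketch filters hypothetical access-flow records by a preset time window, drops fields matching a preset field list, and drops null-valued fields. The record contents, field names, and the `PRESET_FIELDS` set are illustrative assumptions, not taken from the embodiment.

```python
from datetime import datetime, timedelta

# Hypothetical access-flow records, one dict of service fields per access.
records = [
    {"device_id": "d1", "model": "model 1", "browser": "b1",
     "os": "os 1", "region": None, "time": datetime(2023, 1, 2, 10)},
    {"device_id": "d2", "model": "model 2", "browser": "b2",
     "os": "os 2", "region": None, "time": datetime(2023, 1, 1, 9)},
]

# Mode (2): fields assumed to never indicate abnormal behaviour.
PRESET_FIELDS = {"time"}

def candidate_fields(records, now, window_hours):
    """Apply modes (1)-(3) jointly to obtain at least one candidate field."""
    start = now - timedelta(hours=window_hours)  # mode (1): time window
    recent = [r for r in records if start <= r["time"] <= now]
    fields = set()
    for record in recent:
        for name, value in record.items():
            if name in PRESET_FIELDS:   # mode (2): drop preset fields
                continue
            if value is None:           # mode (3): drop null-valued fields
                continue
            fields.add(name)
    return fields

fields = candidate_fields(records, datetime(2023, 1, 2, 12), window_hours=24)
```

With the records above, only the first record falls inside the 24-hour window, and its non-null, non-preset fields become the candidate fields.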
203. And determining at least one candidate rule and a device set corresponding to each candidate rule based on each candidate field, wherein each candidate rule is used for screening each service device, and the device set corresponding to each candidate rule consists of the service devices meeting the corresponding candidate rule.
In this example, after selecting at least one candidate field from the service fields of the access flow information of each service device, the management device can further generate at least one candidate rule based on the candidate fields, and select the service devices that satisfy each candidate rule to form the corresponding device set. For example, the management device may first select, for a single candidate field (a single dimension), the service devices satisfying that field to form a device set, and then calculate an evaluation index value of the device set, such as a blacklist proportion, a whitelist proportion, or a complaint proportion, so that the evaluation index value indicates the abnormality degree of the candidate field corresponding to the device set. In this way, after traversing the device sets corresponding to the candidate fields and calculating their evaluation index values, the management device can reject any candidate field whose evaluation index value is lower than a threshold value, thereby reducing, through dimension reduction, the calculation amount in the subsequent rule generation process.
Illustratively, at least one candidate rule and a device set corresponding to each candidate rule are determined according to each candidate field in step 203, and the generation process of the candidate rule can be understood with reference to the process flow shown in fig. 3. As shown in fig. 3, the generation process may include at least the following steps:
s301, calculating a first evaluation index value of a device set corresponding to each candidate field, wherein the first evaluation index value is used for indicating the degree of abnormality of the corresponding candidate field.
In this example, after determining each candidate field, the management device may partition all the service devices according to each candidate field, where each value of a candidate field corresponds to one device set.
For example, taking the candidate fields shown in table 2 above, for a candidate field such as model, different service devices may use different models, such as model 1, model 2, ..., model m. Then, for model 1, all the service devices whose model is model 1 may be grouped together, thereby obtaining the device set a associated with model 1. Similarly, for the other model values, the device set associated with each model may be determined by referring to the example of model 1.
In addition, for other candidate fields, such as browser version, different service devices may use different browser versions, e.g., browser version 1, browser version 2, ..., browser version n. Different operating system versions may likewise be used, such as operating system version 1, operating system version 2, ..., operating system version k, which are not particularly limited in the embodiments of the present application. For the browser version and operating system version candidate fields, the device set associated with each browser version and each operating system version may be determined by referring to the example of model 1.
Thus, the management device may calculate the first evaluation index value of each device set after determining the device set associated with each candidate field. The first evaluation index value can indicate the degree of abnormality of the corresponding candidate field. The degree of anomaly described may also be understood as the degree of risk that the candidate field may cause the business device to be involved in anomalous behavior.
Illustratively, the first evaluation index value described may include one or more of a blacklist proportion, a whitelist proportion, a complaint proportion.
For example, taking the blacklist proportion and the whitelist proportion as the first evaluation index values, suppose the device set a of model 1 includes 20 service devices in total. In device set a, the management device counts the number of service devices that appear in the blacklist (for example, 1) and divides it by the total number of service devices in device set a, so as to obtain the blacklist proportion of device set a, i.e. 1/20 = 0.05. Likewise, the whitelist proportion of device set a may be solved by referring to the solving process of the blacklist proportion, giving 19/20 = 0.95.
It should be noted that, for the other candidate fields, the solving process of the first evaluation index value may be understood by referring to that of model 1, and is not described herein.
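The grouping of service devices by candidate field value and the solving of the blacklist and whitelist proportions can be sketched as follows; the device records and their blacklist/whitelist labels are illustrative assumptions.

```python
from collections import defaultdict

def group_by_field(devices, field):
    """Partition service devices into device sets, one per field value."""
    sets = defaultdict(list)
    for dev in devices:
        sets[dev[field]].append(dev)
    return sets

def list_ratios(device_set):
    """Blacklist and whitelist proportions of one device set."""
    total = len(device_set)
    black = sum(1 for d in device_set if d["label"] == "black")
    white = sum(1 for d in device_set if d["label"] == "white")
    return black / total, white / total

# 20 devices of model 1: 1 blacklisted, 19 whitelisted, as in the example.
devices = ([{"model": "model 1", "label": "black"}]
           + [{"model": "model 1", "label": "white"}] * 19)
device_sets = group_by_field(devices, "model")
black_ratio, white_ratio = list_ratios(device_sets["model 1"])
```

For the 20-device set above this reproduces the example's proportions of 0.05 and 0.95.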
S302, when the first evaluation index value meets a first preset condition, determining the corresponding candidate field as a target field.
In this example, after calculating the first evaluation index value of the device set corresponding to each candidate field, the management device judges whether the first evaluation index value meets a first preset condition, and if so, determines the corresponding candidate field as a target field. Illustratively, the first preset condition may be that the blacklist proportion is greater than or equal to a first threshold value and the whitelist proportion is less than or equal to a second threshold value; alternatively, the first preset condition may be that the blacklist proportion is greater than or equal to the first threshold value; alternatively, the first preset condition may be that the complaint proportion is greater than or equal to a third threshold value, which is not limited in the embodiment of the present application.
For example, as an illustrative description, if the first preset condition is that the blacklist ratio is greater than or equal to 0.1 and the whitelist ratio is less than or equal to 0.6, then whether the first evaluation index value of the device set corresponding to each candidate field meets the first preset condition may be understood by referring to the following table 3, namely:
TABLE 3
As can be seen from table 3, after comparing the blacklist proportion and the whitelist proportion of the device set of each candidate field with the first preset condition, it can be determined that the blacklist proportions and whitelist proportions of the device sets of model m, browser version n, and operating system version k each meet the first preset condition. Therefore, model m, browser version n, and operating system version k may be determined as the target fields.
It should be noted that the values of the blacklist proportion and the whitelist proportion of the device set of each candidate field described in table 3, for example, 0.05, 0.95, 0.35, 0.40, etc., are only illustrative, and may be other values in practical application, which is not limited in the embodiment of the present application. In addition, the first preset condition of a blacklist proportion not less than 0.1 and a whitelist proportion not more than 0.6 is merely an illustrative description; in practical application, the critical threshold value of the blacklist proportion may be a threshold value other than 0.1. Similarly, the threshold value of the whitelist proportion may be a value other than 0.6, which is not specifically limited herein.
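A minimal sketch of the screening in step S302, assuming the illustrative thresholds (0.1 and 0.6) and ratio values loosely modeled on table 3:

```python
def meets_first_condition(black_ratio, white_ratio,
                          black_min=0.1, white_max=0.6):
    """The example's first preset condition: blacklist proportion >= 0.1
    and whitelist proportion <= 0.6 (thresholds are illustrative)."""
    return black_ratio >= black_min and white_ratio <= white_max

# (blacklist proportion, whitelist proportion) per candidate field value;
# the numbers are illustrative, loosely following Table 3.
ratios = {
    "model 1": (0.05, 0.95),
    "model m": (0.35, 0.40),
    "browser version n": (0.30, 0.55),
    "operating system version k": (0.25, 0.50),
}
target_fields = [f for f, (b, w) in ratios.items()
                 if meets_first_condition(b, w)]
```

With these values, model 1 is discarded and the remaining three field values become target fields, mirroring the table 3 outcome.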
S303, generating at least one initial rule according to at least one target field.
In this example, after determining the target fields, the management device may combine them in the N-Gram dimension to generate initial rules for the corresponding N-Gram dimension. The N-Gram dimension can be understood as N dimensions in a multidimensional space, where N ≥ 1 and N is an integer.
For example, when N = 2, combining the target fields in the 2-Gram dimension can be understood as automatically combining the target fields in pairs. Taking the 3 target fields determined in table 3 (i.e., model m, browser version n, operating system version k) as an example, the 3 target fields are automatically combined in pairs to form 3 initial rules. The specific initial rules are shown in table 4, namely:
TABLE 4
Sequence number   Initial rule                                      Device set associated with the initial rule
Initial rule 1    Model m & Browser version n                       Service devices 1 to 10
Initial rule 2    Model m & Operating system version k              Service devices 21 to 25
Initial rule 3    Browser version n & Operating system version k    Service devices 3 to 12
As can be seen from table 4 above, model m may be combined with browser version n to generate initial rule 1, and the device set associated with initial rule 1 includes service devices 1 through 10. Similarly, model m is combined with operating system version k to generate initial rule 2, whose associated device set includes service devices 21 through 25. Browser version n is combined with operating system version k to generate initial rule 3, whose associated device set includes service devices 3 through 12.
It should be noted that the content shown in table 4 is merely a schematic description, and is not limited in the embodiment of the present application.
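The pairwise (2-Gram) combination of target fields in step S303 can be sketched as below. Here each rule's device set is taken as the intersection of the per-field device sets, which is an assumption of this sketch; the device numbers are illustrative and do not exactly reproduce table 4.

```python
from itertools import combinations

# Hypothetical per-field device sets for the three target fields.
field_devices = {
    "model m": set(range(1, 26)),
    "browser version n": set(range(1, 13)),
    "operating system version k": set(range(3, 26)),
}

def initial_rules(field_devices, n=2):
    """Combine target fields in the n-Gram dimension (pairwise for n = 2);
    a device satisfies a rule only if it satisfies every field in it, so
    the rule's device set is the intersection of the per-field sets."""
    rules = {}
    for combo in combinations(sorted(field_devices), n):
        devices = set.intersection(*(field_devices[f] for f in combo))
        rules[" & ".join(combo)] = devices
    return rules

rules = initial_rules(field_devices)  # 3 initial rules from 3 target fields
```

Three target fields yield C(3, 2) = 3 initial rules, matching the count in table 4.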
S304, calculating a second evaluation index value of the device set corresponding to each initial rule, wherein the second evaluation index value is used for indicating the degree of abnormality of the corresponding initial rule.
In this example, after generating each initial rule, the management device may group the screened service devices by initial rule, where each initial rule corresponds to one device set. For example, taking initial rule 1 to initial rule 3 shown in the foregoing table 4 as an example, it can be seen from table 4 that the device set corresponding to initial rule 1 includes service devices 1 to 10, the device set corresponding to initial rule 2 includes service devices 21 to 25, and the device set corresponding to initial rule 3 includes service devices 3 to 12.
Thus, the management device may calculate the second evaluation index value of the device set associated with each initial rule after determining the device set associated with each initial rule. The degree of abnormality of the corresponding initial rule can be indicated by the second evaluation index value. Illustratively, the described second evaluation index value may include one or more of a blacklist proportion, a whitelist proportion, a complaint proportion of the corresponding device set.
For example, taking the blacklist proportion and the whitelist proportion as the second evaluation index values, suppose the device set of initial rule 1 includes a total of 10 service devices conforming to initial rule 1 (i.e., service device 1 to service device 10). The management device determines, through the blacklist/whitelist labels of the service devices in the device set, which service devices belong to the blacklist and which belong to the whitelist; for example, service devices 1 to 2 are in the whitelist, and service devices 3 to 10 are in the blacklist. The management device then counts the number of service devices in the blacklist (for example, 8) and divides it by the total number of service devices in the device set, so as to obtain the blacklist proportion of the device set, i.e. 8/10 = 0.8. Likewise, the whitelist proportion of the device set may be solved by referring to the solving process of the blacklist proportion, giving 2/10 = 0.2.
It should be noted that, for other initial rules, the solving process of the second evaluation index value corresponding to the initial rule 1 may be referred to for understanding, which is not described herein.
S305, when the second evaluation index value meets a second preset condition, determining the corresponding initial rule as a candidate rule, and determining the device set corresponding to the initial rule as the device set corresponding to the candidate rule.
In this example, after calculating the second evaluation index value of the device set corresponding to each initial rule, the management device judges whether the second evaluation index value meets a second preset condition, and if so, determines the corresponding initial rule as a candidate rule. Illustratively, the second preset condition may be that the blacklist proportion is greater than or equal to a first threshold value and the whitelist proportion is less than or equal to a second threshold value; alternatively, the second preset condition may be that the blacklist proportion is greater than or equal to the first threshold value; alternatively, the second preset condition may be that the complaint proportion is greater than or equal to a third threshold value, which is not limited in the embodiment of the present application.
For example, as an illustrative description, if the second preset condition is that the blacklist ratio is greater than or equal to 0.8 and the whitelist ratio is less than or equal to 0.2, whether the second evaluation index value of the device set corresponding to each initial rule meets the second preset condition may be understood by referring to the following table 5, namely:
TABLE 5
As can be seen from table 5, after comparing the blacklist proportion and the whitelist proportion of the device set of each initial rule with the second preset condition, it can be determined that the blacklist proportions and whitelist proportions of the device sets of initial rule 1 and initial rule 3 meet the second preset condition. Thus, both initial rule 1 and initial rule 3 may be determined as candidate rules. For convenience of description, initial rule 1 is hereinafter referred to as candidate rule 1 and initial rule 3 as candidate rule 3.
It should be noted that the values of the blacklist proportion and the whitelist proportion of the device set of each initial rule described in table 5, for example, 0.8, 0.9, 0.2, 0.1, etc., are only illustrative, and may be other values in practical application, which is not limited in the embodiments of the present application. In addition, the second preset condition of a blacklist proportion not less than 0.8 and a whitelist proportion not more than 0.2 is merely an illustrative description; in practical application, the critical threshold value of the blacklist proportion may be a threshold value other than 0.8. Similarly, the threshold value of the whitelist proportion may be a value other than 0.2, which is not specifically limited herein.
204. And selecting at least one target rule from at least one candidate rule based on the evaluation index value of the equipment set corresponding to each candidate rule, wherein the evaluation index value is used for indicating the degree of abnormality of the candidate rule corresponding to the equipment set.
In this example, because the device sets associated with the screened candidate rules may overlap in service devices, if the candidate rules are not further screened and purified, service devices in the whitelist may easily be misjudged as devices involved in abnormal behavior, which is not conducive to improving detection accuracy.
For example, taking initial rule 1 and initial rule 3 screened in the foregoing table 5 as candidate rules, the situation in which the device sets corresponding to the two candidate rules share service devices may be understood by referring to the schematic diagram shown in fig. 4. As shown in fig. 4, the intersection of the device sets of the two candidate rules consists entirely of service devices in the blacklist (e.g., service device 3 to service device 10), while the non-overlapping portions are mainly service devices in the whitelist (e.g., service device 1 to service device 2, and service device 11 to service device 12). If the management device detects the service devices based on candidate rule 3 (i.e., initial rule 3), the blacklisted service devices covered by the device set of candidate rule 1 (i.e., initial rule 1) can also be detected. Conversely, if the management device detects the service devices based on candidate rule 1, the whitelisted service devices in its device set may also be detected as involving abnormal behavior, which results in a poor experience for such service devices.
Therefore, after determining the candidate rules, the management device needs to further select at least one target rule from the at least one candidate rule according to the evaluation index value of the device set corresponding to each candidate rule. For example, the management device may further filter the determined candidate rules based on the Q-learning concept.
Illustratively, for the target rule mentioned in step 204, its selection process may be understood with reference to the flow shown in fig. 5 described below. As shown in fig. 5, the selection process at least includes the following steps:
s501, calculating a third evaluation index value of a device set corresponding to a first candidate rule and a fourth evaluation index value of the first device set corresponding to a second candidate rule, wherein the first candidate rule and the second candidate rule are any two different rules in at least one candidate rule, the third evaluation index value is used for indicating the abnormality degree of the corresponding first candidate rule, and the fourth evaluation index value is used for indicating the abnormality degree of the corresponding second candidate rule.
In this example, after determining each candidate rule, the management device may calculate, for each candidate rule, an evaluation index value for a device set associated with the respective candidate rule. For example, taking any two different rules (i.e., the first candidate rule and the second candidate rule) in the at least one candidate rule as an example, the management device may calculate a third evaluation index value of the device set corresponding to the first candidate rule and a fourth evaluation index value of the device set corresponding to the second candidate rule. Illustratively, the described third evaluation index value may include one or more of a blacklist proportion, a whitelist proportion, a complaint proportion of the device set to which the first candidate rule corresponds. The described fourth evaluation index value may include one or more of a blacklist proportion, a whitelist proportion, a complaint proportion of the first device set corresponding to the second candidate rule.
It should be noted that, the calculation process of the third evaluation index value and the fourth evaluation index value may be understood by referring to the calculation process of the first evaluation index value or the second evaluation index value, which is not described herein.
S502, when the third evaluation index value is larger than the fourth evaluation index value and a third preset condition is met, determining the first candidate rule as a target rule.
In this example, the management apparatus may compare the third evaluation index value with the fourth evaluation index value after calculating the third evaluation index value corresponding to the first candidate rule and the fourth evaluation index value corresponding to the second candidate rule. Then, the management apparatus regards the first candidate rule as the current candidate optimal rule when judging that the third evaluation index value is greater than the fourth evaluation index value. For example, taking the candidate rule 3 shown in the foregoing table 5 as a first candidate rule and the candidate rule 1 as a second candidate rule as an example, it is apparent from table 5 that the blacklist ratio of the device set corresponding to the candidate rule 1 is 0.8 and the whitelist ratio is 0.2; the blacklist proportion of the device set corresponding to the candidate rule 3 is 0.9, and the whitelist proportion is 0.1. The management device determines that the blacklist proportion (i.e. 0.9) corresponding to the candidate rule 3 is greater than the blacklist proportion (i.e. 0.8) corresponding to the candidate rule 1. At this time, the candidate rule 3 is taken as the current candidate optimal rule.
Then, the management apparatus further judges whether or not the third evaluation index value corresponding to the candidate rule 3 satisfies a third preset condition. For example, it may be determined whether the corresponding blacklist proportion is greater than or equal to 0.8 and the whitelist proportion is less than or equal to 0.2. If the blacklist proportion corresponding to the candidate rule 3 is judged to be greater than or equal to 0.8 and the whitelist proportion is judged to be less than or equal to 0.2, the management device determines the candidate rule 3 as the target rule. The management device may also store the determined target rule in a rule pool, for example, to facilitate rapid detection later. For example, as a schematic depiction, table 6 shows one schematic result of the target rule, specifically as follows:
TABLE 6
In other examples, after the first candidate rule is determined as the target rule based on step S502, the device set corresponding to the second candidate rule may be updated, and it may be further judged whether the second candidate rule can also be determined as a target rule and written into the rule pool. For example, the management device may remove, from the first device set of the second candidate rule, the service devices that also appear in the device set corresponding to the first candidate rule, to obtain a second device set related to the second candidate rule. For example, as can be seen from the schematic diagram of fig. 4, the device set corresponding to candidate rule 3 and the device set corresponding to candidate rule 1 share the same service devices, i.e., service devices 3 to 10. At this time, service devices 3 to 10 may be removed from the device set corresponding to candidate rule 1 (i.e., service devices 1 to 10), so as to obtain the second device set corresponding to candidate rule 1, i.e., service devices 1 to 2.
In this way, the management device may calculate the fifth evaluation index value of the second device set, and delete the second candidate rule when it is determined that the fifth evaluation index value does not satisfy the third preset condition. It should be noted that, the described third preset condition may be specifically understood by referring to the third preset condition mentioned in the foregoing step S502, which is not described herein.
For example, as a schematic description, taking the second candidate rule as the candidate rule 1 as an example, table 7 shows the correspondence between the candidate rule 1 and the second device set, namely:
TABLE 7
As can be seen from comparing table 7 with table 5, after the shared service devices are removed, the blacklist proportion of candidate rule 1 (i.e., initial rule 1) drops from 0.8 to 0, and the whitelist proportion rises from 0.2 to 1. Upon judging whether the third preset condition is satisfied, it is obvious that candidate rule 1 no longer passes the screening, and candidate rule 1 may therefore be eliminated.
In this way, in each iteration the management device selects the current optimal rule, dynamically removes the device set corresponding to the selected optimal candidate rule from the remaining candidate rules, recalculates the evaluation index values of the remaining candidate rules, and judges whether they meet the third preset condition, until the last remaining candidate rule no longer meets the third preset condition. In this manner, all target rules meeting the conditions can be selected from the candidate rules.
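The iterative selection described above can be sketched as a greedy loop. The blacklist and candidate device sets below are chosen so that the proportions match the 0.8 and 0.9 values of table 5, and a blacklist-proportion threshold of 0.8 stands in for the third preset condition; all of these values are illustrative assumptions.

```python
def black_ratio(devices, blacklist):
    """Blacklist proportion of a device set (0 for an empty set)."""
    return len(devices & blacklist) / len(devices) if devices else 0.0

def select_target_rules(candidates, blacklist, black_min=0.8):
    """Greedy selection: repeatedly take the candidate rule whose remaining
    device set has the highest blacklist proportion; keep it as a target
    rule if it still meets the (stand-in) third preset condition, then
    remove its devices from the other rules' sets and re-evaluate."""
    pool = {name: set(devs) for name, devs in candidates.items()}
    targets = []
    while pool:
        best = max(pool, key=lambda name: black_ratio(pool[name], blacklist))
        best_devices = pool.pop(best)
        if black_ratio(best_devices, blacklist) < black_min:
            break  # no remaining candidate rule meets the condition
        targets.append(best)
        for name in pool:  # dynamically reject the covered devices
            pool[name] -= best_devices
    return targets

# Devices 3-11 are blacklisted, so candidate rule 1 (devices 1-10) has a
# blacklist proportion of 0.8 and candidate rule 3 (devices 3-12) has 0.9.
blacklist = set(range(3, 12))
candidates = {
    "candidate rule 1": set(range(1, 11)),
    "candidate rule 3": set(range(3, 13)),
}
targets = select_target_rules(candidates, blacklist)
```

Candidate rule 3 is selected first; after its devices are removed, candidate rule 1 retains only whitelisted devices and is eliminated, mirroring the table 7 outcome.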
205. And when the service field in the access flow information of the service equipment is matched with at least one target rule, determining that the service equipment is equipment with abnormal behavior.
In this example, after determining at least one target rule, the management device may use the target rules to detect abnormal behavior for each service device. For example, taking the target rule as candidate rule 3 mentioned in the foregoing table 6, if the management device detects that the service fields in the access flow information of service device X match candidate rule 3, it may be determined that service device X is a device involving abnormal behavior. At this time, the management device may apply processing such as a penalty to service device X.
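The matching in step 205 can be sketched as a field-by-field comparison; the rule representation (a dict of field constraints) and the device records are assumptions of this sketch.

```python
def matches(rule, device_fields):
    """A device matches a rule when every field constraint in the rule is
    satisfied by the device's service fields."""
    return all(device_fields.get(field) == value
               for field, value in rule.items())

# Hypothetical target rule corresponding to candidate rule 3 (browser
# version n combined with operating system version k).
rule_3 = {"browser": "version n", "os": "version k"}

device_x = {"device_id": "X", "browser": "version n", "os": "version k"}
device_y = {"device_id": "Y", "browser": "version 1", "os": "version k"}

abnormal = [d["device_id"] for d in (device_x, device_y)
            if matches(rule_3, d)]
```

Only service device X satisfies both field constraints, so only X is flagged as involving abnormal behavior.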
In other examples, the management device may further check the validity of each target rule after determining it, and delete a target rule once it becomes invalid.
For example, the management device may calculate a time-to-live (TTL) for each target rule, and delete the corresponding target rule once its TTL expires. For example, if the survival time of candidate rule 3 is 1 hour, candidate rule 3 is deleted when that hour expires.
Alternatively, the management device may monitor the complaint situation of each target rule and calculate the complaint proportion of each target rule. When the complaint proportion of a target rule is judged to be greater than a preset threshold value, the corresponding target rule is deleted. For example, if the complaint proportion of users against candidate rule 3 is 0.6 and the preset threshold value is 0.5, candidate rule 3 may be deleted.
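The two invalidation mechanisms (TTL expiry and complaint proportion) can be sketched together; the class name, bookkeeping fields, and threshold are illustrative, not part of the embodiment.

```python
import time

class TargetRule:
    """Bookkeeping for one target rule in the rule pool (illustrative)."""

    def __init__(self, name, ttl_seconds, complaint_threshold=0.5):
        self.name = name
        self.expires_at = time.time() + ttl_seconds  # TTL-based validity
        self.complaint_threshold = complaint_threshold
        self.hits = 0        # devices flagged by this rule
        self.complaints = 0  # flagged devices whose users complained

    def record(self, complained):
        self.hits += 1
        if complained:
            self.complaints += 1

    def invalid(self, now=None):
        """A rule becomes invalid once its TTL expires or its complaint
        proportion exceeds the preset threshold."""
        now = time.time() if now is None else now
        if now >= self.expires_at:
            return True
        return (self.hits > 0
                and self.complaints / self.hits > self.complaint_threshold)

rule = TargetRule("candidate rule 3", ttl_seconds=3600)
for i in range(10):
    rule.record(complained=(i < 6))  # complaint proportion 6/10 = 0.6
```

A rule with complaint proportion 0.6 exceeds the 0.5 threshold and is treated as invalid even before its TTL expires.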
Fig. 6 is a schematic overall flow chart of a network anomaly detection method according to an embodiment of the present application. As shown in fig. 6, in the network anomaly detection method, access flow information of each service device related to a target service may be acquired first, and then at least one candidate field is selected from service fields of the access flow information of each service device, where each candidate field is used for reflecting portrait characteristics when each service device is suspected to have an anomaly. The service fields described are understood with reference to the description of step 202 in fig. 2, and are not described in detail herein.
Then, a first evaluation index value of the device set corresponding to each candidate field is calculated, and whether the first evaluation index value meets a first preset condition is judged. And if the first evaluation index value meets the first preset condition, determining the corresponding candidate field as the target field. Otherwise, when the first evaluation index value does not meet the first preset condition, discarding the corresponding candidate field. Then, generating at least one initial rule according to the at least one target field; and calculating a second evaluation index value in the equipment set corresponding to each initial rule, and judging whether the second evaluation index value meets a second preset condition, wherein the second evaluation index value is used for indicating the abnormality degree of the corresponding initial rule. And when the second evaluation index value meets a second preset condition, determining the corresponding initial rule as a candidate rule, and determining the equipment set corresponding to the initial rule as the equipment set corresponding to the candidate rule. Otherwise, when the second evaluation index value does not meet the second preset condition, deleting the corresponding initial rule.
Further, a third evaluation index value of the device set corresponding to the first candidate rule and a fourth evaluation index value of the first device set corresponding to the second candidate rule are calculated, wherein the first candidate rule and the second candidate rule are any two different rules in at least one candidate rule. Then, it is determined whether the third evaluation index value is greater than the fourth evaluation index value and whether a third preset condition is satisfied. If the third evaluation index value is determined to be greater than the fourth evaluation index value and the third preset condition is satisfied, the first candidate rule is determined to be the target rule, and the target rule may be stored in the rule pool. The rule pool may be a database, a cloud database, etc., which is not limited in the embodiments of the present application. In addition, when the third evaluation index value does not meet the third preset condition, the same service equipment in the equipment set corresponding to the first candidate rule is removed from the first equipment set, a second equipment set related to the second candidate rule is obtained, and the evaluation index value of the second equipment set, namely a fifth evaluation index value, is recalculated. And deleting the second candidate rule if the fifth evaluation index value is judged to not meet the third preset condition. The contents of the third preset condition, the third evaluation index value, the fourth evaluation index value, the fifth evaluation index value, etc. described above may be understood with reference to the foregoing step 204 in fig. 2 or the content in fig. 5, and will not be described herein.
In addition, when the validity failure of the target rule is detected, the target rule corresponding to the failure can be deleted.
It should be noted that, the details shown in fig. 6 may be understood with reference to the details shown in fig. 2 to 5, which are not described herein.
In the embodiment of the application, because the access flow information of each service device can indicate the access condition of the corresponding service device when accessing the target service, when the access flow information of each service device related to the target service is obtained, at least one candidate field can be selected from the service fields of the access flow information of each service device, and each candidate field reflects the portrait characteristics when each service device is suspected of abnormal behavior. Then, at least one candidate rule and a device set corresponding to each candidate rule can be determined according to each candidate field. Moreover, each candidate rule can be used to perform a screening process on each service device, and the device set corresponding to each candidate rule is composed of the service devices that satisfy the corresponding candidate rule. In addition, the evaluation index value can be used to indicate the degree of abnormality of the candidate rule corresponding to the device set, and at least one target rule can then be selected from the at least one candidate rule according to the evaluation index value of the device set corresponding to each candidate rule. Further, when the service fields in the access flow information of a service device match at least one target rule, the service device can be determined to be a device with abnormal behavior. By this method, the candidate rules and the target rules are determined directly or indirectly from the access flow information of each service device related to the target service, rather than being configured from expert experience; the method can therefore also be applied to abnormal behavior detection for devices not covered by detection rules set from expert experience, improving coverage.
Moreover, the application can automatically generate the candidate rules and target rules and automatically detect whether a service device is involved in abnormal behavior, without manual participation, thereby saving labor cost and improving the detection effect.
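For illustration only, the end-to-end flow described above can be sketched as follows. The flow-record schema, the evaluation index, and the threshold are assumptions introduced for this sketch; the embodiment does not fix any of them:

```python
# Sketch of the detection flow: build each candidate rule's device set,
# promote rules whose device set scores high enough to target rules,
# then flag every device matched by a target rule.
def detect_anomalous_devices(flow_records, candidate_rules, eval_index, threshold):
    """flow_records: {device_id: {service_field: value}} per-device access flow info.
    candidate_rules: predicates over a device's field dict.
    eval_index: scores the anomaly degree of a device set (higher = more anomalous).
    """
    # The device set corresponding to a candidate rule consists of the
    # service devices that satisfy that rule.
    rule_device_sets = [
        {dev for dev, fields in flow_records.items() if rule(fields)}
        for rule in candidate_rules
    ]
    # A candidate rule whose device set's evaluation index value meets the
    # condition becomes a target rule (modeled here as a threshold).
    target_sets = [devs for devs in rule_device_sets
                   if devs and eval_index(devs) >= threshold]
    # A device matching any target rule is determined to have abnormal behavior.
    flagged = set()
    for devs in target_sets:
        flagged |= devs
    return flagged
```

For example, with a toy evaluation index that simply counts the devices in a set, a rule matching two or more devices becomes a target rule and its matched devices are flagged.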
The foregoing description of the solution provided by the embodiments of the present application has been presented mainly in terms of a method. It should be understood that, to implement the above-described functions, corresponding hardware structures and/or software modules are included. Those of skill in the art will readily appreciate that the various illustrative modules and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or as combinations of hardware and computer software. Whether a function is implemented as hardware or as hardware driven by computer software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The embodiment of the application can divide the device into functional modules according to the method examples; for example, each functional module can correspond to one function, or two or more functions can be integrated in one processing module. The integrated modules may be implemented in hardware or as software functional modules. It should be noted that, in the embodiment of the present application, the division of the modules is schematic and merely a logical function division; other division manners may be adopted in actual implementations.
An abnormality detection device according to an embodiment of the present application is described in detail below. Fig. 7 is a schematic diagram of an embodiment of the abnormality detection device according to an embodiment of the present application. The abnormality detection device may be, for example, the aforementioned management device. As shown in fig. 7, the abnormality detection device may include an acquisition unit 701 and a processing unit 702.
The acquiring unit 701 is configured to acquire access flow information of each service device related to a target service, where the access flow information of each service device is used to indicate the access condition when the service device accesses the target service. The processing unit 702 is configured to: select at least one candidate field from the service fields of the access flow information of each service device, where each candidate field reflects the portrait characteristics of a service device suspected of abnormal behavior; generate at least one candidate rule and a device set corresponding to each candidate rule based on each candidate field, where each candidate rule is used to screen the service devices and the device set corresponding to each candidate rule consists of the service devices satisfying that candidate rule; select at least one target rule from the at least one candidate rule based on the evaluation index value of the device set corresponding to each candidate rule, where the evaluation index value indicates the degree of abnormality of the device set corresponding to the candidate rule; and, when the service field in the access flow information of a service device matches the at least one target rule, determine the service device to be a device with the abnormal behavior. This may be understood with reference to the descriptions of steps 201 to 205 in fig. 2, which are not repeated herein.
In some optional embodiments, the obtaining unit 701 is configured to obtain the access flow information in a preset period from the access flow information of each service device. The processing unit 702 is configured to determine each service field in the access flow information within the preset period as the at least one candidate field.
In other optional embodiments, the processing unit 702 is configured to delete, from each service field of the access flow information of each service device, a service field that matches a preset field, and/or delete a service field that has a null value, so as to obtain at least one candidate field, where the preset field is used to indicate that the corresponding service device does not have the abnormal behavior.
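For illustration, the field-preprocessing step above can be sketched as follows. The field names and the contents of the preset (benign) field list are assumptions for this sketch only:

```python
def select_candidate_fields(device_flows, preset_fields):
    """Select candidate fields by dropping preset fields and null-valued fields.
    device_flows: {device_id: {field_name: value}} access flow info per device.
    preset_fields: field names indicating the device has no abnormal behavior.
    """
    candidates = set()
    for fields in device_flows.values():
        for name, value in fields.items():
            if name in preset_fields:          # matches a preset field -> drop
                continue
            if value is None or value == "":   # null-valued field -> drop
                continue
            candidates.add(name)
    return candidates
```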
In other alternative embodiments, the processing unit 702 is configured to: calculate a first evaluation index value of the device set corresponding to each candidate field, where the first evaluation index value indicates the degree of abnormality of the corresponding candidate field; when the first evaluation index value meets a first preset condition, determine the corresponding candidate field as a target field; generate at least one initial rule according to the at least one target field; calculate a second evaluation index value of the device set corresponding to each initial rule, where the second evaluation index value indicates the degree of abnormality of the corresponding initial rule; and, when the second evaluation index value meets a second preset condition, determine the corresponding initial rule as a candidate rule and determine the device set corresponding to that initial rule as the device set corresponding to the candidate rule.
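The two-stage generation just described can be sketched as follows. The evaluation index, the modeling of the first and second preset conditions as thresholds, and the construction of initial rules as single fields or pairs of target fields are all assumptions of this sketch; the embodiment leaves them open:

```python
from itertools import combinations

def generate_candidate_rules(device_flows, candidate_fields, eval_index, t1, t2):
    """Stage 1: promote candidate fields to target fields via a first threshold.
    Stage 2: build initial rules from target fields, keep those passing a
    second threshold as candidate rules, with their device sets."""
    # Stage 1: a field becomes a target field if the set of devices carrying
    # that field scores at least t1 (the "first preset condition").
    target_fields = []
    for field in candidate_fields:
        devs = {d for d, f in device_flows.items() if field in f}
        if devs and eval_index(devs) >= t1:
            target_fields.append(field)
    # Stage 2: initial rules are conjunctions of one or two target fields;
    # an initial rule passing t2 (the "second preset condition") becomes a
    # candidate rule, paired with its device set.
    initial_rules = [frozenset(c) for r in (1, 2)
                     for c in combinations(sorted(target_fields), r)]
    candidate_rules = {}
    for rule in initial_rules:
        devs = {d for d, f in device_flows.items() if rule <= f.keys()}
        if devs and eval_index(devs) >= t2:
            candidate_rules[rule] = devs
    return candidate_rules
```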
In other alternative embodiments, the processing unit 702 is configured to: calculate a third evaluation index value of the device set corresponding to a first candidate rule and a fourth evaluation index value of a first device set corresponding to a second candidate rule, where the first candidate rule and the second candidate rule are any two different rules in the at least one candidate rule, the third evaluation index value indicates the degree of abnormality of the first candidate rule, and the fourth evaluation index value indicates the degree of abnormality of the second candidate rule; and, when the third evaluation index value is larger than the fourth evaluation index value and a third preset condition is met, determine the first candidate rule as the target rule.
In other optional embodiments, the processing unit 702 is further configured to, after determining the first candidate rule as the target rule, remove, from the first device set, the service devices that also appear in the device set corresponding to the first candidate rule, to obtain a second device set related to the second candidate rule; calculate a fifth evaluation index value of the second device set; and delete the second candidate rule when the fifth evaluation index value does not meet the third preset condition.
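The selection and pruning described in these two embodiments can be sketched as a greedy loop: the rule whose device set scores highest wins every pairwise comparison of the third and fourth evaluation index values, and, after it is promoted, devices it already covers are removed from the remaining rules' device sets. Modeling the third preset condition as a minimum score is an assumption of this sketch:

```python
def select_target_rules(candidate_rules, eval_index, min_score):
    """candidate_rules: {rule_id: device_set}. Returns promoted target rules.
    min_score models the 'third preset condition' as a score threshold."""
    rules = {r: set(d) for r, d in candidate_rules.items()}
    targets = []
    while rules:
        # The highest-scoring device set wins every pairwise comparison.
        best = max(rules, key=lambda r: eval_index(rules[r]))
        best_devs = rules.pop(best)
        if eval_index(best_devs) < min_score:  # third preset condition fails
            break
        targets.append(best)
        # Strip devices covered by the new target rule from the other rules'
        # device sets; drop rules whose residual set no longer qualifies.
        for r in list(rules):
            rules[r] -= best_devs
            if not rules[r] or eval_index(rules[r]) < min_score:
                del rules[r]
    return targets
```

With a toy size-based index, a rule whose matched devices are all covered by a stronger rule is pruned rather than promoted.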
In other optional embodiments, the processing unit 702 is further configured to delete a target rule when it is detected that the validity of that target rule has failed.
In other alternative embodiments, the processing unit 702 is configured to: calculating the survival time of each target rule; and deleting the corresponding target rule after the survival time of the target rule expires.
In other alternative embodiments, the obtaining unit 701 is configured to obtain a complaint proportion of each target rule. The processing unit 702 is configured to delete the corresponding target rule when the complaint proportion is greater than a preset threshold.
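The two validity-failure criteria above (survival-time expiry and excessive complaint proportion) can be sketched as a small rule pool. The bookkeeping scheme, TTL, and threshold values are assumptions for this sketch:

```python
import time

class RulePool:
    """Online rule pool: a target rule is deleted when its survival time (TTL)
    expires or when its complaint proportion exceeds a preset threshold."""

    def __init__(self, ttl_seconds, complaint_threshold):
        self.ttl = ttl_seconds
        self.complaint_threshold = complaint_threshold
        self.rules = {}  # rule_id -> [deploy_time, hit_count, complaint_count]

    def add(self, rule_id, now=None):
        self.rules[rule_id] = [time.time() if now is None else now, 0, 0]

    def record_hit(self, rule_id, complained):
        entry = self.rules[rule_id]
        entry[1] += 1
        if complained:
            entry[2] += 1

    def sweep(self, now=None):
        """Delete rules whose validity has failed; return the deleted rule ids."""
        now = time.time() if now is None else now
        dead = []
        for rule_id, (t0, hits, complaints) in self.rules.items():
            expired = now - t0 >= self.ttl
            too_many_complaints = (hits > 0 and
                                   complaints / hits > self.complaint_threshold)
            if expired or too_many_complaints:
                dead.append(rule_id)
        for rule_id in dead:
            del self.rules[rule_id]
        return dead
```

A periodic call to `sweep` implements both deletion paths with one pass over the pool.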
Illustratively, fig. 8 shows another schematic structural diagram of the abnormality detection apparatus provided by the embodiment of the present application. As shown in fig. 8, the abnormality detection apparatus may include a preprocessing module, a rule automatic generation module, a rule automatic evaluation module, a rule automatic online module, and a rule online detection module.
The preprocessing module is mainly used for preprocessing the access flow information of each service device to extract at least one candidate field. The specific preprocessing process may include obtaining the access flow information of each service device related to the target service, and may also include selecting at least one candidate field from the service fields of the access flow information of each service device. This may be understood with reference to the descriptions of steps 201 to 202 in fig. 2, which are not repeated herein.
The rule automatic generation module is mainly used for generating the at least one candidate rule based on each candidate field, selecting the at least one target rule from the at least one candidate rule, and the like. For example, it generates at least one candidate rule and the device set corresponding to each candidate rule based on each candidate field; or generates at least one initial rule according to the at least one target field; or, when the third evaluation index value is larger than the fourth evaluation index value and the third preset condition is met, determines the first candidate rule as the target rule. This may be understood with reference to the foregoing steps 203 to 204 in fig. 2 or the descriptions in fig. 3 to 5, which are not repeated herein.
The rule automatic evaluation module is mainly used for judging the evaluation index values corresponding to the candidate fields, the candidate rules, the target rules, and the like. For example, it selects at least one target rule from the at least one candidate rule based on the evaluation index value of the device set corresponding to each candidate rule; or judges whether the first evaluation index value of the device set corresponding to each candidate field meets the first preset condition; or judges whether the second evaluation index value of the device set corresponding to each initial rule meets the second preset condition; or judges whether the third evaluation index value of the device set corresponding to the first candidate rule is greater than the fourth evaluation index value of the first device set corresponding to the second candidate rule, and whether the third preset condition is met. It also writes the target rules meeting the conditions into the rule pool. This may be understood with reference to the descriptions of steps S301 to S302 and S304 to S305 in fig. 3, or steps S501 to S502 in fig. 5, which are not repeated herein.
The rule automatic online module is mainly used for extracting the target rules in the rule pool and bringing them online. The rule online detection module is mainly used for detecting the validity of the online target rules in real time and deleting a target rule when its validity fails.
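For illustration, the online matching performed by the detection module can be sketched as follows. Representing a target rule as a conjunction of field/value pairs is an assumption of this sketch:

```python
def match_device(flow_fields, target_rules):
    """Return True if the device's service fields match any online target rule.
    flow_fields: {field_name: value} from the device's access flow information.
    target_rules: list of {field_name: required_value} conjunctions."""
    for rule in target_rules:
        # A rule matches when every required field carries the required value.
        if all(flow_fields.get(k) == v for k, v in rule.items()):
            return True
    return False
```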
The abnormality detection device in the embodiment of the present application has been described above from the viewpoint of modularized functional entities; it is described below from the viewpoint of hardware processing. Fig. 9 is a schematic structural diagram of an abnormality detection device according to an embodiment of the present application. The abnormality detection device may vary considerably in configuration or performance. The abnormality detection device may include at least one processor 901, a communication line 907, a memory 903, and at least one communication interface 904.
The processor 901 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of programs in accordance with aspects of the present application.
Communication line 907 may include a pathway to transfer information between the aforementioned components.
The communication interface 904 uses any transceiver-type device to communicate with other devices or communication networks, such as an Ethernet, a radio access network (RAN), or a wireless local area network (WLAN).
The memory 903 may be a read-only memory (ROM) or another type of static storage device capable of storing static information and instructions, or a random access memory (RAM) or another type of dynamic storage device capable of storing information and instructions. The memory may be stand-alone and coupled to the processor via the communication line 907, or may be integrated with the processor.
The memory 903 is used for storing computer-executable instructions for executing the present application, and is controlled by the processor 901. The processor 901 is configured to execute computer-executable instructions stored in the memory 903, thereby implementing the method provided by the above-described embodiment of the present application.
Alternatively, the computer-executable instructions in the embodiments of the present application may be referred to as application program codes, which are not particularly limited in the embodiments of the present application.
In a specific implementation, as an embodiment, the abnormality detection device may include a plurality of processors, such as the processor 901 and the processor 902 in fig. 9. Each of these processors may be a single-core (single-CPU) processor or may be a multi-core (multi-CPU) processor. A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
In a specific implementation, as an embodiment, the abnormality detection apparatus may further include an output device 905 and an input device 906. The output device 905 communicates with the processor 901 and may display information in a variety of ways. The input device 906, in communication with the processor 901, may receive input of a target object in a variety of ways. For example, the input device 906 may be a mouse, a touch screen device, a sensing device, or the like.
The abnormality detection device may be a general-purpose device or a special-purpose device. In a specific implementation, the abnormality detection device may be a server or a device having a structure similar to that in fig. 9. The embodiment of the application does not limit the type of the abnormality detection device.
Note that the processor 901 in fig. 9 may cause the abnormality detection device to execute the methods in the method embodiments corresponding to fig. 2 to 6 by calling the computer-executable instructions stored in the memory 903.
In particular, the functions/implementation of the processing unit 702 in fig. 7 may be implemented by the processor 901 in fig. 9 invoking computer executable instructions stored in the memory 903. The functions/implementation of the acquisition unit 701 in fig. 7 may be implemented by the communication interface 904 in fig. 9.
The embodiment of the present application also provides a computer storage medium storing a computer program for electronic data exchange, where the computer program causes a computer to execute part or all of the steps of any anomaly detection method described in the above method embodiments.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any one of the anomaly detection methods described in the method embodiments above.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; the division of units is merely a logical function division, and other divisions are possible in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections via interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
If the integrated units are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above-described embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof, and when implemented in software, may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center integrating one or more available media. The available media may be magnetic media (e.g., a floppy disk, hard disk, or magnetic tape), optical media (e.g., a DVD), or semiconductor media (e.g., a solid state drive (SSD)).
The above embodiments are only intended to illustrate the technical solution of the present application, not to limit it. Although the application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (13)

1. A method for detecting network anomalies, comprising:
acquiring access flow information of each service device related to a target service, wherein the access flow information of each service device is used for indicating an access condition when the service device accesses the target service;
selecting at least one candidate field from service fields of the access flow information of each service device, wherein each candidate field is used for reflecting portrait characteristics when each service device is suspected to have an abnormal behavior;
generating at least one candidate rule and a device set corresponding to each candidate rule based on each candidate field, wherein each candidate rule is used for screening each service device, and each device set corresponding to each candidate rule consists of service devices meeting the corresponding candidate rule;
selecting at least one target rule from the at least one candidate rule based on an evaluation index value of the device set corresponding to each candidate rule, wherein the evaluation index value is used for indicating the degree of abnormality of the candidate rule corresponding to the device set;
and when the service field in the access flow information of a service device is matched with the at least one target rule, determining the service device as a device with the abnormal behavior.
2. The method of claim 1, wherein the selecting at least one candidate field from the service fields of the access flow information of each service device comprises:
acquiring access flow information in a preset time period from the access flow information of each service device;
and determining each service field in the access flow information within the preset time period as at least one candidate field.
3. The method of claim 1, wherein selecting at least one candidate field from the service fields of the access flow information of each service device comprises:
deleting a service field matched with a preset field from each service field of the access flow information of each service device, and/or deleting a service field with a null value to obtain at least one candidate field, wherein the preset field is used for indicating that the corresponding service device does not have the abnormal behavior.
4. A method according to any one of claims 1 to 3, wherein said generating at least one candidate rule based on each of said candidate fields, and a set of devices to which each of said candidate rules corresponds, comprises:
calculating a first evaluation index value of the device set corresponding to each candidate field, wherein the first evaluation index value is used for indicating the degree of abnormality of the corresponding candidate field;
when the first evaluation index value meets a first preset condition, determining the corresponding candidate field as a target field;
generating at least one initial rule according to the at least one target field;
calculating a second evaluation index value of the device set corresponding to each initial rule, wherein the second evaluation index value is used for indicating the degree of abnormality of the corresponding initial rule;
and when the second evaluation index value meets a second preset condition, determining the corresponding initial rule as a candidate rule, and determining the device set corresponding to the initial rule as the device set corresponding to the candidate rule.
5. A method according to any one of claims 1 to 3, wherein the selecting at least one target rule from the at least one candidate rule based on the evaluation index value of the device set corresponding to each candidate rule comprises:
calculating a third evaluation index value of the device set corresponding to the first candidate rule and a fourth evaluation index value of the first device set corresponding to the second candidate rule, wherein the first candidate rule and the second candidate rule are any two different rules in the at least one candidate rule, the third evaluation index value is used for indicating the degree of abnormality of the corresponding first candidate rule, and the fourth evaluation index value is used for indicating the degree of abnormality of the corresponding second candidate rule;
and when the third evaluation index value is larger than the fourth evaluation index value and a third preset condition is met, determining the first candidate rule as the target rule.
6. The method of claim 5, wherein after the determining that the first candidate rule is the target rule, the method further comprises:
removing, from the first device set, the service devices that are also in the device set corresponding to the first candidate rule, to obtain a second device set related to the second candidate rule;
calculating a fifth evaluation index value of the second device set;
and deleting the second candidate rule when the fifth evaluation index value does not meet the third preset condition.
7. A method according to any one of claims 1 to 3, further comprising:
and deleting the target rule corresponding to the failure when the validity failure of the target rule is detected.
8. The method of claim 7, wherein upon detecting a validity failure of the target rule, deleting the target rule corresponding to the failure comprises:
calculating the survival time of each target rule;
and deleting the corresponding target rule after the survival time of the target rule expires.
9. The method of claim 7, wherein upon detecting a validity failure of the target rule, deleting the target rule corresponding to the failure comprises:
acquiring complaint proportion of each target rule;
and deleting the corresponding target rule when the complaint proportion is larger than a preset threshold value.
10. An abnormality detection apparatus, comprising:
an acquisition unit, configured to acquire access flow information of each service device related to a target service, wherein the access flow information of each service device is used for indicating the access condition of the service device when the service device accesses the target service;
The processing unit is used for selecting at least one candidate field from service fields of the access flow information of each service device, wherein each candidate field is used for reflecting portrait characteristics when each service device is suspected to have abnormal behaviors;
the processing unit is used for generating at least one candidate rule and a device set corresponding to each candidate rule based on each candidate field, wherein each candidate rule is used for screening each service device, and the device set corresponding to each candidate rule consists of the service devices meeting the corresponding candidate rule;
the processing unit is used for selecting at least one target rule from the at least one candidate rule based on an evaluation index value of the device set corresponding to each candidate rule, wherein the evaluation index value is used for indicating the degree of abnormality of the candidate rule corresponding to the device set;
and the processing unit is used for determining the service device as a device with the abnormal behavior when the service field in the access flow information of the service device is matched with the at least one target rule.
11. An abnormality detection apparatus, characterized by comprising: an input/output (I/O) interface, a processor, and a memory, the memory having program instructions stored therein;
The processor is configured to execute program instructions stored in a memory and to perform the method of any one of claims 1 to 9.
12. A computer readable storage medium comprising instructions which, when run on a computer device, cause the computer device to perform the method of any of claims 1 to 9.
13. A computer program product, characterized in that the computer program product comprises instructions which, when run on a computer device or a processor, cause the computer device or the processor to perform the method of any of claims 1 to 9.
CN202211476980.3A 2022-11-23 2022-11-23 Network anomaly detection method, device and storage medium Pending CN116961974A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211476980.3A CN116961974A (en) 2022-11-23 2022-11-23 Network anomaly detection method, device and storage medium

Publications (1)

Publication Number Publication Date
CN116961974A (en) 2023-10-27

Family

ID=88443265



Legal Events

Date Code Title Description
PB01 Publication