CN112069485A - Security processing method, apparatus and device based on user behavior

Security processing method, apparatus and device based on user behavior

Info

Publication number
CN112069485A
CN112069485A (application No. CN202010800733.9A)
Authority
CN
China
Prior art keywords
user
behavior
attacker
behavior data
verification code
Prior art date
Legal status
Granted
Application number
CN202010800733.9A
Other languages
Chinese (zh)
Other versions
CN112069485B (en)
Inventor
张伟望
覃建策
田本真
陈邦忠
Current Assignee
Perfect World Beijing Software Technology Development Co Ltd
Original Assignee
Perfect World Beijing Software Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Perfect World Beijing Software Technology Development Co Ltd
Priority to CN202010800733.9A
Publication of CN112069485A
Application granted
Publication of CN112069485B
Legal status: Active

Classifications

    • G06F21/36 User authentication by graphic or iconic representation (Section G Physics > G06 Computing; Calculating or Counting > G06F Electric digital data processing > G06F21/00 Security arrangements > G06F21/30 Authentication > G06F21/31 User authentication)
    • G06F18/22 Matching criteria, e.g. proximity measures (G06F18/00 Pattern recognition > G06F18/20 Analysing)
    • G06F18/23 Clustering techniques (G06F18/00 Pattern recognition > G06F18/20 Analysing)
    • G06F18/24 Classification techniques (G06F18/00 Pattern recognition > G06F18/20 Analysing)
    • G06F2221/2133 Verifying human interaction, e.g., Captcha (G06F2221/21 Indexing scheme relating to G06F21/00)


Abstract

The application discloses a security processing method, apparatus and device based on user behavior, relating to the technical field of data security. The method includes: judging whether a user is an attacker according to the user's page browsing behavior data before operating a verification code and the user's operation behavior data on the verification code; if the user's behavior data is judged to be that of a real user, performing cluster analysis on the operation behavior data by using the similarity between counterfeit behaviors; if the number of times the user has been marked suspicious is greater than a preset count threshold, increasing the verification difficulty, raising the probability that the models classify the user as an attacker, re-collecting the user's behavior data and reclassifying it with the models; and finally determining whether the user is an attacker by fusing the classification results. Based on the behavior data of users operating verification codes, the application achieves a stricter defense against black-industry ("black product") attacks and safeguards website security.

Description

Security processing method, apparatus and device based on user behavior
This application is a divisional application of Chinese patent application No. 202010536797.2, entitled "Security processing method, apparatus and device based on user behavior", filed with the China National Intellectual Property Administration on June 12, 2020.
Technical Field
The present application relates to the field of data security technologies, and in particular, to a method, an apparatus, and a device for security processing based on user behavior.
Background
Verification codes (CAPTCHAs) have been widely adopted by the industry as an effective means of user authentication against attacks by the internet black industry (organized malicious traffic, rendered in some translations as "black products"). The underlying principle is that black-industry operations usually need a large number of repeated accesses to profit, and a verification code effectively raises the cost of each access.
With the rise of deep learning in recent years, automatically recognizing website verification codes by computer has become ever easier. Whether for slider verification codes, picture-selection verification codes, character-click verification codes, or even question-and-answer codes requiring semantic understanding, mature deep-model solutions exist. This greatly reduces the difficulty for the black industry of cracking picture or character verification codes, causes defenses against black-industry attacks to fail, and lowers website security.
Disclosure of Invention
In view of this, the present application provides a security processing method, apparatus and device based on user behavior, mainly aiming to solve the technical problem that existing defenses against black-industry attacks are prone to failure, which reduces website security.
According to an aspect of the present application, there is provided a security processing method based on user behavior, the method including:
acquiring page browsing behavior data before a user operates a verification code and operation behavior data of the verification code;
classifying the user by using a neural network model according to the page browsing behavior data, wherein the neural network model is trained on page browsing behavior data collected before the verification code operations of both normal users and attackers; and
classifying the user by using a single classification model according to the user's operation behavior data, wherein the single classification model is trained on the verification code operation behavior data of normal users only;
performing cluster analysis by using the similarity between counterfeit behaviors according to the user's operation behavior data;
if, according to the cluster analysis result, the number of times the user has been marked suspicious is greater than a preset count threshold, replacing the verification code with a new verification code of increased operation difficulty, re-acquiring the user's page browsing behavior data before operating the new verification code and the operation behavior data on the new verification code, and reclassifying the user with the neural network model and the single classification model, wherein during reclassification the model classification thresholds are lowered to increase the probability that the user is classified as an attacker;
and determining whether the user is an attacker or not by fusing the classification results.
According to another aspect of the present application, there is provided a security processing apparatus based on user behavior, the apparatus including:
the acquisition module is used for acquiring page browsing behavior data before a user operates the verification code and operation behavior data of the verification code;
the classification module is used for classifying the user by using a neural network model according to the page browsing behavior data, wherein the neural network model is trained on page browsing behavior data collected before the verification code operations of both normal users and attackers; and
the classification module is further used for classifying the user by using a single classification model according to the user's operation behavior data, wherein the single classification model is trained on the verification code operation behavior data of normal users only;
the analysis module is used for performing cluster analysis by using the similarity between counterfeit behaviors according to the user's operation behavior data;
the classification module is further configured to, if according to the cluster analysis result the number of times the user has been marked suspicious is greater than a preset count threshold, replace the verification code with a new verification code of increased operation difficulty, re-acquire the user's page browsing behavior data before operating the new verification code and the operation behavior data on the new verification code, and reclassify the user with the neural network model and the single classification model, wherein during reclassification the model classification thresholds are lowered to increase the probability that the user is classified as an attacker;
and the determining module is used for determining whether the user is an attacker or not by fusing the classification results.
According to yet another aspect of the present application, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described user behavior-based security processing method.
According to still another aspect of the present application, there is provided a security processing apparatus based on user behavior, including a storage medium, a processor, and a computer program stored on the storage medium and executable on the processor, where the processor implements the above-mentioned security processing method based on user behavior when executing the program.
By means of the above technical scheme, the application refers not only to the user's operation behavior data on the verification code but also to the user's page browsing behavior data before that operation as the basis for further human-machine authentication, so that whether the user is an attacker can be judged accurately. Even if an attacker completes verification by simulating real user operations with counterfeit behavior data, the application can perform cluster analysis using the similarity between counterfeit behaviors; when the number of times a user has been marked suspicious exceeds a preset count threshold, the verification code is replaced with a new one of increased operation difficulty, the probability of being classified as an attacker is raised by lowering the model classification thresholds, and the corresponding user behavior data is re-collected for reclassification. By increasing the verification difficulty and tightening the attacker classification standard, whether the verification process was completed by a real user can be identified with higher probability. Compared with the present situation, in which an attacker can easily recognize website verification codes by computer, the application achieves a stricter defense against black-industry attacks based on the behavior data of users operating verification codes, ensuring website security and reducing the risk of black-industry attacks. In addition, correct attacker behavior data can be identified from the daily log of verification behavior data according to the trained single classification model, which avoids polluting the training set with mislabeled data during training-set expansion and thus prevents model training from failing.
The foregoing is merely an overview of the technical solutions of the present application. To make its technical means clearer so that it can be implemented according to the description, and to make its above and other objects, features and advantages more readily understandable, detailed embodiments of the application are set forth below.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 shows a flowchart of a security processing method based on user behavior according to an embodiment of the present application;
FIG. 2 shows a flowchart of another security processing method based on user behavior according to an embodiment of the present application;
FIG. 3 shows a flowchart of an example method in simple mode according to an embodiment of the present application;
FIG. 4 shows a flowchart of an example method in full mode according to an embodiment of the present application;
FIG. 5 shows a flowchart of a complete model training process according to an embodiment of the present application;
FIG. 6 shows a schematic structural diagram of a security processing apparatus based on user behavior according to an embodiment of the present application.
Detailed Description
The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
This embodiment aims to solve the technical problem that an attacker can automatically recognize website verification codes by computer, so that the defense against black-industry attacks easily fails and website security is reduced. The embodiment provides a security processing method based on user behavior; as shown in FIG. 1, the method includes:
101. Acquiring the user's page browsing behavior data before operating the verification code and the operation behavior data on the verification code.
The page browsing behavior data before the verification code is operated may include: the user's mouse operation sequence and keyboard input sequence at each time point before operating the verification code, and, for a mobile terminal, a gyroscope change sequence and the like. The operation behavior data on the verification code may include: the mouse operation sequence at each time point within the period that begins when the user starts verification and ends when verification is completed, and the like.
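To make the two data streams concrete, the following is a minimal sketch of how such timestamped behavior events might be represented; the field names and type encodings are illustrative assumptions, not a schema given in the application.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BehaviorEvent:
    """One timestamped user action; all field names are assumptions."""
    op_type: str               # "move", "click", "down", "up", "leave",
                               # "enter", "scroll", or "key"
    t_ms: int                  # timestamp in milliseconds
    x: Optional[int] = None    # mouse x coordinate, for mouse events
    y: Optional[int] = None    # mouse y coordinate, for mouse events
    key_ascii: Optional[int] = None  # ASCII code, for keyboard events

# page_browsing: events before the user touches the verification code
# captcha_ops:   events from verification start until verification completes
```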
The execution subject of this embodiment may be an apparatus or device, deployed on the website side, for security defense processing based on user behavior. It can provide the website with an effective means of defending against black-industry attacks and reduce the risk of the website being attacked.
This embodiment collects the user's page browsing behavior data before operating the verification code and the operation behavior data on the verification code as the basis for further human-machine authentication. Specifically, the three judgment-and-analysis processes shown in steps 102 to 104 may be executed; note that they may run concurrently or in a certain progressive order, as determined by the timeliness and resource-consumption requirements of the actual security detection.
102. Classifying the user by using a neural network model according to the page browsing behavior data before the verification code is operated, and classifying the user by using a single classification model according to the operation behavior data on the verification code.
In this embodiment, the neural network model is trained on positive and negative sample sets, namely page browsing behavior data collected before the verification code operations of normal users and of attackers. The neural network model is not limited to particular structures such as Deep Neural Networks (DNN), Convolutional Neural Networks (CNN), or Recurrent Neural Networks (RNN).
For example, for the same type of verification code (such as a slider verification code, picture-selection verification code, character-click verification code, or question-and-answer verification code requiring semantic understanding), based on the page browsing behavior data before the target user operates the verification code, the neural network model analyzes the similarity of page browsing behavior between the target user and normal users, and between the target user and attackers; that is, it analyzes whether the target user's page browsing behavior leans more toward that of a real user or that of a non-real user (machine). If it leans more toward the browsing behavior of a non-real user (machine), the target user can be determined to be an attacker.
Unlike the neural network model, the single classification model in this embodiment may be trained on only one sample set: the verification code operation behavior data of normal users.
Because behavior data before verification begins is highly random, while actions during verification, such as dragging a slider or clicking characters, follow a very definite paradigm and are well suited to similarity judgments, this embodiment can use the verification code operation behavior data of normal users as the reference and analyze whether the user's behavior while operating the verification code leans toward that of a real user; if not, the user is judged to be an attacker. This single-classification-style discrimination avoids the difficulty of collecting attacker data: correct attacker data can be identified from the daily log of verification behavior data, thereby expanding the attacker data in the training set.
The neural network model used in this embodiment can be a binary classification model, which requires positive and negative sample sets for training, whereas the single classification model needs only one training sample set. At present a sample set of normal users can be selected relatively easily, for example via a white list, while sample data of blacklisted attackers is comparatively hard to obtain and error-prone, making the neural network model difficult to train. Therefore, the periodically retrained single classification model can be used to identify the latest blacklisted attackers, and their sample data can be mined to update the positive and negative training sets required by the neural network model. In this way an accurate positive-and-negative training set is refreshed automatically and periodically, ensuring accurate updates of the neural network model, reducing manual sample-feature extraction, allowing the whole update-training process to run automatically, and improving model update efficiency.
103. Performing cluster analysis by using the similarity between counterfeit behaviors according to the user's operation behavior data on the verification code.
104. If, according to the cluster analysis result, the number of times the user has been marked suspicious is greater than the preset count threshold, replacing the verification code with a new verification code of increased operation difficulty, re-acquiring the user's page browsing behavior data before operating the new verification code and the operation behavior data on it, and reclassifying the user with the neural network model and the single classification model.
During reclassification, the model classification thresholds are lowered to increase the probability that the user is classified as an attacker.
Because the JavaScript at the page front end is very easy to crack, a verification code attacker can not only simulate user operations with automated testing tools such as Selenium, but also send forged requests directly from a script, accompanied by real user operations. These real operations may come from the attacker himself or be harvested from other real users. Therefore, to prevent an attacker from successfully bypassing human-machine behavior verification, this embodiment can perform cluster analysis using the similarity between counterfeit behaviors based on the operation behavior data on the verification code. Although an attacker may rotate through thousands of IPs so that the website side cannot locate them individually, the IPs can be bound together by constructing behavior similarity. Thus, even though the attacker uses real user behaviors, his existence can still be discovered through similarity clustering.
For example, through cluster analysis of the similarity between counterfeit behaviors, the number of times the target user is marked suspicious on each request is counted. When the target user has been marked suspicious many times and the count reaches a certain threshold, the verification code can be replaced with one of increased operation difficulty (for example, digits in a regular font to be clicked in a picture are replaced with deformed characters that the user must distinguish carefully) and a test is performed. The test process may include: re-collecting the user behavior data, specifically the pre-verification page browsing behavior and the verification code operation behavior; then reclassifying with the neural network model and the single classification model of step 102 based on the re-collected data, with the classification thresholds in the two models lowered during reclassification to increase the probability that the user is classified as an attacker. For example, only the final judgment threshold is lowered while the parameters of the two models remain unchanged (i.e., the models themselves are unaffected). Both models output a probability in [0, 1] that the user is an attacker, and the judgment threshold decides the verdict: before lowering, a model output probability >= 0.5 judges the user an attacker; after lowering, an output probability >= 0.2 already does.
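A minimal sketch of this decision rule under the example numbers above (0.5 normally, 0.2 on a suspicious re-test); the function name and signature are assumptions:

```python
def is_attacker(model_prob: float, suspicious_retest: bool) -> bool:
    """Model parameters stay untouched; only the decision threshold moves."""
    threshold = 0.2 if suspicious_retest else 0.5
    return model_prob >= threshold
```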
105. And determining whether the user is an attacker or not by fusing the classification results.
For example, after the classification and reclassification processes of the two models, if at least one of the classification results determines that the user is an attacker, security defense processing can be performed for that user, such as restricting the user's access to the website or adding the user to a blacklist.
In the security processing method based on user behavior provided by this embodiment, because a real user produces a large number of mouse, keyboard and other operation behaviors while operating the verification code, the embodiment refers not only to the user's operation behavior data on the verification code but also to the page browsing behavior data before the operation as the basis for further human-machine authentication; that is, it determines whether verification was completed by a computer automatically recognizing the verification code or by a real user's operations. Specifically, a neural network model trained on a positive-and-negative sample set of page browsing behavior data from normal users and attackers performs one classification, and a single classification model trained on only one sample set, the verification code operation behavior data of normal users, performs another; together they identify whether the verification process was completed by a real user and thereby accurately judge whether the user is an attacker.
Even if an attacker completes verification by simulating real user operations with counterfeit behavior data, this embodiment can perform cluster analysis using the similarity between counterfeit behaviors, replace the verification code with a new one of increased operation difficulty once the user has been marked suspicious more than the preset count threshold, raise the probability of classification as an attacker by lowering the model classification thresholds, and re-collect the corresponding user behavior data for reclassification. By increasing the verification difficulty and tightening the attacker classification standard, whether the verification process was completed by a real user can be identified with higher probability. Compared with the present situation, in which an attacker can easily recognize website verification codes by computer, this embodiment achieves a stricter defense against black-industry attacks based on the behavior data of users operating verification codes, ensuring website security and reducing the risk of attack.
Further, as a refinement and extension of the specific implementation of the foregoing embodiment, and to fully describe the implementation, this embodiment provides another security processing method based on user behavior; as shown in FIG. 2, the method includes:
201. Acquiring the user's behavior data over the period from the moment the page containing the verification code is opened to the moment verification of the verification code is completed.
For example, a pre-written script (such as a collection module) collects all operations of the user from opening the page to completing verification. The operation types include mouse movement, clicking, moving out of the boundary, moving into the boundary, page scrolling, keyboard input and the like; a mobile terminal may additionally report gyroscope changes and so on. Every recorded operation should carry a timestamp of when it occurred. To increase the difficulty of front-end cracking, heavy front-end code obfuscation can be applied to the collection module.
202. Cutting the user's behavior data at the time point when the user starts the verification operation on the verification code, to obtain the page browsing behavior data before the verification code is operated and the operation behavior data on the verification code.
That is, the collection module cuts the behavior sequence by time into two parts: the page browsing behavior and the verification code operation behavior. The cut is made mainly because behavior data before verification begins is highly random, ill-suited to similarity judgments, and performs poorly as single-classification-model input, whereas actions during verification, such as dragging a slider or clicking characters, follow a very definite paradigm and are better suited to judging similarity.
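A sketch of this cut, reusing the BehaviorEvent records from the earlier sketch and assuming the verification-start timestamp is known:

```python
def split_behavior(events, verify_start_ms):
    """Split one session's events at the moment captcha interaction begins."""
    page_browsing = [e for e in events if e.t_ms < verify_start_ms]
    captcha_ops = [e for e in events if e.t_ms >= verify_start_ms]
    return page_browsing, captcha_ops
```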
203a. Classifying the user by using a neural network model according to the page browsing behavior data before the user operates the verification code.
Optionally, step 203a may specifically include: first, acquiring a first mouse behavior sequence and/or a keyboard input behavior sequence from the user's page browsing behavior data; then extracting mouse operation features from the first mouse behavior sequence according to the mouse operation type (for example, at least one or more of clicking, pressing, lifting, moving out of the boundary, moving into the boundary and scrolling), the mouse coordinates and the event occurrence time, and determining a first probability value that the user is an attacker from these features combined with the historical mouse operation features of normal users and attackers before verification code operations; and/or extracting keyboard input features from the keyboard input behavior sequence according to the ASCII codes of the characters typed (letters or symbols) and the corresponding input times, and determining a second probability value that the user is an attacker from these features combined with the historical keyboard input features of normal users and attackers before verification code operations; and finally determining the classification result of the neural network model from the obtained first probability value and/or second probability value.
Specifically, the mouse behavior sequence and/or keyboard input behavior sequence is obtained according to the actual operation situation (e.g., only mouse operations, only keyboard input, or both). In this optional way, whether the behavior is that of a real user can be judged accurately from the mouse operations and keyboard input in the page before the user operates the verification code; if not, the user can be judged to be an attacker.
To acquire the mouse behavior sequence and keyboard input behavior sequence accurately, obtaining the first mouse behavior sequence and/or keyboard input behavior sequence from the user's page browsing behavior data may specifically include: sampling continuous mouse operation records at a fixed interval to obtain the first mouse behavior sequence; and/or intercepting the longest continuous keyboard input record and truncating it to a preset maximum input length. Preferably, the fixed sampling interval is 100 ms and the preset maximum input length is 64. For example, when page browsing behavior is sampled, continuous mouse movements and continuous scrolling are sampled at fixed 100 ms intervals; for keyboard input, the longest continuously typed section is selected as representative, and if it exceeds the maximum length of 64, a continuous input subsequence of length 64 is intercepted at random.
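An illustrative sketch of this sampling and truncation, reusing the BehaviorEvent records from the earlier sketch; it also builds the per-frame feature vectors described in step 203a below. The operation-type encoding is an assumption:

```python
import random

SAMPLE_MS = 100   # fixed sampling interval for continuous mouse events
MAX_KEYS = 64     # preset maximum keyboard input length
OP_IDS = {"click": 0, "down": 1, "up": 2, "leave": 3,
          "enter": 4, "scroll": 5, "move": 6}   # assumed type encoding

def sample_mouse(events):
    """Down-sample continuous moves/scrolls to one sample per 100 ms window
    and emit per-frame feature vectors [op_type_id, x, y, t]."""
    frames, window_end = [], None
    for e in events:
        if e.op_type in ("move", "scroll"):
            if window_end is not None and e.t_ms < window_end:
                continue                       # still inside current window
            window_end = e.t_ms + SAMPLE_MS
        frames.append([OP_IDS[e.op_type], e.x, e.y, e.t_ms])
    return frames

def sample_keys(run):
    """`run` is the longest continuous keyboard run (found upstream); a
    random length-64 window is intercepted if it is longer than that."""
    if len(run) > MAX_KEYS:
        start = random.randrange(len(run) - MAX_KEYS + 1)
        run = run[start:start + MAX_KEYS]
    return [[e.key_ascii, e.t_ms] for e in run]   # frames: [ascii_code, t]
```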
The mouse trajectory and keyboard sequence data (i.e., the first mouse behavior sequence and the keyboard input behavior sequence) are then fed into two different deep models for automatic feature extraction; preferably, a convolutional network may be used to process the sequence data. Compared with traditional manual feature extraction (e.g., extracting the maximum, minimum, mean, median, variance, first-order difference, second-order difference and similar statistics over different regions of the sequence), extracting behavior features automatically with a deep model avoids the known drawbacks of the manual approach, which is severely limited, reflects global and local characteristics only to a certain degree, and cannot capture the sequential dependencies between earlier and later parts of the sequence. Automatic extraction improves both the efficiency and the accuracy of feature extraction and, in turn, the accuracy of subsequently identifying whether the user is an attacker.
Further optionally, a Long Short-Term Memory (LSTM) network may serve as the feature extraction model and a Logistic Regression (LR) model as the classifier. Correspondingly, extracting mouse operation features from the first mouse behavior sequence according to the mouse operation type, the mouse coordinates and the event occurrence time may specifically include: using the LSTM model to extract mouse operation features so that each frame of the first mouse behavior sequence is represented as a first feature vector, in which the first element is the mouse operation type, the second and third are the x and y coordinates of the mouse, and the fourth is the event occurrence time.
For example, each frame of a mouse behavior sequence is represented as a feature vector composed as follows: the first element encodes the operation type, namely clicking, pressing, lifting, moving out of the boundary, moving into the boundary, scrolling and the like; the second and third elements are the x and y coordinates of the mouse; the fourth is the time the event occurred (e.g., the time corresponding to each coordinate value).
Correspondingly, determining the first probability value that the user is an attacker from the mouse operation features combined with the historical mouse operation features may specifically include: inputting the mouse operation features into the LR model and classifying with reference to the historical mouse operation features of normal users and attackers before verification code operations to obtain the first probability value. For example, the classification label (normal user or attacker) of the sample features most similar to the user's mouse operation features is found, and the probability of that label is determined from the similarity.
Optionally, for feature extraction and classification of the keyboard input behavior sequence, the LSTM model may likewise serve as the feature extractor and the LR model as the classifier. Correspondingly, extracting keyboard input features from the keyboard input behavior sequence according to the ASCII codes of the typed characters and their input times may specifically include: using the LSTM model so that each frame of the keyboard input behavior sequence is represented as a second feature vector, whose first element is the ASCII code of the typed character and whose second element is the input time.
For example, each frame of a keyboard input behavior sequence is represented as a feature vector composed as follows: the first element is the ASCII code of the letter or symbol typed; the second is the time of the input.
Correspondingly, determining the second probability value that the user is an attacker from the keyboard input features combined with the historical keyboard input features may specifically include: inputting the keyboard input features into the LR model and classifying with reference to the historical keyboard input features of normal users and attackers before verification code operations to obtain the second probability value. For example, the classification label (normal user or attacker) of the sample features most similar to the user's keyboard input features is found, and the probability of that label is determined from the similarity.
Because multiple types of verification codes may exist in practice (slider, picture-selection, character-click, question-and-answer requiring semantic understanding, etc.), using one unified model for feature extraction and classification would inevitably hurt accuracy. Preferably, therefore, the LSTM and LR models are pre-trained per verification code type, with different LSTM and LR models for different types. Hyperparameters that need tuning include, but are not limited to, the LSTM cell state size, the output length, the L1 and L2 regularization coefficients, the optimization algorithm, and the learning rates. Using type-specific models for feature extraction and classification improves analysis accuracy and thus the accuracy of judging whether the user is an attacker.
For example, determining the classification result of the neural network model from the first and second probability values may specifically include: weighting and summing the two probability values, and judging the user to be an attacker if the weighted sum is greater than a preset probability threshold, preferably 0.5. For example, the outputs of the two LR models (for mouse behavior and keyboard input behavior respectively) are weighted and summed to obtain the probability that the operation comes from an attacker, where 1 represents an attacker and 0 a normal user; a result greater than 0.5 is judged an attacker and one less than 0.5 a normal user.
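A minimal PyTorch sketch of the two LSTM + LR branches and their weighted fusion; the hidden size, the equal 0.5/0.5 weights and all names are assumptions, while the per-frame feature layouts and the 0.5 decision threshold follow the description above:

```python
import torch
import torch.nn as nn

class LstmLrBranch(nn.Module):
    """LSTM feature extractor followed by a logistic-regression head."""
    def __init__(self, feat_dim: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.lr = nn.Linear(hidden, 1)        # logistic regression on last state

    def forward(self, x):                     # x: (batch, seq_len, feat_dim)
        _, (h_n, _) = self.lstm(x)
        return torch.sigmoid(self.lr(h_n[-1])).squeeze(-1)   # P(attacker)

mouse_branch = LstmLrBranch(feat_dim=4)       # frames: [op_type_id, x, y, t]
key_branch = LstmLrBranch(feat_dim=2)         # frames: [ascii_code, t]

def classify_browsing(mouse_seq, key_seq, w_mouse=0.5, w_key=0.5, thresh=0.5):
    """Weighted sum of the two branch probabilities against the threshold."""
    p = w_mouse * mouse_branch(mouse_seq) + w_key * key_branch(key_seq)
    return p >= thresh                        # True => classified as attacker
```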
203b (executed in parallel with step 203a). Classifying the user by using a single classification model according to the user's operation behavior data on the verification code.
Optionally, step 203b may specifically include: first, acquiring a second mouse behavior sequence from the user's verification code operation behavior data and extracting from it a vector of mouse coordinate-time pairs; then encoding that vector with an autoencoder into a behavior code of a predetermined encoding length; finally, classifying the behavior code with a single classification model, trained in advance on the verification code operation behavior data of normal users, to obtain a score for the user being an attacker; if the obtained score is greater than a preset scoring threshold, the user is judged to be an attacker. Preferably, the predetermined encoding length is 64, the single classification model may be an SVDD (Support Vector Domain Description) model, and the preset scoring threshold may be 1.
For example, non-mouse operations are removed from the intercepted behavior sequence, the operation type field is dropped, and only the mouse coordinate and time fields are retained. The sequence is sampled at fixed time intervals down to 100 coordinate-time samples, i.e., a vector of length 300. This vector is encoded by a 4-layer autoencoder whose three hidden layers have sizes 128, 64 and 128, with 64 being the length of the final code; the depth of the autoencoder and the hidden layer sizes are hyperparameters that may need tuning. The length-64 code is then classified with the SVDD model, in which the classification label of normal user behavior is 0, corresponding to the model's single class. A scoring threshold of 1 can be chosen: any score greater than 1 does not belong to normal user behavior, i.e., the data is judged to be forged by an attacker, while scores below 1 indicate normal user data.
For the SVDD model in this embodiment, the training data can all come from normal user data, which is easy to obtain and accurately labeled, so online data-set expansion and model iteration can be performed directly. For example, normal user data may be drawn from data generated by intranet IP segments, IP white lists, and user white lists; alternatively, by analyzing the website's daily traffic patterns, a natural day with normal traffic can be found and all of its data used as normal user data. Normal traffic means there is no sudden traffic spike and the traffic follows its long-term regularities, such as morning and evening peaks and a late-night trough. Training hyperparameters required by the SVDD model include, but are not limited to, the choice of kernel function and the soft margin coefficient; the kernel function in turn has secondary hyperparameters such as coefficients and exponents.
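A sketch of this branch under stated assumptions: the 4-layer autoencoder with hidden sizes 128, 64 and 128 compresses the length-300 vector to a 64-dimensional code; since scikit-learn ships no SVDD implementation, the closely related OneClassSVM (equivalent to SVDD for an RBF kernel, up to scoring conventions) stands in for it here, so the score threshold of 1 would need re-calibration, and the nu value is an assumed hyperparameter:

```python
import torch
import torch.nn as nn
from sklearn.svm import OneClassSVM   # stand-in: scikit-learn has no SVDD

class BehaviorAutoencoder(nn.Module):
    """4-layer autoencoder; hidden sizes 128-64-128, code length 64."""
    def __init__(self, in_dim: int = 300):   # 100 (x, y, t) samples flattened
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, 64))
        self.decoder = nn.Sequential(nn.Linear(64, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code      # reconstruction + 64-dim code

# Train the autoencoder on normal-user traces (MSE reconstruction loss),
# encode every trace to 64 dimensions, then fit the one-class model on
# the codes alone:
ocsvm = OneClassSVM(kernel="rbf", nu=0.05)   # nu is an assumed hyperparameter
# ocsvm.fit(normal_codes)                    # normal_codes: (n, 64) array
# score = -ocsvm.decision_function(codes)    # higher => less like normal users
# attacker = score > SCORE_THRESHOLD         # the example threshold above: 1
```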
Further, before classification with the single classification model, the method may first discriminate in a simpler and faster way. Illustratively, before step 203b, the method may further include: judging, from the user's verification code operation behavior data, whether the slider drag trajectory is related to the slider placement position, and/or whether the character click positions match the relative positions of the characters in the picture; if the drag trajectory is judged unrelated to the slider placement position, or the click positions do not match the characters' relative positions, the user is judged to be an attacker. This optional step gives a simple, fast verdict on whether the user is an attacker and improves judgment efficiency to a certain degree.
For example, the verification code operation behavior data first undergoes simple rule verification, such as checking that the slider drag trajectory is related to the slider placement position and that the character click positions match the relative positions of the characters in the picture. If rule verification fails, the behavior is directly judged to be an attacker's. A certain error-tolerance threshold needs to be added to the rule verification to cope with data acquisition errors that may occur in a real production environment.
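An illustrative sketch of such rule verification with an error-tolerance threshold; the 15-pixel tolerance and all parameter names are assumptions:

```python
def passes_basic_rules(slider_track, slider_target_x, clicks, char_boxes,
                       tol_px=15):
    """Cheap rule checks run before the single classification model.
    slider_track: [(x, y), ...] drag samples; clicks: [(x, y), ...];
    char_boxes: [(x0, y0, x1, y1), ...] expected character positions."""
    if slider_track:
        end_x = slider_track[-1][0]
        # the drag must actually end near where the slider belongs
        if abs(end_x - slider_target_x) > tol_px:
            return False
    for (cx, cy), (x0, y0, x1, y1) in zip(clicks, char_boxes):
        # each click must fall inside (or near) its character's box
        if not (x0 - tol_px <= cx <= x1 + tol_px and
                y0 - tol_px <= cy <= y1 + tol_px):
            return False
    return True
```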
Based on steps 203a and 203b, as shown in FIG. 3, after the user behavior data is acquired it can be segmented into page browsing behavior and verification code operation behavior; the page browsing behavior is then classified with the LSTM + LR models, and the verification code operation behavior is processed with the autoencoder + SVDD single classification model, each yielding a classification result. The final judgment, i.e., whether the user is an attacker, is obtained by fusing the two results. Thus, even if an attacker cracks the verification code by some means, his behavior is still effectively detected because it differs greatly from a normal user's.
204. Performing cluster analysis by using the similarity between counterfeit behaviors according to the user's operation behavior data on the verification code.
Assume an attacker can forge request data arbitrarily and use real user behavior to mount the attack. Currently existing behavior verification models are very vulnerable to such attacks, because these behaviors originate from real users and are naturally classified by the model as human rather than machine, allowing the attacker to bypass behavior verification. Worse, many behavior verification systems update online: once such attacks are discovered automatically or manually, the aforementioned real user data flows into the machine-labeled data, polluting the training set, directly making verification difficult for normal users and markedly increasing the false-positive rate of behavior interception.
To solve this problem, this embodiment can use the similarity between counterfeit behaviors for cluster analysis. When an attacker attacks with real user data, the acquisition channels for such data are limited and few, unlike software-generated random trajectories. Attackers typically make minor modifications to one piece, or one set, of human operation data to produce new counterfeit behaviors. Behaviors generated this way, however, tend to share similarities that a machine learning model can discover, so they can be effectively grouped into one cluster.
As a specific implementation, step 204 may include: first, collecting behavior codes of the predetermined encoding length and clustering them; then, for a first verification request received after clustering, computing the distance between its behavior code and each cluster center; if there is a target cluster center whose distance is below a preset distance threshold, binding the user IP address that sent the first verification request to the user IP addresses contained in the cluster of that center, the bound addresses being merged to compute a joint access frequency; if the joint access frequency is greater than a preset frequency threshold, marking the IP address that sent the first verification request as suspicious; if the same user IP address is marked suspicious more times than a preset count threshold, adding it to a blacklist; finally, if the user's IP address is in the blacklist, judging that the number of times the user has been marked suspicious is greater than the preset count threshold.
In this optional way, an attacker can be identified accurately even when he imitates real user data, preventing an attacker who has cracked the front-end code from passing verification directly with real user behaviors, and enabling a more comprehensive defense against black-industry attacks.
For example, the clustering of the collected behavior codes may specifically include: clustering them with the Mean-Shift algorithm to obtain n cluster centers, where n is determined by the window size of the Mean-Shift algorithm, and the window size is tuned according to the data characteristics of the verification code and the configured security defense level.
For example, length-64 behavior codes are collected: first about 10 minutes of user data (the length-64 behavior codes) is gathered, then clustered with the Mean-Shift algorithm to obtain n cluster centers, where n depends on the algorithm's window size, which must be tuned to the specific verification code's data characteristics and the desired monitoring effect. For each subsequent user request, the distance between its behavior code and each cluster center is computed and the nearest center found. If that distance is below the set threshold, the behavior is considered to belong to that center's cluster; the corresponding user IP address is then bound to the IPs of the other behaviors in the cluster, and the bound IPs are merged to compute an access frequency. If the merged frequency exceeds a certain threshold, the newly arrived IP is marked suspicious, and an IP marked suspicious multiple times is added to the IP blacklist.
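A sketch of this clustering and IP-binding flow using scikit-learn's MeanShift; the bandwidth (window size) and distance threshold are assumed values that would be tuned per verification code:

```python
from collections import defaultdict
import numpy as np
from sklearn.cluster import MeanShift

DIST_THRESHOLD = 1.5             # assumed preset distance threshold
cluster_ips = defaultdict(set)   # cluster index -> bound user IPs

def cluster_codes(codes, bandwidth=2.0):
    """Cluster ~10 minutes of length-64 behavior codes; the bandwidth
    (Mean-Shift window size) is an assumed value."""
    ms = MeanShift(bandwidth=bandwidth)
    ms.fit(codes)                # codes: (n, 64) array
    return ms.cluster_centers_

def bind_ip(centers, code, ip):
    """Bind a later request's IP to the nearest cluster, if close enough;
    return the merged IP set whose access frequency is counted jointly."""
    dists = np.linalg.norm(centers - code, axis=1)
    nearest = int(dists.argmin())
    if dists[nearest] < DIST_THRESHOLD:
        cluster_ips[nearest].add(ip)
        return cluster_ips[nearest]
    return {ip}
```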
205. If, according to the cluster analysis result, the number of times the user has been marked suspicious is greater than the preset count threshold, replacing the verification code with a new verification code of increased operation difficulty, re-acquiring the user's page browsing behavior data before operating the new verification code and the operation behavior data on it, and reclassifying the user with the neural network model and the single classification model.
During reclassification, the model classification thresholds are lowered to increase the probability that the user is classified as an attacker.
For example, if the user's IP address is in the blacklist, the verification code is replaced with one of increased operation difficulty (for example, digits in a regular font to be clicked in a picture are replaced with deformed characters that the user must distinguish carefully), and the preset probability threshold and preset scoring threshold in the models are lowered; finally, whether the user is an attacker is determined from the classification result of the test.
The test procedure in this embodiment may include: after replacing the verification code with one of increased operation difficulty, re-collecting the user behavior data, specifically the pre-verification page browsing behavior and the verification code operation behavior; then, following the classification processes of steps 203a and 203b, classifying the re-collected page browsing behavior with the LSTM + LR models (with the referenced preset probability threshold lowered) and processing the verification code operation behavior with the autoencoder + SVDD single classification model (with the referenced preset scoring threshold lowered); finally, determining whether the user is an attacker from the fused classification result. A non-real user is easier to classify as an attacker under this configuration. For instance, for a user IP address added to the blacklist, a less user-friendly verification code is substituted before the test, and the behavior classification models' score thresholds are lowered so that the behavior is classified as an attacker's with higher probability; if the behavior is subsequently so classified, the user is determined to be an attacker.
Further optionally, to speed up deciding whether a user IP should be blacklisted, before collecting the behavior codes the method in this embodiment may further include: computing the MD5 value of the second mouse behavior sequence; if the MD5 value equals that of the verification code operation behavior sequence of a previously received second verification request, adding the user's IP address to the blacklist. For example, the MD5 value of the verification code operation behavior sequence in step 203b is computed and cached; if the same MD5 value has occurred before, the requesting user IP is added directly to the blacklist. This step ensures that an attacker cannot pass repeated verification with simple, identical behavior.
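A sketch of this replay check; the serialization format and names are assumptions:

```python
import hashlib

seen_md5 = {}   # MD5 of a captcha-operation sequence -> first source IP

def is_replay(captcha_ops, ip, blacklist):
    """Identical operation sequences can only come from replayed data."""
    raw = ",".join(f"{e.op_type}:{e.x}:{e.y}:{e.t_ms}" for e in captcha_ops)
    digest = hashlib.md5(raw.encode()).hexdigest()
    if digest in seen_md5:
        blacklist.add(ip)        # same MD5 seen before: direct blacklisting
        return True
    seen_md5[digest] = ip
    return False
```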
Based on the classification in steps 203a and 203b and the cluster analysis in steps 204 and 205, as shown in FIG. 4, the user behavior data can be segmented after acquisition into page browsing behavior and verification code operation behavior. For the page browsing behavior, mouse and scroll-wheel events are sampled at fixed time intervals and keyboard data is truncated to a maximum fixed length to extract features, which are then classified with the LSTM + LR models to obtain one classification result. For the verification code operation behavior, an autoencoder produces a behavior code as the extracted feature, which the SVDD single classification model classifies to obtain another result. The behavior codes can also be clustered with the Mean-Shift algorithm and related IPs bound according to the clustering result; the access frequency of bound IPs is then counted jointly, a newly arrived IP is marked suspicious if the joint frequency exceeds a specified threshold, and an IP marked suspicious multiple times is blacklisted. Finally, the blacklist is fed back to the front end to raise the verification difficulty for blacklisted IPs, and fed back to the classification models to make it harder for blacklisted IPs to be classified as normal users.
206. Determining whether the user is an attacker by fusing the classification results.
Optionally, step 206 may specifically include: performing a weighted-summation calculation over the classification results of the neural network model and the single classification model to determine whether the user is an attacker. For example, weights are configured according to the verification accuracy of the two models: the higher the accuracy, the higher the corresponding weight; the lower the accuracy, the lower the weight. Determining the user's final classification in this optional manner takes the test accuracy of each classifier into account simultaneously and yields a more accurate classification result.
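A minimal sketch of this accuracy-weighted fusion, assuming both model outputs have been normalized to [0, 1] beforehand and using placeholder accuracy values (the numbers and names below are assumptions):

```python
def fuse_classifications(page_prob, captcha_score,
                         acc_page=0.95, acc_captcha=0.90,
                         fused_threshold=0.5):
    """Weight each model's output by its validation accuracy and
    compare the weighted sum against a fused decision threshold."""
    w_page = acc_page / (acc_page + acc_captcha)
    w_captcha = acc_captcha / (acc_page + acc_captcha)
    fused = w_page * page_prob + w_captcha * captcha_score
    return fused > fused_threshold  # True -> treat the user as an attacker
```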
After the user is determined to be an attacker, security defense processing may optionally be performed on that user, specifically including: restricting the processing of verification code verification requests sent by the user; or requiring the user to perform mobile phone verification or to answer a security question instead. For example, on certain specific pages the user is required to switch to mobile phone verification, or to answer a security question, which greatly increases the cost of brute-force access for black-market users.
The neural network model is a binary classification model, so positive and negative sample sets are needed to train it and to guarantee training accuracy. Take positive samples as normal user data and negative samples as attacker data as an example. Existing schemes cannot obtain reliable attacker labels. In the ongoing contest with black-market attackers, behavior data of ordinary users is very easy to obtain: for example, data from a company's intranet IP segment can be selected as normal user data, or a subset of users or devices can be whitelisted and the data they generate used as normal user data, so positive samples are easy to collect. Attacker data, by contrast, is very difficult to obtain and often requires extensive manual intervention for auxiliary judgment. Some schemes, once initial model training is complete, add new data judged online as "attacker" to the attacker data set for further training. If the model misjudges, however, erroneous data enters the data set, and training on a data set containing errors raises the error rate further. The further the model drifts down this wrong path, the less reliable the negative-sample (attacker) data source becomes.
To solve the above problem and to meet this embodiment's requirement for automatic model updating, the method of this embodiment may optionally further include: storing the classification results of different users and their corresponding user behavior data in a user behavior log, which keeps the user behavior data recorded in different time periods; periodically filtering the behavior data of normal users out of the user behavior log based on an intranet IP segment, and/or an IP white list, and/or a user white list; updating, with the periodically obtained normal-user behavior data, the original user data set required for training the single classification model, so that the single classification model is trained on the updated user data set; updating the online model with the single classification model if it passes the test; detecting the user behavior data of the current time period with the updated single classification model, extracting the behavior data classified as attacker behavior, and adding it to the original attacker data set as an update; and finally training the LSTM model and the LR model simultaneously on the updated user data set and the updated attacker data set, and updating the online models with the LSTM and LR models that pass the test.
For example, a training-module script is written in advance; the training module updates the online models (the LSTM and LR models, and the SVDD single classification model) daily, responding to newly produced counterfeit behaviors of attackers. Training is based on the daily log of verification behavior data. First, data that is certain to be normal user behavior is filtered out based on the intranet IP segment, the user white list, the IP white list, and so on. The original user data set is updated with the filtered user behavior data, and the SVDD single classification model is trained on the updated data set. The model is then verified with a pre-split test data set containing both user-labelled and attacker-labelled data; its recall and accuracy are measured, and the online model is updated if the results reach the standard. The SVDD model then scans all behavior data of the current day; all behavior data classified as attacker data is extracted and added to the attacker data set. Finally, the LSTM + LR model is trained on the updated user data set and attacker data set, its recall and accuracy are measured on a pre-split test set, and the online model is updated if it reaches the standard.
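The daily update loop might be sketched as follows. Note that scikit-learn's OneClassSVM is used here as a stand-in for the SVDD single classification model (the two are closely related but not identical), and the log record format, feature shapes, and hyperparameters are all assumptions of this sketch:

```python
from sklearn.svm import OneClassSVM  # stand-in for the SVDD single-class model

def daily_update(logs, whitelist_ips, user_set, attacker_set):
    """One iteration of the daily retraining loop (illustrative sketch).

    logs: list of dicts like {"ip": str, "features": list[float]}
    user_set / attacker_set: lists of feature vectors, updated in place.
    """
    # 1. Keep only behavior vectors from whitelisted/intranet sources.
    trusted = [x["features"] for x in logs if x["ip"] in whitelist_ips]
    user_set.extend(trusted)

    # 2. Retrain the single-class model on the enlarged user data set.
    ocsvm = OneClassSVM(gamma="auto", nu=0.05).fit(user_set)

    # 3. Score the remaining traffic; predictions of -1 are outliers,
    #    i.e. candidate attacker behavior, which extends the negative set.
    rest = [x["features"] for x in logs if x["ip"] not in whitelist_ips]
    if rest:
        outliers = [f for f, y in zip(rest, ocsvm.predict(rest)) if y == -1]
        attacker_set.extend(outliers)

    # 4. The binary LSTM+LR model would now be retrained on
    #    (user_set, attacker_set) and swapped in if it passes the test set.
    return user_set, attacker_set
```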
The main improvement of the model training method in this embodiment is the automatic extension/collection of the model training set, e.g. updating every hour or every day. The LSTM + LR model is a binary classifier and requires positive and negative sample sets. The SVDD model is a single classification model and requires only one sample set. At present, sample data of normal users can be obtained from a white list, but data of blacklisted attackers is hard to obtain and error-prone, which makes the LSTM + LR binary classifier difficult to train. Therefore, the trained SVDD single classification model is used to identify blacklisted attacker data; although such data is scarce compared with normal user data, its accuracy is guaranteed, and it can be supplied to the LSTM + LR binary classifier for training.
However, when the numbers of positive and negative samples differ greatly, the training samples are unbalanced, so the training method can be optimized further by sampling the positive and negative samples at different rates. Correspondingly, training the neural network model with the updated user data set and the updated attacker data set specifically includes: if the numbers of positive and negative samples in the updated user data set and the updated attacker data set are unbalanced, collecting positive and negative samples at different sampling rates to obtain a training set that meets a preset positive-negative balance condition (e.g. the numbers of positive and negative samples are equal, or their difference is below a certain threshold); and training the neural network model on the training set that meets this condition.
For example, when positive samples are plentiful and negative samples are scarce, the positive samples may be randomly reduced (e.g. discarding about 5% of them) or the negative samples increased tenfold, so as to reduce the imbalance between positive and negative samples in the training set and thus its influence on model training.
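A hypothetical resampling policy along these lines (the 10% ratio threshold, the function name, and the use of sampling with replacement are assumptions of this sketch, not prescribed values):

```python
import random

def balance_samples(positives, negatives, ratio_threshold=0.1):
    """Oversample whichever class is badly outnumbered until the
    two classes are roughly balanced (illustrative policy only)."""
    if negatives and len(negatives) < ratio_threshold * len(positives):
        # Far fewer attacker samples: oversample negatives with replacement.
        negatives = random.choices(negatives, k=len(positives))
    elif positives and len(positives) < ratio_threshold * len(negatives):
        positives = random.choices(positives, k=len(negatives))
    return positives, negatives
```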
As shown in fig. 5, a complete training pass may first collect the user behavior logs of the current day and filter out the data determined to be normal user behavior according to the intranet IP segment, IP white list, user white list, and so on, thereby expanding the user data set; train the SVDD single classification model on the expanded user data set; verify the accuracy of the SVDD single classification model on the test set and, if it reaches the standard, analyze the day's user behavior with the trained model to find attacker data among the non-user data; extend the attacker data set with the attacker data so obtained; and finally train the LSTM + LR model on the updated user data set and attacker data set, verify its accuracy on the test set, and update the online model with the LSTM + LR model if it reaches the standard.
This embodiment provides an automatic feature extraction scheme based on a deep recurrent network, avoiding the limitations of manual feature extraction. It also proposes that the collection of user behavior not be limited to the verification process: user behavior is split into page browsing behavior and verification code operation behavior, two sequences with markedly different characteristics, which are classified with different models. The embodiment further proposes classifying user data with a single classification model, avoiding the difficulty of collecting attacker data, and provides a method for analyzing the similarity of attackers' counterfeit behaviors with a clustering model, preventing an attacker who has cracked the front-end code from passing false verification by directly replaying real user behavior.
Further, as a specific implementation of the method shown in fig. 1 and fig. 2, this embodiment provides a security processing apparatus based on user behavior, as shown in fig. 6, the apparatus includes: an acquisition module 31, a classification module 32, an analysis module 33, and a determination module 34.
The acquiring module 31 may be configured to acquire page browsing behavior data before a user operates a verification code, and operation behavior data of the verification code;
the classification module 32 is configured to classify the user according to the page browsing behavior data by using a neural network model, where the neural network model is obtained by training on page browsing behavior data of normal users and attackers before verification code operation; and,
the classification module 32 may be further configured to classify the user according to the operation behavior data of the user by using a single classification model, where the single classification model is obtained by training on verification code operation behavior data of normal users;
the analysis module 33 is configured to perform cluster analysis by using similarity between counterfeit behaviors according to the operation behavior data of the user;
the classification module 32 is further configured to, if it is determined from the cluster analysis result that the number of times the user has been set as suspicious is greater than a preset number threshold, replace the verification code with a new verification code that increases the difficulty of user operation, re-acquire the user's page browsing behavior data before operating the new verification code and the operation behavior data of the new verification code, and reclassify the user with the neural network model and the single classification model, wherein during reclassification the model classification thresholds are lowered to increase the probability that the user is classified as an attacker;
and the determining module 34 is configured to determine whether the user is an attacker by fusing the classification results.
In a specific application scenario, the classification module 32 may be specifically configured to obtain a first mouse behavior sequence and/or a keyboard input behavior sequence from the page browsing behavior data of the user; perform mouse operation feature extraction on the first mouse behavior sequence according to the mouse operation type, the coordinates of the mouse, and the event occurrence time, and determine a first probability value that the user is an attacker according to the mouse operation features, in combination with historical mouse operation features of normal users and attackers before verification code operation, where the mouse operation type at least comprises one or more of clicking, pressing, lifting, moving out of a boundary, moving into a boundary, and scrolling; and/or perform keyboard input feature extraction on the keyboard input behavior sequence according to the ASCII codes of the characters input and the times of the keyboard inputs, so as to determine a second probability value that the user is an attacker according to the keyboard input features, in combination with historical keyboard input features of normal users and attackers before verification code operation; and determine the classification result of the neural network model according to the first probability value and/or the second probability value.
In a specific application scenario, the classification module 32 may be further configured to collect continuous mouse operation records at a fixed sampling interval to obtain the first mouse behavior sequence; and/or intercept the longest continuous keyboard input record and obtain the keyboard input behavior sequence according to a preset maximum input length.
In a specific application scenario, preferably, the fixed sampling interval is 100ms, and the preset maximum input length is 64.
In a specific application scenario, the classification module 32 may be further configured to perform mouse operation feature extraction on the first mouse behavior sequence by using a long short-term memory (LSTM) network model, so that each frame of the first mouse behavior sequence is represented as a first feature vector whose first element is the mouse operation type, whose second and third elements are the x and y coordinates of the mouse, and whose fourth element is the event occurrence time; the classification module 32 may be further configured to input the mouse operation features into a logistic regression (LR) model and classify them with reference to historical mouse operation features of normal users and attackers before verification code operation, thereby obtaining the first probability value.
In a specific application scenario, the classification module 32 is further specifically configured to perform keyboard input feature extraction on the keyboard input behavior sequence by using an LSTM model, so that each frame of the keyboard input behavior sequence is represented as a second feature vector whose first element is the ASCII code of the character input and whose second element is the time of the keyboard input; the classification module 32 may be further configured to input the keyboard input features into an LR model and classify them with reference to historical keyboard input features of normal users and attackers before verification code operation, thereby obtaining the second probability value.
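Assuming raw events arrive as dictionaries with type, coordinate, character, and timestamp fields (an assumption of this sketch, not a prescribed format), the feature frames described above, with the 100 ms sampling interval and the maximum input length of 64 mentioned in this scenario, might be assembled as follows:

```python
def mouse_frames(events, interval_ms=100):
    """Resample a time-ordered mouse event stream at a fixed interval;
    each frame is [op_type, x, y, t] as described above."""
    frames, next_t = [], events[0]["t"]
    for e in events:
        if e["t"] >= next_t:
            frames.append([e["type"], e["x"], e["y"], e["t"]])
            next_t += interval_ms
    return frames

def keyboard_frames(keys, max_len=64):
    """Each frame is [ASCII code, t]; the run is truncated to the
    preset maximum input length of 64."""
    return [[ord(k["char"]), k["t"]] for k in keys][:max_len]
```

These frame sequences would then be fed to the LSTM feature extractor, whose output the LR model scores against the historical features of normal users and attackers.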
In a specific application scenario, preferably, the LSTM model and the LR model are obtained by pre-training according to the type of the verification code, wherein different LSTM models and LR models are pre-trained respectively for different verification code types.
In a specific application scenario, the classification module 32 is further specifically configured to perform a weighted summation on the first probability value and the second probability value; and if the probability value obtained by weighted summation is greater than a preset probability threshold value, judging the user as an attacker.
In a specific application scenario, preferably, the preset probability threshold is 0.5.
In a specific application scenario, the classification module 32 may be further configured to obtain a second mouse behavior sequence from the operation behavior data of the user; extract a vector containing mouse coordinate-time pairs from the second mouse behavior sequence; encode that vector with an autoencoder to obtain a behavior code of a predetermined code length; classify the behavior code of the predetermined code length with a single classification model to obtain a score for the user being an attacker, where the single classification model is trained in advance on verification code operation behavior data of normal users; and determine the user to be an attacker if the score is greater than a preset score threshold.
In a specific application scenario, preferably, the predetermined coding length is 64, the single classification model is an SVDD model, and the preset scoring threshold is 1.
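As a hypothetical illustration of the encoding step, the following PyTorch sketch compresses a flattened coordinate-time vector into a 64-dimensional behavior code; the layer sizes, the input dimension (here 100 coordinate-time triples), and the class name are assumptions. The resulting code would then be scored by the SVDD single classification model against the preset score threshold:

```python
import torch
import torch.nn as nn

class BehaviorEncoder(nn.Module):
    """Compresses a flattened (x, y, t) trajectory into a 64-dim
    behavior code; trained by minimizing reconstruction error."""
    def __init__(self, in_dim=300, code_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))

    def forward(self, x):
        code = self.encoder(x)          # 64-dim behavior code
        return code, self.decoder(code)  # reconstruction for training loss

# Usage: code, recon = BehaviorEncoder()(torch.randn(1, 300))
```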
In a specific application scenario, the classification module 32 may be further configured to judge, from the operation behavior data of the user, whether the dragging track of the slider corresponding to the verification code is related to the slider's placement position; and/or whether the position clicked for a character corresponding to the verification code matches the character's relative position in the picture; and to determine the user to be an attacker if the dragging track is judged to be unrelated to the slider's placement position, or the clicked position does not match the character's relative position in the picture.
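A minimal sketch of the slider-track plausibility check; the pixel tolerance and the representation of the track as (x, y, t) samples are assumptions of this sketch:

```python
def slider_track_plausible(track, target_x, tolerance=10):
    """Check that the drag track actually ends near the slider's
    required placement position; an unrelated track suggests scripting."""
    if not track:
        return False
    end_x = track[-1][0]  # track is a list of (x, y, t) samples
    return abs(end_x - target_x) <= tolerance
```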
In a specific application scenario, the analysis module 33 may be specifically configured to collect the behavior codes of the predetermined code length; cluster the collected behavior codes; obtain a first verification request received after clustering and calculate the distance between the behavior code corresponding to the first verification request and each cluster center; if there is a target cluster center whose distance is smaller than a preset distance threshold, bind the user IP address that sent the first verification request with the user IP addresses contained in the cluster corresponding to that target cluster center, the bound user IP addresses being combined for a joint access-frequency count; if the joint access frequency is greater than a preset frequency threshold, set the user IP address that sent the first verification request as suspicious; if the same user IP address is set as suspicious more times than a preset number threshold, add it to the blacklist; and if the user's IP address is already in the blacklist, determine that the number of times the user has been set as suspicious is greater than the preset number threshold.
In a specific application scenario, the analysis module 33 may be further configured to cluster the collected behavior codes by using the Mean-Shift algorithm to obtain n cluster centers, where n is determined by the window size of the Mean-Shift algorithm, and the window size is tuned according to the data characteristics of the verification code and the configured security defense level.
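A sketch of this clustering-and-binding step using scikit-learn's MeanShift, where the bandwidth parameter plays the role of the window size described above; the bandwidth value, data shapes, and function name are assumptions:

```python
from collections import defaultdict
import numpy as np
from sklearn.cluster import MeanShift

def bind_ips_by_behavior(codes, ips, window=2.0):
    """Cluster 64-dim behavior codes with Mean-Shift and group the
    submitting IPs by cluster; IPs in one cluster are bound together
    for joint access-frequency counting."""
    ms = MeanShift(bandwidth=window).fit(np.asarray(codes))
    bound = defaultdict(set)
    for ip, label in zip(ips, ms.labels_):
        bound[label].add(ip)
    return ms.cluster_centers_, bound
```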
In a specific application scenario, the analysis module 33 may be further configured to calculate an MD5 value of the second mouse behavior sequence before the collecting of the behavior code with the predetermined code length; and if the MD5 value is the same as the MD5 value of the verification code operation behavior sequence corresponding to the previously received second verification request, adding the IP address of the user into the blacklist.
In a specific application scenario, the obtaining module 31 may be specifically configured to obtain the user behavior data of the user over the time period from when the user opens the page where the verification code is located to when verification of the verification code is completed; and to cut that user behavior data, taking the time point at which the user starts the verification code verification operation as the cutting point, so as to obtain the operation behavior data and the page browsing behavior data of the user.
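A minimal sketch of this segmentation, assuming each behavior event carries a timestamp field t (an assumption of this sketch, not a prescribed format):

```python
def split_behavior(events, captcha_start_t):
    """Split the full event stream at the moment the user starts the
    captcha, yielding (page-browsing data, captcha-operation data)."""
    browsing = [e for e in events if e["t"] < captcha_start_t]
    operating = [e for e in events if e["t"] >= captcha_start_t]
    return browsing, operating
```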
In a specific application scenario, the apparatus further comprises: the device comprises a storage module and an updating module;
the storage module can be used for storing the classification results of different users and the corresponding user behavior data in a user behavior log;
the obtaining module 31 may be further configured to periodically filter the behavior data of normal users from the user behavior log based on an intranet IP segment, and/or an IP white list, and/or a user white list, where the user behavior log stores different user behavior data recorded in different time periods;
the updating module may be configured to update, according to the periodically obtained behavior data of normal users, the original user data set required for training the single classification model, so that the single classification model is trained on the updated user data set;
the updating module may also be configured to update the online model with the single classification model that passes the test;
the classification module 32 may also be configured to classify the user behavior data in the current time period using the updated single classification model, extract behavior data classified as an attacker, and add the behavior data to the original attacker data set for updating;
the updating module can also be used for training the neural network model by utilizing the updated user data set and the updated attacker data set; and updating the model by using the neural network model which is tested to reach the standard.
In a specific application scenario, the updating module may be specifically configured to, if the numbers of positive and negative samples in the updated user data set and the updated attacker data set are not equal, perform sample acquisition on the positive and negative samples at different sampling rates to obtain a training set meeting a preset positive and negative sample balancing condition; and training the neural network model by using the training set which accords with the preset positive and negative sample balance conditions.
In a specific application scenario, the determining module 34 is specifically configured to perform weighted summation calculation on the classification results of the neural network model and the single classification model to determine whether the user is an attacker.
In a specific application scenario, the apparatus may further include: a defense module;
the defense module may be configured to restrict the processing of verification code verification requests sent by the user; or to require the user to perform mobile phone verification or to answer a security question instead.
It should be noted that, for other corresponding descriptions of the functional units involved in the security processing apparatus based on user behavior provided in this embodiment, reference may be made to the corresponding descriptions of fig. 1 and fig. 2, which are not repeated here.
Based on the methods shown in fig. 1 and fig. 2, correspondingly, the present embodiment further provides a storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the security processing method based on user behavior shown in fig. 1 and fig. 2.
Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash drive, or a removable hard disk) and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods of the embodiments of the present application.
Based on the method shown in fig. 1 and fig. 2 and the virtual device embodiment shown in fig. 6, in order to achieve the above object, an embodiment of the present application further provides a security processing device based on user behavior, which may specifically be a personal computer, a server, a tablet computer, a smart phone, or other network devices, and the device includes a storage medium and a processor; a storage medium for storing a computer program; a processor for executing a computer program to implement the user behavior based security processing method as described above with reference to fig. 1 and 2.
Optionally, the entity device may further include a user interface, a network interface, a camera, a Radio Frequency (RF) circuit, a sensor, an audio circuit, a WI-FI module, and the like. The user interface may include a Display screen (Display), an input unit such as a keypad (Keyboard), etc., and the optional user interface may also include a USB interface, a card reader interface, etc. The network interface may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), etc.
It will be understood by those skilled in the art that the physical device structure provided in this embodiment does not constitute a limitation on the physical device, which may include more or fewer components, combine certain components, or arrange the components differently.
The storage medium may further include an operating system and a network communication module. The operating system is a program that manages the hardware and software resources of the above-described physical devices, and supports the operation of the information processing program as well as other software and/or programs. The network communication module is used for realizing communication among components in the storage medium and communication with other hardware and software in the information processing entity device.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present application can be implemented in software on a necessary general-purpose hardware platform, or in hardware. Applying the scheme of this embodiment, the verification behavior data submitted by a user or an attacker is characterized automatically by a deep model. User behavior data is collected before the user even begins verification, and is split into two dimensions, page behavior and verification code behavior, which are detected in different ways. The verification code behavior is classified with an anomaly detection model; since that model is a single classification model, only one class of data is needed for training. Because normal user data is very easy to obtain while attacker data is hard to label, classifying with this model presents no data collection difficulty. It is further assumed that an attacker can forge request data arbitrarily and mount attacks with real user behavior, so the scheme of this embodiment performs cluster analysis on the similarity between counterfeit behaviors. Although an attacker may access from thousands of IPs, making it impossible for the website to pin down their presence individually, those IPs can be bound together through the similarity of their behavior. Thus, even when an attacker replays real user behavior (their own operations or another user's) that the classification models cannot intercept, the attacker's presence can still be discovered through similarity clustering.
Those skilled in the art will appreciate that the figures are merely schematic representations of one preferred implementation scenario and that the blocks or flow diagrams in the figures are not necessarily required to practice the present application. Those skilled in the art will appreciate that the modules in the devices in the implementation scenario may be distributed in the devices in the implementation scenario according to the description of the implementation scenario, or may be located in one or more devices different from the present implementation scenario with corresponding changes. The modules of the implementation scenario may be combined into one module, or may be further split into a plurality of sub-modules.
The above application serial numbers are for description purposes only and do not represent the superiority or inferiority of the implementation scenarios. The above disclosure is only a few specific implementation scenarios of the present application, but the present application is not limited thereto, and any variations that can be made by those skilled in the art are intended to fall within the scope of the present application.

Claims (12)

1. A security processing method based on user behavior is characterized by comprising the following steps:
acquiring page browsing behavior data before a user operates a verification code and operation behavior data of the verification code;
classifying the user by using a neural network model according to the page browsing behavior data, wherein the neural network model is obtained by training on page browsing behavior data of normal users and attackers before verification code operation; and,
classifying the user by using a single classification model according to the operation behavior data of the user, wherein the single classification model is obtained by training based on the verification code operation behavior data of a normal user;
acquiring a behavior code according to the operation behavior data of the user, performing cluster analysis on the behavior code, binding related user IP addresses according to a cluster analysis result, and performing joint counting on the access frequency of the bound user IP addresses;
if the number of times that the user is set as suspicious is larger than a preset number threshold according to the joint counting result, reclassifying the user by using the neural network model and the single classification model, wherein when the neural network model and the single classification model are reclassified, the model classification threshold is reduced to increase the probability that the user is classified as an attacker;
and determining whether the user is an attacker or not by fusing the classification results of the neural network model and the single classification model.
2. The method according to claim 1, wherein the acquiring of the operation behavior data of the user on the verification code and the page browsing behavior data before the verification code operation specifically comprises:
acquiring user behavior data of the user in a time period from the time when the user opens the page where the verification code is located to the time when verification of the verification code is completed;
and cutting the user behavior data of the user by taking the time point when the user starts the verification code verification operation as a cutting point to obtain the operation behavior data and the page browsing behavior data of the user.
3. The method of claim 2, further comprising:
storing classification results of different users and respective corresponding user behavior data in a user behavior log;
regularly filtering from a user behavior log based on an intranet IP section and/or an IP white list and/or a user white list to obtain behavior data of a normal user;
updating an original user data set required by the corresponding training of the single classification model according to the regularly obtained behavior data of the normal user, so that the single classification model is trained by using the updated user data set;
updating the model by using the single classification model reaching the test standard;
classifying the user behavior data in the current time period by using the updated single classification model, extracting the behavior data classified as the attackers, and adding the behavior data into the original attacker data set for updating;
training the neural network model by using the updated user data set and the updated attacker data set;
and updating the model by using the neural network model which is tested to reach the standard.
4. The method according to claim 3, wherein the training the neural network model using the updated user dataset and the updated attacker dataset specifically comprises:
if the number of the positive and negative samples of the updated user data set and the updated attacker data set is not balanced, acquiring samples of the positive and negative samples by adopting different sampling rates to obtain a training set meeting preset positive and negative sample balance conditions;
and training the neural network model by using the training set which accords with the preset positive and negative sample balance conditions.
5. The method according to claim 1, wherein the classifying the user using a neural network model according to the page view behavior data specifically comprises:
acquiring a first mouse behavior sequence and/or a keyboard input behavior sequence from the page browsing behavior data of the user;
according to the mouse operation type, the coordinates of the mouse and the event occurrence time, performing mouse operation feature extraction on the first mouse behavior sequence, and determining a first probability value that the user is an attacker according to the mouse operation features, in combination with historical mouse operation features of normal users and attackers before verification code operation, wherein the mouse operation type at least comprises: one or more of clicking, pressing, lifting, moving out of a boundary, moving into a boundary, and scrolling; and/or,
extracting keyboard input features of the keyboard input behavior sequence according to ASCII codes corresponding to characters input by a keyboard and time corresponding to keyboard input, and determining a second probability value of the user being an attacker according to the keyboard input features and by combining historical keyboard input features before normal user and attacker verification code operation;
determining a classification result of the neural network model according to the first probability value and/or the second probability value.
6. The method according to claim 5, wherein the determining the classification result of the neural network model according to the first probability value and/or the second probability value comprises:
weighted summing the first probability value and the second probability value;
and if the probability value obtained by weighted summation is greater than a preset probability threshold value, judging the user as an attacker, wherein the preset probability threshold value is 0.5.
7. The method according to claim 1, wherein the classifying the user using a single classification model according to the operational behavior data of the user specifically comprises:
acquiring a second mouse behavior sequence from the operation behavior data of the user;
extracting a vector containing a mouse coordinate time pair from the second mouse behavior sequence;
encoding the vector containing the mouse coordinate time pair by using a self-encoder to obtain a behavior code with a preset encoding length;
classifying by using the single classification model according to the behavior code with the preset code length to obtain the score of the user as an attacker;
if the score is larger than a preset score threshold value, the user is judged to be an attacker, and the preset score threshold value is 1.
8. The method according to claim 7, wherein the clustering according to the collected behavior codes specifically comprises:
and clustering the collected behavior codes by using a Mean-Shift algorithm to obtain n clustering centers, wherein n is determined by the window size of the Mean-Shift algorithm, and the window size is obtained by adjusting according to the data characteristics of the verification code and the configured security defense level.
9. The method of claim 7, wherein prior to collecting the behavior code, the method further comprises:
calculating MD5 values for the second sequence of mouse behaviors;
and if the MD5 value is the same as the MD5 value of the verification code operation behavior sequence corresponding to the previously received second verification request, adding the IP address of the user into the blacklist.
10. A secure processing apparatus based on user behavior, comprising:
the acquisition module is used for acquiring page browsing behavior data before a user operates the verification code and operation behavior data of the verification code;
the classification module is used for classifying the user by using a neural network model according to the page browsing behavior data, wherein the neural network model is obtained by training on page browsing behavior data of normal users and attackers before verification code operation; and,
the classification module is further used for classifying the user by using a single classification model according to the operation behavior data of the user, wherein the single classification model is obtained by training on verification code operation behavior data of normal users;
the analysis module is used for carrying out clustering analysis by utilizing the similarity between the forged behaviors according to the operation behavior data of the user, and is particularly used for clustering according to the collected behavior codes; acquiring a target clustering center of which the distance from the behavior code corresponding to the first verification request is smaller than a preset distance threshold; binding the user IP address sending the first verification request with the user IP address contained in the corresponding cluster of the target cluster center; if the combined access frequency obtained by combining and calculating the bound user IP addresses is greater than a preset frequency threshold, setting the user IP address sending the first verification request as suspicious;
the classification module is further configured to reclassify the user by using the neural network model and the single classification model if it is determined according to a cluster analysis result that the number of times that the user is set as suspicious is greater than a preset number-of-times threshold, wherein when the neural network model and the single classification model are reclassified, the model classification threshold is lowered to increase the probability that the user is classified as an attacker;
and the determining module is used for determining whether the user is an attacker or not by fusing the classification results of the neural network model and the single classification model.
11. A storage medium on which a computer program is stored, which program, when being executed by a processor, is adapted to carry out the method of any one of claims 1 to 9.
12. A secure processing device based on user behavior, comprising a storage medium, a processor and a computer program stored on the storage medium and executable on the processor, characterized in that the processor implements the method of any of claims 1 to 9 when executing the program.
CN202010800733.9A 2020-06-12 2020-06-12 Safety processing method, device and equipment based on user behaviors Active CN112069485B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010800733.9A CN112069485B (en) 2020-06-12 2020-06-12 Safety processing method, device and equipment based on user behaviors

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010536797.2A CN111428231B (en) 2020-06-12 2020-06-12 Safety processing method, device and equipment based on user behaviors
CN202010800733.9A CN112069485B (en) 2020-06-12 2020-06-12 Safety processing method, device and equipment based on user behaviors

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202010536797.2A Division CN111428231B (en) 2020-06-12 2020-06-12 Safety processing method, device and equipment based on user behaviors

Publications (2)

Publication Number Publication Date
CN112069485A true CN112069485A (en) 2020-12-11
CN112069485B CN112069485B (en) 2024-05-14

Family

ID=71551351

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202010800733.9A Active CN112069485B (en) 2020-06-12 2020-06-12 Safety processing method, device and equipment based on user behaviors
CN202010536797.2A Active CN111428231B (en) 2020-06-12 2020-06-12 Safety processing method, device and equipment based on user behaviors

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202010536797.2A Active CN111428231B (en) 2020-06-12 2020-06-12 Safety processing method, device and equipment based on user behaviors

Country Status (1)

Country Link
CN (2) CN112069485B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112818868A (en) * 2021-02-03 2021-05-18 招联消费金融有限公司 Behavior sequence characteristic data-based violation user identification method and device
CN113298115A (en) * 2021-04-19 2021-08-24 百果园技术(新加坡)有限公司 User grouping method, device, equipment and storage medium based on clustering
CN113536302A (en) * 2021-07-26 2021-10-22 北京计算机技术及应用研究所 Interface caller safety rating method based on deep learning
CN113554515A (en) * 2021-06-26 2021-10-26 陈思佳 Internet financial control method, system, device and medium
CN114978969A (en) * 2022-05-20 2022-08-30 北京数美时代科技有限公司 Self-adaptive monitoring and adjusting method and system based on user behaviors
CN115277068A (en) * 2022-06-15 2022-11-01 广州理工学院 Novel honeypot system and method based on deception defense

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112134837A (en) * 2020-08-06 2020-12-25 瑞数信息技术(上海)有限公司 Method and system for detecting Web attack behavior
CN112487376A (en) * 2020-12-07 2021-03-12 北京明略昭辉科技有限公司 Man-machine verification method and device
CN112804374B (en) * 2021-01-06 2023-11-03 光通天下网络科技股份有限公司 Threat IP identification method, threat IP identification device, threat IP identification equipment and threat IP identification medium
CN113158183A (en) * 2021-01-13 2021-07-23 青岛大学 Method, system, medium, equipment and application for detecting malicious behavior of mobile terminal
CN113014598A (en) * 2021-03-20 2021-06-22 北京长亭未来科技有限公司 Protection method for robot malicious attack, firewall, electronic device and storage medium
CN114462589B (en) * 2021-09-28 2022-11-04 北京卫达信息技术有限公司 Normal behavior neural network model training method, system, device and storage medium
CN114462588B (en) * 2021-09-28 2022-11-08 北京卫达信息技术有限公司 Training method, system and equipment of neural network model for detecting network intrusion
CN114564114B (en) * 2022-02-18 2024-02-27 北京圣博润高新技术股份有限公司 Bastion machine keyboard auditing method, bastion machine keyboard auditing device, bastion machine keyboard auditing equipment and storage medium
CN114254242B (en) * 2022-03-01 2022-05-03 互联网域名系统北京市工程研究中心有限公司 User portrait method and device based on recursive analysis log
CN117176478B (en) * 2023-11-02 2024-02-02 南京怡晟安全技术研究院有限公司 Network security practical training platform construction method and system based on user operation behaviors

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107622072A (en) * 2016-07-15 2018-01-23 阿里巴巴集团控股有限公司 A kind of recognition methods and server, terminal for web page operation behavior
CN109241709A (en) * 2018-08-03 2019-01-18 平安科技(深圳)有限公司 User behavior recognition method and device based on the verifying of sliding block identifying code
CN109446789A (en) * 2018-10-22 2019-03-08 武汉极意网络科技有限公司 Anticollision library method, equipment, storage medium and device based on artificial intelligence
US20190377853A1 (en) * 2018-06-07 2019-12-12 T-Mobile Usa, Inc. User-behavior-based adaptive authentication
CN110619528A (en) * 2019-09-29 2019-12-27 武汉极意网络科技有限公司 Behavior verification data processing method, behavior verification data processing device, behavior verification equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108259503A (en) * 2018-01-30 2018-07-06 成都睿码科技有限责任公司 A kind of is the system and method for website and application division machine and mankind's access
CN109271762B (en) * 2018-08-03 2023-04-07 平安科技(深圳)有限公司 User authentication method and device based on slider verification code

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107622072A (en) * 2016-07-15 2018-01-23 阿里巴巴集团控股有限公司 A kind of recognition methods and server, terminal for web page operation behavior
US20190377853A1 (en) * 2018-06-07 2019-12-12 T-Mobile Usa, Inc. User-behavior-based adaptive authentication
CN109241709A (en) * 2018-08-03 2019-01-18 平安科技(深圳)有限公司 User behavior recognition method and device based on the verifying of sliding block identifying code
CN109446789A (en) * 2018-10-22 2019-03-08 武汉极意网络科技有限公司 Anticollision library method, equipment, storage medium and device based on artificial intelligence
CN110619528A (en) * 2019-09-29 2019-12-27 武汉极意网络科技有限公司 Behavior verification data processing method, behavior verification data processing device, behavior verification equipment and storage medium

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112818868A (en) * 2021-02-03 2021-05-18 招联消费金融有限公司 Behavior sequence characteristic data-based violation user identification method and device
CN112818868B (en) * 2021-02-03 2024-05-28 招联消费金融股份有限公司 Method and device for identifying illegal user based on behavior sequence characteristic data
CN113298115A (en) * 2021-04-19 2021-08-24 百果园技术(新加坡)有限公司 User grouping method, device, equipment and storage medium based on clustering
CN113554515A (en) * 2021-06-26 2021-10-26 陈思佳 Internet financial control method, system, device and medium
CN113536302A (en) * 2021-07-26 2021-10-22 北京计算机技术及应用研究所 Interface caller safety rating method based on deep learning
CN114978969A (en) * 2022-05-20 2022-08-30 北京数美时代科技有限公司 Self-adaptive monitoring and adjusting method and system based on user behaviors
CN114978969B (en) * 2022-05-20 2023-03-24 北京数美时代科技有限公司 Self-adaptive monitoring and adjusting method and system based on user behaviors
CN115277068A (en) * 2022-06-15 2022-11-01 广州理工学院 Novel honeypot system and method based on deception defense
CN115277068B (en) * 2022-06-15 2024-02-23 广州理工学院 Novel honeypot system and method based on spoofing defense

Also Published As

Publication number Publication date
CN111428231B (en) 2020-09-08
CN112069485B (en) 2024-05-14
CN111428231A (en) 2020-07-17

Similar Documents

Publication Publication Date Title
CN111428231B (en) Safety processing method, device and equipment based on user behaviors
EP3651043B1 (en) Url attack detection method and apparatus, and electronic device
CN112866023B (en) Network detection method, model training method, device, equipment and storage medium
CN109922065B (en) Quick identification method for malicious website
CN110830490B (en) Malicious domain name detection method and system based on area confrontation training deep network
CN105072214A (en) C&C domain name identification method based on domain name feature
CN114785563B (en) Encryption malicious traffic detection method of soft voting strategy
CN110162958B (en) Method, apparatus and recording medium for calculating comprehensive credit score of device
CN113205134A (en) Network security situation prediction method and system
CN109413047A (en) Determination method, system, server and the storage medium of Behavior modeling
CN115438102A (en) Space-time data anomaly identification method and device and electronic equipment
Harbola et al. Improved intrusion detection in DDoS applying feature selection using rank & score of attributes in KDD-99 data set
WO2021248707A1 (en) Operation verification method and apparatus
EP4169223A1 (en) Method and apparatus to detect scripted network traffic
CN116319065A (en) Threat situation analysis method and system applied to business operation and maintenance
CN116405306A (en) Information interception method and system based on abnormal flow identification
CN110808947A (en) Automatic vulnerability quantitative evaluation method and system
CN114841705B (en) Anti-fraud monitoring method based on scene recognition
CN115828245A (en) Malicious file identification method based on deep learning
CN112287345B (en) Trusted edge computing system based on intelligent risk detection
CN112073362B (en) APT (advanced persistent threat) organization flow identification method based on flow characteristics
CN115964478A (en) Network attack detection method, model training method and device, equipment and medium
CN114218569A (en) Data analysis method, device, equipment, medium and product
CN114422168A (en) Malicious machine traffic identification method and system
CN111507368A (en) Campus network intrusion detection method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant