CN112069485B - Security processing method, device and equipment based on user behaviors - Google Patents
- Publication number
- CN112069485B (application number CN202010800733.9A)
- Authority
- CN
- China
- Prior art keywords
- user
- behavior
- attacker
- behavior data
- verification code
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/36 — User authentication by graphic or iconic representation (G06F21/00 Security arrangements → G06F21/30 Authentication → G06F21/31 User authentication)
- G06F18/22 — Pattern recognition; analysing; matching criteria, e.g. proximity measures
- G06F18/23 — Pattern recognition; analysing; clustering techniques
- G06F18/24 — Pattern recognition; analysing; classification techniques
- G06F2221/2133 — Verifying human interaction, e.g. Captcha (indexing scheme relating to G06F21/00)
Abstract
The application discloses a security processing method, device and equipment based on user behaviors, in the technical field of data security. The method comprises the following steps: judging whether a user is an attacker according to the page browsing behavior data collected before the user operates the verification code and the operation behavior data on the verification code itself; if the user's operation behavior data is judged to be that of a real user, performing cluster analysis based on the operation behavior data by using the similarity between forged behaviors; if the number of times the user has been marked suspicious is greater than a preset threshold, increasing the verification difficulty, raising the probability that the models classify the user as an attacker, and reclassifying the newly collected user behavior data with the models; and finally, determining whether the user is an attacker by fusing the classification results. The application achieves stricter defense against black-market ("heichan") attacks based on the user behavior data around the verification code, ensuring the security of the website.
Description
This application is a divisional application of Chinese patent application No. 2020105367972, entitled "Security processing method, device and equipment based on user behavior", filed with the China National Intellectual Property Administration on June 12, 2020.
Technical Field
The present application relates to the field of data security technologies, and in particular, to a security processing method, apparatus, and device based on user behavior.
Background
Verification codes are widely used in the industry as an effective means of user authentication to resist Internet black-market ("heichan") attacks. The underlying principle is that attackers typically need to profit through a large number of repeated accesses, and the verification code effectively increases the cost of each access.
With the rise of deep learning in recent years, automatic recognition of website verification codes by computer has become easier. Whether for slider verification codes, picture-selection verification codes, text-click verification codes, or even questions and answers requiring semantic understanding, mature deep-model solutions now exist. This greatly reduces the difficulty for attackers of cracking picture or text verification codes, causes the defense against black-market attacks to fail, and reduces website security.
Disclosure of Invention
In view of the above, the present application provides a method, an apparatus and a device for security processing based on user behavior, whose main aim is to solve the technical problem in the prior art that the defense against black-market attacks easily fails, thereby reducing website security.
According to one aspect of the present application, there is provided a security processing method based on user behavior, the method comprising:
Acquiring page browsing behavior data before the user operates the verification code and operation behavior data of the verification code;
Classifying the user by using a neural network model according to the page browsing behavior data, wherein the neural network model is obtained by training on page browsing behavior data collected, before verification code operation, from both normal users and attackers; and,
Classifying the user by utilizing a single classification model according to the operation behavior data of the user, wherein the single classification model is obtained by training based on verification code operation behavior data of a normal user;
According to the operation behavior data of the user, performing cluster analysis by using the similarity between the fake behaviors;
If it is determined from the cluster analysis result that the number of times the user has been marked suspicious is greater than a preset threshold, replacing the verification code with a new verification code of increased operation difficulty, re-acquiring the page browsing behavior data before the user operates the new verification code and the operation behavior data of the new verification code, and reclassifying the user by using the neural network model and the single classification model, wherein, when the neural network model and the single classification model reclassify, the model classification threshold is lowered to increase the probability that the user is classified as an attacker;
and determining whether the user is an attacker by merging the classification results.
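The fused decision described in the steps above can be sketched as follows. The probability values, threshold numbers, and function name are illustrative assumptions chosen for this sketch, not values fixed by the application.

```python
# Hypothetical sketch of the fused decision: fuse the two model outputs,
# and tighten the cutoff for users repeatedly marked suspicious by the
# cluster analysis. All numeric thresholds here are illustrative.

def is_attacker(nn_prob: float, occ_prob: float, suspicious_count: int,
                count_threshold: int = 3,
                normal_cutoff: float = 0.5,
                strict_cutoff: float = 0.2) -> bool:
    """nn_prob / occ_prob: attacker probabilities in [0, 1] from the
    neural network model and the single classification model."""
    cutoff = strict_cutoff if suspicious_count > count_threshold else normal_cutoff
    # A user is treated as an attacker if either classifier exceeds the cutoff.
    return nn_prob >= cutoff or occ_prob >= cutoff
```

The key design point is that only the decision threshold changes for repeat suspects; the model parameters themselves are untouched, matching the reclassification step above.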
According to another aspect of the present application, there is provided a security processing apparatus based on user behaviour, the apparatus comprising:
The acquisition module is used for acquiring page browsing behavior data before the user operates the verification code and operation behavior data of the verification code;
the classification module is used for classifying the user by using a neural network model according to the page browsing behavior data, wherein the neural network model is obtained by training on page browsing behavior data collected, before verification code operation, from both normal users and attackers; and,
The classification module is further used for classifying the user by utilizing a single classification model according to the operation behavior data of the user, wherein the single classification model is obtained by training based on verification code operation behavior data of a normal user;
The analysis module is used for carrying out cluster analysis by utilizing the similarity between the fake behaviors according to the operation behavior data of the user;
The classification module is further configured to replace the verification code with a new verification code that increases the operation difficulty of the user if it is determined that the number of times the user is set to be suspicious is greater than a preset number of times threshold according to the clustering analysis result, re-acquire page browsing behavior data of the user before the new verification code is operated and operation behavior data of the new verification code, and reclassify the user by using the neural network model and the single classification model, where when reclassifying the neural network model and the single classification model, the model classification threshold is adjusted down to increase the probability that the user is classified as an attacker;
and the determining module is used for determining whether the user is an attacker or not by integrating the classification results.
According to still another aspect of the present application, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described security processing method based on user behavior.
According to a further aspect of the present application there is provided a user behaviour based security processing apparatus comprising a storage medium, a processor and a computer program stored on the storage medium and executable on the processor, the processor implementing the above-described user behaviour based security processing method when executing the program.
By means of the above technical scheme, the security processing method, device and equipment based on user behavior refer not only to the user's operation behavior data on the verification code but also to the user's page browsing behavior data before operating the verification code, using both as the basis for further man-machine authentication, so that whether the user is an attacker can be judged accurately. Even if an attacker completes verification by forging behavior data that simulates real user operations, the application can perform cluster analysis using the similarity between forged behaviors. When the number of times the user has been marked suspicious exceeds a preset threshold, the verification code is replaced with a new one of increased operation difficulty, the model classification threshold is lowered to increase the probability that the user is classified as an attacker, and the corresponding user behavior data is re-collected and reclassified. By raising the verification difficulty and tightening the attacker classification standard, the application can recognize with higher probability whether the verification process was completed by a real user. Compared with the current situation, in which an attacker can easily recognize website verification codes automatically by computer, the application achieves stricter defense against black-market attacks based on the user behavior data around the verification code, ensures website security, and reduces the risk of the website being attacked.
In addition, according to the trained single classification model, the application can identify genuine attacker behavior data from the daily log of verification behavior data, avoiding the training-set pollution that erroneous data would introduce during training-set expansion and the model training failure it would cause.
The foregoing description is only an overview of the technical solution of the present application. In order that the technical means of the present application may be more clearly understood and implemented in accordance with the content of the specification, and that the above and other objects, features and advantages of the present application may be more readily apparent, specific embodiments of the application are set forth below.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
fig. 1 is a schematic flow chart of a security processing method based on user behavior according to an embodiment of the present application;
FIG. 2 is a flow chart illustrating another security processing method based on user behavior according to an embodiment of the present application;
FIG. 3 is a flow chart of an example method in a simple mode provided by an embodiment of the present application;
FIG. 4 is a flow chart of an example method in a complete mode according to an embodiment of the present application;
FIG. 5 shows a flow diagram of a complete model training provided by an embodiment of the present application;
Fig. 6 is a schematic structural diagram of a security processing apparatus based on user behavior according to an embodiment of the present application.
Detailed Description
The application will be described in detail hereinafter with reference to the drawings in conjunction with embodiments. It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other.
This embodiment addresses the technical problem that attackers can currently recognize website verification codes automatically by computer, so that the defense against black-market attacks easily fails and website security is reduced. The embodiment provides a security processing method based on user behavior; as shown in fig. 1, the method includes:
101. And acquiring page browsing behavior data before the user operates the verification code and operation behavior data of the verification code.
The page browsing behavior data before the verification code operation may include the sequence of mouse operations and keyboard inputs at each time point before the user operates the verification code; on a mobile terminal it may also include a gyroscope change sequence and the like. The user's operation behavior data on the verification code may include, taking the moment the user starts verification as the time origin, the sequence of mouse operations at each time point from then until verification is completed, and the like.
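As a concrete illustration of the two behavior streams just described, the records below show one plausible encoding; the field layout and event names are assumptions chosen for clarity, not a format specified by the application.

```python
# Illustrative shapes for the two behavior streams. Each event carries a
# timestamp; captcha timestamps are relative to the moment verification starts.

browse_events = [
    # (timestamp_ms, event_type, x, y) — trace while browsing the page
    (0,    "move",  120, 340),
    (160,  "move",  180, 352),
    (900,  "click", 310, 410),
]

captcha_events = [
    (0,   "press",   505, 220),   # grab the slider
    (140, "move",    560, 221),
    (300, "release", 640, 223),   # drop it on the target
]
```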
The execution subject of this embodiment may be a device or apparatus for security defense processing based on user behavior, which may be deployed on the website side. It provides the website with an effective means of defending against black-market attacks and reduces the risk of the website being attacked.
The embodiment collects page browsing behavior data before the user operates the verification code and operation behavior data of the verification code as the basis for further man-machine authentication. The three determination and analysis processes shown in steps 102 to 104 may be executed, and it should be noted that the three determination and analysis processes may be executed in parallel at the same time, or may be executed according to a certain progressive relationship, etc., and may be specifically determined according to the timeliness of the actual security detection and the requirement of resource consumption.
102. And classifying the user by using a neural network model according to page browsing behavior data before the user operates the verification code, and classifying the user by using a single classification model according to operation behavior data of the user on the verification code.
In this embodiment, the neural network model is obtained by training positive and negative sample sets, and may be obtained by training page browsing behavior data before verification code operation of a normal user and an attacker. The neural network model is not limited to deep neural networks (Deep Neural Networks, DNN), convolutional neural networks (Convolutional Neural Networks, CNN), recurrent neural networks (Recurrent Neural Network, RNN), and the like.
For example, for the same verification code type (such as a slider verification code, a picture selection verification code, a text click verification code, a question and answer verification code needing semantic understanding, and the like), based on page browsing behavior data of a target user before the verification code is operated, a neural network model is utilized to analyze page browsing behavior similarity between the target user and a normal user, and page browsing behavior similarity between the target user and an attacker is analyzed, namely, the page browsing behavior of the target user is analyzed to be more preferential to the browsing behavior of a real user or to the browsing behavior of a non-real user (machine). If browsing behavior of a non-real user (machine) is favored, the target user may be determined to be an attacker.
Unlike the neural network model, the single classification model in this embodiment may be obtained by training only one sample set, and may be obtained by training based on verification code operation behavior data of a normal user.
Because the behavior data before the verification stage is highly random, while actions during verification, such as dragging a slider or clicking characters, have a clear canonical structure well suited to similarity judgment, this embodiment can use the verification code operation behavior data of normal users as a reference to analyze whether the user's verification code operation is closer to that of a real user; if it is not, the user can be judged to be an attacker. This single-classification style of discrimination avoids the difficulty of collecting attacker data. Genuine attacker data can then be identified from the daily recorded log of verification behavior data, thereby expanding the training set of attacker data.
The neural network model used in this embodiment may be a two-class model, which requires positive and negative sample sets for training, whereas the single classification model requires only one sample training set. At present, a sample set of normal users can be selected relatively easily, for example by using a whitelist, while sample data of blacklisted attackers is harder to acquire and error-prone, making the neural network model difficult to train. Therefore, the single classification model, after periodic update training, can be used to identify the latest blacklisted attackers, and their sample data can be mined to update the positive and negative sample training sets required by the neural network model. In this way, accurate positive and negative training sets can be updated automatically and periodically, ensuring accurate updates of the neural network model. Manual feature extraction is reduced, the whole update training process can run automatically, and model update efficiency is improved.
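As a minimal stand-in for the single classification model trained only on normal users' data (the application does not fix a specific algorithm), the sketch below scores a sample by its z-score distance from the normal-user feature distribution; the feature values and the cutoff implied by the scores are illustrative assumptions.

```python
# Sketch of a one-class scorer fit only on normal-user feature vectors.
# A large anomaly score means "unlike a normal user". This is a simplified
# stand-in, not the patented model.
from statistics import mean, pstdev

def fit_one_class(normal_samples):
    """Learn per-feature (mean, std) from normal-user vectors only."""
    cols = list(zip(*normal_samples))
    return [(mean(c), pstdev(c) or 1.0) for c in cols]

def anomaly_score(model, sample):
    """Mean absolute z-score across features."""
    return sum(abs(x - m) / s for x, (m, s) in zip(sample, model)) / len(sample)

# Illustrative features: e.g. slider drag speed and total drag duration (s).
normal = [[0.52, 1.8], [0.48, 2.1], [0.50, 1.9], [0.47, 2.0]]
model = fit_one_class(normal)
human_score = anomaly_score(model, [0.50, 1.95])  # near the normal cluster
bot_score = anomaly_score(model, [0.95, 0.2])     # instant, unnaturally fast drag
```

A production model would use a richer feature set and a learned boundary, but the one-sided training setup — no attacker samples needed — is the point this sketch demonstrates.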
103. And carrying out clustering analysis by utilizing the similarity between the fake behaviors according to the operation behavior data of the user on the verification code.
104. If the suspicious times of the user are larger than the threshold value of the preset times according to the clustering analysis result, replacing the verification code with a new verification code for increasing the operation difficulty of the user, re-acquiring page browsing behavior data of the user before the new verification code is operated and operation behavior data of the new verification code, and reclassifying the user by using a neural network model and a single classification model.
Wherein the model classification threshold is lowered to increase the probability that the user is classified as an attacker when the neural network model and the single classification model are reclassified.
Because the JavaScript code at the page front end is very easy to crack, a verification-code attacker can simulate user operations through automated testing tools such as Selenium, or send forged requests directly through scripts with real user operations attached. The real user operations may come from the attacker's own actions or be obtained by collecting the real operations of other users. To prevent an attacker from bypassing man-machine behavior verification in this way, this embodiment can perform cluster analysis using the similarity between forged behaviors, based on the user's operation behavior data on the verification code. Although an attacker may access the site from thousands of IPs so that the website cannot locate them individually, these IPs can be linked by constructing behavior similarity. Thus, even though the attacker uses real user behavior, their presence can still be discovered through similarity clustering.
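A minimal sketch of this similarity idea: replayed traces sent from many IPs are near-identical, while genuine traces differ widely. The distance metric, the cutoff, and the sample traces below are illustrative assumptions, not the patented method.

```python
# Flag pairs of (t, x, y) mouse traces that are implausibly similar —
# a signature of the same recorded operation being replayed from many IPs.

def trace_distance(a, b):
    """Mean absolute difference between two equal-length (t, x, y) traces."""
    return sum(abs(p - q) for u, v in zip(a, b) for p, q in zip(u, v)) / (3 * len(a))

def suspicious_pairs(traces, cutoff=2.0):
    """Return index pairs of traces closer than the cutoff."""
    return [(i, j)
            for i in range(len(traces))
            for j in range(i + 1, len(traces))
            if trace_distance(traces[i], traces[j]) < cutoff]

replayed = [(0, 10, 10), (50, 20, 12), (100, 30, 15)]    # recorded trace
near_copy = [(0, 10, 11), (50, 21, 12), (100, 30, 16)]   # replay with small jitter
genuine = [(0, 200, 80), (60, 240, 95), (130, 300, 120)] # a different real user
pairs = suspicious_pairs([replayed, near_copy, genuine])
```

A real deployment would cluster across all requests over a time window and increment each matched user's suspicious count, as described in the next step.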
For example, through cluster analysis of the similarity between forged behaviors, the cumulative number of times the target user is marked suspicious on each request is counted. When the target user has been marked suspicious repeatedly and the count reaches a certain threshold, the verification code can be replaced with one of increased operation difficulty for testing (for example, replacing conventional-font digits to be clicked in a picture with deformed characters that require careful identification). The specific test procedure may include: re-collecting the user behavior data, including the page browsing behavior before verification and the verification code operation behavior; then reclassifying with the neural network model and the single classification model of step 102 based on the re-acquired data, lowering the classification threshold in both models to increase the probability that the user is classified as an attacker. When the two models reclassify, only the final decision threshold is lowered; the parameters of the neural network model and the single classification model are unchanged (i.e., the models themselves are unaffected). The output of each model may be the probability [0-1] that the user is an attacker, and the decision threshold determines the judgment: before the threshold is lowered, the user may be judged an attacker if the model output probability is greater than or equal to 0.5; after the threshold is lowered, if the output probability is greater than or equal to 0.2.
105. And determining whether the user is an attacker by merging the classification results.
For example, after the classification and reclassification process of the above two models, if at least one classification result determines that the user is an attacker, security defense processing such as restricting the user to access the website, blacklisting the user, and the like may be performed according to the user.
According to the security processing method based on user behavior, because a real user produces a large number of mouse and keyboard operations while operating the verification code, this embodiment refers both to the user's operation behavior data on the verification code and to the page browsing behavior data before operating it, as the basis for further man-machine authentication — that is, for judging whether the verification process was completed by a computer automatically recognizing the verification code or by a real user's operations. The neural network model, trained on positive and negative sample sets combining the pre-verification page browsing behavior data of normal users and attackers, and the single classification model, trained on a single sample set of normal users' verification code operation behavior data, are used together to recognize whether the verification process was completed by a real user, so that whether the user is an attacker can be judged accurately.
Even if an attacker completes verification by forging behavior data that simulates real user operations, this embodiment can perform cluster analysis using the similarity between forged behaviors. When the number of times the user has been marked suspicious exceeds a preset threshold, the verification code is replaced with a new one of increased operation difficulty, the model classification threshold is lowered to increase the probability that the user is classified as an attacker, and the corresponding user behavior data is re-collected and reclassified. By raising the verification difficulty and tightening the attacker classification standard, this embodiment can recognize with higher probability whether the verification process was completed by a real user. Compared with the current situation, in which an attacker can easily recognize website verification codes automatically by computer, this embodiment achieves stricter defense against black-market attacks based on the user behavior data around the verification code, ensures website security, and reduces the risk of the website being attacked.
Further, as a refinement and extension of the specific implementation manner of the foregoing embodiment, in order to fully describe the implementation manner of the present embodiment, the present embodiment further provides another security processing method based on user behavior, as shown in fig. 2, where the method includes:
201. and acquiring user behavior data of the user in a time period from opening the page where the verification code is located to finishing verification of the verification code.
For example, a pre-written collection script (e.g., an acquisition module) records all operations of the user in the period from opening the page to completing verification of the verification code. The operation types include mouse movement, clicking, moving out of a boundary, moving into a boundary, page scrolling, keyboard input, and so on; on a mobile terminal they may also include gyroscope changes. Each recorded operation should contain a timestamp of when the operation occurred. To increase the difficulty of front-end cracking, complex front-end code obfuscation can be applied to the acquisition module.
202. Split the user's behavior data at the time point when the user begins the verification operation on the verification code, obtaining page-browsing behavior data from before the verification-code operation and the verification-code operation behavior data itself.
The acquired information includes the time point at which the user starts verification. The acquisition module cuts the behavior sequence at that time, dividing it into two parts: page-browsing behavior and verification-code operation behavior. The split is made mainly because behavior data from before verification begins is highly random; it easily interferes with the similarity judgment and performs poorly as input for the single-classification model. Actions during verification, such as dragging the slider or clicking characters, follow a clear, regular pattern and are much better suited to similarity judgment.
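A minimal sketch of the splitting step, using a hypothetical event-record format (each event carries a `ts` timestamp, as the text requires):

```python
def split_behavior(events, verify_start_ts):
    """Split a recorded event sequence into pre-verification page-browsing
    behavior and verification-code operation behavior at the cut point."""
    browsing = [e for e in events if e["ts"] < verify_start_ts]
    captcha_ops = [e for e in events if e["ts"] >= verify_start_ts]
    return browsing, captcha_ops

events = [
    {"type": "move",   "ts": 100},
    {"type": "scroll", "ts": 250},
    {"type": "drag",   "ts": 400},  # slider drag marks the verification start
    {"type": "drag",   "ts": 450},
]
browsing, captcha_ops = split_behavior(events, verify_start_ts=400)
print(len(browsing), len(captcha_ops))  # 2 2
```

The field names (`type`, `ts`) are illustrative; only the timestamp-based cut is mandated by the text.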
203a. Classify the user with the neural network model, based on the page-browsing behavior data collected before the user operates the verification code.
Optionally, step 203a may specifically include the following. First, obtain a first mouse behavior sequence and/or a keyboard input behavior sequence from the user's page-browsing behavior data. Then extract mouse operation features from the first mouse behavior sequence according to the mouse operation type (one or more of clicking, pressing, releasing, moving out of the boundary, moving into the boundary, and scrolling), the mouse coordinates, and the event time; using these features together with the historical pre-verification mouse operation features of normal users and attackers, determine a first probability value that the user is an attacker. And/or extract keyboard input features from the keyboard input behavior sequence according to the ASCII codes of the input characters (letters or symbols) and the input times; using these features together with the historical pre-verification keyboard input features of normal users and attackers, determine a second probability value that the user is an attacker. Finally, determine the neural network model's classification result from the first and/or second probability values.
Specifically, the mouse behavior sequence and/or keyboard input behavior sequence is obtained according to what actually occurred (only mouse operations, only keyboard input, or both). This optional approach makes it possible to judge accurately, from the mouse and keyboard activity on the page before the verification code is operated, whether the behavior comes from a real user; if not, the user can be judged an attacker.
To acquire the mouse behavior sequence and keyboard input behavior sequence accurately, obtaining the first mouse behavior sequence and/or keyboard input behavior sequence from the user's page-browsing behavior data may specifically include: collecting continuous mouse operation records at a fixed sampling interval to obtain the first mouse behavior sequence; and/or intercepting the longest run of continuous keyboard input and acquiring the keyboard input behavior sequence according to a preset maximum input length. Preferably, the fixed sampling interval is 100 ms and the preset maximum input length is 64. For example, page-browsing behavior is sampled, with continuous mouse movements and continuous scrolling sampled at fixed 100 ms intervals. For keyboard input, the longest continuous run is selected as representative; if it exceeds the maximum length of 64, a continuous window of 64 inputs is cut from it at random.
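The 100 ms sampling and length-64 truncation can be sketched as follows (the record formats are illustrative assumptions):

```python
import random

SAMPLE_INTERVAL_MS = 100  # fixed sampling interval from the text
MAX_KEY_LEN = 64          # preset maximum keyboard input length

def sample_mouse(moves, interval=SAMPLE_INTERVAL_MS):
    """Keep one (x, y, ts) record per fixed time interval."""
    sampled, next_ts = [], None
    for x, y, ts in moves:
        if next_ts is None or ts >= next_ts:
            sampled.append((x, y, ts))
            next_ts = ts + interval
    return sampled

def clip_keys(keys, max_len=MAX_KEY_LEN, rng=random):
    """Randomly cut a continuous window of max_len from an over-long run."""
    if len(keys) <= max_len:
        return keys
    start = rng.randrange(len(keys) - max_len + 1)
    return keys[start:start + max_len]

moves = [(i, i, ts) for i, ts in enumerate(range(0, 1000, 20))]  # every 20 ms
print(len(sample_mouse(moves)))         # 10: one record per 100 ms window
print(len(clip_keys(list("a" * 100))))  # 64
```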
Then the mouse-track and keyboard sequence data (i.e., the first mouse behavior sequence and the keyboard input behavior sequence) are fed into two different deep models for automatic feature extraction; preferably, a convolutional network can be used to process the sequence data. Traditional manual feature extraction computes statistics such as the maximum, minimum, mean, median, variance, and first- and second-order differences over different regions of the sequence. That approach is quite limited: segment-level statistics such as the mean, variance, and median reflect global and local characteristics only to a degree and cannot capture the sequential dependencies within the sequence. Automatic feature extraction with a deep model avoids these problems, improves the efficiency and accuracy of feature extraction, and thereby improves the accuracy of the subsequent judgment of whether the user is an attacker.
As a further option, a Long Short-Term Memory (LSTM) model can serve as the feature extraction model, with a logistic regression (LR) model as the classifier. Correspondingly, extracting mouse operation features from the first mouse behavior sequence according to the mouse operation type, the mouse coordinates, and the event time may specifically include: using the LSTM model to extract mouse operation features so that each frame of the first mouse behavior sequence is represented as a first feature vector, whose first element is the mouse operation type, whose second and third elements are the mouse's x and y coordinates, and whose fourth element is the event time.
For example, each frame of the mouse behavior sequence is represented as a feature vector composed as follows: the first element encodes the operation type (press, release, move out of the boundary, move into the boundary, scroll, etc.); the second and third elements are the mouse's x and y coordinates; the fourth element is the event time (e.g., the time corresponding to each coordinate value).
Correspondingly, determining the first probability value that the user is an attacker from the mouse operation features, in combination with the historical pre-verification mouse operation features of normal users and attackers, may specifically include: inputting the mouse operation features into the LR model and classifying with reference to those historical features to obtain the first probability value. For example, the classification label (normal user or attacker) of the sample features most similar to the user's mouse operation features is found, and the probability value of that label is determined from the similarity.
Likewise, when extracting features from and classifying the keyboard input behavior sequence, the LSTM model can optionally be used for feature extraction and the LR model for classification. Correspondingly, extracting keyboard input features from the keyboard input behavior sequence according to the ASCII codes of the input characters and the input times may specifically include: using the LSTM model to extract keyboard input features so that each frame of the keyboard input behavior sequence is represented as a second feature vector, whose first element is the ASCII code of the input character and whose second element is the input time.
For example, each frame of the keyboard input behavior sequence is represented as a feature vector composed as follows: the first element is the ASCII code of the letter or symbol entered; the second element is the time of the keystroke.
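The two per-frame encodings can be sketched as follows; the numeric operation-type codes are illustrative assumptions, not values fixed by the text:

```python
# Hypothetical mapping of mouse operation types to the first vector element.
OP_CODES = {"press": 0, "release": 1, "move_out": 2, "move_in": 3, "scroll": 4}

def mouse_frame(op, x, y, ts):
    """First feature vector: [operation type, x coordinate, y coordinate, event time]."""
    return [OP_CODES[op], x, y, ts]

def key_frame(char, ts):
    """Second feature vector: [ASCII code of the input character, input time]."""
    return [ord(char), ts]

print(mouse_frame("scroll", 120, 80, 1590))  # [4, 120, 80, 1590]
print(key_frame("a", 1620))                  # [97, 1620]
```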
Correspondingly, according to the keyboard input characteristics and in combination with the historical keyboard input characteristics before the verification code operation of the normal user and the attacker, the method for determining the second probability value of the user as the attacker can comprise the following specific steps: the keyboard input features are input into the LR model, and classified with reference to historical keyboard input features before verification code operation of normal users and attackers to obtain a second probability value. For example, a classification label (such as a normal user label and an attacker label) corresponding to a sample characteristic most similar to the operation characteristic of the user keyboard is found, and a probability value of the corresponding classification label is determined according to the similarity.
In practice there may be several types of verification codes (slider verification codes, picture-selection verification codes, text-click verification codes, question verification codes requiring semantic understanding, etc.). Using one unified model for feature extraction and classification across all of them would inevitably hurt accuracy, so preferably the LSTM and LR models are pre-trained per verification-code type, with a different LSTM and LR model for each type. Hyperparameters that need tuning include, but are not limited to, the LSTM cell state size, output length, L1 and L2 regularization coefficients, optimization algorithm, and learning rate. Using type-specific models for feature extraction and classification improves analysis accuracy and thus the accuracy of judging whether the user is an attacker.
Illustratively, determining the neural network model's classification result from the first and second probability values may specifically include: computing a weighted sum of the two probability values, and judging the user an attacker if the weighted sum exceeds a preset probability threshold. Preferably, the preset probability threshold is 0.5. For example, the outputs of the two LR models (for mouse behavior and keyboard input behavior, respectively) are weighted and summed to give the probability that the operation comes from an attacker; with 1 representing an attacker and 0 a normal user, a probability above 0.5 is classified as an attacker and one below 0.5 as a normal user.
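A sketch of the weighted fusion, assuming equal weights (the text does not fix them) and the 0.5 threshold it states:

```python
def fuse_probabilities(p_mouse, p_keyboard, w_mouse=0.5, threshold=0.5):
    """Weighted sum of the two LR outputs; above the threshold -> attacker."""
    p = w_mouse * p_mouse + (1.0 - w_mouse) * p_keyboard
    return ("attacker" if p > threshold else "normal"), p

label, p = fuse_probabilities(0.9, 0.7)
print(label)  # attacker
label, p = fuse_probabilities(0.2, 0.3)
print(label)  # normal
```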
203b (in parallel with step 203a). Classify the user with the single-classification model, based on the user's verification-code operation behavior data.
Optionally, step 203b may specifically include: first, obtaining a second mouse behavior sequence from the user's verification-code operation behavior data and extracting from it a vector of mouse coordinate-time pairs; then encoding that vector with a self-encoder to obtain a behavior code of a predetermined length; and finally classifying the behavior code with the single-classification model, which is trained in advance on normal users' verification-code operation behavior data, to obtain a score for the user being an attacker. If the score exceeds a preset score threshold, the user is judged an attacker. Preferably, the predetermined code length is 64, the single-classification model is an SVDD (Support Vector Domain Description) model, and the preset score threshold is 1.
For example, non-mouse operations are removed from the intercepted behavior sequence, the operation-type field is dropped, and only the mouse coordinates and time fields are kept. The sequence is sampled at uniform time intervals down to 100 coordinate-time pairs, i.e., a vector of length 300. This vector is encoded with a 4-layer self-encoder whose three hidden layers have sizes 128, 64, and 128, where 64 is the final encoded representation length; the depth of the self-encoder and the hidden-layer sizes are hyperparameters that may need tuning. The length-64 code is then classified with the SVDD model, whose single class, labeled 0, corresponds to normal user behavior. With a score threshold of 1, any score above 1 does not belong to normal user behavior and the data is judged to be forged by an attacker, while a score below 1 is judged normal user data.
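The SVDD decision rule can be illustrated with a simplified hypersphere stand-in (centre and radius fitted directly in input space; the real SVDD uses kernelized support-vector training). Scores above 1 fall outside the normal-user sphere:

```python
import math

def fit_sphere(normal_codes):
    """Centre = mean of the normal-user behavior codes;
    radius = largest training distance from that centre."""
    dim, n = len(normal_codes[0]), len(normal_codes)
    center = [sum(c[i] for c in normal_codes) / n for i in range(dim)]
    radius = max(math.dist(c, center) for c in normal_codes)
    return center, radius

def svdd_score(code, center, radius):
    """Relative distance from the centre; > 1 -> judged attacker-forged."""
    return math.dist(code, center) / radius

center, radius = fit_sphere([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(svdd_score([0.5, 0.5], center, radius))       # 0.0: at the centre
print(svdd_score([5.0, 5.0], center, radius) > 1)   # True: outside the sphere
```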
For the SVDD model in this embodiment, the training data can come entirely from normal user data, which is easy to obtain and accurately labeled, so the data set can be expanded and the model iterated directly online. For example, normal user data can be drawn from traffic generated by an intranet IP segment, an IP whitelist, or a user whitelist, or obtained by analyzing the website's daily traffic pattern, finding a natural day with normal traffic, and taking all of that day's data as normal user data. Normal traffic means no sudden traffic spikes and conformance to the long-term pattern, such as morning and evening peaks and a late-night trough. Training hyperparameters for the SVDD model include, but are not limited to, the choice of kernel function and the soft-margin coefficient; the kernel function carries secondary hyperparameters such as coefficients and exponents.
Further, before classifying with the single-classification model, a simpler and faster screening can be applied. Illustratively, before step 203b the method may further include: judging, from the user's verification-code operation behavior data, whether the slider's drag track is consistent with the slider's target position, and/or whether the clicked position of a character matches the character's relative position in the picture; if the drag track is unrelated to the slider's position, or the click position does not match the character's position in the picture, the user is judged an attacker. This optional step gives a simple, fast judgment of whether the user is an attacker and improves screening efficiency.
For example, simple rule checks are first applied to the verification-code operation behavior data: the slider's drag track should be consistent with where the slider must be placed, the position of a text click should match the character's relative position in the picture, and so on. Failing these checks is directly judged attacker behavior. A tolerance threshold should be built into the rule checks to absorb data-acquisition errors that may occur in a production environment.
Based on steps 203a and 203b, as shown in fig. 3, once the user behavior data is acquired it can be split into page-browsing behavior and verification-code operation behavior. The page-browsing behavior is classified with the LSTM+LR model, and the verification-code operation behavior is processed with the self-encoder + SVDD single-classification model; the two classification results are then fused into the final judgment of whether the user is an attacker. Thus, even if an attacker breaks the verification code by some means, the large gap between their behavior and a normal user's still allows effective detection.
204. Perform cluster analysis using the similarity between forged behaviors, based on the user's verification-code operation behavior data.
Assume an attacker can forge request data at will and mount the attack with real user behavior. Existing behavior verification models are very vulnerable to such attacks: because the behaviors originate from real users, the model naturally classifies them as human rather than machine, letting the attacker bypass behavior verification. Worse, many behavior verification systems update online; once such attacks are discovered, automatically or manually, the real user data flows into the machine-labeled data, polluting the training set, making it hard for normal users to pass verification, and markedly raising the false-interception rate.
To address this, the present embodiment performs cluster analysis using the similarity between forgeries. An attacker mounting attacks with real user data usually has limited acquisition channels, so the supply of data is small and, unlike random tracks generated by software, not endless. Attackers therefore often take one piece, or one set, of human operation data and apply small modifications to produce new forgeries. Behaviors generated this way share similarities that a machine learning model can find, so they can be effectively grouped into one class.
As a specific implementation, step 204 may include: first, collecting the behavior codes of the predetermined length and clustering them; then, for a first verification request received after clustering, computing the distance between that request's behavior code and each cluster center. If there is a target cluster center whose distance is below a preset distance threshold, the IP address that sent the first verification request is bound to the IP addresses already in that cluster, and the bound IP addresses' access frequencies are combined into a joint access frequency. If the joint access frequency exceeds a preset frequency threshold, the requesting IP address is marked suspicious; if the same IP address is marked suspicious more times than a preset count threshold, it is added to a blacklist. Finally, if a user's IP address is on the blacklist, the user's suspicious count is judged to exceed the preset count threshold.
This optional approach identifies the attacker accurately even when they imitate real user data, prevents an attacker who has cracked the front-end code from passing verification by directly replaying real user behavior, and achieves a more complete defense against black-market attacks.
Illustratively, clustering the collected behavior codes may specifically include: clustering them with the Mean-Shift algorithm to obtain n cluster centers, where n is determined by the Mean-Shift window size, which is tuned according to the verification code's data characteristics and the configured security defense level.
For example, length-64 behavior codes are collected for about 10 minutes and then clustered with the Mean-Shift algorithm to obtain n cluster centers, where n depends on the Mean-Shift window size, which must be tuned to the specific verification code's data characteristics and the desired monitoring effect. For each subsequent user request, the distance between the request's behavior code and every cluster center is computed to find the nearest center. If that distance is below the set threshold, the behavior is considered to belong to the cluster that center represents; the corresponding user IP address is then bound to the IPs of the other behaviors in the cluster, and the bound IPs' access frequencies are counted jointly. If the joint frequency exceeds a threshold, the most recently accessing IP is marked suspicious; an IP marked suspicious multiple times is added to the IP blacklist.
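The post-clustering bookkeeping can be sketched as follows, with precomputed cluster centers and illustrative thresholds (the Mean-Shift fitting itself is omitted):

```python
import math
from collections import defaultdict

DIST_THRESHOLD = 1.0   # illustrative; tuned per verification code in practice
FREQ_THRESHOLD = 3     # joint accesses allowed before marking suspicious

cluster_ips = defaultdict(set)   # cluster index -> bound IP addresses
cluster_hits = defaultdict(int)  # cluster index -> joint access count

def handle_request(code, ip, centers):
    """Bind the requesting IP to the nearest cluster when close enough,
    and mark it suspicious once the cluster's joint frequency is exceeded."""
    dists = [math.dist(code, c) for c in centers]
    nearest = min(range(len(centers)), key=dists.__getitem__)
    if dists[nearest] >= DIST_THRESHOLD:
        return "unclustered"
    cluster_ips[nearest].add(ip)
    cluster_hits[nearest] += 1
    return "suspicious" if cluster_hits[nearest] > FREQ_THRESHOLD else "clustered"

centers = [[0.0, 0.0], [10.0, 10.0]]
for i in range(4):  # four near-identical forgeries from different IPs
    verdict = handle_request([0.1, 0.1], f"198.51.100.{i}", centers)
print(verdict)  # suspicious: the fourth joint access exceeds the threshold
```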
205. If the cluster analysis shows the user's suspicious count exceeds the preset count threshold, replace the verification code with a new one that increases the user's operation difficulty, re-collect the user's page-browsing behavior data before operating the new verification code and the operation behavior data on the new verification code, and reclassify the user with the neural network model and the single-classification model.
When the neural network model and the single-classification model reclassify, the model classification threshold is lowered to increase the probability that the user is classified as an attacker.
For example, if the user's IP address is on the blacklist, the verification code is replaced with one that is harder to operate (e.g., ordinary-font digits to click in a picture are changed to deformed characters that require careful recognition), and the model's preset probability threshold and preset score threshold are lowered; whether the user is an attacker is then determined from the resulting classification.
The test process in this embodiment may run as follows. After the verification code is replaced with a harder one, user behavior data is collected again, specifically the pre-verification page browsing and the verification-code operation behavior. Following the classification processes of steps 203a and 203b, the re-collected page-browsing behavior is classified with the LSTM+LR model (with its preset probability threshold lowered), and the verification-code operation behavior is processed with the self-encoder + SVDD single-classification model (with its preset score threshold lowered). Whether the user is an attacker is determined from the fused classification result; a non-genuine user is more easily classified as an attacker under these conditions. For example, for an IP address added to the blacklist, testing is performed after switching to less user-friendly verification codes and lowering the behavior classification model's score threshold, so the behavior is classified as an attacker with higher probability; if the user is subsequently classified as an attacker, the user is confirmed as one.
Further optionally, to speed up checking whether the user IP should be blacklisted, before collecting the behavior codes of the predetermined length the method of this embodiment may also compute the MD5 value of the second mouse behavior sequence; if it equals the MD5 value of the verification-code operation behavior sequence of a previously received second verification request, the user's IP address is added to the blacklist. For example, the MD5 values of the verification-code operation behavior sequences in step 203b are computed and cached; if an identical MD5 value has already appeared, the requesting user IP is added directly to the blacklist. This step ensures an attacker cannot pass repeated verification with a simple, identical behavior.
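The MD5 replay check can be sketched with the standard library; serializing the sequence via `repr` is an illustrative choice, not one fixed by the text:

```python
import hashlib

seen_md5 = set()   # cached digests of verification-code behavior sequences
blacklist = set()

def check_replay(behavior_seq, ip):
    """Blacklist any IP replaying a byte-identical behavior sequence."""
    digest = hashlib.md5(repr(behavior_seq).encode("utf-8")).hexdigest()
    if digest in seen_md5:
        blacklist.add(ip)
        return True
    seen_md5.add(digest)
    return False

seq = [(1, 2, 100), (3, 4, 150)]
print(check_replay(seq, "203.0.113.5"))        # False: first occurrence, cached
print(check_replay(list(seq), "203.0.113.9"))  # True: identical replay
```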
Combining the classification of steps 203a and 203b with the cluster analysis of steps 204 to 205, as shown in fig. 4, acquired user behavior data can be split into page-browsing behavior and verification-code operation behavior. For page-browsing behavior, mouse and scroll-wheel events are sampled at fixed time intervals and keyboard data is cut to a maximum fixed length as the extracted features, which are then classified with the LSTM+LR model. For verification-code operation behavior, the self-encoder produces a behavior code as the extracted feature, which is classified with the SVDD single-classification model. The behavior codes are also clustered with the Mean-Shift algorithm and related IPs are bound by cluster. The bound IPs' access frequencies are counted jointly; if the joint frequency exceeds the specified threshold, the last-accessing IP is marked suspicious, and an IP marked suspicious multiple times is blacklisted. Finally, blacklisted IPs are fed back to the front end to raise their verification difficulty, and to the classification model to make it harder for them to be classified as normal users.
206. Determine whether the user is an attacker by fusing the classification results.
Optionally, step 206 may specifically include: computing a weighted sum of the neural network model's and the single-classification model's results to determine whether the user is an attacker. For example, each model's weight is set according to its verification accuracy: the higher the accuracy, the higher the weight. Determining the final classification this way takes each classifier's measured accuracy into account and yields a more accurate result.
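A sketch of accuracy-proportional weighting; the exact weighting rule is an assumption, since the text only requires that higher accuracy yield higher weight:

```python
def fuse_by_accuracy(scores, accuracies):
    """Weight each model's attacker score in proportion to its accuracy."""
    total = sum(accuracies)
    return sum(a / total * s for a, s in zip(accuracies, scores))

# neural network scores 0.8 at 0.9 accuracy; single-class model 0.6 at 0.7
fused = fuse_by_accuracy([0.8, 0.6], [0.9, 0.7])
print(round(fused, 4))  # 0.7125
```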
After the user is determined to be an attacker, security defense processing can optionally be applied to that user, specifically: rate-limiting the verification requests the user sends, or requiring the user to pass mobile-phone verification or answer a security question. For example, on certain pages the user can be required to switch to mobile-phone verification or to answer a security question, which greatly raises the cost of brute-force access for black-market users.
The neural network model is a binary classification model trained on a set of positive and negative samples to ensure training accuracy; take positive samples as normal user data and negative samples as attacker data. Existing schemes cannot obtain reliable attacker labels. In the fight against black-market attacks, ordinary users' behavior data is easy to obtain: for instance, data from the company's intranet segment, or data generated by whitelisted users or devices, can serve as normal user data, so positive samples come easily. Attacker data, by contrast, is very hard to obtain and usually needs substantial manual judgment. Some schemes, after training the initial model, add new data that the online model judges to be from an "attacker" to the attacker data set for training; if the model misjudges, erroneous data enters the data set, and training on a data set containing errors raises the error rate. Iterated, this pushes the model ever further down the wrong path, and a good source of negative-sample attacker data is never obtained.
To solve this problem and meet this embodiment's requirement for automatic model updates, the method may optionally further include: storing different users' classification results and the corresponding user behavior data in a user behavior log; periodically filtering the log by intranet IP segment and/or IP whitelist and/or user whitelist to obtain normal users' behavior data (the log stores behavior data recorded over different time periods); updating, with the filtered normal user behavior data, the original user data set used to train the single-classification model, and training the single-classification model on the updated set; deploying the single-classification model once it meets the acceptance standard; detecting the current period's user behavior data with the updated single-classification model, extracting the behavior data classified as attacker data, and adding it to the original attacker data set; and finally training the LSTM and LR models together on the updated user data set and attacker data set, and deploying the LSTM and LR models that pass testing.
For example, a training-module script is written in advance, and the training module updates the online models (such as the LSTM and LR models and the SVDD single-classification model) daily to cope with attackers' newly generated forgeries. Training is based on the verification behavior data log recorded each day. First, data certain to be genuine user behavior is filtered out based on the intranet IP segment, user whitelist, IP whitelist, and so on. The filtered user behavior data updates the original user data set, which is used to train the SVDD single-classification model. The model is validated on a pre-split test set containing both user-labeled and attacker-labeled data to obtain its recall and precision; if it meets the standard, the online model is updated. The SVDD model then scans all of the day's behavior data, all behavior classified as attacker data is extracted and added to the attacker data set, and the LSTM+LR model is trained on the updated user and attacker data sets, validated on a pre-split test set, and deployed if it meets the standard.
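The daily whitelist filtering step can be sketched as follows; the network ranges and names are hypothetical examples, not values from the text:

```python
import ipaddress

INTRANET = ipaddress.ip_network("10.0.0.0/8")  # hypothetical intranet segment
IP_WHITELIST = {"203.0.113.7"}
USER_WHITELIST = {"qa_bot"}

def filter_normal_users(log):
    """Keep only log records from trusted sources as normal-user training data."""
    return [
        rec for rec in log
        if ipaddress.ip_address(rec["ip"]) in INTRANET
        or rec["ip"] in IP_WHITELIST
        or rec["user"] in USER_WHITELIST
    ]

log = [
    {"ip": "10.1.2.3",     "user": "alice"},    # intranet -> kept
    {"ip": "198.51.100.9", "user": "mallory"},  # untrusted -> dropped
    {"ip": "198.51.100.9", "user": "qa_bot"},   # whitelisted user -> kept
]
print(len(filter_normal_users(log)))  # 2
```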
The improvement of the model training method in this embodiment lies mainly in the automatic expansion/collection of the model training sets, for example with hourly or daily updates. The LSTM+LR model is a two-class model and requires both a positive and a negative sample set. The SVDD model is a single classification model and needs only one sample set. Sample data of normal users can currently be obtained from the whitelists, but data of blacklisted attackers is difficult to obtain and error-prone, so the LSTM+LR classification model is not easy to train directly. Therefore, the trained SVDD single classification model is adopted to identify the data of blacklisted attackers; this data is relatively scarce (compared with the data of normal users), but its accuracy is ensured, so it can be supplied to the LSTM+LR classification model for training.
However, when the numbers of positive and negative samples differ greatly, the training samples are unbalanced. The training method is therefore further optimized: the positive and negative samples can be collected at different sampling rates. Correspondingly, training the neural network model with the updated user data set and the updated attacker data set may specifically include: if the numbers of positive and negative samples in the updated user data set and the updated attacker data set are unbalanced, collecting the positive and negative samples at different sampling rates to obtain a training set that meets a preset positive-negative sample balance condition (for example, the numbers of positive and negative samples are the same, or the difference between them is smaller than a certain threshold); and training the neural network model with the training set that meets the preset positive-negative sample balance condition.
For example, when there are many positive samples and few negative samples, the positive samples can be randomly down-sampled (for example, to about 5%), or the negative samples can be up-sampled (for example, by a factor of 10), so as to reduce the imbalance between the positive and negative samples of the training set and thus reduce its influence on model training.
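A minimal sketch of the balancing step. The majority/minority ratio threshold, the random seed, and the choice to down-sample only the majority class are all illustrative assumptions, not values from the patent.

```python
import random

def balance_samples(pos, neg, max_ratio=1.5, seed=0):
    """Down-sample the majority class until the preset balance condition
    (majority/minority ratio <= max_ratio) holds. max_ratio and the
    down-sampling-only strategy are illustrative choices."""
    rng = random.Random(seed)
    if len(pos) > max_ratio * len(neg):
        pos = rng.sample(pos, int(max_ratio * len(neg)))
    elif len(neg) > max_ratio * len(pos):
        neg = rng.sample(neg, int(max_ratio * len(pos)))
    return pos, neg

pos = list(range(1000))  # many normal-user samples
neg = list(range(50))    # few attacker samples
pos_b, neg_b = balance_samples(pos, neg)
print(len(pos_b), len(neg_b))  # → 75 50
```

Up-sampling the minority class (for example replicating negatives) would be the symmetric variant mentioned in the text.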
The complete training process may be as shown in fig. 5. First, the daily user behavior log is collected, and the data determined to be user behavior is filtered out according to the intranet IP segment, the IP whitelist, the user whitelist, and the like, so as to expand the user data set. The SVDD single classification model is trained with the expanded user data set, and its accuracy is verified with a test set. If the SVDD single classification model meets the standard, the user behavior of the current day is analyzed with the trained model to find the "attacker data" among the non-user data. The attacker data obtained by this filtering is then used to expand the attacker data set. Finally, the LSTM+LR model is trained with the updated user data set and the updated attacker data set, its accuracy is verified with the test set, and the online model is updated with the LSTM+LR model that meets the test standard.
The embodiment provides an automatic feature extraction scheme based on a deep recurrent network, avoiding the limitations of manual feature extraction. The embodiment also proposes that the collection of user behaviors is not limited to the verification process: the user behaviors are divided into two sequences with large feature differences, page browsing behaviors and verification code operation behaviors, which are classified with different models. The embodiment proposes using a single classification model trained on user data, so as to avoid the difficulty of collecting attacker data. The embodiment also proposes using a clustering model to perform similarity analysis on the forgeries of an attacker, so as to prevent an attacker from directly submitting false verifications with real user behaviors after cracking the front-end code.
Further, as a specific implementation of the method shown in fig. 1 and fig. 2, the present embodiment provides a security processing apparatus based on user behavior, as shown in fig. 6, where the apparatus includes: the device comprises an acquisition module 31, a classification module 32, an analysis module 33 and a determination module 34.
The acquiring module 31 may be configured to acquire page browsing behavior data before the user operates the verification code, and operation behavior data of the verification code;
the classification module 32 is configured to classify the users according to the page browsing behavior data by using a neural network model, where the neural network model is obtained by training based on page browsing behavior data before verification code operation of normal users and attackers; and,
The classification module 32 is further configured to classify the user according to the operation behavior data of the user by using a single classification model, where the single classification model is obtained by training based on verification code operation behavior data of a normal user;
An analysis module 33, configured to perform cluster analysis according to the operational behavior data of the user by using similarity between forgery behaviors;
The classification module 32 is further configured to replace the verification code with a new verification code that increases the difficulty of user operation if it is determined that the number of times the user is set to be suspicious is greater than a preset number of times threshold according to the result of the cluster analysis, re-acquire page browsing behavior data of the user before the new verification code is operated and operation behavior data of the new verification code, and reclassify the user by using the neural network model and the single classification model, where when reclassifying the neural network model and the single classification model, the model classification threshold is adjusted to increase the probability that the user is classified as an attacker;
A determining module 34 is configured to determine whether the user is an attacker by merging the classification results.
In a specific application scenario, the classification module 32 may be specifically configured to obtain a first mouse behavior sequence and/or a keyboard input behavior sequence from the page browsing behavior data of the user; extract mouse operation characteristics of the first mouse behavior sequence according to the mouse operation type, the coordinates of the mouse and the occurrence time of the event, so as to determine a first probability value of the user being an attacker according to the mouse operation characteristics, in combination with the historical mouse operation characteristics before verification code operation of normal users and attackers, where the mouse operation type includes one or more of: clicking, pressing, lifting, moving, moving out of bounds, moving into bounds, and scrolling; and/or extract keyboard input characteristics of the keyboard input behavior sequence according to the ASCII codes corresponding to the characters input by the keyboard and the times corresponding to the keyboard input, so as to determine a second probability value of the user being an attacker according to the keyboard input characteristics, in combination with the historical keyboard input characteristics before verification code operation of normal users and attackers; and determine a classification result of the neural network model according to the first probability value and/or the second probability value.
In a specific application scenario, the classification module 32 may be specifically further configured to acquire a continuous mouse operation record at a fixed sampling interval, so as to obtain the first mouse behavior sequence; and/or intercepting the longest keyboard input record of continuous keyboard input, and acquiring the keyboard input behavior sequence according to the preset maximum input length.
In a specific application scenario, preferably, the fixed sampling interval is 100ms, and the preset maximum input length is 64.
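The two acquisition rules above (fixed 100 ms sampling interval, maximum keyboard input length 64) can be sketched as follows. The raw event tuple layout is an assumption; the patent does not specify an on-the-wire format.

```python
def sample_mouse_sequence(events, interval_ms=100):
    """Resample a raw mouse event stream at a fixed 100 ms interval,
    keeping the last event seen in each interval.
    Each event is assumed to be (time_ms, type, x, y)."""
    frames = {}
    for t, etype, x, y in sorted(events):
        frames[t // interval_ms] = (etype, x, y, t)
    return [frames[k] for k in sorted(frames)]

def clip_keyboard_sequence(keys, max_len=64):
    """Truncate a continuous keyboard input record to the preset
    maximum input length of 64."""
    return keys[:max_len]

events = [(5, "move", 1, 1), (60, "move", 2, 2), (150, "down", 3, 3)]
print(sample_mouse_sequence(events))
# → [('move', 2, 2, 60), ('down', 3, 3, 150)]
print(len(clip_keyboard_sequence(list("a" * 100))))  # → 64
```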
In a specific application scenario, the classification module 32 may be further specifically configured to extract a mouse operation feature of the first mouse behavior sequence by using a long-short-term memory network LSTM model, so that each frame of the first mouse behavior sequence is represented as a first feature vector, a first bit of the first feature vector is a mouse operation type, a second bit and a third bit are x and y coordinates where a mouse is located, and a fourth bit is an event occurrence time; the classification module 32 may be further specifically configured to input the mouse operation feature into a logistic regression LR model, and classify the mouse operation feature with reference to historical mouse operation features before verification code operations of a normal user and an attacker, so as to obtain the first probability value.
In a specific application scenario, the classification module 32 may be further specifically configured to perform, using an LSTM model, a keyboard input feature extraction on the keyboard input behavior sequence, so that each frame of the keyboard input behavior sequence is represented as a second feature vector, where a first bit of the second feature vector is an ASCII code corresponding to a character input by the keyboard, and a second bit is a time corresponding to the keyboard input; the classification module 32 may be further specifically configured to input the keyboard input feature into an LR model, and classify the keyboard input feature with reference to historical keyboard input features before the verification code operation of the normal user and the attacker, so as to obtain the second probability value.
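The per-frame feature vectors the two paragraphs above describe — four components for a mouse frame, two for a keyboard frame — can be sketched like this. The integer encoding of the operation types is an assumption; these frame sequences would then be fed to the LSTM, whose output feeds the LR classifier (the model training itself is omitted here).

```python
# Feature-vector layout per frame, as described above:
#   mouse frame    = [operation type, x, y, event time]
#   keyboard frame = [ASCII code, input time]
OP_TYPES = {"click": 0, "press": 1, "lift": 2, "move": 3,
            "out": 4, "in": 5, "scroll": 6}  # integer encoding is assumed

def mouse_frames(seq):
    """Map (op, x, y, t) tuples to 4-dimensional first feature vectors."""
    return [[OP_TYPES[op], x, y, t] for op, x, y, t in seq]

def keyboard_frames(seq):
    """Map (char, t) tuples to 2-dimensional second feature vectors."""
    return [[ord(ch), t] for ch, t in seq]

print(mouse_frames([("move", 10, 20, 100), ("click", 10, 20, 230)]))
# → [[3, 10, 20, 100], [0, 10, 20, 230]]
print(keyboard_frames([("a", 50), ("b", 180)]))
# → [[97, 50], [98, 180]]
```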
In a specific application scenario, preferably, the LSTM model and the LR model are pre-trained according to the type of the verification code, where different LSTM models and LR models are pre-trained for different verification code types.
In a specific application scenario, the classification module 32 may be specifically further configured to perform weighted summation on the first probability value and the second probability value; and if the probability value obtained by the weighted summation is larger than a preset probability threshold value, judging that the user is an attacker.
In a specific application scenario, preferably, the preset probability threshold is 0.5.
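The weighted-summation decision can be written in one function. The 0.5 decision threshold is the embodiment's preferred value; the equal weighting of the two probability values is an assumption, since the patent does not fix the weights.

```python
def fuse_probabilities(p_mouse, p_keyboard, w_mouse=0.5, threshold=0.5):
    """Weighted sum of the first (mouse) and second (keyboard) probability
    values; equal weights are an assumption, the 0.5 decision threshold is
    the embodiment's preferred value."""
    p = w_mouse * p_mouse + (1.0 - w_mouse) * p_keyboard
    return p, p > threshold  # (fused probability, judged as attacker?)

p, is_attacker = fuse_probabilities(0.9, 0.4)
print(round(p, 2), is_attacker)  # → 0.65 True
```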
In a specific application scenario, the classification module 32 may be specifically further configured to obtain a second mouse behavior sequence from the operation behavior data of the user; extracting a vector containing a mouse coordinate time pair from the second mouse behavior sequence; coding the vector containing the mouse coordinate time pair by using a self-coder to obtain a behavior code with a preset coding length; classifying by using a single classification model according to the behavior codes with the preset code length to obtain the score of the user as an attacker, wherein the single classification model is obtained by training according to verification code operation behavior data of a normal user in advance; and if the score is larger than a preset score threshold, judging that the user is an attacker.
In a specific application scenario, preferably, the predetermined coding length is 64, the single classification model is an SVDD model, and the preset scoring threshold is 1.
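The score-versus-threshold decision on 64-dimensional behavior codes can be illustrated with a toy stand-in for SVDD. A real SVDD solves a quadratic program for a minimum enclosing hypersphere; the centroid-plus-radius model below only mimics its "distance relative to the learned boundary" score, with the threshold of 1 meaning "outside the sphere". The synthetic data and the 0.95 radius quantile are assumptions.

```python
import numpy as np

class MinimalSVDD:
    """Toy stand-in for SVDD: fit a hypersphere (centroid + radius
    quantile) on normal-user behavior codes; score = distance / radius,
    so score > 1 means outside the learned region (attacker)."""
    def fit(self, X, quantile=0.95):
        self.center = X.mean(axis=0)
        dists = np.linalg.norm(X - self.center, axis=1)
        self.radius = np.quantile(dists, quantile)
        return self

    def score(self, x):
        return np.linalg.norm(x - self.center) / self.radius

rng = np.random.default_rng(0)
codes = rng.normal(0.0, 1.0, size=(500, 64))  # 64-dim behavior codes
svdd = MinimalSVDD().fit(codes)
print(svdd.score(np.zeros(64)) < 1)        # near the center: normal user
print(svdd.score(np.full(64, 5.0)) > 1)    # far outside: attacker
```

In the embodiment the inputs would be the self-coder outputs of predetermined length 64, and the model would be a genuine SVDD.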
In a specific application scenario, the classification module 32 may be further configured to determine, according to the operation behavior data of the user, whether a dragging track of the slider corresponding to the verification code is related to the slider placement position; and/or judging whether the clicking position of the text corresponding to the verification code is matched with the relative position of the text in the picture; and if the dragging track of the sliding block is irrelevant to the sliding block placement position or the clicking position of the characters is not matched with the relative position of the characters in the picture, judging that the user is an attacker.
In a specific application scenario, the analysis module 33 may be specifically configured to collect the behavior codes with the predetermined code length; cluster the collected behavior codes; acquire a first verification request received after clustering, and calculate the distance between the behavior code corresponding to the first verification request and each cluster center; if there is a target cluster center whose distance is smaller than a preset distance threshold, bind the user IP address that sent the first verification request with the user IP addresses contained in the cluster corresponding to the target cluster center, where the access frequencies of the bound user IP addresses are combined into a joint access frequency; if the joint access frequency is greater than a preset frequency threshold, set the user IP address that sent the first verification request as suspicious; if the number of times the same user IP address is set as suspicious is greater than the preset times threshold, add it to a blacklist; and if the IP address of the user exists in the blacklist, determine that the number of times the user was set as suspicious is greater than the preset times threshold.
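The cluster-distance check, IP binding, and joint frequency counting can be sketched as below. Both thresholds, the 2-D codes, and the access-count bookkeeping are illustrative assumptions (the embodiment uses codes of the predetermined length and its own configured thresholds).

```python
import math
from collections import defaultdict

DIST_THRESHOLD = 0.5   # illustrative preset distance threshold
FREQ_THRESHOLD = 100   # illustrative preset frequency threshold

cluster_ips = defaultdict(set)  # cluster id -> user IPs bound to it
suspicious = []

def handle_request(code, ip, centers, access_counts):
    """On a new verification request: find a cluster center closer than
    the distance threshold, bind this IP to that cluster, and count
    accesses jointly over all bound IPs."""
    for cid, center in enumerate(centers):
        if math.dist(code, center) < DIST_THRESHOLD:
            cluster_ips[cid].add(ip)
            joint = sum(access_counts.get(i, 0) for i in cluster_ips[cid])
            if joint > FREQ_THRESHOLD:
                suspicious.append(ip)  # set this IP as suspicious
            return cid, joint
    return None, 0

centers = [(0.0, 0.0), (5.0, 5.0)]
counts = {"1.2.3.4": 80, "5.6.7.8": 40}
cluster_ips[0].add("1.2.3.4")  # an IP already bound to cluster 0 earlier
print(handle_request((0.1, 0.1), "5.6.7.8", centers, counts))  # → (0, 120)
print(suspicious)  # → ['5.6.7.8'] — joint frequency 120 > 100
```

Repeatedly suspicious IPs would then be added to the blacklist, as the text describes.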
In a specific application scenario, the analysis module 33 may be specifically configured to cluster the collected behavior codes by using a Mean-Shift algorithm to obtain n cluster centers, where n is determined by a window size of the Mean-Shift algorithm, and the window size is obtained by adjusting according to the data feature of the verification code and the configured security defense level.
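The Mean-Shift step maps directly onto scikit-learn's `MeanShift`, whose `bandwidth` parameter plays the role of the window size described above. A minimal sketch with toy 2-D codes (in the embodiment they would be the 64-dimensional behavior codes):

```python
import numpy as np
from sklearn.cluster import MeanShift

# Toy 2-D "behavior codes": two tight groups far apart. The bandwidth
# (window size) of 1.0 is an illustrative value; the embodiment tunes it
# per verification-code data characteristics and security defense level.
rng = np.random.default_rng(0)
codes = np.vstack([rng.normal(0.0, 0.1, (30, 2)),
                   rng.normal(5.0, 0.1, (30, 2))])
ms = MeanShift(bandwidth=1.0).fit(codes)
print(len(ms.cluster_centers_))  # number of cluster centers n found
```

A larger bandwidth merges nearby behavior groups (fewer, coarser clusters, a laxer defense level); a smaller one splits them apart.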
In a specific application scenario, the analysis module 33 may be specifically further configured to calculate an MD5 value of the second mouse behavior sequence before the collecting the behavior codes with the predetermined code length; and if the MD5 value is the same as the MD5 value of the verification code operation behavior sequence corresponding to the previously received second verification request, adding the IP address of the user into the blacklist.
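The MD5 replay check can be sketched with the standard library. The JSON serialization of the sequence is an assumption; any stable byte encoding of the second mouse behavior sequence works.

```python
import hashlib
import json

seen_digests = set()  # MD5 values of previously received behavior sequences

def is_replay(mouse_sequence):
    """An MD5 identical to a previously received verification behavior
    sequence indicates a verbatim replay of a recorded trajectory."""
    digest = hashlib.md5(
        json.dumps(mouse_sequence, sort_keys=True).encode()).hexdigest()
    if digest in seen_digests:
        return True   # caller would add this user's IP to the blacklist
    seen_digests.add(digest)
    return False

seq = [[120, 10, 20], [240, 14, 26], [360, 18, 33]]  # [t, x, y] triples
print(is_replay(seq))        # → False (first occurrence)
print(is_replay(list(seq)))  # → True  (identical sequence, same MD5)
```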
In a specific application scenario, the obtaining module 31 may be specifically configured to obtain user behavior data of the user in a period from when the page where the verification code is opened to when verification of the verification code is completed; and cutting the user behavior data of the user by taking the time point when the user starts verification code verification operation as a cutting point to obtain the operation behavior data and the page browsing behavior data of the user.
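The cutting-point split the obtaining module performs can be sketched as a single pass over the event stream; the `(time_ms, event)` tuple layout is an assumption.

```python
def split_behavior(events, verify_start_ms):
    """Cut the full event stream at the moment the user starts the
    verification code operation: events before the cut point are page
    browsing behavior, events from the cut point onward are verification
    code operation behavior."""
    browsing = [e for e in events if e[0] < verify_start_ms]
    operating = [e for e in events if e[0] >= verify_start_ms]
    return browsing, operating

events = [(100, "move"), (900, "scroll"), (1500, "down"), (1700, "up")]
browsing, operating = split_behavior(events, verify_start_ms=1500)
print(browsing)   # → [(100, 'move'), (900, 'scroll')]
print(operating)  # → [(1500, 'down'), (1700, 'up')]
```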
In a specific application scenario, the device further includes: a saving module and an updating module;
the storage module can be used for storing the classification results of different users and the corresponding user behavior data in the user behavior log;
The obtaining module 31 may be further configured to obtain, from a user behavior log at regular time, behavior data of a normal user based on an intranet IP segment, and/or an IP whitelist, and/or a user whitelist, where the user behavior log stores different user behavior data recorded in different time segments;
the updating module may be used to update, according to the regularly obtained normal-user behavior data, the original user data set required for training the single classification model, so as to train the single classification model with the updated user data set;
The updating module is also used for updating the model by using the single classification model which is up to the standard;
The classification module 32 may be further configured to classify the user behavior data in the current time period using the updated single classification model, extract behavior data classified as an attacker, and add the extracted behavior data to the original attacker data set for updating;
The updating module is further configured to train the neural network model using the updated user data set and the updated attacker data set; and updating the model by using the neural network model which meets the test standard.
In a specific application scenario, the updating module is specifically configured to perform sample collection on the positive and negative samples by using different sampling rates if the number of positive and negative samples of the updated user data set and the number of positive and negative samples of the updated attacker data set are unbalanced, so as to obtain a training set that meets a preset positive and negative sample balancing condition; and training the neural network model by using the training set which accords with the preset positive and negative sample balance condition.
In a specific application scenario, the determining module 34 may be specifically configured to perform weighted summation calculation on the classification results of the neural network model and the single classification model, to determine whether the user is an attacker.
In a specific application scenario, the device may further include: a defense module;
The defending module may be used to rate-limit the verification code verification requests sent by the user; or, instead, to require the user to perform mobile phone verification or answer security question verification.
It should be noted that, for other corresponding descriptions of each functional unit related to the secure processing apparatus based on user behavior provided in this embodiment, reference may be made to corresponding descriptions in fig. 1 and fig. 2, and details are not repeated here.
Based on the above-described methods shown in fig. 1 and 2, correspondingly, the present embodiment further provides a storage medium having a computer program stored thereon, which when executed by a processor, implements the above-described security processing method based on user behavior shown in fig. 1 and 2.
Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.), and includes several instructions for causing a computer device (may be a personal computer, a server, or a network device, etc.) to execute the method of each implementation scenario of the present application.
Based on the methods shown in fig. 1 and fig. 2 and the virtual device embodiment shown in fig. 6, in order to achieve the above objects, the embodiment of the present application further provides a security processing device based on user behavior, which may specifically be a personal computer, a server, a tablet computer, a smart phone, or other network devices, where the device includes a storage medium and a processor; a storage medium storing a computer program; a processor for executing a computer program to implement the above-described security processing method based on user behavior as shown in fig. 1 and 2.
Optionally, the entity device may further include a user interface, a network interface, a camera, a Radio Frequency (RF) circuit, a sensor, an audio circuit, a WI-FI module, and so on. The user interface may include a Display screen (Display), an input unit such as a Keyboard (Keyboard), etc., and the optional user interface may also include a USB interface, a card reader interface, etc. The network interface may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), etc.
It will be appreciated by those skilled in the art that the above-described physical device structure provided in this embodiment is not limited to this physical device, and may include more or fewer components, or may combine certain components, or may be a different arrangement of components.
The storage medium may also include an operating system, a network communication module. The operating system is a program that manages the physical device hardware and software resources described above, supporting the execution of information handling programs and other software and/or programs. The network communication module is used for realizing communication among all components in the storage medium and communication with other hardware and software in the information processing entity equipment.
From the above description of the embodiments, it will be apparent to those skilled in the art that the present application may be implemented by means of software plus the necessary general hardware platform, or by hardware. By applying the scheme of this embodiment, for the verification behavior data submitted by a user or an attacker, the behavior data features are automatically extracted based on a depth model. User behavior data is collected before the user starts verification, and is split into two dimensions, page behavior and verification code behavior, which are detected in different ways. The user verification code behaviors are classified with an anomaly detection model; since the anomaly detection model is a single classification model, only one class of data is needed for training. Because normal user data is very easy to obtain while attacker data is difficult to label, classification with this model poses no data collection difficulty. It is further assumed that an attacker can arbitrarily forge the request data and use real user behavior to implement an attack. The embodiment scheme therefore performs cluster analysis using the similarity between forgeries. Although an attacker can access through thousands of IPs, making it impossible for the website to locate it directly, these IPs can be bound together by the similarity of their behaviors. Thus, even though the attacker uses real user behavior (such as its own operations or those of other users) so that the classification model cannot intercept it, its existence can still be found through similarity clustering.
Those skilled in the art will appreciate that the drawing is merely a schematic illustration of a preferred implementation scenario and that the modules or flows in the drawing are not necessarily required to practice the application. Those skilled in the art will appreciate that modules in an apparatus in an implementation scenario may be distributed in an apparatus in an implementation scenario according to an implementation scenario description, or that corresponding changes may be located in one or more apparatuses different from the implementation scenario. The modules of the implementation scenario may be combined into one module, or may be further split into a plurality of sub-modules.
The above-mentioned inventive sequence numbers are merely for description and do not represent advantages or disadvantages of the implementation scenario. The foregoing disclosure is merely illustrative of some embodiments of the application, and the application is not limited thereto, as modifications may be made by those skilled in the art without departing from the scope of the application.
Claims (10)
1. A method for secure processing based on user behavior, comprising:
Acquiring page browsing behavior data before the user operates the verification code and operation behavior data of the verification code;
Classifying the users by utilizing a neural network model according to the page browsing behavior data, wherein the neural network model is obtained by training based on the page browsing behavior data before verification code operation of normal users and attackers; and,
Classifying the user by utilizing a single classification model according to the operation behavior data of the user, wherein the single classification model is obtained by training based on verification code operation behavior data of a normal user;
Obtaining behavior codes according to the operation behavior data of the user, performing cluster analysis on the behavior codes, binding related user IP addresses according to a cluster analysis result, and jointly counting the access frequency of the bound user IP addresses;
If the suspicious times of the user are judged to be greater than a preset times threshold according to the joint counting result, the neural network model and the single classification model are utilized to reclassify the user, wherein when the neural network model and the single classification model are reclassify, the model classification threshold is lowered to increase the probability of classifying the user as an attacker;
determining whether the user is an attacker by fusing classification results of the neural network model and the single classification model;
The classifying the users by using a neural network model according to the page browsing behavior data specifically comprises:
Acquiring a first mouse behavior sequence and/or a keyboard input behavior sequence from the page browsing behavior data of the user;
Extracting mouse operation characteristics of the first mouse behavior sequence according to the mouse operation type, the coordinates of the mouse and the occurrence time of the event, so as to determine a first probability value of the user being an attacker according to the mouse operation characteristics, in combination with the historical mouse operation characteristics before verification code operation of normal users and attackers, wherein the mouse operation type includes one or more of: clicking, pressing, lifting, moving, moving out of bounds, moving into bounds, and scrolling; and/or,
Extracting keyboard input characteristics of the keyboard input behavior sequence according to ASCII codes corresponding to characters input by a keyboard and time corresponding to keyboard input, so as to determine a second probability value of the user as an attacker according to the keyboard input characteristics and by combining with historical keyboard input characteristics before verification code operation of a normal user and the attacker;
Determining a classification result of the neural network model according to the first probability value and/or the second probability value;
the classifying the user by using a single classification model according to the operation behavior data of the user specifically includes:
acquiring a second mouse behavior sequence from the operation behavior data of the user;
Extracting a vector containing a mouse coordinate time pair from the second mouse behavior sequence;
coding the vector containing the mouse coordinate time pair by using a self-coder to obtain a behavior code with a preset coding length;
Classifying by using the single classification model according to the behavior codes with the preset code length to obtain the score of the user as an attacker;
and if the score is larger than a preset score threshold, judging that the user is an attacker, wherein the preset score threshold is 1.
2. The method according to claim 1, wherein the obtaining page browsing behavior data before the user operates the verification code and the operation behavior data of the verification code specifically includes:
Acquiring user behavior data of the user in a time period from when a page where the verification code is opened to when verification of the verification code is completed;
and cutting the user behavior data of the user by taking the time point when the user starts verification code verification operation as a cutting point to obtain the operation behavior data and the page browsing behavior data of the user.
3. The method according to claim 2, wherein the method further comprises:
storing classification results of different users and corresponding user behavior data in a user behavior log;
regularly filtering from the user behavior log based on an intranet IP section, and/or an IP white list and/or a user white list to obtain behavior data of a normal user;
according to the regularly obtained normal user behavior data, updating the original user data set required for the corresponding training of the single classification model, so as to train the single classification model by using the updated user data set;
updating the model by using the single classification model which is up to the standard;
Classifying the user behavior data in the current time period by using the updated single classification model, extracting behavior data classified as an attacker, and adding the behavior data into an original attacker data set for updating;
Training the neural network model using the updated user data set and the updated attacker data set;
And updating the model by using the neural network model which meets the test standard.
4. A method according to claim 3, characterized in that said training of said neural network model using updated user data sets and updated attacker data sets, in particular comprises:
If the number of positive and negative samples of the updated user data set and the updated attacker data set is unbalanced, sample acquisition is carried out on the positive and negative samples by adopting different sampling rates, and a training set which accords with preset positive and negative sample balance conditions is obtained;
And training the neural network model by using the training set which accords with the preset positive and negative sample balance condition.
5. The method according to claim 1, wherein the determining the classification result of the neural network model according to the first probability value and/or the second probability value specifically comprises:
Weighted summation is carried out on the first probability value and the second probability value;
and if the probability value obtained by the weighted summation is larger than a preset probability threshold value, judging that the user is an attacker, wherein the preset probability threshold value is 0.5.
6. The method according to claim 1, wherein the performing cluster analysis on the behavior codes specifically comprises:
And clustering the collected behavior codes by using a Mean-Shift algorithm to obtain n clustering centers, wherein n is determined by the window size of the Mean-Shift algorithm, and the window size is obtained by adjusting according to the data characteristics of the verification code and the configured security defense level.
7. The method of claim 1, wherein prior to collecting the behavioral encoding, the method further comprises:
Calculating an MD5 value of the second mouse behavior sequence;
and if the MD5 value is the same as the MD5 value of the verification code operation behavior sequence corresponding to the previously received second verification request, adding the IP address of the user into a blacklist.
8. A secure processing device based on user behavior, comprising:
The acquisition module is used for acquiring page browsing behavior data before the user operates the verification code and operation behavior data of the verification code;
the classification module is used for classifying the users by utilizing a neural network model according to the page browsing behavior data, wherein the neural network model is obtained by training based on the page browsing behavior data before verification code operation of normal users and attackers; and,
The classification module is further used for classifying the user by utilizing a single classification model according to the operation behavior data of the user, wherein the single classification model is obtained by training based on verification code operation behavior data of a normal user;
The analysis module is used for carrying out clustering analysis by utilizing the similarity between the fake behaviors according to the operation behavior data of the user, and particularly used for carrying out clustering according to the collected behavior codes; acquiring a target clustering center of which the distance between the behavior codes corresponding to the first verification request is smaller than a preset distance threshold value; binding the user IP address for sending the first verification request with the user IP address contained in the cluster corresponding to the target cluster center; if the combined access frequency calculated by combining the bound user IP addresses is greater than a preset frequency threshold, setting the user IP address for sending the first verification request as suspicious;
the classification module is further configured to, if it is determined according to the clustering analysis result that the number of times the user is set to be suspicious is greater than a preset number of times threshold, reclassify the user using the neural network model and the single classification model, where when reclassifying the neural network model and the single classification model, reduce a model classification threshold to increase a probability that the user is classified as an attacker;
The determining module is used for determining whether the user is an attacker or not by fusing the classification results of the neural network model and the single classification model;
the classification module is further configured to: acquire a first mouse behavior sequence and/or a keyboard input behavior sequence from the page browsing behavior data of the user;
extract mouse operation features from the first mouse behavior sequence according to the mouse operation type, the mouse coordinates, and the event occurrence time, so as to determine a first probability value that the user is an attacker according to the mouse operation features combined with historical mouse operation features collected before verification code operation from normal users and attackers, wherein the mouse operation type comprises one or more of: click, press, release, move, move out of bounds, move into bounds, and scroll; and/or
extract keyboard input features from the keyboard input behavior sequence according to the ASCII codes of the characters typed and the corresponding keystroke times, so as to determine a second probability value that the user is an attacker according to the keyboard input features combined with historical keyboard input features collected before verification code operation from normal users and attackers; and
determine a classification result of the neural network model according to the first probability value and/or the second probability value;
the classification module is further configured to: acquire a second mouse behavior sequence from the operation behavior data of the user;
extract a vector of mouse coordinate-time pairs from the second mouse behavior sequence;
encode the vector of mouse coordinate-time pairs with an autoencoder to obtain a behavior code of a preset encoding length;
classify the behavior code of the preset encoding length with the single classification model to obtain a score indicating that the user is an attacker; and
determine that the user is an attacker if the score is greater than a preset score threshold, wherein the preset score threshold is 1.
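The single-classification branch of claim 8 can be sketched with a one-class model fit on normal users' behavior codes only. This is an assumption-laden illustration: the patent specifies only "a single classification model" and a score threshold of 1, so the choice of One-Class SVM, the score convention (negated decision function), and the synthetic 16-dimensional codes are all hypothetical.

```python
# Hedged sketch of the single-classification step: fit on normal users'
# behavior codes only, then score new codes. One-Class SVM and the
# score convention are assumptions, not the patent's stated model.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
# Behavior codes (e.g. autoencoder outputs) of normal users, length 16.
normal_codes = rng.normal(0.0, 0.5, size=(200, 16))

ocsvm = OneClassSVM(nu=0.05, gamma="scale").fit(normal_codes)

def attacker_score(code):
    """Higher score = more anomalous relative to normal behavior."""
    # decision_function is positive for inliers and negative for
    # outliers, so negate it to obtain an "attacker" score that can be
    # compared against a preset threshold.
    return -float(ocsvm.decision_function(code.reshape(1, -1))[0])

outlier = np.full(16, 5.0)   # a code far from the normal cloud
inlier = normal_codes[0]     # a code the model was trained on
```

A code far from the training distribution (`outlier`) scores well above a code drawn from it (`inlier`), mirroring the claim's "score greater than a preset score threshold" test.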
9. A storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method of any one of claims 1 to 7.
10. A secure processing device based on user behavior, comprising a storage medium, a processor, and a computer program stored on the storage medium and executable on the processor, wherein the processor implements the method of any one of claims 1 to 7 when executing the program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010800733.9A CN112069485B (en) | 2020-06-12 | 2020-06-12 | Safety processing method, device and equipment based on user behaviors |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010536797.2A CN111428231B (en) | 2020-06-12 | 2020-06-12 | Safety processing method, device and equipment based on user behaviors |
CN202010800733.9A CN112069485B (en) | 2020-06-12 | 2020-06-12 | Safety processing method, device and equipment based on user behaviors |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010536797.2A Division CN111428231B (en) | 2020-06-12 | 2020-06-12 | Safety processing method, device and equipment based on user behaviors |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112069485A CN112069485A (en) | 2020-12-11 |
CN112069485B true CN112069485B (en) | 2024-05-14 |
Family
ID=71551351
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010800733.9A Active CN112069485B (en) | 2020-06-12 | 2020-06-12 | Safety processing method, device and equipment based on user behaviors |
CN202010536797.2A Active CN111428231B (en) | 2020-06-12 | 2020-06-12 | Safety processing method, device and equipment based on user behaviors |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010536797.2A Active CN111428231B (en) | 2020-06-12 | 2020-06-12 | Safety processing method, device and equipment based on user behaviors |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN112069485B (en) |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112134837A (en) * | 2020-08-06 | 2020-12-25 | 瑞数信息技术(上海)有限公司 | Method and system for detecting Web attack behavior |
CN112487376A (en) * | 2020-12-07 | 2021-03-12 | 北京明略昭辉科技有限公司 | Man-machine verification method and device |
CN112804374B (en) * | 2021-01-06 | 2023-11-03 | 光通天下网络科技股份有限公司 | Threat IP identification method, threat IP identification device, threat IP identification equipment and threat IP identification medium |
CN113158183A (en) * | 2021-01-13 | 2021-07-23 | 青岛大学 | Method, system, medium, equipment and application for detecting malicious behavior of mobile terminal |
CN112818868B (en) * | 2021-02-03 | 2024-05-28 | 招联消费金融股份有限公司 | Method and device for identifying illegal user based on behavior sequence characteristic data |
CN113014598A (en) * | 2021-03-20 | 2021-06-22 | 北京长亭未来科技有限公司 | Protection method for robot malicious attack, firewall, electronic device and storage medium |
CN113298115A (en) * | 2021-04-19 | 2021-08-24 | 百果园技术(新加坡)有限公司 | User grouping method, device, equipment and storage medium based on clustering |
CN113554515A (en) * | 2021-06-26 | 2021-10-26 | 陈思佳 | Internet financial control method, system, device and medium |
CN113536302A (en) * | 2021-07-26 | 2021-10-22 | 北京计算机技术及应用研究所 | Interface caller safety rating method based on deep learning |
CN114462589B (en) * | 2021-09-28 | 2022-11-04 | 北京卫达信息技术有限公司 | Normal behavior neural network model training method, system, device and storage medium |
CN114462588B (en) * | 2021-09-28 | 2022-11-08 | 北京卫达信息技术有限公司 | Training method, system and equipment of neural network model for detecting network intrusion |
CN114564114B (en) * | 2022-02-18 | 2024-02-27 | 北京圣博润高新技术股份有限公司 | Bastion machine keyboard auditing method, bastion machine keyboard auditing device, bastion machine keyboard auditing equipment and storage medium |
CN114254242B (en) * | 2022-03-01 | 2022-05-03 | 互联网域名系统北京市工程研究中心有限公司 | User portrait method and device based on recursive analysis log |
CN114978969B (en) * | 2022-05-20 | 2023-03-24 | 北京数美时代科技有限公司 | Self-adaptive monitoring and adjusting method and system based on user behaviors |
CN115277068B (en) * | 2022-06-15 | 2024-02-23 | 广州理工学院 | Novel honeypot system and method based on spoofing defense |
CN117176478B (en) * | 2023-11-02 | 2024-02-02 | 南京怡晟安全技术研究院有限公司 | Network security practical training platform construction method and system based on user operation behaviors |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107622072A (en) * | 2016-07-15 | 2018-01-23 | 阿里巴巴集团控股有限公司 | A kind of recognition methods and server, terminal for web page operation behavior |
CN109241709A (en) * | 2018-08-03 | 2019-01-18 | 平安科技(深圳)有限公司 | User behavior recognition method and device based on the verifying of sliding block identifying code |
CN109446789A (en) * | 2018-10-22 | 2019-03-08 | 武汉极意网络科技有限公司 | Anticollision library method, equipment, storage medium and device based on artificial intelligence |
CN110619528A (en) * | 2019-09-29 | 2019-12-27 | 武汉极意网络科技有限公司 | Behavior verification data processing method, behavior verification data processing device, behavior verification equipment and storage medium |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108259503A (en) * | 2018-01-30 | 2018-07-06 | 成都睿码科技有限责任公司 | A kind of is the system and method for website and application division machine and mankind's access |
US11030287B2 (en) * | 2018-06-07 | 2021-06-08 | T-Mobile Usa, Inc. | User-behavior-based adaptive authentication |
CN109271762B (en) * | 2018-08-03 | 2023-04-07 | 平安科技(深圳)有限公司 | User authentication method and device based on slider verification code |
2020
- 2020-06-12 CN CN202010800733.9A patent/CN112069485B/en active Active
- 2020-06-12 CN CN202010536797.2A patent/CN111428231B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN111428231A (en) | 2020-07-17 |
CN112069485A (en) | 2020-12-11 |
CN111428231B (en) | 2020-09-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112069485B (en) | Safety processing method, device and equipment based on user behaviors | |
EP3651043B1 (en) | Url attack detection method and apparatus, and electronic device | |
CN109063456B (en) | Security detection method and system for image type verification code | |
CN109413023B (en) | Training of machine recognition model, machine recognition method and device, and electronic equipment | |
CN109922065B (en) | Quick identification method for malicious website | |
CN111259219B (en) | Malicious webpage identification model establishment method, malicious webpage identification method and malicious webpage identification system | |
CN105072214A (en) | C&C domain name identification method based on domain name feature | |
CN111047173B (en) | Community credibility evaluation method based on improved D-S evidence theory | |
CN110879881A (en) | Mouse track recognition method based on feature component hierarchy and semi-supervised random forest | |
CN115438102B (en) | Space-time data anomaly identification method and device and electronic equipment | |
CN111160797A (en) | Wind control model construction method and device, storage medium and terminal | |
CN115396169B (en) | Method and system for multi-step attack detection and scene restoration based on TTP | |
CN114841705B (en) | Anti-fraud monitoring method based on scene recognition | |
CN116319065A (en) | Threat situation analysis method and system applied to business operation and maintenance | |
CN112287345B (en) | Trusted edge computing system based on intelligent risk detection | |
CN111784360B (en) | Anti-fraud prediction method and system based on network link backtracking | |
WO2021248707A1 (en) | Operation verification method and apparatus | |
CN117473477A (en) | Login method, device and equipment of SaaS interactive system and storage medium | |
CN111079117B (en) | Automatic point-contact verification code identification method based on LeNet and SSD | |
CN112052453A (en) | Webshell detection method and device based on Relief algorithm | |
CN116962089A (en) | Network monitoring method and system for information security | |
CN111970272A (en) | APT attack operation identification method | |
CN111581640A (en) | Malicious software detection method, device and equipment and storage medium | |
CN110808947A (en) | Automatic vulnerability quantitative evaluation method and system | |
CN115828245A (en) | Malicious file identification method based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |