CN104980402B - Method and device for identifying malicious operation


Info

Publication number: CN104980402B
Authority: CN (China)
Prior art keywords: malicious, account, user, message, probability
Legal status: Active
Application number: CN201410141592.9A
Other languages: Chinese (zh)
Other versions: CN104980402A
Inventor: 王俊乐
Current Assignee: Tencent Technology Shenzhen Co Ltd
Original Assignee: Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201410141592.9A
Publication of CN104980402A
Application granted
Publication of CN104980402B


Abstract

The invention discloses a method and a device for identifying malicious operations, and belongs to the field of the internet. The method comprises the following steps: receiving an operation request message sent by a terminal, the operation request message carrying at least a user account and an operation type of a user; acquiring, according to the user account, an account malicious value of the user, the number of times the user has performed operations of the operation type, and a spam message judgment probability of a sent message; and determining whether the operation corresponding to the operation type is a malicious operation according to the account malicious value, the number of operations and the spam message judgment probability. The device comprises a receiving module, an obtaining module and a determining module. The method can accurately identify whether an operation is malicious, and makes it difficult for lawless persons who want to use an automaton to perform malicious operations continuously at high frequency to devise a corresponding strategy that avoids being identified as malicious by the server.

Description

Method and device for identifying malicious operation
Technical Field
The invention relates to the field of internet, in particular to a method and a device for identifying malicious operation.
Background
Nowadays, many lawbreakers perform malicious operations on the internet, and sending spam messages is one such malicious operation. For example, lawless persons often use automata to send spam messages to users at high frequency and without interruption; such spam has no value to users, occupies a large amount of network resources, and may even create network security risks, so malicious operations need to be identified and their execution prevented.
Currently, the prior art provides a method for identifying malicious operations, which may be: receiving an operation request message sent by a terminal, the operation request message carrying a user account and an operation type of a user; counting a first number of times that the user performed operations of that type in a first time period, the first time period being a period of preset length ending at the current time; and, if the counted number is greater than a preset threshold set by a technician, determining that the operation is a malicious operation and issuing a verification code to request the user to verify. For example, assuming that the preset time length set by the technician is 60 seconds and the preset number of times is 30, if the number of times that the user performed operations of the same type as the requested operation in the last 60 seconds is counted to be greater than 30, the operation may be determined to be a malicious operation.
In the process of implementing the invention, the inventor finds that the prior art has at least the following problems:
the strategy of judging whether an operation is malicious solely according to whether the first number of times is greater than the preset number is simple, so lawbreakers can easily deduce the preset time length and the preset number of times and devise a corresponding strategy that evades the server's identification strategy. For example, a lawbreaker may set the automaton to send a spam message every 2.5 seconds on average, so that the number of messages sent in the last 60 seconds does not exceed 30 and the server's identification strategy is avoided.
Disclosure of Invention
In order to prevent lawless persons from avoiding the identification strategy of the server, the invention provides a method and a device for identifying malicious operations. The technical scheme is as follows:
a method of identifying malicious operations, the method comprising:
receiving an operation request message sent by a terminal, wherein the operation request message at least carries a user account and an operation type of a user;
according to the user account of the user, acquiring an account malicious value of the user, the operation times of the user for executing the operation type and a spam message judgment probability of a sent message, wherein the account malicious value is used for representing the degree that the user account is a malicious account, and the spam message judgment probability of the message is used for representing the degree that the message is a spam message;
and determining whether the operation corresponding to the operation type is malicious operation or not according to the account malicious value, the operation times and the junk message judgment probability.
An apparatus to identify malicious operations, the apparatus comprising:
a receiving module, configured to receive an operation request message sent by the terminal, wherein the operation request message at least carries a user account and an operation type of a user;
the acquisition module is used for acquiring an account malicious value of the user, the operation times of the user for executing the operation type and a spam message judgment probability of a sent message according to the user account of the user, wherein the account malicious value is used for representing the degree that the user account is a malicious account, and the spam message judgment probability of the message is used for representing the degree that the message is a spam message;
and the determining module is used for determining whether the operation corresponding to the operation type is malicious operation according to the account malicious value, the operation times and the junk message judgment probability.
In the embodiment of the invention, when an operation request message sent by a terminal is received, whether the operation is a malicious operation can be accurately identified according to the account malicious value of the user, the number of times the user performs operations of the operation type, and the spam message judgment probability of the sent message, and it is difficult for a lawless person who wants to continuously execute malicious operations at high frequency using an automaton to devise a corresponding strategy that avoids being identified as malicious by the server.
Drawings
Fig. 1 is a flowchart of a method for identifying malicious operations according to embodiment 1 of the present invention;
fig. 2 is a flowchart of a method for identifying malicious operations according to embodiment 2 of the present invention;
fig. 3 is a schematic structural diagram of an apparatus for identifying malicious operations according to embodiment 3 of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Example 1
Referring to fig. 1, an embodiment of the present invention provides a method for identifying a malicious operation, including:
step 101: receiving an operation request message sent by a terminal, wherein the operation request message at least carries a user account and an operation type of a user;
step 102: according to a user account of a user, acquiring an account malicious value of the user, the operation times of the user for executing the operation type and a spam judgment probability of a sent message, wherein the account malicious value is used for representing the degree that the user account is a malicious account, and the spam judgment probability of the message is used for representing the degree that the message is a spam;
step 103: and determining whether the operation corresponding to the operation type is a malicious operation or not according to the account number malicious value, the operation times and the junk message judgment probability.
In the embodiment of the invention, when an operation request message sent by a terminal is received, whether the operation is a malicious operation can be accurately identified according to the account malicious value of the user, the number of times the user performs operations of the operation type, and the spam message judgment probability of the sent message, and it is difficult for a lawless person who wants to continuously execute malicious operations at high frequency using an automaton to devise a corresponding strategy that avoids being identified as malicious by the server.
Example 2
The embodiment of the invention provides a method for identifying malicious operation, which is used for identifying whether the operation requested by an operation request message is malicious operation or not when the operation request message sent by a terminal is received, and if so, subsequent control processing can be started to prevent the malicious operation from being executed.
Referring to fig. 2, the method flow includes:
step 201: receiving an operation request message sent by a terminal, wherein the operation request message carries a user account, an operation type and operation content of a user;
when a user executes a certain operation on a terminal, the terminal acquires a user account of the user, operation content and an operation type of the operation, generates an operation request message and sends the operation request message to a server, wherein the operation request message carries the user account of the user, the operation content and the operation type of the operation; and the server receives the operation request message and takes the user account of the user carried by the operation request message as the user account to be analyzed.
The operation type may be sending a message, subscribing to another user's account, or adding a friend. When the operation type is sending a message, the operation content is the message that the user needs to send; when the operation type is subscribing to another user's account, the operation content is the account of the other user to which the user subscribes; when the operation type is adding a friend, the operation content is the account or nickname of the friend to be added, and the like.
For example, taking a microblog platform as an example, suppose that a user wants to send a message on the microblog platform, inputs the message to be sent in the message input box of the terminal, and submits it; the terminal acquires the user's microblog account, the operation type (sending a message) and the message to be sent, generates an operation request message carrying the microblog account, the operation type and the message to be sent, and sends the operation request message to the server; the server receives the operation request message and takes the microblog account carried by the operation request message as the account to be analyzed.
Step 202: acquiring an account malicious value of a user according to a user account of the user, wherein the account malicious value is used for expressing the degree that the user account is a malicious account;
specifically, the following processes (A-1) to (A-4) can be used to realize the following processes:
(A-1): acquiring a first prior probability that a user account to be analyzed is a malicious account and a second prior probability that the user account is a non-malicious account;
specifically, the number of user accounts included in a stored malicious account set, the number of user accounts included in a non-malicious account set, and the total number of user accounts commonly included in the malicious account set and the non-malicious account set are obtained, the ratio between the number of user accounts included in the malicious account set and the total number of user accounts is calculated to obtain a first prior probability, and the ratio between the number of user accounts included in the non-malicious account set and the total number of user accounts is calculated to obtain a second prior probability.
The user accounts stored in the malicious account set are all malicious accounts, and the user accounts stored in the non-malicious account set are all non-malicious accounts.
For example, assuming that the number of user accounts included in the stored malicious account set is 50,000 and the number of user accounts included in the non-malicious account set is 60,000, the total number of user accounts included in the malicious account set and the non-malicious account set together is 110,000; the ratio between 50,000 and 110,000 is 0.45 and the ratio between 60,000 and 110,000 is 0.55, so the first prior probability that the user account to be analyzed is a malicious account is 0.45 and the second prior probability that it is a non-malicious account is 0.55.
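As a rough illustration of (A-1), the two prior probabilities follow directly from the sizes of the two stored account sets. The sketch below reuses the example figures; the variable and function names are illustrative and not taken from the patent.

```python
# Sketch of (A-1): prior probabilities from the stored account sets.
malicious_count = 50_000      # user accounts in the malicious account set
non_malicious_count = 60_000  # user accounts in the non-malicious account set

total = malicious_count + non_malicious_count         # 110,000 accounts in total
first_prior = malicious_count / total                  # P(account is malicious)
second_prior = non_malicious_count / total             # P(account is non-malicious)

print(round(first_prior, 2), round(second_prior, 2))   # 0.45 0.55
```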
(A-2): determining at least one service corresponding to a user account to be analyzed;
the user can select at least one service from a plurality of services, register a user account in the selected service, set the same user account corresponding to the user account in each selected service by the server, and simultaneously store the corresponding relation between the user account and the service by the server.
For example, suppose that a user registers one user account in a microblog, a forum, and a network space respectively, and the three user accounts correspond to the same user account, so that the service corresponding to the user account is acquired to include the microblog, the forum, and the network space.
(A-3): calculating the probability that a user account to be analyzed is a malicious account on each service in at least one service corresponding to the user account according to the stored malicious account set and the non-malicious account set, wherein the probability that the user account is a malicious account on one service is used for expressing the degree that the user account is a malicious account on the service;
for each user account in the malicious account set, at least one service corresponding to the user account is a malicious account on the service corresponding to the user account. For each user account in the second account set, at least one service corresponding to the user account is a non-malicious account in the service corresponding to the user account.
Specifically, for each service corresponding to a user account to be analyzed, a malicious account corresponding to the service is acquired from a malicious account set, and the number of the malicious accounts corresponding to the service is obtained by counting the acquired malicious accounts. And acquiring the non-malicious account corresponding to the service from the non-malicious account set, and counting the acquired non-malicious accounts to obtain the number of the non-malicious accounts corresponding to the service. Calculating a first ratio between the number of malicious accounts corresponding to the service and the number of user accounts included in the malicious account set, and calculating a second ratio between the number of non-malicious accounts corresponding to the service and the number of user accounts included in the non-malicious account set; according to the first ratio, the second ratio, the first prior probability and the second prior probability, calculating the probability that the user account to be analyzed is a malicious account on the service according to the following formula (1);
Pn=(Pu*Pd)/(Pu*Pd+Pv*Pe)……(1);
in the formula (1), Pn is a probability that the user account to be analyzed is a malicious account on the service, Pd is a first prior probability, Pe is a second prior probability, Pu is a first ratio, and Pv is a second ratio.
The operation is executed for each of the other services corresponding to the user account to be analyzed, so that the probability that the user account to be analyzed is a malicious account on each service can be calculated.
For example, assuming that the number of malicious accounts corresponding to the service "microblog" is 4000 and the number of corresponding non-malicious accounts is 5000, the first ratio between the 4000 malicious accounts corresponding to the service "microblog" and the 50,000 user accounts included in the malicious account set is 0.08, and the second ratio between the 5000 non-malicious accounts corresponding to the service "microblog" and the 60,000 user accounts included in the non-malicious account set is 0.083; the probability that the user account to be analyzed is a malicious account on the service "microblog" is then calculated according to formula (1) to be 0.44.
Assuming that the number of malicious accounts corresponding to the service "forum" is 6000 and the number of corresponding non-malicious accounts is 7000, the first ratio between the 6000 malicious accounts corresponding to the service "forum" and the 50,000 user accounts included in the malicious account set is 0.12, and the second ratio between the 7000 non-malicious accounts corresponding to the service "forum" and the 60,000 user accounts included in the non-malicious account set is 0.117; the probability that the user account to be analyzed is a malicious account on the service "forum" is then calculated according to formula (1) to be 0.47.
Assuming that the number of malicious accounts corresponding to the service "network space" is 8000 and the number of corresponding non-malicious accounts is 9000, the first ratio between the 8000 malicious accounts corresponding to the service "network space" and the 50,000 user accounts included in the malicious account set is 0.16, and the second ratio between the 9000 non-malicious accounts corresponding to the service "network space" and the 60,000 user accounts included in the non-malicious account set is 0.15; the probability that the user account to be analyzed is a malicious account on the service "network space" is then calculated according to formula (1) to be 0.46.
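Formula (1) is a per-service Bayes update of the two priors with the per-service ratios. A minimal sketch that reproduces the "microblog" example above (function and variable names are assumptions, not from the patent):

```python
def service_malicious_probability(pu, pv, pd, pe):
    """Formula (1): probability that the account is malicious on one service.

    pu: ratio of malicious accounts on the service to all malicious accounts
    pv: ratio of non-malicious accounts on the service to all non-malicious accounts
    pd: first prior probability (account is malicious)
    pe: second prior probability (account is non-malicious)
    """
    return (pu * pd) / (pu * pd + pv * pe)

# The "microblog" example: 4000/50000 = 0.08 and 5000/60000 ~ 0.083.
print(round(service_malicious_probability(0.08, 0.083, 0.45, 0.55), 2))  # 0.44
```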
(A-4): and determining the account malicious value of the user according to the first prior probability, the second prior probability and the probability that the user account to be analyzed is a malicious account on each service.
Specifically, according to a first prior probability and the probability that the user account to be analyzed is a malicious account on each service, calculating the probability that the user account to be analyzed is a malicious account according to the following formula (2);
Pa=Pd*P1*P2*P3…Pn……(2);
in the formula (2), Pa is a probability that the user account to be analyzed is a malicious account, Pd is a first prior probability, and P1-Pn are probabilities that the user account to be analyzed is a malicious account in each service, respectively.
Then, according to the second prior probability and the probability that the user account to be analyzed is a malicious account on each service, calculating the probability that the user account to be analyzed is a non-malicious account according to the following formula (3);
Pb=Pe*(1-P1)*(1-P2)*(1-P3)…(1-Pn)……(3);
in the formula (3), Pb is a probability that the user account to be analyzed is a non-malicious account, and Pe is a second prior probability.
Finally, according to the probability that the user account to be analyzed is a malicious account and the probability that the user account to be analyzed is a non-malicious account, determining an account malicious value of the user according to the following formula (4);
Figure BDA0000488633900000061
in the formula (4), Pc is an account malicious value of the user.
For example, according to the above formula (2), based on the probability 0.44 that the user account to be analyzed is a malicious account on the service "microblog", the probability 0.47 that it is a malicious account on the service "forum", the probability 0.46 that it is a malicious account on the service "network space", and the first prior probability 0.45, the probability that the user account to be analyzed is a malicious account is calculated to be 0.0428076.
According to the above formula (3), based on the same per-service probabilities 0.44, 0.47 and 0.46 and the second prior probability 0.55, the probability that the user account to be analyzed is a non-malicious account is calculated to be 0.0523204.
Then, according to the probability 0.0428076 that the user account to be analyzed is a malicious account and the probability 0.0523204 that it is a non-malicious account, the account malicious value of the user is calculated according to formula (4) to be 0.72.
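Formulas (2) and (3) multiply the priors by the per-service probabilities and their complements. The original formula (4) is only available as an image, so the sketch below uses the straightforward normalization Pa / (Pa + Pb) as a stand-in, which need not reproduce the 0.72 of the worked example; all names are illustrative.

```python
import math

def account_malicious_value(service_probs, first_prior, second_prior):
    # Formula (2): Pa = Pd * P1 * P2 * ... * Pn
    pa = first_prior * math.prod(service_probs)
    # Formula (3): Pb = Pe * (1 - P1) * (1 - P2) * ... * (1 - Pn)
    pb = second_prior * math.prod(1 - p for p in service_probs)
    # Stand-in for formula (4): normalize Pa against Pb (assumption,
    # not the patent's exact expression).
    return pa / (pa + pb)

# Per-service probabilities for microblog, forum and network space.
print(round(account_malicious_value([0.44, 0.47, 0.46], 0.45, 0.55), 2))
```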
Step 203: acquiring the operation times of the operation type executed by the user according to the user account of the user;
the number of times that the user performs the operation of the operation type may be the number of times that the user performs the operation of the operation type in a first time period, where the first time period is a time period from a first time to a current time, and the first time is before the current time. The server stores a history file corresponding to each account, and when a user executes operation once, a history record is automatically generated and stored in the history file corresponding to the user account, wherein the history record is the operation executed by the user and a corresponding timestamp.
The step can be specifically as follows: the server acquires a history file corresponding to the user account to be analyzed according to the user account to be analyzed, acquires a first operation corresponding to a timestamp in a first time period from the history file, acquires a second operation with the same type as the operation type from the first operation, and counts the number of the second operation to be used as the operation frequency of the user for executing the operation type.
For example, assuming that the user performs 50 operations in the first time period, and the operation types of 30 operations in the 50 operations are all the operation types, the number of times that the user performs the operation type in the first time period is 30.
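Step 203 only needs the per-account history file: filter the records to the first time period and count those whose type matches. A minimal sketch; the record layout and names are assumptions, not from the patent.

```python
import time

def count_operations(history, op_type, window_seconds):
    """Count history records of the given operation type whose timestamp
    falls inside the first time period (the last window_seconds)."""
    start = time.time() - window_seconds
    return sum(1 for record in history
               if record["timestamp"] >= start and record["op_type"] == op_type)

# Usage: 30 of the user's 50 recent operations are of the requested type.
now = time.time()
history = ([{"timestamp": now - 10, "op_type": "send_message"}] * 30 +
           [{"timestamp": now - 20, "op_type": "add_friend"}] * 20)
print(count_operations(history, "send_message", window_seconds=3600))  # 30
```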
Step 204: acquiring a junk message judgment probability of a message sent by a user according to a user account of the user, wherein the junk message judgment probability is used for expressing the degree that the message is a junk message;
the message sent by the user may be a message sent in a first time period.
Specifically, the present step can be realized by the following procedures (B-1) to (B-4), including:
(B-1): acquiring a message sent by a user;
for example, assume that the message sent by the user is acquired as "dress style is elegant".
(B-2): acquiring message feature words sent by a user through a TF-IDF (Term Frequency/Inverse Document Frequency) algorithm;
the message characteristic words may be each real word included in the message sent by the user, and the real words may be nouns, verbs, adjectives, numerators, quantifiers, pronouns, and the like.
For example, the characteristic words obtained from the message sent by the user are "dress", "style", and "elegant", respectively.
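Step (B-2) only names TF-IDF as the extraction method. One way this could look is to score each token of the message by term frequency times inverse document frequency over a reference corpus and keep the highest-scoring words; the pure-Python sketch below is an assumption about that procedure, and the toy corpus and names are illustrative.

```python
import math
from collections import Counter

def tfidf_feature_words(tokens, corpus, top_k=3):
    """Rank the tokens of one message by TF-IDF against a corpus of
    tokenized documents and keep the top_k as feature words."""
    tf = Counter(tokens)
    n_docs = len(corpus)
    scores = {}
    for word, count in tf.items():
        df = sum(1 for doc in corpus if word in doc)      # document frequency
        idf = math.log((n_docs + 1) / (df + 1)) + 1       # smoothed IDF
        scores[word] = (count / len(tokens)) * idf
    return [w for w, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]]

# Toy usage: content words outrank the function word "is".
message = ["one-piece dress", "style", "is", "elegant"]
corpus = [["is", "credit", "card"], ["is", "style", "good"], ["one-piece dress", "is", "new"]]
print(tfidf_feature_words(message, corpus))  # ['elegant', 'one-piece dress', 'style']
```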
(B-3): for each feature word, calculating the occurrence probability of the feature word in each preset message set in a plurality of message sets;
the preset message sets comprise a non-malicious message set and at least one malicious message set, messages stored in the non-malicious message set are all non-malicious messages, for any malicious message set, the messages stored in the malicious message set are all malicious messages, the malicious message set corresponds to a theme, and the theme of each message in the message set is the theme corresponding to the malicious message set.
For any feature word, calculating the probability of the feature word appearing in any message set may be: in the message set, counting the number of messages containing the characteristic word, calculating the ratio of the number to the number of the messages included in the message set, and taking the ratio as the probability of the characteristic word appearing in the message set.
For each of the other feature words, the probability of its occurrence in each message set can be calculated in the manner described above.
For example, it is assumed that the preset malicious message sets are respectively a malicious message set corresponding to a theme "professional dress", a malicious message set corresponding to a theme "credit card", and a malicious message set corresponding to a theme "network earning".
Assuming that the malicious message set corresponding to the theme "professional dress" includes 1000 messages and 200 of those 1000 messages include the feature word "style", the probability that the feature word "style" appears in the malicious message set corresponding to the theme "professional dress" is 0.2.
Similarly, assume the following probabilities are calculated. In the malicious message set corresponding to the theme "professional dress", the feature word "elegant" appears with probability 0.3 and the feature word "one-piece dress" with probability 0.4. In the malicious message set corresponding to the theme "credit card", the feature words "style", "elegant" and "one-piece dress" appear with probabilities 0.5, 0.6 and 0.7, respectively. In the malicious message set corresponding to the theme "network earning", the feature words "style", "elegant" and "one-piece dress" appear with probabilities 0.3, 0.2 and 0.5, respectively. In the non-malicious message set, the feature words "style", "elegant" and "one-piece dress" appear with probabilities 0.1, 0.2 and 0.3, respectively.
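The per-set probabilities in (B-3) are simply the fraction of messages in each stored set that contain the feature word. A minimal sketch (the toy set below is illustrative, not from the patent):

```python
def word_probability(word, message_set):
    """(B-3): fraction of messages in the set that contain the feature word."""
    return sum(1 for message in message_set if word in message) / len(message_set)

# Toy usage: 2 of 5 messages in a malicious set contain the word "style".
toy_set = [{"style", "elegant"}, {"style", "one-piece dress"},
           {"one-piece dress"}, {"credit"}, {"elegant"}]
print(word_probability("style", toy_set))  # 0.4
```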
(B-4): and calculating the judgment probability of the spam message of the message according to the occurrence probability of each feature word in each preset message set of the plurality of message sets.
Specifically, for any one of a plurality of preset message sets, calculating the probability of the message appearing in the message set according to the following formula (5) according to the probability of each feature word appearing in the message set;
Pf=Pf1*Pf2*Pf3…Pfn……(5);
in the above formula (5), Pf is the probability of the occurrence of the message in the message set, and Pf1-Pfn is the probability of the occurrence of each feature word in the message set.
Calculating the probability that the message is spam in the message set according to the probability that the message appears in the message set by the following formula (6);
chi=-2ln(Pf);v=L*2;P=invchi2(chi,v)……(6);
wherein, in the above formula (6), L is the number of feature words and invchi2(chi, v) denotes the inverse chi-square function with v degrees of freedom.
For each of the other message sets, the probability that the message is spam in each message set can be calculated according to the above method.
And then, according to the probability that the message is respectively a spam message in each message set, calculating the judgment probability of the spam message of the message.
The probabilities that the message is a spam message in the respective message sets comprise first probabilities that the message is a spam message in each malicious message set and a second probability that the message is a spam message in the non-malicious message set. If the largest of the first probabilities is greater than or equal to the second probability, the largest first probability is taken as the spam message judgment probability of the message; if the largest of the first probabilities is smaller than the second probability, the spam message judgment probability of the message is set to a preset value. The preset value may be 0.001 or 0.002, etc., and the present invention is not limited thereto.
For example, according to the probability 0.2 that the feature word "style" appears in the malicious message set corresponding to the theme "professional dress", the probability 0.3 that the feature word "elegant" appears in that set, and the probability 0.4 that the feature word "one-piece dress" appears in that set, the probability that the message appears in the malicious message set corresponding to the theme "professional dress" is calculated to be 0.024. Substituting this probability 0.024 into formula (6) gives a probability of 0.12 that the message is a spam message in the malicious message set corresponding to the theme "professional dress".
According to the probability 0.5 that the feature word "style" appears in the malicious message set corresponding to the theme "credit card", the probability 0.6 that the feature word "elegant" appears in that set, and the probability 0.7 that the feature word "one-piece dress" appears in that set, the probability that the message appears in the malicious message set corresponding to the theme "credit card" is calculated to be 0.21. Substituting this probability 0.21 into formula (6) gives a probability of 0.53 that the message is a spam message in the malicious message set corresponding to the theme "credit card".
According to the probability 0.3 that the feature word "style" appears in the malicious message set corresponding to the theme "network earning", the probability 0.2 that the feature word "elegant" appears in that set, and the probability 0.5 that the feature word "one-piece dress" appears in that set, the probability that the message appears in the malicious message set corresponding to the theme "network earning" is calculated to be 0.03. Substituting this probability 0.03 into formula (6) gives a probability of 0.17 that the message is a spam message in the malicious message set corresponding to the theme "network earning".
According to the probability 0.1 that the feature word "style" appears in the non-malicious message set, the probability 0.2 that the feature word "elegant" appears in that set, and the probability 0.3 that the feature word "one-piece dress" appears in that set, the probability that the message appears in the non-malicious message set is calculated to be 0.006. Substituting this probability 0.006 into formula (6) gives a probability of 0.09 that the message is a spam message in the non-malicious message set.
Since the largest of the probabilities that the message is a spam message in the respective malicious message sets is 0.53, which is greater than the probability 0.09 that the message is a spam message in the non-malicious message set, 0.53 is used as the spam message judgment probability of the message.
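Putting (B-4) together: formula (5) multiplies the per-word probabilities, formula (6) maps the product through an inverse chi-square function, and the largest per-malicious-set result wins if it is at least the non-malicious result. The sketch below uses the usual even-degrees-of-freedom chi-square survival function for invchi2; the patent does not spell out its exact convention, so the printed numbers need not match the worked example, and all names are illustrative.

```python
import math

def inv_chi2(chi, df):
    """Chi-square survival function for even df: P(X > chi), used as invchi2
    in formula (6). (Assumed convention; the patent does not define it.)"""
    m = chi / 2.0
    term = prob = math.exp(-m)
    for i in range(1, df // 2):
        term *= m / i
        prob += term
    return min(prob, 1.0)

def set_spam_probability(word_probs):
    """Formulas (5) and (6): Pf = product of per-word probabilities,
    chi = -2 * ln(Pf), v = 2 * L, P = invchi2(chi, v)."""
    pf = math.prod(word_probs)                 # formula (5)
    chi = -2.0 * math.log(pf)                  # formula (6)
    return inv_chi2(chi, 2 * len(word_probs))

def spam_judgment_probability(malicious_probs, non_malicious_prob, preset=0.001):
    """Take the largest per-malicious-set probability if it is at least the
    non-malicious-set probability; otherwise fall back to the preset value."""
    best = max(malicious_probs)
    return best if best >= non_malicious_prob else preset

# Feature-word probabilities from the example above, one list per message set.
per_set = [set_spam_probability(p) for p in ([0.2, 0.3, 0.4],   # "professional dress"
                                             [0.5, 0.6, 0.7],   # "credit card"
                                             [0.3, 0.2, 0.5])]  # "network earning"
non_malicious = set_spam_probability([0.1, 0.2, 0.3])
print(round(spam_judgment_probability(per_set, non_malicious), 2))
```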
Step 205: calculating the malicious score of the operation corresponding to the operation type according to the account malicious value of the user, the number of times the user performs operations of the operation type, and the spam message judgment probability of the sent message;
specifically, calculating a malicious score of an operation corresponding to the operation type according to the following formula (7);
S=(A*C)/M^G……(7);
in the above formula (7), S is the malicious score of the operation corresponding to the operation type, A is the account malicious value, C is the number of operations, M may be 1/P or 1/P+1, P is the spam message judgment probability, and G is a preset coefficient.
In this case, G may be 1.8, 2.0, etc., which is not intended to limit the scope of the present invention.
For example, assuming that G is 1.8, substituting A = 0.72, C = 30 and P = 0.53, which have been calculated in the above steps, into the above formula (7), the malicious score of the operation corresponding to the operation type is 6.89.
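Formula (7), as reconstructed from the worked example (A = 0.72, C = 30, P = 0.53, G = 1.8 giving roughly 6.89), in a short sketch; the names and the optional M = 1/P + 1 switch are written as assumptions.

```python
def malicious_score(account_value, op_count, spam_prob, g=1.8, add_one=False):
    """Formula (7): S = (A * C) / M ** G, with M = 1/P or M = 1/P + 1."""
    m = 1.0 / spam_prob + (1.0 if add_one else 0.0)
    return (account_value * op_count) / (m ** g)

print(round(malicious_score(0.72, 30, 0.53), 2))  # 6.89
```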
Step 206: determining whether the operation is a malicious operation according to the malicious score of the operation corresponding to the operation type, and if so, executing step 207; if not, go to step 208;
specifically, the malicious score of the operation corresponding to the operation type and the preset threshold are determined, if the malicious score is greater than the preset threshold, the operation is determined to be a malicious operation, step 207 is executed, and if the malicious score is less than or equal to the preset threshold, the operation is determined to be a non-malicious operation, and step 208 is executed.
The preset threshold may be 5.0 or 5.5, and the present invention is not limited thereto.
Step 207: preventing the performance of the operation or authenticating the user;
the operation can be directly prevented from being executed, or the verification code is generated and issued to the user to verify the user, and the user can execute the operation only by inputting the correct verification code.
For example, the server generates a verification code and sends the verification code to a terminal used by a user, the user inputs the verification code in a verification code input box of the terminal and submits the verification code to the terminal, the terminal sends the verification code input by the user to the server, and the server compares the issued verification code with the received verification code, and if the verification code is the same as the received verification code, the terminal is allowed to execute the operation.
Step 208: the terminal is allowed to perform the operation.
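Steps 206 to 208 reduce to a threshold comparison plus an optional verification-code challenge. A minimal sketch of that control flow; the function names and the challenge callback are illustrative, not from the patent.

```python
def decide(score, threshold=5.0, challenge=None):
    """Steps 206-208: a score at or above the preset threshold marks the
    operation as malicious; it is then blocked unless the user passes the
    verification-code challenge, otherwise the operation is allowed."""
    if score < threshold:
        return "allow"                           # step 208: non-malicious
    if challenge is not None and challenge():    # step 207: verify the user
        return "allow"
    return "block"                               # step 207: prevent execution

print(decide(6.89))                              # block
print(decide(6.89, challenge=lambda: True))      # allow (verification passed)
print(decide(3.2))                               # allow
```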
In the embodiment of the invention, when an operation request message sent by a terminal is received, the account malicious value of the user is determined according to the first prior probability, the second prior probability and the at least one service corresponding to the user account; the spam message judgment probability of the message sent by the user is calculated according to the stored message sets and the feature words included in the message; and the malicious score of the operation corresponding to the operation type in the operation request message is calculated according to the account malicious value, the number of operations and the spam message judgment probability, so that whether the operation is a malicious operation can be accurately identified according to the malicious score, and it is difficult for a lawless person who wants to continuously execute malicious operations at high frequency using an automaton to devise a corresponding strategy that avoids being identified as malicious by the server.
Example 3
Referring to fig. 3, an embodiment of the present invention provides an apparatus for identifying a malicious operation, including:
a receiving module 301, configured to receive an operation request message sent by a terminal, where the operation request message at least carries a user account and an operation type of a user;
an obtaining module 302, configured to obtain, according to a user account of a user, an account malicious value of the user, the number of times that the user performs an operation of the operation type, and a spam message determination probability of a sent message, where the account malicious value is used to indicate a degree that the user account is a malicious account, and the spam message determination probability of the message is used to indicate a degree that the message is a spam message;
the determining module 303 is configured to determine whether an operation corresponding to the operation type is a malicious operation according to the account malicious value, the operation frequency, and the spam message determination probability.
Preferably, the obtaining module 302 includes:
the first calculation unit is used for calculating the ratio of the number of user accounts included in the malicious account set to the total number of the user accounts included in the malicious account set and the non-malicious account set to obtain a first prior probability;
the second calculation unit is used for calculating the ratio of the number of the user accounts included in the non-malicious account set to the total number of the user accounts included in the malicious account set and the non-malicious account set to obtain a second prior probability;
the first determining unit is used for determining at least one service corresponding to a user account according to the user account of the user;
a third calculating unit, configured to calculate, according to the stored malicious account set and non-malicious account set, a probability that the user account is a malicious account on each service in at least one service, where the probability that the user account is a malicious account on one service is used to indicate a degree that the user account is a malicious account on one service;
and the second determining unit is used for determining the account malicious value of the user according to the first prior probability, the second prior probability and the probability that the user account is a malicious account on each service respectively.
Preferably, the third calculation unit includes:
the acquiring subunit is used for acquiring a malicious account corresponding to each service from the stored malicious account set and acquiring a non-malicious account corresponding to each service from the stored non-malicious account set according to each service corresponding to the user account;
the first calculating subunit is configured to calculate, according to the number of malicious accounts corresponding to each service, the number of non-malicious accounts corresponding to each service, the first prior probability, and the second prior probability, probabilities that the user account is a malicious account on each service, respectively.
Preferably, the first calculating subunit is specifically configured to calculate a first ratio between the number of malicious accounts corresponding to each service and the number of user accounts included in the malicious account set, and a second ratio between the number of non-malicious accounts corresponding to each service and the number of user accounts included in the non-malicious account set; and calculating the probability that the user account is respectively a malicious account on each service according to the first ratio, the second ratio, the first prior probability and the second prior probability.
Preferably, the second determination unit includes:
the second calculating subunit is configured to calculate, according to the first prior probability, the second prior probability, and the probability that the user account is a malicious account in each service, a probability that the user account is a malicious account and a probability that the user account is a non-malicious account;
and the determining subunit is used for determining the account malicious value of the user according to the probability that the user account is a malicious account and the probability that the user account is a non-malicious account.
Preferably, the obtaining module 302 includes:
the second acquisition unit is used for acquiring the message characteristic words sent by the user;
the fourth calculating unit is used for calculating the probability of each feature word appearing in each of a plurality of preset message sets respectively, and for each of the plurality of preset message sets, the message set corresponds to one theme and the theme of each message in the message set is the theme corresponding to the message set;
and the fifth calculating unit is used for calculating the judgment probability of the spam messages of the messages according to the probability of each feature word appearing in each message set in the preset message sets.
Preferably, the fifth calculation unit includes:
the third calculating subunit is used for calculating the probability of the message appearing in each message set according to the probability of each feature word appearing in each message set;
and the fourth calculating subunit is used for calculating the judgment probability of the spam message of the message according to the probability of the message appearing in each message set.
Preferably, the determining module 303 comprises:
the sixth calculating unit is used for calculating the malicious score of the operation corresponding to the operation type according to the malicious value of the account, the operation times and the judgment probability of the spam message;
the third determining unit is used for determining that the operation corresponding to the operation type is a malicious operation if the malicious score is greater than or equal to a preset threshold;
and the fourth determining unit is used for determining that the operation corresponding to the operation type is a non-malicious operation if the malicious score is smaller than the preset threshold.
Preferably, the sixth calculating unit is specifically configured to calculate a malicious score of an operation corresponding to the operation type according to the following formula;
S=(A*C)/M^G;
in the formula, S is the malicious score of the operation corresponding to the operation type, A is the account malicious value, C is the number of operations, M may be 1/P or 1/P+1, P is the spam message judgment probability, and G is a preset coefficient.
In the embodiment of the invention, when an operation request message sent by a terminal is received, whether the operation is a malicious operation can be accurately identified according to the account malicious value of the user, the number of times the user performs operations of the operation type, and the spam message judgment probability of the sent message, and it is difficult for a lawless person who wants to continuously execute malicious operations at high frequency using an automaton to devise a corresponding strategy that avoids being identified as malicious by the server.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (15)

1. A method of identifying malicious operations, the method comprising:
receiving an operation request message sent by a terminal, wherein the operation request message at least carries a user account, an operation type and operation content of a user;
calculating the ratio of the number of user accounts included in a malicious account set to the total number of user accounts included in the malicious account set and a non-malicious account set to obtain a first prior probability;
calculating the ratio of the number of user accounts included in the non-malicious account set to the total number of user accounts included in the malicious account set and the non-malicious account set to obtain a second prior probability;
determining at least one service corresponding to the user account according to the user account of the user;
calculating the probability that the user account is a malicious account on each service in the at least one service respectively according to the stored malicious account set and the non-malicious account set, wherein the probability that the user account is a malicious account on one service is used for representing the degree that the user account is a malicious account on the one service;
determining an account malicious value of the user according to the first prior probability, the second prior probability and the probability that the user account is a malicious account on each service respectively;
according to the user account of the user, obtaining the number of times the user performs operations of the operation type in a first time period and a spam message judgment probability of a sent message, wherein the sent message is the operation content when the operation type is sending a message, the account malicious value is used for representing the degree to which the user account is a malicious account, and the spam message judgment probability of a message is used for representing the degree to which the message is a spam message;
obtaining a product of the account malicious value and the operation times, obtaining a reciprocal of the spam message judgment probability or a sum of the reciprocal and 1, obtaining a ratio of the product to a preset-coefficient power of the reciprocal or of the sum, and taking the ratio as a malicious score of the operation corresponding to the operation type;
and if the malicious score is larger than or equal to a preset threshold value, determining that the operation corresponding to the operation type is a malicious operation.
2. The method of claim 1, wherein calculating the probability that the user account is a malicious account on each of the at least one transaction according to the stored set of malicious accounts and the set of non-malicious accounts comprises:
according to each service corresponding to the user account, acquiring a malicious account corresponding to each service from a stored malicious account set, and acquiring a non-malicious account corresponding to each service from a stored non-malicious account set;
and calculating the probability that the user account is respectively a malicious account on each service according to the number of the malicious accounts corresponding to each service, the number of the non-malicious accounts corresponding to each service, the first prior probability and the second prior probability.
3. The method of claim 2, wherein the calculating, according to the number of malicious accounts corresponding to each service, the number of non-malicious accounts corresponding to each service, the first prior probability, and the second prior probability, the probability that the user account is a malicious account on each service respectively comprises:
calculating a first ratio between the number of the malicious accounts corresponding to each service and the number of the user accounts included in the malicious account set, and a second ratio between the number of the non-malicious accounts corresponding to each service and the number of the user accounts included in the non-malicious account set;
and calculating the probability that the user account is a malicious account on each service respectively according to the first ratio, the second ratio, the first prior probability and the second prior probability.
4. The method of claim 1, wherein the determining the account maliciousness value of the user according to the first prior probability, the second prior probability and the probability that the user account is a malicious account on each service respectively comprises:
calculating the probability that the user account is a malicious account and the probability that the user account is a non-malicious account according to the first prior probability, the second prior probability and the probability that the user account is a malicious account on each service respectively;
and determining an account malicious value of the user according to the probability that the user account is a malicious account and the probability that the user account is a non-malicious account.
5. The method of claim 1, wherein the obtaining the spam judgment probability of the message sent by the user according to the user account of the user comprises:
acquiring message characteristic words sent by the user;
calculating the probability of each feature word appearing in each message set in a plurality of preset message sets respectively, wherein for each message set in the plurality of preset message sets, the message set corresponds to a theme, and the theme of each message in the message set is the theme corresponding to the message set;
and calculating the judgment probability of the spam messages of the messages according to the probability of each characteristic word appearing in each message set of the preset plurality of message sets.
6. The method of claim 5, wherein the calculating the spam judgment probability of the message according to the probability that each feature word appears in each of the preset plurality of message sets comprises:
calculating the probability of the message appearing in each message set according to the probability of each feature word appearing in each message set;
and calculating the judgment probability of the spam message of the message according to the probability of the message appearing in each message set.
7. The method of claim 1, wherein after obtaining, according to the user account of the user, the number of times the user performs the operation of the operation type in the first time period and the spam determination probability of the sent message, the method further comprises:
and if the malicious score is smaller than the preset threshold, determining that the operation corresponding to the operation type is a non-malicious operation.
8. An apparatus to identify malicious operations, the apparatus comprising:
a receiving module, configured to receive an operation request message sent by a terminal, wherein the operation request message at least carries a user account, an operation type and operation content of a user;
an acquisition module, which comprises a first calculation unit, a second calculation unit, a first determination unit, a third calculation unit and a second determination unit;
the first calculation unit is configured to calculate a ratio between the number of user accounts included in a malicious account set and the total number of user accounts included in the malicious account set and a non-malicious account set, so as to obtain a first prior probability;
the second calculation unit is configured to calculate a ratio between the number of user accounts included in the non-malicious account set and the total number of user accounts included in the malicious account set and the non-malicious account set, so as to obtain a second prior probability;
the first determining unit is configured to determine, according to a user account of the user, at least one service corresponding to the user account;
the third calculating unit is configured to calculate, according to a stored malicious account set and a non-malicious account set, probabilities that the user account is a malicious account on each service of the at least one service respectively, where the probability that the user account is a malicious account on one service is used to indicate a degree that the user account is a malicious account on the one service;
the second determining unit is configured to determine an account malicious value of the user according to the first prior probability, the second prior probability, and a probability that the user account is a malicious account on each service;
the acquisition module is used for acquiring an account malicious value of the user, the operation times of the user for executing the operation type in a first time period and a spam message judgment probability of a sent message according to a user account of the user, wherein the account malicious value is used for representing the degree that the user account is a malicious account, and the spam message judgment probability of the message is used for representing the degree that the message is a spam message;
the determining module comprises a sixth calculating unit and a third determining unit;
the sixth calculating unit is configured to obtain a product of the account malicious value and the operation times, obtain a reciprocal of the spam message judgment probability or a sum of the reciprocal and 1, obtain a ratio of the product to a preset-coefficient power of the reciprocal or of the sum, and use the ratio as a malicious score of the operation corresponding to the operation type;
and a third determining unit, configured to determine that an operation corresponding to the operation type is a malicious operation if the malicious score is greater than or equal to a preset threshold.
9. The apparatus of claim 8, wherein the third calculating unit comprises:
an obtaining subunit, configured to obtain, according to each service corresponding to the user account, the malicious accounts corresponding to each service from the stored malicious account set, and obtain the non-malicious accounts corresponding to each service from the stored non-malicious account set;
a first calculating subunit, configured to calculate, according to the number of malicious accounts corresponding to each service, the number of non-malicious accounts corresponding to each service, the first prior probability, and the second prior probability, probabilities that the user account is a malicious account on each service respectively.
10. The apparatus of claim 9,
the first calculating subunit is specifically configured to calculate a first ratio between the number of malicious accounts corresponding to each service and the number of user accounts included in the malicious account set, and a second ratio between the number of non-malicious accounts corresponding to each service and the number of user accounts included in the non-malicious account set; and calculating the probability that the user account is a malicious account on each service respectively according to the first ratio, the second ratio, the first prior probability and the second prior probability.
11. The apparatus of claim 8, wherein the second determining unit comprises:
a second calculating subunit, configured to calculate, according to the first prior probability, the second prior probability, and the probability that the user account is a malicious account on each service, a probability that the user account is a malicious account and a probability that the user account is a non-malicious account;
and a determining subunit, configured to determine the account malicious value of the user according to the probability that the user account is a malicious account and the probability that the user account is a non-malicious account.
12. The apparatus of claim 8, wherein the acquisition module comprises:
a second acquisition unit, configured to acquire feature words of the message sent by the user;
a fourth calculating unit, configured to calculate a probability that each feature word appears in each of a plurality of preset message sets, wherein each message set corresponds to one topic, and the topic of each message in a message set is the topic corresponding to that message set;
and a fifth calculating unit, configured to calculate the spam determination probability of the message according to the probability that each feature word appears in each of the plurality of preset message sets.
13. The apparatus of claim 12, wherein the fifth calculating unit comprises:
a third calculating subunit, configured to calculate, according to the probability that each feature word appears in each message set, the probability that the message appears in each message set;
and a fourth calculating subunit, configured to calculate the spam determination probability of the message according to the probabilities that the message appears in each message set.
14. The apparatus of claim 8, wherein the determining module further comprises:
a fourth determining unit, configured to determine that an operation corresponding to the operation type is a non-malicious operation if the malicious score is smaller than the preset threshold.
15. A computer-readable storage medium, in which at least one program is stored, the program being loaded and executed by a processor to perform the operations of the method for identifying malicious operations according to any one of claims 1 to 7.
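The account scoring described in claims 8 to 11 reads as a naive-Bayes-style computation over the services on which a user account appears. The following minimal Python sketch illustrates one such reading; the function and parameter names (account_malicious_value, malicious_count_by_service, and so on) are illustrative assumptions, and the exact combination formula of the patent specification is not reproduced here.

def account_malicious_value(services, num_malicious, num_non_malicious,
                            malicious_count_by_service,
                            non_malicious_count_by_service):
    # First and second prior probabilities (claim 8): shares of malicious and
    # non-malicious accounts in the two stored account sets, which are assumed
    # to be non-empty.
    total = num_malicious + num_non_malicious
    p_mal = num_malicious / total
    p_ok = num_non_malicious / total

    odds_mal, odds_ok = p_mal, p_ok
    for svc in services:
        # First and second ratios (claim 10): how often the service occurs
        # among malicious and among non-malicious accounts.
        r1 = malicious_count_by_service.get(svc, 0) / num_malicious
        r2 = non_malicious_count_by_service.get(svc, 0) / num_non_malicious
        # Per-service probability that the account is malicious (claims 9-10),
        # taken here as a Bayes posterior over the two priors (an assumption).
        denom = r1 * p_mal + r2 * p_ok
        p_svc = (r1 * p_mal) / denom if denom else 0.0
        odds_mal *= p_svc
        odds_ok *= 1.0 - p_svc

    # Claim 11: fold the priors and the per-service probabilities into a
    # single account malicious value between 0 and 1.
    norm = odds_mal + odds_ok
    return odds_mal / norm if norm else 0.0

The folding rule above is only one possible choice: the returned value rises as the account's services become more typical of the stored malicious accounts, and the patent specification may combine the per-service probabilities differently.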
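Claims 12 and 13 estimate how spam-like a sent message is from the probability of its feature words under several topic-specific message sets. A small sketch follows, assuming each preset message set is a list of tokenized messages and that some topics are designated as spam; the names topic_sets and spam_topics are illustrative, not from the patent.

def spam_probability(feature_words, topic_sets, spam_topics):
    # topic_sets: {topic: [list of word lists]}; spam_topics: topics treated as spam.
    per_topic = {}
    for topic, messages in topic_sets.items():
        total_msgs = len(messages)
        p_msg = 1.0
        for word in feature_words:
            # Probability that the feature word appears in this topic's message
            # set (claim 12), with add-one smoothing so an unseen word does not
            # zero the product.
            containing = sum(1 for msg in messages if word in msg)
            p_msg *= (containing + 1) / (total_msgs + 1)
        # Probability that the whole message appears in this set (claim 13),
        # assuming the feature words contribute independently.
        per_topic[topic] = p_msg

    total = sum(per_topic.values())
    if total == 0:
        return 0.0
    # Spam determination probability: share of probability mass on spam topics.
    return sum(p for t, p in per_topic.items() if t in spam_topics) / total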
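In claim 8, the sixth calculating unit divides the product of the account malicious value and the operation count by a preset coefficient raised to the reciprocal of the spam determination probability (or to that reciprocal plus 1), so for a coefficient greater than 1 a higher spam probability shrinks the denominator and raises the score. The sketch below uses placeholder coefficient and threshold values, not values from the patent, and also shows the threshold comparison of claims 7, 8 and 14.

def malicious_score(account_value, op_count, spam_prob,
                    coefficient=2.0, add_one=False):
    # As spam_prob tends to 0 the exponent grows without bound and the score
    # tends to 0, so treat that limit explicitly.
    if spam_prob <= 0.0:
        return 0.0
    exponent = 1.0 / spam_prob + (1.0 if add_one else 0.0)
    # Ratio of (account malicious value x number of operations) to the preset
    # coefficient raised to that exponent.
    return (account_value * op_count) / (coefficient ** exponent)


def is_malicious_operation(score, threshold=1.0):
    # Third determining unit of claim 8 / claims 7 and 14: compare the
    # malicious score against the preset threshold.
    return score >= threshold

For instance, with the placeholder coefficient 2.0, an account malicious value of 0.9, 50 operations in the first time period, and a spam probability of 0.5, the score is 0.9 * 50 / 2**2 = 11.25.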
CN201410141592.9A 2014-04-09 2014-04-09 Method and device for identifying malicious operation Active CN104980402B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410141592.9A CN104980402B (en) 2014-04-09 2014-04-09 Method and device for identifying malicious operation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410141592.9A CN104980402B (en) 2014-04-09 2014-04-09 Method and device for identifying malicious operation

Publications (2)

Publication Number Publication Date
CN104980402A CN104980402A (en) 2015-10-14
CN104980402B (en) 2020-02-21

Family

ID=54276512

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410141592.9A Active CN104980402B (en) 2014-04-09 2014-04-09 Method and device for identifying malicious operation

Country Status (1)

Country Link
CN (1) CN104980402B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105701684B (en) * 2016-01-11 2020-12-08 腾讯科技(深圳)有限公司 Data processing method and device
CN107645483B (en) * 2016-07-22 2021-03-19 创新先进技术有限公司 Risk identification method, risk identification device, cloud risk identification device and system
CN107124391B (en) * 2016-09-22 2021-11-16 北京星选科技有限公司 Abnormal behavior identification method and device
CN106528680A (en) * 2016-10-25 2017-03-22 智者四海(北京)技术有限公司 Identification method and device for junk information
CN108243142A (en) * 2016-12-23 2018-07-03 阿里巴巴集团控股有限公司 Recognition methods and device and anti-spam content system
CN107220867A (en) * 2017-04-20 2017-09-29 北京小度信息科技有限公司 object control method and device
CN107335220B (en) * 2017-06-06 2021-01-26 广州华多网络科技有限公司 Negative user identification method and device and server
CN112533019B (en) * 2020-12-02 2023-04-07 中国联合网络通信集团有限公司 Detection method and device for user equipment
CN114039772B (en) * 2021-11-08 2023-11-28 北京天融信网络安全技术有限公司 Detection method for network attack and electronic equipment

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101877680A (en) * 2010-05-21 2010-11-03 电子科技大学 Junk mail sending behavior control system and method
CN102368842A (en) * 2011-10-12 2012-03-07 中国联合网络通信集团有限公司 Detection method of abnormal behavior of mobile terminal and detection system thereof
CN103368904A (en) * 2012-03-27 2013-10-23 百度在线网络技术(北京)有限公司 Mobile terminal, and system and method for suspicious behavior detection and judgment
CN103428183A (en) * 2012-05-23 2013-12-04 北京新媒传信科技有限公司 Method and device for identifying malicious website
CN102739683A (en) * 2012-06-29 2012-10-17 杭州迪普科技有限公司 Network attack filtering method and device
CN103595614A (en) * 2012-08-16 2014-02-19 无锡华御信息技术有限公司 User feedback based junk mail detection method
CN103248472A (en) * 2013-04-16 2013-08-14 华为技术有限公司 Operation request processing method and system and attack identification device

Also Published As

Publication number Publication date
CN104980402A (en) 2015-10-14

Similar Documents

Publication Publication Date Title
CN104980402B (en) Method and device for identifying malicious operation
RU2708508C1 (en) Method and a computing device for detecting suspicious users in messaging systems
US10063584B1 (en) Advanced processing of electronic messages with attachments in a cybersecurity system
US9774626B1 (en) Method and system for assessing and classifying reported potentially malicious messages in a cybersecurity system
CN108920947B (en) Abnormity detection method and device based on log graph modeling
CN108810831B (en) Short message verification code pushing method, electronic device and readable storage medium
WO2015144058A1 (en) Account binding processing method, apparatus and system
US9122866B1 (en) User authentication
WO2018045977A1 (en) Shared resource display method, device and storage medium
CN110798488B (en) Web application attack detection method
US9092599B1 (en) Managing knowledge-based authentication systems
US10567374B2 (en) Information processing method and server
CN106470204A (en) User identification method based on request behavior characteristicss, device, equipment and system
US10284565B2 (en) Security verification method, apparatus, server and terminal device
CN107688733B (en) Service interface calling method, device, user terminal and readable storage medium
CN106878275B (en) Identity verification method and device and server
US11165801B2 (en) Social threat correlation
WO2019218476A1 (en) Data exporting method and device
WO2015106728A1 (en) Data processing method and system
JP6629973B2 (en) Method and apparatus for recognizing a service request to change a mobile phone number
US10897479B1 (en) Systems and methods for machine-learning based digital threat assessment with integrated activity verification
CN108234454B (en) Identity authentication method, server and client device
US9754209B1 (en) Managing knowledge-based authentication systems
Elyusufi et al. Social networks fake profiles detection based on account setting and activity
CN112182520B (en) Identification method and device of illegal account number, readable medium and electronic equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant