CN111125658B - Method, apparatus, server and storage medium for identifying fraudulent user - Google Patents


Info

Publication number
CN111125658B
CN111125658B (application CN201911410138.8A)
Authority
CN
China
Prior art keywords
user
identified
behavior data
similarity
fraudulent
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911410138.8A
Other languages
Chinese (zh)
Other versions
CN111125658A (en)
Inventor
钱信羽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Fenqile Network Technology Co ltd
Original Assignee
Shenzhen Fenqile Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Fenqile Network Technology Co ltd
Priority to CN201911410138.8A
Publication of CN111125658A
Application granted
Publication of CN111125658B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiments of the invention provide a method, an apparatus, a server and a storage medium for identifying fraudulent users. The method for identifying a fraudulent user comprises the following steps: acquiring user behavior data of a user to be identified and reference behavior data of a reference user; determining the similarity between the user to be identified and the reference user based on the comparison result of the user behavior data and the reference behavior data; and judging, based on the similarity, whether the user to be identified is a fraudulent user. Because whether the user to be identified is a fraudulent user is determined from its similarity to the reference user, the accuracy of identifying fraudulent users is improved.

Description

Method, apparatus, server and storage medium for identifying fraudulent user
Technical Field
The embodiments of the invention relate to the technical field of risk identification, and in particular to a method, an apparatus, a server and a storage medium for identifying fraudulent users.
Background
Currently, a common way to identify fraudulent users is to combine labeled data with unsupervised modeling (e.g., clustering): users are clustered into groups, the fraud proportion of each group is calculated, and groups with a high fraud proportion are judged to be bad groups. A model is then built from the characteristics of the fraudulent users in the bad groups to obtain an identification model. A user who needs to be identified is input into the identification model to judge whether that user is fraudulent.
However, in this unsupervised-modeling approach, commonly used algorithms such as KNN and k-means suffer from considerable uncertainty and instability on high-dimensional data, so the results obtained are often unstable and the identification is inaccurate.
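As an illustration of the final labeling step of this prior-art pipeline (a minimal sketch; the function name, the 0.5 threshold, and the sample data are hypothetical and not taken from the patent), the per-cluster fraud-proportion check could look like:

```python
from collections import defaultdict

def flag_bad_clusters(cluster_ids, fraud_labels, ratio_threshold=0.5):
    """Given a cluster assignment and known fraud labels, return the set of
    clusters whose fraud proportion exceeds the threshold ("bad groups")."""
    totals = defaultdict(int)
    frauds = defaultdict(int)
    for cid, is_fraud in zip(cluster_ids, fraud_labels):
        totals[cid] += 1
        frauds[cid] += int(is_fraud)
    return {cid for cid in totals if frauds[cid] / totals[cid] > ratio_threshold}

# Users 0-2 fall in cluster "a", users 3-5 in cluster "b".
bad = flag_bad_clusters(["a", "a", "a", "b", "b", "b"],
                        [True, True, False, False, False, True])
# cluster "a" (2/3 fraud) is flagged; cluster "b" (1/3 fraud) is not
```

The instability the patent criticizes lies in the clustering step that produces `cluster_ids`, not in this counting step itself.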
Disclosure of Invention
The embodiments of the invention provide a method, an apparatus, a server and a storage medium for identifying fraudulent users, so as to improve the accuracy of identifying fraudulent users.
In a first aspect, an embodiment of the present invention provides a method for identifying a fraudulent user, including:
acquiring user behavior data of a user to be identified and reference behavior data of a reference user;
determining the similarity of the user to be identified and the reference user based on the comparison result of the user behavior data and the reference behavior data;
and judging, based on the similarity, whether the user to be identified is a fraudulent user.
Optionally, determining the similarity between the user to be identified and the reference user based on the comparison result of the user behavior data and the reference behavior data includes:
calculating a first feature vector corresponding to the user to be identified according to the user behavior data;
calculating a second feature vector corresponding to the reference user according to the reference behavior data;
calculating the feature distance between the first feature vector and the second feature vector;
and taking the feature distance as the similarity between the user to be identified and the reference user.
Optionally, calculating the first feature vector corresponding to the user to be identified according to the user behavior data includes:
inputting the user behavior data into a convolutional neural network based on a first preset model;
and acquiring the first feature vector calculated by the convolutional neural network.
Optionally, the reference user is a user with no record of fraudulent behavior, and judging, based on the similarity, whether the user to be identified is a fraudulent user includes:
judging whether the similarity is larger than a preset threshold;
and when the similarity is larger than the preset threshold, judging that the user to be identified is a fraudulent user.
Optionally, there are a plurality of users to be identified, and judging, based on the similarity, whether the users to be identified are fraudulent users includes:
grouping the plurality of users to be identified according to the similarity of each user to be identified to obtain at least one group to be identified, wherein each group to be identified corresponds to at least one user to be identified;
judging whether a fraudulent user exists in each group to be identified;
and when a fraudulent user exists in a group to be identified, judging that all the users to be identified corresponding to that group are fraudulent users.
Optionally, determining the similarity between the user to be identified and the reference user based on the comparison result of the user behavior data and the reference behavior data includes:
inputting the user behavior data and the reference behavior data as one identification sample into a trained second preset model;
and acquiring the comparison result of the second preset model based on the user behavior data and the reference behavior data, and determining the similarity between the user to be identified and the reference user.
Optionally, before the user behavior data and the reference behavior data are input as one identification sample into the trained second preset model, the method includes:
acquiring a plurality of training samples, each training sample comprising the reference behavior data and historical behavior data;
marking each training sample to obtain a plurality of marked training samples;
and training the second preset model based on the plurality of marked training samples to obtain a trained second preset model.
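The marking of training samples described above might be assembled as follows (a minimal sketch; the pairing scheme, data layout, and label convention are assumptions — the patent does not specify them):

```python
def build_training_samples(reference_data, history):
    """Pair the no-fraud reference user's behavior data with each historical
    user's behavior data, and mark each pair. Here the mark is 1 when the
    historical user was fraudulent (i.e. dissimilar to the reference), else 0
    -- an assumed convention matching "larger similarity value = more likely
    fraudulent"."""
    samples = []
    for user_behavior, was_fraud in history:
        samples.append(((reference_data, user_behavior), 1 if was_fraud else 0))
    return samples

# Hypothetical behavior encodings: one non-fraudulent and one fraudulent user.
history = [([0.1, 0.2], False), ([0.9, 0.8], True)]
samples = build_training_samples([0.1, 0.3], history)
# Each sample is ((reference_behavior, historical_behavior), mark)
```

The marked samples would then be fed to whatever training loop the second preset model uses.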
In a second aspect, an embodiment of the present invention provides an apparatus for identifying a fraudulent user, including:
the acquisition module is used for acquiring user behavior data of the user to be identified and reference behavior data of the reference user;
the similarity determining module is used for determining the similarity between the user to be identified and the reference user based on the comparison result of the user behavior data and the reference behavior data;
and the fraudulent user judging module is used for judging, based on the similarity, whether the user to be identified is a fraudulent user.
In a third aspect, an embodiment of the present invention provides a server, including:
one or more processors;
storage means for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement a method of identifying fraudulent users according to any embodiment of the present invention.
In a fourth aspect, embodiments of the present invention provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method of identifying fraudulent users according to any embodiment of the present invention.
The embodiments of the invention acquire user behavior data of a user to be identified and reference behavior data of a reference user; determine the similarity between the user to be identified and the reference user based on the comparison result of the user behavior data and the reference behavior data; and judge, based on the similarity, whether the user to be identified is a fraudulent user. This avoids the problem that commonly used algorithms such as KNN and k-means are uncertain and unstable on high-dimensional data, which makes the obtained results unstable and the identification inaccurate, and thereby improves the accuracy of identifying fraudulent users.
Drawings
FIG. 1 is a flow chart of a method for identifying fraudulent users according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of calculating a first feature vector by a convolutional neural network based on a first preset model according to the first embodiment of the present invention;
FIG. 3 is a flow chart of a method for identifying fraudulent users according to a second embodiment of the present invention;
FIG. 4 is a flow chart of a method for identifying fraudulent users according to a third embodiment of the present invention;
FIG. 5 is a schematic diagram of a second preset model according to a third embodiment of the present invention;
FIG. 6 is a schematic diagram of an apparatus for identifying fraudulent users according to a fourth embodiment of the present invention;
fig. 7 is a schematic structural diagram of a server according to a fifth embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present invention are shown in the drawings.
Before discussing exemplary embodiments in more detail, it should be mentioned that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart depicts steps as a sequential process, many of the steps may be implemented in parallel, concurrently, or with other steps. Furthermore, the order of the steps may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figures. The processes may correspond to methods, functions, procedures, subroutines, and the like.
Furthermore, the terms "first," "second," and the like, may be used herein to describe various directions, acts, steps, or elements, etc., but these directions, acts, steps, or elements are not limited by these terms. These terms are only used to distinguish one direction, action, step or element from another direction, action, step or element. For example, a first feature vector may be referred to as a second feature vector, and similarly, a second feature vector may be referred to as a first feature vector, without departing from the scope of the present application. Both the first feature vector and the second feature vector are feature vectors, but they are not the same feature vector. The terms "first," "second," and the like, are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present invention, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise.
Example 1
Fig. 1 is a schematic flow chart of a method for identifying a fraudulent user according to a first embodiment of the present invention. The embodiment is applicable to scenarios of identifying fraudulent users. The method may be performed by a device for identifying fraudulent users, which may be implemented in software and/or hardware and may be integrated on a server.
As shown in fig. 1, a method for identifying a fraudulent user according to an embodiment of the present invention includes:
s110, acquiring user behavior data of a user to be identified and reference behavior data of a reference user.
The user to be identified refers to a user who needs to be identified. The user behavior data refers to the behavior data of the user to be identified. Optionally, the user behavior data includes, but is not limited to, the type of web page browsed, the time spent on a web page, the time at which browsing of a web page began, and so on. The reference user refers to the user against whom the user to be identified is compared. In this embodiment, the identity of the reference user is known. Alternatively, the reference user may be a user with no record of fraudulent behavior, or a user with a record of fraudulent behavior (i.e., a fraudulent user); this is not limited here. Preferably, the reference user is a user with no record of fraudulent behavior. Specifically, the behavior of fraudulent users is highly variable, whereas the behavior of users with no record of fraud varies less, so using a user with no fraud record as the reference user makes the identification of fraudulent users more accurate. The reference behavior data refers to the behavior data of the reference user. Likewise, the reference behavior data includes, but is not limited to, the type of web page browsed, the time spent on a web page, the time at which browsing of a web page began, and so on.
S120, determining the similarity between the user to be identified and the reference user based on the comparison result of the user behavior data and the reference behavior data.
The comparison result is the result obtained by comparing the user behavior data with the reference behavior data. The similarity represents the degree of similarity between the user to be identified and the reference user. Specifically, the smaller the similarity value, the higher the degree of similarity between the user to be identified and the reference user. In this embodiment, specifically, the similarity is a value between 0 and 1.
In an optional embodiment, determining the similarity between the user to be identified and the reference user based on the comparison result of the user behavior data and the reference behavior data includes:
calculating a first feature vector corresponding to the user to be identified according to the user behavior data; calculating a second feature vector corresponding to the reference user according to the reference behavior data; calculating the feature distance between the first feature vector and the second feature vector;
and taking the feature distance as the similarity between the user to be identified and the reference user.
The first feature vector is the feature vector of the user behavior data of the user to be identified. The second feature vector is the feature vector of the reference behavior data of the reference user. The feature distance measures the degree of similarity between the first feature vector and the second feature vector: the smaller the feature distance, the more similar the two feature vectors are, i.e., the more similar the corresponding user to be identified and reference user are. In this embodiment, the feature distance serves as the similarity between the user to be identified and the reference user. Optionally, the feature distance is a cosine distance or a Euclidean distance, and is not specifically limited here.
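Both distances mentioned here can be computed as follows (a minimal sketch for illustration; the patent does not fix the vector dimensions, scaling, or the sample values used below):

```python
import math

def euclidean_distance(u, v):
    """Straight-line distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def cosine_distance(u, v):
    """1 minus cosine similarity: 0 for identical directions, up to 2 for
    opposite directions."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

first = [0.2, 0.4, 0.4]   # feature vector of the user to be identified
second = [0.2, 0.4, 0.4]  # feature vector of the reference user
similarity = cosine_distance(first, second)  # near 0 -> maximally similar
```

Using the distance directly as the "similarity" is what makes a smaller value mean a more similar user, consistent with the text above.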
In an optional embodiment, calculating the first feature vector corresponding to the user to be identified according to the user behavior data includes:
inputting the user behavior data into a convolutional neural network based on a first preset model; and acquiring the first feature vector calculated by the convolutional neural network.
A convolutional neural network (CNN) is a class of feedforward neural networks that involves convolution computations and has a deep structure, and is one of the representative algorithms of deep learning. Convolutional neural networks have representation-learning capability and can perform shift-invariant classification of input information through their hierarchical structure. In this embodiment, optionally, the first preset model is an attention model. Specifically, both the user behavior data and the reference behavior data are time-series data, so the positions of the key behavior points differ. Through its attention mechanism, the attention model can better focus on the key points of the user behavior data and the reference behavior data, so the resulting first feature vector is more accurate and the feature distance is less affected by irrelevant behaviors. Correspondingly, the second feature vector is calculated in the same way as the first feature vector: the reference behavior data is input into the convolutional neural network based on the first preset model to obtain the second feature vector. Specifically, the first preset model used for the first feature vector has the same model parameters and weights as the first preset model used for the second feature vector.
In particular, reference may be made to fig. 2, which is a schematic diagram of calculating the first feature vector by a convolutional neural network based on the first preset model according to an embodiment of the present invention. Referring to fig. 2, the user behavior data 100 of the user to be identified includes user behavior 1, user behavior 2, …, user behavior N. Each user behavior corresponds to a time, so user behavior 1 through user behavior N form a time sequence. Specifically, each user behavior includes one or more of the type of web page browsed, the time spent on the web page, and the time at which browsing of the web page began. After the user behavior data 100 is fed into the convolutional neural network 200 based on the first preset model, the convolutional neural network 200 processes the user behavior data 100 to obtain the first feature vector 300.
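The encoding path of fig. 2 — a time-ordered behavior sequence mapped to a fixed-size feature vector — can be sketched with a tiny 1-D convolution and global pooling. Everything here is an assumption for illustration (random weights, kernel width 2, no attention layer, invented feature encodings); the patent specifies no concrete architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """x: (seq_len, in_dim) behavior sequence; kernels: (k, in_dim, out_dim).
    Valid 1-D convolution over time, followed by ReLU."""
    k, _, out_dim = kernels.shape
    seq_len = x.shape[0]
    out = np.empty((seq_len - k + 1, out_dim))
    for t in range(seq_len - k + 1):
        out[t] = np.tensordot(x[t:t + k], kernels, axes=([0, 1], [0, 1]))
    return np.maximum(out, 0.0)

def encode(behavior_seq, kernels):
    """Map a time-ordered behavior sequence to a fixed-size feature vector
    via convolution and global average pooling."""
    return conv1d(behavior_seq, kernels).mean(axis=0)

# 6 user behaviors, each with 3 numeric features (hypothetical encodings of
# page type, dwell time, start time)
seq = rng.normal(size=(6, 3))
kernels = rng.normal(size=(2, 3, 4))  # kernel width 2, 4 output channels
first_feature_vector = encode(seq, kernels)  # shape (4,)
```

Crucially, the same `kernels` would be reused to encode the reference behavior data into the second feature vector, matching the patent's requirement that both branches share identical parameters and weights.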
S130, judging, based on the similarity, whether the user to be identified is a fraudulent user.
In this step, specifically, taking the reference user as a user with no fraud record as an example: the smaller the similarity value between the user to be identified and the reference user, the more likely the user to be identified is also a user with no fraud record; conversely, the larger the similarity value, the more likely the user to be identified is a fraudulent user. Taking the reference user as a fraudulent user as an example: the smaller the similarity value between the user to be identified and the reference user, the more likely the user to be identified is a fraudulent user; conversely, the larger the similarity value, the more likely the user to be identified has no record of fraudulent behavior.
In an optional implementation, the reference user is a user with no record of fraudulent behavior, and judging, based on the similarity, whether the user to be identified is a fraudulent user includes:
judging whether the similarity is larger than a preset threshold; and when the similarity is larger than the preset threshold, judging that the user to be identified is a fraudulent user.
The preset threshold is the condition for judging whether the user to be identified is a fraudulent user. Specifically, because the reference user is a user with no record of fraudulent behavior, the user to be identified is judged to be fraudulent when the similarity is larger than the preset threshold. Optionally, the preset threshold may be set to a fixed value, for example any value between 0.5 and 1.0; optionally, the preset threshold is 0.7. Alternatively, the preset threshold may be obtained by calculating the similarities of a batch of users to be identified at one time, ranking the resulting similarities from largest to smallest, and taking the lowest similarity within the top 20% as the preset threshold. In this embodiment, neither the way the preset threshold is set nor its specific value is limited.
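The percentile-style threshold described here might be computed like this (the 20% figure follows the text; the function name and the batch of sample similarities are made up for illustration):

```python
def percentile_threshold(similarities, top_fraction=0.2):
    """Rank similarities from largest to smallest and return the lowest value
    inside the top `top_fraction` as the preset threshold."""
    ranked = sorted(similarities, reverse=True)
    cutoff = max(1, int(len(ranked) * top_fraction))  # at least one entry
    return ranked[cutoff - 1]

batch = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05]
threshold = percentile_threshold(batch)  # top 20% of 10 users -> 0.8
```

Users whose similarity then exceeds `threshold` would be judged fraudulent, per the rule above.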
According to the technical solution of this embodiment, the user behavior data of the user to be identified and the reference behavior data of the reference user are acquired; the similarity between the user to be identified and the reference user is determined based on the comparison result of the user behavior data and the reference behavior data; and whether the user to be identified is a fraudulent user is judged based on the similarity. Because the similarity is calculated directly and used for the judgment, no unsupervised modeling is required, which avoids the uncertainty of algorithms such as KNN on high-dimensional data and improves the accuracy of identifying fraudulent users. In addition, since the first feature vector is calculated by a convolutional neural network based on an attention model, the first feature vector is more accurate, so the similarity is calculated more accurately and fraudulent users are identified more accurately.
Example two
Fig. 3 is a flowchart of a method for identifying a fraudulent user according to a second embodiment of the present invention. The embodiment is a further refinement of the above technical solution, and is suitable for a scenario in which a plurality of users to be identified are checked for fraud. The method may be performed by a device for identifying fraudulent users, which may be implemented in software and/or hardware and may be integrated on a server.
As shown in fig. 3, a method for identifying a fraudulent user according to a second embodiment of the present invention includes:
s210, acquiring user behavior data of a user to be identified and reference behavior data of a reference user, wherein the number of the users to be identified is multiple.
The user to be identified refers to a user who needs to be identified, and the user behavior data refers to the behavior data of the user to be identified. Optionally, the user behavior data includes, but is not limited to, the type of web page browsed, the time spent on a web page, the time at which browsing of a web page began, and so on. The reference user refers to the user against whom the users to be identified are compared, and the reference behavior data refers to the behavior data of the reference user; likewise, it includes, but is not limited to, the type of web page browsed, the time spent on a web page, the time at which browsing of a web page began, and so on. In this embodiment there are a plurality of users to be identified. Specifically, the reference user in this embodiment is one of the plurality of users to be identified, and the identity of the reference user is uncertain, that is, it is not known whether the reference user is fraudulent.
S220, determining the similarity between the user to be identified and the reference user based on the comparison result of the user behavior data and the reference behavior data.
The comparison result is the result obtained by comparing the user behavior data with the reference behavior data. The similarity represents the degree of similarity between a user to be identified and the reference user; specifically, the smaller the similarity value, the higher the degree of similarity. In this embodiment, since there are a plurality of users to be identified, each user to be identified has its own similarity with the reference user.
S230, grouping the plurality of users to be identified according to the similarity of each user to be identified to obtain at least one group to be identified, wherein each group to be identified corresponds to at least one user to be identified.
A group to be identified is a group obtained by grouping the plurality of users to be identified according to their similarities, and each group to be identified corresponds to at least one user to be identified. Specifically, in this embodiment, the grouping may be done by setting one threshold, for example 0.5: users to be identified whose similarity is greater than 0.5 form one group, and users whose similarity is less than or equal to 0.5 form another group. It is also possible to set two thresholds, for example 0.3 and 0.7, so that users whose similarity is below 0.3, between 0.3 and 0.7, and above 0.7 are divided into separate groups. This embodiment does not specifically limit how the plurality of users to be identified are grouped according to similarity to obtain the groups to be identified.
S240, judging whether a fraudulent user exists in each group to be identified.
In this step, optionally, one user to be identified in a group to be identified may be selected and compared with a user with no record of fraudulent behavior to obtain a comparison similarity. When the comparison similarity is larger than the preset threshold, the selected user is a fraudulent user, i.e., the group to be identified contains a fraudulent user. The judgment can also be made based on manual experience, and is not specifically limited here.
S250, when a fraudulent user exists in a group to be identified, judging that all the users to be identified corresponding to that group are fraudulent users.
Specifically, when even one fraudulent user exists in a group to be identified, all the users to be identified in that group are judged to be fraudulent. By grouping the plurality of users to be identified according to similarity, the identities of all the users in a group can be determined by determining the identity of just one of its members, so a plurality of users to be identified can be identified accurately even when only a small number of identity-confirmed samples are available.
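The grouping and label-propagation steps above can be sketched as follows (the two thresholds 0.3 and 0.7 echo the example given earlier; the user IDs and similarity values are invented for illustration):

```python
import bisect

def group_by_similarity(users, thresholds=(0.3, 0.7)):
    """Split (user_id, similarity) pairs into len(thresholds)+1 groups by
    where each similarity falls among the sorted thresholds."""
    groups = [[] for _ in range(len(thresholds) + 1)]
    for user_id, sim in users:
        groups[bisect.bisect_right(thresholds, sim)].append(user_id)
    return groups

def propagate_fraud_label(group, known_fraudsters):
    """If any member of the group is a known fraudulent user, judge every
    member of the group to be fraudulent."""
    if any(u in known_fraudsters for u in group):
        return set(group)
    return set()

users = [("u1", 0.1), ("u2", 0.5), ("u3", 0.9), ("u4", 0.95)]
low, mid, high = group_by_similarity(users)
fraudsters = propagate_fraud_label(high, known_fraudsters={"u3"})
# -> {"u3", "u4"}: u4 inherits the label because it shares a group with u3
```

This is why only one identity-confirmed sample per group is needed: the verdict for that member is propagated to the whole group.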
According to the technical solution of this embodiment, the user behavior data of the users to be identified and the reference behavior data of the reference user are acquired, where there are a plurality of users to be identified; the similarity between each user to be identified and the reference user is determined based on the comparison result of the user behavior data and the reference behavior data; the plurality of users to be identified are grouped according to their similarities to obtain at least one group to be identified; whether each group contains a fraudulent user is judged; and when a group contains a fraudulent user, all the users to be identified in that group are judged to be fraudulent. Because the similarity is calculated directly and used for the judgment, no unsupervised modeling is needed, which avoids the uncertainty of KNN on high-dimensional data and improves the accuracy of identifying fraudulent users. In addition, by grouping the users according to similarity, the identities of all the users in a group can be determined by determining just one of its members, so a plurality of users to be identified can be identified accurately even with only a small number of identity-confirmed samples.
Example III
Fig. 4 is a flowchart of a method for identifying a fraudulent user according to a third embodiment of the present invention. The embodiment is a further refinement of the above technical solution, and is suitable for a scenario in which a trained model is used to determine the similarity between the user to be identified and the reference user. The method may be performed by a device for identifying fraudulent users, which may be implemented in software and/or hardware and may be integrated on a server.
As shown in fig. 4, a method for identifying a fraudulent user according to a third embodiment of the present invention includes:
s310, acquiring user behavior data of a user to be identified and reference behavior data of a reference user.
The user to be identified refers to a user whose identity needs to be determined, and the user behavior data refers to that user's behavior data. Optionally, the user behavior data includes, but is not limited to, the types of web pages browsed, the time spent on each page, the time at which browsing began, and so on, without limitation here. The reference user refers to the user against whom the user to be identified is compared; in this embodiment, the identity of the reference user is known. Optionally, the reference user may be a user with no fraud record or a user with a fraud record (i.e., a fraudulent user), without limitation here. Preferably, the reference user is a user with no fraud record: fraudulent behavior is highly variable, whereas the behavior of users without fraud records varies little, so taking such a user as the reference makes the identification of fraudulent users more accurate. The reference behavior data refers to the behavior data of the reference user and, likewise, includes but is not limited to the types of web pages browsed, the time spent on each page, the time at which browsing began, and so on.
S320, inputting the user behavior data and the reference behavior data as one identification sample into a trained second preset model.
The second preset model computes, from the identification sample, the comparison result of the user behavior data and the reference behavior data, from which the similarity between the user to be identified and the reference user is determined. Optionally, the second preset model is a twin (Siamese) network model. In this embodiment, specifically, the user behavior data and the reference behavior data are input together as a single identification sample into the trained second preset model, which outputs the similarity between the user to be identified corresponding to the user behavior data and the reference user corresponding to the reference behavior data.
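The essential property of a twin network is that one shared encoder is applied to both inputs, and the distance between the two embeddings serves as the comparison result. The sketch below illustrates only that weight-sharing idea with a single random linear layer; the patent's actual encoder is a CNN (optionally with attention), and all dimensions and weights here are made up.

```python
import numpy as np

# Minimal numpy sketch of the twin (Siamese) network idea: one shared
# encoder processes both behavior sequences, and the distance between
# the two embeddings is the comparison result. The random linear layer
# is a stand-in for the patent's CNN encoder.

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))  # shared weights: 16-dim behavior -> 8-dim embedding

def encode(x):
    return np.tanh(W @ x)  # the SAME W is used for both branches

def twin_similarity(user_behavior, reference_behavior):
    a, b = encode(user_behavior), encode(reference_behavior)
    return float(np.linalg.norm(a - b))  # Euclidean feature distance
```

Because both branches share weights, identical behavior sequences map to identical embeddings and yield a distance of exactly zero, which is what makes the output usable as a similarity score.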
S330, acquiring a comparison result of the second preset model based on the user behavior data and the reference behavior data, and determining the similarity between the user to be identified and the reference user.
The comparison result is obtained by comparing the user behavior data with the reference behavior data; in this embodiment, it is produced by the second preset model from the identification sample. The similarity represents the degree of resemblance between the user to be identified and the reference user: the smaller the similarity value, the more alike the two are. In this embodiment, specifically, the similarity is a value between 0 and 1.
In an optional embodiment, the determining the similarity between the user to be identified and the reference user based on the comparison result of the user behavior data and the reference behavior data includes:
calculating a first feature vector corresponding to the user to be identified according to the user behavior data; calculating a second feature vector corresponding to the reference user according to the reference behavior data; calculating the feature distance of the first feature vector and the second feature vector;
and taking the characteristic distance as the similarity between the user to be identified and the reference user.
The first feature vector is the feature vector of the user behavior data of the user to be identified, and the second feature vector is the feature vector of the reference behavior data of the reference user. The feature distance measures how far apart the two feature vectors are: the smaller the feature distance, the more similar the first and second feature vectors, and hence the more similar the corresponding user to be identified and reference user. In this embodiment, the feature distance serves as the similarity between the user to be identified and the reference user. Optionally, the feature distance is a cosine distance or a Euclidean distance, without specific limitation here.
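Both distance options named above can be computed in a few lines; the sketch below shows each. The vectors are arbitrary examples, and cosine distance is taken here in its common form of one minus the cosine similarity.

```python
import numpy as np

# The patent allows either a cosine distance or a Euclidean distance
# as the feature distance between the two embeddings; both are shown.

def euclidean_distance(a, b):
    return float(np.linalg.norm(np.asarray(a, float) - np.asarray(b, float)))

def cosine_distance(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    cos_sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(1.0 - cos_sim)  # 0 when the vectors point the same way
```

Note the two behave differently: cosine distance ignores vector magnitude and is naturally bounded in [0, 2], which fits the embodiment's statement that the similarity lies between 0 and 1 when embeddings have non-negative correlation, while Euclidean distance is unbounded and magnitude-sensitive.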
In an optional embodiment, calculating the first feature vector corresponding to the user to be identified according to the user behavior data includes:
inputting the user behavior data into a convolutional neural network based on a first preset model; and acquiring a first eigenvector calculated based on the convolutional neural network.
A convolutional neural network (CNN) is a class of feedforward neural networks that performs convolutional computation and has a deep structure, and is one of the representative algorithms of deep learning. Convolutional neural networks are capable of representation learning and can perform shift-invariant classification of input information through their hierarchical structure. In this embodiment, optionally, the first preset model is an attention model. Specifically, both the user behavior data and the reference behavior data are time-series data, so the positions of the key behavior points differ between sequences. Through its attention mechanism, the attention model can better focus on the key points of the user behavior data and the reference behavior data, making the resulting first feature vector more accurate and the feature distance less affected by irrelevant behaviors. Correspondingly, the second feature vector is computed in the same way as the first: the reference behavior data is input into the convolutional neural network based on the first preset model to obtain the second feature vector. Specifically, the first preset model used for the first feature vector has the same model parameters and weights as the one used for the second feature vector.
In this embodiment, the first predetermined model is a sub-model in the second predetermined model. Referring to fig. 5, fig. 5 is a schematic diagram of a second preset model according to an embodiment of the present invention. As can be seen from fig. 5, the convolutional neural network 600 of the first predetermined model is a submodel included in the second predetermined model. Specifically, the user behavior data 400 of the user to be identified and the reference behavior data 500 of the reference user are input as an identification sample to a second preset model, the convolutional neural network 600 of the first preset model included in the second preset model calculates to obtain a first feature vector 700 corresponding to the user behavior data and a second feature vector 800 corresponding to the reference behavior data, and then calculates the similarity of the first feature vector and the second feature vector, thereby obtaining the similarity of the user to be identified and the reference user.
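As a minimal illustration of the convolutional encoding step inside the first preset model, the sketch below applies a single 1D convolution over a time-ordered behavior sequence. The kernel values are arbitrary stand-ins; a real model would learn them, and would stack many such filters with nonlinearities and (per this embodiment) an attention mechanism.

```python
import numpy as np

# Illustrative 1D "valid" convolution over a time-ordered behavior
# sequence, standing in for the CNN encoder of the first preset model.
# The kernel [1, 0, -1] is arbitrary; here it computes local differences.

def conv1d_valid(sequence, kernel):
    seq, ker = np.asarray(sequence, float), np.asarray(kernel, float)
    n = len(seq) - len(ker) + 1
    return np.array([seq[i:i + len(ker)] @ ker for i in range(n)])

out = conv1d_valid([1, 2, 3, 4], [1, 0, -1])
# each output element equals x[i] - x[i+2]
```

Because the same kernel slides over every position, the operation is shift-equivariant, which is the property behind the shift-invariant classification mentioned above.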
S340, judging whether the user to be identified is a fraudulent user or not based on the similarity.
In this step, specifically, taking the reference user to be a user with no fraud record as an example: the smaller the similarity value between the user to be identified and the reference user, the more likely the user to be identified also has no fraud record; conversely, the larger the similarity value, the more likely the user to be identified is a fraudulent user. Taking the reference user to be a fraudulent user as an example, the relationship is reversed: the smaller the similarity value, the more likely the user to be identified is a fraudulent user, and the larger the value, the more likely the user to be identified has no fraudulent behavior.
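The decision rule of S340 can be sketched as below. The 0.5 threshold is purely illustrative (the patent leaves the preset threshold unspecified), and the similarity is again treated as a distance-like score where smaller means more alike.

```python
# Sketch of the S340 decision rule. The threshold value is an
# illustrative assumption; the patent only requires comparison against
# some preset threshold.

def is_fraudulent(similarity, threshold=0.5, reference_is_benign=True):
    """similarity: distance-like score in [0, 1], smaller = more alike."""
    if reference_is_benign:
        return similarity > threshold   # far from a benign reference -> suspect
    return similarity <= threshold      # close to a known fraudster -> suspect
```

The `reference_is_benign` flag mirrors the two cases discussed above: with a no-fraud-record reference, large distance implicates the user, while with a fraudulent reference, small distance does.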
According to this embodiment, the trained second preset model processes the user behavior data and the reference behavior data as one identification sample and yields their comparison result, from which the similarity between the user to be identified and the reference user is determined, thereby improving identification efficiency.
In an alternative embodiment, before said inputting the user behavior data and the reference behavior data as one recognition sample into the trained second preset model, the method comprises:
acquiring a plurality of training samples, each training sample comprising the reference behavior data and historical behavior data; marking each training sample to obtain a plurality of marked training samples; and training the second preset model based on the plurality of marked training samples to obtain a trained second preset model.
A training sample is a sample used to train the second preset model and consists of reference behavior data and historical behavior data. Historical behavior data refers to the behavior data of users of known identity; during training, these users must include both fraudulent users and users with no fraud record. The reference behavior data may be the behavior data of a user with no fraud record or that of a fraudulent user, without limitation here; preferably, it is the behavior data of a user with no fraud record. Taking the reference behavior data to be that of a user with no fraud record, each training sample (the pairing of the reference behavior data with one piece of historical behavior data) is marked as follows: when the historical behavior data belongs to a fraudulent user, the sample is marked 1; when it belongs to a user with no fraud record, the sample is marked 0. After all training samples are marked, the second preset model is trained on the plurality of marked training samples, yielding a trained second preset model capable of discriminating the similarity within an identification sample.
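The pair-labeling scheme just described can be sketched as follows. All data values are made up for illustration; the only substantive content is the 1/0 marking convention taken from the embodiment.

```python
# Sketch of the training-sample marking scheme: each sample pairs the
# (benign) reference behavior with one piece of historical behavior,
# marked 1 when the history belongs to a fraudulent user and 0 when it
# belongs to a user with no fraud record. Data values are illustrative.

def build_training_samples(reference_behavior, histories):
    """histories: list of (behavior_data, is_fraud) tuples for users
    of known identity."""
    samples = []
    for behavior, is_fraud in histories:
        label = 1 if is_fraud else 0
        samples.append(((reference_behavior, behavior), label))
    return samples
```

A model trained on such pairs with, e.g., a contrastive loss would learn to place benign histories near the reference embedding and fraudulent ones far from it, which is the discriminating capability the embodiment requires.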
According to this technical scheme, the user behavior data of the user to be identified and the reference behavior data of the reference user are obtained; the user behavior data and the reference behavior data are input as one identification sample into the trained second preset model; the comparison result produced by the second preset model from the two kinds of data is obtained, and the similarity between the user to be identified and the reference user is determined from it; whether the user to be identified is a fraudulent user is then judged based on the similarity. Because the similarity is calculated directly and the judgment is made from it, no unsupervised modeling is required, the uncertainty that KNN exhibits on high-dimensional data is avoided, and the accuracy of identifying fraudulent users is improved. In addition, computing the comparison result of the user behavior data and the reference behavior data with a single trained model improves identification efficiency.
Example IV
Fig. 6 is a schematic structural diagram of an apparatus for identifying a rogue user according to a fourth embodiment of the present invention, where the present embodiment is applicable to a scenario in which a rogue user is identified, and the apparatus may be implemented in software and/or hardware and may be integrated on a server.
As shown in fig. 6, the apparatus for identifying a fraudulent user according to this embodiment may include an acquisition module 410, a similarity determination module 420, and a fraudulent user determination module 430, where:
an obtaining module 410, configured to obtain user behavior data of a user to be identified and reference behavior data of a reference user; a similarity determining module 420, configured to determine a similarity between the user to be identified and the reference user based on a comparison result of the user behavior data and the reference behavior data; and a fraudulent user judging module 430, configured to judge whether the user to be identified is a fraudulent user based on the similarity.
Optionally, the similarity determining module 420 includes: the feature vector calculation unit is used for calculating a first feature vector corresponding to the user to be identified according to the user behavior data; calculating a second feature vector corresponding to the reference user according to the reference behavior data; a feature distance calculating unit for calculating feature distances of the first feature vector and the second feature vector; and the similarity determining unit is used for taking the characteristic distance as the similarity between the user to be identified and the reference user.
Optionally, the feature vector calculation unit is specifically configured to input the user behavior data into a convolutional neural network based on a first preset model; and acquiring a first eigenvector calculated based on the convolutional neural network.
Optionally, the fraud user determining module 430 is specifically configured to determine whether the similarity is greater than a preset threshold; and when the similarity is larger than the preset threshold, judging the user to be identified as a fraudulent user.
Optionally, the fraud user determining module 430 includes: the grouping unit is used for grouping a plurality of users to be identified according to the similarity of each user to be identified to obtain at least one group to be identified, and the group to be identified corresponds to the at least one user to be identified; a fraudulent user judging unit for judging whether each group to be identified has a fraudulent user; and when the fraudulent user exists in the group to be identified, judging that all the users to be identified corresponding to the group to be identified are the fraudulent users.
Optionally, the similarity determining module 420 is specifically configured to input the user behavior data and the reference behavior data as one recognition sample to a trained second preset model; and acquiring a comparison result of the second preset model based on the user behavior data and the reference behavior data, and determining the similarity of the user to be identified and the reference user.
Optionally, the apparatus further includes a training module configured to obtain a plurality of training samples, each training sample including the reference behavior data and the historical behavior data; marking each training sample to obtain a plurality of marked training samples; and training the second preset model based on the plurality of marked training samples to obtain a trained second preset model.
The device for identifying the fraudulent user provided by the embodiment of the invention can execute the method for identifying the fraudulent user provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of executing the method. Reference may be made to the description of any method embodiment of the invention for details not explicitly described in this embodiment of the invention.
Example five
Fig. 7 is a schematic structural diagram of a server according to a fifth embodiment of the present invention. Fig. 7 illustrates a block diagram of an exemplary server 612 suitable for use in implementing embodiments of the invention. The server 612 shown in fig. 7 is merely an example, and should not be construed as limiting the functionality and scope of use of embodiments of the present invention.
As shown in fig. 7, the server 612 is in the form of a general-purpose server. Components of server 612 may include, but are not limited to: one or more processors 616, a memory device 628, and a bus 618 that connects the various system components, including the memory device 628 and the processor 616.
Bus 618 represents one or more of several types of bus structures, including a memory device bus or memory device controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Server 612 typically includes a variety of computer system readable media. Such media can be any available media that is accessible by server 612 and includes both volatile and nonvolatile media, removable and non-removable media.
The storage 628 may include computer system readable media in the form of volatile memory, such as random access memory (RAM) 630 and/or cache memory 632. Server 612 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 634 can be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 7, commonly referred to as a "hard drive"). Although not shown in fig. 7, a magnetic disk drive for reading from and writing to a removable nonvolatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable nonvolatile optical disk such as a compact disc read-only memory (CD-ROM), a digital versatile disc read-only memory (DVD-ROM), or other optical media, may be provided. In such cases, each drive may be coupled to bus 618 through one or more data medium interfaces. The storage 628 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of the embodiments of the present invention.
A program/utility 640 having a set (at least one) of program modules 642 may be stored, for example, in the storage 628, such program modules 642 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Program modules 642 generally perform the functions and/or methods of the described embodiments of the present invention.
The server 612 may also communicate with one or more external devices 614 (e.g., a keyboard, a pointing device, a display 624, etc.), with one or more devices that enable a user to interact with the server 612, and/or with any device (e.g., a network card, modem, etc.) that enables the server 612 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 622. Also, the server 612 may communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via the network adapter 620. As shown in fig. 7, network adapter 620 communicates with the other modules of server 612 over bus 618. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with server 612, including, but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, redundant arrays of independent disks (RAID) systems, tape drives, data backup storage systems, and the like.
Processor 616 executes various functional applications and data processing by running programs stored in storage 628, such as implementing a method of identifying fraudulent users provided by any embodiment of the present invention, which may include:
acquiring user behavior data of a user to be identified and reference behavior data of a reference user;
determining the similarity of the user to be identified and the reference user based on the comparison result of the user behavior data and the reference behavior data;
and judging whether the user to be identified is a fraudulent user or not based on the similarity.
According to this technical scheme, the user behavior data of the user to be identified and the reference behavior data of the reference user are obtained; the similarity between the user to be identified and the reference user is determined based on the comparison result of the user behavior data and the reference behavior data; and whether the user to be identified is a fraudulent user is judged based on the similarity. Because the similarity is calculated directly and the judgment is made from it, no unsupervised modeling is required, the uncertainty that KNN exhibits on high-dimensional data is avoided, and the accuracy of identifying fraudulent users is improved.
Example six
A sixth embodiment of the present invention also provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method of identifying fraudulent users as provided by any embodiment of the present invention, the method may include:
acquiring user behavior data of a user to be identified and reference behavior data of a reference user;
determining the similarity of the user to be identified and the reference user based on the comparison result of the user behavior data and the reference behavior data;
and judging whether the user to be identified is a fraudulent user or not based on the similarity.
The computer-readable storage media of embodiments of the present invention may take the form of any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, smalltalk, C ++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or terminal. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
According to this technical scheme, the user behavior data of the user to be identified and the reference behavior data of the reference user are obtained; the similarity between the user to be identified and the reference user is determined based on the comparison result of the user behavior data and the reference behavior data; and whether the user to be identified is a fraudulent user is judged based on the similarity. Because the similarity is calculated directly and the judgment is made from it, no unsupervised modeling is required, the uncertainty that KNN exhibits on high-dimensional data is avoided, and the accuracy of identifying fraudulent users is improved.
Note that the above is only a preferred embodiment of the present invention and the technical principle applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, the invention is not limited to the embodiments, but may be embodied in many other equivalent forms without departing from the spirit or scope of the invention, which is set forth in the following claims.

Claims (6)

1. A method of identifying a rogue user, comprising:
acquiring user behavior data of a user to be identified and reference behavior data of a reference user;
determining the similarity of the user to be identified and the reference user based on the comparison result of the user behavior data and the reference behavior data;
judging whether the user to be identified is a fraudulent user or not based on the similarity;
the method for judging whether the user to be identified is a fraudulent user based on the similarity comprises the following steps:
grouping a plurality of users to be identified according to the similarity of each user to be identified to obtain at least one group to be identified, wherein the group to be identified corresponds to at least one user to be identified;
judging whether each group to be identified has a fraudulent user or not;
when the group to be identified has the fraudulent user, judging that all the users to be identified corresponding to the group to be identified are the fraudulent users;
the determining the similarity between the user to be identified and the reference user based on the comparison result of the user behavior data and the reference behavior data comprises the following steps:
inputting the user behavior data and the reference behavior data as an identification sample into a trained second preset model; the second preset model is a twin network model;
acquiring a comparison result of the second preset model based on the user behavior data and the reference behavior data, and determining the similarity of the user to be identified and the reference user; the determining the similarity between the user to be identified and the reference user based on the comparison result of the user behavior data and the reference behavior data comprises the following steps:
calculating a first feature vector corresponding to the user to be identified according to the user behavior data;
calculating a second feature vector corresponding to the reference user according to the reference behavior data;
calculating the feature distance of the first feature vector and the second feature vector;
taking the characteristic distance as the similarity between the user to be identified and the reference user;
the calculating the first feature vector corresponding to the user to be identified according to the user behavior data comprises:
inputting the user behavior data into a convolutional neural network based on a first preset model;
acquiring a first eigenvector calculated based on the convolutional neural network; the first preset model is a sub-model in the second preset model.
2. The method of identifying fraudulent users of claim 1, wherein said reference user is a user who has no record of fraudulent activity, said determining whether said user to be identified is a fraudulent user based on said similarity includes:
judging whether the similarity is larger than a preset threshold value or not;
and when the similarity is larger than the preset threshold, judging the user to be identified as a fraudulent user.
3. A method of identifying fraudulent users according to claim 1, including, prior to said inputting said user behavior data and reference behavior data as an identification sample into a trained second predetermined model:
acquiring a plurality of training samples, each training sample comprising the reference behavior data and historical behavior data;
marking each training sample to obtain a plurality of marked training samples;
and training the second preset model based on the plurality of marked training samples to obtain a trained second preset model.
4. An apparatus for identifying fraudulent users, comprising:
the acquisition module is used for acquiring user behavior data of the user to be identified and reference behavior data of the reference user;
the similarity determining module is used for determining the similarity between the user to be identified and the reference user based on the comparison result of the user behavior data and the reference behavior data;
the fraud user judging module is used for judging whether the user to be identified is a fraud user or not based on the similarity;
the fraudulent user judging module includes: the grouping unit is used for grouping a plurality of users to be identified according to the similarity of each user to be identified to obtain at least one group to be identified, and the group to be identified corresponds to the at least one user to be identified; a fraudulent user judging unit for judging whether each group to be identified has a fraudulent user; when the group to be identified has the fraudulent user, judging that all the users to be identified corresponding to the group to be identified are the fraudulent users;
the similarity determining module is specifically configured to input the user behavior data and the reference behavior data as an identification sample to a trained second preset model; the second preset model is a twin network model; acquiring a comparison result of the second preset model based on the user behavior data and the reference behavior data, and determining the similarity of the user to be identified and the reference user;
the similarity determination module comprises: the feature vector calculation unit is used for calculating a first feature vector corresponding to the user to be identified according to the user behavior data; calculating a second feature vector corresponding to the reference user according to the reference behavior data; a feature distance calculating unit for calculating feature distances of the first feature vector and the second feature vector; a similarity determining unit, configured to take the feature distance as a similarity between the user to be identified and a reference user;
The feature vector calculation unit is specifically used for inputting the user behavior data into a convolutional neural network based on a first preset model; acquiring a first eigenvector calculated based on the convolutional neural network; the first preset model is a sub-model in the second preset model.
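The twin-network pipeline of claim 4 can be illustrated in plain Python: a shared-weight encoder stands in for the CNN sub-model (sharing the weights across both branches is what makes the network "twin"/Siamese), the Euclidean feature distance serves as the similarity, and a group formed under a distance threshold is flagged wholesale if any member is a known fraudulent user. The weights, threshold, and user IDs below are all invented for illustration; this is a sketch of the claimed structure, not the patented implementation.

```python
import math

# Toy shared encoder weights (stand-in for the CNN sub-model;
# BOTH branches of the twin network use these same weights).
W = [[0.2, -0.5, 0.1],
     [0.4, 0.3, -0.2],
     [-0.1, 0.6, 0.5],
     [0.3, -0.4, 0.2]]

def encode(behavior):
    """Map a 4-dim behavior vector to a 3-dim feature vector using the
    shared weights; tanh keeps each feature bounded in (-1, 1)."""
    return [math.tanh(sum(b * w for b, w in zip(behavior, col)))
            for col in zip(*W)]

def similarity(user_behavior, reference_behavior):
    """Euclidean distance between the two feature vectors, used
    directly as the similarity score (smaller = more similar)."""
    u, r = encode(user_behavior), encode(reference_behavior)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, r)))

def flag_groups(users, distances, threshold, known_fraud):
    """Group users whose distance to the reference falls below the
    threshold; if any member of the group is a known fraudulent user,
    flag the entire group (claim 4's grouping unit)."""
    group = [u for u, d in zip(users, distances) if d < threshold]
    if any(u in known_fraud for u in group):
        return set(group)  # every member of the group is flagged
    return set()

# Illustrative run:
users = ["a", "b", "c"]
reference = [1.0, 0.0, 1.0, 0.0]
behaviors = {"a": [1.0, 0.0, 1.0, 0.0],   # identical to reference
             "b": [0.9, 0.1, 1.0, 0.0],   # close to reference
             "c": [9.0, 9.0, -9.0, 9.0]}  # far from reference
dists = [similarity(behaviors[u], reference) for u in users]
flagged = flag_groups(users, dists, threshold=0.5, known_fraud={"a"})
```

Here "a" and "b" fall in the same group and, since "a" is a known fraudulent user, both are flagged, while the distant "c" is not.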
5. A server, comprising:
one or more processors;
a storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of identifying fraudulent users of any of claims 1-3.
6. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of identifying fraudulent users according to any of claims 1-3.
CN201911410138.8A 2019-12-31 2019-12-31 Method, apparatus, server and storage medium for identifying fraudulent user Active CN111125658B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911410138.8A CN111125658B (en) 2019-12-31 2019-12-31 Method, apparatus, server and storage medium for identifying fraudulent user

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911410138.8A CN111125658B (en) 2019-12-31 2019-12-31 Method, apparatus, server and storage medium for identifying fraudulent user

Publications (2)

Publication Number Publication Date
CN111125658A CN111125658A (en) 2020-05-08
CN111125658B true CN111125658B (en) 2024-03-22

Family

ID=70506283

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911410138.8A Active CN111125658B (en) 2019-12-31 2019-12-31 Method, apparatus, server and storage medium for identifying fraudulent user

Country Status (1)

Country Link
CN (1) CN111125658B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112348519A (en) * 2020-10-21 2021-02-09 上海淇玥信息技术有限公司 Method and device for identifying fraudulent user and electronic equipment
CN112308703A (en) * 2020-11-02 2021-02-02 创新奇智(重庆)科技有限公司 User grouping method, device, equipment and storage medium
CN112365338B (en) * 2020-11-11 2024-03-22 天翼安全科技有限公司 Data fraud detection method, device, terminal and medium based on artificial intelligence
CN113362070A (en) * 2021-06-03 2021-09-07 中国工商银行股份有限公司 Method, apparatus, electronic device, and medium for identifying operating user
CN117455518B (en) * 2023-12-25 2024-04-19 连连银通电子支付有限公司 Fraudulent transaction detection method and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107958215A * 2017-11-23 2018-04-24 深圳市分期乐网络科技有限公司 Anti-fraud recognition method, apparatus, server and storage medium
CN108009531A * 2017-12-28 2018-05-08 北京工业大学 Multi-strategy anti-fraud face recognition method
CN108268624A * 2018-01-10 2018-07-10 清华大学 User data visualization method and system
CN108629593A * 2018-04-28 2018-10-09 招商银行股份有限公司 Deep-learning-based fraudulent transaction recognition method, system and storage medium
CN109461068A * 2018-09-13 2019-03-12 深圳壹账通智能科技有限公司 Fraud judgment method, apparatus, device and computer-readable storage medium
CN110188602A * 2019-04-17 2019-08-30 深圳壹账通智能科技有限公司 Face recognition method and apparatus in video

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107657536B * 2017-02-20 2018-07-31 平安科技(深圳)有限公司 Social security fraud recognition method and apparatus
US11102225B2 (en) * 2017-04-17 2021-08-24 Splunk Inc. Detecting fraud by correlating user behavior biometrics with other data sources
US20190295087A1 (en) * 2018-03-23 2019-09-26 Microsoft Technology Licensing, Llc System and method for detecting fraud in online transactions by tracking online account usage characteristics indicative of user behavior over time


Also Published As

Publication number Publication date
CN111125658A (en) 2020-05-08

Similar Documents

Publication Publication Date Title
CN111125658B (en) Method, apparatus, server and storage medium for identifying fraudulent user
CN110728313B (en) Classification model training method and device for intention classification recognition
CN110929525B (en) Network loan risk behavior analysis and detection method, device, equipment and storage medium
CN112990294B (en) Training method and device of behavior discrimination model, electronic equipment and storage medium
CN112149754B (en) Information classification method, device, equipment and storage medium
CN112818162A (en) Image retrieval method, image retrieval device, storage medium and electronic equipment
CN112596964A (en) Disk failure prediction method and device
CN113239702A (en) Intention recognition method and device and electronic equipment
CN114255381B (en) Training method of image recognition model, image recognition method, device and medium
CN110020638B (en) Facial expression recognition method, device, equipment and medium
CN116662555B (en) Request text processing method and device, electronic equipment and storage medium
CN110929285B (en) Method and device for processing private data
CN111597336A (en) Processing method and device of training text, electronic equipment and readable storage medium
CN116956171A (en) Classification method, device, equipment and storage medium based on AI model
CN115952800A (en) Named entity recognition method and device, computer equipment and readable storage medium
CN115511104A (en) Method, apparatus, device and medium for training a contrast learning model
CN117523218A (en) Label generation, training of image classification model and image classification method and device
CN112733645B (en) Handwritten signature verification method, handwritten signature verification device, computer equipment and storage medium
CN115393100A (en) Resource recommendation method and device
CN110059180B (en) Article author identity recognition and evaluation model training method and device and storage medium
CN112861974A (en) Text classification method and device, electronic equipment and storage medium
CN113407666B (en) Target crowd search intention recognition method and device, electronic equipment and medium
CN116187299B (en) Scientific and technological project text data verification and evaluation method, system and medium
CN114630185B (en) Target user identification method and device, electronic equipment and storage medium
CN116668059A (en) Account security detection method, medium, device and computing equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant