CN109242307B - Anti-fraud policy analysis method, server, electronic device and storage medium - Google Patents


Publication number
CN109242307B
CN109242307B (application number CN201811029029.7A)
Authority
CN
China
Prior art keywords
user information
rule subset
fraud
comparing
strong
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811029029.7A
Other languages
Chinese (zh)
Other versions
CN109242307A (en)
Inventor
刘瑜晓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Everbright Bank Co ltd Credit Card Center
Original Assignee
China Everbright Bank Co ltd Credit Card Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Everbright Bank Co ltd Credit Card Center filed Critical China Everbright Bank Co ltd Credit Card Center
Priority to CN201811029029.7A priority Critical patent/CN109242307B/en
Publication of CN109242307A publication Critical patent/CN109242307A/en
Application granted granted Critical
Publication of CN109242307B publication Critical patent/CN109242307B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 - Operations research, analysis or management
    • G06Q10/0635 - Risk analysis of enterprise or organisation activities
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 - Operations research, analysis or management
    • G06Q10/0639 - Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06393 - Score-carding, benchmarking or key performance indicator [KPI] analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00 - Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/03 - Credit; Loans; Processing thereof

Landscapes

  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Engineering & Computer Science (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Development Economics (AREA)
  • Theoretical Computer Science (AREA)
  • Educational Administration (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Marketing (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Game Theory and Decision Science (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • Technology Law (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

According to the anti-fraud policy analysis method, server, electronic device and storage medium provided by the embodiments of the invention, the obtained user information is scored by an anti-fraud model to assign the user a fraud risk level, and the user is then analyzed against a preset anti-fraud policy matrix, where the matrix combines the fraud risk levels with an anti-fraud rule set. The matrix formed from the anti-fraud rule set and the anti-fraud model addresses the problem that existing models leave some factors with obvious influence unconsidered, and it enables banks and credit institutions to mine data value more fully, save data-query costs, and respond to fraud more efficiently.

Description

Anti-fraud policy analysis method, server, electronic device and storage medium
Technical Field
The invention relates to the field of data processing, and in particular to an anti-fraud policy analysis method, a server, an electronic device and a storage medium.
Background
Scoring models are widely used in application anti-fraud. In a typical anti-fraud policy analysis method, the model produces a score, and a threshold is set according to the score's validated performance: applicants scoring above the threshold proceed to the next approval stage, while applicants scoring below it are treated as suspected fraud and rejected.
However, when the anti-fraud policy is formulated solely from the model score, problems remain even if the model's evaluation metrics perform well: the score is the combined result of many variables acting over the full modeling sample, and model algorithms pursue global optimality, so some factors with obvious influence are left out of existing models.
Disclosure of Invention
In view of the above, an object of the embodiments of the present invention is to provide an anti-fraud policy analysis method, a server, an electronic device, and a storage medium, so as to solve the above technical problems.
The embodiment of the invention is realized by the following steps:
in a first aspect, an embodiment of the present invention provides an anti-fraud policy analysis method, including: obtaining user information of a user; calculating with a pre-established anti-fraud model according to the user information to obtain a fraud risk level corresponding to the user; and analyzing the user with a pre-established anti-fraud policy matrix according to the user information and the fraud risk level to obtain a corresponding analysis result, wherein the anti-fraud policy matrix includes the fraud risk level and an anti-fraud rule set.
Further, the user information includes any one or a combination of name, mobile phone number, identification number, occupation information, residential address and contact information.
Further, the anti-fraud policy analysis method also comprises: obtaining a plurality of training samples, where the training samples comprise training user information and training fraud risk levels; and training a neural network with the training user information as input and the training fraud risk levels as output to obtain the anti-fraud model.
Further, analyzing the user with the pre-established anti-fraud policy matrix according to the user information and the fraud risk level comprises comparing the user information against the anti-fraud rule set in the anti-fraud policy matrix in the analysis order corresponding to the fraud risk level.
Further, the fraud risk levels include a strong trust level, a weak trust level, a neutral level, a weak doubt level, and a strong doubt level; the anti-fraud rule set comprises a direct-approval rule subset, a strong-trust rule subset, a weak-trust rule subset, a weak-doubt rule subset, a strong-doubt rule subset and a direct-rejection rule subset.
Correspondingly, analyzing the user with the pre-established anti-fraud policy matrix according to the user information and the fraud risk level includes the following.
If the user is at the strong trust level, the user information is compared with the direct-rejection rule subset; if it does not match the direct-rejection rule subset, it is compared with the strong-doubt rule subset.
If the user is at the weak trust level, the user information is compared with the direct-rejection rule subset; if it does not match, it is compared with the strong-doubt rule subset; if it still does not match, it is compared with the weak-doubt rule subset.
If the user is at the neutral level, the user information is compared in turn with the direct-approval, direct-rejection, strong-doubt, strong-trust, weak-trust and weak-doubt rule subsets, moving to the next subset whenever the current one is not matched, to obtain the corresponding analysis result.
If the user is at the weak doubt level, the user information is compared with the direct-approval rule subset; if it does not match, it is compared with the strong-trust rule subset; if it still does not match, it is compared with the weak-trust rule subset.
If the user is at the strong doubt level, the user information is compared with the direct-approval rule subset; if it does not match the direct-approval rule subset, it is compared with the strong-trust rule subset.
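As a minimal sketch (level and subset names are illustrative identifiers, not from the patent), the level-dependent comparison order described above can be expressed as a lookup table driving a first-hit cascade:

```python
# Comparison order for each fraud risk level, per the cascade above.
ANALYSIS_ORDER = {
    "strong_trust": ["direct_reject", "strong_doubt"],
    "weak_trust":   ["direct_reject", "strong_doubt", "weak_doubt"],
    "neutral":      ["direct_approve", "direct_reject", "strong_doubt",
                     "strong_trust", "weak_trust", "weak_doubt"],
    "weak_doubt":   ["direct_approve", "strong_trust", "weak_trust"],
    "strong_doubt": ["direct_approve", "strong_trust"],
}

def analyze(user_info, risk_level, rule_sets):
    """Return the name of the first rule subset that user_info matches,
    following the comparison order for risk_level, or None on no hit."""
    for subset_name in ANALYSIS_ORDER[risk_level]:
        # Each rule is a predicate over the user-information dict.
        if any(rule(user_info) for rule in rule_sets.get(subset_name, [])):
            return subset_name
    return None
```

The returned subset name (or `None`) would then be mapped to an output code and an analysis result; the mapping itself is a separate lookup.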
Further, the analysis results include reject, pending and pass.
In a second aspect, an embodiment of the present invention provides a server, including: an acquisition unit configured to acquire user information of a user; the calculating unit is used for calculating by utilizing a pre-established anti-fraud model according to the user information to obtain a fraud risk level corresponding to the user; and the analysis unit is used for analyzing the user by utilizing a pre-established anti-fraud strategy matrix according to the user information and the fraud risk level to obtain a corresponding analysis result.
Further, the server also comprises a training unit configured to obtain a plurality of training samples, where the training samples comprise training user information and training fraud risk levels, and to train a neural network with the training user information as input and the training fraud risk levels as output to obtain the anti-fraud model.
In a third aspect, an embodiment of the present invention provides an electronic device, including: the system comprises a processor, a memory and a bus, wherein the processor and the memory are communicated with each other through the bus; the memory stores program instructions executable by the processor, which invokes the program instructions to perform the method described above.
In a fourth aspect, embodiments of the present invention provide a non-transitory computer-readable storage medium storing computer instructions that cause a computer to perform the above-described method.
According to the anti-fraud policy analysis method, server, electronic device and storage medium provided by the embodiments of the invention, the anti-fraud model scores the user information to assign the user a fraud risk level, and the user is analyzed against the preset anti-fraud policy matrix using the user information, where the matrix comprises the fraud risk level and the anti-fraud rule set. The matrix formed from the anti-fraud rule set and the anti-fraud model addresses the problem that existing models leave some factors with obvious influence unconsidered, and it allows banks and credit institutions to profile customers more accurately.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the embodiments of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be considered as limiting its scope; those skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 is a block diagram of an electronic device according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of an anti-fraud policy analysis method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an anti-fraud policy matrix according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating a hit condition of an anti-fraud policy according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an apparatus applying an anti-fraud policy analysis method according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Referring to fig. 1, fig. 1 is a block diagram illustrating an electronic device 100 applicable to an embodiment of the present application. The electronic device 100 may include an anti-fraud policy matrix based apparatus 100, a memory 101, a memory controller 102, a processor 103, a peripheral interface 104, an input-output unit 105, an audio unit 106, and a display unit 107.
The memory 101, the memory controller 102, the processor 103, the peripheral interface 104, the input/output unit 105, the audio unit 106, and the display unit 107 are electrically connected to each other directly or indirectly to realize data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The anti-fraud policy matrix based apparatus 100 includes at least one software function module which may be stored in the memory 101 in the form of software or firmware (firmware) or solidified in an Operating System (OS) of the anti-fraud policy matrix based apparatus 100. The processor 103 is configured to execute an executable module stored in the memory 101, such as a software function module or a computer program included in the anti-fraud policy matrix based apparatus 100.
The Memory 101 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The memory 101 is configured to store a program, and the processor 103 executes the program after receiving an execution instruction; the method executed by the server defined by the flow disclosed in any of the foregoing embodiments of the present invention may be applied to, or implemented by, the processor 103.
The processor 103 may be an integrated circuit chip having signal-processing capability. The processor 103 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components, which may implement or perform the methods, steps and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor 103 may be any conventional processor or the like.
The peripheral interface 104 couples various input/output devices to the processor 103 as well as to the memory 101. In some embodiments, the peripheral interface 104, the processor 103, and the memory controller 102 may be implemented in a single chip. In other examples, they may be implemented separately from the individual chips.
The input and output unit 105 is used for providing input data for a user to realize the interaction of the user and the server (or the local terminal). The input/output unit 105 may be, but is not limited to, a mouse, a keyboard, and the like.
Audio unit 106 provides an audio interface to a user, which may include one or more microphones, one or more speakers, and audio circuitry.
The display unit 107 provides an interactive interface (e.g., a user interface) between the electronic device 100 and a user, or is used to display image data for the user's reference. In this embodiment, the display unit 107 may be a liquid crystal display or a touch display. A touch display may be a capacitive or resistive touch screen supporting single-point and multi-point touch operations, meaning that the touch display can sense touch operations arising simultaneously at one or more positions on it and hand the sensed touch operations to the processor 103 for calculation and processing.
It is to be understood that the configuration shown in fig. 1 is merely exemplary, and that the electronic device 100 may include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1. The components shown in fig. 1 may be implemented in hardware, software, or a combination thereof.

Fig. 2 is a schematic flow chart of an anti-fraud policy analysis method provided in an embodiment of the present invention; as shown in fig. 2, the method includes:
step 210: user information of a user is acquired.
Specifically, when a user applies for a service, the user needs to fill in various information of the user in detail on a terminal, the terminal sends the information filled by the user to a server, and the information obtained by the server is the user information. The user information may include: any one or combination of name, mobile phone number, identification card number, occupation information, living address and contact person information.
Step 220: calculating with the pre-established anti-fraud model according to the user information to obtain the fraud risk level corresponding to the user.
Specifically, after receiving the user information sent by the terminal, the server inputs it into the established anti-fraud model, which, having undergone repeated learning and training, outputs the fraud risk level corresponding to the user information. It should be noted that the anti-fraud model is pre-established and trained. The fraud risk level indicates how trustworthy the user is and can be divided into: a strong trust level, a weak trust level, a neutral level, a weak doubt level, and a strong doubt level.
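For illustration only (the patent names five levels but gives no numeric thresholds, so the cut-offs below are assumptions), the mapping from a model score to a risk level might look like:

```python
def score_to_risk_level(score):
    """Map a model score in [0, 1] (higher = more trustworthy) to one
    of the five fraud risk levels named in the text. The 0.2 steps are
    assumed cut-offs, not values from the patent."""
    if score >= 0.8:
        return "strong_trust"
    if score >= 0.6:
        return "weak_trust"
    if score >= 0.4:
        return "neutral"
    if score >= 0.2:
        return "weak_doubt"
    return "strong_doubt"
```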
Step 230: and analyzing the user by utilizing a pre-established anti-fraud policy matrix according to the user information and the fraud risk level to obtain a corresponding analysis result.
Wherein the anti-fraud policy matrix comprises the fraud risk level and an anti-fraud rule set.
Specifically, the anti-fraud policy matrix is formed by placing the fraud risk levels produced by the anti-fraud model on one axis and the subsets of the anti-fraud rule set on the other. It should be noted that either the fraud risk levels or the rule subsets may occupy the vertical or the horizontal axis; the form is not fixed and can be adjusted to specific requirements.
Each fraud risk level has a corresponding execution sequence, and the rule set of the anti-fraud policy matrix is used for correcting the fraud risk level of the user.
According to the embodiment of the invention, the fraud risk level corresponding to the user can be obtained after the obtained user information is calculated by using the anti-fraud model, and the user is re-analyzed by using the rule set, the user information and the fraud risk level in the anti-fraud policy matrix, so that the problems that the existing model only considers the global optimum and does not consider some factors with obvious influence are solved, and the customer can be more accurately positioned for banks and credit institutions.
On the basis of the above embodiment, the user information includes any one or a combination of a name, a mobile phone number, an identification number, professional information, a residential address, and contact information.
Specifically, the user information is obtained from what the user fills in and from queries the credit institution initiates. The occupation information comprises occupation type, company name, company address and company type. The contact information includes the contact's name, the contact's relationship to the user, the contact's occupation type, the contact's company name and the contact's phone number. The user information includes, but is not limited to, the above, and what is obtained can be adjusted according to the user data actually needed.
On the basis of the above embodiment, the method further includes: obtaining a plurality of training samples, wherein the training samples comprise training user information and training fraud risk levels; and training the neural network by taking the training user information as input and the training fraud risk level as output to obtain the anti-fraud model.
Specifically, a large amount of user information is annotated to obtain the fraud risk levels of those users, and this processed user information together with the corresponding fraud risk levels is used as training samples. The anti-fraud model is the result of training a neural network on the many training samples: the training user information is fed into the network, which is trained under certain preset conditions until it achieves the required output. In the invention, the user information of the training samples is input, training proceeds under the preset conditions, and the fraud risk level is output.
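As a hedged stand-in for this training step (the patent does not specify the network architecture or the feature encoding; a single logistic neuron over numeric feature vectors is assumed here), the input/output arrangement can be sketched as:

```python
import math

def train_anti_fraud_model(samples, labels, epochs=200, lr=0.5):
    """Fit a single logistic neuron (the simplest neural network) to
    labelled feature vectors by stochastic gradient descent. A stand-in
    for the patent's training step; the feature encoding is assumed."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(w * xi for w, xi in zip(weights, x)) + bias
            p = 1.0 / (1.0 + math.exp(-z))        # sigmoid activation
            g = p - y                             # log-loss gradient
            weights = [w - lr * g * xi for w, xi in zip(weights, x)]
            bias -= lr * g
    return weights, bias

def predict(model, x):
    """Return the trained neuron's output for one feature vector."""
    weights, bias = model
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))
```

In a real deployment the single neuron would be replaced by a multi-layer network and the labels by the five risk levels, but the input (encoded user information) and output (risk assessment) stay as described in the text.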
Further, the preset conditions may be, but are not limited to: verifying whether the basic residential-address information is consistent with the credit report; whether the certificate number, mobile phone number and name are mutually consistent in the operator's records and can be cross-checked; whether the online duration of the mobile phone number is normal; and whether the number of applications from the same unit in roughly the last three months is normal. The specific preset conditions can be changed according to actual needs, so that the model responds to emerging risk points more promptly and adapts better.
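A hypothetical encoding of such verification conditions (the field names and the numeric limits are assumptions for the sketch, not values from the patent):

```python
def consistency_flags(user_info, credit_report):
    """Evaluate example verification conditions against a user's
    application data and a credit-report record (both plain dicts)."""
    return {
        # address on the application matches the credit report
        "address_consistent":
            user_info.get("address") == credit_report.get("address"),
        # mobile number has been online for a normal length of time
        "phone_tenure_normal":
            user_info.get("phone_online_months", 0) >= 6,
        # applications from the same unit in ~3 months look normal
        "unit_applications_normal":
            user_info.get("unit_applications_3m", 0) <= 5,
    }
```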
It should be noted that the anti-fraud model may also be implemented with a support vector machine or other learning models, and the specific model may be chosen according to the specific implementation.
On the basis of the above embodiment, analyzing the user with the pre-established anti-fraud policy matrix according to the user information and the fraud risk level includes comparing the user information against the anti-fraud rule set in the anti-fraud policy matrix in the analysis order corresponding to the fraud risk level.
Specifically, each fraud risk level has a corresponding execution order, and the rule set of the anti-fraud policy matrix corrects the fraud risk level of the user by considering some factors having significant influence.
Fig. 3 is a schematic diagram of an anti-fraud policy matrix according to an embodiment of the present invention, and fig. 4 is a schematic diagram of the hit conditions of an anti-fraud policy according to an embodiment of the present invention. As shown in fig. 3 and fig. 4, the fraud risk levels include a strong trust level, a weak trust level, a neutral level, a weak doubt level, and a strong doubt level; the anti-fraud rule set comprises a direct-approval rule subset, a strong-trust rule subset, a weak-trust rule subset, a weak-doubt rule subset, a strong-doubt rule subset and a direct-rejection rule subset.
specifically, the strong questioning rule subset may include, but is not limited to, multiple applications with the terminal device, the number of the filled mobile phone, the name of the unit, and the address of the unit are all inconsistent with the information of the personal credit, and the number of applications by the near three monthly credit institutions is too many. The strong trust rule subset can have, but is not limited to, the filling information is consistent with the people's bank credit report information, the social security and public deposit payment state is normal, and the continuous payment time is long. The content of the rule subset is not limited, and may be set according to the actual situation and the development situation of the object.
For example, one user's information shows that the same computer was used many times to apply for loans from different credit institutions; this matches one of the strong-doubt rules, so the user hits the strong-doubt rule subset. Another user's information shows that social security has been paid continuously since the user started working; this matches one of the strong-trust rules, so the user hits the strong-trust rule subset.
Correspondingly, analyzing the user with the pre-established anti-fraud policy matrix according to the user information and the fraud risk level includes the following.
If the user is at the strong trust level, the user information is compared with the direct-rejection rule subset; if it matches any rule in that subset, this is a hit and Z-51 is output. If it does not match the direct-rejection rule subset, it is compared with the strong-doubt rule subset; if it matches a strong-doubt rule, this is a hit and Z-52 is output. If none of the user information matches the strong-doubt rule subset, Z-5 is output. Notably, users at the strong trust level are not compared against the direct-approval, strong-trust, weak-trust or weak-doubt rule subsets. The outputs Z-51, Z-52 and Z-5 are looked up in FIG. 4 to obtain the analysis result: Z-51 and Z-52 correspond to pending, and Z-5 corresponds to pass.
If the user is at the weak trust level, the user information is compared with the direct-rejection rule subset; if it matches any rule in that subset, this is a hit and Z-41 is output. If it does not match the direct-rejection rule subset, it is compared with the strong-doubt rule subset; a hit outputs Z-42. If it does not match the strong-doubt rule subset, it is compared with the weak-doubt rule subset; a hit outputs Z-43, and if nothing matches the weak-doubt rule subset, Z-4 is output. Notably, users at the weak trust level are not compared against the direct-approval, strong-trust or weak-trust rule subsets. The outputs Z-41, Z-42, Z-43 and Z-4 are looked up in FIG. 4 to obtain the analysis result: Z-41, Z-42 and Z-43 correspond to pending, and Z-4 corresponds to pass.
If the user is at the neutral level, the user information is compared with the direct batch rule subset; if it matches any rule in the direct batch rule subset, it is a hit and Z-36 is output. If the user information does not match the direct batch rule subset, it is compared with the direct rejection rule subset; if it matches any rule in the direct rejection rule subset, it is a hit and Z-31 is output. If the user information does not match the direct rejection rule subset, it is compared with the strong suspicion rule subset; if it matches any rule in the strong suspicion rule subset, it is a hit and Z-32 is output. If the user information does not match the strong suspicion rule subset, it is compared with the strong trust rule subset; if it matches any rule in the strong trust rule subset, it is a hit and Z-35 is output. If the user information matches none of the strong trust rule subset, it is compared with the weak trust rule subset; if it matches any rule in the weak trust rule subset, it is a hit and Z-34 is output. If the user information matches none of the weak trust rule subset, it is compared with the weak suspicion rule subset; if it matches any rule in the weak suspicion rule subset, it is a hit and Z-33 is output. If the user information matches none of the weak suspicion rule subset, Z-3 is output. The outputs Z-31, Z-32, Z-33, Z-34, Z-35, Z-36 and Z-3 are looked up in FIG. 5 to obtain the corresponding analysis result: all of them correspond to pending.
If the user is at the weak suspicion level, the user information is compared with the direct batch rule subset; if it matches any rule in the direct batch rule subset, it is a hit and Z-26 is output. If the user information does not match the direct batch rule subset, it is compared with the strong trust rule subset; if it matches any rule in the strong trust rule subset, it is a hit and Z-25 is output. If the user information matches none of the strong trust rule subset, it is compared with the weak trust rule subset; if it matches any rule in the weak trust rule subset, it is a hit and Z-24 is output. If the user information matches none of the weak trust rule subset, Z-2 is output. It is worth noting that users at the weak suspicion level are not compared with the direct rejection rule subset, the strong suspicion rule subset, or the weak suspicion rule subset. The outputs Z-26, Z-25, Z-24 and Z-2 are looked up in FIG. 5 to obtain the corresponding analysis result: Z-26, Z-25 and Z-24 correspond to pending, and Z-2 corresponds to reject.
If the user is at the strong suspicion level, the user information is compared with the direct batch rule subset; if it matches any rule in the direct batch rule subset, it is a hit and Z-16 is output. If the user information does not match the direct batch rule subset, it is compared with the strong trust rule subset; if it matches any rule in the strong trust rule subset, it is a hit and Z-15 is output. If the user information matches none of the strong trust rule subset, Z-1 is output. It is worth noting that users at the strong suspicion level are not compared with the direct rejection rule subset, the strong suspicion rule subset, the weak suspicion rule subset, or the weak trust rule subset. The outputs Z-16, Z-15 and Z-1 are looked up in FIG. 5 to obtain the corresponding analysis result: Z-16 and Z-15 correspond to pending, and Z-1 corresponds to reject.
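The five-level flow above is, in effect, a table-driven dispatch: each fraud risk level fixes an ordered list of rule subsets, and the first subset hit determines the output code. The following Python sketch illustrates this structure only; the function names, the representation of rules as predicates, and the dictionary layout are illustrative assumptions, not the patent's implementation.

```python
# Analysis order per fraud risk level, as described in the flow above.
# Each entry pairs a rule subset name with the Z-code output on a hit.
ANALYSIS_ORDER = {
    "strong_trust":     [("direct_rejection", "Z-51"), ("strong_suspicion", "Z-52")],
    "weak_trust":       [("direct_rejection", "Z-41"), ("strong_suspicion", "Z-42"),
                         ("weak_suspicion", "Z-43")],
    "neutral":          [("direct_batch", "Z-36"), ("direct_rejection", "Z-31"),
                         ("strong_suspicion", "Z-32"), ("strong_trust", "Z-35"),
                         ("weak_trust", "Z-34"), ("weak_suspicion", "Z-33")],
    "weak_suspicion":   [("direct_batch", "Z-26"), ("strong_trust", "Z-25"),
                         ("weak_trust", "Z-24")],
    "strong_suspicion": [("direct_batch", "Z-16"), ("strong_trust", "Z-15")],
}

# Z-code emitted when no subset in the level's order is hit.
NO_HIT_CODE = {
    "strong_trust": "Z-5", "weak_trust": "Z-4", "neutral": "Z-3",
    "weak_suspicion": "Z-2", "strong_suspicion": "Z-1",
}

def analyze(user_info, level, rule_sets):
    """Return the Z-code for a user: the code of the first rule subset
    (in the level's analysis order) containing a rule the user hits."""
    for subset_name, code in ANALYSIS_ORDER[level]:
        rules = rule_sets.get(subset_name, [])
        if any(rule(user_info) for rule in rules):
            return code
    return NO_HIT_CODE[level]
```

For example, with a hypothetical blacklist predicate in the direct rejection subset, a strong-trust user on the blacklist yields Z-51, while a strong-trust user who hits nothing yields Z-5.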
It should be noted that the fraud risk levels are not limited to the strong trust, weak trust, neutral, weak suspicion and strong suspicion levels; a direct rejection level, a direct batch level or other divisions are also possible, and the specific number of levels may be determined according to actual requirements. Likewise, the anti-fraud rule set is not limited to the direct batch, strong trust, weak trust, weak suspicion, strong suspicion and direct rejection rule subsets; a neutral rule subset or other rule subsets may also be included, and the specific number of subsets may be determined according to actual requirements. The order in which each fraud risk level is checked against the rule set is not fixed, and may be changed according to actual requirements.
It should be noted that the anti-fraud policy matrix may also determine the analysis result without a table lookup, directly outputting the corresponding analysis result after comparison instead of determining it from the number of the corresponding flow.
On the basis of the above embodiment, the analysis result includes reject, pending and pass.
Specifically, the results output by the anti-fraud policy matrix include, but are not limited to, these three kinds: a user whose analysis result is reject is directly refused service processing; a user whose analysis result is pass directly passes the audit and service processing is started; and a user whose analysis result is pending enters a manual review stage. The output is not only the analysis result of the anti-fraud policy matrix; it also includes the user information, the fraud risk level, and the processing flow within the anti-fraud policy matrix, which facilitates the subsequent manual review stage and is also important for monitoring, rechecking and optimizing the anti-fraud policy analysis method.
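The table lookup and the output record described above can be sketched as follows. The mapping of Z-codes to reject/pending/pass and the field names of the output record are illustrative assumptions consistent with the flow description, not the patent's actual FIG. 5 table.

```python
# Final analysis result for the "no hit" codes, per the flow description:
# trusted levels pass, the neutral level is pending, suspicion levels are
# rejected. Any code carrying a subset index (Z-51, Z-36, ...) is a rule
# hit and therefore pending (sent to manual review).
RESULT_TABLE = {
    "Z-5": "pass", "Z-4": "pass",
    "Z-3": "pending",
    "Z-2": "reject", "Z-1": "reject",
}

def lookup_result(code):
    # Codes not in the table are hits on some rule subset -> pending.
    return RESULT_TABLE.get(code, "pending")

def build_output(user_info, level, code):
    """Bundle the analysis result together with the user information,
    fraud risk level and processing flow, as the later manual-review,
    monitoring and optimization stages require."""
    return {
        "user_info": user_info,
        "fraud_risk_level": level,
        "flow_code": code,
        "result": lookup_result(code),
    }
```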
It is worth noting that, within the policy matrix framework, the anti-fraud policy model is a globally optimal solution that requires high coverage and stability, while the anti-fraud rule set is a locally optimal solution that is more accurate and flexible; the two complement each other to form the anti-fraud policy matrix. Variables with an obvious influence in the anti-fraud policy model can be combined into an anti-fraud rule set, and outliers can be identified by an anomaly detection method. The anti-fraud policy model can classify clients for whom third-party data cannot be applied, so that a differentiated third-party data query scheme can be formulated to save query cost; on the other hand, rules can be formulated separately for clients for whom third-party data is queried, which alleviates the problems of low coverage and poor stability in third-party data application. Compared with iterating the model, deploying the anti-fraud rule set is easier to modify: if a new risk point is found or the model shows obvious deviation, rule deployment can respond quickly and reduce loss.
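As one way of turning an influential model variable into a rule via anomaly detection: fit the variable's normal distribution over historical samples and flag values far outside it. The patent does not name a specific method; a simple z-score test is used here purely as a stand-in.

```python
from statistics import mean, stdev

def zscore_outlier_rule(samples, threshold=3.0):
    """Build a rule (predicate) from historical values of one variable:
    flag a new value as an outlier when it lies more than `threshold`
    standard deviations from the sample mean."""
    mu, sigma = mean(samples), stdev(samples)
    def rule(value):
        return abs(value - mu) / sigma > threshold
    return rule

# Hypothetical example: applications submitted per day by one applicant.
history = [1, 2, 1, 3, 2, 2, 1, 3, 2, 2]
too_many_applications = zscore_outlier_rule(history)
```

A rule built this way can then be placed in, say, a strong suspicion rule subset, so that the anomaly detector's verdict enters the same comparison flow as hand-written rules.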
Fig. 5 is a schematic structural diagram of an apparatus for applying an anti-fraud policy analysis method according to an embodiment of the present invention, where the apparatus includes:
an obtaining unit 510, configured to obtain user information of a user; a calculating unit 520, configured to perform calculation by using a pre-established anti-fraud model according to the user information to obtain a fraud risk level corresponding to the user; and an analyzing unit 530, configured to analyze the user according to the user information and the fraud risk level by using a pre-established anti-fraud policy matrix to obtain a corresponding analysis result.
The server 500 provided in the embodiment of the present invention is configured to execute the method described above, and a specific implementation manner of the server 500 is consistent with an implementation manner of the method, which is not described herein again.
On the basis of the above embodiment, the user information includes any one or a combination of a name, a mobile phone number, an identification number, professional information, a residential address, and contact information.
The server 500 provided in the embodiment of the present invention is configured to execute the method described above, and a specific implementation manner of the server 500 is consistent with an implementation manner of the method, which is not described herein again.
On the basis of the above embodiment, the server 500 further includes: the obtaining unit 510 obtains a plurality of training samples, where the training samples include training user information and training fraud risk levels; and the calculating unit 520 trains a neural network with the training user information as input and the training fraud risk level as output, to obtain the anti-fraud model.
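The training step above fits a model from (training user information, training fraud risk level) pairs. As a self-contained stand-in for the neural network, the sketch below trains a single logistic unit by stochastic gradient descent on a binary risk indicator; the feature encoding, hyperparameters and the reduction to two classes are all assumptions, and a deployed system would use a real neural network mapping to the five risk levels.

```python
import math

def train(samples, labels, lr=0.5, epochs=2000):
    """Fit one logistic unit: samples are numeric feature vectors derived
    from user information, labels are 0/1 fraud-risk indicators."""
    n = len(samples[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid activation
            g = p - y                        # gradient of the log loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """Hard 0/1 prediction; a full system would instead map the score
    onto the five fraud risk levels."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0
```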
The server 500 provided in the embodiment of the present invention is configured to execute the method described above, and a specific implementation manner of the server 500 is consistent with an implementation manner of the method, which is not described herein again.
On the basis of the foregoing embodiment, the analyzing unit 530 is configured to analyze the user information against the anti-fraud rule set in the anti-fraud policy matrix in the analysis order corresponding to the fraud risk level.
The server 500 provided in the embodiment of the present invention is configured to execute the method described above, and a specific implementation manner of the server 500 is consistent with an implementation manner of the method, which is not described herein again.
On the basis of the above embodiment, the fraud risk level includes a strong trust level, a weak trust level, a neutral level, a weak suspicion level and a strong suspicion level; and the anti-fraud rule set includes a direct batch rule subset, a strong trust rule subset, a weak trust rule subset, a weak suspicion rule subset, a strong suspicion rule subset and a direct rejection rule subset;
accordingly, the analyzing unit 530, including,
if the user is the strong trust level, comparing the user information with the direct rejection rule subset; if the user information does not conform to the direct rejection rule subset, comparing the user information with the strong suspicion rule subset;
if the user is the weak trust level, comparing the user information with the direct rejection rule subset; if the user information does not conform to the direct rejection rule subset, comparing the user information with the strong suspicion rule subset; if the user information does not conform to the strong suspicion rule subset, comparing the user information with the weak suspicion rule subset;
if the user is the neutral level, comparing the user information with the direct batch rule subset; if the user information does not conform to the direct batch rule subset, comparing the user information with the direct rejection rule subset; if the user information does not conform to the direct rejection rule subset, comparing the user information with the strong suspicion rule subset; if the user information does not conform to the strong suspicion rule subset, comparing the user information with the strong trust rule subset; if the user information does not conform to the strong trust rule subset, comparing the user information with the weak trust rule subset; if the user information does not conform to the weak trust rule subset, comparing the user information with the weak suspicion rule subset to obtain a corresponding analysis result;
if the user is the weak suspicion level, comparing the user information with the direct batch rule subset; if the user information does not conform to the direct batch rule subset, comparing the user information with the strong trust rule subset; if the user information does not conform to the strong trust rule subset, comparing the user information with the weak trust rule subset;
if the user is the strong suspicion level, comparing the user information with the direct batch rule subset; and if the user information does not conform to the direct batch rule subset, comparing the user information with the strong trust rule subset.
The server 500 provided in the embodiment of the present invention is configured to execute the method described above, and a specific implementation manner of the server 500 is consistent with an implementation manner of the method, which is not described herein again.
On the basis of the above embodiment, the analysis result includes reject, pending and pass.
The server 500 provided in the embodiment of the present invention is configured to execute the method described above, and a specific implementation manner of the server 500 is consistent with an implementation manner of the method, which is not described herein again.
An embodiment of the present invention further provides an electronic device, including: the system comprises a processor, a memory and a bus, wherein the processor and the memory are communicated with each other through the bus; the memory stores program instructions executable by the processor, the processor invoking the program instructions to perform the method.
Embodiments of the present invention also provide a non-transitory computer-readable storage medium storing computer instructions that cause a computer to perform the method.
In summary, in the anti-fraud policy analysis method, server, electronic device and storage medium provided in the embodiments of the present invention, the anti-fraud model scores the obtained user information to classify the user's fraud risk level, and the user is then analyzed according to the user information using a pre-established anti-fraud policy matrix, where the anti-fraud policy matrix includes the fraud risk level and the anti-fraud rule set. The anti-fraud policy matrix formed by the anti-fraud rule set and the anti-fraud model addresses the problem that some factors with an obvious influence are not considered in existing models, and enables banks and credit institutions to position customers more accurately.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.

Claims (8)

1. An anti-fraud policy analysis method, comprising,
acquiring user information of a user;
calculating by using a pre-established anti-fraud model according to the user information to obtain a fraud risk level corresponding to the user;
analyzing the user by utilizing a pre-established anti-fraud policy matrix according to the user information and the fraud risk level to obtain a corresponding analysis result, comprising:
the anti-fraud policy matrix comprises the fraud risk level and an anti-fraud rule set, wherein the fraud risk level comprises a strong trust level, a weak trust level, a neutral level, a weak suspicion level and a strong suspicion level; and the anti-fraud rule set comprises a direct batch rule subset, a strong trust rule subset, a weak trust rule subset, a weak suspicion rule subset, a strong suspicion rule subset and a direct rejection rule subset;
if the user is the strong trust level, comparing the user information with the direct rejection rule subset; if the user information does not conform to the direct rejection rule subset, comparing the user information with the strong suspicion rule subset;
if the user is the weak trust level, comparing the user information with the direct rejection rule subset; if the user information does not conform to the direct rejection rule subset, comparing the user information with the strong suspicion rule subset; if the user information does not conform to the strong suspicion rule subset, comparing the user information with the weak suspicion rule subset;
if the user is the neutral level, comparing the user information with the direct batch rule subset; if the user information does not conform to the direct batch rule subset, comparing the user information with the direct rejection rule subset; if the user information does not conform to the direct rejection rule subset, comparing the user information with the strong suspicion rule subset; if the user information does not conform to the strong suspicion rule subset, comparing the user information with the strong trust rule subset; if the user information does not conform to the strong trust rule subset, comparing the user information with the weak trust rule subset; if the user information does not conform to the weak trust rule subset, comparing the user information with the weak suspicion rule subset to obtain a corresponding analysis result;
if the user is the weak suspicion level, comparing the user information with the direct batch rule subset; if the user information does not conform to the direct batch rule subset, comparing the user information with the strong trust rule subset; if the user information does not conform to the strong trust rule subset, comparing the user information with the weak trust rule subset;
if the user is the strong suspicion level, comparing the user information with the direct batch rule subset; and if the user information does not conform to the direct batch rule subset, comparing the user information with the strong trust rule subset.
2. The anti-fraud policy analysis method according to claim 1, wherein the user information comprises any one or a combination of a name, a mobile phone number, an identification number, professional information, a residential address, and contact information.
3. The anti-fraud policy analysis method according to claim 1, further comprising:
obtaining a plurality of training samples, wherein the training samples comprise training user information and training fraud risk levels;
and training the neural network by taking the training user information as input and the training fraud risk level as output to obtain the anti-fraud model.
4. An anti-fraud policy analysis method according to any of claims 1-3, characterized in that the analysis results include reject, pending and pass.
5. A server, comprising:
an acquisition unit configured to acquire user information of a user;
the calculating unit is used for calculating by utilizing a pre-established anti-fraud model according to the user information to obtain a fraud risk level corresponding to the user;
an analysis unit, configured to analyze the user according to the user information and the fraud risk level by using a pre-established anti-fraud policy matrix to obtain a corresponding analysis result, where the analysis unit includes:
the anti-fraud policy matrix comprises the fraud risk level and an anti-fraud rule set, wherein the fraud risk level comprises a strong trust level, a weak trust level, a neutral level, a weak suspicion level and a strong suspicion level; and the anti-fraud rule set comprises a direct batch rule subset, a strong trust rule subset, a weak trust rule subset, a weak suspicion rule subset, a strong suspicion rule subset and a direct rejection rule subset;
if the user is the strong trust level, comparing the user information with the direct rejection rule subset; if the user information does not conform to the direct rejection rule subset, comparing the user information with the strong suspicion rule subset;
if the user is the weak trust level, comparing the user information with the direct rejection rule subset; if the user information does not conform to the direct rejection rule subset, comparing the user information with the strong suspicion rule subset; if the user information does not conform to the strong suspicion rule subset, comparing the user information with the weak suspicion rule subset;
if the user is the neutral level, comparing the user information with the direct batch rule subset; if the user information does not conform to the direct batch rule subset, comparing the user information with the direct rejection rule subset; if the user information does not conform to the direct rejection rule subset, comparing the user information with the strong suspicion rule subset; if the user information does not conform to the strong suspicion rule subset, comparing the user information with the strong trust rule subset; if the user information does not conform to the strong trust rule subset, comparing the user information with the weak trust rule subset; if the user information does not conform to the weak trust rule subset, comparing the user information with the weak suspicion rule subset to obtain a corresponding analysis result;
if the user is the weak suspicion level, comparing the user information with the direct batch rule subset; if the user information does not conform to the direct batch rule subset, comparing the user information with the strong trust rule subset; if the user information does not conform to the strong trust rule subset, comparing the user information with the weak trust rule subset;
if the user is the strong suspicion level, comparing the user information with the direct batch rule subset; and if the user information does not conform to the direct batch rule subset, comparing the user information with the strong trust rule subset.
6. The server according to claim 5, further comprising,
obtaining a plurality of training samples, wherein the training samples comprise training user information and training fraud risk levels;
and training the neural network by taking the training user information as input and the training fraud risk level as output to obtain the anti-fraud model.
7. An electronic device, comprising: a processor, a memory, and a bus, wherein,
the processor and the memory are communicated with each other through the bus;
the memory stores program instructions executable by the processor, the processor invoking the program instructions to perform the method of any one of claims 1-4.
8. A non-transitory computer-readable storage medium storing computer instructions that cause a computer to perform the method of any one of claims 1-4.
CN201811029029.7A 2018-09-04 2018-09-04 Anti-fraud policy analysis method, server, electronic device and storage medium Active CN109242307B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811029029.7A CN109242307B (en) 2018-09-04 2018-09-04 Anti-fraud policy analysis method, server, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811029029.7A CN109242307B (en) 2018-09-04 2018-09-04 Anti-fraud policy analysis method, server, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN109242307A CN109242307A (en) 2019-01-18
CN109242307B true CN109242307B (en) 2022-02-01

Family

ID=65067279

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811029029.7A Active CN109242307B (en) 2018-09-04 2018-09-04 Anti-fraud policy analysis method, server, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN109242307B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110727922B (en) * 2019-10-11 2023-08-29 集奥聚合(北京)人工智能科技有限公司 Anti-fraud decision model construction method based on multi-dimensional data flow
CN111008086A (en) * 2019-12-04 2020-04-14 集奥聚合(北京)人工智能科技有限公司 Anti-fraud policy optimization method based on message queue
CN111898931B (en) * 2020-08-24 2024-04-30 深圳市富之富信息科技有限公司 Variable-based strategy type wind control engine implementation method and device and computer equipment
CN112561685B (en) * 2020-12-15 2023-10-17 建信金融科技有限责任公司 Customer classification method and device
CN113821425B (en) * 2021-09-30 2024-03-08 奇安信科技集团股份有限公司 Tracking method and device for trust risk event, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107679046A (en) * 2016-08-01 2018-02-09 上海前隆信息科技有限公司 A kind of detection method and device of fraudulent user
CN107785058A (en) * 2017-07-24 2018-03-09 平安科技(深圳)有限公司 Anti- fraud recognition methods, storage medium and the server for carrying safety brain
CN107818505A (en) * 2017-09-27 2018-03-20 上海维信荟智金融科技有限公司 Finance data Intelligent Decision-making Method and system
CN108154442A (en) * 2017-12-25 2018-06-12 杭州七炅信息科技有限公司 The anti-fraud detection algorithm of property insurance

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10607008B2 (en) * 2017-02-09 2020-03-31 International Business Machines Corporation Counter-fraud operation management

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107679046A (en) * 2016-08-01 2018-02-09 上海前隆信息科技有限公司 A kind of detection method and device of fraudulent user
CN107785058A (en) * 2017-07-24 2018-03-09 平安科技(深圳)有限公司 Anti- fraud recognition methods, storage medium and the server for carrying safety brain
CN107818505A (en) * 2017-09-27 2018-03-20 上海维信荟智金融科技有限公司 Finance data Intelligent Decision-making Method and system
CN108154442A (en) * 2017-12-25 2018-06-12 杭州七炅信息科技有限公司 The anti-fraud detection algorithm of property insurance

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Application of neural network models in anti-fraud for bank Internet finance; Li Yunni; Financial Technology Time (《金融科技时代》); Aug. 31, 2018 (No. 8); pp. 24-28 *

Also Published As

Publication number Publication date
CN109242307A (en) 2019-01-18

Similar Documents

Publication Publication Date Title
CN109242307B (en) Anti-fraud policy analysis method, server, electronic device and storage medium
CN110874778B (en) Abnormal order detection method and device
CN109242261B (en) Method for evaluating security risk based on big data and terminal equipment
TWI673666B (en) Method and device for data risk control
CN109711955B (en) Poor evaluation early warning method and system based on current order and blacklist base establishment method
US20140172681A1 (en) Process for Verifying Data Identity for Lending Decisions
CN110264288A (en) Data processing method and relevant apparatus based on information discriminating technology
CN108876188B (en) Inter-connected service provider risk assessment method and device
WO2021254027A1 (en) Method and apparatus for identifying suspicious community, and storage medium and computer device
TW202020888A (en) Risk control method and apparatus, and server and storage medium
CA2840050A1 (en) System and methods for producing a credit feedback loop
WO2020177478A1 (en) Credit-based qualification information auditing method, apparatus and device
CN111562965A (en) Page data verification method and device based on decision tree
CN110796553A (en) Service request processing method, device, terminal and storage medium
CN110851298A (en) Abnormality analysis and processing method, electronic device, and storage medium
CN112529575A (en) Risk early warning method, equipment, storage medium and device
CN114116802A (en) Data processing method, device, equipment and storage medium of Flink computing framework
CN111047146B (en) Risk identification method, device and equipment for enterprise users
CN114139931A (en) Enterprise data evaluation method and device, computer equipment and storage medium
US20210200746A1 (en) System and method for multivariate anomaly detection
CN107862599B (en) Bank risk data processing method and device, computer equipment and storage medium
CN111639903A (en) Review processing method for architecture change and related equipment
CN111651500A (en) User identity recognition method, electronic device and storage medium
CN111277465A (en) Abnormal data message detection method and device and electronic equipment
CN111242773A (en) Virtual resource application docking method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CB03 Change of inventor or designer information
CB03 Change of inventor or designer information

Inventor after: Wei Long

Inventor before: Liu Yuxiao