WO2018097653A1 - Method and program for predicting a chargeback fraud user - Google Patents

Method and program for predicting a chargeback fraud user

Info

Publication number
WO2018097653A1
WO2018097653A1 PCT/KR2017/013539 KR2017013539W
Authority
WO
WIPO (PCT)
Prior art keywords
data
user
feature
classification
performance
Prior art date
Application number
PCT/KR2017/013539
Other languages
English (en)
Korean (ko)
Inventor
서재현
최대선
Original Assignee
공주대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 공주대학교 산학협력단
Publication of WO2018097653A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0609Buyer or seller confidence or verification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/18Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N99/00Subject matter not provided for in other groups of this subclass
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00Payment architectures, schemes or protocols
    • G06Q20/38Payment protocols; Details thereof
    • G06Q20/40Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
    • G06Q20/401Transaction verification
    • G06Q20/4016Transaction verification involving fraud or risk level assessment in transaction processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions

Definitions

  • The present invention relates to a method and program for predicting a chargeback fraud user. More particularly, the present invention relates to a method and program that implement a machine learning model satisfying a target prediction classification performance by processing the transaction history data of existing users, and that use the implemented machine learning model to predict chargeback fraud on the transaction history data of a new user.
  • FIG. 1 shows a flowchart of chargeback fraud by a game user.
  • As shown in FIG. 1, chargeback fraud by an online game user is established when the user (game user) purchases game money or the like sold by a game company with a credit card, spends it, and then requests a chargeback from the bank. It is a serious problem because it can cause enormous damage to game companies and the like.
  • Patent Document 1 KR10-2016-0017629 A
  • The present invention implements a machine learning model that satisfies a target prediction classification performance by processing and using the transaction history data of existing users.
  • An object of the present invention is to provide a method and program for predicting a chargeback fraud user, which use the implemented machine learning model to predict chargeback fraud on the transaction history data of a new user.
  • A method of predicting a chargeback fraud user according to an embodiment of the present invention includes: (1) a data processing step of processing existing transaction history data of normal users and chargeback fraud users into data based on one record per user, (2) a data classification step of dividing the processed transaction history data into training data and test data, (3) a data adjustment step of oversampling the chargeback fraud user data among the training data to adjust the number of data, (4) a prediction classification step of performing learning by a specific machine learning technique using the adjusted training data and predicting and classifying whether test data corresponds to a chargeback fraud user using the machine learning model, (5) a performance measurement step of measuring the performance of the prediction classification, (6) an iteration step of oversampling or undersampling the chargeback fraud user data among the training data until the prediction classification performance reaches a target value, thereby repeating the prediction classification step and the performance measurement step, and (7) a prediction step of predicting chargeback fraud on the transaction history data of a new user using the machine learning model that has reached the target prediction classification performance.
  • In an embodiment of the present invention, the data processing step may include: (1) a first feature deletion step of evaluating each feature and a plurality of feature sets of the transaction history data and deleting features that fall below a criterion, (2) a feature generation step of generating new features while processing the transaction history data that has undergone the first feature deletion step into a single record per user using statistical methods, and (3) a second feature deletion step of evaluating the generated features and deleting features that do not meet the evaluation criteria.
  • The first feature deletion step includes: (1) evaluating each feature of the transaction history data using an information gain technique, and (2) evaluating a plurality of feature sets of the transaction history data using a principal component analysis technique.
  • the performance measuring step may measure the performance of the prediction classification by using a confusion matrix.
  • The program for predicting a chargeback fraud user may be stored in a medium in order to perform chargeback fraud user prediction according to the above-described method for predicting a chargeback fraud user.
  • The method and program for predicting a chargeback fraud user can implement a machine learning model that satisfies the target prediction classification performance and predict chargeback fraud on the transaction history data of a new user, so that damage due to chargeback fraud can be prevented in advance.
  • FIG. 1 shows a flow diagram of chargeback fraud.
  • FIG. 2 illustrates a method for predicting a chargeback fraud user according to an embodiment of the present invention.
  • FIG. 3 illustrates a data processing step S10 of a method for predicting a chargeback fraud user according to an embodiment of the present invention.
  • FIG. 2 illustrates a method for predicting a chargeback fraud user according to an embodiment of the present invention.
  • the method of predicting a chargeback fraud user may be performed by a computer.
  • the computer may be a desktop personal computer, a laptop personal computer, a netbook computer, a tablet personal computer, or the like, but is not limited thereto.
  • As shown in FIG. 2, the method of predicting a chargeback fraud user according to an embodiment of the present invention includes a data processing step (S10), a data classification step (S20), a data adjustment step (S30), a prediction classification step (S40), a performance measurement step (S50), an iteration step (S60), and a prediction step (S70).
  • The data processing step (S10) is a step of processing the transaction history data of existing users.
  • The existing transaction history data includes the transaction details of normal users and of chargeback fraud users, respectively, and may be provided from a database of a game company or the like that stores and manages such data.
  • The data processing step (S10) processes the transaction history data into data based on one record per user.
  • the transaction history data includes a plurality of attributes having different data characteristics and physical forms (record format, record length, etc.) of the data. This attribute of data is hereinafter referred to as "feature".
  • Table 1 shows the characteristics of the actual transaction history data stored in the database of a game company.
  • The actual transaction history data provided by the game company included the transaction history data of 62,092 normal users (hundreds of thousands of records) and the transaction history data of 372 chargeback fraud users (thousands of records).
  • The transaction history data may include a plurality of features as shown in Table 1. That is, each transaction history record may include the features user_no, standard_country_code, charge_status, charge_no, payment_method_no, charge_amount, bonus_amount, datetime, charge_product_name, hash_ip and ip_addr.
  • No. | Feature | Contents
    1 | user_no | User's identifier
    2 | standard_country_code | User's country code
    3 | charge_status | User's charging stage
    4 | charge_no | Charge identifier
    5 | payment_method_no | Payment method identifier
    6 | charge_amount | Charge amount
    7 | bonus_amount | Bonus amount
    8 | datetime | Transaction date
    9 | charge_product_name | Payment gateway name
    10 | hash_ip | User's IP address converted with a hash function
    11 | ip_addr | User's IP address
  • FIG. 3 illustrates a data processing step S10 of a method for predicting a chargeback fraud user according to an embodiment of the present invention.
  • the data processing step S10 may include a first feature deletion step S11, a feature generation step S12, and a second feature deletion step S13.
  • The first feature deletion step (S11) is a step of evaluating each feature and a plurality of feature sets of the transaction history data and deleting features that do not meet the evaluation criteria.
  • In the first feature deletion step (S11), each feature of the transaction history data (e.g., user_no, standard_country_code, charge_status, charge_no, payment_method_no, charge_amount, bonus_amount, datetime, charge_product_name, hash_ip and ip_addr) is evaluated using an information gain technique.
  • The information gain is the expected reduction in entropy when one feature is selected; the higher the value, the better that feature distinguishes the data. That is, in the first feature deletion step (S11), a value indicating how well each feature discriminates chargeback fraud users is obtained according to the information gain technique.
  • A feature whose information gain value is less than a predetermined criterion is deleted, because such a feature is not needed to distinguish chargeback fraud users from normal users.
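  • As a minimal sketch of this per-feature evaluation (the pandas DataFrame, the binary class column, the toy rows and the 0.01 threshold below are illustrative assumptions, not values from the patent), information gain can be computed as the entropy of the class label minus the weighted entropy remaining after splitting on the feature:

```python
import numpy as np
import pandas as pd

def entropy(labels: pd.Series) -> float:
    """Shannon entropy of a label distribution."""
    probs = labels.value_counts(normalize=True)
    return float(-(probs * np.log2(probs)).sum())

def information_gain(df: pd.DataFrame, feature: str, target: str = "class") -> float:
    """Entropy reduction obtained by splitting the data on one feature."""
    base = entropy(df[target])
    weighted = sum(
        (len(group) / len(df)) * entropy(group[target])
        for _, group in df.groupby(feature)
    )
    return base - weighted

# Illustrative use: keep only features whose information gain reaches a threshold.
# The toy rows, the feature list and the 0.01 threshold are assumptions.
df = pd.DataFrame({
    "standard_country_code": ["KR", "KR", "US", "US", "JP", "KR"],
    "charge_status": [10, 20, 10, 30, 10, 20],
    "class": [0, 0, 1, 1, 0, 1],   # 1 = chargeback fraud user
})
kept = [f for f in ["standard_country_code", "charge_status"]
        if information_gain(df, f) >= 0.01]
print("features kept:", kept)
```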
  • a plurality of feature sets of transaction history data are evaluated using a principal component analysis technique.
  • Principal component analysis is a technique for reducing high-dimensional data to low-dimensional data by finding the principal components of the distributed data. That is, in the first feature deletion step (S11), feature sets that serve as principal components for distinguishing chargeback fraud users can be extracted according to the principal component analysis technique.
  • The plurality of feature sets of the transaction history data are combinations of two or more features, for example, {user_no, standard_country_code}, {user_no, charge_status}, ..., {user_no, standard_country_code, charge_status}, {user_no, standard_country_code, charge_no}, and the like.
  • A feature included only in feature sets that fall below the predetermined criterion, that is, feature sets that do not correspond to a principal component, is deleted, because such a feature is not needed to distinguish chargeback fraud users from normal users.
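  • The following is a hedged sketch of such a subset evaluation using scikit-learn: each candidate feature subset is scored by how much of the total variance its leading principal component explains, and subsets that score poorly point to features that may be dropped. The random data, the candidate subsets and any cut-off applied to the scores are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def leading_variance_ratio(X: np.ndarray, n_components: int = 1) -> float:
    """Fraction of total variance captured by the leading principal
    component(s) of one candidate feature subset (the columns of X)."""
    X_std = StandardScaler().fit_transform(X)
    pca = PCA(n_components=n_components).fit(X_std)
    return float(pca.explained_variance_ratio_.sum())

# Illustrative use: score a few candidate subsets of a 4-feature matrix.
# The random data, the subsets and any cut-off applied later are assumptions.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                    # 500 users, 4 numeric features
subsets = {"{0,1}": [0, 1], "{0,2}": [0, 2], "{1,2,3}": [1, 2, 3]}
scores = {name: leading_variance_ratio(X[:, cols]) for name, cols in subsets.items()}
print(scores)
```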
  • The feature generation step (S12) is a step of generating new features while processing the transaction history data that has undergone the first feature deletion step into a single record per user using statistical methods.
  • the statistical method may include, but is not limited to, methods such as count, sum, difference, average, standard deviation, maximum value, minimum value, date statistics, time statistics, and the like with respect to data.
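  • As an illustration (the column names mirror Table 1, but the toy rows and the particular aggregations chosen are assumptions), such per-user statistics can be produced with a single pandas groupby/aggregate pass:

```python
import pandas as pd

# Raw transaction history: several rows per user (columns mirror Table 1).
tx = pd.DataFrame({
    "user_no":       [1, 1, 1, 2, 2],
    "charge_amount": [10.0, 25.0, 5.0, 100.0, 80.0],
    "bonus_amount":  [1.0, 2.5, 0.5, 10.0, 8.0],
    "datetime":      pd.to_datetime(["2016-01-03", "2016-02-11", "2016-03-20",
                                     "2016-01-15", "2016-06-30"]),
})

# Collapse to one record per user with count/sum/mean/std and date-derived stats.
per_user = tx.groupby("user_no").agg(
    transaction_cnt_sum=("charge_amount", "count"),
    charge_amount_sum=("charge_amount", "sum"),
    charge_amount_avg=("charge_amount", "mean"),
    charge_amount_stddev=("charge_amount", "std"),
    bonus_amount_sum=("bonus_amount", "sum"),
    transaction_recent_monthday=("datetime", lambda s: s.max().day),
    transaction_recent_hour=("datetime", lambda s: s.max().hour),
).reset_index()
print(per_user)
```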
  • The second feature deletion step (S13) is a step of evaluating the generated features and deleting those that do not meet the evaluation criteria.
  • In the second feature deletion step (S13), each generated feature is again evaluated using the information gain technique, and any feature whose information gain value is below the predetermined criterion is deleted, because such a feature is not needed to distinguish chargeback fraud users from normal users.
  • Table 2 shows the features of the transaction history data of Table 1 after undergoing the data processing step (S10), that is, the first feature deletion step (S11), the feature generation step (S12) and the second feature deletion step (S13).
  • standard_country_code_kind is additionally created from standard_country_code
  • charge_stat10, charge_stat20, and charge_stat30 are additionally created from charge_status
  • payment_method_no_kind is additionally created from payment_method_no
  • charge_amount_sum, charge_amount_avg and charge_amount_stddev are additionally created from charge_amount, and bonus_amount_sum is additionally created from bonus_amount
  • transaction_recent_monthday, transaction_recent_hour, transaction_cnt_sum, transaction_cnt_1_month, transaction_cnt_2_month, transaction_cnt_3_month, transaction_cnt_6_month and transaction_cnt_else were additionally created from datetime
  • charge_product_name_kind is additionally created from charge_product_name, and ip_addr_kind is additionally created from ip_addr
  • charge_no and hash_ip are deleted, and a class feature is added to distinguish normal users from chargeback fraud users
  • The information gain values of the features of Table 2 were determined using a ClassifierSubsetEval attribute evaluator based on a decision tree (DT) and a genetic algorithm, and features 4, 5, 7, 8, 10, 11, 12, 17, 18, 19 and 20 (corresponding to charge_stat10, charge_stat20, payment_method_no, payment_method_no_kind, charge_amount_avg, charge_amount_stddev, bonus_amount_sum, transaction_cnt_sum, transaction_cnt_1_month, transaction_cnt_2_month and transaction_cnt_3_month) were selected.
  • The data classification step (S20) is a step of dividing the processed transaction history data into training data and test data.
  • The training data is used to train a specific machine learning model that will be used later.
  • The test data is used to test the performance of the trained machine learning model.
  • Table 3 shows the various dataset types for dividing processed transaction history data into training data and test data.
  • In the 66% split, 66% of the transaction history data is used as training data and the remaining 34% as test data. In 10-fold, 9/10 of the transaction history data is used as training data and 1/10 as test data, and cross validation is performed.
  • In the 50% split, the transaction history data is divided in half into training data and test data, with the split performed by StratifiedFolds preprocessing.
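  • The sketch below reproduces these three dataset types with scikit-learn; the stratification, shuffling and random seed are assumptions, since the patent does not name a specific library:

```python
from sklearn.model_selection import StratifiedKFold, train_test_split

def make_splits(X, y):
    """X: per-user feature matrix, y: labels (1 = chargeback fraud user)."""
    # 66% split: 66% training data, remaining 34% test data.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=0.66, stratify=y, random_state=42)

    # 10-fold: 9/10 training and 1/10 test data, rotated by cross validation.
    ten_fold = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)

    # 50% split: half training, half test, stratified by class.
    half_split = StratifiedKFold(n_splits=2, shuffle=True, random_state=42)

    return (X_tr, X_te, y_tr, y_te), ten_fold, half_split
```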
  • The data adjustment step (S30) is a step of adjusting the number of data by oversampling the chargeback fraud user data in the training data. Since there is far less transaction history data for chargeback fraud users than for normal users, the performance of a machine learning model trained on the raw training data may be degraded. Accordingly, the performance of the machine learning model may be improved by oversampling the chargeback fraud user data in the training data through the data adjustment step (S30). Specific experimental examples of improving the performance of the machine learning model through the data adjustment step (S30) are described later.
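  • The sketch below shows one simple way to do this, plain random oversampling of the minority (chargeback fraud) class at a chosen percentage; the patent does not prescribe a particular oversampling technique, so the function and sampling scheme are assumptions:

```python
import numpy as np
from sklearn.utils import resample

def oversample_minority(X, y, ratio_percent, minority_label=1, random_state=42):
    """Append ratio_percent% extra copies (drawn with replacement) of the
    minority-class rows to the training data."""
    X, y = np.asarray(X), np.asarray(y)
    minority_idx = np.where(y == minority_label)[0]
    n_extra = int(len(minority_idx) * ratio_percent / 100)
    extra_idx = resample(minority_idx, replace=True, n_samples=n_extra,
                         random_state=random_state)
    return np.vstack([X, X[extra_idx]]), np.concatenate([y, y[extra_idx]])
```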
  • The training data whose number of data has been adjusted is used as the training data for a predetermined machine learning technique.
  • the machine learning includes various algorithms such as supervised learning, unsupervised learning, semi-supervised learning, and is not particularly limited.
  • Supervised learning may include a Support Vector Machine (SVM), Hidden Markov Model, Regression, Neural Network, Naive Bayes Classification, and the like.
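  • A minimal sketch of this training and prediction with the two models used in the experiments described later (decision tree and SVM), implemented here with scikit-learn defaults as an assumption:

```python
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def fit_and_predict(X_train, y_train, X_test, model="svm"):
    """Train on the (oversampled) training data and classify the test data."""
    clf = SVC(kernel="rbf") if model == "svm" else DecisionTreeClassifier()
    clf.fit(X_train, y_train)
    return clf, clf.predict(X_test)
```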
  • the performance measurement step S50 is a step of measuring the performance for the prediction classification. That is, the performance measurement step S50 measures performance indicating the accuracy of the test data predicted and classified by the machine learning model in the prediction classification step S40. In this case, the performance measurement step S50 may measure the performance of the prediction classification by using a confusion matrix.
  • Table 4 shows the confusion matrix.
  • TP (true positive) is the case in which the machine learning model predicts the test data as a chargeback fraud user and the user actually is a chargeback fraud user.
  • TN (true negative) is the case in which the machine learning model predicts the test data as a normal user and the user actually is a normal user.
  • FP (false positive) is the case in which the machine learning model predicts the test data as a chargeback fraud user but the user is actually a normal user.
  • FN (false negative) is the case in which the machine learning model predicts the test data as a normal user but the user is actually a chargeback fraud user.
  • In the performance measurement step (S50), the numbers of results predicted and classified by the machine learning model for the test data are collected according to the confusion matrix of Table 4. Thereafter, the performance measurement step (S50) calculates the values of the performance indicators.
  • The performance indicators are not particularly limited and may be any measure of the accuracy of the data classification.
  • Table 5 shows each performance index for measuring the performance of the machine learning model predicted and classified in the prediction classification step (S40).
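  • As a sketch (the exact indicator set of Table 5 is not reproduced here; accuracy, precision, recall and F1 are assumed as typical choices, with recall being the indicator tracked in the iteration example below), the confusion-matrix counts and indicators can be computed as follows:

```python
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score)

def measure_performance(y_true, y_pred):
    """Confusion-matrix counts plus common indicators derived from them."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "TP": int(tp), "TN": int(tn), "FP": int(fp), "FN": int(fn),
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred, zero_division=0),
        "recall": recall_score(y_true, y_pred, zero_division=0),
        "f1": f1_score(y_true, y_pred, zero_division=0),
    }
```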
  • Tables 6 and 7 show the results of measuring the prediction classification performance after performing the prediction classification step (S40) and the performance measurement step (S50) for each dataset type of Table 3, using the decision tree (DT) and support vector machine (SVM) machine learning techniques.
  • Table 6 shows the results of performing the data adjustment step (S30), the prediction classification step (S40) and the performance measurement step (S50).
  • The support vector machine (SVM) shows better prediction classification performance than the decision tree (DT), and performing the data adjustment step (S30) yields better prediction classification performance.
  • The iteration step (S60) is a step of oversampling or undersampling the chargeback fraud user data in the training data until the performance of the prediction classification reaches the target value, thereby repeating the prediction classification step (S40) and the performance measurement step (S50). If the oversampling ratio becomes too high and the amount of training data increases excessively, an overload may occur during learning in the prediction classification step (S40); therefore, the iteration step (S60) may perform undersampling in addition to oversampling of the training data. In addition, undersampling may be performed in the iteration step (S60) to reach the target value more precisely.
  • the ratio of oversampling or undersampling at the time of performing the repetition step S60 may be regular or arbitrary, and is not particularly limited.
  • The oversampling ratio at the nth iteration can be defined as A × B × n (where A and B are natural numbers and n is an integer).
  • The undersampling ratio at the mth iteration can be defined as A × B × n - (C × m) (where A, B and C are natural numbers, n and m are integers, and n ≤ m).
  • First, the prediction classification step (S40) and the performance measurement step (S50) are performed with the oversampling set to 100%.
  • the oversampling is increased to 200% in the first iteration, and the prediction classification step S40 and the performance measurement step S50 are performed again.
  • the oversampling is raised to 300% in the second iteration, and the prediction classification step S40 and the performance measurement step S50 are performed again.
  • The recall then reaches 0.948, which is above the target value.
  • Accordingly, undersampling is performed in the third iteration, that is, the oversampling is set to less than 300%, for example 280%, and the prediction classification step (S40) and the performance measurement step (S50) may be performed again.
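  • Tying the earlier sketches together, the loop below raises the oversampling ratio by 100 percentage points per iteration (matching the 100%, 200%, 300% example) and backs it off by a small step once the recall target is met; the target value, step sizes and iteration cap are assumptions, and the helper functions are the ones sketched above:

```python
def tune_until_target(X_train, y_train, X_test, y_test,
                      target_recall=0.9, max_iter=10):
    """Repeat prediction classification and performance measurement while
    adjusting the oversampling ratio until recall reaches the target."""
    ratio, best = 100, None              # start at 100% oversampling
    for _ in range(max_iter):
        X_os, y_os = oversample_minority(X_train, y_train, ratio)
        clf, y_pred = fit_and_predict(X_os, y_os, X_test, model="svm")
        perf = measure_performance(y_test, y_pred)
        if perf["recall"] >= target_recall:
            best = (clf, ratio, perf)
            ratio = max(ratio - 20, 0)   # back off slightly (e.g. 300% -> 280%)
        else:
            ratio += 100                 # e.g. 100% -> 200% -> 300%
    return best
```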
  • the predicting step S70 is a step of predicting a chargeback fraud on transaction history data of a new user using a machine learning model that has reached the target predictive classification performance.
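  • As a final illustration building on the sketches above (the feature vector must come from the same per-user processing used for training), applying the tuned model to a new user reduces to a single predict call:

```python
import numpy as np

def predict_new_user(clf, new_user_features: np.ndarray) -> int:
    """1 = predicted chargeback fraud user, 0 = predicted normal user.
    clf is the model returned by the tuning loop above; the feature vector
    must come from the same per-user aggregation used for training."""
    return int(clf.predict(new_user_features.reshape(1, -1))[0])
```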
  • The chargeback fraud user prediction program according to an embodiment of the present invention is stored in a medium in order to perform chargeback fraud user prediction according to the above-described method according to an embodiment of the present invention.
  • the predictive program of a chargeback fraud user may be recorded in a recording medium readable by a computer or similar device.
  • The recording medium may be a hard disk type, a magnetic media type, a compact disc read-only memory (CD-ROM), an optical media type, a magneto-optical media type, a multimedia card micro type, a card-type memory (e.g., SD or XD memory), a flash memory type, a read-only memory (ROM), a random access memory (RAM), or a combination thereof constituting a main memory or a secondary memory device, but is not limited thereto.
  • The program may also be stored in an attachable storage device accessible through a communication network such as the Internet, an intranet, a local area network (LAN), a wide area network (WLAN), or a storage area network (SAN), or a combination thereof.

Abstract

A method for predicting a chargeback fraud user is provided, the method comprising: a data processing step of processing existing transaction history data of normal users and chargeback fraud users into data based on one record per user; a data classification step of dividing the processed transaction history data into training data and test data; a data adjustment step of oversampling the data of chargeback fraud users among the training data, thereby adjusting the number of data; a prediction classification step of performing learning by a specific machine learning technique using the training data whose number of data has been adjusted, and of predicting and classifying whether or not test data corresponds to a chargeback fraud user using the machine learning model; a performance measurement step of measuring the performance of the prediction classification; an iteration step of oversampling or undersampling the data of chargeback fraud users among the training data until the prediction classification performance reaches a target value, thereby repeatedly performing the prediction classification step and the performance measurement step; and a prediction step of predicting chargeback fraud on the transaction history data of a new user using a machine learning model that has reached the target prediction classification performance.
PCT/KR2017/013539 2016-11-25 2017-11-24 Method and program for predicting a chargeback fraud user WO2018097653A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2016-0158491 2016-11-25
KR1020160158491A KR20180059203A (ko) 2016-11-25 2016-11-25 Method and program for predicting a chargeback fraud user

Publications (1)

Publication Number Publication Date
WO2018097653A1 true WO2018097653A1 (fr) 2018-05-31

Family

ID=62195250

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2017/013539 WO2018097653A1 (fr) 2016-11-25 2017-11-24 Method and program for predicting a chargeback fraud user

Country Status (2)

Country Link
KR (1) KR20180059203A (fr)
WO (1) WO2018097653A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180128874A (ko) * 2018-11-14 2018-12-04 주식회사 미탭스플러스 Apparatus and method for approving deposits at a cryptocurrency exchange using transaction verification
KR102607383B1 (ko) * 2021-01-05 2023-11-29 중소기업은행 Method and apparatus for identifying suspected money laundering transactions

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20020086695A (ko) * 2000-03-24 2002-11-18 알티코 인크. System and method for detecting fraudulent transactions
US20080288405A1 (en) * 2007-05-20 2008-11-20 Michael Sasha John Systems and Methods for Automatic and Transparent Client Authentication and Online Transaction Verification
US20120158540A1 (en) * 2010-12-16 2012-06-21 Verizon Patent And Licensing, Inc. Flagging suspect transactions based on selective application and analysis of rules
KR20160017629A (ko) * 2014-08-06 2016-02-16 아마데우스 에스.에이.에스. Predictive fraud screening
US20160328715A1 (en) * 2015-05-06 2016-11-10 Forter Ltd. Gating decision system and methods for determining whether to allow material implications to result from online activities

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11151573B2 (en) * 2017-11-30 2021-10-19 Accenture Global Solutions Limited Intelligent chargeback processing platform
CN110675220A (zh) * 2019-09-12 2020-01-10 深圳前海大数金融服务有限公司 Fraudulent user identification method, system and computer-readable storage medium
CN114297054A (zh) * 2021-12-17 2022-04-08 北京交通大学 Software defect number prediction method based on subspace hybrid sampling
CN114297054B (zh) * 2021-12-17 2023-06-30 北京交通大学 Software defect number prediction method based on subspace hybrid sampling

Also Published As

Publication number Publication date
KR20180059203A (ko) 2018-06-04

Similar Documents

Publication Publication Date Title
WO2018097653A1 (fr) Method and program for predicting a chargeback fraud user
Pillar How sharp are classifications?
Sahiner et al. Classifier performance prediction for computer‐aided diagnosis using a limited dataset
CN107122669B Method and apparatus for assessing data leakage risk
Ekina et al. Application of bayesian methods in detection of healthcare fraud
CN110706026A Abnormal user identification method, identification apparatus and readable storage medium
CN113688042A Method and apparatus for determining a test scenario, electronic device and readable storage medium
WO2022199185A1 (fr) User operation inspection method and program product
CN113011888A Abnormal transaction behavior detection method, apparatus, device and medium for digital currency
CN112948823A Data leakage risk assessment method
CN110348471B Abnormal object identification method and apparatus, medium and electronic device
CN110490750B Data identification method, system, electronic device and computer storage medium
CN109308615B Real-time fraudulent transaction detection method, system, storage medium and electronic terminal based on statistical sequence features
CN115018210B Business data classification prediction method, apparatus, computer device and storage medium
CN113177733B Data modeling method and system for small, medium and micro enterprises based on a convolutional neural network
CN110728585A Underwriting method, apparatus, device and storage medium
CN113518010B Link prediction method, apparatus and storage medium
KR102336462B1 Apparatus and method for providing credit evaluation information
CN111444362B Malicious image interception method, apparatus, device and storage medium
CN111062800B Data processing method, apparatus, electronic device and computer-readable medium
Borkar et al. Comparative study of supervised learning algorithms for fake news classification
Kang Fraud Detection in Mobile Money Transactions Using Machine Learning
Huang et al. Performance measures for rare event targeting
CN114201394A Inspection method and system
CN115511428A Data processing method, apparatus, computer device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17874986

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17874986

Country of ref document: EP

Kind code of ref document: A1